The document discusses the Petabyte Scale Data Challenge for the Worldwide LHC Computing Grid (WLCG). It outlines the objectives and milestones of WLCG and describes the ASGC Tier-1 Center in Taiwan and its role in providing services to the Asia Pacific region. It then discusses the challenges of managing petabyte-scale data, including ensuring sufficient bandwidth and storage system performance, scalability, and manageability. The storage management system used is CASTOR, and its various configurations, including disk caches, tape pools, and core services, are described. Plans for expanding storage capacity and upgrading the tape system are also mentioned.
Petabyte scale data challenge
1. Petabyte Scale Data Challenge
- Worldwide LHC Computing Grid
ASGC/Jason Shih
Computex, Jun 2nd, 2010
2. Outline
Objectives & Milestones
WLCG experiment and ASGC Tier-1 Center
Petabyte Scale Challenge
Storage Management System
System Architecture, Configuration and
Performance
3. Objectives
Building sustainable research and collaboration
infrastructure
Supporting research via e-Science, for data-intensive
sciences and applications that require cross-disciplinary
distributed collaboration
4. ASGC Milestones
Operational since the deployment of LCG0 in 2002
ASGC CA established in 2005 (joined IGTF the same year)
Tier-1 Center responsibility started in 2005
Federated Taiwan Tier-2 center (Taiwan Analysis Facility, TAF)
is also collocated at ASGC
Representative of the EGEE e-Science Asia Federation since
joining EGEE in 2004
Providing Asia Pacific Regional Operation Center (APROC)
services to the region-wide WLCG/EGEE production
infrastructure since 2005
Initiated the Avian Flu Drug Discovery Project in collaboration
with EGEE in 2006
Start of the EUAsiaGrid Project in April 2008
5. LHC First Beam - Computing at the Petascale
ATLAS: General purpose, pp, heavy ions
CMS: General purpose, pp, heavy ions
LHCb: B-physics, CP violation
ALICE: Heavy ions, pp
7. Standard Cosmology
Good model from 0.01 sec after the Big Bang
Energy, density, temperature
Supported by considerable observational evidence
Elementary Particle Physics
From the Standard Model into the unknown: towards energies of
1 TeV and beyond: the Terascale
Towards Quantum Gravity
From the unknown into the unknown...
http://www.damtp.cam.ac.uk/user/gr/public/bb_history.html
Slide credit: Jamie Shiers (CERN), UNESCO Information Preservation debate, April 2007
8. WLCG Timeline
First beam in the LHC, Sep. 10, 2008
Severe incident after 3 weeks of operation (3.5 TeV)
9. ASGC - Introduction
Max CERN/T1-ASGC point-to-point inbound: 9.3 Gbps
1. Most reliable T1: 98.83%
2. Very highly performing and most stable site in CCRC08
Asia Pacific Regional Operation Center
A Worldwide Grid Infrastructure:
>250 sites, 48 countries
>68,000 CPUs, >25 PetaBytes
>10,000 users, >200 VOs
>150,000 jobs/day
Best Demo Award of EGEE'07
Grid Application Platform: lightweight problem-solving framework
Avian Flu Drug Discovery
Large Hadron Collider (LHC)
10. Collaborating e-Infrastructures
TWGRID
EUAsiaGrid
Potential for linking ~80 countries
“Production” =
Reliable, sustainable, with commitments to quality of service
11. WLCG Computing Model
- The Tier Structure
Tier-0 (CERN)
Data recording
Initial data reconstruction
Data distribution
Tier-1 (11 countries)
Permanent storage
Re-processing
Analysis
Tier-2 (~130 centres)
Simulation
End-user analysis
12. Enabling Grids for E-sciencE
Archeology
Astronomy
Astrophysics
Civil Protection
Comp. Chemistry
Earth Sciences
Finance
Fusion
Geophysics
High Energy Physics
Life Sciences
Multimedia
Material Sciences
…
EGEE-II INFSO-RI-031688, EGEE'07, Budapest, 1-5 October 2007
13. Why Petabyte? Challenges
Why Petabyte?
Experiment computing model
Comparison with conventional data management
Challenges
Performance: LAN and WAN activities
Sufficient bandwidth between CPU farms
Eliminate uplink bottlenecks (switch tiers)
Fast response to critical events
Fabric infrastructure & service level agreement
Scalability and manageability
Robust DB engine (Oracle RAC)
Knowledge base and adequate administration (training)
17. WLCG Tier-1
- Defined Minimum Levels of Services
Response time is defined as the maximum delay before taking action.
Mean time to repair the service is also crucial, but is covered
indirectly through the required availability target.
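The point that repair time is covered indirectly by an availability target follows from the standard reliability relation availability = MTBF / (MTBF + MTTR). A quick sketch with illustrative numbers (only the 98.83% reliability figure comes from the slides; the failure rate is assumed):

```python
# Availability as a function of mean time between failures (MTBF)
# and mean time to repair (MTTR); standard reliability relation.
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative: one failure every 30 days (720 h) with 8 h repairs.
print(availability(720, 8))

# Inverting the relation: a 98.83% target (the ASGC reliability
# figure above) bounds the mean repair time that can be tolerated.
target = 0.9883
max_mttr = 720 * (1 - target) / target
print(round(max_mttr, 1))  # maximum tolerable MTTR in hours
```

This is why an explicit MTTR clause is redundant once the availability target is fixed: any repair time consistently above the derived bound breaks the target.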
18. WLCG MoU & ASGC Resource Level
- Pledged Resources and Projection
Year | CPU (HEP2k6) | Disk (PB) | Tape (PB)
End 2009 | 29.5K | 2.6 | 2.4
MoU 2009 | 20K | 3.0 | 3.0
MoU 2010 | 28K | 3.5 | 3.5
[Chart: installed CPU (kSI2k), disk and tape capacity (TB) vs. MoU pledges, 2005-2010]
19. Data Management System
CASTOR V1
CERN Advanced STORage
Satisfactorily served tens of thousands of requests/day per TB of disk cache
Limitation: 1M files in cache
Tape movement API not flexible
CASTOR V2
DB-centric architecture
Scheduling feature
GSI and Kerberos
Resource management
Resource handling
20. CASTOR Configurations
- Current Infrastructure
Shared core services
Serving: Atlas and CMS
Services:
Stager, NS, DLF, Repack, and LSF
DB cluster
Two DB clusters (SRM and NS)
5 services (DBs) split into two clusters
5 Oracle instances
Total capacity: 0.63 PB and 0.7 PB for CMS and Atlas respectively
Current usage: 63% and 44% for CMS and Atlas
21. CASTOR Configurations (cont'd)
- Disk Cache
Disk pools & servers
Performance (IOPS)
With 0.5 kB IO size: 76.4k read and 54k write
Both read and write decrease slightly (around 9%)
when the IO size increases to 4 kB
80 disk servers (+6 will be online by the end of the 3rd week of Oct)
Total capacity: 1.67 PB (0.3 PB allocated dynamically)
Current usage: 0.79 PB (~58% usage)
14 disk pools (8 for Atlas and 3 for CMS, another three
for bio, SAM, and dynamic)
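As a back-of-the-envelope check (a sketch, not from the slides), IOPS figures translate into bandwidth as IOPS x IO size, which shows why a ~9% IOPS drop at 4 kB still means much higher aggregate throughput:

```python
# Convert measured IOPS to effective bandwidth.
# The IOPS numbers are the disk-cache figures quoted above;
# the conversion itself is standard.

def iops_to_mb_per_s(iops: float, io_size_bytes: float) -> float:
    """Effective throughput in MB/s for a given IOPS rate and IO size."""
    return iops * io_size_bytes / 1e6

# 0.5 kB IOs: 76.4k read, 54k write
print(iops_to_mb_per_s(76_400, 512))  # read bandwidth, MB/s
print(iops_to_mb_per_s(54_000, 512))  # write bandwidth, MB/s

# At 4 kB the IOPS drop ~9%, but each IO carries 8x the payload,
# so the aggregate bandwidth rises substantially.
print(iops_to_mb_per_s(76_400 * 0.91, 4096))
```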
22. Disk Pool Configuration
- T1 MSS (CASTOR)
[Chart: installed and free capacity (TB, 0-450) and number of disk servers (0-16) per disk pool, covering the Atlas, CMS, biomed, dteam and standby pools]
23. Distribution of Free Capacity
- Per Disk Server vs. per Pool
[Chart: free capacity (TB, 0-250) per disk pool: Standby, dteamD0T0, cmsWANOUT, cmsPrdD1T0, cmsLTD0T1, biomedD1T0, atlasStage, atlasScratchDisk, atlasPrdD1T0, atlasPrdD0T1, atlasMCTAPE, atlasMCDISK, atlasHotDisk, atlasGROUPDISK]
24. Storage Server Generation
- Drive vs. Total Capacity
[Chart: total capacity per storage generation (TB) vs. number of RAID subsystems; data points at 238 TB (6 subsystems), 235.5 TB (18), 683 TB (23), and 741 TB (37)]
25. CASTOR Configurations (cont'd)
- Core Service Overview
Service Type | OS Level | Release | Remark
Core | SLC 4.7/x86-64 | 2.1.7-19 | Stager/NS/DLF
SRM | SLC 4.7/x86-64 | 2.7-18 | 3 head nodes
Disk Svr. | SLC 4.7/x86-64 | 2.1.7-19 | 80 in Q3 2k9 (20+ in Q4)
Tape Svr. | SLC 4.7/32 + 64 | 2.1.8-8 | x86-64 OS deployed
26. CASTOR Configurations (cont'd)
- CMS Disk Cache: Current Resource Level
Space Token | Capacity/Job Limit | Disk Servers | Tape Pool/Capacity
cmsLTD0T1 | 278TB/488 | 9 | *
cmsPrdD1T0 | 284TB/1560 | 13 |
cmsWanOut | 72TB/220 | 4 |
* Dep. on tape family.
27. CASTOR Configurations (cont'd)
- Atlas Disk Cache: Current Resource Level
Space Token | Cap/Job Limit | Disk Servers | Tape Pool/Cap.
atlasMCDISK | 163TB/790 | 8 | -
atlasMCTAPE | 38TB/80 | 2 | atlasMCtp/39TB
atlasPrdD1T0 | 278TB/810 | 15 | -
atlasPrdD0T1 | 61TB/210 | 3 | atlasPrdtp/105TB
atlasGROUPDISK | 19TB/40 | 1 | -
atlasScratchDisk | 28TB/80 | 1 | -
atlasHotDisk | 2TB/40 | 2 | -
Total | 950TB/1835 | 46 | -
28. IDC Collocation
Facility installation completed on Mar 27th
Tape system delayed until after Apr 9th
Realignment
RMA for faulty parts
29. Storage Farm
~110 RAID subsystems deployed since 2003
Supporting both Tier-1 and Tier-2 storage fabric
DAS connection to frontend blade servers
Flexible switching of frontend servers based on
performance requirements
4-8 Gb Fibre Channel connectivity
30. CASTOR Configurations (cont'd)
- Tape Pool
Tape Pool | Capacity (TB)/Usage | Drive Dedication | LTO3/4 Mixed
atlasMCtp | 8.98/40% | N | Y
atlasPrdtp | 101/65% | N | Y
cmsCSA08cruzet | 15.6/46% | N | N
cmsCSA08reco | 5/0% | N | N
cmsCSAtp | 639/99% | N | Y
cmsLTtp | 34.4/44% | N | N
dteamTest | 3.5/1% | N | N
31. MSS Monitoring Services
Std. Nagios probes
NRPE + customized plugins
SMS to OSE/SM for all types of critical alarms
Availability metrics
Tape metrics (SLS)
Throughput, capacity & scheduler per
VO and disk pool
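Customized NRPE plugins of the kind mentioned here conventionally follow the standard Nagios contract: print one status line and exit 0 (OK), 1 (WARNING) or 2 (CRITICAL). The sketch below is hypothetical, assuming the usage fraction has already been obtained from the storage monitoring; thresholds are illustrative:

```python
# Minimal sketch of a Nagios/NRPE-style probe for disk-pool usage.
# How the usage fraction is obtained is left out (it would come from
# the site's CASTOR/SLS monitoring); thresholds are illustrative.

WARN, CRIT = 0.80, 0.95  # illustrative usage thresholds

def check_pool(usage: float) -> tuple[int, str]:
    """Map a usage fraction to a Nagios exit code (0=OK, 1=WARNING,
    2=CRITICAL) and a one-line status message."""
    if usage >= CRIT:
        return 2, f"CRITICAL - pool {usage:.0%} full"
    if usage >= WARN:
        return 1, f"WARNING - pool {usage:.0%} full"
    return 0, f"OK - pool {usage:.0%} full"

# Demo with the ~58% usage figure quoted earlier; a real plugin
# would print the message and call sys.exit(code).
code, message = check_pool(0.58)
print(message)
```

NRPE then relays the exit code and status line back to the central Nagios server, which is what drives the SMS alarms for critical states.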
32. MSS Tape System
- Expansion/Upgrade Planning
Before the incident:
LTO3 x 8 + LTO4 x 4 drives
720 TB with LTO3
530 TB with LTO4
May 2009:
Two LTO3 drives
MES: 6 LTO4 drives at the end of May
Capacity: 1.3 PB (old, LTO3/4 mixed) + 0.8 PB (LTO4)
New S54 model introduced mid-2009
2K slots with tier model
Required:
Upgrade ALMS
Enhanced gripper
MES Q3 2009:
18 LTO4 drives
HA implementation resumes in Q4
33. Expansion Planning
2008
0.5 PB expansion of the tape system in Q2
Met MoU target in mid-Nov.
1.3 MSI2k per rack based on recent E5450 processors
2009 Q1
150 SMP/QC blade servers
RAID subsystems considering 2 TB per drive
42 TB net capacity per chassis and 0.75 PB in total
2009 Q3-4
18 LTO4 drives: mid-Oct.
330 Xeon QC (SMP, Intel 5450) blade servers
2nd phase tape MES: 5 LTO4 drives + HA
3rd phase tape MES: 6 LTO4 drives
ETA of the 0.8 PB expansion delivery: mid-Nov
34. Computing/Storage System Infrastructure
[Diagram: ASGC CASTOR2 disk farm in the Data Center and CASTOR2 tape servers in the C3 Archive Room; 2 x GE (LX) uplinks to the 4F M160 (links to HK, JP Tier-2s) and to the 4F TaipeiGigaPoP-7609 (links to TW Tier-2s); 4 x GE (SX) to the ASGC distribution switch in Rack #49 (links to Tier-1 servers); worker nodes on 20 Quanta blades plus BladeCenters with 64 IBM HS20 and 142 IBM HS21 blades; core services (CE, RB, DPM, PX, BDII, etc.); DC SMR 48V/100A power with batteries]
35. Throughput of WLCG Experiments
Throughput defined as job efficiency x number of jobs running
Characteristics of the 4 LHC experiments; the depicted
inefficiency is due to poor coding.
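The throughput definition above is a simple product, which makes its message concrete: halving per-job efficiency halves effective throughput no matter how many jobs are running. A one-line sketch with illustrative numbers (not from the slides):

```python
# Throughput metric from the slide: job efficiency x number of
# running jobs, i.e. the number of "fully efficient" job slots.
def experiment_throughput(job_efficiency: float, jobs_running: int) -> float:
    """Effective throughput; low efficiency (e.g. from poorly
    optimized code) scales throughput down directly."""
    return job_efficiency * jobs_running

# Illustrative: 2000 running jobs at 75% vs. 50% CPU efficiency.
print(experiment_throughput(0.75, 2000))  # 1500.0
print(experiment_throughput(0.50, 2000))  # 1000.0
```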
37. Summary
Deployed a highly scalable DM system and performance-driven
storage infrastructure
Eliminate possible complexity of the SRM abstraction layer
Resource utilization, provisioning and optimization
From PoC to production, the challenges remain:
Data Challenge, Service Challenge, CCRC08, STEP09, etc.
Motivation appears clear for: medical, climate, cosmological applications
Operation-wide:
Robust database setup
Knowledge base for fabric infrastructure operation
Fast enough event processing and documentation
Consider use cases beyond data management in WLCG:
commonality with many other disciplines in the EGEE infrastructure
actively participate in e-Science collaboration within the region