The LHC Grid - High Performance Computing in High Energy Particle Physics

Lukasz Kreczko of the University of Bristol, working on the CMS experiment at CERN, presented at Bristol IT MegaMeet 2013

CERN hosts many experiments and accelerators. One of them, the Large Hadron Collider (LHC), is the world's largest particle accelerator. Its experiments produce enormous amounts of data, which are analysed using the LHC Grid. I will describe the Grid framework developed for the CMS experiment, which produces around 20 PB of data every year and processes it at regular intervals.


Transcript: The LHC Grid - High Performance Computing in High Energy Particle Physics

  1. The LHC Grid - High Performance Computing in High Energy Particle Physics
     Lukasz Kreczko, Bristol IT MegaMeet, Saturday 1 June 2013
  2. $ whoami
     • Lukasz Kreczko – Particle Physicist
     • Graduated in Physics from the University of Hamburg in 2009
     • 2009–2013: PhD in Particle Physics at the University of Bristol
     • Currently Computing Research Assistant at the University of Bristol
  3. The experiment: a big digital camera
  4. The experiment: a big digital camera
     40 million "pictures" per second
     Each "picture" is around 1 MB!
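Taken at face value, those two numbers imply a colossal raw data rate, which is why only a small fraction of the raw stream can ever be recorded (compare the ~10 PB/year recorded figure later in the talk). A quick back-of-envelope check, treating the slide's round numbers as exact for illustration:

    # Back-of-envelope: raw data rate implied by slide 4's numbers.
    # Illustrative round numbers from the slide, not official CMS figures.
    events_per_second = 40_000_000      # 40 million "pictures" per second
    bytes_per_event = 1_000_000         # roughly 1 MB per "picture"

    raw_rate_bytes_per_s = events_per_second * bytes_per_event
    print(f"Raw rate: {raw_rate_bytes_per_s / 1e12:.0f} TB/s")        # ~40 TB/s
    print(f"If everything were kept: "
          f"{raw_rate_bytes_per_s * 3600 * 24 * 365 / 1e18:.0f} EB/year")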
  5. The data: a structured mess
  6. The data: a much nicer picture
     [Event display with reconstructed objects: a muon (pT = 71.5 GeV/c, η = -0.82), missing ET of 22.3 GeV, and four jets with pT = 89.0, 85.3, 90.5 and 84.1 GeV/c; Run 163583, Event 26579562]
  7. The goal: extend our knowledge
     Billions of events like this, plus simulation
     [Same event display as slide 6, alongside the CMS diphoton invariant-mass plot: S/(S+B)-weighted events per 1.5 GeV versus m(γγ) from 110 to 150 GeV, showing data, the S+B fit, the background-only fit component and the ±1σ / ±2σ bands, for √s = 7 TeV (L = 5.1 fb⁻¹) and √s = 8 TeV (L = 5.3 fb⁻¹), with an unweighted inset]
  8. The goal: extend our knowledge
     Billions of events like this, plus simulation
     [Same event display and CMS diphoton mass plot as slide 7]
     That's the famous Higgs boson
  9. Analysing all data
     • CMS records 10,000 Terabytes of data every year (around 70 years of full-HD movies)
     • plus the same amount of simulation
     • To analyse this on a single computer would take 64,000 years!
  10. Analysing all data (same content as the previous slide)
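A minimal sketch of the scaling argument behind those bullets, under an assumed per-core throughput (the 10 MB/s figure is an assumption for illustration, not a CMS measurement; the slide's 64,000-year estimate additionally reflects the heavy per-event reconstruction and analysis cost, not just raw I/O):

    # Minimal sketch of why ~20 PB/year (recorded data plus simulation) needs a grid.
    # The per-core throughput is an assumed, illustrative figure, not a CMS benchmark.
    PB = 1e15
    data_per_year_bytes = 20 * PB        # ~10 PB recorded + ~10 PB simulation
    assumed_throughput = 10e6            # assumption: 10 MB/s processed per core

    def days_to_process(n_cores: int) -> float:
        """Wall-clock days to work through one year of data on n_cores cores."""
        seconds = data_per_year_bytes / (assumed_throughput * n_cores)
        return seconds / 86400

    for n in (1, 1_000, 100_000):
        print(f"{n:>7} cores -> {days_to_process(n):10.1f} days")
    # 1 core: ~23,000 days (~63 years) even under this generous assumption;
    # 100,000 cores: a few hours per pass.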
  11. The computing model: the decision
  12. The computing model
  13. The computing model
     Bristol is one of the T2 centres
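For orientation, the tiered layout shown on these slides has the Tier-0 at CERN feeding national Tier-1 centres, which in turn serve university Tier-2 sites such as Bristol. A toy representation of that hierarchy (the non-Bristol site names are placeholders, not a real topology) could look like this:

    # Toy sketch of the tiered LHC computing model described on slides 11-13.
    # Site names other than CERN and Bristol are illustrative placeholders.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Site:
        name: str
        tier: int                      # 0 = CERN, 1 = national centre, 2 = university site
        children: List["Site"] = field(default_factory=list)

    # Tier-0 records and does first-pass processing, Tier-1 centres keep custodial
    # copies and reprocess, Tier-2 sites (like Bristol) run simulation and analysis.
    grid = Site("CERN Tier-0", 0, [
        Site("National Tier-1 (placeholder)", 1, [
            Site("Bristol Tier-2", 2),
            Site("Another Tier-2 (placeholder)", 2),
        ]),
    ])

    def walk(site: Site, indent: int = 0) -> None:
        print("  " * indent + f"T{site.tier}: {site.name}")
        for child in site.children:
            walk(child, indent + 1)

    walk(grid)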
  14. The computer centres
  15. All over the world
  16. All over the world
     On a normal day, the Grid provides 100,000 CPU-days, executing 1 million jobs
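A quick average of what those headline numbers imply per job (a straight mean only; real grid jobs range from minutes to many hours):

    # What do "100,000 CPU-days executing 1 million jobs" per day imply on average?
    cpu_days_per_day = 100_000
    jobs_per_day = 1_000_000

    avg_cpu_hours_per_job = cpu_days_per_day * 24 / jobs_per_day
    print(f"Average job length: {avg_cpu_hours_per_job:.1f} CPU-hours")   # 2.4 CPU-hours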
  17. PhEDEx: a Bristol invention
     • Short for "Physics Experiment Data Export"
     • Software for data placement and the file transfer system
     • Development started in Bristol
     • One of the main developers (now at Cloudant) is giving a talk later today: "Your Database to the Cloud, an Intro to Cloudant NoSQL"
  18. PhEDEx: the components
     • Transfer agent
     • Transfer management database
     • Management agent
     • Tools to manage transfers
     • Local agents
     • Web monitoring
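The components above follow a simple pattern: site-local agents poll the central transfer-management database for work, hand files to a transfer tool, and write the outcome back where the management tools and web monitoring can see it. The sketch below is a hypothetical, heavily simplified illustration of that loop in plain Python; the record layout, site names and copy function are invented and are not the real PhEDEx schema or tooling:

    # Hypothetical sketch of the agent pattern on slide 18 (not real PhEDEx code).
    from dataclasses import dataclass

    @dataclass
    class TransferTask:
        file_name: str
        source: str
        dest_site: str
        state: str = "assigned"

    # Stand-in for the central transfer-management database.
    transfer_queue = [
        TransferTask("higgs_candidates_001.root", "T1_SITE_A", "T2_UK_Bristol"),
        TransferTask("ttbar_simulation_042.root", "T1_SITE_B", "T2_UK_Bristol"),
    ]

    def copy_file(task: TransferTask) -> bool:
        """Placeholder for the external transfer tool invoked by the local agent."""
        print(f"copying {task.file_name} from {task.source} to {task.dest_site}")
        return True

    def run_transfer_agent(site: str) -> None:
        """One polling cycle of a site-local transfer agent."""
        for task in transfer_queue:
            if task.dest_site == site and task.state == "assigned":
                task.state = "done" if copy_file(task) else "failed"
                # The management agent and web monitoring read these states
                # back from the transfer-management database.

    run_transfer_agent("T2_UK_Bristol")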
  19. PhEDEx: data monitoring
     [Monitoring plots, with 150 PB and 70 PB highlighted]
     Image: http://www.flickr.com/photos/paulrossman/6555010435/sizes/l/in/photostream/
  20. Current and future topics
     Things changed: network is getting cheaper as well
  21. Current and future topics
     Things changed: network is getting cheaper as well
     We are moving into the cloud
  22. Current and future topics
     Things changed: network is getting cheaper as well
     We are moving into the cloud
     Any data, anywhere!
  23. Other current and future topics
     • Protocol buffers
     • CouchDB
     • Hadoop
     • Provisioning of computing
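As a flavour of why a MapReduce-style system such as Hadoop maps well onto this workload (events can be processed independently and the partial results merged), here is a purely illustrative sketch in plain Python with invented records, not an actual Hadoop job:

    # Purely illustrative: the map/reduce pattern behind Hadoop-style processing,
    # written as plain Python with invented event records.
    from collections import Counter

    # Invented records: (dataset, number_of_jets)
    events = [
        ("data_2012", 4), ("data_2012", 2),
        ("simulation_ttbar", 4), ("simulation_ttbar", 3),
    ]

    # "Map": each event is examined independently, so the work splits trivially
    # across many machines.
    mapped = (dataset for dataset, n_jets in events if n_jets >= 3)

    # "Reduce": combine the partial results into per-dataset counts.
    counts = Counter(mapped)
    print(counts)   # Counter({'simulation_ttbar': 2, 'data_2012': 1})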
  24. Summary & Outlook
     • The LHC Grid is a world-wide distributed computing network that makes discoveries possible
     • The Grid is changing to adapt to current technologies
     • In two years' time the LHC will increase its energy to 14 TeV – more data, more discoveries to come!
