Accelerating science with Puppet

A review of CERN's objectives and how its computing infrastructure is evolving to address challenges at scale using community-supported software such as Puppet and OpenStack.

Notes

  • Established by an international treaty in the years after the Second World War, as a place where scientists could work together on fundamental research. Nuclear is part of the name, but our world is particle physics.
  • Our current understanding of the universe is incomplete. A theory called the Standard Model proposes particles and forces, many of which have been experimentally observed. However, there are open questions. Why do some particles have mass and others not? The Higgs boson is a theory, but we need experimental evidence. Our theory of forces does not explain how gravity works. Cosmologists can only find 4% of the matter in the universe; we have lost the other 96%. We should have 50% matter and 50% anti-matter, so why is there an asymmetry (although it is a good thing that there is, since the two annihilate each other)? When we go back through time 13 billion years towards the Big Bang, we move back through planets, stars, atoms and protons/electrons towards a soup-like quark-gluon plasma. What were its properties?
  • The biggest international scientific collaboration in the world: over 10,000 scientists from 100 countries, with an annual budget of around 1.1 billion USD. Funding for CERN, the laboratory itself, comes from the 20 member states in proportion to their gross domestic product; other countries contribute to the experiments, including a substantial US contribution towards the LHC experiments.
  • The LHC is CERN's largest accelerator: a 17-mile ring 100 metres underground where two beams of particles are sent in opposite directions and collided at the four experiments, ATLAS, CMS, LHCb and ALICE. Lake Geneva and the airport are visible at the top of the picture to give a sense of scale.
  • CERN is more than just the LHC: CNGS sends neutrinos to Gran Sasso; CLOUD is demonstrating the impact of cosmic rays on weather patterns; anti-hydrogen atoms have been contained for minutes in a magnetic vessel. However, for those of you who have read Dan Brown's Angels and Demons or seen the film, there are no maniacal monks with pounds of anti-matter running around the campus.
  • The LHC was conceived in the 1980s, and construction started in 2002 within the tunnel of a previous accelerator called LEP. Some 6,000 magnets, weighing up to 35 tons each, were lowered down 100 m shafts.
  • The ring consists of two beam pipes, with a vacuum pressure ten times lower than on the Moon, which contain the beams of protons accelerated to just below the speed of light. These go round 11,000 times per second, bent by superconducting magnets cooled to 2 K by liquid helium (-450 °F), colder than outer space. The beams themselves carry a total energy similar to a high-speed train, so care needs to be taken to make sure they turn the corners correctly and don't bump into the walls of the pipe.
  • At four points around the ring, the beams are made to cross where detectors, the size of cathedrals and weighing up to 12,500 tonnes, surround the pipe. These are like digital cameras, but they take 100-megapixel photos 40 million times a second. This produces up to 1 petabyte per second.
  • Collisions can be visualised by the tracks left in the various parts of the detectors. With many collisions, the statistics allow identification of particle properties such as mass and charge. This is a simple one…
  • To improve the statistics, we send round beams made of multiple bunches; as the bunches cross, there are multiple collisions as the 100 billion protons per bunch pass through each other. Software close to the detector, and later offline in the computer centre, then has to examine the tracks to understand the particles involved.
  • To get quark-gluon plasma, the material closest to the Big Bang, we also collide lead ions, which is much more intense… the temperatures reach 100,000 times those in the Sun.
  • We cannot record 1 PB/s, so there are hardware filters to remove uninteresting collisions, such as those whose physics we already understand. The data is then sent to the CERN computer centre for recording over 10 Gbit optical connections.
  • The Worldwide LHC Computing Grid is used to record and analyse this data. The grid currently runs around 1 million jobs per day, and less than 10% of the work is done at CERN. There is an agreed set of protocols for running jobs, data distribution and accounting between all the sites, which cooperate to support physicists across the globe.
  • So, to the Tier-0 computer centre at CERN… we are unusual in that we are open about our environment, as there is no competitive advantage for us. We have thousands of visitors a year coming for tours and education, and the computer centre is a popular stop. The data centre has around 2.9 MW of usable power looking after 12,000 servers. In comparison, the accelerator uses 120 MW, like a small town. With 64,000 disks, we see around 1,800 failures each year… much higher than the manufacturers' MTBF figures, which is consistent with results from Google. Servers mainly use Intel processors, with some AMD; dual-core Xeon is the most common configuration.
  • Our data storage system has to record and preserve 25 PB/year, with an expected lifetime of more than 20 years. Keeping the old data is required to get the maximum statistics for discoveries, and at times physicists will want to skim it looking for new physics. Data rates average around 6 GB/s, with peaks of 25 GB/s.
  • Upstairs in the computer centre: a high ceiling was the fashion in the 1980s for mainframes, but it is now very difficult to cool efficiently.
  • Tape robots from IBM and Oracle. With around 60,000 tape mounts per week, the robots are kept busy. Data is copied to newer media every two years to keep up with the latest densities.
  • We asked the member states for offers of a remote data centre. 200 Gbit/s links will connect the centres, and we expect to double computing capacity compared to today by 2015.
  • Double the capacity with the same manpower: we need to rethink how to solve the problem and look at how others approach it. We built our own tools in 2002, and as they became more sophisticated it was not possible to take advantage of developments elsewhere without a major break. The team is doing this while doing their 'day' jobs, which reinforces the approach of taking what we can from the community.
  • The model is based on a Google-style toolchain, with Puppet key to many operations. We have only had to write one significant new custom CERN software component, the certificate authority. Other parts, such as Lemon for monitoring, come from our previous implementation, as we did not want to change everything at once and they scale.
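    As a concrete illustration of Puppet sitting at the centre of this toolchain, the sketch below shows a shared base class that points nodes at an internal yum mirror and keeps the existing Lemon agent running. It is a minimal sketch only: the class name, repository URL and Lemon package/service names are assumptions for illustration, not CERN's actual manifests.

      # Illustrative sketch only: names and the repository URL are placeholders,
      # not taken from CERN's real manifests.
      class base_node {

        # Point the node at an internally mirrored yum repository (placeholder URL).
        yumrepo { 'internal-mirror':
          descr    => 'Internal OS mirror (placeholder)',
          baseurl  => 'http://repo.example.org/internal/os/$basearch',
          enabled  => 1,
          gpgcheck => 0,
        }

        # Keep the existing Lemon monitoring agent installed and running
        # (package and service names assumed).
        package { 'lemon-agent':
          ensure => installed,
        }

        service { 'lemon-agent':
          ensure  => running,
          enable  => true,
          require => Package['lemon-agent'],
        }
      }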
  • We’ve been very pleased with our choice of Puppet. Along with the obvious benefits of the functionality, there are soft benefits from the community model.
  • Many staff at CERN are on short-term contracts… there is a real benefit for those staff in leaving with Puppet skills. Quattor, by comparison, is basically flat… it did not register.
  • Puppet applies well to the cattle model, but we are also using it to handle the pet cases that cannot yet move over due to software limitations. So they get cloud provisioning together with flexible configuration management.
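    A minimal sketch of how the two cases can be expressed in Puppet, reusing the host names from the service-model slide; the role class and its resources are hypothetical and only illustrate the pattern of a shared role for cattle with node-specific data layered on for a pet.

      # Hypothetical sketch: cattle share one role definition, while a pet adds
      # node-specific configuration on top of the same role.
      class role::batch_worker {
        # Identical configuration for every interchangeable worker
        # (the batch client package name is an assumption).
        package { 'batch-client':
          ensure => installed,
        }
      }

      # Cattle: many identical VMs matched by a naming pattern.
      node /^vm\d+\.cern\.ch$/ {
        include role::batch_worker
      }

      # Pet: a unique, long-lived machine that still gets managed configuration.
      node 'pussinboots.cern.ch' {
        include role::batch_worker

        file { '/etc/motd':
          content => "Hand-raised pet: contact the service owner before rebooting.\n",
        }
      }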
  • More presentations mentioning OpenStack than not?
  • OpenStack is complex to configure… take advantage of the experience of others.
  • Communities are integrating… when a new option is used at CERN in OpenStack, we contribute the changes back to the Puppet Forge, such as certificate handling. We are even looking at Hyper-V/Windows OpenStack configuration…
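    To give a feel for reusing community work rather than writing OpenStack configuration from scratch, here is a hedged sketch in the style of the 2012-era OpenStack modules on the Puppet Forge; the class and parameter names vary by module version and are assumptions, and the node name and secrets are placeholders.

      # Sketch only: class and parameter names follow the general shape of the
      # community OpenStack Puppet modules, but are not guaranteed verbatim.
      node 'cloud-controller.example.org' {

        class { 'keystone':
          admin_token => 'REPLACE_ME',                    # placeholder secret
        }

        class { 'glance::api':
          keystone_password => 'REPLACE_ME',              # placeholder secret
        }

        class { 'nova':
          sql_connection => 'mysql://nova:REPLACE_ME@localhost/nova',
        }

        class { 'nova::api':
          admin_password => 'REPLACE_ME',                 # placeholder secret
        }
      }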
  • LHC@Home is not an instruction manual for building your own accelerator, but a magnet simulation tool that tests multiple passes around the ring. We wanted to use it as a stress-test tool, and within half a day it was running on 1,000 VMs.
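    A sketch of how such a stress test might be expressed with the puppet-boinc module mentioned on the slide; the class parameters and project URL are placeholders based on a typical BOINC client setup, not the exact manifest that was used.

      # Illustrative only: the puppet-boinc module's real interface may differ.
      node /^stress\d+\.example\.org$/ {
        # Install and run the BOINC client on every freshly provisioned VM,
        # attached to an LHC@home style project (URL and account key are placeholders).
        class { 'boinc':
          project_url => 'http://boinc-project.example.org/',
          authkey     => 'REPLACE_ME',
        }
      }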
  • There are many areas going forward… Ben will cover lots of them in the deep dive, including easing grid software configuration with Brookhaven National Laboratory, along with managing non-service environments from desktop Macs and Linux machines to PDUs in the computer centre.
  • The project's success comes down to community. A vibrant community has momentum of its own; as the WWW showed, many contributors can change how we see the world. Looking forward, as we help improve Puppet, remember that you will also be helping to achieve a clearer understanding of the universe and how it works.
  • We purchase on an annual cycle, replacing around a quarter of the servers. This purchasing is based on performance metrics such as cost per SpecInt or cost per GB. Generally, we are seeing dual-core compute servers with Intel or AMD processors, and bulk storage servers with 24 or 36 2 TB disks. The operating system is a Red Hat Linux based distribution called Scientific Linux, whose development and maintenance we share with Fermilab near Chicago. The choice of a Red Hat based distribution comes from the need for stability across the grid, keeping the roughly 200 centres running compatible Linux distributions.

Transcript

  • 1. Accelerating Science with Puppet. Tim Bell, Tim.Bell@cern.ch, @noggin143. PuppetConf, San Francisco, 28th September 2012
  • 2. What is CERN? • Conseil Européen pour la Recherche Nucléaire, aka European Laboratory for Particle Physics • Between Geneva and the Jura mountains, straddling the Swiss-French border • Founded in 1954 with an international treaty • Our business is fundamental physics: what is the universe made of and how does it work?
  • 3. Answering fundamental questions… • How to explain particles have mass? We have theories and accumulating experimental evidence… getting close… • What is 96% of the universe made of? We can only see 4% of its estimated mass! • Why isn't there anti-matter in the universe? Nature should be symmetric… • What was the state of matter just after the « Big Bang »? Travelling back to the earliest instants of the universe would help…
  • 4. Community collaboration on an international scale
  • 5. The Large Hadron Collider
  • 6. (no slide text)
  • 7. LHC construction
  • 8. The Large Hadron Collider (LHC) tunnel
  • 9. (no slide text)
  • 10. Superconducting magnets – October 2008. A faulty connection between two superconducting magnets led to the release of a large amount of helium into the LHC tunnel and forced the machine to shut down for repairs for one year
  • 11. Accumulating events in 2009-2011
  • 12. (no slide text)
  • 13. Heavy Ion Collisions
  • 14. (no slide text)
  • 15. Tier-0 (CERN): • Data recording • Initial data reconstruction • Data distribution. Tier-1 (11 centres): • Permanent storage • Re-processing • Analysis. Tier-2 (~200 centres): • Simulation • End-user analysis • Data is recorded at CERN and Tier-1s and analysed in the Worldwide LHC Computing Grid • In a normal day, the grid provides 100,000 CPU days executing 1 million jobs
  • 16. Data Centre by Numbers. Hardware installation & retirement: ~7,000 hardware movements/year; ~1,800 disk failures/year. Racks: 828; Servers: 11,728; Processors: 15,694; Cores: 64,238; HEPSpec06: 482,507; Memory modules: 56,014; Memory capacity (TiB): 158; RAID controllers: 3,749; Disks: 64,109; Raw disk capacity (TiB): 63,289; Tape Drives: 160; Tape Cartridges: 45,000; Tape slots: 56,000; Tape Capacity (TiB): 73,000; High Speed Routers (640 Mbps → 2.4 Tbps): 24; Ethernet Switches: 350; 10 Gbps ports: 2,000; Switching Capacity: 4.8 Tbps; 1 Gbps ports: 16,939; 10 Gbps ports: 558; IT Power Consumption: 2,456 kW; Total Power Consumption: 3,890 kW. (Pie charts of processor models and disk vendors not reproduced.)
  • 17. Our Challenges: Data storage • 25 PB/year to record • >20 years retention • 6 GB/s average • 25 GB/s peaks
  • 18. (no slide text)
  • 19. 45,000 tapes holding 73 PB of physics data
  • 20. New data centre to expand capacity • Data centre in Geneva reaches limit of electrical capacity at 3.5 MW • New centre chosen in Budapest, Hungary • Additional 2.7 MW of usable power • Hands-off facility • Deploying from 2013
  • 21. Time to change strategy • Rationale – Need to manage twice the servers as today – No increase in staff numbers – Tools becoming increasingly brittle and will not scale as-is • Approach – We are no longer a special case for compute – Adopt an open source tool chain model – Strong engineering skills allow rapid adoption of new technologies • Evaluate solutions in the problem domain • Identify functional gaps and challenge them – Contribute new function back to the community
  • 22. Building Blocks (toolchain diagram): mcollective, yum, Bamboo, Puppet, AIMS/PXE, Foreman, JIRA, OpenStack Nova, git, Koji, Mock, Yum repo / Pulp, Active Directory / LDAP, Lemon, Hadoop, Hardware database, Puppet-DB
  • 23. Training and Support • Buy the book rather than guru mentoring • Newcomers are rapidly productive (and often know more than us) • Community and Enterprise support means we're not on our own
  • 24. Staff Motivation • Skills valuable outside of CERN when an engineer's contract ends
  • 25. Prepare the move to the clouds • Improve operational efficiency – Machine reception and testing – Hardware interventions with long running programs – Multiple operating system demand • Improve resource efficiency – Exploit idle resources, especially waiting for tape I/O – Highly variable load such as interactive or build machines • Improve responsiveness – Self-Service – Coffee break response time
  • 26. Service Model • Pets are given names like pussinboots.cern.ch • They are unique, lovingly hand raised and cared for • When they get ill, you nurse them back to health • Cattle are given numbers like vm0042.cern.ch • They are almost identical to other cattle • When they get ill, you get another one • Future application architectures tend towards Cattle but Pets with configuration management are also viable
  • 27. OpenStack • Open source cloud run by an independent foundation with over 6,000 members from 850 organisations • Started in 2010 but maturing rapidly with public cloud services from Rackspace, HP and Ubuntu. (Platinum member logos)
  • 28. Many OpenStack Components to Configure (diagram): Horizon, Keystone, Glance (Registry, Image), Nova (Compute, Scheduler, Volume, Network)
  • 29. When communities combine… • OpenStack's many components and options make configuration complex out of the box • Puppet Forge module from PuppetLabs (Thanks, Dan Bode) • The Foreman adds OpenStack provisioning for user kiosk
  • 30. Scaling up with Puppet and OpenStack • Use LHC@Home based on BOINC for simulating magnetics guiding particles around the LHC • Naturally, there is a puppet module, puppet-boinc • 1000 VMs spun up to stress test the hypervisors with Puppet, Foreman and OpenStack
  • 31. Next Steps • Expand tool chain – mcollective – Puppet-DB • Deploy at scale in production – Move towards 15,000 hypervisors over next two years – Estimate 100,000-300,000 virtual machines • Work with labs on common solutions for scientific computing – Batch system configurations – Grids – Publishing to http://github.com/cernops • Investigate desktop and device management – Linux desktops – Macs – KVMs, PDUs
  • 32. Final Thoughts • A small project to share documents at CERN in the '90s created the massive phenomenon that is today's world wide web • Open Source • Vibrant community and eco-system • Working with the Puppet and OpenStack communities has shown the power of collaboration • We have built a toolchain in one year with part time resources • Running 15,000 servers and up to 300,000 VMs is scary but achievable • Looking forward to further contributions as we move to large scale deployment
  • 33. For more details, see Ben Jones' talk at 15:50 today: Configuration Management at CERN – From Homegrown to Industry Standard. Tim Bell
  • 34. References: CERN http://public.web.cern.ch/public/ • Scientific Linux http://www.scientificlinux.org/ • Worldwide LHC Computing Grid http://lcg.web.cern.ch/lcg/ and http://rtm.hep.ph.ic.ac.uk/ • Jobs http://cern.ch/jobs • Detailed Report on Agile Infrastructure http://cern.ch/go/N8wp
  • 35. Backup Slides
  • 36. CERN's tools • The world's most powerful accelerator: LHC – A 27 km long tunnel filled with high-tech instruments – Equipped with thousands of superconducting magnets – Accelerates particles to energies never before obtained – Produces particle collisions creating microscopic "big bangs" • Very large sophisticated detectors – Four experiments each the size of a cathedral – Hundred million measurement channels each – Data acquisition systems treating Petabytes per second • Top level computing to distribute and analyse the data – A Computing Grid linking ~200 computer centres around the globe – Sufficient computing power and storage to handle 25 Petabytes per year, making them available to thousands of physicists for analysis
  • 37. Our Infrastructure • Hardware is generally based on commodity, white-box servers – Open tendering process based on SpecInt/CHF, CHF/Watt and GB/CHF – Compute nodes typically dual processor, 2 GB per core – Bulk storage on 24x2TB disk storage-in-a-box with a RAID card • Vast majority of servers run Scientific Linux, developed by Fermilab and CERN, based on Redhat Enterprise – Focus is on stability in view of the number of centres on the WLCG
  • 38. New architecture data flows
  • 39. OpenStack (Gold member logos)