A presentation about how we use Apache ActiveMQ (fuse-message-broker) at CERN for monitoring the Large Hadron Collider.

Presented at the FUSE Community Day, London, 2010.

  1. James Casey<br />CERN IT Department<br />Grid Technologies Group<br />FUSE Community Day, London, 2010<br />Using ActiveMQ at CERN for the Large Hadron Collider<br />
  2. Overview<br />What we do at CERN<br />Current ActiveMQ Usage<br />Monitoring a distributed infrastructure<br />Lessons Learned<br />Future ActiveMQ Usage<br />Building a generic messaging service<br />
  3. LHC is a very large scientific instrument…<br />CMS<br />LHCb<br />ALICE<br />ATLAS<br />Large Hadron Collider<br />27 km circumference<br />Lake Geneva<br />
  4. … based on advanced technology<br />27 km of superconducting magnets cooled in superfluid helium at 1.9 K<br />
  5. What are we looking for?<br />To answer fundamental questions about the construction of the universe<br />Why do we have mass? (Higgs Boson)<br />Search for a Grand Unified Theory<br />Supersymmetry<br />Dark Matter, Dark Energy<br />Antimatter/matter asymmetry<br />
  6. This Requires…<br />1. Accelerators: powerful machines that accelerate particles to extremely high energies and then bring them into collision with other particles<br />2. Detectors: gigantic instruments that record the resulting particles as they “stream” out from the point of collision.<br />3. Computers: to collect, store, distribute and analyse the vast amount of data produced by the detectors<br />4. People: only a worldwide collaboration of thousands of scientists, engineers, technicians and support staff can design, build and operate the complex “machines”<br />
  7. View of the ATLAS detector during construction<br />Length : ~ 46 m<br />Radius : ~ 12 m<br />Weight : ~ 7000 tons<br />~10⁸ electronic channels<br />
  8. A collision at LHC<br />Bunches, each containing 100 billion protons, cross 40 million times a second in the centre of each experiment<br />1 billion proton-proton interactions per second in ATLAS & CMS!<br />Large numbers of collisions per event<br /> ~ 1000 tracks stream into the detector every 25 ns<br /> a large number of channels (~ 100 M) → ~ 1 MB / 25 ns, i.e. 40 TB/s!<br />
  9. The Data Acquisition<br />Cannot possibly extract and record 40 TB/s. Essentially 2 stages of selection:<br />- dedicated custom-designed hardware processors: 40 MHz → 100 kHz<br />- then each ‘event’ sent to a free core in a farm of ~ 30k CPU-cores: 100 kHz → a few 100 Hz<br />
  10. Tier 0 at CERN: Acquisition, First pass processing, Storage & Distribution<br />Ian.Bird@cern.ch<br />
  11. First Beam day – 10 Sep. 2008<br />
  12.
  13. The LHC Computing Challenge<br />Experiments will produce about 15 Million Gigabytes (15 PB) of data each year (about 20 million CDs!)<br />LHC data analysis requires a computing power equivalent to ~100,000 of today's fastest PC processors (140 MSI2k)<br />Analysis carried out at more than 140 computing centres<br />12 large centres for primary data management: CERN (Tier-0) and eleven Tier-1s<br />38 federations of smaller Tier-2 centres<br />
  14. Solution: the Grid<br />Use the Grid to unite computing resources of particle physics institutions around the world<br />The World Wide Web provides seamless access to information that is stored in many millions of different geographical locations<br />The Grid is an infrastructure that provides seamless access to computing power and data storage capacity distributed over the globe.<br />It makes multiple computer centres look like a single system to the end-user.<br />
  15. LHC Computing Grid project (WLCG)<br />The grid is complex<br />Highly distributed<br />No central control<br />Lots of software in many languages<br />Grid middleware SLOC: 1.7M total<br />C++ 850K, C 550K, SH 160K, Java 115K, Python 50K, Perl 35K<br />Experiment code, e.g. ATLAS: C++ 7M SLOC<br />Complex service dependencies<br />
  16. My Problem – Monitoring the operational grid infrastructure<br />Tools for Operations and Monitoring<br />Build and run monitoring infrastructure for WLCG<br />Operational tools for management of grid infrastructures<br />Examples:<br />Configuration database<br />Helpdesk / ticketing<br />Monitoring<br />Availability reporting<br />Early design decision:<br />Use messaging as an integration framework<br />
  17. Open Source to the core<br />Design and develop services for Open Science based on:<br />Open source software<br />Open protocols<br />Funded by a series of EU projects<br />EDG, EGEE, EGI.eu, EMI<br />Backed by industry support<br />All our code is open source and freely available<br />Results published in Open Access journals<br />
  18. Use Case – Availability Monitoring and Reporting<br />Monitoring of reliability and availability of European distributed computing infrastructure<br />Data must be reliable<br />Definitive source of availability and accounting reports<br />Distributed operations model<br />Grid implies ‘cross-administrative domain’<br />No root login!<br />Global ticketing<br />Distributed operations dashboards<br />
  19. Solution<br />Distributed monitoring based on Nagios<br />Tied together with ActiveMQ<br />Network of 4 brokers in 3 countries<br />Linked to ticketing and alarm systems<br />Message-level signing + encryption for verification of identity<br />Uses STOMP for all communication<br />Code in Perl & Python<br />Topics with Virtual Consumers<br />All persistent messages<br />Topic naming used for filtering and selection<br />
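A minimal sketch of the kind of STOMP frame those Perl and Python clients exchange. The destination name and message body are made up for illustration, and the real clients also sign and encrypt the payload:

```python
def stomp_send_frame(destination: str, body: str, persistent: bool = True) -> bytes:
    """Build a raw STOMP 1.0 SEND frame.

    content-length declares the payload size explicitly, and
    persistent:true asks the broker to store the message, matching
    the all-persistent setup described above.
    """
    payload = body.encode("utf-8")
    headers = (
        f"destination:{destination}\n"
        f"persistent:{'true' if persistent else 'false'}\n"
        f"content-length:{len(payload)}\n"
    )
    # A STOMP frame is: command, headers, blank line, body, NUL terminator.
    return b"SEND\n" + headers.encode("utf-8") + b"\n" + payload + b"\x00"

frame = stomp_send_frame("/topic/grid.probe.results", '{"status": "OK"}')
```

STOMP's simplicity is exactly why it suits lightweight clients in many languages; the frame above is plain text plus a NUL byte.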
  20. Architecture<br />
  21. Component drilldown<br />
  22. Current Status<br />16 national-level Nagios servers<br />Will grow to ~40 in next 3 months<br />Clients distributed across 40 countries<br />315 sites<br />5K services<br />500,000 test results/day<br />3 consumers of full data stream to database for analysis and post-processing<br />40 distributed alarm dashboards with filtered feeds<br />
  23. Lessons (1)<br />Just using STOMP is sub-optimal<br />Pros:<br />Very simple<br />Good for lightweight clients in many languages<br />Cons:<br />Hard to write reliable long-lived clients<br />No NACK, no heartbeat<br />Ambiguities in the specification<br />Content-length and TextMessage<br />Content-encoding<br />Not really broker-independent in practice<br />Interested in contributing to STOMP 1.1/2.0<br />
  24. Lessons (2)<br />JMS durable consumers suck<br />Fragile in a Network of Brokers<br />Many problems now fixed by FUSE<br />Virtual Topics solve the problem<br />Pros:<br />Just like a queue<br />Can monitor queue length, purge<br />Cons:<br />Issues with selectors<br />Startup race conditions (solvable via config)<br />
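ActiveMQ's default virtual-topic convention is what makes the "just like a queue" point work: producers publish to an ordinary topic under the `VirtualTopic.` prefix, and each consumer group reads from its own `Consumer.<group>.` queue. A sketch of the destination names (the `grid.results` topic and `db` group are illustrative):

```python
def virtual_topic(name: str) -> str:
    # Producers publish to a normal topic under the VirtualTopic. prefix.
    return f"/topic/VirtualTopic.{name}"

def consumer_queue(group: str, name: str) -> str:
    # Each consumer group gets its own queue view of the topic, so queue
    # length can be monitored and purged like any other queue.
    return f"/queue/Consumer.{group}.VirtualTopic.{name}"

print(virtual_topic("grid.results"))         # /topic/VirtualTopic.grid.results
print(consumer_queue("db", "grid.results"))  # /queue/Consumer.db.VirtualTopic.grid.results
```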
  25. Lessons (3)<br />A network of brokers seems attractive<br />Pros:<br />It’s all a cloud<br />Clients connect anywhere and it “just works”<br />Cons:<br />It’s a very complicated area of code<br />Often you need to “ask the computer”<br />Or a core ActiveMQ developer<br />Trade-off between resilience/scaling and complexity<br />
  26. Lessons (4)<br />Know the code<br />Most of it is very simple<br />Even for non-Java developers<br />If you keep away from “Java-ish” stuff<br />JTA, XA, Spring<br />Plugin architecture is very easy to work with<br />Most things can be implemented by a plugin<br />E.g. monitoring, logging, restricting features, AuthN/AuthZ<br />Docs currently don’t explain everything<br />Especially the interactions between plugins/features<br />
  27. Lessons (5)<br />Stay in the ballpark<br />If it’s not in tests:<br />Think twice about using the feature in that way…<br />Write a test for it!<br />Examples<br />SSL and network connectors<br />Network of Brokers with odd topologies<br />STOMP/OpenWire differences in feature support<br />
  28. Nagios for ActiveMQ<br />We use Nagios to monitor<br />Brokers<br />Producers/consumers<br />Uses jmx4perl to reduce JVM load on the Nagios machine<br />Exposes JMX information as JSON<br />Simple Perl interface to write clients<br />Generic Nagios checks<br />Looking at how to make more available for the community<br />
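jmx4perl talks to a Jolokia-style agent that serves JMX attributes as JSON, and the Nagios check then maps a numeric value onto exit codes. A sketch of that shape; the MBean name varies with the ActiveMQ version, and the thresholds here are invented for illustration:

```python
import json

def jolokia_read(mbean: str, attribute: str) -> str:
    # A JSON "read" request for a single JMX attribute.
    return json.dumps({"type": "read", "mbean": mbean, "attribute": attribute})

def nagios_status(response: str, warn: float, crit: float) -> int:
    # Map the returned value onto Nagios exit codes:
    # 0 = OK, 1 = WARNING, 2 = CRITICAL.
    value = json.loads(response)["value"]
    if value >= crit:
        return 2
    if value >= warn:
        return 1
    return 0

req = jolokia_read("org.apache.activemq:BrokerName=broker1,Type=Broker",
                   "TotalMessageCount")
status = nagios_status('{"value": 120000}', warn=100000, crit=500000)  # 1 (WARNING)
```

Doing the JMX work in an agent inside the broker JVM, with the check as a thin HTTP/JSON client, is what keeps the JVM load off the Nagios machine.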
  29. Broker Monitoring<br />Standard OS information<br />Filesystem full, processes running, socket counts, open file counts<br />JMX for broker statistics<br />Store usage, JVM stats, inactive durable subs, queues with pending messages<br />JMX-based scripts to manage brokers<br />Remove unwanted advisories<br />Purge queues with no consumers<br />
  30. Virtual Topic monitoring<br />Full testing of consumers from producers on all brokers in the Network of Brokers<br />Consumers instrumented to reply to test messages<br />Addressed to a single client-id on a topic<br />Send message to topic in Reply-To<br />Nagios sends messages to all brokers for a topic<br />Checks they all come back<br />Useful to check that all brokers in the network are forwarding correctly<br />
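The echo check above can be sketched as: tag one probe per broker with a correlation id and a Reply-To destination, then diff the brokers that answered against those probed. Destination and broker names below are illustrative, not the production ones:

```python
import uuid

def make_probe(topic: str, reply_to: str) -> dict:
    # Headers for an echo probe; instrumented consumers copy the
    # correlation id back to the reply-to destination.
    return {
        "destination": topic,
        "reply-to": reply_to,
        "correlation-id": str(uuid.uuid4()),
    }

def silent_brokers(probed: list, replied: list) -> list:
    # Brokers that never echoed the probe are not forwarding correctly.
    return sorted(set(probed) - set(replied))

print(silent_brokers(["broker1", "broker2", "broker3"], ["broker1", "broker3"]))
```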
  31. Nagios broker status check<br />
  32. To the future – a generic messaging service<br />Many concurrent applications …<br />… in many languages …<br />… over the WAN …<br />… with little control over the users<br />Not a typical messaging problem?<br />
  33. Design thoughts – File Based Queue<br />Isolate clients from messaging via filesystem<br />Particularly in the WAN<br />Always assume messaging could be uncontactable<br />Keeps “core” broker network small<br />And keeps complexity isolated<br />Allows all clients to use best language/protocol to talk to messaging<br />
  34. Design Thoughts – AMQP-style remote messaging<br />Queues bound to broker nodes<br />IP-like routing sends messages to destinations<br />Clients connect to specific instances<br />Better determinacy in network<br />Easier to manage explicit connections between brokers<br />
  35. Summary<br />ActiveMQ is a key technology choice for operating and monitoring the WLCG grid infrastructure<br />It provides a scalable and adaptable platform for building a wide range of messaging-based applications<br />FUSE fits our model of open source software with industrial support<br />
  36. Thank you for your attention<br />Questions?<br />
