CERN's Cloud Computing Infrastructure

                        Tony Cass, Sebastien Goasguen, Belmiro Moreira, Ewan Roche,
                                     Ulrich Schwickerath, Romain Wartel



                                      Cloudview conference, Porto, 2010


                      See also related presentations:
                           HEPiX spring and autumn meetings, 2009 and 2010
                           Virtualization vision, Grid Deployment Board (GDB), 9 September 2009
                           Batch virtualization at CERN, EGEE09 conference, Barcelona


CERN IT Department
CH-1211 Genève 23
        Switzerland
   www.cern.ch/it
Outline and disclaimer


    An introduction to CERN
    Why virtualization and cloud computing?
    Virtualization of batch resources at CERN
    Building blocks and current status
    Image management systems: ISF and ONE
    Status of the project and first numbers

Disclaimer: We are still in the testing and evaluation phase. No final decision
has been taken yet on what we are going to use in the future.
All given numbers and figures are preliminary.

Introduction to CERN

           European Organization for Nuclear Research

    The world’s largest particle physics laboratory
    Located on the Swiss/French border
    Founded in 1954; funded and staffed by 20 member states
    With many contributors from the USA
    Birthplace of the World Wide Web
    Made popular by the movie “Angels and Demons”
    Flagship accelerator: the LHC
    http://www.cern.ch
Introduction to CERN: LHC and the experiments

    Experiments: ALICE, ATLAS, CMS, LHCb, LHCf, TOTEM

    Circumference of the LHC: 26 659 m
    Magnets: 9300
    Temperature: -271.3°C (1.9 K)
    Cooling: ~60 t of liquid He
    Max. beam energy: 7 TeV
    Current beam energy: 3.5 TeV

Introduction to CERN


    Data signal/noise ratio: 10^-9
    Data volume:
        High rate × large number of channels × 4 experiments
        → 15 PetaBytes of new data each year
    Compute power:
        Event complexity × number of events × thousands of users
        → 100 k CPUs (cores)
    Worldwide analysis & funding:
        Computing funded locally in major regions & countries
        Efficient analysis everywhere
        → GRID technology
LCG computing GRID

    Required computing capacity: ~100 000 processors

    Number of sites:
        T0: 1 (CERN), ~20% of the total capacity
        T1: 11 around the world
        T2: ~160

    http://lcg.web.cern.ch/lcg/public/

The CERN Computer Center

    Disk and tape:
        1500 disk servers
        5 PB disk space
        16 PB tape storage

    Computing facilities:
        >20 000 CPU cores (batch only)
        Up to ~10 000 concurrent jobs
        Job throughput: ~200 000 jobs/day

    http://it-dep.web.cern.ch/it-dep/communications/it_facts__figures.htm

Why virtualization and cloud computing?


    Service consolidation:
        Improve resource usage by consolidating mostly idle machines onto single big hypervisors
        Decouple the hardware life cycle from the applications running on the box
        Ease management by supporting live migration

    Virtualization of batch resources:
        Decouple jobs and physical resources
        Ease management of the batch farm resources
        Enable the computer center for new computing models

    This presentation is about virtualization of batch resources only.

Batch virtualization

    CERN batch farm lxbatch:
        ~3000 physical hosts
        ~20 000 CPU cores
        >70 queues

    Type 1: Run my jobs in your VM

    Type 2: Run my jobs in my VM




Towards cloud computing




    Type 3: Give me my infrastructure, i.e. a VM or a batch of VMs

Philosophy


    (SLC = Scientific Linux CERN)

    [Diagram] Near future: batch jobs run on SLC4 and SLC5 virtual worker nodes hosted on a
    hypervisor cluster, alongside physical SLC4 and SLC5 worker nodes.

    [Diagram] (Far) future? An internal cloud on the hypervisor cluster serving batch, T0,
    development and other/cloud applications.
Visions beyond the current plan

    Reusing/sharing images between different sites (phase 2)
        HEPiX working group founded in autumn 2009 to define rules and
        boundary conditions (https://www.hepix.org/)

    Experiment-specific images (phase 3)
        Use of images which are customized for specific experiments

    Use of resources in a cloud-like operation mode (phase 4)

    Images directly join experiment-controlled scheduling systems (phase 5)
        Controlled by the experiment
        Spread across sites
Virtual batch: basic ideas
    Virtual batch worker nodes:
        Clones of real worker nodes, same setup
        Mixed with physical resources
        Dynamically join the batch farm as normal worker nodes
        Limited lifetime: stop accepting jobs after 24 h
        Destroyed when empty
        Only one user job per VM at a time

    Note:
    The limited lifetime allows for a fully automated system which dynamically adapts
    to the current needs and automatically deploys intrusive updates (see the lifecycle
    sketch below).
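To make the limited-lifetime idea concrete, here is a minimal Python sketch of the drain-and-destroy logic described above, assuming LSF's badmin and bjobs commands are available on the node. The host handling, intervals and shutdown call are illustrative assumptions, not the actual lxcloud implementation.

#!/usr/bin/env python
# Sketch of the limited-lifetime policy for a virtual batch worker node:
# accept jobs for 24 hours, then close the node in LSF, wait until it is
# empty and shut it down. Illustrative only -- not the production code.

import socket
import subprocess
import time

LIFETIME = 24 * 3600          # accept jobs for 24 hours
POLL_INTERVAL = 5 * 60        # check for remaining jobs every 5 minutes

def running_jobs(host):
    """Return the number of job lines LSF still reports for this host."""
    res = subprocess.run(["bjobs", "-u", "all", "-r", "-m", host],
                         capture_output=True, text=True)
    lines = [l for l in res.stdout.splitlines()
             if l.strip() and not l.startswith("JOBID")
             and "No unfinished job" not in l]
    return len(lines)

def main():
    host = socket.gethostname()
    time.sleep(LIFETIME)                          # normal operation phase
    subprocess.run(["badmin", "hclose", host])    # stop accepting new jobs
    while running_jobs(host) > 0:                 # drain: wait for the last job
        time.sleep(POLL_INTERVAL)
    subprocess.run(["shutdown", "-h", "now"])     # VM destroys itself when empty

if __name__ == "__main__":
    main()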
Virtual batch: basic ideas, technical

    Images:
        Staged on the hypervisors
        Master images; instances use LVM snapshots (see the sketch below)
        Start with only a few different flavors

    Image creation:
        Derived from a centrally managed “golden node”
        Regularly updated and redistributed to get updates “in”



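As an illustration of the LVM-snapshot approach, the following Python sketch creates a copy-on-write snapshot of a staged master image for a new VM instance. The volume-group and logical-volume names are hypothetical; the real lxcloud tooling and naming scheme are not shown here.

#!/usr/bin/env python
# Illustrative sketch: instantiate a VM disk as an LVM snapshot of a
# staged master image. Names (volume group, master LV) are hypothetical.

import subprocess

VG = "vg_vmimages"              # hypothetical volume group on the hypervisor
MASTER_LV = "slc5_batch_master" # hypothetical master image logical volume

def create_instance_disk(instance, cow_size="5G"):
    """Create a copy-on-write snapshot of the master image for one VM."""
    snap = "vm_%s" % instance
    subprocess.check_call([
        "lvcreate",
        "--snapshot",                    # snapshot of an existing LV
        "--size", cow_size,              # space reserved for changed blocks
        "--name", snap,
        "/dev/%s/%s" % (VG, MASTER_LV),
    ])
    return "/dev/%s/%s" % (VG, snap)     # path handed to the VM definition

def destroy_instance_disk(instance):
    """Remove the snapshot once the VM has been destroyed."""
    subprocess.check_call(["lvremove", "-f", "/dev/%s/vm_%s" % (VG, instance)])

if __name__ == "__main__":
    disk = create_instance_disk("worker042")
    print("New instance disk:", disk)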
Virtual batch: basic ideas




    Image distribution:
        The only shared file system available is AFS
        Prefer peer-to-peer methods (more on that later):
            SCP wave
            rtorrent




Virtual batch: basic ideas



    VM placement and management system:
        Use existing solutions
        Testing both a free and a commercial solution:
            OpenNebula (ONE)
            Platform's Infrastructure Sharing Facility (ISF), for high-level VM management




Batch virtualization: architecture

    [Architecture diagram] Grey: physical resources; colored: different VMs.
    Components: job submission (CE/interactive), batch system management, centrally managed
    golden nodes, the VM kiosk, the VM management system, hypervisors / hardware resources,
    and VM worker nodes with limited lifetime.
Status of building blocks (test system)


                         Submission and     Hypervisor       VM kiosk and          VM management
                         batch management   cluster          image distribution    system

    Initial deployment   OK                 OK               OK                    OK
    Central management   OK                 OK               Mostly implemented    ISF OK, ONE missing
    Monitoring/alarming  OK                 Switched off     Missing               Missing
                                            for tests

Image distribution: SCP wave versus rtorrent



    [Plot, preliminary] Comparison of image distribution with SCP wave and BitTorrent (rtorrent);
    a few slow nodes are under investigation.
VM placement and management


    OpenNebula (ONE):

    Basic model:
        Single ONE master
        Communication with the hypervisors via ssh only (see the sketch below)
        (Currently) no special tools on the hypervisors

    Some scalability issues at the beginning (already at ~50 VMs)
    Addressing issues as they turn up
    Close collaboration with the developers, ideas for improvements
    Managed to start more than 7,500 VMs



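Since ONE talks to the hypervisors over plain ssh, the state of the cluster can be inspected the same way. The sketch below, which assumes passwordless ssh and Xen's xm tool on the hypervisors, simply counts the running guest domains on each host; it is an illustration of the ssh-only model, not part of OpenNebula itself, and the host names are placeholders.

#!/usr/bin/env python
# Illustration of the ssh-only management model: poll each hypervisor
# over ssh and count the Xen domains it is running. Host list and ssh
# setup (passwordless keys) are assumptions for this sketch.

import subprocess

HYPERVISORS = ["lxcloud%03d" % i for i in range(1, 17)]   # hypothetical host names

def domains_on(host):
    """Return the number of guest domains reported by 'xm list' on a host."""
    out = subprocess.run(["ssh", host, "xm", "list"],
                         capture_output=True, text=True).stdout
    # Skip the header line and Domain-0 itself
    rows = [l for l in out.splitlines()[1:] if l.strip()]
    return sum(1 for l in rows if not l.startswith("Domain-0"))

if __name__ == "__main__":
    total = 0
    for h in HYPERVISORS:
        n = domains_on(h)
        total += n
        print("%-12s %3d VMs" % (h, n))
    print("Total running VMs:", total)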
Scalability tests: some first numbers

    “One shot” test with OpenNebula:
        Inject virtual machine requests
        And let them die
        Record the number of alive machines seen by LSF every 30 s (see the sketch below)

    [Plot, preliminary] Number of alive VMs versus time (time axis in units of 1 h 5 min).
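The sketch below illustrates what such a “one shot” test can look like: a batch of VM requests is injected through OpenNebula's onevm create command, and the number of VM worker nodes that LSF reports as usable is sampled every 30 seconds. The template file name, the host-name prefix used to filter the bhosts output and the request count are placeholders, not the values used in the actual test.

#!/usr/bin/env python
# Sketch of a "one shot" scalability test: inject VM requests via
# OpenNebula's CLI, then record every 30 s how many virtual worker
# nodes LSF sees as usable. Template path, host prefix and counts are
# placeholders for illustration.

import subprocess
import time

N_REQUESTS = 500                      # placeholder request count
TEMPLATE = "vmbatch.one"              # placeholder ONE template file
VM_HOST_PREFIX = "vmbatch"            # placeholder worker-node name prefix
SAMPLE_INTERVAL = 30                  # seconds, as in the test

def inject_requests(n):
    for _ in range(n):
        subprocess.run(["onevm", "create", TEMPLATE])

def alive_vm_workers():
    """Count batch hosts matching the VM prefix that LSF reports as 'ok'."""
    out = subprocess.run(["bhosts", "-w"], capture_output=True, text=True).stdout
    count = 0
    for line in out.splitlines()[1:]:            # skip the header line
        fields = line.split()
        if fields and fields[0].startswith(VM_HOST_PREFIX) and fields[1] == "ok":
            count += 1
    return count

if __name__ == "__main__":
    inject_requests(N_REQUESTS)
    start = time.time()
    with open("alive_vms.log", "w") as log:
        while True:                              # sample until stopped by hand
            log.write("%d %d\n" % (time.time() - start, alive_vm_workers()))
            log.flush()
            time.sleep(SAMPLE_INTERVAL)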
VM placement and management


    Platform's Infrastructure Sharing Facility (ISF):
        One active ISF master node, plus fail-over candidates
        Hypervisors run an agent which talks to Xen
            Needed to be packaged for CERN
        Resource management layer similar to LSF
        Scalability expected to be good, but needs verification
            Tested with 2 racks (96 machines) so far, ramping up
            Filled with ~2,000 VMs so far (which is the maximum for this setup)

    See: http://www.platform.com


Screenshots: ISF




    [Screenshot] ISF VM status view: 1 of 2 racks available.
    Note: only one of the two racks was enabled, for demonstration purposes.




Summary

    Virtualization efforts at CERN are proceeding; there is still some work to be done.

    Main challenges:
        Scalability of the provisioning systems and of the batch system
        No decision yet on which provisioning system will be used
        Reliability and speed of image distribution
        General readiness for production (hardening)
        Seamless integration into the existing infrastructure


Outlook




    What's next?
        Continue testing ONE and ISF in parallel
        Solve the remaining (known) issues
        Release the first VMs for testing by our users soon




Questions?




Philosophy




    [Diagram] VM provisioning system(s): OpenNebula and pVMO (part of Platform's Infrastructure
    Sharing Facility, ISF, used for high-level VM management), running on top of the hypervisor
    cluster (the physical resources).
Details: some additional explanations ...

    “Golden node”:
        A centrally managed (i.e. Quattor-controlled) standard worker node which
            Is a virtual machine
            Does not accept jobs
            Receives regular updates
        Purpose: creation of VM images

    “Virtual machine worker node”:
        A virtual machine derived from a golden node
        Not updated during its lifetime
        Dynamically adds itself to the batch farm
        Accepts jobs for only 24 h
        Runs only one user job at a time
        Destroys itself when empty


VM kiosk and image distribution

    Boundary conditions at CERN:
        The only available shared file system is AFS
        Network infrastructure with a single 1 GE connection
            No dedicated fast network (e.g. 10 GE, IB or similar) that could be used for transfers

    Tested options:

        SCP wave:
            Developed at Clemson University
            Based on simple scp (see the sketch below)

        rtorrent:
            Infrastructure developed at CERN
            Each node starts serving blocks it already hosts
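The idea behind an SCP wave is simple: every node that has already received the image immediately becomes a source for further copies, so the number of sources roughly doubles with each round. The Python sketch below illustrates that fan-out pattern with plain scp; the image path and host names are placeholders, passwordless ssh between nodes is assumed, the copies of one round are shown sequentially for brevity, and this is not the Clemson implementation itself.

#!/usr/bin/env python
# Illustration of the SCP-wave idea: nodes that already hold the image
# serve further copies, so the set of sources grows with every round.
# Image path and host list are placeholders; this is not the Clemson tool.

import subprocess

IMAGE = "/var/tmp/slc5_batch.img.gz"     # placeholder image path
SEED = "lxcloud001"                      # node already holding the image

def push(src, dst):
    """Copy the image from one node to another with plain scp (run on src)."""
    cmd = ["ssh", src, "scp", IMAGE, "%s:%s" % (dst, IMAGE)]
    return subprocess.run(cmd).returncode == 0

def scp_wave(targets):
    sources = [SEED]
    pending = list(targets)
    while pending:
        # Each source pushes to one pending node per round; successful
        # receivers join the source pool for the next round. The real
        # tool runs the copies of a round in parallel.
        batch = pending[:len(sources)]
        pending = pending[len(sources):]
        for src, dst in zip(list(sources), batch):
            if push(src, dst):
                sources.append(dst)
    return sources

if __name__ == "__main__":
    hosts = ["lxcloud%03d" % i for i in range(2, 20)]  # placeholder target hosts
    print("Image present on:", scp_wave(hosts))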
Details: VM kiosk and image distribution

    Local image distribution model at CERN

    Virtual machines are instantiated as LVM snapshots of a base image.
    This process is very fast.

    Replacing a production image (see the sketch below):
        Approved images are moved to a central image repository (the “kiosk”)
        Hypervisors check regularly for new images on the kiosk node
        The new image is transferred to a temporary area on the hypervisor
        When the transfer is finished, a new logical volume (LV) is created
        The new image is unpacked into the new LV
        The current production image is renamed (via lvrename)
        The new image is renamed to become the production image

    Note: this process may need a more sophisticated locking strategy.
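A minimal Python sketch of the hypervisor-side update step described above, using the standard LVM tools: the new image is fetched from the kiosk into a temporary area, unpacked into a freshly created logical volume, and two lvrename calls swap it into place. The kiosk location, volume group, LV names and sizes are hypothetical, and the locking mentioned in the note is deliberately omitted.

#!/usr/bin/env python
# Sketch of replacing the production master image on a hypervisor.
# Kiosk location, volume group and image size are hypothetical; the
# locking strategy mentioned on the slide is deliberately omitted.

import subprocess

KIOSK = "vmkiosk.cern.ch:/images/slc5_batch.img.gz"   # hypothetical kiosk path
VG = "vg_vmimages"                                    # hypothetical volume group
TMP = "/var/tmp/slc5_batch.img.gz"
LV_SIZE = "20G"

def run(*cmd):
    subprocess.check_call(list(cmd))

def replace_production_image():
    # 1. Transfer the approved image into a temporary area
    run("scp", KIOSK, TMP)
    # 2. Create a new logical volume and unpack the image into it
    run("lvcreate", "--size", LV_SIZE, "--name", "slc5_batch_new", VG)
    run("sh", "-c", "gunzip -c %s > /dev/%s/slc5_batch_new" % (TMP, VG))
    # 3. Swap: rename the current production image away, promote the new one
    run("lvrename", VG, "slc5_batch_prod", "slc5_batch_old")
    run("lvrename", VG, "slc5_batch_new", "slc5_batch_prod")

if __name__ == "__main__":
    replace_production_image()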
Image distribution with torrent: transfer speed




    [Plot, preliminary] Transfer speed for a 7 GB compressed image file distributed to 452 target
    nodes: 90% of the nodes finished after 25 min; a few slow nodes are under investigation.




Image distribution: total distribution speed

    → Unpacking still needs some tuning!

    [Plot, preliminary] Total distribution time (transfer plus unpacking) for a 7 GB compressed
    image file distributed to 452 target nodes: all nodes done after 1.5 h.

CERN IT Department
CH-1211 Genève 23
        Switzerland
   www.cern.ch/it
                                      CERNs Cloud computing infrastructure - a status report - 33

More Related Content

What's hot

XDAQ_AP_LNL-INFN_27112014
XDAQ_AP_LNL-INFN_27112014XDAQ_AP_LNL-INFN_27112014
XDAQ_AP_LNL-INFN_27112014Andrea PETRUCCI
 
Cern intro 2010-10-27-snw
Cern intro 2010-10-27-snwCern intro 2010-10-27-snw
Cern intro 2010-10-27-snwScott Adams
 
A High-Performance Campus-Scale Cyberinfrastructure for Effectively Bridging ...
A High-Performance Campus-Scale Cyberinfrastructure for Effectively Bridging ...A High-Performance Campus-Scale Cyberinfrastructure for Effectively Bridging ...
A High-Performance Campus-Scale Cyberinfrastructure for Effectively Bridging ...Larry Smarr
 
20190314 cern register v3
20190314 cern register v320190314 cern register v3
20190314 cern register v3Tim Bell
 
BonFIRE TridentCom presentation
BonFIRE TridentCom presentationBonFIRE TridentCom presentation
BonFIRE TridentCom presentationBonFIRE
 
L Forer - Cloudgene: an execution platform for MapReduce programs in public a...
L Forer - Cloudgene: an execution platform for MapReduce programs in public a...L Forer - Cloudgene: an execution platform for MapReduce programs in public a...
L Forer - Cloudgene: an execution platform for MapReduce programs in public a...Jan Aerts
 
Integration of Cloud and Grid Middleware at DGRZR
Integration of Cloud and Grid Middleware at DGRZRIntegration of Cloud and Grid Middleware at DGRZR
Integration of Cloud and Grid Middleware at DGRZRStefan Freitag
 
Virtual Network Functions as Real-Time Containers in Private Clouds
Virtual Network Functions as Real-Time Containers in Private CloudsVirtual Network Functions as Real-Time Containers in Private Clouds
Virtual Network Functions as Real-Time Containers in Private Cloudstcucinotta
 
Hpc Cloud project Overview
Hpc Cloud project OverviewHpc Cloud project Overview
Hpc Cloud project OverviewFloris Sluiter
 
An Evaluation of Adaptive Partitioning of Real-Time Workloads on Linux
An Evaluation of Adaptive Partitioning of Real-Time Workloads on LinuxAn Evaluation of Adaptive Partitioning of Real-Time Workloads on Linux
An Evaluation of Adaptive Partitioning of Real-Time Workloads on Linuxtcucinotta
 
大強子計算網格與OSS
大強子計算網格與OSS大強子計算網格與OSS
大強子計算網格與OSSYuan CHAO
 
Early Application experiences on Summit
Early Application experiences on Summit Early Application experiences on Summit
Early Application experiences on Summit Ganesan Narayanasamy
 
Parton Distributions: future needs, and the role of the High-Luminosity LHC
Parton Distributions: future needs, and the role of the High-Luminosity LHCParton Distributions: future needs, and the role of the High-Luminosity LHC
Parton Distributions: future needs, and the role of the High-Luminosity LHCjuanrojochacon
 
Energy Efficient GPS Acquisition with Sparse-GPS
Energy Efficient GPS Acquisition with Sparse-GPSEnergy Efficient GPS Acquisition with Sparse-GPS
Energy Efficient GPS Acquisition with Sparse-GPSPrasant Misra
 
Jet Energy Corrections with Deep Neural Network Regression
Jet Energy Corrections with Deep Neural Network RegressionJet Energy Corrections with Deep Neural Network Regression
Jet Energy Corrections with Deep Neural Network RegressionDaniel Holmberg
 
AI is Impacting HPC Everywhere
AI is Impacting HPC EverywhereAI is Impacting HPC Everywhere
AI is Impacting HPC Everywhereinside-BigData.com
 
Targeting GPUs using OpenMP Directives on Summit with GenASiS: A Simple and...
Targeting GPUs using OpenMP  Directives on Summit with  GenASiS: A Simple and...Targeting GPUs using OpenMP  Directives on Summit with  GenASiS: A Simple and...
Targeting GPUs using OpenMP Directives on Summit with GenASiS: A Simple and...Ganesan Narayanasamy
 
Research in Soft Real-Time and Virtualized Applications on Linux
Research in Soft Real-Time and Virtualized Applications on LinuxResearch in Soft Real-Time and Virtualized Applications on Linux
Research in Soft Real-Time and Virtualized Applications on Linuxtcucinotta
 
Multicore programmingandtpl
Multicore programmingandtplMulticore programmingandtpl
Multicore programmingandtplYan Drugalya
 

What's hot (20)

XDAQ_AP_LNL-INFN_27112014
XDAQ_AP_LNL-INFN_27112014XDAQ_AP_LNL-INFN_27112014
XDAQ_AP_LNL-INFN_27112014
 
Cern intro 2010-10-27-snw
Cern intro 2010-10-27-snwCern intro 2010-10-27-snw
Cern intro 2010-10-27-snw
 
A High-Performance Campus-Scale Cyberinfrastructure for Effectively Bridging ...
A High-Performance Campus-Scale Cyberinfrastructure for Effectively Bridging ...A High-Performance Campus-Scale Cyberinfrastructure for Effectively Bridging ...
A High-Performance Campus-Scale Cyberinfrastructure for Effectively Bridging ...
 
20190314 cern register v3
20190314 cern register v320190314 cern register v3
20190314 cern register v3
 
BonFIRE TridentCom presentation
BonFIRE TridentCom presentationBonFIRE TridentCom presentation
BonFIRE TridentCom presentation
 
L Forer - Cloudgene: an execution platform for MapReduce programs in public a...
L Forer - Cloudgene: an execution platform for MapReduce programs in public a...L Forer - Cloudgene: an execution platform for MapReduce programs in public a...
L Forer - Cloudgene: an execution platform for MapReduce programs in public a...
 
Integration of Cloud and Grid Middleware at DGRZR
Integration of Cloud and Grid Middleware at DGRZRIntegration of Cloud and Grid Middleware at DGRZR
Integration of Cloud and Grid Middleware at DGRZR
 
Virtual Network Functions as Real-Time Containers in Private Clouds
Virtual Network Functions as Real-Time Containers in Private CloudsVirtual Network Functions as Real-Time Containers in Private Clouds
Virtual Network Functions as Real-Time Containers in Private Clouds
 
Hpc Cloud project Overview
Hpc Cloud project OverviewHpc Cloud project Overview
Hpc Cloud project Overview
 
An Evaluation of Adaptive Partitioning of Real-Time Workloads on Linux
An Evaluation of Adaptive Partitioning of Real-Time Workloads on LinuxAn Evaluation of Adaptive Partitioning of Real-Time Workloads on Linux
An Evaluation of Adaptive Partitioning of Real-Time Workloads on Linux
 
Real-Time API
Real-Time APIReal-Time API
Real-Time API
 
大強子計算網格與OSS
大強子計算網格與OSS大強子計算網格與OSS
大強子計算網格與OSS
 
Early Application experiences on Summit
Early Application experiences on Summit Early Application experiences on Summit
Early Application experiences on Summit
 
Parton Distributions: future needs, and the role of the High-Luminosity LHC
Parton Distributions: future needs, and the role of the High-Luminosity LHCParton Distributions: future needs, and the role of the High-Luminosity LHC
Parton Distributions: future needs, and the role of the High-Luminosity LHC
 
Energy Efficient GPS Acquisition with Sparse-GPS
Energy Efficient GPS Acquisition with Sparse-GPSEnergy Efficient GPS Acquisition with Sparse-GPS
Energy Efficient GPS Acquisition with Sparse-GPS
 
Jet Energy Corrections with Deep Neural Network Regression
Jet Energy Corrections with Deep Neural Network RegressionJet Energy Corrections with Deep Neural Network Regression
Jet Energy Corrections with Deep Neural Network Regression
 
AI is Impacting HPC Everywhere
AI is Impacting HPC EverywhereAI is Impacting HPC Everywhere
AI is Impacting HPC Everywhere
 
Targeting GPUs using OpenMP Directives on Summit with GenASiS: A Simple and...
Targeting GPUs using OpenMP  Directives on Summit with  GenASiS: A Simple and...Targeting GPUs using OpenMP  Directives on Summit with  GenASiS: A Simple and...
Targeting GPUs using OpenMP Directives on Summit with GenASiS: A Simple and...
 
Research in Soft Real-Time and Virtualized Applications on Linux
Research in Soft Real-Time and Virtualized Applications on LinuxResearch in Soft Real-Time and Virtualized Applications on Linux
Research in Soft Real-Time and Virtualized Applications on Linux
 
Multicore programmingandtpl
Multicore programmingandtplMulticore programmingandtpl
Multicore programmingandtpl
 

Viewers also liked

A Mobile Sensing Architecture for Massive Urban Scanning
A Mobile Sensing Architecture for Massive Urban ScanningA Mobile Sensing Architecture for Massive Urban Scanning
A Mobile Sensing Architecture for Massive Urban ScanningEuroCloud
 
Fernando sousa
Fernando sousaFernando sousa
Fernando sousaEuroCloud
 
Cloudy Datacenter Survey
Cloudy Datacenter SurveyCloudy Datacenter Survey
Cloudy Datacenter SurveyEuroCloud
 
Fx apresentacao evento cloudview 2010
Fx apresentacao evento cloudview 2010Fx apresentacao evento cloudview 2010
Fx apresentacao evento cloudview 2010EuroCloud
 
Pedro medeiros citi-cloudviews
Pedro medeiros citi-cloudviewsPedro medeiros citi-cloudviews
Pedro medeiros citi-cloudviewsEuroCloud
 
Ignacio design and building of iaa s clouds
Ignacio design and building of iaa s cloudsIgnacio design and building of iaa s clouds
Ignacio design and building of iaa s cloudsEuroCloud
 
Cloud views2010
Cloud views2010Cloud views2010
Cloud views2010EuroCloud
 
Phil wainewright cloud views - cloud for business transformation
Phil wainewright cloud views - cloud for business transformationPhil wainewright cloud views - cloud for business transformation
Phil wainewright cloud views - cloud for business transformationEuroCloud
 
Eurocloud v01
Eurocloud   v01Eurocloud   v01
Eurocloud v01EuroCloud
 

Viewers also liked (9)

A Mobile Sensing Architecture for Massive Urban Scanning
A Mobile Sensing Architecture for Massive Urban ScanningA Mobile Sensing Architecture for Massive Urban Scanning
A Mobile Sensing Architecture for Massive Urban Scanning
 
Fernando sousa
Fernando sousaFernando sousa
Fernando sousa
 
Cloudy Datacenter Survey
Cloudy Datacenter SurveyCloudy Datacenter Survey
Cloudy Datacenter Survey
 
Fx apresentacao evento cloudview 2010
Fx apresentacao evento cloudview 2010Fx apresentacao evento cloudview 2010
Fx apresentacao evento cloudview 2010
 
Pedro medeiros citi-cloudviews
Pedro medeiros citi-cloudviewsPedro medeiros citi-cloudviews
Pedro medeiros citi-cloudviews
 
Ignacio design and building of iaa s clouds
Ignacio design and building of iaa s cloudsIgnacio design and building of iaa s clouds
Ignacio design and building of iaa s clouds
 
Cloud views2010
Cloud views2010Cloud views2010
Cloud views2010
 
Phil wainewright cloud views - cloud for business transformation
Phil wainewright cloud views - cloud for business transformationPhil wainewright cloud views - cloud for business transformation
Phil wainewright cloud views - cloud for business transformation
 
Eurocloud v01
Eurocloud   v01Eurocloud   v01
Eurocloud v01
 

Similar to Lxcloud

London Ceph Day: Ceph at CERN
London Ceph Day: Ceph at CERNLondon Ceph Day: Ceph at CERN
London Ceph Day: Ceph at CERNCeph Community
 
OSMC 2012 | Monitoring at CERN by Christophe Haen
OSMC 2012 | Monitoring at CERN by Christophe HaenOSMC 2012 | Monitoring at CERN by Christophe Haen
OSMC 2012 | Monitoring at CERN by Christophe HaenNETWAYS
 
Interactive Data Analysis for End Users on HN Science Cloud
Interactive Data Analysis for End Users on HN Science CloudInteractive Data Analysis for End Users on HN Science Cloud
Interactive Data Analysis for End Users on HN Science CloudHelix Nebula The Science Cloud
 
CERN User Story
CERN User StoryCERN User Story
CERN User StoryTim Bell
 
Cern Cloud Architecture - February, 2016
Cern Cloud Architecture - February, 2016Cern Cloud Architecture - February, 2016
Cern Cloud Architecture - February, 2016Belmiro Moreira
 
The World Wide Distributed Computing Architecture of the LHC Datagrid
The World Wide Distributed Computing Architecture of the LHC DatagridThe World Wide Distributed Computing Architecture of the LHC Datagrid
The World Wide Distributed Computing Architecture of the LHC DatagridSwiss Big Data User Group
 
HTCC poster for CERN Openlab opendays 2015
HTCC poster for CERN Openlab opendays 2015HTCC poster for CERN Openlab opendays 2015
HTCC poster for CERN Openlab opendays 2015Karel Ha
 
Big Data for Big Discoveries
Big Data for Big DiscoveriesBig Data for Big Discoveries
Big Data for Big DiscoveriesGovnet Events
 
The size and complexity of the CERN network
The size and complexity of the CERN networkThe size and complexity of the CERN network
The size and complexity of the CERN networkSuma Pria Tunggal
 
CERN Agile Infrastructure, Road to Production
CERN Agile Infrastructure, Road to ProductionCERN Agile Infrastructure, Road to Production
CERN Agile Infrastructure, Road to ProductionSteve Traylen
 
Swiss National Supercomputing Centre CSCS
Swiss National Supercomputing Centre CSCSSwiss National Supercomputing Centre CSCS
Swiss National Supercomputing Centre CSCSRoland Bruggmann
 
AndreaPetrucci_ACAT_2007
AndreaPetrucci_ACAT_2007AndreaPetrucci_ACAT_2007
AndreaPetrucci_ACAT_2007Andrea PETRUCCI
 
C2MON - A highly scalable monitoring platform for Big Data scenarios @CERN by...
C2MON - A highly scalable monitoring platform for Big Data scenarios @CERN by...C2MON - A highly scalable monitoring platform for Big Data scenarios @CERN by...
C2MON - A highly scalable monitoring platform for Big Data scenarios @CERN by...J On The Beach
 
OpenStack at CERN : A 5 year perspective
OpenStack at CERN : A 5 year perspectiveOpenStack at CERN : A 5 year perspective
OpenStack at CERN : A 5 year perspectiveTim Bell
 
The OpenStack Cloud at CERN - OpenStack Nordic
The OpenStack Cloud at CERN - OpenStack NordicThe OpenStack Cloud at CERN - OpenStack Nordic
The OpenStack Cloud at CERN - OpenStack NordicTim Bell
 
Unveiling CERN Cloud Architecture - October, 2015
Unveiling CERN Cloud Architecture - October, 2015Unveiling CERN Cloud Architecture - October, 2015
Unveiling CERN Cloud Architecture - October, 2015Belmiro Moreira
 
ULTRA WIDE BAND BODY AREA NETWORK
ULTRA WIDE BAND BODY AREA NETWORKULTRA WIDE BAND BODY AREA NETWORK
ULTRA WIDE BAND BODY AREA NETWORKaravind m t
 
Achieving Performance Isolation with Lightweight Co-Kernels
Achieving Performance Isolation with Lightweight Co-KernelsAchieving Performance Isolation with Lightweight Co-Kernels
Achieving Performance Isolation with Lightweight Co-KernelsJiannan Ouyang, PhD
 

Similar to Lxcloud (20)

London Ceph Day: Ceph at CERN
London Ceph Day: Ceph at CERNLondon Ceph Day: Ceph at CERN
London Ceph Day: Ceph at CERN
 
OSMC 2012 | Monitoring at CERN by Christophe Haen
OSMC 2012 | Monitoring at CERN by Christophe HaenOSMC 2012 | Monitoring at CERN by Christophe Haen
OSMC 2012 | Monitoring at CERN by Christophe Haen
 
Interactive Data Analysis for End Users on HN Science Cloud
Interactive Data Analysis for End Users on HN Science CloudInteractive Data Analysis for End Users on HN Science Cloud
Interactive Data Analysis for End Users on HN Science Cloud
 
CERN User Story
CERN User StoryCERN User Story
CERN User Story
 
Cern Cloud Architecture - February, 2016
Cern Cloud Architecture - February, 2016Cern Cloud Architecture - February, 2016
Cern Cloud Architecture - February, 2016
 
The World Wide Distributed Computing Architecture of the LHC Datagrid
The World Wide Distributed Computing Architecture of the LHC DatagridThe World Wide Distributed Computing Architecture of the LHC Datagrid
The World Wide Distributed Computing Architecture of the LHC Datagrid
 
HTCC poster for CERN Openlab opendays 2015
HTCC poster for CERN Openlab opendays 2015HTCC poster for CERN Openlab opendays 2015
HTCC poster for CERN Openlab opendays 2015
 
Big Data for Big Discoveries
Big Data for Big DiscoveriesBig Data for Big Discoveries
Big Data for Big Discoveries
 
The size and complexity of the CERN network
The size and complexity of the CERN networkThe size and complexity of the CERN network
The size and complexity of the CERN network
 
CERN Agile Infrastructure, Road to Production
CERN Agile Infrastructure, Road to ProductionCERN Agile Infrastructure, Road to Production
CERN Agile Infrastructure, Road to Production
 
Swiss National Supercomputing Centre CSCS
Swiss National Supercomputing Centre CSCSSwiss National Supercomputing Centre CSCS
Swiss National Supercomputing Centre CSCS
 
AndreaPetrucci_ACAT_2007
AndreaPetrucci_ACAT_2007AndreaPetrucci_ACAT_2007
AndreaPetrucci_ACAT_2007
 
C2MON - A highly scalable monitoring platform for Big Data scenarios @CERN by...
C2MON - A highly scalable monitoring platform for Big Data scenarios @CERN by...C2MON - A highly scalable monitoring platform for Big Data scenarios @CERN by...
C2MON - A highly scalable monitoring platform for Big Data scenarios @CERN by...
 
OpenStack at CERN : A 5 year perspective
OpenStack at CERN : A 5 year perspectiveOpenStack at CERN : A 5 year perspective
OpenStack at CERN : A 5 year perspective
 
cpc-152-2-2003
cpc-152-2-2003cpc-152-2-2003
cpc-152-2-2003
 
The OpenStack Cloud at CERN - OpenStack Nordic
The OpenStack Cloud at CERN - OpenStack NordicThe OpenStack Cloud at CERN - OpenStack Nordic
The OpenStack Cloud at CERN - OpenStack Nordic
 
Unveiling CERN Cloud Architecture - October, 2015
Unveiling CERN Cloud Architecture - October, 2015Unveiling CERN Cloud Architecture - October, 2015
Unveiling CERN Cloud Architecture - October, 2015
 
ULTRA WIDE BAND BODY AREA NETWORK
ULTRA WIDE BAND BODY AREA NETWORKULTRA WIDE BAND BODY AREA NETWORK
ULTRA WIDE BAND BODY AREA NETWORK
 
Grid computing & its applications
Grid computing & its applicationsGrid computing & its applications
Grid computing & its applications
 
Achieving Performance Isolation with Lightweight Co-Kernels
Achieving Performance Isolation with Lightweight Co-KernelsAchieving Performance Isolation with Lightweight Co-Kernels
Achieving Performance Isolation with Lightweight Co-Kernels
 

More from EuroCloud

Cities in the Cloud
Cities in the CloudCities in the Cloud
Cities in the CloudEuroCloud
 
Building an Outsourcing Ecosystem for Science
Building an Outsourcing Ecosystem for ScienceBuilding an Outsourcing Ecosystem for Science
Building an Outsourcing Ecosystem for ScienceEuroCloud
 
Evaluation of Virtual Clusters Performance on a Cloud Computing Infrastructure
Evaluation of Virtual Clusters Performance on a Cloud Computing InfrastructureEvaluation of Virtual Clusters Performance on a Cloud Computing Infrastructure
Evaluation of Virtual Clusters Performance on a Cloud Computing InfrastructureEuroCloud
 
Self Optimizing transactional data grids for elastic cloud environments
Self Optimizing transactional data grids for elastic cloud environmentsSelf Optimizing transactional data grids for elastic cloud environments
Self Optimizing transactional data grids for elastic cloud environmentsEuroCloud
 
Cloudviews eurocloud rcosta
Cloudviews eurocloud rcostaCloudviews eurocloud rcosta
Cloudviews eurocloud rcostaEuroCloud
 
Cloud views2010 google docs privacy
Cloud views2010   google docs privacyCloud views2010   google docs privacy
Cloud views2010 google docs privacyEuroCloud
 
Cil 2010 cloud comp1.0
Cil 2010 cloud comp1.0Cil 2010 cloud comp1.0
Cil 2010 cloud comp1.0EuroCloud
 
CardMobili @ CloudViews2010
CardMobili @ CloudViews2010CardMobili @ CloudViews2010
CardMobili @ CloudViews2010EuroCloud
 
Hive solutions cloudviews 2010 presentation
Hive solutions cloudviews 2010 presentationHive solutions cloudviews 2010 presentation
Hive solutions cloudviews 2010 presentationEuroCloud
 
Closetask 10 mins en
Closetask 10 mins enClosetask 10 mins en
Closetask 10 mins enEuroCloud
 
Apresentacao produtiv cloud views
Apresentacao   produtiv cloud viewsApresentacao   produtiv cloud views
Apresentacao produtiv cloud viewsEuroCloud
 
Apresentação novastic mp
Apresentação novastic mpApresentação novastic mp
Apresentação novastic mpEuroCloud
 
Ap4 construction platform_presentation_cloud_views_2010
Ap4 construction platform_presentation_cloud_views_2010Ap4 construction platform_presentation_cloud_views_2010
Ap4 construction platform_presentation_cloud_views_2010EuroCloud
 
2010.05.21 invicta angels cloud views.callforbusiness
2010.05.21 invicta angels cloud views.callforbusiness2010.05.21 invicta angels cloud views.callforbusiness
2010.05.21 invicta angels cloud views.callforbusinessEuroCloud
 
Luis lima v3
Luis lima v3Luis lima v3
Luis lima v3EuroCloud
 
Fx apresentacao evento cloudview 2010
Fx apresentacao evento cloudview 2010Fx apresentacao evento cloudview 2010
Fx apresentacao evento cloudview 2010EuroCloud
 

More from EuroCloud (20)

Cities in the Cloud
Cities in the CloudCities in the Cloud
Cities in the Cloud
 
Building an Outsourcing Ecosystem for Science
Building an Outsourcing Ecosystem for ScienceBuilding an Outsourcing Ecosystem for Science
Building an Outsourcing Ecosystem for Science
 
Evaluation of Virtual Clusters Performance on a Cloud Computing Infrastructure
Evaluation of Virtual Clusters Performance on a Cloud Computing InfrastructureEvaluation of Virtual Clusters Performance on a Cloud Computing Infrastructure
Evaluation of Virtual Clusters Performance on a Cloud Computing Infrastructure
 
Self Optimizing transactional data grids for elastic cloud environments
Self Optimizing transactional data grids for elastic cloud environmentsSelf Optimizing transactional data grids for elastic cloud environments
Self Optimizing transactional data grids for elastic cloud environments
 
Cloudviews eurocloud rcosta
Cloudviews eurocloud rcostaCloudviews eurocloud rcosta
Cloudviews eurocloud rcosta
 
Cloud views2010 google docs privacy
Cloud views2010   google docs privacyCloud views2010   google docs privacy
Cloud views2010 google docs privacy
 
Cil 2010 cloud comp1.0
Cil 2010 cloud comp1.0Cil 2010 cloud comp1.0
Cil 2010 cloud comp1.0
 
CardMobili @ CloudViews2010
CardMobili @ CloudViews2010CardMobili @ CloudViews2010
CardMobili @ CloudViews2010
 
Muchbeta
MuchbetaMuchbeta
Muchbeta
 
Hive solutions cloudviews 2010 presentation
Hive solutions cloudviews 2010 presentationHive solutions cloudviews 2010 presentation
Hive solutions cloudviews 2010 presentation
 
Closetask 10 mins en
Closetask 10 mins enClosetask 10 mins en
Closetask 10 mins en
 
Cardmobili
CardmobiliCardmobili
Cardmobili
 
Apresentacao produtiv cloud views
Apresentacao   produtiv cloud viewsApresentacao   produtiv cloud views
Apresentacao produtiv cloud views
 
Apresentação novastic mp
Apresentação novastic mpApresentação novastic mp
Apresentação novastic mp
 
Ap4 construction platform_presentation_cloud_views_2010
Ap4 construction platform_presentation_cloud_views_2010Ap4 construction platform_presentation_cloud_views_2010
Ap4 construction platform_presentation_cloud_views_2010
 
2010.05.21 invicta angels cloud views.callforbusiness
2010.05.21 invicta angels cloud views.callforbusiness2010.05.21 invicta angels cloud views.callforbusiness
2010.05.21 invicta angels cloud views.callforbusiness
 
Jorge gomes
Jorge gomesJorge gomes
Jorge gomes
 
Luis lima v3
Luis lima v3Luis lima v3
Luis lima v3
 
Fx apresentacao evento cloudview 2010
Fx apresentacao evento cloudview 2010Fx apresentacao evento cloudview 2010
Fx apresentacao evento cloudview 2010
 
Jorge gomes
Jorge gomesJorge gomes
Jorge gomes
 

Recently uploaded

Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security ObservabilityGlenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security Observabilityitnewsafrica
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...Wes McKinney
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Mark Goldstein
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Farhan Tariq
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfIngrid Airi González
 
React Native vs Ionic - The Best Mobile App Framework
React Native vs Ionic - The Best Mobile App FrameworkReact Native vs Ionic - The Best Mobile App Framework
React Native vs Ionic - The Best Mobile App FrameworkPixlogix Infotech
 
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxGenerative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxfnnc6jmgwh
 
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Alkin Tezuysal
 
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentEmixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentPim van der Noll
 
Top 10 Hubspot Development Companies in 2024
Top 10 Hubspot Development Companies in 2024Top 10 Hubspot Development Companies in 2024
Top 10 Hubspot Development Companies in 2024TopCSSGallery
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesKari Kakkonen
 
A Framework for Development in the AI Age
A Framework for Development in the AI AgeA Framework for Development in the AI Age
A Framework for Development in the AI AgeCprime
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxLoriGlavin3
 
Zeshan Sattar- Assessing the skill requirements and industry expectations for...
Zeshan Sattar- Assessing the skill requirements and industry expectations for...Zeshan Sattar- Assessing the skill requirements and industry expectations for...
Zeshan Sattar- Assessing the skill requirements and industry expectations for...itnewsafrica
 

Recently uploaded (20)

Time Series Foundation Models - current state and future directions

Lxcloud

  • 1. PES CERN's Cloud Computing infrastructure CERN's Cloud Computing Infrastructure Tony Cass, Sebastien Goasguen, Belmiro Moreira, Ewan Roche, Ulrich Schwickerath, Romain Wartel Cloudview conference, Porto, 2010 See also related presentations: HEPIX spring and autumn meeting 2009, 2010 Virtualization vision, Grid Deployment Board (GDB) 9/9/2009 Batch virtualization at CERN, EGEE09 conference, Barcelona CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it
  • 2. PES Outline and disclaimer An introduction to CERN Why virtualization and cloud computing ? Virtualization of batch resources at CERN Building blocks and current status Image management systems: ISF and ONE Status of the project and first numbers Disclaimer: We are still in the testing and evaluation phase. No final decision has been taken yet on what we are going to use in the future. All given numbers and figures are preliminary CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 2
  • 3. PES Introduction to CERN European Organization for Nuclear Research The world’s largest particle physics laboratory Located on Swiss/French border Funded/staffed by 20 member states in 1954 With many contributors in the USA Birth place of World Wide Web Made popular by the movie “Angels and Demons” Flag ship accelerator LHC http://www.cern.ch CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 3
  • 4. PES Introduction to CERN: LHC and the experiments TOTEM LHCb LHCf Alice Circumference of LHC: 26 659 m Magnets : 9300 Temperature: -271.3°C (1.9 K) Cooling: ~60T liquid He Max. Beam energy: 7TeV Current beam energy: 3.5TeV ATLAS CMS CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 4
  • 5. PES Introduction to CERN Data: Signal/Noise ratio 10^-9. Data volume: high rate * large number of channels * 4 experiments → 15 PetaBytes of new data each year. Compute power: event complexity * number of events * thousands of users → 100 k CPUs (cores). Worldwide analysis & funding: computing funded locally in major regions & countries; efficient analysis everywhere → GRID technology. CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 5
  • 6. PES LCG computing GRID Required computing capacity: ~100 000 processors. Number of sites: T0: 1 (CERN), 20%; T1: 11 around the world; T2: ~160. http://lcg.web.cern.ch/lcg/public/ CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 6
  • 7. PES The CERN Computer Center Disk and tape: 1500 disk servers, 5 PB disk space, 16 PB tape storage. Computing facilities: >20 000 CPU cores (batch only), up to ~10 000 concurrent jobs, job throughput ~200 000/day. (http://it-dep.web.cern.ch/it-dep/communications/it_facts__figures.htm) CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 7
  • 8. PES Why virtualization and cloud computing ? Service consolidation: improve resource usage by consolidating mostly idle machines onto a few large hypervisors; decouple the hardware life cycle from the applications running on the box; ease management by supporting live migration. Virtualization of batch resources: decouple jobs and physical resources; ease management of the batch farm resources; enable the computer center for new computing models. This presentation is about virtualization of batch resources only. CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 8
  • 9. PES Batch virtualization CERN batch farm lxbatch: ~3000 physical hosts ~20000 CPU cores >70 queues Type 1: Run my jobs in your VM Type 2: Run my jobs in my VM CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 9
  • 10. PES Towards cloud computing Type 3: Give me my infrastructure, i.e. a VM or a batch of VMs CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 10
  • 11. PES Philosophy (SLC = Scientific Linux CERN) [diagram] Near future: the batch farm mixes physical SLC4 and SLC5 worker nodes with virtual SLC4 and SLC5 worker nodes hosted on a hypervisor cluster. (Far) future: an internal cloud hosting batch, T0, development and other/cloud applications. CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 11
  • 12. PES Visions beyond the current plan Reusing/sharing images between different sites (phase 2): HEPIX working group founded in autumn 2009 to define rules and boundary conditions (https://www.hepix.org/). Experiment-specific images (phase 3): use of images which are customized for specific experiments. Use of resources in a cloud-like operation mode (phase 4). Images directly join experiment-controlled scheduling systems (phase 5): controlled by the experiment, spread across sites. CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 12
  • 13. PES Virtual batch: basic ideas Virtual batch worker nodes: clones of real worker nodes with the same setup; mixed with physical resources; dynamically join the batch farm as normal worker nodes; limited lifetime: stop accepting jobs after 24h, destroyed when empty; only one user job per VM at a time. Note: the limited lifetime allows for a fully automated system which dynamically adapts to the current needs and automatically deploys intrusive updates. CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 13
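The retirement logic implied by these bullets could look like the following minimal sketch, assuming the standard LSF command-line tools (badmin, bjobs) and a VM that is allowed to shut itself down. The slide does not show the actual mechanism used at CERN, so the commands and host handling here are purely illustrative.

```python
#!/usr/bin/env python
# Hedged sketch of a VM worker node's limited lifetime: accept jobs for 24h,
# then close the host in LSF, let the last running job drain, and shut down
# so the empty VM can be destroyed. Not CERN's actual implementation.
import socket
import subprocess
import time

LIFETIME = 24 * 3600          # stop accepting jobs after 24 hours
POLL_INTERVAL = 300           # re-check for running jobs every 5 minutes
HOST = socket.gethostname()

def running_jobs(host):
    """Count the jobs LSF reports as running on this host."""
    out = subprocess.run(["bjobs", "-u", "all", "-r", "-m", host],
                         capture_output=True, text=True)
    # bjobs prints a header line; job lines start with a numeric job ID
    return sum(1 for line in out.stdout.splitlines() if line[:1].isdigit())

def main():
    time.sleep(LIFETIME)                                  # accept jobs for 24h
    subprocess.run(["badmin", "hclose", "-C", "VM lifetime expired", HOST])
    while running_jobs(HOST) > 0:                         # drain: one job per VM
        time.sleep(POLL_INTERVAL)
    subprocess.run(["shutdown", "-h", "now"])             # empty VM goes away

if __name__ == "__main__":
    main()
```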
  • 14. PES Virtual batch: basic ideas, technical Images: staged on the hypervisors; master images, with instances created as LVM snapshots; start with only a few different flavors. Image creation: derived from a centrally managed "golden node", regularly updated and redistributed to get updates "in". CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 14
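As an illustration of the "instances use LVM snapshots" point, the sketch below carves out an instance disk as a copy-on-write snapshot of the master image staged on the hypervisor, using the plain LVM2 CLI. The volume group and image names are hypothetical placeholders, not taken from the slides.

```python
# Minimal sketch, assuming the master image is already a logical volume on
# the hypervisor. Names (vg_vm, slc5_batch_master) are hypothetical.
import subprocess

def instantiate_vm_disk(instance, master="slc5_batch_master", vg="vg_vm",
                        cow_size="10G"):
    """Create a copy-on-write LVM snapshot of the master image for one VM."""
    snap = "%s_%s" % (master, instance)
    subprocess.check_call([
        "lvcreate", "--snapshot",
        "--size", cow_size,            # space reserved for copy-on-write blocks
        "--name", snap,
        "/dev/%s/%s" % (vg, master),
    ])
    return "/dev/%s/%s" % (vg, snap)

# Example: disk for a new virtual batch worker node
# disk = instantiate_vm_disk("lxbvm0042")
```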
  • 15. PES Virtual batch: basic ideas Image distribution: the only shared file system available is AFS; prefer peer-to-peer methods (more on that later): SCP wave, rtorrent. CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 15
  • 16. PES Virtual batch: basic ideas VM placement and management system: use existing solutions for high-level VM management; testing both a free and a commercial solution: OpenNebula (ONE) and Platform's Infrastructure Sharing Facility (ISF). CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 16
  • 17. PES Batch virtualization: architecture [diagram; grey: physical resources, colored: different VMs] Components: centrally managed golden nodes, job submission (CE/interactive), VM kiosk, batch system management, hypervisors / HW resources, VM worker nodes with limited lifetime, and the VM management system. CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 17
  • 18. PES Status of building blocks (test system) Initial deployment: OK for the submission and batch system, the hypervisor cluster, the VM kiosk and image distribution, and the VM management system. Central management: OK for the submission and batch system and for the hypervisor cluster; mostly implemented for the VM kiosk and image distribution; ISF OK, ONE missing for the VM management system. Monitoring and alarming: OK for the submission and batch system; switched off for tests on the hypervisor cluster; missing for the VM kiosk and image distribution and for the VM management system. CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 18
  • 19. PES Image distribution: SCP wave versus rtorrent [preliminary plot; BT = BitTorrent; slow nodes under investigation] CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 19
  • 20. PES VM placement and management OpenNebula (ONE): basic model: a single ONE master; communication with the hypervisors via ssh only; (currently) no special tools on the hypervisors. Some scalability issues at the beginning (with ~50 VMs); issues are addressed as they turn up, in close collaboration with the developers, with ideas for improvements. Managed to start more than 7,500 VMs. CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 20
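A hedged sketch of what "injecting" VM requests into the single ONE master could look like from a client host, using OpenNebula's standard onevm command-line interface. The template path and the number of instances are assumptions for illustration, not values from the slides.

```python
# Sketch only: submit a burst of VM requests to the ONE master and report
# how many VMs it currently knows about. Template path is hypothetical.
import subprocess

TEMPLATE = "/etc/one/templates/slc5_batch_worker.tmpl"   # hypothetical path

def inject(n_vms):
    """Ask the ONE master to instantiate n_vms virtual worker nodes."""
    for _ in range(n_vms):
        subprocess.check_call(["onevm", "create", TEMPLATE])

def known_vms():
    """Return the number of VMs listed by the ONE master."""
    out = subprocess.run(["onevm", "list"], capture_output=True, text=True)
    return max(0, len(out.stdout.splitlines()) - 1)       # minus header line

if __name__ == "__main__":
    inject(50)                      # e.g. one burst of 50 requests
    print("VMs known to ONE:", known_vms())
```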
  • 21. PES Scalability tests: some first numbers "One shot" test with OpenNebula: inject virtual machine requests and let them die; record the number of alive machines seen by LSF every 30s. [plot; time axis in units of 1h05min] CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 21
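The measurement loop described here (record the number of alive machines seen by LSF every 30s) could be as simple as the following sketch; the LSF host group name is a hypothetical placeholder and bhosts is the standard LSF command for listing batch host status.

```python
# Sketch of the sampling loop behind the "one shot" test: every 30 seconds,
# append a timestamp and the number of virtual worker nodes LSF still sees.
import subprocess
import time

HOST_GROUP = "lxbatch_vm"     # hypothetical LSF host group holding the VMs
INTERVAL = 30                 # sampling period in seconds

def alive_vms():
    out = subprocess.run(["bhosts", HOST_GROUP], capture_output=True, text=True)
    count = 0
    for line in out.stdout.splitlines()[1:]:       # skip the header line
        cols = line.split()
        # STATUS is the second column; "ok" and "closed" hosts are still alive
        if len(cols) > 1 and cols[1] in ("ok", "closed"):
            count += 1
    return count

if __name__ == "__main__":
    with open("alive_vms.log", "a") as log:
        while True:
            log.write("%d %d\n" % (int(time.time()), alive_vms()))
            log.flush()
            time.sleep(INTERVAL)
```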
  • 22. PES VM placement and management Platform's Infrastructure Sharing Facility (ISF): one active ISF master node, plus fail-over candidates; hypervisors run an agent which talks to Xen; needed to be packaged for CERN. Resource management layer similar to LSF. Scalability expected to be good but needs verification: tested with 2 racks (96 machines) so far, ramping up; filled with ~2,000 VMs so far (the current maximum). See: http://www.platform.com CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 22
  • 23. PES Screen shots: ISF VM status [screenshot; 1 of 2 racks available, one rack enabled for demonstration purposes] CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 23
  • 24. PES Screen shots: ISF CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 24
  • 25. PES Summary Virtualization efforts at CERN are proceeding; still some work to be done. Main challenges: scalability of the provisioning systems and of the batch system; reliability and speed of image distribution; general readiness for production (hardening); seamless integration into the existing infrastructure. No decision has been taken yet on which provisioning system will be used. CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 25
  • 26. PES Outlook What's next ? Continue testing of ONE and ISF in parallel Solve remaining (known) issues Release first VMs for testing by our users soon CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 26
  • 27. PES Questions ? CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 27
  • 28. PES Philosophy [diagram] VM provisioning system(s) for high-level VM management (OpenNebula, Platform's Infrastructure Sharing Facility (ISF), pVMO) on top of the hypervisor cluster (physical resources). CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 28
  • 29. PES Details: Some additional explanations ... "Golden node": a centrally managed (i.e. Quattor-controlled) standard worker node which is a virtual machine, does not accept jobs, and receives regular updates; its purpose is the creation of VM images. "Virtual machine worker node": a virtual machine derived from a golden node; not updated during its lifetime; dynamically adds itself to the batch farm; accepts jobs for only 24h; runs only one user job at a time; destroys itself when empty. CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 29
  • 30. PES VM kiosk and image distribution Boundary conditions at CERN: the only available shared file system is AFS; network infrastructure with a single 1GE connection; no dedicated fast network for transfers that could be used (e.g. 10GE, IB or similar). Tested options: SCP wave, developed at Clemson University, based on simple scp; rtorrent, with infrastructure developed at CERN, where each node starts serving blocks it already hosts. CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 30
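A generic sketch of the "SCP wave" idea: starting from one seed node that holds the image, every node that already has it pushes a copy to one node that does not, so the number of sources roughly doubles each round. This illustrates the principle only, not the Clemson tool itself; host names and the image path are made up, and it assumes passwordless ssh/scp between the hosts.

```python
# Sketch of an scp "wave": O(log N) rounds instead of N sequential copies.
import subprocess

IMAGE = "/var/lib/vmimages/slc5_batch.img.gz"   # hypothetical image path

def scp_wave(seed, targets):
    """Distribute IMAGE from the seed node to all target nodes."""
    have = [seed]                 # nodes that already hold the image
    todo = list(targets)          # nodes still waiting for it
    while todo:
        batch = todo[:len(have)]  # at most one copy per source node this round
        todo = todo[len(have):]
        procs = [subprocess.Popen(
                     ["ssh", src, "scp", IMAGE, "%s:%s" % (dst, IMAGE)])
                 for src, dst in zip(have, batch)]
        for p in procs:
            p.wait()              # wait for the whole wave before the next round
        have.extend(batch)        # the new holders become sources as well

# Example: scp_wave("lxkiosk01", ["lxhv%03d" % i for i in range(1, 453)])
```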
  • 31. PES Details: VM kiosk and image distribution Local image distribution model at CERN: virtual machines are instantiated as LVM snapshots of a base image; this process is very fast. Replacing a production image: approved images are moved to a central image repository (the "kiosk"); hypervisors check regularly for new images on the kiosk node; the new image is transferred to a temporary area on the hypervisor; when the transfer is finished, a new LV is created; the new image is unpacked into the new LV; the current production image is renamed (via lvrename); the new image is renamed to become the production image. Note: this process may need a more sophisticated locking strategy. CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 31
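The replacement procedure listed above maps naturally onto a small script on each hypervisor. The sketch below follows the same steps (create a new LV, unpack the image, swap via lvrename) under a naive file lock, since the slide notes that a more sophisticated locking strategy may be needed; the volume group, LV names, sizes and paths are hypothetical placeholders.

```python
# Hedged sketch of the production-image swap on a hypervisor.
import fcntl
import subprocess

VG = "vg_vm"                                   # hypothetical volume group
PROD_LV = "slc5_batch_master"                  # current production image
NEW_LV = PROD_LV + "_new"
OLD_LV = PROD_LV + "_old"
TMP_IMAGE = "/var/tmp/slc5_batch.img.gz"       # already fetched from the kiosk

def swap_in_new_image(size="20G"):
    with open("/var/lock/vmimage.lock", "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)                      # naive locking
        # create the new logical volume for the incoming image
        subprocess.check_call(["lvcreate", "-L", size, "-n", NEW_LV, VG])
        # unpack the compressed image straight onto the new LV
        subprocess.check_call(
            "gunzip -c %s | dd of=/dev/%s/%s bs=4M" % (TMP_IMAGE, VG, NEW_LV),
            shell=True)
        # rename the current production image away, then promote the new one
        subprocess.check_call(["lvrename", VG, PROD_LV, OLD_LV])
        subprocess.check_call(["lvrename", VG, NEW_LV, PROD_LV])
        # the old master can be removed once no snapshot depends on it:
        # subprocess.check_call(["lvremove", "-f", "/dev/%s/%s" % (VG, OLD_LV)])

if __name__ == "__main__":
    swap_in_new_image()
```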
  • 32. PES Image distribution with torrent: transfer speed [preliminary plot: 7 GB compressed file, 452 target nodes; 90% finished after 25 min; slow nodes under investigation] CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 32
  • 33. PES Image distribution: total distribution speed [preliminary plot: 7 GB compressed file, 452 target nodes; all done after 1.5 h] → Unpacking still needs some tuning! CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it CERNs Cloud computing infrastructure - a status report - 33