OpenStack at CERN: A 5 year perspective
Tim Bell
tim.bell@cern.ch
@noggin143
OpenStack Days Budapest 2018
About Me - @noggin143
• Responsible for Compute and Monitoring at CERN
• Elected member of the OpenStack Foundation board
• Member of the OpenStack user committee from 2013-2015
06/06/2018 OpenStack at CERN 3
CERN: a worldwide collaboration
CERN’s primary mission:
SCIENCE
Fundamental research on particle physics,
pushing the boundaries of knowledge and
technology
CERN: the world’s largest particle physics laboratory
Image credit: CERN
Evolution of the Universe
Test the
Standard
Model?
What’s matter
made of?
What holds it
together?
Anti-matter?
(Gravity?)
The Large Hadron Collider: LHC
27 km ring of 1232 dipole magnets, each 15 metres long and weighing 35 t
Image credit: CERN
Image credit: CERN
LHC: World’s Largest Cryogenic System (1.9 K)
Colder than outer space (120 t of helium)
LHC: Highest Vacuum
104 km of pipes at 10⁻¹¹ bar (comparable to the moon)
Image credit: CERN
Image credit: CERN
ATLAS, CMS, ALICE and LHCb
Heavier than the Eiffel Tower
Image credit: CERN
40 million
pictures
per second
1PB/s
Image credit: CERN
Data Flow to Storage and Processing
ALICE: 4GB/s
ATLAS: 1GB/s
CMS: 600MB/s
LHCB: 750MB/s
(Run 2 rates into the CERN Data Centre)
Image credit: CERN
CERN Data Centre: Primary Copy of LHC Data
Data Centre on Google Street View
90k disks
15k servers
> 200 PB
on TAPES
About WLCG:
• A community of 10,000 physicists
• ~250,000 jobs running concurrently
• 600,000 processing cores
• 700 PB storage available worldwide
• 20-40 Gbit/s links connect CERN to the Tier-1s
Tier-0 (CERN)
• Initial data reconstruction
• Data recording & archiving
• Data distribution to rest of world
Tier-1s (14 centres worldwide)
• Permanent storage
• Re-processing
• Monte Carlo Simulation
• End-user analysis
Tier-2s (>150 centres worldwide)
• Monte Carlo Simulation
• End-user analysis
WLCG: LHC Computing Grid
Image credit: CERN
170 sites
WORLDWIDE
> 10000
users
CERN in 2017
• 230 PB on tape
• 550 million files
• 55 PB produced in 2017
Cloud
CERN Data Centre: Private OpenStack Cloud
More than 300 000 cores
More than 500 000 physics jobs per day
Infrastructure in 2011
• Data centre managed by a home-grown toolset (Quattor, Lemon, …)
• Initial development funded by EU projects
• Development environment based on CVS
• ~100K lines of Perl
• At the limit for power and cooling in Geneva
• No simple expansion options
Wigner Data Centre
Started project in 2011 with
inauguration in June 2013
Getting resources in 2011
OpenStack London July 2011
2011 - First OpenStack summit talk
https://www.slideshare.net/noggin143/cern-user-story
The Agile Infrastructure Project
2012, a turning point for CERN IT:
- LHC Computing and data requirements were
increasing … Moore’s law would help, but not enough
- EU funded projects for fabric management
toolset ended
- Staff numbers fixed, but resources needed to grow
- LS1 (2013) ahead, next window only in 2019!
- Other deployments had surpassed CERN’s in scale
Three core areas:
- Centralized Monitoring
- Config’ management
- IaaS based on OpenStack
“All servers shall be virtual!”
CERN Tool Chain
And block storage… February 2013
Sharing with Central Europe – May 2013
https://www.slideshare.net/noggin143/20130529-openstack-ceedayv6
Production in Summer 2013
CERN Ceph Clusters                      Size     Version
OpenStack Cinder/Glance Production      5.5 PB   jewel
Satellite data centre (1000 km away)    0.4 PB   luminous
CephFS (HPC + Manila) Production        0.8 PB   luminous
Manila testing cluster                  0.4 PB   luminous
Hyperconverged HPC                      0.4 PB   luminous
CASTOR/XRootD Production                4.2 PB   luminous
CERN Tape Archive                       0.8 PB   luminous
S3 + SWIFT Production                   0.9 PB   luminous
(+5 PB in the pipeline)
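Day-to-day, the state of clusters like these can be inspected with the standard Ceph CLI; a minimal sketch using generic Ceph commands, not CERN-specific tooling:

```shell
# Generic health and capacity checks, run on a monitor or admin node
ceph status        # overall health, monitor quorum, OSD up/in counts
ceph df            # raw capacity and per-pool usage
ceph versions      # confirm all daemons run the intended release (e.g. luminous)
```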
Bigbang Scale Tests
• Bigbang scale tests mutually benefit
CERN & Ceph project
• Bigbang I: 30PB, 7200 OSDs, Ceph
hammer. Several osdmap limitations
• Bigbang II: Similar size, Ceph jewel.
Scalability limited by OSD/MON
messaging. Motivated ceph-mgr
• Bigbang III: 65PB, 10800 OSDs
https://ceph.com/community/new-luminous-scalability/
OpenStack Magnum
An OpenStack API Service that allows creation of
container clusters
● Use your keystone credentials
● You choose your cluster type
● Multi-Tenancy
● Quickly create new clusters with advanced
features such as multi-master
OpenStack Magnum
$ openstack coe cluster create --cluster-template kubernetes --node-count 100 … mycluster
$ openstack coe cluster list
+------+----------------+------------+--------------+-----------------+
| uuid | name | node_count | master_count | status |
+------+----------------+------------+--------------+-----------------+
| .... | mycluster | 100 | 1 | CREATE_COMPLETE |
+------+----------------+------------+--------------+-----------------+
$ $(magnum cluster-config mycluster --dir mycluster)
$ kubectl get pod
$ openstack coe cluster update mycluster replace node_count=200
Single command cluster creation
Why Bare-Metal Provisioning?
• VMs not sensible/suitable for all of our use cases
- Storage and database nodes, HPC clusters, bootstrapping,
critical network equipment or specialised network setups,
precise/repeatable benchmarking for s/w frameworks, …
• Complete our service offerings
- Physical nodes (in addition to VMs and containers)
- OpenStack UI as the single pane of glass
• Simplify hardware provisioning workflows
- For users: openstack server create/delete
- For procurement & h/w provisioning team: initial on-boarding, server re-assignments
• Consolidate accounting & bookkeeping
- Resource accounting input will come from fewer sources
- Machine re-assignments will be easier to track
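The unified workflow above can be sketched with the standard CLI; the flavor and image names below are hypothetical, but the point stands that the same `openstack server create`/`delete` calls cover physical nodes (via Ironic) as well as VMs:

```shell
# Same user-facing call as for a VM; a bare-metal flavor (hypothetical
# name) routes the request to Ironic-managed physical nodes
openstack server create \
  --flavor baremetal-general \
  --image CC7-baremetal \
  --key-name mykey \
  my-physical-node

# Deletion is symmetric for the user, returning the node to the pool
openstack server delete my-physical-node
```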
Compute Intensive Workloads on VMs
• Up to 20% loss on very large VMs!
• “Tuning”: KSM*, EPT**, pinning, … brought it down to ~10%
• Compare with Hyper-V: no issue
• NUMA-aware scheduling & node pinning … <3%!
• Cross over : patches from Telecom
(*) Kernel Shared Memory
(**) Extended Page Tables
VM (count x cores)   Before   After
4x8                  7.8%
2x16                 16%
1x24                 20%      5%
1x32                 20%      3%
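The NUMA-aware tuning above maps onto standard Nova flavor extra specs; a sketch (the flavor name is hypothetical):

```shell
# Pin guest vCPUs to dedicated host cores and confine the guest to a
# single NUMA node, avoiding cross-node memory access penalties
openstack flavor set big-vm-numa \
  --property hw:cpu_policy=dedicated \
  --property hw:numa_nodes=1
```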
A new use case: Containers on Bare-Metal
• OpenStack manages both containers and bare
metal, so put them together
• General service offer: managed clusters
- Users get only K8s credentials
- Cloud team manages the cluster and the underlying infra
• Batch farm runs in VMs as well
- Evaluating federated kubernetes for hybrid cloud integration
- 7 clouds federated demonstrated at Kubecon
- OpenStack and non-OpenStack transparently managed
Integration: seamless!
(based on a specific template)
Monitoring (metrics/logs)?
 Runs as pods in the cluster
 Logs: fluentd + Elasticsearch
 Metrics: cAdvisor + InfluxDB
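On a managed cluster, a user holding only the K8s credentials can verify that the monitoring components run as pods; a sketch (namespace and label selector are illustrative and depend on the cluster template):

```shell
# Monitoring agents deployed by the cluster template run as ordinary pods
kubectl get pods -n kube-system

# Tail the log collector of one pod, selected by a (hypothetical) label
kubectl logs -n kube-system -l k8s-app=fluentd --tail=20
```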
• h/w purchases: formal procedure compliant with public procurement rules
- Market survey identifies potential bidders
- Tender spec is sent to ask for offers
- Larger deliveries 1-2 times / year
• “Burn-in” before acceptance
- Compliance with technical spec (e.g. performance)
- Find failed components (e.g. broken RAM)
- Find systematic errors (e.g. bad firmware)
- Provoke early failures through stress testing
Whole process can take weeks!
Hardware Burn-in in the CERN Data Centre (1)
“bathtub curve”
Hardware Burn-in in the CERN Data Centre (2)
• Initial checks: Serial Asset Tag and BIOS settings
- Purchase order ID and unique serial no. to be set in the BMC (node name!)
• “Burn-in” tests
- CPU: burnK7, burnP6, burnMMX (cooling)
- RAM: memtest, Disk: badblocks
- Network: iperf(3) between pairs of nodes
- automatic node pairing
- Benchmarking: HEPSpec06 (& fio)
- derivative of SPEC06
- we buy total compute capacity (not newest processors)
$ ipmitool fru print 0 | tail -2
Product Serial : 245410-1
Product Asset Tag : CD5792984
$ openstack baremetal node show CD5792984-245410-1
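The pairwise network burn-in can be sketched with plain iperf3 (hostnames are hypothetical; the automatic pairing itself is handled by CERN's own tooling):

```shell
# On the first node of a pair: run an iperf3 server in the background
iperf3 --server --daemon

# On its partner: measure throughput for 60 s, JSON output for parsing
iperf3 --client nodeA.example.org --time 60 --json > iperf-nodeA.json
```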
“Double peak” structure due
to slower hardware threads
OpenAccess paper
[Workflow diagram (Foreman): Burn-in (recently added) → Allocation → Re-allocation]
Network Migration
• Phase 1: Nova Network + Linux Bridge (deprecated, but still used in 2018)
• Phase 2: Neutron + Linux Bridge (already running)
• Phase 3: SDN with Tungsten Fabric (testing; new region coming in 2018)
Spectre / Meltdown
 In January, a security vulnerability was
disclosed, requiring a new kernel everywhere
 Campaign over two weeks from 15th
January
 7 reboot days, 7 tidy up days
 By availability zone
 Benefits
 Automation now to reboot the cloud if
needed - 33,000 VMs on 9,000
hypervisors
 Latest QEMU and RBD user code on all
VMs
 Downside
 Discovered Kernel bug in XFS which may
mean we have to do it again soon
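The reboot automation above can be sketched with standard Nova operations; the loop below is illustrative rather than CERN's actual tooling, is condensed to a single pass, and omits the VM draining a real campaign would do per availability zone:

```shell
# Disable scheduling on each hypervisor, reboot into the patched
# kernel, then re-enable the compute service
for hv in $(openstack hypervisor list -f value -c "Hypervisor Hostname"); do
  openstack compute service set --disable \
    --disable-reason "kernel patching" "$hv" nova-compute
  # ... reboot the host and wait for it to come back ...
  openstack compute service set --enable "$hv" nova-compute
done
```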
Community Experience
 Open source collaboration sets model for in-house
teams
 External recognition by the community is highly
rewarding for contributors
 Reviewing and being reviewed is a constant
learning experience
 Valuable experience for staff in the job market
 Working groups, like the Scientific and Large
Deployment teams, discuss wide range of topics
 Effective knowledge transfer mechanisms
consistent with the CERN mission
 110 outreach talks since 2011
 Dojos at CERN bring good attendance
 Ceph, CentOS, Elastic, OpenStack CH, …
 Increased complexity due to much higher pile-up and
higher trigger rates will bring several challenges to
reconstruction algorithms
 CMS had to cope with monster pile-up:
the 8b4e bunch structure gave a pile-up of
~60 events/crossing (for a design value of ~20 events/crossing)
CMS: event from 2017 with 78 reconstructed vertices
ATLAS: simulation for HL-LHC
with 200 vertices
HL-LHC: More collisions!
[LHC schedule: First run (…2009-2012), LS1 (2013-2014), Second run (2015-2018), LS2, Third run, LS3, HL-LHC Run 4]
 Significant part of the cost comes
from global operations
 Even with a technology increase of
~15%/year, we still have a big
gap if we keep trying to do things
with our current compute models
 Raw data volume increases significantly
for High Luminosity LHC (from 2026)
Commercial Clouds
Development areas going forward
 Spot Market
 Cells V2
 Neutron scaling
 Magnum rolling upgrades
 Block Storage Performance
 Federated Kubernetes
 Collaborations with Industry and SKA
Summary
 OpenStack has provided flexible infrastructure at
CERN since 2013
 The open infrastructure toolchain has been stable
at scale
 Clouds are part but not all of the solution
 Open source collaborations have been fruitful for
CERN, industry and the communities
 Further efforts will be needed to ensure that
physics is not limited by the computing resources
available
Thanks for all your help … Some links
 CERN OpenStack blog at
http://openstack-in-production.blogspot.com
 Recent CERN OpenStack talks from the
Vancouver summit at
https://www.openstack.org/videos/search?search=cern
 CERN Tools at https://github.com/cernops
Backup Material
Hardware Evolution
 Looking at new hardware
platforms to reduce the
upcoming resource gap
 Explorations have been made
in low cost and low power
ARM processors
 Interesting R&Ds in high
performance hardware
 GPUs for deep learning
network training and fast
simulation
 FPGAs for neural network
inference and data
transformations
Significant algorithm changes needed
to benefit from the potential of this hardware

Recently uploaded

Get Government Grants and Assistance Program
Get Government Grants and Assistance ProgramGet Government Grants and Assistance Program
Get Government Grants and Assistance Program
Get Government Grants
 
2024: The FAR - Federal Acquisition Regulations, Part 36
2024: The FAR - Federal Acquisition Regulations, Part 362024: The FAR - Federal Acquisition Regulations, Part 36
2024: The FAR - Federal Acquisition Regulations, Part 36
JSchaus & Associates
 
如何办理(uoit毕业证书)加拿大安大略理工大学毕业证文凭证书录取通知原版一模一样
如何办理(uoit毕业证书)加拿大安大略理工大学毕业证文凭证书录取通知原版一模一样如何办理(uoit毕业证书)加拿大安大略理工大学毕业证文凭证书录取通知原版一模一样
如何办理(uoit毕业证书)加拿大安大略理工大学毕业证文凭证书录取通知原版一模一样
850fcj96
 
一比一原版(QUT毕业证)昆士兰科技大学毕业证成绩单
一比一原版(QUT毕业证)昆士兰科技大学毕业证成绩单一比一原版(QUT毕业证)昆士兰科技大学毕业证成绩单
一比一原版(QUT毕业证)昆士兰科技大学毕业证成绩单
ukyewh
 
PACT launching workshop presentation-Final.pdf
PACT launching workshop presentation-Final.pdfPACT launching workshop presentation-Final.pdf
PACT launching workshop presentation-Final.pdf
Mohammed325561
 
Many ways to support street children.pptx
Many ways to support street children.pptxMany ways to support street children.pptx
Many ways to support street children.pptx
SERUDS INDIA
 
一比一原版(UOW毕业证)伍伦贡大学毕业证成绩单
一比一原版(UOW毕业证)伍伦贡大学毕业证成绩单一比一原版(UOW毕业证)伍伦贡大学毕业证成绩单
一比一原版(UOW毕业证)伍伦贡大学毕业证成绩单
ehbuaw
 
PD-1602-as-amended-by-RA-9287-Anti-Illegal-Gambling-Law.pptx
PD-1602-as-amended-by-RA-9287-Anti-Illegal-Gambling-Law.pptxPD-1602-as-amended-by-RA-9287-Anti-Illegal-Gambling-Law.pptx
PD-1602-as-amended-by-RA-9287-Anti-Illegal-Gambling-Law.pptx
RIDPRO11
 
Russian anarchist and anti-war movement in the third year of full-scale war
Russian anarchist and anti-war movement in the third year of full-scale warRussian anarchist and anti-war movement in the third year of full-scale war
Russian anarchist and anti-war movement in the third year of full-scale war
Antti Rautiainen
 
PNRR MADRID GREENTECH FOR BROWN NETWORKS NETWORKS MUR_MUSA_TEBALDI.pdf
PNRR MADRID GREENTECH FOR BROWN NETWORKS NETWORKS MUR_MUSA_TEBALDI.pdfPNRR MADRID GREENTECH FOR BROWN NETWORKS NETWORKS MUR_MUSA_TEBALDI.pdf
PNRR MADRID GREENTECH FOR BROWN NETWORKS NETWORKS MUR_MUSA_TEBALDI.pdf
ClaudioTebaldi2
 
ZGB - The Role of Generative AI in Government transformation.pdf
ZGB - The Role of Generative AI in Government transformation.pdfZGB - The Role of Generative AI in Government transformation.pdf
ZGB - The Role of Generative AI in Government transformation.pdf
Saeed Al Dhaheri
 
Opinions on EVs: Metro Atlanta Speaks 2023
Opinions on EVs: Metro Atlanta Speaks 2023Opinions on EVs: Metro Atlanta Speaks 2023
Opinions on EVs: Metro Atlanta Speaks 2023
ARCResearch
 
PPT Item # 7 - BB Inspection Services Agmt
PPT Item # 7 - BB Inspection Services AgmtPPT Item # 7 - BB Inspection Services Agmt
PPT Item # 7 - BB Inspection Services Agmt
ahcitycouncil
 
PPT Item # 6 - 7001 Broadway ARB Case # 933F
PPT Item # 6 - 7001 Broadway ARB Case # 933FPPT Item # 6 - 7001 Broadway ARB Case # 933F
PPT Item # 6 - 7001 Broadway ARB Case # 933F
ahcitycouncil
 
一比一原版(ANU毕业证)澳大利亚国立大学毕业证成绩单
一比一原版(ANU毕业证)澳大利亚国立大学毕业证成绩单一比一原版(ANU毕业证)澳大利亚国立大学毕业证成绩单
一比一原版(ANU毕业证)澳大利亚国立大学毕业证成绩单
ehbuaw
 
一比一原版(WSU毕业证)西悉尼大学毕业证成绩单
一比一原版(WSU毕业证)西悉尼大学毕业证成绩单一比一原版(WSU毕业证)西悉尼大学毕业证成绩单
一比一原版(WSU毕业证)西悉尼大学毕业证成绩单
evkovas
 
NHAI_Under_Implementation_01-05-2024.pdf
NHAI_Under_Implementation_01-05-2024.pdfNHAI_Under_Implementation_01-05-2024.pdf
NHAI_Under_Implementation_01-05-2024.pdf
AjayVejendla3
 
快速制作(ocad毕业证书)加拿大安大略艺术设计学院毕业证本科学历雅思成绩单原版一模一样
快速制作(ocad毕业证书)加拿大安大略艺术设计学院毕业证本科学历雅思成绩单原版一模一样快速制作(ocad毕业证书)加拿大安大略艺术设计学院毕业证本科学历雅思成绩单原版一模一样
快速制作(ocad毕业证书)加拿大安大略艺术设计学院毕业证本科学历雅思成绩单原版一模一样
850fcj96
 
一比一原版(UQ毕业证)昆士兰大学毕业证成绩单
一比一原版(UQ毕业证)昆士兰大学毕业证成绩单一比一原版(UQ毕业证)昆士兰大学毕业证成绩单
一比一原版(UQ毕业证)昆士兰大学毕业证成绩单
ehbuaw
 
PPT Item # 8 - Tuxedo Columbine 3way Stop
PPT Item # 8 - Tuxedo Columbine 3way StopPPT Item # 8 - Tuxedo Columbine 3way Stop
PPT Item # 8 - Tuxedo Columbine 3way Stop
ahcitycouncil
 

Recently uploaded (20)

Get Government Grants and Assistance Program
Get Government Grants and Assistance ProgramGet Government Grants and Assistance Program
Get Government Grants and Assistance Program
 
2024: The FAR - Federal Acquisition Regulations, Part 36
2024: The FAR - Federal Acquisition Regulations, Part 362024: The FAR - Federal Acquisition Regulations, Part 36
2024: The FAR - Federal Acquisition Regulations, Part 36
 
如何办理(uoit毕业证书)加拿大安大略理工大学毕业证文凭证书录取通知原版一模一样
如何办理(uoit毕业证书)加拿大安大略理工大学毕业证文凭证书录取通知原版一模一样如何办理(uoit毕业证书)加拿大安大略理工大学毕业证文凭证书录取通知原版一模一样
如何办理(uoit毕业证书)加拿大安大略理工大学毕业证文凭证书录取通知原版一模一样
 
一比一原版(QUT毕业证)昆士兰科技大学毕业证成绩单
一比一原版(QUT毕业证)昆士兰科技大学毕业证成绩单一比一原版(QUT毕业证)昆士兰科技大学毕业证成绩单
一比一原版(QUT毕业证)昆士兰科技大学毕业证成绩单
 
PACT launching workshop presentation-Final.pdf
PACT launching workshop presentation-Final.pdfPACT launching workshop presentation-Final.pdf
PACT launching workshop presentation-Final.pdf
 
Many ways to support street children.pptx
Many ways to support street children.pptxMany ways to support street children.pptx
Many ways to support street children.pptx
 
一比一原版(UOW毕业证)伍伦贡大学毕业证成绩单
一比一原版(UOW毕业证)伍伦贡大学毕业证成绩单一比一原版(UOW毕业证)伍伦贡大学毕业证成绩单
一比一原版(UOW毕业证)伍伦贡大学毕业证成绩单
 
PD-1602-as-amended-by-RA-9287-Anti-Illegal-Gambling-Law.pptx
PD-1602-as-amended-by-RA-9287-Anti-Illegal-Gambling-Law.pptxPD-1602-as-amended-by-RA-9287-Anti-Illegal-Gambling-Law.pptx
PD-1602-as-amended-by-RA-9287-Anti-Illegal-Gambling-Law.pptx
 
Russian anarchist and anti-war movement in the third year of full-scale war
Russian anarchist and anti-war movement in the third year of full-scale warRussian anarchist and anti-war movement in the third year of full-scale war
Russian anarchist and anti-war movement in the third year of full-scale war
 
PNRR MADRID GREENTECH FOR BROWN NETWORKS NETWORKS MUR_MUSA_TEBALDI.pdf
PNRR MADRID GREENTECH FOR BROWN NETWORKS NETWORKS MUR_MUSA_TEBALDI.pdfPNRR MADRID GREENTECH FOR BROWN NETWORKS NETWORKS MUR_MUSA_TEBALDI.pdf
PNRR MADRID GREENTECH FOR BROWN NETWORKS NETWORKS MUR_MUSA_TEBALDI.pdf
 
ZGB - The Role of Generative AI in Government transformation.pdf
ZGB - The Role of Generative AI in Government transformation.pdfZGB - The Role of Generative AI in Government transformation.pdf
ZGB - The Role of Generative AI in Government transformation.pdf
 
Opinions on EVs: Metro Atlanta Speaks 2023
Opinions on EVs: Metro Atlanta Speaks 2023Opinions on EVs: Metro Atlanta Speaks 2023
Opinions on EVs: Metro Atlanta Speaks 2023
 
PPT Item # 7 - BB Inspection Services Agmt
PPT Item # 7 - BB Inspection Services AgmtPPT Item # 7 - BB Inspection Services Agmt
PPT Item # 7 - BB Inspection Services Agmt
 
PPT Item # 6 - 7001 Broadway ARB Case # 933F
PPT Item # 6 - 7001 Broadway ARB Case # 933FPPT Item # 6 - 7001 Broadway ARB Case # 933F
PPT Item # 6 - 7001 Broadway ARB Case # 933F
 
一比一原版(ANU毕业证)澳大利亚国立大学毕业证成绩单
一比一原版(ANU毕业证)澳大利亚国立大学毕业证成绩单一比一原版(ANU毕业证)澳大利亚国立大学毕业证成绩单
一比一原版(ANU毕业证)澳大利亚国立大学毕业证成绩单
 
一比一原版(WSU毕业证)西悉尼大学毕业证成绩单
一比一原版(WSU毕业证)西悉尼大学毕业证成绩单一比一原版(WSU毕业证)西悉尼大学毕业证成绩单
一比一原版(WSU毕业证)西悉尼大学毕业证成绩单
 
NHAI_Under_Implementation_01-05-2024.pdf
NHAI_Under_Implementation_01-05-2024.pdfNHAI_Under_Implementation_01-05-2024.pdf
NHAI_Under_Implementation_01-05-2024.pdf
 
快速制作(ocad毕业证书)加拿大安大略艺术设计学院毕业证本科学历雅思成绩单原版一模一样
快速制作(ocad毕业证书)加拿大安大略艺术设计学院毕业证本科学历雅思成绩单原版一模一样快速制作(ocad毕业证书)加拿大安大略艺术设计学院毕业证本科学历雅思成绩单原版一模一样
快速制作(ocad毕业证书)加拿大安大略艺术设计学院毕业证本科学历雅思成绩单原版一模一样
 
一比一原版(UQ毕业证)昆士兰大学毕业证成绩单
一比一原版(UQ毕业证)昆士兰大学毕业证成绩单一比一原版(UQ毕业证)昆士兰大学毕业证成绩单
一比一原版(UQ毕业证)昆士兰大学毕业证成绩单
 
PPT Item # 8 - Tuxedo Columbine 3way Stop
PPT Item # 8 - Tuxedo Columbine 3way StopPPT Item # 8 - Tuxedo Columbine 3way Stop
PPT Item # 8 - Tuxedo Columbine 3way Stop
 

OpenStack at CERN : A 5 year perspective

  • 1.
  • 2. Grappling with Massive Data Sets Gavin McCance, CERN IT Digital Energy 2018 1 May 2018 | Aberdeen 06/06/2018 OpenStack at CERN 2 OpenStack at CERN : A 5 year perspective Tim Bell tim.bell@cern.ch @noggin143 OpenStack Days Budapest 2018
  • 3. About Me - @noggin143 • Responsible for Compute and Monitoring at CERN • Elected member of the OpenStack Foundation board • Member of the OpenStack user committee from 2013-2015
  • 4. CERN: a worldwide collaboration. CERN's primary mission is SCIENCE: fundamental research on particle physics, pushing the boundaries of knowledge and technology.
  • 5. CERN: the world's largest particle physics laboratory. Image credit: CERN.
  • 6. Evolution of the Universe. Test the Standard Model? What's matter made of? What holds it together? Anti-matter? (Gravity?)
  • 7. The Large Hadron Collider (LHC): a 27 km ring with 1232 dipole magnets, each 15 metres long and weighing 35 t. Image credit: CERN.
  • 8. LHC: World's Largest Cryogenic System (1.9 K). Colder than outer space (120 t of helium). Image credit: CERN.
  • 9. LHC: Highest Vacuum. Vacuum? Yes: 104 km of pipes at 10^-11 bar (comparable to the vacuum at the Moon). Image credit: CERN.
  • 10. The detectors: ATLAS, CMS, ALICE and LHCb. Heavier than the Eiffel Tower. Image credit: CERN.
  • 11. 40 million pictures per second: 1 PB/s. Image credit: CERN.
  • 12. Data Flow to Storage and Processing (Run 2, into the CERN DC): ALICE: 4 GB/s, ATLAS: 1 GB/s, CMS: 600 MB/s, LHCb: 750 MB/s.
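The per-experiment rates above can be totalled with a quick back-of-envelope sketch. The figures come from the slide; treating them as sustained 24-hour rates is an assumption for illustration only, since the detectors do not write continuously:

```python
# Run 2 data rates from the experiments into the CERN data centre (GB/s),
# as quoted on the slide.
rates_gb_s = {"ALICE": 4.0, "ATLAS": 1.0, "CMS": 0.6, "LHCb": 0.75}

total_gb_s = sum(rates_gb_s.values())        # aggregate ingest rate
tb_per_day = total_gb_s * 86_400 / 1_000     # GB/s -> TB/day (decimal units)

print(f"aggregate: {total_gb_s:.2f} GB/s, ~{tb_per_day:.0f} TB/day if sustained")
```

Roughly 6.35 GB/s aggregate, i.e. about half a petabyte per day if it were sustained, which is consistent with the multi-hundred-PB archive figures on the following slides.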
  • 13. CERN Data Centre: primary copy of the LHC data (the Data Centre is on Google Street View). 90k disks, 15k servers, >200 PB on tape. Image credit: CERN.
  • 14. WLCG: the LHC Computing Grid. About WLCG: a community of 10,000 physicists; ~250,000 jobs running concurrently; 600,000 processing cores; 700 PB storage available worldwide; 20-40 Gbit/s links connect CERN to the Tier-1s. Tier-0 (CERN): initial data reconstruction, data recording & archiving, data distribution to the rest of the world. Tier-1s (14 centres worldwide): permanent storage, re-processing, Monte Carlo simulation, end-user analysis. Tier-2s (>150 centres worldwide): Monte Carlo simulation, end-user analysis. 170 sites worldwide, >10,000 users. Image credit: CERN.
  • 15. CERN in 2017: 230 PB on tape in 550 million files; 55 PB produced in 2017.
  • 16. CERN Data Centre: Private OpenStack Cloud. More than 300,000 cores; more than 500,000 physics jobs per day.
  • 17. Infrastructure in 2011 • Data centre managed by a home-grown toolset (Quattor, Lemon, …), with initial development funded by EU projects • Development environment based on CVS • 100K or so lines of Perl • At the limit for power and cooling in Geneva • No simple expansion options
  • 18. Wigner Data Centre: project started in 2011, with inauguration in June 2013.
  • 19. Getting resources in 2011
  • 20. OpenStack London, July 2011
  • 21. 2011 - First OpenStack summit talk: https://www.slideshare.net/noggin143/cern-user-story
  • 22. The Agile Infrastructure Project. 2012, a turning point for CERN IT: - LHC computing and data requirements were increasing … Moore's law would help, but not enough - EU-funded projects for the fabric management toolset ended - Staff numbers were fixed, but resources had to grow - LS1 (2013) ahead, next window only in 2019! - Other deployments had surpassed CERN's. Three core areas: - Centralized monitoring - Configuration management - IaaS based on OpenStack. "All servers shall be virtual!"
  • 23. CERN Tool Chain
  • 25. And block storage: February 2013
  • 26. Sharing with Central Europe - May 2013: https://www.slideshare.net/noggin143/20130529-openstack-ceedayv6
  • 27. Production in Summer 2013
  • 29. CERN Ceph Clusters (size / version), with +5 PB in the pipeline:
    - OpenStack Cinder/Glance Production: 5.5 PB, jewel
    - Satellite data centre (1000 km away): 0.4 PB, luminous
    - CephFS (HPC+Manila) Production: 0.8 PB, luminous
    - Manila testing cluster: 0.4 PB, luminous
    - Hyperconverged HPC: 0.4 PB, luminous
    - CASTOR/XRootD Production: 4.2 PB, luminous
    - CERN Tape Archive: 0.8 PB, luminous
    - S3+SWIFT Production: 0.9 PB, luminous
  • 30. Bigbang Scale Tests • Bigbang scale tests mutually benefit CERN & the Ceph project • Bigbang I: 30 PB, 7200 OSDs, Ceph hammer; hit several osdmap limitations • Bigbang II: similar size, Ceph jewel; scalability limited by OSD/MON messaging, which motivated ceph-mgr • Bigbang III: 65 PB, 10800 OSDs. https://ceph.com/community/new-luminous-scalability/
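For a rough sense of scale, the Bigbang figures above imply the following raw (pre-replication) capacity per OSD; the only inputs are the PB and OSD counts from the slide, using decimal units:

```python
# Back-of-envelope: raw capacity per OSD in the Bigbang I and III tests.
bigbang = {
    "Bigbang I":   {"pb": 30, "osds": 7200},
    "Bigbang III": {"pb": 65, "osds": 10800},
}
tb_per_osd = {
    name: t["pb"] * 1000 / t["osds"]   # PB -> TB (decimal)
    for name, t in bigbang.items()
}
for name, tb in tb_per_osd.items():
    print(f"{name}: ~{tb:.1f} TB per OSD")
```

About 4 TB per OSD in Bigbang I and 6 TB in Bigbang III, i.e. the later test grew both the drive size and the OSD count.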
  • 31. OpenStack Magnum An OpenStack API Service that allows creation of container clusters ● Use your keystone credentials ● You choose your cluster type ● Multi-Tenancy ● Quickly create new clusters with advanced features such as multi-master
  • 32. OpenStack Magnum: single-command cluster creation.
    $ openstack coe cluster create --cluster-template kubernetes --node-count 100 … mycluster
    $ openstack coe cluster list
    +------+-----------+------------+--------------+-----------------+
    | uuid | name      | node_count | master_count | status          |
    +------+-----------+------------+--------------+-----------------+
    | .... | mycluster | 100        | 1            | CREATE_COMPLETE |
    +------+-----------+------------+--------------+-----------------+
    $ $(magnum cluster-config mycluster --dir mycluster)
    $ kubectl get pod
    $ openstack coe cluster update mycluster replace node_count=200
  • 33. Why Bare-Metal Provisioning? • VMs are not sensible/suitable for all of our use cases - storage and database nodes, HPC clusters, bootstrapping, critical network equipment or specialised network setups, precise/repeatable benchmarking for s/w frameworks, … • Complete our service offerings - physical nodes (in addition to VMs and containers), with the OpenStack UI as the single pane of glass • Simplify hardware provisioning workflows - for users: openstack server create/delete - for the procurement & h/w provisioning team: initial on-boarding, server re-assignments • Consolidate accounting & bookkeeping - resource accounting input will come from fewer sources, and machine re-assignments will be easier to track
  • 34. Compute-Intensive Workloads on VMs • Up to 20% loss on very large VMs! • “Tuning” (KSM*, EPT**, pinning, …) got this to ~10% • Compare with Hyper-V: no issue • NUMA-aware flavors & node pinning: <3%! • Cross-over: patches from Telecom. (*) Kernel Shared Memory (**) Extended Page Tables. Overhead by VM shape (sockets x cores): 4x8: 7.8%; 2x16: 16%; 1x24: 20% before, 5% after; 1x32: 20% before, 3% after.
  • 35. A new use case: Containers on Bare-Metal • OpenStack manages containers and bare metal, so put them together • General service offer: managed clusters - users get only K8s credentials; the cloud team manages the cluster and the underlying infra • The batch farm runs in VMs as well • Evaluating federated Kubernetes for hybrid cloud integration - 7 federated clouds demonstrated at KubeCon - OpenStack and non-OpenStack clouds transparently managed • Integration: seamless! (based on a specific template) • Monitoring (metrics/logs)? A pod in the cluster - logs: fluentd + ES; metrics: cadvisor + influx
  • 36. Hardware Burn-in in the CERN Data Centre (1) • h/w purchases: formal procedure compliant with public procurement rules - a market survey identifies potential bidders - a tender spec is sent out to ask for offers - larger deliveries arrive 1-2 times / year • “Burn-in” before acceptance - compliance with the technical spec (e.g. performance) - find failed components (e.g. broken RAM) - find systematic errors (e.g. bad firmware) - provoke early failures through stress (the “bathtub curve”). The whole process can take weeks!
  • 37. Hardware Burn-in in the CERN Data Centre (2) • Initial checks: serial asset tag and BIOS settings - purchase order ID and unique serial no. are set in the BMC (they form the node name!) • “Burn-in” tests - CPU: burnK7, burnP6, burnMMX (cooling) - RAM: memtest; disk: badblocks - Network: iperf(3) between pairs of nodes, with automatic node pairing - Benchmarking: HEPSpec06 (& fio), a derivative of SPEC06 - we buy total compute capacity (not the newest processors); a “double peak” structure appears due to slower hardware threads (OpenAccess paper).
    $ ipmitool fru print 0 | tail -2
    Product Serial : 245410-1
    Product Asset Tag : CD5792984
    $ openstack baremetal node show CD5792984-245410-1
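The "automatic node pairing" used for the network test can be sketched as follows; `pair_nodes` is a hypothetical helper for illustration, not CERN's actual burn-in tooling:

```python
# Sketch: pair up newly delivered nodes for point-to-point iperf runs.
# One node of each pair acts as the iperf server, the other as the client.
# With an odd count, the leftover node re-uses the first server.
def pair_nodes(nodes):
    pairs = []
    for i in range(0, len(nodes) - 1, 2):
        pairs.append((nodes[i], nodes[i + 1]))   # (server, client)
    if len(nodes) % 2 == 1 and len(nodes) > 1:
        pairs.append((nodes[0], nodes[-1]))      # odd node tests against node 0
    return pairs

delivery = [f"node{n:03d}" for n in range(1, 6)]   # 5 nodes in this example
for server, client in pair_nodes(delivery):
    # In the real campaign these would become something like
    #   ssh $server iperf3 -s &   and   ssh $client iperf3 -c $server
    print(f"{client} -> {server}")
```

The re-use of a server for the odd node out is one simple policy; a real campaign might instead rotate pairings across several rounds to exercise more switch paths.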
  • 39. Network Migration • Phase 1: Nova Network + Linux Bridge (being retired, but still used in 2018) • Phase 2: Neutron + Linux Bridge (already running) • Phase 3: SDN with Tungsten Fabric (testing) • New region coming in 2018
  • 40. Spectre / Meltdown • In January, a security vulnerability was disclosed, requiring a new kernel everywhere • Campaign over two weeks from 15th January: 7 reboot days, 7 tidy-up days, organised by availability zone • Benefits: automation is now in place to reboot the cloud if needed (33,000 VMs on 9,000 hypervisors), and the latest QEMU and RBD user code runs on all VMs • Downside: discovered a kernel bug in XFS which may mean we have to do it again soon
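Splitting a fleet of hypervisors into per-availability-zone reboot waves, as the campaign above did, can be sketched like this; the zone names, inventory and batch size are illustrative placeholders, not CERN's actual values:

```python
from collections import defaultdict

def reboot_waves(hypervisors, batch_size):
    """Group hypervisors by availability zone, then split each zone
    into batches that can be drained and rebooted together."""
    by_zone = defaultdict(list)
    for name, zone in hypervisors:
        by_zone[zone].append(name)
    waves = []
    for zone in sorted(by_zone):
        hosts = by_zone[zone]
        for i in range(0, len(hosts), batch_size):
            waves.append((zone, hosts[i:i + batch_size]))
    return waves

# Illustrative inventory: 7 hypervisors across two zones.
inventory = [(f"hv{n}", "zone-a" if n < 5 else "zone-b") for n in range(7)]
for zone, batch in reboot_waves(inventory, batch_size=2):
    print(zone, batch)
```

Working zone by zone keeps at least the other zones fully available at every step, which is the point of rebooting "by availability zone".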
  • 41. Community Experience • Open source collaboration sets a model for in-house teams • External recognition by the community is highly rewarding for contributors • Reviewing and being reviewed is a constant learning experience • Productive for the job market for staff • Working groups, like the Scientific and Large Deployment teams, discuss a wide range of topics • Effective knowledge transfer mechanisms consistent with the CERN mission: 110 outreach talks since 2011; Dojos at CERN bring good attendance (Ceph, CentOS, Elastic, OpenStack CH, …)
  • 42. HL-LHC: More collisions! • Increased complexity due to much higher pile-up and higher trigger rates will bring several challenges to reconstruction algorithms • CMS had to cope with monster pile-up: the 8b4e bunch structure gave a pile-up of ~60 events/crossing (for ~20 events/crossing) • CMS: event from 2017 with 78 reconstructed vertices • ATLAS: simulation for HL-LHC with 200 vertices
  • 43. LHC timeline: from the first run (2009) through LS1, the second run, LS2, the third run and LS3 to HL-LHC Run 4 (towards 2030). • A significant part of the cost comes from global operations • Even with a technology increase of ~15%/year, we still have a big gap if we keep trying to do things with our current compute models • The raw data volume increases significantly for the High-Luminosity LHC (2026)
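The gap the slide alludes to can be made concrete with compound growth. Only the ~15%/year technology gain comes from the slide; the time horizon and the assumed growth in demand are made-up placeholders for illustration:

```python
# Compound growth: capacity gains ~15%/year at flat budget, while the
# (assumed) resource needs grow faster. Both normalised to 1.0 in year 0.
years = 8                 # illustrative horizon towards HL-LHC
tech_growth = 0.15        # ~15%/year from the slide
demand_growth = 0.40      # illustrative assumption, NOT a CERN figure

capacity = (1 + tech_growth) ** years
demand = (1 + demand_growth) ** years
print(f"capacity x{capacity:.1f}, demand x{demand:.1f}, gap x{demand / capacity:.1f}")
```

Whatever the exact demand curve, any growth rate above ~15%/year compounds into a multiplicative gap over a decade, which is why the deck argues for changing the compute models rather than waiting for hardware.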
  • 45. Development areas going forward • Spot market • Cells v2 • Neutron scaling • Magnum rolling upgrades • Block storage performance • Federated Kubernetes • Collaborations with industry and SKA
  • 46. Summary • OpenStack has provided flexible infrastructure at CERN since 2013 • The open infrastructure toolchain has been stable at scale • Clouds are part, but not all, of the solution • Open source collaborations have been fruitful for CERN, industry and the communities • Further efforts will be needed to ensure that physics is not limited by the computing resources available
  • 47. Thanks for all your help. Some links: • CERN OpenStack blog at http://openstack-in-production.blogspot.com • Recent CERN OpenStack talks from the Vancouver summit at https://www.openstack.org/videos/search?search=cern • CERN tools at https://github.com/cernops
  • 49. Hardware Evolution • Looking at new hardware platforms to reduce the upcoming resource gap • Explorations have been made into low-cost, low-power ARM processors • Interesting R&D in high-performance hardware: GPUs for deep-learning network training and fast simulation; FPGAs for neural network inference and data transformations • Significant algorithm changes are needed to benefit from this potential

Editor's Notes

  1. Reference: Fabiola’s talk @ Univ of Geneva https://www.unige.ch/public/actualites/2017/le-boson-de-higgs-et-notre-vie/ European Centre for Nuclear Research. Founded in 1954; today 22 member states. World’s largest particle physics laboratory: ~2,300 staff, 13k users on site, budget ~1,000 MCHF. Mission: answer fundamental questions about the universe; advance the technology frontiers; train the scientists of tomorrow; bring nations together. https://communications.web.cern.ch/fr/node/84
  2. For all this fundamental research, CERN provides different facilities to scientists, for example the LHC. It’s a ring 27 km in circumference, crossing 2 countries, 100 m underground; it accelerates 2 particle beams to near the speed of light and makes them collide at 4 different points, where detectors observe the fireworks. 2,500 people are employed by CERN, >10k users on the site. Talk about the LHC here: describe the experiments, Lake Geneva, Mont Blanc, and then jump in. The big ring is the LHC, the small one is the SPS; the computer centre is not far away. Pushing the boundary of technology: CERN facilitates research and runs the accelerators; the experiments are done by institutes, member states and universities. On the Franco-Swiss border, very close to Geneva.
  3. Our flagship program is the LHC Trillions of protons race around the 27km ring in opposite directions over 11,000 times a second, travelling at 99.9999991 per cent the speed of light. Largest machine on earth
  4. With an operating temperature of about -271 degrees Celsius, just 1.9 degrees above absolute zero, the LHC is one of the coldest places in the universe. 120 t of helium; only at that temperature is there no electrical resistance.
  5. https://home.cern/about/engineering/vacuum-empty-interstellar-space Inside, the beams operate at a very high vacuum, comparable to the vacuum at the Moon. There are actually 2 proton beams going in 2 directions; the vacuum is needed to avoid the protons interacting with other particles.
  6. Technologically very advanced beasts, 4 of them. ATLAS and CMS are the most well known: general-purpose detectors testing Standard Model properties; in those detectors the Higgs particle was discovered in 2012. In the picture you can see physicists. Then ALICE and LHCb. To sample and record the debris from up to 600 million proton collisions per second, scientists are building gargantuan devices that measure particles with micron precision.
  7. 100 Mpixel camera, 40 Million picture per seconds https://www.ethz.ch/en/news-and-events/eth-news/news/2017/03/new-heart-for-cerns-cms.html
  8. https://home.cern/about/computing/processing-what-record First run: about 5 GB/s. Size of the fibres from the pits to the DC?
  9. What do we do with all this data? First we store it; the analysis is done offline and can go on for years.
  10. Tiered system where Tier-0 is CERN: data is recorded, reconstructed and distributed. All these detectors generate loads of data… about 1 PB (petabyte = a million gigabytes) per… SECOND! Impossible to store so much data, and anyway not needed: the events the experiments are trying to create and observe are very rare. That’s why we make so many collisions but keep only the interesting ones. Next to each detector is therefore a «trigger», a kind of filter made of various layers (first electronics, then computers) which selects and keeps only about 1 collision in a million on average. In the end the experiments still generate dozens of petabytes of data each year. We need about 200,000 computer CPUs to analyse this data. As CERN has only about 100,000 CPUs, we share the data over more than 100 computer centres across the planet (usually located in the physics institutes participating in the LHC collaborations). This is the Computing Grid, a gigantic planetary computer and hard drive! Biggest scientific grid project in the world: ~170 computer centres (sites); 1 Tier-0 (distributed over two locations); 14 bigger centres (Tier-1); ~160 Tier-2; 42 countries; 10,000 users. Running since Oct 2008; 3 million jobs per day; ~600,000 cores; 300 PB data. Do you want to contribute? http://lhcathome.web.cern.ch/
  11. Optimized the usage of resources and computing (~2012: private cloud based on OpenStack), focusing on virtualization etc. and scaling options.