The document discusses the CERN OpenStack cloud, which provides compute resources for the Large Hadron Collider experiments at CERN. It details the scale of the cloud, including over 6,700 hypervisors, 190,000 cores, and 20,000 VMs. It also describes the various use cases served, the wide range of hardware, and the operations of the cloud, including a retirement campaign and the network migration to Neutron.
CERN, the European Organization for Nuclear Research, is one of the world’s largest centres for scientific research. Its business is fundamental physics: finding out what the universe is made of and how it works. At CERN, accelerators such as the 27km Large Hadron Collider are used to study the basic constituents of matter. This talk reviews the challenges of recording and analysing the 25 Petabytes/year produced by the experiments, and the investigations into how OpenStack could help deliver a more agile computing infrastructure.
10 Years of OpenStack at CERN - From 0 to 300k cores - Belmiro Moreira
CERN, the European Laboratory for Particle Physics, provides the infrastructure and resources for thousands of scientists all around the world to uncover the mysteries of the Universe. In the quest to build a private cloud infrastructure to support its users, CERN started evaluating the OpenStack project early, building several prototypes and engaging with the community. Finally, in 2013, CERN released its production Cloud Infrastructure using OpenStack. Since then we have moved from a few hundred cores to a multi-cell deployment spread across different regions. After 7 years deploying and managing OpenStack in production at a large scale, we now look back and discuss the challenges of building a massive-scale infrastructure from 0 to +300K cores. In this talk we will dive into the history, architecture, tools and technical decisions behind the CERN Cloud Infrastructure over the years.
Multi-Cell OpenStack: How to Evolve Your Cloud to Scale - November 2014 - Belmiro Moreira
Multi-Cell OpenStack: How to Evolve Your Cloud to Scale
OpenStack Design Summit, Paris - November, 2014
Belmiro Moreira - CERN
Matt Van Winkle - Rackspace
Sam Morrison - NeCTAR, University of Melbourne
The next generation of research infrastructure and large scale scientific instruments will face new magnitudes of data.
This talk presents two flagship programmes: the next generation of the Large Hadron Collider (LHC) at CERN and the Square Kilometre Array (SKA) radio telescope. Each, in its own way, will push infrastructure to the limit.
The LHC has been one of the significant users of OpenStack in scientific computing. The SKA is now working to a final software architecture design and is focusing on OpenStack as an underlying middleware function.
Together, we plan to develop a common platform for scaling science: to accommodate new applications and software services, to deliver high ingest rate real-time and batch processing, to integrate high performance storage and to unlock the potential of software defined networking.
CERN OpenStack Cloud Control Plane - From VMs to K8s - Belmiro Moreira
CERN is the home of the Large Hadron Collider (LHC), a 27km circular proton accelerator that generates petabytes of physics data every year. To process all this data, CERN runs an OpenStack Cloud (>300K cores) that helps scientists all around the world to unveil the mysteries of the Universe. The Infrastructure is also used to run all the IT services of the Organization.
Delivering these services with high performance and reliable service levels has been one of the major challenges for the CERN Cloud engineering team. We have been constantly iterating on the architecture and deployment model of the Cloud control plane.
In this presentation we will describe the different control plane architecture models that we have relied on over the years. Finally, we will describe all the work done to move the OpenStack Cloud control plane from VMs into a Kubernetes cluster. We will report on our experience running this architecture at scale, its advantages and challenges.
Containers on Baremetal and Preemptible VMs at CERN and SKA - Belmiro Moreira
CERN, the European Organization for Nuclear Research, and SKA, the Square Kilometre Array, are preparing the next generation of research infrastructure for the new large-scale scientific instruments that will produce new magnitudes of data. At the OpenStack Summit in Sydney we presented the collaboration and the platform that we plan to develop for scaling science.
In this talk we will present the work done on Preemptible VMs and Containers on Baremetal.
Preemptible VMs are instances that use idle allocated resources in the infrastructure and can be terminated when this capacity is required. Containers on bare metal eliminate the virtualization overhead, enabling the full container performance required for scientific workloads.
We will present the current state, development and integration decisions and how these functionalities can be used in a common OpenStack infrastructure.
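The abstract describes the preemptible-VM idea without the mechanics. One way such a reaper could be sketched with the standard OpenStack CLI is shown below; the `preemptible` tag, the free-RAM threshold, and the overall loop are illustrative assumptions, not CERN's actual implementation:

```shell
#!/bin/sh
# Hypothetical reaper sketch: delete the oldest instances tagged "preemptible"
# until the hypervisors report enough free RAM for regular workloads.
# The tag name and the 64 GB threshold are illustrative assumptions.
THRESHOLD_MB=65536

for server in $(openstack server list --tag preemptible \
                  --sort-column created_at -f value -c ID); do
  free=$(openstack hypervisor stats show -f value -c free_ram_mb)
  if [ "$free" -ge "$THRESHOLD_MB" ]; then
    break   # enough capacity reclaimed; stop deleting
  fi
  openstack server delete --wait "$server"
done
```

In practice a production reaper would also have to coordinate with the scheduler so that reclaimed capacity is actually granted to the pending high-priority request, which is the harder part of the problem.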
CERN, the European Organization for Nuclear Research, has been running a large OpenStack Cloud for several years, helping thousands of scientists analyze the data from the LHC.
In 2012, early in the design phase of the CERN Cloud, we decided to use Nova Cells to enable the infrastructure to scale to thousands of nodes. Now, with more than 280K cores spread across 70 cells hosted in two data centres, we faced the challenge of migrating to Nova Cells V2, which is required as of the Pike release.
In this presentation, we will describe how Nova Cells allowed CERN to scale to thousands of nodes, its advantages, and how we mitigated the implementation issues of Nova Cells V1. Next, we will cover how we upgraded Nova from Newton with Cells V1 to Pike with Cells V2. We will explain the steps that we followed and the issues that we faced during the upgrade. Finally, we will report our experience with Cells V2 at scale, its caveats, and how we mitigate them.
What can I expect to learn?
This presentation describes how CERN migrated from Cells V1 to Cells V2 when upgrading from the Newton to the Pike release.
You will learn the procedures followed by CERN in order to migrate Cells V1 to Cells V2 in a large production environment.
The issues found during the upgrade and how we mitigated them will be discussed.
Also, we will present how Cells V2 behaves in a large-scale deployment with several thousand nodes in 70 cells.
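The abstract does not spell out the commands, but any Newton-to-Pike Cells V2 migration has to run some form of the upstream `nova-manage cell_v2` bootstrap. A generic sketch follows; the database connection strings, transport URL, and cell UUID are placeholders, not CERN's actual values:

```shell
# Generic upstream Nova Cells V2 bootstrap (placeholders, not CERN's values).

# 1. Create cell0, which holds instances that failed to be scheduled
nova-manage cell_v2 map_cell0 \
  --database_connection mysql+pymysql://nova:secret@db-host/nova_cell0

# 2. Register each cell with its own database and message queue
nova-manage cell_v2 create_cell --name cell1 \
  --database_connection mysql+pymysql://nova:secret@db-host/nova_cell1 \
  --transport-url rabbit://nova:secret@mq-host:5672/

# 3. Map existing compute hosts and instances into the new cell
nova-manage cell_v2 discover_hosts
nova-manage cell_v2 map_instances --cell_uuid <cell1-uuid>
```

At CERN's scale the interesting part is repeating steps 2 and 3 across ~70 cells and doing so while the cloud stays in service, which is what the talk covers.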
Learning to Scale OpenStack: A Case Study in Rackspace's Open Cloud Deployment was presented at the OpenStack Design Summit in Portland, OR on April 17, 2013. Watch the recording of the presentation on YouTube: http://www.youtube.com/watch?v=3x8X6f5mnzc
Tips, Tricks and Tactics with Cells and Scaling OpenStack - May 2015 - Belmiro Moreira
Tips, Tricks and Tactics with Cells and Scaling OpenStack
OpenStack Summit, Vancouver - May 2015
Belmiro Moreira - CERN
Matt Van Winkle - Rackspace
Sam Morrison - NeCTAR, University of Melbourne
CERN is the home of the Large Hadron Collider (LHC), a 27km circular proton accelerator generating tens of petabytes of new data every year. The data is stored and processed using resources totaling over 250,000 cores and thousands of storage servers, managed by OpenStack.
Networking is a critical part of our infrastructure and arguably the hardest to evolve. Given the size of CERN’s infrastructure, its flat network is partitioned into segments, each representing a separate broadcast domain and potentially offering different levels of service. This fragmentation improves scalability and confines the impact of misbehaving systems in the datacentre to individual segments. On the other hand, having multiple broadcast domains means features like floating and virtual IPs are much harder to offer.
We will tell the story of OpenStack networking at CERN: the first integration with Nova Network, the migration to Neutron, and how we are adding SDN to our infrastructure.
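The talk tells the migration story rather than showing commands, but broadcast-domain segments of the kind described are typically modelled in Neutron as provider networks. A sketch with the standard OpenStack CLI follows; the physical network name, VLAN ID, and CIDR are illustrative assumptions, not CERN's values:

```shell
# Sketch: model one broadcast-domain segment as a Neutron provider network.
# "datacentre", VLAN 100 and 192.0.2.0/24 are placeholders for illustration.
openstack network create segment-1 \
  --provider-network-type vlan \
  --provider-physical-network datacentre \
  --provider-segment 100

openstack subnet create segment-1-v4 \
  --network segment-1 \
  --subnet-range 192.0.2.0/24 \
  --dhcp
```

Repeating this per segment preserves the isolation described above, at the cost of making cross-segment features like floating IPs harder, exactly the trade-off the abstract mentions.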
tcp cloud presentation at the OpenContrail Meetup in Vancouver, May 2015, about OpenStack/OpenContrail implementations, Juno integration, and the SaltStack announcement.
Experience of Running Spark on Kubernetes on OpenStack for High Energy Physic... - Databricks
The physicists at CERN are increasingly turning to Spark to process large physics datasets in a distributed fashion, with the aim of reducing time-to-physics with increased interactivity. The physics data itself is stored in CERN’s mass storage system, EOS, and CERN’s IT department runs an on-premises private cloud based on OpenStack to provide on-demand compute resources to physicists. This presents both opportunities and challenges for the Big Data team at CERN in providing an elastic, scalable, reliable Spark-as-a-service on OpenStack.
The talk focuses on the design choices made and challenges faced while developing Spark-as-a-service on Kubernetes on OpenStack to simplify provisioning, automate management, and minimize the operating burden of managing Spark clusters. In addition, the service tooling simplifies submitting applications on behalf of the users, mounting user-specified ConfigMaps, copying application logs to S3 buckets for troubleshooting, performance analysis and accounting of Spark applications, and support for stateful Spark Streaming applications. We will also share results from running large-scale sustained workloads over terabytes of physics data.
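The abstract stays at the service level; underneath, submitting Spark to Kubernetes uses the standard upstream `k8s://` master URL. The API server address, container image, and application jar below are placeholders, not CERN's actual deployment:

```shell
# Standard Spark-on-Kubernetes submission (upstream spark-submit syntax).
# API server URL, image name and jar path are placeholders.
spark-submit \
  --master k8s://https://k8s-apiserver.example:6443 \
  --deploy-mode cluster \
  --name physics-analysis \
  --conf spark.executor.instances=5 \
  --conf spark.kubernetes.container.image=registry.example/spark:3.5.0 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  local:///opt/spark/examples/jars/spark-examples.jar
```

A Spark-as-a-service layer like the one described essentially automates generating and running this invocation (plus ConfigMap mounts and log shipping) on behalf of each user.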
Charts for the presentation at the OpenStack Summit in Barcelona, October 2016. The video is available at:
https://www.openstack.org/videos/video/toward-10000-containers-on-openstack
CERN is the European Centre for Particle Physics based in Geneva. The home of the Large Hadron Collider and the birth place of the world wide web is expanding its computing resources with a second data centre to process over 35PB/year from one of the largest scientific experiments ever constructed.
Within the constraints of a fixed budget and manpower, agile computing techniques and common open source tools are being adopted to support over 11,000 physicists in their search for how the universe works and what it is made of.
By challenging special requirements and understanding how other large computing infrastructures are built, we have deployed a 50,000 core cloud based infrastructure building on tools such as Puppet, OpenStack and Kibana.
In moving to a cloud model, this has also required close examination of the IT processes and culture. Finding the right approach between Enterprise and DevOps techniques has been one of the greatest challenges of this transformation.
This talk will cover the requirements, tools selected, results achieved so far and the outlook for the future.
OpenStack Ecosystem – Xen Cloud Platform and Integration into OpenStack - in... - IndicThreads
Session presented at the 2nd IndicThreads.com Conference on Cloud Computing held in Pune, India on 3-4 June 2011.
http://CloudComputing.IndicThreads.com
Abstract: OpenStack is an initiative by Rackspace and NASA that aims to build an open cloud platform supported by a vibrant ecosystem to encourage broad adoption in the market. It is currently a hot favorite of enterprises looking to build an open cloud.
This talk will provide a brief overview of the different OpenStack Modules (Compute and Storage) and explain how to utilize these to build a cloud. We will also explore the newly released Xen Cloud Platform (XCP) and its integration with OpenStack Platform. There will be a hands-on demo (time permitting) where we will show how the integration between the OpenStack Platform and XCP works.
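The talk predates today's configuration layout, but the Nova-side XenAPI/XCP wiring of that era (circa 2011) amounted to a handful of flags in the Nova flagfile. A sketch follows; the host, username, and password are placeholders, and the exact flag set varied by Nova release:

```shell
# Era-appropriate (circa 2011) Nova flags for the XenAPI/XCP compute driver.
# Host and credentials are placeholders; verify flag names against your release.
cat >> /etc/nova/nova.conf <<'EOF'
--connection_type=xenapi
--xenapi_connection_url=https://xcp-host.example
--xenapi_connection_username=root
--xenapi_connection_password=secret
EOF
```

With these set, nova-compute talks to the XCP host over the XenAPI interface instead of a local hypervisor, which is the integration the demo walks through.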
Key Takeaways for the audience:
1) Understanding of the OpenStack platform
2) How to get started with OpenStack for building your own cloud
3) Understanding of XCP
4) How the integration (OpenStack-XCP) is supposed to work
5) What are the opportunities for building different products that add value in the OpenStack Ecosystem
Speaker: Amit Naik is an Architect at BMC Software with 15 years of experience in the IT field, including delivering multiple end-to-end projects and products. He has spoken at venues in India and abroad and has experience with blogging and evangelism, along with excellent communication and interpersonal skills.
Joint Speaker: Prasad Nirantar is a Staff Product Developer at BMC Software. He holds a B.E in Polymer Engineering from the University of Pune and an MS from University of Akron, US. He also holds a diploma in business management from Symbiosis University.
This is a workshop given at the codemonsters.pro technology conference.
It illustrates how you can create and manage VM and container virtual infrastructures and combine them in a useful way.
Cisco: Cassandra adoption on Cisco UCS & OpenStack - DataStax Academy
In this talk we will address how we developed our Cassandra environments using the Cisco UCS OpenStack Platform with the DataStax Enterprise Edition software. In addition, we use open-source Ceph storage in our infrastructure to optimize performance and reduce costs.
Do you think of cheetahs, not RabbitMQ, when you hear the word Swift? Think a nova is just a giant exploding star, not a cloud compute engine? This deck (presented at the OpenStack Boston meetup) provides an introduction that will answer many of your questions. It covers the basic components, including Nova, Swift, Cinder, Keystone, Horizon and Glance.
PSOCLD-1006 Cisco Cloud Architectures on OpenStack - Cisco Live! US 2015 San ... - Rohit Agarwalla
OpenStack solutions have revolutionized economics, flexibility and scalability for the cloud. Hear how Cisco innovations like Application Centric Infrastructure and Intercloud Fabric bring unparalleled efficiency to OpenStack private cloud deployments. Attendees will be introduced to Cisco Validated Designs for deploying Red Hat Enterprise Linux OpenStack Platform. This session will cover Cisco's OpenStack strategy, architecture and solutions. It will discuss in detail the Cisco integration, innovations and differentiation for OpenStack. In addition, it will cover the architecture for both private and public cloud offerings. It will also cover the key Cisco partnerships, offerings and UCS bundles that help accelerate this solution.
What is OpenStack and the added value of IBM solutions - Sasha Lazarevic
OpenStack has become the de facto standard for private cloud implementations. This is a presentation of OpenStack basics, with a conclusion that can be valuable to professional services. I recommend that clients pay attention to IBM's value-added solutions like Cloud Manager and Cloud Orchestrator.
HUAWEI CONNECT is an annual conference for the global ICT ecosystem, where industry and research leaders share ideas, explore technologies, and join forces for shared growth. The topic of 2017 conference was "Grow with the Cloud".
Jan van Eldik from CERN made a presentation to promote the achievements of this project and showed how public procurers can drive innovation in the cloud industry from the demand side.
Review of CERN's objectives and how the computing infrastructure is evolving to address the challenges at scale using community supported software such as Puppet and OpenStack.
The OpenStack Cloud at CERN - OpenStack Nordic
2. The CERN OpenStack Cloud
Compute Resource Provisioning for the Large Hadron Collider
Jan van Eldik, for the CERN Cloud Team
OpenStack Days Nordic, Stockholm, September 22, 2016
3. The CERN OpenStack Cloud - OpenStack Days Nordic - Sept 22, 2016
Agenda
• Introduction to CERN
• Computing at CERN scale
• Cloud Service Overview
• Operations
• Performance
• Outlook
4. CERN: home.cern
• European Organization for Nuclear
Research (Conseil Européen pour la Recherche Nucléaire)
- Founded in 1954, today 22 member states
- World’s largest particle physics laboratory
- Located at Franco-Swiss border near Geneva
- ~2’300 staff members, >12’500 users
- Budget: ~1000 MCHF (2016)
• CERN’s mission
- Answer fundamental questions of the universe
- Advance the technology frontiers
- Train the scientists and engineers of tomorrow
- Bring nations together
5. The Large Hadron Collider (LHC)
Largest machine on earth: 27km circumference
6. LHC: 9’600 Magnets for Beam Control
1232 superconducting dipoles for bending: 14m, 35t, 8.3T, 12kA
7. LHC: Coldest Temperature
World’s largest cryogenic system: colder than outer space (1.9K/2.7K), 120t of He
8. LHC: Highest Vacuum
Vacuum system: 104 km of pipes, 10⁻¹⁰ to 10⁻¹¹ mbar (comparable to the moon)
9. LHC: Detectors
Four main experiments to study the fundamental properties of the universe
10. A collision at LHC
12. Tier 0 at CERN: Acquisition, First pass reconstruction, Storage & Distribution
1.25 GB/sec (ions)
2011: 400-500 MB/sec
2011: 4-6 GB/sec
13. Solution: the Grid
• Use the Grid to unite computing resources of particle physics institutes around the world
The World Wide Web provides seamless access to information that is stored in many millions of different geographical locations.
The Grid is an infrastructure that provides seamless access to computing power and data storage capacity distributed over the globe.
14. LHC: World-wide Computing Grid
TIER-0 (CERN): data recording, reconstruction and distribution
TIER-1: permanent storage, re-processing, analysis
TIER-2: simulation, end-user analysis
> 2 million jobs/day, ~350’000 cores, 500 PB of storage
Nearly 170 sites in 40 countries, connected by 10-100 Gb links
15. LHC: Data & Compute Growth
[Charts: compute needs (10⁹ HS06·sec/month) and data volume (PB/year) per experiment (ALICE, ATLAS, CMS, LHCb) across Runs 1-4, growing well beyond what we can afford]
Collisions produce ~1 PB/s
16. 2012: Enter the cloud
• Aim: virtualize all the machines
• Unless really, really, really not possible
• Offer Cloud endpoints to users
• Scale horizontally
• Consolidate server provisioning
• Yes, use the private cloud for server consolidation use cases as well
17. OpenStack at CERN
In production:
• 4 clouds
• >200K cores
• >8,000 hypervisors
90% of CERN’s compute resources are now delivered on top of OpenStack
18. Cloud Service Context
• CERN IT enables the laboratory to fulfill its mission
- Main data center on the Geneva site
- Wigner data center, Budapest, 23ms away
- Connected via two dedicated 100Gb/s links
• CERN Cloud Service is one of the three major components in IT’s Agile Infrastructure (AI) project
- Policy: Servers in CERN IT shall be virtual
• Based on OpenStack
- Production service since July 2013
- Performed 4 rolling upgrades since
- Currently in transition from Liberty to Mitaka
- Nova, Glance, Keystone, Horizon, Cinder, Ceilometer, Heat, Neutron, Magnum, Barbican
http://goo.gl/maps/K5SoG
19. CERN Cloud Architecture (1)
• Deployment spans our two data centers
- 1 region (to have 1 API), ~40 cells
- Cells map to use cases: hardware, hypervisor type, location, users, …
• Top cell on physical and virtual nodes in HA
- Clustered RabbitMQ with mirrored queues
- API servers are VMs in various child cells
• Child cell controllers are OpenStack VMs
- One controller per cell
- Tradeoff between complexity and failure impact
20. CERN Cloud Architecture (2)
[Diagram: the top cell controller (nova-cells, rabbitmq) and the API servers (nova-api) communicate with per-cell child controllers (rabbitmq, nova-cells, nova-api, nova-scheduler, nova-conductor, nova-network), each driving its compute nodes (nova-compute); all cells share a common DB infrastructure]
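The cell routing sketched above can be illustrated with a toy scheduler. This is a minimal sketch, not CERN's actual nova-cells code; all cell names, attributes, and numbers are hypothetical:

```python
# Illustrative only: how a top cell might route a VM request to a child
# cell based on use case, location, and capacity. Names are made up.
CELLS = {
    "batch01":    {"use_case": "batch",   "location": "GVA", "free_cores": 4000},
    "services01": {"use_case": "service", "location": "GVA", "free_cores": 800},
    "wigner01":   {"use_case": "batch",   "location": "WIG", "free_cores": 2500},
}

def pick_cell(use_case, location, cores):
    """Return the matching cell with the most free cores, or raise if
    no cell can host the request."""
    candidates = [
        name for name, c in CELLS.items()
        if c["use_case"] == use_case
        and c["location"] == location
        and c["free_cores"] >= cores
    ]
    if not candidates:
        raise LookupError("no cell can host this request")
    return max(candidates, key=lambda n: CELLS[n]["free_cores"])
```

In the real deployment the scheduling decision also reflects hypervisor type and user/project mappings, but the principle is the same: the top cell only routes, the child cell schedules onto compute nodes.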
21. CERN Cloud in Numbers (1)
• ~6700 hypervisors in production
- Split over 40+ Nova cells
- Vast majority qemu/kvm now on CERN CentOS 7 (~150 Hyper-V hosts)
- ~2’100 HVs at Wigner in Hungary (batch, compute, services)
- 370 HVs on critical power
• 190k Cores
• ~430 TB RAM
• ~20’000 VMs
• Big increase during 2016!
- +57k cores in spring
- +40k cores in autumn
22. CERN Cloud in Numbers (2)
• 2’700 images/snapshots
- Glance on Ceph
• 2’300 volumes
- Cinder on Ceph (& NetApp) in GVA & Wigner
• Only issue during 2 years in prod: Ceph Issue 6480 (Rally down)
• Every 10s a VM gets created or deleted in our cloud!
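One VM operation every 10 seconds may sound modest; a quick back-of-the-envelope calculation shows the churn it implies:

```python
# Back-of-the-envelope: one VM create/delete every 10 seconds.
ops_per_day = 24 * 60 * 60 // 10   # 8640 operations per day
ops_per_year = ops_per_day * 365   # 3153600, i.e. ~3.15 million per year
```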
23. Software Deployment
• Deployment based on CentOS and RDO
- Upstream, only patched where necessary
(e.g. nova/neutron for CERN networks)
- A few customizations
- Works well for us
• Puppet for configuration management
- Introduced with the adoption of the AI paradigm
• We submit upstream whenever possible
- openstack, openstack-puppet, RDO, …
• Updates done service-by-service over several months
- Running services on dedicated (virtual) servers helps
(Exception: ceilometer and nova on compute nodes)
• Upgrade testing done with packstack and devstack
- Depends on service: from simple DB upgrades to full shadow installations
24. ‘dev’ environment
• Simulate the full CERN cloud environment
on your laptop, even offline
• Docker containers in a Kubernetes cluster
- Clone of central Puppet configuration
- Mock-up of
- our two Ceph clusters
- our secret store
- our network DB & DHCP
- Central DB & Rabbit instances
- One POD per service
• Change & test in seconds
• Full upgrade testing
25. Cloud Service Release Evolution ((*) = pilot)
• ESSEX (5 April 2012): Nova (*), Swift, Glance (*), Horizon (*), Keystone (*)
• FOLSOM (27 September 2012): Nova (*), Swift, Glance (*), Horizon (*), Keystone (*), Quantum, Cinder
• GRIZZLY (4 April 2013): Nova, Swift, Glance, Horizon, Keystone, Quantum, Cinder, Ceilometer (*)
• HAVANA (17 October 2013): Nova, Swift, Glance, Horizon, Keystone, Neutron, Cinder, Ceilometer (*), Heat
• ICEHOUSE (17 April 2014): Nova, Swift, Glance, Horizon, Keystone, Neutron, Cinder, Ceilometer, Heat
• JUNO (16 October 2014): Nova, Swift, Glance, Horizon, Keystone, Neutron, Cinder, Ceilometer, Heat (*), Rally (*)
• KILO (30 April 2015): Nova, Swift, Glance, Horizon, Keystone, Neutron, Cinder, Ceilometer, Heat, Rally
• LIBERTY (15 October 2015): Nova, Swift, Glance, Horizon, Keystone, Neutron (*), Cinder, Ceilometer, Heat, Rally, Magnum (*), Barbican (*)
CERN milestones: Production Service (July 2013), Havana Release (February 2014), Icehouse Release (October 2014), Juno Release (March 2015), Kilo Release (September 2015), ongoing Liberty (May 2016)
26. Rich Usage Spectrum …
• Batch service
- Physics data analysis
• IT Services
- Sometimes built on top of other
virtualised services
• Experiment services
- E.g. build machines
• Engineering services
- E.g. micro-electronics/chip design
• Infrastructure services
- E.g. hostel booking, car rental, …
• Personal VMs
- Development
… rich requirement spectrum!
27. Use cases (1)
Server consolidation:
• Service nodes, dev boxes, Personal VMs, …
• Performance less important than “durability”
• Live-migration is desirable
• Persistent block storage is required
• Linux VMs @ KVM, Windows VMs @ Hyper-V
• Starting to run Windows VMs under KVM
• “Pets” use case: 32K cores, 7500 VMs
28. Use cases (2)
• Compute workloads
• Optimized for compute efficiency
• CPU passthrough, NUMA-aware flavours
• Still, very different workloads
• IT Batch: LSF and HTCondor, long-lived VMs, 8- and 16-core VMs, “full-node” flavors
• CMS Tier-0: medium-long, 8-core VMs
• LHCb Vcycle: short-lived, single-core VMs
• Low-SLA, “cattle” use case
• 150K cores, 12500 VMs on 6000 compute nodes
29. Wide Hardware Spectrum
• The ~6700 hypervisors differ in …
- Processor architectures: AMD vs. Intel (av. features, NUMA, …)
- Core-to-RAM ratio (1:2, 1:4, 1:1.5, ...)
- Core-to-disk ratio (going down with SSDs!)
- Disk layout (2 HDDs, 3 HDDs, 2 HDDs + 1 SSD, 2 SSDs, …)
- Network (1GbE vs 10 GbE)
- Critical vs physics power
- Physical location (Geneva vs. Budapest)
- Network domain (LCG vs. GPN vs. TN)
- CERN CentOS 7, RHEL7, SLC6, Windows
- …
• Variety reflected/accessible via instance types,
cells, projects … variety not necessarily visible to
users!
- We try to keep things simple and hide some of the complexity
- We can react to (most of the) requests with special needs
31. Basic Building Blocks: Volume Types
Name       IOPS  b/w [MB/s]  Feature           Location
standard   100   80          -                 GVA
io1        500   120         fast              GVA
cp1        100   80          critical          GVA
cpio1      500   120         critical/fast     GVA
cp2        n.a.  120         critical/Windows  GVA
wig-cp1    100   80          critical          WIG
wig-cpio1  500   120         critical/fast     WIG
m2.* flavor family plus volumes as basic building blocks for services
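Choosing among these building blocks can be mechanized. A minimal sketch, assuming the IOPS/bandwidth/feature values from the table above (the helper itself is hypothetical, not a CERN tool; cp2 is omitted since Windows support is not a simple boolean):

```python
# Illustrative: map service requirements to one of the Cinder volume
# types listed in the table. Values copied from the table above.
VOLUME_TYPES = {
    # name: (iops, bandwidth_mb_s, critical, fast, location)
    "standard":  (100, 80,  False, False, "GVA"),
    "io1":       (500, 120, False, True,  "GVA"),
    "cp1":       (100, 80,  True,  False, "GVA"),
    "cpio1":     (500, 120, True,  True,  "GVA"),
    "wig-cp1":   (100, 80,  True,  False, "WIG"),
    "wig-cpio1": (500, 120, True,  True,  "WIG"),
}

def choose_volume_type(location, critical=False, fast=False):
    """Return the first volume type matching the requested attributes."""
    for name, (_, _, crit, fst, loc) in VOLUME_TYPES.items():
        if loc == location and crit == critical and fst == fast:
            return name
    raise LookupError("no matching volume type")
```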
32. Automate provisioning with
Automate routine procedures
- Common place for workflows
- Clean web interface
- Scheduled jobs, cron-style
- Traceability and auditing
- Fine-grained access control
- …
Procedures for
- OpenStack project creation
- OpenStack quota changes
- Notifications of VM owners
- Usage and health reports
- …
Example workflow: disable compute node
• Disable nova-service
• Switch alarms OFF
• Update Service-Now ticket
• Notifications: send e-mail to VM owners
• Other tasks: post new message broker, add remote AT job, save intervention details, send calendar invitation
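The "disable compute node" procedure above is essentially an ordered, auditable sequence of steps. A minimal sketch of that shape (step names and the log format are hypothetical; the real procedure runs in a workflow engine with access control and scheduling):

```python
# Illustrative workflow: run the "disable compute node" steps in order,
# recording each action so the run is traceable afterwards.
def disable_compute_node(host, log):
    steps = [
        ("disable nova-service", lambda: log.append(f"nova service-disable {host}")),
        ("switch alarms off",    lambda: log.append(f"alarms off {host}")),
        ("update ticket",        lambda: log.append(f"ticket updated for {host}")),
        ("notify VM owners",     lambda: log.append(f"e-mail sent for {host}")),
    ]
    for name, action in steps:
        action()  # in a workflow engine each step would also be retryable
    return log
```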
33. Operations: Retirement Campaign
• About 1’600 nodes to retire from the service by 3Q16
- ~1’200 from compute, ~400 with services (hosting ~5000 VMs)
• We have gained quite some experience with (manual)
migration
- Live where possible and cold where necessary
- Works reliably (where it can)
• We have developed a tool that can be instructed to drain a hypervisor (or simply migrate given VMs)
- Main tasks are VM classification and progress monitoring
- The nova scheduler will pick the target (nova patch)
• We are using the “IP service bridging” to handle
CERN network specifics
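The VM-classification step the drain tool performs can be sketched as follows. This is illustrative only, under assumed criteria (running, volume-backed VMs move live; everything else moves cold), not CERN's actual classification logic:

```python
# Illustrative VM classification for a retirement campaign:
# live-migrate where possible, cold-migrate where necessary.
def classify(vm):
    """vm: dict with 'state' and 'has_local_disk' keys."""
    if vm["state"] != "ACTIVE":
        return "cold"        # stopped VMs are simply moved cold
    if vm["has_local_disk"]:
        return "cold"        # assumed constraint on local-disk migration
    return "live"            # running and volume-backed: live-migrate

vms = [
    {"name": "build01", "state": "ACTIVE",  "has_local_disk": False},
    {"name": "batch17", "state": "ACTIVE",  "has_local_disk": True},
    {"name": "dev02",   "state": "SHUTOFF", "has_local_disk": False},
]
plan = {vm["name"]: classify(vm) for vm in vms}
```

The tool then monitors progress per hypervisor while the (patched) nova scheduler picks each migration target.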
34. Operations: Network Migration
• We’ll need to replace nova-network
- It’s going to be deprecated (really really really this time)
• New potential features (Tenant networks, LBaaS,
Floating IPs, …)
• We have a number of patches to adapt to the CERN network
constraints
- We patched nova for each release …
- … neutron allows for out-of-tree plugins!
35. Operations: Neutron Status
• We have ~5 Neutron cells in production
- Neutron control plane in Liberty (fully HA)
- Bridge agent in Kilo (nova)
• And it is very stable
• All new cells will be Neutron cells
• “As mentioned, there is currently no way to cleanly migrate from nova-network to neutron.”
- All efforts to establish a general migration path have failed so far
- Should be OK for us, various options (incl. in-place, w/ migration, …)
36. Operations: Containers
• Magnum: OpenStack project to treat Container Orchestration Engines (COEs) as 1st class resources
• Pre-production service available
- Support for Docker Swarm, Kubernetes, Mesos
• Many users interested, usage ramping up
- GitLab CI, Jupyter/Swan, FTS, …
37. Operations: Federated Clouds
CERN Private Cloud (160K cores) federated with:
• Experiment trigger farms: ATLAS (28K cores), ALICE (9K cores), CMS (13K cores)
• Public clouds such as Rackspace or IBM
• INFN Italy, Brookhaven National Labs, NecTAR Australia, many others on their way
• Access for EduGAIN users via Horizon
- Allow (limited) access given appropriate membership
38. CPU Performance Issues
• Benchmark results on full-node VMs were about 20% lower than those of the underlying host
- Smaller VMs did much better
• Investigated various tuning options
- KSM*, EPT**, PAE, Pinning, … +hardware type dependencies
- Discrepancy down to ~10% between virtual and physical
• Comparison with Hyper-V: no general issue
- Loss w/o tuning ~3% (full-node), <1% for small VMs
- … NUMA-awareness!
*KSM on/off: beware of memory reclaim! **EPT on/off: beware of expensive page table walks!
39. CPU: NUMA
• NUMA-awareness identified as most
efficient setting
- Full node VMs have ~3% overhead in HS06
• “EPT-off” side-effect
- Small number of hosts, but very
visible there
• Use 2MB Huge Pages
- Keep the “EPT off” performance gain
with “EPT on”
• More details in this talk
40. Operations: NUMA/THP Roll-out
• Rolled out on ~2’000 batch hypervisors (~6’000 VMs)
- Huge Page allocation as boot parameter → reboot
- VM NUMA awareness as flavor metadata → delete/recreate
• Cell-by-cell (~200 hosts):
- Queue-reshuffle to minimize resource impact
- Draining & deletion of batch VMs
- Hypervisor reconfiguration (Puppet) & reboot
- Recreation of batch VMs
• Whole update took about 8 weeks
- Organized between batch and cloud teams
- No performance issue observed since
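The flavor metadata used in the roll-out maps to Nova extra specs. A minimal sketch: `hw:numa_nodes` and `hw:mem_page_size` are real Nova extra-spec keys, but the exact values and the helper below are illustrative, not CERN's configuration:

```python
# NUMA/huge-page settings as Nova flavor extra specs. The two keys are
# real Nova extra specs; the values and helper are illustrative.
FULL_NODE_EXTRA_SPECS = {
    "hw:numa_nodes": "2",       # expose both host NUMA nodes to the guest
    "hw:mem_page_size": "2MB",  # back guest RAM with 2MB huge pages
}

def numa_nodes(extra_specs):
    """Return the requested guest NUMA node count (Nova default: 1)."""
    return int(extra_specs.get("hw:numa_nodes", "1"))
```

Because such metadata is baked into the guest definition at boot, changing it for existing VMs requires the delete/recreate cycle described above.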
41. Future Plans
• Investigate Ironic (Bare metal provisioning)
- OpenStack as one interface for compute resource provisioning
- Allow for complete accounting
- Use physical machines for containers
• Replace Hyper-V by qemu/kvm
- Windows expertise is a scarce resource in our team
- Reduce complexity in service setup
42. Summary
• OpenStack has been in production at CERN for 3 years
- We’re working closely with the various communities
- OpenStack, Ceph, RDO, Puppet, …
• Cloud service continues to grow and mature
- While experimental, good experience with Nova cells for scaling
- Experience gained helps with general resource provisioning
- New features added (containers, identity federation)
- Expansion planned (bare metal provisioning)
• Confronting some major operational challenges
- Transparent retirement of service hosts
- Replacement of network layer
• http://openstack-in-production.blogspot.com
(read about our recent 2M req/s Magnum & Kubernetes!)