OpenStack is a widely used open-source cloud computing platform. It allows computing, storage, and networking resources to be provisioned on demand, in a manner similar to public cloud services like Amazon Web Services. The presentation discusses OpenStack's architecture, current uses, development status, and relationship to high-performance computing. It also covers how Argonne National Laboratory uses OpenStack and potential future directions, such as more native support for HPC workloads and integrated application platforms.
OpenStack and Red Hat: How we learned to adapt with our customers in a maturi... - OpenStack
Audience Level
All levels
Synopsis
Peter has been involved in the OpenStack community since its B-release, enabling and helping customers across various industries adopt OpenStack in strategic ways. In this session, you will learn from his experience Red Hat's perspective on the current state of affairs in the OpenStack community and the path ahead that Red Hat is putting its efforts into. OpenStack is not a product that tries to solve any one business problem in particular, but a technology that aims to be usable by many: so what are the required steps to make sure that your organisation is ready for OpenStack-based cloudification and transformation?
Speaker Bio:
Peter Jung is a Senior Business Development Manager at Red Hat where he leads the practice in the areas of Cloud, SDN/NFV and IoT across Australia and New Zealand. He is passionate about open innovation and open source software development model as the foundation for next generation society and ICT systems. Prior to Red Hat, he had various roles at Cisco and Dell for 15 years. He holds a BSEE and an MBA.
OpenStack Australia Day Melbourne 2017
https://events.aptira.com/openstack-australia-day-melbourne-2017/
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ... - OpenStack
Audience Level
Intermediate
Synopsis
M3 is the latest generation system of the MASSIVE project, an HPC facility specializing in characterization science (imaging and visualization). Using OpenStack as the compute provisioning layer, M3 is a hybrid HPC/cloud system, custom-integrated by Monash’s R@CMon Research Cloud team. Built to support Monash University’s next-gen high-throughput instrument processing requirements, M3 is split roughly half and half between GPU-accelerated and CPU-only nodes.
We’ll discuss the design and technology used to build this innovative platform, and detail the approaches and challenges of building GPU-enabled and HPC clouds. We’ll also discuss some of the software and processing pipelines that this system supports, and highlight the importance of tuning for these workloads.
Speaker Bio
Blair Bethwaite: Blair has worked in distributed computing at Monash University for 10 years, with OpenStack for half of that. Having served as team lead, architect, administrator, user, researcher, and occasional hacker, Blair’s unique perspective as a science power-user, developer, and system architect has helped guide the evolution of the research computing engine central to Monash’s 21st Century Microscope.
Lance Wilson: Lance is a mechanical engineer, who has been making tools to break things for the last 20 years. His career has moved through a number of engineering subdisciplines from manufacturing to bioengineering. Now he supports the national characterisation research community in Melbourne, Australia using OpenStack to create HPC systems solving problems too large for your laptop.
The Future of Cloud Software Defined Storage with Ceph: Andrew Hatfield, Red Hat - OpenStack
Audience: Intermediate
About: Learn how cloud storage differs from traditional storage systems and how that delivers revolutionary benefits.
Starting with an overview of how Ceph integrates tightly into OpenStack, you’ll see why 62% of OpenStack users choose Ceph. We’ll then take a peek into the very near future to see how rapidly Ceph is advancing, and how you’ll be able to achieve all your childhood hopes and dreams in ways you never thought possible.
Speaker Bio: Andrew Hatfield, Practice Lead, Cloud Storage and Big Data, Red Hat
Andrew has over 20 years' experience in the IT industry across APAC, specialising in Databases, Directory Systems, Groupware, Virtualisation and Storage for Enterprise and Government organisations. When not helping customers slash costs and increase agility by moving to the software-defined storage future, he’s enjoying the subtle tones of Islay whisky and shredding pow pow at the world’s best snowboard resorts.
OpenStack Australia Day - Sydney 2016
https://events.aptira.com/openstack-australia-day-sydney-2016/
Scott Callaghan from the Southern California Earthquake Center presented this deck in a recent Blue Waters Webinar.
"I will present an overview of scientific workflows. I'll discuss what the community means by "workflows" and what elements make up a workflow. We'll talk about common problems that users might be facing, such as automation, job management, data staging, resource provisioning, and provenance tracking, and explain how workflow tools can help address these challenges. I'll present a brief example from my own work with a series of seismic codes showing how using workflow tools can improve scientific applications. I'll finish with an overview of high-level workflow concepts, with an aim to preparing users to get the most out of discussions of specific workflow tools and identify which tools would be best for them."
Watch the video: http://wp.me/p3RLHQ-gtH
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
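The workflow concepts Callaghan describes (automation, job management, data staging, provenance tracking) can be sketched in a few lines. The toy pipeline below is purely illustrative and is not taken from any specific workflow tool; the task names echo a generic seismic pipeline, not his actual codes:

```python
# Minimal sketch of a workflow engine: tasks declare dependencies,
# and the engine runs them in topological order while recording
# what ran (a crude form of provenance tracking).

def run_workflow(tasks, deps):
    """tasks: {name: callable}; deps: {name: [prerequisite names]}.
    Returns the order in which tasks were executed."""
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for pre in deps.get(name, []):   # run prerequisites first
            run(pre)
        tasks[name]()                    # job management: execute the task
        done.add(name)
        order.append(name)               # provenance: record completion

    for name in tasks:
        run(name)
    return order

# Illustrative pipeline: mesh -> simulate -> postprocess
log = []
tasks = {
    "mesh": lambda: log.append("mesh"),
    "simulate": lambda: log.append("simulate"),
    "postprocess": lambda: log.append("postprocess"),
}
deps = {"simulate": ["mesh"], "postprocess": ["simulate"]}
order = run_workflow(tasks, deps)
```

Real workflow systems add exactly what this sketch lacks: retries, remote job submission, and data staging between steps.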
ClusterVision specialises in high performance computing (HPC) solutions. We design, build, manage, and support supercomputers.
High performance computing accelerates scientific discovery. It is with this in mind that ClusterVision provides state-of-the-art, fully tailored HPC clusters to researchers and innovators all over Europe. With in-house software development and a large team of technical specialists (60% technical staff), we work to not just participate in HPC, but to advance the technology behind it.
By providing customised solutions accompanied by an exhaustive set of services and trainings, we ensure that our customers can take maximum advantage of HPC technology in order to expand knowledge and advance their respective fields of study.
Modern data lakes are now built on cloud storage, helping organizations leverage the scale and economics of object storage while simplifying the overall data storage and analysis flow.
In this deck, Gilad Shainer from Mellanox announces the world’s first HDR 200Gb/s data center interconnect solutions. "These 200Gb/s HDR InfiniBand solutions maintain Mellanox’s generation-ahead leadership while enabling customers and users to leverage an open, standards-based technology that maximizes application performance and scalability while minimizing overall data center total cost of ownership. Mellanox 200Gb/s HDR solutions will become generally available in 2017."
Watch the video presentation: http://insidehpc.com/2016/11/hdr-infiniband/
The CMS openstack, opportunistic, overlay, online-cluster Cloud (CMSooooCloud) - Jose Antonio Coarasa Perez
The CMS online cluster consists of more than 3,000 computers. It has been used exclusively for the data acquisition of the CMS experiment at CERN, archiving around 20 TB of data per day.
An OpenStack cloud layer has been deployed on part of the cluster (totalling more than 13,000 cores) as a minimal overlay, so as to leave the primary role of the computers untouched while allowing opportunistic usage of the cluster. This allows offline computing jobs to run on the online infrastructure while it is not (fully) used.
We will present the architectural choices made to deploy an unusual, as opposed to dedicated, "overlaid cloud infrastructure". These architectural choices ensured minimal impact on the running cluster configuration while giving maximal segregation of the overlaid virtual computer infrastructure. Open vSwitch was chosen during the proof-of-concept phase in order to avoid changes to the network infrastructure; its use will be illustrated, as well as the final networking configuration used. The design and performance of the OpenStack cloud controlling layer will also be presented, together with new developments and experience from the first year of usage.
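The opportunistic-usage idea described above, which admits offline jobs only onto nodes whose primary workload is idle, can be sketched as follows. This is purely illustrative, not CMS code; the load threshold, node names, and job names are all assumptions:

```python
# Illustrative opportunistic scheduler: offline jobs are placed only on
# nodes whose primary (data-acquisition) load is below a threshold, so
# the cluster's primary role is never disturbed.

def schedule_opportunistic(nodes, jobs, max_load=0.2):
    """nodes: {name: current primary load in 0..1}; jobs: job names.
    Returns {job: node} placements on lightly loaded nodes only."""
    free = [n for n, load in sorted(nodes.items()) if load <= max_load]
    return dict(zip(jobs, free))  # jobs without a free node simply wait

placements = schedule_opportunistic(
    {"node1": 0.05, "node2": 0.9, "node3": 0.1},
    ["reco-job-a", "reco-job-b", "reco-job-c"],
)
# node2 is busy with data acquisition, so only two jobs are placed
```

A real overlay such as the one described would also preempt or suspend opportunistic VMs when the primary workload returns.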
Running Distributed TensorFlow with GPUs on Mesos with DC/OS - Mesosphere Inc.
Running distributed TensorFlow is challenging, especially if you want to train large models on your own infrastructure. In this talk, Kevin Klues presents an open source TensorFlow framework for distributed training on DC/OS. This framework takes the pain out of deploying distributed TensorFlow, so you can spend less time worrying about your deployment strategy and more time building out your model.
Speaker Bio:
Kevin Klues is an Engineering Manager at Mesosphere, where he leads the DC/OS Cluster Operations team. Prior to joining Mesosphere, Kevin worked at Google on an experimental operating system for data centers called Akaros. He and a few others founded the Akaros project while working on their Ph.D.s at UC Berkeley. In a past life, Kevin was a lead developer of the TinyOS project, working at Stanford University, the Technical University of Berlin, and the CSIRO in Australia. When not working, you can usually find Kevin on a snowboard or up in the mountains in some capacity or another.
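Distributed TensorFlow workers such as the ones a framework like this launches need to discover one another through a cluster spec; one common mechanism is the TF_CONFIG environment variable. The sketch below builds such a spec with the standard library only (no TensorFlow import is needed for this part); the host addresses are placeholders:

```python
# Build a TF_CONFIG value describing a two-worker, one-parameter-server
# cluster. Each process in the cluster gets the same "cluster" section
# but its own "task" entry identifying its role and index.
import json
import os

def make_tf_config(workers, ps, task_type, task_index):
    return json.dumps({
        "cluster": {"worker": workers, "ps": ps},
        "task": {"type": task_type, "index": task_index},
    })

# Placeholder addresses - a deployment tool would fill these in per node.
os.environ["TF_CONFIG"] = make_tf_config(
    workers=["10.0.0.1:2222", "10.0.0.2:2222"],
    ps=["10.0.0.3:2222"],
    task_type="worker",
    task_index=0,
)
```

Automating exactly this kind of per-task wiring is the pain that frameworks like the one in the talk remove.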
In this video from the HPC User Forum in Santa Fe, Yoonho Park from IBM presents: IBM Datacentric Servers & OpenPOWER.
"Big data analytics, machine learning and deep learning are among the most rapidly growing workloads in the data center. These workloads have the compute performance requirements of traditional technical computing or high performance computing, coupled with a much larger volume and velocity of data."
Watch the video: http://wp.me/p3RLHQ-gJv
Learn more: https://openpowerfoundation.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Accelerating Data Computation on Ceph Objects - Alluxio, Inc.
Alluxio Global Online Meetup
November 10, 2020
For more Alluxio events: https://www.alluxio.io/events/
Speaker(s):
Leonardo Militano, ZHAW
In most distributed storage systems, data nodes are decoupled from compute nodes. This is motivated by improved cost efficiency, better storage utilization, and independent scalability of computation and storage. While these benefits are real, there are several situations where moving computation close to the data brings important advantages. Whenever the stored data is to be processed for analytics, all of it must be repeatedly moved from the storage cluster to the compute cluster, which reduces performance.
In this talk, we will present how, using Alluxio, computation and storage ecosystems can interact more closely, benefiting from the "bringing the data close to the code" approach. Moving away from the complete disaggregation of computation and storage, data locality can enhance computation performance. We will present our observations and test results, which show significant improvements when accelerating Spark data analytics on Ceph object storage using Alluxio.
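As a hypothetical illustration of pointing jobs at a cache tier instead of the object store directly: suppose Ceph objects normally read via s3a:// URIs are remapped onto an Alluxio namespace. The master address and mount layout below are assumptions for the sketch, not Alluxio's actual API:

```python
# Rewrite an object-store URI to go through an (assumed) Alluxio mount,
# so repeated reads of the same object hit the cache near compute
# instead of crossing to the storage cluster every time.

def to_alluxio_uri(s3_uri, master="alluxio-master:19998"):
    """Map s3a://bucket/key -> alluxio://<master>/bucket/key.
    Assumes the bucket is mounted at the root of the Alluxio namespace."""
    prefix = "s3a://"
    assert s3_uri.startswith(prefix), "only s3a:// URIs handled here"
    return f"alluxio://{master}/{s3_uri[len(prefix):]}"

uri = to_alluxio_uri("s3a://ceph-bucket/events/part-0001.parquet")
```

In a Spark job, the rewritten URI would simply be passed to the reader in place of the original path; the compute code itself is unchanged, which is the point of the approach.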
Webinar: OpenEBS - Still Free and now FASTEST Kubernetes storage - MayaData Inc
Webinar Session - https://youtu.be/_5MfGMf8PG4
In this webinar, we share how the Container Attached Storage pattern makes performance tuning more tractable, by giving each workload its own storage system, thereby decreasing the variables needed to understand and tune performance.
We then introduce MayaStor, a breakthrough in the use of containers and Kubernetes as a data plane. MayaStor is the first containerized data engine available that delivers near the theoretical maximum performance of underlying systems. MayaStor performance scales with the underlying hardware and has been shown, for example, to deliver in excess of 10 million IOPS in a particular environment.
Accelerate Analytics and ML in the Hybrid Cloud Era - Alluxio, Inc.
Alluxio Community Office Hour
February 23, 2021
For more Alluxio events: https://www.alluxio.io/events/
Speaker(s):
Alex Ma, Alluxio
Peter Behrakis, Alluxio
Many companies we talk to have on-premises data lakes and use the cloud(s) to burst compute. Many are now establishing new object data lakes as well. As a result, analytics such as Hive, Spark, and Presto, as well as machine learning workloads, experience sluggish response times when data and compute sit in multiple locations. We also know there is an immense and growing data management burden to support these workflows.
In this talk, we will walk through what Alluxio’s Data Orchestration for the hybrid cloud era is and how it solves the performance and data management challenges we see.
In this tech talk, we'll go over:
- What is Alluxio Data Orchestration?
- How does it work?
- Alluxio customer results
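The caching idea behind data orchestration (the first read of remote data pays the cross-location cost, repeats are served near compute) can be illustrated with a toy read-through cache. This is a sketch of the concept only, not Alluxio's implementation:

```python
# Toy read-through cache: misses fetch from the remote store and
# populate a local tier; subsequent reads are served locally, so the
# remote store is contacted once per object.

class ReadThroughCache:
    def __init__(self, remote_fetch):
        self.remote_fetch = remote_fetch   # e.g. an object-store GET
        self.local = {}                    # the "near-compute" tier
        self.remote_reads = 0              # count of cross-location reads

    def read(self, key):
        if key not in self.local:
            self.remote_reads += 1
            self.local[key] = self.remote_fetch(key)
        return self.local[key]

# Illustrative remote store standing in for an on-prem data lake.
store = {"s3://bucket/a": b"payload"}
cache = ReadThroughCache(lambda k: store[k])
cache.read("s3://bucket/a")
cache.read("s3://bucket/a")   # second read served from the local tier
```

Real data orchestration layers add eviction, tiering across memory/SSD/disk, and policy-driven preloading on top of this basic pattern.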
JCConf 2016 - Cloud Computing Applications - Hazelcast, Spark and Ignite - Joseph Kuo
This session covers building applications that run on distributed, scalable systems: what we know as cloud computing systems. We will introduce not only the basics of Hazelcast but also its deeper internals, and show how it works with Spark, the best-known MapReduce-style library. Furthermore, we will introduce another in-memory cache, Apache Ignite, and compare it with Hazelcast to see how they differ. Finally, we will give a demonstration showing how Hazelcast and Spark work well together to form a cloud-based service that is distributed, flexible, reliable, available, scalable and stable. You can find the demo code here: https://github.com/CyberJos/jcconf2016-hazelcast-spark
https://cyberjos.blog/java/seminar/jcconf-2016-cloud-computing-applications-hazelcast-spark-and-ignite/
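How a data grid like Hazelcast spreads map entries across cluster members can be sketched with consistent key-to-partition hashing. Hazelcast's default is 271 partitions; it actually uses Murmur3 hashing and a more involved ownership table, so the MD5-based hash and member names below are illustrative stand-ins:

```python
# Sketch of data-grid partitioning: each key hashes to a fixed partition,
# and each partition is owned by one member, so any client can compute
# where an entry lives without asking a central coordinator.
import hashlib

PARTITIONS = 271  # Hazelcast's default partition count

def partition_of(key):
    # Illustrative hash; Hazelcast really uses Murmur3.
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % PARTITIONS

def owner_of(key, members):
    # Simplified ownership: round-robin partitions over members.
    return members[partition_of(key) % len(members)]

members = ["member-a", "member-b", "member-c"]
owner = owner_of("user:42", members)
```

Because the mapping is deterministic, every member and client agrees on where "user:42" lives, which is what makes distributed map gets and puts a single network hop.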
Data Con LA 2022 - Open Source or Open Core in Your Data Layer? What Needs to B... - Data Con LA
Anil Inamdar, VP & Head of Data Solutions, Instaclustr
Most organizations considering open source and open core cloud technologies as part of their all-important data stack understand they need to rigorously evaluate the software's licensing terms and gauge the long-term health of its community and ecosystem. What still happens less frequently, but is just as crucial to these risk assessments, is developing a thorough understanding of the business models governing the commercial organizations attached to each data-layer technology being considered. You must discern the underlying motivations of the vendors or technology providers you depend on to deliver or support open source data-layer software (as well as those vendors with strong influence over its development and maintenance). By understanding these incentives, you can identify if, where, and how they may map to risks for your enterprise's adoption and ongoing open source implementation. Don't limit the assessment to licenses and community health, although both remain key variables.
This session will discuss specifics on what you need to look for and consider when vetting open source data technologies in the cloud as offered by:
-- Businesses using OSS as the foundation of their own intellectual property
-- Businesses that maintain total control over the OSS they offer
-- Major cloud providers
OCCIware presentation at EclipseDay in Lyon, November 2017, by Marc Dutoo, Smile - OCCIware
Presentation title: Model and pilot all cloud layers with OCCIware, from IoT to Big Data
Abstract: Who uses multi-cloud today? Everybody. Alas, this leads to a lot of "technical glue". Enter OCCIware's Studio and Runtime: manage all layers and domains of the cloud (XaaS) in a uniform, standard, extensible way, as the cloud consumer platform.
This talk presents how the OCCIware Studio, currently being contributed to the Eclipse Foundation by Inria and Obeo, takes advantage of Eclipse Modeling and Sirius to support a metamodel for the generic Open Cloud Computing Interface (OCCI) REST API and build a "studio factory", while providing feedback and lessons learned on various other Eclipse components.
It concludes on a live demonstration of using it to model and pilot an IoT (nodeMCU/ESP8266), Linked & Big Data (JSON-LD, Spark), containerized Cloud solution to let electricity consumption be monitored across territories by all actors - individuals, utility providers, up to regional public bodies.
Choosing PaaS: Cisco and Open Source Options: an overview (Cisco DevNet)
A session in the DevNet Zone at Cisco Live, Berlin. Confused by all the open source PaaS options out there? What criteria should you use to evaluate them? We seek to answer these questions in a systematic manner and will explore top technologies such as Mesos, Apprenda, Cloud Foundry and Kubernetes along with Cisco's Project Shipped and open source Mantl. The aim of this session will be to shed light on which platforms add value to your needs, applications and workloads.
There is no doubt that Openstack represents a massive industry alignment towards the open-source cloud, with some even touting it to be the Linux of cloud computing. But is it "THE" perfect solution?
Vanilla Openstack is a “Myth”
The choice of Openstack as part of your cloud strategy purely depends on the kind of workload and the add-on features.
Openstack can be a serious contender, especially for fresh deployments and applications being architected for cloud. But as the environment gets more diverse (legacy integrations), Openstack can be tricky to integrate and maintain.
One might require a vendor-based cloud management platform, especially when the cloud strategy involves public clouds (AWS, Azure, GCE) and migration of application services across them.
No doubt it is fully open source, but it comes with a learning curve, release cycles, vendor-specific driver integrations, etc.
Interesting developments around containers (Docker, Kubernetes, Mesosphere, etc.) will challenge Openstack.
Openstack will no doubt grow more mature over the next couple of years; until then, the hunt for the CMP continues...
Topics of interest:
to build a true hyper-converged cloud?
as an enterprise cloud management platform?
public cloud? (as a CSP)
Telco carrier-grade cloud?
VNF, MANO and SDN integrations
OpenStack Preso: DevOps on Hybrid Infrastructure (rhirschfeld)
Discusses the approach for making hybrid DevOps workable, including what obstacles must be overcome. Includes a demo of multiple OpenStack clouds and a Kubernetes deployment on AWS, Google and OpenStack.
Peanut Butter and Jelly: Mapping the Deep Integration between Ceph and OpenStack (Sean Cohen)
Ceph is the most widely deployed storage technology used with OpenStack, most often because it's an open source, massively scalable, unified software-defined storage solution. Its popularity is also due to its unique and optimized technical integration with the OpenStack services and its pure-software approach to scaling. In this session, we'll review how Ceph is integrated into Nova, Glance, Keystone, Cinder, and Manila and demonstrate why using traditional storage products won’t give you the full benefits of an elastic cloud infrastructure. We’ll also cover the flexible deployment options, available through Red Hat Enterprise Linux OpenStack Platform and Red Hat Ceph Storage, for seamless operations and key scenarios like disaster recovery. We'll discuss architectural options for deploying a multisite OpenStack cluster and cover the varying levels of maturity in the OpenStack services for configuring multisite. This session will also show how other technologies are using OpenStack Ceph to increase performance and reduce power consumption, such as Intel SSDs. This will include reference architectures and best practices for Ceph and SSDs.
We have the Bricks to Build Cloud-native Cathedrals - But do we have the mortar? (Nane Kratzke)
This is some input for a panel discussion about "Challenges of Cloud Computing-based Systems" that I am attending at the 9th International Conference on Cloud Computing, GRIDs, and Virtualization (CLOUD COMPUTING 2018) in Barcelona, Spain in February 2018.
Cloud-native applications (CNA) are built more and more often according to microservice and independent system architecture (ISA) approaches. ISA involves two architecture layers: the macro and the micro architecture layer. Software engineering outcomes on the micro layer are often distributed in a standardized form as self-contained deployment units (so-called container images). There exist plenty of programming languages to implement these units: Java, C, C++, JavaScript, Python, R, PHP, Ruby, ... (this list is almost endless). But on the macro layer, one might mention TOSCA and little more. TOSCA is an OASIS deployment and orchestration standard language to describe a topology of cloud-based web services, their components, relationships, and the processes that manage them. This works for static deployments. However, CNA are elastic and self-adaptive - almost the exact opposite of what can be defined efficiently using TOSCA. For these kinds of scenarios one might mention Kubernetes or Docker Swarm as container orchestrators, which are intentionally built to operate elastic services formed of containers. But these operating platforms do not provide expressive and pragmatic programming languages covering the macro layer of cloud-native applications.
So it seems there is a gap and the question arises, whether we need further (and what kind of) macro layer languages for CNA?
OpenStack & the Evolving Cloud Ecosystem (Mark Voelker)
OpenStack has come a long way since 2010. What started as a collaboration on compute and storage between NASA and Rackspace has changed dramatically and grown into a large, successful open source project that meets the needs of thousands of organizations. But OpenStack hasn’t evolved in a vacuum over the past seven years: the technology landscape around it has been changing as well. Join VMware’s chief OpenStack architect and longtime community member Mark Voelker for a look at the new technology landscape around OpenStack, how we got here, and where we might go next. We’ll discuss how what started as an IaaS platform ended up being a winning platform for Network Functions Virtualization and telco applications, how OpenStack came to be selected as a common underpinning for container orchestration systems like Kubernetes, how OpenStack governance influenced other open source communities, and how OpenStack changed the way companies looked at Open Source. We’ll consider the role IaaS might play in a future that includes options like functions-as-a-service, containers, and the internet of things. We’ll consider OpenStack as a common foundation for a variety of new technologies, and discuss OpenStack’s lasting impact in the cloud ecosystem. We’ll also discuss how OpenStack is changing and adapting to shifts in the technology landscape, both as an open source community and in terms of product offerings. Learn about new interoperability programs targeted at use cases that didn’t exist seven years ago, and new initiatives from the OpenStack technical community and Foundation.
Similar to At the Crossroads of HPC and Cloud Computing with Openstack (20)
Navigating the Metaverse: A Journey into Virtual Evolution (Donna Lenk)
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms.
Large Language Models and the End of Programming (Matt Welsh)
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
Cyaniclab: Software Development Agency Portfolio.pdf (Cyanic lab)
CyanicLab, an offshore custom software development company based in Sweden, India, and Finland, is your go-to partner for startup development and innovative web design solutions. Our expert team specializes in crafting cutting-edge software tailored to meet the unique needs of startups and established enterprises alike. From conceptualization to execution, we offer comprehensive services including web and mobile app development, UI/UX design, and ongoing software maintenance. Ready to elevate your business? Contact CyanicLab today and let us propel your vision to success with our top-notch IT solutions.
A Comprehensive Look at Generative AI in Retail App Testing.pdf (kalichargn70th171)
Traditional software testing methods are being challenged in retail, where customer expectations and technological advancements continually shape the landscape. Enter generative AI—a transformative subset of artificial intelligence technologies poised to revolutionize software testing.
How to Position Your Globus Data Portal for Success: Ten Good Practices (Globus)
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
top nidhi software solution free download (vrstrong314)
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart... (Globus)
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis operations to the large ESGF data archives, transferring only the resultant analysis (e.g. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR (Tier1 app)
Even though at the surface level ‘java.lang.OutOfMemoryError’ appears as one single error, underneath there are 9 types of OutOfMemoryError. Each type of OutOfMemoryError has different causes, diagnosis approaches and solutions. This session equips you with the knowledge, tools, and techniques needed to troubleshoot and conquer OutOfMemoryError in all its forms, ensuring smoother, more efficient Java applications.
May Marketo Masterclass, London MUG May 22 2024.pdf (Adele Miller)
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Quarkus Hidden and Forbidden Extensions (Max Andersen)
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Accelerate Enterprise Software Engineering with Platformless (WSO2)
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
Understanding Globus Data Transfers with NetSage (Globus)
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks world wide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv... (Shahin Sheidaei)
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Providing Globus Services to Users of JASMIN for Environmental Data Analysis (Globus)
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
How Recreation Management Software Can Streamline Your Operations.pptx (wottaspaceseo)
Recreation management software streamlines operations by automating key tasks such as scheduling, registration, and payment processing, reducing manual workload and errors. It provides centralized management of facilities, classes, and events, ensuring efficient resource allocation and facility usage. The software offers user-friendly online portals for easy access to bookings and program information, enhancing customer experience. Real-time reporting and data analytics deliver insights into attendance and preferences, aiding in strategic decision-making. Additionally, effective communication tools keep participants and staff informed with timely updates. Overall, recreation management software enhances efficiency, improves service delivery, and boosts customer satisfaction.
Enhancing Research Orchestration Capabilities at ORNL.pdf (Globus)
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
At the Crossroads of HPC and Cloud Computing with Openstack
1. At the Crossroads of HPC and Cloud Computing with Openstack
Presenter: Ryan M. Aydelott of Argonne National Labs
Questions that will be addressed:
• Why is Openstack important?
• What is Openstack currently used for?
• Where is Openstack at in its development cycle?
• What does the Openstack architecture/model look like?
• What should you know about Openstack?
• What does Openstack have to do with HPC?
• What will Openstack look like tomorrow?
2. Ryan Aydelott - CELS
Outline
‣ My background
‣ Cloud - what is happening?
‣ The future of programmable stacks
‣ The human factor
‣ Openstack architecture overview
‣ The state of Openstack today
‣ How does Argonne use Openstack?
‣ The future
‣ Discussion/Q&A
3. Ryan Aydelott - CELS
My Background
‣ Born in the 70’s, took many things apart - put most of them back together.
‣ Started connecting at 300 baud (3.24 MB/day), doing Internet things ~1990
‣ First business was shell account/email access for those along the I-88 R&D Corridor, which blossomed into a full-menu ISP.
‣ Left the ISP Business in 1999 and went to work for Lucent, followed by a string of
various employers/startups doing interesting things.
‣ Currently working on various alternative corporate structures, including an
incubator/co-living house on Chicago’s west side.
5. Ryan Aydelott - CELS
The future of programmable stacks
‣ If the infrastructure is programmable, we won’t even consider the *aaS acronym relevant any longer.
‣ Everything looks like code. Infrastructure descriptions can be checked into your source code repo in
tandem with your application. This is great news for developers, interesting news for traditional
sysadmins.
‣ Docker and Vagrant show us how to have textual, executable, repeatable descriptions of platforms with
people providing recipes as part of an ecosystem. Those recipes will come to include clusters of servers
and network topology.
‣ The tools are nascent (think Mesos, Kubernetes, etc.), but these will eventually become the visible face of what we call “Cloud” today.
- http://en.wikipedia.org/wiki/Cloud_computing#Service_models
- https://www.docker.com/
- https://www.vagrantup.com/
- http://kubernetes.io/
- http://mesos.apache.org/
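The "infrastructure descriptions checked into your repo" idea boils down to a declarative spec plus a reconcile step that computes what must change. A minimal Python sketch, with an invented format rather than any particular tool's syntax:

```python
# Desired state: the kind of artifact that lives in version control
# right next to the application code.
desired = {
    "web": {"count": 3, "image": "ubuntu-14.04"},
    "db":  {"count": 1, "image": "ubuntu-14.04"},
}

# Actual state: what is running right now.
actual = {
    "web": {"count": 2, "image": "ubuntu-14.04"},
}

def plan(desired, actual):
    """Compute launch/terminate actions that converge actual onto desired."""
    actions = []
    for name, spec in desired.items():
        have = actual.get(name, {"count": 0})["count"]
        if have < spec["count"]:
            actions.append(("launch", name, spec["count"] - have))
        elif have > spec["count"]:
            actions.append(("terminate", name, have - spec["count"]))
    return actions

print(plan(desired, actual))  # [('launch', 'web', 1), ('launch', 'db', 1)]
```

Because the description is plain text, it can be diffed and reviewed like any other code change, which is exactly the workflow shift the slide describes.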
6. Ryan Aydelott - CELS
The Human Factor
Sysadmin c.1980-2010
Basic Programming Skills (one off administrator task automation, etc)
Relatively experienced with hardware (RAID, systems architecture, etc)
Networking/Server responsibilities were usually different individuals
Typically responsible for running applications at the server level
Sysadmin c. 2011-
Experienced/Broad Developer (using automation frameworks such as Chef, Saltstack, etc)
Very little hardware expertise (thinks in terms of availability zones)
Deploys networks via SDN frameworks to match the application profile
Responsible for making sure that applications are running well on the platform
7. Ryan Aydelott - CELS
Why is Openstack Important?
‣ Openstack is the single most developed and widely used open source cloud
platform today.
‣ Openstack is a project that has reached a level of adoption that will enable it to continue to receive considerable development contributions for the foreseeable future.
‣ Openstack’s architecture is modular in nature which allows ease of integration and
customization specific to user environments.
9. Ryan Aydelott - CELS
Openstack Component Description
‣ Dashboard ("Horizon") provides a front end Dashboard to other Openstack services
‣ Compute ("Nova") provisions and manages virtual machines. It boots instances from virtual disks ("images") whose metadata is managed by Image ("Glance"); images can be stored on shared storage such as Gluster, Ceph, NFS, etc. (enabling live migration) or on local disk.
‣ Network ("Neutron") provides virtual networking for Compute. This can run on a single node or, in later versions, across multiple hosts.
‣ Block Storage ("Cinder") provides storage volumes for Compute, which can be backed by files on ZFS, LVM, etc. and exported most commonly via iSCSI (Gluster/Ceph supported as well).
‣ Image ("Glance") can store the actual virtual disk files in the Object Store(“Swift”), local disk, Ceph,
Gluster, etc.
‣ All the services authenticate with Identity (“Keystone")
‣ A shared message bus (RabbitMQ, Qpid, or 0MQ) glues it all together.
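The bus's "glue" role can be sketched with a toy in-process queue standing in for RabbitMQ/Qpid/0MQ; the service and method names below are simplified for illustration, not the real RPC API:

```python
import queue

# Toy stand-in for the shared message bus that glues the services together.
bus = queue.Queue()

def api_publish_boot(name):
    """'nova-api' side: put an RPC-style request on the bus."""
    bus.put({"method": "boot_instance", "args": {"name": name}})

def compute_consume():
    """'nova-compute' side: take one message off the bus and act on it."""
    msg = bus.get()
    if msg["method"] == "boot_instance":
        return "booted " + msg["args"]["name"]

api_publish_boot("vm-1")
print(compute_consume())  # booted vm-1
```

Decoupling producer from consumer this way is what lets each Openstack service scale and fail independently.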
10. Ryan Aydelott - CELS
Openstack Logical Usage
‣ End users can interact through a common web interface (Horizon) or directly with each service through its API.
‣ All services authenticate through a common source (facilitated through keystone)
‣ Individual services interact with each other through their public APIs (except where
privileged administrator commands are necessary)
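As an illustration of the common authentication step, this builds the JSON body a client would POST to Keystone's v3 /auth/tokens endpoint (deployments of this deck's vintage may still have used v2.0; the user and project names are made up):

```python
def keystone_password_auth(username, password, project):
    """Build the request body for a Keystone v3 token request
    (POST /v3/auth/tokens); the issued token comes back in the
    X-Subject-Token response header."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"id": "default"},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {
                    "name": project,
                    "domain": {"id": "default"},
                }
            },
        }
    }

body = keystone_password_auth("demo", "secret", "demo-project")
print(body["auth"]["identity"]["methods"])  # ['password']
```

Every service then validates that token with Keystone, which is what makes the single sign-on across Nova, Glance, Cinder, etc. work.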
11. Ryan Aydelott - CELS
What is Openstack Used for?
‣ Provide computing services for resale to end users by service operators
‣ Provide internal enterprise computing support as either an alternative or in addition
to existing computing resources
‣ Provide computing resources to research environments across a variety of
disciplines, including research of cloud on cloud computing
‣ Hobbyists/tinkerers who are interested in learning about/using the software stack (keep in mind this includes professional hobbyists tasked with exploring Openstack by their employer)
12. Ryan Aydelott - CELS
Where is Openstack from the standpoint of maturity?
‣ The prevailing opinion at the last Openstack conference in Paris (November 2014) was that Openstack is finally ready to enter its maturity phase. This translates to fewer new features and a focus on robust, stable code.
‣ Openstack is still on a 6-month major release cycle (moved from a 3-month cycle ~2 years ago); there were some murmurs of extending this even further.
‣ Ready to use/stable for most organizations without a dedicated developer,
provided you are not deploying the stack using non-standard configurations/
hardware.
‣ Many professional service organizations will now run/manage Openstack for you (however, many typically have their own release).
14. Ryan Aydelott - CELS
How does Argonne Use Openstack?
‣ Currently running some custom software for data analysis as well as standard stacks (such as Hadoop). Additionally, some web-facing application stacks are also running on the system.
‣ At 800 nodes, it was a large system at the time of its deployment (~2010); now, however, large systems begin at ~1,000 unique hardware entities, with many scaling over 10,000 nodes in service-provider environments.
‣ Our system is focusing on vertical performance in specific areas (networking,
memory/compute, storage) rather than overall system size.
‣ A major challenge today is provisioning hardware that allows us the most flexibility
via software. The difficulty lies in the very different nature of partitioning required
for different types of workloads.
15. Ryan Aydelott - CELS
HPC and Openstack
‣ Virtual clusters can elastically scale up/down based on demand (Heat Autoscaling)
‣ Not stuck with one distribution - end user can choose
‣ End users can install their own software using the distro package manager
‣ Bring your own runtime in the form of an image (Docker, etc)
‣ Excellent method to combine multiple smaller (1-2 Rack) HPC clusters into a
single managed entity to solve partitioning/underutilization problems
‣ Software that runs most workloads hasn’t fully taken advantage of this new
architecture (yet)
16. Ryan Aydelott - CELS
Challenges of HPC and Openstack
Network
‣ Commonly deployed Openstack network is built for features, not performance
‣ MPI performs poorly on common Openstack networks (tunnels, bridges, NAT, iptables)
Storage
‣ Presenting the same block device to multiple VMs is only just becoming available (needed for distributed filesystems)
‣ Shared-filesystem-as-a-service support is a relatively new feature
Future
‣ SR-IOV support and isolation features for InfiniBand are available
‣ SR-IOV / PCI passthrough is available in recent Openstack releases
‣ Ironic is an Openstack project that enables bare-metal performance across diverse systems
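For the PCI passthrough point, a sketch of the flavor side of the request: nova matches a flavor's `pci_passthrough:alias` extra spec against operator-defined device aliases in its configuration. The alias name `ib0` is invented for illustration.

```python
# A flavor requesting one PCI device matching the (operator-defined) alias "ib0".
flavor_extra_specs = {
    "pci_passthrough:alias": "ib0:1",
}

def parse_alias_request(spec):
    """Split an 'alias:count' request into its parts."""
    alias, count = spec.split(":")
    return alias, int(count)

print(parse_alias_request(flavor_extra_specs["pci_passthrough:alias"]))  # ('ib0', 1)
```

Instances booted from such a flavor land only on hosts that can supply the requested device, which is how an HPC cloud can hand InfiniBand HCAs or GPUs straight to guests.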
17. Ryan Aydelott - CELS
Why you should know Openstack
‣ The tools required to work effectively in this industry are changing dramatically to the point
that IT shops are completely retooling their organizational structure/processes.
‣ All future systems will iterate on a version of this architecture. This is as relevant now as learning about PCs was in the '80s.
‣ Openstack could be perceived as providing something akin to a low-level language for utility computing.
‣ In research (both corporate and public), tightly coupled compute clusters are losing ground to distributed systems.
‣ Budgets always prefer commodity curve computing at scale.
‣ If you have to customize or are trying to integrate hot-rod hardware, Openstack isn’t a bad
choice.
18. Ryan Aydelott - CELS
Openstack has opened up opportunities for
integrators
‣ New applications being deployed can be designed in ways that allow them to fully
utilize the operating environment (I call these architecturally aware applications)
‣ Many cluster management frameworks available today have an interface similar to
the one Openstack and other providers have.
‣ Hardware/Software integrators have a single platform agnostic integration point.
‣ This first group of problem solvers will pave the way for additional management
frameworks to be built on top of Openstack, further abstracting the underlying
infrastructure to the application.
- http://www.slideshare.net/hpcloud/openstack-integration-case-study-of-an-application-deployment-on-a-hybrid-cloud
- http://www.slideshare.net/uri1803/open-stack-bigdata
19. Ryan Aydelott - CELS
Building Openstack Aware Applications
‣ HEAT: https://wiki.openstack.org/wiki/Heat
‣ A Heat template describes the infrastructure for a cloud application in a text file that is readable and writable
by humans, and can be checked into version control, diffed, etc.
‣ Infrastructure resources that can be described include: servers, floating IPs, volumes, security groups, users, etc.
‣ Heat also provides an autoscaling service that integrates with Ceilometer, so you can include a scaling group
as a resource in a template.
‣ Templates can also specify the relationships between resources (e.g. this volume is connected to this server).
This enables Heat to call out to the OpenStack APIs to create all of your infrastructure in the correct order to
completely launch your application.
‣ Heat manages the whole lifecycle of the application - when you need to change your infrastructure, simply
modify the template and use it to update your existing stack. Heat knows how to make the necessary
changes. It will delete all of the resources when you are finished with the application, too.
‣ Heat primarily manages infrastructure, but the templates integrate well with software configuration
management tools such as Puppet and Chef. The Heat team is working on providing even better integration
between infrastructure and software.
- https://www.openstack.org/summit/openstack-paris-summit-2014/session-videos/presentation/
deploying-and-auto-scaling-applications-on-openstack-with-heat
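The resource/relationship mechanics can be sketched with a HOT-style template held as a Python dict (in practice it is written in YAML); the server and volume names are illustrative:

```python
# A minimal HOT-style template: a server, a volume, and the attachment
# that relates them. Heat derives creation order from get_resource references.
template = {
    "heat_template_version": "2013-05-23",
    "resources": {
        "server": {
            "type": "OS::Nova::Server",
            "properties": {"image": "fedora-20", "flavor": "m1.small"},
        },
        "data_volume": {
            "type": "OS::Cinder::Volume",
            "properties": {"size": 10},
        },
        "attachment": {
            "type": "OS::Cinder::VolumeAttachment",
            "properties": {
                "instance_uuid": {"get_resource": "server"},
                "volume_id": {"get_resource": "data_volume"},
            },
        },
    },
}

def references(resource):
    """Collect the get_resource references Heat uses to order creation."""
    refs = []
    def walk(node):
        if isinstance(node, dict):
            for key, value in node.items():
                if key == "get_resource":
                    refs.append(value)
                else:
                    walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)
    walk(resource)
    return refs

print(references(template["resources"]["attachment"]))  # ['server', 'data_volume']
```

Because the attachment references both the server and the volume, Heat knows to create those two resources first - the "correct order" the slide mentions.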
20. Ryan Aydelott - CELS
Trove - Opensource Database as a Service
‣ https://wiki.openstack.org/wiki/Trove
‣ The goal is to provide a scalable/reliable cloud database as a service for both
relational/non-relational engines
‣ Natively built to run on Openstack
‣ Supports a single tenant database within an instance
‣ Still under active development; production-ready only in expert hands.
- http://www.slideshare.net/mirantis/trove-d-baa-s-28013400
21. Ryan Aydelott - CELS
Sahara - Hadoop on Openstack
‣ https://wiki.openstack.org/wiki/Sahara
‣ The goal is to provide simple provisioning and management of Hadoop clusters on Openstack
‣ API to run analytics jobs
‣ Cluster Provisioning
‣ Still under active development
- http://www.slideshare.net/mirantis/savanna-hadoop-on-openstack
22. Ryan Aydelott - CELS
Zaqar - Messaging Service
‣ https://wiki.openstack.org/wiki/Zaqar
‣ Messaging service native to Openstack.
‣ Tenant queues based on Keystone project IDs
‣ HA with horizontal scaling
‣ Still under active development
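A sketch of the payload shape for posting to a tenant queue, as in the v1 messaging API (each message carries a TTL in seconds plus an arbitrary JSON body; the event names here are invented):

```python
def zaqar_messages(bodies, ttl=300):
    """Build the list-of-messages payload for POST /v1/queues/<name>/messages."""
    return [{"ttl": ttl, "body": body} for body in bodies]

payload = zaqar_messages([{"event": "backup.start"}, {"event": "backup.done"}])
print(len(payload), payload[0]["ttl"])  # 2 300
```

Since queues are scoped by Keystone project ID, two tenants can use the same queue name without ever seeing each other's messages.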
23. Ryan Aydelott - CELS
This is just the beginning…
‣ Already a number of these commonly used application stacks are being written
directly against Openstack.
‣ More complicated deployments that consist of groups of applications can be coordinated with Heat, either exclusively or in tandem with other management frameworks/systems.
‣ Further development of abstraction/optimization layers will continue, reducing the
human overhead necessary to run complex jobs.