A quick update on what's happening in Nova, covering the v2.1 API, Cells v2, the scheduler, and much more.
It was presented at the Ops Midcycle meetup in Manchester, UK, in February 2016.
OpenStack Nova Liberty focused on maintaining stability while increasing velocity. Key priorities included improving the API, ensuring reliability, enabling live upgrades and scaling. The architecture evolved to separate the data and control planes to reduce downtime during upgrades. Future releases will focus on continued architecture evolution, reducing scope creep, and improving the user experience.
How is automation done in the real world, and on existing systems? This webcast shows our path from existing hand-built installations to an environment managed by Ansible playbooks.
Why did we choose Ansible over the alternatives? A demo shows the installation and how automation tools can reduce stress during incident remediation.
This document discusses Apache Flink version 1.7 and beyond. It summarizes key features of Flink 1.7 including contributions from 112 contributors and over 1,000 commits. It also discusses upcoming features in Flink 1.8 such as support for state schema evolution, dynamic scaling, unifying batch and streaming, an extendable scheduler, and end-to-end SQL-only pipelines. The document encourages participation in the Flink community.
Why Architecting for Disaster Recovery is Important for Your Time Series Data... (InfluxData)
Time series data at Capital One consists of infrastructure, application, and business-process metrics. The combination of these metrics is what internal stakeholders rely on for observability, which allows them to deliver better service and uptime for their customers; protecting this critical data with a proven and tested recovery plan is therefore not a “nice to have” but a “must have.”
In this talk, IT staff members Saravanan Krisharaju, Rajeev Tomer, and Karl Daman will share how they built a fault-tolerant solution based on InfluxEnterprise and AWS that collects and stores metrics and events. On top of this they added machine learning, which uses the collected time series to model predictions that are then written back into the InfluxDB time series database for real-time access. The Capital One team shares the journey they took to architect and build this solution, as well as to plan and execute their disaster recovery plan.
Future of Apache Flink Deployments: Containers, Kubernetes and More - Flink F... (Till Rohrmann)
Container technology is seeing ever-increasing adoption across many industries. Not only does it make your applications portable across different machines and operating systems, it also allows applications to be scaled in a matter of seconds. Moreover, it significantly simplifies and speeds up deployments, which reduces development and operations costs. Consequently, more and more Flink deployments run in containerized environments, which poses new challenges for Flink.
In this talk, we will take a look at Flink's current and future container support, which will make it a first-class citizen of the container world. First, we will explain how the new reactive execution mode solves the problem of seamless application scaling and how it blends in with any environment. Complementing the reactive mode, the active execution mode demonstrates its strengths when it comes to changing workloads such as batch jobs. Last but not least, we will look beyond Flink itself and investigate how Flink can be used together with Kubernetes operators or data Artisans' Application Manager. We will conclude the talk with a short demo of Flink's native Kubernetes support and give an outlook on future developments in the container realm.
Flink Community Update December 2015: Year in Review (Robert Metzger)
This document summarizes the Berlin Apache Flink Meetup #12 that took place in December 2015. It discusses the key releases and improvements to Flink in 2015, including the release of versions 0.10.0 and 0.10.1, and new features that were added to the master branch, such as improvements to the Kafka connector. It also lists pending pull requests, recommended reading, and provides statistics on Flink's growth in 2015 in terms of GitHub activity, meetup groups, organizations at Flink Forward, and articles published.
Elastic Streams at Scale @ Flink Forward 2018 Berlin (Till Rohrmann)
This document discusses Elastic Streams at scale using Apache Flink and Mesos. It describes how Flink jobs can be deployed on Mesos clusters by having the Flink master process request resources from the Mesos resource manager. The resource manager then allocates Mesos containers for the Flink master and task managers, allowing the Flink processes to be deployed and tasks to run on the Mesos cluster resources. A Mesos dispatcher can be used to start and monitor the Flink master process.
OSMC 2021 | Handling 250K flows per second with OpenNMS: a case study (NETWAYS)
What does it take to go from no flow support to handling huge volumes of heterogeneous flow data in a 100% open-source monitoring stack, in a real-world environment? Expect a brief refresher on flows, an overview of the customer environment, and a discussion of the engineering challenges faced. A medium-depth dive follows, covering the movement of flow data from ingest to query and display, the solution architecture as it exists today, and lessons learned and their application to the project roadmap.
WSO2 Kubernetes Reference Architecture - Nov 2017 (Imesh Gunaratne)
This document provides an overview of WSO2's reference architecture for deploying their middleware products on Kubernetes. It begins with introductions to containers and Kubernetes, explaining concepts like pods, services, deployments, etc. It then outlines WSO2's approach for container orchestration, service discovery, configuration management, load balancing, security, updates, and monitoring in a Kubernetes environment. Specific practices and Kubernetes resources are recommended for areas like pod security policies, horizontal pod autoscaling, ingress definitions, and more. Overall the document serves as a guide for architecting and operating WSO2 products on Kubernetes according to best practices.
44CON 2014 - Binary Protocol Analysis with CANAPE, James Forshaw (44CON)
CANAPE is an open-source network proxy written in .NET. It was developed to aid in the analysis and exploitation of unknown application network protocols, with a use case similar to common HTTP proxies such as Burp or CAT.
This workshop will go through the basics of analysing an unknown application protocol, with hands-on training examples. By the end of the workshop, candidates should better understand CANAPE's functionality and be able to apply it to other protocols they come across.
Serverless stream processing of Debezium data change events with Knative | De... (Red Hat Developers)
Come and join us for an (almost) no-slides session around the terrific trio of Debezium, Apache Kafka Streams, and Knative Eventing! Leveraging Apache Kafka as the de-facto standard for event-driven data pipelines, these open-source technologies allow you to ingest data changes from relational and NoSQL databases, process and enrich them, and consume them serverless-style. In a live demo, you'll see how Debezium, Apache Kafka, Quarkus, and Knative are the dream team for building serverless, cloud-native stream processing pipelines. You will learn:
- How to stream change events out of your database using Debezium
- How to use the Quarkus extension for Kafka Streams to build cloud-native stream processing applications, running either on the JVM or GraalVM
- How to consume and distribute Kafka messages with Knative Eventing, allowing you to manage modern serverless workloads on Kubernetes
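For context on the first learning point, registering a Debezium source connector with Kafka Connect is usually a single REST call carrying a JSON configuration. The sketch below uses the Debezium MySQL connector; the connector name, hostnames, credentials, and database names are placeholders, not values from the demo:

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.include.list": "inventory",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```

POSTing this document to the Kafka Connect REST API (e.g. `curl -X POST -H "Content-Type: application/json" --data @register.json http://connect:8083/connectors`) starts streaming change events into Kafka topics named after the captured tables.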
Wayfair Storefront Performance Monitoring with InfluxEnterprise by Richard La... (InfluxData)
In this InfluxDays NYC 2019 session, Richard Laskey from the Wayfair Storefront team will share their monitoring best practices using InfluxEnterprise. These efforts are critical and help improve the user experience by driving forward site-wide improvements, establishing best practices, and driving change through many different teams.
Flink Forward SF 2017: Scott Kidder - Building a Real-Time Anomaly-Detection ... (Flink Forward)
Mux uses Apache Flink to identify anomalies in the distribution & playback of digital video for major video streaming websites. Scott Kidder will describe the Apache Flink deployment at Mux leveraging Docker, AWS Kinesis, Zookeeper, HDFS, and InfluxDB. Deploying a Flink application in a zero-downtime production environment can be tricky, so unit- & behavioral-testing, application packaging, upgrade, and monitoring strategies will be covered as well.
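The talk covers Mux's own detection pipeline; as a generic illustration of the idea only (not Mux's actual algorithm), anomaly detection over a metric stream can be as simple as flagging values that deviate strongly from a rolling window:

```python
import math
from collections import deque

class RollingZScoreDetector:
    """Flag values whose z-score against a sliding window exceeds a threshold."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, x: float) -> bool:
        """Record x and report whether it was anomalous vs. the prior window."""
        anomalous = False
        if len(self.values) >= 2:
            mean = sum(self.values) / len(self.values)
            variance = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(variance)
            if std > 0 and abs(x - mean) / std > self.threshold:
                anomalous = True
        self.values.append(x)
        return anomalous

# A mostly flat playback-error rate followed by a sudden spike:
detector = RollingZScoreDetector(window=30, threshold=3.0)
results = [detector.observe(v) for v in [10.0] * 30 + [10.2, 50.0]]
print(results[-1])  # True: the spike at 50.0 is flagged
```

In a streaming setting the same check would run per key (per video, per CDN, etc.) inside a Flink keyed operator; the sketch above only shows the per-stream math.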
stackconf 2020 | Ignite talk: Opensource in Advanced Research Computing, How ... (NETWAYS)
Open-source software is becoming a pillar of our everyday life, leveraged by our cell phones, our transportation systems, and the websites we visit. In this quick talk, we will look at how Canada's Advanced Research Computing ("ARC") organizations use open-source software to deploy and operate some of the largest supercomputers and cloud deployments on Earth. We will briefly introduce the systems and dig deeper into the open-source technologies that together make the magic happen!
The document discusses plans for making ManageIQ providers more modular and gemified. It covers namespaces, asking providers for their capabilities instead of assuming, gemifying individual providers, and generating boilerplate code for new providers. The overall goal is for providers to be owned, maintained and released independently by their authors.
Better Kafka Performance Without Changing Any Code | Simon Ritter, Azul (HostedbyConfluent)
Apache Kafka is the most popular open-source stream-processing software for collecting, processing, storing, and analyzing data at scale. Best known for its excellent performance, low latency, fault tolerance, and high throughput, it is capable of handling thousands of messages per second. For mission-critical applications, how do you ensure that the performance delivered is the performance required? This is especially important because Kafka is written in Java and Scala and runs on the JVM, a fantastic platform that delivers at internet scale.
In this session, we'll explore how making changes to the JVM design can eliminate the problems of garbage collection pauses and raise the throughput of applications. For cloud-based Kafka applications, this can deliver both lower latency and reduced infrastructure costs. All without changing a line of code!
OSMC 2019 | Monitoring Cockpit for Kubernetes Clusters by Ulrike Klusik (NETWAYS)
Monitoring Kubernetes clusters with Prometheus is state of the art. The difficulty is finding the significant metrics among the vast number available. This talk presents a monitoring cockpit designed to give a quick overview of cluster health and usage. It uses the standard metrics available for Kubernetes/OpenShift clusters and their standard services. The monitoring solution is based on Prometheus, using InfluxDB for central long-term storage and Grafana for visualization.
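As one concrete way to wire Prometheus to InfluxDB for long-term storage, Prometheus's remote write/read configuration can point at InfluxDB 1.x's Prometheus-compatible endpoints. The fragment below is a sketch; the hostname, database name, and credentials are placeholders, and the talk's actual setup may differ:

```yaml
# prometheus.yml (fragment): ship samples to InfluxDB for long-term storage
remote_write:
  - url: "http://influxdb.example.internal:8086/api/v1/prom/write?db=prometheus&u=promwriter&p=secret"
remote_read:
  - url: "http://influxdb.example.internal:8086/api/v1/prom/read?db=prometheus"
```

With this in place, Grafana dashboards can query InfluxDB for history beyond Prometheus's local retention window.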
Flink Forward San Francisco 2018 keynote: Srikanth Satya - "Stream Processin... (Flink Forward)
Stream processing in conjunction with consistent, durable, reliable stream storage is kicking the revolution in big data processing up a notch. This modern paradigm is enabling a new generation of data middleware that delivers on the streaming promise of a simplified and unified programming model. From data ingest, transformation, and messaging to search, time series, and more, a robust streaming data ecosystem means we'll all be able to more quickly build applications that solve problems we could not solve before.
Introducing Confluent labs Parallel Consumer client | Anthony Stubbes, Confluent (HostedbyConfluent)
Consuming messages in parallel is what Apache Kafka® is all about, so you may well wonder, why would we want anything else? It turns out that, in practice, there are a number of situations where Kafka’s partition-level parallelism gets in the way of optimal design.
This session will go over some of these types of situations that can benefit from parallel message processing within a single application instance (aka slow consumers or competing consumers), and then introduce the new Parallel Consumer labs project from Confluent, which can improve functionality and massively improve performance in such situations.
It will cover:
- Different ordering modes of the client
- Relative performance improvements
- Usage with other components like Kafka Streams
- An introduction to the internal architecture of the project
- How it can achieve all this in a reassignment-friendly manner
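To make the ordering discussion above concrete: the Confluent Parallel Consumer is a Java client, but its key-ordered mode can be illustrated with a small, entirely hypothetical sketch (here in Python, not the client's actual API). Records sharing a key are chained so they execute in order, while distinct keys proceed in parallel:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
import threading

class KeyOrderedExecutor:
    """Run records for different keys in parallel, same-key records in order."""

    def __init__(self, max_workers: int = 4):
        self._pool = ThreadPoolExecutor(max_workers=max_workers)
        self._tails = {}   # key -> future of the most recently submitted record
        self._lock = threading.Lock()

    def submit(self, key, record, handler):
        with self._lock:
            prev = self._tails.get(key)

            def run():
                if prev is not None:
                    prev.result()  # wait until the previous same-key record is done
                return handler(record)

            fut = self._pool.submit(run)
            self._tails[key] = fut
            return fut

    def shutdown(self):
        self._pool.shutdown(wait=True)

# Usage: interleave two keys; each key's records finish in submission order.
results = defaultdict(list)
results_lock = threading.Lock()

def handler(record):
    key, seq = record
    with results_lock:
        results[key].append(seq)

executor = KeyOrderedExecutor(max_workers=4)
for i in range(5):
    executor.submit("a", ("a", i), handler)
    executor.submit("b", ("b", i), handler)
executor.shutdown()
print(results["a"])  # [0, 1, 2, 3, 4]
```

The chaining keeps per-key order without serializing the whole partition, which is the win over plain partition-level parallelism when individual records are slow to process.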
- Operators are applications that extend Kubernetes to manage complex stateful applications. They use custom resource definitions (CRDs) to configure and automate tasks.
- Helm is a good starting point for creating operators, as it is widely used and easy to learn. Operators created with Helm can later be used to manage resources within other operators.
- The demo showed creating a Helm operator from an Nginx chart and combining two operators with ArgoCD to deploy example apps based on custom resources.
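To ground the first bullet, a CRD plus a matching custom resource is all an operator needs to watch. The example below is illustrative (`WebServer` and `example.com` are made-up names, not from the demo):

```yaml
# A minimal CustomResourceDefinition an operator might watch
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: webservers.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: webservers
    singular: webserver
    kind: WebServer
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
---
# A custom resource instance the operator reconciles
apiVersion: example.com/v1
kind: WebServer
metadata:
  name: demo
spec:
  replicas: 2
```

The operator's job is then to notice `WebServer` objects and create or adjust the underlying Deployments, Services, and so on to match their `spec`.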
Kafka Summit SF 2017 - Query the Application, Not a Database: “Interactive Qu... (confluent)
Interactive Queries in Apache Kafka's Streams API allow users to query the local state of a Kafka Streams application without accessing an external database. The feature treats the Kafka Streams application as an embedded, lightweight database. The local state is fault-tolerant and can be sharded across tasks to scale horizontally. Users can discover other application instances and their state in order to query remote state if needed. Interactive Queries simplify stateful stream processing by reducing moving parts compared to using an external database.
Quarkus: From developer joy to Kubernetes nirvana! | DevNation Tech Talk (Red Hat Developers)
In a time when container image building tools outnumber application frameworks, and deployment descriptors are lengthier than a small app, "deployment" is the stage where developer fun goes to die. In a less dramatic tone: the options and complexity of containerizing and deploying an application are, to say the least, a distraction for most developers. But it doesn't have to be. Quarkus provides extensions that help developers eliminate those distractions by making smart choices for them and by integrating with the rest of the Quarkus ecosystem. This demonstration will show that in Quarkusland, Kubernetes is not a killjoy but part of the fun, providing a concise experience as you mix and match support for various platforms (vanilla Kubernetes & OpenShift) with image building solutions (Docker, Jib & S2I).
This document discusses implementing and testing a self-managed logging and visualization solution for a Kubernetes cluster. It considers tools like FluentD, Elasticsearch, Kibana, Helm, and Kops for collecting, processing, and visualizing logs. A turn-key deployment approach using Helm is recommended to install all stack components from a single chart and leverage dependencies. Concerns about authentication, capacity planning, and security hardening are noted for future improvement.
Kubernetes-native or not? When should you ditch your traditional CI/CD server... (Red Hat Developers)
This document discusses when to use Kubernetes-native CI/CD tools like Tekton instead of traditional CI/CD servers. It introduces Tekton as an open-source framework for building reusable CI/CD pipelines that runs on Kubernetes. The key benefits of Kubernetes-native tools are centralized logging and monitoring, high availability guaranteed by Kubernetes, and self-healing capabilities. The document suggests considering Kubernetes-native CI/CD when your workloads are already running on Kubernetes.
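As a taste of what "Kubernetes-native" means in practice, Tekton pipelines are themselves Kubernetes resources. The fragment below is a minimal illustrative sketch (task and pipeline names are invented, not taken from the document):

```yaml
# A minimal Tekton Task and a Pipeline that uses it
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-tests
spec:
  steps:
    - name: test
      image: golang:1.16
      script: |
        echo "running tests"
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-test
spec:
  tasks:
    - name: tests
      taskRef:
        name: run-tests
```

Because these are ordinary Kubernetes objects, they are applied with `kubectl apply`, logged and monitored like any other workload, and rescheduled by Kubernetes if a node fails.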
Exploring Kubeflow on Kubernetes for AI/ML | DevNation Tech Talk (Red Hat Developers)
The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable, and scalable by leveraging best-of-breed open source projects. These include Jupyter Notebooks, TensorFlow, and PyTorch for training; Seldon and KFServing for serving; and Kubeflow Pipelines. These are all wrapped up neatly in an easy-to-use portal so developers and data scientists can easily collaborate and deliver production-ready AI/ML workloads.
The document outlines Michael Neumann's perspective on key aspects of Scripture including Creation, Crisis, Covenant, the six 'acts' of Scripture, Christ, the Church, and Consummation. It also includes several Bible verses about God working through people and pouring out his Spirit on all flesh to bring justice to the nations and fulfill his good purpose.
This document discusses strategies for keeping products moving through their product life cycle. It notes that while some products only have one life, others can have extended life cycles through strategic adaptations. During a product's life cycle, marketing and promotions must adapt to the stage of maturity. Constant innovation is essential for product survival, to avoid forcing early retirement. Relatively small changes to design, packaging, or features can help a product stay relevant. Products reach the end of their life cycle in different ways, such as a lack of sales, but with the right strategy even mature products can be renewed or given new life to avoid going extinct. The key message is that products are not designed to sit on shelves, so companies must find ways to keep them moving.
Elastic Streams at Scale @ Flink Forward 2018 BerlinTill Rohrmann
This document discusses Elastic Streams at scale using Apache Flink and Mesos. It describes how Flink jobs can be deployed on Mesos clusters by having the Flink master process request resources from the Mesos resource manager. The resource manager then allocates Mesos containers for the Flink master and task managers, allowing the Flink processes to be deployed and tasks to run on the Mesos cluster resources. A Mesos dispatcher can be used to start and monitor the Flink master process.
OSMC 2021 | Handling 250K flows per second with OpenNMS: a case studyNETWAYS
What does it take to go from no flow support, to handling huge volumes of heterogeneous flow data in a 100% open-source monitoring stack, in a real-world environment? Expect a brief refresher on flows, an overview of the customer environment, and discussion of the engineering challenges faced. A medium dive follows into the movement of flow data from ingest to query and display, the solution architecture as it exists today, and lessons learned and their application to the project roadmap.
WSO2 Kubernetes Reference Architecture - Nov 2017Imesh Gunaratne
This document provides an overview of WSO2's reference architecture for deploying their middleware products on Kubernetes. It begins with introductions to containers and Kubernetes, explaining concepts like pods, services, deployments, etc. It then outlines WSO2's approach for container orchestration, service discovery, configuration management, load balancing, security, updates, and monitoring in a Kubernetes environment. Specific practices and Kubernetes resources are recommended for areas like pod security policies, horizontal pod autoscaling, ingress definitions, and more. Overall the document serves as a guide for architecting and operating WSO2 products on Kubernetes according to best practices.
44CON 2014 - Binary Protocol Analysis with CANAPE, James Forshaw44CON
44CON 2014 - Binary Protocol Analysis with CANAPE, James Forshaw
CANAPE is an open source network proxy written in .NET. It has been developed to aid in the analysis and exploitation of unknown application network protocols using a similar use case to common HTTP proxies such as Burp or CAT.
This workshop will go through the basics of analysing an unknown application protocol with hands on training examples. By the end of the workshop candidates should be able to better understand CANAPE’s functionality and be able to apply that to other protocols they come across.
Serverless stream processing of Debezium data change events with Knative | De...Red Hat Developers
Come and join us for an (almost) no-slides session around the terrific trio of Debezium, Apache Kafka Streams, and Knative Eventing! Leveraging Apache Kafka as the de-facto standard for event-driven data pipelines, these open-source technologies allow you to ingest data changes from relational and NoSQL databases, process and enrich them, and consume them serverless-style. In a live demo, you’ll see how Debezium, Apache Kafka, Quarkus, and Knative are the dream-team for building serverless, cloud-native stream processing pipelines. You will learn: How to stream change events out of your database using Debezium How to use the Quarkus extension for Kafka Streams to build cloud-native stream processing applications, running either on the JVM or GraalVM How to consume and distribute Kafka messages with Knative Eventing, allowing you to manage modern serverless workloads on Kubernetes.
Wayfair Storefront Performance Monitoring with InfluxEnterprise by Richard La...InfluxData
In this InfluxDays NYC 2019 session, Richard Laskey from the Wayfair Storefront team will share their monitoring best practices using InfluxEnterprise. These efforts are critical and help improve the user experience by driving forward site-wide improvements, establishing best practices, and driving change through many different teams.
Flink Forward SF 2017: Scott Kidder - Building a Real-Time Anomaly-Detection ...Flink Forward
Mux uses Apache Flink to identify anomalies in the distribution & playback of digital video for major video streaming websites. Scott Kidder will describe the Apache Flink deployment at Mux leveraging Docker, AWS Kinesis, Zookeeper, HDFS, and InfluxDB. Deploying a Flink application in a zero-downtime production environment can be tricky, so unit- & behavioral-testing, application packaging, upgrade, and monitoring strategies will be covered as well.
stackconf 2020 | Ignite talk: Opensource in Advanced Research Computing, How ...NETWAYS
Opensource software is becoming a pillar in our everyday life, leveraged by our cell phones, our transportation systems and on the websites we visit. In this quick talk, we will have a look on how Canada’s Advanced Research Computing (“ARC”) organizations use opensource software to deploy and operate some of the largest Supercomputers and Cloud deployments on Earth. We will briefly introduce the systems and dig deeper into the opensource technologies that together make the magic happen !
The document discusses plans for making ManageIQ providers more modular and gemified. It covers namespaces, asking providers for their capabilities instead of assuming, gemifying individual providers, and generating boilerplate code for new providers. The overall goal is for providers to be owned, maintained and released independently by their authors.
Better Kafka Performance Without Changing Any Code | Simon Ritter, AzulHostedbyConfluent
Apache Kafka is the most popular open-source stream-processing software for collecting, processing, storing, and analyzing data at scale. Most known for its excellent performance, low latency, fault tolerance, and high throughput, it's capable of handling thousands of messages per second. For mission-critical applications, how do you ensure that the performance delivered is the performance required? This is especially important as Kafka is written in Java and Scala and runs on the JVM. The JVM is a fantastic platform that delivers on an internet scale.
In this session, we'll explore how making changes to the JVM design can eliminate the problems of garbage collection pauses and raise the throughput of applications. For cloud-based Kafka applications, this can deliver both lower latency and reduced infrastructure costs. All without changing a line of code!
OSMC 2019 | Monitoring Cockpit for Kubernetes Clusters by Ulrike KlusikNETWAYS
Monitoring Kubernetes Clusters with Prometheus is state of the art. The difficulty is to find the significant metrics from the vast amount of available metrics. This talk shows a Monitoring Cockpit defined to get a quick overview of the cluster health and usage. It uses the Standard Metrics available for Kubernetes/OpenShift Clusters and their standard services. The monitoring solution is based on Prometheus, using InfluxDB for central long term storage and Grafana.
Flink Forward San Francisco 2018 keynote: Srikanth Satya - "Stream Processin...Flink Forward
Stream Processing in conjunction with a Consistent, Durable, Reliable stream storage is kicking the revolution up a notch in Big Data processing. This modern paradigm is enabling a new generation of data middleware that delivers on the streaming promise of a simplified and unified programming model. From data ingest, transformation, and messaging to search, time series and more, a robust streaming data ecosystem means we’ll all be able to more quickly build applications that solve problems we could not solve before.
Introducing Confluent labs Parallel Consumer client | Anthony Stubbes, ConfluentHostedbyConfluent
Consuming messages in parallel is what Apache Kafka® is all about, so you may well wonder, why would we want anything else? It turns out that, in practice, there are a number of situations where Kafka’s partition-level parallelism gets in the way of optimal design.
This session will go over some of these types of situations that can benefit from parallel message processing within a single application instance (aka slow consumers or competing consumers), and then introduce the new Parallel Consumer labs project from Confluent, which can improve functionality and massively improve performance in such situations.
It will cover the
- Different ordering modes of the client
- Relative performance improvements
- Usage with other components like Kafka Streams
- An introduction to the internal architecture of the project
- How it can achieve all this in a reassignment friendly manner
- Operators are applications that extend Kubernetes to manage complex stateful applications. They use custom resource definitions (CRDs) to configure and automate tasks.
- Helm is a good starting point for creating operators as it is widely used and easy to learn. Operators created with Helm can later be used to manage resources in other operators.
- The demo showed creating a Helm operator from a Nginx chart and combining two operators with ArgoCD to deploy example apps based on custom resources.
Kafka Summit SF 2017 - Query the Application, Not a Database: “Interactive Qu...confluent
Interactive Queries in Apache Kafka's Streams API allows users to query the local state of a Kafka Streams application without accessing an external database. It treats the Kafka Streams application as an embedded, lightweight database. The local state is fault-tolerant and can be sharded across tasks to scale horizontally. Users can discover other application instances and their state to perform queries on remote state if needed. Interactive Queries simplifies stateful stream processing by reducing moving parts compared to using an external database.
Quarkus: From developer joy to Kubernetes nirvana! | DevNation Tech TalkRed Hat Developers
In a time where container image building tools outnumber application frameworks, and deployment descriptors are lengthier than a small app, "deployment" is the stage where developer fun goes to die. In a less dramatic tone: The options and complexity of containerizing and deploying an application is, to say the least, a distraction for most developers. But it doesn't have to be. Quarkus provides extensions that help developers eliminate those distractions by making smart choices for them and by integrating with the rest of the Quarkus ecosystem. This demonstration will show that in Quarkusland, Kubernetes is not a killjoy but part of the fun, by providing a concise experience as you mix and match support for various platforms (vanilla Kubernetes & OpenShift) with image building solutions (Docker, Jib & S2i).
This document discusses implementing and testing a self-managed logging and visualization solution for a Kubernetes cluster. It considers tools like FluentD, Elasticsearch, Kibana, Helm, and Kops for collecting, processing, and visualizing logs. A turn-key deployment approach using Helm is recommended to install all stack components from a single chart and leverage dependencies. Concerns about authentication, capacity planning, and security hardening are noted for future improvement.
Kubernetes-native or not? When should you ditch your traditional CI/CD server... (Red Hat Developers)
This document discusses when to use Kubernetes-native CI/CD tools like Tekton instead of traditional CI/CD servers. It introduces Tekton as an open-source framework for building reusable CI/CD pipelines that runs on Kubernetes. The key benefits of Kubernetes-native tools are centralized logging and monitoring, high availability guaranteed by Kubernetes, and self-healing capabilities. The document suggests considering Kubernetes-native CI/CD when your workloads are already running on Kubernetes.
Exploring Kubeflow on Kubernetes for AI/ML | DevNation Tech Talk (Red Hat Developers)
The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable, and scalable by leveraging best-of-breed open source projects. These include Jupyter Notebooks, TensorFlow, and Pytorch for Training; Seldon and KFServing for Serving; and Kubeflow Pipelines. These are all wrapped up neatly in an easy-to-use portal so developers and data scientists can easily collaborate and deliver production-ready AI/ML workloads.
The document outlines Michael Neumann's perspective on key aspects of Scripture including Creation, Crisis, Covenant, the six 'acts' of Scripture, Christ, the Church, and Consummation. It also includes several Bible verses about God working through people and pouring out his Spirit on all flesh to bring justice to the nations and fulfill his good purpose.
This document discusses strategies for keeping products moving through their product life cycle. It notes that while some products only have one life, others can have extended life cycles through strategic adaptations. During a product's life cycle, marketing and promotions must adapt to the stage of maturity. Constant innovation is essential for product survival, to avoid forcing early retirement. Relatively small changes to design, packaging, or features can help a product stay relevant. Products reach the end of their life cycle in different ways, such as a lack of sales, but with the right strategy even mature products can be renewed or given new life to avoid going extinct. The key message is that products are not designed to sit on shelves, so companies must find ways to keep them moving.
April Dunford of Sprint.ly presents Leaky Buckets, Death Stink & True Love (TechTO)
April Dunford of Sprintly shares the truth on what non-marketing activities have to do with marketing and how to build a stronger profitable startup in the process. Presented at Tech Toronto Meetup February 2016.
The document appears to be a list of artists and songs from classic rock music. It includes over 200 entries listing popular rock bands from the 1960s-1980s and some of their most well-known songs, such as Led Zeppelin's "Rock and Roll", Lynyrd Skynyrd's "Sweet Home Alabama", and The Beatles' "Come Together". The list provides a sampling of iconic rock artists and songs that helped define the rock music genre during the era.
The document describes several building projects including a temporary and permanent housing project in Ottawa located near a light rail station, a Shanghai community center near a historic building consisting of a recreation center, boat club, library and soccer field, and the Ha'erbin convention center in China consisting of convention halls, a theatre, meeting halls, retail, and hotels.
This is a brief overview of how Interbrand works
Theory: the formula (Espiell, 2009) used in this method is:

Van = Σs (Is × M × F) / (1 + d)^s

Where:
• Van: present value of the brand
• Is: income attributable to intangibles in year s (total income attributed to the product identified with the brand, less tangible costs attributed to that product)
• s: number of years from today
• d: discount rate
• M: ‰ of income attributable to the intangible (the brand exclusively)
• F: ‰ reflecting the reliability of the income attributable to the brand
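The brand-valuation formula above is a discounted sum of intangible income scaled by M and F. A worked example with invented figures (three years of income, treating M and F as plain fractions for simplicity):

```python
def brand_value(intangible_income, m, f, d):
    """Van = sum over years s of (Is * M * F) / (1 + d)**s.
    M and F are treated as plain fractions here for simplicity."""
    return sum(
        income * m * f / (1 + d) ** s
        for s, income in enumerate(intangible_income, start=1)
    )

# Invented figures: three years of intangible income, M=60%, F=90%, d=10%.
print(round(brand_value([100.0, 110.0, 121.0], m=0.6, f=0.9, d=0.10), 2))
# 147.27
```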
The document discusses guidance and counseling at Yogyakarta State University in Indonesia. It was written by Sailah Ribha, who works in the Guidance and Counseling department at the university. The document provides background information on the author and their affiliation with Yogyakarta State University.
This document presents the features of several web and office tools applied to education and nursing. It describes tools such as Google Drive, Dropbox, Gmail, Hotmail, Mendeley, Blogger, SlideShare, GoConqr, Excel, Word, PowerPoint, Epi Info, and Quipux. The document analyzes how these tools can be used to improve education and nursing practice by facilitating the creation, editing, storage, and sharing of files.
The European Union has agreed on an oil embargo against Russia in response to the invasion of Ukraine. The embargo will ban maritime imports of Russian oil into the EU and end pipeline deliveries within six months. This measure is part of a sixth package of EU sanctions intended to increase economic pressure on Putin's government.
The best thing about Chambre Luxe is that it uses only the best-quality cosmetic products, which are not harmful at all. They contain no parabens or other harmful ingredients. The services here are fantastic.
Eco E offers an energy management service solution with no upfront costs and guaranteed 20% savings on electricity costs. Customers pay a single monthly fee for Eco E to install and monitor an LED lighting system. Eco E is responsible for installation, maintenance, and ensuring ongoing savings over the 5-year agreement. The solution reduces electricity usage and costs while eliminating customer risk and exposure to additional costs.
This document summarizes several psychological theories of motivation. It defines motivation as the stimuli that drive people to act. It explains that motivation can be internal or external, and describes Maslow's theory of the hierarchy of needs, Herzberg's theory of motivating and hygiene factors, and McClelland's types of motivation: achievement, power, and affiliation. It also summarizes McGregor's Theory X and Theory Y and Alderfer's ERG theory.
Darshana Dinesh Patil has over 13 years of experience in IT project management and service delivery. She currently works as a Validation Program Manager for Tata Consultancy Services, where she manages a validation team of 10 members for Hospira, a Pfizer company. Previously, she has worked as a Project Manager for several projects involving Oracle, Java, and Salesforce for clients like Cisco Systems. She has expertise in computer system validation, quality assurance, and regulatory compliance.
1) This document presents information about a computing course in the nursing program at the Universidad Técnica de Machala, first semester of the 2015-2016 academic year.
2) It explains different ways to configure the spelling and grammar checking tools in Microsoft Office, such as running in the background, hiding errors, and using contextual spelling.
3) Finally, it details methods for protecting Word documents, such as restricting formatting and editing, and marking a document as final.
The National Multifamily Index ranks major U.S. markets based on projected vacancy rates, rent growth, and employment gains. San Francisco and San Jose rank at the top due to strong job growth, low vacancy, and high rents. Markets in the Pacific Northwest and Northeast also rank highly. Atlanta and Riverside-San Bernardino moved into the top 20 due to improving economies and property performance. Midwest markets rank in the lower third despite favorable demand drivers. Supply growth will challenge some markets like Houston and Tampa.
OpenStack Nova - Developer Introduction (John Garbutt)
This document provides an overview of Nova, OpenStack's compute service. It discusses Nova's architecture, code structure, API concepts, upgrade process, and how different groups work together as part of the upstream community. The new upgrade process aims to minimize downtime by expanding the database schema, restarting services individually, and signaling services to reload configuration. Collaboration across various groups with different perspectives is important to OpenStack's open development model.
OpenStack Nova Upgrade - /dev/winter Jan 2016 (John Garbutt)
Rackspace uses OpenStack to power both its public cloud and many private clouds.
Let's take a look at how OpenStack Compute (Nova) works with other OpenStack services to convert a user's REST API call into accessible compute resources, be they virtual machines, containers, or bare metal.
Now that you understand how Nova is a highly distributed system, let's look at how you can upgrade the control plane, spread across thousands of nodes, with minimal downtime.
ONUG Tutorial: Bridges and Tunnels Drive Through OpenStack Networking (markmcclain)
This document summarizes OpenStack networking (Neutron) and discusses its key components and architecture. It describes how Neutron provides network abstraction and virtualization through pluggable backend drivers. It also outlines some common Neutron features like security groups and highlights new capabilities in the Juno release like IPv6 support and distributed virtual routing. The document concludes by looking ahead to further networking developments in OpenStack.
How Kubernetes can help you quickly and automatically test and deploy new services.
While Kubernetes is primarily associated with managing cloud-native applications and microservices, it can also play a role in IoT deployments. Here are a few reasons why Kubernetes is relevant in the context of IoT:
1. Scalability: IoT systems often involve a large number of devices generating massive amounts of data. Kubernetes provides automatic scaling capabilities, allowing IoT applications to scale horizontally by adding or removing instances based on demand. This helps manage the increasing workload efficiently.
2. Resilience and High Availability: IoT applications require high availability to ensure uninterrupted operations. Kubernetes offers features like load balancing, fault tolerance, and self-healing capabilities. It can automatically restart failed containers or replace them with healthy instances, ensuring that IoT services remain available and resilient.
3. Resource Optimization: IoT deployments typically involve a mix of hardware devices with varying capabilities. Kubernetes can optimize resource utilization by efficiently distributing workloads across devices. It allows you to define resource constraints and priorities, ensuring that devices with higher capabilities handle more demanding tasks.
4. Service Discovery and Load Balancing: In an IoT ecosystem, devices and services need to discover and communicate with each other. Kubernetes provides built-in service discovery mechanisms, such as DNS-based service discovery and load balancing, allowing devices to locate and interact with services dynamically.
5. Security and Updates: Security is a crucial aspect of IoT systems, and Kubernetes helps in managing security at scale. It provides features like role-based access control (RBAC), network policies, and secret management to enforce security measures across IoT deployments. Additionally, Kubernetes facilitates rolling updates, allowing for seamless updates and patches without downtime.
6. Flexibility and Portability: Kubernetes abstracts the underlying infrastructure, enabling IoT applications to be deployed consistently across different environments, whether it's on-premises, in the cloud, or at the edge. This flexibility allows organizations to migrate or distribute their IoT workloads as needed, making it easier to adopt hybrid or multi-cloud strategies.
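The automatic scaling mentioned in point 1 follows a simple rule: the Horizontal Pod Autoscaler computes the desired replica count as ceil(current × currentMetric / targetMetric), which can be sketched as:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """The Horizontal Pod Autoscaler scaling rule:
    desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6.
print(desired_replicas(4, 90, 60))  # 6
# Load drops to 30% of target 60% -> scale back in to 2.
print(desired_replicas(4, 30, 60))  # 2
```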
Workday has built one of the largest OpenStack-based private clouds in the world, hosting a workload of over a million physical cores on over 16,000 compute nodes in 5 data centers for over ten years. However, there was a growing need for a newer, more maintainable deployment model that would closely follow the upstream community. We would like to share our new architecture and deployment approach as well as lessons learned from our experience.
We’ve converted many of our technologies in the process, from…
Migrating from Mitaka to Victoria
Converting from OpenContrail to pure L3 Calico with BGP on the host
Deploying with Chef to deploying with Ansible
Building home-grown container images to Kolla
Monitoring with Sensu and Wavefront to Prometheus and Grafana
CI/CD in Jenkins to Zuul
CentOS 7 to CentOS 8 Stream
We'll also talk about some internal tools we wrote that, while Workday-specific, may inspire you to see what value-add you can make for your customers.
OpenDaylight OpenFlow & OVSDB use cases - ODL Summit 2016 (abhijit2511)
The document discusses the OpenDaylight OpenFlow & OVSDB projects and use cases. It provides an overview of the OpenFlow plugin project including participants, features, and new additions in Boron. It also summarizes the OVSDB project including the OVSDB southbound plugin and library. Finally, it describes how the OpenFlow and OVSDB plugins are used in virtual network and service function chaining use cases.
Building Apps with Distributed In-Memory Computing Using Apache Geode (PivotalOpenSourceHub)
Slides from the Meetup Monday March 7, 2016 just before the beginning of #GeodeSummit, where we cover an introduction of the technology and community that is Apache Geode, the in-memory data grid.
Applying Hyper-scale Design Patterns to Routing (Hannes Gredler)
Hannes Gredler presents applying hyper-scale design patterns to routing. He discusses a multi-level architecture with microservices, commodity hardware, and resiliency. He advocates open source development and cites Cisco's Vector Packet Processing as the best open source routing code. A demo shows a router processing 39 million routes with fast restart times using snapshots for state recovery.
This document summarizes a presentation about Open vSwitch, an open source virtual switch that allows programmable networking in virtualized environments. Open vSwitch brings standard networking features like VLANs, bonding, and ACLs to virtual machines. It supports OpenFlow for remote programmability and management. Open vSwitch can emulate traditional switch pipelines or be extended with primitives like registers and resubmit actions. While establishing new flows impacts performance, established flows perform near native speeds. Open vSwitch integrates with hypervisors like libvirt and OpenStack for network management. Future work may improve performance, integration, and add new features and protocols.
Openstack upgrade without_down_time_20141103r1 (Yankai Liu)
The document describes NTT's strategy for performing live upgrades of OpenStack without downtime. It discusses pre-upgrade investigation, considerations for the upgrade procedure, testing the upgrade process, and results. The key aspects covered are migrating user resources, upgrading components in a specific order while blocking requests, and evaluating the upgrade to ensure no impact on users or their API calls. Some issues identified included errors from Active/Standby switches and RPC API version mismatches between components.
[OpenStack Days Korea 2016] Track1 - Red Hat Enterprise Linux OpenStack Platform (OpenStack Korea Community)
This document discusses Red Hat's OpenStack platform. It provides an overview of OpenStack and what it is used for. It then discusses why Red Hat is well suited to provide an OpenStack platform, including that it is optimized to run on Red Hat Enterprise Linux and benefits from Red Hat's engineering resources and long term support. Key features of Red Hat's OpenStack platform are also summarized, such as performance, availability, security and manageability.
Block & File Services – The Nutanix Solution for Your Requirements (NEXTtour)
.NEXT is designed to equip you with the tools, knowledge, and network of people that can help you make real, tangible business impact in your organization.
Intro to Apache Apex - Next Gen Platform for Ingest and Transform (Apache Apex)
Introduction to Apache Apex - The next generation native Hadoop platform. This talk will cover details about how Apache Apex can be used as a powerful and versatile platform for big data processing. Common usage of Apache Apex includes big data ingestion, streaming analytics, ETL, fast batch alerts, real-time actions, threat detection, etc.
Bio:
Pramod Immaneni is Apache Apex PMC member and senior architect at DataTorrent, where he works on Apache Apex and specializes in big data platform and applications. Prior to DataTorrent, he was a co-founder and CTO of Leaf Networks LLC, eventually acquired by Netgear Inc, where he built products in core networking space and was granted patents in peer-to-peer VPNs.
Cisco is developing solutions to deploy OpenStack using Cisco compute, network, and storage technologies. Cisco contributes code to OpenStack projects, provides automation tools for OpenStack deployment on UCS servers, and has plugins that integrate Cisco networking products like Nexus switches and the Nexus 1000V virtual switch with OpenStack. Cisco works with customers to implement OpenStack using best practices defined in Cisco "blueprints" and provides a unified management system for UCS blade and rack servers. The presentation demonstrates how ACI can simplify networking for OpenStack through its application-centric policy model and integration with Neutron.
4.17.0 is the latest Apache CloudStack major release. In this talk, Nicolas goes through the new features introduced in this version from an administrator/user perspective, explaining their benefits and the problems those features resolve. He also ran a live demo to see the new features in action.
Nicolas Vazquez is a Senior Software Engineer at ShapeBlue and is a PMC member of the Apache CloudStack project. He spends his time designing and implementing features in Apache CloudStack and can be seen acting as a release manager also. Nicolas is based in Uruguay and is a father of a young girl. He is a fan of sports, enjoys playing tennis and football. In his free time, he also enjoys reading and listening to economic and political materials.
-----------------------------------------
CloudStack Collaboration Conference 2022 took place on 14th-16th November in Sofia, Bulgaria and virtually. The event was a hybrid get-together of the global CloudStack community, hosting 370 attendees. It featured 43 sessions from leading CloudStack experts, users, and skilful engineers from the open-source world, including technical talks, user stories, and presentations of new features and integrations.
Deploying OpenStack with Cisco Networking, Compute and Storage (Lora O'Haver)
Cisco offers solutions for deploying OpenStack with Cisco compute, network, and storage technologies. Key elements include Cisco's participation in the OpenStack community, Cisco OpenStack engineering efforts, and Cisco technology partnerships with companies providing OpenStack platforms. Cisco provides unified management of compute and network resources through Cisco UCS.
Collaborating with OpenDaylight for a Network-Enabled Cloud (Tesora)
OpenDaylight is an open source SDN platform developed under the Linux Foundation. It aims to promote adoption of SDN through an industry-supported common platform. OpenDaylight has over 31,000 commits from nearly 700 contributors, representing over 2.6 million lines of Java code. It is used in over 150 commercial deployments and integrates with OpenStack for network virtualization and NFV services. Future releases will improve scaling, performance, and application integration through projects like Genius and NetVirt.
Similar to Nova Update - OpenStack Ops Midcycle, Manchester, Feb 2016
Malibou Pitch Deck For Its €3M Seed Round (sjcobrien)
French start-up Malibou raised a €3 million Seed Round to develop its payroll and human resources
management platform for VSEs and SMEs. The financing round was led by investors Breega, Y Combinator, and FCVC.
Hand Rolled Applicative User Validation Code Kata (Philip Schwarz)
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather to provide a small, rough-and-ready exercise that reinforces your muscle memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
UI5con 2024 - Keynote: Latest News about UI5 and its Ecosystem (Peter Muessig)
Learn about the latest innovations in and around OpenUI5/SAPUI5: UI5 Tooling, UI5 linter, UI5 Web Components, Web Components Integration, UI5 2.x, UI5 GenAI.
Recording:
https://www.youtube.com/live/MSdGLG2zLy8?si=INxBHTqkwHhxV5Ta&t=0
Unveiling the Advantages of Agile Software Development.pdf (brainerhub1)
Learn about Agile Software Development's advantages. Simplify your workflow to spur quicker innovation. Jump right in! We have also discussed the advantages.
Top Benefits of Using Salesforce Healthcare CRM for Patient Management.pdf (VALiNTRY360)
Salesforce Healthcare CRM, implemented by VALiNTRY360, revolutionizes patient management by enhancing patient engagement, streamlining administrative processes, and improving care coordination. Its advanced analytics, robust security, and seamless integration with telehealth services ensure that healthcare providers can deliver personalized, efficient, and secure patient care. By automating routine tasks and providing actionable insights, Salesforce Healthcare CRM enables healthcare providers to focus on delivering high-quality care, leading to better patient outcomes and higher satisfaction. VALiNTRY360's expertise ensures a tailored solution that meets the unique needs of any healthcare practice, from small clinics to large hospital systems.
For more info visit us https://valintry360.com/solutions/health-life-sciences
The most important new features of Oracle 23c for DBAs and developers. You can get more detail from my YouTube channel video: https://youtu.be/XvL5WtaC20A
Measures in SQL (SIGMOD 2024, Santiago, Chile) (Julian Hyde)
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
Project Management: The Role of Project Dashboards.pdf (Karya Keeper)
Project management is a crucial aspect of any organization, ensuring that projects are completed efficiently and effectively. One of the key tools used in project management is the project dashboard, which provides a comprehensive view of project progress and performance. In this article, we will explore the role of project dashboards in project management, highlighting their key features and benefits.
UI5con 2024 - Bring Your Own Design System (Peter Muessig)
How do you combine the OpenUI5/SAPUI5 programming model with a design system that makes its controls available as Web Components? Since OpenUI5/SAPUI5 1.120, the framework supports the integration of any Web Components. This makes it possible, for example, to natively embed own Web Components of your design system which are created with Stencil. The integration embeds the Web Components in a way that they can be used naturally in XMLViews, like with standard UI5 controls, and can be bound with data binding. Learn how you can also make use of the Web Components base class in OpenUI5/SAPUI5 to also integrate your Web Components and get inspired by the solution to generate a custom UI5 library providing the Web Components control wrappers for the native ones.
How Can Hiring A Mobile App Development Company Help Your Business Grow? (ToXSL Technologies)
ToXSL Technologies is an award-winning Mobile App Development Company in Dubai that helps businesses reshape their digital possibilities with custom app services. As a top app development company in Dubai, we offer highly engaging iOS & Android app solutions. https://rb.gy/necdnt
The Key to Digital Success_ A Comprehensive Guide to Continuous Testing Integ... (kalichargn70th171)
In today's business landscape, digital integration is ubiquitous, demanding swift innovation as a necessity rather than a luxury. In a fiercely competitive market with heightened customer expectations, the timely launch of flawless digital products is crucial for both acquisition and retention—any delay risks ceding market share to competitors.
What to do when you have a perfect model for your software but you are constrained by an imperfect business model?
This talk explores the challenges of bringing modelling rigour to the business and strategy levels, and talking to your non-technical counterparts in the process.
10. Upgrade
• Data plane and control plane independence
• Upgrade from:
– the last stable branch
– a previous commit in the same cycle
• Existing configuration “just works”
• Warn before removing features
13. Nova Architecture
• API nodes behind a load balancer
• Many compute nodes
• Database and message queue
• Conductor(s) and other control nodes
Upgrade features called out on the diagram:
• Isolate computes from the DB using oslo.versionedobjects
• Versioned RPC signatures
• Schema and data migrations
• Graceful shutdown
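The oslo.versionedobjects idea (newer services passing objects to not-yet-upgraded ones) can be sketched as a backport step that drops fields the older side would not understand. This is an illustrative toy, not the real oslo API; the class, fields, and version numbers are invented.

```python
# Illustrative toy, NOT the real oslo.versionedobjects API: a newer
# object drops fields that a not-yet-upgraded service would reject.

class Instance:
    VERSION = "1.2"  # the hypothetical field 'new_flag' arrived in 1.2

    def __init__(self, host, new_flag=False):
        self.fields = {"host": host, "new_flag": new_flag}

    def obj_make_compatible(self, target_version):
        """Return only the fields an older consumer understands."""
        fields = dict(self.fields)
        if target_version < "1.2":
            fields.pop("new_flag", None)  # unknown before 1.2
        return fields

inst = Instance(host="compute-01", new_flag=True)
print(inst.obj_make_compatible("1.1"))  # {'host': 'compute-01'}
```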
17. API Users
• The Absent: the cloud upgrades underneath them, but their old script keeps working
• The Active: use the newest APIs and check feature availability
• Multi-Cloud: multiple clouds, different versions, a single script
• Ops & Dev: who is using what? how do we evolve the API?
18. API Evolution
• v2.0: the first API; an alias for v1.1; base + extensions; its legacy code is now deprecated
• v2.1: no extensions; better validation; a backwards-compatible mode; evolves using “microversions”
• Third-party APIs: replaced by an external project; removed in Mitaka
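Microversion negotiation can be sketched as a dispatch table: the client requests a version via the X-OpenStack-Nova-API-Version header and the server responds accordingly. The handlers below are invented, and the dispatch rule (newest handler at or below the request) is a simplification of real microversion gates.

```python
# Invented handlers; the dispatch rule here (newest handler at or below
# the requested version) is a simplification of real microversion gates.

HANDLERS = {
    (2, 1): lambda: "base response",
    (2, 10): lambda: "response with an extra field",
}

def dispatch(requested):
    """requested = (major, minor) parsed from the version header."""
    candidates = [v for v in HANDLERS if v <= requested]
    if not candidates:
        raise ValueError("406 Not Acceptable: requested version too old")
    return HANDLERS[max(candidates)]()

print(dispatch((2, 5)))   # base response
print(dispatch((2, 25)))  # response with an extra field
```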
24. Live Migration
• Fixes: better CI coverage; make all disk configurations movable
• Features: status updates; force complete / cancel; split networks
Rackspace public cloud powered by OpenStack Nova
Started working on OpenStack at Citrix in 2010
Joined nova-core in June 2013, Nova PTL for Liberty and Mitaka
Image from unsplash.com
https://images.unsplash.com/photo-1418489098061-ce87b5dc3aee?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&s=b928238e71d53b027f5f89cdeb897892
Image from http://www.coronabrass.co.uk/
Mission hasn’t changed.
Lack of alignment is a big cause of friction.
What is Nova?
https://upload.wikimedia.org/wikipedia/commons/7/76/Blue_Linckia_Starfish.JPG
https://images.unsplash.com/photo-1431794062232-2a99a5431c6c?q=80&fm=jpg&s=2a0c6cb067ffaef134e053d94f555d91
To get a strong ecosystem, the API needs to be interoperable and useful.
Pet VMs want Server “HA”
Out of scope for Nova, but work is underway to add supporting APIs.
https://upload.wikimedia.org/wikipedia/commons/7/78/Airforce_forklift.jpg
https://images.unsplash.com/photo-1429497419816-9ca5cfb4571a?q=80&fm=jpg&s=4bf1164d23eea4f04aeefe1732149cf3
This talk will focus on the control plane
Flow:
API (-> DB) -> Conductor (-> Scheduler) -> Compute (talks to other services)
Why:
Scale small and large: API requests vs Compute nodes
Note Upgrade features.
http://www.danplanet.com/blog/2015/06/26/upgrading-nova-to-kilo-with-minimal-downtime/
Aim: zero downtime.
Note: there is no rollback.
(1) Expand the DB, check that all data migrations are complete, and remove any cruft from previous releases.
(2) Pin RPC, then upgrade all of the control plane together, conductor first.
(3) Talk about graceful compute shutdown and its limitations.
(4) Unpin RPC by re-checking service versions.
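Steps (2) and (4) hinge on RPC version pinning, which can be sketched as a cap on the version a sender will emit while older services are still running. This is illustrative only, not the real oslo.messaging API; the class and version strings are invented.

```python
# Illustrative only, not the real oslo.messaging API: senders cap
# outgoing message versions while older services are still running.

class RPCClient:
    SUPPORTED = "2.4"  # newest RPC version this binary can speak

    def __init__(self):
        self.pin = None  # e.g. "2.1" during the mixed-version phase

    def send_version(self):
        return self.pin if self.pin is not None else self.SUPPORTED

client = RPCClient()
client.pin = "2.1"   # step (2): pin before upgrading the control plane
print(client.send_version())  # 2.1
client.pin = None    # step (4): everything upgraded, lift the cap
print(client.send_version())  # 2.4
```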
Let's take a look at our users and what they want.
Reference:
https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/