This document discusses moving OpenStack instances between compute nodes. It begins by asking what is being moved (the guest configuration, storage, and state) and then discusses reasons for moving instances, such as node maintenance or capacity management. It describes the different mechanisms for moving instances, including evacuate, migrate, live migration, and helpers for moving all instances on a node. It then covers enhancements in OpenStack Liberty and Mitaka related to long-running live migrations, including scaling the allowed downtime and adding configuration options to control concurrent operations and timeouts.
Compute 101 - OpenStack Summit Vancouver 2015 - Stephen Gordon
OpenStack Compute (Nova) has been a core component of OpenStack since the original Austin release in 2010. In the intervening years, development has proceeded at a rapid pace, adding support for new virtualization technologies and exposing additional features. Learn how Compute fits into the OpenStack architecture, and how it interacts with other OpenStack components and the hypervisors it manages.
- What is Nova?
- Nova architecture
- How are instances spawned in OpenStack?
- Interaction of Nova with other OpenStack projects such as Neutron, Glance, and Cinder.
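As a rough illustration of how a single Nova request ties these projects together, here is a minimal sketch of the JSON body a client sends to Nova's create-server API (POST /servers): the image reference points at a Glance image, and the network reference points at a Neutron network. All UUIDs and names below are hypothetical placeholders.

```python
import json

# Hedged sketch of a Nova "create server" request body. The server name,
# image UUID (a Glance image), flavor ID, and network UUID (a Neutron
# network) are illustrative values, not real resources.
boot_request = {
    "server": {
        "name": "demo-instance",
        "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",   # Glance image UUID
        "flavorRef": "1",                                      # Nova flavor ID
        "networks": [
            {"uuid": "ff608d40-75e9-48cb-b745-77bb55b5eaf2"}   # Neutron network UUID
        ],
    }
}

# Serialize as a client would before POSTing to the Compute API.
body = json.dumps(boot_request)
```

On receiving such a request, nova-api validates it, the scheduler picks a host, and the compute service fetches the image from Glance and wires the port via Neutron, which is the interaction flow the session walks through.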
A Container Stack for OpenStack - OpenStack Silicon Valley - Stephen Gordon
OpenStack is an Infrastructure as a Service offering that provides a powerful abstraction layer for interacting with your datacenter infrastructure, supported by a wide array of pluggable drivers for existing physical and virtual infrastructure investments. In this session, you’ll learn how OpenStack is evolving to integrate with the Linux, Docker, Kubernetes stack to provide the ideal infrastructure platform for modern containerized applications. You’ll learn how you can modernize application delivery using the Linux, Docker, Kubernetes stack provided by Red Hat while seamlessly using the authentication, network, and storage infrastructure services provided by an underlying OpenStack cloud.
OpenStack “Liberty,” due for imminent release, represents the 12th release of the open source computing platform for public and private clouds. Recent OpenStack releases have focused on improving stability and enhancing the operator experience. This is still the case with Liberty, but there are also new features to consider.
Join Sean Cohen and Steve Gordon to review notable features of this new OpenStack release, including:
Network quality of service (QoS) support via a new extensible API for dynamically defining per-port and per-network QoS policies.
Mark host down API enhancement in support of external high-availability solutions, including Pacemaker, providing resilient instances in the event of compute node failure.
Enhanced Security Assertion Markup Language (SAML) support including dashboard integration, Ipsilon, and OpenID Connect support.
Role-based access control (RBAC) for networks, providing fine-grained permissions for sharing networks between tenants.
Dashboard support for database-as-a-service (Trove), subnet allocation, floating IP assignment, and volume migration.
Generic volume migration—adding the ability to migrate workloads from iSCSI to non-iSCSI back ends.
New Cinder replication API to allow block level replication between back ends.
Nondisruptive backup to allow backup while the volume is still attached, by performing backup from a temporary attached snapshot.
New Image signing and encryption to guarantee integrity by supporting signing and signature validation of bootable images.
In addition we’ll discuss the state of emerging projects including Manila and Zaqar.
A study and practice of OpenStack Kilo HA deployment. The Kilo documentation contains some errors, and it is hard to find a detailed document describing how to deploy an HA cloud based on the Kilo release. We hope these slides provide some clues.
Containers for the Enterprise: Delivering OpenShift on OpenStack for Performa... - Stephen Gordon
Imagine being able to stand up thousands of tenants with thousands of apps, running thousands of Docker-formatted container images and routes, all on a self-healing cluster. Now, take that one step further with all of those images being updatable through a single upload to the registry, and with zero downtime. In this session, Steve Gordon of the Red Hat OpenStack Platform team will show you just that. Steve will walk through a recent benchmarking deployment using the Cloud Native Computing Foundation’s (CNCF) new 1,000 node cluster with OpenStack and Red Hat’s OpenShift Container Platform, the enterprise-ready Kubernetes for developers.
KVM (Kernel-based Virtual Machine) is a full virtualization solution built into the Linux kernel. OpenStack Foundation user surveys consistently indicate that KVM is the most commonly used hypervisor for OpenStack deployments, managed using the Libvirt driver for OpenStack Compute (Nova). Despite this sustained popularity, development of the driver, and indeed of the underlying hypervisor itself, continues at a frantic pace.
This presentation will help you make sense of it all, starting with an overview of the way Nova, Libvirt, and KVM interact before analysing progress made in Kilo on utilizing key Libvirt/KVM features in Nova, including:
Instance vCPU pinning
Huge page backed instances
Enhanced NUMA topology awareness
...and more! The session will close with a discussion of how, in addition to exposing existing Libvirt/KVM features, emerging OpenStack use cases - such as Network Function Virtualization (NFV) and High Performance Computing (HPC) - are driving open innovation in the Libvirt, QEMU, and KVM projects themselves.
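The three features listed above are requested through flavor extra specs, which the Libvirt driver translates into guest configuration. A minimal sketch, using the documented hw:* extra spec keys; the flavor name, sizes, and values are illustrative only and the exact values to use depend on your release and hardware:

```python
# Hedged sketch: a Nova flavor definition whose extra specs drive the
# Kilo-era Libvirt/KVM features named in the abstract. The flavor name
# and sizes are hypothetical.
pinned_flavor = {
    "name": "m1.pinned.large",   # illustrative flavor name
    "vcpus": 4,
    "ram_mb": 8192,
    "extra_specs": {
        "hw:cpu_policy": "dedicated",   # instance vCPU pinning
        "hw:mem_page_size": "large",    # back guest RAM with huge pages
        "hw:numa_nodes": "2",           # expose a two-node guest NUMA topology
    },
}

# An operator would typically set these via the CLI, roughly:
#   nova flavor-key m1.pinned.large set hw:cpu_policy=dedicated
```

Instances booted from such a flavor land only on hosts that can satisfy the pinning, huge page, and NUMA constraints, which is what the scheduler enhancements in Kilo enable.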
An overview of the OpenStack Cinder project, which provides block storage services in OpenStack. This presentation is updated to cover the Havana release, with a look forward at what's expected in Icehouse.
OpenNebula Conf 2014: CentOS, QA and OpenNebula - Christoph Galuschka - NETWAYS
CentOS, the Community Enterprise OS, uses OpenNebula as the virtualization platform for its automated QA process. The OpenNebula setup consists of 3 nodes, all running CentOS-6, which handle the following tasks:
– Sunstone as the cloud controller
– a local mirror/DNS server/HTTP server for the VMs to pull in packages
– one VM running a Jenkins instance to launch the various tests (ci.de.centos.org)
– nginx on the cloud controller to forward HTTP traffic to the Jenkins VM
A public git repository (http://www.gitorious.org/testautomation) allows anyone who wants to contribute to pull the current test suite – t_functional, a series of bash scripts used to run functional tests of various applications, binaries, configuration files, and trademark issues. As new tests are added to the repo via personal clones and merge requests, those tests first need to complete a test run via Jenkins. Each test run currently consists of 4 VMs (one for each arch for C5 and C6 – C7 to come), which run the complete test suite. All VMs used for these tests are instantiated and torn down on demand, whenever the call to test-run a personal clone is issued (via IRC).
Once a run completes successfully, the request is merged into the main repo. The Jenkins node monitors this repository and automatically triggers another complete test run.
Besides these triggered test runs, the test suite also runs automatically every day. This is used to verify the functionality of published updates – a handful of faulty updates have already been discovered this way.
Besides t_functional, the Linux Test Project suite of tests is also run on a daily basis, likewise to verify the functionality of the OS and all updates.
The third setup is used to test the availability and functional integrity of published Docker images for CentOS.
All these tests are later – during the QA phase of a point release – used to verify the functionality of new packages inside the CentOS QA setup.
Deploying Containers at Scale on OpenStack - Stephen Gordon
Imagine being able to stand up thousands of tenants with thousands of apps, running thousands of Docker-formatted container images and routes, all on a self-healing cluster. Now, take that one step further with all of those images being updatable through a single upload to the registry, and with zero downtime. In this session, Steve Gordon of the Red Hat OpenStack Platform team will show you just that. Steve will walk through a recent benchmarking deployment using the Cloud Native Computing Foundation’s (CNCF) new 1,000 node cluster with OpenStack and Red Hat’s OpenShift Container Platform, the enterprise-ready Kubernetes for developers.
Deep Dive: OpenStack Summit (Red Hat Summit 2014) - Stephen Gordon
This deck begins with a high-level overview of where OpenStack Compute (Nova) fits into the overall OpenStack architecture, as demonstrated in Red Hat Enterprise Linux OpenStack Platform, before illustrating how OpenStack Compute interacts with other OpenStack components.
The session will also provide a grounding in some common Compute terminology and a deep-dive look into key areas of OpenStack Compute, including:
Compute APIs.
Compute Scheduler.
Compute Conductor.
Compute Service.
Compute Instance lifecycle.
Intertwined with the architectural information are details on horizontally scaling and dividing compute resources as well as customization of the Compute scheduler. You’ll also learn valuable insights into key OpenStack Compute features present in OpenStack Icehouse.
The Programmable Telecom Network
Douglas Tait
Director Telecoms Markets
Oracle
Stefano Gioia
Master Principal SDP Solution Specialist
Oracle
Enzo Amorino
Telecom Italia
WebRTC and Telecom APIs are the fundamental enablers of the programmable telecom network. We'll share several case studies on how Oracle's customers are rewriting the rules on telecom app development, with a special focus on Telecom Italia.
Cisco Data Center Orchestration Solution - Cisco Canada
Cisco Data Center Orchestration Solution
By: Sasha Lebovic, Data Center Consulting Systems Engineer
Join Cisco for a presentation on the management and orchestration of modern virtualized multiservice data center (VMDC) architectures built on integrated compute stack designs. The presentation will provide the audience with an overview of industry directions and Cisco participation in open industry standards such as OpenStack, as well as discussing specific Cisco and partner branded offerings in this area. The presentation will then focus on Cisco’s newest offerings for data center automation, including Cisco UCS Director and BMC’s Cloud Lifecycle Manager products. The presentation will include both a discussion and a demonstration of these products’ ability to provide automated data center operations as well as the key services for deploying private and public cloud offerings.
Rundeck + Nexus (from Nexus Live on June 5, 2014) - dev2ops
The SimplifyOps team was on Nexus Live talking about how people use Rundeck and the integration between Rundeck and Nexus.
Link to the webcast:
https://www.youtube.com/watch?v=eHaEEBEMRA8
DevOps is not a one-trick pony. It involves a lot of changes to culture and attitudes. But the cultural changes only happen when you have the technology to enable it all. Oracle provides a comprehensive set of tools and products for traditional IT and cloud environments to help you deliver on your DevOps goals.
Accenture DevOps: Delivering applications at the pace of business - Accenture Technology
Are you ready to shift to continuous delivery? DevOps, a leading software engineering innovation, makes this shift possible by bringing business, development, and operations teams together to streamline IT and apply more automated processes.
DevOps and Continuous Delivery Reference Architectures (including Nexus and o... - Sonatype
There are numerous examples of DevOps and Continuous Delivery reference architectures available, and each of them vary in levels of detail, tools highlighted, and processes followed. Yet, there is a constant theme among the tool sets: Jenkins, Maven, Sonatype Nexus, Subversion, Git, Docker, Puppet/Chef, Rundeck, ServiceNow, and Sonar seem to show up time and again.
Slides from the presentation given on 24/09/2015 in the Java track of The Developers Conference.
Summary:
Java programmers are used to developing with concern not only for what should be done, but also for how to do it. A simple piece of code to find the two largest values in a list can take precious development time. To solve this and other problems, Java 8 brings a series of improvements that bring to Java much of the functional programming seen in other languages. Lambda expressions, streams, and other Java novelties will be presented in this talk, simply and to the point.
http://www.thedevelopersconference.com.br/tdc/2015/portoalegre/trilha-java
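The abstract's motivating example, picking the two largest values from a list without hand-rolled loop bookkeeping, can be sketched declaratively. The talk itself uses Java 8 streams; the snippet below illustrates the same idea in Python purely for illustration:

```python
import heapq

values = [17, 3, 42, 8, 23]

# Declarative version: state *what* you want (the two largest values),
# not *how* to track maxima in a manual loop.
top_two = heapq.nlargest(2, values)  # → [42, 23]
```

The Java 8 equivalent would sort a stream in reverse order and take two elements (`values.stream().sorted(reverseOrder()).limit(2)`), which is the kind of lambda-and-streams style the session presents.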
Part 2 of 3: An innovative technology from Bishop-Wisecarver has effectively changed the face of curvilinear track design possibilities. Part of the highly successful HepcoMotion PRT2 family of products, the new and patent-pending 1-Trak product allows track systems to be manufactured in almost any conceivable 2D shape and from one single piece of material.
OpenNebulaConf 2016 - Hypervisors and Containers Hands-on Workshop by Jaime M... - OpenNebula Project
In this 90-minute hands-on workshop, some of the key contributors to OpenNebula will walk attendees through the configuration and integration aspects of the computing subsystem in OpenNebula. The session will also include lightning talks by community members describing aspects related to Hypervisors and Containers with OpenNebula:
Deployment scenarios
Integration
Tuning & debugging
Best practices
Many companies build new-age KVM clouds, only to find out that their applications & workloads do not perform well. In this talk we’ll show you how to get the most out of your KVM cloud and how to optimize it for performance: You’ll understand why performance matters and how to measure it properly. We’ll teach you how to optimize CPU and memory for ultimate performance and how to tune the storage layer for performance. You’ll find out what are the main components of an efficient new-age cloud and which network components work best. In addition, you’ll learn how to select the right hardware to achieve unmatched performance for your new-age cloud and applications.
Venko Moyankov is an experienced system administrator and solutions architect at StorPool Storage. He has experience managing large virtualization deployments, working in telcos, and designing and supporting the infrastructure of large enterprises. In the last year, his focus has been on helping companies globally to build the best storage solution according to their needs and projects.
How we created QuestDB Cloud, a Kubernetes-based SaaS around QuestDB... - javier ramirez
QuestDB is a high-performance open source database. Many people told us they would like to use it as a service, without having to manage the machines. So we set out to develop a solution that would let us launch QuestDB instances with fully managed provisioning, monitoring, security, and upgrades.
Quite a few Kubernetes clusters later, we managed to launch our QuestDB Cloud offering. This talk is the story of how we got there. I will talk about tools such as Calico, Karpenter, CoreDNS, Telegraf, Prometheus, Loki, and Grafana, but also about challenges such as authentication, billing, and multi-cloud, and about what you have to say no to in order to survive in the cloud.
Achieving the Ultimate Performance with KVM - DevOps.com
Building and managing a cloud is not an easy task. It needs solid knowledge, proper planning and extensive experience in selecting the proper components and putting them together.
Many companies build new-age KVM clouds, only to find out that their applications & workloads do not perform well. Join this webinar to learn how to get the most out of your KVM cloud and how to optimize it for performance.
Join this webinar and learn:
Why performance matters and how to measure it properly?
What are the main components of an efficient new-age cloud?
How to select the right hardware?
How to optimize CPU and memory for ultimate performance?
Which network components work best?
How to tune the storage layer for performance?
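One concrete piece of the CPU optimization covered above is pinning each guest vCPU to a dedicated host core via libvirt's <cputune> element. A minimal sketch of building that XML fragment; the vCPU-to-core mapping is illustrative, and in practice it should follow the host's NUMA layout:

```python
# Hedged sketch: generate the libvirt <cputune> fragment that pins each
# guest vCPU to a dedicated host core. The mapping below is a made-up
# example (vCPUs 0-3 pinned to host cores 4-7).
import xml.etree.ElementTree as ET

pinning = {0: 4, 1: 5, 2: 6, 3: 7}  # vCPU -> host core

cputune = ET.Element("cputune")
for vcpu, core in pinning.items():
    # Each <vcpupin> entry restricts one vCPU to one host core.
    ET.SubElement(cputune, "vcpupin", vcpu=str(vcpu), cpuset=str(core))

xml = ET.tostring(cputune, encoding="unicode")
```

The resulting fragment goes inside the domain XML; keeping pinned cores on the same NUMA node as the guest's memory is what avoids the cross-node latency the webinar warns about.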
Slides at OpenStack Summit 2017 Sydney
Session Info and Video: https://www.openstack.org/videos/sydney-2017/100gbps-openstack-for-providing-high-performance-nfv
Quantifying the Noisy Neighbor Problem in OpenStack - Nodir Kodirov
Two of the desirable features for private clouds are better control and predictable performance. Although public clouds have been extensively researched to characterize their unpredictable performance, private clouds have received less scrutiny.
In this talk, we will present how production workloads interfere with each other in an OpenStack-based cloud. We draw lessons from a several-month-long study of running workloads in different configurations on a highly available OpenStack implementation. We study the impact of noisy neighbors on the network and storage I/O performance of applications. We also look at the performance metrics of the OpenStack control plane and how API calls are impacted as the number of entities such as networks, routers, VMs, and volumes grows. Our study relies on a tool that we developed to create clean and noisy workload deployments, using micro-benchmarks as well as enterprise workloads such as Hadoop, Jenkins, and Redis.
Cumulus Linux supports great networking; what’s next? Matt Peterson (@dorkmatt), our resident expert from the office of the CTO, shares his previous experience, his views on devops, and how Cumulus Networks makes it easier to manage networks with ONIE, ZTP, and no CLI! “Devops is a lifestyle, shared responsibility.” With Linux as the network OS, “it’s all just one apt-get away!”
Practical information on how to Optimize Virtual Machines for High Performance by Boyan Krosnov, Chief Product Officer at StorPool Storage
Presentation delivered at OpenNebula TechDay Sofia on 25-th of February 2016
The road to enterprise-ready OpenStack storage as a service - Sean Cohen
The OpenStack storage projects continue to mature each cycle, exposing more and more enterprise cloud storage infrastructure functionality around high availability, security, business continuity, and provisioning, redefining enterprise storage as storage as a service for production, test, and development cloud workloads.
I invite you to come and listen to my presentation about how OpenStack and Gluster integrate in both Cinder and Swift.
I will give a brief description of the OpenStack storage components (Cinder, Swift, and Glance), followed by an intro to Gluster, and then present the integration points and some preferred topologies and configurations between Gluster and OpenStack.
Similar to Dude, This Isn't Where I Parked My Instance?
Toronto RHUG: Container-native virtualization - Stephen Gordon
November 2018 presentation covering Container-native virtualization, enabling OpenShift/Kubernetes as a common platform for application containers and virtual machines.
KubeVirt (Kubernetes and Cloud Native Toronto) - Stephen Gordon
In this session Stephen will present the use cases for and current state of the KubeVirt project (http://www.kubevirt.io/), which aims to build a virtualization API for Kubernetes in order to manage virtual machines which themselves run in Kubernetes pods.
You will also hear how this project differs from, and is complementary to, the recently announced Kata Containers (https://katacontainers.io/) project.
OpenStackTO: Friendly coexistence of Virtual Machines and Containers on Kuber... - Stephen Gordon
KubeVirt is intended to provide a convergence point for the data center of the future, using Kubernetes as an infrastructure fabric for both application container and virtual machine workloads. Using a unified management approach simplifies deployments, allows for better resource utilization, and supports different workloads in a more optimal way. This session will outline how the KubeVirt project seeks to achieve this while using the extensible nature of Kubernetes in a way that provides a developer workflow that is as consistent as possible with the patterns used for working with application containers.
Kubernetes and OpenStack at Scale at OpenStack Summit Boston 2017
Imagine being able to stand up thousands of tenants with thousands of apps, running thousands of Docker-formatted container images and routes, all on a self-healing cluster and elastic infrastructure. Now, take that one step further - all of those images being updatable through a single upload to the registry, and with zero downtime. In this session, you will see just that.
In this presentation, we will walk through a recent benchmarking deployment using Kubernetes and OpenStack on the Cloud Native Computing Foundation’s (CNCF's) 1,000 node cluster with OpenStack and Red Hat’s OpenShift Container Platform, the enterprise-ready Kubernetes for developers.
You'll also hear what's been happening in subsequent rounds of testing in Red Hat's own SCALE lab and the CNCF cluster, and how we are working with the relevant open source communities, including OpenStack, Kubernetes, and Ansible, to continue to raise the bar for horizontal scaling of these platforms via community-powered innovation.
A brief introduction to Publican for members of the OpenStack documentation community. Originally presented at the OpenStack documentation bootcamp on the 10th of September 2013
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we will discuss what cloud/on-premises strategy we may need to apply this to our own infrastructure from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Leading Change strategies and insights for effective change management pdf 1.pdf
Dude, This Isn't Where I Parked My Instance?
1. DUDE, THIS ISN’T WHERE I PARKED MY
INSTANCE?
Moving instances around your OpenStack cloud for fun and profit.
Stephen Gordon (@xsgordon)
Sr. Technical Product Manager, Red Hat
October 29th, 2015
2. 2
● What are we moving? *
● Why are we moving instances?
● How are we moving instances?
● What new enhancements do we get in:
○ Liberty?
○ Mitaka?
* #spoileralert: instances
AGENDA
4. 4
GUEST
CONFIGURATION
● Guest configuration
including vCPUs,
memory, devices etc.
GUEST
STORAGE
● Initial image or volume.
WHAT ARE WE MOVING?
What is an instance (“server”)?
All paths for moving instances involve moving some subset of these elements.
GUEST
STATE
● In-memory state.
● On-disk state.
6. 6
WHEN PERFORMING
NODE MAINTENANCE
● Adding hardware
● Updating software
● Response to imminent
failure
IN REACTION TO NODE
FAILURE
● Host lost power
● Host lost connectivity
● Host otherwise went
down (e.g. DC fire)
FOR CAPACITY
MANAGEMENT
● Consolidate or spread
instances to save
power or avoid
resource contention
issues respectively.
WHY ARE WE MOVING INSTANCES?
Moving instances is an operational tool for use...
8. 8
$ nova help | grep -E '(migrat|evacuat)'
evacuate Evacuate server from failed host.
live-migration Migrate running server to a new machine.
migrate Migrate a server. The new host will be..
migration-list Print a list of migrations.
host-servers-migrate Migrate all instances of the specified host to...
host-evacuate Evacuate all instances from failed host.
host-evacuate-live Live migrate all instances of the specified host to...
MECHANISMS FOR MOVING INSTANCES
Let me google that for you!
10. 10
EVACUATE
Rebuild an instance that is
currently on a compute node
that is down on a different
compute node.
MIGRATE
Rebuild* an instance that is
currently on a compute node
that is up on a different
compute node**.
LIVE-MIGRATION
Move an instance to a
different compute node
without downtime.
MECHANISMS FOR MOVING INSTANCES
* By rebuild we really mean resize.
** Where this behavior will change if you turn on resizing to the same host (off by default)
11. 11
HOST-EVACUATE
Rebuild all instances that are
currently on a compute node
that is down on another
compute node.
HOST-SERVERS-MIGRATE
Rebuild* all instances that are
currently on a compute node
that is up on another compute
node**.
HOST-EVACUATE-LIVE
Move all instances on a
compute node to another
compute node without
downtime.
HELPERS FOR MOVING INSTANCES
* By rebuild we really mean resize.
** Where this behavior will change if you turn on resizing to the same host (off by default)
13. 13
● Works when compute node hosting instance fails due to a hardware failure or other
issue.
● Rebuilds instance on a new compute node either selected by the scheduler or
optionally the user initiating the evacuation.
○ Benefit over and above starting afresh is keeping same UUID, IP etc.
● Requires that Nova recognizes the source compute node is down.
● Shared storage is needed to preserve the instance’s on-disk data, but it is not mandatory — without it the instance is rebuilt from its base image.
● Allows injecting a new admin password (if shared storage is not being used).
EVACUATION
nova evacuate [--password <password>] [--on-shared-storage] <server> [<host>]
14. 14
$ nova evacuate instance-001
+-----------+--------------+
| Property | Value |
+-----------+--------------+
| adminPass | pjaDV46p94Nz |
+-----------+--------------+
$
EVACUATION
nova evacuate [--password <password>] [--on-shared-storage] <server> [<host>]
16. 16
● Works when compute node hosting instance is up (at least to start with…).
● Rebuilds instance on a new host selected by the scheduler.
○ Actually uses the resize path in the code base.
○ Shuts down instance.
○ Copies disk to the new compute node.
○ Starts the instance there and removes it from the source hypervisor.
● Instance’s current host must be operational.
● Like resize, it requires a manual confirmation step.
● Unlike evacuation and live migration, it doesn’t allow specifying a target host to
override the scheduler.
COLD MIGRATION
nova migrate [--poll] <server>
17. 17
$ nova migrate instance-001 --poll
Server migrating... 100% complete
Finished
$ nova list
+--------------+--------------+---------------+------------+-------------+ ...
| ID | Name | Status | Task State | Power State | ...
+--------------+--------------+---------------+------------+-------------+ ...
| 5819a2e0-... | instance-001 | VERIFY_RESIZE | - | Running | ...
+--------------+--------------+---------------+------------+-------------+ ...
$ nova resize-confirm instance-001
COLD MIGRATION
nova migrate [--poll] <server>
19. 19
● Moves a powered-on virtual machine to a new compute node without any (noticeable)
downtime.
● Two approaches to live migration:
○ Using shared storage (including volume-based).
■ Requires either /var/lib/nova/instances/ to be on shared storage (e.g. NFS,
GlusterFS, Ceph, etc.) across all compute nodes in the migration domain; or
■ Volume-backed instances
■ Still requires memory state transfer/sync
○ Using block migration.
■ Direct transfer/sync of not just memory state but also disks from source
compute node to destination
LIVE MIGRATION
$ nova live-migration [--block-migrate] [--disk-over-commit] <server> [<host>]
20. 20
1. Scheduler selects destination host, unless the user specified one
2. Check migration source and destination (disk, ram, cpu model, mapped volumes)
3. Iterative pre-copy, copying memory pages from the active virtual machine on the source
to a new paused instance on the destination
4. Source instance is paused while remaining memory pages and CPU state is copied.
5. Destination instance is started, source is cleaned up
LIVE MIGRATION - HOW IT WORKS
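The iterative pre-copy loop above can be sketched as a toy convergence model. This is an illustrative simplification, not Nova or QEMU code — the function name and parameters are hypothetical — but it shows why migrations converge only when the network can outpace the guest’s dirty rate:

```python
def precopy_rounds(ram_mb, dirty_mb_per_s, link_mb_per_s,
                   max_downtime_ms, max_rounds=30):
    """Toy model of iterative pre-copy: returns the number of copy rounds
    before the final pause, or None if the migration never converges."""
    remaining_mb = ram_mb  # round 1 copies all of guest RAM
    for rounds in range(1, max_rounds + 1):
        copy_time_s = remaining_mb / link_mb_per_s
        # Pages the guest dirtied during this round must be re-sent next round.
        remaining_mb = dirty_mb_per_s * copy_time_s
        # Pause and finalize once the leftover fits in the allowed downtime.
        if (remaining_mb / link_mb_per_s) * 1000 <= max_downtime_ms:
            return rounds
    return None  # guest dirties memory faster than the network can copy it

# A 4 GB guest on a ~1 GB/s link converges in a couple of rounds...
print(precopy_rounds(4096, dirty_mb_per_s=50, link_mb_per_s=1000, max_downtime_ms=100))
# ...but if the dirty rate exceeds the link speed, it never will.
print(precopy_rounds(4096, dirty_mb_per_s=2000, link_mb_per_s=1000, max_downtime_ms=100))
```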
21. 21
● Maximum performance is obtained by exposing as many host CPU features to the guest
as possible
● Live migration will fail if destination host is not able to expose the same CPU features to
guests as the source host
● Performance versus Flexibility trade-off
● Nova provides configuration keys, including libvirt_cpu_mode, for deployers to make
the performance versus flexibility trade-off for their environment
○ host-passthrough
○ host-model
○ custom
LIVE MIGRATION - HOW IT DOESN’T WORK
CPU mode/model compatibility
22. 22
$ virsh cpu-models x86_64
...
SandyBridge
Westmere
Nehalem
...
$ grep 'libvirt_cpu_mode' /etc/nova/nova.conf
libvirt_cpu_mode = custom
libvirt_cpu_model = SandyBridge
LIVE MIGRATION - HOW IT DOESN’T WORK
CPU mode/model compatibility
Can also use qemu-kvm -cpu help
23. 23
● Incompatible QEMU machine types
● Inconsistent networking configuration
○ Source hypervisor must be able to hit destination’s live_migration_uri and vice
versa (live_migration_uri = qemu+tcp://%s/system)
● Inconsistent clocks
○ Synchronize clocks using ntp or chronyd
● Incompatible VNC listening addresses
● Incompatible or no SSH tunnelling configuration
LIVE MIGRATION - OTHER WAYS TO FAIL
24. 24
● Migrations take too long or fail to complete.
● Many common user operations are not supported during migration (e.g. pause).
● Need to use virsh, bypassing Nova, to:
○ Control a running migration (e.g. throttle or cancel)
○ Monitor a running migration
○ Tune migration max downtime
● Certain instance configurations cannot be migrated:
○ Instances using a config drive (e.g. config_drive_format=iso9660) or mixing
local/remote storage
○ Instances with passed-through devices (SR-IOV, GPU, etc.)
● Live migration doesn’t correctly account for overcommit when checking destination host
validity.
● The tenant admin initiating the migration needs to know whether shared storage is available or block migration is required.
LIVE MIGRATION - OTHER OPERATOR ISSUES
26. 26
● Primary factors in determining how long it will take to migrate a guest:
○ Amount of guest RAM
○ Speed with which guest RAM is being dirtied
○ Speed of the migration network
● Previously, live migrations in OpenStack ran with a fixed maximum downtime as
determined by QEMU.
● As of Liberty:
○ The downtime allowable is scaled up exponentially (to a limit) to allow a better
chance for completion.
○ The number of concurrent outbound live migrations is limited
○ The number of concurrent inbound build requests is limited
● QEMU endeavors to estimate when the number of dirty pages is low enough to finalize
LONG RUNNING LIVE MIGRATIONS
I’m gonna let you finish...but...
27. 27
● Scaling downtime to finalize migration:
○ live_migration_downtime - Maximum permitted guest downtime for switchover (minimum
100ms)
○ live_migration_downtime_steps - Number of incremental steps to reach max downtime
value (minimum 3)
○ live_migration_downtime_delay - Time to wait, in seconds, between each step increase
of the max downtime (minimum 10s)
● Timeouts:
○ live_migration_completion_timeout - Time to wait (in seconds) for migration to complete
(default 800 seconds, 0 means no timeout) - is scaled by GB of guest RAM
○ live_migration_progress_timeout - Time to wait (in seconds) for migration to make forward
progress (default 150 seconds).
LONG RUNNING LIVE MIGRATIONS
New configuration keys to control this behavior...
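The note that the completion timeout “is scaled by GB of guest RAM” can be read as a simple linear multiplier. A minimal sketch, assuming the effective timeout is the configured value times the gigabytes of data to transfer (the helper name is hypothetical):

```python
def effective_completion_timeout(configured_timeout_s, data_gb):
    """Scale the configured completion timeout by the GB of guest data;
    0 is documented as 'no timeout', modeled here as None."""
    if configured_timeout_s == 0:
        return None
    return configured_timeout_s * data_gb

print(effective_completion_timeout(800, 3))  # 3 GB guest -> 2400 seconds
```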
28. 28
● Concurrent operations:
○ max_concurrent_live_migrations - Maximum outbound live migrations to run concurrently,
defaults to 1. Do not change unless absolutely sure.
○ max_concurrent_builds - Maximum inbound instance builds to run concurrently, defaults to
10.
LONG RUNNING LIVE MIGRATIONS
New configuration keys to control this behavior...
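Pulled together, the keys from this slide and the previous one might land in nova.conf like this. This is a sketch: section placement follows the Liberty-era layout, with the live_migration_* keys under [libvirt], and the values shown are the defaults and example settings quoted above.

```ini
[DEFAULT]
# Concurrency limits (defaults; raise max_concurrent_live_migrations with care)
max_concurrent_live_migrations = 1
max_concurrent_builds = 10

[libvirt]
# Downtime scaling and timeouts for long running live migrations
live_migration_downtime = 400
live_migration_downtime_steps = 10
live_migration_downtime_delay = 30
live_migration_completion_timeout = 800
live_migration_progress_timeout = 150
```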
29. 29
● Delay between steps is set to 30 * 3 = 90 seconds (seconds of delay * GB of RAM).
○ 0 seconds -> set downtime to 37ms
○ 90 seconds -> set downtime to 38ms
○ 180 seconds -> set downtime to 39ms
○ 270 seconds -> set downtime to 42ms
○ 360 seconds -> set downtime to 46ms
○ 450 seconds -> set downtime to 55ms
○ 540 seconds -> set downtime to 70ms
○ 630 seconds -> set downtime to 98ms
○ 720 seconds -> set downtime to 148ms
○ 810 seconds -> set downtime to 238ms
○ 900 seconds -> set downtime to 400ms
LONG RUNNING LIVE MIGRATIONS EXAMPLE
400 millisecond max, 10 steps, 30 second delay, 3 GB guest
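The schedule above can be reproduced with a short function. This is a reconstruction from the numbers on this slide — a fixed offset plus an exponentially growing term — not a copy of Nova’s source, so treat the names and exact rounding as assumptions:

```python
def downtime_steps(max_downtime_ms, steps, delay_s, guest_ram_gb):
    """Yield (elapsed_seconds, downtime_ms) pairs for a live migration."""
    delay = int(delay_s * guest_ram_gb)          # delay scales with guest RAM
    offset = max_downtime_ms / float(steps + 1)  # starting floor (~36ms here)
    base = (max_downtime_ms - offset) ** (1.0 / steps)
    for i in range(steps + 1):
        yield (delay * i, int(offset + base ** i))

# Reproduce the slide: 400ms max, 10 steps, 30s delay, 3 GB guest
for elapsed, downtime in downtime_steps(400, steps=10, delay_s=30, guest_ram_gb=3):
    print("%3d seconds -> set downtime to %dms" % (elapsed, downtime))
```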
30. 30
● Liberty provides a mechanism for external tools to report into Nova
when a node has failed (“mark host down”/”force down” API call)
● As soon as host has been explicitly marked down evacuation can
commence, triggered by the external tool.
● Used to provide “instance high availability” using e.g. Pacemaker.
○ http://redhatstackblog.redhat.com/2015/09/24/highly-available-virtual-machines-in-rhel-openstack-platform-7/
MARK HOST DOWN API CALL
32. 32
Short Term
● CI coverage
● Improve API documentation
● Support for migrating instances with mixed storage
● Support for pausing (and perhaps cancelling) migrations
● Better resource tracking
● Use Libvirt storage pools instead of SSH for migrate/resize.
○ Enabler for other work including migrating suspended instances.
● Correct memory overcommit handling for live migration.
Mid to Long Term
● TLS encryption (work underway in QEMU)
● Auto-convergence - adjusting instance activity to help complete migration
● Post copy migration - start instance at destination and then copy memory over on demand
CURRENTLY UNDER DISCUSSION
34. 34
● Where can I find the slides?
○ http://www.slideshare.net/sgordon2
● Where can I submit anonymised feedback?
○ Session Feedback Survey in the official OpenStack Summit App
● Where can I contact you?
○ Twitter: @xsgordon
○ Email: sgordon@redhat.com
○ IRC: sgordon on irc.freenode.net
● How can I get involved?
○ https://etherpad.openstack.org/p/mitaka-live-migration
FAQ
36. 36
● Outstanding work items:
○ Etherpad: https://etherpad.openstack.org/p/mitaka-live-migration
○ Bug list: https://docs.google.com/spreadsheets/d/19MFatOpjePS4JtkVHXCh6Qa8XUf6T2t0Igy1PucZ3Zk/edit#gid=2127877307
● Past presentations:
○ Live Migration at HP Public Cloud:
■ https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/live-migration-at-hp-public-cloud
○ Intel Dive into VM Live Migration:
■ https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/dive-into-vm-live-migration
RECOMMENDED READING, VIEWING, AND
REFERENCES