DragonFlow: SDN-Based Distributed Virtual Router for OpenStack Neutron (Eran Gampel)
Dragonflow is an implementation of a fully distributed virtual router for OpenStack® Neutron™, based on a lightweight SDN controller.
blog.gampel.net
Dragonflow is an integral part of OpenStack that provides distributed SDN capabilities for Neutron, with a focus on scale, performance, and low latency. It uses a lightweight, easily extensible distributed control plane with pluggable database support. Current features include L2/L3 networking, tunnels, distributed DHCP, and selective database distribution. The roadmap includes container support, SNAT/DNAT, a reactive database, and service chaining.
This document summarizes Dragonflow, an OpenStack Neutron plugin that implements a distributed SDN controller. Some key points:
- Dragonflow provides a full implementation of the Neutron API and acts as a lightweight distributed SDN controller using a pluggable database.
- It aims to provide advanced networking services like security groups, load balancing, and DHCP in an efficient, scalable way.
- As an integral part of OpenStack, it is fully open source and designed for performance, scalability, and low latency. Its distributed control plane can sync policies across compute nodes.
Neutron Done the SDN Way
Dragonflow is an open source distributed control plane implementation of Neutron, which is an integral part of OpenStack. Dragonflow introduces innovative solutions and features to implement networking and distributed network services in a manner that is both lightweight and simple to extend, yet targeted towards performance-intensive and latency-sensitive applications. Dragonflow aims to solve the performance and scalability bottlenecks of the Neutron reference implementation.
Dockerizing the Hard Services: Neutron and Nova (clayton_oneill)
A talk about the benefits and pitfalls involved in successfully running complex services like Neutron and Nova inside Docker containers.
Topics include:
* What magic incantations are needed to run these services at all?
* How to prevent HA router failover on service restarts.
* How to prevent network namespaces from breaking everything.
* Bonus: How network namespace fixes also helped fix Cinder NFS backend
Orchestration Tool Roundup: Kubernetes vs. Docker vs. Heat vs. Terraform vs... (Nati Shalom)
Video recording: https://www.youtube.com/watch?v=tGlIgUeoGz8
It’s no news that containers represent a portable unit of deployment, and OpenStack has proven an ideal environment for running container workloads. However, things usually become more complex when an application is built out of multiple containers. What’s more, setting up a cluster of container images can be fairly cumbersome: each container must be made aware of the others and expose the intimate details they need to communicate, which is not trivial, especially if they are not on the same host.
These scenarios have instigated the demand for some kind of orchestrator, and the list of container orchestrators is growing fairly fast. This session will compare the different orchestration projects out there - from Heat to Kubernetes to TOSCA - and help you choose the right tool for the job.
Session link from the summit: https://openstacksummitmay2015vancouver.sched.org/event/abd484e0dedcb9774edda1548ad47518#.VV5eh5NViko
This document discusses Neutron networking status in OpenStack, including features like Distributed Virtual Router (DVR) support. DVR allows distributed routing to remove bottlenecks and enable one-hop east-west traffic between VMs on different hypervisors. The document provides configuration options for enabling DVR and an example multi-node Devstack configuration for testing DVR on compute and network nodes. It also includes diagrams illustrating how DVR works to deliver traffic between VMs on different networks and hypervisors.
Scaling OpenStack Networking Beyond 4000 Nodes with Dragonflow - Eshed Gal-Or... (Cloud Native Day Tel Aviv)
As OpenStack matures, more users move from “dipping a toe” to deploying at large scale, with thousands of nodes.
OpenStack networking has long been a limiting factor in scaling beyond a few hundred nodes, forcing users to turn to cell splitting, or to completely offload networking to the underlay systems and forfeit the overlay network altogether.
Dragonflow is a fully distributed, open source SDN implementation of Neutron that handles large-scale deployments without splitting into cells.
In testing we've conducted, we were able to scale to 4000+ controllers (each controller is typically deployed on a compute node) while maintaining the same performance we had on a small 30-node environment.
2014 OpenStack Summit - Neutron OVS to LinuxBridge Migration (James Denton)
Presentation titled 'Migrating production workloads from OVS to LinuxBridge'. Presented at the Fall 2014 OpenStack Summit in Paris, this slide deck introduced the possibility of migrating live workloads from Open vSwitch to LinuxBridge with minimal downtime.
OpenStack networking can use either VLAN tagging or GRE tunneling to provide logical isolation between tenant networks. With VLAN, packets are tagged with a VLAN ID at the compute and network nodes to associate them with a particular tenant network. With GRE, packets are encapsulated with a GRE header that includes a tunnel ID to associate them with a tenant network. Security groups are applied using iptables rules to filter traffic between VMs in different networks.
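The tenant-isolation idea above can be sketched as a small segmentation-ID allocator, loosely modeled on how Neutron's ML2 type drivers assign a VLAN ID or GRE tunnel key per tenant network. This is an illustrative sketch only; the class name, ranges, and API are assumptions, not Neutron code.

```python
# Illustrative sketch: a per-tenant-network segmentation ID allocator,
# in the spirit of ML2's VLAN/GRE type drivers. Ranges are assumptions.
class SegmentAllocator:
    def __init__(self, network_type, id_range):
        self.network_type = network_type   # "vlan" or "gre"
        self.free_ids = list(id_range)     # pool of VLAN IDs or tunnel keys
        self.allocations = {}              # tenant network -> segmentation ID

    def allocate(self, network_name):
        """Assign a free segmentation ID to a tenant network (idempotent)."""
        if network_name in self.allocations:
            return self.allocations[network_name]
        if not self.free_ids:
            raise RuntimeError("segmentation ID pool exhausted")
        seg_id = self.free_ids.pop(0)
        self.allocations[network_name] = seg_id
        return seg_id

    def release(self, network_name):
        """Return a network's segmentation ID to the pool."""
        seg_id = self.allocations.pop(network_name)
        self.free_ids.append(seg_id)

vlan = SegmentAllocator("vlan", range(100, 200))
print(vlan.allocate("tenant-a-net"))   # 100
print(vlan.allocate("tenant-b-net"))   # 101
```

Whether the ID becomes an 802.1Q tag on the wire or a GRE tunnel key in the encapsulation header is then a property of the network type, not of the allocation logic.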
OpenStack: Virtual Routers On Compute Nodes (clayton_oneill)
Learn the production pros and cons of operating Neutron legacy and HA routers on compute nodes in your production cloud. Not ready for DVR or third-party network overhauls? Virtual router network “hot spots” got you down? Large virtual router failure domains keeping you up late at night? Neutron reference architectures not providing a scalable routing solution? If you answered yes to any of these questions then this talk is for you.
Overview of Distributed Virtual Router (DVR) in OpenStack/Neutron (vivekkonnect)
The document discusses distributed virtual routers (DVR) in OpenStack Neutron. It describes the high-level architecture of DVR, which distributes routing functions from network nodes to compute nodes to improve performance and scalability compared to legacy centralized routing. Key aspects covered include east-west and north-south routing mechanisms, configuration, agent operation modes, database extensions, scheduling, and support for services. Plans are outlined for enhancing DVR in upcoming OpenStack releases.
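The east-west benefit described above can be sketched as a hop count: with legacy centralized routing, inter-subnet traffic hairpins through a network node, while DVR routes on the source hypervisor. The function and host names below are illustrative assumptions, not Neutron internals.

```python
# Hypothetical sketch of the DVR east-west idea: each hypervisor hosts a
# local instance of the tenant router, so routed traffic between VMs on
# different hypervisors travels one hop, instead of via the network node.
def next_hop(mode, src_host, dst_host, network_node="net-node-1"):
    """Return the list of hosts a routed east-west packet traverses."""
    if mode == "legacy":
        # Centralized routing: traffic hairpins through the network node.
        return [src_host, network_node, dst_host]
    if mode == "dvr":
        # Distributed routing: routed locally on the source hypervisor.
        return [src_host, dst_host]
    raise ValueError(mode)

print(next_hop("legacy", "compute-1", "compute-2"))
print(next_hop("dvr", "compute-1", "compute-2"))
```

The shorter path is also why DVR removes the network node as an east-west bottleneck and single point of failure.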
Tech Talk by Gal Sagie: Kuryr - Connecting containers networking to OpenStack... (nvirters)
Project Kuryr aims to provide Neutron networking abstractions and APIs for container networking to avoid vendor lock-in. It maps container networking operations to the Neutron API and allows different Neutron plugins like OVN, Midonet, and Calico to provide networking for containers. This provides networking features to containers like security groups, LBaaS, FWaaS, and avoids issues with current container networking solutions around performance, management, and flexibility. Kuryr provides a common base for Neutron vendors to support VM and container networking.
DockerCon US 2016 - Docker Networking Deep Dive (Madhu Venugopal)
Docker networking provides a networking fabric for containers called libnetwork that defines the container networking model and provides features like multi-host networking, service discovery, load balancing, and security. New features in Docker 1.12 include networking in swarm mode without an external key-value store, macvlan driver support, a gossip-based secure control plane, optional IPSec for the data plane, built-in DNS for service discovery and load balancing, and a routing mesh for edge routing.
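The built-in DNS service discovery mentioned above can be sketched as round-robin name resolution over a service's task IPs. This toy model reflects the spirit of Docker 1.12's `dnsrr` endpoint mode (the default mode instead load-balances through a virtual IP); the class, service names, and IPs are assumptions for the example.

```python
# Toy sketch of DNS-based round-robin service discovery, in the spirit of
# Docker 1.12 swarm-mode networking. Names and addresses are illustrative.
from itertools import cycle

class ServiceDNS:
    def __init__(self):
        self.records = {}   # service name -> iterator over task IPs

    def register(self, service, ips):
        """Register a service and the IPs of its running tasks."""
        self.records[service] = cycle(ips)

    def resolve(self, service):
        """Each lookup returns the next task IP in round-robin order."""
        return next(self.records[service])

dns = ServiceDNS()
dns.register("web", ["10.0.0.2", "10.0.0.3"])
print(dns.resolve("web"))   # 10.0.0.2
print(dns.resolve("web"))   # 10.0.0.3
print(dns.resolve("web"))   # 10.0.0.2
```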
DevOops - Lessons Learned from an OpenStack Network Architect (James Denton)
Join us as we discuss various OpenStack Neutron network configuration options and issues experienced with VLAN, VXLAN, L2population, multicast, Neutron routers, Open vSwitch, and more.
This document provides an overview and agenda for a presentation on OpenStack networking. It begins with an overview of OpenStack architecture and services like Compute, Networking, Identity and Image services. It then discusses basic network components like controllers, compute nodes and networking plugins. Next, it covers networking process flows and dives deeper into the Neutron networking plugin, including the Modular Layer 2 plugin framework and drivers like Open vSwitch. It concludes with a planned demonstration of networking functionality in an OpenStack lab environment.
Accelerating Envoy and Istio with Cilium and the Linux Kernel (Thomas Graf)
The document discusses how Cilium can accelerate Envoy and Istio by using eBPF/XDP to provide transparent acceleration of network traffic between Kubernetes pods and sidecars without any changes required to applications or Envoy. Cilium also provides features like service mesh datapath, network security policies, load balancing, and visibility/tracing capabilities. BPF/XDP in Cilium allows for transparent TCP/IP acceleration during the data phase of communications between pods and sidecars.
This document provides an overview and agenda for a Docker networking deep dive presentation. The presentation covers key concepts in Docker networking including libnetwork, the Container Networking Model (CNM), multi-host networking capabilities, service discovery, load balancing, and new features in Docker 1.12 like routing mesh and secured control/data planes. The agenda demonstrates Docker networking use cases like default bridge networks, user-defined bridge networks, and overlay networks. It also covers networking drivers, Docker 1.12 swarm mode networking functionality, and how packets flow through systems using routing mesh and load balancing.
A study and practice of OpenStack Kilo-release HA deployment. The Kilo documentation has some errors, and it is hard to find a detailed document describing how to deploy an HA cloud based on the Kilo release. Hopefully these slides can provide some clues.
Kyle Mestery provided an update on OpenStack Networking (Neutron) priorities for the Liberty release. Key areas of focus include continuing the plugin decomposition effort, improving the API, enabling quality of service features, and integrating network services like load balancing and VPN. Governance changes are also underway to help scale Neutron development.
Docker Network Performance in the Public Cloud (Arjan Schaaf)
Presentation from Container Camp London 2015 which compares the network performance of containers on AWS and Azure. The SDN solutions included in these tests are Flannel, Weave, and Project Calico.
This document summarizes Jakub Pavlik's experience deploying Contrail virtual networks with OpenStack at tcp cloud. Key points include:
- Contrail 1.05 was deployed with Havana on CentOS using SaltStack instead of Fabric for configuration management.
- The deployment consisted of 3 OpenStack controllers, 2 Contrail controllers, and used HA technologies like Corosync/Pacemaker and Galera for high availability.
- Some issues were encountered with Fabric not providing true HA and missing options for cinder/glance backends. BGP peering also required restoration after control node failures.
Jakub Pavlik discusses high availability versus disaster recovery in OpenStack clouds. He describes four types of high availability in OpenStack: physical infrastructure, OpenStack control services, virtual machines, and applications. For each type, he outlines concepts like active/passive and active/active configurations, specific technologies used like Pacemaker, Corosync, HAProxy, and MySQL Galera, and considerations for shared and non-shared storage. Finally, he provides examples of high availability architectures and methods used by different OpenStack vendors.
Multi-Tier App Network Topology with Neutron (Sadique Puthen)
This document discusses how Neutron builds network topology for multi-tier applications. It explains that Neutron uses network namespaces to isolate tenant resources and correlate application topology to Neutron components. It provides details on how Neutron creates networks, routers, load balancers, firewalls, and VPN connections to build the necessary infrastructure for a sample multi-tier application topology across two OpenStack sites.
- The document discusses Neutron L3 HA (VRRP) and summarizes a presentation given on the topic.
- Neutron L3 HA uses the VRRP protocol to provide redundancy and failover for virtual routers across multiple network nodes. A heartbeat network is created for each tenant using their tenant network.
- When a router is created, a heartbeat port and interface are created on each L3 agent node using the tenant's heartbeat network to enable communication between the agents for the VRRP implementation.
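The failover behavior described above can be sketched as a VRRP-style master election: the reachable router instance with the highest priority becomes master, and a backup takes over on failure. This is a simplified model with assumed agent names; real VRRP (RFC 5798) breaks priority ties on the highest primary IP address, not on name.

```python
# Rough sketch of the VRRP election used by Neutron L3 HA, simplified.
# Agent names and priorities are illustrative assumptions.
def elect_master(instances):
    """instances: {agent_name: (priority, alive)} -> name of the master, or None."""
    alive = {name: prio for name, (prio, up) in instances.items() if up}
    if not alive:
        return None
    # Highest priority wins; ties broken by name here for determinism
    # (real VRRP breaks ties on the highest primary IP address).
    return max(alive, key=lambda name: (alive[name], name))

routers = {"l3-agent-1": (100, True), "l3-agent-2": (50, True)}
print(elect_master(routers))              # l3-agent-1
routers["l3-agent-1"] = (100, False)      # simulate node failure
print(elect_master(routers))              # failover to l3-agent-2
```

The per-tenant heartbeat network carries exactly these keepalive advertisements between the L3 agents, so each agent can detect when the master stops advertising.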
OpenStack Neutron Havana Overview - Oct 2013 (Edgar Magana)
Presentation giving an overview of OpenStack Neutron, delivered at three meetups in NYC, Connecticut, and Philadelphia during October 2013 by Edgar Magana from PLUMgrid.
Security best practices for Hyper-V and server virtualisation [SVR307] (Louis Göhl)
The document provides information on the Microsoft Assessment & Planning Toolkit 5.0 customer technology preview and Visual Studio Team System 2010 Lab Management Beta 2. It also covers topics like Windows Server 2008 R2 Hyper-V security best practices, Hyper-V networking configurations, Windows Server 2008 R2: SCONFIG, and Hyper-V best practices.
Virtualization and Open Virtualization Format (OVF) (rajsandhu1989)
This document discusses virtualization and its role as the backbone of cloud computing. It defines virtualization as the creation of virtual versions of hardware platforms, operating systems, storage devices and network resources. The document outlines different types of virtualization including hardware/server virtualization, storage virtualization, network virtualization, and desktop virtualization. It describes how server virtualization works using hypervisors to divide physical servers into multiple virtual machines. The benefits of virtualization discussed include resource sharing, load balancing, easier backup and recovery, and scalability.
In early March, Harbour IT hosted a breakfast session in conjunction with VMware – “vForum Wrap – All the best bits from VMware’s vForum 2010”.
Held in both the Norwest and Sydney offices, local customers were given a VMware update from guest speaker Bo Leksono. The presentation covered the latest VMware technology and the steps to follow on your journey to the cloud.
Private Cloud Academy: Backup and DPM 2010Aidan Finn
The session I ran on how to design CSV for Hyper-V backups, and how to use DPM 2010, at the Microsoft/System Dynamics Private Cloud Academy in Dublin, Ireland.
p2 is an extensible provisioning platform for OSGi systems that helps manage all aspects of software installation, deployment, updating, and servicing from build time to runtime. It provides a model where all installable software units are treated uniformly, along with tools for building, deploying, and managing software repositories. p2 decouples decision making from content to provide a flexible solution that can be used for provisioning in various environments and configurations.
The document discusses how to remotely update IoT devices using Eclipse hawkBit and SWUpdate. It provides an overview of the Android approach to OTA updates, which uses a recovery OS to install updates atomically. It then describes how SWUpdate can be used as an agent on embedded Linux devices to manage updates similarly to Android. Key points covered include SWUpdate's architecture, features like local/remote interfaces and update file format/security, and how it can be integrated with hawkBit for remote management of software updates.
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph (Ceph Community)
Hyper Converged PLCloud with CEPH
This document discusses PowerLeader Cloud (PLCloud), a cloud computing platform that uses a hyper-converged infrastructure with OpenStack, Docker, and Ceph. It provides an overview of PLCloud and how it has adopted OpenStack, Ceph, and other open source technologies. It then describes PLCloud's hyper-converged architecture and how it leverages OpenStack, Docker, and Ceph. Finally, it discusses a specific use case where Ceph RADOS Gateway is used for media storage and access in PLCloud.
The document discusses running MongoDB in Microsoft Azure. It begins with an introduction to Azure and the different deployment models of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It then covers the different ways MongoDB can be deployed in Azure, including as a single instance, replica set, or sharded cluster. For each deployment type, it outlines the technical approach, pros, and cons. Key points covered include how to configure MongoDB for durable blob storage, setting up replica sets across Azure instances, and using a mongos router to expose a sharded cluster.
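The mongos routing mentioned above can be sketched as a chunk lookup: the shard key maps each document to one chunk, and each chunk is owned by one shard. The chunk boundaries and shard names below are assumptions for the example, not real cluster metadata.

```python
# Illustrative sketch of how a mongos router directs a query in a sharded
# MongoDB cluster: the shard key value selects a chunk, the chunk maps to
# a shard. Ranges and shard names are assumptions.
import bisect

# Chunk boundaries on the shard key (ascending); chunk i covers
# [CHUNK_BOUNDS[i], CHUNK_BOUNDS[i+1]); the last chunk is open-ended.
CHUNK_BOUNDS = [0, 1000, 5000]
CHUNK_SHARDS = ["shard-a", "shard-b", "shard-c"]

def route(shard_key_value):
    """Pick the shard that owns the chunk containing this key value."""
    i = bisect.bisect_right(CHUNK_BOUNDS, shard_key_value) - 1
    return CHUNK_SHARDS[max(i, 0)]

print(route(42))      # shard-a
print(route(2500))    # shard-b
print(route(99999))   # shard-c
```

Queries that include the shard key can be sent to a single shard this way; queries without it must be scattered to all shards and gathered, which is part of the operational cost the document weighs against single instances and replica sets.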
The document discusses Rohit Yadav and his work with Apache CloudStack. It provides an agenda for understanding CloudStack internals, including getting started as a user or developer, a guided tour of the codebase, common development patterns, and deep dives into key areas like system VMs, networking implementation, and plugins. The document outlines ways to join the CloudStack community and how to contribute code through GitHub pull requests.
A virtual server infrastructure, despite its many advantages, creates new challenges because of consolidated datastores, dynamic workloads, and the desire for scalable infrastructures. Companies are therefore forced to rethink their data protection strategy.
This requires a solution that, among other things:
- Consolidates data protection across physical and virtual environments and supports this on multiple platforms (VMware® and Microsoft® Hyper-V)
- Can offer a hybrid model that covers multiple hypervisors or even a mix of private and public cloud environments.
This presentation covers the advantages of hypervisor and backup integration from a single platform.
TechEd NZ 2014 - DCIM211 (Aben Samuel)
This session will take IT pros and managers through various aspects of Azure, with a focus on SharePoint and how organizations should be looking at Azure with regards to: 1. a hybrid approach, 2. a complete warm SharePoint platform, and 3. disaster recovery and business continuity. The session also looks into some of the newer features that have been made available recently, as well as some experiences with deploying SharePoint implementations on Azure.
RightScale Webinar: December 8, 2010 – In this Webinar, we discuss the benefits and pain points of multi-cloud as well as key considerations to have in mind when going multi-cloud. We present examples of multi-cloud scenarios and describe the design principles to consider when architecting deployments that must span and migrate across different clouds and providers.
Efficient Data Protection in VMware environments. You will learn about the basics of data protection in VMware environments and you will also find sample configurations and recommendations including Symantec Backup Exec / NetBackup, Fujitsu ETERNUS LT and Fujitsu ETERNUS CS800.
This document discusses business continuity challenges related to increasing data growth and insufficient data protection solutions. It presents Microsoft solutions for addressing these challenges, including Azure Site Recovery for orchestrated replication and recovery across on-premises and Azure environments. The solutions aim to automate processes, eliminate tape management, increase protection breadth and depth, and provide testable disaster recovery.
Orchestrated Android-Style System Upgrades for Embedded Linux (Kynetics)
This document summarizes Diego Rondini's presentation at the Embedded Linux Conference Europe 2017 in Prague about orchestrating Android-style system upgrades for embedded Linux.
The presentation discussed [1] managing and deploying software updates on embedded Linux devices in a way similar to how Android handles over-the-air updates, [2] using the SWUpdate tool and Eclipse hawkBit for updating devices, and [3] their implementation of an "Update Factory" to remotely manage and deploy updates across a fleet of devices like Android. A demo was also promised.
Orchestrated Android-Style System Upgrades for Embedded LinuxNicolaLaGloria
This document summarizes Diego Rondini's presentation at the Embedded Linux Conference Europe 2017 in Prague about orchestrating Android-style system upgrades for embedded Linux. The presentation covered:
[1] Managing and deploying software updates on embedded Linux devices in a way that is similar to how Android handles over-the-air updates. This involves using SWUpdate to run updates from a recovery partition and Eclipse hawkBit for remote management and rollout campaigns.
[2] The architecture of "Update Factory" which implements the missing pieces to provide an Android-like OTA experience on embedded Linux, including device to cloud communication, bootloader coordination, a recovery partition and more.
[3] How SWUpdate can be used
How to accelerate docker adoption with a simple and powerful user experienceDocker, Inc.
1) Societe Generale aims to accelerate Docker adoption by providing a simple and powerful user experience. They plan to increase their container usage from 2000 to 15,000 containers.
2) They aim to achieve this growth while improving security, quality of service, and reducing VM costs. Their challenge is providing these improvements while maintaining a good user experience.
3) Docker Universal Control Plane (UCP) is used to provide a production cluster with logical isolation and central administration. This achieves multi-tenancy, security/compliance checks, and self-service onboarding.
It about the technical session. I have given a talk so that local people know about the cloud and they feel motivated to work with the cloud. It was basically for newbies who are planning to start their career. I tried to show them who they would be a cloud engineer what's will be their future responsibility and more.
Build servers are typically not top-of-the-list for environments that security teams choose to monitor and secure. The perception is that they do not actually hold sensitive data like a production environment would. However, in reality, they have unique access and functionality that makes them a common target for attackers.
Threat Stack VP of Product, Chris Ford, will walk through the impact of a build system breach. This example will highlight how the attacker leveraged a build server to wage an insidious attack that has a larger blast radius than a similar attack targeting a production environment directly.
Similar to OpenStack Tokyo Talk Application Data Protection Service (20)
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Applications.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Advanced control scheme of doubly fed induction generator for wind turbine us...
OpenStack Tokyo Talk Application Data Protection Service
1. OpenStack Summit Tokyo 2015
Wang Hao, Software Engineer, Huawei IT Product Line
Eran Gampel, Cloud Chief Architect, Huawei European Research Center
Oshrit Feder, IBM Research - Haifa
Cloud DR Orchestration:
Beyond volume replication
2. Agenda
Why do we need disaster recovery?
Replication in Cinder
Hypervisor-based DR
ADPaaS: Project Smaug
Demo
3. Why do we need disaster recovery?
Customers want 24x7 service availability
Hardware Failures
Human Error
Accidents and Natural Disasters
5. Status of Replication in Cinder
Icehouse: design summit session on volume replication
Juno: first implementation; upstream code merged with support for the IBM Storwize/SVC driver
Liberty: version 2 of replication, improved and made more widely usable by other backend devices (no driver supported yet)
6. Use Case of Replication
The main use of volume replication is resiliency in the presence of failures.
[Diagram: two data centers, DC#1 and DC#2, each running OpenStack with Cinder and its own storage backend; data replication runs between the two storage backends.]
10. Replication Solution Types
Case in point: Hardware vs. Hypervisor
[Diagram: at the hardware level, the storage HW replicates a source volume directly to a target volume. At the hypervisor level, a replication agent in the hypervisor mirrors the VM's IO from the source volume to the target volume.]
11. Another choice: Hypervisor DR
[Diagram: a production site and a DR site, each with a DR Manager and OpenStack. On the production host, an IO Mirror in the hypervisor captures writes from the protected VMs and sends them over the WAN to the DR-site Virtual Replication Gateway (VRG), whose Write Agent writes to the DR-site storage. Control path and data path are shown separately; components are marked as OpenStack, new, or vendor components.]
12. Hypervisor DR: IO Mirroring
[Diagram: the guest OS issues IO commands; the IO Mirror captures each write, writes it as normal (returning the write ACK), and places a copy on the IO replication queue. The production-site VRG forwards the IO with compression and encryption; the DR-site VRG caches, decompresses, and decrypts it; the Write Agent parses the IO, writes it, and acknowledges, completing the IO on the DR side.]
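The write path on this slide can be sketched in a few lines. The following is a simplified, self-contained simulation, not the vendor's actual code: the function names are hypothetical, the encryption/decryption step is omitted, and dicts stand in for disks.

```python
import zlib

def capture_write(local_disk, repl_queue, offset, data):
    """IO Mirror: write locally as normal, and queue a copy for replication."""
    local_disk[offset] = data
    repl_queue.append((offset, data))
    return "ACK"                      # the guest sees a normal write ACK

def vrg_forward(repl_queue):
    """Production-site VRG: forward queued IOs, compressed (encryption omitted)."""
    return [(off, zlib.compress(data)) for off, data in repl_queue]

def dr_apply(dr_disk, wire_batch):
    """DR-site VRG + Write Agent: decompress, parse, and write each IO."""
    for off, blob in wire_batch:
        dr_disk[off] = zlib.decompress(blob)

local, dr, queue = {}, {}, []
capture_write(local, queue, 0, b"hello")
dr_apply(dr, vrg_forward(queue))
assert dr == local                    # DR copy matches the production copy
```

The key property shown is that the guest's write is acknowledged from the local disk, while replication proceeds asynchronously from the queue.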
13. Hypervisor DR: IO Mirroring State Machine
[Diagram: Start → Setup connection with vRG → Consistency check → CBT data replication → Queue data replication → Stop/Finished. When CBT is done, replication proceeds from the queue; on queue overflow it falls back to CBT; on host abnormal restart or swap (re-protect) it returns to the consistency check.]
15. Hypervisor DR: HW (Array) vs. Hypervisor
[Table comparing HW array replication with hypervisor replication along these criteria: multi-vendor/hardware agnostic, no impact on compute performance, no special network/storage privileges, no special admin skillset required, transparent deduplication, virtualization-ready, cross-VM consistency grouping support, and cross-array consistency group support.]
16. Multiple Use Cases, Multiple Protection Plans
Users need to be able to choose the right protection plan
Vendors need a way to plug in different implementations
22. Case in point: Typical 3-tier Cloud App
[Diagram: a project with a router connecting three networks: Web Net (Web Srv 1 and Web Srv 2 behind security groups), App Net (App Server), and DB Net (DB Server). The servers are backed by images and volumes.]
25. Smaug: Mission Statement
Formalize Application Data Protection in OpenStack: APIs, Services, Plugins, …
Be able to protect Any Resource in OpenStack (as well as their dependencies)
Allow Diversity of vendor solutions, capabilities and implementations without compromising usability
26. Smaug: Highlights
Open Architecture
Vendors create plugins that implement Protection mechanisms for different OpenStack resources
User perspective: Protect App Deployment
Configure and manage custom protection plans on the deployed resources (topology, VMs, volumes, images, …)
Admin perspective: Define Protectable Resources
Decide what plugins protect which resources, what is available for the user
Decide where users can protect their resources
27. Smaug: Application Data Protection as a Service
[Diagram: Smaug is organized around five questions: What is protected? (Protected Resources), How to protect? (Protection Plans), Where to protect? (Protection Banks), Who protects? (Protection Providers), and What was protected? (Protection Transactions). Each is exposed through an API: Plan API, Protection Resource API, Protection Transaction API, and Bank API. A pluggable Plan Enforcer Service orchestrates the Resource Protection Service, which drives the Resource Protection Plugins and stores data in a Bank/Vault.]
28. Overview
[Diagram: Protected Resources include VM, Image, Volume, and Topology (drawn from services such as Cinder, Nova, …). A Protection Plan has a name, an ID, protected resources, a trigger (manual, time, or event), retries, a bank, and options. Protection Providers supply per-resource Protection Plugins (VM, Image, Volume, Topology); the Volume Protection Plugin can be implemented as backup, replication, or snapshot. Each plugin implements Protect, Restore, and Verify against the Protection API, declares an OptionSchema and a ResultsSchema, and reads/writes through the Bank API. Protection Banks are a Bank/Vault backed by Swift, S3, etc.; Protection Transactions are recorded in a Ledger.]
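The plugin contract on slide 28 (Protect/Restore/Verify plus option and result schemas, reading and writing through a bank) can be sketched as a minimal Python interface. This is an illustrative shape only; the class and method names are hypothetical and do not reproduce Smaug's actual API.

```python
from abc import ABC, abstractmethod

class ProtectionPlugin(ABC):
    """Illustrative per-resource protection plugin (hypothetical names)."""

    # Schema of the options a user may pass in a protection plan.
    OPTION_SCHEMA = {"type": "object", "properties": {}}
    # Schema of what the plugin records about a finished protection.
    RESULTS_SCHEMA = {"type": "object", "properties": {}}

    @abstractmethod
    def protect(self, resource, bank, options):
        """Copy the resource's data/metadata into the bank."""

    @abstractmethod
    def restore(self, checkpoint, bank):
        """Recreate the resource from data previously written to the bank."""

    @abstractmethod
    def verify(self, checkpoint, bank):
        """Check that the protected copy is usable."""

class VolumeSnapshotPlugin(ProtectionPlugin):
    """One of the three volume strategies from the slide: snapshot."""

    def protect(self, resource, bank, options):
        checkpoint = {"resource_id": resource["id"], "method": "snapshot"}
        bank[resource["id"]] = checkpoint              # Bank API: write
        return checkpoint

    def restore(self, checkpoint, bank):
        return bank[checkpoint["resource_id"]]         # Bank API: read

    def verify(self, checkpoint, bank):
        return checkpoint["resource_id"] in bank

bank = {}                                              # stand-in for Swift/S3
plugin = VolumeSnapshotPlugin()
cp = plugin.protect({"id": "vol-1"}, bank, {})
assert plugin.verify(cp, bank)
```

A backup or replication plugin would implement the same three operations with a different mechanism, which is what lets vendors plug in implementations without changing the user-facing plan.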
29. Help us Build Smaug – Join the project
https://launchpad.net/smaug
IRC (gampel)
eran.gampel@huawei.com
oshritf@il.ibm.com
30. Demo Time
Video: Application DR with IBM Cloud Manager
References
Paris summit talk & demo
European FP7 ORBIT Research project
IBM Cloud Manager with OpenStack
Service continuity
Hardware can fail, sometimes
People make mistakes, sometimes
Natural calamities, or cataclysmic events (like fire, tornado, etc.)
Replication is for critical data and has a relatively shorter lifespan
Backup has a longer lifespan, but is snapshot-based, so your RPO is not as good.
The cloud admin creates a volume type with capabilities:replication="<is> True"
End users use this volume type to create volumes
The Cinder scheduler will choose a backend supporting replication
The backend will create a volume replica and set up replication between the two volumes
Cinder has a periodic task to update the volumes' replication status
When disaster happens, the cloud admin promotes the replica
Users can use those volumes in the secondary data center with its storage
As part of the fail-back process, re-enable the replication between the primary and secondary volumes
Users can test the replication by creating a volume with --source-replica
4. According to the configuration in cinder.conf, the driver will choose a replication target device to create the replica and set up replication between the two volumes
5. If replication is enabled in the driver, the replication status is updated in the driver's periodic report task
6. When disaster happens, the cloud admin fails over a replicating volume to its secondary via the "failover_replication" API
8. The cloud admin can also enable/disable replication on a replication-capable volume for some use cases, like maintenance
9. The cloud admin can also query a volume for a list of configured replication targets
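The scheduling step in this flow, matching the volume type's capabilities:replication="<is> True" extra spec against what each backend reports, can be illustrated with a small simulation. This is a simplified stand-in for Cinder's capability filtering, not its actual scheduler code; the backend names are made up.

```python
# Backends as reported to the scheduler (illustrative data).
backends = [
    {"name": "backend-a", "replication": False},
    {"name": "backend-b", "replication": True},
]

def schedule(volume_type_specs, backends):
    """Pick the first backend that satisfies the volume type's capabilities."""
    wants_replication = volume_type_specs.get(
        "capabilities:replication") == "<is> True"
    for b in backends:
        if not wants_replication or b["replication"]:
            return b["name"]
    raise RuntimeError("no backend supports the requested capabilities")

specs = {"capabilities:replication": "<is> True"}
print(schedule(specs, backends))   # -> backend-b
print(schedule({}, backends))      # -> backend-a (no constraint)
```

Once a replication-capable backend is chosen, the driver creates the replica and sets up replication as described in step 4.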
IO Mirror state machine:
CBT (Changed Block Tracking) replication: based on a bitmap
Queue replication: in this state, the user can create a snapshot of the replication data.
Consistency check
Start
Set up the connection with the Virtual Replication Gateway
Initial replication
On host normal restart, data that was in the queue during shutdown is written to disk by using the CBT bitmap
CBT data replication
Once the CBT bitmap is clear, proceed to queue-based replication
If the queue overflows, switch back to CBT
On host abnormal restart or swap (re-protect),
do a consistency check and then CBT data replication
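The transitions in these notes can be written down as a small table-driven state machine. The state and event names below are paraphrased from the slide; the code structure itself is illustrative, not the product's implementation.

```python
# (state, event) -> next state; events not in the table leave the state unchanged.
TRANSITIONS = {
    ("start", "connect"): "setup_connection",
    ("setup_connection", "connected"): "consistency_check",
    ("consistency_check", "check_done"): "cbt_replication",
    ("cbt_replication", "cbt_done"): "queue_replication",
    ("queue_replication", "queue_overflow"): "cbt_replication",
    ("queue_replication", "abnormal_restart"): "consistency_check",
    ("queue_replication", "swap"): "consistency_check",
    ("queue_replication", "stop"): "finished",
}

def step(state, event):
    return TRANSITIONS.get((state, event), state)

# Walk one possible lifetime: overflow falls back to CBT, an abnormal
# restart forces a fresh consistency check.
s = "start"
for ev in ["connect", "connected", "check_done", "cbt_done",
           "queue_overflow", "cbt_done", "abnormal_restart"]:
    s = step(s, ev)
print(s)   # -> consistency_check
```

Keeping the transitions in one table makes the fallback paths (overflow, restart, re-protect) easy to audit against the diagram.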
Install and configure the hypervisor with replication capabilities.
The DR admin creates a Protected Group for VMs in the dashboard
The DR admin can define the Protection Policy (encryption, compression, RPO, etc.)
When the admin creates the protected group, replication starts and the IO Mirror sends IO data to the VRG.
The DR admin creates a Recovery Plan for fail-over, replication test and fail-back
When disaster happens, the DR admin chooses the fail-over recovery plan, using a snapshot or the newest data at the DR site
The DR admin can use re-protect to swap the production site and the DR site. The system will then replicate data from the new production site to the new DR site.
If fail-back is needed, the DR admin chooses the recovery plan to make the data consistent between the production site and the DR site.
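The admin workflow above revolves around two objects: a protected group with its protection policy, and a recovery plan run against that group. A minimal data-model sketch follows; the field names are illustrative, chosen to mirror the settings mentioned in the notes (encryption, compression, RPO, fail-over vs. test vs. fail-back).

```python
from dataclasses import dataclass, field

@dataclass
class ProtectionPolicy:
    """Per-group settings the DR admin defines (illustrative fields)."""
    rpo_seconds: int = 300
    compression: bool = True
    encryption: bool = True

@dataclass
class ProtectedGroup:
    """A set of VMs replicated together, as created in the dashboard."""
    name: str
    vm_ids: list = field(default_factory=list)
    policy: ProtectionPolicy = field(default_factory=ProtectionPolicy)

@dataclass
class RecoveryPlan:
    """Fail-over / replication-test / fail-back steps for one group."""
    group: ProtectedGroup
    mode: str = "failover"           # or "test", "failback"
    use_snapshot: bool = False       # else the newest data at the DR site

group = ProtectedGroup("web-tier", vm_ids=["vm-1", "vm-2"])
plan = RecoveryPlan(group, mode="failover", use_snapshot=True)
print(plan.group.policy.rpo_seconds)   # -> 300
```

Grouping VMs under one policy is what gives cross-VM consistency; the plan only selects which recovery path (snapshot or newest data) to apply to the whole group.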
So… what do we need?
Is data only storage?
If it were so, we would need just Data Protection.
For example… (move slide)
We start by defining the API and the service frameworks