Project Kuryr aims to provide Neutron networking abstractions and APIs for container networking, avoiding vendor lock-in. It maps container networking operations to the Neutron API and lets different Neutron plugins, such as OVN, Midonet, and Calico, provide networking for containers. This brings Neutron features such as security groups, LBaaS, and FWaaS to containers, and avoids the performance, management, and flexibility issues of current container networking solutions. Kuryr provides a common base for Neutron vendors to support both VM and container networking.
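As a rough illustration of the mapping described above, the sketch below pairs Docker libnetwork remote-driver calls with the Neutron operations a Kuryr-style driver could translate them into. This is a hypothetical simplification for illustration, not the actual Kuryr code; the table contents and helper name are our own.

```python
# Hypothetical sketch: how a Kuryr-style driver might translate Docker
# libnetwork remote-driver calls into Neutron API operations.
# The mapping and names are illustrative, not taken from Kuryr's source.

LIBNETWORK_TO_NEUTRON = {
    "NetworkDriver.CreateNetwork":  "POST /v2.0/networks (+ subnet)",
    "NetworkDriver.DeleteNetwork":  "DELETE /v2.0/networks/{id}",
    "NetworkDriver.CreateEndpoint": "POST /v2.0/ports",
    "NetworkDriver.DeleteEndpoint": "DELETE /v2.0/ports/{id}",
}

def translate(libnetwork_call: str) -> str:
    """Return the Neutron operation a container networking call maps to."""
    return LIBNETWORK_TO_NEUTRON[libnetwork_call]
```

For example, creating a container endpoint would become a Neutron port creation, which is how Neutron-level features like security groups end up applying to the container.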
Slides from the OpenStack Newton Summit in Austin that cover the changes made during the Mitaka cycle and the direction we will take for Neutron. The Swarm and Kubernetes integrations are explained.
Kuryr-Kubernetes: The perfect match for networking cloud native workloads - I... - Cloud Native Day Tel Aviv
The Kuryr project offers an interesting approach to networking cloud native workloads, by enabling container orchestration engines to consume network services from OpenStack Neutron. With pod-in-VM support, Kuryr-Kubernetes enables a whole slew of new hybrid workloads, like bare metal or in-VM pods accessing services that run on VMs, multiple COEs (e.g. Docker Swarm to Kubernetes), and more. Unified networking simplifies deployment and configuration, and provides a single pane of glass for management and troubleshooting.
Let’s dive into Kuryr-Kubernetes and learn how different open source technologies can complement each other to enable a number of complicated deployment scenarios.
OpenStack Israel Meetup - Project Kuryr: Bringing Container Networking to Neu... - Cloud Native Day Tel Aviv
Kuryr is a new project, started by Gal Sagie, that makes Neutron networking available to container networking as used by Docker, Kubernetes, and others.
Kuryr aims to bridge the gap between container orchestration engines and models and the OpenStack networking abstraction, and to expose Neutron's flexibility, features, and advanced services to container networking.
The attached is a summary of terms, descriptions of constructs, integration alternatives, and more from the networking world of Kubernetes, OpenShift, and AWS.
The slides give a brief overview of the current state of container orchestration integration in OpenStack and how OpenStack Kuryr can improve the situation.
Tectonic Summit 2016: Networking for Kubernetes - CoreOS
Sreekanth Pothanis, Cloud Engineering, eBay shares a networking Kubernetes tale from the trenches.
Networking is the hardest component in anyone's infrastructure; everything depends on it, especially in web-scale infrastructure with tens of thousands of servers. eBay is investing heavily in Kubernetes, and networking is again one of the areas we have the most difficulty with.
During the course of this talk we will go through various approaches we tried to make container networking conform to Kubernetes networking principles, while ensuring that it adapts to the existing networking models our infrastructure supports.
We will also cover how we have automated the process of setting up networking for Kubernetes clusters and how it offers seamless integration with non-Kubernetes workloads.
12/12/16
Scaling OpenStack Networking Beyond 4000 Nodes with Dragonflow - Eshed Gal-Or... - Cloud Native Day Tel Aviv
As OpenStack matures, more users move from “dipping a toe” to deploying at large scale, with thousands of nodes.
OpenStack networking has long been a limiting factor in scaling beyond a few hundred nodes, forcing users to turn to cell splitting, or to completely offload networking to the underlay systems and forfeit the overlay network altogether.
Dragonflow is a fully distributed, open source SDN implementation of Neutron that handles large-scale deployments without splitting into cells.
In testing we've conducted, we were able to scale to 4000+ controllers (each controller is typically deployed on a compute node) while maintaining the same performance we had in a small 30-node environment.
Orchestration tool roundup: Kubernetes vs. Docker vs. Heat vs. Terraform vs... - Nati Shalom
Video recording: https://www.youtube.com/watch?v=tGlIgUeoGz8
It’s no news that containers represent a portable unit of deployment, and OpenStack has proven an ideal environment for running container workloads. Where it usually becomes more complex is that an application is often built out of multiple containers. What’s more, setting up a cluster of container images can be fairly cumbersome, because you need to make one container aware of another and expose the intimate details required for them to communicate, which is not trivial, especially if they’re not on the same host.
These scenarios have instigated the demand for some kind of orchestrator. The list of container orchestrators is growing fairly fast. This session will compare the different orchestration projects out there - from Heat to Kubernetes to TOSCA - and help you choose the right tool for the job.
Session link from the summit: https://openstacksummitmay2015vancouver.sched.org/event/abd484e0dedcb9774edda1548ad47518#.VV5eh5NViko
Kubernetes has two simple but powerful network concepts: every Pod is connected to the same network, and Services let you talk to a Pod by name. Bryan will take you through how these concepts are implemented - Pod Networks via the Container Network Interface (CNI), Service Discovery via kube-dns and Service virtual IPs, then on to how Services are exposed to the rest of the world.
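The "talk to a Pod by name" concept above rests on a fixed DNS naming scheme that kube-dns publishes for every Service. A minimal sketch of that scheme (the function name is ours; the name format follows the Kubernetes DNS specification):

```python
def service_dns_name(service: str, namespace: str = "default",
                     cluster_domain: str = "cluster.local") -> str:
    """Build the fully qualified DNS name that cluster DNS (kube-dns)
    publishes for a Service: <service>.<namespace>.svc.<cluster-domain>.
    Resolving this name yields the Service's virtual IP."""
    return f"{service}.{namespace}.svc.{cluster_domain}"
```

So a Pod anywhere in the cluster can reach a Service named `web` in namespace `prod` at `web.prod.svc.cluster.local`, regardless of which Pods currently back it.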
How to build a Kubernetes networking solution from scratch - All Things Open
Presented by: Antonin Bas & Jianjun Shen, VMware
Presented at All Things Open 2020
Abstract: For the non-initiated, Kubernetes (K8s) networking can be a bit like dark magic. Many clusters have requirements beyond what the default network plugin, kubenet, can provide and require the use of a third-party Container Network Interface (CNI) plugin. But what exactly is the role of these plugins, how do they differ from each other and how does the choice of one affect your cluster?
In this talk, Antonin and Jianjun will describe how a group of developers was able to build a CNI plugin - an open source project called Antrea - from scratch and bring it to production in a matter of months. This velocity was achieved by leveraging existing open-source technologies extensively: Open vSwitch, a well-established programmable virtual switch for the data plane, and the K8s libraries for the control plane. Antonin and Jianjun will explain the responsibilities of a CNI plugin in the context of K8s and will walk the audience through the steps required to create one. They will show how Antrea integrates with the rest of the cloud-native ecosystem (e.g. dashboards such as Octant and Prometheus) to provide insight into the network and ensure that K8s networking is not just dark magic anymore.
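To make the responsibilities of a CNI plugin concrete: per the CNI specification, the runtime executes the plugin with `CNI_COMMAND=ADD`, passes the network configuration as JSON on stdin, and expects a result JSON on stdout. The toy handler below models only that config-in/result-out contract (the function and its `allocate_ip` parameter are our own simplification; a real plugin such as Antrea also creates the veth pair and wires it into the data plane):

```python
import json

def cni_add(stdin_json: str, allocate_ip) -> dict:
    """Toy model of a CNI ADD operation: parse the network config the
    runtime would pass on stdin and return a spec-shaped result with
    the interface and the IP assigned to the Pod."""
    netconf = json.loads(stdin_json)
    return {
        "cniVersion": netconf.get("cniVersion", "0.4.0"),
        "interfaces": [{"name": "eth0"}],
        "ips": [{"address": allocate_ip()}],
    }
```

For example, `cni_add('{"cniVersion": "0.4.0", "name": "demo", "type": "toy"}', lambda: "10.1.0.5/24")` returns a result reporting `eth0` with address `10.1.0.5/24`.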
Docker network performance in the public cloud - Arjan Schaaf
Presentation from Container Camp London 2015 which compares the network performance of containers on both AWS and Azure. The SDN solutions included in these tests are Flannel, Weave, and Project Calico.
Overlay/Underlay - Betting on Container Networking - Lee Calcote
Presented at Rackspace Austin (downtown) on July 27th, 2016.
An inherent component of any distributed application, networking is one of the most complicated and expansive infrastructure technologies. Container networking needs to be developer-friendly, application-driven, and portable. With developers busily adopting container technologies, the time has come for network engineers and operators to prepare for the unique challenges brought on by cloud native applications. This talk covers what container networking specifications bring to the table and how to leverage them.
Docker Online Meetup #22: Docker Networking - Docker, Inc.
Building on top of his talk at DockerCon 2015, Jana Radhakrishnan, Lead Software Engineer at Docker, does a deep dive into Docker Networking with additional demos and insights on the product roadmap.
Docker Networking in OpenStack: What you need to know now - PLUMgrid
Learn how to bring secure, scalable, available, and open software defined networking to Docker containers managed by OpenStack. This session will cover how Docker virtual networks function, how to plumb them into the virtual network fabric, and how to reliably assign information such as IP addresses and virtual interfaces. In addition, this session will cover how to securely wrap Docker containers using security policies and encryption.
Gaetano Borgione's presentation from the 2017 Open Networking Summit.
Networking is vital for cloud-native apps, where distributed computing and development models require speed, simplicity, and scale for a massive number of ephemeral containers. The two most prevalent container networking models are CNI and CNM, for developers using Docker, Mesos, or Kubernetes. This session will present an overview of distributed development, how the CNI and CNM models work, and how container frameworks use these models for networking. Gaetano will also discuss the additional functions users need to consider in the control plane and data plane to achieve operational scale and efficiency.
Kubernetes has been the leading container orchestrator since Google released the Kubernetes source in 2014.
1. Deploy a containerized app
2. Deploy an app to Kubernetes: pros & cons
3. Current status of Kubernetes and its future
Kubernetes has a very complex network architecture. It is this networking that enables Kubernetes to redefine the latest container technology.
1. Docker container networks
2. Container communication within a Pod
3. Pod communication across different nodes
4. Pod-to-Service communication
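Item 4 in the list above hinges on the fact that a Service's virtual IP is not bound to any interface: a proxy component (kube-proxy) rewrites connections to the VIP so they land on one of the backing Pods. A toy sketch of that translation, with made-up addresses and our own helper names:

```python
import hashlib

# Illustrative endpoints table: Service virtual IP -> backing Pod IPs.
ENDPOINTS = {"10.96.0.10:80": ["10.244.1.5:80", "10.244.2.7:80"]}

def resolve_service(vip: str, flow_id: str) -> str:
    """Pick one backend Pod for a connection to a Service VIP.
    Hashing the flow id keeps a given connection pinned to one backend,
    loosely mimicking kube-proxy's per-connection rewriting."""
    backends = ENDPOINTS[vip]
    index = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16) % len(backends)
    return backends[index]
```

Every packet of the same flow resolves to the same Pod, while different flows spread across the backends.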
Docker Networking with Container Orchestration Engines [Docker Meetup Santa C... - Debra Robertson
The Docker container ecosystem is growing very fast, and networking has taken an interesting direction with different networking models being introduced. It becomes even more interesting when container orchestration engines like Swarm, Mesos, and Kubernetes have to implement networking for Docker containers. At this Meetup, we will talk about the networking capabilities of Docker, networking models like CNM (Container Network Model), how they fit into container orchestration frameworks, what's ready for production, and what's in the design/discussion phase and expected to be available in the near future.
Containers require a new approach to networking. How are your containers communicating with each other? This talk will go through the different network topologies of Kubernetes: how Kubernetes addresses networking compared to traditional physical networking concepts, what your options are for networking with Kubernetes, and what the CNI (Container Network Interface) is and how it affects Kubernetes networking.
Provides an overview of hybrid networking, including containers and VMs. It also touches upon open source solutions like OpenStack Kuryr and OpenDaylight.
Open stack networking_101_update_2014-os-meetup - syfauser
This is the latest update to my OpenStack Networking / Neutron 101 slides, with some more information and caveats on the new DVR and Gateway HA features.
Overview of OpenDaylight Container Orchestration Engine Integration - Michelle Holley
Looking for a way to deploy a stable OpenStack cloud environment with OpenDaylight with ease? This session is about learning to deploy a cloud environment with the OPNFV Fuel deployer. Fuel is a deployment tool which deploys a wide variety of distributions with third-party plugins like OpenDaylight, while abstracting away the complexities of the deployment. The intent of this session is to familiarize attendees with deploying OpenStack with OpenDaylight.
About the presenter: Pramod Raghavendra Jayathirth is a software developer in OpenStack and OpenDaylight, working for OTC, SSG at Intel. His area of interest is cloud networking and applications. He has prior experience in databases, and his current focus is on developing features of a cloud networking platform. He holds a Master's degree from San Jose State University.
This presentation was shown at the OpenStack Online Meetup session on August 28, 2014. It is an update to the 2013 sessions, and adds content on the services plugin and modular plugins, as well as an outlook on some Juno features like DVR, HA, and IPv6 support.
Software Defined Networking is seeing a lot of momentum these days. With server virtualization solving the virtual machines problem, and large scale object storage solving the distributed storage challenge, SDN is seen as key in virtual networking.
In this talk we don't try to define SDN but rather dive straight into what in our opinion is the core enabler of SDN: the virtual switch, OVS.
OVS can help manage VLANs for guest network isolation, and it can re-route any traffic at L2-L4 by keeping forwarding tables controlled by a remote OpenFlow controller. We show these few OVS capabilities and highlight how they are used in CloudStack and Xen.
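The controller-programmed forwarding described above boils down to installing match/action flow rules on the switch. As a sketch, the helper below composes (but does not run) an `ovs-ofctl add-flow` invocation in the documented `match,actions=...` syntax; the helper name and the example bridge/ports are our own:

```python
def ovs_add_flow_cmd(bridge: str, match: dict, actions: str) -> list:
    """Compose an `ovs-ofctl add-flow` command line that installs one
    OpenFlow rule: packets matching `match` get the given actions.
    Returns the argv list; it does not execute anything."""
    flow = ",".join(f"{k}={v}" for k, v in match.items())
    return ["ovs-ofctl", "add-flow", bridge, f"{flow},actions={actions}"]
```

For instance, `ovs_add_flow_cmd("br0", {"priority": 100, "in_port": 1}, "output:2")` yields the rule that steers everything arriving on port 1 of `br0` out port 2, which is exactly the kind of L2-L4 re-routing a remote controller would push.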
Xen Summit presentation of CloudStack and Software Defined Networks. Open vSwitch is the default bridge in Xen and is supported in XenServer and Xen Cloud Platform.
"One network to rule them all" - OpenStack Summit Austin 2016 - Phil Estes
Presentation at IBM Client Day by Kyle Mestery and Phil Estes, OpenStack Summit 2016 - Austin, Texas on April 26, 2016. "Open, Scalable and Integrated Networking for Containers and VMs" covering Project Kuryr, Docker's libnetwork, and Neutron & OVS and OVN network stacks
OpenStack and OpenContrail for FreeBSD platform by Michał Dubiel - eurobsdcon
Abstract
The OpenStack and OpenContrail network virtualization solutions form a complete suite able to successfully handle orchestration of the resources and services of contemporary cloud installations. These projects, however, have until now been available only for Linux-hosted platforms. This talk is about work underway that brings them into the FreeBSD world.
It explains in greater detail the architecture of an OpenStack system and shows how support for the FreeBSD bhyve hypervisor was brought up using the libvirt library. Details of the OpenContrail network virtualization solution are also provided, with special emphasis on lower-level system entities like the vRouter kernel module, which required most of the work while developing the FreeBSD version.
Speaker bio
Michal Dubiel, M.Sc. Eng., born 17th of September 1983 in Kraków, Poland. He graduated in 2009 from the faculty of Electrical Engineering, Automatics, Computer Science and Electronics of AGH University of Science and Technology in Kraków. Throughout his career he worked for ACK Cyfronet AGH on hardware-accelerated data mining systems and later for Motorola Electronics on DSP software for LTE base stations. Currently he is working for Semihalf on various software projects ranging from low level kernel development to Software Defined Networking systems. He is mainly interested in the computer science, especially the operating systems, programming languages, networks, and digital signal processing.
Quantum - Virtual networks for Openstack - salv_orlando
An overview of Quantum, the soon-to-be default Openstack network service.
These slides introduce Quantum and its design goals, and discuss the API. They also address how Quantum relates to Software Defined Networking (SDN).
Christian Kniep presented this deck at the 2016 HPC Advisory Council Switzerland Conference.
"With Docker v1.9 a new networking system was introduced, which allows multi-host networking to work out-of-the-box in any Docker environment. This talk provides an introduction to what Docker networking provides, followed by a demo that spins up a full SLURM cluster across multiple machines. The demo is based on QNIBTerminal, a Consul-backed set of Docker images to spin up a broad set of software stacks."
Watch the video presentation:
http://wp.me/p3RLHQ-f7G
See more talks in the Swiss Conference Video Gallery:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter:
http://insidehpc.com/newsletter
Optimising NFV service chains on OpenStack using Docker - Ananth Padmanabhan
Slides presented at the OpenStack Summit in Austin in April 2016. Here is the link to the video:
https://www.openstack.org/videos/video/optimising-nfv-service-chains-on-openstack-using-docker
Tech Talk by Peng Li: Open Mobile Networks with NFV - nvirters
Synopsis
Applications are moving to mobile. This talk is about the upcoming future of mobile networks, the key technology enablers, and how to build your application and service on top of next-generation mobile networks. Peng will describe mobile network trends and why openness will play a critical role going forward. He will also present an example of an NFV-enabled mobile network architecture, its building blocks and use cases, and introduce Huawei's Open Mobile Foundry platform as a real-world example to share some valuable experiences in this field. This talk will cover both open source projects (OPNFV, OpenStack, ONOS and ODL) and commercial products (Huawei's cloudEdge solution).
About Peng Li
Peng Li is a Network Architect and Ecosystem Partnership Manager for Huawei's wireless BU. He has extensive experience with SDN, NFV, network architecture, and network protocols. He has spent his entire professional career so far on computer networking, mainly with Amber/Nokia Networks and Foundry/Brocade before joining Huawei. He co-implemented the industry's first fully redundant BGP protocol, and has many years of experience in network protocol development and engineering management for flagship data center routers. Peng has a Master's in Computer Engineering from USC and a Bachelor's in EE from Tsinghua University, China.
Tech Talk by Louis Fourie: SFC: technology, trend and implementation - nvirters
Synopsis
In this Tech Talk, Louis Fourie will do a deep dive into one of the key technology enablers -- service function chaining -- and describe extensions to OpenStack networking (Neutron) for service chaining, including use cases, architecture, and implementation.
About Louis Fourie
Louis Fourie is currently a senior staff engineer working on network virtualization, cloud services, and SDN technologies at Huawei Technology, USA. Louis is an active contributor to the service chaining work in several organizations including OpenStack, ONF, ETSI NFV, IETF, and OPNFV. Louis previously worked at Cisco on several computer networking, voice and data communications products, and is the holder of several patents.
Tech Talk: ONOS - A Distributed SDN Network Operating System - nvirters
This event takes us to the cusp of Distributed Software Development and SDN Controllers. We will be hosting Madan and Brian who have been involved in the architecture and development of ONOS (Open Network Operating System).
Synopsis
ONOS is a distributed SDN network operating system architected to provide performance, scale-out, resiliency, and well-defined northbound and southbound abstractions. Madan and Brian, both from ON.Lab, will start the talk with a deep-dive into ONOS architecture, including the key technical challenges that were solved to build this platform. They will also walk us through a live demo of building a SDN application on ONOS.
Details:
ONOS Architecture
ONOS Abstractions and Modularity
ONOS Distributed architecture
ONOS APIs and their usage
Live demo- Building a SDN app on ONOS
Speaker Bios
Madan Jampani, Distributed Systems Architect, ONOS
Madan is Distributed Systems Architect at ON.Lab, focusing on the core distributed systems problems for ONOS. Prior to joining ON.Lab in September 2014, Madan worked at Amazon for around 10 years. At Amazon, Madan was instrumental in building several key technologies, ranging from Amazon retail ordering systems to distributed data stores and shared compute clusters for running large-scale data processing and machine learning workloads.
Brian O’Connor, Lead Developer, ONOS
Brian is the ONOS Application Intent Framework lead and a core developer at ON.Lab, working on ONOS and Mininet. Brian O’Connor received Bachelor’s and Master’s degrees in Computer Science from Stanford University. At Stanford, he helped develop “An Introduction to Computer Networking,” one of Stanford’s first MOOCs (Massively Open Online Courses).
ABOUT ON.LAB and ONOS
Open Networking Lab (ON.Lab) is a non-profit organization founded by SDN inventors and leaders from Stanford University and UC Berkeley to foster an open source community for developing tools and platforms to realize the full potential of SDN. ON.Lab brings innovative ideas from leading edge research and delivers high quality open source platforms on which members of its ecosystem and the industry can build real products and solutions.
ONOS, an SDN network operating system for service provider and mission-critical networks, was open sourced on Dec 5th, 2014. ONOS delivers a highly available, scalable SDN control plane featuring northbound and southbound abstractions and interfaces for a diversity of management, control, and service applications and network devices. The ONOS ecosystem comprises ON.Lab; organizations funding and contributing to the ONOS initiative, including AT&T, NTT Communications, SK Telecom, Ciena, Cisco, Ericsson, Fujitsu, Huawei, Intel, and NEC; members collaborating and contributing to ONOS, including ONF, Infoblox, SRI, Internet2, Happiest Minds, CNIT, Black Duck, and Create-Net; and the broader ONOS community. Learn how you can get involved with ONOS at onosproject.org.
Tech Tutorial by Vikram Dham: Let's build an MPLS router using SDN
Synopsis
We will start with MPLS 101 and then look into MPLS-related OpenFlow actions. In the second half we will delve into the RouteFlow architecture and extend it to enable the Label Distribution Protocol (LDP) and MPLS routing. We will conclude with a Mininet-based testbed switching traffic using MPLS labels instead of IP addresses.
This will be a hands-on workshop. VM images for VirtualBox will be provided. Attendees are expected to bring their laptops with VirtualBox installed.
About Vikram Dham
Vikram is the CTO and co-founder of Kamboi Technologies, LLC, where he advises networking companies, switch vendors and early adopters on SDN technology and distributed software development. He is also the founder of the Bay Area Network Virtualization (BANV) meetup group, which brings together technologists in the SDN/NFV/NV domain for technical talks and workshops and creates a truly "open" platform for sharing knowledge.
He has used SDN technologies to build software for traffic engineering, security and routing. In the past, he was Principal Engineer at Slingbox, where he architected and built the distributed networking software for peer-to-peer connectivity of millions of endpoints. He holds an MS degree in EE with a specialization in computer networks from Virginia Tech and has worked on research projects with companies like ECI Telecom, Raytheon and Avaya Research Labs.
This hands-on workshop on OpenContrail will be led by Sreelakshmi Sarva & Aniket Daptari.
This is a labs session so we will have hard RSVP limits. Please RSVP only if you are confident that you will be able to attend.
About Sreelakshmi Sarva
Sree is currently part of the solution engineering team in Juniper's Contrail group. She is responsible for delivering and managing SDN solutions and partnerships relating to Contrail. She has been with Juniper for the last 13 years, working on various routing, switching, network programmability and virtualization platforms. Prior to Juniper, she worked at Nortel Networks in the Systems Engineering group. Sree received her Master's in Computer Science from the University of Texas at Dallas and her Bachelor's in Computer Science in India.
About Aniket Daptari
Aniket is currently part of Juniper Networks' Contrail Cloud Solutions team. He is responsible for delivering SDN solutions and technology partnerships related to Contrail. He has been with Juniper for the last 3 years, working on various network programmability and virtualization platforms. Prior to Juniper, he worked at Cisco Systems in the Internet Systems Business Unit (Catalyst 6500). Aniket received his Master's in Computer Science from the University of Southern California and a graduate certificate in Management Science and Engineering from Stanford University.
Course Abstract
This session will be the first of a series of OpenContrail hands-on tutorials for developers who want to get deep into OpenContrail code.
This “Basic OpenContrail Programming” Hands-on Session will focus on making developers proficient in writing and contributing code for our OpenContrail Project.
The session will cover the following areas:
1) Contrail Overview
· Use Cases
· Architecture recap
2) Contrail Hands on
· Demo + hands-on: configuration, VNs, VMs, network policies, etc.
· DevStack introduction
RouteFlow & IXPs
This talk will discuss the architecture of RouteFlow, a leading OpenFlow-based virtual router. It will focus on the new projects based on RouteFlow that are finding traction at Internet eXchange Points (IXPs), Cardigan being one of the most popular. Some common aspects of IXPs will be shown. The talk will conclude with a list of future projects and a vision for SDN routing.
About Raphael Vincent Rosa
Raphael is a communications network engineer. He finished his MS in Computer Science working on intra-datacenter routing, contributing to open source SDN projects such as the Ryu network controller and the RouteFlow platform. Currently he is pursuing PhD research under the guidance of Dr. Christian Esteve Rothenberg, with main interests in SDN and distributed-NFV topics.
Tech Talk by John Casey (CTO), CPLANE NETWORKS: High Performance OpenStack Ne...
OpenStack is HOT! No doubt about it. A recent survey by The New Stack and The Linux Foundation shows OpenStack as the most popular open source project ahead of other hot projects like Docker and KVM. OpenStack is now taking its rightful place as the open source cloud solution for enterprises and service providers.
To date OpenStack networking has not yet achieved the performance, scalability and reliability that many large enterprises demand. CPLANE NETWORKS solves that problem by delivering secure multi-tenant virtual networking that overcomes the limitations of the standard Neutron networking service. By making all networking services local to the compute node and achieving near line-rate throughput, CPLANE NETWORKS Dynamic Virtual Networks (DVN) delivers mega-scale networking for the most demanding application environments.
In this session John Casey will cover the basics of DVN and explain how CPLANE NETWORKS achieves "at scale" network performance within and across data centers.
About John Casey
John Casey has over 20 years of deep technology leadership. His proven success in a variety of technical leadership roles in telecom, enterprise and government, and in software design and development, provides the foundation for the system architecture and engineering team.
Previously John led worldwide deployment teams for both IBM’s Software Group and Narus, Inc. His work in large scale, high performance system design at Transarc Labs and Walker Interactive Systems brings leadership to the CPLANE NETWORKS product suite.
Tech Talk by Tim Van Herck: SDN & NFV for WAN
Extending SDN & NFV to WAN
This session will walk through the evolution of branch networking and how SDN & NFV principles can be applied to the enterprise WAN to achieve increased reliability and flexibility. It will also cover how to lower the operational expense of running a classic enterprise WAN, and which industry trends are pressuring changes in the design of such networks. When applying SDN & NFV principles to the WAN, there is a natural reduction in the complexity of managing services and guaranteeing uptime of network connectivity.
About Tim Van Herck
Tim is the Director of Technology and a founding member at VeloCloud Networks. He is responsible for building out a global network of Points of Presence to deliver virtual last-mile service to enterprise branches. Prior to joining VeloCloud, Tim was a founding member of Aryaka Networks, which offers WAN optimization as a service. Tim has been passionately following the leading edge of network virtualization and security solutions for the past 15 years. He holds a master's degree in Industrial Engineering from the University of Antwerp, and is based in VeloCloud's headquarters in Los Altos, CA.
More info @ http://meetup.com/openvswitch
Follow us on twitter @nvirters
Tech Talk by Ben Pfaff: Open vSwitch - Part 2
Open vSwitch - Part 2
A previous presentation in March 2013 at Bay Area Network Virtualization meetup covered the past, present, and predicted future of Open vSwitch. This talk picks up where that one left off, covering improvements made in Open vSwitch since then, new directions for the coming year, and some related work of interest in the industry.
About Ben Pfaff (twitter: @Ben_Pfaff)
Ben joined Nicira as one of its first employees in 2007 after finishing his PhD at Stanford. Since then he has been working on what became OpenFlow and Open vSwitch. He also made some early contributions to the NOX controller. He has been involved with free software since about 1996, when he started work on GNU PSPP and joined the Debian project.
OpenFlow Data Center - A Case Study by Pica8
White box switches are emerging as a viable alternative for network architects deploying software defined networks, but SDN deployments will require OpenFlow support. In this presentation, David will explain the experience of taking an OpenFlow white box switch to production in 3 data centers. The presentation will cover the following topics:
- How to work through limited TCAM in commercial silicon and maximize the TCAM usage for production
- How to scale an OpenFlow-based data center network under constraints
- How commercial silicon supports the OpenFlow 1.3 specification
- Additional features of the OpenFlow specification that will drive commercial silicon development
- Interworking L2/L3 and an OpenFlow network on the same switch
Pyretic - A new programmer-friendly language for SDN
Managing a network requires support for multiple concurrent tasks, from routing and traffic monitoring, to access control and server load balancing. Software-Defined Networking (SDN) allows applications to realize these tasks directly, by installing packet-processing rules on switches. However, today's SDN platforms provide limited support for creating modular applications.
Join Bay Area Network Virtualization as Dr. Joshua Reich, Postdoctoral Research Scientist and Computing Innovation Fellow at Princeton University presents Pyretic - a new programmer-friendly domain-specific language embedded in Python that enables modular programming for SDN applications. Pyretic is part of the Frenetic Network Programming Language initiative sponsored by Princeton University and Cornell University, with support from the National Science Foundation, the Office of Naval Research, Google, Intel and Dell.
2. What are the problems?
Reinventing networking abstractions
Changing and vendor-specific solutions
– Flannel
– Weave
– SocketPlane
Overlay² (double encapsulation) for VM-nested containers
Performance, latency, SLA and management penalties
6. Kuryr Solution
Neutron as the production-ready network abstraction containers need
Map container networking abstractions to the Neutron API
Allow consumers to choose their vendor while keeping one high-quality API free of vendor lock-in
Bring your container and VM networking together under one API
Implement all the common code for Neutron vendors, allowing them to get to container networking with just a binding script
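The mapping idea above can be sketched as a thin translation layer. The snippet below is a minimal, hypothetical illustration, not Kuryr's actual code: the Docker-side field names follow the libnetwork remote-driver JSON, while the naming scheme used to correlate the two objects is an assumption.

```python
# Illustrative sketch: translating a Docker libnetwork CreateNetwork
# request into a Neutron "create network" request body.

def cnm_to_neutron_network(cnm_request):
    """Map a libnetwork CreateNetwork payload to a Neutron network body."""
    docker_net_id = cnm_request["NetworkID"]
    return {
        "network": {
            # Embed (part of) the Docker network ID in the Neutron name
            # so the two objects can be correlated later (an assumed
            # convention for this sketch).
            "name": "kuryr-" + docker_net_id[:8],
            "admin_state_up": True,
        }
    }

request = {"NetworkID": "51c75a2515d4", "Options": {}}
print(cnm_to_neutron_network(request)["network"]["name"])  # kuryr-51c75a25
```

Because the translation is plugin-agnostic, any Neutron backend that can create a network can serve the resulting request.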
7. Kuryr Solution
Implement a common base for Neutron vendors that support VM-nested containers
Avoid double encapsulation
Manage each container port as a Neutron entity
Planned support for OVN, MidoNet, Dragonflow and Calico
Leverage Neutron advanced networking
– LBaaS, FWaaS, VPNaaS
– Security groups / NAT
8. Kuryr in OpenStack – Bare metal
[Diagram: the Controller Node runs the Neutron Server and the Kuryr Service; the Compute Node runs the (optional) Neutron agents]
9. Kuryr in OpenStack – Nested deployment
[Diagram: the Controller Node runs the Neutron Server; the Compute Node runs the (optional) Neutron agents and a VM that hosts the Kuryr Service]
10. Kuryr Project Overview
Open source
Part of the OpenStack Neutron stadium
Brings the Neutron networking model as a provider for the Docker CNM
Aims to support different container orchestration engines (COEs)
– e.g. Kubernetes, Mesos, Docker Swarm
Weekly IRC meetings
Working together with the OpenStack community
– Neutron, Magnum, Kolla
12. Kuryr Libnetwork Remote Driver
Keeps up to date with the changing libnetwork remote-driver API
Maps Docker's CNM operations onto the Neutron API
Any Neutron plugin can use it (for example, OVS)
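Mechanically, a libnetwork remote driver is an HTTP service: Docker POSTs JSON to well-known endpoints and the driver answers with JSON. The sketch below shows that dispatch shape with stub handlers; the endpoint paths and the Plugin.Activate handshake follow the libnetwork remote-driver protocol, while the handler bodies are placeholders for the Neutron calls a real driver would make.

```python
# Minimal sketch of the libnetwork remote-driver request/response shape.
import json

def activate(_body):
    # Docker calls /Plugin.Activate first to discover capabilities.
    return {"Implements": ["NetworkDriver", "IpamDriver"]}

def create_network(_body):
    # A real driver would create a Neutron network here (e.g. via
    # python-neutronclient); this stub just acknowledges the request.
    return {}

HANDLERS = {
    "/Plugin.Activate": activate,
    "/NetworkDriver.CreateNetwork": create_network,
}

def dispatch(path, raw_body):
    """Route one Docker plugin request to its handler, JSON in/out."""
    return json.dumps(HANDLERS[path](json.loads(raw_body)))

print(dispatch("/Plugin.Activate", "{}"))
```

In a real deployment the dispatcher would sit behind a Unix socket or a plugin-spec file that tells Docker where to reach the driver.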
14. Kuryr Generic VIF Binding Layer
Binds the container network namespace to the networking infrastructure
Common part (container side)
– IPAM
– vEth pair creation
Executable-based, vendor-specific part
– Chosen based on the Neutron port type
– Free choice of implementation language
– Runs in root context
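The vendor-specific half of the binding layer can be pictured as picking an executable by the port's VIF type and invoking it in root context. The snippet below is a sketch under stated assumptions: the directory, script naming, and argument order are hypothetical, not Kuryr's actual layout.

```python
# Sketch of executable-based VIF binding: the common layer creates the
# veth pair, then delegates to a vendor driver chosen by the Neutron
# port's binding:vif_type attribute.
import os

BINDING_DIR = "/usr/libexec/kuryr"  # hypothetical driver location

def binding_command(port, ifname):
    """Build the command line that plugs ifname into the vendor datapath."""
    vif_type = port["binding:vif_type"]           # e.g. "ovs", "midonet"
    script = os.path.join(BINDING_DIR, vif_type)  # one driver per type
    # The driver runs as root and attaches the host side of the veth
    # pair to the vendor's switch; the language it is written in is free.
    return [script, "bind", port["id"], ifname]

port = {"id": "p-1234", "binding:vif_type": "ovs"}
print(binding_command(port, "tap-p-1234"))
# ['/usr/libexec/kuryr/ovs', 'bind', 'p-1234', 'tap-p-1234']
```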
15. Deployment
Package based
Container based with Kolla
– Vendors must generate their downstream container with the necessary agents and plugin
– Quick and easy deployment (Ansible based)
16. VM Nested Containers
Leverage the same Neutron solution for tenant container networking
– Neutron features
– Easier management
– Same "implementation"
– Supports container networks and VM network isolation
– Neutron plugins already support this: OVN, MidoNet, Dragonflow
Magnum
Backend implementations interoperability
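The nested-container model avoids double encapsulation by making each container port a VLAN-tagged subport of the VM's trunk port. The body shape below follows Neutron's trunk API (`port_id`, `segmentation_type`, `segmentation_id`); the lowest-free-VLAN allocation scheme is an illustrative assumption.

```python
# Sketch: each nested-container port becomes a subport of the VM's
# trunk port, separated by a VLAN tag inside the VM instead of a second
# overlay on top of the tenant network.

def subport_spec(container_port_id, used_vlans):
    """Build a Neutron trunk subport body for a container's port.

    used_vlans: VLAN IDs already allocated on this trunk.
    """
    # Pick the lowest free VLAN ID in the usable range (2..4094).
    vlan = next(v for v in range(2, 4095) if v not in used_vlans)
    return {
        "port_id": container_port_id,
        "segmentation_type": "vlan",
        "segmentation_id": vlan,
    }

print(subport_spec("c-port-1", {2, 3}))  # segmentation_id: 4
```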
18. Neutron Side
Port forwarding
– Can be used to implement Docker port mapping
– Saves public IP space
Adding tags to resources
– Pre-allocation of ports/networks
– Mapping between Docker IDs and Neutron IDs
VLAN trunk API (nested ports)
– Formal Neutron API to define nested container ports
DNS resolution for port names
– Leveraged for DNS service discovery
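The port-forwarding bullet above can be made concrete: a Docker-style mapping such as `-p 8080:80` translates into one forwarding rule on a shared floating IP, so many containers share one public address. The field names below follow Neutron's floating-IP port-forwarding extension; the values are illustrative.

```python
# Sketch: translating a Docker port mapping into a Neutron floating-IP
# port-forwarding body, saving public IP space.

def port_mapping_to_forwarding(host_port, container_port, internal_ip,
                               internal_port_id, protocol="tcp"):
    """Build a Neutron port_forwarding body for one Docker -p mapping."""
    return {
        "port_forwarding": {
            "external_port": host_port,        # public side, e.g. 8080
            "internal_port": container_port,   # container side, e.g. 80
            "internal_ip_address": internal_ip,
            "internal_port_id": internal_port_id,  # the Neutron port
            "protocol": protocol,
        }
    }

pf = port_mapping_to_forwarding(8080, 80, "10.0.0.5", "p-1")
print(pf["port_forwarding"]["external_port"])  # 8080
```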
19. New Features for Containers
Security groups
Subnet pools
NAT (SNAT / DNAT – floating IPs)
Port security (ARP spoofing protection)
QoS
Quota management
Neutron pluggable IPAM
Well-integrated COE load balancing through Neutron
FWaaS for containers
Many more as Neutron progresses…
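Because Kuryr models each container port as a regular Neutron port, these features apply to containers exactly as they do to VMs. As one example, the sketch below builds a security-group rule body in the shape of Neutron's security-group-rule API; the group ID is a placeholder.

```python
# Sketch: an "allow ingress TCP on one port" rule for containers,
# using the Neutron security-group-rule body shape.

def allow_ingress_tcp(security_group_id, port):
    """Build a Neutron rule admitting inbound TCP traffic to `port`."""
    return {
        "security_group_rule": {
            "security_group_id": security_group_id,  # placeholder ID
            "direction": "ingress",
            "protocol": "tcp",
            "port_range_min": port,
            "port_range_max": port,
        }
    }

rule = allow_ingress_tcp("sg-web", 80)
print(rule["security_group_rule"]["direction"])  # ingress
```

Attaching the group to a container is then just a Neutron port update listing the group's ID.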
20. Kuryr Roadmap Plan
Liberty release
– Kuryr specs in the Neutron/Magnum communities
– Specs for new Neutron features
– Docker libnetwork remote driver
– Generic VIF binding layer
– Functional testing
– Configuration and authentication in Neutron and Docker
21. Kuryr Roadmap Plan
Mitaka release
– Neutron IPAM for Docker
– Containerized Neutron plugins and solutions with Kolla
– Nested containers in VMs; Magnum–Kuryr integration
– Functional testing
Missing Neutron features
– Port forwarding: port mapping for Docker
– Neutron tags on resources: pre-allocation of networks/ports/subnets
– DNS resolution for port names: Docker DNS discovery
– VLAN trunk API: used for nested containers
22. Kuryr Roadmap Plan
N release
– Neutron advanced services (LBaaS, FWaaS, VPNaaS)
– Kubernetes services using Neutron LBaaS
– Kubernetes networking model (K8s API)