The document discusses Azure networking capabilities for containers. It describes how Azure's software-defined networking (SDN) provides a single virtual network (VNet) that containers can connect to, enabling service chaining, security, and connectivity. The Azure Container Network Interface (CNI) plugin integrates containers as first-class citizens on the Azure network; it is open source and works with multiple container orchestrators through an open, modular architecture. The Azure Container Service makes the CNI plugin available through a setting that enables networking integration for containers.
13. Open & Modular Architecture
[Diagram: an orchestrator (Kubernetes, DC/OS, or Service Fabric) sits atop an open, modular architecture of third-party plugins, an IPAM plugin, and the CNI network plugin, running on the host operating system (Windows or Linux). The container runtime (Docker) invokes the network plugin to assign each application container (Container1, Container2, Container3) its own IP address (IP1, IP2, IP3) directly on the Azure SDN, which provides service chaining, security, and connectivity. Containers become first-class citizens on the network.]
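To make the diagram concrete, a CNI plugin is driven by a small JSON network configuration that the runtime hands to the plugin binaries. The following is an illustrative sketch only; the plugin names `azure-vnet` and `azure-vnet-ipam` follow the Azure/azure-container-networking repository, but exact field names and values vary by plugin version:

```json
{
  "cniVersion": "0.3.0",
  "name": "azure",
  "plugins": [
    {
      "type": "azure-vnet",
      "mode": "bridge",
      "ipam": {
        "type": "azure-vnet-ipam"
      }
    }
  ]
}
```

With a configuration like this in place, the runtime calls the network plugin on container start, and the IPAM plugin hands out an address from the VNet rather than from an overlay.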
14. Azure Container Service
The Azure CNI plugin is integrated into the ACS engine and available through its settings, allowing users to turn on the CNI plugin in the settings template and start using it with their container orchestrator.
https://github.com/Azure/acs-engine
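As a hedged sketch of what enabling the plugin in an acs-engine cluster definition might look like: the `networkPlugin` field and the `"azure"` value below are assumptions drawn from the acs-engine repository's cluster definition format and may differ between releases (older versions used a `networkPolicy` field instead):

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "networkPlugin": "azure"
      }
    }
  }
}
```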
1.8m virtual network interfaces
879k Network Security Groups
23k virtual network peerings
42.1m public IP addresses in use
28.8m reserved IPs
Over 100k TB of traffic in/out per week
4.9k remote connectivity circuits
16.8m hours/week of VPN gateway usage
VNet: One SDN for VMs & Containers
Consistent way to specify policies
One IP space; containers as first-class citizens on the network
Connectivity between VMs and containers; cross-connectivity with on-premises
Rich feature set: service chaining, ACLs, IPAM, load balancing, DNS, PaaS services
Optimized for the cloud (no double overlays)
Accelerated networking, FPGA, and existing offloads work
No double encapsulation
Microsoft contributing to open source projects
CNI project: portability to Windows
Azure VNet for Containers project: CNI plugin for Azure
Microsoft is serious about open source and about serving as a committed participant in the open source community. We want to contribute fresh, innovative solutions for the community to share and build on. In this spirit, we are making available the complete and scalable Azure networking stack for containers that run on the Azure platform. A completely open source Container Network Interface (CNI) plug-in, sponsored by the Linux Foundation, will work with different orchestrators on any platform, without vendor lock-in, and open up the benefits of the Azure networking stack for the community to implement their own versions on Windows and Linux, allowing the community to contribute to, modify, and engage with the Azure network stack.
The significance of this announcement is that this level of network integration has not been available for containers before now. To network between containers, customers needed an overlay, which impacts performance, and had to use different vendors for different functionality such as load balancing, security, and on-premises connections.
Azure Virtual Network for Containers will provide all that functionality at no extra cost, with the familiar Software Defined Networking (SDN) stack that is available in Azure VMs today. And you can use any third-party orchestrator to create containers and leverage the Azure network as the platform. To learn more, see Azure Virtual Network for Containers.
Open architecture – our SDN works with every partner
Azure offers a rich Software Defined Networking stack to deliver network virtual functions for virtual machines. Customers can deploy VMs into virtual networks (VNets), set up network ACLs, load balancing, and internet connectivity, and connect back to on-premises networks through hybrid technologies. Today, we are announcing that all of these network virtual functions can also be leveraged for containers running in Azure. 'Azure Virtual Network' for containers is a CNI plugin that works with various container orchestration engines to bring SDN to containers. The solution is also integrated into the 'Azure Container Service Engine', so it is readily available to customers using the Kubernetes SKU. Some of the unique benefits of the product are:
• Every container gets a directly addressable private IP address from the VNet
• Containers can communicate with one another using their private IP addresses; no overlay or complex routing is required
• Containers can be configured behind the Azure Load Balancer
• Container IP addresses can be programmed into Azure Network Security Groups to provide fine-grained access control across VM instances
• Containers have full connectivity to the rest of the virtual network, as well as to on-premises networks through ExpressRoute or S2S VPN
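As an example of the load-balancing benefit above, placing containers behind the Azure Load Balancer from a Kubernetes cluster typically takes a Service of type `LoadBalancer`. This is a minimal sketch; the service name, selector, and ports are illustrative, not from the source:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend        # illustrative name
spec:
  type: LoadBalancer        # provisions an Azure Load Balancer front end
  selector:
    app: web-frontend       # matches the pods (containers) to expose
  ports:
  - port: 80                # load balancer front-end port
    targetPort: 8080        # container port (illustrative)
```

Because each container already holds a routable VNet IP, the load balancer can target the container addresses directly rather than an overlay endpoint.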
Benefits:
Battle-tested, enterprise-grade network
Routing, security, NFV
Uniform policies, designed to scale