Secure Your Containers!
What Network Admins
Should Know When Moving
Into Production Cynthia Thomas
Systems Engineer
@_techcet_
{ Why is networking an afterthought?
Containers, Containers,
Containers!
Why Containers?
• Much lighter weight and less overhead than virtual
machines
• Don’t need to copy entire OS or libraries – keep track of deltas
• More efficient unit of work for cloud-native apps
• Crucial tools for rapid-scale application development
• Increase density on a physical host
• Portable container image for moving/migrating resources
Containers: Old and New
• LXC: operating system-level virtualization through a virtual
environment that has its own process and network space
• 8-year-old technology
• Leverages Linux kernel cgroups
• Also other namespaces for isolation
• Focus on System Containers
• Security:
• Previously, root inside a guest container could run code as root on the host system
• LXC 1.0 brought “unprivileged containers” to restrict hardware and host access
• Ecosystem:
• Vendor neutral, Evolving LXD, CGManager, LXCFS
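The cgroup and namespace isolation described above is visible directly under `/proc`; a minimal sketch (Linux only — it degrades to an empty list elsewhere, and the function name is illustrative, not part of any LXC API):

```python
import os

def list_namespaces(pid="self"):
    """Return the namespace types the kernel tracks for a process.

    On Linux, /proc/<pid>/ns holds one entry per namespace type
    (e.g. net, pid, mnt, uts, ipc, user, cgroup) — the building
    blocks LXC and Docker use for isolation. Returns [] on systems
    without /proc.
    """
    ns_dir = f"/proc/{pid}/ns"
    if not os.path.isdir(ns_dir):
        return []
    return sorted(os.listdir(ns_dir))

print(list_namespaces())
```

On a typical Linux host this prints entries like `['cgroup', 'ipc', 'mnt', 'net', 'pid', 'user', 'uts']`; a container runtime gives each container its own copy of most of these.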
Containers: Old and New
• Explosive growth: Docker created a de-facto standard image format and API for
defining and interacting with containers
• Docker: also operating system-level virtualization through a virtual environment
• 3-year-old technology
• Application-centric API
• Also leverages Linux kernel cgroups and kernel namespaces
• Moved from LXC to libcontainer implementation
• Portable deployment across machines
• Brings image management and more seamless updates through versioning
• Security:
• Networking: linuxbridge, IPtables
• Ecosystem:
• CoreOS, Rancher, Kubernetes
Container Orchestration Engines
• Enter the management of containers for application
deployment!
• Scale applications with clusters where the underlying
deployment unit is a container
• Examples include Docker Swarm, Kubernetes, Apache Mesos
Today’s COEs have vulnerabilities
What’s the problem?
Why are containers insecure?
• They weren’t designed with full isolation like VMs
• Not everything in Linux is namespaced
• What do they do to the network?
COEs help container orchestration!
…but what about networking?
• Ad-hoc security implementations create scaling
issues and security/policy complexity
• Which networking model to choose? CNM? CNI?
• Why is network security always seemingly considered last?
{ Your Network Security team!
And you should too.
Who’s going to care?
Containers add network complexity!!!
• More components
= more endpoints
• Network Scaling
Issues
• Security/Policy
complexity
Perimeter Security approach is not enough
• Legacy architectures
tended to put higher layer
services like Security and
FWs at the core
• Perimeter protection is
useful for north-south
flows, but what about
east-west?
• More = better? How to
manage more pinch
points?
#ThrowbackThursday
What did OpenStack do?
• Started in 2010 as an open source community for cloud compute
• Gained a huge following and became production ready
• Enabled collaboration amongst engineers for technology advancement
#ThrowbackThursday
Neutron came late in the game!
• Took 3 years before dedicated project formed
• Neutron enabled third party plugin solutions
• Formed advanced networking framework via community
What is Neutron?
• Production-grade open framework for Networking:
 Multi-tenancy
 Scalable, fault-tolerant devices (or device-agnostic
network services).
 L2 isolation
 L3 routing isolation
• VPC
• Like VRF (virtual routing and forwarding)
 Scalable Gateways
 Scalable control plane
• ARP, DHCP, ICMP
 Floating/Elastic IPs
 Decoupled from Physical Network
 Stateful NAT
• Port masquerading
• DNAT
 ACLs
 Stateful (L4) Firewalls
• Security Groups
 Load Balancing with health checks
 Single Pane of Glass (API, CLI, GUI)
 Integration with COEs & management platforms
• Docker Swarm, K8S
• OpenStack, CloudStack
• vSphere, RHEV, System Center
Hardened Neutron Plugins
{ Leverage Neutron
Kuryr Can Deliver Networking
to Containers
{
Bridging the container
networking framework with
OpenStack network abstractions
The Kuryr Mission
What is Kuryr?
Kuryr has become a collection of projects
and repositories:
- kuryr-lib: common libraries (neutron-client,
keystone-client)
- kuryr-libnetwork: docker networking plugin
- kuryr-kubernetes: k8s api watcher and CNI driver
- fuxi: docker cinder driver
Project Kuryr Contributions
As of Oct. 18th, 2016: http://stackalytics.com/?release=all&module=kuryr-group&metric=commits
Some previous* networking options with
Docker
STOP
IPtables maybe?
Done with Neutron? Tell me more,
please!
• libnetwork:
• Null (with nothing in its networking namespace)
• Bridge
• Overlay
• Remote
Kuryr: Docker (1.9+)’s remote driver
for Neutron networking
Kuryr implements a libnetwork remote network
driver and maps its calls to OpenStack Neutron.
It translates between libnetwork's Container
Network Model (CNM) and Neutron's networking
model.
Kuryr also acts as a libnetwork IPAM driver.
Libnetwork implements CNM
• CNM has 3 main networking components: sandbox, endpoint,
and network
Kuryr translation please!
• Docker uses a PUSH model to call a service for libnetwork
• Kuryr maps the 3 main CNM components to Neutron
networking constructs
• Ability to attach to existing Neutron networks with host
isolation (container cannot see host network)
libnetwork   neutron
----------   -------
Network      Network
Sandbox      Subnet, Ports, netns
Endpoint     Port
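The CNM-to-Neutron mapping above can be sketched as a lookup table; this is an illustrative model of the translation a remote driver like Kuryr performs, not the actual Kuryr code or API:

```python
# Hypothetical translation table mirroring the mapping on the slide:
# libnetwork CNM component -> Neutron construct(s).
CNM_TO_NEUTRON = {
    "network": "network",
    "sandbox": "subnet + ports + netns",
    "endpoint": "port",
}

def translate(cnm_component: str) -> str:
    """Map a libnetwork CNM component to its Neutron counterpart."""
    try:
        return CNM_TO_NEUTRON[cnm_component.lower()]
    except KeyError:
        raise ValueError(f"unknown CNM component: {cnm_component}")

print(translate("Endpoint"))  # -> port
```

A real driver would issue Neutron API calls at each step; the point here is that the translation itself is a small, fixed mapping.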
Networking services from Neutron, for containers!
Distributed Layer 2 Switching
Distributed Layer 3 Gateways
Floating IPs
Service Insertion
Layer 4 Distributed Stateful NAT
Distributed Firewall
VTEP Gateways
Distributed DHCP
Layer 4 Load Balancer-as-a-
Service (with Health Checks)
Policy without the need for IP tables
Distributed Metadata
TAP-as-a-Service
Launching a Container in Docker with Kuryr/MidoNet
{ It’s an enabler for existing, well-defined
networking plugins for containers
Kuryr delivers for CNM,
but what about CNI?
Kubernetes Presence in Container Orchestration
• Open sourced from production-grade, scalable technology used by
Borg & Omega at Google for over 10 years
• Explosive use over the last 12 months, including users like eBay and
Lithium Technologies
• Portable, extensible, self-healing
• Impressive automated rollouts & rollbacks with one command
• Growing ecosystem supporting Kubernetes:
• CoreOS, RH OpenShift, Platform9, Weaveworks, Midokura!
Kubernetes Architecture
• Uses PULL model
architecture for config
changes
• Meaning K8S emits events on
its API server
• etcd
• All persistent master state is
stored in an instance of etcd
• To date, runs as single instance;
HA clusters in future
• Provides a “great” way to store
configuration data reliably
• With watch support,
coordinating components can
be notified very quickly of
changes
Kubernetes Control Plane
• K8S API Server
• Serves up the Kubernetes API
• Intended to be a CRUD-y server, with separate components or in plug-ins
for logic implementation
• Processes REST operations, validates them, and updates the corresponding
objects in etcd
• Scheduler
• Binds unscheduled pods to nodes
• Pluggable, for multiple cluster schedulers and even user-provided
schedulers in the future
• K8S Controller Manager Server
• All other cluster-level functions are currently performed by the Controller
Manager
• E.g. Endpoints objects are created and updated by the endpoints
controller; and nodes are discovered, managed, and monitored by the
node controller.
• The replication controller is a mechanism that is layered on top of the
simple pod API
• Planned to be a pluggable mechanism
Kubernetes Control Plane Continued
• kubelet
• Manages pods and their
containers, their images, their
volumes, etc
• kube-proxy
• Runs on each node to provide
a simple network proxy and
load balancer
• Reflects services as defined in
the Kubernetes API on each
node and can do simple TCP
and UDP stream forwarding
(round robin) across a set of
backends
Kubernetes Worker Node
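The round-robin forwarding that kube-proxy does across a set of backends can be sketched in a few lines; this toy models only the backend-picking logic, not the actual TCP/UDP stream forwarding, and the backend addresses are made up for illustration:

```python
import itertools

class RoundRobinProxy:
    """Toy model of kube-proxy's round-robin backend selection."""

    def __init__(self, backends):
        # cycle() yields backends in order, wrapping around forever.
        self._cycle = itertools.cycle(backends)

    def pick_backend(self):
        return next(self._cycle)

proxy = RoundRobinProxy(["10.0.0.2:8080", "10.0.0.3:8080", "10.0.0.4:8080"])
for _ in range(4):
    print(proxy.pick_backend())
# -> 10.0.0.2:8080, 10.0.0.3:8080, 10.0.0.4:8080, 10.0.0.2:8080
```

Each new connection is handed to the next backend in the rotation, which is why kube-proxy is described as a simple proxy and load balancer rather than a full L7 balancer.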
Kubernetes Networking Model
There are 4 distinct networking problems to solve:
1. Highly-coupled container-to-container
communications
2. Pod-to-Pod communications
3. Pod-to-Service communications
4. External-to-internal communications
Kubernetes Networking Options
Flannel provides an overlay to enable cross-host communication
- IP per pod
- VXLAN tunneling between hosts
- IPtables for NAT
- Multi-tenancy?
- Host per tenant?
- Cluster per tenant?
- How to share VMs and containers on the same network for the same tenant?
- Security Risk on docker bridge? Shared networking stack
MidoNet Integration with
Kubernetes using Kuryr
MidoNet: 6+ years of steady growth
Security at the edge
1. vPort1 initiates a packet flow through the virtual network
2. MN Agent fetches the virtual topology/state
3. MN simulates the packet through the virtual network
4. MN installs a flow in the kernel at the ingress host
5. Packet is sent in tunnel to egress host
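The five steps above can be sketched as a first-packet handler at the ingress host; names and data shapes here are illustrative, not MidoNet's actual agent code:

```python
def handle_first_packet(packet, topology_cache, kernel_flows):
    """Toy model of the ingress-host steps: fetch the virtual topology,
    simulate the packet through it, then install a kernel flow so
    subsequent packets of the same flow skip the simulation."""
    key = (packet["src"], packet["dst"])
    if key in kernel_flows:                            # fast path: flow already installed
        return kernel_flows[key]
    topo = topology_cache.get("virtual-topology", {})  # step 2: fetch topology/state
    egress = topo.get(packet["dst"], "drop")           # step 3: simulate through virtual net
    kernel_flows[key] = egress                         # step 4: install flow at ingress host
    return egress                                      # step 5: tunnel toward egress host

flows = {}
topo = {"virtual-topology": {"10.1.0.5": "host-B"}}
print(handle_first_packet({"src": "10.1.0.2", "dst": "10.1.0.5"}, topo, flows))
# -> host-B
```

The design point is that only the first packet of a flow pays the simulation cost; everything after hits the installed kernel flow, which keeps security enforcement at the edge without a central choke point.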
Kubernetes Integration: How with Kuryr?
Kubernetes 1.2+
Two integration components:
CNI driver
• Standard container networking: preferred K8S network extension point
• Can serve rkt, appc, docker
• Uses Kuryr port binding library to bind local pod using metadata
Raven (Part of Kuryr project)
• Python 3
• AsyncIO
• Extensible API watcher
• Drives the K8S API to Neutron API translation
Kubernetes Integration: How with Kuryr+MidoNet?
Defaults:
kube-proxy: generates iptables rules which map portal_ips
so that traffic reaches the local kube-proxy daemon, which does the
equivalent of a NAT to the actual pod address
flannel: default networking integration in CoreOS
Enhanced by:
Kuryr CNI driver: enables the host binding
Raven: process used to proxy K8S API to Neutron API
MidoNet agent: provides higher layer services to the pods
Kubernetes Integration: How with Kuryr?
Raven: used to proxy K8S API to Neutron API + IPAM
- focuses only on building the virtual network topology translated
from the events of the internal state changes of K8S through its API
server
Kuryr CNI driver: takes care of binding virtual ports to physical
interfaces on worker nodes for deployed pods
Kubernetes API   Neutron API
--------------   -----------
Namespace        Network
Cluster Subnet   Subnet
Pod              Port
Service          LBaaS Pool, LBaaS VIP (FIP)
Endpoint         LBaaS Pool Member
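The Kubernetes-to-Neutron mapping above can be sketched as an event-driven translation, in the spirit of Raven's API watcher; the table mirrors the slide, but the function and event shape are hypothetical, not the actual Raven implementation:

```python
# Illustrative table: watched Kubernetes object kind -> Neutron resource(s)
# that a Raven-style watcher would create or update.
K8S_TO_NEUTRON = {
    "Namespace": "network",
    "Pod": "port",
    "Service": "lbaas-pool + lbaas-vip",
    "Endpoint": "lbaas-pool-member",
}

def on_k8s_event(event):
    """Translate one watched K8S API event into the Neutron action it implies."""
    kind = event["object"]["kind"]
    neutron_resource = K8S_TO_NEUTRON.get(kind)
    if neutron_resource is None:
        return None  # kinds we don't translate are ignored
    # A real watcher would call the Neutron API here; we just report.
    return f"{event['type']}: {neutron_resource}"

print(on_k8s_event({"type": "ADDED", "object": {"kind": "Pod"}}))
# -> ADDED: port
```

This matches the division of labor described above: the watcher only builds virtual topology from K8S state changes, while the CNI driver handles the per-node port binding.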
Kubernetes Integration: How with Kuryr+MidoNet?
Raven: used to proxy K8S API to Neutron API
Kuryr CNI driver: takes care of binding virtual ports to physical
interfaces on worker nodes for deployed pods
Kubernetes Integration: How with Kuryr+MidoNet?
Raven: used to proxy K8S API to Neutron API
Kuryr CNI driver: takes care of binding virtual
ports to physical interfaces on worker nodes
for deployed pods
Completed integration components:
- CNI driver
- Raven
- Namespace Implementation (a mechanism to partition resources created
  by users into a logically named group):
  - each namespace gets its own router
  - all pods driven by the RC should be on the same logical network
- CoreOS support
- Containerized MidoNet services
Kubernetes Integration: Where are we now with MidoNet?
Where will Kuryr go next?
• Bring container and VM networking under one API
• Multi-tenancy
• Advanced networking services/map Network Policies
• QoS
• Adapt implementation to work with other COEs
• kuryr-mesos
• kuryr-cloudfoundry
• kuryr-openshift
• Magnum Support (containers in VMs) in OpenStack
Kuryr
 Project Launchpad
 https://launchpad.net/kuryr
 Project Git Repository
 https://github.com/openstack/kuryr
 Weekly IRC Meeting
 http://eavesdrop.openstack.org/#Kuryr_Project_Meeting
 IRC
 #openstack-neutron @ Freenode
MidoNet
 Community Site
 www.midonet.org
 Project Git Repository
 https://github.com/midonet/midonet
 Try MidoNet with one command:
 $> curl -sL quickstart.midonet.org | sudo bash
 Join Slack
 slack.midonet.org
Get Involved!
{
Cynthia Thomas
Systems Engineer
@_techcet_
Thank you!

Editor's Notes

  • #7 Purpose: examples of existing COEs. What are COE networking models? Docker: CNM; K8S & Mesos: CNI. Maturity? Re-inventing the wheel, including the political battles, but that’s the fun that open source brings. Otto’s Magnum webinar compares COEs (minute 16:30??): http://blog.midokura.com/2016/05/project-magnum-introduction/ Talk about which are good for what. If 10K nodes, use …
  • #33 Reference: https://github.com/kubernetes/kubernetes/blob/master/docs/design/architecture.md Service endpoints are currently found via DNS or through environment variables (both Docker-links-compatible and Kubernetes {FOO}_SERVICE_HOST and {FOO}_SERVICE_PORT variables are supported). These variables resolve to ports managed by the service proxy. The kubelet ships with built-in support for cAdvisor, which collects, aggregates, processes and exports information about running containers on a given system. cAdvisor includes a built-in web interface available on port 4194