VNS3 3.5 Container System Add-Ons
VNS3:net and VNS3:turret
© 2016
Table of Contents
Introduction
Container Network
Uploading an Image or Dockerfile
Allocating a Container
Saving a Running Container
Access Considerations
Introduction

Container System Overview
The VNS3 Container System makes use of Docker, an open source project released in March 2013 that automates the deployment of applications in Linux Containers (LXC). Docker is a lightweight virtualization engine that allows users to encapsulate any Linux-based application or set of applications as a lightweight, portable, self-sufficient virtual container. These containers can be manipulated using standard operations and run anywhere Docker is installed. Docker offers a different granularity of virtualization that allows for greater isolation between applications.
[Diagram: virtualization comparison. Left - traditional VMs: each VM on the cloud provider hypervisor carries its own Guest OS, bins/libs, and App Stack. Right - VNS3 Container System: multiple App Stacks share bins/libs inside LXC/Docker Containers running on the VNS3 instance, on the same hypervisor and server hardware.]
Docker and VNS3
We have received numerous requests from customers for the ability to add their own layer 4-7 network service applications to the VNS3 layer 3 transport device. To provide that level of customization without compromising VNS3 core functionality, we added an Application Container System to VNS3, powered by Docker. Now you can embed layer 4-7 network service features and functions - provided by other vendors or developed in house - safely and securely into your Cloud Network.
Take a look at the following blog posts for further explanation and an example of how you can use the VNS3 Container System:
• An Introduction to Docker in VNS3
• Using Docker.io for SSL termination and load balancing
[Diagram: VNS3 Core Components (router, switch, firewall, VPN concentrator, protocol redistributor, dynamic & scriptable SDN) extended via extensible NFV with layer 4-7 add-ons such as WAF, content caching, NIDS, proxy, load balancing, and custom functions.]
Instance Sizing Considerations
VNS3 instance sizes have always been a factor in determining the network performance of the Overlay (the customer's edge connectivity, the customer's router configuration, and geo/network distance being the other factors). Throughput is dependent on the instance's access to underlying hardware (more specifically the NIC). The fewer virtual workloads competing for those hardware resources, the better the performance. As you increase the size of the VNS3 instance, you increase the total throughput.
Now that Docker is running as part of VNS3, the Controller's instance size will also determine how many Docker application containers can run in your Controller. The type and process loads of the containers will be the determining factor. We recommend the m3.medium instance size for VNS3 Controllers.
Note: VNS3 3.5 is available as EBS-backed AMIs. This not only allows persistent storage for saving Container configurations, but also allows instance scaling within AWS.
Container Network

Container Network Setup
To start using the Container System you must first set up an internal
subnet where your containers will run. The default VNS3 container subnet
is 198.51.100.0/28. VNS3 allows you to choose a custom address block.
Make sure it will not overlap with the Overlay Subnet or any subnets you
plan on connecting to VNS3. The container subnet can be thought of as a
VLAN segment bridged to the VNS3 Controller’s public network interface.
The Container Networking Page shows the available container IP
addresses for the chosen Container Network. IP addresses listed as
reserved are either used by Docker (for routing, bridging, and broadcast)
or are being used by a currently running container.
To change the Container Network first enter a new network subnet in CIDR
notation.
Click Validate to ensure the subnet accommodates the Container Network
requirements.
Click Set once validation is passed.
You will be prompted with a popup warning that a Container Network change requires a restart of any running Containers. Click OK.
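As an illustration, assuming the default 198.51.100.0/28 block (the exact reserved addresses may vary by VNS3 version), the Container Networking Page might list the network as follows:

# Hypothetical address layout for a 198.51.100.0/28 Container Network
198.51.100.0                     reserved (network address)
198.51.100.1                     reserved (Docker bridge/routing)
198.51.100.2 - 198.51.100.14     available to Containers (minus any in use)
198.51.100.15                    reserved (broadcast)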
Uploading an LXC Image or Dockerfile

Container Images
VNS3 3.5 supports uploading a compressed archive of an LXC Container Image, a Dockerfile, or a Docker Context Directory. In the future we will support pulling Containers from the public Docker Index and private repositories.

Container Images

Container Images are used to launch Containers. You can think of this relationship as similar to an AMI and an Instance in AWS. Once an Image is uploaded you can launch one or multiple Containers from the Image.


Dockerfile

Dockerfiles are a representation of a Container Image - essentially a map of how to build an Image: start from a source image and run a series of commands on that image before finalizing the Container Image. See the Dockerfile Reference Document for more information.
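As a minimal sketch (the base image, package, and command below are illustrative assumptions, not Cohesive-provided content), a Dockerfile that builds an Nginx Image might look like:

# Illustrative Dockerfile: build a minimal Nginx image
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
EXPOSE 80
# Run Nginx in the foreground so the Container stays alive
CMD ["nginx", "-g", "daemon off;"]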



Dockerfile Context Directories

VNS3 also supports the upload of what Docker calls a “context” - a collection of files in a directory that are used along with a Dockerfile to build an Image. The Dockerfile needs to be at the root of the directory, and the rest of the files need to be referenced by relative paths so the Dockerfile can access the appropriate assets during the build process.
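For example (the file names below are hypothetical), a Context Directory and its archive might be prepared like this:

# Context layout: Dockerfile at the root, assets referenced by relative path
mycontext/
    Dockerfile
    nginx.conf
    html/index.html

# Archive the directory contents so the Dockerfile sits at the archive root,
# then host the file at a publicly accessible URL
tar czf mycontext.tar.gz -C mycontext .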



Cohesive Networks provides a number of Containers and Dockerfiles to help get you started, available on our Product Resources page and in the Docker Index respectively.
Container Images: Upload a Container
To Upload a Container Image click on the Images left
column menu item listed under the Container heading.
Click Upload Image.
On the resulting Upload Container Image Window enter
the following:
• Input name
• Description
• Select the Container Url radio button and provide the publicly accessible URL of the archived Container Image file (supported file formats: tar, tgz, tar.gz, tar.bz2, and zip)
Click Upload.



Once the Container Image has finished the import process, you will be able to use the action button to edit or delete the Image, or to allocate (launch) a Container.
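One way to produce such an archive is from a separate Docker host. The sketch below assumes a flattened filesystem tarball from docker export is an acceptable LXC image format, and the container name web1 is hypothetical:

# Export a running container's filesystem as a compressed tarball
docker export web1 | gzip > web1.tar.gz
# Host web1.tar.gz at a publicly accessible URL, then supply that URL in the
# Upload Container Image window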
Container Images: Upload from a Dockerfile or Docker Context
To Upload a Dockerfile click on the Images left column menu
item listed under the Container heading.
Click Upload Image.
On the resulting Upload Container Image Window enter the following:
• Input name
• Description
• Select the Dockerfile Url radio button and provide the publicly accessible URL of the Dockerfile (note: the filename is required to be Dockerfile) or the URL of an archived Dockerfile Context Directory (supported file formats: tar, tgz, tar.gz, tar.bz2, and zip)
Click Upload.



Once the Dockerfile has been uploaded and the image has finished the build process, you will be able to use the action button to edit or delete the Image, or to allocate (launch) a Container.
Allocating a Container

Container Images: Allocate a Container
To launch a Container, click the Actions drop-down button next to the Container Image you want to use and click Allocate.
On the resulting pop up window enter the following:
• Name of the Container
• Command used on initiation of the Container
• Description
Click Allocate.
You will be taken to the Containers page where your newly created Container will list its status.
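For example (all values below are assumptions, not defaults), allocating a Container from an Nginx Image might use:

Name:         nginx1
Command:      /usr/sbin/nginx -g 'daemon off;'
Description:  Nginx server used for load balancing tests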
Saving a Running Container

Saving a Running Container: Save as an Image
This operation saves the state of the currently running Container in image form, for re-use or for export and download. What is saved is an LXC image, from which a new Container can be allocated.
NOTE: VNS3 does not currently support the Docker “commit” command, which would push your changes back to a source such as DockerHub. Nor does it support the Docker “export” command, which delivers a full delta history of the container as opposed to just an LXC image.
Saving a Running Container: Export
This operation allows you to package a running Container for download from the VNS3 Controller. After executing this operation, the image will be listed in uncompressed form via the “Exported Images” link below the Images table on the Images page.
NOTE: VNS3 does not currently support the Docker “commit” command, which would push your changes back to a source such as DockerHub. Nor does it support the Docker “export” command, which delivers a full delta history of the container as opposed to a single LXC image.
Access Considerations

Container Images: Accessing the Container
Once the Container has launched, an IP address
included in the specified Container Network
CIDR will be listed.
Accessing the Container depends on the source
network. The following pages cover connection
considerations when trying to access a VNS3
Container from the public Internet, Overlay
Network, and Remote IPsec Subnet.
Access Consideration: Public Internet
Accessing a Container from the Public Internet requires additions to the inbound hypervisor firewall rules associated with the VNS3 Controller, as well as to the VNS3 Firewall.
The following example shows how to access an Nginx server running as
a Container listening on port 80 (substitute port 22 if the Container is
running SSHD).
Network Firewall/Security Group Rule

Allow port 9080 (the forwarded port in the rules below) from your source IP (possibly 0.0.0.0/0 if the Nginx server is load balancing for a public website).

VNS3 Firewall

Enter rules to port forward incoming traffic to the Container Network and Masquerade outgoing traffic off the VNS3 Controller's public network interface.
# Let the Docker Subnet access the Internet via the Controller's public IP
MACRO_CUST -o eth0 -s <Controller Private IP> -j MASQUERADE

# Port forward 9080 to the Nginx Docker Container
PREROUTING_CUST -i eth0 -p tcp -s 0.0.0.0/0 --dport 9080 -j DNAT --to <Container Network IP>:80
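To verify the rules, request the forwarded port from a permitted source (203.0.113.10 below is a documentation-range placeholder for your Controller's public IP):

# From an allowed source IP on the public Internet
curl http://203.0.113.10:9080/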
Access Consideration: Overlay Network
Accessing a Container from the Overlay Network does not require any Network Firewall/Security Group or VNS3 Firewall rule additions.
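For example, assuming a Container running Nginx at 198.51.100.2, an Overlay-connected server can reach it directly at its Container Network address:

# From a server connected to the Overlay Network
curl http://198.51.100.2/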
Access Consideration: IPsec Remote Subnets
Accessing a Container from a remote subnet advertised behind an IPsec tunnel will either require an existing tunnel to the
VNS3 Overlay Network PLUS some VNS3 forwarding firewall rules OR a tunnel negotiated between the remote subnet and the
Container Network.
Option 1 - Existing Tunnel and VNS3 Firewall

If you have an existing tunnel to the VNS3 Overlay Network, you can add a few VNS3 firewall forwarding rules to access any
Containers you have launched.
Enter rules to port forward incoming traffic to the Container Network and Masquerade outgoing traffic off the VNS3 Controller's public network interface.
# Let the Docker Subnet access the Internet via the Controller's public IP
MACRO_CUST -o eth0 -s <Controller Private IP> -j MASQUERADE

# Port forward 9080 to the Nginx Docker Container
PREROUTING_CUST -i eth0 -p tcp -s <Remote Subnet CIDR> --dport 9080 -j DNAT --to <Container Network IP>:80
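To verify from the remote side (the Controller address below is a placeholder; use whichever Controller IP is reachable across the tunnel):

# From a host inside <Remote Subnet CIDR>
curl http://<Controller IP>:9080/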
Option 2 - Remote Subnet <-> Container Network IPsec Tunnel

Access between a remote subnet and any subset of the Container Network can be established using IPsec tunnels. Simply specify the Container Network CIDR (default 198.51.100.0/28) as one end of the IPsec subnet configuration on both the VNS3 Controller (Container Network is the local subnet) and the remote IPsec device (Container Network is the remote subnet).
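A sketch of the matching endpoint definitions, assuming the default Container Network and a hypothetical remote subnet of 10.10.10.0/24:

# VNS3 side:           local subnet = 198.51.100.0/28   remote subnet = 10.10.10.0/24
# Remote IPsec device: local subnet = 10.10.10.0/24     remote subnet = 198.51.100.0/28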

VNS3 Configuration Document Links
VNS3 Product Resources - Documentation | Add-ons
VNS3 Configuration Instructions

Instructions and screenshots for configuring a VNS3 Controller in a single or multiple Controller topology. Specific steps include initializing a new Controller, generating clientpack keys, setting up peering, building IPsec tunnels, and connecting client servers to the Overlay Network.

VNS3 Administration Document

Covers the administration and operation of a configured VNS3 Controller. Additional detail is provided around
the VNS3 Firewall, all administration menu items, upgrade licenses, other routes and SNMP traps.

VNS3 Troubleshooting

Troubleshooting document that provides explanations of the issues most commonly experienced with VNS3.

