Overview

The VNS3 Container System makes use of Linux Containers (LXC) and Docker, the open source project that automates the deployment of applications in Linux containers. Docker is a lightweight virtualization engine that allows users to encapsulate any Linux-based application or set of applications as a lightweight, portable, self-sufficient container. These containers can be manipulated using standard operations and run anywhere Docker is installed.

Docker offers a different granularity of virtualization that allows for greater isolation between applications.

VNS3 Network Edge Plugins Arch Diagram

Instance Sizing Considerations

VNS3 instance size has always been a factor in determining the network performance of the Overlay (the customer's edge connectivity, the customer's router configuration, and geographic/network distance being the other factors). Throughput depends on the instance's access to the underlying hardware, specifically the NIC: the fewer virtual workloads competing for those hardware resources, the better the performance. As you increase the size of the VNS3 instance you increase the total throughput.

Now that Docker runs as part of VNS3, the Controller's instance size also determines how many Docker application containers can run on your Controller. The type and process load of the containers will be the determining factor. We recommend the c5.large instance size for VNS3 Controllers.

Container Network Setup

To start using the Container System you must first set up an internal subnet where your containers will run. The default VNS3 container subnet is 198.51.100.0/28, and VNS3 allows you to choose a custom address block instead. Make sure the block does not overlap with the Overlay Subnet or any subnets you plan on connecting to VNS3. The container subnet can be thought of as a VLAN segment bridged to the VNS3 Controller's public network interface.

The Container Networking Page shows the available container IP addresses for the chosen Container Network. IP addresses listed as reserved are either used by Docker (for routing, bridging, and broadcast) or are being used by a currently running container.
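
For the default 198.51.100.0/28 block, for example, the network address (198.51.100.0) and the broadcast address (198.51.100.15) are reserved, as is the address Docker typically uses for the bridge/gateway (198.51.100.1); the remaining addresses are available for containers.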

VNS3 Network Edge System Page

To change the Container Network, first enter a new network subnet in CIDR notation. Click Validate to ensure the subnet meets the Container Network requirements.

Click Set once validation has passed. You will be prompted with a popup warning that a Container Network change requires a restart of any running containers. Click OK.

NOTE: The subnet 198.51.100.250/30 is RESERVED for internal use by VNS3 Controllers and cannot be used.

Container Images

VNS3 supports uploading a compressed archive of a Container Image, a Dockerfile, or a Docker Context Directory. In the future we will support pulling Containers from the public Docker Index and from private repositories.

Container

Container Images are used to launch Containers. You can think of this relationship as similar to that of an AMI and an Instance in AWS. Once an Image is uploaded you can launch one or more Containers from it.

Dockerfile

Dockerfiles are a text representation of a Container Image: essentially a map of how to build the Image, starting from a source image and running a number of commands on it before finalizing the Container Image. See the Dockerfile Reference Document for more information.
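
As a minimal sketch of the format (the base image and packages below are illustrative assumptions, not Cohesive-provided choices), a Dockerfile for an SSH-accessible container might look like:

# Illustrative example only: base image and package choices are assumptions
FROM ubuntu:20.04

# Install an SSH server inside the image
RUN apt-get update && apt-get install -y --no-install-recommends openssh-server \
    && mkdir -p /var/run/sshd \
    && rm -rf /var/lib/apt/lists/*

# Document the port the container listens on
EXPOSE 22

# Command run when a Container is allocated from this Image
CMD ["/usr/sbin/sshd", "-D"]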

Dockerfile Context Directories

VNS3 also supports the upload of what Docker calls a "context": a collection of files in a directory that are used along with a Dockerfile to build an Image. The Dockerfile must be at the root of the directory, and the remaining files must be referenced by relative paths so the build process can access the appropriate assets. NOTE: This means you DO NOT put your files in a directory and then zip up the directory. You must zip up the files inside the directory so they sit at the root level when extracted.
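
For example, assuming your Dockerfile and its assets live in a directory named mycontext (an illustrative name), either of the following produces a correctly rooted archive:

# Zip the files inside the directory, not the directory itself
cd mycontext && zip -r ../mycontext.zip .

# Or, from the parent directory, create a gzipped tar with the same root-level layout
tar -czf mycontext.tar.gz -C mycontext .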

Cohesive Networks provides a number of Containers and Dockerfiles to help get you started on our Product Resources page.

Uploading a Container Image

To upload a Container Image, click the Images item in the left column menu under the Container heading.

VNS3 Network Edge Plugins Upload 1

Click Upload Image.

On the resulting Upload Container Image window enter the following:

  • Name
  • Description
  • Select the Container Url radio button and provide the publicly accessible URL of the archived Container Image file (supported file formats: tar, tgz, tar.gz, tar.bz2, and zip)

Click Upload.

VNS3 Network Edge Plugins Upload 2

Once the Container Image has finished the import process, you will be able to use the Actions button to edit or delete the Image or to allocate (launch) a Container.

Uploading from Dockerfile or Docker Context

To upload a Dockerfile, click the Images item in the left column menu under the Container heading.

VNS3 Network Edge Plugins Dockerfile

Click Upload Image.

On the resulting Upload Container Image window enter the following:

  • Name
  • Description
  • Select the Dockerfile Url radio button and provide the publicly accessible URL of the Dockerfile (note: the filename must be Dockerfile) or the URL of an archived Dockerfile Context Directory (supported file formats: tar, tgz, tar.gz, tar.bz2, and zip)

Click Upload.

VNS3 Network Edge Plugins Upload Status 1

Once the Dockerfile has been uploaded and the image has finished the build process, you will be able to use the Actions button to edit or delete the Image or to allocate (launch) a Container.

Running a Container

To launch a Container, click the Actions drop-down button next to the Container Image you want to use and click Allocate.

VNS3 Run Container

On the resulting pop up window enter the following:

  • Name of the Container
  • Command used on initiation of the Container
  • Description
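
For example, using the illustrative SSH image from the Dockerfile sketch above, you might enter a Name of ssh-plugin, a Command of /usr/sbin/sshd -D, and a brief Description (all values are illustrative).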

VNS3 Run Container Form

Click Allocate.

VNS3 Container Status Page

Saving a Container as an Image

This operation saves the state of the currently running container in image form for re-use, or for export and download. What is saved is a gzipped raw file image, from which a new container can be allocated.

VNS3 Container Save

VNS3 Container Save Form

View saved image

VNS3 Container Saved Image

NOTE: VNS3 does not currently support the Docker "commit" command, which would push your changes back to a source Docker Hub repository. Nor does it support the Docker "export" command, which delivers the full delta history of the container as opposed to just a raw image.

Export a Running Container

This operation allows you to package a running container for download from the VNS3 Controller.

VNS3 Container Export 1

VNS3 Container Export Form

After executing this operation the image will be listed, in uncompressed form, via the "Exported Images" link below the Images table on the Images page.

VNS3 Container Export List

NOTE: VNS3 does not currently support the Docker "commit" command, which would push your changes back to a source Docker Hub repository. Nor does it support the Docker "export" command, which delivers the full delta history of the container as opposed to a single LXC image.

Accessing Containers

Once the Container has launched, an IP address from the specified Container Network CIDR will be listed. How you access the Container depends on the source network. The following pages cover connection considerations when accessing a VNS3 Container from the public Internet, the Overlay Network, and a remote IPsec subnet.

VNS3 Container Access 1 View

Via Public Internet

Accessing a Container from the public Internet requires additions to the inbound hypervisor firewall rules (e.g., cloud security groups) associated with the VNS3 Controller, as well as to the VNS3 Firewall.

The following example shows how to access a plugin running as a Container listening on port 22. Since VNS3 itself uses port 22 and blocks it by default, you will need to redirect from another port, in this example port 44.

Enter rules to port forward incoming traffic to the Container Network and to masquerade outgoing traffic off the VNS3 Controller's public network interface.

# Let the Docker subnet access the Internet via the Controller's public IP
MACRO_CUST -o eth0 -s <Container Network CIDR> -j MASQUERADE
# Port forward 44 to the container
PREROUTING_CUST -i eth0 -p tcp -s 0.0.0.0/0 --dport 44 -j DNAT --to <Container Network IP>:22
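
As a concrete sketch, assuming the default container network and a container that received the address 198.51.100.2 (both values illustrative):

# Example values only, assuming the default 198.51.100.0/28 container network
MACRO_CUST -o eth0 -s 198.51.100.0/28 -j MASQUERADE
PREROUTING_CUST -i eth0 -p tcp -s 0.0.0.0/0 --dport 44 -j DNAT --to 198.51.100.2:22

With port 44 also open in the hypervisor firewall/security group, the container's SSH service is then reachable with, for example, ssh -p 44 user@<Controller Public IP>.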

VNS3 Container Access FW Rules

Via Overlay Network

Accessing a Container from the Overlay Network does not require any Network Firewall/Security Group or VNS3 Firewall rule additions.

Via IPSec Remote Tunnel

Accessing a Container from a remote subnet advertised behind an IPsec tunnel will either require an existing tunnel to the VNS3 Overlay Network PLUS some VNS3 forwarding firewall rules OR a tunnel negotiated between the remote subnet and the Container Network.

Option 1 - Existing Tunnel and VNS3 Firewall

If you have an existing tunnel to the VNS3 Overlay Network, you can add a few VNS3 firewall forwarding rules to access any Containers you have launched. Enter rules to port forward incoming traffic to the Container Network and to masquerade outgoing traffic off the VNS3 Controller's public network interface.

# Let the Docker subnet access the Internet via the Controller's public IP
MACRO_CUST -o eth0 -s <Container Network CIDR> -j MASQUERADE
# Port forward 44 to the container
PREROUTING_CUST -i eth0 -p tcp -s <Remote Subnet CIDR> --dport 44 -j DNAT --to <Container Network IP>:22
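
For example, with the default container network, a container at 198.51.100.2, and a remote subnet of 172.16.0.0/24 (all values illustrative):

MACRO_CUST -o eth0 -s 198.51.100.0/28 -j MASQUERADE
PREROUTING_CUST -i eth0 -p tcp -s 172.16.0.0/24 --dport 44 -j DNAT --to 198.51.100.2:22

A host in the remote subnet then connects to the Controller's address on port 44 to reach the container on port 22.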

Option 2 - Remote Subnet <-> Container Network IPsec Tunnel

Access between a remote subnet and any subset of the Container Network can be established using IPsec tunnels. Simply specify the Container Network CIDR (default 198.51.100.0/28) as one end of the IPsec subnet configuration on both VNS3 (where the Container Network is the local subnet) and the remote IPsec device (where the Container Network is the remote subnet).
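
For example (illustrative values), to connect remote subnet 172.16.0.0/24 to the default Container Network, configure the VNS3 IPsec endpoint with local subnet 198.51.100.0/28 and remote subnet 172.16.0.0/24, and configure the remote IPsec device with those two subnets reversed.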