Multi-Cloud with Terraform and VNS3

Creating a multi-cloud network involves the following steps:

  1. Launch VNS3 in AWS VPC
  2. Launch VNS3 in Azure VNet
  3. Create an IPsec tunnel between the VNS3 controllers
  4. Route traffic destined for the Azure VNet to VNS3 in AWS and, similarly, route traffic destined for the AWS VPC to VNS3 in Azure

Multi-cloud architecture diagram

Technologies

This tutorial uses Terraform and the Cohesive Networks Python SDK to build the virtual networks and the encrypted bridge between clouds using VNS3.

Building your clouds with Terraform

Launching VNS3 in AWS and Azure is simple with Terraform. The Terraform code used for this example, including the multi-cloud topology, can be found in our templates repository.

The following files are included:

  • clouds.tf - builds an AWS VPC and an Azure VNet using Terraform modules
  • vns3.tf - launches VNS3 in both AWS and Azure, in the networks created by clouds.tf. The VNS3 controllers are assigned public IP addresses, and the security groups only allow access from the network range defined by the access_cidr variable in variables.tf
  • bridge.tf - establishes the routes and firewall rules that allow the VNS3 controllers to talk to each other over the public internet. The VPC and VNet only allow access from access_cidr and from their peer VNS3 controller’s public IP, so neither private network admits any traffic other than the tunnel and support traffic
  • variables.tf - defines the input variables required for this infrastructure
  • outputs.tf - defines Terraform outputs for the topology

You can build the infrastructure by running the following:

cd multi-cloud-network-tf
terraform init
# Create a terraform plan with a timestamp, then apply that same plan file
PLAN="build__$(date -u +"%Y-%m-%dT%H-%M-%SZ").tfplan"
terraform plan -out "$PLAN"
terraform apply "$PLAN"

After about 10 minutes you will get your outputs:

Apply complete! Resources: 42 added, 0 changed, 0 destroyed.

Outputs:

aws_controller_ips = [
  [
    "10.100.2.23",
    "10.100.2.5",
  ],
]
aws_controller_public_ips = [
  "3.91.137.75",
]
aws_default_security_group_id = sg-0d63e7843c0190b9e
aws_route_table_id = rtb-0d53b2f5b5c08c88c
aws_subnet_ids = [
  "subnet-0ecf52fbf997b8996",
  "subnet-0f00b942585c4c6cf",
  "subnet-0f87caa329804dd2c",
]
aws_vns3_instance_ids = [
  "i-0165790a49c499c4b",
]
aws_vpc_cidr = 10.100.2.0/24
aws_vpc_id = vpc-0e3d88939615e3a9a
azure_controller_instance_names = [
  "cntopos-multicloud-am-az-vns3-0",
]
azure_controller_ips = [
  "10.100.1.4",
]
azure_controller_public_ips = [
  "23.99.134.217",
]
azure_resource_group_location = centralus
azure_resource_group_name = cn-topos-multicloud-am-az
azure_route_table_id = /subscriptions/XYZ/resourceGroups/cn-topos-multicloud-am-az/providers/Microsoft.Network/routeTables/vnet-cntopos-multicloud-am-az-rt-main
azure_subnet_ids = [
  "/subscriptions/XYZ/resourceGroups/cn-topos-multicloud-am-az/providers/Microsoft.Network/virtualNetworks/vnet-cntopos-multicloud-am-az/subnets/vnet-cntopos-multicloud-am-az-sn-0",
  "/subscriptions/XYZ/resourceGroups/cn-topos-multicloud-am-az/providers/Microsoft.Network/virtualNetworks/vnet-cntopos-multicloud-am-az/subnets/vnet-cntopos-multicloud-am-az-sn-1",
  "/subscriptions/XYZ/resourceGroups/cn-topos-multicloud-am-az/providers/Microsoft.Network/virtualNetworks/vnet-cntopos-multicloud-am-az/subnets/vnet-cntopos-multicloud-am-az-sn-2",
]
azure_vnet_cidr = [
  "10.100.1.0/24",
]
azure_vnet_id = /subscriptions/XYZ/resourceGroups/cn-topos-multicloud-am-az/providers/Microsoft.Network/virtualNetworks/vnet-cntopos-multicloud-am-az

Creating an IPsec tunnel between clouds

Building an IPsec tunnel between your VNS3 controllers involves the following steps:

  1. Configure your VNS3 controllers with a license
  2. Create an IPsec endpoint on both VNS3 controllers
  3. Create a route on both controllers that tells VNS3 that the peer network is on the other side of the IPsec tunnel

Configuring your VNS3 controllers with a license

Configuring your controller requires the following:

  1. Upload a license with PUT /license
  2. Configuring and accepting the license parameters with PUT /license/parameters
  3. Generating a keyset with PUT /keyset

Here’s some simple code with all of the API calls using the Python SDK:

vns3_client.licensing.upload_license(license_file_data)
vns3_client.licensing.put_set_license_parameters(**license_parameters)
vns3_client.config.put_keyset(**keyset_parameters)
# Poll on the keyset until it is available
vns3_client.config.wait_for_keyset()

Now, VNS3 does take some time to configure itself during licensing, so some responses need to be polled. We do this often enough for our topologies that we added a helper function to the SDK that will idempotently configure your controller:

def setup_controller(
    client: VNS3Client,
    topology_name: str,
    license_file: str,
    license_parameters: Dict,
    keyset_parameters: Dict,
    peering_id: int = 1,
    reboot_timeout=120,
    keyset_timeout=120,
):
    """setup_controller

    Set the topology name, controller license, 
    keyset and peering ID if provided

    Arguments:
        client {VNS3Client}
        topology_name {str}
        keyset_parameters {Dict} -- UpdateKeysetRequest {
            'source': 'str',
            'token': 'str',
            'topology_name': 'str',
            'sealed_network': 'bool'
        }
        peering_id {int} -- ID for this controller in peering mesh

    Returns:
        OperationResult
    """

This allows for easy configuration of a new controller:

from cohesivenet.macros import config, connect
vns3_client = connect.get_client(host, username, password)
config.setup_controller(
    vns3_client,
    topology_name,
    data["license_file"],
    license_parameters={"default": True},
    keyset_parameters={"token": data["keyset_token"]}
)

This will configure your VNS3 controller with the provided parameters if not already configured.
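The polling behavior behind wait_for_keyset can be sketched with a small generic helper. This is an illustration of the pattern, not the SDK's implementation, and the check function you pass in is up to you:

```python
import time


def poll_until(check, timeout=120, interval=5):
    """Call `check` until it returns a truthy result or `timeout` elapses.

    Returns the truthy result, or raises TimeoutError. This mirrors the
    polling needed while the controller finishes generating its keyset.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %s seconds" % timeout)
```

With a real client, `check` would be a callable that queries the controller's keyset status and returns a truthy value once the keyset is present.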

Create an IPsec endpoint on both VNS3 controllers

Creating an encrypted bridge between VNS3 controllers requires an IPsec endpoint and tunnel on each side. This takes one API call against each VNS3 controller: POST /ipsec/endpoints.

We are creating a route-based VPN here, utilizing virtual tunnel interfaces (VTI). These VPNs can be much more dynamic and are far easier to set up, as they avoid the static access lists required for a policy-based site-to-site VPN. A route-based VPN requires you to provide a network range for the virtual interface. Typically we allocate VTI ranges from the RFC 3927 link-local range, 169.254.0.0/16. Your ranges can be as small as /30s.
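The subnet math involved can be sketched with Python's standard ipaddress module. This is an illustration of carving /30 VTI blocks out of the link-local range, not the SDK's own calculate_next_subnets implementation:

```python
import ipaddress


def first_subnets(cidr, prefix_length, take):
    """Carve the first `take` subnets of size /`prefix_length` out of `cidr`."""
    network = ipaddress.ip_network(cidr)
    subnets = network.subnets(new_prefix=prefix_length)
    return [str(next(subnets)) for _ in range(take)]


# Two /30 VTI blocks from the RFC 3927 link-local range
print(first_subnets("169.254.0.0/16", 30, 2))
# ['169.254.0.0/30', '169.254.0.4/30']
```

Each /30 gives the tunnel interface a tiny point-to-point range of its own, which is all a VTI needs.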

Here is an example using the SDK:

from cohesivenet import constants as network_constants, network_math

# network_constants.VTI_RANGE_LINK_LOCAL=169.254.0.0/16
vti_blocks = network_math.calculate_next_subnets(
    prefix_length=30, take=2, cidr=network_constants.VTI_RANGE_LINK_LOCAL
)
ipsec_endpoint_response = aws_vns3_client.ipsec.post_create_ipsec_endpoint(
    name="azure-aws tunnel",
    ipaddress=azure_public_ip_address,
    secret="my-secret-tunnel-psk",
    pfs=True,
    ike_version=2,
    extra_config="local-peer-id=%s" % aws_vns3_client.configuration.host_uri,
    vpn_type="vti",
    route_based_int_address=vti_blocks[0],
    route_based_local="0.0.0.0/0",
    route_based_remote="0.0.0.0/0"
)
print(ipsec_endpoint_response.data)

The extra config sets the public IP address of the IPsec gateway for this side of the tunnel. So we are using the public IP address of VNS3, which is captured in the client configuration and can be fetched from the host_uri property. You will do this for each controller.

Create a route on both controllers for the peer network

The final step is creating a route on each VNS3 controller for their peer’s network. So for our AWS VNS3 controller, we need to create a route that indicates that the Azure VNet, 10.100.1.0/24, is on the other side of the IPsec tunnel. This can be done with a POST /routes call. Here’s an example with the SDK:

# the IPsec endpoint response looks like this:
# { response: { tunnels: { 1: { tunnel data } } } }
# so we fetch the first key, as there should only be one tunnel.
tunnel_id = list(ipsec_endpoint_response.response.tunnels.keys())[0]

# Creating a route to Azure
routes_response = aws_vns3_client.routing.post_create_route(
    cidr="10.100.1.0/24",
    description="Route to Azure via tunnel",
    tunnel=tunnel_id,
    metric=0,   # indicating 0 hops away
    advertise=False
)
print(routes_response.data)

Now we just need to do the same thing on the Azure side to create a route to AWS!
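The Azure-side call mirrors the AWS one above, just with the AWS VPC CIDR. Here it is wrapped in a small function as a sketch; the client and tunnel ID would come from the Azure controller and its own IPsec endpoint response:

```python
def create_route_to_aws(azure_vns3_client, tunnel_id):
    """Create a route on the Azure VNS3 controller sending the AWS VPC
    CIDR (10.100.2.0/24) over the IPsec tunnel. This mirrors the
    post_create_route call made on the AWS controller above."""
    return azure_vns3_client.routing.post_create_route(
        cidr="10.100.2.0/24",
        description="Route to AWS via tunnel",
        tunnel=tunnel_id,
        metric=0,
        advertise=False,
    )
```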

Tip: We tend to create these IPsec VPNs a lot, so we created a "macro" for it. The following code will create both your endpoint and your route.

from cohesivenet.macros import ipsec

ipsec.create_tunnel_endpoint(
    vns3_client,
    endpoint_name,          # Name for your endpoint
    data["tunnel_psk"],     # Preshared key for the IPsec tunnel
    data["peer_endpoint"],  # Azure/AWS VNS3 public IP address for the IPsec endpoint
    data["peer_cidr"],  # Azure/AWS CIDR for the route
    data["tunnel_vti"]  # VTI block to use for POST /ipsec/endpoints
)

Routing traffic between AWS and Azure

Now we need to create a route to Azure via the AWS VNS3 controller and, likewise, a route to AWS via the Azure VNS3 controller. In AWS, this means creating a route for 10.100.1.0/24 pointing at the network interface of the VNS3 controller. In Azure, it means creating a route for the AWS CIDR, 10.100.2.0/24, with a next hop of the private IP address of the Azure VNS3 controller. But this was already built by the Terraform, so we’re done!

The Terraform responsible is the following, from bridge.tf:

# Route to Azure from AWS via VNS3
resource "aws_route" "to_bridge" {
  route_table_id              = "${module.aws_vpc.route_table_id}"
  destination_cidr_block      = "${var.vnet_cidr}"
  network_interface_id        = "${element(module.aws_vns3.vns3_network_interfaces, 0)}"
}

# Route to AWS from Azure via VNS3
resource "azurerm_route" "to_bridge" {
  name                   = "${var.topology_name}-aws-bridge"
  resource_group_name    = "${module.azure_vnet.resource_group_name}"
  route_table_name       = "${module.azure_vnet.route_table_name}"
  address_prefix         = "${var.vpc_cidr}"
  next_hop_type          = "VirtualAppliance"
  next_hop_in_ip_address = "${element(module.azure_vns3.vns3_primary_ips, 0)}"
}

Putting it all together

So to wrap up, configuring a global multi-cloud network doesn’t have to be complex; here we did it with a few short scripts. Building network topologies with VNS3 can be fully automated and reproducible with tools like Terraform and the VNS3 API.

If you’d like to see a final version of a working Python script that builds this multi-cloud network, you can find it among the Python SDK examples.

Any questions on how to automate your network? Email us at support@cohesive.net or open a ticket directly on our support site.