
Sunday, October 12, 2014

Not all APIs are created equal.

Automation is becoming the new catchphrase in the networking industry. 2014 seems to be the year that vendors' marketing groups started touting the APIs on their products. Skeptical network engineers, however, have pointed out that having an API does not mean their jobs get easier. In fact, it has been making their jobs a little harder.
Before, a network engineer could focus strictly on networking. Now, network engineers are increasingly required to know how to code, or at least how to read code.

Just because you have an API doesn’t mean all the devices can and will play nice with each other. 

We can see this by looking at three different platforms. 

OpenStack         Contrail        Junos

Now let’s look at their API for data retrieval/configuration

REST              REST            NETCONF

Ok now you have two platforms that use one type of API and a third platform that uses a different API.

Let's look at the resulting Data Structure response

JSON              JSON            XML

Again we have two platforms that have the same Data Structure and a third with a different Data Structure.

You might say, ok, at least two of these platforms have the same API and with the same data structure things should be good for both of them right? Actually as they say, the devil is in the details.

I can illustrate this just by looking at a simple IPv4 subnet. 

On Openstack the data abstracted looks like this

{ "networks": [ { "contrail:subnet_ipam": [ { "subnet_cidr": "12.1.1.0/24" } ] } ] }

On Contrail it looks like this

{ "virtual-network": { "network_ipam_refs": [ { "attr": { "ipam_subnets": [ { "subnet": { "ip_prefix": "12.1.1.0", "ip_prefix_len": 24 } } ] } } ] } }

You can see that one platform combines the subnet with the mask while the other separates them. For DevOps and network engineers this is annoying; it's like having to learn different network operating systems. The goal of an API should be to provide a simplified abstraction layer.
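To make the annoyance concrete, here is a small sketch that normalizes both representations into plain CIDR strings. The sample documents are trimmed versions of the two JSON payloads shown above; the function names are my own:

```python
# Sample payloads modeled on the two shapes shown above (trimmed for brevity).
openstack_doc = {"networks": [{"contrail:subnet_ipam": [{"subnet_cidr": "12.1.1.0/24"}]}]}
contrail_doc = {"virtual-network": {"network_ipam_refs": [
    {"attr": {"ipam_subnets": [
        {"subnet": {"ip_prefix": "12.1.1.0", "ip_prefix_len": 24}}]}}]}}

def subnets_from_openstack(doc):
    """OpenStack already stores the subnet as a single CIDR string."""
    return [ipam["subnet_cidr"]
            for net in doc["networks"]
            for ipam in net["contrail:subnet_ipam"]]

def subnets_from_contrail(doc):
    """Contrail splits prefix and length; join them back into CIDR form."""
    out = []
    for ref in doc["virtual-network"]["network_ipam_refs"]:
        for entry in ref["attr"]["ipam_subnets"]:
            s = entry["subnet"]
            out.append("%s/%d" % (s["ip_prefix"], s["ip_prefix_len"]))
    return out

print(subnets_from_openstack(openstack_doc))  # ['12.1.1.0/24']
print(subnets_from_contrail(contrail_doc))    # ['12.1.1.0/24']
```

Both calls yield the same subnet, but each platform needs its own traversal logic, which is exactly the problem.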


APIs need to be standardized. OpenFlow is a good attempt at this: it requires the underlay devices to speak a common protocol so that a controller can programmatically configure them. The networking industry has done a great job of standardizing protocols but a sorry job of creating a common API standard. Maybe the IETF needs to jump in on this. A standardized API could ultimately make our jobs that much easier.

Sunday, September 28, 2014

Exploring the REST API on Contrail.

I noticed on github that there was a tutorial on how to access the REST API on Juniper Contrail.

https://juniper.github.io/contrail-vnc/api-doc/html/tutorial_with_rest.html#

The easiest way to access this is with cURL. Contrail uses TCP port 8082 for its REST API.

The URL http://<contrail-ip>:8082/virtual-networks returns a list of the virtual networks configured on Contrail.

$ curl -X GET -H "Content-Type: application/json; charset=UTF-8" http://172.16.1.4:8082/virtual-networks | python -mjson.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1229  100  1229    0     0   2129      0 --:--:-- --:--:-- --:--:--  2129
{
    "virtual-networks": [
        {
            "fq_name": [
                "default-domain",
                "default-project",
                "__link_local__"
            ],
            "href": "http://172.16.1.4:8082/virtual-network/4092df7b-997a-4ee7-a5cc-46d5db1187d4",
            "uuid": "4092df7b-997a-4ee7-a5cc-46d5db1187d4"
        },
        {
            "fq_name": [
                "default-domain",
                "default-project",
                "default-virtual-network"
            ],
            "href": "http://172.16.1.4:8082/virtual-network/34579f9a-064e-4048-96a7-a30355c54e44",
            "uuid": "34579f9a-064e-4048-96a7-a30355c54e44"
        },
        {
            "fq_name": [
                "default-domain",
                "default-project",
                "ip-fabric"
            ],
            "href": "http://172.16.100.104:8082/virtual-network/db7d1afe-bcaa-456b-b33a-9a36f6d176fe",
            "uuid": "db7d1afe-bcaa-456b-b33a-9a36f6d176fe"
        },
        {
            "fq_name": [
                "default-domain",
                "demo",
                "Network1"
            ],
            "href": "http://172.16.1.4:8082/virtual-network/380c0f4e-34b6-4901-98b2-61a8ed7afd7d",
            "uuid": "380c0f4e-34b6-4901-98b2-61a8ed7afd7d"
        },
        {
            "fq_name": [
                "default-domain",
                "demo",
                "Network2"
            ],
            "href": "http://172.16.100.104:8082/virtual-network/ffe03354-aaa2-4305-b615-654e14111134",
            "uuid": "ffe03354-aaa2-4305-b615-654e14111134"
        },
        {
            "fq_name": [
                "default-domain",
                "demo",
                "Network3"
            ],
            "href": "http://172.16.1.4:8082/virtual-network/93317422-eea4-4b77-88cb-5aaac3bb58b6",
            "uuid": "93317422-eea4-4b77-88cb-5aaac3bb58b6"
        }
    ]
}

However cURL, at least for me, isn't capable of handling the data structures programmatically; it's more like screen scraping. So I dug around the internet and found that Python has a cURL binding, pycurl.

http://pycurl.sourceforge.net/doc/quickstart.html#

Now I can use Python to make cURL-style requests and pull data from a REST API.

There are a few parts to this script.

The first part is a function to issue the cURL command.

The second part extracts the route target of a virtual network created on the Contrail controller.

This could later be used with another script to create a template configuration and program a VRF onto the underlay gateway router.

In this script I cheated a little by grabbing a specific virtual network's URL from the previous cURL command, then parsed the data to pull out the information I was looking for.
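Instead of hard-coding that URL, the fq_name field in the /virtual-networks listing could be used to look up the matching href by name. A small sketch (the listing here is a trimmed copy of the cURL output above, and the helper name is my own):

```python
def find_network_href(listing, name):
    """Return the href of the virtual network whose fq_name ends with `name`."""
    for vn in listing["virtual-networks"]:
        if vn["fq_name"][-1] == name:
            return vn["href"]
    return None

# Trimmed example modeled on the curl output above.
listing = {"virtual-networks": [
    {"fq_name": ["default-domain", "demo", "Network1"],
     "href": "http://172.16.1.4:8082/virtual-network/380c0f4e-34b6-4901-98b2-61a8ed7afd7d",
     "uuid": "380c0f4e-34b6-4901-98b2-61a8ed7afd7d"}]}

print(find_network_href(listing, "Network1"))
```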

—————————————

import pycurl
import StringIO
import json

#Function to issue the cURL request and decode the JSON response
def get_url(WEB):
  buf = StringIO.StringIO()
  c = pycurl.Curl()
  c.setopt(c.URL, WEB)
  c.setopt(c.WRITEFUNCTION, buf.write)
  c.perform()
  c.close()
  body = buf.getvalue()
  network = json.loads(body)
  buf.close()
  return network

#URL of virtual network on contrail

SITE = 'http://172.16.1.4:8082/virtual-network/380c0f4e-34b6-4901-98b2-61a8ed7afd7d'
objects = get_url(SITE)
#pretty print the json results
print json.dumps(objects, sort_keys=True, indent=4)

#This part is to grab the path of the Virtual Network name and RT from contrail

print "Name: ", objects['virtual-network']['name'], " RT: ", objects['virtual-network']['route_target_list']['route_target'][0]
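One caveat on that last line: as far as I can tell, route_target_list only appears in the response once a route target has actually been configured on the network, so indexing into it directly will raise a KeyError otherwise. A defensive sketch (the function name is my own):

```python
def get_route_targets(vn_doc):
    """Safely pull route targets; returns [] when no RT is configured."""
    vn = vn_doc.get("virtual-network", {})
    rt_list = vn.get("route_target_list") or {}
    return rt_list.get("route_target", [])

# Trimmed example modeled on the API response.
doc = {"virtual-network": {"name": "Network1",
                           "route_target_list": {"route_target": ["target:64512:100"]}}}
print(get_route_targets(doc))                      # ['target:64512:100']
print(get_route_targets({"virtual-network": {}}))  # []
```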


Script in action:
——————
$ python contrail.py 
{
    "virtual-network": {
        "fq_name": [
            "default-domain", 
            "demo", 
            "Network1"
        ], 
        "href": "http://172.16.1.4:8082/virtual-network/380c0f4e-34b6-4901-98b2-61a8ed7afd7d", 
        "id_perms": {
            "created": "2014-09-19T19:19:11.650288", 
            "description": null, 
            "enable": true, 
            "last_modified": "2014-09-27T05:22:30.453524", 
            "permissions": {
                "group": "cloud-admin-group", 
                "group_access": 7, 
                "other_access": 7, 
                "owner": "cloud-admin", 
                "owner_access": 7
            }, 
            "uuid": {
                "uuid_lslong": 11002964217786203517, 
                "uuid_mslong": 4038619794410719489
            }
        }, 
        "instance_ip_back_refs": [
            {
                "attr": null, 
                "href": "http://172.16.1.4:8082/instance-ip/17b19dc0-b177-4df7-955b-57b8a87caf28", 
                "to": [
                    "17b19dc0-b177-4df7-955b-57b8a87caf28"
                ], 
                "uuid": "17b19dc0-b177-4df7-955b-57b8a87caf28"
            }, 
            {
                "attr": null, 
                "href": "http://172.16.1.4:8082/instance-ip/d525ddce-3542-4986-b8d1-56f3831d8678", 
                "to": [
                    "d525ddce-3542-4986-b8d1-56f3831d8678"
                ], 
                "uuid": "d525ddce-3542-4986-b8d1-56f3831d8678"
            }
        ], 
        "is_shared": false, 
        "name": "Network1", 
        "network_ipam_refs": [
            {
                "attr": {
                    "ipam_subnets": [
                        {
                            "default_gateway": "100.1.1.254", 
                            "subnet": {
                                "gw": "100.1.1.254", 
                                "ip_prefix": "100.1.1.0", 
                                "ip_prefix_len": 24
                            }
                        }
                    ]
                }, 
                "href": "http://172.16.1.4:8082/network-ipam/1f24fa35-b7bf-4d0f-8185-746f58e234c9", 
                "to": [
                    "default-domain", 
                    "demo", 
                    "Network1-ipam"
                ], 
                "uuid": "1f24fa35-b7bf-4d0f-8185-746f58e234c9"
            }
        ], 
        "network_policy_refs": [
            {
                "attr": {
                    "sequence": {
                        "major": 0, 
                        "minor": 0
                    }, 
                    "timer": null
                }, 
                "href": "http://172.16.1.4:8082/network-policy/0a1c1776-a323-4c00-959a-aada50b91be8", 
                "to": [
                    "default-domain", 
                    "demo", 
                    "net1<->net2"
                ], 
                "uuid": "0a1c1776-a323-4c00-959a-aada50b91be8"
            }
        ], 
        "parent_href": "http://172.16.1.4:8082/project/3af66afe-3284-40cc-8b04-85c69af512c7", 
        "parent_type": "project", 
        "parent_uuid": "3af66afe-3284-40cc-8b04-85c69af512c7", 
        "route_target_list": {
            "route_target": [
                "target:64512:100"
            ]
        }, 
        "router_external": false, 
        "routing_instances": [
            {
                "href": "http://172.16.1.4:8082/routing-instance/06165761-3b55-417d-acf0-0ad27a9010d0", 
                "to": [
                    "default-domain", 
                    "demo", 
                    "Network1", 
                    "Network1"
                ], 
                "uuid": "06165761-3b55-417d-acf0-0ad27a9010d0"
            }
        ], 
        "uuid": "380c0f4e-34b6-4901-98b2-61a8ed7afd7d", 
        "virtual_machine_interface_back_refs": [
            {
                "attr": null, 
                "href": "http://172.16.1.4:8082/virtual-machine-interface/2aefccdc-bf34-4d3b-bd37-41f716274883", 
                "to": [
                    "e432ad9a-8f6e-4f7c-813b-18ada76bfd64", 
                    "2aefccdc-bf34-4d3b-bd37-41f716274883"
                ], 
                "uuid": "2aefccdc-bf34-4d3b-bd37-41f716274883"
            }, 
            {
                "attr": null, 
                "href": "http://172.16.1.4:8082/virtual-machine-interface/f20bb145-49cd-43bf-a6e5-9d9c50794244", 
                "to": [
                    "2e775d6f-8043-41df-9295-d1b8d8f705a2", 
                    "f20bb145-49cd-43bf-a6e5-9d9c50794244"
                ], 
                "uuid": "f20bb145-49cd-43bf-a6e5-9d9c50794244"
            }
        ], 
        "virtual_network_properties": {
            "extend_to_external_routers": null, 
            "forwarding_mode": "l2_l3", 
            "network_id": 4, 
            "vxlan_network_identifier": null
        }
    }
}

Name:  Network1  RT:  target:64512:100


As you can see, I was able to pull the name and the route target of the virtual network. Later I can create a script template to build the VRF on a gateway router such as an MX. That's one step closer to network automation.
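That templating step might look something like this: a hypothetical sketch that renders a Junos VRF stanza from the extracted name and route target. The route-distinguisher and interface values here are placeholders, not anything pulled from Contrail:

```python
# Hypothetical sketch: render a Junos VRF stanza from values pulled off the
# Contrail API. The RD and interface defaults are placeholder values.
VRF_TEMPLATE = """\
routing-instances {
    %(name)s {
        instance-type vrf;
        interface %(interface)s;
        route-distinguisher %(rd)s;
        vrf-target %(rt)s;
    }
}"""

def render_vrf(name, rt, rd="1.1.1.1:101", interface="lt-3/0/0.3"):
    """Fill in the VRF template with a network name and route target."""
    return VRF_TEMPLATE % {"name": name, "rt": rt, "rd": rd, "interface": interface}

print(render_vrf("Network1", "target:64512:100"))
```

The rendered text could then be loaded onto the router by whatever configuration mechanism you prefer.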

Wednesday, September 24, 2014

How Contrail communicates with the underlay

Contrail typically consists of a cluster of nodes. The three main node types are config, control, and compute. The config node is where OpenStack Horizon and the Contrail controller live. The control node forms an MP-BGP session to a gateway router. The compute node hosts the VMs and virtual networks.

You can think of Contrail as a PE router, since that is essentially what the gateway router perceives at the other end of the connection. Contrail uses a vRouter, and when you configure a virtual network you have the ability to attach a route target to it. On the gateway router you create VRFs with matching route targets so that prefixes can be exchanged with the corresponding virtual networks. Data-plane traffic traverses an MPLS tunnel between Contrail and the gateway router. It's at the gateway router that you "leak" the received Contrail virtual network routes into the main routing instance.

Here I use a Juniper MX as the gateway router. When I first set up Contrail I used the testbed.py script to add the MX gateway router.

It's the ext_router = [ip address] setting.

Then in the Contrail web UI I should see the BGP session. You can also add this after the Contrail installation.


On the MX, I configure an iBGP session to connect with the Contrail control node.

user@router# show protocols
mpls {
    interface all;
}
bgp {
    group IBGP-CONTRAIL {
        type internal;
        local-address 192.168.10.11;
        family inet-vpn {
            unicast;
        }
        neighbor 192.168.10.2;
    }
}

Then on the Contrail config node I create a virtual network and add a route target.




I create a corresponding VRF on the MX with the route target.

user@router# show routing-instances
VRF1 {
    instance-type vrf;
    interface lt-3/0/0.3;
    route-distinguisher 1.1.1.1:101;
    vrf-target target:64512:101;
    routing-options {
        static {
            route 0.0.0.0/0 next-hop 192.168.12.1;
        }
    }
}


I check that the BGP session is established.

user@router# run show bgp summary                                     
Groups: 1 Peers: 1 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
bgp.l3vpn.0         
                       8          8          0          0          0          0
Peer                     AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
192.168.10.2          64512       5724       6292       0       3 1d 23:08:56 Establ
  bgp.l3vpn.0: 8/8/8/0
  VRF1.inet.0: 3/3/3/0

The virtual-network IP addresses for the VMs are then advertised.

user@router# run show route receive-protocol bgp 192.168.10.2 

VRF1.inet.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
  Prefix          Nexthop           MED     Lclpref    AS path
* 11.1.1.1/32             192.168.10.3                 100        ?
* 11.1.1.5/32             192.168.10.3                 100        ?
* 11.1.1.7/32             192.168.10.3                 200        ?

bgp.l3vpn.0: 8 destinations, 8 routes (8 active, 0 holddown, 0 hidden)
  Prefix          Nexthop           MED     Lclpref    AS path
  192.168.10.3:7:11.1.1.1/32                   
*                         192.168.10.3                 100        ?
  192.168.10.3:7:11.1.1.5/32                   
*                         192.168.10.3                 100        ?
  192.168.10.3:7:11.1.1.7/32                   
*                         192.168.10.3                 200        ?

mpls.0: 8 destinations, 8 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0                  *[MPLS/0] 1w1d 04:10:08, metric 1
                      Receive
1                  *[MPLS/0] 1w1d 04:10:08, metric 1
                      Receive
2                  *[MPLS/0] 1w1d 04:10:08, metric 1
                      Receive
13                 *[MPLS/0] 1w1d 04:10:08, metric 1
                      Receive
299904             *[VPN/170] 1d 23:14:00
                    > to 192.168.11.1 via lt-3/0/0.1, Pop     
299936             *[VPN/170] 1d 12:47:38
                      receive table VRF1.inet.0, Pop     
299952             *[VPN/170] 1d 12:47:38
                    > to 192.168.12.1 via lt-3/0/0.3, Pop     

bgp.l3vpn.0: 8 destinations, 8 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.10.3:7:11.1.1.1/32               
                   *[BGP/170] 1d 12:46:01, localpref 100, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 24
192.168.10.3:7:11.1.1.5/32               
                   *[BGP/170] 1d 12:46:01, localpref 100, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 18
192.168.10.3:7:11.1.1.7/32               
                   *[BGP/170] 1d 12:26:14, localpref 200, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 28


Note that a dynamic MPLS-over-GRE tunnel is created. You will need to configure tunnel services and a dynamic-tunnels stanza on the MX.

user@router# show chassis
fpc 3 {
    pic 0 {
        tunnel-services;
    }
}

user@router# show routing-options
static {
    route 0.0.0.0/0 next-hop 10.161.1.1;
}
autonomous-system 64512;
dynamic-tunnels {
    dynamic_overlay_tunnels {
        source-address 192.168.10.11;
        gre;
        destination-networks {
            192.168.10.0/24;
        }
    }
}



PoC-Demo.inet.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[Static/5] 1d 13:05:36
                    > to 192.168.12.1 via lt-3/0/0.3
11.1.1.1/32        *[BGP/170] 1d 13:03:59, localpref 100, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 24
11.1.1.5/32        *[BGP/170] 1d 13:03:59, localpref 100, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 18
11.1.1.7/32        *[BGP/170] 1d 12:44:12, localpref 200, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 28
192.168.12.0/24    *[Direct/0] 1d 13:05:36
                    > via lt-3/0/0.3
192.168.12.2/32    *[Local/0] 1d 13:05:36
                      Local via lt-3/0/0.3

LT interfaces are created to allow virtual-network traffic to pass between the VRF and the main routing instance. You could also use RIB groups and policies to accomplish the same thing.

    lt-3/0/0 {
        unit 2 {
            encapsulation ethernet;
            peer-unit 3;
            family inet {
                address 192.168.12.1/24;
            }
        }
        unit 3 {
            encapsulation ethernet;
            peer-unit 2;
            family inet {
                address 192.168.12.2/24;
            }
        }
    }

You then need to make sure the interface connecting to the Contrail network has family mpls enabled.


interfaces {

    ge-3/1/1 {
        unit 0 {
            family inet {
                address 192.168.10.11/24;
            }
            family mpls;
        }
    }
    lo0 {
        unit 0 {
            family inet {
                address 1.1.1.1/32;
            }
            family iso {
                address 49.0002.0010.0100.1001.00;
            }
        }
    }
}

One thing to be aware of: the next hop of the route advertised by Contrail points to the IP address of the compute node, not the control node.
user@router# run show route 11.1.1.1/32 detail

VRF1.inet.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
11.1.1.1/32 (1 entry, 1 announced)
        *BGP    Preference: 170/-101
                Route Distinguisher: 192.168.10.3:7  <<< Contrail's RD
                Next hop type: Indirect
                Address: 0x94f4a28
                Next-hop reference count: 3
                Source: 192.168.10.2
                Next hop type: Router, Next hop index: 660
                Next hop: via gr-3/0/0.32770, selected
                Label operation: Push 24
                Label TTL action: prop-ttl
                Session Id: 0xd
                Protocol next hop: 192.168.10.3  <<<< IP of compute node
                Push 24
                Indirect next hop: 0x9574410 1048574 INH Session ID: 0xe
                State: <Secondary Active Int Ext ProtectionCand>
                Local AS: 64512 Peer AS: 64512
                Age: 1d 13:54:24     Metric2: 0
                Validation State: unverified
                Task: BGP_64512.192.168.10.2+34735
                Announcement bits (1): 1-KRT
                AS path: ?
                Communities: target:64512:101   << RT from contrail
                Import Accepted
                VPN Label: 24
                Localpref: 100
                Router ID: 192.168.10.2         <<<< IP of control node
                Primary Routing Table bgp.l3vpn.0

Monday, September 22, 2014

How to access a VM in OpenStack + Contrail

When you first spin up VMs in an OpenStack + Contrail environment you don't have many choices for accessing your VM. You can go through the OpenStack web UI to access the console.



But that method is not ideal as you cannot do things such as copy and paste.

Another method is to access the VM from the Contrail compute node. All VMs are stored on the compute node of the Contrail cluster. 


Or you can inject an ssh-key into your VM so you can access the VM from the compute node.

First you create your ssh-key.

Select the Access and Security tab in Openstack.

Choose Keypairs

 Then create a key pair.



You will then be able to download the "key" to your local computer. You will need to copy this key onto the Compute node.


MyComputer$ scp ssh-key.pem root@172.16.100.104:~/.ssh


I chose to place this key in the .ssh directory of the Compute node.

Compute-Node:~/.ssh$ ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts  ssh-key.pem

Next you create your VM.

Make sure you choose the ssh-key you created.



You can see if your ssh-key was injected from the horizon dashboard.

Note: One of the problems I see in Horizon is the inability to inject an SSH key after the VM has been created. There is no way to edit and do this post-creation. Pretty annoying, especially if there are many SSH keys and you forget to choose the correct one.

Next go back to your compute node.
Issue the netstat -rn command.

When a VM is created, the compute node generates a 169.254.x.x link-local address for it. It's similar to a loopback address: it is only reachable on the local machine and cannot be accessed over the network.

Compute-Node$ netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         172.16.100.1    0.0.0.0         UG        0 0          0 vhost0
169.254.0.3     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
169.254.0.4     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
169.254.0.5     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
169.254.0.6     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
169.254.0.7     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
169.254.0.8     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
169.254.0.9     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
172.16.100.0    0.0.0.0         255.255.255.0   U         0 0          0 vhost0
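As a side note, the routing table above is easy to parse if you want a script to enumerate the VMs' link-local addresses. A minimal sketch, assuming you have already captured the `netstat -rn` output as a string:

```python
def vm_link_local_addrs(netstat_output):
    """Pull the 169.254.x.x host routes (one per VM) out of `netstat -rn` output."""
    addrs = []
    for line in netstat_output.splitlines():
        fields = line.split()
        if fields and fields[0].startswith("169.254."):
            addrs.append(fields[0])
    return addrs

# Trimmed sample of the routing table shown above.
sample = """Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         172.16.100.1    0.0.0.0         UG        0 0          0 vhost0
169.254.0.3     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
169.254.0.4     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
"""
print(vm_link_local_addrs(sample))  # ['169.254.0.3', '169.254.0.4']
```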


Then you can ssh into the VM of your choice with the -i parameter and the path to your SSH key. For example:

Compute-Node$ ssh -i ~/.ssh/ssh-key.pem cirros@169.254.0.3

Cirros-VM1$ whoami
cirros

You can see the key in the authorized_keys file in the .ssh directory of the VM.

$ more authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCe9TjQfiVTiidtt2qUICwK/7DArACsjLWDkx7Esvu6vWS8MahyrlgNkeQtaFDx7wub5LaHesqq6wj2pKDX07RWxylAkxShqy2+ZIoOgJqMBr0vfq4xpp2qU7fPiAq4YV3CdqTwOggnNHQNeGgpM6406IJSJcVYJVqTC3/3SsFgxzva4UqNgA3mjRQxSsmVxc6jVHVKfAYQ8+fDFNniNjY+q9qvihtAXwmLGfv/gxE/N01aMC+MH1b5cmj2o7WNpbt5qGDyrf3jB6rqz5CI95XS0MjvaScTWKb5ul0yuWTkp1zcPY4vzDaFjc0Fl7627Lm2IuQZg76dgKl3rnnPtXbv Generated by Nova




Thursday, May 22, 2014

Test driving Open Contrail

So I went to the hands on Open Contrail MeetUp. It started off as a semi Q&A overview of the architecture and business use case.

Service providers need to create services for customers that can

a) be deployed in a timely manner (seconds rather than months)

b) lower OpEx through automation

c) reduce operational complexity with template configs

Contrail is a module that can currently be added to an OpenStack or CloudStack platform. I'll talk about the OpenStack implementation, as I know more about that cloud platform. The Contrail controller works entirely in the overlay. It does not know anything about the underlay except for the gateway, so it expects the physical network to already be in place. This means it can interoperate with any existing switch vendor's network. When you install Contrail as a Neutron plugin into OpenStack, it creates a vRouter bound to the hypervisor. Currently KVM is the supported hypervisor, though I hear it was tested on VMware as a VM, but not directly on ESXi. The vRouter is important because it takes the place of Open vSwitch (OVS). The main function of Contrail is creating the overlay network within OpenStack.

Here's how this works. Let's look at this simple logical topology



First you would go to the Contrail Controller (Web GUI) and create the two networks.


Then you would go to the Openstack Horizon Web UI and spin up your VM instances. Under the networking tab you should be able to associate the network to the VM.



Next you would need to create a policy so that each network can talk to each other. Think of it as creating a firewall ACL.



Last you need to attach the policy to each network.

Contrail will automatically assign the VM an IP address and point the default route of the VM to the vRouter.


This will allow you to access the Back End Server.

Overall provisioning time would be roughly 2-3 minutes depending on how fast the VMs can spin up.

Now if you want to provide access to anything outside of the data center, you'll need to go through a gateway router, a.k.a. the Data Center Interconnect (DCI).

To make this work you will need a gateway router that speaks MP-BGP and can support GRE or VXLAN. A GRE tunnel is created from the Contrail vRouter (virtual) to the physical gateway router. If you want multi-tenancy, you will have to put each tunnel into a separate VRF on the gateway router. This provisioning is a manual process on the gateway router; Contrail does not manage the gateway router at all, though you may be able to automate this function with scripting.
The vRouter exchanges routes via MP-BGP, and there is a setting in Contrail to set the route target. The BGP family types the vRouter supports are inet-vpn (vpnv4) and evpn.

Overall the provisioning is fairly simple. There are a few things I would like improved in the product, such as being able to attach a policy from the networking overview tab instead of having to drill down into each network. That view could have a more spreadsheet-like feel, letting you make modifications with pulldowns or by directly adding the IP subnets.
Also, the manual process of creating VRFs on the gateway router is a bit tedious. I'm investigating whether OpenStack can run a script that makes a NETCONF RPC call to the gateway router.
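One way to script that NETCONF RPC call from Python is the third-party ncclient library. A rough, untested sketch: the host, credentials, and the exact Junos XML schema below are my assumptions, so treat it as a starting point rather than a working recipe:

```python
def build_vrf_config(name, route_target):
    """Build a Junos-style XML payload for a VRF routing instance (assumed schema)."""
    return """<config>
  <configuration>
    <routing-instances>
      <instance>
        <name>%s</name>
        <instance-type>vrf</instance-type>
        <vrf-target>
          <community>%s</community>
        </vrf-target>
      </instance>
    </routing-instances>
  </configuration>
</config>""" % (name, route_target)

# Pushing it to the router would look roughly like this (requires
# `pip install ncclient`; host and credentials are placeholders):
#
#   from ncclient import manager
#   with manager.connect(host="192.168.10.11", port=830, username="user",
#                        password="secret", hostkey_verify=False) as conn:
#       conn.edit_config(target="candidate",
#                        config=build_vrf_config("VRF1", "target:64512:101"))
#       conn.commit()

print(build_vrf_config("VRF1", "target:64512:101"))
```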

An alternative is to use SLAX and cURL on a Juniper MX router to extract the details from the Contrail controller.