
Sunday, October 12, 2014

Not all APIs are created equal.

Automation is becoming the new catchphrase in the networking industry. 2014 seems to be the year that marketing groups from different vendors started touting APIs on their products. Skeptical network engineers, however, have pointed out that an API does not mean their jobs are getting easier. In fact, it has been making their jobs a little harder.
Before, a network engineer could focus strictly on pure networking. Now, network engineers are increasingly required to know how to code, or at least how to read code.

Just because you have an API doesn’t mean all the devices can and will play nice with each other. 

We can see this by looking at three different platforms, comparing their API for data retrieval/configuration and the data structure each returns:

Platform      API        Data Structure
OpenStack     REST       JSON
Contrail      REST       JSON
Junos         NETCONF    XML

Two of the platforms use the same type of API and return the same data structure, while the third is different on both counts.

You might say: at least two of these platforms have the same API and the same data structure, so things should be good between them, right? As they say, the devil is in the details.

I can illustrate this just by looking at a simple IPv4 subnet. 

On OpenStack the abstracted data looks like this:

{ "networks": [ { "contrail:subnet_ipam": [ { "subnet_cidr": "12.1.1.0/24" } ] } ] }

On Contrail it looks like this:

{ "virtual-network": { "network_ipam_refs": [ { "attr": { "ipam_subnets": [ { "subnet": { "ip_prefix": "12.1.1.0", "ip_prefix_len": 24 } } ] } } ] } }

You can see that one platform combines the subnet with the mask while the other separates them. For DevOps and network engineers this is annoying. It's like having to learn different network operating systems. The goal of an API should be to provide a simplified abstraction layer.
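In practice you end up writing small shims to paper over the difference. Here is a minimal sketch, using the field names from the two responses above, that normalizes both shapes into a single CIDR string (the function names are my own):

def cidr_from_openstack(network):
    # `network` is one entry from the "networks" list in the OpenStack response.
    # The Contrail extension carries the subnet as a single CIDR string.
    return network["contrail:subnet_ipam"][0]["subnet_cidr"]

def cidr_from_contrail(vn):
    # `vn` is the "virtual-network" object from the Contrail response.
    # Contrail splits the prefix and the mask length into separate fields.
    subnet = vn["network_ipam_refs"][0]["attr"]["ipam_subnets"][0]["subnet"]
    return "%s/%d" % (subnet["ip_prefix"], subnet["ip_prefix_len"])

Both return "12.1.1.0/24" for the examples above, but the fact that every platform needs its own shim is exactly the problem.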


APIs need to be standardized. OpenFlow is a good attempt at this: it requires the underlay to speak a common protocol so that a controller can programmatically configure it. The networking industry has done a great job at standardizing protocols but a sorry job at creating a common API standard. Maybe the IETF needs to jump in on this. A standardized API could ultimately make our jobs that much easier.

Sunday, September 28, 2014

Exploring the REST API on Contrail.

I noticed on GitHub that there was a tutorial on how to access the REST API on Juniper Contrail.

https://juniper.github.io/contrail-vnc/api-doc/html/tutorial_with_rest.html#

The easiest way to access it is by using cURL. Contrail uses TCP port 8082 for its REST API.

The URL http://<contrail-ip>:8082/virtual-networks prints out a list of the virtual networks configured on Contrail.

$ curl -X GET -H "Content-Type: application/json; charset=UTF-8" http://172.16.1.4:8082/virtual-networks | python -mjson.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1229  100  1229    0     0   2129      0 --:--:-- --:--:-- --:--:--  2129
{
    "virtual-networks": [
        {
            "fq_name": [
                "default-domain",
                "default-project",
                "__link_local__"
            ],
            "href": "http://172.16.1.4:8082/virtual-network/4092df7b-997a-4ee7-a5cc-46d5db1187d4",
            "uuid": "4092df7b-997a-4ee7-a5cc-46d5db1187d4"
        },
        {
            "fq_name": [
                "default-domain",
                "default-project",
                "default-virtual-network"
            ],
            "href": "http://172.16.1.4:8082/virtual-network/34579f9a-064e-4048-96a7-a30355c54e44",
            "uuid": "34579f9a-064e-4048-96a7-a30355c54e44"
        },
        {
            "fq_name": [
                "default-domain",
                "default-project",
                "ip-fabric"
            ],
            "href": "http://172.16.100.104:8082/virtual-network/db7d1afe-bcaa-456b-b33a-9a36f6d176fe",
            "uuid": "db7d1afe-bcaa-456b-b33a-9a36f6d176fe"
        },
        {
            "fq_name": [
                "default-domain",
                "demo",
                "Network1"
            ],
            "href": "http://172.16.1.4:8082/virtual-network/380c0f4e-34b6-4901-98b2-61a8ed7afd7d",
            "uuid": "380c0f4e-34b6-4901-98b2-61a8ed7afd7d"
        },
        {
            "fq_name": [
                "default-domain",
                "demo",
                "Network2"
            ],
            "href": "http://172.16.100.104:8082/virtual-network/ffe03354-aaa2-4305-b615-654e14111134",
            "uuid": "ffe03354-aaa2-4305-b615-654e14111134"
        },
        {
            "fq_name": [
                "default-domain",
                "demo",
                "Network3"
            ],
            "href": "http://172.16.1.4:8082/virtual-network/93317422-eea4-4b77-88cb-5aaac3bb58b6",
            "uuid": "93317422-eea4-4b77-88cb-5aaac3bb58b6"
        }
    ]
}

However cURL, at least for me, isn't capable of handling the data structures programmatically; it's more like screen scraping. So I dug around the internet and noticed that Python has a cURL module, PycURL.

http://pycurl.sourceforge.net/doc/quickstart.html#

Now I can use Python to execute cURL and pull data from a REST API.

There are a few parts to this script.

The first part is a function to issue the cURL command.

The second part extracts the route target of a virtual network created on the Contrail controller.

This could later be used with another script to create a template configuration and program the underlay gateway router with a VRF.

In this script I cheated a little by grabbing a specific virtual network's URL from the previous cURL command, then parsing the data for the information I was looking for.
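A less hard-coded approach would be to walk the /virtual-networks list from the previous call and match on the network name. A minimal sketch, reusing the get_url() function defined in the script below (the helper name is my own):

def find_vn_href(api_base, name):
    # Look up a virtual network's href by its name (last element of fq_name).
    for vn in get_url(api_base + '/virtual-networks')['virtual-networks']:
        if vn['fq_name'][-1] == name:
            return vn['href']
    return None

# e.g. find_vn_href('http://172.16.1.4:8082', 'Network1')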

—————————————

import pycurl
import StringIO
import json

#Function to issue cURL command
def get_url(WEB):
  buf = StringIO.StringIO()
  c = pycurl.Curl()
  c.setopt(c.URL, WEB)
  c.setopt(c.WRITEFUNCTION, buf.write)
  c.perform()
  body = buf.getvalue()
  network = json.loads(body)
  buf.close()
  return network

#URL of virtual network on contrail

SITE = 'http://172.16.1.4:8082/virtual-network/380c0f4e-34b6-4901-98b2-61a8ed7afd7d'
objects = get_url(SITE)
#pretty print the json results
print json.dumps(objects, sort_keys=True, indent=4)

#This part is to grab the path of the Virtual Network name and RT from contrail

print "Name: ", objects['virtual-network']['name'], " RT: ", objects['virtual-network']['route_target_list']['route_target'][0]


Script in action:
——————
$ python contrail.py 
{
    "virtual-network": {
        "fq_name": [
            "default-domain", 
            "demo", 
            "Network1"
        ], 
        "href": "http://172.16.1.4:8082/virtual-network/380c0f4e-34b6-4901-98b2-61a8ed7afd7d", 
        "id_perms": {
            "created": "2014-09-19T19:19:11.650288", 
            "description": null, 
            "enable": true, 
            "last_modified": "2014-09-27T05:22:30.453524", 
            "permissions": {
                "group": "cloud-admin-group", 
                "group_access": 7, 
                "other_access": 7, 
                "owner": "cloud-admin", 
                "owner_access": 7
            }, 
            "uuid": {
                "uuid_lslong": 11002964217786203517, 
                "uuid_mslong": 4038619794410719489
            }
        }, 
        "instance_ip_back_refs": [
            {
                "attr": null, 
                "href": "http://172.16.1.4:8082/instance-ip/17b19dc0-b177-4df7-955b-57b8a87caf28", 
                "to": [
                    "17b19dc0-b177-4df7-955b-57b8a87caf28"
                ], 
                "uuid": "17b19dc0-b177-4df7-955b-57b8a87caf28"
            }, 
            {
                "attr": null, 
                "href": "http://172.16.1.4:8082/instance-ip/d525ddce-3542-4986-b8d1-56f3831d8678", 
                "to": [
                    "d525ddce-3542-4986-b8d1-56f3831d8678"
                ], 
                "uuid": "d525ddce-3542-4986-b8d1-56f3831d8678"
            }
        ], 
        "is_shared": false, 
        "name": "Network1", 
        "network_ipam_refs": [
            {
                "attr": {
                    "ipam_subnets": [
                        {
                            "default_gateway": "100.1.1.254", 
                            "subnet": {
                                "gw": "100.1.1.254", 
                                "ip_prefix": "100.1.1.0", 
                                "ip_prefix_len": 24
                            }
                        }
                    ]
                }, 
                "href": "http://172.16.1.4:8082/network-ipam/1f24fa35-b7bf-4d0f-8185-746f58e234c9", 
                "to": [
                    "default-domain", 
                    "demo", 
                    "Network1-ipam"
                ], 
                "uuid": "1f24fa35-b7bf-4d0f-8185-746f58e234c9"
            }
        ], 
        "network_policy_refs": [
            {
                "attr": {
                    "sequence": {
                        "major": 0, 
                        "minor": 0
                    }, 
                    "timer": null
                }, 
                "href": "http://172.16.1.4:8082/network-policy/0a1c1776-a323-4c00-959a-aada50b91be8", 
                "to": [
                    "default-domain", 
                    "demo", 
                    "net1<->net2"
                ], 
                "uuid": "0a1c1776-a323-4c00-959a-aada50b91be8"
            }
        ], 
        "parent_href": "http://172.16.1.4:8082/project/3af66afe-3284-40cc-8b04-85c69af512c7", 
        "parent_type": "project", 
        "parent_uuid": "3af66afe-3284-40cc-8b04-85c69af512c7", 
        "route_target_list": {
            "route_target": [
                "target:64512:100"
            ]
        }, 
        "router_external": false, 
        "routing_instances": [
            {
                "href": "http://172.16.1.4:8082/routing-instance/06165761-3b55-417d-acf0-0ad27a9010d0", 
                "to": [
                    "default-domain", 
                    "demo", 
                    "Network1", 
                    "Network1"
                ], 
                "uuid": "06165761-3b55-417d-acf0-0ad27a9010d0"
            }
        ], 
        "uuid": "380c0f4e-34b6-4901-98b2-61a8ed7afd7d", 
        "virtual_machine_interface_back_refs": [
            {
                "attr": null, 
                "href": "http://172.16.1.4:8082/virtual-machine-interface/2aefccdc-bf34-4d3b-bd37-41f716274883", 
                "to": [
                    "e432ad9a-8f6e-4f7c-813b-18ada76bfd64", 
                    "2aefccdc-bf34-4d3b-bd37-41f716274883"
                ], 
                "uuid": "2aefccdc-bf34-4d3b-bd37-41f716274883"
            }, 
            {
                "attr": null, 
                "href": "http://172.16.1.4:8082/virtual-machine-interface/f20bb145-49cd-43bf-a6e5-9d9c50794244", 
                "to": [
                    "2e775d6f-8043-41df-9295-d1b8d8f705a2", 
                    "f20bb145-49cd-43bf-a6e5-9d9c50794244"
                ], 
                "uuid": "f20bb145-49cd-43bf-a6e5-9d9c50794244"
            }
        ], 
        "virtual_network_properties": {
            "extend_to_external_routers": null, 
            "forwarding_mode": "l2_l3", 
            "network_id": 4, 
            "vxlan_network_identifier": null
        }
    }
}

Name:  Network1  RT:  target:64512:100


As you can see, I was able to pull the "name" of the virtual network and its route target. Later I can create a script template to build the VRF on a gateway router like an MX. That's one step closer to network automation.
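A minimal sketch of that next step, feeding the two values pulled above into a Junos-style configuration template (the template text is my own; only the name and route target come from the API):

name = objects['virtual-network']['name']
rt = objects['virtual-network']['route_target_list']['route_target'][0]

vrf_template = """
routing-instances {
    %s {
        instance-type vrf;
        vrf-target %s;
        vrf-table-label;
    }
}
"""

print vrf_template % (name, rt)

The rendered output could then be pushed to the router over NETCONF or pasted into a commit.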

Wednesday, September 24, 2014

How Contrail communicates with the underlay

Contrail typically consists of a cluster of nodes. The three main node roles are config, control, and compute. The config node is where OpenStack Horizon and the Contrail controller live. The control node forms an MP-BGP session to a gateway router. The compute node hosts all the VMs and virtual networks.

You can think of Contrail as a PE router, because that is essentially what a gateway router perceives at the other end of the connection. Contrail uses a vRouter, and when you configure virtual networks you can add a route target to each one. On the gateway router you create VRFs associated with the corresponding virtual networks, and prefixes can be exchanged. Data plane traffic traverses an MPLS tunnel between Contrail and the gateway router. It's at the gateway router where you "leak" the received Contrail virtual network routes into the router's main routing instance.

Here I use a Juniper MX as the gateway router. When I first set up Contrail I used the testbed.py script to add the MX gateway router.

The setting is called ext_router = [ip address]

Then in the Contrail web UI I should see the BGP session. You can, however, also add this after the Contrail installation.


On the MX, I configure an iBGP session to connect with the Contrail control node.

user@router# show protocols
mpls {
    interface all;
}
bgp {
    group IBGP-CONTRAIL {
        type internal;
        local-address 192.168.10.11;
        family inet-vpn {
            unicast;
        }
        neighbor 192.168.10.2;
    }
}

Then in Contrail config node I create a virtual network and add a route target.




I create a corresponding VRF on the MX with the route target.

user@router# show routing-instances
VRF1 {
    instance-type vrf;
    interface lt-3/0/0.3;
    route-distinguisher 1.1.1.1:101;
    vrf-target target:64512:101;
    routing-options {
        static {
            route 0.0.0.0/0 next-hop 192.168.12.1;
        }
    }
}


I check that the BGP session is established.

user@router# run show bgp summary                                     
Groups: 1 Peers: 1 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
bgp.l3vpn.0         
                       8          8          0          0          0          0
Peer                     AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
192.168.10.2          64512       5724       6292       0       3 1d 23:08:56 Establ
  bgp.l3vpn.0: 8/8/8/0
  VRF1.inet.0: 3/3/3/0

The virtual network IP addresses for the VMs are advertised to the MX.

user@router# run show route receive-protocol bgp 192.168.10.2 

VRF1.inet.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
  Prefix          Nexthop           MED     Lclpref    AS path
* 11.1.1.1/32             192.168.10.3                 100        ?
* 11.1.1.5/32             192.168.10.3                 100        ?
* 11.1.1.7/32             192.168.10.3                 200        ?

bgp.l3vpn.0: 8 destinations, 8 routes (8 active, 0 holddown, 0 hidden)
  Prefix          Nexthop           MED     Lclpref    AS path
  192.168.10.3:7:11.1.1.1/32                   
*                         192.168.10.3                 100        ?
  192.168.10.3:7:11.1.1.5/32                   
*                         192.168.10.3                 100        ?
  192.168.10.3:7:11.1.1.7/32                   
*                         192.168.10.3                 200        ?

mpls.0: 8 destinations, 8 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0                  *[MPLS/0] 1w1d 04:10:08, metric 1
                      Receive
1                  *[MPLS/0] 1w1d 04:10:08, metric 1
                      Receive
2                  *[MPLS/0] 1w1d 04:10:08, metric 1
                      Receive
13                 *[MPLS/0] 1w1d 04:10:08, metric 1
                      Receive
299904             *[VPN/170] 1d 23:14:00
                    > to 192.168.11.1 via lt-3/0/0.1, Pop     
299936             *[VPN/170] 1d 12:47:38
                      receive table VRF1.inet.0, Pop     
299952             *[VPN/170] 1d 12:47:38
                    > to 192.168.12.1 via lt-3/0/0.3, Pop     

bgp.l3vpn.0: 8 destinations, 8 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.10.3:7:11.1.1.1/32               
                   *[BGP/170] 1d 12:46:01, localpref 100, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 24
192.168.10.3:7:11.1.1.5/32               
                   *[BGP/170] 1d 12:46:01, localpref 100, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 18
192.168.10.3:7:11.1.1.7/32               
                   *[BGP/170] 1d 12:26:14, localpref 200, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 28


Note that a dynamic MPLS-over-GRE tunnel is created. You will need to configure tunnel services and dynamic tunnels on the MX.

user@router# show chassis
fpc 3 {
    pic 0 {
        tunnel-services;
    }
}

user@router# show routing-options
static {
    route 0.0.0.0/0 next-hop 10.161.1.1;
}
autonomous-system 64512;
dynamic-tunnels {
    dynamic_overlay_tunnels {
        source-address 192.168.10.11;
        gre;
        destination-networks {
            192.168.10.0/24;
        }
    }
}



The routes then show up in the VRF's routing table:

PoC-Demo.inet.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[Static/5] 1d 13:05:36
                    > to 192.168.12.1 via lt-3/0/0.3
11.1.1.1/32        *[BGP/170] 1d 13:03:59, localpref 100, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 24
11.1.1.5/32        *[BGP/170] 1d 13:03:59, localpref 100, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 18
11.1.1.7/32        *[BGP/170] 1d 12:44:12, localpref 200, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 28
192.168.12.0/24    *[Direct/0] 1d 13:05:36
                    > via lt-3/0/0.3
192.168.12.2/32    *[Local/0] 1d 13:05:36
                      Local via lt-3/0/0.3

lt (logical tunnel) interfaces are created to allow the virtual network traffic to pass between the VRF and the main routing instance. You could also use RIB groups and policies to do the same thing.

    lt-3/0/0 {
        unit 2 {
            encapsulation ethernet;
            peer-unit 3;
            family inet {
                address 192.168.12.1/24;
            }
        }
        unit 3 {
            encapsulation ethernet;
            peer-unit 2;
            family inet {
                address 192.168.12.2/24;
            }
        }
    }

You then need to make sure the interface connecting to the Contrail network has MPLS enabled.


interfaces {

    ge-3/1/1 {
        unit 0 {
            family inet {
                address 192.168.10.11/24;
            }
            family mpls;
        }
    }
    lo0 {
        unit 0 {
            family inet {
                address 1.1.1.1/32;
            }
            family iso {
                address 49.0002.0010.0100.1001.00;
            }
        }
    }
}

One thing you should be aware of: the next hop of a route advertised by Contrail points to the IP address of the compute node, not the control node.
user@router# run show route 11.1.1.1/32 detail

VRF1.inet.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
11.1.1.1/32 (1 entry, 1 announced)
        *BGP    Preference: 170/-101
                Route Distinguisher: 192.168.10.3:7  <<< Contrail's RD
                Next hop type: Indirect
                Address: 0x94f4a28
                Next-hop reference count: 3
                Source: 192.168.10.2
                Next hop type: Router, Next hop index: 660
                Next hop: via gr-3/0/0.32770, selected
                Label operation: Push 24
                Label TTL action: prop-ttl
                Session Id: 0xd
                Protocol next hop: 192.168.10.3  <<<< IP of compute node
                Push 24
                Indirect next hop: 0x9574410 1048574 INH Session ID: 0xe
                State: <Secondary Active Int Ext ProtectionCand>
                Local AS: 64512 Peer AS: 64512
                Age: 1d 13:54:24     Metric2: 0
                Validation State: unverified
                Task: BGP_64512.192.168.10.2+34735
                Announcement bits (1): 1-KRT
                AS path: ?
                Communities: target:64512:101   << RT from contrail
                Import Accepted
                VPN Label: 24
                Localpref: 100
                Router ID: 192.168.10.2         <<<< IP of control node
                Primary Routing Table bgp.l3vpn.0

Thursday, September 18, 2014

How to install Contrail on a single-node Ubuntu system.

I wanted to install and test out a Contrail setup on a single-node system.

I first started with a clean Ubuntu server. Contrail seems to be supported on the 12.04 LTS release (12.04.3 to be precise), so I downloaded the amd64 ISO and did a fresh install on a system.

After it came up I installed some packages to get the machine prepped.

First I made sure I was root:

sudo su
passwd

I gave root a password. This is used later when Contrail installs packages as root.

These are the packages I noticed while looking at different blogs on the OpenContrail.org website. I'm not sure if all these dependencies are needed, but this is what worked for me. (I'm pretty sure git isn't needed here, but I use git for other projects so I installed it anyway.)

apt-get install -y git-core ant build-essential pkg-config linux-headers-3.2.0-35-virtual
apt-get install -y scons git python-lxml wget gcc patch make unzip flex bison g++ libssl-dev autoconf automake libtool pkg-config vim python-dev python-setuptools python-paramiko

apt-get update


Then I went onto the Juniper website and downloaded the software.




You DON'T have to install OpenStack first to get this going.



This package has Contrail plus the OpenStack Havana release built in.

I FTP or SCP this file onto my Ubuntu server and place it in the /tmp directory.

Next I install the package:

dpkg -i contrail-install-packages_1.05.1-234~havana_all.deb

The packages get placed in a contrail directory.

cd /opt/contrail/contrail_packages

Then I run the setup shell script.

./setup.sh

After this, you'll need to create or modify a testbed.py script. This tells Contrail how to install the compute, storage, and control nodes. Since this is an all-in-one system, I'm going to clone the single-box example and modify it.

cd /opt/contrail/utils/fabfile/testbeds/

cp testbed_singlebox_example.py testbed.py

Next I edit the file.

vi testbed.py
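Roughly what the relevant parts of mine looked like after editing. The IP and passwords are placeholders, and the exact variable names come from the bundled single-box example, so treat this as a sketch rather than gospel:

from fabric.api import env

host1 = 'root@172.16.1.4'   # placeholder: this server's own IP

ext_routers = []            # or e.g. [('mx1', '192.168.10.11')] to add a gateway router
router_asn = 64512

env.roledefs = {
    'all': [host1],
    'cfgm': [host1],
    'openstack': [host1],
    'control': [host1],
    'compute': [host1],
    'collector': [host1],
    'webui': [host1],
    'database': [host1],
    'build': [host1],
}

env.passwords = {host1: 'secret'}   # placeholder root password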


In order to execute this file, you need to go back up a few directories (I know it's lame; the command doesn't seem to execute in the correct directory).

cd /opt/contrail/utils

Next you issue the Fabric commands:

fab install_contrail

fab setup_all


After a few minutes, the scripts will run and more packages will be installed and configured to the specifications of your testbed file.

The server will reboot automatically.

After it comes back up, sudo su again when you log in.

Then you need to source your authentication files.

keystonerc and openstackrc are located in /etc/contrail:

source /etc/contrail/keystonerc
source /etc/contrail/openstackrc

Here are some commands you can issue after installation to check the state of OpenStack:

openstack-status

nova-manage service list

root@ubuntu:/home/user# openstack-status
== Nova services ==
openstack-nova-api:           active
openstack-nova-compute:       active
openstack-nova-network:       inactive (disabled on boot)
openstack-nova-scheduler:     active
openstack-nova-volume:        inactive (disabled on boot)
openstack-nova-conductor:     active
== Glance services ==
openstack-glance-api:         active
openstack-glance-registry:    active
== Keystone service ==
openstack-keystone:           active
== Cinder services ==
openstack-cinder-api:         active
openstack-cinder-scheduler:   active
openstack-cinder-volume:      inactive (disabled on boot)
== Support services ==
mysql:                        active
libvirt-bin:                  active
rabbitmq-server:              inactive (disabled on boot)
memcached:                    inactive (disabled on boot)
== Keystone users ==

+----------------------------------+---------+---------+---------------------+
|                id                |   name  | enabled |        email        |
+----------------------------------+---------+---------+---------------------+
| 2fa7b037efe1437a9045eab35f446511 |  admin  |   True  |  admin@example.com  |
| 6bb262f464a546b391b30879f1e8f10b |  cinder |   True  |  cinder@example.com |
| fdedd2c1a26d44ab9e83f000383cedf3 |   demo  |   True  |   demo@example.com  |
| cdc338c569af42cf87ba7bc7e7e161a8 |  glance |   True  |  glance@example.com |
| 2e28f88537064d97bfed64ad62f2fc66 | neutron |   True  | neutron@example.com |
| c09776b2f84744198d46c0361bfc1070 |   nova  |   True  |   nova@example.com  |
+----------------------------------+---------+---------+---------------------+
== Glance images ==
ID                                   Name                           Disk Format          Container Format     Size         
------------------------------------ ------------------------------ -------------------- -------------------- --------------
== Nova instance flavors ==
m1.medium: Memory: 4096MB, VCPUS: 2, Root: 40GB, Ephemeral: 0Gb, FlavorID: 3, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.tiny: Memory: 512MB, VCPUS: 1, Root: 1GB, Ephemeral: 0Gb, FlavorID: 1, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.large: Memory: 8192MB, VCPUS: 4, Root: 80GB, Ephemeral: 0Gb, FlavorID: 4, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.xlarge: Memory: 16384MB, VCPUS: 8, Root: 160GB, Ephemeral: 0Gb, FlavorID: 5, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.small: Memory: 2048MB, VCPUS: 1, Root: 20GB, Ephemeral: 0Gb, FlavorID: 2, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
== Nova instances ==

root@ubuntu:/home/user# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth ubuntu                               internal         enabled    :-)   2014-09-18 17:26:26
nova-console     ubuntu                               internal         enabled    :-)   2014-09-18 17:26:26
nova-scheduler   ubuntu                               internal         enabled    :-)   2014-09-18 17:26:26
nova-conductor   ubuntu                               internal         enabled    :-)   2014-09-18 17:26:27
nova-compute     ubuntu                               nova             enabled    :-)   2014-09-18 17:26:26


You are now ready to log in to the Horizon dashboard to configure your OpenStack platform.

http://x.x.x.x/horizon



To log in to your Contrail controller, change the port number to 8143:

https://x.x.x.x:8143/



Next time I'll show how to create an MP-BGP connection to an MX gateway to allow traffic from the virtual network to reach a physical network.

Thursday, May 22, 2014

Test driving Open Contrail

So I went to the hands-on OpenContrail Meetup. It started off as a semi-Q&A overview of the architecture and business use case.

There is a need for service providers to create services for customers that can:

a) be deployed in a timely manner (seconds rather than months)

b) lower OpEx through automation

c) reduce operational complexity with template configs

Contrail is a module that can currently be added to an OpenStack or CloudStack platform. I'll talk about the OpenStack implementation, as I know more about this cloud platform. The Contrail controller works entirely in the overlay. It does not know anything about the underlay except for the gateway, so it expects the physical network to already be in place. This means it can interoperate with any existing switch vendor's network. When you install Contrail as a Neutron plugin into OpenStack, it will create a vRouter which is bound to the hypervisor. Currently KVM is the hypervisor it works with, but I hear it was tested on VMware as a VM, though not directly on ESXi. The vRouter is important because it takes the place of Open vSwitch (OVS). The main function of Contrail is creating the overlay network within OpenStack.

Here's how this works. Let's look at this simple logical topology



First you would go to the Contrail Controller (Web GUI) and create the two networks.


Then you would go to the OpenStack Horizon web UI and spin up your VM instances. Under the networking tab you should be able to associate the network with the VM.



Next you would need to create a policy so that each network can talk to each other. Think of it as creating a firewall ACL.



Last you need to attach the policy to each network.

Contrail will automatically assign the VM an IP address and point the VM's default route to the vRouter.


This will allow you to access the Back End Server.

Overall provisioning time is roughly 2-3 minutes, depending on how fast the VMs spin up.

Now if you want to provide access to anything outside of the data center, you'll need to go through a gateway router, a.k.a. the Data Center Interconnect (DCI).

To understand this, you will need a gateway router that speaks MP-BGP and can support GRE or VXLAN. A GRE tunnel is created from the (virtual) Contrail vRouter to the physical gateway router. If you want multi-tenancy, you have to put each tunnel into a separate VRF on the gateway router. This provisioning of the gateway router is a manual process; Contrail does not manage the gateway router at all. However, you may be able to automate this function with scripting.
The vRouter exchanges routes via MP-BGP, and there is a setting in Contrail to set the route target. The BGP family types the vRouter supports are inet-vpn (vpnv4) and evpn.

Overall the provisioning is fairly simple. There are a few things I would like "improved" in the product, such as being able to attach a policy from the networking overview "tab" instead of having to drill down into each network. This view could have a more Excel-like feel, letting you make modifications using pulldowns or by directly adding the IP subnets.
Also, the manual process of creating VRFs on the gateway router is a bit tedious. I'm investigating whether OpenStack has the ability to run a script that can make a NETCONF RPC call to the gateway router.
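A minimal sketch of what that could look like with the ncclient Python library (hypothetical host and credentials; assumes NETCONF over SSH is enabled on the MX, and the VRF values are the ones from my lab):

from ncclient import manager

# Junos configuration for the VRF, expressed as XML for edit-config.
vrf_xml = """
<config>
  <configuration>
    <routing-instances>
      <instance>
        <name>VRF1</name>
        <instance-type>vrf</instance-type>
        <route-distinguisher><rd-type>1.1.1.1:101</rd-type></route-distinguisher>
        <vrf-target><community>target:64512:101</community></vrf-target>
      </instance>
    </routing-instances>
  </configuration>
</config>
"""

with manager.connect(host='192.168.10.11', port=830, username='user',
                     password='secret', hostkey_verify=False,
                     device_params={'name': 'junos'}) as m:
    m.edit_config(target='candidate', config=vrf_xml)  # load into candidate
    m.commit()                                         # commit on the MX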

An alternative is to use SLAX and cURL on a Juniper MX router to extract the details from the Contrail controller.