Tuesday, September 30, 2014

Using Python to connect remotely to OpenStack

In order to beef up my DevOps skills I decided to see if I could connect to OpenStack remotely through its REST API using Python. It turns out there are Python modules built specifically for this.

One such module is python-novaclient. You'll need pip on your machine to install it.

sudo easy_install pip


Then you can use pip to install the nova client.

sudo pip install python-novaclient


If you want to look at the details of the Python module, it can be found here:

https://github.com/openstack/python-novaclient

This is where I looked more into the code and found out its capabilities.

You can pass parameters to the "Client" constructor:

def __init__(self, username=None, api_key=None, project_id=None,
             auth_url=None, insecure=False, timeout=None,
             proxy_tenant_id=None, proxy_token=None, region_name=None,
             endpoint_type='publicURL', extensions=None,
             service_type='compute', service_name=None,
             volume_service_name=None, timings=False, bypass_url=None,
             os_cache=False, no_cache=True, http_log_debug=False,
             auth_system='keystone', auth_plugin=None, auth_token=None,
             cacert=None, tenant_id=None, user_id=None,
             connection_pool=False, session=None, auth=None,
             completion_cache=None):

First I built a login function that returns these parameters as a dictionary, which I can call from my main script.


login.py
-----------
def get_nova_credentials():
    cred = {}
    cred['username'] = "admin"
    cred['api_key'] = "password"
    cred['auth_url'] = "http://<openstack-ip>:5000/v2.0"
    cred['project_id'] = "demo"
    cred['service_type'] = "compute"
    return cred
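
Hard-coding the password is fine for a quick lab, but the same helper can instead read the standard OS_* environment variables (the ones set when you source an openrc/openstackrc file). A small variation, assuming those variables are exported in your shell:

import os

def get_nova_credentials():
    cred = {}
    cred['username'] = os.environ['OS_USERNAME']
    cred['api_key'] = os.environ['OS_PASSWORD']
    cred['auth_url'] = os.environ['OS_AUTH_URL']
    cred['project_id'] = os.environ['OS_TENANT_NAME']
    cred['service_type'] = "compute"
    return cred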

Now in my main script I can import the client and my credentials

server-list.py
-----------------
#!/usr/bin/env python

import novaclient
from novaclient.v1_1 import client
from login import get_nova_credentials
credentials = get_nova_credentials()

#Pass credentials to the client function.

nova = client.Client(**credentials)

#grab the list of servers and print out the id, names and status

vms = nova.servers.list(detailed=True)
for vm in vms:
  print vm.id, vm.name, vm.status

—————
Script in action:

laptop$ python server-list.py 
f5801333-5d81-496c-b257-e589ca36e944 Cirros-VM2 ACTIVE
098270e0-e5fb-4ea6-a1f1-2dfca11a409d Cirros-VM1 ACTIVE

So what's the big deal? Why do this when you can use the Horizon web UI?



Now that I have a basic understanding of this module, I can start automating things such as building VMs and virtual networks in a scaled and precise manner.
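
For example, here is a minimal sketch of booting a VM with the same client. The image name ("cirros"), flavor name ("m1.tiny") and network UUID are placeholders for whatever exists in your environment:

#!/usr/bin/env python

from novaclient.v1_1 import client
from login import get_nova_credentials

nova = client.Client(**get_nova_credentials())

#Look up an image and flavor by name (placeholders for this sketch).
image = nova.images.find(name="cirros")
flavor = nova.flavors.find(name="m1.tiny")

#Boot the instance and attach it to an existing virtual network by UUID.
server = nova.servers.create(name="Cirros-VM3",
                             image=image,
                             flavor=flavor,
                             nics=[{'net-id': '<network-uuid>'}])
print server.id, server.status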



Sunday, September 28, 2014

Exploring the REST API on Contrail

I noticed on github that there was a tutorial on how to access the REST API on Juniper Contrail.

https://juniper.github.io/contrail-vnc/api-doc/html/tutorial_with_rest.html#

The easiest way to access this is by using cURL. Contrail uses TCP port 8082 for its REST API.

The URL http://<contrail-ip>:8082/virtual-networks returns a list of the virtual networks configured on Contrail.

$ curl -X GET -H "Content-Type: application/json; charset=UTF-8" http://172.16.1.4:8082/virtual-networks | python -mjson.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1229  100  1229    0     0   2129      0 --:--:-- --:--:-- --:--:--  2129
{
    "virtual-networks": [
        {
            "fq_name": [
                "default-domain",
                "default-project",
                "__link_local__"
            ],
            "href": "http://172.16.1.4:8082/virtual-network/4092df7b-997a-4ee7-a5cc-46d5db1187d4",
            "uuid": "4092df7b-997a-4ee7-a5cc-46d5db1187d4"
        },
        {
            "fq_name": [
                "default-domain",
                "default-project",
                "default-virtual-network"
            ],
            "href": "http://172.16.1.4:8082/virtual-network/34579f9a-064e-4048-96a7-a30355c54e44",
            "uuid": "34579f9a-064e-4048-96a7-a30355c54e44"
        },
        {
            "fq_name": [
                "default-domain",
                "default-project",
                "ip-fabric"
            ],
            "href": "http://172.16.100.104:8082/virtual-network/db7d1afe-bcaa-456b-b33a-9a36f6d176fe",
            "uuid": "db7d1afe-bcaa-456b-b33a-9a36f6d176fe"
        },
        {
            "fq_name": [
                "default-domain",
                "demo",
                "Network1"
            ],
            "href": "http://172.16.1.4:8082/virtual-network/380c0f4e-34b6-4901-98b2-61a8ed7afd7d",
            "uuid": "380c0f4e-34b6-4901-98b2-61a8ed7afd7d"
        },
        {
            "fq_name": [
                "default-domain",
                "demo",
                "Network2"
            ],
            "href": "http://172.16.100.104:8082/virtual-network/ffe03354-aaa2-4305-b615-654e14111134",
            "uuid": "ffe03354-aaa2-4305-b615-654e14111134"
        },
        {
            "fq_name": [
                "default-domain",
                "demo",
                "Network3"
            ],
            "href": "http://172.16.1.4:8082/virtual-network/93317422-eea4-4b77-88cb-5aaac3bb58b6",
            "uuid": "93317422-eea4-4b77-88cb-5aaac3bb58b6"
        }
    ]
}

However cURL, at least for me, isn't a great way to work with the data structures programmatically; it's more like screen scraping. So I dug around the internet and found that Python has a curl module, pycurl.

http://pycurl.sourceforge.net/doc/quickstart.html#

Now I can use Python, via pycurl, to pull data from the REST API.

There are a few parts to this script.

The first part is a function that issues the cURL request.

The second part extracts the route-target from a virtual network created in the Contrail controller.

This could then be used later on with another script to create a template configuration and program the underlay gateway router with a VRF.

In this script I cheated a little by grabbing a specific virtual network's URL from the previous curl command, then parsed the data to pull out the information I was looking for.

—————————————

import pycurl
import StringIO
import json

#Function to issue cURL command
def get_url(WEB):
  buf = StringIO.StringIO()
  c = pycurl.Curl()
  c.setopt(c.URL, WEB)
  c.setopt(c.WRITEFUNCTION, buf.write)
  c.perform()
  body = buf.getvalue()
  network = json.loads(body)
  buf.close()
  return network

#URL of virtual network on contrail

SITE = 'http://172.16.1.4:8082/virtual-network/380c0f4e-34b6-4901-98b2-61a8ed7afd7d'
objects = get_url(SITE)
#pretty print the json results
print json.dumps(objects, sort_keys=True, indent=4)

#Grab the virtual network name and route-target from the returned object

print "Name: ", objects['virtual-network']['name'], " RT: ", objects['virtual-network']['route_target_list']['route_target'][0]


Script in action:
——————
$ python contrail.py 
{
    "virtual-network": {
        "fq_name": [
            "default-domain", 
            "demo", 
            "Network1"
        ], 
        "href": "http://172.16.1.4:8082/virtual-network/380c0f4e-34b6-4901-98b2-61a8ed7afd7d", 
        "id_perms": {
            "created": "2014-09-19T19:19:11.650288", 
            "description": null, 
            "enable": true, 
            "last_modified": "2014-09-27T05:22:30.453524", 
            "permissions": {
                "group": "cloud-admin-group", 
                "group_access": 7, 
                "other_access": 7, 
                "owner": "cloud-admin", 
                "owner_access": 7
            }, 
            "uuid": {
                "uuid_lslong": 11002964217786203517, 
                "uuid_mslong": 4038619794410719489
            }
        }, 
        "instance_ip_back_refs": [
            {
                "attr": null, 
                "href": "http://172.16.1.4:8082/instance-ip/17b19dc0-b177-4df7-955b-57b8a87caf28", 
                "to": [
                    "17b19dc0-b177-4df7-955b-57b8a87caf28"
                ], 
                "uuid": "17b19dc0-b177-4df7-955b-57b8a87caf28"
            }, 
            {
                "attr": null, 
                "href": "http://172.16.1.4:8082/instance-ip/d525ddce-3542-4986-b8d1-56f3831d8678", 
                "to": [
                    "d525ddce-3542-4986-b8d1-56f3831d8678"
                ], 
                "uuid": "d525ddce-3542-4986-b8d1-56f3831d8678"
            }
        ], 
        "is_shared": false, 
        "name": "Network1", 
        "network_ipam_refs": [
            {
                "attr": {
                    "ipam_subnets": [
                        {
                            "default_gateway": "100.1.1.254", 
                            "subnet": {
                                "gw": "100.1.1.254", 
                                "ip_prefix": "100.1.1.0", 
                                "ip_prefix_len": 24
                            }
                        }
                    ]
                }, 
                "href": "http://172.16.1.4:8082/network-ipam/1f24fa35-b7bf-4d0f-8185-746f58e234c9", 
                "to": [
                    "default-domain", 
                    "demo", 
                    "Network1-ipam"
                ], 
                "uuid": "1f24fa35-b7bf-4d0f-8185-746f58e234c9"
            }
        ], 
        "network_policy_refs": [
            {
                "attr": {
                    "sequence": {
                        "major": 0, 
                        "minor": 0
                    }, 
                    "timer": null
                }, 
                "href": "http://172.16.1.4:8082/network-policy/0a1c1776-a323-4c00-959a-aada50b91be8", 
                "to": [
                    "default-domain", 
                    "demo", 
                    "net1<->net2"
                ], 
                "uuid": "0a1c1776-a323-4c00-959a-aada50b91be8"
            }
        ], 
        "parent_href": "http://172.16.1.4:8082/project/3af66afe-3284-40cc-8b04-85c69af512c7", 
        "parent_type": "project", 
        "parent_uuid": "3af66afe-3284-40cc-8b04-85c69af512c7", 
        "route_target_list": {
            "route_target": [
                "target:64512:100"
            ]
        }, 
        "router_external": false, 
        "routing_instances": [
            {
                "href": "http://172.16.1.4:8082/routing-instance/06165761-3b55-417d-acf0-0ad27a9010d0", 
                "to": [
                    "default-domain", 
                    "demo", 
                    "Network1", 
                    "Network1"
                ], 
                "uuid": "06165761-3b55-417d-acf0-0ad27a9010d0"
            }
        ], 
        "uuid": "380c0f4e-34b6-4901-98b2-61a8ed7afd7d", 
        "virtual_machine_interface_back_refs": [
            {
                "attr": null, 
                "href": "http://172.16.1.4:8082/virtual-machine-interface/2aefccdc-bf34-4d3b-bd37-41f716274883", 
                "to": [
                    "e432ad9a-8f6e-4f7c-813b-18ada76bfd64", 
                    "2aefccdc-bf34-4d3b-bd37-41f716274883"
                ], 
                "uuid": "2aefccdc-bf34-4d3b-bd37-41f716274883"
            }, 
            {
                "attr": null, 
                "href": "http://172.16.1.4:8082/virtual-machine-interface/f20bb145-49cd-43bf-a6e5-9d9c50794244", 
                "to": [
                    "2e775d6f-8043-41df-9295-d1b8d8f705a2", 
                    "f20bb145-49cd-43bf-a6e5-9d9c50794244"
                ], 
                "uuid": "f20bb145-49cd-43bf-a6e5-9d9c50794244"
            }
        ], 
        "virtual_network_properties": {
            "extend_to_external_routers": null, 
            "forwarding_mode": "l2_l3", 
            "network_id": 4, 
            "vxlan_network_identifier": null
        }
    }
}

Name:  Network1  RT:  target:64512:100


As you can see, I was able to pull the name of the virtual network and its route-target. Later I can feed these values into a script template to build the VRF on a gateway router like an MX. That's one step closer to network automation.
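
As a rough preview of that, here is a sketch that turns the extracted name and route-target into an MX routing-instance snippet using a plain Python string template. The route-distinguisher and lt- interface below are placeholders, not values pulled from Contrail:

#Build a Junos routing-instance snippet from the values pulled off the API.
VRF_TEMPLATE = """routing-instances {
    %(name)s {
        instance-type vrf;
        interface lt-0/0/0.0;
        route-distinguisher 1.1.1.1:100;
        vrf-target %(rt)s;
    }
}"""

name = objects['virtual-network']['name']
rt = objects['virtual-network']['route_target_list']['route_target'][0]

print VRF_TEMPLATE % {'name': name, 'rt': rt}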

Wednesday, September 24, 2014

How Contrail communicates with the underlay

Contrail typically consists of a cluster of nodes. The three main node types are config, control and compute. The config node is where OpenStack Horizon and the Contrail controller live. The control node forms an MP-BGP session to a gateway router. The compute node hosts all the VMs and virtual networks.

You can think of Contrail as a PE router, since that is essentially how the gateway router perceives the other end of the connection. Contrail uses a vRouter, and when you configure virtual networks you have the ability to add a route-target to each one. On the gateway router you create VRFs associated with the corresponding virtual networks so that prefixes can be exchanged. Data plane traffic traverses an MPLS-over-GRE tunnel between Contrail and the gateway router. It's at the gateway router where you "leak" the received Contrail virtual network routes into the main routing instance.

Here I use a Juniper MX as the gateway router. When I first set up Contrail I used the testbed.py script to add the MX gateway router.

The relevant testbed.py setting is ext_router = [ip address].

Then in the Contrail web UI I should see the BGP peer. You can, however, also add this after the Contrail installation.


On the MX, I configure an iBGP session to connect with the Contrail control node.

user@router# show protocols
mpls {
    interface all;
}
bgp {
    group IBGP-CONTRAIL {
        type internal;
        local-address 192.168.10.11;
        family inet-vpn {
            unicast;
        }
        neighbor 192.168.10.2;
    }
}

Then on the Contrail config node I create a virtual network and add a route-target.




I create a corresponding VRF on the MX with the route target.

user@router# show routing-instances
VRF1 {
    instance-type vrf;
    interface lt-3/0/0.3;
    route-distinguisher 1.1.1.1:101;
    vrf-target target:64512:101;
    routing-options {
        static {
            route 0.0.0.0/0 next-hop 192.168.12.1;
        }
    }
}


I check that the BGP session is established.

user@router# run show bgp summary                                     
Groups: 1 Peers: 1 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
bgp.l3vpn.0         
                       8          8          0          0          0          0
Peer                     AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
192.168.10.2          64512       5724       6292       0       3 1d 23:08:56 Establ
  bgp.l3vpn.0: 8/8/8/0
  VRF1.inet.0: 3/3/3/0

The IP addresses of the VMs in the virtual network are then advertised.

user@router# run show route receive-protocol bgp 192.168.10.2 

VRF1.inet.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
  Prefix          Nexthop           MED     Lclpref    AS path
* 11.1.1.1/32             192.168.10.3                 100        ?
* 11.1.1.5/32             192.168.10.3                 100        ?
* 11.1.1.7/32             192.168.10.3                 200        ?

bgp.l3vpn.0: 8 destinations, 8 routes (8 active, 0 holddown, 0 hidden)
  Prefix          Nexthop           MED     Lclpref    AS path
  192.168.10.3:7:11.1.1.1/32                   
*                         192.168.10.3                 100        ?
  192.168.10.3:7:11.1.1.5/32                   
*                         192.168.10.3                 100        ?
  192.168.10.3:7:11.1.1.7/32                   
*                         192.168.10.3                 200        ?

mpls.0: 8 destinations, 8 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0                  *[MPLS/0] 1w1d 04:10:08, metric 1
                      Receive
1                  *[MPLS/0] 1w1d 04:10:08, metric 1
                      Receive
2                  *[MPLS/0] 1w1d 04:10:08, metric 1
                      Receive
13                 *[MPLS/0] 1w1d 04:10:08, metric 1
                      Receive
299904             *[VPN/170] 1d 23:14:00
                    > to 192.168.11.1 via lt-3/0/0.1, Pop     
299936             *[VPN/170] 1d 12:47:38
                      receive table VRF1.inet.0, Pop     
299952             *[VPN/170] 1d 12:47:38
                    > to 192.168.12.1 via lt-3/0/0.3, Pop     

bgp.l3vpn.0: 8 destinations, 8 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.10.3:7:11.1.1.1/32               
                   *[BGP/170] 1d 12:46:01, localpref 100, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 24
192.168.10.3:7:11.1.1.5/32               
                   *[BGP/170] 1d 12:46:01, localpref 100, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 18
192.168.10.3:7:11.1.1.7/32               
                   *[BGP/170] 1d 12:26:14, localpref 200, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 28


Note that these routes resolve over a dynamic MPLS-over-GRE tunnel (gr-3/0/0.32770). You will need to configure tunnel services and the dynamic tunnel on the MX.

user@router# show chassis
fpc 3 {
    pic 0 {
        tunnel-services;
    }
}

user@router# show routing-options
static {
    route 0.0.0.0/0 next-hop 10.161.1.1;
}
autonomous-system 64512;
dynamic-tunnels {
    dynamic_overlay_tunnels {
        source-address 192.168.10.11;
        gre;
        destination-networks {
            192.168.10.0/24;
        }
    }
}

With the tunnel up, the VRF routing table on the MX shows the Contrail prefixes resolving over the dynamic gr- interface:

PoC-Demo.inet.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[Static/5] 1d 13:05:36
                    > to 192.168.12.1 via lt-3/0/0.3
11.1.1.1/32        *[BGP/170] 1d 13:03:59, localpref 100, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 24
11.1.1.5/32        *[BGP/170] 1d 13:03:59, localpref 100, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 18
11.1.1.7/32        *[BGP/170] 1d 12:44:12, localpref 200, from 192.168.10.2
                      AS path: ?, validation-state: unverified
                    > via gr-3/0/0.32770, Push 28
192.168.12.0/24    *[Direct/0] 1d 13:05:36
                    > via lt-3/0/0.3
192.168.12.2/32    *[Local/0] 1d 13:05:36
                      Local via lt-3/0/0.3

LT (logical tunnel) interfaces are created to allow the virtual network traffic to pass between the VRF and the main routing instance. You could also use rib-groups and policies to accomplish the same thing.

    lt-3/0/0 {
        unit 2 {
            encapsulation ethernet;
            peer-unit 3;
            family inet {
                address 192.168.12.1/24;
            }
        }
        unit 3 {
            encapsulation ethernet;
            peer-unit 2;
            family inet {
                address 192.168.12.2/24;
            }
        }
    }

You then need to make sure the interface connecting to the Contrail network has family mpls enabled.


interfaces {

    ge-3/1/1 {
        unit 0 {
            family inet {
                address 192.168.10.11/24;
            }
            family mpls;
        }
    }
    lo0 {
        unit 0 {
            family inet {
                address 1.1.1.1/32;
            }
            family iso {
                address 49.0002.0010.0100.1001.00;
            }
        }
    }
}

One thing you should be aware of is that the protocol next-hop of the route advertised by Contrail points to the IP address of the compute node, not the control node.

user@router# run show route 11.1.1.1/32 detail

VRF1.inet.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
11.1.1.1/32 (1 entry, 1 announced)
        *BGP    Preference: 170/-101
                Route Distinguisher: 192.168.10.3:7  <<< Contrail's RD
                Next hop type: Indirect
                Address: 0x94f4a28
                Next-hop reference count: 3
                Source: 192.168.10.2
                Next hop type: Router, Next hop index: 660
                Next hop: via gr-3/0/0.32770, selected
                Label operation: Push 24
                Label TTL action: prop-ttl
                Session Id: 0xd
                Protocol next hop: 192.168.10.3  <<<< IP of compute node
                Push 24
                Indirect next hop: 0x9574410 1048574 INH Session ID: 0xe
                State: <Secondary Active Int Ext ProtectionCand>
                Local AS: 64512 Peer AS: 64512
                Age: 1d 13:54:24     Metric2: 0
                Validation State: unverified
                Task: BGP_64512.192.168.10.2+34735
                Announcement bits (1): 1-KRT
                AS path: ?
                Communities: target:64512:101   << RT from contrail
                Import Accepted
                VPN Label: 24
                Localpref: 100
                Router ID: 192.168.10.2         <<<< IP of control node
                Primary Routing Table bgp.l3vpn.0

Monday, September 22, 2014

How to access a VM in OpenStack + Contrail

When you first spin up VMs in an OpenStack + Contrail environment you don't have many choices for accessing your VM. You can go through the OpenStack web UI and access it via the console.



But that method is not ideal as you cannot do things such as copy and paste.

Another method is to access the VM from the Contrail compute node. All VMs are stored on the compute node of the Contrail cluster. 


Or you can inject an ssh-key into your VM so you can access the VM from the compute node.

First you create your ssh-key.

Select the Access and Security tab in Openstack.

Choose Keypairs

 Then create a key pair.



You will then be able to download the "key" to your local computer. You will need to copy this key onto the Compute node.
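
If you would rather script this step than click through Horizon, the keypair can also be created through python-novaclient. This is just a sketch reusing the credentials helper from the earlier post; the key name is arbitrary:

from novaclient.v1_1 import client
from login import get_nova_credentials

nova = client.Client(**get_nova_credentials())

#Create the keypair; Nova stores the public half and returns the private key once.
keypair = nova.keypairs.create(name="ssh-key")
with open("ssh-key.pem", "w") as f:
    f.write(keypair.private_key)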


MyComputer$ scp ssh-key.pem root@172.16.100.104:~/.ssh


I chose to place this key in the .ssh directory of the Compute node.

Compute-Node:~/.ssh$ ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts  ssh-key.pem

Next you create your VM.

Make sure you choose the ssh-key you created.



You can see if your ssh-key was injected from the horizon dashboard.

Note: One of the problems I see in Horizon is the inability to inject an ssh-key after the VM has been created. There is no way to edit the VM and do this post-creation. Pretty annoying, especially if there are many ssh-keys and you forget to choose the correct one.

Next go back to your compute node.
Issue the netstat -rn command.

When a VM is created the compute node generates a 169.254.x.x link-local address for it. Like a loopback address, it is only reachable from the local machine and cannot be accessed over the network.

Compute-Node$ netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         172.16.100.1    0.0.0.0         UG        0 0          0 vhost0
169.254.0.3     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
169.254.0.4     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
169.254.0.5     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
169.254.0.6     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
169.254.0.7     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
169.254.0.8     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
169.254.0.9     0.0.0.0         255.255.255.255 UH        0 0          0 vhost0
172.16.100.0    0.0.0.0         255.255.255.0   U         0 0          0 vhost0


Then you can ssh into the VM of your choice with the -i parameter and the path to your ssh-key. For example:

Compute-Node$ ssh -i ~/.ssh/ssh-key.pem cirros@169.254.0.3

Cirros-VM1$ whoami
cirros

You can see the injected key in the authorized_keys file in the .ssh directory of the VM:

$ more authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCe9TjQfiVTiidtt2qUICwK/7DArACsjLWDkx7Esvu6vWS8MahyrlgNkeQtaFDx7wub5LaHesqq6wj2pKDX07RWxylAkxShqy2+ZIoOgJqMBr0vfq4xpp2qU7fPiAq4YV3CdqTwOggnNHQNeGgpM6406IJSJcVYJVqTC3/3SsFgxzva4UqNgA3mjRQxSsmVxc6jVHVKfAYQ8+fDFNniNjY+q9qvihtAXwmLGfv/gxE/N01aMC+MH1b5cmj2o7WNpbt5qGDyrf3jB6rqz5CI95XS0MjvaScTWKb5ul0yuWTkp1zcPY4vzDaFjc0Fl7627Lm2IuQZg76dgKl3rnnPtXbv Generated by Nova




Thursday, September 18, 2014

How to install Contrail on a single node Ubuntu system.

I wanted to install and test out a Contrail setup on a single-node system.

I first started with a clean Ubuntu server. Contrail seems to be supported on the 12.04 LTS release (12.04.3 to be precise), so I downloaded the amd64 ISO and did a fresh install on a system.

After it came up I did some package adds to get the machine prepped.

First I made sure I was root.
sudo su
passwd
I gave root a password. This is to be used later when Contrail tries to install packages using root.

These are the packages I noticed while looking at different blogs on the OpenContrail.org website. I'm not sure if all these dependencies are needed, but this is what worked for me. (I'm pretty sure git isn't needed here, but I use git for other projects so I installed it anyway.)

apt-get install  -y git-core ant build-essential pkg-config linux-headers-3.2.0-35-virtual
apt-get install -y scons git python-lxml wget gcc patch make unzip flex bison g++ libssl-dev autoconf automake libtool pkg-config vim python-dev python-setuptools python-paramiko

apt-get update


Then I went onto the Juniper website and downloaded the software.




You DON'T have to install Openstack first to get this going.



This package has Contrail plus the Openstack Havana version built in.

I ftp or scp this file onto my ubuntu server.

I place it in the /tmp directory.

Next I install the package.

dpkg -i contrail-install-packages_1.05.1-234~havana_all.deb

The packages get placed in a contrail directory.

cd /opt/contrail/contrail_packages

Then I run the setup shell script.

./setup.sh

After this, you'll need to create or modify a testbed.py script. This tells Contrail how to install the compute, storage and control nodes. Since this is an all-in-one system, I'm going to clone the single-box example and modify it.

cd /opt/contrail/utils/fabfile/testbeds/

cp testbed_singlebox_example.py testbed.py

Next I edit the file.

vi testbed.py
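
For reference, this is roughly the shape of my edited testbed.py for a single box. I'm recalling the field names from the bundled single-box example, so treat this as an illustration and diff it against the example file rather than copying it verbatim; the IP and passwords are placeholders:

from fabric.api import env

#All roles point at the same host for an all-in-one install (placeholder IP).
host1 = 'root@192.168.1.10'

env.roledefs = {
    'all': [host1],
    'cfgm': [host1],
    'openstack': [host1],
    'control': [host1],
    'compute': [host1],
    'collector': [host1],
    'webui': [host1],
    'database': [host1],
    'build': [host1],
}

env.hostnames = {'all': ['ubuntu']}
env.passwords = {host1: 'password'}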


In order to execute against this file, you need to move back up a few directories (I know it's lame, but the fab command doesn't seem to run from the testbeds directory).

cd /opt/contrail/utils

Next you issue the fabric commands:

fab install_contrail

fab setup_all


After a few minutes, the scripts will run and more packages will be installed and configured to the specifications of your testbed file.

The server will reboot automatically.

After it comes back up, sudo su again when you log in.

Then you need to source your authentication files.

keystonerc  and openstackrc  are located in /etc/contrail

source /etc/contrail/keystonerc
source /etc/contrail/openstackrc

Here are some commands you can issue after installation to check the state of OpenStack:

openstack-status

nova-manage service list

root@ubuntu:/home/user# openstack-status
== Nova services ==
openstack-nova-api:           active
openstack-nova-compute:       active
openstack-nova-network:       inactive (disabled on boot)
openstack-nova-scheduler:     active
openstack-nova-volume:        inactive (disabled on boot)
openstack-nova-conductor:     active
== Glance services ==
openstack-glance-api:         active
openstack-glance-registry:    active
== Keystone service ==
openstack-keystone:           active
== Cinder services ==
openstack-cinder-api:         active
openstack-cinder-scheduler:   active
openstack-cinder-volume:      inactive (disabled on boot)
== Support services ==
mysql:                        active
libvirt-bin:                  active
rabbitmq-server:              inactive (disabled on boot)
memcached:                    inactive (disabled on boot)
== Keystone users ==

+----------------------------------+---------+---------+---------------------+
|                id                |   name  | enabled |        email        |
+----------------------------------+---------+---------+---------------------+
| 2fa7b037efe1437a9045eab35f446511 |  admin  |   True  |  admin@example.com  |
| 6bb262f464a546b391b30879f1e8f10b |  cinder |   True  |  cinder@example.com |
| fdedd2c1a26d44ab9e83f000383cedf3 |   demo  |   True  |   demo@example.com  |
| cdc338c569af42cf87ba7bc7e7e161a8 |  glance |   True  |  glance@example.com |
| 2e28f88537064d97bfed64ad62f2fc66 | neutron |   True  | neutron@example.com |
| c09776b2f84744198d46c0361bfc1070 |   nova  |   True  |   nova@example.com  |
+----------------------------------+---------+---------+---------------------+
== Glance images ==
ID                                   Name                           Disk Format          Container Format     Size         
------------------------------------ ------------------------------ -------------------- -------------------- --------------
== Nova instance flavors ==
m1.medium: Memory: 4096MB, VCPUS: 2, Root: 40GB, Ephemeral: 0Gb, FlavorID: 3, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.tiny: Memory: 512MB, VCPUS: 1, Root: 1GB, Ephemeral: 0Gb, FlavorID: 1, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.large: Memory: 8192MB, VCPUS: 4, Root: 80GB, Ephemeral: 0Gb, FlavorID: 4, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.xlarge: Memory: 16384MB, VCPUS: 8, Root: 160GB, Ephemeral: 0Gb, FlavorID: 5, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.small: Memory: 2048MB, VCPUS: 1, Root: 20GB, Ephemeral: 0Gb, FlavorID: 2, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
== Nova instances ==

root@ubuntu:/home/user# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth ubuntu                               internal         enabled    :-)   2014-09-18 17:26:26
nova-console     ubuntu                               internal         enabled    :-)   2014-09-18 17:26:26
nova-scheduler   ubuntu                               internal         enabled    :-)   2014-09-18 17:26:26
nova-conductor   ubuntu                               internal         enabled    :-)   2014-09-18 17:26:27
nova-compute     ubuntu                               nova             enabled    :-)   2014-09-18 17:26:26


You are now ready to log in to the Horizon dashboard to configure your OpenStack platform.

http://x.x.x.x/horizon



To log in to your Contrail controller, change the port number to 8143:

https://x.x.x.x:8143/



Next time I'll show how to create an MP-BGP connection to an MX gateway so that traffic from the virtual network can reach a physical network.

Friday, September 12, 2014

Network Automation is as easy as Py

as in PyEZ. PyEZ is a micro-framework for remotely managing and automating Juniper devices. It works with Python and allows you to pull Junos-specific features into an abstraction layer. This is great because you don't have to do any screen scraping to pull out fields. I installed this module on my Mac to test it out.

The documentation, located here, is great because you can look at the APIs to see how to build your script. The first thing I wanted to test out was how to pull information from a router.

PyEZ can use YAML, a human-readable data format, to define tables and views. I created a YAML file to extract the fields I was looking for.

Here's my yaml file.

vrf.yml
VRF:
  get: routing-instances/instance
  args_key: name
  view: VRFView

VRFView:
  fields:
    instance_name: name
    instance_type: instance-type
    rd_type: route-distinguisher/rd-type
    vrf_target: vrf-target/community
    interface: interface/name

My script will parse VRFs on a router. I created two routing instances for this demo.

jnpr@R1# show routing-instances
VRF1 {
    instance-type vrf;
    interface lo0.1;
    route-distinguisher 1.1.1.1:100;
    vrf-target target:100:100;
}
VRF2 {
    instance-type vrf;
    interface lo0.2;
    route-distinguisher 1.1.1.1:101;
    vrf-target target:100:101;
}

Now I can test this in python.

$ python
Python 2.7.5 (default, Mar  9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.

First I import all the necessary libraries.

>>> from pprint import pprint
>>> from jnpr.junos import Device
>>> from jnpr.junos.op.routes import RouteTable
>>> from lxml import etree
>>> from jnpr.junos.factory import loadyaml
>>> globals().update( loadyaml('vrf.yml') )

Then I open a connection to a junos device

>>> dev = Device('hostname_or_ip', user='username', password='password')
>>> dev.open()
Device(x.x.x.x)


Next I create a table
>>> tbl = VRF(dev)



Then get the fields for the table
>>> tbl.get(values=True) #make sure to pass values=True
VRF:x.x.x.x: 2 items






Now I can iterate through the table and print the contents.

>>> for item in tbl:
...     print 'instance_name:', item.instance_name
...     print 'instance_type:', item.instance_type
...     print 'rd_type:', item.rd_type
...     print 'vrf_target:', item.vrf_target
...     print 'interface:', item.interface
...
instance_name: VRF1
instance_type: vrf
rd_type: 1.1.1.1:101
vrf_target: target:100:101
interface: lo0.1
instance_name: VRF2
instance_type: vrf
rd_type: 1.1.1.1:102
vrf_target: target:100:102
interface: lo0.2

Now I can manipulate the table and look at individual fields.

>>> find = tbl['VRF1']

>>> find.interface
'lo0.1'

Now imagine a router with a hundred VRFs. I can now parse through this router remotely and automate operations.
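
Putting the pieces together, a short standalone script can walk a list of routers and dump every VRF it finds. The hostnames and credentials below are placeholders:

from jnpr.junos import Device
from jnpr.junos.factory import loadyaml

#Load the VRF table/view definitions from the yaml file above.
globals().update(loadyaml('vrf.yml'))

ROUTERS = ['r1.example.net', 'r2.example.net']

for host in ROUTERS:
    dev = Device(host, user='username', password='password')
    dev.open()
    tbl = VRF(dev)
    tbl.get(values=True)
    for item in tbl:
        print host, item.instance_name, item.rd_type, item.vrf_target, item.interface
    dev.close()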