Thursday, June 26, 2014

SDN - Using POX Openflow controller to program a Juniper EX switch Pt.2

In my last blog entry I demonstrated how to configure a Juniper EX9200 switch to communicate with an OpenFlow controller (POX). The flow entry logic was very basic: it turned all the ports on the OpenFlow switch into a dumb hub, flooding out all ports except the source.

In this post I'll demonstrate how to create a more specific flow entry, based on the 10-tuple of OpenFlow v1.0 header fields:

Ingress Port, Ether Src, Ether Dst, Ether Type, VLAN ID, IP Src, IP Dst, TCP Src, TCP Dst, and IP Proto

First we need to figure out what a host does on a layer 2 network. If Host A wants to talk to Host B, it first sends out an ARP packet asking for the MAC address that corresponds to Host B's IP address. After receiving a reply, Host A sends a unicast frame with the destination MAC of Host B, and Host B in turn sends unicast packets back to Host A. This means we need to program three types of flows: one for ARP, and two for unicast traffic, one in each direction.
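That forwarding logic can be sketched in plain Python. This is an illustrative model of an OpenFlow v1.0 flow table, not the POX API: fields omitted from a match act as wildcards, and the first matching entry supplies the action.

```python
# Toy model of the three flow entries we'll program (not the POX API).
# Fields left out of a match act as wildcards, as in OpenFlow v1.0.
FLOWS = [
    {"match": {"dl_type": 0x0806}, "action": "FLOOD"},  # ARP: flood everywhere
    {"match": {"in_port": 1, "dl_type": 0x0800, "nw_dst": "10.1.1.3"}, "action": 2},
    {"match": {"in_port": 2, "dl_type": 0x0800, "nw_dst": "10.1.1.2"}, "action": 1},
]

def lookup(packet):
    """Return the action of the first flow whose match fields all agree."""
    for flow in FLOWS:
        if all(packet.get(k) == v for k, v in flow["match"].items()):
            return flow["action"]
    return "PACKET_IN"  # no match: the switch punts the packet to the controller

arp_request = {"in_port": 1, "dl_type": 0x0806}
host_a_to_b = {"in_port": 1, "dl_type": 0x0800, "nw_dst": "10.1.1.3"}
```

With these three entries, ARP floods out all ports, and unicast IP traffic between the two hosts is forwarded directly between ports 1 and 2.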

First, I verify that no flows are programmed on the switch:

jnpr@EX9200-RE0> show openflow flows detail   

jnpr@EX9200-RE0>

Next I start up the OpenFlow controller:
jnpr@ubuntu:~/OPENFLOW-CONTROLLER$ ./pox.py misc.static
POX 0.2.0 (carp) / Copyright 2011-2013 James McCauley, et al.
INFO:core:POX 0.2.0 (carp) is up.
INFO:openflow.of_01:[3c-8a-b0-0d-c7-c0 1] connected
INFO:misc.static:sending to DPID ARP flow creation 
INFO:misc.static:sending to DPID flow creation->

INFO:misc.static:sending to DPID flow creation<-

On the EX switch:

jnpr@EX9200-RE0> show openflow flows detail   
Flow name: flow-16842752
Table ID: 1     Flow ID: 16842752           
Priority: 32768   Idle timeout(in sec):0        Hard timeout(in sec): 0     
Match: Input port: 1    
       Ethernet src addr: wildcard         
       Ethernet dst addr: wildcard         
       Input vlan id: 100               Input VLAN priority: wildcard
       Ether type: 0x800  
       IP ToS: wildcard                 IP protocol: wildcard
       IP src addr: wildcard            IP dst addr: 10.1.1.3/32     
       Source port: 0                   Destination port: 0     
Action: Output port 2,

Flow name: flow-33619968
Table ID: 1     Flow ID: 33619968           
Priority: 32768   Idle timeout(in sec):0        Hard timeout(in sec): 0     
Match: Input port: 2    
       Ethernet src addr: wildcard         
       Ethernet dst addr: wildcard         
       Input vlan id: 100               Input VLAN priority: wildcard
       Ether type: 0x800  
       IP ToS: wildcard                 IP protocol: wildcard
       IP src addr: wildcard            IP dst addr: 10.1.1.2/32     
       Source port: 0                   Destination port: 0     
Action: Output port 1,

Flow name: flow-83951616
Table ID: 1     Flow ID: 83951616           
Priority: 32768   Idle timeout(in sec):0        Hard timeout(in sec): 0     
Match: Input port: wildcard
       Ethernet src addr: wildcard         
       Ethernet dst addr: wildcard         
       Input vlan id: wildcard          Input VLAN priority: wildcard
       Ether type: 0x806  
       IP ToS: 0x0                      IP protocol: wildcard
       IP src addr: wildcard            IP dst addr: wildcard        
       Source port: 0                   Destination port: 0     
Action: Output port 65531,


jnpr@EX9200-RE0> show openflow flows          
Switch                 Flow      Number of packets    Priority Number of Number of
Name                   ID                                            match    action
oftest-92k             16842752      248                     32768    6         1       
oftest-92k             33619968      248                     32768    6         1       
oftest-92k             83951616        4                       32768    4         1      


Python code for POX

jnpr@ubuntu:~/OPENFLOW-CONTROLLER/pox/misc$ cat static.py

---------------------
from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

class Sdn (object):
  def __init__ (self, connection):
    self.connection = connection
    connection.addListeners(self)


    # FIRST OPENFLOW RULE - ARP
    # Create an OpenFlow flow table modification message
    arp_msg = of.ofp_flow_mod()
    # Define a match structure: ether type 0x0806 (ARP)
    arp_msg.match.dl_type = 0x0806
    # Add an action to flood out all ports except the source
    arp_msg.actions.append(of.ofp_action_output(port = of.OFPP_FLOOD))
    # Program the flow on the switch
    self.connection.send(arp_msg)
    # Debug message on the controller
    log.info("sending to DPID ARP flow creation  " )

    # SECOND OPENFLOW RULE - unicast, Host A -> Host B
    # Create an OpenFlow flow table modification message
    msg = of.ofp_flow_mod()
    # Define a match structure for the flow
    msg.match.in_port = 1
    msg.match.dl_vlan = 100
    msg.match.dl_type = 0x0800
    msg.match.nw_dst = "10.1.1.3/32"
    # Add an action to send out the specified port
    msg.actions.append(of.ofp_action_output(port = 2))
    # Send the message to the switch
    self.connection.send(msg)
    # Debug message on the controller
    log.info("sending to DPID flow creation->" )

    # THIRD OPENFLOW RULE - unicast, Host B -> Host A
    # Create an OpenFlow flow table modification message
    msg2 = of.ofp_flow_mod()
    # Define a match structure for the flow
    msg2.match.in_port = 2
    msg2.match.dl_vlan = 100
    msg2.match.dl_type = 0x0800
    msg2.match.nw_dst = "10.1.1.2/32"
    # Add an action to send out the specified port
    msg2.actions.append(of.ofp_action_output(port = 1))
    # Send the message to the switch
    self.connection.send(msg2)
    # Debug message on the controller
    log.info("sending to DPID flow creation<-" )


def launch ():
  """
  Starts the component
  """
  def start_switch (event):
    log.debug("Controlling DPID %s" % (event.connection,))
    Sdn(event.connection)
  core.openflow.addListenerByName("ConnectionUp", start_switch)

Monday, June 23, 2014

SDN - Using POX Openflow controller to program a Juniper EX switch Pt.1

OpenFlow support is now available on Juniper EX switches as of Junos 13.3. I decided to explore and test this out. I first needed an OpenFlow-compatible Juniper platform: the EX9200 supports OpenFlow, but you have to add the OpenFlow image to Junos.

jnpr@EX9200-RE0> request system software add <jsdn-package-name>

After installation you should see it as a module:

 jnpr@EX9200-RE0# run show version
Hostname: EX9200-RE0
Model: ex9204
Junos: 13.3R1.6
JUNOS Base OS boot [13.3R1.6]
JUNOS Base OS Software Suite [13.3R1.6]
JUNOS 64-bit Kernel Software Suite [13.3R1.6]
[TRUNCATED]
JUNOS py-base-i386 [13.3R1.6]
JUNOS SDN Software Suite [13.3R1.6]

There are a few possible operational commands:

jnpr@EX9200-RE0> show openflow ?
Possible completions:
  capability           Show feature and configuration capability
  controller           Show controller information and connection status
  filters              Show filter information
  flows                Show flow information
  interfaces           Show interface information
  statistics           Show statistics commands
  summary              Show openflow information summary
  switch               Show switch instance description information

To configure openflow, you need to:

1) Create interfaces

2) Associate those interfaces with an OpenFlow resource group (i.e., a virtualized switch)

3) Point to the Openflow controller to receive commands.


First, create the interfaces. This is very basic and similar to creating normal switching interfaces:

jnpr@EX9200-RE0# show interfaces
xe-2/0/0 {
    unit 0 {
        family ethernet-switching {
            vlan {
                members v100;
            }
        }
    }
}
xe-2/0/1 {
    unit 0 {
        family ethernet-switching {
            interface-mode access;
            vlan {
                members v100;
            }
        }
    }
}

[edit]
jnpr@EX9200-RE0# show vlans
v100 {
    vlan-id 100;
}

Next, you need to place these interfaces into an OpenFlow resource group (i.e., a virtualized switch) and map them to port IDs that OpenFlow can understand.

jnpr@EX9200-RE0# show protocols openflow
switch OF-SWITCH-92k {
    default-action {
        packet-in;
    }
    interfaces {
        xe-2/0/0.0 port-id 1;
        xe-2/0/1.0 port-id 2;
    }
}


Then point the switch at an OpenFlow controller:

set protocols openflow switch OF-SWITCH-92k controller address 10.161.11.77

By default, OpenFlow communicates with the controller over TCP port 6633.

Once committed, the EX will continuously try to connect to the OpenFlow controller.

jnpr@EX9200-RE0# run show openflow controller                                            
Openflowd controller information:
Controller socket: 12
Controller IP address: 10.161.11.77
Controller protocol: tcp
Controller port: 6633
Controller connection state: down
Number of connection attempt: 10
Controller role: equal

Once the EX is correctly communicating with the controller, the connection state will show "up".

jnpr@EX9200-RE0# run show openflow switch
Switch Name:        OF-SWITCH-92k                                             
Switch ID:          0                  Switch DPID:    00:00:3c:8a:b0:0d:c7:c0
Flow mod received:  6                  Vendor received:      0             
Packets sent:       841                Packets received:     845           
Echo req sent:      833                Echo req received:    0             
Echo reply sent:    0                  Echo reply received:  833           
Port Status sent:   0                  Port mod received:    0             
Barrier request:    0                  Barrier reply:        0             
Error msg sent:     0                  Error msg received:   0  

The DPID, or datapath identifier, identifies the physical switch that will be programmed. We'll see where this comes into play later.
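POX logs the DPID in a MAC-like form (3c-8a-b0-0d-c7-c0 above). Here is a sketch of that formatting, assuming the low 48 bits of the 64-bit DPID carry the switch MAC address; POX ships its own helper for this (pox.lib.util.dpid_to_str), so this re-implementation is purely illustrative:

```python
def format_dpid(dpid):
    # The OpenFlow v1.0 DPID is 64 bits: the low 48 bits hold the switch's
    # MAC address, and the top 16 bits are implementer-defined.
    mac = dpid & 0xFFFFFFFFFFFF
    return "-".join("%02x" % ((mac >> shift) & 0xFF) for shift in range(40, -8, -8))
```

Feeding it the EX9200's DPID of 00:00:3c:8a:b0:0d:c7:c0 reproduces the string POX prints.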

I created an Ubuntu VM and installed the POX OpenFlow controller. It's Python-based, which is great since it's a language I can understand. There are also Floodlight (Java) and Trema (Ruby), so choose the flavor you're comfortable with.

To start, there is a very basic POX Python module under the forwarding directory. We'll use it to demonstrate how this works.

jnpr@ubuntu:~/OPENFLOW-CONTROLLER$ ./pox.py log.level --DEBUG forwarding.hub
POX 0.2.0 (carp) / Copyright 2011-2013 James McCauley, et al.
INFO:forwarding.hub:Hub running.
DEBUG:core:POX 0.2.0 (carp) going up...
DEBUG:core:Running on CPython (2.7.5+/Feb 27 2014 19:37:08)
DEBUG:core:Platform is Linux-3.11.0-12-generic-x86_64-with-Ubuntu-13.10-saucy
INFO:core:POX 0.2.0 (carp) is up.
DEBUG:openflow.of_01:Listening on 0.0.0.0:6633
INFO:forwarding.hub:Hubifying 3c-8a-b0-0d-c7-c0  <<<< DPID of the EX Switch

This basically turns those two interfaces on the EX switch into a small hub.

The gist of the POX Python script is this:


  msg = of.ofp_flow_mod()
  msg.actions.append(of.ofp_action_output(port = of.OFPP_FLOOD))

In Openflow terminology:

OFPP_FLOOD - output all openflow ports except the input port and those with flooding disabled

And that's it.

On the EX the Openflow rule looks like this:

jnpr@EX9200-RE0# run show openflow flows detail
Flow name: flow-65536
Table ID: 1     Flow ID: 65536            
Priority: 32768   Idle timeout(in sec):0        Hard timeout(in sec): 0   
Match: Input port: wildcard
       Ethernet src addr: wildcard       
       Ethernet dst addr: wildcard       
       Input vlan id: wildcard          Input VLAN priority: wildcard
       Ether type: wildcard
       IP ToS: 0x0                      IP protocol: 0x0 
       IP src addr: 0.0.0.0/32          IP dst addr: 0.0.0.0/32    
       Source port: 0                   Destination port: 0   
Action: Output port 65531,


The "Output port 65531" action means: send out every available OpenFlow interface except the one the packet came in on.
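The reserved output-port numbers come straight from the OpenFlow v1.0 spec's ofp_port enumeration; the EX just prints the raw integer:

```python
# Some reserved output ports defined by the OpenFlow v1.0 spec (ofp_port enum).
OFPP_IN_PORT    = 0xFFF8  # send back out the port the packet arrived on
OFPP_TABLE      = 0xFFF9  # run the packet through the flow table
OFPP_NORMAL     = 0xFFFA  # hand off to the switch's normal L2/L3 pipeline
OFPP_FLOOD      = 0xFFFB  # all ports except ingress and flood-disabled ports
OFPP_ALL        = 0xFFFC  # all ports except the ingress port
OFPP_CONTROLLER = 0xFFFD  # punt to the controller
```

0xFFFB is 65531 in decimal, which is exactly what shows up as the flow's output port above.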


I set up a few tester ports to send constant traffic at a very low rate (~1 pps).


jnpr@EX9200-RE0# run show openflow statistics interfaces 
Switch Name: OF-SWITCH-92k                                                     
Interface Name: xe-2/0/0.0       Port Number: 1   
Num of rx pkts: 12                         Num of tx pkts: 12                  
Num of rx bytes: 17952                     Num of tx bytes: 17952               
Num of rx error: 0                         Num of tx error:0                   
Number of packets dropped by RX: 0                   
Number of packets dropped by TX: 0                   
Number of rx frame error:        0                   
Number of rx overrun error:      0                   
Number of CRC error:             0                   
Number of collisions:            0                   

Switch Name: OF-SWITCH-92k                                                     
Interface Name: xe-2/0/1.0       Port Number: 2   
Num of rx pkts: 12                         Num of tx pkts: 12                  
Num of rx bytes: 17952                     Num of tx bytes: 17952               
Num of rx error: 0                         Num of tx error:0                   
Number of packets dropped by RX: 0                   
Number of packets dropped by TX: 0                   
Number of rx frame error:        0                   
Number of rx overrun error:      0                   
Number of CRC error:             0                   
Number of collisions:            0                   

In my next blog entry I'll tweak a POX module to create flow rule entries more in line with SDN programming than turning your expensive switch into an expensive hub.

Wednesday, June 18, 2014

Op script - Interface auto-description based on lldp learned neighbors

A coworker said that Arista has a Python PortAutoDescription script that auto-populates the ports on a switch with the neighbor description learned through LLDP. He was wondering if Juniper had such an implementation. It's not built into Junos, but it's easy to do as a SLAX script.

Here it is in action:

{master:0}[edit]
jnpr@SW1# run show lldp neighbors
Local Interface    Parent Interface    Chassis Id          Port info          System Name
ge-0/0/47.0        -                   00:21:59:c7:09:40   ge-0/0/47.0        SW3b
ge-0/0/44.0        -                   78:19:f7:9f:77:00   ge-0/0/44.0        SW2b

{master:0}[edit]
jnpr@SW1# show interfaces
ge-0/0/44 {
    unit 0 {
        family ethernet-switching {
            port-mode trunk;
            vlan {
                members all;
            }
        }
    }
}
ge-0/0/47 {
    unit 0 {
        family ethernet-switching {
            port-mode access;
            vlan {
                members v100;
            }
        }
    }
}



{master:0}[edit]
jnpr@SW1# run op IntAutoDesc
lldp-local-interface ge-0/0/47.0 connected to lldp-remote-system-name SW3b
configuration check succeeds
commit complete
lldp-local-interface ge-0/0/44.0 connected to lldp-remote-system-name SW2b
configuration check succeeds
commit complete

{master:0}[edit]
jnpr@SW1# show interfaces
ge-0/0/44 {
    description to-SW2b;
    unit 0 {
        family ethernet-switching {
            port-mode trunk;
            vlan {
                members all;
            }
        }
    }
}
ge-0/0/47 {
    description to-SW3b;
    unit 0 {
        family ethernet-switching {
            port-mode access;
            vlan {
                members v100;
            }
        }
    }
}

SOURCE CODE
----------------------------------

version 1.0;

ns junos= "http://xml.juniper.net/junos/*/junos";

ns xnm= "http://xml.juniper.net/xnm/1.1/xnm";

ns jcs= "http://xml.juniper.net/junos/commit-scripts/1.0";

import "../import/junos.xsl";

match / {
    <event-op-results> {
        var $lldp-info = <command> "show lldp neighbor ";
        var $lldp-result = jcs:invoke($lldp-info);
        var $con = jcs:open();

        for-each ($lldp-result/lldp-neighbor-information) {
            var $lldp-int = current()/lldp-local-interface;
            var $lldp-remote = current()/lldp-remote-system-name;
            <output> local-name($lldp-int) _ " " _ $lldp-int _ " connected to " _ local-name($lldp-remote) _ " " _ $lldp-remote;

            var $if = substring-before($lldp-int, ".");
            var $int = <configuration> {
                <interfaces> {
                    <interface> {
                        <name> $if;
                        <description> "to-" _ $lldp-remote;
                    }
                }
            }
            call jcs:load-configuration($connection = $con, $configuration = $int);
        }

        expr jcs:close($con);
    }
}

Tuesday, June 17, 2014

My attempt at setting up ssh key pairs for an Openstack VM

In my quest to learn more about Openstack I've decided to test out the ssh key-pair authentication method. I'm not much of a Unix guy so this attempt may be the wrong approach. But hey, learning is all about experimenting, so my failures may one day lead to success.

I've read the RDO quick-install guide on setting up the key pair, but I could not get it to work using the Horizon web UI. The documentation is a little sparse, with no examples. It says I should be able to access the VM from my host, but after a few attempts I couldn't get it to work. So I decided to try it a different way.

First I had to figure out how to ssh from my host to the VM.

Pinging the VM didn't work:

[root@centos-6-5-openstack .ssh]$ ping 192.168.251.12
PING 192.168.251.12 (192.168.251.12) 56(84) bytes of data.
^C
--- 192.168.251.12 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4659ms

Then I remembered that I had to use network namespaces.

[root@centos-6-5-openstack .ssh(keystone_admin)]# ip netns list
qrouter-ed4afc1b-06ab-417e-a7e2-d5be13b822af
qdhcp-4dc834f5-e759-4d79-acf0-780768f1fa86
qdhcp-0b6ed891-a9ae-4c5a-a7f9-36e851bf1d48
qdhcp-a5958652-7348-436f-8aff-2c9ebd7dd9f7
qdhcp-dc49c1a5-07d0-4225-bea5-02316aec3a42

[root@centos-6-5-openstack .ssh(keystone_admin)]# ip netns exec qdhcp-a5958652-7348-436f-8aff-2c9ebd7dd9f7 ip a
31: tapcb867d96-a4: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:ce:a5:7e brd ff:ff:ff:ff:ff:ff
    inet 192.168.251.11/24 brd 192.168.251.255 scope global tapcb867d96-a4
    inet6 fe80::f816:3eff:fece:a57e/64 scope link
       valid_lft forever preferred_lft forever
35: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

[root@centos-6-5-openstack .ssh(keystone_admin)]# ip netns exec qdhcp-a5958652-7348-436f-8aff-2c9ebd7dd9f7 ping 192.168.251.12
PING 192.168.251.12 (192.168.251.12) 56(84) bytes of data.
64 bytes from 192.168.251.12: icmp_seq=1 ttl=64 time=3.33 ms
64 bytes from 192.168.251.12: icmp_seq=2 ttl=64 time=0.436 ms
^C
--- 192.168.251.12 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1065ms
rtt min/avg/max/mdev = 0.436/1.885/3.334/1.449 ms

Awesome, that worked. So next I tried SSHing to the VM:

[root@centos-6-5-openstack .ssh(keystone_admin)]# ip netns exec qdhcp-a5958652-7348-436f-8aff-2c9ebd7dd9f7 ssh -l cirros 192.168.251.12
The authenticity of host '192.168.251.12 (192.168.251.12)' can't be established.
RSA key fingerprint is 80:bc:58:4c:04:a6:a7:a4:0e:58:e1:0b:8d:55:e0:45.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.251.12' (RSA) to the list of known hosts.
cirros@192.168.251.12's password:
$

Good, I'm in.

$ exit

Next I looked for a public key I had already generated on my host machine:

[root@centos-6-5-openstack .ssh(keystone_admin)]# ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts

So all I did was scp my public key into the VM's ~/.ssh/authorized_keys file:

[root@centos-6-5-openstack .ssh(keystone_admin)]# ip netns exec qdhcp-a5958652-7348-436f-8aff-2c9ebd7dd9f7 scp id_rsa.pub cirros@192.168.251.12:.ssh/authorized_keys
cirros@192.168.251.12's password:
id_rsa.pub                                                                              100%  407     0.4KB/s   00:00   

Now I can ssh with the key pair without having to type in my password:

[root@centos-6-5-openstack .ssh(keystone_admin)]# ip netns exec qdhcp-a5958652-7348-436f-8aff-2c9ebd7dd9f7 ssh -l cirros 192.168.251.12

$ whoami
cirros

I'm still going to try to figure this out. Hopefully I'll be able to get it working the regular way.

Monday, June 16, 2014

Python script - make a config change to a Juniper device remotely

I had a previous blog post on how to make a config change remotely using Python. This new script uses the more structured Junos PyEZ framework (py-junos-eznc). I will be making calls to the following module:

% pwd
/Library/Python/2.7/site-packages/jnpr/junos/cfg/phyport

The base module supports the following properties:

    PROPERTIES = [
        'admin',              # True
        'description',        # str
        'speed',              # ['10m','100m','1g','10g']
        'duplex',             # ['full','half']
        'mtu',                # int
        'loopback',           # True
        '$unit_count'         # number of units defined
    ]

With Python I can iterate through all the interfaces and change the MTU size on each one.

Before the change:

jnpr@R1# show interfaces
xe-0/0/0 {
    mtu 1500;
}
xe-0/0/1 {
    mtu 1500;
}
xe-0/0/2 {
    mtu 1500;
}
xe-0/0/3 {
    mtu 1500;
}

% python mtu-chg.py
Host: 10.161.33.171
Model: MX80-P
Version: 12.3R1.7
Changing MTU to 9000 on the following interfaces:
xe-0/0/0
xe-0/0/1
xe-0/0/2
xe-0/0/3

After the change:

jnpr@R1# show interfaces   
xe-0/0/0 {
    mtu 9000;
}
xe-0/0/1 {
    mtu 9000;
}
xe-0/0/2 {
    mtu 9000;
}
xe-0/0/3 {
    mtu 9000;
}

SOURCE CODE
------------

from jnpr.junos import Device as Junos
from jnpr.junos.cfg.phyport import *

login = dict(user='jnpr', host='10.161.33.171', password='pass123')

rtr = Junos(**login)

rtr.open()

print "Host: " + rtr.hostname
print "Model: " + rtr.facts['model']
print "Version: " + rtr.facts['version']
size = 9000

ints = PhyPort(rtr)
print "Changing MTU to %s on the following interfaces:" % size
# "port" avoids shadowing the built-in int()
for port in ints:
  print port.name
  port['mtu'] = size
  port.write()

rtr.close()

Sunday, June 15, 2014

Python script - check for interface errors remotely.

In my last blog post I created a SLAX script that would tell you which interfaces had errors. I recreated that script in Python so a NetOps engineer could run it remotely. This could be useful if you have a lot of networking equipment and want to track down what may be causing packet loss.

Some things to consider: I'm using the Juniper Python library from GitHub: https://github.com/jeremyschulman/py-junos-eznc

A few things to note. The relevant Python table and view definitions can be found here:

/Library/Python/2.7/site-packages/jnpr/junos/op

There is a YAML file, phyport.yml, which shows which RPCs and attributes are supported:


PhyPortErrorTable:
  rpc: get-interface-information
  args:
    extensive: True
    interface_name: '[fgx]e*'
  args_key: interface_name
  item: physical-interface
  view: PhyPortErrorView

PhyPortErrorView:
  groups:
    ts: traffic-statistics
    rxerrs: input-error-list
    txerrs: output-error-list

  # fields that are part of groups are called
  # "fields_<group-name>"

  fields_ts:
    rx_bytes: { input-bytes: int }
    rx_packets: { input-packets: int }
    tx_bytes: { output-bytes: int }
    tx_packets: { output-packets: int }

  fields_rxerrs:
    rx_err_input: { input-errors: int }
    rx_err_drops: { input-drops: int }
    rx_err_frame: { framing-errors: int }
    rx_err_runts: { input-runts: int }
    rx_err_discards: { input-discards: int }
    rx_err_l3-incompletes: { input-l3-incompletes: int }
    rx_err_l2-channel: { input-l2-channel-errors: int }
    rx_err_l2-mismatch: { input-l2-mismatch-timeouts: int }
    rx_err_fifo: { input-fifo-errors: int }
    rx_err_resource: { input-resource-errors: int }

  fields_txerrs:
    tx_err_carrier-transitions: { carrier-transitions: int }
    tx_err_output: { output-errors: int }
    tx_err_collisions: { output-collisions: int }
    tx_err_drops: { output-drops: int }
    tx_err_aged: { aged-packets: int }
    tx_err_mtu: { mtu-errors: int }
    tx_err_hs-crc: { hs-link-crc-errors: int }
    tx_err_fifo: { output-fifo-errors: int }
    tx_err_resource: { output-resource-errors: int }
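The view is just a declarative XML-to-attribute mapping: the table runs the get-interface-information RPC, and the view pulls named elements out of each physical-interface and casts them to int. The same extraction can be done by hand with ElementTree; the reply below is a trimmed, hand-written sample of what the RPC returns (the real output has many more fields):

```python
import xml.etree.ElementTree as ET

# Trimmed, hand-written sample of a <get-interface-information> reply.
RPC_REPLY = """
<interface-information>
  <physical-interface>
    <name>ge-1/1/3</name>
    <input-error-list>
      <input-errors>1</input-errors>
      <framing-errors>1</framing-errors>
    </input-error-list>
  </physical-interface>
</interface-information>
"""

root = ET.fromstring(RPC_REPLY)
phy = root.find("physical-interface")
name = phy.findtext("name")
# This is what a view line like "rx_err_input: { input-errors: int }" does:
rx_err_input = int(phy.findtext("input-error-list/input-errors"))
rx_err_frame = int(phy.findtext("input-error-list/framing-errors"))
```

The view spares you from writing this XPath plumbing yourself: each field becomes a typed attribute on the table item.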


Script in action:

[mylaptop:~/scripts/PYTHON/] user% python error.py
host: 192.168.1.1
Interface    Error
xe-0/0/1     tx_err_carrier-transitions 6
xe-0/0/3     tx_err_carrier-transitions 7
ge-1/0/0     tx_err_drops 35994597
ge-1/0/0     tx_err_carrier-transitions 23
ge-1/1/0     tx_err_carrier-transitions 25
ge-1/1/1     tx_err_carrier-transitions 29
ge-1/1/3     rx_err_frame 1
ge-1/1/3     tx_err_carrier-transitions 31
ge-1/1/3     rx_err_input 1
ge-1/1/5     tx_err_carrier-transitions 11
ge-1/1/8     tx_err_carrier-transitions 8


-----------
SOURCE CODE
----------

from pprint import pprint as pp

from lxml import etree

from jnpr.junos import Device as Junos

from jnpr.junos.op.phyport import *

import json
import itertools

login = dict(user='user', host='192.168.1.1', password='password')

rtr = Junos(**login)

rtr.open()

ports = PhyPortTable(rtr).get()
stats = PhyPortErrorTable(rtr).get()


print "host: " + rtr.hostname
print "Interface\tError"



for port,stat in map(None,ports,stats):
   for attr in stat.FIELDS:
     if 'err' in attr:
       if getattr(stat,attr) != 0:
         print port.name.ljust(12), attr.ljust(12), getattr(stat, attr)



rtr.close()
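One note on `map(None, ports, stats)`: that is a Python 2 idiom that pairs two sequences positionally, padding the shorter one with None. The script above targets Python 2; under Python 3 the equivalent pairing is itertools.zip_longest, shown here on hypothetical placeholder lists:

```python
from itertools import zip_longest

ports = ["xe-0/0/0", "xe-0/0/1", "xe-0/0/2"]
stats = ["stats-a", "stats-b"]

# Pairs items positionally and pads the shorter sequence with None,
# just like Python 2's map(None, ports, stats).
pairs = list(zip_longest(ports, stats))
```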

Wednesday, June 11, 2014

Find and display any and all interface errors easily. Juniper slax script

Whenever I troubleshot a router in a network, I always started at layer 1. However, when there were hundreds of links connected to the equipment, narrowing the problem down was difficult.
When there are many interfaces on your Juniper router, it can take a long time to look at each one and check for errors.
Doing a "pipe" and grepping for errors is a mess: you can't tell which interface the counters are associated with.

Doing a "pipe" and grepping for errors is a mess. You can't find out which interface the counters are associated with.

This simple Juniper SLAX script does all the work for you. It has saved me many times when the problem turned out to be a layer 1 issue.


jnpr@router> op error
Error on input errors:  xe-1/0/2 on input-errors 768
Error on input errors:  xe-1/0/2 on framing-errors 768
Error on input errors:  xe-8/2/3 on input-errors 8
Error on input errors:  xe-8/2/3 on framing-errors 8
Error on input errors:  xe-9/0/1 on input-errors 2
Error on input errors:  xe-9/0/1 on framing-errors 2

---------------

version 1.0;

ns junos= "http://xml.juniper.net/junos/*/junos";

ns xnm= "http://xml.juniper.net/xnm/1.1/xnm";

ns jcs= "http://xml.juniper.net/junos/commit-scripts/1.0";

import "../import/junos.xsl";



match / {
    <event-op-results> {
        var $interface-info = <command> "show interfaces extensive ";
        var $interface-result = jcs:invoke($interface-info);

        for-each ($interface-result/physical-interface) {
            var $interface-name = current()/name;
            for-each (current()/input-error-list/*) {
                if (current() != 0) {
                    <output> "Error on input errors:  " _ $interface-name _ " on " _ local-name(current()) _ " " _ current();
                }
            }
            for-each (current()/output-error-list/*) {
                if (current() != 0) {
                    <output> "Error on output errors:  " _ $interface-name _ " on " _ local-name(current()) _ " " _ current();
                }
            }
        }
    }
}