Wednesday, October 21, 2015

Using cURL to edit and delete RESTful API Objects in a Palo Alto Networks Firewall

XML can be hard to understand when you first start working with it. For example, when you refer to an element like:

<ship>Titanic</ship>

you are referring to everything, including the start tag and the end tag. The information between the tags is the element's text.

You need to keep this in mind when trying to reference things.

Another confusing thing is XPath and attributes. Take this XML example:

<rules>
  <entry name="rule1">
   <from>
       <member>
            Trust
       </member>
   </from>
  </entry>
  <entry name="rule2">
   <from>
       <member>
            UnTrust
       </member>
   </from>
  </entry>

</rules>

If you want to reference something, say 'rule2', then what you want is the attribute value. You would use entry[@name='rule2'], where entry is the element, name is the attribute, and 'rule2' is the attribute value.

If you want to reference the text value within an element then you would use element[text()='value'].

For example, to reference 'Trust' you can use member[text()='Trust'].
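A quick way to experiment with these two predicate styles is Python's xml.etree.ElementTree. Note that ElementTree's limited XPath subset spells the text test member[.='Trust'] where full XPath would use member[text()='Trust']:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""<rules>
  <entry name="rule1"><from><member>Trust</member></from></entry>
  <entry name="rule2"><from><member>UnTrust</member></from></entry>
</rules>""")

# Attribute value match: entry[@name='rule2']
rule2 = doc.find("./entry[@name='rule2']")
print(rule2.find("./from/member").text)  # UnTrust

# Text value match: full XPath is member[text()='Trust'];
# ElementTree's subset writes it as member[.='Trust']
trust = doc.find(".//member[.='Trust']")
print(trust.text)  # Trust
```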

This is why it can get a little confusing when trying to edit and delete specific values using the Palo Alto Networks API. Let's look at the following rule.




To delete a source-user member named 'acme\bob' from a group of source users, use the following xpath:

xpath=/config/devices/entry[@name='<domain>']/vsys/entry[@name='<vsysname>']/rulebase/security/rules/entry[@name='<rulename>']/source-user/member[text()='acme\bob']


$curl -k "https://192.168.1.1/api/?type=config&action=delete&xpath=/config/devices/entry\[@name='localhost.localdomain'\]/vsys/entry\[@name='vsys1'\]/rulebase/security/rules/entry\[@name='deny-rule1'\]/source-user/member\[text()='acme\bob'\]&key=<API-KEY>"

<response status="success" code="20"><msg>command succeeded</msg></response>


If you want to edit a member value, you need to reference the original member value with member[text()='<value>'] and then pass the new member text in the element parameter: element=<xml code>

For example, using cURL:

$ curl -k "https://192.168.1.1/api/?type=config&action=edit&xpath=/config/devices/entry\[@name='localhost.localdomain'\]/vsys/entry\[@name='vsys1'\]/rulebase/security/rules/entry\[@name='deny-rule1'\]/source-user/member\[text()='acme\bob'\]&element=<member>acme\calvin</member>&key=<API-KEY>"

<response status="success" code="20"><msg>command succeeded</msg></response>
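Rather than hand-escaping the brackets for the shell, you can build the same request URL in Python and let urllib percent-encode the xpath and element values. The firewall IP, rule name, and API key below are placeholders:

```python
from urllib.parse import urlencode

fw = "192.168.1.1"  # placeholder firewall IP
xpath = ("/config/devices/entry[@name='localhost.localdomain']"
         "/vsys/entry[@name='vsys1']/rulebase/security/rules"
         "/entry[@name='deny-rule1']/source-user/member[text()='acme\\bob']")
params = {
    "type": "config",
    "action": "edit",                       # use "delete" (and drop element) to remove
    "xpath": xpath,
    "element": "<member>acme\\calvin</member>",
    "key": "API-KEY",                       # placeholder
}
url = "https://%s/api/?%s" % (fw, urlencode(params))
print(url)  # brackets, quotes and backslashes arrive percent-encoded
```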

Here are a few more XPath expression examples you can use when accessing the Palo Alto Networks API.

/source-user/member[position()<4]
Selects the first three member elements that are children of the source-user element

$ curl -k "https://192.168.1.1/api/?type=config&action=get&xpath=/config/devices/entry\[@name='localhost.localdomain'\]/vsys/entry\[@name='vsys1'\]/rulebase/security/rules/entry\[@name='rule1'\]/source-user/member\[position()<4\]&key=<API-KEY>"
<response status="success" code="19"><result total-count="3" count="3">
  <member admin="admin" time="2015/10/21 13:42:11">acme\amy</member>
  <member admin="admin" time="2015/10/21 13:42:11">acme\bob</member>
  <member admin="admin" time="2015/10/21 13:42:11">acme\calvin</member>

/source-user/member[2]
Selects the second member element that is a child of the source-user element


$ curl -k "https://192.168.1.1/api/?type=config&action=get&xpath=/config/devices/entry\[@name='localhost.localdomain'\]/vsys/entry\[@name='vsys1'\]/rulebase/security/rules/entry\[@name='rule1'\]/source-user/member\[2\]&key=<API-KEY>"
<response status="success" code="19"><result total-count="1" count="1">
  <member admin="admin" time="2015/10/21 13:42:11">acme\bob</member>

Sunday, October 18, 2015

Using cURL to access the RESTful API of a Palo Alto Networks Firewall

There may be a situation where you need to access the API of a Palo Alto Networks firewall. Some IT administrators may be more comfortable using cURL to access an API than a scripting language like Python. Here are a few examples of how to perform some tasks using cURL.

The first thing you need to do is get an API key. How do you get an API Key? You query the firewall itself.

?type=keygen is the parameter to use.

curl -k "https://<firewall ip>/api/?type=keygen&user=<username>&password=<password>"


Make sure you wrap your URL in quotes. I tried without them and got an HTTP 400 error response.

You will get a response in XML. The value of the key element is the API key.

curl -k "https://192.168.1.1/api/?type=keygen&user=admin&password=admin"

<response status = 'success'>
<result>     
 <key>
   1234....
 </key>
 </result>
</response>
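Extracting the key from that response is a one-liner with Python's ElementTree (the response string below is a shortened sample):

```python
import xml.etree.ElementTree as ET

# shortened sample of the keygen response
resp = "<response status='success'><result><key>1234abcd</key></result></response>"

root = ET.fromstring(resp)
assert root.get("status") == "success"
key = root.findtext("./result/key")
print(key)  # 1234abcd
```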


Then you can query the firewall using the API key instead of passing a username and password.

curl -k "https://<firewall ip>/api/?type=<api type>&<parameters>&key=<api-key>"

To get a list of API types and the commands to pass, you can use the REST API browser on the firewall. Just bring up a browser, authenticate against your firewall, and then add the path /api to the URL: https://<firewall ip>/api



Here you can see the different API "types" you can use to query the firewall. Each of these types has some caveats as to how you format your request.

For our first example, let's execute an operational command. From the CLI, the equivalent command would be "show system info".


If you drill down through the Operational Commands to show > system > info, you will see the REST API URL at the bottom of the page. This is the URL path to copy into your curl command.










user@ubuntu-vm:~/API/curl$ curl -k "https://192.168.1.1/api/?type=op&cmd=<show><system><info></info></system></show>&key=<API-KEY>"

<response status="success">
<result>
<system>
<hostname>NG-FW</hostname>
<ip-address>192.168.1.1</ip-address>
<netmask>255.255.255.0</netmask>
<default-gateway>192.168.1.254</default-gateway>
... TRUNCATED...

Notice that the command is in XML format. The firewall takes input and produces output in XML.
This is important to note, as you'll need to know the XPath for most queries.

The following queries the hostname. You will need to add "action=get" as one of the parameters to read it.


user@ubuntu-vm:~/API/curl$ curl -k "https://192.168.1.1/api/?type=config&action=get&xpath=/config/devices/entry\[@name='localhost.localdomain'\]/deviceconfig/system/hostname&key=<API-KEY>"
<response status="success" code="19"><result total-count="1" count="1">
  <hostname>NG-FW</hostname>

Notice that I had to escape special characters like the square brackets with a literal '\['. Without this, cURL complains (alternatively, curl's -g/--globoff option disables globbing entirely):

curl: (3) [globbing] bad range specification in column 78


Now that we did a get, let's try a set. A set command needs an extra parameter, "element", containing the XML of the new content.

user@ubuntu-vm:~/API/curl$ curl -k "https://192.168.1.1/api/?type=config&action=set&xpath=/config/devices/entry\[@name='localhost.localdomain'\]/deviceconfig/system&element=<hostname>test</hostname>&key=<API-KEY>"
<response status="success" code="20"><msg>command succeeded</msg></response>

Doing a get again shows the hostname has changed.

user@ubuntu-vm:~/API/curl$curl -k "https://192.168.1.1/api/?type=config&action=get&xpath=/config/devices/entry\[@name='localhost.localdomain'\]/deviceconfig/system/hostname&key=<API-KEY>"
<response status="success" code="19"><result total-count="1" count="1">
  <hostname admin="admin" time="2015/10/14 11:08:11">test</hostname>

Next you need to commit the change.

user@ubuntu-vm:~/API/curl$curl -k "https://192.168.1.1/api/?type=commit&cmd=<commit></commit>&key=<API-KEY>"
<response status="success" code="19"><result><msg><line>Commit job enqueued with jobid 699</line></msg><job>699</job></result></response>

The response code will return a job id.

The last API type I'll cover is logs. Log retrieval is an asynchronous process: you first query the particular log type, which creates a job id in the response, and then query the firewall again with that job id. Here's an example using a threat log.

The first call generates the job id:
curl -k "https://<FW-IP>/api/?type=log&log-type=threat&key=<API-KEY>"

The second shows you the results after the job finishes:
curl -k "https://<FW-IP>/api/?type=log&action=get&jobid=<JOB-ID>&key=<API-KEY>"

user@ubuntu-vm:~$ curl -k "https://192.168.1.1/api/?type=log&log-type=threat&key=<API-KEY>"
<response status="success" code="19"><result><msg><line>query job enqueued with jobid 3605</line></msg><job>3605</job></result></response>


user@ubuntu-vm:~$ curl -k "https://192.168.1.1/api/?type=log&action=get&jobid=3605&key=<API-KEY>"
<response status="success"><result>
  <job>
    <tenq>17:57:29</tenq>
    <tdeq>17:57:29</tdeq>
    <tlast>17:57:30</tlast>
    <status>FIN</status>
    <id>3605</id>
    <cached-logs>33</cached-logs>
  </job>
  <log>
   ... TRUNCATED ...
  </log>
  <meta>
    <devices>
      <entry name="localhost.localdomain">
        <hostname>localhost.localdomain</hostname>
        <vsys>
          <entry name="vsys1">
            <display-name>vsys1</display-name>
          </entry>
        </vsys>
      </entry>
    </devices>
  </meta>
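The enqueue-then-poll flow above can be sketched in Python. fetch() is stubbed here with canned responses modeled on the output above; in practice it would be an HTTPS GET against the firewall:

```python
import time
import xml.etree.ElementTree as ET

def fetch(url):
    """Stub: returns canned XML; a real version would HTTPS-GET the firewall."""
    if "log-type=threat" in url:
        return ("<response status='success'><result>"
                "<msg><line>query job enqueued with jobid 3605</line></msg>"
                "<job>3605</job></result></response>")
    return ("<response status='success'><result>"
            "<job><status>FIN</status><id>3605</id></job>"
            "<log>...</log></result></response>")

base = "https://192.168.1.1/api/"  # placeholder firewall

# Step 1: enqueue the log query and grab the job id
enq = ET.fromstring(fetch(base + "?type=log&log-type=threat&key=KEY"))
jobid = enq.findtext("./result/job")

# Step 2: poll with the job id until the job status is FIN
while True:
    res = ET.fromstring(fetch(base + "?type=log&action=get&jobid=%s&key=KEY" % jobid))
    if res.findtext("./result/job/status") == "FIN":
        break
    time.sleep(1)
print(jobid)  # 3605
```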


Wednesday, October 14, 2015

Upload files to BOX over FTPS

I wanted to upload some files from a Linux machine to my BOX account, entirely from the command line. There are scenarios where all I have access to is the CLI, and I didn't want to transfer the files to another machine that had a web browser. Besides, there are times when using the CLI is simply more efficient.

First I tried ftp. So I found on the internets that BOX supports ftp.

However, since I have an account with SSO, I need to create an external password. You do this within the account settings of the BOX portal.


I recommend a randomly generated long password (>12 characters) for this, and use a password manager to keep track of it. Don't reuse your SSO password. Don't be lazy! There are probably attackers out there already trying to brute-force accounts through this vector, and if you reused your SSO password, cracking this one hands them your SSO credentials too.

You probably also want to check the "Log out of all Box applications using this account" box. This prevents the external password from being used by other kinds of apps, like mobile devices.

Side note: I like the fact that BOX keeps track of your login activity with the application you are trying to use, ie FTP server, Windows Chrome, etc and the geo location of the login.

user@ubuntu:~$ ftp ftp.box.com
Connected to ftp.box.com.
220 Service ready for new user.
Name (ftp.box.com:user): <username>
331 User name okay, need password for <username>
Password:
530 Box: Company.com does not allow regular FTP; use FTPS instead. (Both "explicit" and "implicit" FTPS are supported.)

Doh. Fail.

Duh! We need a secure connection.

Ok. I haven't used FTPS before. I've used SFTP, but not this other method.

FTPS uses SSL.

So after consulting with the internets I found I can use lftp.

user@ubuntu:~$ sudo apt-get install lftp
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  lftp

My second try:

user@ubuntu:~$ lftp -u user@company.com ftp.box.com
Password:
lftp user@company.com@ftp.box.com:~> ls
drwx------   1 owner group            0 Sep 24 17:41 PoCs
drwx------   1 owner group            0 Aug 27 10:18 Demos
drwx------   1 owner group            0 Aug 27 10:19 Docs

interesting to see the shell.

lftp user@company.com@ftp.box.com:/> cd PoCs
lftp user@company.com@ftp.box.com:/PoCs> put file.zip
47408 bytes transferred
lftp user@company.com@ftp.box.com:/PoCs> exit

Success!

I can see that lftp has a parameter for sending commands, which means I can automate this process with a shell script.

lftp -e '<lftp commands>' -u <username>,<password> ftp.box.com

user@company.com:~$ lftp -e 'ls; bye' -u user,password ftp.box.com
lftp user@company.com@ftp.box.com:~> ls
drwx------   1 owner group            0 Sep 24 17:41 PoCs
drwx------   1 owner group            0 Aug 27 10:18 Demos
drwx------   1 owner group            0 Aug 27 10:19 Docs
-rw-------   1 owner group    47408 Sep 27 10:39 file.zip
user@company.com:~$
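If you'd rather script the upload in Python than shell out to lftp, the standard library's ftplib speaks explicit FTPS too. This is a sketch I have not tested against Box; the PoCs directory name is just an example:

```python
import os
from ftplib import FTP_TLS

def upload_to_box(user, password, local_path, remote_dir="PoCs"):
    """Upload one file to Box over explicit FTPS (sketch, untested against Box)."""
    ftps = FTP_TLS("ftp.box.com")
    ftps.login(user, password)
    ftps.prot_p()  # encrypt the data channel too, not just the control channel
    ftps.cwd(remote_dir)
    with open(local_path, "rb") as f:
        ftps.storbinary("STOR " + os.path.basename(local_path), f)
    ftps.quit()
```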

Tuesday, October 13, 2015

Checking file hashes against Palo Alto Network's Wildfire to find their verdicts

I had a list of files I needed to check to see if they were malware. There are a few ways to approach this. The first is to upload each file manually, one at a time, to the WildFire portal and have WildFire check whether each is benign or malicious.

Or automate this in the simplest and most efficient way. I chose the latter.

The first thing I did was compute a checksum of each file. I chose MD5 hashes for this example, but you can use SHA-256, which WildFire also supports.

On a linux machine I issued the following command on some files.

user@ubuntu-vm:~/$ md5sum *
1842b2365bf67121462d0cc026fb9300  Test.pdf
cd75d3e263ff0d1d13aad24cdb9f2593 flashplayer19_ha_install.exe

Now that I have those hashes, I put them into a text file. I also added a few hashes I knew were malware, just to verify my script.

Next I used pan-python to write a script to query WildFire. There are two ways to approach this. One is to send each hash one at a time and get the results back for each query; the more hashes you have, the faster you'll use up your daily allotment of API queries. The other method is to send a bulk query. This reduces the number of queries, but you can only submit up to 500 hashes at a time.

Here's the code that I used.



import csv
import pan.wfapi


apikey = 'YOUR_API_KEY'

# loop through all the hashes and put them into a list
with open("sample-hashes.txt", "r") as ins:
    rows = csv.reader(ins)
    hashes = []
    for row in rows:
        print(row[0])
        hashes.append(row[0])

# make a bulk verdict query to WildFire
WF = pan.wfapi.PanWFapi(api_key=apikey)
WF.verdicts(hashes=hashes)
print('hashes %s submitted' % hashes)

# print the XML response
print(WF.response_body)


I imported Python's csv library to read the file containing my hashes. The reason I used csv was that I originally received the hashes as a CSV file, with each filename associated with its hash value (as you can see above, there is a filename after each hash). For this demo I cropped out the other fields and used only the hashes; with a delimited file I could read just the column holding the hash value.

Then I imported the pan-python library, which lets me run the query.

The loop appends every hash found in the text file to a list, and then I make a query to WildFire using the API key.

The api key can be found in your wildfire account on the wildfire portal.



$ python bulk-wf.py
1842b2365bf67121462d0cc026fb9300
bf1373d10842e96c85bf73a97ddec699
36944ab907576c10d217911ee6acc3c9
4b20dd78c13433f4ec47853bfecddc61
7309a9b75819dfd1496391fc75016d90
hashes ['1842b2365bf67121462d0cc026fb9300', 'ad9e1502d3fd341608fa4730a1609f8d', 'bf1373d10842e96c85bf73a97ddec699', '36944ab907576c10d217911ee6acc3c9', '4b20dd78c13433f4ec47853bfecddc61', '7309a9b75819dfd1496391fc75016d90'] submitted

<wildfire>
    <get-verdict-info>
        <sha256>6864e1fa5e0145c2f1ce6f403a3554fe9576287929b0e9e4e5fadb50915bb65e</sha256>
        <verdict>1</verdict>
        <md5>36944ab907576c10d217911ee6acc3c9</md5>
    </get-verdict-info>
    <get-verdict-info>
        <sha256>93cbeff02c16e7e09e41aa94ee37a3dae51849f14d335485fc936297b400ce04</sha256>
        <verdict>1</verdict>
        <md5>4b20dd78c13433f4ec47853bfecddc61</md5>
    </get-verdict-info>
    <get-verdict-info>
        <sha256>5683cc43393bfd01b5533a3c710c39d62387cbd5bdf9588f8b3c1dc13933473c</sha256>
        <verdict>1</verdict>
        <md5>7309a9b75819dfd1496391fc75016d90</md5>
    </get-verdict-info>
    <get-verdict-info>
        <sha256>fd54decc2b89c9ca00f4e6de39a1a9d677bd1c5a8ceb6d6b5b25eedc8d332e28</sha256>
        <verdict>0</verdict>
        <md5>1842b2365bf67121462d0cc026fb9300</md5>
    </get-verdict-info>
</wildfire>


A verdict of 1 means the file is malicious, while a verdict of 0 means the file is benign.

If the verdict is -102, the file is unknown and I would have to upload it to WildFire for further examination.
I could add more logic to my script to print the verdict in words, but that would require manipulating the response. I would need something like the xmltodict library so I could retrieve and evaluate just the verdicts.
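Here's a minimal stdlib sketch of that verdict post-processing, mapping the codes mentioned above (0 benign, 1 malicious, -102 unknown) to labels; the response below is a trimmed sample:

```python
import xml.etree.ElementTree as ET

VERDICTS = {"0": "benign", "1": "malware", "-102": "unknown"}

# trimmed sample of a verdicts response
resp = """<wildfire>
  <get-verdict-info><md5>36944ab907576c10d217911ee6acc3c9</md5><verdict>1</verdict></get-verdict-info>
  <get-verdict-info><md5>1842b2365bf67121462d0cc026fb9300</md5><verdict>0</verdict></get-verdict-info>
</wildfire>"""

root = ET.fromstring(resp)
results = {e.findtext("md5"): VERDICTS.get(e.findtext("verdict"), "other")
           for e in root.findall("get-verdict-info")}
print(results)
```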

This is a good way to cut down the amount of traffic that needs to be sent across the network: known-good and known-bad files are evaluated very quickly, and only the unknowns need to be uploaded, which limits that to a much smaller group of files.

Monday, February 9, 2015

The behavior of the Root login account on Juniper devices

Due to the number of hacks we've been seeing the past few years, I've decided to concentrate a lot of my future blog posts on Security.

It's imperative to secure any device that has an internet connection, whether it's a router, switch, tablet, smartphone, etc.

This was tested on Juniper routers and MXes; I have not tested it on the SRX.

When you first configure your brand new Juniper Device, you are logged into the unit as the user root with no password. Before you can commit your initial config on the device, it will tell you to set the root password for the system.

You do it like this:

[edit system]
root@# set root-authentication plain-text-password
New Password: type password here
Retype new password: retype password here
The thing you have to worry about is that if you decide to enable ssh on the device, the root account is also allowed to ssh in.

services {
    ssh;
}

$ ssh -l root 172.16.1.9
root@172.16.1.9's password:
--- JUNOS 11.4R7.5 built 2013-03-01 10:14:08 UTC


root@mx80:RE:0%
root@mx80:RE:0%
root@mx80:RE:0% cli
{master:0}
root@mx80> 

This is dangerous if the device is on the internet. Attackers will constantly try to brute-force ssh logins for the root account. I've asked some friends/coworkers who know Juniper, and they didn't even know about this behavior.

You have to explicitly disable ssh for the root account.

services {
    ssh {
        root-login deny;
    }
}

Juniper should have built this the opposite way, with root-login defaulting to deny, so the admin knows they're explicitly allowing users to ssh into the device as root.

BTW there is a parameter that says root-login allow.

You might wonder why Juniper did this. The reason is probably that some engineer early in the software development phase did this for convenience. Then, as the code evolved over the years, the "default" behavior couldn't be changed because some customer would complain. So now you're stuck with it.

Here's a small snippet of an attacker trying to gain access via ssh.


Jan 23 05:15:25 localhost sshd[8513]: Invalid user ftp from 82.222.9.122
Jan 23 05:15:26 localhost sshd[8517]: Invalid user guest from 82.222.9.122
Jan 23 05:25:17 localhost sshd[8522]: Invalid user root from 82.222.9.122
Jan 23 05:35:17 localhost sshd[8524]: Invalid user info from 82.222.9.122
Jan 23 05:45:14 localhost sshd[8526]: Invalid user jack from 82.222.9.122
Jan 23 05:55:18 localhost sshd[8528]: Invalid user karaf from 82.222.9.122
Jan 23 06:05:15 localhost sshd[8530]: Invalid user log from 82.222.9.122
Jan 23 06:25:03 localhost sshd[8786]: Invalid user nagios from 82.222.9.122
Jan 23 06:34:58 localhost sshd[9071]: Invalid user oracle from 82.222.9.122
Jan 23 06:44:52 localhost sshd[9307]: Invalid user pi from 82.222.9.122
Jan 23 06:54:43 localhost sshd[9483]: Invalid user postgres from 82.222.9.122


Best practice is to deny root-login and set up connection limits and rate limits.

ssh {
    root-login deny;
    protocol-version v2;
    connection-limit 10;
    rate-limit 2;
}

Any device on the internet should also have black-lists and white-lists for SSH to prevent malicious attackers from gaining access to your device.

Thursday, January 15, 2015

VMware hack - Fixing "You cannot use the vSphere Client to edit the settings of virtual machines of version 10 or higher"

I ran into this error message when importing a VM that was created in VMware Workstation into vSphere ESXi 5.5:


To work around this, you'll need to modify the .vmx text file on the ESXi server.

First you'll need to turn on ssh access to the server.

Under Configuration -> Security Profile -> Properties

Check the SSH Server to allow incoming connections and start the service.

Next SSH to the ESXi server with your credentials.


cd to the datastore of your vms.

cd /vmfs/volumes/datastore1/

cd to the VM image. Use quotes if there are spaces in your VM name.

cd "Win 2008 Serv (DB3)"

Next, edit the .vmx file:

vi "Win 2008 Serv (DB3).vmx"

Look for virtualHW.version

.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "10"
Change virtualHW.version from "10" to "8".

Save the changes.
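If you have to do this for many VMs, the edit can be scripted. Here's a sketch in Python (back up the .vmx first; the demo below runs on a throwaway temp file, not a real VM):

```python
import os
import re
import tempfile

def downgrade_hw_version(vmx_path, new_version="8"):
    """Rewrite virtualHW.version in a .vmx file (back the file up first!)."""
    with open(vmx_path) as f:
        text = f.read()
    text = re.sub(r'virtualHW\.version = "\d+"',
                  'virtualHW.version = "%s"' % new_version, text)
    with open(vmx_path, "w") as f:
        f.write(text)

# demo on a throwaway temp file
fd, path = tempfile.mkstemp(suffix=".vmx")
with os.fdopen(fd, "w") as f:
    f.write('config.version = "8"\nvirtualHW.version = "10"\n')
downgrade_hw_version(path)
result = open(path).read()
os.remove(path)
print(result)
```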

Go back to the vSphere GUI.





Power off the VM image. Then right click and remove the VM from inventory.

Then go back to your SSH connection and reregister the image to bring it back online with the new changes.
Here's the command to reregister.

vim-cmd solo/registervm path-to-vm/VM.vmx





Example:
vim-cmd solo/registervm "/vmfs/volumes/53bc3b04-5e592400-c6f3-f8bc124b4cea/Win 2008 Serv (DB3)/Win 2008 Serv (DB3).vmx"

The VM should be editable after that.

Thursday, December 18, 2014

Gathering and graphing snmp stats on a Palo Alto Networks Firewall

So I was tasked with the challenge of gathering CPU utilization and active-session counts on a Palo Alto Networks firewall.

There are two CPUs on a firewall: a management-plane CPU and a data-plane CPU.

The OIDs are below.

Active sessions: .1.3.6.1.4.1.25461.2.1.2.3.3

MGMT Utilization: .1.3.6.1.2.1.25.3.3.1.2.1

Data Plane Utilization: .1.3.6.1.2.1.25.3.3.1.2.2



The first step is to enable snmp on the Firewall.

Under

Device/Setup/Operations/Miscellaneous/SNMP Setup

enter your community string and pick the version.



Now, I'm using unsecured SNMP v2c because I didn't know how to use SNMPv3 with the Splunk application I'm going to use to generate the charts.

I tested it from an Ubuntu VM:

admin@ubuntu-poc-vm:~$ snmpget 10.48.64.112 -v 2c -c public .1.3.6.1.2.1.25.3.2.1.3.2
iso.3.6.1.2.1.25.3.2.1.3.2 = STRING: "Slot-1 Data Processor"
admin@ubuntu-poc-vm:~$ snmpget 10.48.64.112 -v 2c -c public .1.3.6.1.2.1.25.3.2.1.3.1
iso.3.6.1.2.1.25.3.2.1.3.1 = STRING: "Management Processor"
admin@ubuntu-poc-vm:~$ snmpget 10.48.64.112 -v 2c -c public .1.3.6.1.2.1.25.3.3.1.2.1
iso.3.6.1.2.1.25.3.3.1.2.1 = INTEGER: 23
admin@ubuntu-poc-vm:~$ snmpwalk 10.48.64.112 -v 2c -c public .1.3.6.1.2.1.25.3.3.1.2.2
iso.3.6.1.2.1.25.3.3.1.2.2 = INTEGER: 5
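If you want to poll these OIDs outside of Splunk, a small Python wrapper around net-snmp's snmpget works too. This is a sketch: it assumes snmpget is installed, and the -Oqv flags (print the value only) behave as they did on my Ubuntu box:

```python
import subprocess

OIDS = {
    "active_sessions": ".1.3.6.1.4.1.25461.2.1.2.3.3",
    "mgmt_cpu":        ".1.3.6.1.2.1.25.3.3.1.2.1",
    "dataplane_cpu":   ".1.3.6.1.2.1.25.3.3.1.2.2",
}

def poll(host, community="public"):
    """Return {metric: value string} by shelling out to snmpget (-Oqv = value only)."""
    values = {}
    for name, oid in OIDS.items():
        values[name] = subprocess.check_output(
            ["snmpget", "-v", "2c", "-c", community, "-Oqv", host, oid],
            text=True).strip()
    return values
```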


Next I set up Splunk on a Windows 7 VM. It was pretty easy: you go to their website and download the app for your particular flavor of OS.

Once installed you hop onto the webui and change your credentials.

Next you need to install the SNMP Modular Input app. It's free.


Then you need to go Settings > Data Inputs > SNMP > Add New

Here I created a new input for each SNMP OID that I wanted to query.


There's a reason for this: I could have added a comma-delimited list of OIDs, but I had a hard time parsing the data I wanted to graph. If someone has a better method, let me know.

Last, I set the source type to Manual and the actual source type to "snmp_ta", which is the SNMP Modular Input app.

Next I went to Manage Apps, looked for the snmp_ta app, and edited the permissions to make the app visible.


A new icon appears in the dashboard and now I can double click it to examine the data being polled.


First I click on Data Summary, select the Source tab, and choose my source.


I should now see all the data that SNMP Modular Input queried from the firewall.


Now it's time to manipulate the data and create some nice graphs. First I have to manipulate the search fields.


I add a pipe and enter "fields value".

This gives me specifically the value returned from the SNMP OID.

Next I choose the Visualization tab and click on Pivot. I will only have 1 field to use which is the value field.


Next I choose the graph I want to create. I chose line chart.

 Then on the Y Axis I make sure the field says #value and I can label this as Data Plane CPU.

Then on the X Axis side I choose _time which will graph the data collected over time.

Make sure Null Values is set to Connected to get a nice line graph instead of a bunch of dots.

Last I save this panel and give it a name so I can put it on my Dashboard.



Now you may say: big deal, all this work for one graph? I could just spin up SolarWinds and it's really easy; no need to create search queries, add pipes, and do all of this. Well, the reason for using Splunk is that Palo Alto Networks has a nice plug-in (it's free) that works directly with Splunk. So on one tab I can check all the traffic, threat, and WildFire data collected, and on the other tab I can look at CPU utilization and session counts. That gives me a single pane of glass instead of having to jump between different management tools for the same information.