Exporting device inventory from Prime to NetBox part 2: Preparing input data for NetBox


I finished part 1 with ready .yaml files. We need them because the input data available in Prime is not always enough, and we sometimes have to SSH into our devices to get supplementary data.

For our scenario I need the following data fields in NetBox:

device function, manufacturer, full device name including the stack member number, management IP address, device model, device software type, device serial number, device short name (= virtual chassis name), virtual chassis position, virtual chassis priority, device status and site name.
To give an example, for a two-device stack I imagine the final output to look like this:

User_Access;Cisco;s01wro001-1;10.101.1.11;WS-C2960X-48FPS-L;IOS;FCQ1942A4UR;s01wro001;1;15;active;PL-WRO-1


User_Access;Cisco;s01wro001-2;10.101.1.11;WS-C2960X-48FPS-L;IOS;FCQ1942A4UR;s01wro001;2;14;active;PL-WRO-1

First, here is a small function that I will use to query Prime for the set of devices that contain the abbreviation 'WRO' in their names.

def query_prime(keyword):
    url = "https://prime.yourcompany.com/webacs/api/v4/data/Devices?.full=true&.sort=ipAddress&deviceName=contains(%22" + keyword + "%22)"
    response = requests.get(url, auth=primeCreds, verify=False)
    obj = xmltodict.parse(response.text)
    internal_obj = obj['queryResponse']
    return internal_obj
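
For reference, after xmltodict.parse() the Prime response is just nested dictionaries. A rough sketch of the shape the following functions rely on (field values are illustrative, most fields omitted):

# internal_obj = {
#     'entity': [
#         {'devicesDTO': {'deviceName': 's01wro001.yourcompany.com',
#                         'ipAddress': '10.101.1.11',
#                         'deviceType': 'Cisco Catalyst 29xx Stack-able Ethernet Switch',
#                         'softwareType': 'IOS',
#                         'location': 'PL-WRO-1 building A',
#                         'manufacturerPartNrs': {'manufacturerPartNr': [...]}}},
#         {'devicesDTO': {...}},
#     ]
# }
# With exactly one match, 'entity' is a single dict instead of a list.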

This is the main function. It takes the response from Prime and saves the data for each device found.

def get_virtual_chassis_data_for_topology_devices(internal_obj):
    #case - some search results returned
    if 'entity' in internal_obj.keys():
        entity = internal_obj['entity']

        #case - query returned more than 1 device
        if isinstance(entity, list):
            for item in entity:
                device = item['devicesDTO']
                write_row(device)

        #case - query returned exactly 1 device
        else:
            device = entity['devicesDTO']
            write_row(device)

    #case - no devices with this keyword have been found
    else:
        print('no search results returned')


Now I need to write the write_row function from the code snippet above so that I get a nice .csv file with each device as a separate row of input data for NetBox.

Part 1

This is just the beginning of the write_row function

def write_row(device):
    f = open(filename, 'a')
    writer = csv.writer(f)
    device_name, device_name_short, device_type, device_ip, device_location, manuf_nrs_dict, manuf_list, software_type = get_device_properties(device)

The get_device_properties function is a small helper that pulls out the data fields which are already available in Prime:

def get_device_properties(device):
    device_name = device['deviceName']
    device_name_short = device_name.replace('.yourcompany.com', '')
    device_type = device['deviceType']
    device_ip = device['ipAddress']
    if 'location' in device.keys():
        device_location = device['location']
    else:
        device_location = 'default location'
    manuf_nrs_dict = device['manufacturerPartNrs']
    manuf_list = manuf_nrs_dict['manufacturerPartNr']
    software_type = device['softwareType']
    return device_name, device_name_short, device_type, device_ip, device_location, manuf_nrs_dict, manuf_list, software_type
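
One field deserves a closer look: manufacturerPartNrs. Judging from how the rest of the code treats it, Prime returns one manufacturerPartNr entry per physical chassis, so for a stack it is a list and for a standalone switch it is a plain dict (values below are illustrative):

# stack of two members -> manuf_list is a list:
# [{'name': '1', 'partNumber': 'WS-C2960X-48FPS-L', 'serialNumber': 'FCQ1942A4UR'},
#  {'name': '2', 'partNumber': 'WS-C2960X-48FPS-L', 'serialNumber': 'FCQ1942A4UX'}]
#
# standalone switch -> manuf_list is a plain dict:
# {'name': '1', 'partNumber': 'WS-C2960X-48FPD-L', 'serialNumber': 'POD1917T1Q5'}
#
# This is why write_row below branches on isinstance(manuf_list, list).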

Now that we have some input data from Prime, there is also data that Prime is not aware of, for example virtual chassis priorities. We need to SSH to our devices to get it. To do that, we will divide our devices into three groups and handle each group with its own condition:

  • stackable switches,
  • vss-capable switches,
  • other devices

This is the code for the first group (stackable switches), which covers two subcases: stackable switches that actually form a stack, and individual non-stacked switches. The code does the following: if the device belongs to the first group and it is a stack (manuf_list is a list), use the .yaml inventory file to connect to the device, get the 'show switch' output, parse it, run an additional Prime query to find out what the device function is (udf_url), produce the row with input data and save it. If it is not a stack (manuf_list is not a list of items), there is no need to SSH into the device because there is no stack data to be had, so make a row with zeroes for the virtual chassis data.

def write_row(device):
    f = open(filename, 'a')
    writer = csv.writer(f)
    device_name, device_name_short, device_type, device_ip, device_location, manuf_nrs_dict, manuf_list, software_type = get_device_properties(device)
    fixed_device_location = device_location.split(',')[0].split(' ')[0]
    if device['deviceType']=='Cisco Catalyst 29xx Stack-able Ethernet Switch' or device['deviceType']=='Cisco Catalyst38xx stack-able ethernet switch' or device['deviceType']=='Cisco Catalyst 9300 Switch':
        if isinstance(manuf_list, list):

            dev = load_yaml().devices[device_name_short]
            dev.connect()
            switch_vc_data = dev.parse('show switch')

            for list_item_dict in manuf_list:

                switch_number, serial, model, vc_position, vc_priority = get_stack_data_from_stack(switch_vc_data, list_item_dict)
                udf_url = "https://prime.yourcompany.com/webacs/api/v2/op/devices/exportDevices?ipAddress=%22" + device_ip + "%22"
                udf_response = requests.get(udf_url, auth=primeCreds, verify=False)
                udf_obj = xmltodict.parse(udf_response.text)
                udf_value = udf_obj['mgmtResponse']['devicesExportResult']['devices']['device']['udfs']['udf']['value']

                row = [udf_value + ';Cisco;' + device_name_short + '-' + switch_number + ';' + device_ip + ';' + model + ';' + software_type + ';' + serial + ';' + device_name_short + ';' + vc_position + ';' + vc_priority + ';' + 'active;' + fixed_device_location]
                writer.writerow(row)
        else:
            switch_number = manuf_list['name']
            serial = manuf_list['serialNumber']
            model = manuf_list['partNumber']
            udf_url = "https://prime.yourcompany.com/webacs/api/v2/op/devices/exportDevices?ipAddress=%22" + device_ip + "%22"
            response = requests.get(udf_url, auth=primeCreds, verify=False)
            udf_obj = xmltodict.parse(response.text)
            udf_value = udf_obj['mgmtResponse']['devicesExportResult']['devices']['device']['udfs']['udf']['value']
            row = [udf_value + ';Cisco;' + device_name_short + '-' + switch_number[-1] + ';' + device_ip + ';' + model + ';' + software_type + ';' + serial + ';' + device_name_short + ';' + '0; ' + '0;' + 'active;' + fixed_device_location]
            writer.writerow(row)

The helper function get_stack_data_from_stack is as follows:

def get_stack_data_from_stack(switch_vc_data, list_item_dict):
    pre_switch_number = list_item_dict['name']
    switch_number = pre_switch_number[-1]

    serial = list_item_dict['serialNumber']
    model = list_item_dict['partNumber']
    vc_position = switch_number
    vc_priority = switch_vc_data['switch']['stack'][str(switch_number)]['priority']
    return switch_number, serial, model, vc_position, vc_priority
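
For context, here is a minimal sketch of the parsed 'show switch' output, trimmed to the only path this helper actually reads (the Genie parser returns more fields per stack member than shown here):

# switch_vc_data (trimmed, values illustrative):
# {
#     'switch': {
#         'stack': {
#             '1': {'priority': '15', ...},
#             '2': {'priority': '14', ...},
#         }
#     }
# }
# so the priority of member 1 is switch_vc_data['switch']['stack']['1']['priority']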

Now the code for the second group – the VSS-capable devices. This is the next elif branch of the same if statement in write_row:

    elif device['deviceType']=='Cisco Catalyst 4500 Virtual Switching System' or device['deviceType']=='Cisco Catalyst 6500 Virtual Switching System' or device['deviceType']=='Cisco Catalyst 9500 SVL Switch':
        if isinstance(manuf_list, list):
            vss_data = get_vss_properties(device_name_short)
            switch_number = 1
            for item in manuf_list:
                role = vss_data[switch_number]['role']
                model = vss_data[switch_number]['model']
                serial = vss_data[switch_number]['serial']
                vss_position = str(vss_data[switch_number]['switch number'])
                vss_priority = str(vss_data[switch_number]['priority'])
                udf_url = "https://prime.yourcompany.com/webacs/api/v2/op/devices/exportDevices?ipAddress=%22" + device_ip + "%22"
                udf_response = requests.get(udf_url, auth=primeCreds, verify=False)
                udf_obj = xmltodict.parse(udf_response.text)
                udf_value = udf_obj['mgmtResponse']['devicesExportResult']['devices']['device']['udfs']['udf']['value']
                row = [udf_value + ';Cisco;' + device_name_short + '-' + str(switch_number) + ';' + device_ip + ';' + model + ';' + software_type + ';' + serial + ';' + device_name_short + ';' + vss_position + ';' + vss_priority + ';' + 'active;' + fixed_device_location]
                writer.writerow(row)
                switch_number += 1

…with the helper function get_vss_properties:

def get_vss_properties(device_name_short):
    '''
    helper function to get data from vss devices
    '''
    dev = load_yaml().devices[device_name_short]
    dev.connect()
    obj1 = dev.execute('show switch virtual role')
    role_regex = re.compile(r'^(?P<ROLE>(LOCAL)|(REMOTE))\s+ (?P<SWNO>\d)\s+(UP|DOWN)\s+\w+[(]\w\s[)]\s+(?P<PRIORITY>\d+)[(]\d+[)]\s+(?P<STATUS>\w+)\s+\d+\s+\d+')
    virtual_role_dict = {}
    inventory_index = 1
    for line in obj1.splitlines():
        line = line.strip()
        result = role_regex.match(line)
        if result:
            sub_dict = virtual_role_dict.setdefault(inventory_index, {})
            group = result.groupdict()
            sub_dict['role'] = group['ROLE']
            sub_dict['switch number'] = group['SWNO']
            sub_dict['priority'] = group['PRIORITY']
            inventory_index = inventory_index + 1
            continue
        virtual_role_dict.pop(3, None)
        virtual_role_dict.pop(4, None)
    obj = dev.execute('show module')
    p1 = re.compile(r'^(Switch Number:)\s(?P<NUMBER>\d)\s(Role:)\s(Virtual Switch)\s(?P<ROLE>(Active|Standby))')
    p2 = re.compile(r'^(?P<SWNUMBER>\d)\s+\d+\s+.*(?P<MODEL>(WS-C\d{4}X-\d+))\s+(?P<SERIAL>\w{3}\d{4}.{4})')
    parsed_dict = {}
    inventory_index = 1
    for line in obj.splitlines():
        line = line.strip()
        result = p1.match(line)
        result2 = p2.match(line)
        if result:
            inventory_dict = virtual_role_dict.setdefault(inventory_index, {})
            group = result.groupdict()
            inventory_dict['switch number'] = group['NUMBER']
            inventory_dict['role'] = group['ROLE']
            continue
        if result2:
            group = result2.groupdict()
            inventory_dict['model'] = group['MODEL']
            inventory_dict['serial'] = group['SERIAL']
            inventory_index += 1
            continue
    return virtual_role_dict
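
The caller only uses the dictionary this function returns, which for the two-member VSS pair from the example output should look roughly like this (values illustrative):

# {
#     1: {'role': 'Active',  'switch number': '1', 'priority': '120',
#         'model': 'WS-C4500X-16', 'serial': 'JAF191812PW'},
#     2: {'role': 'Standby', 'switch number': '2', 'priority': '110',
#         'model': 'WS-C4500X-16', 'serial': 'JAF191812MJ'},
# }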

Finally, the third group (non-stackable, non-VSS devices) – the final else branch of write_row:

    else:
        switch_number = manuf_list['name']
        serial = manuf_list['serialNumber']
        model = manuf_list['partNumber']
        udf_url = "https://prime.yourcompany.com/webacs/api/v2/op/devices/exportDevices?ipAddress=%22" + device_ip + "%22"
        response = requests.get(udf_url, auth=primeCreds, verify=False)
        udf_obj = xmltodict.parse(response.text)
        udf_value = udf_obj['mgmtResponse']['devicesExportResult']['devices']['device']['udfs']['udf']['value']
        row = [udf_value + ';Cisco;' + device_name_short + '-' + switch_number[-1] + ';' + device_ip + ';' + model + ';' + software_type + ';' + serial + ';' + device_name_short + ';' + '0; ' + '0;' + 'active;' + fixed_device_location]
        writer.writerow(row)

Now I just need a single call to put it all together:

get_virtual_chassis_data_for_topology_devices(query_prime('wro'))

I run it and the output is as follows (serial numbers have been modified post-run because I don't want to show my real serial numbers):

User_Access;Cisco;c01wro005-1;10.101.1.105;WS-C2960X-48FPS-L;IOS;PCW1942A4UW;c01wro005;1;15;active;PL-Wro-1
User_Access;Cisco;c01wro005-2;10.101.1.105;WS-C2960X-48FPD-L;IOS;POD1917T1PN;c01wro005;2;14;active;PL-Wro-1
User_Access;Cisco;c01wro006-1;10.101.1.113;WS-C2960X-48FPD-L;IOS;POD1917T1Q5;c01wro006;0; 0;active;wroisio
User_Access;Cisco;c01wro007-1;10.101.1.121;WS-C2960X-48FPD-L;IOS;POD1917T1QM;c01wro007;0; 0;active;PL-Wro-1
User_Access;Cisco;c01wro008-1;10.101.1.129;WS-C2960X-48FPD-L;IOS;POD1917T1PR;c01wro008;0; 0;active;PL-Wro-1
Foreign_Controller;Cisco;b01wro001-s;10.101.1.137;AIR-CT5508-K9;Cisco Controller;PCW1919B3BB;b01wro001;0; 0;active;PL-Wro-1
Production_Router;Cisco;r01wro003-1;10.101.1.252;WS-C3850-24P-E;IOS-XE;PCW1912D14R;r01wro003;0; 0;active;PL-Wro-1
Production_Router;Cisco;r01wro002-1;10.101.1.253;WS-C3850-24P-E;IOS-XE;POD1912X14J;r01wro002;0; 0;active;PL-Wro-1
Core;Cisco;r01wro001-vss-1;10.101.1.254;WS-C4500X-16;IOS-XE;JAF191812PW;r01wro001-vss;1;120;active;default
Core;Cisco;r01wro001-vss-2;10.101.1.254;WS-C4500X-16;IOS-XE;JAF191812MJ;r01wro001-vss;2;110;active;default
Server_Access;Cisco;c01wro101-1;10.101.1.65;WS-C3850-24T-L;IOS-XE;POD1913R0AL;c01wro101;1;15;active;PL-Wro-1
Server_Access;Cisco;c01wro101-2;10.101.1.65;WS-C3850-24T-L;IOS-XE;POD1913X0AP;c01wro101;2;14;active;PL-Wro-1
User_Access;Cisco;c01wro001-1;10.101.1.73;WS-C2960X-48FPD-L;IOS;PCW1917B2AC;c01wro001;1;15;active;PL-Wro-1
User_Access;Cisco;c01wro001-2;10.101.1.73;WS-C2960X-48FPD-L;IOS;POD1917S1QK;c01wro001;2;14;active;PL-Wro-1
User_Access;Cisco;c01wro001-3;10.101.1.73;WS-C2960X-48FPD-L;IOS;POD1917S1H3;c01wro001;3;1;active;PL-Wro-1
User_Access;Cisco;c01wro002-1;10.101.1.81;WS-C2960X-48FPD-L;IOS;PCW1917B2A7;c01wro002;0; 0;active;PL-Wro-1
User_Access;Cisco;c01wro003-1;10.101.1.89;WS-C2960X-48FPD-L;IOS;POD1917S1R0;c01wro003;1;15;active;PL-Wro-1
User_Access;Cisco;c01wro003-2;10.101.1.89;WS-C2960X-48FPD-L;IOS;POD1917S1Q4;c01wro003;2;14;active;PL-Wro-1
User_Access;Cisco;c01wro004-1;10.101.1.97;WS-C2960X-48FPD-L;IOS;POD1917S1HA;c01wro004;1;15;active;PL-Wro-1
User_Access;Cisco;c01wro004-2;10.101.1.97;WS-C2960X-48FPD-L;IOS;POD1917S1RE;c01wro004;2;14;active;PL-Wro-1

Now I can start writing the code to export this to NetBox.

Exporting inventory from Prime to Cisco Genie part 1. Creating per-site inventory .yaml files from Prime inventory.


It’s a simple scenario – you have all your devices in Prime and you want to start playing with your devices using Genie or Cisco PyATS but you don’t want to start creating .yaml inventories from scratch.

Let’s assume that your device names follow a pattern xxxWROyyy where the middle three letters stand for a site name and that you have a number of sites.

First, let’s create a set with all your site (abbreviation) names:

def get_all_devices_csv():
    url = "https://prime.yourcompany.com/webacs/api/v4/data/Devices?.full=true"
    response = requests.get(url, auth=primeCreds, verify=False)
    obj=xmltodict.parse(response.text)
    internal_obj = obj['queryResponse']['entity']
    unique_site_set = set()

    for item in internal_obj:
        name = item['devicesDTO']['deviceName']
        fixed_name = name[3:6]
        unique_site_set.add(fixed_name)

    return unique_site_set

We will use that set later in a loop that will go through all sites.

Now let's create a Prime query in the following way: we will search for all device names that contain a given abbreviation (e.g. WRO):

def query_prime(filterword, searched_thing):
    url = "https://prime.yourcompany.com/webacs/api/v4/data/Devices?.full=true&.sort=ipAddress&deviceName=" + filterword + "(%22" + searched_thing + "%22)"
    response = requests.get(url, auth=primeCreds, verify=False)
    obj = xmltodict.parse(response.text)
    internal_obj = obj['queryResponse']
    return internal_obj

Now for the most important part: a function that creates a dictionary with device IP addresses and OS types.

def parse_prime_query_response(response_object):

    device_topology = {}

    if 'entity' in response_object.keys():
        query_response = response_object['entity']
        if isinstance(query_response, list):
            for item in query_response:
                device = item['devicesDTO']
                device_name = device['deviceName']
                device_name_short = device_name.replace('.yourcompany.com', '')
                device_type = device['deviceType']
                device_ip = device['ipAddress']
                #sometimes the field location is not there if someone forgot to fill it in on Prime
                if 'location' in device.keys():
                    device_location = device['location']
                else:
                    device_location = 'default location'
                try:
                    manuf_nrs_dict = device['manufacturerPartNrs']
                    manuf_list = manuf_nrs_dict['manufacturerPartNr']
                except:
                    print(f'error creating entry for {device_name_short}')
                #here we map specific models to the right Genie os value
                if device['deviceType']=='Cisco Catalyst 29xx Stack-able Ethernet Switch' or device['deviceType']=='Cisco Catalyst 68xx Virtual Switch' or device['deviceType']=='Cisco 3750 Stackable Switches' or device['deviceType']=='Cisco 2821 Integrated Services Router' or device['deviceType']=='Cisco Catalyst 6500 Virtual Switching System' or device['deviceType']=='Cisco Catalyst 2960CX-8PC-L Switch':
                    device_topology[device_name_short] = {}
                    device_topology[device_name_short]['ip_address'] = device_ip
                    device_topology[device_name_short]['device_os'] = 'ios'
                elif device['deviceType']=='Cisco Catalyst38xx stack-able ethernet switch' or device['deviceType']=='Cisco Catalyst 4500 Virtual Switching System' or device['deviceType']=='Cisco Catalyst 9300 Switch':
                    device_topology[device_name_short] = {}
                    device_topology[device_name_short]['ip_address'] = device_ip
                    device_topology[device_name_short]['device_os'] = 'iosxe'
                elif device['deviceType']=='Cisco 5508 Wireless LAN Controller':
                    try:
                        device_topology[device_name_short] = {}
                        device_topology[device_name_short]['ip_address'] = device_ip
                        device_topology[device_name_short]['device_os'] = 'other'
                    except Exception as e:
                        print('something went wrong here')
                else:
                    try:
                        device_topology[device_name_short] = {}
                        device_topology[device_name_short]['ip_address'] = device_ip
                        device_topology[device_name_short]['device_os'] = 'other'
                    except Exception as e:
                        print('something went wrong if type is not listed in code')

        else:
            device = query_response['devicesDTO']
            if device['deviceType']=='Cisco Catalyst 29xx Stack-able Ethernet Switch':
                print('oooh there is one device on this site only')
                #TODO handle a single-device site (and a dummy site with zero devices, but this should never happen)
    #else:
        #print('no results for this query')
        #quit()
    return device_topology
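
The dictionary returned for one site ends up looking like this (hostnames and addresses taken from the WRO example earlier, purely illustrative):

# {
#     'r01wro001-vss': {'ip_address': '10.101.1.254', 'device_os': 'iosxe'},
#     'c01wro001':     {'ip_address': '10.101.1.73',  'device_os': 'ios'},
#     'b01wro001-s':   {'ip_address': '10.101.1.137', 'device_os': 'other'},
# }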

Now we need a function that takes the dictionary with device data and creates a topology file for each site.

def create_topology_file(filename, device_topology):

    filepath = os.path.join('./topologies', filename)
    f = open(filepath, 'w')
    writer = csv.writer(f)
    header = ['hostname', 'ip', 'username', 'password', 'protocol', 'os']
    writer.writerow(header)
    try:
        for key, value in device_topology.items():
            ip_address = device_topology[key]['ip_address']
            device_os = device_topology[key]['device_os']
            row = [key, ip_address, user, passkey, 'ssh', device_os]
            writer.writerow(row)
    except:
        print('the query did not return any results')
        quit()
    f.close()
    print('PHASE TWO: GENERATING A TOPOLOGY FILE FOR A LOCATION. PLEASE WAIT.')
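
Each run writes a per-site CSV into ./topologies, which pyATS will later turn into a testbed file. It should look roughly like this (credentials illustrative):

hostname,ip,username,password,protocol,os
r01wro001-vss,10.101.1.254,myadminuser,mypassword,ssh,iosxe
c01wro001,10.101.1.73,myadminuser,mypassword,ssh,ios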

Now we need a function that glues it all together:

def create_site_csv_for_yaml(filterword, searched_thing, user, passkey):
    internal_obj = query_prime(filterword, searched_thing)
    device_topology = parse_prime_query_response(internal_obj)
    filename = 'devices_' + filterword + '-' + searched_thing + '.csv'
    create_topology_file(filename, device_topology)

Finally, let’s loop through all the sites to create separate .csv files for each site:

for x in get_all_devices_csv():
  create_site_csv_for_yaml(filterword, x, user, passkey)

Then I created a bash script that I will use to control the script flow (there will be much more to do in the future).

#!/bin/bash

while [[ "$#" -gt 0 ]]; do
            case $1 in
                -l|--location) location="$2"; shift ;;
                -f|--filterword) prime_filterword="$2"; shift ;;
                -q|--query_content) query_content="$2"; shift ;;
                -n|--netbox) netbox='true' ;;
                *) echo "Unknown parameter passed: $1"; exit 1 ;;
            esac
            shift
done
echo "Please enter your admin username"

read user
echo "Please enter your password"
read -s passkey

echo "Please enter your netbox token"
read -s token

python3 first_python.py --filterword $prime_filterword --user $user --passkey $passkey
FILES="./topologies/*"
for f in $FILES
do
  y=${f%.*}
  pyats create testbed file --path $f --output $y.yaml --encode-password
done
rm ./topologies/*.csv

For now, I will only use part of the available options:

./do_all_bash.sh -f 'contains'

The result is a set of .yaml testbed files, one per site, with each file containing that site's devices:

  r01wro001-vss:
    connections:
      cli:
        ip: 172.16.0.1
        protocol: ssh
    credentials:
      default:
        password: '%ENC{hashedkey}'
        username: myadminuser
      enable:
        password: '%ENC{hashedkey}'
    os: iosxe
    type: iosxe
  r01wro002:
    connections:
      cli:
        ip: 172.16.0.2
        protocol: ssh
    credentials:
      default:
        password: '%ENC{hashedkey}'
        username: myadminuser
      enable:
        password: '%ENC{hashedkey}'
    os: iosxe
    type: iosxe
  r01wro003:
    connections:
      cli:
        ip: 172.16.0.3
        protocol: ssh
    credentials:
      default:
        password: '%ENC{hashedkey}'
        username: myadminuser
      enable:
        password: '%ENC{hashedkey}'
    os: iosxe
    type: iosxe

And the little bits and pieces that are necessary to run this:

import pandas as pd
import os
import xlrd
import argparse
import pynetbox
import requests
import pprint
import json
import warnings
import re
import sys
import xmltodict
import getopt
import csv
from csv import reader, writer
from requests.auth import HTTPBasicAuth
from requests.packages.urllib3.exceptions import InsecureRequestWarning
warnings.filterwarnings("ignore")
from genie.testbed import load

p = argparse.ArgumentParser()
p.add_argument('--filterword', type=str)
p.add_argument('--user', type=str)
p.add_argument('--passkey', type=str)
args = p.parse_args()

user = args.user
passkey = args.passkey
filterword = args.filterword


primeServer="prime.yourcompany.com"
primeCreds = HTTPBasicAuth(user, passkey)

Getting device data from Prime with Python

Hello

I had to find out from Prime if a device is a stack or not, and how many devices there are if it is a stack, so I came up with this script:

First, the connection:

primeServer="<your_server.com"
primeCreds = HTTPBasicAuth('your_username','<your_password')     

url = "https://" + primeServer + "/webacs/api/v4/data/Devices?.full=true&.sort=ipAddress&deviceName=startsWith(%22" + device_name + "%22)"
response=requests.get(url, auth=primeCreds, verify=False)
obj = xmltodict.parse(response.text)
internal_obj = obj['queryResponse']
entity = internal_obj['entity']    

Now we need to account for the fact that there may be one or more results for "startsWith" (the snippet below is the body of a function, hence the return at the end).

if isinstance(entity, list):          #if several results returned by Prime
    print(f'Returning several objects for {device_name}')

    for item in entity:
        device = item['devicesDTO']
        local_name = device['deviceName']
        location = device['location']
        vc_dict = device['manufacturerPartNrs']
        internal_vc_dict = vc_dict['manufacturerPartNr']

        if isinstance(internal_vc_dict, list):        #if it is a stack
            vc_members = len(internal_vc_dict)
            print(f'stack size {local_name} in prime is {vc_members}')

        else:                                         #if it is not a stack
            vc_members = int(internal_vc_dict['name'])

else:                                   #if one result only
    device = entity['devicesDTO']
    local_name = device['deviceName']
    vc_dict = device['manufacturerPartNrs']
    location = device['location']
    internal_vc_dict = vc_dict['manufacturerPartNr']
    if isinstance(internal_vc_dict, list):   #if it is a stack
        vc_members = len(internal_vc_dict)
        print(f'stack size {local_name} in prime is {vc_members}')

    else:                                            #if it is not a stack
        vc_members = int(internal_vc_dict['name'])
        print('not a stack')

return vc_members

Importing network prefixes from Excel to NetBox.


There comes a time when companies want to move all their Excel lists to NetBox. The problem is that those spreadsheets are huge, so we want to automate the import.

First, we need to build a session to NetBox and read in the Excel file.

import pandas as pd
import xlrd
import pynetbox
import requests
import pprint
import json
import collections
from collections import OrderedDict

nb = pynetbox.api("https://netbox.acme.com", "<your_netbox_token>")


session = requests.Session()
session.verify = False
nb.http_session = session

df = pd.read_excel('test.xlsx', sheet_name='Cloud-IPs')

Now, an important thing in NetBox is that we need to deal with IDs. Each site has an ID. Each device has an ID. So once we have our sites in NetBox, we need to find the ID of a given site, because we will need it when we import the prefixes.

def get_site_id(location_name):
  sites = nb.dcim.sites.all()

  searched_site = location_name
  for site in sites:
    sitesdict = dict(site)
    if searched_site in sitesdict.values():
      print("Found the site under number " + str(sitesdict.get('id')) + " " + str(sitesdict.get('display')))
      siteid = str(sitesdict.get('id'))
      print(siteid)

  return siteid
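
Looping over all sites client-side works, but pynetbox can also do the lookup for you. Assuming the site name is unique, something like this should return the same ID (a sketch, not part of the original script):

site = nb.dcim.sites.get(name=location_name)
siteid = str(site.id)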

Now we need to know which rows should be imported. One column in my Excel file (column title: Location) holds the site name. I would like to get a list of the row numbers for a given location, so that later we can use this list to import the correct rows:

def get_row_range_from_excel(location_name):
    my_location_name = location_name
    
    if my_location_name == "Mars":
        rows = df.loc[df['Location'] == 'Mars']
    if my_location_name == "Pluton":
        rows = df.loc[df['Location'] == 'Pluton']
    if my_location_name == "Saturn":
        rows = df.loc[df['Location'] == 'Saturn']
    if my_location_name == "Earth":
        rows = df.loc[df['Location'] == 'Earth']
    #I change the type to dictionary because i'm not comfortable working with pandas objects    
    mydict = rows.to_dict()
    #i need another dictionary to reorder things a bit and create one consolidated dictionary
    d = collections.defaultdict(dict)
    for key, value in mydict.items():
     
        for deepkey, deepvalue in value.items():
          
          d[deepkey][key] = deepvalue
    #here i get the row numbers from the keys of the last dictionary     
    ordered_list = []
    for key, value in d.items():
          ordered_list.append(key)
    
    
    return ordered_list
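
To see what the reshuffling does: rows.to_dict() is column-oriented ({column: {row_index: value}}), and the defaultdict loop pivots it to row-oriented ({row_index: {column: value}}), so the keys of the new dictionary are exactly the row numbers we want. A tiny illustration with toy values:

# mydict = {'Location': {7: 'Mars', 8: 'Mars'}, 'VNET': {7: 'vnet-a', 8: 'vnet-b'}}
# d      = {7: {'Location': 'Mars', 'VNET': 'vnet-a'},
#           8: {'Location': 'Mars', 'VNET': 'vnet-b'}}
# ordered_list = [7, 8]
# (rows.index.tolist() would give the same row numbers directly.)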

Now it’s time to write up the import function:

def import_azure_excel_to_netbox(location_name):
  task_location_name = location_name
  row_range_list = get_row_range_from_excel(task_location_name)
  

  firstitem = row_range_list[0]        #FIRST ROW TO IMPORT
  lastitem = row_range_list[-1]        #LAST ROW TO IMPORT
 
  siteid = get_site_id(task_location_name) #WE NEED THE SITE ID
  for i in range(int(firstitem), int(lastitem) + 1):   #proper range for the given location, +1 so the last row is imported as well
    row_dict = dict(df.iloc[i])
    if row_dict['Status'] == 'assigned':  
      #HERE WE NEED TO CREATE VARIABLES FOR THE IMPORT BASED ON CONTENT OF COLUMNS FROM MY EXCEL FILE
      network_comment = row_dict['Comment']
      network_description = row_dict['VNET']
      full_description = str(network_description) + " " + str(network_comment)
      network_role = '1'  #THIS IS THE ID FOR MY NETBOX ROLE: INTRANET. 
      network_prefix = str(row_dict['Network']) + '.' + str(row_dict['Unnamed: 4']) + '.' + str(row_dict['Unnamed: 5']) + '.' + str(row_dict['Unnamed: 6']) + str(row_dict['Mask'])
      mask = str(row_dict['Mask'])
      mask_fixed = mask.lstrip('/')
      
      #I NEED TO MAKE A DECISION WHICH PREFIXES ARE JUST CONTAINERS AND WHICH ARE ACTIVE NETWORKS WITH HOSTS... SOME NICE LOGIC IS NEEDED AND FOR NOW THIS MUST DO BECAUSE I HAVE NO BETTER IDEAS.
      if int(mask_fixed) > 23:
        status = 'active'
      else:
        status = 'container'
      #CHECKING...
      print(network_prefix, network_role, full_description, siteid, status)
      
      try:
        prefix = nb.ipam.prefixes.create(site=siteid, prefix = network_prefix, role= network_role, description= full_description, status= status)
      except pynetbox.RequestError as e:
        print(e.error)
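
The import is then one call per location, for example:

import_azure_excel_to_netbox('Mars')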

Automating Avocent with REST API

Today, a short script that shows how to get an authorization token from devices with a REST API and use it to change the configuration. The example device is an Avocent 8032.

import requests
from requests.structures import CaseInsensitiveDict

urls = ["https://1.1.1.1"]
for url in urls:
   headers = {'Content-type': 'application/json', 'Accept': 'application/json'}
   r = requests.post(url + ':48048/api/v1/sessions/login', verify=False, json={"username": "<yourusername","password": "<yourpassword>"}, headers=headers)
   json_response = r.json()
   mytoken = json_response["token"]

   headers = CaseInsensitiveDict()
   tokenstring = str(mytoken)



#GETTING SYSTEM INFO
   result = requests.get(url + ':48048/api/v1/system/info', verify=False, headers={'Content-Type':'application/json', 'Authorization': 'Bearer {}'.format(tokenstring)})
   systeminfo_response = result.json()
   print(systeminfo_response)



#CHANGING_IDLE_TIMEOUT
   result = requests.put(url + ':48048/api/v1/security', verify=False, json={'idleTimeout': 3600}, headers={'Content-Type':'application/json', 'Authorization': 'Bearer {}'.format(tokenstring)})
   print(result)

…and Bob’s your uncle!

Prerequisites:
API needs to be enabled: SSH into your device and change the parameter
enable_api_https_access = yes

The parameter lives under the path /system/security/security_profile.
The code itself is pretty self-explanatory. Any JSON data you want to send (i.e. the values to modify) must go into the json body. verify=False is used to bypass the self-signed certificate warning. Once you have the token, you need to include it in every later request as an auth header (Authorization: Bearer <token>). Once the PUT request is sent, the expected response is 200 or 204.
The Avocent API is described at https://www.vertiv.com/4a7004/globalassets/shared/avocent-acs8008000-application-programming-interface_0.pdf

Automate any HTML GUI with Selenium and pyautogui

I've recently had to upgrade a large number of Avocent console servers using the GUI and it was an awfully boring experience. Because I don't like such repetitive work, I decided it's high time I learnt how to automate mindless clicking.

Prerequisites:

Install Python, pip, the selenium library, and the pyautogui library (pip install selenium pyautogui). Detailed steps depend on your OS.

Code:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
import pyautogui

#this ensures that you don't need to deal with the https self-signed-certificate issue
options = webdriver.ChromeOptions()
options.add_argument('ignore-certificate-errors')
browser = webdriver.Chrome(chrome_options=options)

#here I go to my console server, find the username field (to find the ID, press F12 in your browser to open devtools and find the right GUI element in the html code. Mind you, it might take you a while to find the right element. ), and type in 'myadminusername'
browser.get('https://192.168.1.1')
username = browser.find_element_by_id("username")
username.send_keys("myadminusername")

#I find the password field (and then waste an hour because the password field is non-interactable so I can't send keys in any way)
password = browser.find_element_by_id("password").click()

#so i deal with this problem in a different way. The field is active because i clicked on it so i can just type away...
webdriver.ActionChains(browser).key_down(Keys.SHIFT).send_keys("m").perform()
webdriver.ActionChains(browser).key_up(Keys.SHIFT).send_keys("y").perform()
webdriver.ActionChains(browser).key_down(Keys.SHIFT).send_keys("p").perform()
webdriver.ActionChains(browser).key_up(Keys.SHIFT).send_keys("a").perform()
webdriver.ActionChains(browser).send_keys("s").perform()
webdriver.ActionChains(browser).send_keys("s").perform()
webdriver.ActionChains(browser).send_keys(Keys.RETURN).perform()

#now i'm in the actual GUI, I click on a menu element on the left-hand side. Here I use the find_element_by_css_selector method instead of find_element_by_id
browser.find_element_by_css_selector("a[onclick*=overview]").click()

#I execute some js code instead of clicking on an option
browser.execute_script("XML_request(\"units.overview\",\"upgradeFirmware\");")

#I find a radio button to select how i want to select the file used for the upgrade. Finding by xpath is yet another way of finding an element. 
browser.find_element_by_xpath("//input[@value='from-mycomputer']").click()

#I select the option "Choose from file" which will open the file explorer to locate the file
browser.find_element_by_id("fileXferForm").click()

#now i can't use selenium anymore because Selenium only works in the browser and now i have File Explorer window open and i need to find the upgrade file on my PC. 
#I need to use pyautogui to move the mouse and click

pyautogui.position()
pyautogui.FAILSAFE = False
#i move the mouse cursor to where I want it to be to click on firmware download (later)
pyautogui.moveTo(900, 300, duration =4) 

#i type in the path where my upgrade file is and hit ENTER
pyautogui.typewrite('F:\\my_upgrade_file\n', interval=0)

#now i click on the DOWNLOAD button (because the cursor is in the right place)
pyautogui.click(clicks=2, interval=2, button='left')

#now the file download will take a moment, after which I need to click again on the Install button. This is #TODO because i haven't worked out yet how to deal with the changing download time before I can click. Of course I can click 100 times with an interval of 3 seconds but... 

Azure part 2 – adding subnets to existing vnets

It’s now time to add subnets to the first vnet.

Let’s do the first two with Powershell:

PS /home/tode> $virtualNetwork = Get-AzVirtualNetwork -Name HUBVNET

PS /home/tode> Add-AzVirtualNetworkSubnetConfig -Name gatewaySubnet -VirtualNetwork $virtualNetwork -AddressPrefix "10.0.0.0/27"

PS /home/tode> Add-AzVirtualNetworkSubnetConfig -Name FirewallSubnet -VirtualNetwork $virtualNetwork -AddressPrefix "10.0.0.32/27"
PS /home/tode> $virtualNetwork | Set-AzVirtualNetwork 

Name                   : HUBVNET
ResourceGroupName      : MainRG
Location               : eastus
Id                     : /subscriptions/62dd0295-0094-443a-9b60-4c75dac248eb/resourceGroups/MainRG/providers/Microsoft.Network/virtualNetworks/HUBVNET
Etag                   : W/"e81a3b5f-a4e9-4986-8d2f-d7b79e3237df"
ResourceGuid           : 9ccba534-fa70-4e97-9901-ac19afe36051
ProvisioningState      : Succeeded
Tags                   : 
AddressSpace           : {
                           "AddressPrefixes": [
                             "10.0.0.0/16"
                           ]
                         }
DhcpOptions            : {}
FlowTimeoutInMinutes   : null
Subnets                : [
                           {
                             "Delegations": [],
                             "Name": "gatewaySubnet",
                             "Etag": "W/\"e81a3b5f-a4e9-4986-8d2f-d7b79e3237df\"",
                             "Id": "/subscriptions/62dd0295-0094-443a-9b60-4c75dac248eb/resourceGroups/MainRG/providers/Microsoft.Network/virtualNetworks/HUBVNET/subne
                         ts/gatewaySubnet",
                             "AddressPrefix": [
                               "10.0.0.0/27"
                             ],
                             "IpConfigurations": [],
                             "ServiceAssociationLinks": [],
                             "ResourceNavigationLinks": [],
                             "ServiceEndpoints": [],
                             "ServiceEndpointPolicies": [],
                             "PrivateEndpoints": [],
                             "ProvisioningState": "Succeeded",
                             "PrivateEndpointNetworkPolicies": "Enabled",
                             "PrivateLinkServiceNetworkPolicies": "Enabled",
                             "IpAllocations": []
                           },
                           {
                             "Delegations": [],
                             "Name": "FirewallSubnet",
                             "Etag": "W/\"e81a3b5f-a4e9-4986-8d2f-d7b79e3237df\"",
                             "Id": "/subscriptions/62dd0295-0094-443a-9b60-4c75dac248eb/resourceGroups/MainRG/providers/Microsoft.Network/virtualNetworks/HUBVNET/subne
                         ts/FirewallSubnet",
                             "AddressPrefix": [
                               "10.0.0.32/27"
                             ],
                             "IpConfigurations": [],
                             "ServiceAssociationLinks": [],
                             "ResourceNavigationLinks": [],
                             "ServiceEndpoints": [],
                             "ServiceEndpointPolicies": [],
                             "PrivateEndpoints": [],
                             "ProvisioningState": "Succeeded",
                             "PrivateEndpointNetworkPolicies": "Enabled",
                             "PrivateLinkServiceNetworkPolicies": "Enabled",
                             "IpAllocations": []
                           }
                         ]
VirtualNetworkPeerings : []
EnableDdosProtection   : false
DdosProtectionPlan     : null
ExtendedLocation       : null

Now let’s do the third subnet with a template. I exported the template for the vnet in the azure portal and added the third subnet accordingly:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "virtualNetworks_HUBVNET_name": {
            "defaultValue": "HUBVNET",
            "type": "String"
        }
    },
    "variables": {},
    "resources": [
        {
            "type": "Microsoft.Network/virtualNetworks",
            "apiVersion": "2020-11-01",
            "name": "[parameters('virtualNetworks_HUBVNET_name')]",
            "location": "eastus",
            "properties": {
                "addressSpace": {
                    "addressPrefixes": [
                        "10.0.0.0/16"
                    ]
                },
                "subnets": [
                    {
                        "name": "gatewaySubnet",
                        "properties": {
                            "addressPrefix": "10.0.0.0/27",
                            "serviceEndpoints": [],
                            "delegations": [],
                            "privateEndpointNetworkPolicies": "Enabled",
                            "privateLinkServiceNetworkPolicies": "Enabled"
                        }
                    },
                    {
                        "name": "FirewallSubnet",
                        "properties": {
                            "addressPrefix": "10.0.0.32/27",
                            "serviceEndpoints": [],
                            "delegations": [],
                            "privateEndpointNetworkPolicies": "Enabled",
                            "privateLinkServiceNetworkPolicies": "Enabled"
                        }
                    },
					{
                        "name": "OtherSubnet",
                        "properties": {
                            "addressPrefix": "10.0.0.64/27",
                            "serviceEndpoints": [],
                            "delegations": [],
                            "privateEndpointNetworkPolicies": "Enabled",
                            "privateLinkServiceNetworkPolicies": "Enabled"
                        }
                    }
                ],
                "virtualNetworkPeerings": [],
                "enableDdosProtection": false
            }
        },
        {
            "type": "Microsoft.Network/virtualNetworks/subnets",
            "apiVersion": "2020-11-01",
            "name": "[concat(parameters('virtualNetworks_HUBVNET_name'), '/FirewallSubnet')]",
            "dependsOn": [
                "[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworks_HUBVNET_name'))]"
            ],
            "properties": {
                "addressPrefix": "10.0.0.32/27",
                "serviceEndpoints": [],
                "delegations": [],
                "privateEndpointNetworkPolicies": "Enabled",
                "privateLinkServiceNetworkPolicies": "Enabled"
            }
        },
        {
            "type": "Microsoft.Network/virtualNetworks/subnets",
            "apiVersion": "2020-11-01",
            "name": "[concat(parameters('virtualNetworks_HUBVNET_name'), '/gatewaySubnet')]",
            "dependsOn": [
                "[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworks_HUBVNET_name'))]"
            ],
            "properties": {
                "addressPrefix": "10.0.0.0/27",
                "serviceEndpoints": [],
                "delegations": [],
                "privateEndpointNetworkPolicies": "Enabled",
                "privateLinkServiceNetworkPolicies": "Enabled"
            }
        },
		{
            "type": "Microsoft.Network/virtualNetworks/subnets",
            "apiVersion": "2020-11-01",
            "name": "[concat(parameters('virtualNetworks_HUBVNET_name'), '/OtherSubnet')]",
            "dependsOn": [
                "[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworks_HUBVNET_name'))]"
            ],
            "properties": {
                "addressPrefix": "10.0.0.64/27",
                "serviceEndpoints": [],
                "delegations": [],
                "privateEndpointNetworkPolicies": "Enabled",
                "privateLinkServiceNetworkPolicies": "Enabled"
            }
        }
    ]
}

Now let’s deploy this json:

PS /home/tode> New-AzResourceGroupDeployment -ResourceGroupName MainRG -TemplateFile /home/tode/Dokumente/addsubnets.json

DeploymentName          : addsubnets
ResourceGroupName       : MainRG
ProvisioningState       : Succeeded
Timestamp               : 29.09.2021 13:41:59
Mode                    : Incremental
TemplateLink            : 
Parameters              : 
                          Name                            Type                       Value     
                          ==============================  =========================  ==========
                          virtualNetworks_HUBVNET_name    String                     HUBVNET   
                          
Outputs                 : 
DeploymentDebugLogLevel : 

Finally, I created a short script that lists every subnet of every VNET in the resource group:

$vnetlist = Get-AzResource -ResourceGroupName MainRG
foreach ($item in $vnetlist) {
$vnet = Get-AzVirtualNetwork -Name $item.Name 
foreach ($subnet in $vnet.Subnets) {
	$vnet.Name
	$vnet.Location
	$subnet.Name
	$subnet.AddressPrefix
	}
}

Starting with Azure. Part 1 – Setting up the Azure environment and creating VNETs

Task: Create a resource group and three virtual networks in Azure using PowerShell or ARM templates.
Erase the environment afterwards to minimize costs.

Note!
I'm using Ubuntu 18.04, so I had to install PowerShell and the Az module before doing this activity. Similarly, I created my Azure account beforehand and this is not shown in this blog post. Follow this link to install PowerShell and the Az module if you also run Ubuntu.

If you use Windows, you will only need to install the Az module.

Step 1
Log in to your Azure account.

Step 2

Create your resource group with a Powershell script:

Step 3

Create 3 VNETs: the first VNET with PowerShell, the other ones with a template.

Now another VNET, this time with an ARM template. First, I created a json file that describes the new vnet:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "metadata": {
    "_generator": {
      "name": "bicep",
      "version": "0.4.412.5873",
      "templateHash": "17421496824516967129"
    }
  },
  "parameters": {
    "vnetName": {
      "type": "string",
      "defaultValue": "SpokeVNET",
      "metadata": {
        "description": "VNet name"
      }
    },
    "vnetAddressPrefix": {
      "type": "string",
      "defaultValue": "10.1.0.0/16",
      "metadata": {
        "description": "Address prefix"
      }
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": {
        "description": "Location for all resources."
      }
    }
  },
  "functions": [],
  "resources": [
    {
      "type": "Microsoft.Network/virtualNetworks",
      "apiVersion": "2020-06-01",
      "name": "[parameters('vnetName')]",
      "location": "[parameters('location')]",
      "properties": {
        "addressSpace": {
          "addressPrefixes": [
            "[parameters('vnetAddressPrefix')]"
          ]
        }
      }
    }
  ]
}

Then I run the deployment on the main resource group.

Then I created the third vnet in the same way, after modifying the json file with new values.

Let’s now try to do something creative to retrieve all vnets and their values:

$vnetlist = Get-AzResource -ResourceGroupName MainRG
PS /home/tode> foreach ($item in $vnetlist) { Get-AzVirtualNetwork -Name $item.Name | Select-Object -Property Name, Location -ExpandProperty AddressSpace | Select-Object -ExcludeProperty AddressPrefixesText }

Generate configs in bulk using Ansible

You may need to generate a lot of config (like interface config) where only some parts change. How do you do this quickly? With Ansible. Let's create a simple playbook file, confgenerator.yml:

---
- name: Generate config
  hosts: myhost ##this is not important because we will not be connecting to anything
  connection: local

  gather_facts: false

  tasks:
    - name: Generate config
      template: src=configtemplate.j2 dest=config.txt
      delegate_to: localhost
      run_once: true

Now we create the template:

{% set intno = 1 | int %}
{% set baseip = 11 | int %}
{% for i in range(50) %}
{% set intno = intno + i %}
{% set baseip = baseip + i %}
int fa0/{{intno}}
ip addr 192.168.0.{{baseip}}
no shut
{% endfor %}
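
Note that set assignments inside a Jinja2 for loop do not persist between iterations, so inside the loop intno and baseip effectively evaluate to 1 + i and 11 + i, which happens to be exactly what we want here. The first rendered interfaces in config.txt should therefore look like this (give or take blank lines left by the set/for tags):

int fa0/1
ip addr 192.168.0.11
no shut
int fa0/2
ip addr 192.168.0.12
no shut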

Finally, we execute the playbook and voilà – we have config for 50 interfaces ready to paste into a Cisco device.

Working around parser logic in Genie

Hello

Today I came across the following problem: I needed to export some Cisco device data from Prime to a CSV file using Ansible.
I had some success initially: I used the show inventory parser with a 2960X and was able to export the data. However, I had some 9xxx switches in the mix, too.
When I tried using the "show inventory" Genie parser on a Cisco 9300, I got an "rp_dict variable not initialised" error. So the parser works great on a 2960X but fails miserably on any C9xxx switch. Where's the problem?
Let’s have a look at the source code of the show inventory Genie parser.

if 'STACK' in pid:
    main_dict = ret_dict.setdefault('main', {})
    main_dict['swstack'] = True

if ('ASR-9') in pid and ('PWR' not in pid) and ('FAN' not in pid):
    rp_dict = ret_dict.setdefault('slot', {}).\
        setdefault('0', {}).\
        setdefault('rp', {}).\
        setdefault(pid, {})
    rp_dict['name'] = name
    rp_dict['descr'] = descr
    rp_dict['pid'] = pid
    rp_dict['vid'] = vid
    rp_dict['sn'] = sn
    asr900_rp = True

# Ensure name, slot have been previously parsed
if not name or not slot:
    continue

# PID: ASR1000-RP2       , VID: V02  , SN: JAE153408NJ
# PID: ASR1000-RP2       , VID: V03  , SN: JAE1703094H
# PID: WS-C3850-24P-E    , VID: V01  , SN: FCW1932D0LB
if ('RP' in pid) or ('WS-C' in pid) or ('R' in name):
    rp_dict = slot_dict.setdefault('rp', {}).\
        setdefault(pid, {})
    rp_dict['name'] = name
    rp_dict['descr'] = descr
    rp_dict['pid'] = pid
    rp_dict['vid'] = vid
    rp_dict['sn'] = sn

# PID: ASR1000-SIP40     , VID: V02  , SN: JAE200609WP
# PID: ISR4331/K9        , VID:      , SN: FDO21520TGH
# PID: ASR1002-X         , VID: V07, SN: FOX1111P1M1
# PID: ASR1002-HX        , VID:      , SN:
elif (('SIP' in pid) or ('-X' in pid) or \
     ('-HX' in pid) or ('-LC' in pid) or ('module' in name and not ('module F' in name))) and \
     ('subslot' not in name):

    lc_dict = slot_dict.setdefault('lc', {}).\
        setdefault(pid, {})
    lc_dict['name'] = name
    lc_dict['descr'] = descr
    lc_dict['pid'] = pid
    lc_dict['vid'] = vid
    lc_dict['sn'] = sn

# PID: SP7041-E          , VID: E    , SN: MTC164204VE
# PID: SFP-GE-T          , VID: V02  , SN: MTC2139029X
# PID: EM7455/EM7430     , VID: 1.0  , SN: 355813070074072
elif subslot:
    if ('STACK' in pid) or asr900_rp:
        subslot_dict = rp_dict.setdefault('subslot', {}).\
            setdefault(subslot, {}).\
            setdefault(pid, {})
The problem is that the rp_dict variable only gets initialised when the PID or module name meets certain conditions (e.g. if ('RP' in pid) or ('WS-C' in pid) or ('R' in name):), and for the C9xxx switches the PIDs no longer begin with "WS-C".

The solution is either to rewrite the parser or to modify the raw output before handing it to the Genie parser in Ansible. I went for the second option:

- name: show inventory
  block:
    - name: Run show inventory
      ios_command:
        commands: show inventory
      register: show_inventory

    - name: Print my data
      set_fact:
        int_data_modify: "{{ show_inventory['stdout'][0] | regex_replace('C9','WS-C9') }}"

    - name: Print new data
      set_fact:
        inventory_data_final: "{{ int_data_modify | clay584.genie.parse_genie(command='show inventory', os='ios') }}"