17. November 2018 · Convert ASA access-list rules to a parseable YAML format. · Categories: AWK, Cisco, Firewall, Linux Scripts, Networking

This script spun out of a string of firewall migrations off the legacy ASA platform, where I needed the ability to convert access-lists to a parseable format. There are two reasons for the script: first, human readability for auditing purposes; second, a parseable rule base that can be duplicated or migrated to other firewall types.

ASA_acls.sh: Bourne-Again shell script, ASCII text executable

#!/bin/bash
## Convert ASA access-list rules to a parseable YAML format.
## 2018 (v.01) - Script from www.davideaves.com
 
### VARIABLES ###
 
asa_config_file="${1}"
search_string="${2}"
 
### MAIN SCRIPT ###
 
[ -z "${asa_config_file}" ] && { echo -e "${0} - ERROR: missing ASA config"; exit 0; }
 
for ACCESSGROUP in $(awk '/^access-group /{print $2}' "${asa_config_file}" | sort --ignore-case)
 do
 
  echo "${ACCESSGROUP}:"
  awk 'BEGIN{ REMARK=""; ACTION=""; SERVICE=""; SOURCE=""; DESTINATION=""; PORT=""; LOG=""; DISABLED=""; previous="" }
 
        # convert number to bits
        function bits(N){
          c = 0
          for(i=0; i<8; ++i) if(and(2**i, N)) ++c
          return c
        }
 
        # convert ipv4 to prefix
        function to_prefix(mask) {
          split(mask, octet, ".")
          return bits(octet[1]) + bits(octet[2]) + bits(octet[3]) + bits(octet[4])
        }
 
        # test if a string is an ipv4 address
        function is_v4(address) {
          split(address, octet, ".")
          if ( octet[1] <= 255 && octet[2] <= 255 && octet[3] <= 255 && octet[4] <= 255 )
          return address
        }
 
        # Only look at access-list lines
        /^access-list '''${ACCESSGROUP}''' .*'''${search_string}'''/{
 
        # If line is a remark store it else continue
        if ( $3 == "remark" ) { $1=$2=$3=""; REMARK=substr($0,4) }
        else { $1=$2=$3=""; gsub("^   ", "")
 
          # Iterate through columns
          for(col = 1; col <= NF; col++) {
 
           # Append prefix to SOURCE & DESTINATION
           if ( is_v4(previous) && is_v4($col) ) {
            if ( DESTINATION != "" ) { DESTINATION=DESTINATION"/"to_prefix($col); previous="" }
            else if ( SOURCE != "" ) { SOURCE=SOURCE"/"to_prefix($col); previous="" }
          } else {
 
            # Determine col variable
            if ( col == "1" ) { ACTION=$col; SERVICE=""; SOURCE=""; DESTINATION=""; PORT=""; LOG=""; DISABLED=""; previous="" }
            else if ( $col ~ /^(eq|interface|object|object-group)$/ ) { previous=$col }
            else if ( SERVICE == "" && $col !~ /^(host|object|object-group)$/ ) { SERVICE=$col; PORT=""; previous="" }
            else if ( SOURCE == "" && $col !~ /^(host|object|object-group)$/ ) {
              if ( previous == "interface" ) { SOURCE=previous"/"$col }
              else { SOURCE=$col }; PORT=""; previous=to_prefix($col) }
            else if ( DESTINATION == "" && $col !~ /^(host|object|object-group)$/ ) {
              if ( previous == "interface" ) { DESTINATION=previous"/"$col }
              else { DESTINATION=$col }; PORT=""; previous=to_prefix($col) }
            else if ( previous ~ /^(eq|object-group)$/ ) { PORT=$col; previous="" }
            else if ( $col == "log" ) { LOG=$col; previous="" }
            else if ( $col == "inactive" ) { DISABLED=$col; previous="" }
            else { LAST=$col; previous="" }
 
          }
 
        }}
 
        # Display the output
        if ( DESTINATION != "" ) { count++
          print "  - name: '''${ACCESSGROUP}''' rule",count,"line",NR
          print "    debug:",$0
          if ( REMARK != "" ) { print "    description:",REMARK }
          print "    action:",ACTION
          print "    source:",SOURCE
          print "    destination:",DESTINATION
          if ( PORT == "" ) { print "    service:",SERVICE }
          else { print "    service:",SERVICE"/"PORT }
          if ( LOG != "" ) { print "    log: true" }
          if ( DISABLED != "" ) { print "    disabled: true" }
          REMARK=""; ACTION=""; SERVICE=""; SOURCE=""; DESTINATION=""; PORT=""; LOG=""; DISABLED=""; previous=""
        }
 
  }' "${asa_config_file}"
 
done
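
As a quick sanity check, the script takes a saved ASA config as its first argument and an optional search string as its second (the config file name below is hypothetical):

./ASA_acls.sh asa-running-config.txt
./ASA_acls.sh asa-running-config.txt 10.20.30.40

Assuming an ACL named outside-in containing a rule like "access-list outside-in extended permit tcp any host 10.20.30.40 eq 443 log", the output would look roughly like this:

outside-in:
  - name: outside-in rule 1 line 120
    debug: permit tcp any host 10.20.30.40 eq 443 log
    action: permit
    source: any
    destination: 10.20.30.40
    service: tcp/443
    log: true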
31. March 2017 · TCL/Expect script to backup Cisco device configs. · Categories: Cisco, Linux, Linux Admin, Linux Scripts, Networking

I am not a software developer, but I do like challenges and am interested in learning different software languages. For this project I decided to practice some TCL/Expect, so I rewrote a poorly written Perl script I came across. The script backs up Cisco device configurations by reading two files, cmd.db and device.db, loading them into a data dictionary and iterating through it with a control loop. It then logs into each device, by shelling out to rancid's clogin, and executes all of its commands. It's a little hacky, but it works. I also wrote a shell script to parse the log output into separate files.

/srv/rtrinfo/rtrinfo.exp: a expect script, ASCII text executable

#!/usr/bin/expect -f
# Login to a list of devices and collect show output.
#
## Requires: clogin (rancid)
 
exp_version -exit 5.0
set timeout 5
 
set DEVDB "[lindex $argv 0]"
set LOGDIR "/var/log/rtrinfo"
set OUTLOG "/srv/rtrinfo/output.log"

## Validate input files or print usage.
if {0==[llength $DEVDB]} {
    send_user "usage: $argv0 -device.db-\n"
    exit
} else {
   if {[file isfile "cmd.db"] == "1"} {
      set CMDDB "cmd.db"
   } elseif {[file isfile "[file dirname $argv0]/cmd.db"] == "1"} {
      set CMDDB "[file dirname $argv0]/cmd.db"
   } else {
    send_user "Unable to find cmd.db file, can not start...\n"
    exit 1
   }
}

################################################################

### Procedure to create 3 column dictionary ###
proc addDICT {dbVar field1 field2 field3} {
 
   # Initialize the dictionary if it does not already exist
   if {![info exists $dbVar]} {
      dict set $dbVar ID 0
   }
 
   upvar 1 $dbVar db

   # Create a new ID
   dict incr db ID
   set id [dict get $db ID]

   # Add columns into dictionary
   dict set db $id "\"$field1\" \"$field2\" \"$field3\""
}

### Build the CMD and DEVICE dicts from db files ###
foreach DB [list $CMDDB $DEVDB] {
   set DBFILE [open $DB]
   set file [read $DBFILE]
   close $DBFILE

   ## Split into records on newlines
   set records [split $file "\n"]

   ## Load records for dictionary
   foreach rec $records {
      ## Split into fields on semicolons
      set fields [split $rec ";"]
      lassign $fields field1 field2 field3
 
      if {"[file tail $DB]" == "cmd.db"} {
         # Cols: OUTPUT TYPE CMD
         foreach field2 [split $field2 ","] {
            addDICT CMDS $field2 $field1 $field3
         }
      } else {
         # Cols: HOST TYPE STATE DESC
         addDICT DEVICES $field1 $field2 $field3
      }
   }
}

################################################################

### Open $OUTLOG to be used for post parsing.
set OUTLOG [open "$OUTLOG" w 0664]

### Iterate the DEVICES dictionary ###
dict for {id row} $DEVICES {
 
   ## Assign field names
   lassign $row DEVICE DEVTYPE STATUS

   ## Process device status
   if {"$STATUS" == "up"} {
 
      ## Create log output directory if it does not exist
      if {[file isdirectory "$LOGDIR"] != "1"} {
         file mkdir "$LOGDIR"
      }
 
      log_file
      log_file -noappend "$LOGDIR/$DEVTYPE\_$DEVICE.log"

      ## Run rancid's clogin with a 5min timeout.
      spawn timeout 300 clogin $DEVICE
 
      expect "*#" {
 
      ## Set proper terminal length ##
      if {$DEVTYPE != "asa"} {
         send "terminal length 0\r"
      } else {
         send "terminal pager 0\r"
      }

      ### Iterate the CMDS dictionary ###
      dict for {id row} $CMDS {
         ## Assign field names
         lassign $row CMDTYPE OUTPUT CMD

         ## Push commands to device & update $OUTLOG
         if {($DEVTYPE == $CMDTYPE)&&($OUTPUT != "")} {
            puts $OUTLOG "$LOGDIR/$DEVTYPE\_$DEVICE.log;$OUTPUT;$CMD"
            expect "*#" { send "$CMD\r" }
         }
      }

      ## We are done! logout
      expect "*#" { send "exit\r" }
      expect EOF
      }
 
   }
}
 
close $OUTLOG

### Run a shell script to parse the output.log ###
#exec "[file dirname $argv0]/rtrparse.sh"

/srv/rtrinfo/cmd.db: ASCII text

acl;asa,router;show access-list
arp;ap,ace,asa,router,switch;show arp
arpinspection;ace;show arp inspection
arpstats;ace;show arp statistics
bgp;router;show ip bgp
bgpsumm;router;show ip bgp summary
boot;switch;show boot
cdpneighbors;ap,router,switch;show cdp neighbors
conferror;ace;sh ft config-error
controller;router;show controller
cpuhis;ap,router,switch;show process cpu history
debug;ap,router,switch;show debug
dot11ass;ap;show dot11 associations
envall;switch;show env all
env;router;show environment all
errdis;switch;show interface status err-disabled
filesys;router,switch;dir
flash;asa;show flashfs
intdesc;ap,router,switch;show interface description
interface;ap,asa,router,switch;show interface
intfbrie;ap,ace,router,switch;show ip interface brief
intipbrief;asa;show interface ip brief
intstatus;switch;show interface status
intsumm;router;show int summary
inventory;asa,router,switch;show inventory
iparp;ap,switch;show ip arp
ipint;router;show ip int
mac;switch;show mac address-table
nameif;asa;show nameif
ntpassoc;ap,asa,router,switch;show ntp assoc
plat;router;show platform
power;switch;show power inline
probe;ace;show probe
routes;asa;show route
routes;ap,router,switch;show ip route
rserver;ace;show rserver
running;ace;show running-config
running;ap,asa,router,switch;more system:running-config
serverfarm;ace;show serverfarm
service-policy;ace;show service-policy
service-pol-summ;ace;show service-policy summary
spantree;switch;show spanning-tree
srvfarmdetail;ace;show serverfarm detail
version;ap,ace,asa,router,switch;show version
vlan;switch;show vlan

/srv/rtrinfo/device.db: ASCII text

192.168.0.1;router;up;Site Router
192.168.0.2;ap;up;Autonomous AP
192.168.0.3;asa;up;ASA Firewall
192.168.0.5;switch;up;Site Switch
192.168.0.10;ace;up;Cisco ACE

/srv/rtrinfo/rtrparse.sh: Bourne-Again shell script, ASCII text executable

#!/bin/bash
# Parse the new rtrinfo output.log and create individual cmd output.
# 2016 (v.03) - Script from www.davideaves.com
 
OUTLOG="/srv/rtrinfo/output.log"
RTRPATH="$(dirname $OUTLOG)"
 
### Delete previous directories.
for DIR in ace ap asa router switch
 do [ -d "$RTRPATH/$DIR" ] && { rm -rf "$RTRPATH/$DIR"; }
done
 
### Iterate through $OUTLOG
grep "\.log" "$OUTLOG" | while IFS=';' read LOGFILE OUTPUT CMD
 do
 
 ### Get device name and type.
 TYPE="$(basename "$LOGFILE" | awk -F'_' '{print $1}')"
 DEVICE="$(basename "$LOGFILE" | awk -F'_' '{print $2}' | sed 's/\.log$//')"
 
 ### Create output directory.
 [ ! -d "$RTRPATH/$TYPE/$OUTPUT" ] && { mkdir -p "$RTRPATH/$TYPE/$OUTPUT"; }
 
 ### Extract rtrinfo:output logs and dump into individual files.
 # 1) sed identify $CMD output between prompts.
 # 2) awk drops X beginning line(s).
 # 3) sed to drop the last line.
 sed -n "/[^.].*[#, ]$CMD\(.\)\{1,2\}$/,/[^.].*#.*$/p" "$LOGFILE" \
 | awk 'NR > 0' | sed -n '$!p' > "$RTRPATH/$TYPE/$OUTPUT/$DEVICE.txt"
 
 ## EX: sed -n "/[^.]\([a-zA-Z]\)\{3\}[0-9].*[#, ]$CMD\(.\)\{1,2\}$/,/[^.]\([a-zA-Z]\)\{3\}[0-9].*#.*$/p"
 
done
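
Given the example device.db above, the parser drops one text file per command under $RTRPATH/$TYPE/$OUTPUT/, so the collected output ends up looking something like this:

/srv/rtrinfo/router/version/192.168.0.1.txt
/srv/rtrinfo/router/routes/192.168.0.1.txt
/srv/rtrinfo/ap/version/192.168.0.2.txt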

Since this is something that would be collected nightly or weekly, I would probably kick it off using logrotate (as opposed to crontab). The following is what I would drop in my /etc/logrotate.d directory…

/etc/logrotate.d/rtrinfo: ASCII text

/var/log/rtrinfo/*.log {
        rotate 14
        daily
        missingok
        compress
        sharedscripts
        postrotate
                /srv/rtrinfo/rtrinfo.exp /srv/rtrinfo/device.db > /dev/null
        endscript
}
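
To test the postrotate hook without waiting for the nightly rotation, logrotate can be forced against just this config file:

sudo logrotate -f /etc/logrotate.d/rtrinfo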
26. August 2016 · Backing up your F5 load balancers. · Categories: F5, Linux, Linux Scripts, Load Balancing, Networking

The following script performs scheduled backups of F5 load balancers. It initiates a backup on the F5 via SSH and then SCPs the resulting UCS file off the box. It is meant to be run from cron, on a Linux box, against the F5s in an environment.

For further reading please reference the following F5 Support Documentation:

Feel free to review, modify or use this script however you see fit. Remember you do so at your own risk!

#!/bin/bash
## Create/Backup a UCS file against a list of F5 loadbalancers.
## 2016 (v1.0) - Script from www.davideaves.com
 
F5HOSTS="bigip01 bigip02"
BACKUPDIR="/srv/f5backup"
 
# FUNCTION: End Script if error.
DIE() {
 echo "ERROR: Validate \"$_\" is installed and working on your system."
 exit 1
}
 
# FUNCTION: Fetch the UCS or private id_rsa keyfile.
UCSFETCH() {
 if [ -e "$BACKUPDIR/.$F5.identity" ]
  then
        printf "$F5 "
 
        # Delete backup files older than 90 days.
        find "$BACKUPDIR" -maxdepth 1 -type f -name "$F5*.ucs" -mtime +90 -exec rm {} \;
 
        # Create the UCS backup file.
        ssh -q -o StrictHostKeyChecking=no -i "$BACKUPDIR/.$F5.identity" root@$F5 "tmsh save /sys ucs $(echo $F5) > /dev/null 2>&1"
 
        # Copy down the UCS backup file.
        scp -q -o StrictHostKeyChecking=no -i "$BACKUPDIR/.$F5.identity" root@$F5:/var/local/ucs/$F5.ucs "$BACKUPDIR/" && UCSRENAME
 else
        printf "\n$F5 "
 
        # Copy down the F5's private id_rsa keyfile for root user.
        scp -o StrictHostKeyChecking=no root@$F5:/var/ssh/root/identity "$BACKUPDIR/.$F5.identity" 2> /dev/null
 fi
}
 
# FUNCTION: Rename the UCS file.
UCSRENAME() {
 mv "$BACKUPDIR/$F5.ucs" "$BACKUPDIR/$F5$(echo $F5 | cksum | awk '{print "_"$1}') ($(date +%F -d "$(file "$BACKUPDIR/$F5.ucs" | awk -F': ' '{print $NF}' | awk -F',' '{print $1}')")).ucs"
}
 
# Validate script requirements are met.
type -p scp > /dev/null || DIE
 
### Main Loop ###
for F5 in $(echo $F5HOSTS | tr '[:lower:]' '[:upper:]'); do
 
 # Validate host is pingable before fetching UCS file.
 ping -c1 $F5 > /dev/null 2>&1 && UCSFETCH
 
done; echo
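
To schedule it, a crontab entry along these lines should do; the script path, log path, and schedule below are just placeholders:

30 2 * * * /srv/f5backup/f5backup.sh >> /var/log/f5backup.log 2>&1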
05. January 2016 · Traceroute script to detect route changes. · Categories: Linux, Linux Scripts, Networking

The following script relies on MTR and is meant to be run from cron. It can be useful for logging and/or detecting route changes in the downstream provider path to multiple endpoint IPs. Additionally, the log file is compressed with XZ tools, so you do not have to worry about it growing to an unmanageable size.

#!/bin/bash
## Crontab Example: @hourly /opt/mtreport.sh -p
 
HOSTS="10.100.100.43 192.168.3.4 172.16.16.10"
LOGFILE="/srv/mtreport.log.xz"
 
# FUNCTION: End Script if error.
DIE() {
 echo "ERROR: Validate \"$_\" is installed and working on your system."
 exit 1
}
 
MTRRUN() {
 /usr/sbin/mtr --report --report-cycles 1 --raw --no-dns $HOST |\
  awk 'NR%2==1 {printf  " "$NF;} NR%2==0 {printf "|"$NF/1000;}'
}
 
# Validate script requirements are met.
type -p /usr/sbin/mtr > /dev/null || DIE
 
if [ "$1" == "-p" ]; then
 
 # Main Loop.
 for HOST in $HOSTS
  do echo "$(date +%s)$(MTRRUN)" | xz -9 -c >> "$LOGFILE"
 done
 
elif [ ! -z "$1" ]; then
 
 xzgrep "$1" "$LOGFILE" | while read LINE
  do ARRAY=( $LINE )
 
   ## Show the Timestamp ##
   echo; date -d @${ARRAY[0]} +'%Y/%m/%d_%H:%M:%S'
   ARRAY=("${ARRAY[@]:1}") # Drop the timestamp array element
 
   ## Iterate through hops ##
   for HOP in "${ARRAY[@]}"
    do [ -z "$COUNT" ] && { COUNT=0; }
     echo "$COUNT|$HOP ms"
     let COUNT++ # Increment Hop Count
    done | column -ts\|
   done
 
else
 
 echo "Poll --> $0: -p"
 echo "View --> $0: x.x.x.x"
 
fi
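
Once cron has collected a few polls, the script can be invoked with one of the endpoint IPs to dump the recorded hops per timestamp (paths match the crontab example in the header comment):

/opt/mtreport.sh -p               # poll and append to the log
/opt/mtreport.sh 10.100.100.43    # view the recorded hops for an endpoint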
20. March 2015 · Installing Rancid w/ViewVC under Debian/Ubuntu · Categories: Cisco, Linux, Linux Admin, Networking

When managing a network there are tools out there like SolarWinds, Cisco NCS, or even CiscoWorks that allow engineers to back up configurations, but tools like these can be unwieldy and clunky at their best. In the enterprise I generally run the ARCHIVE feature on Cisco routers and switches to automatically back up my configs to a Linux TFTP server. For devices that lack ARCHIVE functionality, like the ASA and non-Cisco gear, I use a tool called RANCID along with ViewVC for config backup and change tracking.
There is a lot of documentation on the internet about setting up rancid, so the following is a very condensed step-by-step guide to getting Rancid up and running on a Debian-based distro.
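
For reference, the IOS ARCHIVE feature mentioned above only needs a short block of config on each router or switch; a minimal sketch, with a placeholder TFTP server and a daily interval:

archive
 path tftp://192.168.0.50/$h-config
 write-memory
 time-period 1440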

Package Installation

sudo apt-get install rancid viewvc

Configure Rancid

Set the CVS folder

sudo pico /etc/rancid/rancid.conf

Add LIST_OF_GROUPS="rancid" to the config file.

sudo -u rancid /usr/lib/rancid/bin/rancid-cvs

Configure /etc/cron.d/rancid

sudo pico /etc/cron.d/rancid
MAILTO=root
 
# Run config differ daily at 23:00
0 23 * * *  rancid /usr/lib/rancid/bin/rancid-run
 
# Clean out config differ logs
50 23 * * * rancid /usr/bin/find /var/log/rancid -type f -mtime +2 -exec rm {} \;

Configure ~rancid/.cloginrc

sudo -u rancid pico ~rancid/.cloginrc && sudo -u rancid chmod 600 ~rancid/.cloginrc

add user        *       rancid
add password    *       RANCIDPW RANCIDPW
add method      *       ssh telnet
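
Before moving on, it is worth verifying the credentials by testing clogin against a single device (the hostname here is just an example):

sudo -u rancid /usr/lib/rancid/bin/clogin DEXTER-CS01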

Configure device database

sudo -u rancid pico /var/lib/rancid/rancid/router.db

EX: DEXTER-CS01;cisco;up;Dexters Lab Core Switch
* Remember! For IPv6 compatibility RANCID now uses semicolons as the delimiter.

Create a symbolic link for clogin

sudo ln -s /usr/lib/rancid/bin/clogin /usr/local/bin/clogin
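
With the symlink in place, an initial collection can be kicked off by hand before leaving it to cron:

sudo -u rancid /usr/lib/rancid/bin/rancid-run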

Configure ViewVC