The following Ansible playbook is how I manage firewall rules on a Palo Alto firewall. My overall playbook methodology is to reuse playbook task lists as though they were building blocks, and to be able to both add and remove configuration using the same playbook. To do this, a common trick I like to use is the "-e" CLI flag to specify an input file. The input file is where the abstracted configuration is defined and is how I tell the playbook what to build.
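
For example (the file path here is simply the one from this post's sample ticket), the same playbook both builds and tears down the rules depending on the state passed in:

# Create the rules described in the ticket's input file
./main.yaml -e state=present -e er=./ER/CO99999.yaml

# Remove the same rules later using the same input file
./main.yaml -e state=absent -e er=./ER/CO99999.yaml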

Depending on the resources of the company, most ticketing systems, like ServiceNow or CA Service Desk, can output the proper YAML input file after all of the workflow items have been approved. The ticketing system can then drop the file onto a Samba share, where a crontab kicks off the playbook and ingests any new input files, or the ticketing system can kick off the playbook directly if you have Ansible Tower or AWX in the environment.
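
A minimal sketch of the cron-driven approach (the share path, archive directory, and unattended credential handling are assumptions, not part of my actual setup):

#!/bin/bash
# ingest_tickets.sh - run any new input files dropped on the share, then archive them.
# An unattended run would need credentials supplied another way (vault or extra-vars),
# since the vars_prompt in main.yaml cannot be answered from cron.
for er in /srv/tickets/incoming/*.yaml; do
  [ -e "$er" ] || continue
  ./main.yaml -e state=present -e er="$er" && mv "$er" /srv/tickets/processed/
done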

The following is my input. When all is said and done, I put most of my mental effort into how best to structure the input. Ideally I try to ask for as little as possible and to make it adaptable to any vendor product, such as a Cisco FMC.

ER/CO99999.yaml: ASCII text

---
ticket: CO99999
security_rule:
- description: Ansible test rule 0
  source_ip:
  - 192.168.0.100
  - 192.168.100.96
  destination_ip:
  - any
  service:
  - tcp_9000
- description: Another Ansible test rule 1
  source_ip:
  - 192.168.100.104
  - 192.168.100.105
  destination_ip:
  - 192.168.0.100
  service:
  - tcp_9000
  - tcp_9100-9200
- description: Another Ansible test rule 2
  source_ip:
  - 192.168.100.204
  - 192.168.100.205
  - 192.168.100.206
  - 192.168.100.207
  destination_ip:
  - 8.8.8.8
  - 192.168.0.42
  service:
  - udp_1053-2053
  - tcp_1053-2053
- description: Another Ansible test rule 3
  source_ip:
  - 192.168.100.204
  destination_ip:
  - 192.168.0.42
  service:
  - udp_123
- description: Another Ansible test rule 4
  source_ip:
  - 192.168.100.204
  - 192.168.100.205
  destination_ip:
  - 192.168.0.100
  service:
  - tcp_1-65535
- description: Another Ansible test rule 5
  source_ip:
  - 192.168.100.204
  - 192.168.100.207
  destination_ip:
  - 8.8.8.8
  service:
  - tcp_8081

Since the PA firewall is zone based, I read the following CSV file into the playbook to make the zone lookups quicker (a sketch of what it loads into follows the file). The CSV table contains the firewall (or device group), the network, and the security zone that the network belongs to. Without this, I would need to perform a lot more tasks looking this information up on each pass.

fwzones.csv: ASCII text

LABPA,192.168.0.0/24,AWS-PROD
LABPA,192.168.100.0/24,AWS-DEV
LABPA,0.0.0.0/0,Layer3-Outside
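
Because fieldnames are supplied and the file has no header row, the read_csv task in the rule task list registers something roughly like this (a sketch of the registered variable, not actual module output):

fwzones:
  list:
  - devicegroup: LABPA
    network: 192.168.0.0/24
    zone: AWS-PROD
  - devicegroup: LABPA
    network: 192.168.100.0/24
    zone: AWS-DEV
  - devicegroup: LABPA
    network: 0.0.0.0/0
    zone: Layer3-Outside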

The following is the inventory in my lab. I don't recommend storing any credentials here; a safer vault-based sketch follows the inventory.

inventory: ASCII text

[all:vars]
ansible_connection="local"
ansible_python_interpreter="/usr/bin/env python"
username="admin"
password="admin"
 
[labpa]
labpa01
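
If credentials do have to live on disk, a safer pattern is an encrypted vars file. A minimal sketch, assuming a hypothetical secrets.yml holding the username and password variables:

# Encrypt a small vars file holding the credentials
ansible-vault create secrets.yml

# Load it as extra-vars at run time; extra-vars also skip the interactive prompts
./main.yaml -e @secrets.yml -e state=present -e er=./ER/CO99999.yaml --ask-vault-pass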

The following is my main playbook. It will prompt for username and password credentials and read the input variables related to the change. Since this is a sample, I am only calling a single task list, "panos_security_rule.yaml", which is responsible for managing the security rules on the PA.

main.yaml: a /usr/bin/ansible-playbook -f 10 script text executable, ASCII text

#!/usr/local/bin/ansible-playbook -f 10
---
- name: "manage panos devices"
hosts: labpa01
connection: local
gather_facts: False

vars_prompt:

- name: "username"
prompt: "Username"
private: no

- name: "password"
prompt: "Password"

vars:

- panos_provider:
ip_address: "{{ inventory_hostname }}"
username: "{{ username | default('admin') }}"
password: "{{ password | default('admin') }}"

pre_tasks:

- name: "fail: check for required input"
fail:
msg: "Example: ./main.yaml -e state=present -e er=./ER/CO99999.yaml"
when: (er is undefined) and (state is undefined)

- name: "include_vars: load security rules"
include_vars:
file: "{{ er }}"

roles:
- role: PaloAltoNetworks.paloaltonetworks

tasks:

- name: "include: create panos security rule"
include: panos_security_rule.yaml
with_indexed_items: "{{ security_rule }}"
when: state is defined

handlers:

- name: "commit pending changes"
local_action:
module: panos_commit
provider: "{{ panos_provider }}"

The following is my task list for managing PAN-OS security rules. If I were to manage any other vendor's firewall, I would make it read the same input and simply create a different task list for that vendor's device type. There are two tricks I am performing within this task list: I read the fwzones.csv file into a variable for lookups, and I call another task list that builds the L4 service objects that will be referenced in the security rule.

panos_security_rule.yaml: ASCII text

## Manage security rules on a Palo Alto Firewall
## Requires: panos_object_service.yaml
#
## Vars Example:
#
# ticket: CO99999
# security_rule:
# - source_ip: ["192.168.0.100"]
#   destination_ip: ["any"]
#   service: ["tcp_9000"]
#   description: "Ansible test rule 0"
#
## Task Example:
#
#  - name: "include: create panos security rule"
#    include: panos_security_rule.yaml
#    with_indexed_items: "{{ security_rule }}"
#    when: state is defined
#
---
 
###
# Derive firewall zone and devicegroup from prebuilt CSV.
# Normally we would retrieve this from a functional IPAM.
###
 
# Example CSV file
#
# devicegroup,192.168.0.0/24,prod
# devicegroup,192.168.100.0/24,dev
# devicegroup,0.0.0.0/0,outside

- name: "read_csv: read firewall zones from csv"
  local_action:
    module: read_csv
    path: fwzones.csv
    fieldnames: devicegroup,network,zone
  register: fwzones
  run_once: true

- name: "set_fact: source details"
  set_fact:
    source_dgrp: "{{ item_tmp.1['devicegroup'] }}"
    source_addr: "{{ source_addr|default([]) + [ item_tmp.0 ] }}"
    source_zone: "{{ source_zone|default([]) + [ item_tmp.1['zone'] ] }}"
  with_nested:
  - "{{ item.1.source_ip }}"
  - "{{ fwzones.list }}"
  loop_control:
    loop_var: item_tmp
  when: ( item_tmp.0|ipaddr('int') >= item_tmp.1['network']|ipaddr('network')|ipaddr('int') ) and
        ( item_tmp.0|ipaddr('int') <= item_tmp.1['network']|ipaddr('broadcast')|ipaddr('int') ) and
        ( item_tmp.1['network']|ipaddr('int') != "0/0" )

- name: "set_fact: destination zone"
  set_fact:
    destination_dgrp: "{{ item_tmp.1['devicegroup'] }}"
    destination_zone: "{{ destination_zone|default([]) + [ item_tmp.1['zone'] ] }}"
  with_nested:
  - "{{ item.1.destination_ip }}"
  - "{{ fwzones.list }}"
  loop_control:
    loop_var: item_tmp
  when: ( item_tmp.0|ipaddr('int') >= item_tmp.1['network']|ipaddr('network')|ipaddr('int') ) and
        ( item_tmp.0|ipaddr('int') <= item_tmp.1['network']|ipaddr('broadcast')|ipaddr('int') ) and
        ( item_tmp.1['devicegroup'] == source_dgrp ) and ( destination_zone|default([])|length < item.1.destination_ip|unique|length )
 
##
# Done collecting firewall zone & devicegroup.
##

- name: "set_fact: services"
  set_fact:
    services: "{{ services|default([]) + [ service ] }}"
    service_list: "{{ service_list|default([]) + [ {\"protocol\": {service.split('_')[0]: {\"port\": service.split('_')[1]}}, \"name\": service }] }}"
  with_items: "{{ item.1.service }}"
  loop_control:
    loop_var: service

- name: "include: create panos service object"
  include: panos_object_service.yaml
  with_items: "{{ service_list|unique }}"
  loop_control:
    loop_var: service
  when: (state == "present")
 
###
# Testing against a single PA firewall, uncomment if running against Panorama
###

- name: "panos_security_rule: firewall rule"
  local_action:
    module: panos_security_rule
    provider: "{{ panos_provider }}"
    state: "{{ state }}"
    rule_name: "{{ ticket|upper }}-{{ item.0 }}"
    description: "{{ item.1.description }}"
    tag_name: "ansible"
    source_zone: "{{ source_zone|unique }}"
    source_ip: "{{ source_addr|unique }}"
    destination_zone: "{{ destination_zone|unique }}"
    destination_ip: "{{ item.1.destination_ip|unique }}"
    service: "{{ services|unique }}"
#   devicegroup: "{{ source_dgrp|unique }}"
    action: "allow"
    commit: "False"
  notify:
  - commit pending changes

- name: "include: create panos service object"
  include: panos_object_service.yaml
  with_items: "{{ service_list|unique }}"
  loop_control:
    loop_var: service
  when: (state == "absent")

- name: "set_fact: clear facts from run"
  set_fact:
    services: []
    service_list: []
    source_dgrp: ""
    source_addr: []
    source_zone: []
    destination_dgrp: ""
    destination_addr: []
    destination_zone: []

The following will parse the "service" variable from the input and manage the creation or removal of its service objects. This is probably not best practice, but I like to initially build all PA rules as L4; then, after a month of bake-in time, I use the Expedition tool or the PAN-OS 9 App-ID migration tool to convert the rules to L7. I never assume that an app owner knows how their application works, which is why I choose to migrate to L7 rules based on what I actually see in the logs.

panos_object_service.yaml: ASCII text

## Var Example:
#
#  services:
#  - { name: service-abc, protocol: { tcp: { port: '5000,6000-7000' } } }
#
## Task Example:
#
#  - name: "include: create panos address object"
#    include: panos_object_service.yaml state="absent"
#    with_items: "{{ services }}"
#    loop_control:
#      loop_var: service
#
---
- name: attempt to locate existing service object
  block:

  - name: "panos_object: service - find {{ service.name }}"
    local_action:
      module: panos_object
      ip_address: "{{ inventory_hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      serviceobject: "{{ service.name }}"
      devicegroup: "{{ devicegroup | default('') }}"
      operation: "find"
    register: result

  - name: 'set_fact: existing service object'
    set_fact:
      existing: "{{ result.stdout_lines|from_json|json_query('entry')|regex_replace('@') }}"
    when: (state == "present")

  rescue:

  - name: "panos_object: service - add {{ service.name }}"
    local_action:
      module: panos_object
      ip_address: "{{ inventory_hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      serviceobject: "{{ service.name }}"
      protocol: "{{ service.protocol | flatten | list | join('\", \"') }}"
      destination_port: "{{ service | json_query('protocol.*.port') | list | join('\", \"') }}"
      description: "{{ service.description | default('') }}"
      devicegroup: "{{ devicegroup | default('') }}"
      operation: 'add'
    when: (state == "present")

- name: "panos_object: service - update {{ service.name }}"
  local_action:
    module: panos_object
    ip_address: "{{ inventory_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    serviceobject: "{{ service.name }}"
    protocol: "{{ service.protocol | flatten | list | join('\", \"') }}"
    destination_port: "{{ service | json_query('protocol.*.port') | list | join('\", \"') }}"
    description: "{{ service.description | default('') }}"
    devicegroup: "{{ devicegroup | default('') }}"
    operation: 'update'
  when: (state == "present") and (existing is defined) and (existing != service)

- name: "panos_object: service - delete {{ service.name }}"
  local_action:
    module: panos_object
    ip_address: "{{ inventory_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    serviceobject: "{{ service.name }}"
    devicegroup: "{{ devicegroup | default('') }}"
    operation: 'delete'
  ignore_errors: yes
  when: (state == "absent") and (result.stdout_lines is defined)
01. January 2019 · Ansible playbook to manage objects on a Cisco Firepower Management Center (FMC)

I really wish Cisco would support the DevOps community and release Ansible modules for their products like most other vendors do. That being said, since there are no modules for the Cisco Firepower, you have to manage the device through the API directly. Managing anything using raw API requests in Ansible can be a little tricky, but not impossible. When creating playbooks like this you will typically spend most of your time figuring out the structure of the responses and how best to iterate through them.
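
Before diving into the playbook, it helps to see the raw exchange it automates. A quick sketch with curl (host and credentials are placeholders): the token and domain UUID come back as response headers, and every subsequent request presents the token in an X-auth-access-token header.

# Request a token (valid for ~30 minutes); the useful values are in the response headers
curl -sk -X POST -u apiuser:api1234 -D - -o /dev/null \
  https://fmc.example.com/api/fmc_platform/v1/auth/generatetoken

# Use the returned token and domain UUID on subsequent calls
curl -sk -H "X-auth-access-token: ${TOKEN}" \
  "https://fmc.example.com/api/fmc_config/v1/domain/${DOMAIN_UUID}/object/hosts?expanded=true"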

The following Ansible playbook is a refactor of a previous script I wrote last year to post and delete objects on a Firepower in bulk. I have spent a lot of time with Ansible playbooks, and I recommend grouping and modularizing related tasks into separate importable YAML files. This not only makes reusing common groups of tasks much easier, it also means those logical task groupings can later be promoted into a role with little to no effort.

main.yaml: a /usr/bin/ansible-playbook -f 10 script text executable, ASCII text

#!/usr/bin/ansible-playbook -f 10
## Ansible playbook to manage objects on a FMC
# 2019 (v.01) - Playbook from www.davideaves.com
---
- name: manage firepower objects
  hosts: fmc
  connection: local
  gather_facts: no

  vars:

  - ansible_connection: "local"
  - ansible_python_interpreter: "/usr/bin/env python"

  - fmc_provider:
      username: "{{ username | default('apiuser') }}"
      password: "{{ password | default('api1234') }}"

  - fmc_objects:
    - name: server1
      value: 192.0.2.1
      description: Test Server

  tasks:

  ## Note ##
  # Firepower Management Center REST API authentication tokens are valid for 30 minutes, and can be refreshed up to three times
  # Ref: https://www.cisco.com/c/en/us/td/docs/security/firepower/623/api/REST/Firepower_Management_Center_REST_API_Quick_Start_Guide_623/Connecting_with_a_Client.html

  - name: "fmc_platform: generatetoken"
    local_action:
      module: uri
      url: "https://{{ inventory_hostname }}/api/fmc_platform/v1/auth/generatetoken"
      method: POST
      user: "{{ fmc_provider.username }}"
      password: "{{ fmc_provider.password }}"
      validate_certs: no
      return_content: no
      force_basic_auth: yes
      status_code: 204
    register: auth

  - include: fmc_objects.yaml
    when: auth.x_auth_access_token is defined

The following is the task grouping that makes object changes to the FMC using Ansible's built-in uri module. I have tried to make this playbook as idempotent as possible, so I first register an array with all of the objects that exist on the FMC, then iterate through that array in subsequent tasks so I only change what does not match. If it sees an fmc_object name key with no value set, the delete task will remove the object from the FMC.
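
For example, with a hypothetical input like the one below, server1 is left alone or updated depending on whether it matches what is already on the FMC, while server2 is deleted because its name is present but no value is set:

- fmc_objects:
  - name: server1
    value: 192.0.2.1
    description: Test Server
  - name: server2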

fmc_objects.yaml: ASCII text

## Cisco FMC object management tasks for Ansible
## Requires: VAR:auth.x_auth_access_token
## 2019 (v.01) - Playbook from www.davideaves.com
#
## VARIABLE EXAMPLE ##
#
#  - fmc_objects:
#    - name: server1
#      value: 192.0.2.1
#
## USAGE EXAMPLE ##
#  - include: fmc_objects.yaml
#    when: auth.x_auth_access_token is defined
#
---
 
## NOTE ##
# Currently only handling host and network objects!
# Other object types will likely require a j2 template to construct the body submission.

- name: "fmc_config: get all objects"
  local_action:
    module: uri
    url: "https://{{ inventory_hostname }}/api/fmc_config/v1/domain/{{ auth.domain_uuid }}/object/{{ item }}?limit=10000&expanded=true"
    method: GET
    validate_certs: no
    status_code: 200
    headers:
      Content-Type: application/json
      X-auth-access-token: "{{ auth.x_auth_access_token }}"
  with_items:
    - hosts
    - networks
  register: "all_objects_raw"
 
# Unable to figure out how to do this without a j2 template.
# FMC returns too many subelements to easily filter.

- name: "fmc_config: post new objects"
  local_action:
    module: uri
    url: "https://{{ inventory_hostname }}/api/fmc_config/v1/domain/{{ auth.domain_uuid }}/object/{{ fmc_objects | selectattr('name', 'equalto', item) | map(attribute='type') | list | last | default('hosts') | lower }}"
    method: POST
    validate_certs: no
    status_code: 201
    headers:
      Content-Type: application/json
      X-auth-access-token: "{{ auth.x_auth_access_token }}"
    body_format: json
    body:
      name: "{{ item }}"
      value: "{{ fmc_objects | selectattr('name', 'equalto', item) | map(attribute='value') | list | last }}"
      description: "{{ fmc_objects | selectattr('name', 'equalto', item) | map(attribute='description') | list | last | default('Ansible Created') }}"
      overridable: "{{ fmc_objects | selectattr('name', 'equalto', item) | map(attribute='overridable') | list | last | default('False') | bool }}"
  with_items: "{{ lookup('template', 'fmc_objects-missing.j2').split('\n') }}"
  when: (item != "") and (fmc_objects | selectattr('name', 'equalto', item) | map(attribute='value') | list | last is defined)
  changed_when: True
 
## NOTE ##
# The conditions below will not catch the sudden removal of the description or overridable key

- name: "fmc_config: modify existing objects"
  local_action:
    module: uri
    url: "{{ item.1.links.self }}"
    method: PUT
    validate_certs: no
    status_code: 200
    headers:
      Content-Type: application/json
      X-auth-access-token: "{{ auth.x_auth_access_token }}"
    body_format: json
    body:
      name: "{{ item.1.name }}"
      id: "{{ item.1.id }}"
      type: "{{ item.1.type }}"
      value: "{{ fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='value') | list | last }}"
      description: "{{ fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='description') | list | last | default('Ansible Created') }}"
      overridable: "{{ fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='overridable') | list | last | default('False') | bool }}"
  with_subelements:
    - "{{ all_objects_raw['results'] }}"
    - json.items
  when: (fmc_objects | selectattr('name', 'equalto', item.1.name) | list | count > 0) and
        (((fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='value') | list | last is defined) and (fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='value') | list | last != item.1.value)) or
         ((fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='description') | list | last is defined) and (fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='description') | list | last | default('Ansible Created') != item.1.description)) or
         ((fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='overridable') | list | last is defined) and (fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='overridable') | list | last | default('False') | bool != item.1.overridable)))
  changed_when: True

- name: "fmc_config: delete objects"
  local_action:
    module: uri
    url: "{{ item.1.links.self }}"
    method: DELETE
    validate_certs: no
    status_code: 200
    headers:
      X-auth-access-token: "{{ auth.x_auth_access_token }}"
  with_subelements:
    - "{{ all_objects_raw['results'] }}"
    - json.items
  when: (fmc_objects | selectattr('name', 'equalto', item.1.name) | list | count > 0)
        and (fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='name') | list | last is defined)
        and (fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='value') | list | last is undefined)
  changed_when: True

Sometimes when trying to munge an array and perform comparisons, you have to do it in a Jinja2 template. The following template builds a list of existing object names and then checks each fmc_object against that list to see whether it needs to be created. This is what my POST task uses to determine which new objects will be created.

templates/fmc_objects-missing.j2: ASCII text

{#- Build a list of the existing objects -#}
{% set EXISTING = [] %}
{% for object_result in all_objects_raw['results'] %}
{% for object_line in object_result['json']['items'] %}
{{- EXISTING.append( object_line['name'] ) -}}
{% endfor %}
{% endfor %}
 
{#- Check fmc_objects to see if missing -#}
{% for fmc_object in fmc_objects %}
{% if fmc_object['name'] not in EXISTING %}
{{ fmc_object['name'] }}
{% endif %}
{% endfor %}
28. December 2018 · Search for object matches in an ASA config

Having to parse ASA configs for migration purposes provides a never-ending source of reasons to write scripts. The following AWK script will munge an ASA config searching for any specified address or object name and output any objects that reference it. This script is something I use in conjunction with the ASA_acls.sh script to find security rules relating to an address. As far as I know, this is the closest offline equivalent to the "Where Used" feature in ASDM for finding addresses.
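
Typical usage looks like this (the config filename, object names, and the output shown are made up); one or more addresses or object names can be searched at once, and the result is a small YAML list containing the search terms plus any objects that reference them:

./ASA_obj.awk asa-config.txt 192.168.1.10 WEB-SERVER-01

asa_objects:
  - 192.168.1.10
  - WEB-SERVER-01
  - obj-web-server-group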

ASA_obj.awk: awk script, ASCII text executable

#!/usr/bin/awk -f
## Search for object matches in an ASA config.
## 2018 (v.01) - Script from www.davideaves.com
 
### BEGIN ###
 
BEGIN {
  dig_range="y"
  dig_subnet="n"
 
  # Script arguments: ASA configuration + Search objects
  if ( ARGV[1] == "" ) {
    print "ERROR: No Input ASA config provided!" > "/dev/stderr"
    exit 1
  } else if ( ARGV[2] == "" ) {
    print "ERROR: No address or object to search for!" > "/dev/stderr"
    exit 1
  } else {
    # Saving everything after ARGV[1] in search_array.
    for (i = 2; i < ARGC; i++) {
      search_array[ARGV[i]] = ARGV[i]
      delete ARGV[i]
  } }
}
 
### FUNCTIONS ###
 
# Convert IP to Interger.
function ip_to_int(input) {
  split(input, oc, ".")
  ip_int=(oc[1]*(256^3))+(oc[2]*(256^2))+(oc[3]*(256))+(oc[4])
  return ip_int
}
 
# test if a string is an ipv4 address
function is_v4(address) {
  split(address, octet, ".")
  if ( octet[1] <= 255 && octet[2] <= 255 && octet[3] <= 255 && octet[4] <= 255 )
  return address
}
 
# convert number to bits
function bits(N){
  c = 0
  for(i=0; i<8; ++i) if( and(2**i, N) ) ++c
  return c
}
 
# convert ipv4 to prefix
function to_prefix(mask) {
  split(mask, octet, ".")
  return bits(octet[1]) + bits(octet[2]) + bits(octet[3]) + bits(octet[4])
}
 
### SCRIPT ###
 
//{ gsub(/\r/, "") # Strip CTRL+M
 
  ### LINE IS NAME ###
  if ( $1 ~ /^name$/ ) {
 
    name=$3; host=$2; type=$1
    for(col = 5; col <= NF; col++) { previous=previous" "$col }
    description=substr(previous,2)
    previous=""
 
    # Add to search_array
    for (search in search_array) if ( host == search ) search_array[name]
  }
 
  ### LINE IS OBJECT ### 
  else if ( $1 ~ /^object/ ) {
 
    tab="Y"
    name=$3
    type=$2
    if ( type == "service" ) service=$4
    previous=""
 
  } else if ( tab == "Y" && substr($0,1,1) == " " ) {
 
    # object is single host.
    if ( $1 == "host" ) {
      host=$NF
      for (search in search_array) if ( host == search ) search_array[name]
    }
 
    # object is a subnet
    else if ( $1 == "subnet" && dig_subnet == "y" ) {
      for (search in search_array) if ( is_v4(search) ) {
 
        NETWORK=ip_to_int($2)
        PREFIX=to_prefix($3)
        BROADCAST=(NETWORK + (2 ^ (32 - PREFIX) - 1))
 
        if ( ip_to_int(search) >= int(NETWORK) && ip_to_int(search) <= int(BROADCAST) ) {
          search_array[name]
      } }
    }
 
    # object is a range
    else if ( $1 == "range" && dig_range == "y" ) {
      for (search in search_array) if ( is_v4(search) ) {
        if ( ip_to_int(search) >= ip_to_int($2) && ip_to_int(search) <= ip_to_int($3) ) {
          search_array[name]
      } }
    }
 
    # object is group of other objects
    else if ( $2 ~ /(host|object)/ ) {
      for (search in search_array) if ( $NF == search ) search_array[name]
    }
 
    # object contains nat statement
    else if ( $1 == "nat" ) {
      for (search in search_array) if ( $NF == search ) search_array[name]
    }
 
    ### Debug everything else within an object
    #else { print "DEBUG:",$0 }
 
  }
  else { tab="" }
 
}
 
### END ###
 
END{
  if ( isarray(search_array) ) {
    print "asa_objects:"
    for (search in search_array) print "  -",search
  }
}
19. December 2018 · Collect all sensor information from the FMC

Eventually I plan on refactoring all my Firepower scripts into Ansible playbooks, but in the meantime the following is a quick script that will collect all sensor information from a Firepower Management Center and save it to a CSV file. The output is pretty handy for migrations and general data collection.
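
The resulting CSV is named after the script and ends up looking roughly like this, one row per sensor with the FMC it was collected from in the first column (the sensor values below are made up):

FMC,healthStatus,hostName,model,name,
192.0.2.13,green,sensor01.example.com,Cisco Firepower Threat Defense for VMWare,sensor01,
192.0.2.13,green,sensor02.example.com,Cisco Firepower Threat Defense for VMWare,sensor02,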

#!/bin/bash
## Collect all sensor devicerecords from a FMC.
## Requires: python:PyYAML,shyaml
## 2018 (v.01) - Script from www.davideaves.com
 
username="fmcusername"
password="fmcpassword"
 
FMC="192.0.2.13 192.0.2.14 192.0.2.15 192.0.2.16 192.0.2.17 192.0.2.18 192.0.2.21 192.0.2.22 192.0.2.23"
 
### Convert JSON to YAML.
j2y() {
 python -c 'import sys, yaml, json; yaml.safe_dump(json.load(sys.stdin), sys.stdout, default_flow_style=False)' 2> /dev/null
}
 
### Convert YAML to JSON.
y2j() {
 python -c 'import sys, yaml, json; y=yaml.load(sys.stdin.read()); print json.dumps(y)' 2> /dev/null
}
 
echo "FMC,healthStatus,hostName,model,name," > "$(basename ${0%.*}).csv"
 
# Iterate through all FMC devices
for firepower in ${FMC}
 do eval "$(curl -skX POST https://${firepower}/api/fmc_platform/v1/auth/generatetoken \
        -H "Authorization: Basic $(printf "${username}:${password}" | base64)" -D - |\
        awk '/(auth|DOMAIN|global)/{gsub(/[\r|:]/,""); gsub(/-/,"_",$1); print $1"=\""$2"\""}')"
 
    ### Get expanded list of devices
    curl -skX GET "https://${firepower}/api/fmc_config/v1/domain/${DOMAIN_UUID}/devices/devicerecords?offset=0&limit=1000&expanded=true" -H "X-auth-access-token: ${X_auth_access_token}" |\
     j2y | awk 'BEGIN{ X=0; }/^(-|  [a-z])/{if($1 == "-") {X+=1; printf "'''${firepower}''',"} else if($1 == "healthStatus:" || $1 == "hostName:" || $1 == "model:" || $1 == "name:") {printf $NF","} else if($1 == "type:") {printf "\n"}}'
 
done >> "$(basename ${0%.*}).csv"
19. December 2018 · Ansible playbook to provision Netscaler VIPs

The following playbook will create a fully functional VIP, including the supporting monitor, service group (pool), and servers (nodes), on a NetScaler load balancer. Additionally, the same playbook has the ability to fully deprovision a VIP and all of its supporting artifacts. To do all this I use the native NetScaler Ansible modules. When it comes to the netscaler_servicegroup module, since the number of servers is not always consistent, I create that task from a Jinja2 template and import it back into the play.
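
Provisioning and deprovisioning are just a matter of flipping the state variable, for example:

# Build the VIP, monitors, service groups and servers
./netscaler_provision.yaml -e state=present

# Tear the same VIP and all of its supporting objects back down
./netscaler_provision.yaml -e state=absent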

netscaler_provision.yaml: a /usr/bin/ansible-playbook -f 10 script text executable, ASCII text

#!/usr/bin/ansible-playbook -f 10
## Ansible playbook to provision Netscaler VIPs.
# Requires: nitrosdk-python
# 2018 (v.01) - Playbook from www.davideaves.com
---
- name: Netscaler VIP provision
  hosts: netscaler
  connection: local
  gather_facts: False

  vars:

    ansible_connection: "local"
    ansible_python_interpreter: "/usr/bin/env python"

    state: 'present'

    lbvip:
      name: testvip
      address: 203.0.113.1
      server:
        - name: 'server-1'
          address: '192.0.2.1'
          description: 'Ansible Test Server 1'
          disabled: 'true'
        - name: 'server-2'
          address: '192.0.2.2'
          description: 'Ansible Test Server 2'
          disabled: 'true'
        - name: 'server-3'
          address: '192.0.2.3'
          description: 'Ansible Test Server 3'
          disabled: 'true'
        - name: 'server-4'
          address: '192.0.2.4'
          description: 'Ansible Test Server 4'
          disabled: 'true'
        - name: 'server-5'
          address: '192.0.2.5'
          description: 'Ansible Test Server 5'
          disabled: 'true'
        - name: 'server-6'
          address: '192.0.2.6'
          description: 'Ansible Test Server 6'
          disabled: 'true'
        - name: 'server-7'
          address: '192.0.2.7'
          description: 'Ansible Test Server 7'
          disabled: 'true'
        - name: 'server-8'
          address: '192.0.2.8'
          description: 'Ansible Test Server 8'
          disabled: 'true'
      vserver:
        - port: '80'
          description: 'Generic service running on 80'
          type: 'HTTP'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'
        - port: '443'
          description: 'Generic service running on 443'
          type: 'SSL_BRIDGE'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'
        - port: '8080'
          description: 'Generic service running on 8080'
          type: 'HTTP'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'
        - port: '8081'
          description: 'Generic service running on 8081'
          type: 'HTTP'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'
        - port: '8443'
          description: 'Generic service running on 8443'
          type: 'SSL_BRIDGE'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'

  tasks:

    - name: Build lbvip and all related components.
      block:
      - local_action:
          module: netscaler_server
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          name: "{{ item.name }}"
          ipaddress: "{{ item.address }}"
          comment: "{{ item.description | default('Ansible Created') }}"
          disabled: "{{ item.disabled | default('false') }}"
        with_items: "{{ lbvip.server }}"
      - local_action:
          module: netscaler_lb_monitor
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          monitorname: "tcp_{{ lbvip.name }}_{{ item.port }}"
          type: TCP
          destport: "{{ item.port }}"
        with_items: "{{ lbvip.vserver }}"
        no_log: false
      - local_action:
          module: copy
          content: "{{ lookup('template', 'templates/netscaler_servicegroup.j2') }}"
          dest: "/tmp/svg_{{ lbvip.name }}_{{ item.port }}.yaml"
          mode: "0644"
        with_items: "{{ lbvip.vserver }}"
        changed_when: false
      - include_tasks: "/tmp/svg_{{ lbvip.name }}_{{ item.port }}.yaml"
        with_items: "{{ lbvip.vserver }}"
      - local_action:
          module: file
          state: absent
          path: "/tmp/svg_{{ lbvip.name }}_{{ item.port }}.yaml"
        with_items: "{{ lbvip.vserver }}"
        changed_when: false
      - local_action:
          module: netscaler_lb_vserver
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          name: "vs_{{ lbvip.name }}_{{ item.port }}"
          servicetype: "{{ item.type }}"
          ipv46: "{{ lbvip.address }}"
          port: "{{ item.port }}"
          lbmethod: "{{ item.method | default('LEASTCONNECTION') }}"
          persistencetype: "{{ item.persistence | default('SOURCEIP') }}"
          servicegroupbindings:
            - servicegroupname: "svg_{{ lbvip.name }}_{{ item.port }}"
        with_items: "{{ lbvip.vserver }}"
      when: state == "present"

    - name: Destroy lbvip and all related components.
      block:
      - local_action:
          module: netscaler_lb_vserver
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          name: "vs_{{ lbvip.name }}_{{ item.port }}"
        with_items: "{{ lbvip.vserver }}"
      - local_action:
          module: netscaler_servicegroup
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          servicegroupname: "svg_{{ lbvip.name }}_{{ item.port }}"
        with_items: "{{ lbvip.vserver }}"
      - local_action:
          module: netscaler_lb_monitor
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          monitorname: "tcp_{{ lbvip.name }}_{{ item.port }}"
          type: TCP
        with_items: "{{ lbvip.vserver }}"
      - local_action:
          module: netscaler_server
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          name: "{{ item.name }}"
        with_items: "{{ lbvip.server }}"
      when: state == "absent"

The following is the Jinja2 template that creates the netscaler_servicegroup task. An important thing to note is my use of the raw block. When the task file is created and stored in /tmp it does not contain any account credentials; instead, I preserve those variables inside raw blocks to prevent leaking sensitive information to anyone who may be snooping around on the server while the playbook is running. A sketch of the rendered output follows the template.

templates/netscaler_servicegroup.j2: ASCII text, with CRLF line terminators

---
- local_action:
    module: netscaler_servicegroup
    nsip: {% raw %}"{{ inventory_hostname }}"
{% endraw %}
    nitro_user: {% raw %}"{{ nitro_user }}"
{% endraw %}
    nitro_pass: {% raw %}"{{ nitro_pass }}"
{% endraw %}
    nitro_protocol: "https"
    validate_certs: no

    state: "{{ state | default('present') }}"

    servicegroupname: "svg_{{ lbvip.name }}_{{ item.port }}"
    comment: "{{ item.description | default('Ansible Created') }}"
    servicetype: "{{ item.type }}"
    servicemembers:
{% for i in lbvip.server %}
      - servername: "{{ i.name }}"
        port: "{{ item.port }}"
{% endfor %}
    monitorbindings:
      - monitorname: "tcp_{{ lbvip.name }}_{{ item.port }}"
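
To make the raw trick concrete, this is roughly what one of the generated files in /tmp looks like for the port-80 vserver (a sketch based on the example vars above, trimmed to two service members): everything has been rendered except nsip, nitro_user, and nitro_pass, which remain unexpanded Jinja2 expressions until the file is imported back into the play.

---
- local_action:
    module: netscaler_servicegroup
    nsip: "{{ inventory_hostname }}"
    nitro_user: "{{ nitro_user }}"
    nitro_pass: "{{ nitro_pass }}"
    nitro_protocol: "https"
    validate_certs: no

    state: "present"

    servicegroupname: "svg_testvip_80"
    comment: "Generic service running on 80"
    servicetype: "HTTP"
    servicemembers:
      - servername: "server-1"
        port: "80"
      - servername: "server-2"
        port: "80"
      # ...remaining servers omitted for brevity
    monitorbindings:
      - monitorname: "tcp_testvip_80"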