11. November 2019 · Comments Off on Ansible playbook to manage security rules on a Palo Alto firewall · Categories: Ansible, Firewall, Networking

The following Ansible playbook is how I manage firewall rules on a Palo Alto firewall. My overall playbook methodology is to reuse playbook task lists as though they were building blocks, and to be able to both add and remove configuration using the same playbook. To do this, a common trick I like to use is the CLI flag "-e" to specify an input file. The input file is where the abstracted configuration is defined and is how I tell the playbook what to build.
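
For example, one run builds the change and a second run with the opposite state backs it out (the "state" and "er" variables are consumed by the pre_tasks shown later in this post):

./main.yaml -e state=present -e er=./ER/CO99999.yaml
./main.yaml -e state=absent -e er=./ER/CO99999.yaml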

Depending on the resources of the company, most ticketing systems, like ServiceNow or CA Service Desk, can output the proper YAML input file after all of the workflow items have been approved. The ticketing system can then drop the file on a Samba share, where a crontab kicks off and ingests any new input files, or the ticketing system can kick off the playbook directly if you have Ansible Tower or AWX in the environment.

The following is my input. When all is said and done, I put most of my mental effort into how best to structure the input. Ideally I try to ask for as little as possible and make it generic enough that it can be adapted to any vendor product, such as a Cisco FMC.

ER/CO99999.yaml: ASCII text

---
ticket: CO99999
security_rule:
- description: Ansible test rule 0
  source_ip:
  - 192.168.0.100
  - 192.168.100.96
  destination_ip:
  - any
  service:
  - tcp_9000
- description: Another Ansible test rule 1
  source_ip:
  - 192.168.100.104
  - 192.168.100.105
  destination_ip:
  - 192.168.0.100
  service:
  - tcp_9000
  - tcp_9100-9200
- description: Another Ansible test rule 2
  source_ip:
  - 192.168.100.204
  - 192.168.100.205
  - 192.168.100.206
  - 192.168.100.207
  destination_ip:
  - 8.8.8.8
  - 192.168.0.42
  service:
  - udp_1053-2053
  - tcp_1053-2053
- description: Another Ansible test rule 3
  source_ip:
  - 192.168.100.204
  destination_ip:
  - 192.168.0.42
  service:
  - udp_123
- description: Another Ansible test rule 4
  source_ip:
  - 192.168.100.204
  - 192.168.100.205
  destination_ip:
  - 192.168.0.100
  service:
  - tcp_1-65535
- description: Another Ansible test rule 5
  source_ip:
  - 192.168.100.204
  - 192.168.100.207
  destination_ip:
  - 8.8.8.8
  service:
  - tcp_8081

Since the PA firewall is zone-based, I read the following CSV file to make the playbook quicker. The CSV table contains the firewall (or device-group), the network, and the security zone that the network belongs to. Without this, I would need to perform a lot more tasks looking up this information on each pass.

fwzones.csv: ASCII text

LABPA,192.168.0.0/24,AWS-PROD
LABPA,192.168.100.0/24,AWS-DEV
LABPA,0.0.0.0/0,Layer3-Outside
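
Once the read_csv task in the task list below ingests this file, each row becomes a dict keyed by the declared fieldnames; a source address like 192.168.100.104 would match the 192.168.100.0/24 row and land in the AWS-DEV zone:

fwzones.list:
- { devicegroup: LABPA, network: 192.168.0.0/24, zone: AWS-PROD }
- { devicegroup: LABPA, network: 192.168.100.0/24, zone: AWS-DEV }
- { devicegroup: LABPA, network: 0.0.0.0/0, zone: Layer3-Outside }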

The following is the inventory in my lab. I don’t recommend storing any credentials here.

inventory: ASCII text

[all:vars]
ansible_connection="local"
ansible_python_interpreter="/usr/bin/env python"
username="admin"
password="admin"
 
[labpa]
labpa01

The following is my main playbook. It will prompt for username and password credentials and read the input variables related to the change. Since this is a sample, I am only calling a single task list, "panos_security_rule.yaml", which is responsible for managing the security rules on the PA.

main.yaml: a /usr/local/bin/ansible-playbook -f 10 script text executable, ASCII text

#!/usr/local/bin/ansible-playbook -f 10
---
- name: "manage panos devices"
hosts: labpa01
connection: local
gather_facts: False

vars_prompt:

- name: "username"
prompt: "Username"
private: no

- name: "password"
prompt: "Password"

vars:

- panos_provider:
ip_address: "{{ inventory_hostname }}"
username: "{{ username | default('admin') }}"
password: "{{ password | default('admin') }}"

pre_tasks:

- name: "fail: check for required input"
fail:
msg: "Example: ./main.yaml -e state=present -e er=./ER/CO99999.yaml"
when: (er is undefined) and (state is undefined)

- name: "include_vars: load security rules"
include_vars:
file: "{{ er }}"

roles:
- role: PaloAltoNetworks.paloaltonetworks

tasks:

- name: "include: create panos security rule"
include: panos_security_rule.yaml
with_indexed_items: "{{ security_rule }}"
when: state is defined

handlers:

- name: "commit pending changes"
local_action:
module: panos_commit
provider: "{{ panos_provider }}"

The following is my task list for managing PAN-OS security rules. If I were to manage any other vendor's firewall, I would make it read the same input and simply create a different task list for that vendor's device type. There are two tricks that I am performing within this task list: I am reading the fwzones.csv file into a variable for lookups, and I am calling another task list that will build the L4 service objects that will be referenced in the security rule.

panos_security_rule.yaml: ASCII text

## Manage security rules on a Palo Alto Firewall
## Requires: panos_object_service.yaml
#
## Vars Example:
#
# ticket: CO99999
# security_rule:
# - source_ip: ["192.168.0.100"]
#   destination_ip: ["any"]
#   service: ["tcp_9000"]
#   description: "Ansible test rule 0"
#
## Task Example:
#
#  - name: "include: create panos security rule"
#    include: panos_security_rule.yaml
#    with_indexed_items: "{{ security_rule }}"
#    when: state is defined
#
---
 
###
# Derive firewall zone and devicegroup from prebuilt CSV.
# Normally we would retrieve this from a functional IPAM.
###
 
# Example CSV file
#
# devicegroup,192.168.0.0/24,prod
# devicegroup,192.168.100.0/24,dev
# devicegroup,0.0.0.0/0,outside

- name: "read_csv: read firewall zones from csv"
  local_action:
    module: read_csv
    path: fwzones.csv
    fieldnames: devicegroup,network,zone
  register: fwzones
  run_once: true

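###
# The when clauses below turn the zone lookup into integer math: ipaddr('int')
# converts the candidate address and each row's network boundaries to plain
# integers, so "is this address inside this network" becomes a numeric range
# check. The extra test against the 0.0.0.0/0 row keeps the default route
# from swallowing every source address.
###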
- name: "set_fact: source details"
  set_fact:
    source_dgrp: "{{ item_tmp.1['devicegroup'] }}"
    source_addr: "{{ source_addr|default([]) + [ item_tmp.0 ] }}"
    source_zone: "{{ source_zone|default([]) + [ item_tmp.1['zone'] ] }}"
  with_nested:
  - "{{ item.1.source_ip }}"
  - "{{ fwzones.list }}"
  loop_control:
    loop_var: item_tmp
  when: ( item_tmp.0|ipaddr('int') >= item_tmp.1['network']|ipaddr('network')|ipaddr('int') ) and
        ( item_tmp.0|ipaddr('int') <= item_tmp.1['network']|ipaddr('broadcast')|ipaddr('int') ) and
        ( item_tmp.1['network']|ipaddr('int') != "0/0" )

- name: "set_fact: destination zone"
  set_fact:
    destination_dgrp: "{{ item_tmp.1['devicegroup'] }}"
    destination_zone: "{{ destination_zone|default([]) + [ item_tmp.1['zone'] ] }}"
  with_nested:
  - "{{ item.1.destination_ip }}"
  - "{{ fwzones.list }}"
  loop_control:
    loop_var: item_tmp
  when: ( item_tmp.0|ipaddr('int') >= item_tmp.1['network']|ipaddr('network')|ipaddr('int') ) and
        ( item_tmp.0|ipaddr('int') <= item_tmp.1['network']|ipaddr('broadcast')|ipaddr('int') ) and
        ( item_tmp.1['devicegroup'] == source_dgrp ) and ( destination_zone|default([])|length < item.1.destination_ip|unique|length )
 
##
# Done collecting firewall zone & devicegroup.
##

- name: "set_fact: services"
  set_fact:
    services: "{{ services|default([]) + [ service ] }}"
    service_list: "{{ service_list|default([]) + [ {\"protocol\": {service.split('_')[0]: {\"port\": service.split('_')[1]}}, \"name\": service }] }}"
  with_items: "{{ item.1.service }}"
  loop_control:
    loop_var: service

- name: "include: create panos service object"
  include: panos_object_service.yaml
  with_items: "{{ service_list|unique }}"
  loop_control:
    loop_var: service
  when: (state == "present")
 
###
# Testing against a single PA firewall, uncomment if running against Panorama
###

- name: "panos_security_rule: firewall rule"
  local_action:
    module: panos_security_rule
    provider: "{{ panos_provider }}"
    state: "{{ state }}"
    rule_name: "{{ ticket|upper }}-{{ item.0 }}"
    description: "{{ item.1.description }}"
    tag_name: "ansible"
    source_zone: "{{ source_zone|unique }}"
    source_ip: "{{ source_addr|unique }}"
    destination_zone: "{{ destination_zone|unique }}"
    destination_ip: "{{ item.1.destination_ip|unique }}"
    service: "{{ services|unique }}"
#   devicegroup: "{{ source_dgrp|unique }}"
    action: "allow"
    commit: "False"
  notify:
  - commit pending changes

- name: "include: create panos service object"
  include: panos_object_service.yaml
  with_items: "{{ service_list|unique }}"
  loop_control:
    loop_var: service
  when: (state == "absent")

- name: "set_fact: clear facts from run"
  set_fact:
    services: []
    service_list: []
    source_dgrp: ""
    source_addr: []
    source_zone: []
    destination_dgrp: ""
    destination_addr: []
    destination_zone: []

The following will parse the "service" variable from the input and will manage the creation or removal of its service object. This is probably not best practice, but I like to initially build all PA rules as L4; then, after a month of bake-in time, I will use the Expedition tool or the PAN-OS 9 App-ID migration tool to convert the rules to L7. I never assume that an app owner knows how their application works, which is why I choose to migrate to L7 rules based on what I actually see in the logs.
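
For example, given the split('_') convention in the set_fact task above, an input service of tcp_1053-2053 expands into a service_list entry roughly like this:

- name: tcp_1053-2053
  protocol:
    tcp:
      port: 1053-2053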

panos_object_service.yaml: ASCII text

## Var Example:
#
#  services:
#  - { name: service-abc, protocol: { tcp: { port: '5000,6000-7000' } } }
#
## Task Example:
#
#  - name: "include: create panos address object"
#    include: panos_object_service.yaml state="absent"
#    with_items: "{{ services }}"
#    loop_control:
#      loop_var: service
#
---
- name: attempt to locate existing service object
  block:

  - name: "panos_object: service - find {{ service.name }}"
    local_action:
      module: panos_object
      ip_address: "{{ inventory_hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      serviceobject: "{{ service.name }}"
      devicegroup: "{{ devicegroup | default('') }}"
      operation: "find"
    register: result

  - name: 'set_fact: existing service object'
    set_fact:
      existing: "{{ result.stdout_lines|from_json|json_query('entry')|regex_replace('@') }}"
    when: (state == "present")

  rescue:

  - name: "panos_object: service - add {{ service.name }}"
    local_action:
      module: panos_object
      ip_address: "{{ inventory_hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      serviceobject: "{{ service.name }}"
      protocol: "{{ service.protocol | flatten | list | join('\", \"') }}"
      destination_port: "{{ service | json_query('protocol.*.port') | list | join('\", \"') }}"
      description: "{{ service.description | default('') }}"
      devicegroup: "{{ devicegroup | default('') }}"
      operation: 'add'
    when: (state == "present")

- name: "panos_object: service - update {{ service.name }}"
  local_action:
    module: panos_object
    ip_address: "{{ inventory_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    serviceobject: "{{ service.name }}"
    protocol: "{{ service.protocol | flatten | list | join('\", \"') }}"
    destination_port: "{{ service | json_query('protocol.*.port') | list | join('\", \"') }}"
    description: "{{ service.description | default('') }}"
    devicegroup: "{{ devicegroup | default('') }}"
    operation: 'update'
  when: (state == "present") and (existing is defined) and (existing != service)

- name: "panos_object: service - delete {{ service.name }}"
  local_action:
    module: panos_object
    ip_address: "{{ inventory_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    serviceobject: "{{ service.name }}"
    devicegroup: "{{ devicegroup | default('') }}"
    operation: 'delete'
  ignore_errors: yes
  when: (state == "absent") and (result.stdout_lines is defined)
19. December 2018 · Comments Off on Ansible playbook to provision Netscaler VIPs. · Categories: Ansible, Linux, Linux Admin, Load Balancing, NetScaler, Networking

The following playbook will create a fully functional VIP, including the supporting monitor, service group (pool), and servers (nodes) on a NetScaler load balancer. Additionally, the same playbook has the ability to fully deprovision a VIP and all its supporting artifacts. To do all this I use the native NetScaler Ansible modules. When it comes to the netscaler_servicegroup module, since the number of servers is not always consistent, I create that task with a Jinja2 template which is then imported back into the play.
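
With the sample input below, each vserver port fans out into a predictable set of named objects; for port 80 of the "testvip" VIP, the playbook would create roughly the following:

tcp_testvip_80   <- netscaler_lb_monitor (TCP monitor against destport 80)
svg_testvip_80   <- netscaler_servicegroup (binding server-1 through server-8)
vs_testvip_80    <- netscaler_lb_vserver (203.0.113.1:80, bound to svg_testvip_80)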

netscaler_provision.yaml: a /usr/bin/ansible-playbook -f 10 script text executable, ASCII text

#!/usr/bin/ansible-playbook -f 10
## Ansible playbook to provision Netscaler VIPs.
# Requires: nitrosdk-python
# 2018 (v.01) - Playbook from www.davideaves.com
---
- name: Netscaler VIP provision
  hosts: netscaler
  connection: local
  gather_facts: False

  vars:

    ansible_connection: "local"
    ansible_python_interpreter: "/usr/bin/env python"

    state: 'present'

    lbvip:
      name: testvip
      address: 203.0.113.1
      server:
        - name: 'server-1'
          address: '192.0.2.1'
          description: 'Ansible Test Server 1'
          disabled: 'true'
        - name: 'server-2'
          address: '192.0.2.2'
          description: 'Ansible Test Server 2'
          disabled: 'true'
        - name: 'server-3'
          address: '192.0.2.3'
          description: 'Ansible Test Server 3'
          disabled: 'true'
        - name: 'server-4'
          address: '192.0.2.4'
          description: 'Ansible Test Server 4'
          disabled: 'true'
        - name: 'server-5'
          address: '192.0.2.5'
          description: 'Ansible Test Server 5'
          disabled: 'true'
        - name: 'server-6'
          address: '192.0.2.6'
          description: 'Ansible Test Server 6'
          disabled: 'true'
        - name: 'server-7'
          address: '192.0.2.7'
          description: 'Ansible Test Server 7'
          disabled: 'true'
        - name: 'server-8'
          address: '192.0.2.8'
          description: 'Ansible Test Server 8'
          disabled: 'true'
      vserver:
        - port: '80'
          description: 'Generic service running on 80'
          type: 'HTTP'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'
        - port: '443'
          description: 'Generic service running on 443'
          type: 'SSL_BRIDGE'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'
        - port: '8080'
          description: 'Generic service running on 8080'
          type: 'HTTP'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'
        - port: '8081'
          description: 'Generic service running on 8081'
          type: 'HTTP'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'
        - port: '8443'
          description: 'Generic service running on 8443'
          type: 'SSL_BRIDGE'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'

  tasks:

    - name: Build lbvip and all related components.
      block:
      - local_action:
          module: netscaler_server
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          name: "{{ item.name }}"
          ipaddress: "{{ item.address }}"
          comment: "{{ item.description | default('Ansible Created') }}"
          disabled: "{{ item.disabled | default('false') }}"
        with_items: "{{ lbvip.server }}"
      - local_action:
          module: netscaler_lb_monitor
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          monitorname: "tcp_{{ lbvip.name }}_{{ item.port }}"
          type: TCP
          destport: "{{ item.port }}"
        with_items: "{{ lbvip.vserver }}"
        no_log: false
      - local_action:
          module: copy
          content: "{{ lookup('template', 'templates/netscaler_servicegroup.j2') }}"
          dest: "/tmp/svg_{{ lbvip.name }}_{{ item.port }}.yaml"
          mode: "0644"
        with_items: "{{ lbvip.vserver }}"
        changed_when: false
      - include_tasks: "/tmp/svg_{{ lbvip.name }}_{{ item.port }}.yaml"
        with_items: "{{ lbvip.vserver }}"
      - local_action:
          module: file
          state: absent
          path: "/tmp/svg_{{ lbvip.name }}_{{ item.port }}.yaml"
        with_items: "{{ lbvip.vserver }}"
        changed_when: false
      - local_action:
          module: netscaler_lb_vserver
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          name: "vs_{{ lbvip.name }}_{{ item.port }}"
          servicetype: "{{ item.type }}"
          ipv46: "{{ lbvip.address }}"
          port: "{{ item.port }}"
          lbmethod: "{{ item.method | default('LEASTCONNECTION') }}"
          persistencetype: "{{ item.persistence | default('SOURCEIP') }}"
          servicegroupbindings:
            - servicegroupname: "svg_{{ lbvip.name }}_{{ item.port }}"
        with_items: "{{ lbvip.vserver }}"
      when: state == "present"

    - name: Destroy lbvip and all related components.
      block:
      - local_action:
          module: netscaler_lb_vserver
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          name: "vs_{{ lbvip.name }}_{{ item.port }}"
        with_items: "{{ lbvip.vserver }}"
      - local_action:
          module: netscaler_servicegroup
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          servicegroupname: "svg_{{ lbvip.name }}_{{ item.port }}"
        with_items: "{{ lbvip.vserver }}"
      - local_action:
          module: netscaler_lb_monitor
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          monitorname: "tcp_{{ lbvip.name }}_{{ item.port }}"
          type: TCP
        with_items: "{{ lbvip.vserver }}"
      - local_action:
          module: netscaler_server
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          name: "{{ item.name }}"
        with_items: "{{ lbvip.server }}"
      when: state == "absent"

The following is the Jinja2 template that creates the netscaler_servicegroup task. An important thing to note is my use of the raw blocks. When the task file is created and stored in /tmp it does not contain any account credentials; instead, I preserve the variables inside the raw blocks to prevent leaking sensitive information to anyone who may be snooping around on the server while the playbook is running.

templates/netscaler_servicegroup.j2: ASCII text, with CRLF line terminators

---
- local_action:
    module: netscaler_servicegroup
    nsip: {% raw %}"{{ inventory_hostname }}"
{% endraw %}
    nitro_user: {% raw %}"{{ nitro_user }}"
{% endraw %}
    nitro_pass: {% raw %}"{{ nitro_pass }}"
{% endraw %}
    nitro_protocol: "https"
    validate_certs: no

    state: "{{ state | default('present') }}"

    servicegroupname: "svg_{{ lbvip.name }}_{{ item.port }}"
    comment: "{{ item.description | default('Ansible Created') }}"
    servicetype: "{{ item.type }}"
    servicemembers:
{% for i in lbvip.server %}
      - servername: "{{ i.name }}"
        port: "{{ item.port }}"
{% endfor %}
    monitorbindings:
      - monitorname: "tcp_{{ lbvip.name }}_{{ item.port }}"
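
To illustrate the raw blocks at work: for the port-80 vserver in the sample input, the rendered /tmp/svg_testvip_80.yaml would look roughly like the following. The nsip, nitro_user and nitro_pass lines survive as unrendered variables, so the temporary file never contains credentials:

---
- local_action:
    module: netscaler_servicegroup
    nsip: "{{ inventory_hostname }}"
    nitro_user: "{{ nitro_user }}"
    nitro_pass: "{{ nitro_pass }}"
    nitro_protocol: "https"
    validate_certs: no

    state: "present"

    servicegroupname: "svg_testvip_80"
    comment: "Generic service running on 80"
    servicetype: "HTTP"
    servicemembers:
      - servername: "server-1"
        port: "80"
      - servername: "server-2"
        port: "80"
      # ...and so on through server-8
    monitorbindings:
      - monitorname: "tcp_testvip_80"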
05. August 2018 · Comments Off on Ansible playbook to handle IOS upgrades. · Categories: Ansible, Cisco, Linux, Networking

The following is an Ansible playbook I created to handle IOS upgrades against an excessively large number of Cisco routers at a customer site I was doing some work at. I saved a lot of time by staging the IOS images on flash before kicking off the playbook; if I missed anything, this playbook would have uploaded the image for me before setting the boot statement. I think moving forward I will start leveraging the NTC (Network to Code) Ansible modules a lot more; they have proven themselves to be superior to and more feature-rich than the built-in Cisco Ansible modules.

In addition to the NTC requirements, this playbook also requires 2 directories:

  • ./images: directory that contains IOS images.
  • ./backups: directory repository for config backups.
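
Assuming the platform_facts.csv below, the working directory ends up looking something like this (the backup files are written by the playbook on first run):

./images/c3900-universalk9-mz.SPA.156-3.M4.bin
./images/c2900-universalk9-mz.SPA.156-3.M4.bin
./images/isr4300-universalk9.16.03.06.SPA.bin
./backups/<inventory_hostname>.cfg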

ansible.cfg: ASCII text

[defaults]
transport = ssh
host_key_checking = false
retry_files_enabled = false
#stdout_callback = unixy
#stdout_callback = actionable
display_skipped_hosts = false
 
timeout = 5
 
inventory = ./hosts
log_path   = ./ansible.log
 
[ssh_connection]
pipelining = True

platform_facts.csv: ASCII text

C3900,IOS,ROUTER,c3900-universalk9-mz.SPA.156-3.M4.bin
C2900,IOS,ROUTER,c2900-universalk9-mz.SPA.156-3.M4.bin
ISR4300,IOS,ROUTER,isr4300-universalk9.16.03.06.SPA.bin
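
The set_fact pre_tasks in the playbook key off this table with the csvfile lookup (col is zero-indexed, matching on the first column); for a router whose image resolves to the C3900 platform, the lookups behave like this:

lookup('csvfile', 'C3900 file=platform_facts.csv col=1 delimiter=,')  ->  IOS
lookup('csvfile', 'C3900 file=platform_facts.csv col=2 delimiter=,')  ->  ROUTER
lookup('csvfile', 'C3900 file=platform_facts.csv col=3 delimiter=,')  ->  c3900-universalk9-mz.SPA.156-3.M4.bin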

ios_upgrade.yaml: a ansible-playbook script text executable, ASCII text

#!/usr/local/bin/ansible-playbook -f 10
## Ansible playbook to handle IOS upgrades.
# Playbook will not reboot any device unless the variable REBOOT exists.
#
# Requires: https://github.com/networktocode/ntc-ansible
# Example: ansible-playbook --extra-vars "REBOOT=yes" ios_upgrade.yaml
# Example: ansible-playbook ios_upgrade.yaml --skip-tags=change
---
- name: Cisco IOS Upgrade
  hosts: [ "all" ]
  connection: local
  gather_facts: no
  tags: [ "IOS", "upgrade" ]

  vars_prompt:

  - name: "username"
    prompt: "Username"
    private: no

  - name: "password"
    prompt: "Password"

  vars:

  - ansible_connection: "local"
  - ansible_python_interpreter: "/usr/bin/env python"

  - ios_provider:
      username: "{{ username }}"
      password: "{{ password }}"
      authorize: true
      auth_pass: "{{ password }}"
      host: "{{ inventory_hostname }}"
      timeout: 120

  pre_tasks:

  - name: "ios_facts: hardware"
    ios_facts:
      gather_subset: hardware
      provider: "{{ ios_provider }}"
    connection: local
    when: (PLATFORM is not defined)
    tags: [ "pre_task", "ios_facts", "hardware" ]

  - name: "ios_command: boot configuration"
    ios_command:
      provider: "{{ ios_provider }}"
      commands:
        - "show running-config | include ^boot.system"
    connection: local
    register: COMMANDS
    tags: [ "pre_task", "ios_command", "boot", "COMMANDS" ]

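  # Illustration of the two regex_replace filters below: an image path such
  # as flash0:/c3900-universalk9-mz.SPA.156-3.M4.bin is upper-cased, stripped
  # of everything up to the last ':' or '/', then truncated at the first
  # dash, leaving the bare platform name C3900.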
  - name: "set_fact: PLATFORM"
    set_fact:
      PLATFORM: "{{ ansible_net_image|upper | regex_replace('.*[:/]') | regex_replace('([A-Z]-|-).*') }}"
    no_log: True
    when: (ansible_net_image is defined) and (PLATFORM is not defined)
    tags: [ "pre_task", "set_fact", "PLATFORM", "ansible_net_image" ]

  - name: "set_fact: SYSTEM"
    set_fact:
      SYSTEM: "{{ lookup('csvfile', PLATFORM + ' file=platform_facts.csv col=1 delimiter=,')|upper }}"
    no_log: True
    when: (PLATFORM is defined) and (SYSTEM is not defined)
    tags: [ "pre_task", "set_fact", "lookup", "platform_facts.csv", "PLATFORM", "SYSTEM" ]

  - name: "set_fact: TYPE"
    set_fact:
      TYPE: "{{ lookup('csvfile', PLATFORM + ' file=platform_facts.csv col=2 delimiter=,')|upper }}"
    no_log: True
    when: (PLATFORM is defined) and (TYPE is not defined)
    tags: [ "pre_task", "set_fact", "lookup", "platform_facts.csv", "PLATFORM", "TYPE" ]

  - name: "set_fact: IMAGE"
    set_fact:
      IMAGE: "{{ lookup('csvfile', PLATFORM + ' file=platform_facts.csv col=3 delimiter=,') }}"
    no_log: True
    when: (PLATFORM is defined) and (IMAGE is not defined)
    tags: [ "pre_task", "set_fact", "lookup", "platform_facts.csv", "PLATFORM", "IMAGE" ]

  - name: "stat: BACKUP_FILE"
    stat: path="backups/{{ inventory_hostname }}.cfg"
    no_log: True
    register: BACKUP_FILE
    tags: [ "pre_task", "stat", "BACKUP_FILE" ]

  - name: "stat: IMAGE_FILE"
    stat: path="images/{{ IMAGE }}"
    no_log: True
    register: IMAGE_FILE
    tags: [ "pre_task", "stat", "IMAGE_FILE" ]

  tasks:

  - name: "fail: missing image"
    fail:
      msg: "Platform image missing: {{ PLATFORM }}"
    when: (IMAGE[0] is undefined)

  - name: "ntc_save_config: host > local" 
    ntc_save_config:     
      platform: cisco_ios_ssh
      local_file: "backups/{{ inventory_hostname }}.cfg"
      provider: "{{ ios_provider }}"
    connection: local
    when: (BACKUP_FILE.stat.exists == False)
    tags: [ "ntc-ansible", "ntc_save_config", "cisco_ios_ssh", "BACKUP_FILE" ]

  - name: "ntc_file_copy: local > host"
    ntc_file_copy:
      platform: cisco_ios_ssh
      local_file: "images/{{ IMAGE }}"
      host: "{{ inventory_hostname }}"
      provider: "{{ ios_provider }}"
    connection: local
    when: (IMAGE_FILE.stat.exists == True) and (PLATFORM is defined) and (IMAGE is defined)
    tags: [ "ntc-ansible", "ntc_file_copy", "cisco_ios_ssh", "IMAGE", "PLATFORM", "IMAGE_FILE" ]

  - name: "ios_config: remove boot system lines"
    ios_config:
      provider: "{{ ios_provider }}"
      lines: "no {{ item }}"
    connection: local
    register: config_boot_rem
    with_items: "{{ COMMANDS.stdout_lines[0] }}"
    when: (PLATFORM is defined) and (IMAGE is defined) and
          not(IMAGE in item) and not(item == '')
    tags: [ "ios_config", "boot", "PLATFORM", "remove", "config_boot_rem", "change" ]
    notify:
      - ios write memory

  - name: "ios_config: add boot system line"
    ios_config:
      provider: "{{ ios_provider }}"
      lines: "boot system flash:{{ IMAGE }}"
      match: line
    connection: local
    register: config_boot_add
    when: (PLATFORM is defined) and (IMAGE is defined)
    tags: [ "ios_config", "boot", "PLATFORM", "IMAGE", "add", "config_boot_add", "change" ]
    notify:
      - ios write memory

  - meta: flush_handlers

  post_tasks:

  - name: "ntc_reboot: when REBOOT is defined"
    ntc_reboot:
      platform: cisco_ios_ssh
      confirm: true
      host: "{{ inventory_hostname }}"
      provider: "{{ ios_provider }}"
    connection: local
    when: (REBOOT is defined) and
          ((config_boot_add.changed == true) or (config_boot_rem.changed == true))
    tags: [ "post_task", "ntc-ansible", "ntc_reboot", "REBOOT", "change" ]
    notify:
      - wait for tcp

  handlers:

  - name: "ios write memory"
    ios_command:
      provider: "{{ ios_provider }}"
      commands: "write memory"
    connection: local

  - name: "wait for tcp"
    wait_for:
      port: 22
      host: "{{inventory_hostname}}"
      timeout: 420
    connection: local
13. June 2018 · Comments Off on Using netaddr in Ansible to manipulate network IP, CIDR, MAC and prefix. · Categories: Ansible, Cloud, Linux Admin, Networking

The following Ansible playbook is an example that demonstrates using netaddr to manipulate network IPs, CIDRs, MACs and prefixes. Additional examples can be found in the Ansible docs, or if you're looking to do the manipulation in Python, see the netaddr documentation.

#!/usr/local/bin/ansible-playbook
## Using netaddr in Ansible to manipulate network IP, CIDR, MAC and prefix
## 2018 (v.01) - Playbook from www.davideaves.com
---
- hosts: localhost
  gather_facts: false

  vars:
  - IP: 172.31.3.13/23
  - CIDR: 192.168.0.0/16
  - MAC: 1a:2b:3c:4d:5e:6f
  - PREFIX: 18

  tasks:
    - debug: msg="___ {{ IP }} ___ ADDRESS {{ IP | ipaddr('address') }}"
    - debug: msg="___ {{ IP }} ___ BROADCAST {{ IP | ipaddr('broadcast') }}"
    - debug: msg="___ {{ IP }} ___ NETMASK {{ IP | ipaddr('netmask') }}"
    - debug: msg="___ {{ IP }} ___ NETWORK {{ IP | ipaddr('network') }}"
    - debug: msg="___ {{ IP }} ___ PREFIX {{ IP | ipaddr('prefix') }}"
    - debug: msg="___ {{ IP }} ___ SIZE {{ IP | ipaddr('size') }}"
    - debug: msg="___ {{ IP }} ___ WILDCARD {{ IP | ipaddr('wildcard') }}"
    - debug: msg="___ {{ IP }} ___ RANGE {{ IP | ipaddr('range_usable') }}"
    - debug: msg="___ {{ IP }} ___ REVERSE DNS {{ IP | ipaddr('revdns') }}"
    - debug: msg="___ {{ IP }} ___ HEX {{ IP | ipaddr('address') | ip4_hex() }}"
    - debug: msg="___ {{ MAC }} ___ CISCO {{ MAC | hwaddr('cisco') }}"
    - debug: msg="___ {{ CIDR }} ___ Last /20 CIDR {{ CIDR | ipsubnet(20, -1) }}"
    - debug: msg="___ {{ CIDR }} ___ 1st IP {{ CIDR | ipaddr(1) }}"
    - debug: msg="___ {{ CIDR }} ___ 3rd from last IP {{ CIDR | ipaddr(-3) }}"
21. December 2017 · Comments Off on Using Ansible to manage ACL's on Cisco IOS · Categories: Ansible, Cisco, Networking

Finding the smartest way to broadly manage ACLs on Cisco devices is always a cause of heartburn. In the past I have written ugly tcl/expect scripts to blindly push changes out to thousands of routers with little validation. Over time I started to get good at writing hacky checks to fake idempotency and prevent unneeded changes from being made. No tool is perfect; even using vendor tools like CSM or APIC-EM to manage ACLs can easily result in loss of communication to the target device. If not written properly, Ansible can easily suffer similar shortcomings, although in Ansible's case it's likely your own fault for not testing properly.

The Coyote problem with Ansible

In my quest to find the least intrusive way to manage a consistent set of ACLs across all my Cisco devices, I have yet to find any satisfactory playbooks. Most, if not all, playbooks on GitHub or blogs deal with access-lists by deleting and recreating the entire block. Even the ios_config docs page shows examples of deleting target ACLs. Some of the fancier playbooks will go as far as de-referencing the ACL in the line, interface, or route-map before deleting it. Those approaches work, but they are sub-optimal because when you delete an ACL you must be mindful of the following:

  • Routers pass live traffic, even during maintenance windows; deleting and recreating an ACL will interrupt traffic.
  • ACLs tied to interfaces that are deleted without first removing the access-group from the interface will result in no traffic passing, not even management traffic.
  • Temporarily removing an access-group from an interface before deleting it will allow *all* traffic to pass.
  • Unfortunately, the way configuration is done on Cisco devices, there is no straightforward way to commit all changes at once and take a single quick hit like with carrier-grade equipment. For example, when making configuration changes it is common to see services bouncing in and out of service each time you press the enter key.

That being said, the following is the most functional and least intrusive solution I have been able to come up with. It is still *not* perfect! In this playbook I am still deleting unmatched sequence numbers that could potentially still be in use, only to re-add them on the next task item.

ios_acl.yaml: Ansible executable playbook

#!/usr/local/bin/ansible-playbook -f 10
---
- name: ACL
  hosts: ios_lab
  gather_facts: false
  connection: local
  tags: [ "acl", "ios" ]

  vars_prompt:

  - name: "aclNAME"
    prompt: "ACL Name"
    private: no
    when: aclNAME is undefined

  vars:

  - aclLIST: "{{ ACL[aclNAME].LIST }}"
  - aclTYPE: "{{ ACL[aclNAME].TYPE }}"

  tasks:

  - name: "GET access-list"
    register: get_acl_config
    ios_command:
      provider: "{{ provider }}"
      commands:
        - "show access-lists {{ aclNAME }} | include ^\ +[1-9]"

  - name: "DEL access-list lines"
    when: "(get_acl_config.stdout_lines[0][0] != '') and (item not in lookup('template', 'ios_acl.j2'))"
    with_items: "{{ get_acl_config.stdout_lines[0] |\
                        regex_replace('[ \t]{2}') |\
                        regex_replace(', wildcard bits') |\
                        regex_replace(' [(].{9,30}[)]') }}"
    ios_config:
      provider: "{{ provider }}"
      lines: "no {{ item }}"
      parents: "ip access-list {{ aclTYPE }} {{ aclNAME }}"
    notify:
      - Save Configuration

  - name: "PUT access-list lines"
    when: "(item not in get_acl_config.stdout_lines[0] |\
                        regex_replace('[ \t]{2}') |\
                        regex_replace(', wildcard bits') |\
                        regex_replace(' [(].{9,30}[)]'))"
    with_items: "{{ lookup('template', 'ios_acl.j2').split('\n') }}"
    ios_config:
      provider: "{{ provider }}"
      lines: "{{ item }}"
      parents: "ip access-list {{ aclTYPE }} {{ aclNAME }}"
    notify:
      - Save Configuration

  handlers:

  - name: "Save Configuration"
    ios_command:
      provider: "{{ provider }}"
      commands: "write memory"

group_vars/ios_lab.yaml: Ansible group variables

---
provider:
  username: cisco
  password: cisco
  authorize: true
  auth_pass: cisco
  host: "{{ inventory_hostname }}"
  timeout: 120

ACL:
  NETWORK_MANAGEMENT:
    TYPE: extended
    LIST:
      - permit ip 192.168.10.0 0.0.0.255 any
      - permit ip 192.168.20.0 0.0.0.255 any
      - permit ip 192.168.30.0 0.0.0.255 any
      - permit ip 192.168.40.0 0.0.0.255 any

templates/ios_acl.j2: Jinja2 template

{% for line in aclLIST %}{{ loop.index * 10 }} {{ line }}
{% endfor %}
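
For the NETWORK_MANAGEMENT ACL above, the template numbers each line at loop.index * 10, so the rendered configuration lines come out as:

10 permit ip 192.168.10.0 0.0.0.255 any
20 permit ip 192.168.20.0 0.0.0.255 any
30 permit ip 192.168.30.0 0.0.0.255 any
40 permit ip 192.168.40.0 0.0.0.255 any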