01. January 2019 · Comments Off on Ansible playbook to manage objects on a Cisco Firepower Management Center (FMC) · Categories: Ansible, Cisco, Firewall, Networking

I really wish Cisco would support the DevOps community and release Ansible modules for their products like most other vendors do. Since there are no modules for the Cisco Firepower, you have to manage the device through its REST API directly. Managing anything with raw API requests in Ansible can be a little tricky, but not impossible. When creating playbooks like this you will typically spend most of your time figuring out the structure of the responses and how best to iterate through them.

The following Ansible playbook is a refactor of a script I wrote last year to create and delete objects on a Firepower in bulk. I have spent a lot of time with Ansible playbooks and I recommend grouping and modularizing related tasks into separate importable YAML files. This not only makes reusing common groups of tasks much easier, but also means those logical task groupings can later be promoted into a role with little to no effort.
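The play below targets a host group named fmc. A minimal YAML inventory for it might look like the following sketch (the hostname is a placeholder, not from the original post):

```yaml
# hosts.yaml - minimal inventory sketch for the "fmc" group.
# "fmc01.example.com" is a hypothetical FMC hostname.
all:
  children:
    fmc:
      hosts:
        fmc01.example.com:
```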

main.yaml: a /usr/bin/ansible-playbook -f 10 script text executable, ASCII text

#!/usr/bin/ansible-playbook -f 10
## Ansible playbook to manage objects on a FMC
# 2019 (v.01) - Playbook from www.davideaves.com
---
- name: manage firepower objects
  hosts: fmc
  connection: local
  gather_facts: no

  vars:

  - ansible_connection: "local"
  - ansible_python_interpreter: "/usr/bin/env python"

  - fmc_provider:
      username: "{{ username | default('apiuser') }}"
      password: "{{ password | default('api1234') }}"

  - fmc_objects:
    - name: server1
      value: 192.0.2.1
      description: Test Server

  tasks:

  ## Note ##
  # Firepower Management Center REST API authentication tokens are valid for 30 minutes, and can be refreshed up to three times
  # Ref: https://www.cisco.com/c/en/us/td/docs/security/firepower/623/api/REST/Firepower_Management_Center_REST_API_Quick_Start_Guide_623/Connecting_with_a_Client.html

  - name: "fmc_platform: generatetoken"
    local_action:
      module: uri
      url: "https://{{ inventory_hostname }}/api/fmc_platform/v1/auth/generatetoken"
      method: POST
      user: "{{ fmc_provider.username }}"
      password: "{{ fmc_provider.password }}"
      validate_certs: no
      return_content: no
      force_basic_auth: yes
      status_code: 204
    register: auth

  - include: fmc_objects.yaml
    when: auth.x_auth_access_token is defined

The following is the task grouping that makes object changes to the FMC using Ansible's built-in URI module. I have tried to make this playbook as idempotent as possible, so I first register an array with all of the objects that exist on the FMC, then iterate through that array in subsequent tasks so I only change what does not match. If an fmc_objects entry has a name key but no value set, the delete task will remove that object from the FMC.
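For example, to delete an object you list only its name in fmc_objects ("retired-server" here is a hypothetical entry for illustration):

```yaml
fmc_objects:
  - name: server1          # has a value: created/updated on the FMC
    value: 192.0.2.1
    description: Test Server
  - name: retired-server   # name only, no value: deleted from the FMC
```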

fmc_objects.yaml: ASCII text

## Cisco FMC object management tasks for Ansible
## Requires: VAR:auth.x_auth_access_token
## 2019 (v.01) - Playbook from www.davideaves.com
#
## VARIABLE EXAMPLE ##
#
#  - fmc_objects:
#    - name: server1
#      value: 192.0.2.1
#
## USAGE EXAMPLE ##
#  - include: fmc_objects.yaml
#    when: auth.x_auth_access_token is defined
#
---
 
## NOTE ##
# Currently only handling host and network objects!
# Other object types will likely require a j2 template to construct the body submission.

- name: "fmc_config: get all objects"
  local_action:
    module: uri
    url: "https://{{ inventory_hostname }}/api/fmc_config/v1/domain/{{ auth.domain_uuid }}/object/{{ item }}?limit=10000&expanded=true"
    method: GET
    validate_certs: no
    status_code: 200
    headers:
      Content-Type: application/json
      X-auth-access-token: "{{ auth.x_auth_access_token }}"
  with_items:
    - hosts
    - networks
  register: "all_objects_raw"
 
# Unable to figure out how to do this without a j2 template.
# FMC returns too many subelements to easily filter.

- name: "fmc_config: post new objects"
  local_action:
    module: uri
    url: "https://{{ inventory_hostname }}/api/fmc_config/v1/domain/{{ auth.domain_uuid }}/object/{{ fmc_objects | selectattr('name', 'equalto', item) | map(attribute='type') | list | last | default('hosts') | lower }}"
    method: POST
    validate_certs: no
    status_code: 201
    headers:
      Content-Type: application/json
      X-auth-access-token: "{{ auth.x_auth_access_token }}"
    body_format: json
    body:
      name: "{{ item }}"
      value: "{{ fmc_objects | selectattr('name', 'equalto', item) | map(attribute='value') | list | last }}"
      description: "{{ fmc_objects | selectattr('name', 'equalto', item) | map(attribute='description') | list | last | default('Ansible Created') }}"
      overridable: "{{ fmc_objects | selectattr('name', 'equalto', item) | map(attribute='overridable') | list | last | default('False') | bool }}"
  with_items: "{{ lookup('template', 'fmc_objects-missing.j2').split('\n') }}"
  when: (item != "") and (fmc_objects | selectattr('name', 'equalto', item) | map(attribute='value') | list | last is defined)
  changed_when: True
 
## NOTE ##
# The conditions below will not catch the sudden removal of the description or overridable key

- name: "fmc_config: modify existing objects"
  local_action:
    module: uri
    url: "{{ item.1.links.self }}"
    method: PUT
    validate_certs: no
    status_code: 200
    headers:
      Content-Type: application/json
      X-auth-access-token: "{{ auth.x_auth_access_token }}"
    body_format: json
    body:
      name: "{{ item.1.name }}"
      id: "{{ item.1.id }}"
      type: "{{ item.1.type }}"
      value: "{{ fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='value') | list | last }}"
      description: "{{ fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='description') | list | last | default('Ansible Created') }}"
      overridable: "{{ fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='overridable') | list | last | default('False') | bool }}"
  with_subelements:
    - "{{ all_objects_raw['results'] }}"
    - json.items
  when: (fmc_objects | selectattr('name', 'equalto', item.1.name) | list | count > 0) and
        (((fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='value') | list | last is defined) and (fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='value') | list | last != item.1.value)) or
         ((fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='description') | list | last is defined) and (fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='description') | list | last | default('Ansible Created') != item.1.description)) or
         ((fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='overridable') | list | last is defined) and (fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='overridable') | list | last | default('False') | bool != item.1.overridable)))
  changed_when: True

- name: "fmc_config: delete objects"
  local_action:
    module: uri
    url: "{{ item.1.links.self }}"
    method: DELETE
    validate_certs: no
    status_code: 200
    headers:
      X-auth-access-token: "{{ auth.x_auth_access_token }}"
  with_subelements:
    - "{{ all_objects_raw['results'] }}"
    - json.items
  when: (fmc_objects | selectattr('name', 'equalto', item.1.name) | list | count > 0) and
        (fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='name') | list | last is defined) and
        (fmc_objects | selectattr('name', 'equalto', item.1.name) | map(attribute='value') | list | last is undefined)
  changed_when: True

Sometimes when trying to munge an array and perform comparisons, you have to do it in a Jinja2 template. The following template builds a list of the existing object names, then checks each entry in fmc_objects to see if that object still needs to be created. This is what the POST task uses to determine which new objects will be created.

templates/fmc_objects-missing.j2: ASCII text

{#- Build a list of the existing objects -#}
{% set EXISTING = [] %}
{% for object_result in all_objects_raw['results'] %}
{% for object_line in object_result['json']['items'] %}
{{- EXISTING.append( object_line['name'] ) -}}
{% endfor %}
{% endfor %}
 
{#- Check fmc_objects to see if missing -#}
{% for fmc_object in fmc_objects %}
{% if fmc_object['name'] not in EXISTING %}
{{ fmc_object['name'] }}
{% endif %}
{% endfor %}
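As an alternative sketch, the same missing-object list can be computed inline with Ansible's difference and json_query filters, avoiding the template file at the cost of a denser expression. This is an untested assumption on my part (json_query requires the jmespath Python library), not what the playbook above actually uses:

```yaml
## Sketch: names in fmc_objects that do not yet exist on the FMC.
- name: "fmc_config: compute missing objects without a template"
  set_fact:
    missing_objects: "{{ fmc_objects | map(attribute='name') | list
                         | difference(all_objects_raw.results
                                      | json_query('[].json.items[].name')) }}"
```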
19. December 2018 · Comments Off on Ansible playbook to provision Netscaler VIPs. · Categories: Ansible, Linux, Linux Admin, Load Balancing, NetScaler, Networking

The following playbook will create a fully functional VIP, including the supporting monitor, service group (pool) and servers (nodes), on a NetScaler load balancer. Additionally, the same playbook has the ability to fully deprovision a VIP and all its supporting artifacts. To do all this I use the native NetScaler Ansible modules. When it comes to the netscaler_servicegroup module, since the number of servers is not always consistent, I create that task from a Jinja2 template, which is then imported back into the play.

netscaler_provision.yaml: a /usr/bin/ansible-playbook -f 10 script text executable, ASCII text

#!/usr/bin/ansible-playbook -f 10
## Ansible playbook to provision Netscaler VIPs.
# Requires: nitrosdk-python
# 2018 (v.01) - Playbook from www.davideaves.com
---
- name: Netscaler VIP provision
  hosts: netscaler
  connection: local
  gather_facts: False

  vars:

    ansible_connection: "local"
    ansible_python_interpreter: "/usr/bin/env python"

    state: 'present'

    lbvip:
      name: testvip
      address: 203.0.113.1
      server:
        - name: 'server-1'
          address: '192.0.2.1'
          description: 'Ansible Test Server 1'
          disabled: 'true'
        - name: 'server-2'
          address: '192.0.2.2'
          description: 'Ansible Test Server 2'
          disabled: 'true'
        - name: 'server-3'
          address: '192.0.2.3'
          description: 'Ansible Test Server 3'
          disabled: 'true'
        - name: 'server-4'
          address: '192.0.2.4'
          description: 'Ansible Test Server 4'
          disabled: 'true'
        - name: 'server-5'
          address: '192.0.2.5'
          description: 'Ansible Test Server 5'
          disabled: 'true'
        - name: 'server-6'
          address: '192.0.2.6'
          description: 'Ansible Test Server 6'
          disabled: 'true'
        - name: 'server-7'
          address: '192.0.2.7'
          description: 'Ansible Test Server 7'
          disabled: 'true'
        - name: 'server-8'
          address: '192.0.2.8'
          description: 'Ansible Test Server 8'
          disabled: 'true'
      vserver:
        - port: '80'
          description: 'Generic service running on 80'
          type: 'HTTP'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'
        - port: '443'
          description: 'Generic service running on 443'
          type: 'SSL_BRIDGE'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'
        - port: '8080'
          description: 'Generic service running on 8080'
          type: 'HTTP'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'
        - port: '8081'
          description: 'Generic service running on 8081'
          type: 'HTTP'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'
        - port: '8443'
          description: 'Generic service running on 8443'
          type: 'SSL_BRIDGE'
          method: 'LEASTCONNECTION'
          persistence: 'SOURCEIP'

  tasks:

    - name: Build lbvip and all related components.
      block:
      - local_action:
          module: netscaler_server
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          name: "{{ item.name }}"
          ipaddress: "{{ item.address }}"
          comment: "{{ item.description | default('Ansible Created') }}"
          disabled: "{{ item.disabled | default('false') }}"
        with_items: "{{ lbvip.server }}"
      - local_action:
          module: netscaler_lb_monitor
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          monitorname: "tcp_{{ lbvip.name }}_{{ item.port }}"
          type: TCP
          destport: "{{ item.port }}"
        with_items: "{{ lbvip.vserver }}"
        no_log: false
      - local_action:
          module: copy
          content: "{{ lookup('template', 'templates/netscaler_servicegroup.j2') }}"
          dest: "/tmp/svg_{{ lbvip.name }}_{{ item.port }}.yaml"
          mode: "0644"
        with_items: "{{ lbvip.vserver }}"
        changed_when: false
      - include_tasks: "/tmp/svg_{{ lbvip.name }}_{{ item.port }}.yaml"
        with_items: "{{ lbvip.vserver }}"
      - local_action:
          module: file
          state: absent
          path: "/tmp/svg_{{ lbvip.name }}_{{ item.port }}.yaml"
        with_items: "{{ lbvip.vserver }}"
        changed_when: false
      - local_action:
          module: netscaler_lb_vserver
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          name: "vs_{{ lbvip.name }}_{{ item.port }}"
          servicetype: "{{ item.type }}"
          ipv46: "{{ lbvip.address }}"
          port: "{{ item.port }}"
          lbmethod: "{{ item.method | default('LEASTCONNECTION') }}"
          persistencetype: "{{ item.persistence | default('SOURCEIP') }}"
          servicegroupbindings:
            - servicegroupname: "svg_{{ lbvip.name }}_{{ item.port }}"
        with_items: "{{ lbvip.vserver }}"
      when: state == "present"

    - name: Destroy lbvip and all related components.
      block:
      - local_action:
          module: netscaler_lb_vserver
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          name: "vs_{{ lbvip.name }}_{{ item.port }}"
        with_items: "{{ lbvip.vserver }}"
      - local_action:
          module: netscaler_servicegroup
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          servicegroupname: "svg_{{ lbvip.name }}_{{ item.port }}"
        with_items: "{{ lbvip.vserver }}"
      - local_action:
          module: netscaler_lb_monitor
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          monitorname: "tcp_{{ lbvip.name }}_{{ item.port }}"
          type: TCP
        with_items: "{{ lbvip.vserver }}"
      - local_action:
          module: netscaler_server
          nsip: "{{ inventory_hostname }}"
          nitro_user: "{{ nitro_user | default('nsroot') }}"
          nitro_pass: "{{ nitro_pass | default('nsroot') }}"
          nitro_protocol: "https"
          validate_certs: no
          state: "{{ state }}"
          name: "{{ item.name }}"
        with_items: "{{ lbvip.server }}"
      when: state == "absent"

The following is the Jinja2 template that creates the netscaler_servicegroup task. An important thing to note is my use of the raw blocks. When the rendered task is stored in /tmp it does not contain any account credentials; instead the credential variables are preserved unexpanded, which prevents leaking sensitive information to anyone who may be snooping around on the server while the playbook is running.

templates/netscaler_servicegroup.j2: ASCII text, with CRLF line terminators

---
- local_action:
    module: netscaler_servicegroup
    nsip: {% raw %}"{{ inventory_hostname }}"
{% endraw %}
    nitro_user: {% raw %}"{{ nitro_user }}"
{% endraw %}
    nitro_pass: {% raw %}"{{ nitro_pass }}"
{% endraw %}
    nitro_protocol: "https"
    validate_certs: no

    state: "{{ state | default('present') }}"

    servicegroupname: "svg_{{ lbvip.name }}_{{ item.port }}"
    comment: "{{ item.description | default('Ansible Created') }}"
    servicetype: "{{ item.type }}"
    servicemembers:
{% for i in lbvip.server %}
      - servername: "{{ i.name }}"
        port: "{{ item.port }}"
{% endfor %}
    monitorbindings:
      - monitorname: "tcp_{{ lbvip.name }}_{{ item.port }}"
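With the example variables above, the rendered file /tmp/svg_testvip_80.yaml should look roughly like this. Note that nsip, nitro_user and nitro_pass survive unexpanded thanks to the raw blocks, while everything else was resolved at render time (the member list is abbreviated here for brevity):

```yaml
---
- local_action:
    module: netscaler_servicegroup
    nsip: "{{ inventory_hostname }}"
    nitro_user: "{{ nitro_user }}"
    nitro_pass: "{{ nitro_pass }}"
    nitro_protocol: "https"
    validate_certs: no
    state: "present"
    servicegroupname: "svg_testvip_80"
    comment: "Generic service running on 80"
    servicetype: "HTTP"
    servicemembers:
      - servername: "server-1"
        port: "80"
      # ... one entry per server, through server-8
    monitorbindings:
      - monitorname: "tcp_testvip_80"
```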
04. December 2018 · Comments Off on Using Ansible to perform a Netscaler backup · Categories: Ansible, Load Balancing, NetScaler

The following Ansible playbook is a rewrite of a script from a long time ago to perform backups of a NetScaler. As far as I know, there are no native Ansible or vendor modules to perform a system backup, so within the playbook I simply make raw calls with the URI module against the Nitro API and fetch the backup file.


netscaler_systembackup.yaml: a /usr/bin/ansible-playbook -f 10 script text executable, ASCII text

#!/usr/bin/ansible-playbook -f 10
## Ansible playbook to perform a full backup of Netscaler systems
## 2018 (v.01) - Playbook from www.davideaves.com
---
- name: Netscaler full backup
  hosts: netscalers
  connection: local
  gather_facts: False

  vars:

    ansible_connection: "local"
    ansible_python_interpreter: "/usr/bin/env python"

    backup_location: "/srv/nsbackup"

    ns_sys_backup: "/var/ns_sys_backup"

  tasks:

    - name: Check backup file status
      local_action:
        module: stat
        path: "{{ backup_location }}/{{ inventory_hostname }}_{{ lookup('pipe', 'date +%Y%m%d') }}_nsbackup.tgz"
      register: stat_result

    - name: Check backup directory location
      local_action:
        module: file
        path: "{{ backup_location }}"
        state: directory
        mode: 0775
        recurse: yes
      run_once: True
      when: stat_result.stat.exists == False

    - name: Full backup of Netscaler configuration.
      block:

      - name: Create Netscaler system backup
        local_action:
          module: uri
          url: "https://{{ inventory_hostname }}/nitro/v1/config/systembackup?action=create"
          method: POST
          validate_certs: no
          return_content: yes
          headers:
            X-NITRO-USER: "{{ nitro_user | default('nsroot') }}"
            X-NITRO-PASS: "{{ nitro_pass | default('nsroot') }}"
          body_format: json
          body: 
            systembackup:
              filename: "{{ inventory_hostname | hash('md5') }}"
              level: full
              comment: Ansible Generated Backup

      - name: Fetch Netscaler system backup
        local_action:
          module: uri
          url: "https://{{ inventory_hostname }}/nitro/v1/config/systemfile?args=filename:{{ inventory_hostname | hash('md5') }}.tgz,filelocation:{{ ns_sys_backup | replace('/','%2F') }}"
          method: GET
          status_code: 200
          validate_certs: no
          return_content: yes
          headers:
            X-NITRO-USER: "{{ nitro_user | default('nsroot') }}"
            X-NITRO-PASS: "{{ nitro_pass | default('nsroot') }}"
        register: result

      - name: Save Netscaler system backup to backup directory
        local_action: "shell echo '{{ result.json.systemfile[0].filecontent }}' | base64 -d > '{{ backup_location }}/{{ inventory_hostname }}_{{ lookup('pipe', 'date +%Y%m%d') }}_nsbackup.tgz'"

      - name: Chmod saved backup file permissions
        local_action:
          module: file
          path: "{{ backup_location }}/{{ inventory_hostname }}_{{ lookup('pipe', 'date +%Y%m%d') }}_nsbackup.tgz"
          mode: 0644

      always:

      - name: Delete system backup from Netscaler
        local_action:
          module: uri
          url: "https://{{ inventory_hostname }}/nitro/v1/config/systembackup/{{ inventory_hostname | hash('md5') }}.tgz"
          method: DELETE
          validate_certs: no
          return_content: yes
          headers:
            X-NITRO-USER: "{{ nitro_user | default('nsroot') }}"
            X-NITRO-PASS: "{{ nitro_pass | default('nsroot') }}"

      - name: Locate backup files older than 90 days
        local_action:
          module: find
          paths: "{{ backup_location }}"
        age: "90d"
        run_once: true
        register: files_matched

      - name: Purge old backup files
        local_action:
          module: file
          path: "{{ item.path }}"
          state: absent
        run_once: true
        with_items: "{{ files_matched.files }}"

      when: stat_result.stat.exists == False
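As a side note, the shell/base64 step above could also be expressed natively with Ansible's b64decode filter and the copy module. This is an untested sketch; with some Ansible versions writing binary content through copy can mangle the archive, which is likely why shelling out to base64 is the safer route:

```yaml
## Sketch: filter-based alternative to the "shell echo | base64 -d" task.
- name: Save Netscaler system backup to backup directory
  local_action:
    module: copy
    content: "{{ result.json.systemfile[0].filecontent | b64decode }}"
    dest: "{{ backup_location }}/{{ inventory_hostname }}_{{ lookup('pipe', 'date +%Y%m%d') }}_nsbackup.tgz"
    mode: 0644
```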
05. August 2018 · Comments Off on Ansible playbook to handle IOS upgrades. · Categories: Ansible, Cisco, Linux, Networking

The following is an Ansible playbook I created to handle IOS upgrades against an excessively large number of Cisco routers at a customer site I was doing some work at. I saved a lot of time by staging the IOS images on flash before kicking off the playbook; had I missed any, this playbook would have uploaded the image for me before setting the boot statement. I think moving forward I will start leveraging the NTC (Network to Code) Ansible modules a lot more; they have proven themselves to be superior and more feature-rich than the built-in Cisco Ansible modules.

In addition to the NTC requirements, this playbook also requires 2 directories:

  • ./images: directory that contains IOS images.
  • ./backups: directory repository for config backups.

ansible.cfg: ASCII text

[defaults]
transport = ssh
host_key_checking = false
retry_files_enabled = false
#stdout_callback = unixy
#stdout_callback = actionable
display_skipped_hosts = false
 
timeout = 5
 
inventory = ./hosts
log_path   = ./ansible.log
 
[ssh_connection]
pipelining = True

platform_facts.csv: ASCII text

C3900,IOS,ROUTER,c3900-universalk9-mz.SPA.156-3.M4.bin
C2900,IOS,ROUTER,c2900-universalk9-mz.SPA.156-3.M4.bin
ISR4300,IOS,ROUTER,isr4300-universalk9.16.03.06.SPA.bin
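The csvfile lookups in the playbook key on the platform string in column 0 and pull the system, device type, and image filename from columns 1-3. For example, once PLATFORM resolves to ISR4300, the IMAGE fact comes from:

```yaml
## Column 3 of the ISR4300 row in platform_facts.csv above.
- debug:
    msg: "{{ lookup('csvfile', 'ISR4300 file=platform_facts.csv col=3 delimiter=,') }}"
  # yields: isr4300-universalk9.16.03.06.SPA.bin
```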

ios_upgrade.yaml: a ansible-playbook script text executable, ASCII text

#!/usr/local/bin/ansible-playbook -f 10
## Ansible playbook to handle IOS upgrades.
# Playbook will not reboot any device unless the variable REBOOT exists.
#
# Requires: https://github.com/networktocode/ntc-ansible
# Example: ansible-playbook --extra-vars "REBOOT=yes" ios_upgrade.yaml
# Example: ansible-playbook ios_upgrade.yaml --skip-tags=change
---
- name: Cisco IOS Upgrade
  hosts: [ "all" ]
  connection: local
  gather_facts: no
  tags: [ "IOS", "upgrade" ]

  vars_prompt:

  - name: "username"
    prompt: "Username"
    private: no

  - name: "password"
    prompt: "Password"

  vars:

  - ansible_connection: "local"
  - ansible_python_interpreter: "/usr/bin/env python"

  - ios_provider:
      username: "{{ username }}"
      password: "{{ password }}"
      authorize: true
      auth_pass: "{{ password }}"
      host: "{{ inventory_hostname }}"
      timeout: 120

  pre_tasks:

  - name: "ios_facts: hardware"
    ios_facts:
      gather_subset: hardware
      provider: "{{ ios_provider }}"
    connection: local
    when: (PLATFORM is not defined)
    tags: [ "pre_task", "ios_facts", "hardware" ]

  - name: "ios_command: boot configuration"
    ios_command:
      provider: "{{ ios_provider }}"
      commands:
        - "show running-config | include ^boot.system"
    connection: local
    register: COMMANDS
    tags: [ "pre_task", "ios_command", "boot", "COMMANDS" ]

  - name: "set_fact: PLATFORM"
    set_fact:
      PLATFORM: "{{ ansible_net_image|upper | regex_replace('.*[:/]') | regex_replace('([A-Z]-|-).*') }}"
    no_log: True
    when: (ansible_net_image is defined) and (PLATFORM is not defined)
    tags: [ "pre_task", "set_fact", "PLATFORM", "ansible_net_image" ]

  - name: "set_fact: SYSTEM"
    set_fact:
      SYSTEM: "{{ lookup('csvfile', PLATFORM + ' file=platform_facts.csv col=1 delimiter=,')|upper }}"
    no_log: True
    when: (PLATFORM is defined) and (SYSTEM is not defined)
    tags: [ "pre_task", "set_fact", "lookup", "platform_facts.csv", "PLATFORM", "SYSTEM" ]

  - name: "set_fact: TYPE"
    set_fact:
      TYPE: "{{ lookup('csvfile', PLATFORM + ' file=platform_facts.csv col=2 delimiter=,')|upper }}"
    no_log: True
    when: (PLATFORM is defined) and (TYPE is not defined)
    tags: [ "pre_task", "set_fact", "lookup", "platform_facts.csv", "PLATFORM", "TYPE" ]

  - name: "set_fact: IMAGE"
    set_fact:
      IMAGE: "{{ lookup('csvfile', PLATFORM + ' file=platform_facts.csv col=3 delimiter=,') }}"
    no_log: True
    when: (PLATFORM is defined) and (IMAGE is not defined)
    tags: [ "pre_task", "set_fact", "lookup", "platform_facts.csv", "PLATFORM", "IMAGE" ]

  - name: "stat: BACKUP_FILE"
    stat: path="backups/{{ inventory_hostname }}.cfg"
    no_log: True
    register: BACKUP_FILE
    tags: [ "pre_task", "stat", "BACKUP_FILE" ]

  - name: "stat: IMAGE_FILE"
    stat: path="images/{{ IMAGE }}"
    no_log: True
    register: IMAGE_FILE
    tags: [ "pre_task", "stat", "IMAGE_FILE" ]

  tasks:

  - name: "fail: missing image"
    fail:
      msg: "Platform image missing: {{ PLATFORM }}"
    when: (IMAGE[0] is undefined)

  - name: "ntc_save_config: host > local" 
    ntc_save_config:     
      platform: cisco_ios_ssh
      local_file: "backups/{{ inventory_hostname }}.cfg"
      provider: "{{ ios_provider }}"
    connection: local
    when: (BACKUP_FILE.stat.exists == False)
    tags: [ "ntc-ansible", "ntc_save_config", "cisco_ios_ssh", "BACKUP_FILE" ]

  - name: "ntc_file_copy: local > host"
    ntc_file_copy:
      platform: cisco_ios_ssh
      local_file: "images/{{ IMAGE }}"
      host: "{{ inventory_hostname }}"
      provider: "{{ ios_provider }}"
    connection: local
    when: (IMAGE_FILE.stat.exists == True) and (PLATFORM is defined) and (IMAGE is defined)
    tags: [ "ntc-ansible", "ntc_file_copy", "cisco_ios_ssh", "IMAGE", "PLATFORM", "IMAGE_FILE" ]

  - name: "ios_config: remove boot system lines"
    ios_config:
      provider: "{{ ios_provider }}"
      lines: "no {{ item }}"
    connection: local
    register: config_boot_rem
    with_items: "{{ COMMANDS.stdout_lines[0] }}"
    when: (PLATFORM is defined) and (IMAGE is defined) and
          not(IMAGE in item) and not(item == '')
    tags: [ "ios_config", "boot", "PLATFORM", "remove", "config_boot_rem", "change" ]
    notify:
      - ios write memory

  - name: "ios_config: add boot system line"
    ios_config:
      provider: "{{ ios_provider }}"
      lines: "boot system flash:{{ IMAGE }}"
      match: line
    connection: local
    register: config_boot_add
    when: (PLATFORM is defined) and (IMAGE is defined)
    tags: [ "ios_config", "boot", "PLATFORM", "IMAGE", "add", "config_boot_add", "change" ]
    notify:
      - ios write memory

  - meta: flush_handlers

  post_tasks:

  - name: "ntc_reboot: when REBOOT is defined"
    ntc_reboot:
      platform: cisco_ios_ssh
      confirm: true
      host: "{{ inventory_hostname }}"
      provider: "{{ ios_provider }}"
    connection: local
    when: (REBOOT is defined) and
          ((config_boot_add.changed == true) or (config_boot_rem.changed == true))
    tags: [ "post_task", "ntc-ansible", "ntc_reboot", "REBOOT", "change" ]
    notify:
      - wait for tcp

  handlers:

  - name: "ios write memory"
    ios_command:
      provider: "{{ ios_provider }}"
      commands: "write memory"
    connection: local

  - name: "wait for tcp"
    wait_for:
      port: 22
      host: "{{inventory_hostname}}"
      timeout: 420
    connection: local
13. June 2018 · Comments Off on Using netaddr in Ansible to manipulate network IP, CIDR, MAC and prefix. · Categories: Ansible, Cloud, Linux Admin, Networking

The following Ansible playbook is an example that demonstrates using netaddr to manipulate network IPs, CIDRs, MACs and prefixes. Additional examples can be found in the Ansible docs or, if you're looking to do the manipulation in Python, in the netaddr documentation.

#!/usr/local/bin/ansible-playbook
## Using netaddr in Ansible to manipulate network IP, CIDR, MAC and prefix
## 2018 (v.01) - Playbook from www.davideaves.com
---
- hosts: localhost
  gather_facts: false

  vars:
  - IP: 172.31.3.13/23
  - CIDR: 192.168.0.0/16
  - MAC: 1a:2b:3c:4d:5e:6f
  - PREFIX: 18

  tasks:
    - debug: msg="___ {{ IP }} ___ ADDRESS {{ IP | ipaddr('address') }}"
    - debug: msg="___ {{ IP }} ___ BROADCAST {{ IP | ipaddr('broadcast') }}"
    - debug: msg="___ {{ IP }} ___ NETMASK {{ IP | ipaddr('netmask') }}"
    - debug: msg="___ {{ IP }} ___ NETWORK {{ IP | ipaddr('network') }}"
    - debug: msg="___ {{ IP }} ___ PREFIX {{ IP | ipaddr('prefix') }}"
    - debug: msg="___ {{ IP }} ___ SIZE {{ IP | ipaddr('size') }}"
    - debug: msg="___ {{ IP }} ___ WILDCARD {{ IP | ipaddr('wildcard') }}"
    - debug: msg="___ {{ IP }} ___ RANGE {{ IP | ipaddr('range_usable') }}"
    - debug: msg="___ {{ IP }} ___ REVERSE DNS {{ IP | ipaddr('revdns') }}"
    - debug: msg="___ {{ IP }} ___ HEX {{ IP | ipaddr('address') | ip4_hex() }}"
    - debug: msg="___ {{ MAC }} ___ CISCO {{ MAC | hwaddr('cisco') }}"
    - debug: msg="___ {{ CIDR }} ___ Last /20 CIDR {{ CIDR | ipsubnet(20, -1) }}"
    - debug: msg="___ {{ CIDR }} ___ 1st IP {{ CIDR | ipaddr(1) }}"
    - debug: msg="___ {{ CIDR }} ___ 3rd from last IP {{ CIDR | ipaddr(-3) }}"