20. February 2018 · Comments Off on Using Ansible to provision Vultr virtual servers · Categories: Ansible, Cloud, Linux, Networking, Vultr

Ansible is a great automation tool. At work I’ve been using it primarily as a network policy enforcement and automation tool, such as managing ACLs on Cisco routers, but personally I use it more for server deployment and automation. For a while now I’ve been wanting to migrate a few of my general-use virtual servers to Vultr, but hit a road block with free time. Last week in my lab I upgraded to the development version of Ansible (2.6) and noticed that Vultr support had been added to the cloud modules, so I started experimenting with them. After a day or two of experimentation it sprouted into a full-blown provisioning playbook. The following is what I created…

What I want is a single playbook that can build a server from scratch, and another playbook that can do the same in reverse. For cost-saving and security reasons, I want to be able to run the deprovisioning playbook from crontab to automatically destroy any LAB VMs I spin up at the end of the day.
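For the cron side, a nightly entry along these lines would do it (the playbook path, vars file and log location are illustrative, not what I actually run):

```shell
# Illustrative root crontab entry: tear down all LAB-tagged VMs at 23:30 daily.
# Paths are assumptions; point them at wherever the playbook and vars file live.
30 23 * * * /usr/local/bin/vr_deprovision.yaml -e VARS=/etc/ansible/LAB.yaml >> /var/log/vr_deprovision.log 2>&1
```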

The following is the provisioning playbook… I have a host_vars file that contains my API key and other settings common to all my instances. The instances to be provisioned are passed via a variable called “VARS” when the playbook is run. Since my vars are modular, if I had a real ticketing system I could have it output a properly formatted instance.yaml file for the playbook to use. Concerning the Ansible modules, there is one quirky thing about them… They require an environment variable called VULTR_API_KEY, and they also require a vultr.ini file with the same VULTR_API_KEY set in it. To make things easier I deal with that transparently within the playbook, which creates the INI file automatically if it does not exist.
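To see what that INI handling amounts to, here is a rough shell equivalent of the playbook’s two ~/.vultr.ini tasks (it writes ./vultr.ini locally with a placeholder key, so nothing real is touched):

```shell
# Rough shell equivalent of the playbook's ~/.vultr.ini bootstrap tasks.
# Uses ./vultr.ini and a placeholder key; the real playbook targets ~/.vultr.ini.
INI="./vultr.ini"
touch "$INI" && chmod 600 "$INI"
grep -q '^\[default\]' "$INI" || printf '[default]\nVULTR_API_KEY=%s\n' "YOURAPIKEY" > "$INI"
cat "$INI"
```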

vr_provision.yaml: a /usr/local/bin/ansible-playbook script, Vultr server provisioning

#!/usr/local/bin/ansible-playbook
## Provision virtual instances on VULTR.
## 2018 (v.01) - Playbook from www.davideaves.com
---
- name: "VULTR provision instance"
  hosts: localhost
  connection: local
  gather_facts: false

  environment:
    VULTR_API_KEY: "{{ vultr_common.apikey }}"

  tasks:

    - name: "GET user running the deploy"
      local_action: command whoami
      changed_when: False
      register: WHOAMI
 
    ### Prerequisite validation ###

    - block:
        - name: "Playbook external variables include"
          include_vars: "{{ VARS }}"
          when: (VARS is defined)
      always:
        - name: "Playbook external variables example"
          local_action:
            debug msg="playbook.yaml -e VARS=tag.yaml"
          when: (VARS is not defined)

    - name: "Playbook requirement check"
      fail:
        msg: |
          Required variable is undefined!
              > vultr_common.apikey
              > vultr_common.inventory_file
              > vultr.servers: ...
      when: (vultr_common is undefined) or
            (vultr_common.apikey is undefined) or
            (vultr_common.inventory_file is undefined) or
            (vultr is undefined) or
            (vultr.servers is undefined)

    - name: "Inventory file status"
      stat: path="{{ vultr_common.inventory_file }}"
      register: INVENTORY_FILE

    - name: "Inventory file writeable"
      fail:
        msg: "{{ vultr_common.inventory_file }} not writeable"
      when: not(INVENTORY_FILE.stat.writeable)

    - name: "~/.vultr.ini handling block"
      block:
        - name: "Validate ini file exists"
          file:
            path: "~/.vultr.ini"
            mode: 0600
            state: touch
          changed_when: False
        - name: "VULTR_API_KEY is present"
          ini_file:
            path: "~/.vultr.ini"
            section: default
            option: "VULTR_API_KEY"
            value: "{{ vultr_common.apikey }}"
            no_extra_spaces: yes
            state: present
      rescue:
        - fail:
            msg: "Unable to handle ~/.vultr.ini"
 
    ### Collect account balance and Fail if too low ###

    - name: "VULTR account facts"
      local_action:
        module: vr_account_facts

    - name: "Account balance requirement check"
      fail:
        msg: "Account balance low: {{ ansible_facts.vultr_account_facts.balance|int }}"
      when: (ansible_facts.vultr_account_facts.balance|int > vultr_common.min_balance)
 
    ### Configure SSH Keys ###

    - name: "VULTR user authorized_key"
      local_action:
        module: vr_ssh_key
        name: "{{ WHOAMI.stdout }}"
        ssh_key: "{{ lookup('file', '~/.ssh/authorized_keys') }}"
 
    ### Configure Firewall ###
    # These are additive, nothing is removed unless specified or done manually.

    - name: "VULTR firewall groups"
      local_action:
        module: vr_firewall_group
        name: "{{ firewall_group }}"
      when: (firewall_group is defined) and
            (vultr_common.firewall_group is defined) and
            (vultr_common.firewall_group[firewall_group] is defined)

    - name: "Get public IP of ansible host"
      local_action:
        module: ipify_facts
      when: not(ansible_facts.ipify_public_ip is defined)

    - name: "VULTR firewall rule: {{ firewall_group }}/management"
      local_action:
        module: vr_firewall_rule
        group: "{{ firewall_group }}"
        protocol: tcp
        port: 22
        ip_version: v4
        cidr: "{{ ansible_facts.ipify_public_ip | ipv4 }}/32"
      when: (firewall_group is defined) and
            (ansible_facts.ipify_public_ip is defined) and
            (vultr_common.firewall_group[firewall_group] is defined)

    - name: "VULTR firewall rule: {{ firewall_group }}"
      local_action:
        module: vr_firewall_rule
        group: "{{ firewall_group }}"
        protocol: "{{ item.protocol | default('tcp') }}"
        port: "{{ item.port | default('0') }}"
        ip_version: "{{ item.ip_version | default('v4') }}"
        state: "{{ item.state | default('present') }}"
      with_items: "{{ (vultr_common.firewall_group[firewall_group]) }}"
      when: (firewall_group is defined) and
            (vultr_common.firewall_group[firewall_group] is defined)
 
    ### Deploy Instances ###

    - name: "VULTR provision instances"
      local_action:
        module: vr_server
        name: "{{ item.name }}"
        hostname: "{{ item.name }}"
        ipv6_enabled: yes
        os: "{{ item.os }}"
        plan: "{{ item.plan }}"
        private_network_enabled: yes
        region: "{{ item.region }}"
        ssh_key: "{{ WHOAMI.stdout }}"
        state: present
        firewall_group: "{{ firewall_group | default('') }}"
        force: False
        tag: "{{ tag | default('none') }}"
      with_items: "{{ vultr.servers }}"
      register: BUILD
 
    ### Update Ansible inventory ###

    - name: "Initialize SERVER list"
      set_fact: SERVER=[]
      when: (BUILD is defined)
      no_log: True

    - name: "Populate SERVER list"
      set_fact:
        SERVER: "{{ SERVER }} + [ '{{ item.vultr_server.name }},{{ item.vultr_server.v4_main_ip }},{{ item.vultr_server.v6_main_ip }}' ]"
      with_items: "{{ BUILD.results }}"
      when: (SERVER is defined)
      no_log: True

    - name: "Update inventory file with SERVER list"
      ini_file:
        path: "{{ vultr_common.inventory_file }}"
        section: vultr
        option: "{{ item.split(',')[0] }} ansible_host"
        value: "{{ item.split(',')[1] }}"
        no_extra_spaces: yes
        mode: 0666
        state: present
        backup: yes
      with_items: "{{ SERVER }}"
      when: (SERVER is defined)
 
    ### Update DNS ###

    - name: "VULTR DNS domain"
      local_action:
        module: vr_dns_domain
        name: "{{ item.split(',')[0] | regex_replace('^\\w+.') }}"
        server_ip: 127.0.0.1
        state: present
      with_items: "{{ SERVER }}"
      when: (SERVER is defined)

    - name: "VULTR DNS A record"
      vr_dns_record:
        record_type: A
        name: "{{ item.split(',')[0] | regex_replace('[.]\\w*') }}"
        domain: "{{ item.split(',')[0] | regex_replace('^\\w+.') }}"
        data: "{{ item.split(',')[1] }}"
        ttl: 300
      with_items: "{{ SERVER }}"
      when: (SERVER is defined)

    - name: "VULTR DNS AAAA record"
      vr_dns_record:
        record_type: AAAA
        name: "{{ item.split(',')[0] | regex_replace('[.]\\w*') }}"
        domain: "{{ item.split(',')[0] | regex_replace('^\\w+.') }}"
        data: "{{ item.split(',')[2] }}"
        ttl: 300
      with_items: "{{ SERVER }}"
      when: (SERVER is defined)
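The DNS tasks above derive the record name and zone from each FQDN with regex_replace filters; for simple three-label names like the ones in my server list, the split is equivalent to this shell parameter expansion:

```shell
# How the DNS tasks split an FQDN (hostname is illustrative).
FQDN="curly.example.com"
NAME="${FQDN%%.*}"   # first label  -> "curly"       (regex_replace('[.]\w*'))
ZONE="${FQDN#*.}"    # remainder    -> "example.com" (regex_replace('^\w+.'))
echo "$NAME $ZONE"
```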

Below is the host_vars file that I am using for this playbook. The vultr_common.firewall_group entries contain the firewall rules to create and must match what’s specified in the server YAML file.

host_vars/localhost.yaml: vultr_common variables

---
vultr_common:
  apikey: "!!! YOURAPIKEY !!!"
  inventory_file: /etc/ansible/hosts
  min_balance: -50
  firewall_group:
    shellserver:
    -
      protocol: icmp
      ip_version: v4
    -
      protocol: icmp
      ip_version: v6
    webserver:
    -
      protocol: tcp
      port: 80
      ip_version: v4
    -
      protocol: tcp
      port: 443
      ip_version: v4
    -
      protocol: tcp
      port: 80
      ip_version: v6
    -
      protocol: tcp
      port: 443
      ip_version: v6
    -
      protocol: icmp
      ip_version: v4
    -
      protocol: icmp
      ip_version: v6

Below is the list of servers to be created or destroyed by the playbooks.

PROD-WEB.yaml: vultr server list

---
tag: PROD-WEB
firewall_group: webserver
modified: Mon, 5 Feb 2018 21:53:52 -0500
vultr:
  servers:
  - name: curly.example.com
    os: Debian 9 x64 (stretch)
    plan: 1024 MB RAM,25 GB SSD,1.00 TB BW
    region: Chicago
  - name: larry.example.com
    os: Debian 9 x64 (stretch)
    plan: 1024 MB RAM,25 GB SSD,1.00 TB BW
    region: Chicago
  - name: moe.example.com
    os: Debian 9 x64 (stretch)
    plan: 1024 MB RAM,25 GB SSD,1.00 TB BW
    region: Chicago
...

Once the servers are created, I simply run follow-up playbooks against the “vultr” servers section of my inventory file to install, configure and secure any software on the individual servers.
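For reference, the ini_file inventory task leaves a section like this in /etc/ansible/hosts (the addresses are illustrative):

```ini
[vultr]
curly.example.com ansible_host=203.0.113.10
larry.example.com ansible_host=203.0.113.11
moe.example.com ansible_host=203.0.113.12
```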


To clean things up, the following is my deprovisioning playbook. This will destroy the servers, related DNS records and host entries in the Ansible inventory. Firewall groups and DNS zones are left intact for future deployments or for other servers that may still be running.

vr_deprovision.yaml: a /usr/local/bin/ansible-playbook script: Vultr server deprovisioning

#!/usr/local/bin/ansible-playbook
## Deprovision virtual instances on VULTR.
## 2018 (v.01) - Playbook from www.davideaves.com
---
- name: "VULTR deprovision instance"
  hosts: localhost
  connection: local
  gather_facts: false

  environment:
    VULTR_API_KEY: "{{ vultr_common.apikey }}"

  tasks:

    ### Prerequisite validation ###

    - block:
        - name: "Playbook external variables include"
          include_vars: "{{ VARS }}"
          when: (VARS is defined)
      always:
        - name: "Playbook external variables example"
          local_action:
            debug msg="playbook.yaml -e VARS=tag.yaml"
          when: (VARS is not defined)

    - name: "Playbook requirement check"
      fail:
        msg: |
          Required variable is undefined!
              > vultr_common.apikey
              > vultr_common.inventory_file
              > vultr.servers: ...
      when: (vultr_common is undefined) or
            (vultr_common.apikey is undefined) or
            (vultr_common.inventory_file is undefined) or
            (vultr is undefined) or
            (vultr.servers is undefined)

    - name: "Inventory file status"
      stat: path="{{ vultr_common.inventory_file }}"
      register: INVENTORY_FILE

    - name: "Inventory file writeable"
      fail:
        msg: "{{ vultr_common.inventory_file }} not writeable"
      when: not(INVENTORY_FILE.stat.writeable)

    - name: "~/.vultr.ini handling block"
      block:
        - name: "Validate ini file exists"
          file:
            path: "~/.vultr.ini"
            mode: 0600
            state: touch
          changed_when: False
        - name: "VULTR_API_KEY is present"
          ini_file:
            path: "~/.vultr.ini"
            section: default
            option: "VULTR_API_KEY"
            value: "{{ vultr_common.apikey }}"
            no_extra_spaces: yes
            state: present
      rescue:
        - fail:
            msg: "Unable to handle ~/.vultr.ini"
 
    ### Destroy Instances ###

    - name: "VULTR deprovision instances"
      local_action:
        module: vr_server
        name: "{{ item.name }}"
        state: absent
      with_items: "{{ vultr.servers }}"
      register: BUILD
 
    ### Update Ansible inventory ###

    - name: "Initialize empty list (SERVER)"
      set_fact: SERVER=[]
      when: (BUILD is defined) and (BUILD.changed)
      no_log: True

    - name: "Populate empty list (SERVER)"
      set_fact:
        SERVER: "{{ SERVER }} + [ '{{ item.vultr_server.name }},{{ item.vultr_server.v4_main_ip }},{{ item.vultr_server.v6_main_ip }}' ]"
      with_items: "{{ BUILD.results }}"
      when: (SERVER is defined)
      no_log: True

    - name: "Remove servers from inventory file."
      ini_file:
        path: "{{ vultr_common.inventory_file }}"
        section: vultr
        option: "{{ item.split(',')[0] }} ansible_host"
        value: "{{ item.split(',')[1] }}"
        no_extra_spaces: yes
        mode: 0666
        state: absent
        backup: yes
      with_items: "{{ SERVER }}"
      when: (SERVER is defined)
 
    ### Update DNS ###

    - name: "VULTR DNS A record"
      vr_dns_record:
        record_type: A
        name: "{{ item.split(',')[0] | regex_replace('[.]\\w*') }}"
        domain: "{{ item.split(',')[0] | regex_replace('^\\w+.') }}"
        state: absent
      with_items: "{{ SERVER }}"
      when: (SERVER is defined)

    - name: "VULTR DNS AAAA record"
      vr_dns_record:
        record_type: AAAA
        name: "{{ item.split(',')[0] | regex_replace('[.]\\w*') }}"
        domain: "{{ item.split(',')[0] | regex_replace('^\\w+.') }}"
        state: absent
      with_items: "{{ SERVER }}"
      when: (SERVER is defined)

04. February 2018 · Comments Off on Collect and archive all runtime information, statistics and status on F5 systems · Categories: F5, Linux, Linux Scripts, Load Balancing, Networking

Last March I posted a TCL/Expect script (rtrinfo.exp) to back up configs and regularly collect runtime information via show commands on Cisco devices for archival purposes. It’s proven very useful and replaces the need to purchase convoluted commercial software to archive device configs. Not only do I use it, but I know of a few companies that have adopted it as well. Recently I needed something similar that could collect and archive all runtime information, statistics and status on F5 systems; the following is that script.

The script works by pushing a small, base64 encoded, command string to the F5 to be executed. The command string simply does a “tmsh -q show \?” to get a list of all show commands based on the enabled modules. The available runtime information is collected and piped to a for loop that runs all available show commands.

echo "Zm9yIE1PRCBpbiBgdG1zaCAtcSBzaG93IFw/IHwgc2VkIC1uIC1lICcvTW9kdWxlczovLC9PcHRpb25zOi9wJyB8IGF3ayAnL14gIC97cHJpbnQgJDF9J2A7IGRvIHRtc2ggLXEgc2hvdyAkTU9EIDI+IC9kZXYvbnVsbDsgZG9uZQ==" | base64 -d
for MOD in `tmsh -q show \? | sed -n -e '/Modules:/,/Options:/p' | awk '/^  /{print $1}'`; do tmsh -q show $MOD 2> /dev/null; done
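If you ever need to tweak the loop, the encoded string can be regenerated the same way (shown here with GNU base64; -w0 keeps the output on a single line):

```shell
# Re-encode the tmsh one-liner after editing it; -w0 disables line wrapping.
CMD='for MOD in `tmsh -q show \? | sed -n -e '\''/Modules:/,/Options:/p'\'' | awk '\''/^  /{print $1}'\''`; do tmsh -q show $MOD 2> /dev/null; done'
printf '%s' "$CMD" | base64 -w0
```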

All the output from the F5 is collected, and some awk-foo is used to determine the appropriate output destination on a per-line basis. The while loop appends each line to the appropriate file. Additionally, all previous output is archived to an appropriately named tar.gz file. I have also added the ability to silence the output, specify an output path override, and use root’s SSH private key instead of the password (for running via cron).
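As a toy illustration of that routing (a shell re-implementation of just the “Module::Component: name” header case, not the script’s full awk):

```shell
# Toy version of the header-to-path mapping: an F5 header line like
# "Ltm::Virtual Server: vs_www" routes output to "ltm-virtual_server/vs_www".
HDR='Ltm::Virtual Server: vs_www'
DIR=$(printf '%s' "${HDR%: *}" | sed -e 's/::/-/g' -e 's/ /_/g' | tr 'A-Z' 'a-z')
FILE="${HDR##*: }"
echo "$DIR/$FILE"   # -> ltm-virtual_server/vs_www
```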

f5info.sh: Bourne-Again shell script, ASCII text executable

#!/bin/bash
## Collect and archive all runtime information, statistics and status on F5 systems.
## 2018 (v1.0) - Script from www.davideaves.com
 
OUTDIR="."
 
### Script Functions ###
function USAGE () {
 # Display the script arguments.
 printf "Usage: $0 -d bigip -i id_rsa -p path\n\n"
 printf "Requires:\n"
 printf "\t-d: Target F5 system.\n"
 printf "Options:\n"
 printf "\t-i: Private id_rsa of root user.\n"
 printf "\t-p: Destination of output directory.\n"
 printf "\t-q: Quiet, do not show anything.\n"
}
 
function CLEANUP {
 # Cleanup after the script finishes.
 [ -e "${IDENTITY}" ] && { rm -rf "${IDENTITY}"; }
}
 
### Get CLI options ###
while getopts "d:i:p:q" ARG; do
 case "${ARG}" in
  d) F5="${OPTARG^^}";;
  i) trap CLEANUP EXIT
     IDENTITY="$(mktemp)"
     chmod 600 "${IDENTITY}" && cat "${OPTARG}" > "${IDENTITY}";;
  p) OUTDIR="${OPTARG}";;
  q) QUIET="YES";;
 esac
done 2> /dev/null
 
### Display USAGE if F5 not defined ###
[ -z "${F5}" ] && { USAGE && exit 1; }
 
### Archive & Create OUTDIR ###
if [ -d "${OUTDIR}/${F5}" ]
 then ARCHIVE="${OUTDIR}/${F5}_$(date +%Y%m%d -d @$(stat -c %Y "${OUTDIR}/${F5}")).tar.gz"
 
  [ -e "${ARCHIVE}" ] && { rm -f "${ARCHIVE}"; }
  [ -z "${QUIET}" ] && { echo "Archiving: ${ARCHIVE}"; }
  tar zcfP "${ARCHIVE}" "${OUTDIR}/${F5}" && rm -rf "${OUTDIR}/${F5}"
 
fi && ssh -q -o StrictHostKeyChecking=no `[ -r "${IDENTITY}" ] && { echo -i "${IDENTITY}"; }` root@${F5} \
'bash -c "$(base64 -di <<< Zm9yIE1PRCBpbiBgdG1zaCAtcSBzaG93IFw/IHwgc2VkIC1uIC1lICcvTW9kdWxlczovLC9PcHRpb25zOi9wJyB8IGF3ayAnL14gIC97cHJpbnQgJDF9J2A7IGRvIHRtc2ggLXEgc2hvdyAkTU9EIDI+IC9kZXYvbnVsbDsgZG9uZQ==)"' |\
 awk 'BEGIN{
  FS=": "
 }
 // {
  gsub(/[ \t]+$/, "")
 
  # LN buffer
  LN[1]=LN[0]
  LN[0]=$0
 
  # Build OUTPUT variable
  ## LN ends with "{" - special case header
  if(substr(LN[0],length(LN[0]),1) == "{") {
   COUNT=split(LN[0],FN," ") - 1
   for (i = 1; i <= COUNT; i++) FILE=FILE FN[i] "_"
   OUTPUT=substr(FILE, 1, length(FILE)-1)
  }
  ## LN does not contain "::" but is a header
  else if(OUTPUT == "" && LN[0] ~ /^[a-zA-Z]/) {
   OUTPUT=LN[0]
  }
  ## LN contains "::" and is a header
  else if(LN[0] ~ /^[A-Z].*::[A-Z]/) {
   gsub(/::/,"-")
   DIR=gensub(/\ /, "_", "g", tolower($1))
   FILE=gensub(/[ ,:].*/, "", "g", $2)
   if(FILE != "") {
    OUTPUT=DIR"/"FILE
   } else {
    OUTPUT=DIR
   }
  }
 
  # Print OUTPUT & LN buffer
  if(OUTPUT != "") print(OUTPUT"<"LN[1])
 }
 END{
  print(OUTPUT"<"LN[0])
 }' | while IFS="<" read OUTPUT LN
  do
 
     if [ ! -w "${OUTDIR}/${F5}/${OUTPUT}" ]
      then [ -z "${QUIET}" ] && { echo "Saving: ${OUTDIR}/${F5}/${OUTPUT}"; }
           install -D /dev/null -m 644 "${OUTDIR}/${F5}/${OUTPUT}"
     fi && echo "${LN}" >> "${OUTDIR}/${F5}/${OUTPUT}"
 
 done