Ansible Runner is a Python library and utility which helps interface with Ansible playbook runs; it also supports running ad-hoc commands and individual roles. It is used in AWX/Tower as the basis for executing playbooks, and I assume it grew from the original ansible.runner code, though I have no evidence for that. I discovered Ansible Runner (hereafter: Runner) about two years ago but mostly forgot about it until it crossed my mind fleetingly while explaining something about AWX in Antwerp recently.
Runner can be used standalone, which may sound strange: we could easily invoke ansible-playbook from, say, a shell script, so why use an additional program? For one, Runner can be set up with a specific environment in which it runs. I can give it an SSH key, variables, passwords which it feeds to SSH or sudo prompts, and even extra vars. It's very much like how AWX/Tower work when they obtain all manner of data from their underlying database and provide it in such a way that a "job template" (think "playbook" with all the environment it needs) can be executed on remote hosts.
$ mkdir jp
$ cat jp/one.yml
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Je danse le ping
      ping:
$ ansible-runner -p one.yml run jp/
PLAY [localhost] ***************************************************************
TASK [Je danse le ping] ********************************************************
ok: [localhost]
With the run subcommand, ansible-runner launches the playbook and waits for it to complete. I can also start the job in the background, keep an eye on jp/pid, or use the is-alive subcommand to check with Runner whether the playbook has completed.
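Runner records the background job's process id in that pid file, so I could also poll it myself with a signal-0 check. A rough, hypothetical helper (stdlib only; this is not Runner's actual implementation):

```python
import os
from pathlib import Path

def playbook_is_alive(private_data_dir):
    """True if the pid recorded in <private_data_dir>/pid is a live process."""
    pid_file = Path(private_data_dir) / "pid"
    if not pid_file.exists():
        return False
    pid = int(pid_file.read_text().strip())
    try:
        os.kill(pid, 0)          # signal 0 only checks existence; nothing is delivered
        return True
    except ProcessLookupError:   # pid no longer exists
        return False
    except PermissionError:      # process exists but belongs to another user
        return True
```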
In order to have ansible-playbook authenticate via SSH key to the remote systems, I install a private key into env/ssh_key for Runner to use. At runtime it copies that key to a temporary area in a private directory and wraps ansible-playbook in an invocation of ssh-agent. In the sample invocation below, $T is the path to a subdirectory Runner creates in artifacts/. (The subdirectory is named by UUID, but I can specify a name for it by setting ident.)
ssh-agent sh -c "ssh-add $T/ssh_key_data && rm -f $T/ssh_key_data && ansible-playbook two.yml"
If my servers are configured to require passwords (something I'd not do, as we can use SSH keys), I ensure sshpass is available on the controller and configure Runner to pass specific options to ansible-playbook and to interact with the password prompts:
$ cat env/cmdline
--ask-pass
$ cat env/passwords
"^SSH password.*$": "ansible"
The passwords file contains expect(1)-like responses for the password entry.
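Each key in that file is a regular expression tried against the prompt, and the value is what gets typed in as the answer. A rough sketch of that lookup (a hypothetical helper for illustration, not Runner's code):

```python
import re

# env/passwords, as a dict: prompt pattern -> response
passwords = {r"^SSH password.*$": "ansible"}

def answer_for(prompt, table):
    """Return the configured response for the first pattern matching the prompt."""
    for pattern, response in table.items():
        if re.match(pattern, prompt):
            return response
    return None      # unknown prompt: nothing to type
```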
I think Runner could be interesting for scheduling jobs to run via cron.
Embedded
I can also embed Runner into my own Python program (shown in the diagram above); what follows is an example of a very poor Mens' AWX, triggered by an HTTP POST to a tiny bottle-based program. The POST triggers an embedded Runner to launch a specific playbook, and the (curl) client gets the job's stdout when the job has run.
$ curl -Ss http://localhost:8081/job_run/two.yml -d "$(jo username=Jane number=55 botella='vino tinto' )"
PLAY [localhost] ***********************************************
TASK [Do ze pong] **********************************************
ok: [localhost]
TASK [debug] ***************************************************
ok: [localhost] => {
"botella": "vino tinto"
}
PLAY RECAP *****************************************************
localhost : ok=2 changed=0 unreachable=0 ...
The small program prepares Ansible's extra_vars, runs the specified playbook (which must be in the private_data_dir directory) using ansible-runner, and returns the standard output we'd normally see during a playbook run with ansible-playbook to the HTTP client. I set NOCOLOR to avoid the escape sequences I'd otherwise get in stdout, and NOCOWS because.
#!/usr/bin/env python

from bottle import route, run, request
import ansible_runner
import json
import sys
import os

@route('/job_run/<playbook>', method='POST')
def job_run(playbook):
    extravars = {}
    try:
        extravars = json.loads(request.body.read())
    except Exception as e:
        print(str(e), file=sys.stderr)

    os.environ["ANSIBLE_NOCOLOR"] = "1"
    os.environ["ANSIBLE_NOCOWS"] = "1"

    params = {
        "playbook"  : playbook,
        "extravars" : extravars,
        "json_mode" : False,   # well-known playbook output
        "quiet"     : True,    # no stdout here in runner
    }
    r = ansible_runner.run(private_data_dir='jp/', **params)

    # return the playbook run output
    return open(r.stdout.name, "r").read()

run(host='localhost', port=8081, debug=False)
Runner writes the last job's extra_vars to env/extravars in the project directory.
{"username": "Jane", "number": 55, "botella": "vino tinto"}
Runner can optionally send status and events (i.e. each of the tasks at start and termination, with all of the metadata a task returns) to an external service via, say, HTTP. All I need to do is install the ansible-runner-http module and create settings within the project directory:
$ cat env/settings
runner_http_url: http://127.0.0.1/~jpm/post.php
runner_http_headers: { secreto: "beefdead", nombre: Jane }
and the status receiver gets this:
Host: 127.0.0.1
User-Agent: python-requests/2.23.0
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive
secreto: beefdead
nombre: Jane
Content-Length: 80
Content-Type: application/json
{
"status": "successful",
"runner_ident": "04cbd0d5-0cb7-4ed8-8753-32f73b861912"
}
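Instead of the post.php endpoint above, a stand-in status receiver is a few lines of stdlib Python. This is just a sketch for experimenting (the class and port are my own choices, not anything Runner prescribes):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusReceiver(BaseHTTPRequestHandler):
    """Accept whatever the runner_http plugin posts and print it."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # the custom headers from env/settings arrive verbatim
        print(self.headers.get("secreto"), payload.get("status"))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the per-request access log quiet

def serve(port=8080):
    HTTPServer(("127.0.0.1", port), StatusReceiver).serve_forever()
```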
There’s also a ZeroMQ plugin which I’ve not tested, but I couldn’t resist taking the source of the http plugin, basically replacing “http” by “mqtt”, and watching how Runner publishes events over MQTT.
All tasks and events with their metadata, the gathered facts, the command, stdout, the status ... all of it is stored in the artifacts/ directory (the path to which can be configured), in a subdirectory named by UUID which I can also explicitly name by setting ident. This artifacts directory can be automatically cleaned out by setting rotate_artifacts to the number of directories to keep.
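Conceptually, rotate_artifacts just prunes all but the newest N job directories; a rough sketch of that behaviour (my own illustration, not Runner's code):

```python
import shutil
from pathlib import Path

def rotate_artifacts(artifact_dir, keep):
    """Delete all but the `keep` most recently modified job directories."""
    dirs = sorted((d for d in Path(artifact_dir).iterdir() if d.is_dir()),
                  key=lambda d: d.stat().st_mtime,
                  reverse=True)
    for stale in dirs[keep:]:
        shutil.rmtree(stale)
```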
$ tree jp
jp
├── ansible.cfg
├── artifacts
│ └── a3b819cc-81b9-4856-8349-f571de661684
│ ├── command
│ ├── fact_cache
│ │ └── localhost
│ ├── job_events
│ │ ├── 1-f22f73bb-fff7-4d7a-a52c-97ed01c15734.json
│ │ ├── 10-13d4810a-ce18-4f3d-aabf-b49bc67364a8.json
│ │ ├── 11-21536bc3-0f33-4fdb-91b0-c64a8adcf3d6.json
│ │ ├── 12-4ff9d68a-28e9-44a5-9e68-8edafed3a7d6.json
│ │ ├── 2-48bf6be9-46a2-97b1-c5d5-000000000006.json
│ │ ├── 3-48bf6be9-46a2-97b1-c5d5-00000000000d.json
│ │ ├── 4-9122d5fc-3c30-42d3-bb0e-2d2f28c27bba.json
│ │ ├── 5-3d240a76-3711-41ff-80c6-56e2d7f61ada.json
│ │ ├── 6-48bf6be9-46a2-97b1-c5d5-000000000008.json
│ │ ├── 7-a2ea061e-4261-41d6-9b74-b48c4e7805ae.json
│ │ ├── 8-22c899d2-f708-4929-9acf-35233a9bdcab.json
│ │ └── 9-48bf6be9-46a2-97b1-c5d5-000000000009.json
│ ├── rc
│ ├── status
│ └── stdout
├── env
│ ├── extravars
│ ├── settings
│ └── ssh_key
├── one.yml
└── two.yml
The job_events JSON files contain all metadata returned in the individual events.
$ for f in jp/artifacts/a3b*/job_events/*; do jq -r .event < $f ; done
playbook_on_start
runner_on_start
runner_on_ok
playbook_on_stats
playbook_on_play_start
playbook_on_task_start
runner_on_start
runner_on_ok
playbook_on_task_start
runner_on_start
runner_on_ok
playbook_on_task_start
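The shell glob sorts those filenames lexically (1, 10, 11, 12, 2, 3, ...), which is why the listing above looks shuffled. The same extraction in Python, sorting on the numeric counter prefix, yields the events in their actual order:

```python
import json
from pathlib import Path

def event_types(job_events_dir):
    """Yield each event's type, ordered by the numeric counter prefix
    of the <counter>-<uuid>.json filenames."""
    files = sorted(Path(job_events_dir).glob("*.json"),
                   key=lambda p: int(p.name.split("-", 1)[0]))
    for f in files:
        with f.open() as fh:
            yield json.load(fh)["event"]
```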
You can look at the actual job_events JSON files for the above example playbook run.
I think Ansible Runner is a solution for cases in which AWX/Tower is too large a system to deploy and maintain in your environment. It's much more lightweight and easier to deploy than AWX, but the latter has a large number of impressive features not in Runner: a database back-end, ready-made logging via callback plugins, and custom credentials are just a few which come to mind.
I conducted my ansible-runner experiments on macOS and on FreeBSD.