Let’s assume you want to access a monitoring host from an Ansible play which is launched by Ansible Tower/AWX, and let’s further assume that you require credentials with which to do so. The trivial demonstration play will be this one:

- name: Zshow me your variables
  hosts: all
  tasks:
  - name: Which is the zurl?
    debug: msg="it is {{ zurl }}"

  - name: And the password?
    debug: msg="the secret is {{ zpass }}"

You could well use Ansible’s extra_vars with which to do so, something along the lines of

zurl: http://zabbix.example.net/zabbix
zuser: zabbix
zpass: password

This means, though, that whoever can access the Tower/AWX job template description will see the values (in particular the password) in clear text. Furthermore, logs of job template runs will also show them, and you really don’t want that to happen, do you? You don’t!

extra vars are visible at any time in job logs

Ansible Tower/AWX have something I particularly enjoy (and I spend some time explaining it in the trainings I give): it’s called “custom credential types”.

I can create a new “type” of credential, which AWX/Tower presents to the user as though it were built-in. Furthermore, AWX/Tower secure the data therein by encrypting fields marked secret into the backend database, just like they do for other types of credentials such as machine or AWS credentials.

I first create a new type by defining the fields it will have and how these will be passed to my playbook. The former is called input configuration and the latter injector configuration.


Both these configurations are specified as YAML or as JSON, and ought to be pretty self-explanatory; note the secret attribute on my zpass field which ensures its value cannot be seen on data entry or later, when the credential is opened.

fields:
  - id: zurl
    type: string
    label: Zurl
  - id: zuser
    type: string
    label: Zusername
  - id: zpass
    type: string
    secret: true
    label: Zpassword
required:
  - zpass
  - zuser
  - zurl

There’s also a multiple choice field which can be quite practical, though it cannot be fed from a URL, unfortunately. (But thanks to the AWX API, we can pump data into that field from outside; I digress.)

The injector configuration defines how these variables should be passed to our Play. We can use extra vars as I’ve shown, but AWX/Tower can also create INI files for us and drop those in a temporary location on the controller. I find extra vars easier to handle (and to explain) and they should suffice for most use-cases.
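For the record, an extra-vars injector configuration for this credential type could look like the following sketch; the Jinja2 references on the right pull in the field values defined in the input configuration, and the names on the left are what the play sees:

```yaml
extra_vars:
  zurl: '{{ zurl }}'
  zuser: '{{ zuser }}'
  zpass: '{{ zpass }}'
```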

As soon as the credential type is defined, I can go ahead and create a credential of that type:

I create a credential of our new type

Now that I have a credential with the data we want in it, I can assign that to our job template, and I do that just like assigning any other credential type:

assign credential to job template

When I now launch my job template (note that there are no extra_vars in the template definition), the play runs as I want it to:

the play runs

Tower/AWX takes particular care of these values: they don’t appear in logs (unless I’m careless enough to provoke that as in this example where I output the values), and they cannot be overwritten, so if I erroneously create an extra var called zurl, say, AWX will not clobber the one from the credential type. (I would try to apply naming conventions so that that can’t happen by mistake, though.)

I used Zabbix in this example for two reasons: first of all I use it myself (did I tell you I intend to give a talk about Zabbix loadable C modules at the upcoming LOADays conference?), and then because I recently stumbled across a similar example on a Red Hat site which upset me because it showed credentials in extra variables; that should be a no-no.

ansible, credentials, and awx :: 16 Apr 2019 :: e-mail

After backing up all my gists and cloning all my starred repositories there is one more thing I want to accomplish: backup my Github repositories, and by that I really mean the ones I manage and have commit rights to. I could do this by cloning and periodically pulling (as we discussed here), but you might have noticed that I explicitly exclude my own repositories in that script by checking for repo.owner.login. The reason is: I want to mirror them into Gitea.

a mirrored repository in Gitea

Why Gitea? Untypically, I’d like a Web UI onto these repositories in addition to the files in the file system. It could have been Gitlab, but I think Gitea is probably the option with the lowest resource requirements.

When I add a repository to Gitea and specify I want it to be mirrored, Gitea will take charge of periodically querying the source repository and pulling changes in it. I’ve mentioned Gitea previously, and I find it’s improving as it matures. I’ve been doing this with version 1.7.5.

After setting up Gitea and creating a user, I create an API token in Gitea with which I can create repositories programmatically. The following program will obtain a list of all Github repositories I have, skip those I’ve forked from elsewhere, and then create the repository in Gitea.

#!/usr/bin/env python -B

from github import Github		# https://github.com/PyGithub/PyGithub
import requests
import json
import sys
import os

gitea_url = "http://localhost:3000/api/v1"
gitea_token = open(os.path.expanduser("~/.gitea-api")).read().strip()

session = requests.Session()        # Gitea
session.headers.update({
    "Content-type"  : "application/json",
    "Authorization" : "token {0}".format(gitea_token),
})

r = session.get("{0}/user".format(gitea_url))
if r.status_code != 200:
    print("Cannot get user details", file=sys.stderr)
    sys.exit(2)

gitea_uid = json.loads(r.text)["id"]

github_username = "jpmens"
github_token = open(os.path.expanduser("~/.github-token")).read().strip()
gh = Github(github_token)

for repo in gh.get_user().get_repos():
    # Mirror to Gitea if I haven't forked this repository from elsewhere
    if not repo.fork:
        m = {
            "repo_name"         : repo.full_name.replace("/", "-"),
            "description"       : repo.description or "not really known",
            "clone_addr"        : repo.clone_url,
            "mirror"            : True,
            "private"           : repo.private,
            "uid"               : gitea_uid,

        if repo.private:
            m["auth_username"]  = github_username
            m["auth_password"]  = "{0}".format(github_token)

        jsonstring = json.dumps(m)

        r = session.post("{0}/repos/migrate".format(gitea_url), data=jsonstring)
        if r.status_code != 201:            # if not CREATED
            if r.status_code == 409:        # repository exists
                continue
            print(r.status_code, r.text, jsonstring)

You’ll notice that I handle private Github repositories specifically in that I add username and Github token to the Gitea mirror request. While I could do that as a matter of course, the username/token tuple is stored in Gitea and is, unfortunately, displayed in the Clone from URL field when you view the mirror properties in the UI. For this reason, I limit specifying the Github repository authorization to repos which actually require it.

Gitea stores clones of the repositories it mirrors in a directory I specify when setting it up (the ROOT key in the [repository] section of app.ini), so I could access the repositories from that if something goes wrong with Gitea:

$ git clone http://localhost:3000/jpm/jpmens-jo.git


$ tree -d /path/to/gitea-repositories/jpm/jpmens-jo.git/
/path/to/gitea-repositories/jpm/jpmens-jo.git/
├── hooks
├── info
├── objects
│   ├── info
│   └── pack
└── refs
    ├── heads
    └── tags

$ git clone /path/to/gitea-repositories/jpm/jpmens-jo.git/
Cloning into 'jpmens-jo'...

I can configure Gitea’s cron schedule with an entry in app.ini:

[cron]
; Enable running cron tasks periodically.
ENABLED = true
; Run cron tasks when Gitea starts.
RUN_AT_START = false

; Update mirrors
[cron.update_mirrors]
SCHEDULE = @every 10m

[mirror]
; Default interval as a duration between each check
DEFAULT_INTERVAL = 8h
; Min interval as a duration must be > 1m
MIN_INTERVAL = 10m

The DEFAULT_INTERVAL is the default which is copied into the repository-specific mirror settings when creating the mirror. I can modify the interval in the UI, and MIN_INTERVAL is a setting which forbids users (i.e. myself) from entering shorter intervals:

repository-specific mirror settings

If I’m impatient or want to prod Gitea into mirroring a particular repository on demand, I can POST a request to its API:

curl -s -XPOST http://localhost:3000/api/v1/repos/jpm/jpmens-jo/mirror-sync \
     -H "accept: application/json" \
     -H "Authorization: token xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

In order to monitor that mirroring is actually happening, I will periodically obtain the SHA of the last commit to the master branch on Github (that’s the best I can come up with in terms of “last updated” as there really isn’t a “last SHA” independent of a particular branch) and will see if I find that particular commit on Gitea’s side. If Gitea doesn’t carry it, I yell.
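A sketch of such a check, comparing the branch head on GitHub with what Gitea reports for the same branch via its branches API; the repository names, token paths, and the Gitea URL here are assumptions carried over from the earlier examples, not my production monitor:

```python
import os


def mirror_ok(github_sha, gitea_branch):
    # Gitea's GET /repos/{owner}/{repo}/branches/{branch} response carries
    # the head commit under commit.id; the mirror is current when that
    # matches the SHA GitHub reports for the same branch.
    return gitea_branch.get("commit", {}).get("id") == github_sha


def check_mirror():
    # Network part, kept separate; requires PyGithub and requests.
    import requests
    from github import Github       # https://github.com/PyGithub/PyGithub

    gh = Github(open(os.path.expanduser("~/.github-token")).read().strip())
    sha = gh.get_repo("jpmens/jo").get_branch("master").commit.sha

    token = open(os.path.expanduser("~/.gitea-api")).read().strip()
    r = requests.get("http://localhost:3000/api/v1/repos/jpm/jpmens-jo/branches/master",
                     headers={"Authorization": "token {0}".format(token)})
    if r.status_code != 200 or not mirror_ok(sha, r.json()):
        return "yell: Gitea doesn't carry commit {0}".format(sha)
    return None
```

Run periodically from cron, a non-None return value is the cue to yell.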

So, where importing is a one-time thing, mirroring causes Gitea to periodically check whether the source repo has changed, and if so, it pulls changes in. Mirroring doesn’t pull in issues or pull requests from Github, which is a bit of a shame, but I understand it’s not trivial to do. If you want a utility which does that, gitea-github-migrator is a one-shot program which does what it says on the tin. What Gitea does bring across is a repository’s Wiki, and it does so by creating a *.wiki.git repository next to the actual repo, visible in the file system; within the UI it’s where you’d expect it to be and not separately listed.

If you want to set up your own self-hosted Gitea, it’s not difficult, and it doesn’t have to be public: mine is not Internet-accessible, but it has Internet access in order to be able to mirror repositories from GitHub.

I am not migrating away from GitHub because I see no reason to: the platform is very useful to me, and I’d not like to lose it. What I’m trying to accomplish is a fail-safe in case something happens to GitHub which would make me lose access, be that voluntarily or involuntarily.

git, gitea, and github :: 15 Apr 2019 :: e-mail

I want to be able to remote-control the launch of a restic backup via SSH from a “controlled” client (“C” in the diagram below) while being able to access the actual backup directly from “C”. The remote control is used so as to not have to keep the password of restic’s backend on the machines in the center.

diagram of Restic via SSH

In this instance, the backup server is the rest-server as it’s the fastest restic backend.

The client exports the password for the backend store as an environment variable. Our SSH server already accepts LC_ environment variables so we use one of those:

# sshd: Allow client to pass locale environment variables
AcceptEnv LANG LC_*

On the machines which will be backed up (in the center of the diagram), I have an authorized_keys entry for a dedicated user:

command="/home/resticuser/runb.sh",no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAAC....

The small shell script does a bit of logging and sets the correct environment variable for restic to authenticate to its destination REST server before actually running the backup:


echo "** kick restic on $(hostname)"

export RESTIC_REPOSITORY="rest:https://ruser:password@store.example.net:8000/bak"

restic --cacert ca.pem backup /data

This small script can optionally check $SSH_ORIGINAL_COMMAND to determine which portion of the system, or which database, it should back up.
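A hypothetical way of doing that: a small function in runb.sh which maps $SSH_ORIGINAL_COMMAND to the restic invocation to run (the target names here are made up for illustration):

```shell
#!/bin/sh
# Map the command the controller sent over SSH to a restic invocation;
# runb.sh could then do:  eval "$(restic_cmd "$SSH_ORIGINAL_COMMAND")" || exit 2

restic_cmd() {
	case "$1" in
		""|data)	echo "restic --cacert ca.pem backup /data" ;;
		etc)		echo "restic --cacert ca.pem backup /etc" ;;
		*)		return 2 ;;	# unknown target: refuse to run anything
	esac
}
```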

On the controlling side (the “C” at the top of the diagram), I kick the backup via


export LC_RESTICPW=<supersecretpasswordfromsomewhere>
ssh -o IdentityFile=keyfile \
    -o IdentitiesOnly=true \
    -o SendEnv=LC_RESTICPW \
    -l resticuser \
    targethost.example.net
I’ve also prototyped a very similar setup with BorgBackup because we just might decide to use that in this particular case as pruning is quite a bit faster, and we’re interested in the different compression methods it has to offer.

backup and restic :: 12 Apr 2019 :: e-mail

It must have been in 1989 that I literally spent weeks of my life trying to get X11 compiled and running on a 486 Compaq running SCO Unix; a compilation run took many hours, but having to cross fingers that the parameters for the screen wouldn’t burn anything is what I recall most vividly. Times have changed of course, and I’d think there are not many people who do that any more – I for one have thankfully forgotten most of it, but it still rankles, and when I think of (non-commercial) Unix on a laptop I see the ghost of RAMDAC in my nightmares.

I know many who use some variant of GNU/Linux (or systemd/Linux? ;-) on a laptop, and I’ve done it myself, albeit typically sans X11, but the combination has never tempted me away from a Mac. I can’t nail it: Linux is more than fast enough (probably faster than BSDs), certainly reliable enough – that’s not it. I think it’s an increasing dislike of the environment in terms of the documentation, the disparity of sysadmin procedures and utilities, the adoption of systemd, the differences across distributions, and the feeling that there’s so much wrong in that environment due to different groups recreating the same stuff in different wrappings, losing track of what bug has been fixed where, and has it been upstreamed, and has it been re-downstreamed, etc. It’s difficult for me to put a finger on it. Linux is here to stay, of course, and that’s good, but I don’t feel like carrying it around with me.

Early February I had an idea which sparked a bit of response:

tweet suggesting talk

So the question I wanted answered is: can you make a laptop so attractive that I’ll be tempted to move away from a Mac? TL;DR: you can.

For some time now I’ve been happy with OpenBSD; the installer is slim and fast, the software is very stable, and the documentation is almost unbeatably good. Have you an idea how valuable it is to type man and get a manual page which actually describes the program or file that’s on your system? I had an excellent experience with OpenBSD a year ago when I slapped the OS onto an old Thinkpad.

I quit smoking 922 days ago (at the time of this writing, and yes, I still keep count), and have kept track of the real money I’ve saved. I took some of that and splurged: I purchased a Lenovo Thinkpad X1 Carbon 6th gen, and I was really very impressed when I opened the carton: quite an Apple-like look-and-feel on unboxing.

root xwd

The device is lovely, and I think it’s slimmer even than my MacBook Air, and it feels lighter. I booted it up, intending to shrink the Windows 10 installation (who knows what a copy of Windows might be good for one day), and changed my mind on that after waiting for many minutes for Windows to welcome me.

I had decided I wanted FreeBSD because of ZFS and the possibility of running VirtualBox. I started off installing the latest version of TrueOS as it’s supposed to be a good method for noobs in terms of ease of installation. The installer was fine until, when creating a user (username jpm), it said “Jan-Piet Mens” contains an invalid character. For the gecos field. I kid you not. The result was a bootable system (on a second try), but it left me crying as to the sluggishness of almost everything. Then I installed Trident as TrueOS’ successor but had to attach an external USB mouse to click the GUI installer. The result didn’t boot; didn’t boot as in “it doesn’t boot”, if you know what I mean.

I should have stuck to plain FreeBSD of course, so I installed that. Several times. And then I stopped. I gave up having basically bricked the X1 in terms of not being able to give it to somebody who wants a Windows laptop.

Henrik again (he’s tireless in that respect) said OpenBSD, so I tried. To be honest my expectations in terms of “laptop and graphics and wifi and all that jazz with OpenBSD” were rock-bottom. OpenBSD as server? Any time, but on a laptop … ?

The installation worked, and I got more than I expected (which surprised the heck out of me), but the result was dreadful: Firefox hardly able to scroll a page, no video output (YouTube), a miserable window manager, … I went to bed and read a book.

It was Henrik (yeah, the same chap), who pointed me to a blog post by Cullum Smith entitled OpenBSD on a Laptop. After spending a few hours working through that text, I was gratified with a very workable environment using an “i3”-like window manager called cwm.

All in all the result is a laptop which can, for me, almost compete with a Mac, at least for most of my use-cases. It has full-disk encryption, working Ethernet and Wifi (imagine being able to type man iwm for the name of the WiFi network interface and being shown up-to-date documentation with examples). S3 works as do volume buttons on the keyboard. I have a development environment (C compiler with all the Unix utilities I’d ever want), I’ve got syncthing, restic, and matterhorn installed, and I have OpenBSD’s built-in httpd and smtpd up and running. (The former because I sometimes test clients and need an HTTP server, and the latter because I like being able to send mails with pastes etc., to myself or others.)

There are, at the end of this first day, a few things for which I need solutions: VLC plays audio but doesn’t show video, even when using the X11 output driver (moving pictures work in Firefox and Chrome), and there are a few “usability” things I have to either tweak or get used to, e.g. copy/paste from an xterm into a Web browser or vice versa. Also, the right side of the laptop gets very hot; I haven’t yet determined where the CPUs are, but I’m guessing right there, in which case something’s running too hot, literally. (top doesn’t show anything untoward.) It turned out to be a Thunderbolt BIOS setting.

It’s really pleasant to work on a system in which, when I want to use, say mosquitto, all I do is pkg_add mosquitto, and I get all binaries, and libraries, and header files without having to guess what the package with the binaries and the package with the libraries is called. (I’m looking mainly at you, Debian.) Everything is properly documented, all programs and files have man-pages, and the quality of the guides is very good.

I’m not done, and I’m not sure I will abandon Macs and macOS, but this is very promising. Some things which I’ve relied upon are going to be difficult or impossible to replace. They include:

  • iTerm2
  • Enpass
  • Calendar.app
  • being asked to connect to WiFi networks
  • and instantaneous wakeup on opening the laptop’s lid

There might be workarounds for some or all of these, but they’d be just that. All in all, however, I’m impressed with the result so far. FWIW, it’s likely that an appropriately configured Linux laptop would be just as good or maybe even better; as I said earlier: I cannot put my finger on it.

The question I have is: will I be brave enough to take this machine to BSDCan and do a presentation with it? We’ll see. (Certainly not if I cannot practice projecting beforehand.)

Further reading:

openbsd, laptop, and os :: 06 Apr 2019 :: e-mail

I mentioned a bee, and I still hear it buzzing, though it might just be the tinnitus I’ve had for a dozen years… Quite annoying.

When I find a repository on GitHub containing something I find interesting or think I’ll use, I will typically star it. Originally I starred to show the owner that I appreciate her or his work, and I still do it for that reason; I think of it as a hat tip. Then it became a method of “bookmarking” a GitHub repository. The only question I never tried to answer was “how do I find those bookmarks?”

The answer is actually quite simple once I dig into the GitHub API, and I’m fortunate in being able to use PyGithub which simplifies things greatly for me.

The following program produces a JSON file with all repositories I’ve starred and details of them:

#!/usr/bin/env python -B

# https://pygithub.readthedocs.io/en/latest/introduction.html
from github import Github # https://github.com/PyGithub/PyGithub
import json
import os

g = Github(open(os.path.expanduser("~/.gist")).read().strip())

starred = []
for repo in g.get_user().get_starred():

    # I find stars on some of my own repositories, and I don't think
    # I actually did that; artefact of prior GitHub practices?
    if repo.owner.login != "jpmens":
        data = {
            "name"          : repo.name,
            "owner"         : repo.owner.login,
            "full_name"     : repo.full_name,
            "clone_url"     : repo.clone_url,
        }
        if repo.owner.name:
            data["owner_name"] = repo.owner.name
        if repo.owner.email:
            data["owner_email"] = repo.owner.email
        if repo.owner.avatar_url:
            data["owner_avatar_url"] = repo.owner.avatar_url

        if repo.description:
            data["description"] = repo.description
        if repo.homepage:
            data["homepage"] = repo.homepage

        topics = repo.get_topics() # removing this speeds up the program
        if len(topics) != 0:
            data["topics"] = topics

        starred.append(data)

with open("starred.json", "w") as f:
    f.write(json.dumps(starred, indent=4))

The program runs for a couple of minutes because I use an additional API call to obtain more information for my archive: the topics of the particular repository. The result is (I hope) something I can use long-term to find what I’m looking for. Thinking aloud: I have the name, its owner, the owner’s full name (sometimes easier to remember than a cryptic username), the topics, and a description, hoping the repository owner’s taken the trouble to set those.
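As a taste of what that archive enables, here’s a hypothetical helper (not part of the original program) which greps through the entries of starred.json for a term in any of those fields:

```python
import json


def search_starred(entries, term):
    # Case-insensitive match against name, owner, owner's full name,
    # description, and topics; returns the matching full_names.
    term = term.lower()
    hits = []
    for e in entries:
        haystack = " ".join([
            e.get("name", ""),
            e.get("owner", ""),
            e.get("owner_name", ""),
            e.get("description", ""),
            " ".join(e.get("topics", [])),
        ])
        if term in haystack.lower():
            hits.append(e["full_name"])
    return hits


# e.g. search_starred(json.load(open("starred.json")), "beanstalk")
```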

        "name": "haricot",
        "owner": "catwell",
        "full_name": "catwell/haricot",
        "clone_url": "https://github.com/catwell/haricot.git",
        "owner_name": "Pierre Chapuis",
        "owner_avatar_url": "https://avatars1.githubusercontent.com/u/221332?v=4",
        "description": "Beanstalk client for Lua",
        "topics": [

I can then have a program run through that list and create clones of the clone_urls:



#!/bin/sh

jq -r '.[]| "\(.owner)-\(.name) \(.clone_url)"' < starred.json |
  awk '{ gsub(/[ \/]/, "-", $1); $1 = tolower($1);  print; }' | while read d u
  do
	target="$d"
	test -d "$target" || (
		mkdir -p "$target"
		git clone "$u" "$target"
	)
  done
The jq invocation produces lines of output containing owner name and repository name joined by a dash, and I have verified that neither owner nor repo name can contain a slash:

catwell-haricot https://github.com/catwell/haricot.git
dw-py-lmdb https://github.com/dw/py-lmdb.git

Some repositories are quite large, so cloning all of this takes time and costs space, but it’s worth it, to me. I will periodically have a program visit each directory and pull changes.
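That periodic visitor could be as simple as this sketch, assuming the clones live side by side under one directory (the function name and layout are mine, not from the original scripts):

```python
import os
import subprocess


def update_clones(root):
    # Run `git pull` in every subdirectory of root which looks like a
    # clone (i.e. contains .git), and return the directories visited.
    updated = []
    for name in sorted(os.listdir(root)):
        d = os.path.join(root, name)
        if os.path.isdir(os.path.join(d, ".git")):
            subprocess.call(["git", "-C", d, "pull", "--quiet"])
            updated.append(d)
    return updated
```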

So far GitHub contains most of what I’ve been interested in, but the odd GitLab or Gitea etc. repository I just clone manually.

There’s something missing in order to silence the buzzing bee; I’ll be back with a third installment to my “GitHub trilogy”.

git, GitHub, and repository :: 04 Apr 2019 :: e-mail
