I spend a bit of time explaining the DNS Start Of Authority (SOA) record in introductory DNS trainings. This is what a DNS SOA record (the first record in a zone file and one which must exist exactly once in a zone) looks like:

example.net.   3600 IN  SOA mname rname (
                        17       ; serial
                        7200     ; refresh (2 hours)
                        3600     ; retry (1 hour)
                        1209600  ; expire (2 weeks)
                        900 )    ; negttl [minimum] (15 minutes)

We discuss the individual fields and scenarios for their values (also pointing out that recommended SOA values may or may not be useful). I specifically talk about the expire field and what its use is. You will know that if a secondary server for this zone cannot contact a primary for expire seconds, the secondary server will no longer respond to queries for this zone, preferring to SERVFAIL rather than to respond with stale data. That is how I learned what the field means. Quite straightforward actually.
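Put as a back-of-the-envelope sketch (the timestamp is invented; the expire value is the one from the record above): expire counts from the last successful refresh or transfer.

```shell
# The secondary stops answering for the zone at last_refresh + expire.
last_refresh=1700000000     # hypothetical epoch time of last successful refresh
expire=1209600              # 2 weeks, from the SOA above
stops_at=$((last_refresh + expire))
echo "$stops_at"            # 1701209600
```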

I would not have brought up the topic had it not been for a participant who asked what happens if expire is configured to zero (0) seconds.

After saying “don’t do that!” and threatening to get a frozen trout from the fridge if further such questions arose, I put the question aside, but that evening I decided to investigate. Unfortunately, as it turns out.

DNS specifications and exceptions ... ;)

Shaft points out that Wikipedia says:

This value must be bigger than the sum of Refresh and Retry

but that there’s no source for the statement, nor is there an affirmation of it in RFCs 1034 and 1035.

What would actually happen if an authoritative primary provided a zone with expire=0 in its SOA?

My first thought was that the secondary server, upon receiving a transfer with expire=0, would just immediately expire the zone. Easy enough to test, and it turned out that a BIND secondary does not do that at all but continues serving the zone “for a while”. (I initially reported BIND serves the zone for an hour before expiring it, but that is wrong.) Thanks to Evan, who directed me to the function I hadn’t been able to find in the source code, I learned that expire is set to at least refresh + retry (and has been since 1999), whereby the latter two values each have a minimum of 5 minutes. I also learned that BIND caps expire at 14515200 seconds, i.e. 24 weeks.
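A sketch (emphatically not BIND’s actual code) of that clamping: refresh and retry get a 5-minute floor, and expire is then raised to at least refresh + retry and capped at 24 weeks.

```shell
refresh=7200; retry=3600; expire=0   # expire=0, as in the participant's question
min=300                              # 5-minute floor for refresh and retry
cap=14515200                         # 24-week cap on expire
[ "$refresh" -lt "$min" ] && refresh=$min
[ "$retry" -lt "$min" ] && retry=$min
[ "$expire" -lt $((refresh + retry)) ] && expire=$((refresh + retry))
[ "$expire" -gt "$cap" ] && expire=$cap
echo "$expire"                       # 10800: the zone survives at least refresh + retry
```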

The introductory training had already finished, but I contacted the participants and reported our findings. (I try to not leave questions unanswered.)

And how do the other Open Source DNS servers react?

PowerDNS and Knot DNS do not expire the zone data when receiving expire=0; the former because it doesn’t ever expire a zone (see below).

Admittedly this whole topic of expire with value 0 seconds is super edge-case, and there’s no reason to get involved in looking into it. (So why did I do that!?!?)

But what about “regular” expiration? Assume a zone has a valid expire field in its SOA, how will these servers handle that when operating as secondaries?

PowerDNS originally made a deliberate design choice to never expire zones. I learned about this yesterday upon submitting an issue report.

NSD implements zone expiry and logs the fact when it occurs. (Here are notes I took.)

nsd[45521]: error: xfrd: zone a1.dnslab.org has expired

Knot DNS also expires the zone when expire elapses, logging the fact (my notes).

info: [a1.dnslab.org.] zone expired

BIND also expires the zone when the SOA expire elapses (my notes), and logs the fact:

general: zone a1.dnslab.org/IN: expired

These last three respond with SERVFAIL when the zone has expired, meaning that a legitimate client such as a resolver will attempt to query a different nameserver.

I spent the better part of a day doing this. I should have left it at don’t do that!

DNS :: 14 Jan 2022 :: e-mail

I’ve been messing around with macOS keychains part of the morning, and it occurred to me that I hadn’t jotted down how to use Ansible vault with generic passwords in a macOS keychain, so here goes.

I create a generic password from the CLI or via the GUI

$ security add-generic-password -a jpmens -j "vault pw for example.com" -s vpw-example-com -w
password data for new item:
retype password for new item:

password in keychain

A one-line shell script I place in ~/bin/vaultpw.sh (and make executable with chmod +x) obtains that generic password

#!/bin/sh
/usr/bin/security find-generic-password -a jpmens -s vpw-example-com -w

and I configure ansible.cfg to use that executable script from which to obtain the vault password on stdout (or I specify it at runtime as argument to --vault-password-file)

[defaults]
nocows = 1
vault_password_file = ~/bin/vaultpw.sh

Whenever I use Ansible vault, its password is obtained automatically.

$ EDITOR=ed ansible-vault create secrets.yml
dbpass: superverysecret

$ head -1 secrets.yml
$ANSIBLE_VAULT;1.1;AES256

$ ansible-vault view secrets.yml
dbpass: superverysecret

Note that it’s not possible to keep the vault password secret from anyone who must be able to launch playbooks which use vaulted files from the CLI.
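Incidentally, the only contract vault_password_file imposes is “an executable that prints the vault password on stdout”. A portable stand-in (with a demo password, and obviously no keychain) shows the shape:

```shell
# Stand-in for ~/bin/vaultpw.sh: same contract, no macOS keychain required.
cat > /tmp/vaultpw.sh <<'EOF'
#!/bin/sh
# the real script runs:
#   /usr/bin/security find-generic-password -a jpmens -s vpw-example-com -w
echo "demo-not-a-real-secret"
EOF
chmod +x /tmp/vaultpw.sh
/tmp/vaultpw.sh             # prints demo-not-a-real-secret
```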

Ansible and macOS :: 17 Dec 2021 :: e-mail

When BIND is built with GeoIP support, ACLs can be used for restricting access based on geographical location of the client’s IP address using the MaxMind API to query their GeoIP database, or databases in compatible formats.

BIND detects libmaxminddb by default, but I explicitly specify the path to the library during configuration. (What I find a bit confusing is that the switch to enable GeoIP is called maxminddb, but the rest of the configuration calls it geoip2.)

$ ./configure --prefix=/usr/local/bind9git \
	--with-openssl="${OSSL}" \
checking for library containing MMDB_open... -lmaxminddb
configure: GeoIP2 default database path set to /usr/local/Cellar/libmaxminddb/1.6.0/share/GeoIP
Optional features enabled:                                                                                       
    Memory allocator: jemalloc                                                                                   
    GeoIP2 access control (--enable-geoip)  

GeoLite2 databases are free IP geo-location databases comparable to, but less accurate than, MaxMind’s GeoIP2 databases and require an account to download. I create an account and download the country and city databases, extract the *.mmdb files, and install them into an appropriate location:

$ ls -loh /var/named/geoip
-r--r--r--@ 1 jpm    71M Nov 23 14:55 GeoLite2-City.mmdb
-r--r--r--@ 1 jpm   5.7M Nov 23 14:48 GeoLite2-Country.mmdb

Using the mmdblookup utility from the libmaxminddb distribution with known IP addresses, I test the databases. Let’s see in which country ISC.org and in which city PowerDNS.com are reported as being:

$ mmdblookup --file GeoLite2-Country.mmdb \
             --ip $(dig +short isc.org) country names en

  "United States" <utf8_string>

$ mmdblookup --file GeoLite2-City.mmdb \
             --ip $(dig +short www.powerdns.com) city names en

  "Amsterdam" <utf8_string>

Sounds about right. :-)


GeoIP ACLs are of the form

geoip db database field value
  • database specifies which GeoIP database to search for a match. If it isn’t specified, the city and country databases are searched (in that order) if they are installed.
  • field indicates which field (country, region, city, continent, postal, isp) is to be searched for a match.
  • value is the value to search for within the database.
  • ACLs use a “first-match” logic rather than a “best-match”.
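Because of that first-match logic, ordering matters when elements overlap. A sketch (the ACL name is mine; ! is BIND’s usual ACL negation):

```
// exclude one city first, then match the remainder of the country
acl "de-minus-ffm" {
    !geoip city "Frankfurt am Main";
    geoip country Germany;
};
```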

I can now create access control lists (ACLs) which use the GeoIP2 functionality to limit access to resources. Here I’m showing a simple ACL for permitting queries, but GeoIP2 ACLs are often used for matching access to views.

acl "yurpeans" {
     geoip city "Frankfurt am Main";
     geoip country Netherlands;
};

options {
	geoip-directory "/var/named/geoip";

	allow-query { yurpeans; };
};

ISC’s Using the GeoIP Features knowledge base article has further information and an example using match-clients in a view.

Create a custom GeoIP mmdb database

I thought it might be interesting to create my own GeoIP database and use that in BIND, somewhat along the lines of the experimental Location-Based (Geo-)DNS in a Private Network I did many years ago. (We’ve also seen something similar using GeoDNS with the PowerDNS GeoIP back-end.) The use-case I envision is permitting queries for certain domains to individual departments/divisions within large organizations by name instead of coding up ACLs with addresses in them. And of course I wanted to better understand how this all fits together.

It turns out BIND attempts to open databases by name in bin/named/geoip.c; I had assumed it would readdir() its way through the geoip-directory, but it doesn’t. I choose GeoIP2-ISP.mmdb as the one to use.

I create the mmdb file based upon MaxMind’s Build your own MMDB database blog post and the Vagrant setup in getting started, and I patch together this Perl program. My changes are indicated in # JPM comments. In particular I create a new field called “isp” and permit dumping of reserved networks. I query the result on the command line:

$ mmdblookup --file GeoIP2-ISP.mmdb --ip isp
  "Jane" <utf8_string>

I configure named.conf to use the GeoIP database, specifying the database and the field to use to query for the value.

acl "humans" {
     // geoip db isp isp Jane;
     geoip db isp isp jpmens;
     geoip db isp isp rabbit;
};

zone "example.net" IN {
        type primary;
        file "example.net";
        allow-query { humans; };
};

As usual I keep an eye on the logs when named starts up to ensure the database can be found. In this case I already have the City and Country databases in the directory so named opens all three.

28-Nov-2021 10:21:25.216 opened GeoIP2 database '/var/named/geoip/GeoLite2-Country.mmdb'
28-Nov-2021 10:21:25.216 opened GeoIP2 database '/var/named/geoip/GeoLite2-City.mmdb'
28-Nov-2021 10:21:25.216 opened GeoIP2 database '/var/named/geoip/GeoIP2-ISP.mmdb'

Creating the code to populate custom mmdb files is likely not worth the effort.

DNS and Geo :: 27 Nov 2021 :: e-mail

Specifically for use with Ansible, I’m known to recommend adding NOPASSWD: ALL to the sudoers entry and be done with it. No mucking about with sudo passwords (in essence users’ login passwords), no -K option, no passwords in clear-text files because people are unwilling to use Ansible vault, etc. It makes lives easier all around, and yes, I am aware that there are people who get the screaming heebie-jeebies when I say NOPASSWD:. So be it.

There is an alternative to authenticating use of sudo using SSH agent forwarding instead of login passwords. If you’re new to agent forwarding, I recommend you read the Illustrated Guide to SSH Agent Forwarding, which explains the concept and its pitfalls very well.

We’re going to have sudo use PAM (pluggable authentication modules) to ask our remote SSH agent whether we’re permitted to use sudo. Nifty. No passwords will be harmed or transported over the network in doing so.

ssh agent

pam_ssh_agent_auth is a PAM module which permits PAM authentication via a forwarded SSH agent; as such it can be used to provide authentication for anything that supports PAM. This can be used on most Linux variants as well as on FreeBSD. I’ve explicitly not linked to the original Sourceforge page as that’s not been maintained in what feels like forever. I’ve had good experience with the version packaged for FreeBSD which has had some patches applied to it.

I install the sudo and pam_ssh_agent_auth packages and configure the former to use the latter by inserting the first line:

$ cat /usr/local/etc/pam.d/sudo
auth     sufficient  pam_ssh_agent_auth.so  file=/etc/security/authorized_keys
auth     include     system
account  include     system
session  required    pam_permit.so
password include     system

I add public keys of users permitted to authenticate to the file I specify in the PAM configuration.

$ cat /etc/security/authorized_keys
ssh-ed25519 AAAAC3Nza[...]D6K1Fvn7EpD0Oz Ansible mgmt at JPM enterprises

In sudoers proper, I configure an environment variable sudo should keep (theoretically not required on newer versions, but that failed for me) and set up the users who should be permitted to use sudo. Note how I change Jane’s entry to no longer have NOPASSWD on it:

Defaults env_keep += "SSH_AUTH_SOCK"
jane ALL=(ALL) ALL

Let’s see if that works:

$ eval $(ssh-agent)
Agent pid 75648
$ ssh-add .ssh/ansibull
Enter passphrase for .ssh/ansibull:
Identity added: .ssh/ansibull (Ansible mgmt at JPM enterprises)
$ ssh -A -l jane sudo id
uid=0(root) gid=0(wheel) groups=0(wheel),5(operator)

That looks good. Recall that -A on our SSH client means agent forwarding, which we can also provide as -o ForwardAgent=true or specify in ssh_config on the client.
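Instead of remembering -A each time, the equivalent can live in the client’s ssh_config (the host alias is an assumption on my part):

```
Host alice
    ForwardAgent yes
    User jane
```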


I mentioned above that sudoers is used by Ansible on platforms which support it. Can we use PAM SSH agent forwarding for authenticating sudo here as well?

I set up my inventory with ansible_become_pass set to any value; Ansible complains about a “Missing sudo password” otherwise. (Why is a different question, and I’ve not found an answer yet.) Note that the password is just a placebo – we won’t use it, and it’s not Jane’s password.

$ cat inventory
alice ansible_host=

ansible_ssh_common_args="-o ForwardAgent=true"

Let’s try and access the machine alice, and then attempt with “become”:

$ ansible alice -m command -a id
alice | CHANGED | rc=0 >>
uid=1014(jane) gid=1014(jane) groups=1014(jane)

$ ansible alice -b -m command -a id
alice | CHANGED | rc=0 >>
uid=0(root) gid=0(wheel) groups=0(wheel),5(operator)

This works well. If you are like me and want to actually see whether sudo is “phoning home” to ask the agent, launch ssh-agent with the -d option to see it being spoken to by the client on the target host.

I am aware there are people who don’t approve of SSH agent forwarding either. So be it.


  • Grant reminds me that this also works for su(1); as mentioned above, it will work with any component which supports PAM authentication.

SSH, PAM, and Ansible :: 21 Nov 2021 :: e-mail

Let me tell you a cock and bull story. Make that a cow and bull story.

Back in the days when Ansible was invented, support for cowsay was implemented very early on, and I even added code for angry cows indicating failed tasks, but Michael rejected that patch.

 ____________________________________
/ JP doesn't much appreciate cows on \
\ screen                             /
 ------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

I wasn’t resentful or anything, I think, but I wanted to safely work in a professional cow-free environment so, one evening in a hotel, I was motivated to add code for disabling cows entirely. That later became configurable in spite of saddening some people:

this option, however, makes me sad, to know there is a world that does not want the cows.

After a few years, the Ansible Book was published, and as I told René in a tweet in July 2019:

I’m ready to review. :-) and remember to tell O’Reilly that the cow should look to the left! #moo

Fast forward many years, and I keep seeing bulls but don’t pay attention until Carol sends me a boxful to distribute amongst cow lovers. (Here’s a photo of the bulls by Ton.)

herd of bulls

So, what’s with the bull?

Carol knows the answer: Ansible’s headquarters are in Durham, North Carolina, conveniently located near the Durham Bulls baseball club, and Durham is also known as the “Bull City”, something most Yurpeans (including yours truly) are bound not to know. And there’s even a bull near their offices. (Thank you, Carol, for letting me use this photo of yours.)

ansibull in Durham

Furthermore, Ansibull sounds a bit like Ansible, though I think that’s probably a pronunciation thing.

But wait, there’s more! The Bullhorn is Ansible’s developer community newsletter, and you’re bound to want to print and colour AnsiBull’s Galactic Adventures coloring book.

So, enough of the bull. Back to work!

ansible :: 18 Nov 2021 :: e-mail
