Can I have an SCM (Source Code Management) update trigger the launch of an AWX job? The answer is yes, and it’s one of the interesting things I can do to remote-control AWX.

hooks

What I need is some method of invoking a program as soon as I commit something. Subversion, git, etc. support hooks, but some systems also support Webhooks, which is what I’ll use here.

In this example I’m using gitea which calls itself a painless self-hosted Git service; it’s all of that and more – it’s one of those gorgeous single-binary Go programs which I’m liking more and more. (I showed you another recently – CoreDNS.) Gitea is very much like using Github, but you host it yourself, and it’s trivial to get started with it.

Within gitea, I configure an outgoing Webhook:

Gitea with a Webhook

From now on, as soon as this repository gets a commit pushed to it, the specified URL will be invoked

On the other side, I run a lightweight, configurable utility (again in Go), called adnanh/webhook. This listens for HTTP payloads, extracts JSON from them, and it can invoke a program of my choice to react to the hook. This could be any kind of HTTP endpoint which reacts to a Webhook POST, but I chose this for simplicity.

I configure webhook to run with the following configuration, which will extract the repository’s name and the secret specified in the hook invocation from the incoming payload (here is the full payload sent by gitea).

[
  {
    "id": "awx-atest",
    "execute-command": "/Users/jpm/bin/awx-hook.sh",
    "command-working-directory": "/tmp/",
    "pass-arguments-to-command": [
      {
        "source": "payload",
        "name": "repository.full_name"
      },
      {
        "source": "payload",
        "name": "secret"
      }
    ]
  }
]

I launch webhook and watch what happens when I commit and push to the repository:

./webhook -hooks hooks.json -verbose
[webhook] 2017/10/23 18:17:07 version 2.6.5 starting
[webhook] 2017/10/23 18:17:07 setting up os signal watcher
[webhook] 2017/10/23 18:17:07 attempting to load hooks from hooks.json
[webhook] 2017/10/23 18:17:07 os signal watcher ready
[webhook] 2017/10/23 18:17:07 found 1 hook(s) in file
[webhook] 2017/10/23 18:17:07 	loaded: awx-atest
[webhook] 2017/10/23 18:17:07 serving hooks on http://0.0.0.0:9000/hooks/{id}
[webhook] 2017/10/23 18:17:09 incoming HTTP request from [::1]:54005
[webhook] 2017/10/23 18:17:09 awx-atest got matched
[webhook] 2017/10/23 18:17:09 awx-atest hook triggered successfully
[webhook] 2017/10/23 18:17:09 200 | 388.746µs | localhost:9000 | POST /hooks/awx-atest
[webhook] 2017/10/23 18:17:09 executing /Users/jpm/bin/awx-hook.sh with arguments ["/Users/jpm/bin/awx-hook.sh" "jpm/atest" "none-of-your-business"] and environment [] using /tmp/ as cwd
[webhook] 2017/10/23 18:17:10 command output: {"job":331,"ignored_fields":{},...
[webhook] 2017/10/23 18:17:10 finished handling awx-atest

The truncated output in the second to last line is the JSON returned from the AWX job launch which happens in the awx-hook.sh script:

#!/bin/sh

mysecret="none-of-your-business"

repo="$1"
secret="$2"

if [ "$secret" = "$mysecret" ]; then
    curl -qs \
        -d '{"extra_vars":{"newpoem":"hello good world"}}' \
        -H "Content-type: application/json" \
        -u admin:password  \
        "http://192.168.1.210/api/v2/job_templates/${repo}/launch/"
fi

All this is obviously just an example. Refine to your taste (and add lots of error-handling!)
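
For instance, a slightly more defensive sketch of the same script (the AWX address and admin credentials remain placeholders, and the extra checks are purely illustrative) might look like this:

#!/bin/sh
# awx-hook.sh -- invoked by webhook as: awx-hook.sh <repository.full_name> <secret>

mysecret="none-of-your-business"
awx="http://192.168.1.210"

if [ $# -ne 2 ]; then
    echo "usage: $0 repository secret" >&2
    exit 2
fi

repo="$1"
secret="$2"

if [ "$secret" != "$mysecret" ]; then
    echo "refusing to launch job for ${repo}: secret mismatch" >&2
    exit 3
fi

# -f makes curl exit non-zero on HTTP errors, e.g. an unknown job template
curl -qsf \
    -d '{"extra_vars":{"newpoem":"hello good world"}}' \
    -H "Content-type: application/json" \
    -u admin:password \
    "${awx}/api/v2/job_templates/${repo}/launch/" || {
        echo "AWX job launch for ${repo} failed" >&2
        exit 4
}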

Ansible and AWX :: 23 Oct 2017

I believe there’s a document floating around somewhere in which is written that “JP Mens brought Ansible to Europe in 2012” or something to that effect. Whilst I think that may be a tad exaggerated, it is true that I did a few conferences and talks during which I enthusiastically spoke about the then new kid on the block. I’m recounting this anecdote because something similar may happen with Ansible AWX. I’ll be talking about AWX to anybody who wants to listen.

Ansible AWX is the upstream project holding the code which, at some point in time (and I guess periodically), turns into Ansible Tower. It’s been a long time coming, but Ansible has now open-sourced AWX, and I’ll tell you two things:

  1. I wouldn’t want to have to use AWX and forgo the command line (but I know how to overcome the angst)
  2. I know a lot of people have been waiting for this to happen

Forget about my first point: that’s possibly just me, but I do mean it: Ansible without ansible-playbook on the CLI, seeing stdout move past, etc. wouldn’t feel right to me.

I’ve been kicking AWX’s tires quite a bit for several days, and I’ll say one thing: it really is very capable, and I will be recommending organizations take a closer look at it. If you know Tower you know AWX, but there are many who don’t know Tower.

Let me start with a few things I dislike, because it’s quite a short list:

  • documentation is basically what’s available for Ansible Tower, but some bits in that are just not available in AWX, or at least I cannot find them (e.g. settings.py). What we need are docs for things like management, backups, etc., but that’ll hopefully be written in the course of time
  • installation is supported for either Docker, OpenShift, or Minishift. That’s it. (I had a bit of difficulty wrapping my head around the *shifts, but I got along with the Docker install.)
  • the UI needs a huge screen to be usable and occasionally feels sluggish (possibly a delayed reaction caused by the background architecture)

Now for the things which I like in AWX:

  • the API, the API, and the API. Honestly, these guys got most of this very right. All we see in the UI is available in the API. tower-cli is also very good
  • the UI which updates via Websockets
  • multiple authentication backends. (I’ve tested TACACS+ and LDAP; both work.) Of course, AWX also supports local users (which, yes, can also be created via the API); there’s also Github, Google, and whatnot
  • some of the terminology is a bit funny, but I quickly got the gist of it, and it makes sense (project, jobs, templates, etc)
  • inventories. Lots of them. Dynamic, static, internal, from SCM.
  • SCM all over. AWX itself is basically replaceable: it obtains all it needs from external sources (SCM and PostgreSQL)
  • Role Based Access Control for those who need it. Works pretty well. Give access to a template and a user inherits the required access to credentials, inventory, etc.
  • Credentials store. Hugely useful.
  • Webhooks (outgoing) as well as API trigger from incoming hooks. That’s how I’d use AWX to avoid having to click in the UI
  • Workflows. Neat. Like a mini CI/CD thing.
  • external logging (ELK, Splunk, etc.), though what I see going out in the logs is meh
  • Notifications galore. Why wasn’t my mqtt notifier implemented? :-)
  • Clustering and High-Availability.

This isn’t an introduction to AWX. It’s more me wanting to whet your appetite. I’ll be speaking about AWX very soon, and I’m already working on an AWX training. At the first talk, which I’ll give in the Netherlands at the NLUUG, I’ll be diving into as good an overview as I can give in 45 minutes. With screen-shots & things.

Ansible :: 20 Oct 2017

The DNSSEC chain of trust starts at the root of the DNS with a resolver typically trusting said root by the fact that it’s got the root key (or hash thereof, called a Delegation Signer – DS record) built-in or configured into it. From there, a resolver chases delegation signer (DS) records which indicate to it that a child zone is signed. We can compare this to how a resolver chases name server (NS) records to find delegations. The hash of a child zone’s DNSKEY is a DS record which is located in its parent zone and which has therefore been signed by the parent.

chain of trust

In the case of example.net, we know that net is signed, so the root zone contains a DS record for net. If example.net is signed, its parent zone (net) contains a DS record for example.net, and so forth.

Any child zone which is signed must have a hash of its secure entry point as a DS record in its parent zone.

Uploading DS from a child to a parent zone can be an entertaining proposition. Anything from copy/paste into some (often lousy) Web form to sending an email might be available. Unfortunately there’s no real standard to accomplish this, as some parent zones want DS records whereas others insist on DNSKEY records (from which they calculate the DS themselves). Be that as it may, what we typically do is to obtain the DS, for example with the utilities provided by BIND or PowerDNS:

$ dnssec-dsfromkey Kexample.com.+005+08419
example.com. IN DS 8419 5 1 2E4D616E70FED736A08D7854BCDD3D269A604FD3
example.com. IN DS 8419 5 2 6682CC1E528930DB7E097101C838F8D3D0DBB8EC5D1E8B50A5425FE57AB058C6

$ dig sub.example.net DNSKEY | dnssec-dsfromkey -f - sub.example.net
sub.example.net. IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
sub.example.net. IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

With a bit of zone name mangling and TTL adding we can use pdnsutil with dnssec-dsfromkey, but pdnsutil has its own subcommand as well:

$ pdnsutil export-zone-dnskey sub.example.net 32 |
     awk 'NR==1 { sub(" ", ". 60 "); print; }' |
     dnssec-dsfromkey -f - -T 120 sub.example.net
sub.example.net. 120 IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
sub.example.net. 120 IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

$ pdnsutil export-zone-ds sub.example.net
... (shown below)

Generally speaking the story stops here, and I’d leave you in charge of getting that DS-set to your parent zone somehow. Digressing only slightly, OpenDNSSEC has, for ages, had a DelegationSignerSubmitCommand program in its configuration which can upload DS/DNSKEY to a parent via a program you create; the script you write and configure gets new keys via stdin, and you can then automate submission to a parent zone to your heart’s content.
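
Just to illustrate the idea (this is a sketch only: the child zone name is hard-coded, and I assume the parent zone is hosted on a locally updatable BIND, much like the setup further down), such a script could turn the DNSKEYs it receives on stdin into DS records and push them into the parent with nsupdate:

#!/bin/sh
# DelegationSignerSubmitCommand sketch: OpenDNSSEC hands us the new
# DNSKEY record(s) on stdin; we convert them to DS records and add
# them to the locally hosted parent zone via dynamic update.

zone="sub.example.net"        # child zone; hard-coded for this sketch

dnssec-dsfromkey -f - "$zone" |
    sed -e 's/^/update add /' -e 's/;.*//' |
    ( echo "ttl 60"; cat; echo "send" ) |
    nsupdate -l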

Can I haz automatik?

What we really want is automatic DS submission such that the child zone uploads the DS directly to the parent zone, where it is then signed. Unless the parent and the child zone are both under my administrative charge, that’s easier said than done: it’s unlikely the parent will allow me to do that.

Enter RFC 7344 which allows me to indicate, in my child’s zone, that I have a new DS record for submission. (This also works for DNSKEY records for those parents which prefer DNSKEY.) The fact that the child zone has a new DS for submission is indicated with a CDS record (child DS) and/or CDNSKEY (child DNSKEY) respectively. What will actually happen is that the parent will “consume” CDS/CDNSKEY records instead of the child “pushing” them somewhere. Hereunder I will be using CDS because they’re shorter, but CDNSKEYs work equally well.

As per section 4 of RFC 7344, if a child publishes either CDS or CDNSKEY it should publish both, unless the child knows the parent will consume only one of the two.

Using PowerDNS, I can configure the Authoritative server to automatically publish CDS and/or CDNSKEY records:

$ pdnsutil set-publish-cds zone
$ pdnsutil set-publish-cdnskey zone

The process for BIND is a bit more involved. What I do here is to set a timing parameter on a key when I create a new key (or just after having created it).

$ dnssec-settime -P sync +1mi Kexample.com.+005+08419

$ grep Sync Kexample.com.+005+08419.key
; SyncPublish: 20170921094522 (Thu Sep 21 11:45:22 2017)

When running as an in-line signer, BIND will publish CDS and CDNSKEY records for the particular key until I use dnssec-settime to have it remove such records from the zone. (Note that BIND as smart signer (dnssec-signzone -S) does not add CDS or CDNSKEY records to the signed zone. Why? Good question; IMO an omission.)
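
Removing them again works via the corresponding delete timer; something along these lines (same key as above) should do:

$ dnssec-settime -D sync +1mi Kexample.com.+005+08419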

So, ideally, what we then need is a mechanism by which a server checks for CDS/CDNSKEY records in a child zone and then updates the corresponding parent zone.

dnssec-cds

A combination of dig and a new utility will allow me to automate the process.

child/parent

Tony Finch has written such a beast. It’s called dnssec-cds and it’s currently in a git tree he maintains. What this program does is to change DS records at a delegation point based on CDS or CDNSKEY records published in the child zone. By default CDS records are used if both CDS and CDNSKEY records are present.

What we’ll actually be doing in order to add a new signed child zone is:

  1. Create and sign the zone.
  2. Obtain the DS-set, copy that securely to the parent, and sign the result. We do this step once and we do it securely because this is how we affirm trust between parent and child.
  3. Once in the parent zone, the DS records of the child indicate the child zone’s secure entry point: validation can be chased down into the child zone.
  4. When the child’s KSK rolls, ensure child zone contains CDS/CDNSKEY records.
  5. Parent will periodically query for child’s CDS/CDNSKEY records; if there are none, processing stops.
  6. As soon as CDS/CDNSKEY records are visible in the child, dnssec-cds validates these by affirming, using the original DS-set obtained in 2, that they’re valid and not being replayed.
  7. A dynamic (or other) update can be triggered on the parent to add the child’s new DS-set.

dnssec-cds protects against replay attacks by requiring that signatures on the child’s CDS are not older than they were on a previous run of the program. (This time is obtained from the modification time of the dsset- file or from the -s option. Note below that I touch the dsset- file to ensure this, just the first time.) Furthermore, dnssec-cds protects against breaking the delegation by ensuring that the DNSKEY RRset can be verified by every key algorithm in the new DS RRset and that the same set of keys is covered by every DS digest type.

dnssec-cds writes replacement DS records (i.e. the new DS-set) to standard output, or to the input file if -i is specified; -u prints commands suitable to be read by a dynamic DNS utility such as nsupdate. The replacement DS records will be the same as the existing records when no change is required. The output can be empty if the CDS / CDNSKEY records specify that the child zone wants to go insecure.

servers in use

The BIND name server in my example hosts the parent zone example.net, and we’ll create a child zone (sub.example.net) on PowerDNS Authoritative (because we can). Which server brand the zone’s hosted on is quite irrelevant other than it must be able to serve CDS/CDNSKEY records in the zone. This is particularly easy to automate with PowerDNS.

First we sign the child zone and export its DS-set:

$ pdnsutil secure-zone sub.example.net
Securing zone with default key size
Adding CSK (257) with algorithm ecdsa256
Zone sub.example.net secured
Adding NSEC ordering information

$ pdnsutil export-zone-ds sub.example.net > dsset-sub.example.net.
$ cat dsset-sub.example.net.
sub.example.net. IN DS 32128 13 1 6823d9bb1b03df714dd0eb163e20b341c96d18c0 ; ( SHA1 digest )
sub.example.net. IN DS 32128 13 2 039b660206db76611305288042ee3fa132f3709e229005baf2b24bcdae7bc513 ; ( SHA256 digest )
sub.example.net. IN DS 32128 13 3 753cf5f1c9a73fdaf3e09454a55916e7381bf24ce3c0e077defe1cfd15b6da5f ; ( GOST R 34.11-94 digest )
sub.example.net. IN DS 32128 13 4 e772f48556bf23effe80946a5306e5d00c6138d321f6d0a66a2673d2b2bee4e3fe13340a7bcf921484081046e92ca983 ; ( SHA-384 digest )

Note how the exported dsset- contains one DS for each algorithm supported by my PowerDNS installation. We now copy the dsset- to the parent server, and add its content to the parent zone. The zone is configured with auto-dnssec maintain so BIND will immediately sign anything we add to it.

( echo "ttl 60"
  sed -e "s/^/update add /" -e "s/;.*//" dsset-sub.example.net.
  echo "send" )  | nsupdate -l

If I now query for the DS records for sub.example.net in the parent zone (recall a DS RRset is in the parent) I obtain an appropriate response:

$ dig +norec @BIND sub.example.net ds
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14192
;; flags: qr aa; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

;; ANSWER SECTION:
sub.example.net.        60      IN      DS      32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
sub.example.net.        60      IN      DS      32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCD AE7BC513
sub.example.net.        60      IN      DS      32128 13 3 753CF5F1C9A73FDAF3E09454A55916E7381BF24CE3C0E077DEFE1CFD 15B6DA5F
sub.example.net.        60      IN      DS      32128 13 4 E772F48556BF23EFFE80946A5306E5D00C6138D321F6D0A66A2673D2 B2BEE4E3FE13340A7BCF921484081046E92CA983

Our parent zone is signed, our child zone is signed, our parent has a signed DS record (more than one actually, but that’s fine) for our child zone: the chain of trust is in place. (Note the key tag on the DS: 32128.)

Let it roll!

At some point in time we want to roll the child’s KSK, and I am not going to address timing issues of the roll proper; I’m discussing CDS only.

In order to roll a key, we create a new key in the child zone. Simultaneously we request PowerDNS publish CDS records in the zone for all keys:

$ pdnsutil add-zone-key sub.example.net ksk 256 active ecdsa256
Added a KSK with algorithm = 13, active=1
Requested specific key size of 256 bits

$ pdnsutil set-publish-cds sub.example.net

$ pdnsutil show-zone sub.example.net
This is a Master zone
Last SOA serial number we notified: 0 != 3 (serial in the database)
Metadata items:
        PUBLISH-CDS     1,2
Zone has NSEC semantics
keys:
ID = 31 (CSK), flags = 257, tag = 32128, algo = 13, bits = 256    Active ( ECDSAP256SHA256 )
CSK DNSKEY = sub.example.net. IN DNSKEY 257 3 13 12lrJwo8w/PbnD8JssSlmuN7adbidwCsCaFn2yiXctj2k9g9dlGw+KTDqRsanj4InPgGcQwllBRGSojfwZVHRQ== ; ( ECDSAP256SHA256 )
DS = sub.example.net. IN DS 32128 13 1 6823d9bb1b03df714dd0eb163e20b341c96d18c0 ; ( SHA1 digest )
DS = sub.example.net. IN DS 32128 13 2 039b660206db76611305288042ee3fa132f3709e229005baf2b24bcdae7bc513 ; ( SHA256 digest )
DS = sub.example.net. IN DS 32128 13 3 753cf5f1c9a73fdaf3e09454a55916e7381bf24ce3c0e077defe1cfd15b6da5f ; ( GOST R 34.11-94 digest )
DS = sub.example.net. IN DS 32128 13 4 e772f48556bf23effe80946a5306e5d00c6138d321f6d0a66a2673d2b2bee4e3fe13340a7bcf921484081046e92ca983 ; ( SHA-384 digest )
ID = 32 (CSK), flags = 257, tag = 48629, algo = 13, bits = 256    Active ( ECDSAP256SHA256 )
CSK DNSKEY = sub.example.net. IN DNSKEY 257 3 13 EY2fpwiU3dcg22g83gC+9oQ65vJHPELR6sU1MLB8r8F+6egarSIDzjyM5AY2RlbFGgOkjpPMaUonCONPalOQ4A== ; ( ECDSAP256SHA256 )
DS = sub.example.net. IN DS 48629 13 1 4e324c9416d0009b4262c39494a1c7989f9c055c ; ( SHA1 digest )
DS = sub.example.net. IN DS 48629 13 2 87081d41bbaba1c25d28f48ede7718e96ea8387cae2a286fa5c61e57971b8c66 ; ( SHA256 digest )
DS = sub.example.net. IN DS 48629 13 3 99eadcdc47adfe2f68df3e1a4aa775fa409bafbd7815ca1c2643cdf49a0996bf ; ( GOST R 34.11-94 digest )
DS = sub.example.net. IN DS 48629 13 4 f961984bc561906cde1987bf89f90654865d4b9500ee7eed8bf4a0245244ac492eeb66776475e7448826f74638ad9e9e ; ( SHA-384 digest )

This output is easy to follow once we notice that the top part has some metadata and then come the keys. Note that pdnsutil is printing a DS record for each of the algorithms PowerDNS supports, hence the verbosity. Let’s pay attention to the key tags: in the above list we see our original 32128 tag and the new tag 48629.

The child zone is still signed; there are two keys in the zone, and we’ve requested CDS records be published. Does that work?

$ dig @POWERDNS sub.example.net cds
;; ANSWER SECTION:
sub.example.net.        3600    IN      CDS     32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
sub.example.net.        3600    IN      CDS     48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
sub.example.net.        3600    IN      CDS     32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCD AE7BC513
sub.example.net.        3600    IN      CDS     48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57 971B8C66

The CDS records are available with the digest algorithms currently implemented for DS, namely 1 (SHA1) and 2 (SHA256).

.. to the parent

Back on the parent, we prepare to use dnssec-cds for the magic. We already have the dsset- file, and as discussed above I touch its timestamp (or use the -s switch):

$ touch -t 201709140000 dsset-sub.example.net.

$ cat run-cds.sh
z=sub.example.net

dig @POWERDNS +dnssec +noall +answer $z DNSKEY $z CDNSKEY $z CDS |
    dnssec-cds -u -f /dev/stdin -T 42 -d . -i.orig $z |
    tee /tmp/nsup |
    nsupdate -l

$ ./run-cds.sh

dnssec-cds with the -u option creates a script suitable for feeding into nsupdate; for debugging purposes, I tee it into a file to show you here:

$ cat /tmp/nsup
update add sub.example.net. 42 IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
update add sub.example.net. 42 IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66
update del sub.example.net. IN DS 32128 13 3 753CF5F1C9A73FDAF3E09454A55916E7381BF24CE3C0E077DEFE1CFD15B6DA5F
update del sub.example.net. IN DS 32128 13 4 E772F48556BF23EFFE80946A5306E5D00C6138D321F6D0A66A2673D2B2BEE4E3FE13340A7BCF921484081046E92CA983
send

Querying the parent, we see that the DS records with the superfluous algorithms have been deleted and the DS records for the new key have been added. We also see our dsset- file has been updated accordingly (and I pay attention to the file’s modification time, which has been set to the inception time of the DNSKEY RRSIG of the child zone):

$ dig +norec @BIND sub.example.net ds
;; ANSWER SECTION:
sub.example.net.        42      IN      DS      32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
sub.example.net.        42      IN      DS      32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCD AE7BC513
sub.example.net.        42      IN      DS      48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
sub.example.net.        42      IN      DS      48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57 971B8C66

$ cat dsset-sub.example.net.
sub.example.net. 42 IN DS 32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
sub.example.net. 42 IN DS 32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCDAE7BC513
sub.example.net. 42 IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
sub.example.net. 42 IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

Now I delete the “old” key from the child zone using its (in my opinion slightly confusing) ID which is 31 – compare with the output of pdnsutil show-zone above. (I would have preferred pdnsutil to utilize key tags to refer to keys for a zone):

$ pdnsutil remove-zone-key sub.example.net 31

Now comes the drum-roll moment: if we re-run our dnssec-cds script will it blend?

$ ./run-cds.sh

$ cat /tmp/nsup
update del sub.example.net. IN DS 32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
update del sub.example.net. IN DS 32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCDAE7BC513
send
$ dig +norec @BIND sub.example.net ds
;; ANSWER SECTION:
sub.example.net.        42      IN      DS      48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
sub.example.net.        42      IN      DS      48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57 971B8C66

A few points to note:

  • when looking at the nsupdate script produced by dnssec-cds, pay attention to add vs. del on the update statements.
  • it’s not necessary to have dnssec-cds maintain the dsset- file on the file system, but it gives me a warm and fuzzy feeling so I think I’d always do that
  • I should also mention that the dnssec-dsfromkey utility is quite versatile; we saw it above, and it’s good to know that the -C option creates CDS records instead of DS records.

Tony’s dnssec-cds together with a wee bit of scripting will basically allow us to add new DS for zones to their parent zones. In the examples above I’ve used nsupdate, but this could equally well be accomplished by other means.
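
A wrapper along these lines (a sketch only: the zone list is invented, and @POWERDNS again stands for the child zones’ server), run periodically from cron on the parent, would do the rounds:

#!/bin/sh
# check a set of child zones for CDS/CDNSKEY records and update their
# DS RRsets in the locally hosted parent zone; the dsset- files live
# in the current directory, exactly as in the example above.

for z in sub.example.net sub2.example.net; do
    dig @POWERDNS +dnssec +noall +answer $z DNSKEY $z CDNSKEY $z CDS |
        dnssec-cds -u -f /dev/stdin -T 42 -d . -i.orig $z |
        nsupdate -l
done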

DNS and DNSSEC :: 21 Sep 2017

DNS servers optionally log queries on demand by formatting a message and storing that in a file, sending it through syslog, etc. This is an I/O-intensive operation which can dramatically slow down busy servers, and the biggest issue is we get the query but not the associated response.

[1505125481] unbound[89142:0] info: 127.0.0.2 dnstap.info. A IN

10-Sep-2017 08:31:03.644 client @0x7f9b12dd5c00 127.0.0.2#52872 (dnstap.info): view internal: query: dnstap.info IN A +E(0)K (127.0.0.2)

In addition to having to format the data into a human-readable form and write the resulting string to a file, DNS server authors haven’t been able to standardize on query logging formats. As can be seen from the two examples above (first Unbound, then BIND), the strings differ dramatically. The different results also mean that further parsing/processing of these logs will have to be different as well. (Have fun building regular expressions for both and having more than two problems.)

One method to overcome this is to capture packets externally, such as how DSC does it, but doing it in this fashion means the software must deal with several things the name server has already dealt with: fragments, TCP stream reassembly, spoofed packets, etc. (Here’s a bit of a “versus” thread.)

An issue with both these methods is that the query a name server received and the response it returned aren’t bundled together. Only the name server software itself knows, really, what belongs together at the time the query occurred and the response was returned. Can you imagine a DNS log so complete that you could see what query a client issued and which response it got?

dnstap is a solution which introduces a flexible, binary log-format for DNS servers together with Protocol Buffers, a mechanism for serializing structured data. Robert Edmonds had the idea for dnstap and created the first implementation with two specific use cases in mind:

  • make query-logging faster by eliminating synchronous I/O bottlenecks and message formatting
  • avoid complicated state reconstruction by capturing full messages instead of packets for passive DNS

What dnstap basically does is to add a lightweight message-copy routine into a DNS server. This routine duplicates a DNS message with context and moves it out of the DNS server to a listening process, typically via a Unix domain socket. In case of extreme load, a DNS server can simply start dropping log payloads instead of degrading server performance.

dnstap enabled DNS server (from project)

The dnstap protocol buffer content is defined in this schema, and includes a type of message (see below), the socket family queries/responses were transported on, socket protocols, query and responder addresses, the initiator’s DNS query message in wire format, timestamps, and the original wire-format DNS response message, verbatim.

dnstap is currently implemented in a few utilities as well as in these DNS servers:

  • BIND
  • CoreDNS
  • Knot 2.x
  • Knot Resolver (> 1.2.5)
  • Unbound

For my experiments, I’ll be using BIND 9.11.2, CoreDNS-011, Knot 2.5.4, and Unbound 1.6.5.

Before launching a dnstap-enabled (and configured) DNS server, we have to ensure a listener has created the Unix domain socket. The dnstap code in BIND, Unbound, etc. acts as a client rather than a server, so it requires a server which will accept connections. Robert Edmonds, dnstap inventor, did it this way so that a single socket could be used by different dnstap senders (like how a system daemon listens to messages from multiple clients). If the Unix socket isn’t present or nothing’s listening on it, the client code (in the DNS server) will periodically attempt reconnection.

We’ll be looking at two programs which provide this functionality.

Prerequisites

We’ll likely have to build our DNS server installations ourselves as official packages are typically not built with dnstap support. The requirements for all the below (except CoreDNS which provides everything in its single statically linked binary) will be fstrm, protobuf, and protobuf-c:

  • fstrm is a Frame Streams implementation in C. It implements a lightweight protocol for transporting any serialized data format which produces byte sequences, and it provides a Unix domain listener (fstrm_capture) for dnstap records written by the DNS servers.
  • protobuf is the implementation of Google’s Protocol Buffers format. We install it in order to build and use some of the utilities, namely the protobuf compiler.
  • protobuf-c is a C implementation of the latter; this includes a library (libprotobuf-c) which some of the utilities require.

Other than these requirements, a number of the DNS server implementations I mention have their own additional requirements which I will not specify here – the projects’ documentation will tell you more.

Utilities

What follows are some utilities we’ll be using for working with and/or decoding (i.e. printing) dnstap records.

dnstap -u

fstrm_capture

I can use fstrm_capture to create the required Unix domain socket which dnstap clients can write to. The program needs a “Frame Streams” content type specified as well as the path to the Unix socket and the file name it should write protocol buffer frames to:

$ fstrm_capture -t protobuf:dnstap.Dnstap -u /var/run/dnstap.sock -w fstrm.tap

While there is provision in the code to handle SIGHUP (fstrm_capture then flushes the output file), there is no provision for file rotation.

An alternative method for doing much the same is to use the dnstap utility from the dnstap package.

dnstap

The dnstap project maintains a dnstap utility written in Go. There are, unfortunately, no prebuilt binaries on the releases page, but building the program is easy (after you go through the hassle of installing go).
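
At the time of this writing, something along these lines should fetch and build it (a working Go installation and GOPATH are assumed):

$ go get -u github.com/dnstap/golang-dnstap/dnstap
$ $GOPATH/bin/dnstap -h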

I launch the dnstap utility (instead of launching fstrm_capture) like this:

dnstap -u /var/run/dnstap.sock -w file.tap

I can also use dnstap to read a tap file from the file system and print it out in various formats, which I will be showing below when we look at some examples. dnstap can also create a TCP endpoint (e.g. for CoreDNS) with dnstap -l <address:port>

dnstap-ldns

For the actual decoding of dnstap files (i.e. printing them out), we can use dnstap as just discussed, or the reference utility called dnstap-ldns, which has thankfully kept the option letters used by dnstap. However, this utility, as the name implies, brings an additional dependency, namely ldns. (But you have that already for its utility programs, don’t you?)

dnstap-read

Whilst on the subject of decoding dnstap files, dnstap-read, from the BIND distribution, can also do that nicely. By default it prints the short version, but with -y it’ll also do the long YAML format.

$ dnstap-read file.tap
11-Sep-2017 10:59:00.652 CR 127.0.0.2:55453 <- 127.0.0.2:53 UDP 107b www.test.aa/IN/A
11-Sep-2017 10:59:00.954 CR 127.0.0.2:61308 <- 127.0.0.2:53 UDP 107b www.test.aa/IN/A
$ dnstap-read -y file.tap
---
type: MESSAGE
identity: tiggr
version: bind-9.11.2
message:
  type: CLIENT_RESPONSE
  response_time: !!timestamp 2017-09-11T08:59:00Z
  message_size: 107b
  socket_family: INET
  socket_protocol: UDP
  query_address: 127.0.0.2
  response_address: 127.0.0.2
  query_port: 61308
  response_port: 53
  response_message_data:
    opcode: QUERY
    status: NOERROR
    id:  24094
    flags: qr aa rd ra
    QUESTION: 1
    ANSWER: 1
    AUTHORITY: 1
    ADDITIONAL: 1
    OPT_PSEUDOSECTION:
      EDNS:
        version: 0
        flags:
        udp: 4096
        COOKIE: 67816834b9432822c5a508fd59b65054fb5bbab0c5fe14f8
    QUESTION_SECTION:
      - www.test.aa. IN A
    ANSWER_SECTION:
      - www.test.aa. 60 IN A 192.168.1.20
    AUTHORITY_SECTION:
      - test.aa. 60 IN NS localhost.
  response_message: |
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id:  24094
    ;; flags: qr aa rd ra    ; QUESTION: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ; COOKIE: 67816834b9432822c5a508fd59b65054fb5bbab0c5fe14f8
    ;; QUESTION SECTION:
    ;www.test.aa.			IN	A

    ;; ANSWER SECTION:
    www.test.aa.		60	IN	A	192.168.1.20

    ;; AUTHORITY SECTION:
    test.aa.		60	IN	NS	localhost.

kdig

kdig is the dig-like utility shipped with Knot, and it too can read dnstap files and present them in a dig-like (I should probably say kdig-like) manner. I note that kdig doesn’t show version information recorded in the tap file.

$ kdig -G file.tap +multiline
;; Received 759 B
;; Time 2017-09-10 06:28:20 UTC
;; From 199.249.113.1@53(TCP) in 32.1 ms
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 38621
;; Flags: qr aa; QUERY: 1; ANSWER: 3; AUTHORITY: 0; ADDITIONAL: 1

;; EDNS PSEUDOSECTION:
;; Version: 0; flags: do; UDP size: 1280 B; ext-rcode: NOERROR

;; QUESTION SECTION:
;; dnstap.info.          IN A

;; ANSWER SECTION:
dnstap.info.            3600 IN A 192.30.252.154
dnstap.info.            3600 IN A 192.30.252.153
dnstap.info.            3600 IN RRSIG A 5 2 3600 20171006034256 (
                                20170906032611 36186 dnstap.info.
                                ekqkRvSmO0csExldaVG5RlxEvKSs/Spi4szHukeM
                                dz6dW0493rv+wXsKzjnyyTrOWPZWiplfGJZL2MQL
                                Yc4hg1h0J89YPEfomkE0d6yIybuFjljhuX/YT34E
                                FLv45Wbq+N20mBrdSupajgPQEmFUgnhUT6hg4Ayf
                                t6T3UuVcJ3M=
                                )

What’s quite practical is that kdig can record a live query / response (i.e. something you’d do right now) into a tap file. So, in the following example, I use kdig to perform a query, and the program writes what we see on stdout to the specified file in dnstap format:

$ kdig -E iis-a.tap iis.se AAAA
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 53652
;; Flags: qr rd ra; QUERY: 1; ANSWER: 1; AUTHORITY: 3; ADDITIONAL: 0

;; QUESTION SECTION:
;; iis.se.             		IN	AAAA

;; ANSWER SECTION:
iis.se.             	60	IN	AAAA	2001:67c:124c:4006::214

;; AUTHORITY SECTION:
iis.se.             	3600	IN	NS	i.ns.se.
iis.se.             	3600	IN	NS	ns.nic.se.
iis.se.             	3600	IN	NS	ns3.nic.se.

$ ls -l *.tap
-rw-r--r-- 1 jpm users    305 Sep 10 14:55 iis-a.tap

# (I have reported the epoch of 1970-01-01 as a bug to the knot-dns project)

$ dnstap-ldns -r iis-a.tap
1970-01-01 04:22:23.659631 TQ 192.168.1.81 UDP 24b "iis.se." IN AAAA
1970-01-01 04:22:23.725209 TR 192.168.1.81 UDP 110b "iis.se." IN AAAA

$ dnstap-ldns -r iis-a.tap -y
---
type: MESSAGE
version: "kdig 2.5.4"
message:
  type: TOOL_RESPONSE
  query_time: !!timestamp 1970-01-01 04:22:23.659631
  response_time: !!timestamp 1970-01-01 04:22:23.725209
  socket_family: INET
  socket_protocol: UDP
  query_address: 0.0.0.0
  response_address: 192.168.1.81
  query_port: 54370
  response_port: 53
  response_message: |
    ;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 53652
    ;; flags: qr rd ra ; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 0

    ;; QUESTION SECTION:
    ;iis.se.	IN	AAAA

    ;; ANSWER SECTION:
    iis.se.	60	IN	AAAA	2001:67c:124c:4006::214

(Note how kdig uses the TOOL_* subtypes in the dnstap records.)

After discussing some of the tools for working with (in particular for decoding) dnstap, I now turn to the DNS servers proper.

DNS servers

Before we look at the individual DNS servers and how to configure them for support of dnstap, it’s interesting to know that dnstap currently has 12 defined subtypes of dnstap “Message”. dnstap tags a log record with a subtype corresponding to the location at which a record was recorded, so we can at any point in time see where the record was collected.

dnstap flow

  • AUTH_QUERY (AQ)
  • AUTH_RESPONSE (AR)
  • RESOLVER_QUERY (RQ)
  • RESOLVER_RESPONSE (RR)
  • CLIENT_QUERY (CQ)
  • CLIENT_RESPONSE (CR)
  • FORWARDER_QUERY (FQ)
  • FORWARDER_RESPONSE (FR)
  • STUB_QUERY (SQ)
  • STUB_RESPONSE (SR)
  • TOOL_QUERY (TQ)
  • TOOL_RESPONSE (TR)

These subtypes ought to be pretty self-explanatory, but their full description is in the dnstap protocol schema. The diagram above illustrates at which point they are obtained. The mnemonics in parentheses are those which are output by the utilities in “quiet” mode.

BIND

BIND’s configuration is as flexible as Unbound’s in terms of dnstap logging. I build named by adding --enable-dnstap to ./configure and then modify named.conf.

I can set different types to be logged for each view (but I dislike views so I won’t do that). Supported types are client, auth, resolver, and forwarder as well as all, which causes all dnstap messages to be logged regardless of their type. Each type may take an additional argument to indicate whether to log query or response messages. If not specified, BIND should log both, but this didn’t work for me.

options {
    dnstap { all; };
    // dnstap { auth; resolver query; resolver response; };

    /* where to capture to: file or unix (socket) */
    // dnstap-output file "/tmp/named.tap";
    dnstap-output unix "/var/run/dnstap.sock";

    dnstap-identity "tiggr";
    dnstap-version "bind-9.11.2";
};

As soon as named starts, it begins producing dnstap log data. When writing to a file, we can use rndc to instruct named either to truncate and reopen the file or to roll its dnstap output file:

$ rndc dnstap -reopen        # Close, truncate and re-open the DNSTAP output file.
$ rndc dnstap -roll <count>  # Close, rename and re-open the DNSTAP output file(s).

If you’re interested in the nitty-gritty of dnstap with servers which are both authoritative and recursive, here’s a thread Evan Hunt started. (But in my opinion you should not be interested in servers which are simultaneously authoritative and recursive …)

Other than that, there’s a good single-page document on using dnstap with BIND.

In BIND 9.12.x, dnstap logfiles can be configured to automatically roll when they reach a specified size, for example:

dnstap-output file "/taps/prod.tap" size 15M versions 100 suffix increment;

CoreDNS

I spoke earlier of CoreDNS, and one of the really great things about this single-binary program is that it’s bundled with all I need to produce dnstap frames.

The following configuration suffices for CoreDNS to provide a forwarder which logs all requests to the specified Unix domain socket:

.:53 {
    dnstap /var/run/dnstap.sock full
    proxy . 192.168.1.10:53
}

If I then look at a query I see what type of DNS server produced this query, namely a forwarder.

$ dnstap -r coredns.tap -y
type: MESSAGE
message:
  type: FORWARDER_QUERY
  socket_family: INET
  socket_protocol: UDP
  response_address: 192.168.1.10
  response_port: 53
  query_message: |
    ;; opcode: QUERY, status: NOERROR, id: 60806
    ;; flags: rd ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

    ;; QUESTION SECTION:
    ;dnstap.info.       IN       A

    ;; ADDITIONAL SECTION:

    ;; OPT PSEUDOSECTION:
    ; EDNS: version 0; flags: ; udp: 4096
    ; COOKIE: 7f8f3ebbf66ffc95
---

...

I note that CoreDNS records neither identity nor version in the tap file. CoreDNS can log to a remote endpoint by specifying tcp://address:port as sink.
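
Presumably (I’ve gone no further than the documentation here) the Corefile would then look like this, with, say, the dnstap utility’s -l listener on the other end; the address and port are invented:

.:53 {
    dnstap tcp://192.168.1.130:6000 full
    proxy . 192.168.1.10:53
}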

Unbound

Unbound has had dnstap support for a few versions, since Robert Edmonds did the first prototype. I build dnstap support into Unbound with --enable-dnstap.

dnstap:
    dnstap-enable: yes
    dnstap-socket-path: "/var/run/dnstap.sock"
    dnstap-send-identity: yes
    dnstap-send-version: yes
    dnstap-log-client-query-messages: yes
    dnstap-log-client-response-messages: yes
    dnstap-log-forwarder-query-messages: yes
    dnstap-log-forwarder-response-messages: yes
    dnstap-log-resolver-query-messages: yes
    dnstap-log-resolver-response-messages: yes

A local-zone is answered directly by Unbound without performing recursion, so you’ll only see response messages for those domains if you set “dnstap-log-client-response-messages: yes”.
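
By way of illustration (names and address are invented), such a local-zone could look like this in unbound.conf:

server:
    local-zone: "lan.example." static
    local-data: "www.lan.example. IN A 192.0.2.1"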

The documentation of dnstap in unbound.conf is, well, no it’s not: it’s simply not there. Actually, there is no documentation at all for dnstap in unbound-1.6.5/doc/, which is quite atypical: Unbound’s usually very good about that…

Knot

I tested knot-2.5.4 (authoritative) and built it with

./configure --with-module-dnstap=yes --enable-dnstap

I configure dnstap in knot.conf by specifying the module to load (mod-dnstap) and its parameters, most of which are self-explanatory and have sensible defaults. The sink directive specifies either a file on the file system (which is opened for truncate) or, if prefixed with the string "unix:", a Unix domain socket, e.g. as created by fstrm_capture.

mod-dnstap:
  - id: tap
    sink: /root/taps/knot-auth.tap
    # sink: unix:/var/run/dnstap.sock
    log-queries: false
    log-responses: true

template:
  - id: default
    global-module: mod-dnstap/tap

$ dnstap-ldns -r knot-auth.tap -y
type: MESSAGE
identity: "knot.ww.mens.de"
version: "Knot DNS 2.5.4"
message:
  type: AUTH_RESPONSE
  query_time: !!timestamp 1970-01-01 21:10:59.484261
  response_time: !!timestamp 1970-01-01 21:10:59.484261
  socket_family: INET
  socket_protocol: UDP
  query_address: 192.168.1.130
  query_port: 53394
  response_message: |
    ;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 35495
    ;; flags: qr aa rd ; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

    ;; QUESTION SECTION:
    ;www.k20.aa.	IN	A

    ;; ANSWER SECTION:
    www.k20.aa.	3600	IN	A	192.168.1.111

    ;; AUTHORITY SECTION:

    ;; ADDITIONAL SECTION:

    ;; EDNS: version 0; flags: ; udp: 4096
---

Daniel writes that a knotc reload will rotate the dnstap output file as will a SIGHUP.

Wrapping up

There’s a lot of good in dnstap, and it’s a huge improvement over what existed beforehand. There are however a few things to take note of:

  • While debugging, it can take a while until queries start showing up in the dnstap file due to the buffering which the DNS servers do
  • I’m convinced (but have not taken the time to prove) some servers drop logs even though they’re completely idle. This might be due to either real drops or to not flushing existing records when a server is stopped. For example, after shutting down named (rndc stop) and killing dnstap I find the last query missing. People with heavy-traffic servers won’t notice this of course.
  • The existing toolset is a bit sloppy at times. For example, fstrm_capture and dnstap -u have no proper output file rotation (the former can merely rotate every N seconds). This is easy to fix and it needs doing.
  • There’s no network transport of dnstap other than CoreDNS which can send dnstap to a tcp:// target. Somebody started some work but I haven’t seen it.

dnstap is a relatively young, open standard for DNS query logging. It was designed for large, busy DNS servers and offers minimal performance loss. It already has wide adoption amongst open source DNS server implementations, even if some are missing: NSD, PowerDNS Authoritative, and PowerDNS Recursor come to mind, and I hope they’ll very soon join the party. (There’s already a pull request to add dnstap support to dnsdist which will greatly simplify the work for the PowerDNS products, according to the project.)

Further reading:

DNS, logging, and monitoring :: 11 Sep 2017

If CoreDNS had existed when I wrote Alternative DNS Servers I’d have included it; it’s quite a versatile beast.

CoreDNS was created by Miek Gieben, and he tells me there was a time during which CoreDNS was actually a forked Web server doing DNS, but that changed a bit. Whilst CoreDNS has its roots in and resembles Caddy, it’s a different beast. It’s not difficult to get to know, but some of the terminology CoreDNS uses confused me: for example the term middleware: I see that as a plugin, all the more so because this program’s option to list said middleware is called … drum roll … -plugins. Another thing I needed assistance with was some of the syntax, or rather the semantics, within the configuration file.

CoreDNS is another one of those marvelous single-binary, no-dependencies Go programs which I download and run. All that’s missing is a configuration file called a Corefile. (I associate a core file with the word Corefile … #justkidding ;-)

Launching coredns as root (so that the process may bind to port 53 – use -dns.port 5353 on the command line to specify an alternative, or cap_net_bind_service with systemd) will bring up a DNS server which uses the whoami middleware: it returns no answer to queries for any domain, but it does populate the additional section:
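
A Corefile as minimal as this suffices for that (whoami is the only middleware configured, serving the root zone, i.e. everything):

. {
    whoami
}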

$ dig @192.168.1.207 www.example.org

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8021
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 3

;; QUESTION SECTION:
;www.example.org.		IN	A

;; ADDITIONAL SECTION:
www.example.org.	0	IN	A	192.168.1.130
_udp.www.example.org.	0	IN	SRV	0 0 60934 .

Quite useless, if you ask me, but at least I know the server’s running and it’s doing something.

The hosts middleware serves a zone from an /etc/hosts-type file, checking the file for changes and reloading the zone accordingly. A, AAAA, and PTR records are supported.

$ cat /etc/hosts
1.2.3.4         laptop.example.hosts
1.2.3.10         bigbox

$ cat Corefile
example.hosts {
        hosts /etc/hosts
}

With this configuration CoreDNS will respond authoritatively to a query for laptop.example.hosts only; the entry for bigbox is not found.

Let’s do something authoritative, and create a zone master file (in zone.db) and a Corefile:

$ cat Corefile
example.aa {
    file zone.db {
        transfer to *
        transfer to 192.168.1.130:53
    }
}

The file middleware loads the specified master zone file and serves it. That’s it. Simple. Not only that, but it also periodically checks whether the file has changed and actually reloads the zone when the SOA serial number changes. In the transfer stanza I specify that any client (*) may transfer the zone and that the host 192.168.1.130 gets a DNS NOTIFY when the zone is reloaded. (The port number on the address defaults to 53, I just show that it can be specified.) I tested NOTIFY with nsnotifyd and it works reliably.

Similar to file, the auto middleware can serve a whole directory of zone files, determining their origins using a regular expression.
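
A sketch, assuming zone files named db.<origin> living in /etc/coredns/zones (the path and the regular expression are merely examples):

. {
    auto {
        directory /etc/coredns/zones db\.(.*) {1}
    }
}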

The following Corefile uses the slave^H secondary middleware to create a slave zone which is transferred into RAM from the specified address. (Adding appropriate transfer to stanzas would make this secondary zone transferable by other secondaries.)

$ cat Corefile
ww.mens.de {
    secondary {
        transfer from 192.168.1.10
    }
    errors stdout
    log stdout
}

Note that the zone is served from RAM, which means that if coredns launches and cannot contact any of its zone masters, the zone cannot be served.

If I need a forwarder, I configure it, here for the root zone, i.e. for all zones not explicitly defined within the Corefile:

$ cat Corefile
. {
    proxy . 192.168.1.10:53
}

Other middleware includes bind, which overrides the address CoreDNS should bind to, and cache, which can cap TTL values when operating as a forwarder. Middleware probably worth looking at is etcd, which can read zone data from an etcd instance, and kubernetes. If you’re into that sort of stuff, of course.

Then there’s the dnssec middleware which promises to enable on-the-fly, a.k.a. “online”, DNSSEC signing of data with NSEC as authenticated denial of existence. In order to test this, I first create a key and then configure an authoritative zone in the Corefile which uses that key file:

$ ldns-keygen -a ECDSAP256SHA256 -k sec.aa
Ksec.aa.+013+28796

$ cat Corefile
sec.aa {
    file sec.aa
    dnssec {
        key file Ksec.aa.+013+28796
    }
}

A query for the zone’s DNSKEY now returns the key together with its freshly minted RRSIG:

;; ANSWER SECTION:
sec.aa.		3600 IN	DNSKEY 257 3 13 (
			meU/r4MKJ73gDanOfsiysUWn1PKDCGz6NxulydpAeFx3
			zNrepJTSVc65vJXt9koLI+PI+1uu9TadUlhEosyPjA==
			) ; KSK; alg = ECDSAP256SHA256 ; key id = 28796
sec.aa.		3600 IN	RRSIG DNSKEY 13 2 3600 (
			20170917103509 20170909073509 28796 sec.aa.
			k296ZBgiScV72AYXDuDFxBNaoZEXiBVhE57yAfgYVKYi
			nY9cmdO8tB81KX+OGA7d7V4cb6wrk876B5qRUWUZ2A== )

CoreDNS signs all records online; if I specify more than one key during configuration it signs each record with all keys.

CoreDNS binaries are provided with middleware for logging and monitoring. For example dnstap enables it to use dnstap.info’s structured binary log format, and I decide for which of the authoritative zones or proxy entries I want to log queries and responses by configuring dnstap accordingly. On the other hand, the health middleware enables an HTTP endpoint at a port you specify, and it returns a simple string if the server is alive:

$ cat Corefile
example.aa {
    health :8080
}

$ curl http://192.168.1.207:8080/health
OK

The tls middleware allows me to create a DNS-over-TLS (RFC 7858) or a DNS-over-gRPC (does anybody really need that?) server.
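
I haven’t tried it, but going by its documentation a DNS-over-TLS forwarder would be configured along these lines (the certificate and key file names are placeholders):

tls://.:853 {
    tls cert.pem key.pem
    proxy . 192.168.1.10:53
}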

The server can act as a round-robin DNS loadbalancer, and it can provide responses to TXT queries in the CH (chaos) class:

$ cat Corefile
# define the CH "tlds"
bind {
    chaos CoreDNS-010 "Miek Gieben" miek@miek.nl
}
server {
    chaos CoreDNS-010 "Miek Gieben" miek@miek.nl
}
$ dig ...
...
;; ANSWERS:
version.bind.		0	CH	TXT	"CoreDNS-010"
version.server.		0	CH	TXT	"CoreDNS-010"
authors.bind.		0	CH	TXT	"Miek Gieben"
authors.bind.		0	CH	TXT	"miek@miek.nl"
hostname.bind.		0	CH	TXT	"t420.ww.mens.de"
id.server.		0	CH	TXT	"t420.ww.mens.de"

There are more middleware “plugins” (rewrite is fun!) and there’s also some documentation as to how to write your own.
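
For instance, if I read the rewrite documentation correctly, this rewrites incoming ANY queries into HINFO queries before they are processed further – a cheap way of deflecting ANY:

. {
    rewrite type ANY HINFO
    proxy . 192.168.1.10:53
}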

Apparently it’s not possible to configure middleware globally. So, for example, if you have two servers configured in a single Corefile (by specifying different ports, say), both blocks need the middleware you want to share configured (documented here). This in turn means that certain things cannot be done, e.g. dnstap into the same Unix socket.

Apropos documentation: that is, very unfortunately, a bit lacking in clarity. While the information is there, it’s presented in a form which made me pull lots of hair, and I frequently found myself grepping (is that a verb?) my way through the project’s Github issues in search of how, or rather where, to write a directive in the Corefile, and Miek prodded me along, for which I thank him!

Other than that, CoreDNS is huge fun and has a lot of potential.
