Gogs and Gitea are Open Source, single-binary Go implementations of a Github-like git repository hosting system which I can use privately or in teams. Both are light on system requirements, unlike a pretty large and quite popular Open Source Java implementation of a similar utility.

Gitea is a fork of Gogs, and it’s very difficult to say which is better: Gogs is, at times, a bit more of a Github lookalike, I’d say, whereas Gitea’s look is a bit fresher and feels “younger”. Gitea brings everything in a single binary whereas Gogs requires a number of support files (CSS, JavaScript, templates). (The latter is advantageous because I can change or replace templates if desired; Fabian reminds me that it’s possible to customize Gitea as well.) Both projects appear to be alive and kicking in spite of rumors that Gogs had been abandoned.

gogs installer

Installation basically consists of ensuring the appropriate binary is launched. I then point a Web browser at the port number indicated on the console ( by default) and answer a few questions. The responses are written to an INI-type configuration file (app.ini), which can of course be pre-populated, and which looks like this:

[database]
DB_TYPE  = sqlite3
HOST     =
NAME     = gogs
USER     = root
SSL_MODE = disable
PATH     = data/gogs.db

[repository]
ROOT = data/repos

[server]
DOMAIN           = localhost
HTTP_PORT        = 4000
ROOT_URL         = http://localhost:4000/
DISABLE_SSH      = false
SSH_PORT         = 4022
OFFLINE_MODE     = true

A cursory search will also turn up plenty of resources for installing either one with, say, Ansible, if that’s your preferred method. Alternatively, both programs support installation with Docker.
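For the Docker route, both projects publish images on Docker Hub (gogs/gogs and gitea/gitea); a minimal sketch, with persistent volumes and port choices left to taste, looks something like this:

```shell
# Hedged example: run either image, mapping the web UI and built-in SSH
# ports to the host (add -v volumes for persistence in real use)
docker run -d --name=gogs  -p 3000:3000 -p 10022:22 gogs/gogs
docker run -d --name=gitea -p 3001:3000 -p 10023:22 gitea/gitea
```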

Both programs support different backend databases (SQLite3, PostgreSQL, MySQL) and SMTP, PAM, or LDAP authentication. They offer git over HTTP and have an optional embedded SSH server for git over SSH. (Just as with Github or Gitlab, I upload one or more SSH keys which Gogs/Gitea use for authentication.) While there is a list of Gitea features and one of Gogs features, the two are difficult to compare.

Both programs have a CLI albeit with slightly differing commands. The CLI is used for backups, user creation, and other administrative commands.

$ gogs admin create-user --name jane --password sekrit --email jane@example.com
New user 'jane' has been successfully created!

I initially chose to use Gitea but thought I’d ask in a poll yesterday: the response I got was roughly 50/50. A friendly user wrote privately and said:

I’ve been running Gogs for ~20 active users on a low-end VPS alongside other services for a year now. The cli is quite simple and works well, especially backups. Manual upgrades via git went well too 6 month ago.

Both utilities have a CLI command (dump and backup, respectively) for creating a backup of their data in a ZIP file.

I find Gogs documentation more comprehensive (and Gitea’s sometimes links to it). Featurewise, both are more or less on par, at least in terms of visible features. Both are Open Source, and both projects have over 500 open issues in their trackers and several dozen open pull requests.

Gogs and Gitea can import projects, so I used one of my Github-hosted repositories as a source to produce the following screen shots.





A bit of both

Both programs display commits like Github does and have a unified diff and a split (side-by-side) diff view. In Gitea the knobs are located as I know them from Github, but that doesn’t mean Gogs’ knob placement isn’t better. Interestingly, repository settings and other pages in Gogs are styled more similarly to Github than in Gitea. So again, six of one and half a dozen of the other. Both allow import of existing repositories (as I did above), though just the repository is imported: neither the issues nor the pull requests, at least not from Github.

issue tracker gitea

Both have an issue tracker with github-flavored Markdown support, file attachments, etc. There are slight cosmetic differences but nothing drastic that I can see. Both support Git hooks, Webhooks and deployment keys (and I do prefer the page layout that Gogs offers in the “sub pages” such as settings).

I’ve chosen to use Gitea, but as I’ve said: it’s a toss-up.

Can I have an SCM (Source Code Management) update trigger the launch of an AWX job? The answer is yes, and it’s one of the interesting things I can do to remote-control AWX.


What I need is some way to invoke a program as soon as I commit something. Subversion, git, etc. support hooks, but cloud-based repository systems (Github, Bitbucket) also support Webhooks which I’ll use here.

In this example I’m using gitea which calls itself a painless self-hosted Git service; it’s all of that and more – it’s one of those gorgeous single-binary Go programs which I’m liking more and more. (I showed you another recently – CoreDNS.) Gitea is very much like using Github, but you host it yourself, and it’s trivial to get started with it.

Within gitea, I configure an outgoing Webhook:

Gitea with a Webhook

From now on, as soon as this repository gets a commit pushed to it, the specified URL will be invoked.

On the other side, I run a lightweight, configurable utility (again in Go), called adnanh/webhook. This listens for HTTP payloads, extracts JSON from them, and it can invoke a program of my choice to react to the hook. This could be any kind of HTTP endpoint which reacts to a Webhook POST, but I chose this for simplicity.

I configure webhook to run with the following configuration, which will extract the repository’s name and the secret specified in the hook invocation from the incoming payload (here is the full payload sent by gitea).

[
    {
        "id": "awx-atest",
        "execute-command": "/Users/jpm/bin/awx-hook.sh",
        "command-working-directory": "/tmp/",
        "pass-arguments-to-command": [
            {
                "source": "payload",
                "name": "repository.full_name"
            },
            {
                "source": "payload",
                "name": "secret"
            }
        ]
    }
]

I launch webhook and watch what happens when I commit and push to the repository:

./webhook -hooks hooks.json -verbose
[webhook] 2017/10/23 18:17:07 version 2.6.5 starting
[webhook] 2017/10/23 18:17:07 setting up os signal watcher
[webhook] 2017/10/23 18:17:07 attempting to load hooks from hooks.json
[webhook] 2017/10/23 18:17:07 os signal watcher ready
[webhook] 2017/10/23 18:17:07 found 1 hook(s) in file
[webhook] 2017/10/23 18:17:07 	loaded: awx-atest
[webhook] 2017/10/23 18:17:07 serving hooks on http://0.0.0.0:9000/hooks/{id}
[webhook] 2017/10/23 18:17:09 incoming HTTP request from [::1]:54005
[webhook] 2017/10/23 18:17:09 awx-atest got matched
[webhook] 2017/10/23 18:17:09 awx-atest hook triggered successfully
[webhook] 2017/10/23 18:17:09 200 | 388.746µs | localhost:9000 | POST /hooks/awx-atest
[webhook] 2017/10/23 18:17:09 executing /Users/jpm/bin/awx-hook.sh with arguments ["/Users/jpm/bin/awx-hook.sh" "jpm/atest" "none-of-your-business"] and environment [] using /tmp/ as cwd
[webhook] 2017/10/23 18:17:10 command output: {"job":331,"ignored_fields":{},...
[webhook] 2017/10/23 18:17:10 finished handling awx-atest

The truncated output in the second to last line is the JSON returned from the AWX job launch which happens in the awx-hook.sh script:




#!/bin/sh
# arguments handed over by webhook (cf. hooks.json): repo name and secret
repo="$1"
secret="$2"

if [ "$secret" = "none-of-your-business" ]; then
    curl -qs \
        -d '{"extra_vars":{"newpoem":"hello good world"}}' \
        -H "Content-type: application/json" \
        -u admin:password  \
        http://127.0.0.1/api/v2/job_templates/5/launch/  # URL assumed: adjust host and job template id
fi
All this is obviously just an example. Refine to your taste (and add lots of error-handling!)

From AWX

Whilst on the topic of Webhooks: AWX can trigger an arbitrary Webhook as a notification; these are invoked on success or on failure (as desired), and produce a payload which looks like this:

{
  "created_by": "admin",
  "credential": "jp-ed25519",
  "extra_vars": "{}",
  "finished": "2017-10-24T06:05:09.626734+00:00",
  "friendly_name": "Job",
  "hosts": {
    "alice": {
      "changed": 0,
      "dark": 0,
      "failed": false,
      "failures": 0,
      "ok": 2,
      "processed": 1,
      "skipped": 0
    }
  },
  "id": 335,
  "inventory": "j1",
  "limit": "",
  "name": "pi1",
  "playbook": "ping.yml",
  "project": "p1",
  "started": "2017-10-24T06:04:54.127124+00:00",
  "status": "successful",
  "traceback": "",
  "url": "https://towerhost/#/jobs/335"
}

The next step is to take bits of the payload to indicate success or failure on your monitoring blinkenlights.
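As a sketch of what that could look like (POSIX shell and sed only; a hypothetical snippet, not part of AWX), pulling the status field out of a notification payload like the one above:

```shell
# Extract "status" from a (here abbreviated) AWX notification payload
# and decide what to put on the blinkenlights
payload='{ "id": 335, "name": "pi1", "status": "successful", "url": "https://towerhost/#/jobs/335" }'

status=$(printf '%s\n' "$payload" | sed -n 's/.*"status": *"\([^"]*\)".*/\1/p')

if [ "$status" = "successful" ]; then
    echo "OK: job succeeded"
else
    echo "ALERT: job ended with status '$status'"
fi
```

In real use, jq would be more robust than sed for picking apart the JSON.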

View Comments :: Ansible and AWX :: 23 Oct 2017 :: e-mail

I believe there’s a document floating around somewhere in which it is written that “JP Mens brought Ansible to Europe in 2012” or something to that effect. Whilst I think that may be a tad exaggerated, it is true that I did a few conferences and talks during which I enthusiastically spoke about the then-new kid on the block. I’m recounting this anecdote because something similar may happen with Ansible AWX: I’ll be talking about AWX to anybody who wants to listen.

Ansible AWX is the upstream project which holds the code which, at some point in time (and I guess periodically), turns into Ansible Tower. It’s been a long time coming, but Ansible has now open sourced AWX, and I’ll tell you two things:

  1. I wouldn’t want to have to use AWX and forgo the command line (but I know how to overcome the angst)
  2. I know a lot of people have been waiting for this to happen

Forget about my first point: that’s possibly just me, but I do mean it: Ansible without ansible-playbook on the CLI, seeing stdout move past, etc. wouldn’t feel right to me.

I’ve been kicking AWX’s tires quite a bit for several days, and I’ll say one thing: it really is very capable, and I will be recommending organizations take a closer look at it. If you know Tower you know AWX, but there are many who don’t know Tower.

Let me start with a few things I dislike, because it’s quite a short list:

  • documentation is basically what’s available for Ansible Tower, but some bits in that are simply not available in AWX, or at least I cannot find them (e.g. settings.py). What we need are docs for things like management, backups, etc., but that’ll hopefully be written in the course of time
  • installation is supported via Docker, OpenShift, or Minishift. That’s it. (I had a bit of difficulty wrapping my head around the *shifts, but I got along fine with the Docker install.)
  • the UI needs a huge screen to be usable and occasionally feels sluggish (possibly due to delayed reactions from the background architecture)

Now for the things which I like in AWX:

  • the API, the API, and the API. Honestly, these guys got most of this very right. All we see in the UI is available in the API. tower-cli is also very good
  • the UI which updates via Websockets
  • multiple authentication backends (I’ve tested TACACS+ and LDAP; both work). AWX also supports local users (yes, which can also be created via the API); there’s also Github, Google, and whatnot
  • some of the terminology is a bit funny, but I quickly got the gist of it, and it makes sense (project, jobs, templates, etc)
  • inventories. Lots of them. Dynamic, static, internal, from SCM.
  • SCM all over. AWX itself is basically replaceable: it obtains all it needs from external sources (SCM and PostgreSQL)
  • Role Based Access Control for those who need it. Works pretty well. Give access to a template and the user inherits the required access to credentials, inventory, etc.
  • Credentials store. Hugely useful.
  • Webhooks (outgoing) as well as API trigger from incoming hooks. That’s how I’d use AWX to avoid having to click in the UI
  • Workflows. Neat. Like a mini CI/CD thing.
  • external logging (ELK, Splunk, etc.) though what I see going out in the logs is meh
  • Notifications galore. Why wasn’t my mqtt notifier implemented? :-)
  • Clustering and High-Availability.

This isn’t an introduction to AWX. It’s more me wanting to whet your appetite. I’ll be speaking about AWX very soon, and I’m already working on an AWX training. At the first talk which I’ll give in the Netherlands, at the NLUUG I’ll be diving into as good an overview as I can give in 45 minutes. With screen-shots & things.

Further reading:

View Comments :: Ansible :: 20 Oct 2017 :: e-mail

The DNSSEC chain of trust starts at the root of the DNS, with a resolver typically trusting said root by virtue of having the root key (or a hash thereof, called a Delegation Signer – DS – record) built in or configured. From there, a resolver chases Delegation Signer (DS) records, which indicate to it that a child zone is signed. We can compare this to how a resolver chases name server (NS) records to find delegations. The hash of a child zone’s DNSKEY is a DS record which is located in its parent zone and which has therefore been signed by the parent.

chain of trust

In the case of example.net, we know that net is signed, so the root zone contains a DS record for net. If example.net is signed, its parent zone (net) contains a DS record for example.net, and so forth.

Any child zone which is signed must have a hash of its secure entry point as a DS record in its parent zone.

Uploading a DS from a child to a parent zone can be an entertaining proposition. Anything from copy/paste into some (often lousy) Web form to sending an email might be on offer. Unfortunately there’s no real standard for accomplishing this, as some parent zones want DS records whereas others insist on DNSKEY records (from which they calculate the DS themselves). Be that as it may, what we typically do is obtain the DS. Using utilities provided by BIND or PowerDNS:

$ dnssec-dsfromkey Kexample.com.+005+08419
example.com. IN DS 8419 5 1 2E4D616E70FED736A08D7854BCDD3D269A604FD3
example.com. IN DS 8419 5 2 6682CC1E528930DB7E097101C838F8D3D0DBB8EC5D1E8B50A5425FE57AB058C6

$ dig sub.example.net DNSKEY | dnssec-dsfromkey -f - sub.example.net
sub.example.net. IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
sub.example.net. IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

With a bit of zone name mangling and TTL adding we can use pdnsutil with dnssec-dsfromkey, but pdnsutil has its own subcommand as well:

$ pdnsutil export-zone-dnskey sub.example.net 32 |
     awk 'NR==1 { sub(" ", ". 60 "); print; }' |
     dnssec-dsfromkey -f - -T 120 sub.example.net
sub.example.net. 120 IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
sub.example.net. 120 IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

$ pdnsutil export-zone-ds sub.example.net
... (shown below)

Generally speaking, the story stops here, and I’d leave you in charge of getting that DS-set to your parent zone somehow. Digressing only slightly: OpenDNSSEC has, for ages, had a DelegationSignerSubmitCommand program in its configuration which can upload DS/DNSKEY to a parent via a program you create; the script you write and configure gets new keys via stdin, and you can then automate submission to a parent zone to your heart’s content.

Can I haz automatik?

What we really want is automatic DS submission, such that the child zone uploads the DS directly to the parent zone, where it is then signed. Unless the parent and the child zone are both under my administrative charge, that’s easier said than done: it’s unlikely the parent will allow me to do that.

Enter RFC 7344, which allows me to indicate, in my child zone, that I have a new DS record for submission. (This also works with DNSKEY records for those parents which prefer DNSKEY.) The fact that the child zone has a new DS for submission is indicated with a CDS record (child DS) and/or a CDNSKEY record (child DNSKEY) respectively. What will actually happen is that the parent “consumes” CDS/CDNSKEY records instead of the child “pushing” them somewhere. Hereunder I will be using CDS because the records are shorter, but CDNSKEY works equally well.

As per section 4 of RFC 7344, if a child publishes either CDS or CDNSKEY it should publish both, unless the child knows the parent will use one of a kind only.

Using PowerDNS, I can configure the Authoritative server to automatically publish CDS and/or CDNSKEY records:

$ pdnsutil set-publish-cds zone
$ pdnsutil set-publish-cdnskey zone

The process for BIND is a bit more involved. What I do here is to set a timing parameter on a key when I create a new key (or just after having created it).

$ dnssec-settime -P sync +1mi Kexample.com.+005+08419

$ grep Sync Kexample.com.+005+08419.key
; SyncPublish: 20170921094522 (Thu Sep 21 11:45:22 2017)

When running as an in-line signer, BIND will publish CDS and CDNSKEY records for the particular key until I use dnssec-settime to have it remove such records from the zone. (Note that BIND as smart signer (dnssec-signzone -S) does not add CDS or CDNSKEY records to the signed zone. Why? Good question; IMO an omission.)

So, ideally, what we then need is a mechanism by which a server checks for CDS/CDNSKEY records in a child zone and then updates the corresponding parent zone.


A combination of dig and a new utility will allow me to automate the process.


Tony Finch has written such a beast. It’s called dnssec-cds and it’s currently in a git tree he maintains. What this program does is to change DS records at a delegation point based on CDS or CDNSKEY records published in the child zone. By default CDS records are used if both CDS and CDNSKEY records are present.

What we’ll actually be doing in order to add a new signed child zone is:

  1. Create and sign the zone.
  2. Obtain the DS-set, copy that securely to the parent, and sign the result. We do this step once and we do it securely because this is how we affirm trust between parent and child.
  3. Once in the parent zone, the DS records of the child indicate the child zone’s secure entry point: validation can be chased down into the child zone.
  4. When the child’s KSK rolls, ensure child zone contains CDS/CDNSKEY records.
  5. Parent will periodically query for child’s CDS/CDNSKEY records; if there are none, processing stops.
  6. As soon as CDS/CDNSKEY records are visible in the child, dnssec-cds validates these by affirming, using the original DS-set obtained in step 2, that they’re valid and not being replayed.
  7. A dynamic (or other) update can be triggered on the parent to add the child’s new DS-set.

dnssec-cds protects against replay attacks by requiring that signatures on the child’s CDS are not older than they were on a previous run of the program. (This time is obtained from the modification time of the dsset- file or from the -s option. Note below that I touch the dsset- file to ensure this, just the first time.) Furthermore, dnssec-cds protects against breaking the delegation by ensuring that the DNSKEY RRset can be verified by every key algorithm in the new DS RRset and that the same set of keys is covered by every DS digest type.

dnssec-cds writes replacement DS records (i.e. the new DS-set) to standard output, or to the input file if -i is specified; -u prints commands suitable to be read by a dynamic DNS utility such as nsupdate. The replacement DS records will be the same as the existing records when no change is required. The output can be empty if the CDS / CDNSKEY records specify that the child zone wants to go insecure.

servers in use

The BIND name server in my example hosts the parent zone example.net, and we’ll create a child zone (sub.example.net) on PowerDNS Authoritative (because we can). Which server brand the zone’s hosted on is quite irrelevant other than it must be able to serve CDS/CDNSKEY records in the zone. This is particularly easy to automate with PowerDNS.

First we sign the child zone and export its DS-set:

$ pdnsutil secure-zone sub.example.net
Securing zone with default key size
Adding CSK (257) with algorithm ecdsa256
Zone sub.example.net secured
Adding NSEC ordering information

$ pdnsutil export-zone-ds sub.example.net > dsset-sub.example.net.
$ cat dsset-sub.example.net.
sub.example.net. IN DS 32128 13 1 6823d9bb1b03df714dd0eb163e20b341c96d18c0 ; ( SHA1 digest )
sub.example.net. IN DS 32128 13 2 039b660206db76611305288042ee3fa132f3709e229005baf2b24bcdae7bc513 ; ( SHA256 digest )
sub.example.net. IN DS 32128 13 3 753cf5f1c9a73fdaf3e09454a55916e7381bf24ce3c0e077defe1cfd15b6da5f ; ( GOST R 34.11-94 digest )
sub.example.net. IN DS 32128 13 4 e772f48556bf23effe80946a5306e5d00c6138d321f6d0a66a2673d2b2bee4e3fe13340a7bcf921484081046e92ca983 ; ( SHA-384 digest )

Note how the exported dsset- contains one DS for each algorithm supported by my PowerDNS installation. We now copy the dsset- to the parent server, and add its content to the parent zone. The zone is configured with auto-dnssec maintain so BIND will immediately sign anything we add to it.

( echo "ttl 60"
  sed -e "s/^/update add /" -e "s/;.*//" dsset-sub.example.net.
  echo "send" )  | nsupdate -l
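For illustration (a hypothetical standalone rerun of the same sed invocation, using one line borrowed from the dsset- file above), this is the script the pipeline hands to nsupdate:

```shell
# Each DS line gains an "update add" prefix and loses its trailing
# comment; "ttl 60" and "send" bracket the update script
ds='sub.example.net. IN DS 32128 13 1 6823d9bb1b03df714dd0eb163e20b341c96d18c0 ; ( SHA1 digest )'

nsup=$( echo "ttl 60"
        printf '%s\n' "$ds" | sed -e "s/^/update add /" -e "s/;.*//"
        echo "send" )
printf '%s\n' "$nsup"
```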

If I now query for the DS records for sub.example.net in the parent zone (recall a DS RRset is in the parent) I obtain an appropriate response:

$ dig +norec @BIND sub.example.net ds
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14192
;; flags: qr aa; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

sub.example.net.        60      IN      DS      32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
sub.example.net.        60      IN      DS      32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCD AE7BC513
sub.example.net.        60      IN      DS      32128 13 3 753CF5F1C9A73FDAF3E09454A55916E7381BF24CE3C0E077DEFE1CFD 15B6DA5F
sub.example.net.        60      IN      DS      32128 13 4 E772F48556BF23EFFE80946A5306E5D00C6138D321F6D0A66A2673D2 B2BEE4E3FE13340A7BCF921484081046E92CA983

Our parent zone is signed, our child zone is signed, our parent has a signed DS record (more than one actually, but that’s fine) for our child zone: the chain of trust is in place. (Note the key tag on the DS: 32128.)

Let it roll!

At some point in time we want to roll the child’s KSK, and I am not going to address timing issues of the roll proper; I’m discussing CDS only.

In order to roll a key, we create a new key in the child zone. Simultaneously we request PowerDNS publish CDS records in the zone for all keys:

$ pdnsutil add-zone-key sub.example.net ksk 256 active ecdsa256
Added a KSK with algorithm = 13, active=1
Requested specific key size of 256 bits

$ pdnsutil set-publish-cds sub.example.net

$ pdnsutil show-zone sub.example.net
This is a Master zone
Last SOA serial number we notified: 0 != 3 (serial in the database)
Metadata items:
        PUBLISH-CDS     1,2
Zone has NSEC semantics
ID = 31 (CSK), flags = 257, tag = 32128, algo = 13, bits = 256    Active ( ECDSAP256SHA256 )
CSK DNSKEY = sub.example.net. IN DNSKEY 257 3 13 12lrJwo8w/PbnD8JssSlmuN7adbidwCsCaFn2yiXctj2k9g9dlGw+KTDqRsanj4InPgGcQwllBRGSojfwZVHRQ== ; ( ECDSAP256SHA256 )
DS = sub.example.net. IN DS 32128 13 1 6823d9bb1b03df714dd0eb163e20b341c96d18c0 ; ( SHA1 digest )
DS = sub.example.net. IN DS 32128 13 2 039b660206db76611305288042ee3fa132f3709e229005baf2b24bcdae7bc513 ; ( SHA256 digest )
DS = sub.example.net. IN DS 32128 13 3 753cf5f1c9a73fdaf3e09454a55916e7381bf24ce3c0e077defe1cfd15b6da5f ; ( GOST R 34.11-94 digest )
DS = sub.example.net. IN DS 32128 13 4 e772f48556bf23effe80946a5306e5d00c6138d321f6d0a66a2673d2b2bee4e3fe13340a7bcf921484081046e92ca983 ; ( SHA-384 digest )
ID = 32 (CSK), flags = 257, tag = 48629, algo = 13, bits = 256    Active ( ECDSAP256SHA256 )
CSK DNSKEY = sub.example.net. IN DNSKEY 257 3 13 EY2fpwiU3dcg22g83gC+9oQ65vJHPELR6sU1MLB8r8F+6egarSIDzjyM5AY2RlbFGgOkjpPMaUonCONPalOQ4A== ; ( ECDSAP256SHA256 )
DS = sub.example.net. IN DS 48629 13 1 4e324c9416d0009b4262c39494a1c7989f9c055c ; ( SHA1 digest )
DS = sub.example.net. IN DS 48629 13 2 87081d41bbaba1c25d28f48ede7718e96ea8387cae2a286fa5c61e57971b8c66 ; ( SHA256 digest )
DS = sub.example.net. IN DS 48629 13 3 99eadcdc47adfe2f68df3e1a4aa775fa409bafbd7815ca1c2643cdf49a0996bf ; ( GOST R 34.11-94 digest )
DS = sub.example.net. IN DS 48629 13 4 f961984bc561906cde1987bf89f90654865d4b9500ee7eed8bf4a0245244ac492eeb66776475e7448826f74638ad9e9e ; ( SHA-384 digest )

This output is easy to follow once we notice that the top part has some metadata and then come the keys. Note that pdnsutil is printing a DS record for each of the algorithms PowerDNS supports, hence the verbosity. Let’s pay attention to the key tags: in above list we see our original 32128 tag and the new tag 48629.

The child zone is still signed; there are two keys in the zone, and we’ve requested CDS records be published. Does that work?

$ dig @POWERDNS sub.example.net cds
sub.example.net.        3600    IN      CDS     32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
sub.example.net.        3600    IN      CDS     48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
sub.example.net.        3600    IN      CDS     32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCD AE7BC513
sub.example.net.        3600    IN      CDS     48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57 971B8C66

The CDS records are available with the digest algorithms currently implemented for DS, namely 1 (SHA1) and 2 (SHA256).

.. to the parent

Back on the parent, we prepare to use dnssec-cds for the magic. We already have the dsset- file and, as discussed above, I touch its timestamp (or use the -s switch):

$ touch -t 201709140000 dsset-sub.example.net.

$ cat run-cds.sh
#!/bin/sh
z=sub.example.net

dig @POWERDNS +dnssec +noall +answer $z DNSKEY $z CDNSKEY $z CDS |
    dnssec-cds -u -f /dev/stdin -T 42 -d . -i.orig $z |
    tee /tmp/nsup |
    nsupdate -l

$ ./run-cds.sh

dnssec-cds with the -u option creates a script suitable for feeding into nsupdate; for debugging purposes, I tee it into a file to show it here:

$ cat /tmp/nsup
update add sub.example.net. 42 IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
update add sub.example.net. 42 IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66
update del sub.example.net. IN DS 32128 13 3 753CF5F1C9A73FDAF3E09454A55916E7381BF24CE3C0E077DEFE1CFD15B6DA5F
update del sub.example.net. IN DS 32128 13 4 E772F48556BF23EFFE80946A5306E5D00C6138D321F6D0A66A2673D2B2BEE4E3FE13340A7BCF921484081046E92CA983

Querying the parent, we see that the DS records with the superfluous algorithms have been deleted and the DS records for the new key have been added. We also see our dsset- file has been updated accordingly (and I pay attention to the file’s modification time, which has been set to the inception time of the DNSKEY RRSIG of the child zone):

$ dig +norec @BIND sub.example.net ds
sub.example.net.        42      IN      DS      32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
sub.example.net.        42      IN      DS      32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCD AE7BC513
sub.example.net.        42      IN      DS      48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
sub.example.net.        42      IN      DS      48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57 971B8C66

$ cat dsset-sub.example.net.
sub.example.net. 42 IN DS 32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
sub.example.net. 42 IN DS 32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCDAE7BC513
sub.example.net. 42 IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
sub.example.net. 42 IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

Now I delete the “old” key from the child zone using its (in my opinion slightly confusing) ID, which is 31 – compare with the output of pdnsutil show-zone above. (I would have preferred pdnsutil to use key tags to refer to a zone’s keys):

$ pdnsutil remove-zone-key sub.example.net 31

Now comes the drum-roll moment: if we re-run our dnssec-cds script will it blend?

$ ./run-cds.sh

$ cat /tmp/nsup
update del sub.example.net. IN DS 32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
update del sub.example.net. IN DS 32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCDAE7BC513
$ dig +norec @BIND sub.example.net ds
sub.example.net.        42      IN      DS      48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
sub.example.net.        42      IN      DS      48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57 971B8C66

A few points to note:

  • when looking at the nsupdate script produced by dnssec-cds, pay attention to add vs. del on the update statements
  • it’s not necessary to have dnssec-cds maintain the dsset- file on the file system, but it gives me a warm and fuzzy feeling, so I think I’d always do that
  • I should also mention that the dnssec-dsfromkey utility is quite versatile; we saw it above, and it’s good to know that the -C option creates CDS records in lieu of DS records.

Tony’s dnssec-cds, together with a wee bit of scripting, will basically allow us to add new DS records for zones to their parent zones. In the examples above I’ve used nsupdate, but this could equally well be accomplished by other means.

View Comments :: DNS and DNSSEC :: 21 Sep 2017 :: e-mail

DNS servers can optionally log queries by formatting a message and storing it in a file, sending it through syslog, etc. This is an I/O-intensive operation which can dramatically slow down busy servers, and the biggest issue is that we get the query but not the associated response.

[1505125481] unbound[89142:0] info: dnstap.info. A IN

10-Sep-2017 08:31:03.644 client @0x7f9b12dd5c00 (dnstap.info): view internal: query: dnstap.info IN A +E(0)K (

In addition to having to format the data into human-readable form and write the resulting string to a file, DNS server authors haven’t been able to standardize on a query-logging format. As can be seen from the two examples above (first Unbound, then BIND), the strings differ dramatically, which also means that further parsing/processing of these logs has to differ as well. (Have fun building regular expressions for both – and then having more than two problems.)
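To make the point concrete, here’s a throwaway sketch: extracting just qname and qtype from the two sample lines already requires two different sed expressions (and neither survives a format change in the next release):

```shell
unbound_line='[1505125481] unbound[89142:0] info: dnstap.info. A IN'
bind_line='10-Sep-2017 08:31:03.644 client @0x7f9b12dd5c00 (dnstap.info): view internal: query: dnstap.info IN A +E(0)K'

# Unbound: "info: <qname> <qtype> IN" at end of line
u=$(printf '%s\n' "$unbound_line" | sed -n 's/.*info: \(.*\) \([A-Z]*\) IN$/\1 \2/p')
# BIND: "query: <qname> IN <qtype> <flags>"
b=$(printf '%s\n' "$bind_line" | sed -n 's/.*query: \([^ ]*\) IN \([A-Z]*\).*/\1 \2/p')

echo "$u"   # dnstap.info. A
echo "$b"   # dnstap.info A
```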

One method to overcome this is to capture packets externally, such as how DSC does it, but doing it in this fashion means the software must deal with several things the name server has already dealt with: fragments, TCP stream reassembly, spoofed packets, etc. (Here’s a bit of a “versus” thread.)

An issue with both these methods is that the query a name server received and the response it returned aren’t bundled together. Only the name server software itself knows, really, what belongs together at the time the query occurred and the response was returned. Can you imagine a DNS log so complete that you could see what query a client issued and which response it got?

dnstap is a solution which introduces a flexible, binary log-format for DNS servers together with Protocol Buffers, a mechanism for serializing structured data. Robert Edmonds had the idea for dnstap and created the first implementation with two specific use cases in mind:

  • make query-logging faster by eliminating synchronous I/O bottlenecks and message formatting
  • avoid complicated state reconstruction by capturing full messages instead of packets for passive DNS

What dnstap basically does is add a lightweight message-copy routine to a DNS server. This routine duplicates a DNS message with its context and moves it out of the DNS server to a listening process, typically via a Unix domain socket. Under extreme load, a DNS server can simply start dropping log payloads instead of degrading server performance.

dnstap enabled DNS server (from project)

The dnstap protocol buffer content is defined in this schema, and includes the type of message (see below), the socket family the queries/responses were transported over, the socket protocol, query and responder addresses, the initiator’s DNS query message in wire format, timestamps, and the original wire-format DNS response message, verbatim.

dnstap is currently implemented in a few utilities as well as in these DNS servers:

  • BIND
  • CoreDNS
  • Knot 2.x
  • Knot Resolver (> 1.2.5)
  • Unbound

For my experiments, I’ll be using BIND 9.11.2, CoreDNS-011, Knot 2.5.4, and Unbound 1.6.5.

Before launching a dnstap-enabled (and configured) DNS server, we have to ensure a listener has created the Unix domain socket. The dnstap code in BIND, Unbound, etc. acts as a client rather than a server, so it requires a server which will accept connections. Robert Edmonds, dnstap’s inventor, did it this way so that a single socket can be used by several dnstap senders (much like a logging daemon accepts messages from multiple clients). If the Unix socket isn’t present, or nothing is listening on it, the client code (in the DNS server) will periodically attempt to reconnect.
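The listener side is an ordinary Unix domain socket server. A minimal Python sketch (the socket path is arbitrary; fstrm_capture and the dnstap utility additionally speak the Frame Streams handshake, which I omit here):

```python
import os
import socket

def open_tap_listener(path: str) -> socket.socket:
    """Create the Unix domain socket a dnstap sender (named, unbound,
    etc.) will connect to. The DNS server is the client here; this
    listener must exist first, otherwise the server keeps retrying."""
    try:
        os.unlink(path)   # remove a stale socket left by a previous run
    except FileNotFoundError:
        pass
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)         # several senders may queue up to connect
    return srv
```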

We’ll be looking at two programs which provide this functionality.


We’ll likely have to build our DNS server installations ourselves, as official packages are typically not built with dnstap support. The requirements for all of the below (except CoreDNS, which provides everything in its single statically linked binary) are fstrm, protobuf, and protobuf-c:

  • fstrm is a frame streams implementation in C. It implements a lightweight protocol with which any serialized data format which produces byte sequences can be transported and provides a Unix domain listener (fstrm_capture) for dnstap records written by the DNS servers.
  • protobuf is the implementation of Google’s Protocol Buffers format. We install it in order to build and use some of the utilities, namely the protobuf compiler.
  • protobuf-c is a C implementation of the latter; this includes a library (libprotobuf-c) which some of the utilities require.

Other than these requirements, a number of the DNS server implementations I mention have their own additional requirements which I will not specify here – the projects’ documentation will tell you more.
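To give a feel for what fstrm transports, here is a deliberately simplified Python sketch of its framing: a data frame is a 32-bit big-endian length followed by the payload (the real protocol additionally has control frames for content-type negotiation, which I omit):

```python
import struct

def encode_frame(payload: bytes) -> bytes:
    # data frame: 32-bit big-endian length, then the payload itself
    return struct.pack(">I", len(payload)) + payload

def decode_frames(stream: bytes) -> list:
    """Split a byte stream back into its constituent payloads."""
    frames, off = [], 0
    while off + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, off)
        off += 4
        frames.append(stream[off:off + length])
        off += length
    return frames
```

Any serialized format which produces byte sequences (here: dnstap protocol buffers) can ride on top of this.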


What follows are some utilities we’ll be using for working with and/or decoding (i.e. printing) dnstap records.

dnstap -u


fstrm_capture

I can use fstrm_capture to create the required Unix domain socket which dnstap clients can write to. The program needs a “Frame Streams” content type specified, as well as the path to the Unix socket and the file name it should write protocol buffer frames to:

$ fstrm_capture -t protobuf:dnstap.Dnstap -u /var/run/dnstap.sock -w fstrm.tap

While there is provision in the code to handle SIGHUP (fstrm_capture then flushes the output file), there is no provision for file rotation.

An alternative method of doing something similar is to use the dnstap utility from the dnstap package.


dnstap

The dnstap project maintains a dnstap utility written in Go. There are, unfortunately, no prebuilt binaries on the releases page, but building the program is easy (after you go through the hassle of installing Go).

I launch the dnstap utility (instead of launching fstrm_capture) like this:

dnstap -u /var/run/dnstap.sock -w file.tap

I can also use dnstap to read a tap file from the file system and print it out in various formats, which I will be showing below when we look at some examples. dnstap can also create a TCP endpoint (e.g. for CoreDNS) with dnstap -l <address:port>.


dnstap-ldns

For the actual decoding of dnstap files (i.e. printing them out), we can use dnstap as just discussed, or the reference utility, dnstap-ldns, which has thankfully kept the option letters used by dnstap. As its name implies, however, this utility brings an additional dependency, namely ldns. (But you have that already for its utility programs, don’t you?)


dnstap-read

Whilst on the subject of decoding dnstap files, dnstap-read, from the BIND distribution, can also do that nicely. By default it prints the short version, but with -y it’ll also produce the long YAML format.

$ dnstap-read file.tap
11-Sep-2017 10:59:00.652 CR <- UDP 107b www.test.aa/IN/A
11-Sep-2017 10:59:00.954 CR <- UDP 107b www.test.aa/IN/A
$ dnstap-read -y file.tap
identity: tiggr
version: bind-9.11.2
  response_time: !!timestamp 2017-09-11T08:59:00Z
  message_size: 107b
  socket_family: INET
  socket_protocol: UDP
  query_port: 61308
  response_port: 53
    opcode: QUERY
    status: NOERROR
    id:  24094
    flags: qr aa rd ra
    ANSWER: 1
        version: 0
        udp: 4096
        COOKIE: 67816834b9432822c5a508fd59b65054fb5bbab0c5fe14f8
      - www.test.aa. IN A
      - www.test.aa. 60 IN A
      - test.aa. 60 IN NS localhost.
  response_message: |
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id:  24094
    ;; flags: qr aa rd ra    ; QUESTION: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
    ; EDNS: version: 0, flags:; udp: 4096
    ; COOKIE: 67816834b9432822c5a508fd59b65054fb5bbab0c5fe14f8
    ;www.test.aa.			IN	A

    www.test.aa.		60	IN	A

    test.aa.		60	IN	NS	localhost.


kdig

kdig is the dig-like utility shipped with Knot, and it too can read dnstap files and present them in a dig-like (I should probably say kdig-like) manner. I note that kdig doesn’t show the version information recorded in the tap file.

$ kdig -G file.tap +multiline
;; Received 759 B
;; Time 2017-09-10 06:28:20 UTC
;; From in 32.1 ms
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 38621
;; Flags: qr aa; QUERY: 1; ANSWER: 3; AUTHORITY: 0; ADDITIONAL: 1

;; Version: 0; flags: do; UDP size: 1280 B; ext-rcode: NOERROR

;; dnstap.info.          IN A

dnstap.info.            3600 IN A
dnstap.info.            3600 IN A
dnstap.info.            3600 IN RRSIG A 5 2 3600 20171006034256 (
                                20170906032611 36186 dnstap.info.

What’s quite practical is that kdig can record a live query/response (i.e. something you’d do right now) into a tap file. In the following example, I use kdig to perform a query: the program prints the response on stdout and simultaneously writes it, in dnstap format, to the specified file:

$ kdig -E iis-a.tap iis.se AAAA
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 53652
;; Flags: qr rd ra; QUERY: 1; ANSWER: 1; AUTHORITY: 3; ADDITIONAL: 0

;; iis.se.             		IN	AAAA

iis.se.             	60	IN	AAAA	2001:67c:124c:4006::214

iis.se.             	3600	IN	NS	i.ns.se.
iis.se.             	3600	IN	NS	ns.nic.se.
iis.se.             	3600	IN	NS	ns3.nic.se.

$ ls -l *.tap
-rw-r--r-- 1 jpm users    305 Sep 10 14:55 iis-a.tap

# (I have reported the epoch of 1970-01-01 as a bug to the knot-dns project)

$ dnstap-ldns -r iis-a.tap
1970-01-01 04:22:23.659631 TQ UDP 24b "iis.se." IN AAAA
1970-01-01 04:22:23.725209 TR UDP 110b "iis.se." IN AAAA

$ dnstap-ldns -r iis-a.tap -y
version: "kdig 2.5.4"
  query_time: !!timestamp 1970-01-01 04:22:23.659631
  response_time: !!timestamp 1970-01-01 04:22:23.725209
  socket_family: INET
  socket_protocol: UDP
  query_port: 54370
  response_port: 53
  response_message: |
    ;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 53652
    ;; flags: qr rd ra ; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 0

    ;iis.se.	IN	AAAA

    iis.se.	60	IN	AAAA	2001:67c:124c:4006::214

(Note how kdig uses the TOOL_* subtypes in the dnstap records.)

After discussing some of the tools for working with (in particular for decoding) dnstap, I now turn to the DNS servers proper.

DNS servers

Before we look at the individual DNS servers and how to configure them for support of dnstap, it’s interesting to know that dnstap currently has 12 defined subtypes of dnstap “Message”. dnstap tags a log record with a subtype corresponding to the location at which a record was recorded, so we can at any point in time see where the record was collected.

dnstap flow


These subtypes ought to be pretty self-explanatory, but their full description is in the dnstap protocol schema. The diagram above illustrates at which point they are obtained. The mnemonics in parentheses are those which are output by the utilities in “quiet” mode.
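For quick reference, here are the twelve Message types as a Python table, together with the two-letter mnemonics the utilities print in quiet mode (we’ve already seen CR, TQ and TR in the output above; consult dnstap.proto for the authoritative list):

```python
# the 12 dnstap Message subtypes and their quiet-mode mnemonics
DNSTAP_MESSAGE_TYPES = {
    "AUTH_QUERY":         "AQ",
    "AUTH_RESPONSE":      "AR",
    "RESOLVER_QUERY":     "RQ",
    "RESOLVER_RESPONSE":  "RR",
    "CLIENT_QUERY":       "CQ",
    "CLIENT_RESPONSE":    "CR",
    "FORWARDER_QUERY":    "FQ",
    "FORWARDER_RESPONSE": "FR",
    "STUB_QUERY":         "SQ",
    "STUB_RESPONSE":      "SR",
    "TOOL_QUERY":         "TQ",
    "TOOL_RESPONSE":      "TR",
}
```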


BIND

BIND’s configuration is as flexible as Unbound’s in terms of dnstap logging. I build named by adding --enable-dnstap to ./configure and then modify named.conf.

I can set different types to be logged for each view (but I dislike views, so I won’t do that). Supported types are client, auth, resolver, and forwarder, as well as all, which causes all dnstap messages to be logged regardless of their type. Each type may take an additional argument to indicate whether to log query or response messages; if not specified, BIND should log both, but this didn’t work for me.

options {
    dnstap { all; };
    // dnstap { auth; resolver query; resolver response; };

    /* where to capture to: file or unix (socket) */
    // dnstap-output file "/tmp/named.tap";
    dnstap-output unix "/var/run/dnstap.sock";

    dnstap-identity "tiggr";
    dnstap-version "bind-9.11.2";
};

Once named starts, it begins producing dnstap log data. When writing to a file, we can instruct named to truncate and re-open the file, or to roll its dnstap output file, using rndc:

$ rndc dnstap -reopen        # Close, truncate and re-open the DNSTAP output file.
$ rndc dnstap -roll <count>  # Close, rename and re-open the DNSTAP output file(s).

If you’re interested in the nitty-gritty of dnstap on servers which are both authoritative and recursive, here’s a thread Evan Hunt started. (But in my opinion you should not be interested in servers which are simultaneously authoritative and recursive …)

Other than that, there’s a good single-page document on using dnstap with BIND.

In BIND 9.12.x, dnstap logfiles can be configured to automatically roll when they reach a specified size, for example:

dnstap-output file "/taps/prod.tap" size 15M versions 100 suffix increment;


CoreDNS

I spoke earlier of CoreDNS, and one of the really great things about this single-binary program is that it bundles everything I need to produce dnstap frames.

The following configuration suffices for CoreDNS to provide a forwarder which logs all requests to the specified Unix domain socket:

.:53 {
    dnstap /var/run/dnstap.sock full
    proxy .
}

If I then look at a query I see what type of DNS server produced this query, namely a forwarder.

$ dnstap -r coredns.tap -y
  socket_family: INET
  socket_protocol: UDP
  response_port: 53
  query_message: |
    ;; opcode: QUERY, status: NOERROR, id: 60806
    ;; flags: rd ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

    ;dnstap.info.       IN       A


    ; EDNS: version 0; flags: ; udp: 4096
    ; COOKIE: 7f8f3ebbf66ffc95


I note that CoreDNS records neither identity nor version in the tap file. CoreDNS can log to a remote endpoint by specifying tcp://address:port as sink.


Unbound

Unbound has had dnstap support for a few versions now, ever since Robert Edmonds did the first prototype. I build dnstap support into Unbound with --enable-dnstap and configure it in unbound.conf:

dnstap:
    dnstap-enable: yes
    dnstap-socket-path: "/var/run/dnstap.sock"
    dnstap-send-identity: yes
    dnstap-send-version: yes
    dnstap-log-client-query-messages: yes
    dnstap-log-client-response-messages: yes
    dnstap-log-forwarder-query-messages: yes
    dnstap-log-forwarder-response-messages: yes
    dnstap-log-resolver-query-messages: yes
    dnstap-log-resolver-response-messages: yes

A local-zone is answered directly by Unbound without performing recursion, so you’ll only see response messages for those domains if you set “dnstap-log-client-response-messages: yes”.
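For illustration, a hypothetical local-zone in unbound.conf (the zone name and address are made up) which Unbound answers directly:

```
server:
    local-zone: "example.internal." static
    local-data: "host.example.internal. IN A 192.0.2.10"
```

Queries for host.example.internal should then produce only client-side dnstap records, never resolver ones, since no recursion takes place.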

The documentation of dnstap in unbound.conf is, well, no it’s not, it’s simply not there. Actually there is no documentation at all for dnstap in unbound-1.6.5/doc/ which is quite atypical: Unbound’s usually very good about that…


Knot DNS

I tested knot-2.5.4 (authoritative) and built it with

./configure --with-module-dnstap=yes --enable-dnstap

I configure dnstap in knot.conf by specifying the module to load (mod-dnstap) and its parameters, most of which are self-explanatory and have sensible defaults. The sink directive specifies either a file on the file system (which is opened and truncated) or, if prefixed with the string "unix:", a Unix domain socket, e.g. as created by fstrm_capture.

mod-dnstap:
  - id: tap
    sink: /root/taps/knot-auth.tap
    # sink: unix:/var/run/dnstap.sock
    log-queries: false
    log-responses: true

template:
  - id: default
    global-module: mod-dnstap/tap

$ dnstap-ldns -r knot-auth.tap  -y
identity: "knot.ww.mens.de"
version: "Knot DNS 2.5.4"
  query_time: !!timestamp 1970-01-01 21:10:59.484261
  response_time: !!timestamp 1970-01-01 21:10:59.484261
  socket_family: INET
  socket_protocol: UDP
  query_port: 53394
  response_message: |
    ;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 35495
    ;; flags: qr aa rd ; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

    ;www.k20.aa.	IN	A

    www.k20.aa.	3600	IN	A



    ;; EDNS: version 0; flags: ; udp: 4096

Daniel writes that a knotc reload will rotate the dnstap output file as will a SIGHUP.

Wrapping up

There’s a lot of good in dnstap, and it’s a huge improvement over what existed beforehand. There are however a few things to take note of:

  • While debugging, it can take a while until queries start showing up in the dnstap file due to the buffering which the DNS servers do
  • I’m convinced (but have not taken the time to prove) some servers drop logs even though they’re completely idle. This might be due to either real drops or to not flushing existing records when a server is stopped. For example, after shutting down named (rndc stop) and killing dnstap I find the last query missing. People with heavy-traffic servers won’t notice this of course.
  • The existing toolset is a bit sloppy at times: for example, fstrm_capture and dnstap -u cannot properly rotate output files (the former can at least rotate every N seconds). This is easy to fix, and it needs doing.
  • There’s no network transport for dnstap, other than CoreDNS which can send dnstap to a tcp:// target. Somebody has started some work, but I haven’t seen it.

dnstap is a relatively young, open standard for DNS query logging. It was designed for large, busy DNS servers and incurs minimal performance loss. It already has wide adoption amongst open source DNS server implementations, even if some are still missing: NSD, PowerDNS Authoritative, and PowerDNS Recursor come to mind, and I hope they’ll join the party very soon. (There’s already a pull request to add dnstap support to dnsdist which, according to the project, will greatly simplify the work for the PowerDNS products.)

Further reading:

View Comments :: DNS, logging, and monitoring :: 11 Sep 2017 :: e-mail
