After three (or was it almost four?) years of using Slack, we took the plunge and set up our own instance of Mattermost. The reasons for doing this include wanting more control over our data and wanting an unlimited history which Slack, as a hosted service, offers only to paying customers. This is more than fair enough – it’s not their fault that we’re too stingy. Apropos stingy: Mattermost exists in three editions – we chose the Team Edition; guess why.

Web UI

If you know how to wield Slack, you know how to use Mattermost. (Oh, please don’t mind the awful colors above – this is my test installation and I need severe optical distinction in order to not mistake the installations.) There are few differences, if any. Mattermost’s use of Markdown appears to be more comprehensive, in particular because its Webhooks support Markdown.

Installing Mattermost is easy thanks to the good documentation they provide, which explains, step by step, what I have to do to install Mattermost on a machine. I chose PostgreSQL because I recall having read that it’s Mattermost’s primary database candidate, and because it allows me to use PostgreSQL – a reason sufficient on its own. I chose to loosely follow the advice given regarding the location of config.json as that seemed a sensible thing to do. (config.json is Mattermost’s central configuration file, which is reloaded on change.) If Ansible’s your drug, Pieter Lexis created an Ansible role for installing Mattermost on Debian/Ubuntu, and there’s also a playbook which does that and more.

I can create as many teams as I want on a server, and each team can have as many channels as I want. In the Team Edition users authenticate with password and I enforced e-mail verification. (Other editions offer 2FA and LDAP.)

Mattermost users can upload files (images, code snippets, etc.) which have to be stored somewhere. By default a configurable directory on the local file system is used, but Mattermost’s system console allows me to configure Amazon S3 or S3-compatible storage such as Minio.

Webhooks, API, Websocket, CLI, etc.

One of the things I like most about Slack are its integrations, and lo and behold, Mattermost has these as well: incoming and outgoing Webhooks as well as slash commands. Lovely.

Also very powerful is the Websocket API; there’s a Python 3 driver which works very well, and next to that is Mattermost’s API with which I can create users, get their details, enumerate posts, create posts, etc. The following example, using curl and jo, shows how I can add a post from the command line:

json=$(jo channel=cartoons1 \
          username="my-script" \
          icon_url="" \
          text='Ha, this is _just_  an example using `curl`, :tada:')

curl -H 'Content-Type: application/json' \
     --data "$json" \
     "$HOOK_URL"    # the incoming Webhook URL (elided here)
posted with curl

(For a much more flexible solution see 42wim’s mattertee.)

Programs which use Mattermost’s API must authenticate to the service, and they can do so either with session tokens, which expire, or with Personal Access Tokens, which I create in my account preferences and which don’t expire until I revoke them. Additionally, Mattermost can act as an OAuth 2.0 provider.
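For instance, here’s a sketch in Python (standard library only) of what such an authenticated API call looks like; the server URL, token, and channel ID are placeholders of mine:

```python
import json
import urllib.request

def make_post_request(base_url, token, channel_id, message):
    """Build an authenticated request for Mattermost's v4 posts endpoint."""
    body = json.dumps({"channel_id": channel_id, "message": message}).encode()
    return urllib.request.Request(
        base_url + "/api/v4/posts",
        data=body,
        headers={
            "Authorization": "Bearer " + token,   # Personal Access Token
            "Content-Type": "application/json",
        },
        method="POST",
    )

# All values below are placeholders; urllib.request.urlopen(req) would send it.
req = make_post_request("https://chat.example.org", "xoxb-not-a-real-token",
                        "channel-id-here", "hello from the API")
```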

Masses of messages

One of the channels we have is reserved for Nagios/Icinga-type notifications. One thing I wanted to be able to do is to delete and purge those messages; I don’t see why I need to know weeks later that something was offline for a moment. However, if I delete a message, either interactively or via the API, Mattermost soft deletes it; the message is marked as deleted with a time stamp, but it remains in the database.

So I went in search of an API to physically remove these messages, but it doesn’t exist. The solution? Use, say, the API to find the posts I want to remove, “delete” them using said API, and then use an SQL DELETE to purge:

DELETE FROM posts WHERE deleteat <> 0;
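The soft-delete-then-purge flow can be simulated with an in-memory SQLite table; the schema here is a much simplified stand-in for Mattermost’s real posts table:

```python
import sqlite3
import time

# Simplified stand-in for Mattermost's posts table: deleteat is 0 for live
# posts and a millisecond timestamp once a post has been soft-deleted.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE posts (id TEXT PRIMARY KEY, message TEXT,"
           " deleteat INTEGER DEFAULT 0)")
db.executemany("INSERT INTO posts (id, message) VALUES (?, ?)",
               [("p1", "host down"), ("p2", "host up"), ("p3", "keep me")])

# Soft delete: what happens when a post is deleted interactively or via the API
db.execute("UPDATE posts SET deleteat = ? WHERE id IN ('p1', 'p2')",
           (int(time.time() * 1000),))

# Purge: the SQL DELETE which actually removes the soft-deleted rows
db.execute("DELETE FROM posts WHERE deleteat <> 0")

remaining = [row[0] for row in db.execute("SELECT id FROM posts")]
```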

Mattermost at a console

Mattermost has a Web UI and some mobile and fat clients, but what does a person do with just a terminal at his/her disposal? Use either matterhorn or Irssi or your favorite IRC client with matterircd.

seed in Matterhorn

What you see in the screenshot above is matterhorn showing what the first Web UI screenshot shows. The program has some really cool features including scripts – just shell scripts which are given the text I enter on stdin, and the stdout they produce is posted. Matterircd, on the other hand, is an IRC to Mattermost gateway, also written by 42wim: you connect it to your remote Mattermost installation and talk to it via your IRC client.
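Such a script can be anything executable that reads stdin and writes stdout; here’s a tiny Python stand-in (the quoting transformation is just an example of mine):

```python
import sys

def filter_message(text):
    """Quote each line of the entered text, Markdown-style."""
    return "\n".join("> " + line for line in text.rstrip("\n").split("\n")) + "\n"

# matterhorn hands the typed text to the script on stdin and posts its stdout:
# sys.stdout.write(filter_message(sys.stdin.read()))
```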


Do I regret leaving Slack? Not really, even though their mobile apps are quite a bit more polished than Mattermost’s are – a result of development effort obviously. I now get a warm and quite fuzzy feeling knowing that we have control over our data, how we back it up, and what we do with it. And I’m confident that (other than the NSA) no third party has it.

Apropos third parties: while it’s possible to access dozens (or hundreds?) of integrations using an external service called Zapier, we will not, as that defeats the purpose of wanting to be the sole owners of said data. Similarly, we’ve been discussing mobile notifications, for which we could either set up mobile push or do it ourselves; we haven’t made a final decision yet.

Do it yourself push

Do it ourselves, you ask? Yes, that’s possible by creating a notification endpoint which Mattermost uses whenever it’s about to notify a mobile device. The post I created with a shell script earlier is, in this example, pushed to MQTT:

$ mosquitto_sub -v -t 'mm/#'
mm/_notif my-script in cartoons: Ha, this is _just_  an example using `curl`, :tada:

The way this happens is that Mattermost notifies a Web service I create which obtains the message and disposes of it in any way I want:

#!/usr/bin/env python

from bottle import run, request, post
import json
import paho.mqtt.publish as paho

__author__    = 'Jan-Piet Mens <jp()>'

@post('/notify')                # route path is illustrative
def post1():

    data = json.loads(request.body.read())

    paho.single("mm/_notif", data['message'])

run(host='', port=8864)

If you’re not interested in mobile push, there’s always e-mail: when users are away or offline they can choose to be notified of new content by e-mail.

If I’d known about Mattermost before, I’d have migrated earlier.

View Comments :: Social :: 30 Jan 2018 :: e-mail

Gogs and Gitea are Open Source, single-binary Go implementations of a Github-like git repository hosting system which I can use privately or in teams. Both are light on system requirements, contrary to a pretty large and quite popular Open Source Java implementation of a similar utility.

Gitea is a fork of Gogs, and it’s very difficult to say which is better: Gogs is, at times, a bit more Github-lookalike I’d say, whereas Gitea’s look is a bit fresher and feels “younger”. Gitea brings everything in a single binary whereas Gogs has a number of support files (CSS, JavaScript, templates) it requires. (This is advantageous because I can change or replace templates if desired. Fabian reminds me that it’s possible to customize Gitea as well.) Both projects appear to be alive and kicking in spite of rumors that Gogs had been abandoned.

gogs installer

Installation basically consists of ensuring the appropriate binary is launched. I then launch a Web browser, point it at the port number indicated on the console ( by default), and answer a few questions. The responses are written into an INI-type configuration file (app.ini), which can of course be pre-populated, and looks like this:

[database]
DB_TYPE  = sqlite3
HOST     =
NAME     = gogs
USER     = root
SSL_MODE = disable
PATH     = data/gogs.db

[repository]
ROOT = data/repos

[server]
DOMAIN           = localhost
HTTP_PORT        = 4000
ROOT_URL         = http://localhost:4000/
DISABLE_SSH      = false
SSH_PORT         = 4022
OFFLINE_MODE     = true

A cursory search will also provide plenty of resources for installing either with, say, Ansible, if that’s your preferred method. Alternatively, both programs support installation with docker.

Both programs support different backend databases (SQLite3, PostgreSQL, MySQL) and SMTP, PAM, or LDAP authentication. They offer git over HTTP and have an optional embedded SSH server for git over SSH. (Just like in Github or Gitlab, I upload one or more SSH keys which Gogs/Gitea use for authentication.) While there exists a list of Gitea features and Gogs features these lists are difficult to compare.

Both programs have a CLI albeit with slightly differing commands. The CLI is used for backups, user creation, and other administrative commands.

$ gogs admin create-user --name jane --password sekrit --email
New user 'jane' has been successfully created!

I initially chose to use Gitea but thought I’d ask yesterday in a poll: the response was about 50/50. A friendly user wrote privately and said:

I’ve been running Gogs for ~20 active users on a low-end VPS alongside other services for a year now. The cli is quite simple and works well, especially backups. Manual upgrades via git went well too 6 month ago.

Both utilities have a dump or backup CLI command, respectively, to create a backup of their data in a ZIP file.

I find Gogs documentation more comprehensive (and Gitea’s sometimes links to it). Featurewise, both are more or less on par, at least in terms of visible features. Both are Open Source, and both projects have over 500 open issues in their trackers and several dozen open pull requests.

Gogs and Gitea can import projects, so I used one of my Github-hosted repositories as a source to produce the following screen shots.





A bit of both

Both programs display commits like Github does and have a unified diff and a split (side-by-side) diff view. In Gitea the knobs are located as I know them from Github, but that doesn’t mean Gogs’ knob placement isn’t better. Interestingly, repository settings and other pages in Gogs are styled more similarly to Github than in Gitea. So again, six of one and half a dozen of the other. Both allow import of existing repositories (as I did above), though just the repository is imported: neither the issues nor the pull requests, at least not from Github.

issue tracker gitea

Both have an issue tracker with github-flavored Markdown support, file attachments, etc. There are slight cosmetic differences but nothing drastic that I can see. Both support Git hooks, Webhooks and deployment keys (and I do prefer the page layout that Gogs offers in the “sub pages” such as settings).

I’ve chosen to use Gitea, but as I’ve said: it’s a hard toss.

Can I have an SCM (Source Code Management) update trigger the launch of an AWX job? The answer is yes, and it’s one of the interesting things I can do to remote-control AWX.


What I need is some way to invoke a program as soon as I commit something. Subversion, git, etc. support hooks, but cloud-based repository systems (Github, Bitbucket) also support Webhooks which I’ll use here.

In this example I’m using gitea which calls itself a painless self-hosted Git service; it’s all of that and more – it’s one of those gorgeous single-binary Go programs which I’m liking more and more. (I showed you another recently – CoreDNS.) Gitea is very much like using Github, but you host it yourself, and it’s trivial to get started with it.

Within gitea, I configure an outgoing Webhook:

Gitea with a Webhook

From now on, as soon as this repository gets a commit pushed to it, the specified URL will be invoked.

On the other side, I run a lightweight, configurable utility (again in Go), called adnanh/webhook. This listens for HTTP payloads, extracts JSON from them, and it can invoke a program of my choice to react to the hook. This could be any kind of HTTP endpoint which reacts to a Webhook POST, but I chose this for simplicity.

I configure webhook to run with the following configuration, which will extract the repository’s name and the secret specified in the hook invocation from the incoming payload (here is the full payload sent by gitea).

[
    {
        "id": "awx-atest",
        "execute-command": "/Users/jpm/bin/",
        "command-working-directory": "/tmp/",
        "pass-arguments-to-command": [
            {
                "source": "payload",
                "name": "repository.full_name"
            },
            {
                "source": "payload",
                "name": "secret"
            }
        ]
    }
]
I launch webhook and watch what happens when I commit and push to the repository:

./webhook -hooks hooks.json -verbose
[webhook] 2017/10/23 18:17:07 version 2.6.5 starting
[webhook] 2017/10/23 18:17:07 setting up os signal watcher
[webhook] 2017/10/23 18:17:07 attempting to load hooks from hooks.json
[webhook] 2017/10/23 18:17:07 os signal watcher ready
[webhook] 2017/10/23 18:17:07 found 1 hook(s) in file
[webhook] 2017/10/23 18:17:07 	loaded: awx-atest
[webhook] 2017/10/23 18:17:07 serving hooks on{id}
[webhook] 2017/10/23 18:17:09 incoming HTTP request from [::1]:54005
[webhook] 2017/10/23 18:17:09 awx-atest got matched
[webhook] 2017/10/23 18:17:09 awx-atest hook triggered successfully
[webhook] 2017/10/23 18:17:09 200 | 388.746µs | localhost:9000 | POST /hooks/awx-atest
[webhook] 2017/10/23 18:17:09 executing /Users/jpm/bin/ with arguments ["/Users/jpm/bin/" "jpm/atest" "none-of-your-business"] and environment [] using /tmp/ as cwd
[webhook] 2017/10/23 18:17:10 command output: {"job":331,"ignored_fields":{},...
[webhook] 2017/10/23 18:17:10 finished handling awx-atest

The truncated output in the second to last line is the JSON returned from the AWX job launch which happens in the script:

#!/bin/sh

# arguments passed in by webhook (see hooks.json): repo name and secret
repo="$1"
secret="$2"

if [ "$secret" = "none-of-your-business" ]; then
    curl -qs \
        -d '{"extra_vars":{"newpoem":"hello good world"}}' \
        -H "Content-type: application/json" \
        -u admin:password  \
        "$LAUNCH_URL"    # the AWX job-template launch URL (elided here)
fi
All this is obviously just an example. Refine to your taste (and add lots of error-handling!)
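The same launch can of course be done from Python instead of curl; here’s a sketch using only the standard library, in which the host, template ID, and credentials are placeholders of mine:

```python
import base64
import json
import urllib.request

def launch_request(awx_url, template_id, user, password, extra_vars):
    """Build the POST that launches an AWX job template via API v2."""
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{awx_url}/api/v2/job_templates/{template_id}/launch/",
        data=json.dumps({"extra_vars": extra_vars}).encode(),
        headers={
            "Authorization": "Basic " + creds,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholders throughout; urllib.request.urlopen(req) would launch the job.
req = launch_request("https://awx.example.org", 10, "admin", "password",
                     {"newpoem": "hello good world"})
```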

From AWX

Whilst on the topic of Webhooks: AWX can trigger an arbitrary Webhook as a notification; these are invoked on success or on failure (as desired), and produce a payload which looks like this:

{
  "created_by": "admin",
  "credential": "jp-ed25519",
  "extra_vars": "{}",
  "finished": "2017-10-24T06:05:09.626734+00:00",
  "friendly_name": "Job",
  "hosts": {
    "alice": {
      "changed": 0,
      "dark": 0,
      "failed": false,
      "failures": 0,
      "ok": 2,
      "processed": 1,
      "skipped": 0
    }
  },
  "id": 335,
  "inventory": "j1",
  "limit": "",
  "name": "pi1",
  "playbook": "ping.yml",
  "project": "p1",
  "started": "2017-10-24T06:04:54.127124+00:00",
  "status": "successful",
  "traceback": "",
  "url": "https://towerhost/#/jobs/335"
}
The next step is to take bits of the payload to indicate success or failure on your monitoring blinkenlights.
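A few lines of Python suffice to boil such a payload down to what a monitoring display needs; the summary format here is my own:

```python
def job_summary(payload):
    """Reduce an AWX notification payload to (status, failed_hosts)."""
    failed = [name for name, stats in payload.get("hosts", {}).items()
              if stats.get("failed") or stats.get("dark")]
    return payload["status"], failed

# a trimmed-down version of the payload shown above
payload = {
    "name": "pi1",
    "status": "successful",
    "hosts": {"alice": {"failed": False, "dark": 0, "ok": 2}},
}
status, failed = job_summary(payload)   # feed this to your blinkenlights
```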

View Comments :: Ansible and AWX :: 23 Oct 2017 :: e-mail

I believe there’s a document floating around somewhere in which is written that “JP Mens brought Ansible to Europe in 2012” or something to that effect. Whilst I think that may be a tad exaggerated, it is true that I did a few conferences and talks during which I enthusiastically spoke about the then new kid on the block. I’m recounting this anecdote because something similar may happen with Ansible AWX: I’ll be talking about AWX to anybody who wants to listen.

Ansible AWX is the upstream project which holds the code which at some point in time, and I guess periodically, turns into Ansible Tower. It’s been a long time coming, but Ansible has now open sourced AWX, and I’ll tell you two things:

  1. I wouldn’t want to have to use AWX and forgo the command line (but I know how to overcome the angst)
  2. I know a lot of people have been waiting for this to happen

Forget about my first point: that’s possibly just I, but I do mean it: Ansible without ansible-playbook on the CLI, seeing stdout move past, etc. wouldn’t feel right to me.

I’ve been kicking AWX’ tires quite a bit for several days, and I’ll say one thing: it really is very capable, and I will be recommending organizations take a closer look at it. If you know Tower you know AWX, but there are many who don’t know Tower.

Let me start with a few things I dislike, because it’s quite a short list:

  • documentation is basically what’s available for Ansible Tower, but some bits in that are just not available in AWX, or at least I cannot find them. What we need are docs for things like management, backups, etc., but that’ll hopefully be written in the course of time
  • installation is supported for either docker, OpenShift or Minishift. That’s it. (I had a bit of difficulty wrapping my head around the *shifts, but I got along with the docker install.)
  • the UI needs a huge screen to be usable and occasionally feels sluggish (possibly a delayed reaction due to the background architecture)

Now for the things which I like in AWX:

  • the API, the API, and the API. Honestly, these guys got most of this very right. All we see in the UI is available in the API. tower-cli is also very good
  • the UI which updates via Websockets
  • multiple authentication backends. (I’ve tested TACACS+ and LDAP; both work). Even so, AWX supports local users (yes, which can also be created via the API); there’s also Github, Google, and whatnot
  • some of the terminology is a bit funny, but I quickly got the gist of it, and it makes sense (project, jobs, templates, etc)
  • inventories. Lots of them. Dynamic, static, internal, from SCM.
  • SCM all over. AWX is basically something you can replace and it obtains all it needs from external sources (SCM and PostgreSQL)
  • Role Based Access Control for those who need it. Works pretty well. Give access to a template and user inherits required access to credentials, inventory, etc.
  • Credentials store. Hugely useful.
  • Webhooks (outgoing) as well as API trigger from incoming hooks. That’s how I’d use AWX to avoid having to click in the UI
  • Workflows. Neat. Like a mini CI/CD thing.
  • external logging (ELK, Splunk, etc.) though what I see going out in the logs is meh
  • Notifications galore. Why wasn’t my mqtt notifier implemented? :-)
  • Clustering and High-Availability.

This isn’t an introduction to AWX. It’s more me wanting to whet your appetite. I’ll be speaking about AWX very soon, and I’m already working on an AWX training. At the first talk which I’ll give in the Netherlands, at the NLUUG I’ll be diving into as good an overview as I can give in 45 minutes. With screen-shots & things.

Further reading:

View Comments :: Ansible :: 20 Oct 2017 :: e-mail

The DNSSEC chain of trust starts at the root of the DNS, with a resolver typically trusting said root by the fact that it’s got the root key (or hash thereof, called a Delegation Signer – DS record) built-in or configured into it. From there, a resolver chases delegation signer (DS) records which indicate to it that a child zone is signed. We can compare this to how a resolver chases name server (NS) records to find delegations. The hash of a child zone’s DNSKEY is a DS record which is located in its parent zone and which has therefore been signed by the parent.

chain of trust

In the case of, we know that net is signed, so the root zone contains a DS record for net. If is signed, its parent zone (net) contains a DS record for, and so forth.

Any child zone which is signed must have a hash of its secure entry point as a DS record in its parent zone.

Uploading DS from a child to a parent zone can be an entertaining proposition. Anything from copy/paste into some (often lousy) Web form to sending an email might be available. Unfortunately there’s no real standard to accomplish this as some parent zones want DS records whereas others insist on DNSKEY records (from which they calculate the DS themselves). Be that as it may, what we typically do is to obtain the DS. For utilities provided by BIND or PowerDNS:

$ dnssec-dsfromkey
 IN DS 8419 5 1 2E4D616E70FED736A08D7854BCDD3D269A604FD3
 IN DS 8419 5 2 6682CC1E528930DB7E097101C838F8D3D0DBB8EC5D1E8B50A5425FE57AB058C6

$ dig DNSKEY | dnssec-dsfromkey -f -
 IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
 IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

With a bit of zone name mangling and TTL adding we can use pdnsutil with dnssec-dsfromkey, but pdnsutil has its own subcommand as well:

$ pdnsutil export-zone-dnskey 32 |
     awk 'NR==1 { sub(" ", ". 60 "); print; }' |
     dnssec-dsfromkey -f - -T 120
 120 IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
 120 IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

$ pdnsutil export-zone-ds
... (shown below)

Generally speaking the story stops here, and I’d leave you in charge of getting that DS-set to your parent zone somehow. Digressing only slightly, OpenDNSSEC has for ages, had a DelegationSignerSubmitCommand program in its configuration which can upload DS/DNSKEY to a parent via a program you create; the script you write and configure gets new keys via stdin and you can then automate submission to a parent zone to your heart’s content.

Can I haz automatik?

What we really want is automatic DS submission, whereby the child zone’s DS ends up directly in the parent zone, where it is then signed. Unless the parent and the child zone are both under my administrative charge, that’s easier said than done: it’s unlikely the parent will allow me to do that.

Enter RFC 7344 which allows me to indicate, in my child’s zone, that I have a new DS record for submission. (This also works for DNSKEY records for those parents which prefer DNSKEY.) The fact that the child zone has a new DS for submission is indicated with a CDS record (child DS) and/or CDNSKEY (child DNSKEY) respectively. What will actually happen is that the parent will “consume” CDS/CDNSKEY records instead of the child “pushing” them somewhere. Hereunder I will be using CDS because they’re shorter, but CDNSKEYs work equally well.

As per section 4 of RFC 7344, if a child publishes either CDS or CDNSKEY it should publish both, unless the child knows the parent will use one of a kind only.

Using PowerDNS, I can configure the Authoritative server to automatically publish CDS and/or CDNSKEY records:

$ pdnsutil set-publish-cds zone
$ pdnsutil set-publish-cdskey zone

The process for BIND is a bit more involved. What I do here is to set a timing parameter on a key when I create a new key (or just after having created it).

$ dnssec-settime -P sync +1mi

$ grep Sync
; SyncPublish: 20170921094522 (Thu Sep 21 11:45:22 2017)

When running as an in-line signer, BIND will publish CDS and CDNSKEY records for the particular key until I use dnssec-settime to have it remove such records from the zone. (Note that BIND as smart signer (dnssec-signzone -S) does not add CDS or CDNSKEY records to the signed zone. Why? Good question; IMO an omission.)

So, ideally, what we then need is a mechanism by which a server checks for CDS/CDNSKEY records in a child zone and then updates the corresponding parent zone.


A combination of dig and a new utility will allow me to automate the process.


Tony Finch has written such a beast. It’s called dnssec-cds and it’s currently in a git tree he maintains. What this program does is to change DS records at a delegation point based on CDS or CDNSKEY records published in the child zone. By default CDS records are used if both CDS and CDNSKEY records are present.

What we’ll actually be doing in order to add a new signed child zone is:

  1. Create and sign the zone.
  2. Obtain the DS-set, copy that securely to the parent, and sign the result. We do this step once and we do it securely because this is how we affirm trust between parent and child.
  3. Once in the parent zone, the DS records of the child indicate the child zone’s secure entry point: validation can be chased down into the child zone.
  4. When the child’s KSK rolls, ensure child zone contains CDS/CDNSKEY records.
  5. Parent will periodically query for child’s CDS/CDNSKEY records; if there are none, processing stops.
  6. As soon as CDS/CDNSKEY records are visible in the child, dnssec-cds validates these by affirming, using the original DS-set obtained in step 2, that they’re valid and not being replayed.
  7. A dynamic (or other) update can be triggered on the parent to add the child’s new DS-set.
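Steps 5 through 7 amount to a periodic polling pass on the parent; here is a rough Python sketch of that wiring (the function names are mine, error handling is omitted, and only the command invocations mirror what the utilities actually expect):

```python
import subprocess

def cds_query_args(child_zone):
    """dig arguments to fetch the child's DNSKEY and CDS records with RRSIGs."""
    return ["dig", "+dnssec", "+noall", "+answer",
            child_zone, "DNSKEY", child_zone, "CDS"]

def refresh_child_ds(child_zone, dsset_dir="."):
    """One polling pass: fetch CDS, let dnssec-cds validate them against
    the stored dsset- file, and feed any resulting changes to nsupdate."""
    answers = subprocess.run(cds_query_args(child_zone),
                             capture_output=True, text=True, check=True).stdout
    update = subprocess.run(
        ["dnssec-cds", "-u", "-f", "/dev/stdin", "-d", dsset_dir,
         "-i.orig", child_zone],
        input=answers, capture_output=True, text=True, check=True).stdout
    if update.strip():          # empty output means nothing to change
        subprocess.run(["nsupdate", "-l"], input=update, text=True, check=True)
```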

dnssec-cds protects against replay attacks by requiring that signatures on the child’s CDS are not older than they were on a previous run of the program. (This time is obtained by the modification time of the dsset- file or from the -s option. Note below that I touch the dsset- file to ensure this, just the first time.) Furthermore, dnssec-cds protects against breaking the delegation by ensuring that the DNSKEY RRset can be verified by every key algorithm in the new DS RRset and that the same set of keys is covered by every DS digest type.

dnssec-cds writes replacement DS records (i.e. the new DS-set) to standard output, or to the input file if -i is specified; -u prints commands suitable to be read by a dynamic DNS utility such as nsupdate. The replacement DS records will be the same as the existing records when no change is required. The output can be empty if the CDS / CDNSKEY records specify that the child zone wants to go insecure.

servers in use

The BIND name server in my example hosts the parent zone, and we’ll create a child zone ( on PowerDNS Authoritative (because we can). Which server brand the zone’s hosted on is quite irrelevant other than it must be able to serve CDS/CDNSKEY records in the zone. This is particularly easy to automate with PowerDNS.

First we sign the child zone and export its DS-set:

$ pdnsutil secure-zone
Securing zone with default key size
Adding CSK (257) with algorithm ecdsa256
Zone secured
Adding NSEC ordering information

$ pdnsutil export-zone-ds >
$ cat
 IN DS 32128 13 1 6823d9bb1b03df714dd0eb163e20b341c96d18c0 ; ( SHA1 digest )
 IN DS 32128 13 2 039b660206db76611305288042ee3fa132f3709e229005baf2b24bcdae7bc513 ; ( SHA256 digest )
 IN DS 32128 13 3 753cf5f1c9a73fdaf3e09454a55916e7381bf24ce3c0e077defe1cfd15b6da5f ; ( GOST R 34.11-94 digest )
 IN DS 32128 13 4 e772f48556bf23effe80946a5306e5d00c6138d321f6d0a66a2673d2b2bee4e3fe13340a7bcf921484081046e92ca983 ; ( SHA-384 digest )

Note how the exported dsset- contains one DS for each algorithm supported by my PowerDNS installation. We now copy the dsset- to the parent server, and add its content to the parent zone. The zone is configured with auto-dnssec maintain so BIND will immediately sign anything we add to it.

( echo "ttl 60"
  sed -e "s/^/update add /" -e "s/;.*//"
  echo "send" )  | nsupdate -l

If I now query for the DS records for in the parent zone (recall a DS RRset is in the parent) I obtain an appropriate response:

$ dig +norec @BIND ds
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14192
;; flags: qr aa; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

;; ANSWER SECTION:
        60      IN      DS      32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
        60      IN      DS      32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCD AE7BC513
        60      IN      DS      32128 13 3 753CF5F1C9A73FDAF3E09454A55916E7381BF24CE3C0E077DEFE1CFD 15B6DA5F
        60      IN      DS      32128 13 4 E772F48556BF23EFFE80946A5306E5D00C6138D321F6D0A66A2673D2 B2BEE4E3FE13340A7BCF921484081046E92CA983

Our parent zone is signed, our child zone is signed, our parent has a signed DS record (more than one actually, but that’s fine) for our child zone: the chain of trust is in place. (Note the key tag on the DS: 32128.)

Let it roll!

At some point in time we want to roll the child’s KSK, and I am not going to address timing issues of the roll proper; I’m discussing CDS only.

In order to roll a key, we create a new key in the child zone. Simultaneously we request PowerDNS publish CDS records in the zone for all keys:

$ pdnsutil add-zone-key ksk 256 active ecdsa256
Added a KSK with algorithm = 13, active=1
Requested specific key size of 256 bits

$ pdnsutil set-publish-cds

$ pdnsutil show-zone
This is a Master zone
Last SOA serial number we notified: 0 != 3 (serial in the database)
Metadata items:
        PUBLISH-CDS     1,2
Zone has NSEC semantics
ID = 31 (CSK), flags = 257, tag = 32128, algo = 13, bits = 256    Active ( ECDSAP256SHA256 )
CSK DNSKEY = IN DNSKEY 257 3 13 12lrJwo8w/PbnD8JssSlmuN7adbidwCsCaFn2yiXctj2k9g9dlGw+KTDqRsanj4InPgGcQwllBRGSojfwZVHRQ== ; ( ECDSAP256SHA256 )
DS = IN DS 32128 13 1 6823d9bb1b03df714dd0eb163e20b341c96d18c0 ; ( SHA1 digest )
DS = IN DS 32128 13 2 039b660206db76611305288042ee3fa132f3709e229005baf2b24bcdae7bc513 ; ( SHA256 digest )
DS = IN DS 32128 13 3 753cf5f1c9a73fdaf3e09454a55916e7381bf24ce3c0e077defe1cfd15b6da5f ; ( GOST R 34.11-94 digest )
DS = IN DS 32128 13 4 e772f48556bf23effe80946a5306e5d00c6138d321f6d0a66a2673d2b2bee4e3fe13340a7bcf921484081046e92ca983 ; ( SHA-384 digest )
ID = 32 (CSK), flags = 257, tag = 48629, algo = 13, bits = 256    Active ( ECDSAP256SHA256 )
CSK DNSKEY = IN DNSKEY 257 3 13 EY2fpwiU3dcg22g83gC+9oQ65vJHPELR6sU1MLB8r8F+6egarSIDzjyM5AY2RlbFGgOkjpPMaUonCONPalOQ4A== ; ( ECDSAP256SHA256 )
DS = IN DS 48629 13 1 4e324c9416d0009b4262c39494a1c7989f9c055c ; ( SHA1 digest )
DS = IN DS 48629 13 2 87081d41bbaba1c25d28f48ede7718e96ea8387cae2a286fa5c61e57971b8c66 ; ( SHA256 digest )
DS = IN DS 48629 13 3 99eadcdc47adfe2f68df3e1a4aa775fa409bafbd7815ca1c2643cdf49a0996bf ; ( GOST R 34.11-94 digest )
DS = IN DS 48629 13 4 f961984bc561906cde1987bf89f90654865d4b9500ee7eed8bf4a0245244ac492eeb66776475e7448826f74638ad9e9e ; ( SHA-384 digest )

This output is easy to follow once we notice that the top part has some metadata and then come the keys. Note that pdnsutil is printing a DS record for each of the algorithms PowerDNS supports, hence the verbosity. Let’s pay attention to the key tags: in above list we see our original 32128 tag and the new tag 48629.

The child zone is still signed; there are two keys in the zone, and we’ve requested CDS records be published. Does that work?

$ dig @POWERDNS cds
;; ANSWER SECTION:
        3600    IN      CDS     32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
        3600    IN      CDS     48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
        3600    IN      CDS     32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCD AE7BC513
        3600    IN      CDS     48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57 971B8C66

The CDS records are available with the digest algorithms currently implemented for DS, namely 1 (SHA1) and 2 (SHA256).

.. to the parent

Back on the parent, we prepare to use dnssec-cds for the magic. We already have the dsset- file, and as discussed above I touch its timestamp (or use -s switch):

$ touch -t 201709140000

$ cat

dig @POWERDNS +dnssec +noall +answer $z DNSKEY $z CDNSKEY $z CDS |
    dnssec-cds -u -f /dev/stdin -T 42 -d . -i.orig $z |
    tee /tmp/nsup |
    nsupdate -l

$ ./

dnssec-cds with the -u option creates a script suitable for feeding into nsupdate; for debugging purposes, I tee it into a file to show you here:

$ cat /tmp/nsup
update add 42 IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
update add 42 IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66
update del IN DS 32128 13 3 753CF5F1C9A73FDAF3E09454A55916E7381BF24CE3C0E077DEFE1CFD15B6DA5F
update del IN DS 32128 13 4 E772F48556BF23EFFE80946A5306E5D00C6138D321F6D0A66A2673D2B2BEE4E3FE13340A7BCF921484081046E92CA983

Querying the parent, we see that the DS records with the superfluous algorithms have been deleted and the DS records for the new key have been added. We also see that our dsset- file has been updated accordingly (and I pay attention to the file’s modification time, which has been set to the inception time of the DNSKEY RRSIG of the child zone):

$ dig +norec @BIND ds
;; ANSWER SECTION:
        42      IN      DS      32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
        42      IN      DS      32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCD AE7BC513
        42      IN      DS      48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
        42      IN      DS      48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57 971B8C66

$ cat
 42 IN DS 32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
 42 IN DS 32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCDAE7BC513
 42 IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
 42 IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

Now I delete the “old” key from the child zone using its (in my opinion slightly confusing) ID, which is 31 – compare with the output of pdnsutil show-zone above. (I would have preferred pdnsutil to use key tags to refer to a zone’s keys):

$ pdnsutil remove-zone-key 31

Now comes the drum-roll moment: if we re-run our dnssec-cds script will it blend?

$ ./

$ cat /tmp/nsup
update del IN DS 32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
update del IN DS 32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCDAE7BC513
$ dig +norec @BIND ds
;; ANSWER SECTION:
        42      IN      DS      48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
        42      IN      DS      48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57 971B8C66

A few points to note:

  • when looking at the nsupdate script produced by dnssec-cds, pay attention to add vs. del on the update statements.
  • it’s not necessary to have dnssec-cds maintain the dsset- file on the file system, but it gives me a warm and fuzzy feeling so I think I’d always do that
  • I should also mention that the dnssec-dsfromkey utility is quite versatile; we saw it above, and it’s good to know that the -C option creates CDS records instead of DS records.

Tony’s dnssec-cds together with a wee bit of scripting will basically allow us to add new DS for zones to their parent zones. In the examples above I’ve used nsupdate, but this could equally well be accomplished by other means.

View Comments :: DNS and DNSSEC :: 21 Sep 2017 :: e-mail
