The DNSSEC chain of trust starts at the root of the DNS: a resolver typically trusts the root because it has the root key (or a hash thereof, called a Delegation Signer, or DS, record) built in or configured. From there, the resolver chases DS records, which indicate that a child zone is signed, much as it chases name server (NS) records to find delegations. A DS record is a hash of a child zone’s DNSKEY; it is located in the parent zone and has therefore been signed by the parent.

chain of trust

In the case of, we know that net is signed, so the root zone contains a DS record for net. If is signed, its parent zone (net) contains a DS record for, and so forth.

Any child zone which is signed must have a hash of its secure entry point as a DS record in its parent zone.
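The DS is easy to reproduce conceptually. The Python sketch below follows RFC 4034: the key tag is a checksum over the DNSKEY RDATA, and the digest is a hash over the owner name plus that RDATA. The DNSKEY used here is the one with tag 48629 shown in the pdnsutil output later in this post; the owner name example.net. is a hypothetical stand-in, since the real zone name isn’t shown:

```python
import base64, hashlib, struct

def name_to_wire(name: str) -> bytes:
    """Canonical (lowercased) wire format of a domain name."""
    wire = b""
    for label in name.rstrip(".").lower().split("."):
        wire += bytes([len(label)]) + label.encode()
    return wire + b"\x00"

def dnskey_rdata(flags: int, proto: int, alg: int, key_b64: str) -> bytes:
    """DNSKEY RDATA: flags, protocol, algorithm, then the raw public key."""
    return struct.pack("!HBB", flags, proto, alg) + base64.b64decode(key_b64)

def key_tag(rdata: bytes) -> int:
    """Key tag per RFC 4034 Appendix B."""
    ac = 0
    for i, b in enumerate(rdata):
        ac += b << 8 if i % 2 == 0 else b
    ac += (ac >> 16) & 0xFFFF
    return ac & 0xFFFF

def ds_digest(owner: str, rdata: bytes) -> str:
    """DS digest type 2 (SHA-256) over owner name + DNSKEY RDATA."""
    return hashlib.sha256(name_to_wire(owner) + rdata).hexdigest()

# the DNSKEY with tag 48629 from the pdnsutil show-zone output below
key_b64 = ("EY2fpwiU3dcg22g83gC+9oQ65vJHPELR6sU1MLB8r8F+"
           "6egarSIDzjyM5AY2RlbFGgOkjpPMaUonCONPalOQ4A==")
rdata = dnskey_rdata(257, 3, 13, key_b64)
print(key_tag(rdata))                    # pdnsutil reports tag 48629 for this key
print(ds_digest("example.net.", rdata))  # hypothetical owner name
```

Note that the digest depends on the owner name, which is why a DS computed for one zone name cannot be reused for another.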

Uploading a DS from a child to a parent zone can be an entertaining proposition: anything from copy/paste into some (often lousy) Web form to sending an email might be on offer. Unfortunately there’s no real standard for accomplishing this, as some parent zones want DS records whereas others insist on DNSKEY records (from which they calculate the DS themselves). Be that as it may, the first step is typically to obtain the DS, using utilities provided by BIND or PowerDNS:

$ dnssec-dsfromkey
IN DS 8419 5 1 2E4D616E70FED736A08D7854BCDD3D269A604FD3
IN DS 8419 5 2 6682CC1E528930DB7E097101C838F8D3D0DBB8EC5D1E8B50A5425FE57AB058C6

$ dig DNSKEY | dnssec-dsfromkey -f -
IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

With a bit of zone name mangling and TTL adding we can use pdnsutil with dnssec-dsfromkey, but pdnsutil has its own subcommand as well:

$ pdnsutil export-zone-dnskey 32 |
     awk 'NR==1 { sub(" ", ". 60 "); print; }' |
     dnssec-dsfromkey -f - -T 120
120 IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
120 IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

$ pdnsutil export-zone-ds
... (shown below)

Generally speaking the story stops here, and I’d leave you in charge of getting that DS-set to your parent zone somehow. Digressing only slightly: OpenDNSSEC has for ages had a DelegationSignerSubmitCommand setting in its configuration which can upload DS/DNSKEY records to a parent via a program you create; the script you write and configure gets new keys via stdin, and you can then automate submission to the parent zone to your heart’s content.

Can I haz automatik?

What we really want is automatic DS submission, such that the child zone uploads the DS directly to the parent zone, where it is then signed. Unless the parent and the child zone are both under my administrative charge, that’s easier said than done: it’s unlikely the parent will allow me to do that.

Enter RFC 7344 which allows me to indicate, in my child’s zone, that I have a new DS record for submission. (This also works for DNSKEY records for those parents which prefer DNSKEY.) The fact that the child zone has a new DS for submission is indicated with a CDS record (child DS) and/or CDNSKEY (child DNSKEY) respectively. What will actually happen is that the parent will “consume” CDS/CDNSKEY records instead of the child “pushing” them somewhere. Hereunder I will be using CDS because they’re shorter, but CDNSKEYs work equally well.

As per section 4 of RFC 7344, if a child publishes either CDS or CDNSKEY it should publish both, unless the child knows the parent will use one of a kind only.

Using PowerDNS, I can configure the Authoritative server to automatically publish CDS and/or CDNSKEY records:

$ pdnsutil set-publish-cds zone
$ pdnsutil set-publish-cdnskey zone

The process for BIND is a bit more involved. What I do here is to set a timing parameter on a key when I create a new key (or just after having created it).

$ dnssec-settime -P sync +1mi

$ grep Sync
; SyncPublish: 20170921094522 (Thu Sep 21 11:45:22 2017)

When running as an in-line signer, BIND will publish CDS and CDNSKEY records for the particular key until I use dnssec-settime to have it remove such records from the zone. (Note that BIND as smart signer (dnssec-signzone -S) does not add CDS or CDNSKEY records to the signed zone. Why? Good question; IMO an omission.)

So, ideally, what we then need is a mechanism by which a server checks for CDS/CDNSKEY records in a child zone and then updates the corresponding parent zone.


A combination of dig and a new utility will allow me to automate the process.


Tony Finch has written such a beast. It’s called dnssec-cds and it’s currently in a git tree he maintains. What this program does is to change DS records at a delegation point based on CDS or CDNSKEY records published in the child zone. By default CDS records are used if both CDS and CDNSKEY records are present.

What we’ll actually be doing in order to add a new signed child zone is:

  1. Create and sign the zone.
  2. Obtain the DS-set, copy that securely to the parent, and sign the result. We do this step once and we do it securely because this is how we affirm trust between parent and child.
  3. Once in the parent zone, the DS records of the child indicate the child zone’s secure entry point: validation can be chased down into the child zone.
  4. When the child’s KSK rolls, ensure the child zone contains CDS/CDNSKEY records.
  5. The parent periodically queries for the child’s CDS/CDNSKEY records; if there are none, processing stops.
  6. As soon as CDS/CDNSKEY records are visible in the child, dnssec-cds validates them by affirming, using the original DS-set obtained in step 2, that they’re valid and not being replayed.
  7. A dynamic (or other) update can be triggered on the parent to add the child’s new DS-set.

dnssec-cds protects against replay attacks by requiring that signatures on the child’s CDS are not older than they were on a previous run of the program. (This time is obtained by the modification time of the dsset- file or from the -s option. Note below that I touch the dsset- file to ensure this, just the first time.) Furthermore, dnssec-cds protects against breaking the delegation by ensuring that the DNSKEY RRset can be verified by every key algorithm in the new DS RRset and that the same set of keys is covered by every DS digest type.
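The replay rule can be pictured as a monotonicity check on RRSIG inception times, with the last accepted time remembered in the dsset- file’s modification time. The following is an illustration of the idea, not dnssec-cds’s actual code; the function name and the timestamps are made up:

```python
import os, tempfile, time

def replay_check(rrsig_inception: float, dsset_path: str) -> bool:
    """Accept a CDS set only if its RRSIG inception is not older than the
    last accepted one, which we remember in the dsset- file's mtime."""
    if rrsig_inception < os.path.getmtime(dsset_path):
        return False                      # older signature: possible replay
    os.utime(dsset_path, (time.time(), rrsig_inception))  # remember for next run
    return True

# demo with a throwaway "dsset-" file touched to an old date, as in the article
with tempfile.NamedTemporaryFile(delete=False) as f:
    dsset = f.name
os.utime(dsset, (0, 1505340000))          # like: touch -t 201709140000
print(replay_check(1506000000, dsset))    # newer RRSIG -> True, accepted
print(replay_check(1505000000, dsset))    # replayed older RRSIG -> False
```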

dnssec-cds writes replacement DS records (i.e. the new DS-set) to standard output, or to the input file if -i is specified; -u prints commands suitable for a dynamic DNS utility such as nsupdate. The replacement DS records will be the same as the existing records when no change is required, and the output can be empty if the CDS / CDNSKEY records specify that the child zone wants to go insecure.

servers in use

The BIND name server in my example hosts the parent zone, and we’ll create a child zone on PowerDNS Authoritative (because we can). Which server brand a zone is hosted on is quite irrelevant, other than that it must be able to serve CDS/CDNSKEY records in the zone; this is particularly easy to automate with PowerDNS.

First we sign the child zone and export its DS-set:

$ pdnsutil secure-zone
Securing zone with default key size
Adding CSK (257) with algorithm ecdsa256
Zone secured
Adding NSEC ordering information

$ pdnsutil export-zone-ds >
$ cat
IN DS 32128 13 1 6823d9bb1b03df714dd0eb163e20b341c96d18c0 ; ( SHA1 digest )
IN DS 32128 13 2 039b660206db76611305288042ee3fa132f3709e229005baf2b24bcdae7bc513 ; ( SHA256 digest )
IN DS 32128 13 3 753cf5f1c9a73fdaf3e09454a55916e7381bf24ce3c0e077defe1cfd15b6da5f ; ( GOST R 34.11-94 digest )
IN DS 32128 13 4 e772f48556bf23effe80946a5306e5d00c6138d321f6d0a66a2673d2b2bee4e3fe13340a7bcf921484081046e92ca983 ; ( SHA-384 digest )

Note how the exported dsset- contains one DS for each algorithm supported by my PowerDNS installation. We now copy the dsset- to the parent server, and add its content to the parent zone. The zone is configured with auto-dnssec maintain so BIND will immediately sign anything we add to it.

( echo "ttl 60"
  sed -e "s/^/update add /" -e "s/;.*//"
  echo "send" )  | nsupdate -l
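If shell quoting makes you nervous, the little pipeline above is easy to mirror in Python. A sketch, in which the zone name example.net. and the TTL of 60 are hypothetical stand-ins (the real dsset- lines carry the zone name themselves):

```python
def dsset_to_nsupdate(ds_lines, zone, ttl=60):
    """Build an nsupdate batch from dsset- file lines, like the sed pipeline."""
    batch = ["ttl %d" % ttl]
    for line in ds_lines:
        rr = line.split(";")[0].strip()   # drop the "; ( ... digest )" comment
        if rr:
            batch.append("update add %s %s" % (zone, rr))
    batch.append("send")
    return "\n".join(batch)

# two of the DS records exported earlier; example.net. is a hypothetical zone
dsset = [
    "IN DS 32128 13 1 6823d9bb1b03df714dd0eb163e20b341c96d18c0 ; ( SHA1 digest )",
    "IN DS 32128 13 2 039b660206db76611305288042ee3fa132f3709e229005baf2b24bcdae7bc513 ; ( SHA256 digest )",
]
print(dsset_to_nsupdate(dsset, "example.net."))
```

The output is exactly what nsupdate -l expects on stdin.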

If I now query for the DS records for in the parent zone (recall a DS RRset is in the parent) I obtain an appropriate response:

$ dig +norec @BIND ds
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14192
;; flags: qr aa; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

;; ANSWER SECTION:
60      IN      DS      32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
60      IN      DS      32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCDAE7BC513
60      IN      DS      32128 13 3 753CF5F1C9A73FDAF3E09454A55916E7381BF24CE3C0E077DEFE1CFD15B6DA5F
60      IN      DS      32128 13 4 E772F48556BF23EFFE80946A5306E5D00C6138D321F6D0A66A2673D2B2BEE4E3FE13340A7BCF921484081046E92CA983

Our parent zone is signed, our child zone is signed, our parent has a signed DS record (more than one actually, but that’s fine) for our child zone: the chain of trust is in place. (Note the key tag on the DS: 32128.)

Let it roll!

At some point in time we want to roll the child’s KSK, and I am not going to address timing issues of the roll proper; I’m discussing CDS only.

In order to roll a key, we create a new key in the child zone. Simultaneously we request PowerDNS publish CDS records in the zone for all keys:

$ pdnsutil add-zone-key ksk 256 active ecdsa256
Added a KSK with algorithm = 13, active=1
Requested specific key size of 256 bits

$ pdnsutil set-publish-cds

$ pdnsutil show-zone
This is a Master zone
Last SOA serial number we notified: 0 != 3 (serial in the database)
Metadata items:
        PUBLISH-CDS     1,2
Zone has NSEC semantics
ID = 31 (CSK), flags = 257, tag = 32128, algo = 13, bits = 256    Active ( ECDSAP256SHA256 )
CSK DNSKEY = IN DNSKEY 257 3 13 12lrJwo8w/PbnD8JssSlmuN7adbidwCsCaFn2yiXctj2k9g9dlGw+KTDqRsanj4InPgGcQwllBRGSojfwZVHRQ== ; ( ECDSAP256SHA256 )
DS = IN DS 32128 13 1 6823d9bb1b03df714dd0eb163e20b341c96d18c0 ; ( SHA1 digest )
DS = IN DS 32128 13 2 039b660206db76611305288042ee3fa132f3709e229005baf2b24bcdae7bc513 ; ( SHA256 digest )
DS = IN DS 32128 13 3 753cf5f1c9a73fdaf3e09454a55916e7381bf24ce3c0e077defe1cfd15b6da5f ; ( GOST R 34.11-94 digest )
DS = IN DS 32128 13 4 e772f48556bf23effe80946a5306e5d00c6138d321f6d0a66a2673d2b2bee4e3fe13340a7bcf921484081046e92ca983 ; ( SHA-384 digest )
ID = 32 (CSK), flags = 257, tag = 48629, algo = 13, bits = 256    Active ( ECDSAP256SHA256 )
CSK DNSKEY = IN DNSKEY 257 3 13 EY2fpwiU3dcg22g83gC+9oQ65vJHPELR6sU1MLB8r8F+6egarSIDzjyM5AY2RlbFGgOkjpPMaUonCONPalOQ4A== ; ( ECDSAP256SHA256 )
DS = IN DS 48629 13 1 4e324c9416d0009b4262c39494a1c7989f9c055c ; ( SHA1 digest )
DS = IN DS 48629 13 2 87081d41bbaba1c25d28f48ede7718e96ea8387cae2a286fa5c61e57971b8c66 ; ( SHA256 digest )
DS = IN DS 48629 13 3 99eadcdc47adfe2f68df3e1a4aa775fa409bafbd7815ca1c2643cdf49a0996bf ; ( GOST R 34.11-94 digest )
DS = IN DS 48629 13 4 f961984bc561906cde1987bf89f90654865d4b9500ee7eed8bf4a0245244ac492eeb66776475e7448826f74638ad9e9e ; ( SHA-384 digest )

This output is easy to follow once we notice that the top part holds some metadata, followed by the keys. Note that pdnsutil prints a DS record for each of the digest algorithms PowerDNS supports, hence the verbosity. Let’s pay attention to the key tags: in the list above we see our original tag 32128 and the new tag 48629.

The child zone is still signed; there are two keys in the zone, and we’ve requested CDS records be published. Does that work?

$ dig @POWERDNS cds
;; ANSWER SECTION:
3600    IN      CDS     32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
3600    IN      CDS     48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
3600    IN      CDS     32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCDAE7BC513
3600    IN      CDS     48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

The CDS records are available with the digest algorithms currently implemented for DS, namely 1 (SHA1) and 2 (SHA256).

.. to the parent

Back on the parent, we prepare to use dnssec-cds for the magic. We already have the dsset- file and, as discussed above, I touch its timestamp (or could use the -s switch):

$ touch -t 201709140000

$ cat

dig @POWERDNS +dnssec +noall +answer $z DNSKEY $z CDNSKEY $z CDS |
    dnssec-cds -u -f /dev/stdin -T 42 -d . -i.orig $z |
    tee /tmp/nsup |
    nsupdate -l

$ ./

dnssec-cds with the -u option creates a script suitable for feeding into nsupdate; for debugging purposes, I tee it into a file to show it here:

$ cat /tmp/nsup
update add 42 IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
update add 42 IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66
update del IN DS 32128 13 3 753CF5F1C9A73FDAF3E09454A55916E7381BF24CE3C0E077DEFE1CFD15B6DA5F
update del IN DS 32128 13 4 E772F48556BF23EFFE80946A5306E5D00C6138D321F6D0A66A2673D2B2BEE4E3FE13340A7BCF921484081046E92CA983

Querying the parent, we see that the DS records with the superfluous algorithms have been deleted and the DS records for the new key have been added. We also see that our dsset- file has been updated accordingly (and note the file’s modification time, which has been set to the inception time of the DNSKEY RRSIG of the child zone):

$ dig +norec @BIND ds
;; ANSWER SECTION:
42      IN      DS      32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
42      IN      DS      32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCDAE7BC513
42      IN      DS      48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
42      IN      DS      48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

$ cat
42 IN DS 32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
42 IN DS 32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCDAE7BC513
42 IN DS 48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
42 IN DS 48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

Now I delete the “old” key from the child zone using its (in my opinion slightly confusing) ID, which is 31 – compare with the output of pdnsutil show-zone above. (I would have preferred pdnsutil to use key tags to refer to a zone’s keys.)

$ pdnsutil remove-zone-key 31

Now comes the drum-roll moment: if we re-run our dnssec-cds script will it blend?

$ ./

$ cat /tmp/nsup
update del IN DS 32128 13 1 6823D9BB1B03DF714DD0EB163E20B341C96D18C0
update del IN DS 32128 13 2 039B660206DB76611305288042EE3FA132F3709E229005BAF2B24BCDAE7BC513
$ dig +norec @BIND ds
;; ANSWER SECTION:
42      IN      DS      48629 13 1 4E324C9416D0009B4262C39494A1C7989F9C055C
42      IN      DS      48629 13 2 87081D41BBABA1C25D28F48EDE7718E96EA8387CAE2A286FA5C61E57971B8C66

A few points to note:

  • when looking at the nsupdate script produced by dnssec-cds, pay attention to add vs. del on the update statements.
  • it’s not necessary to have dnssec-cds maintain the dsset- file on the file system, but it gives me a warm and fuzzy feeling so I think I’d always do that
  • I should also mention that the dnssec-dsfromkey utility is quite versatile; we saw it above, and it’s good to know that the -C option creates CDS records instead of DS records.

Tony’s dnssec-cds, together with a wee bit of scripting, basically allows us to add new DS records for zones to their parent zones. In the examples above I’ve used nsupdate, but this could equally well be accomplished by other means.

View Comments :: DNS and DNSSEC :: 21 Sep 2017 :: e-mail

DNS servers optionally log queries on demand by formatting a message and storing that in a file, sending it through syslog, etc. This is an I/O-intensive operation which can dramatically slow down busy servers, and the biggest issue is that we get the query but not the associated response.

[1505125481] unbound[89142:0] info: A IN

10-Sep-2017 08:31:03.644 client @0x7f9b12dd5c00 ( view internal: query: IN A +E(0)K (

In addition to having to format the data into a human-readable form and write the resulting string to a file, DNS server authors haven’t standardized on query logging formats. As the two examples above show (first Unbound, then BIND), the strings differ dramatically, which also means that further parsing/processing of these logs has to differ as well. (Have fun building regular expressions for both and having more than two problems.)

One method to overcome this is to capture packets externally, such as how DSC does it, but doing it in this fashion means the software must deal with several things the name server has already dealt with: fragments, TCP stream reassembly, spoofed packets, etc. (Here’s a bit of a “versus” thread.)

An issue with both these methods is that the query a name server received and the response it returned aren’t bundled together. Only the name server software itself knows, really, what belongs together at the time the query occurred and the response was returned. Can you imagine a DNS log so complete that you could see what query a client issued and which response it got?

dnstap is a solution which introduces a flexible, binary log-format for DNS servers together with Protocol Buffers, a mechanism for serializing structured data. Robert Edmonds had the idea for dnstap and created the first implementation with two specific use cases in mind:

  • make query-logging faster by eliminating synchronous I/O bottlenecks and message formatting
  • avoid complicated state reconstruction by capturing full messages instead of packets for passive DNS

What dnstap basically does is to add a lightweight message-copy routine into a DNS server. This routine duplicates a DNS message with context and moves it out of the DNS server to a listening process, typically via a Unix domain socket. In case of extreme load, a DNS server can simply start dropping log payloads instead of degrading server performance.
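The “drop instead of degrade” idea can be pictured as a bounded, non-blocking queue between the server’s worker threads and the logging sink. A toy sketch of that behaviour (not actual dnstap code; the names and the tiny buffer size are made up for the demo):

```python
import queue

tap_q = queue.Queue(maxsize=4)   # deliberately tiny buffer to force drops
dropped = 0

def log_message(payload: bytes):
    """Hand a copy of a DNS message to the logging sink; never block the server."""
    global dropped
    try:
        tap_q.put_nowait(payload)
    except queue.Full:
        dropped += 1             # under load: drop the log record, keep serving

for i in range(10):
    log_message(b"dns-message-%d" % i)
print(tap_q.qsize(), dropped)    # 4 buffered, 6 dropped
```

The essential property is that log_message never waits: the hot path costs a queue insertion at most, and a slow consumer costs log records rather than query latency.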

dnstap-enabled DNS server (diagram from the dnstap project)

The dnstap protocol buffer content is defined in this schema, and includes a type of message (see below), the socket family queries/responses were transported on, socket protocols, query and responder addresses, the initiator’s DNS query message in wire format, timestamps, and the original wire-format DNS response message, verbatim.

dnstap is currently implemented in a few utilities as well as in these DNS servers:

  • BIND
  • CoreDNS
  • Knot 2.x
  • Knot Resolver (> 1.2.5)
  • Unbound

For my experiments, I’ll be using BIND 9.11.2, CoreDNS-011, Knot 2.5.4, and Unbound 1.6.5.

Before launching a dnstap-enabled (and configured) DNS server, we have to ensure a listener has created the Unix domain socket. The dnstap code in BIND, Unbound, etc. acts as a client rather than a server, so it requires a server which will accept connections. Robert Edmonds, dnstap’s inventor, did it this way so that a single socket can be used by different dnstap senders (much as a system daemon listens for messages from multiple clients). If the Unix socket isn’t present or nothing’s listening on it, the client code in the DNS server will periodically attempt to reconnect.

We’ll be looking at two programs which provide this functionality.


We’ll likely have to build our DNS server installations ourselves, as official packages are typically not built with dnstap support. The requirements for all of the below (except CoreDNS, which provides everything in its single statically linked binary) are fstrm, protobuf, and protobuf-c:

  • fstrm is a Frame Streams implementation in C. It implements a lightweight protocol with which any serialized data format that produces byte sequences can be transported, and it provides a Unix domain socket listener (fstrm_capture) for dnstap records written by the DNS servers.
  • protobuf is the implementation of Google’s Protocol Buffers format. We install it in order to build and use some of the utilities, namely the protobuf compiler.
  • protobuf-c is a C implementation of the latter; this includes a library (libprotobuf-c) which some of the utilities require.

Other than these requirements, a number of the DNS server implementations I mention have their own additional requirements, which I will not list here – the projects’ documentation will tell you more.
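Frame Streams framing itself is tiny: each data frame is a 32-bit big-endian length followed by the payload, and a length of zero escapes a control frame. Here is a sketch of just the data-frame part (the control-frame handshake, which negotiates the content type such as protobuf:dnstap.Dnstap, is deliberately omitted):

```python
import struct
from io import BytesIO

def write_frame(w, payload: bytes):
    """One data frame: 4-byte big-endian length, then the payload."""
    w.write(struct.pack("!I", len(payload)) + payload)

def read_frames(r):
    """Yield data-frame payloads; a zero length would introduce a control frame."""
    while True:
        hdr = r.read(4)
        if len(hdr) < 4:
            return
        n = struct.unpack("!I", hdr)[0]
        if n == 0:
            raise NotImplementedError("control frames (handshake) not sketched")
        yield r.read(n)

# round-trip two pretend dnstap payloads through an in-memory stream
buf = BytesIO()
for msg in (b"dnstap-payload-1", b"dnstap-payload-2"):
    write_frame(buf, msg)
buf.seek(0)
frames = list(read_frames(buf))
print(frames)
```

In real use the payload of each data frame is a serialized dnstap protobuf message, and the stream flows over the Unix domain socket to fstrm_capture or the dnstap utility.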


What follows are some utilities we’ll be using for working with and/or decoding (i.e. printing) dnstap records.



I can use fstrm_capture to create the required Unix domain socket which dnstap clients can write to. The program needs a Frame Streams content type specified, as well as the path to the Unix socket and the name of the file it should write protocol buffer frames to:

$ fstrm_capture -t protobuf:dnstap.Dnstap -u /var/run/dnstap.sock -w fstrm.tap

While there is provision in the code to handle SIGHUP (fstrm_capture then flushes the output file), there is no provision for file rotation.

An alternative method for doing similarly is to use the dnstap utility from the dnstap package.


The dnstap project maintains a dnstap utility written in Go. There are, unfortunately, no prebuilt binaries on the releases page, but building the program is easy (after you go through the hassle of installing Go).

I launch the dnstap utility (instead of launching fstrm_capture) like this:

dnstap -u /var/run/dnstap.sock -w file.tap

I can also use dnstap to read a tap file from the file system and print it in various formats, which I’ll show below when we look at some examples. dnstap can also create a TCP endpoint (e.g. for CoreDNS) with dnstap -l <address:port>.


For the actual decoding of dnstap files (i.e. printing them out), we can use dnstap as just discussed, or the reference utility dnstap-ldns, which has thankfully kept the option letters used by dnstap. As its name implies, however, this utility brings an additional dependency, namely ldns. (But you have that already for its utility programs, don’t you?)


Whilst on the subject of decoding dnstap files, dnstap-read, from the BIND distribution, can also do that nicely. By default it prints the short version, but with -y it’ll also do the long YAML format.

$ dnstap-read file.tap
11-Sep-2017 10:59:00.652 CR <- UDP 107b www.test.aa/IN/A
11-Sep-2017 10:59:00.954 CR <- UDP 107b www.test.aa/IN/A
$ dnstap-read -y file.tap
identity: tiggr
version: bind-9.11.2
  response_time: !!timestamp 2017-09-11T08:59:00Z
  message_size: 107b
  socket_family: INET
  socket_protocol: UDP
  query_port: 61308
  response_port: 53
    opcode: QUERY
    status: NOERROR
    id:  24094
    flags: qr aa rd ra
    ANSWER: 1
        version: 0
        udp: 4096
        COOKIE: 67816834b9432822c5a508fd59b65054fb5bbab0c5fe14f8
      - www.test.aa. IN A
      - www.test.aa. 60 IN A
      - test.aa. 60 IN NS localhost.
  response_message: |
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id:  24094
    ;; flags: qr aa rd ra    ; QUESTION: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
    ; EDNS: version: 0, flags:; udp: 4096
    ; COOKIE: 67816834b9432822c5a508fd59b65054fb5bbab0c5fe14f8
    ;www.test.aa.			IN	A

    www.test.aa.		60	IN	A

    test.aa.		60	IN	NS	localhost.


kdig is the dig-like utility shipped with Knot, and it too can read dnstap files and present them in dig-like (I should probably say kdig-like) manner. I note that kdig doesn’t show version information recorded in the tap file.

$ kdig -G file.tap +multiline
;; Received 759 B
;; Time 2017-09-10 06:28:20 UTC
;; From in 32.1 ms
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 38621
;; Flags: qr aa; QUERY: 1; ANSWER: 3; AUTHORITY: 0; ADDITIONAL: 1

;; Version: 0; flags: do; UDP size: 1280 B; ext-rcode: NOERROR

;;          IN A

;; ANSWER SECTION:
3600 IN A
3600 IN A
3600 IN RRSIG A 5 2 3600 20171006034256 (
                                20170906032611 36186

What’s quite practical is that kdig can record a live query / response (i.e. something you’d do right now) into a tap file. So, in the following example, I use kdig to perform a query and the program writes in dnstap format to the specified file what we get on stdout:

$ kdig -E iis-a.tap AAAA
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 53652
;; Flags: qr rd ra; QUERY: 1; ANSWER: 1; AUTHORITY: 3; ADDITIONAL: 0

;;             		IN	AAAA

;; ANSWER SECTION:             	60	IN	AAAA	2001:67c:124c:4006::214

;; AUTHORITY SECTION:
3600	IN	NS
3600	IN	NS
3600	IN	NS

$ ls -l *.tap
-rw-r--r-- 1 jpm users    305 Sep 10 14:55 iis-a.tap

# (I have reported the epoch of 1970-01-01 as a bug to the knot-dns project)

$ dnstap-ldns -r iis-a.tap
1970-01-01 04:22:23.659631 TQ UDP 24b "" IN AAAA
1970-01-01 04:22:23.725209 TR UDP 110b "" IN AAAA

$ dnstap-ldns -r iis-a.tap -y
version: "kdig 2.5.4"
  query_time: !!timestamp 1970-01-01 04:22:23.659631
  response_time: !!timestamp 1970-01-01 04:22:23.725209
  socket_family: INET
  socket_protocol: UDP
  query_port: 54370
  response_port: 53
  response_message: |
    ;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 53652
    ;; flags: qr rd ra ; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 0

    ;	IN	AAAA

    ;; ANSWER SECTION:	60	IN	AAAA	2001:67c:124c:4006::214

(Note how kdig uses the TOOL_* subtypes in the dnstap records.)

After discussing some of the tools for working with (in particular for decoding) dnstap, I now turn to the DNS servers proper.

DNS servers

Before we look at the individual DNS servers and how to configure them for dnstap support, it’s interesting to know that dnstap currently defines 12 subtypes of the dnstap “Message” type. dnstap tags a log record with the subtype corresponding to the location at which the record was captured, so we can at any time see where a record was collected.

dnstap flow


These subtypes ought to be pretty self-explanatory, but their full description is in the dnstap protocol schema. The diagram above illustrates at which point they are obtained; the mnemonics in parentheses are those output by the utilities in “quiet” mode.
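For reference, here are the 12 Message types from the dnstap schema alongside the two-letter quiet-mode mnemonics the decoders print (CR, TQ and TR all appear in the dnstap-read and dnstap-ldns listings in this post):

```python
# dnstap Message.Type values and the quiet-mode mnemonics printed by the decoders
SUBTYPES = {
    "AUTH_QUERY": "AQ",      "AUTH_RESPONSE": "AR",
    "RESOLVER_QUERY": "RQ",  "RESOLVER_RESPONSE": "RR",
    "CLIENT_QUERY": "CQ",    "CLIENT_RESPONSE": "CR",
    "FORWARDER_QUERY": "FQ", "FORWARDER_RESPONSE": "FR",
    "STUB_QUERY": "SQ",      "STUB_RESPONSE": "SR",
    "TOOL_QUERY": "TQ",      "TOOL_RESPONSE": "TR",
}
print(len(SUBTYPES))   # 12 subtypes
```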


BIND’s configuration is as flexible as Unbound’s in terms of dnstap logging. I build named by adding --enable-dnstap to ./configure and then modify named.conf.

I can set different types to be logged for each view (but I dislike views, so I won’t do that). Supported types are client, auth, resolver, and forwarder, as well as all, which causes all dnstap messages to be logged regardless of their type. Each type may take an additional argument to indicate whether to log query or response messages; if neither is specified, BIND should log both, but this didn’t work for me.

options {
    dnstap { all; };
    // dnstap { auth; resolver query; resolver response; };

    /* where to capture to: file or unix (socket) */
    // dnstap-output file "/tmp/named.tap";
    dnstap-output unix "/var/run/dnstap.sock";

    dnstap-identity "tiggr";
    dnstap-version "bind-9.11.2";
};

As soon as named starts, it begins producing dnstap log data. When writing to a file, we can instruct named to truncate and re-open the file, or to roll its dnstap output file, using rndc:

$ rndc dnstap -reopen        # Close, truncate and re-open the DNSTAP output file.
$ rndc dnstap -roll <count>  # Close, rename and re-open the DNSTAP output file(s).

If you’re interested in the nitty-gritty of dnstap on servers which are both authoritative and recursive, here’s a thread Evan Hunt started. (But in my opinion you shouldn’t be interested in servers which are simultaneously authoritative and recursive …)

Other than that, there’s a good single-page document on using dnstap with BIND.

In BIND 9.12.x, dnstap logfiles can be configured to automatically roll when they reach a specified size, for example:

dnstap-output file "/taps/prod.tap" size 15M versions 100 suffix increment;


I spoke earlier of CoreDNS, and one of the really great things about this single-binary program is that it bundles everything I need to produce dnstap frames.

The following configuration suffices for CoreDNS to provide a forwarder which logs all requests to the specified Unix domain socket:

.:53 {
    dnstap /var/run/dnstap.sock full
    proxy .
}

If I then look at a query I see what type of DNS server produced this query, namely a forwarder.

$ dnstap -r coredns.tap -y
  socket_family: INET
  socket_protocol: UDP
  response_port: 53
  query_message: |
    ;; opcode: QUERY, status: NOERROR, id: 60806
    ;; flags: rd ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

    ;       IN       A


    ; EDNS: version 0; flags: ; udp: 4096
    ; COOKIE: 7f8f3ebbf66ffc95


I note that CoreDNS records neither identity nor version in the tap file. CoreDNS can log to a remote endpoint by specifying tcp://address:port as sink.


Unbound has had dnstap support for a few versions, since Robert Edmonds did the first prototype. I build dnstap support into Unbound with --enable-dnstap.

    dnstap-enable: yes
    dnstap-socket-path: "/var/run/dnstap.sock"
    dnstap-send-identity: yes
    dnstap-send-version: yes
    dnstap-log-client-query-messages: yes
    dnstap-log-client-response-messages: yes
    dnstap-log-forwarder-query-messages: yes
    dnstap-log-forwarder-response-messages: yes
    dnstap-log-resolver-query-messages: yes
    dnstap-log-resolver-response-messages: yes

A local-zone is answered directly by Unbound without performing recursion, so you’ll only see response messages for those domains if you set “dnstap-log-client-response-messages: yes”.

The documentation of dnstap in unbound.conf is, well, no it’s not, it’s simply not there. Actually there is no documentation at all for dnstap in unbound-1.6.5/doc/ which is quite atypical: Unbound’s usually very good about that…


I tested knot-2.5.4 (authoritative) and built it with

./configure --with-module-dnstap=yes --enable-dnstap

I configure dnstap in knot.conf by specifying the module to load (mod-dnstap) and its parameters, most of which are self-explanatory and have sensible defaults. The sink directive specifies either a file on the file system (which is opened for truncation) or, if prefixed with the string "unix:", a Unix domain socket, e.g. as created by fstrm_capture.

mod-dnstap:
  - id: tap
    sink: /root/taps/knot-auth.tap
    # sink: unix:/var/run/dnstap.sock
    log-queries: false
    log-responses: true

template:
  - id: default
    global-module: mod-dnstap/tap

$ dnstap-ldns -r knot-auth.tap -y
identity: ""
version: "Knot DNS 2.5.4"
  query_time: !!timestamp 1970-01-01 21:10:59.484261
  response_time: !!timestamp 1970-01-01 21:10:59.484261
  socket_family: INET
  socket_protocol: UDP
  query_port: 53394
  response_message: |
    ;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 35495
    ;; flags: qr aa rd ; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

    ;www.k20.aa.	IN	A

    www.k20.aa.	3600	IN	A



    ;; EDNS: version 0; flags: ; udp: 4096

Daniel writes that a knotc reload will rotate the dnstap output file as will a SIGHUP.

Wrapping up

There’s a lot of good in dnstap, and it’s a huge improvement over what existed beforehand. There are however a few things to take note of:

  • while debugging, it can take a while until queries start showing up in the dnstap file, due to the buffering the DNS servers do
  • I’m convinced (but haven’t taken the time to prove) that some servers drop logs even when they’re completely idle. This might be due to real drops or to existing records not being flushed when a server is stopped; for example, after shutting down named (rndc stop) and killing dnstap, I find the last query missing. People with heavy-traffic servers won’t notice this, of course.
  • the existing toolset is a bit sloppy at times: for example, fstrm_capture and dnstap -u cannot rotate output files (though the former can rotate every N seconds). This is easy to fix, and it needs doing.
  • there’s no network transport for dnstap other than CoreDNS, which can send dnstap to a tcp:// target. Somebody started some work, but I haven’t seen it.

dnstap is a relatively young, open standard for DNS query logging. It was designed for large, busy DNS servers and incurs minimal performance loss. It already has wide adoption amongst open source DNS server implementations, even if some are still missing: NSD, PowerDNS Authoritative, and the PowerDNS Recursor come to mind, and I hope they’ll join the party very soon. (There’s already a pull request to add dnstap support to dnsdist which, according to the project, will greatly simplify the work for the PowerDNS products.)

Further reading:

View Comments :: DNS, logging, and monitoring :: 11 Sep 2017 :: e-mail

If CoreDNS had existed when I wrote Alternative DNS Servers I’d have included it; it’s quite a versatile beast.

CoreDNS was created by Miek Gieben, and he tells me there was a time during which CoreDNS was actually a forked Web server doing DNS, but that changed a bit. Whilst CoreDNS has its roots in and resembles Caddy, it’s a different beast. It’s not difficult to get to know, but some of the terminology CoreDNS uses confused me: for example the term middleware: I see that as a plugin, all the more so because this program’s option to list said middleware is called … drum roll … -plugins. Another thing I needed assistance for was some of the syntax, or rather the semantics, within the configuration file.

CoreDNS is another one of those marvelous single-binary, no-dependencies Go programs which I download and run. All that’s missing is a configuration file called a Corefile. (I associate a core file with the word Corefile … #justkidding ;-)

Launching coredns as root (so that the process may bind to port 53 – use -dns.port 5353 on the command line to specify an alternative, or cap_net_bind_service with systemd) will bring up a DNS server which uses the whoami middleware: it answers queries for any domain with an empty answer section but a populated additional section:

$ dig @

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8021
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 3

;		IN	A

;; ADDITIONAL SECTION:
	0	IN	A
	0	IN	SRV	0 0 60934 .

Quite useless, if you ask me, but at least I know the server’s running and it’s doing something.

The hosts middleware serves a zone from an /etc/hosts-type file, checking the file for changes and reloading the zone accordingly. A, AAAA, and PTR records are supported.

$ cat /etc/hosts
         laptop.example.hosts
         bigbox

$ cat Corefile
example.hosts {
        hosts /etc/hosts
}

With this configuration CoreDNS will respond authoritatively to a query for laptop.example.hosts only; the entry for bigbox is not found.
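That behaviour is easy to model: only names at or below the configured origin are answered. A small Python sketch, not CoreDNS's code — the addresses in the test data are invented:

```python
def hosts_records(hosts_text: str, origin: str) -> dict:
    """Collect name -> address entries from hosts-format text, keeping
    only names at or below `origin` -- roughly the set of names the
    hosts middleware would answer for."""
    records = {}
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        addr, *names = line.split()
        for name in names:
            if name == origin or name.endswith("." + origin):
                records[name] = addr
    return records
```

A bare name like bigbox never matches the origin, which is why CoreDNS doesn't serve it.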

Let’s do something authoritative, and create a zone master file (in zone.db) and a Corefile:

$ cat Corefile
example.aa {
    file zone.db {
        transfer to *
        transfer to
    }
}

The file middleware loads the specified master zone file and serves it. That’s it. Simple. Not only that, but it also periodically checks whether the file has changed and actually reloads the zone when the SOA serial number changes. In the transfer stanza I specify that any client (*) may transfer the zone and that the host gets a DNS NOTIFY when the zone is reloaded. (The port number on the address defaults to 53, I just show that it can be specified.) I tested NOTIFY with nsnotifyd and it works reliably.
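Deciding whether the zone has "changed" comes down to comparing SOA serials, and serials are compared with the 32-bit serial-number arithmetic of RFC 1982 so that wraparound works. A sketch:

```python
def serial_gt(s1: int, s2: int) -> bool:
    """RFC 1982 serial-number arithmetic for 32-bit SOA serials:
    s1 is 'after' s2 if it lies ahead by less than half the space,
    so comparisons survive wraparound at 2^32."""
    return s1 != s2 and (s1 - s2) % 2**32 < 2**31
```

So serial 0 counts as "later" than 4294967295, which is what allows SOA serials to wrap without breaking zone reloads and transfers.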

Similar to file, the auto middleware can serve a whole directory of zone files, determining their origins using a regular expression.

The following Corefile uses the slave^H secondary middleware to create a slave zone which is transferred into RAM from the specified address. (Adding appropriate transfer to stanzas would make this secondary zone transferable by other secondaries.)

$ cat Corefile
 {
    secondary {
        transfer from
    }
    errors stdout
    log stdout
}

Note that the zone is served from RAM, which means that if coredns launches and cannot contact any of its zone masters, the zone cannot be served.

If I need a forwarder, I configure it, here for the root zone, i.e. for all zones not explicitly defined within the Corefile:

$ cat Corefile
. {
    proxy .
}

Other middleware includes bind, which overrides the address to which CoreDNS binds, and cache, which can cap TTL values when operating as a forwarder. Middleware probably worth looking at includes etcd, which can read zone data from an etcd instance, and kubernetes. If you’re into that sort of stuff, of course.

Then there’s the dnssec middleware which promises to enable on-the-fly, a.k.a. “online”, DNSSEC signing of data with NSEC as authenticated denial of existence. In order to test this, I first create a key and then configure an authoritative zone in the Corefile which uses that key file:

$ ldns-keygen -a ECDSAP256SHA256 -k sec.aa

$ cat Corefile
sec.aa {
    file sec.aa
    dnssec {
        key file Ksec.aa.+013+28796
    }
}
sec.aa.		3600 IN	DNSKEY 257 3 13 (
			) ; KSK; alg = ECDSAP256SHA256 ; key id = 28796
sec.aa.		3600 IN	RRSIG DNSKEY 13 2 3600 (
			20170917103509 20170909073509 28796 sec.aa.
			nY9cmdO8tB81KX+OGA7d7V4cb6wrk876B5qRUWUZ2A== )

CoreDNS signs all records online; if I specify more than one key during configuration it signs each record with all keys.
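That "one signature per key" behaviour can be pictured like this — a toy sketch in which a SHA-256 over invented key material stands in for the real ECDSA signature over the canonical RRset plus RRSIG RDATA:

```python
from hashlib import sha256

def sign_rrset(rrset: bytes, keys: dict) -> list:
    """Produce one (keytag, signature) pair per configured key, as
    CoreDNS does when signing online. `keys` maps key tag -> secret;
    a hash stands in for the real public-key signature."""
    return [(keytag, sha256(secret + rrset).hexdigest())
            for keytag, secret in keys.items()]
```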

CoreDNS binaries are provided with middleware for logging and monitoring. For example, dnstap enables it to use dnstap’s structured binary log format, and I decide for which of the authoritative zones or proxy entries I want to log queries and responses by configuring dnstap accordingly. The health middleware, on the other hand, enables an HTTP endpoint at a port you specify, and it returns a simple string if the server is alive:

$ cat Corefile
example.aa {
    health :8080
}

$ curl
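The same idea in a few lines of Python, for illustration only: a tiny HTTP listener answering /health with a short "alive" body. (The "OK" body is my assumption of the simple string; this is not CoreDNS's code.)

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class Health(BaseHTTPRequestHandler):
    """Answer /health with a short string while the server is alive."""
    def do_GET(self):
        if self.path == "/health":
            body = b"OK"  # assumed body; the point is 200 + a simple string
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

srv = HTTPServer(("127.0.0.1", 0), Health)  # port 0: pick any free port
threading.Thread(target=srv.serve_forever, daemon=True).start()

with urllib.request.urlopen("http://127.0.0.1:%d/health" % srv.server_port) as r:
    resp = r.read().decode()
srv.shutdown()
```

A monitoring system polls that endpoint exactly the way the curl invocation above does.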

The tls middleware allows me to create a DNS-over-TLS (RFC 7858) or a DNS-over-gRPC (does anybody really need that?) server.

The server can act as a round-robin DNS load balancer, and it can provide responses to TXT queries in the CH (chaos) class:

$ cat Corefile
# define the CH "tlds"
bind {
    chaos CoreDNS-010 "Miek Gieben"
}
server {
    chaos CoreDNS-010 "Miek Gieben"
}
$ dig ...
version.bind.		0	CH	TXT	"CoreDNS-010"
version.server.		0	CH	TXT	"CoreDNS-010"
authors.bind.		0	CH	TXT	"Miek Gieben"
authors.bind.		0	CH	TXT	""
hostname.bind.		0	CH	TXT	""
id.server.		0	CH	TXT	""

There are more middleware “plugins” (rewrite is fun!) and there’s also some documentation as to how to write your own.

Apparently it’s not possible to configure middleware globally: if you have two servers configured in a single Corefile (by specifying different ports, say), both blocks need the middleware you want to share configured in them (documented here). This in turn means that certain things cannot be done, e.g. dnstap into the same Unix socket.

Apropos documentation: that is, very unfortunately, a bit lacking in clarity. While the information is there, it’s presented in a form which made me pull lots of hair, and I frequently found myself grepping (is that a verb?) my way through the project’s Github issues in search of how, or rather where, to write a directive in the Corefile. Miek prodded me along, for which I thank him!

Other than that, CoreDNS is huge fun and has a lot of potential.

I was telling you about restic the other day, and I demonstrated using its rest-server for storage. I had shortly looked at minio, but Alexander mentioned a few possibilities I’d overlooked, so here goes.

Minio is an Open Source, Amazon S3-compatible, distributed object storage server. That’s basically a mouthful of a way of saying it stores photos, videos, containers, virtual machines, log files, or just about any “blob” of data. It also stores backups, because restic knows how to handle a Minio backend and can place its encrypted backups therein.

minio overview

The Minio cloud storage stack consists of three major components:

  • the minio cloud storage server (a single binary)
  • the Minio client mc (a single binary)
  • a set of Minio SDKs (e.g. minio-py)


The minio storage server is designed to be minimal, and as far as I’m concerned it is: a single statically-linked Go application containing all I need to set up a storage server, dependencies included. I downloaded a version for the architecture of my NAS, and launched it.

$ mkdir config buckets
$ minio --config-dir config server buckets
Created minio configuration file successfully at /tmp/miniodemo/config
AccessKey: M83JKPPVH985R6XNR4XB 
SecretKey: JfNaBXpswthLzOAQRypLh+PIBwjg3LEkRLp/bmzg 

Browser Access:

Command-line Access:
   $ mc config host add myminio M83JKPPVH985R6XNR4XB JfNaBXpswthLzOAQRypLh+PIBwjg3LEkRLp/bmzg

Drive Capacity: 90 GiB Free, 465 GiB Total

Starting the minio server the first time generates an Access and a Secret key (unless you’ve pre-configured them in the JSON configuration file). Note the location of the config.json as shown in the output and/or make note of the Access and Secret keys, because you’ll need them on the client. The buckets directory is where Minio will start writing data, and I don’t touch whatever’s in there “manually”.

With minio I can also pool multiple drives into a single object storage server, and it supports things like notification of changes in buckets using different targets (AMQP, MQTT, ElasticSearch, Redis, etc.; here’s an example payload I obtained over MQTT).

Minio’s documentation is adequate though it took me a bit to detect the menu selector at the top of a page.

That’s all I need to do to get an Amazon S3-compatible server running, and I will now use mc, and then restic on it.


mc is the Minio client, and I recommend you keep a copy of its complete guide handy, even though the program has built-in help. It’s basically Minio’s answer to simple Unix commands like ls, cp, diff, etc., and it supports file systems as well as AWS-S3-type storage services like Minio.

I’ll now copy what minio launch said above regarding “Command-line Access” and invoke mc with that command, simply changing the name of the repository to “demo”, and I then create something called a bucket. If you’re familiar with S3 you know all about buckets, if not: a Minio bucket is a container which holds data like a real-life bucket holds water. Or gin&tonic. I digress. You can name a bucket how you wish, e.g. “data” or “gin-tonic”. I seem to be dehydrating; brb.

I use the Minio client (mc) to, say, create buckets and copy files. In order to do so, mc needs the URL and access keys of the storage server. I add those to its configuration:

$ mc config host add demo M83JKPPVH985R6XNR4XB JfNaBXpswthLzOAQRypLh+PIBwjg3LEkRLp/bmzg
Added `demo` successfully.

I can, and have, added a few storage servers to mc’s configuration using "mc config host add". (It’s not necessary to muck about in the minio client configuration file, but it’s not difficult; I do recommend you verify with jq or python -mjson.tool or something that your JSON’s ok after editing.)

Once done, I can “make a bucket” (mb) and copy some files into it:

$ mc mb demo/pail
Bucket created successfully `demo/pail`.

$ mc cp root-anchors/ tld-axfr/ demo/pail
tld-axfr/file..:  41.10 MB / 41.10 MB  ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓  100.00% 18.24 MB/s 2s

I can also launch a Web browser at the minio endpoint (URL shown above when we launched the minio server) and use said keys to login. (To be honest, this is something I don’t really need – I prefer the CLI.)

minio login

logged into minio

Multiple storage instances

I have another Minio server running here called nvx (its configuration is also in mc’s config.json), and I’m going to create a bucket on that named “cubo” (Spanish for “bucket”):

$ mc mb nvx/cubo
Bucket created successfully `nvx/cubo`.

And now for a bit of what makes this interesting for me: we’ll use mc to mirror one bucket to another:

$ mc mirror demo/pail nvx/cubo

$ mc ls nvx/cubo | head -4
[2017-09-06 12:42:30 CEST] 263KiB AERO.axfr.gz
[2017-09-06 12:42:30 CEST]  42KiB AL.axfr.gz
[2017-09-06 12:42:30 CEST]  12KiB AN.axfr.gz
[2017-09-06 12:42:30 CEST] 5.5KiB AO.axfr.gz

Can I add a file to that new bucket and then compare two buckets? Sure:

$ mc cp /etc/passwd nvx/cubo
$ mc diff demo/pail nvx/cubo
> nvx/cubo/passwd

$ mc mirror nvx/cubo demo/pail
$ mc diff demo/pail nvx/cubo
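The semantics of mc diff are easy to model: it's a comparison of two object listings, with > marking objects found only in the second bucket, as in the output above. A Python sketch, modelling a bucket as a name-to-etag dict — an assumption on my part; mc really compares object metadata:

```python
def bucket_diff(first: dict, second: dict) -> list:
    """Compare two bucket listings, mc-diff style: '<' only in the
    first, '>' only in the second, '!' present in both but different."""
    only_first = ["< %s" % name for name in first if name not in second]
    only_second = ["> %s" % name for name in second if name not in first]
    changed = ["! %s" % name for name in first
               if name in second and first[name] != second[name]]
    return only_first + only_second + changed
```

After the mirror both listings match, which is why the final diff prints nothing.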

Back to restic

restic has built-in support for Minio, but as you can possibly imagine, handling the AWS-type settings in restic for different servers and repositories can become a bit of a pain.

Thankfully Alexander enjoyed my restic post, and he decided to start using restic with Minio as a backend. In order to make his life easier, he created restic-tools which is basically a shell script wrapper around restic with support for multiple repositories. This happens with a few sourced shell scripts as configuration files, for example:

$ cat /etc/backup/demo.repo
RESTIC_REPOSITORY="s3:"           # note "/restic" as bucket name
AWS_ACCESS_KEY_ID="M83JKPPVH985R6XNR4XB"			  # keys from minio launch
RESTIC_PASSWORD='sekrit'					  # restic's repository password

$ cat /etc/backup/local/config

With that in place, I use the backup utility to initialize restic’s repository (I don’t specify its password, because it’s already configured in demo.repo):

$ backup demo init
created restic backend 24303c79a5 at s3:

$ backup demo local
scan [/Users/jpm/docs/dir]
scanned 6 directories, 5 files in 0:00
[0:00] 100.00%  0B/s  41.907 KiB / 41.907 KiB  11 / 11 items  0 errors  ETA 0:00
duration: 0:00, 0.84MiB/s
snapshot 3460b459 saved

$ backup demo snapshots
ID        Date                 Host        Tags        Directory
3460b459  2017-09-06 12:56:29  tiggr                   /Users/jpm/Auto/docs/dir

$ backup demo monitor tiggr 10 20
OK - Last snapshot #3460b459 0h ago

Note how the same backup program is used to actually perform the backup proper as well as to use any of restic’s commands. Alexander added a special monitor command which produces an icinga-type notification when a snapshot (backup) last happened for a particular host. Also note that these files all need to be protected as they contain Minio’s “AWS” keys and the restic repository password. Assuming this is all happening on your local network I don’t consider it a grave problem.
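The wrapper's job, at its core, is to source such a <name>.repo file into the environment before invoking restic. A Python sketch of just that parsing step — load_repo is a hypothetical helper, not part of restic-tools:

```python
import shlex

def load_repo(text: str) -> dict:
    """Parse KEY="value" lines from an /etc/backup/<name>.repo file --
    roughly what sourcing the file into the shell achieves. Trailing
    '# comments' are stripped, and quoting follows shell rules."""
    env = {}
    for line in text.splitlines():
        for token in shlex.split(line, comments=True):
            if "=" in token:
                key, _, value = token.partition("=")
                env[key] = value
    return env
```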

Not only is it now easy to create backups and restore data, we also have the added feature that, using Minio buckets, we can replicate (or mirror) those buckets off-site to another Minio instance. (Replication using the likes of rsync could also be done with files stored by rest-server or SFTP of course.)

Note that other than working with the buckets restic produces as a whole, there’s not much we can do with their content (aside from using restic, of course) because the organization within a bucket is restic’s job. In other words, an mc diff or similar will be pretty useless.

So far, my only complaint about Minio is its name: using my favorite search engine to search for information on Minio means wading through bucketfulls (!) of pointers to Minions – a different thing entirely. ;-)

Further reading:

View Comments :: backup and restic :: 06 Sep 2017 :: e-mail

During the course of time I’ve been through a slew of backup utilities; everything from cpio and tar, via rsync through expensive off-site things which got me angry every time either the software or its “backend” was changed. (They call it “upgrading”, but if you force radical changes on me it feels more like a downgrade. I digress.)

I’ve been using restic for some months, and it offers all I need in terms of backup/restore. In no particular order, some of its most notable features:

  • several different local and remote storage backends, and many of the remote backends can be local to my network (e.g. SFTP, REST)
  • complete documentation (IMO good documentation is too often underrated)
  • backups can be mounted and browsed through
  • restores to different directories
  • single, statically linked, Open Source, binary with easily remembered options and built-in help

In order to create a working example, I will use the REST server backend (created by the makers of restic) as a backup endpoint for restic, and we’ll take it from there.

restic at work


The rest-server is easy to set up as it also is just a static binary I launch on a machine onto which I want to backup my data. I will assume HTTP is sufficient (TLS protects the authentication credentials in transit, but I can do without that in my small local network; restic encrypts the backups anyway).

So, on the target machine onto which I will be running backups, I launch rest-server (no root required as long as the user as which I run it can write into the desired path, so please don’t gratuitously sudo) and give it a path into which it should store backup snapshots:

$ rest-server --path /mnt/bigdisk/backups
rest-server 0.9.4 compiled with go1.8.3
Data directory: /mnt/bigdisk/backups/
Authentication disabled
Starting server on :8000
Creating repository directories in /mnt/bigdisk/backups/jp1

The last line of diagnostic output appears later, when we create our first backup repository.

That completes the setup we need for remote backups to be sent to the REST server. (Interestingly, rest-server uses the same directory structure as the local backend, so you can access these files both locally and via HTTP, even simultaneously; I’ll show you this later.)

rest-server can provide basic authentication over HTTP; a simple htpasswd-type file placed in the root directory of the backup target enables that. TLS is, as mentioned, also available.

$ htpasswd -s -c /mnt/bigdisk/backups/.htpasswd jjolie


The REST server backend we configured is but one possibility. restic will happily work with a number of different data stores for backing up your data. These include

  • local directories
  • SFTP using public keys
  • Amazon S3
  • the Open Source Minio Object Storage
  • Openstack Swift
  • Backblaze B2
  • Azure Blob Storage
  • Google Cloud Storage.

I have been running backups over SFTP (because the REST backend didn’t exist when I started).

Irrespective of the backend used, restic encrypts data as it is stored, and the location at which the backup data is stored is assumed not to be trusted. This makes even a local directory which I periodically deposit at a friend’s house, or a backup to somebody else’s NAS, practical, because the data is protected.


Assuming we’ve decided where our backups are to be stored we first have to initialize a repository for restic to use. To keep the examples easier to follow, I will create an environment variable which points to a repository called jp1 on our REST server. Note the prefix rest: on the repository name:

$ export REPO='rest:'

$ restic -r $REPO init
enter password for new backend:
enter password again:
created restic backend fddd6a95ff at rest:

The password or pass phrase we enter is also called a key, and we can create as many as we want for this repository. (Keys can also be removed.) Anybody who has access to a repository key can unlock the repository. Note that this key is for the restic data repository proper; it is quite possible that your repository’s backend needs further authentication (e.g. HTTP basic auth for REST backend if you’ve configured it, SSH private key for SFTP, S3 credentials for the S3 backend, etc.)
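The reason several passwords can unlock one repository is that each restic "key" stores a copy of the repository's master key, encrypted under a key derived (with scrypt) from that password. A toy Python sketch of the idea — XOR stands in for the real authenticated cipher, and the scrypt parameters are demo-sized, not restic's:

```python
import hashlib
import os

def kdf(password: str, salt: bytes) -> bytes:
    # restic derives the key-encryption key with scrypt; these
    # parameters are deliberately small for the demo
    return hashlib.scrypt(password.encode(), salt=salt, n=2**10, r=8, p=1, dklen=32)

def wrap(master: bytes, password: str):
    """Produce one stored 'key': the master key encrypted under a
    password-derived key. XOR is a toy stand-in for the real cipher."""
    salt = os.urandom(16)
    k = kdf(password, salt)
    return salt, bytes(a ^ b for a, b in zip(master, k))

def unwrap(salt: bytes, wrapped: bytes, password: str) -> bytes:
    k = kdf(password, salt)
    return bytes(a ^ b for a, b in zip(wrapped, k))
```

Adding a key just means wrapping the same master key under another password; removing one deletes that wrapped copy.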

The password or key can also be passed to restic via an environment variable. I do this here to keep the output in my examples clear; it also allows automation from a script:

$ export RESTIC_PASSWORD='<clear-text-of-restic-password-here>'

Backups are kicked off with the backup subcommand. In restic terminology, the contents of a directory at a specific point in time is called a “snapshot”, so creating a backup actually means to create a snapshot:

$ restic -r $REPO backup /usr/share
scan [/usr/share]
scanned 725 directories, 14902 files in 0:00
[0:12] 100.00%  33.039 MiB/s  399.714 MiB / 399.714 MiB  15627 / 15627 items  0 errors  ETA 0:00
duration: 0:12, 31.12MiB/s
snapshot 5ebde637 saved

$ restic -r $REPO backup /usr/local/etc
$ restic -r $REPO backup /usr/local/etc
$ restic -r $REPO backup /usr/share
duration: 0:01, 319.43MiB/s
snapshot d7fe3fa0 saved

The second snapshot of a directory will typically be much faster than the first: when a directory is backed up, restic finds the pertaining snapshot for that directory and will update only files which have changed; in other words, backups are always incremental if there exists a matching snapshot. Similarly to how tar or rsync operate, we can also exclude particular files/directories from a snapshot, etc.
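The incremental behaviour falls out of content-addressed storage: data is stored under its SHA-256, so content the repository has already seen costs nothing to "back up" again. A much-simplified sketch — restic actually chunks files and hashes the chunks:

```python
import hashlib

def snapshot_files(files: dict, store: dict) -> dict:
    """Store each file's content under its SHA-256 and return a
    manifest of name -> hash. Content the store already holds is not
    written again, which is why a second snapshot is cheap."""
    manifest = {}
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)  # dedup: skip known content
        manifest[name] = digest
    return manifest
```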

restic also accepts data from standard input for, say, backing up live data from an RDBMS or whatnot.

$ echo Hello world | restic -r $REPO backup --stdin --stdin-filename greetz
[0:00] 12B  0B/s
duration: 0:00, 0.00MiB/s
archived as 2dd3a47d

At any moment we can see which snapshots we have; it turns out a snapshot has been created from a Windows machine as well. (I would typically use a different repository for each machine, but I wanted to demonstrate restic’s interoperability.)

$ restic -r $REPO snapshots
ID        Date                 Host              Tags        Directory
5ebde637  2017-08-23 09:28:40              /usr/share
547a1a76  2017-08-23 09:31:44              /usr/local/etc
f4e3d16b  2017-08-23 09:32:48              /usr/local/etc
d7fe3fa0  2017-08-23 09:35:59              /usr/share
2dd3a47d  2017-08-23 09:39:19              greetz
c8599517  2017-08-23 09:52:50  t420                          C:\Users\jpm\bin\dict

What’s a backup without a restore? Not much. Restoring snapshots with restic is a snap. We can, for example, restore into a different directory:

$ restic -r $REPO restore 547a1a76 --target /tmp/rr
restoring <Snapshot 547a1a76 of [/usr/local/etc] at 2017-08-23 09:31:44.730050452 +0200 CEST \
   by> to /tmp/rr
$ ls -l /tmp/rr/etc/unbound/unbound.conf
-rw-r--r--  1 jpm  admin  30780 Dec 15  2016 /tmp/rr/etc/unbound/unbound.conf

Instead of specifying a particular snapshot ID, I can also use the keyword latest to restore the latest backup.

Did I mention restic is multi-platform?

C:\Users\jpm>restic.exe -r %REPO% restore 2dd3a47d --target gr
restoring <Snapshot 2dd3a47d of [greetz] at 2017-08-23 09:39:19.075975641 +0200 CEST \
      by> to gr

C:\Users\jpm>type gr\greetz
Hello world

I can also mount and browse snapshots (on macOS/Linux). To do so, I use the mount subcommand which provides a FUSE file system onto a restic backup. Therein, I can browse to my heart’s content, inspect files, copy files out, etc.

$ mkdir /tmp/m
$ restic -r $REPO mount /tmp/m
Now serving the repository at /tmp/m
Don't forget to umount after quitting!
$ tree -L 3 /tmp/m
├── hosts
│   ├── t420
│   │   └── 2017-08-23T09:52:50+02:00
│   └──
│       ├── 2017-08-23T09:28:40+02:00
│       ├── 2017-08-23T09:31:44+02:00
│       ├── 2017-08-23T09:32:48+02:00
│       ├── 2017-08-23T09:35:59+02:00
│       └── 2017-08-23T09:39:19+02:00
├── snapshots
│   ├── 2017-08-23T09:28:40+02:00
│   │   └── share
│   ├── 2017-08-23T09:31:44+02:00
│   │   └── etc
│   ├── 2017-08-23T09:32:48+02:00
│   │   └── etc
│   ├── 2017-08-23T09:35:59+02:00
│   │   └── share
│   ├── 2017-08-23T09:39:19+02:00
│   │   └── greetz
│   └── 2017-08-23T09:52:50+02:00
│       └── dict
└── tags

22 directories, 1 file

Backup space is typically finite (at least for me), so I will want to remove snapshots occasionally, and ensure the repository is intact: restic provides commands with which I can do that, and also provides us with the possibility of defining a policy for removing them (forget).
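A retention policy in the spirit of restic forget --keep-last 3 is conceptually tiny: keep the newest N snapshots, return the rest for deletion. A sketch, not restic's code:

```python
def forget(snapshot_times, keep_last=3):
    """Return the snapshots a --keep-last style policy would delete:
    everything but the `keep_last` most recent entries."""
    ordered = sorted(snapshot_times, reverse=True)  # newest first
    return ordered[keep_last:]
```

restic's real policy language is richer (keep-daily, keep-weekly, tags, ...), but it composes out of exactly this kind of rule.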

As one final show of why restic puts my mind at ease: you remember we had a REST server accepting backups. Let us assume that server dies but we can still access the files and directories it created. We can directly access them, using restic:

$ restic -r /mnt/bigdisk/backups/jp1 snapshots

Wrapping up

restic offers me the flexibility I want. For example, I can create local snapshots and move those to a remote location later, or I can use any of the backends which work on my network (SFTP, REST, Minio) and not share the data with others. I can also use restic to backup to a friend’s NAS because I know my data will be encrypted. If you’re fond of cloud services for your backups, there’s quite a choice in restic.

I cannot judge whether restic is particularly fast or not – I’ve simply not tested that, because reliability and ease of restore are more important to me than backup throughput.

restic is Open Source, and its design is open. This is particularly important. When I asked a friend last night to confirm he is (or should I say “was”?) a Crashplan customer, he responded “Yes, not happy”. I can imagine: I’d be a bit miffed to hear my choice of backup software’s going down the tubes for me. If you are currently using some commercial backup software, this might be the perfect time for you to evaluate restic.

Continued in my restic backend of choice: minio.

Further reading:

View Comments :: backup and toolbox :: 23 Aug 2017 :: e-mail

Other recent entries