A zone digest is a cryptographic digest, or hash, of the data in a DNS zone, embedded in the zone data itself as a ZONEMD resource record. It is computed when the zone is published and can be verified by recipients of the zone. ZONEMD is specified in RFC 8976.

In order to compute the hash, zone data is fed to a digest function using a well-defined and consistent record ordering and format. The data given to the hash excludes the apex ZONEMD RRset itself and, in a signed zone, the RRSIG records covering it. The resulting digest is then added at the apex of the zone and, for signed zones, signed with DNSSEC.

Let’s see an example at work (with my new favorite TXT record in it):

$ORIGIN example.aa.
$TTL 3600
@    SOA   ns root 4 3H 1H 1W 1H
     NS    ns
     TXT   "DNS is innocent"
ns   A     127.0.0.1

I use ldns-zone-digest to create the digest (hash) and add the ZONEMD record to the zone:

$ ldns-zone-digest -c  -p 1,1 -o example.aa.digest example.aa ../example.aa
Loading Zone...4 records
Remove existing ZONEMD RRset
Add placeholder ZONEMD with scheme 1 and hash algorithm 1

$ cat example.aa.digest
example.aa.	3600	IN	NS	ns.example.aa.
example.aa.	3600	IN	SOA	ns.example.aa. root.example.aa. 4 10800 3600 604800 3600
example.aa.	3600	IN	TXT	"DNS is innocent"
example.aa.	3600	IN	ZONEMD	4 1 1 83dbb84f8b78e9bea8badede9316fb238f5c923440def32534aa147298d0912752aaf9b287823df1d1b737e43e71396d
ns.example.aa.	3600	IN	A	127.0.0.1

$ named-checkzone -q -F text -s relative -o - example.aa example.aa.digest
$ORIGIN .
$TTL 3600	; 1 hour
example.aa		IN SOA	ns.example.aa. root.example.aa. (
				4          ; serial
				10800      ; refresh (3 hours)
				3600       ; retry (1 hour)
				604800     ; expire (1 week)
				3600       ; minimum (1 hour)
				)
			NS	ns.example.aa.
			TXT	"DNS is innocent"
			ZONEMD	4 1 1 (
				83DBB84F8B78E9BEA8BADEDE9316FB238F5C923440DE
				F32534AA147298D0912752AAF9B287823DF1D1B737E4
				3E71396D )
$ORIGIN example.aa.
ns			A	127.0.0.1

As expected, even if I change the order of the records in the input, the digest remains identical.

For a signed zone, I use ldns-zone-digest ... -z <ZSK.private> so that the utility can compute and add the RRSIG to the ZONEMD record.

The rdata of the ZONEMD record contains the following fields:

  1. serial, which must match the SOA serial of the zone the ZONEMD is being added to
  2. scheme, which currently is 1 (SIMPLE)
  3. hash algorithm, 1 for SHA-384 and 2 for SHA-512
  4. digest field with 48 octets for SHA-384 and 64 octets for SHA-512
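
In the record created above, the digest field is 96 hexadecimal characters long, i.e. 48 octets, matching hash algorithm 1 (SHA-384), and the first field (4) matches the SOA serial of the zone. A quick check:

$ printf '%s' 83dbb84f8b78e9bea8badede9316fb238f5c923440def32534aa147298d0912752aaf9b287823df1d1b737e43e71396d | wc -c
96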

Another offline utility I can use is dns-tools, a single binary written in Go which bundles several functions. The one that creates a ZONEMD record in a zone is:

$ dns-tools digest -f example.aa -o example.aa.digest -d 1 -z example.aa
[dns-tools] 2023/04/16 10:56:42 Using config file: /tmp/dns-tools/dns-tools-config.json
[dns-tools] 2023/04/16 10:56:42 Reading and parsing zone example.aa (updateSerial=false)
[dns-tools] 2023/04/16 10:56:42 Sorting zone
[dns-tools] 2023/04/16 10:56:42 Zone Sorted
[dns-tools] 2023/04/16 10:56:42 Updating ZONEMD Digest
[dns-tools] 2023/04/16 10:56:42 Started digest calculation.
[dns-tools] 2023/04/16 10:56:42 Stopped digest calculation.
[dns-tools] 2023/04/16 10:56:42 Writing zone
[dns-tools] 2023/04/16 10:56:42 Zone written
[dns-tools] 2023/04/16 10:56:42 zone digested successfully in example.aa.digest.

$ cat example.aa.digest
example.aa.	3600	IN	SOA	ns.example.aa. root.example.aa. 4 10800 3600 604800 3600
example.aa.	3600	IN	NS	ns.example.aa.
example.aa.	3600	IN	TXT	"DNS is innocent"
ns.example.aa.	3600	IN	A	127.0.0.1
example.aa.	3600	IN	ZONEMD	4 1 1 83dbb84f8b78e9bea8badede9316fb238f5c923440def32534aa147298d0912752aaf9b287823df1d1b737e43e71396d

Knot DNS can add the digest automatically if I configure the zone appropriately:

zone:
  - domain: example.aa
    template: cmember
    zonemd-generate: zonemd-sha384

Note how the digest in the transferred zone below equals that of our examples above, as the zone contains the same data and has the same SOA serial number.

$ dig @192.168.1.170 +noall +answer +onesoa example.aa AXFR +multi
example.aa.		3600 IN	SOA ns.example.aa. root.example.aa. (
				4          ; serial
				10800      ; refresh (3 hours)
				3600       ; retry (1 hour)
				604800     ; expire (1 week)
				3600       ; minimum (1 hour)
				)
example.aa.		3600 IN	NS ns.example.aa.
example.aa.		3600 IN	TXT "DNS is innocent"
example.aa.		3600 IN	ZONEMD 4 1 1 (
				83DBB84F8B78E9BEA8BADEDE9316FB238F5C923440DE
				F32534AA147298D0912752AAF9B287823DF1D1B737E4
				3E71396D )
ns.example.aa.		3600 IN	A 127.0.0.1

Verification

Unbound has supported ZONEMD for some time now, and I configure an authoritative zone as follows:

server:
     verbosity: 3

auth-zone:
        name: "example.aa"
        primary: 192.168.1.170@5354
        zonemd-check: yes
        zonemd-reject-absence: yes
        zonefile: "example.aa"
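
To make Unbound pick up the configuration I reload it; a sketch using unbound-control, assuming remote control is enabled (auth_zone_reload re-reads a single auth-zone):

$ unbound-control reload
$ unbound-control auth_zone_reload example.aa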

When I reload the server, I can follow the digest verification in the log file:

[1681636411] unbound[47600:0] debug: auth-zone example.aa. ZONEMD hash is correct
[1681636411] unbound[47600:0] debug: auth zone example.aa. ZONEMD verification successful

If I configure Unbound to load the zone from a file, and I change the SOA serial in the file without recomputing ZONEMD, Unbound warns upon loading the zone:

[1681636681] unbound[47687:0] debug: auth-zone example.aa. ZONEMD failed: ZONEMD serial is wrong
[1681636681] unbound[47687:0] warning: auth zone example.aa.: ZONEMD verification failed: ZONEMD serial is wrong

PowerDNS supports ZONEMD in pdnsutil, and the PowerDNS Recursor validates ZONEMD in its zoneToCache function.

NSD has parsed ZONEMD since release 4.3.4, and BIND parses the record but cannot as yet produce it.

ldns has support for ZONEMD in both ldns-signzone and ldns-verify-zone:

$ ldns-keygen -a13 -k example.aa
Kexample.aa.+013+31040

$ ldns-signzone -z 1:1 example.aa Kexample.aa.+013+31040

$ ldns-verify-zone -ZZ example.aa.signed   # Requires a valid signed ZONEMD RR
Zone is verified and complete

$ ldns-zone-digest -v example.aa example.aa.signed
Loading Zone...16 records
Found and calculated digests for scheme:hashalg 1:1 do MATCH.

There are ongoing tests of adding ZONEMD to the root zone itself.

ZONEMD protects zone data “at rest” and is useful when transferring data between primaries and secondaries. (I think of it as a checksum for a zone which is contained in the zone itself.) This applies not only when servers AXFR the zone but also when zones are distributed outside of the DNS, as is the case, for instance, with the DNS root zone, which is published via the Web and FTP. In all cases the integrity of the zone can be verified after downloading it. ZONEMD doesn’t provide origin authenticity; DNSSEC is required for that.
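
Once the root zone carries a ZONEMD record (the tests mentioned above), a downloaded copy could be verified offline as well; a sketch, re-using ldns-zone-digest and one of the published locations of the root zone:

$ curl -sO https://www.internic.net/domain/root.zone
$ ldns-zone-digest -v . root.zone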

Otto makes a good point:

it is also interesting to note that glue records in a zone do not have DNSSEC signatures, but are covered by the ZONEMD record. So the signature of a ZONEMD record does cover glue records in an indirect way.

Further reading

dns :: 16 Apr 2023 :: e-mail

I’m just the messenger; don’t kill me. The user who asked the question and I both know well that the security of logging in via SSH can be greatly improved by using SSH keys instead of passwords.

Be that as it may, I thought this was an interesting case: an Ansible installation with a few hundred hosts has the requirement to change root passwords whenever an admin leaves the organization. For this to be possible, passwords obtained from a password store will be used to log in as root, and they are then to be changed. I was asked how the first part could be accomplished.

In order to ensure I get a fresh SSH connection at each attempt, I begin by disabling the fancy stuff (connection multiplexing in particular):

[ssh_connection]
ssh_args = -C

My first idea was to leverage SSH_ASKPASS, but that didn’t work because Ansible populates that variable itself.

I then decided a vars plugin would do the trick, and that actually worked, albeit with the disadvantage that the variable ansible_password became known throughout the play, something which doesn’t occur when ansible_password is set in the inventory or in host_vars/group_vars, for example. Hmm. Not nice.

After asking the Fediverse for inspiration, Roger and I appeared to arrive at the same solution within seconds of each other. His idea was this:

$ ansible ... -e ansible_password=`cat /etc/ansible.pass`

This suffers from possible exposure of the password in the process list. I arrived at something like the following, which exposes the lookup but not the secret. However, as Tony rightly points out, this is dangerous in that the secret is exposed via the environment on some Unixes:

$ read -p "Password: " -s pw; export pw
$ ansible  ... -e ansible_password='{{ lookup("env", "pw") }}'
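
To illustrate what Tony means: on Linux, for example, any child process started from that shell carries the exported pw variable in its environment, readable by the process owner (and by root for any process); the value is elided here:

$ sleep 600 &
$ tr '\0' '\n' < /proc/$!/environ | grep '^pw='
pw=...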

Using a pipe lookup, the password can be read directly from a password-manager program which emits a single-line password on stdout. Not super elegant, but workable, to the point of being usable directly in an inventory file, and the variable is not visible in the play.

[webservers]
www13 ansible_password='{{ lookup("pipe", "./pw-manager.sh jane") }}'

Evgeni pushed me in the direction of possibly the cleanest solution in the form of connection-password-file, which takes a path or - to indicate stdin. If I configure a plain file, it must contain the password; if I specify an executable program, it must print the password to stdout. In all cases Ansible strips \r\n from the value.

[defaults]
nocows = 1
connection_password_file = ./pw-manager.sh
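
For completeness, the pw-manager.sh used here is a stand-in for whatever talks to the password store; a minimal sketch which merely emits a single line to stdout (a real version would query the password manager’s API, and the /etc/ansible.pass path is only an example location):

#!/bin/sh
# pw-manager.sh -- stand-in: print exactly one line containing the password.
# A real implementation would query the password store instead of a local file.
exec cat /etc/ansible.pass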

It might be interesting to know that the connection_password_file is read once per playbook, irrespective of the number of tasks therein.

So, let’s test this. The current password is read by ./pw-manager.sh, and I wish to set a new root password:

$ python3 -c 'import secrets; print(secrets.token_urlsafe(20))' | tee pw
GuphPMbQOAPnvp_NziFnBtn4DM0

$ cat root.yml 
---
- hosts: www13
  remote_user: root
  gather_facts: no
  tasks:
   - name: "Alter root user"
     user:
         name: "root"
         password: "{{ password | password_hash('sha512')}}"

$ ansible-playbook root.yml -e password="$(cat pw)"

PLAY [www13] ***********************************************************************************************

TASK [Alter root user] *************************************************************************************
changed: [www13]

PLAY RECAP *************************************************************************************************
www13                      : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

# can I now login with the new password?

$ sshpass -f pw ssh -l root www13 whoami
root

The missing bits and pieces (accessing the password store, etc.) are left as an exercise to … You get the point. :)

Further reading

ansible :: 21 Feb 2023 :: e-mail

Catalog zones are specially formatted DNS zones that allow for easy provisioning of zones to secondary servers. The zones listed in a catalog zone are called member zones, and when a catalog is transferred to and loaded on a secondary with support for catalog zones, the secondary creates the member zones automatically. This is a method built into the DNS server itself for provisioning secondary servers without having to configure each secondary manually (even if it is via configuration management). BIND was the first server to support catalog zones, but support for them has meanwhile reached PowerDNS and Knot.

As described in this post, member zones are typically added to the catalog by, say, performing a dynamic update on the catalog. This causes the catalog zone’s SOA serial number to be incremented and a NOTIFY to be sent to its secondaries, whereupon they transfer the catalog zone and provision themselves with the member zones, adding new members and deleting those which have been removed.
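
To illustrate, adding a member boils down to inserting a PTR record with a unique label under the zones label of the catalog; a sketch with nsupdate, in which the server address, the TSIG key file, and the unique label are all invented for the example:

$ nsupdate -k tsig.key <<'EOF'
server 192.0.2.1
zone catalog.example
update add 0123456789abcdef.zones.catalog.example. 0 IN PTR a03.
send
EOF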

Knot has a special mode with which the catalog zone can automatically generate its content from configured zones. This is enabled with catalog-role: generate, which causes the catalog zone to include the member zones configured with catalog-role: member. The members are added to the catalog specified with catalog-zone, of which there can be many.

remote:
  - id: bind1
    address: 192.168.33.4@53

template:
  - id: catzonetemplate
    catalog-role: generate
    acl: catalog_transfer

  - id: cmember # catalog member
    catalog-role: member
    catalog-zone: my-catalog
    acl: secondaries_may_transfer

zone:
  - domain: my-catalog
    template: catzonetemplate
    notify: [ bind1, bind2, ... ]
    acl: [ catalog_transfer ]

  - domain: a01
    template: cmember

  - domain: a02
    template: cmember

  - domain: b01
    template: other

So the above configuration on a Knot primary provides a catalog zone my-catalog which holds the member zones a01 and a02 but not b01. (The latter doesn’t have a member catalog-role for our catalog.)

If I transfer the catalog zone from the primary Knot server I see its content:

$ dig @127.0.0.1 my-catalog AXFR +noall +answer +onesoa | named-compilezone -q -F text -s relative -o - my-catalog.
$ORIGIN .
$TTL 0  ; 0 seconds
my-catalog              IN SOA  invalid. invalid. (
                                1676734585 ; serial
                                3600       ; refresh (1 hour)
                                600        ; retry (10 minutes)
                                2147483646 ; expire (3550 weeks 5 days 3 hours 14 minutes 6 seconds)
                                0          ; minimum (0 seconds)
                                )
                        NS      invalid.
$ORIGIN my-catalog.
version                 TXT     "2"
$ORIGIN zones.my-catalog.
5ef8e84727fd007d        PTR     a02.
a918f34730f99351        PTR     a01.

The zone’s MNAME, RNAME, and NS are generated as “invalid.”, which is fine for most purposes; should we need to modify that, we can simply not use the generate function and instead manage the catalog zone “manually”.

(I was curious as to what Knot uses for the unique member names, as they’re shorter than a SHA-1. If I read the source code correctly, they opted for a SipHash-2-4 keyed with a timestamp.)

The good news for me in a current project is that Knot’s catalog zones are compatible with BIND’s (to be expected), and the automatic member zone addition will make the administrators’ lives easier.
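
On the BIND side, consuming the catalog amounts to slaving the catalog zone and referencing it in the catalog-zones option; a rough sketch, assuming a recent BIND 9 (older releases spell the option default-masters) and an invented primary address:

options {
        catalog-zones {
                zone "my-catalog" default-primaries { 192.168.33.2; };
        };
};

zone "my-catalog" {
        type secondary;
        primaries { 192.168.33.2; };
        file "my-catalog.db";
};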

DNS :: 18 Feb 2023 :: e-mail

More and more frequently, when I ask friends and family (people with a mainly non-computing background) how they manage their passwords, their eyes cloud over, and I then feel the need to tell them that they ought to apply good password hygiene. (I tend to mensplain a bit.) As such I’ve been looking much more deeply into KeePassXC as a multi-platform, Open Source, and very decent password manager.

I ran away from 1Password many years ago when, IIRC, they forced cloud storage upon their users and converted to a subscription model, and I settled on EnPass at the time. Aside from a number of UI quirks in EnPass I’ve been happy enough with it, and I got it back when they had a purchase model; I believe that has meanwhile also changed to a subscription model. I want to be able to recommend a program which has a fixed price (Open Source is fine) and a UI which will hopefully remain somewhat consistent. I think KeePassXC matches the requirement.

These notes are intended as a reminder to myself of the features and possibilities I discovered in KeePassXC. (Start with some screenshots if you like.)

the database

KeePassXC databases (*.kdbx; file format explained) are protected with either a password or a key file or both. The desktop app and the CLI program can optionally create these key files, which contain 128 bytes of random data used to augment the password. A key file can also be an image, a love letter, any file which doesn’t change. Think of it as a really complicated password that is read from a file, so you don’t have to remember it or type it into your master password field.

I would likely suggest a key file created with random data and have a backup of the key file printed on paper (using a font with which I can easily differentiate zero and oh and one and ell …):

$ keyfile=kp.key
$ dd if=/dev/urandom bs=128 count=1 status=none of=$keyfile

$ openssl dgst -sha1 $keyfile
SHA1(kp.key)= f4e8b1dca0f2833d0596ba60664999fc0ca99a09

$ openssl enc -base64 -in $keyfile
MShDxixExQGpQpnoXrby0DI7lVpAr+zLuqg8P3FYOpBpRwVT+hrViMcc+tV0DMWB
nSh7ar8n4f3H5WNbT3pqI8zMJNZj23XwMc1NakzjzcZuiMxbwUK8LDuzkh2NXtjQ
464jy83ECfvomjBTQVo9B64+qeDSuaM1IHTvCYuGH3A=

$ openssl enc -A -base64 -in $keyfile |
       qrencode -l Q -o $keyfile.png

QR-code of base64 of the key file

In order to recover the binary key file I could scan the QR code, copy the resulting text (or even enter it manually from the base64 representation if necessary), and decode the base64 back into the key file’s data with

$ openssl enc -d -A -base64 -in /tmp/paper -out kp-new.key

$ openssl dgst -sha1 kp-new.key
SHA1(kp-new.key)= f4e8b1dca0f2833d0596ba60664999fc0ca99a09

KeePassXC databases can be synchronized via Syncthing, Dropbox, a file share, etc., but the key files ought to be kept separately. Key files are also supported by KeePassium on iOS and possibly by other apps such as KeePassDX and KeePass2Android on Android, and Strongbox on iOS.

Hardware key chooser when opening database

In addition to a password and/or key file, the desktop apps can use a supported YubiKey (I chose a 5C nano) with HMAC-SHA1 to add extra entropy to the chosen password. While this works very well, it has the disadvantage of not being supported by the mobile apps I looked at, meaning it would be a desktop-only feature. Also, it’s important to have a backup YubiKey (a 5C NFC here) for this feature; I wouldn’t want a lost or broken YubiKey to lock me out of the database!

Sadly, KeePassXC relies on external file synchronization, which might not be trivial to set up. As Alexander notes, the original Keepass2 (and Keepass2Android) are able to open database files directly from a WebDAV URL, and merge changes that have been made from a different device, but KeePassXC is likely the more modern choice with more features.

Python module

The Python pykeepass module interacts with KeePass databases (it supports KDBX3 and KDBX4), and as such also works with KeePassXC. I can create a database (this is how I created the jane.kdbx database for the examples on this page), find and add entries, change or add passwords and entry details, etc.

#!/usr/bin/env python3

from pykeepass import PyKeePass, create_database
import secrets
from xkcdpass import xkcd_password as xp

kp = create_database("jane.kdbx", password="tt", keyfile="kp.key", transformed_key=None)

g_work = kp.add_group(kp.root_group, "Work")
g_play = kp.add_group(kp.root_group, "Play")
g_social = kp.add_group(g_play, "socialmedia")

wordfile = xp.locate_wordfile()
mywords = xp.generate_wordlist(wordfile=wordfile, min_length=5, max_length=8)

password = xp.generate_xkcdpassword(mywords, acrostic="tonic", delimiter="-")

entry = kp.add_entry(g_work, "gmail", "myusername", password)
print(entry)  # Entry: "Work/gmail (myusername)"

e = kp.add_entry(g_social, "Mastodon", "janej", secrets.token_urlsafe(32))

e.url = "https://mastodon.example.com/@janej"
e.tags = [ "fediverse", "mastodon" ]
e.notes = "account created in 2018 with 2FA"

emails = [ "janej@example.com", "jane@example.com" ]
e.set_custom_property("mail", "\n".join(emails))    # custom property expects newline-separated
e.set_custom_property("uid", "12345678")            # pykeepass > 4.0.3 will have: protect=True

# there doesn't appear to be a way of exiting cleanly without the .save()
kp.save()

keepassxc-cli

keepassxc-cli is a command-line tool for KeePassXC with which I can manipulate its databases; in the examples below I alias it to kpc.

$ kpc open -k tt.key jane.kdbx
Enter password to unlock jane.kdbx:
Passwords> help


Available commands:
add                 Add a new entry to a database.
analyze             Analyze passwords for weaknesses and problems.
attachment-export   Export an attachment of an entry.
attachment-import   Imports an attachment to an entry.
attachment-rm       Remove an attachment of an entry.
clip                Copy an entry's attribute to the clipboard.
close               Close the currently opened database.
db-create           Create a new database.
db-edit             Edit a database.
db-info             Show a database's information.
diceware            Generate a new random diceware passphrase.
edit                Edit an entry.
estimate            Estimate the entropy of a password.
exit                Exit interactive mode.
generate            Generate a new random password.
help                Display command help.
ls                  List database entries.
merge               Merge two databases.
mkdir               Adds a new group to a database.
mv                  Moves an entry to a new group.
open                Open a database.
quit                Exit interactive mode.
rm                  Remove an entry from the database.
rmdir               Removes a group from a database.
search              Find entries quickly.
show                Show an entry's information.
Passwords> generate
vbPf4p9VmxwhkyDehiQDQNR2XiiMUbjf
Passwords>

Using the --yubikey option, I can also unlock a Yubikey-protected database from the command-line:

$ kpc ls other.kdbx -y 2
Enter password to unlock other.kdbx:
Please present or touch your YubiKey to continue.
...

In the example which follows, I attach an image to the database and then display all details (including the protected fields) of an entry.

$ alias kpc=/Applications/KeePassXC.app/Contents/MacOS/keepassxc-cli
$ kpc attachment-import -k kp.key jane.kdbx Mastodon mascot mastodon-mascot.jpg
Enter password to unlock jane.kdbx:
Successfully imported attachment mastodon-mascot.jpg as mascot to entry Mastodon.

$ kpc show jane.kdbx -k kp.key --show-protected --show-attachments Mastodon
Enter password to unlock jane.kdbx:
Title: Mastodon
UserName: janej
Password: REH1I0xz_iEM2VYvhiwfah5Rt1RROxqErmejlaoKY6A
URL: https://mastodon.example.com/@janej
Notes: account created in 2018 with 2FA
Uuid: {95275776-9a50-11ed-add7-f01898ef9fe7}
Tags: fediverse,mastodon

Attachments:
  mascot (3.0 KiB)

It’s not actually documented anywhere that I could find, but keepassxc-cli reads passwords from stdin. (tt is the database password, and the diceware subcommand creates four words as in ”subpar amusement crayfish footrest”.)

$ (echo tt; kpc diceware -W 4) | kpc edit -k tt.key jane.kdbx -p gmail
Enter password to unlock jane.kdbx:
Enter new password for entry:
Successfully edited entry gmail.

I could add -q to the command to completely silence the prompts for the database password and the entry’s new password.

the UI

screenshot of KeePassXC with the programmatically-created entry shown

  1. favicon downloaded from within the entry (add URL, hit download); there is also a menu entry for downloading favicons automatically, which isn’t possible here because it’s a fake address
  2. I didn’t understand the color square at first, but it’s a password-quality indicator
  3. Additional attributes. In future the Python module will be able to add protection (as noted in the pykeepass comment in the code above)
  4. Attachment names
  5. Group folders as created within Python

SSH agent

KeePassXC implements support for an SSH agent, and I find it works very well. What I particularly appreciate is the possibility of overriding the agent socket path, as I have a bit of a convoluted setup here which sets a specific path on login.

KeePassXC’s implementation can add SSH keys when unlocking a database, it can automatically remove keys from the agent after a selectable time, and it can optionally remove all keys it has added when the database is locked (i.e. closed). Note that there’s a setting in KeePassXC which can optionally ask for confirmation before using a key, but that requires the separate SSH askpass utility; this is not something KeePassXC can implement itself, as there’s no feedback from the agent.
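
A quick way to confirm the keys actually reached the agent is to ask the agent itself, assuming SSH_AUTH_SOCK points at the agent KeePassXC is configured to use:

$ ssh-add -l        # lists the fingerprints of all keys currently held by the agent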

There’s a very good writeup of how to manage SSH keys with KeePassXC.

TOTP

KeePassXC has built-in support for Time-based One-Time Passwords (TOTP). These are passwords which use the current time as a source of uniqueness. I prefer to use an app which does TOTP rather than SMS for two-factor authentication (2FA).
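
The CLI can also emit these codes; a sketch, assuming the Mastodon entry in jane.kdbx has a TOTP secret configured (recent keepassxc-cli versions have a --totp option on show and clip):

$ kpc show --totp -k kp.key jane.kdbx Mastodon    # prints the current code
$ kpc clip --totp -k kp.key jane.kdbx Mastodon    # or copies it to the clipboard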

I have some doubts about the security of having TOTP within the password manager (I use Authy and not the support built into EnPass), so I asked on Mastodon:

Is there a consensus on whether it is better/safer to have TOTP generation done within the password safe (e.g. KeePassXC) or rather externally using a separate program (e.g. Authy)?

I think I’m summarizing correctly when I quote Thomas, who responded:

It’s better to have it on a separate device. But it’s also better to have it in any place than to not have it at all.

That’s probably very good advice.

Further reading

passwords :: 22 Jan 2023 :: e-mail

My original plan for 2022 was to work a bit less, but I failed for the simple reason that I forgot to mark “free” time in the calendar. Stupid. Be that as it may, it was mostly quite a good year for me, with a definite non-work-highlight being a three-week holiday with my offspring in the Spring.

I did quite a bit of DNS work and training, gave a few Ansible trainings, and in between I wrote the odd blogticle:

This year marked my tenth Ansible anniversary, and we created an Ansible reference sheet. I learned about using a lookup plugin for Ansible module_defaults, and jotted down some notes on Ansible local facts on Windows nodes. As you might know, I’m a fan of local facts, and we began collecting ideas for using local facts.

On the DNS side, the pièce de résistance was writing about DNSSEC signing with an offline KSK, and because I get the question occasionally, about DNSSEC with NLnetLabs’ LDNS and NSD.

We also looked at a bit of history in DNSSEC “key tag” or “key ID”?, and for good measure I made a fool of myself in Red means Kaputt: when DNSSEC turns into a treasure hunt.

I’m looking forward to what 2023 has to offer. We’ll see. Wir werden sehen. Veremos. On verra bien.

dns, dnssec, ansible, and blog :: 23 Dec 2022 :: e-mail
