RFC 4255 specifies a standard for using the DNS to securely publish Secure Shell (SSH) key fingerprints. We’ve discussed that here before and there is a gotcha you should be aware of before deploying the records.

Let’s recall what an SSHFP record looks like in the DNS:

;; ANSWER SECTION:
ubu.jpmens.org.    120  IN  SSHFP  1 1 7E7A55CEA3B8E15528665A6781CA7C35190CF0EB
ubu.jpmens.org.    120  IN  SSHFP  2 1 CC17F14DA60CF38E809FE58B10D0F22680D59D08

The rdata of the record contains the algorithm number (1 == RSA, 2 == DSA), the fingerprint type (1 == SHA-1), and the fingerprint calculated over the public key blob.
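
If you want to convince yourself of where that hex string comes from, here is a minimal sketch: the fingerprint is simply the SHA-1 digest of the base64-decoded key blob. (I'm assuming a GNU userland with base64 and sha1sum, and the usual OpenSSH host key path; adjust both for your platform.)

# decode the base64 key blob from the public key file and hash it
awk '{print $2}' /etc/ssh/ssh_host_rsa_key.pub | base64 -d | sha1sum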

I’m aware of two utilities that are available to create these records:

  • ssh-keygen -r hostname

    prints the SSHFP resource record named hostname for the specified public key file, or for the host’s local public key if none is specified (see the example after this list).

  • sshfp by Paul Wouters.

    generates SSHFP records from known_hosts files or from the output of ssh-keyscan.
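
For example, something like this prints records ready to paste into a zone file; the hostname and key paths here are only illustrations, so substitute your own:

ssh-keygen -r ubu.jpmens.org -f /etc/ssh/ssh_host_rsa_key.pub
ssh-keygen -r ubu.jpmens.org -f /etc/ssh/ssh_host_dsa_key.pub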

This is all fine and dandy, but how can we collect SSHFP records for a mass of nodes (i.e. hosts) in our network and publish those in the DNS? Depending on the brand of DNS server in use, this could mean populating a zone file, performing a Dynamic DNS Update, using an SQL dialect to manipulate an RDBMS back-end (e.g. for PowerDNS), etc. In the following unordered list of ideas, I’ll simply call that populate the DNS.
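
To make the PowerDNS case a little more concrete, one way to populate the DNS could look roughly like the following; the database name and the domain_id are assumptions on my part, and the record content is the RSA fingerprint from the example above:

mysql pdns <<'EOF'
INSERT INTO records (domain_id, name, type, content, ttl)
     VALUES (1, 'ubu.jpmens.org', 'SSHFP',
             '1 1 7E7A55CEA3B8E15528665A6781CA7C35190CF0EB', 120);
EOF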

  • In a recent chat on IRC, Pieter Lexis mentioned he wanted to work on a Puppet module to collect SSH keys for SSHFP records. I hadn’t thought of that, but I like the idea. Using Puppet and storeconfigs, I can have the nodes transfer their public keys to the puppet master, create the DNS records there, and then populate the DNS. This is possibly the most elegant of solutions, providing you deploy nodes with Puppet, as it caters for nodes being unavailable at a particular point in time. Furthermore, we can ensure fingerprints are submitted only when host keys change (i.e. when they are rolled by the SSH service).

    Update: Pieter also points out that Puppet’s facter already provides both the host’s RSA and DSA keys, which makes the task even easier. (The following example runs on a node.)

#!/bin/sh
# print SSHFP records for this node, using the host keys facter already knows about

fqdn=$(facter fqdn)
for algo in rsa dsa
do
	type=$algo
	[ "$type" = 'dsa' ] && type=dss		# DSA keys carry the "ssh-dss" prefix
	tmp=$(mktemp)
	facter "ssh${algo}key" | sed -e "s/^/ssh-${type} /" > "$tmp"
	ssh-keygen -r "$fqdn" -f "$tmp"
	rm -f "$tmp"
done
  • Puppet stores all nodes’ facts in YAML files on the Puppet master. I’ve written facts2sshfp which slurps through those, chops off whatever isn’t needed and prints out SSHFP records for all nodes. The program optionally produces YAML or JSON output, and you can give it a Python template in a file to print the key fingerprints however you want. (An example for creating SQL INSERTs for PowerDNS is included.) Documentation is contained in the README.

  • If the network is tightly controlled and I don’t mind distributing credentials to all my hosts, I can have the node update the DNS database directly, either by giving it access to the back-end RDBMS (think passwords) or by allowing it to perform a dynamic update (think TSIG or SIG(0) keys); a rough nsupdate sketch follows this list.

  • I could create a special HTTP REST service to which nodes can POST or PUT keys. This can be protected with SSL/TLS certificates I issue to each node. (Similar in concept to the previous method.)

  • sshfp can be instructed to use ssh-keyscan to probe public keys of specified hosts or domains, so I could do this centrally. For example, sshfp -a -n 127.0.0.2 -s ww.mens.de performs a zone transfer for the specified name server and zone, and then scans the SSH public keys of the individual hosts. This will obviously fail to obtain fingerprints for nodes which are currently unreachable, so I’ll have to schedule it to run periodically, check for unreachable nodes, handle failures, etc.; a rough wrapper sketch follows this list.

  • Using Ansible, I could create an action plugin for obtaining SSHFP records and use those to update a database and create a zone master file.
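
To flesh out the dynamic update idea, a node could run something along these lines; the server address, zone, and TSIG key file name are assumptions here, and the sed expression merely turns ssh-keygen’s output into nsupdate statements with a TTL:

fqdn=$(facter fqdn)
{
    echo "server 127.0.0.2"              # the master server (assumption)
    echo "zone jpmens.org."
    echo "update delete ${fqdn} SSHFP"   # drop stale fingerprints first
    ssh-keygen -r "$fqdn" | sed -e "s/ IN SSHFP / 120 IN SSHFP /" -e "s/^/update add /"
    echo "send"
} | nsupdate -k /etc/Kjpmens.org.+157+12345.private   # made-up TSIG key file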

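Similarly, to give the central scanning idea a bit more shape, a rough wrapper might look like this; hosts.txt (one FQDN per line) and the retry-list path are made-up names, and unreachable hosts are simply remembered for the next run:

#!/bin/sh
# scan each host's RSA and DSA keys and print SSHFP records;
# hosts that don't answer are noted for a later retry
failed=/var/tmp/sshfp-failed.txt
: > "$failed"
while read host
do
	ok=0
	for t in rsa dsa
	do
		tmp=$(mktemp)
		ssh-keyscan -t "$t" "$host" 2>/dev/null | awk '{print $2, $3}' > "$tmp"
		if [ -s "$tmp" ]
		then
			ssh-keygen -r "$host" -f "$tmp"
			ok=1
		fi
		rm -f "$tmp"
	done
	[ $ok -eq 0 ] && echo "$host" >> "$failed"
done < hosts.txt
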
Can you think of other methods? I’d be interested.

SSHFP, DNS, and DNSSEC :: 01 Feb 2012