Christoph and I exchange gadgets occasionally, and he sent me a SensorTag, a small red thing which contains a bunch of sensors: a temperature sensor, a 9-axis motion sensor, a humidity sensor, an altimeter and pressure sensor, and an ambient light sensor. In addition to two buttons, the SensorTag has a reed switch which I can activate with a magnet, and a buzzer (which I thought didn't work until I held the device to my ear). Apparently it also has a serial flash on it for storage and OTA upgrades, and a battery. The teardown suggests it also has a microphone. The package is enclosed in red rubber which feels nice, and which can easily be removed to reveal a small, rectangular plastic case with a transparent lid.


I spent an hour looking for information on the SensorTag, reading some articles about it, and generally attempting to get a feel for what it's supposed to do, but I'm having trouble finding real use-cases for it. As a consumer of this "Thing" there is too much I don't understand on the Texas Instruments SensorTag page. For example:

  • The SimpleLink™ SensorTag allows quick and easy prototyping of IoT devices. How? What does this mean?
  • IoT made easy, with a secure connection to the cloud. Maybe, but you need an app on a smart phone for that.
  • The SensorTag comes in two variants. One of them is "coming soon".
  • Low-power or battery-less operation. Which shall it be?
  • Remove the barriers between software and hardware. How's that? I need the hardware (SensorTag) and the software (an app).
  • With the SensorTag app, you can build your own iOS app in minutes. It allows quick and easy prototyping of simple sensor applications. There is absolutely no indication whatsoever that the iOS app "lets me build my own iOS app in minutes" or that it lets me do "easy prototyping of simple sensor applications". Maybe what they mean is server-side, but that's certainly not what the blurb says. Or do they mean a bit of drag and drop as shown in this video?

The getting started tab of the SensorTag page says I should download the app, remove the battery tab (so it does have a battery, eh?), and I can then "explore" my SensorTag. Yes, that works, as the following screenshot depicts, after I figure out that I have to poke one of the rubber buttons on the tag to allow it to be discovered.

iOS app

I then find out by experimentation that I have to switch on the sourcing switch at the top of the app; this enables the app to push data to "the cloud", and I can click on a link which takes me there:

The Web page updates the device data instantly. I forget where exactly, but I stumbled over the fact that the sensors' data is also published to an IBM MQTT broker which I can subscribe to with any MQTT client, replacing the xxx in the subscription topic by the device ID which is displayed on the above IBM Internet of Things Foundation page. The MQTT topics are documented here.

mosquitto_sub -h \
   -v -i a:quickstart:jp01 -t 'iot-2/type/+/id/xxxxxxxxxxxx/evt/+/fmt/+'
iot-2/type/"multitool-app"/id/6e97b598fe86/evt/status/fmt/json {

I did not format this payload; the formatted JSON string you see is what was transmitted, and it makes me shudder slightly because it makes scripting difficult. The monitoring topic doesn't do that, but I do wonder who thought up the double quotes within the topic name:

iot-2/type/"multitool-app"/id/6e97b598fe86/mon { "Action": "Connect", "Time": "2015-11-24T07:16:46.294Z", "ClientAddr": "", "ClientID": "d:quickstart:\"multitool-app\":xxxxxxxxxxxx", "Port": 1883, "SecureConnection": false, "Protocol": "mqtt-tcp", "ConnectTime": "2015-11-24T07:16:46.293Z" }
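If one does want to script against the event payloads, re-serializing them compactly helps; here's a small illustration of the idea (the payload below is a stand-in for the real one):

```python
import json

# a stand-in for the multi-line JSON the SensorTag app publishes
pretty = '''{
    "d": {
        "IRTemp": 20.5,
        "AmbTemp": 24.5625,
        "humidity": 53.73907
    }
}'''

# re-serialize compactly: one record per line, which shell tools can handle
flat = json.dumps(json.loads(pretty), sort_keys=True, separators=(",", ":"))
print(flat)
```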

Christoph wrote a nice little program which subscribes to that data stream and prints this out:

$ DEVICE=xxxxxxxxxxxx ./
     magZ      magY     gyroY    IRTemp  baroPres baroHeigh      magX      accY      accX      accZ     gyroZ     gyroX  humidity   AmbTemp   optical
 18.43799  96.83691 0.1634216      20.5   1012.35 -0.603385  56.06348 0.0057983 0.0003662 -0.244995 0.1089478 -0.910491  53.73907   24.5625      0.48
 18.43799  96.83691 0.1634216    20.375   1012.31 -0.258004  56.06348 0.0057983 0.0003662 -0.244995 0.1089478 -0.910491  53.73907   24.5625      0.48
 18.43799  96.83691 0.1634216  20.40625   1012.28        -0  56.06348 0.0057983 0.0003662 -0.244995 0.1089478 -0.910491  53.73907   24.5625      0.48
 18.43799  96.83691 0.1634216  20.34375   1012.31 -0.258004  56.06348 0.0057983 0.0003662 -0.244995 0.1089478 -0.910491  53.73907   24.5625       0.4
 18.43799  96.83691 0.1634216  20.40625   1012.31 -0.258004  56.06348 0.0057983 0.0003662 -0.244995 0.1089478 -0.910491  53.73907   24.5625      0.48
 18.43799  96.83691 0.1634216  20.46875   1012.26 0.1727007  56.06348 0.0057983 0.0003662 -0.244995 0.1089478 -0.910491    53.617   24.5625      0.48
  19.3374   95.9375 0.2957153  20.59375   1012.27 0.0863499  55.16406 0.0075683 -0.001159 -0.244689 0.0077819   -0.8638    53.617   24.5625      0.48

The program itself is quite simple:

#!/usr/bin/env python
import paho.mqtt.client as paho
import json
from os import getenv

device = getenv('DEVICE')

fields = ("magZ", "magY", "gyroY", "IRTemp", "baroPres", "baroHeight",
          "magX", "accY", "accX", "accZ", "gyroZ", "gyroX",
          "humidity", "AmbTemp", "optical")
fmt = " ".join(["%9.9s"] * len(fields))

def on_connect(mosq, userdata, rc):
    mqttc.subscribe('iot-2/type/+/id/%s/evt/+/fmt/+' % device, 0)
    print fmt % fields

def on_message(mosq, userdata, msg):
    d = json.loads(str(msg.payload))["d"]
    print fmt % tuple(d[f] for f in fields)

mqttc = paho.Client(client_id="a:quickstart:%s" % device, clean_session=True, userdata=None, protocol=3)
mqttc.on_connect = on_connect
mqttc.on_message = on_message

mqttc.connect("", 1883, 60)
mqttc.loop_forever()

The documentation of this device is spread all over the place, and the fact that there have been two versions of it doesn't help at all. Some documentation is on a TI Wiki. The CC2650 SensorTag User's Guide, for example, warns about a pre-production FW image, but the app through which upgrades are apparently possible doesn't show newer firmware versions being available.

Another link from the SensorTag page takes me here where I see 'mqtt' linked to. There I see I need a "free Temboo account", and I have to "register with Twilio" where I get a phone number, and then I require a "Pagerduty account" and an API key, and, then I need a "Zendesk account", ... I very quickly closed that Web browser tab and shuddered. Gals & boys, if that's what we need for the "Internet of Things", we're going to have 50 billion squared problems by the year 2020!

There's also a quick guide to making a mobile app for the SensorTag which I haven't tried, because I don't see the point in creating a mobile app which displays the sensor data if I have to run the TI SensorTag app anyway.

Be that as it may I went in search of how to talk to the device directly, or rather, how to have the device talk to me, and found this.

After inserting a BLE-capable Bluetooth dongle into a laptop, I could actually scan for LE devices. I learned that a BLE device supports only one connection at a time, so I closed the SensorTag app before attempting the scan:

$ hcitool -i hci1 lescan
LE Scan ...
B0:B4:48:BD:B8:05 CC2650 SensorTag

and the rest was easy in as much as a chap called Ian Harvey has done the hard work.

jpmens/cc2650/b0b448bdb805 {"lux": 0.24, "ambient_temp": 25.14, "humidity": 48.42047119140625, "target_temp": 21.03125, "tst": 1448289252, "millibars": 1002.97}
jpmens/cc2650/b0b448bdb805/lux 0.24
jpmens/cc2650/b0b448bdb805/ambient_temp 25.14
jpmens/cc2650/b0b448bdb805/humidity 48.4204711914
jpmens/cc2650/b0b448bdb805/target_temp 21.03125
jpmens/cc2650/b0b448bdb805/tst 1448289252
jpmens/cc2650/b0b448bdb805/millibars 1002.97

Minutes later, I had the device reporting via MQTT using this code, and this doesn't require a cloud service -- I'd point it at my own MQTT broker, of course.
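The per-value subtopics in the output above follow a simple fan-out pattern; here's a sketch (my own illustration, not Ian Harvey's code) of turning one reading into MQTT-ready (topic, payload) pairs:

```python
import json

def per_value_messages(base_topic, reading):
    """Return (topic, payload) pairs: the full JSON on the base topic,
       plus one subtopic per value, as in the output above."""
    msgs = [(base_topic, json.dumps(reading, sort_keys=True))]
    for key, value in sorted(reading.items()):
        msgs.append(("%s/%s" % (base_topic, key), str(value)))
    return msgs

reading = {"lux": 0.24, "ambient_temp": 25.14, "millibars": 1002.97}
for topic, payload in per_value_messages("jpmens/cc2650/b0b448bdb805", reading):
    print("%s %s" % (topic, payload))
```

Each pair would then go to `mqttc.publish(topic, payload)` against one's own broker.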

After playing with this for a bit I realized that the Python implementation doesn't appear to cater for all the SensorTag's capabilities (i.e. sensors). I then found node-sensortag which does, and cobbled up something which publishes an MQTT payload whenever one of the sensors on the tag notifies NodeJS. This module also supports the buttons and the reed switch, and it auto-discovers the SensorTag proper, so I don't have to configure it with the device identifier. The result, using Freeboard with the MQTT plugin gave me this:


As a BLE device, the SensorTag literally "goes away" when it's too distant from whatever is receiving the data, be it the iOS app or the BT dongle in my laptop. I assume that the makers wanted the SensorTag to be carried in one pocket, with the cloud-connected smart phone in another. I just don't know. The SensorTag specifications state that it can be toggled between Bluetooth Smart, 6LoWPAN and ZigBee and, as mentioned above, there is to be a WiFi version some time.

The SensorTag is small enough to be placed almost anywhere, and as it doesn't need wiring, it would be nicely suited as an ambient light sensor to control, say, openHAB: an appropriate BLE dongle on an openHAB box within Bluetooth reach, and Bob's your uncle. (BTW, the ambient light sensor appears to be very good.) Christoph also suggests using it to determine whether your spouse drives more aggressively than you do, by monitoring the accelerometer. ;-)

If and when the WiFi version comes, and if it's comparable in price to the Bluetooth version (USD 29.00), I could imagine getting two or three to use as temperature and light sensors in the house.

View Comments :: IoT and MQTT :: 23 Nov 2015 :: e-mail

When an authoritative DNS name server is queried, it knows the address of the recursive caching server which queried it, and based on this information it can return a different response depending on the source address. This is typically known as GeoDNS or GeoIP-based DNS, and it is often used to return the address of a resource which is closest (network-wise) to the user's resolver.
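The idea in miniature (my own sketch; the hostnames, country codes, and mapping are invented):

```python
# Toy GeoDNS: answer with the hostname configured for the resolver's
# country, falling back to a wildcard entry. All data here is invented.
ANSWERS = {
    "de": "",   # a German point of presence
    "es": "",   # a Spanish one
    "*":  "",   # everybody else
}

def geo_answer(country_code):
    """Pick the record for a resolver's country, or the wildcard."""
    return ANSWERS.get(country_code, ANSWERS["*"])

print(geo_answer("de"))   # ->
print(geo_answer("jp"))   # ->
```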

Aki Tuomi's GeoIP back-end for PowerDNS does just that. The location information on a by-country basis can be obtained from MaxMind's GeoLite database, or you can create your own.


We launch PowerDNS and configure the back-end with a few directives, basically giving it the path to the location data and to a YAML configuration file which defines the geo-enabled zones we provide on the server. (YAML is the same markup language we use in Ansible.)


There is an additional configuration directive called geoip-database-cache with which I specify what kind of caching should be done on the database.

  • standard, the default if unspecified, has the back-end read the database from the file system and consumes little memory.
  • memory loads the whole database into RAM, which provides high performance.
  • index caches only the frequently accessed index portions of the database, which is faster than standard and consumes less RAM than memory.
  • mmap loads the database into memory-mapped RAM.
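Putting the directives together, a pdns.conf fragment might look like this (a sketch: apart from geoip-database-cache, the directive names and paths here are assumptions based on the back-end's documentation, so double-check them):

```
launch=geoip
geoip-database-file=/usr/share/GeoIP/GeoIP.dat
geoip-zones-file=/etc/powerdns/geo-zones.yaml
geoip-database-cache=index
```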

Since I'm not in a position to test this at the moment from real addresses, or rather since I'm not willing to show you real addresses, I cobbled my own GeoIP.dat. Instead of using mmutils which ought to work, I compiled geoip-csv-to-dat and fed it this CSV based on a real one:


After getting that out of the way, we can create our geoip-zones-file. I add a single zone called with four records; one for each of the countries we support directly, and a wildcard country (note the quotes on that line -- they're an artifact of YAML syntax). Each of the records can have as many DNS resource records as I want, as long as they contain valid DNS rdata obviously.

- domain:
  ttl: 60
       - soa: 1 7200 3600 86400 60
       - ns:
       - a:
       - txt: Guten Tag
       - a:
       - txt: Muy buenos dias
       - loc: 40 8 43.041 N 3 21 42.539 W 714m 10m 100m 10m
       - a:
       - txt: I don't know exactly where you are
  services: ''

This configuration provides the server with answers for questions along the lines of what is the A record for, and what is its LOC record?. Additionally we create a service called which defines a service which people will query for. This will direct the server to return answers for the actual query of The %co is expanded by the geoip back-end, as follows:

  • %co is the 3-letter ISO country code
  • %cn is the continent
  • %af is replaced by the address family, i.e. "v4" or "v6" depending on whether the query originated from an IPv4 or IPv6 address respectively
  • %hh, %dd, %mo, %wd are replaced by two digits of hour, day of the month, month, and weekday (UTC) respectively, whereas %mos and %wds are short strings which correspond to the month (jan, feb) and weekday (mon, tue) respectively. We can use this to direct clients to specific servers during, say, periodic maintenance times.

So, let's try an IPv4 address query from localhost:


And an ANY query from "Spain":

;; ANSWER SECTION:  60  IN  LOC    40 8 43.041 N 3 21 42.539 W 714.00m 10m 100m 10m  60  IN  A  60  IN  TXT    "Muy buenos dias"

And what happens if we come from an unconfigured country?

;; ANSWER SECTION:        60  IN  CNAME    60  IN  A    60  IN  TXT    "I don't know exactly where you are"


You may have noticed the geoip-dnssec-keydir parameter in our configuration above; adding this will enable DNSSEC on this back-end, assuming the specified directory exists, is writable by pdnssec, and readable by the server. This keydir stores keys in BIND's Private-key-format: v1.2 (which IIRC hasn't really been formally documented), and the filenames have the zone name, key flags, and active/disabled state encoded into them. To actually get our zone to produce DNSSEC data, we create at least one key for the zone:

$ pdnssec secure-zone
Securing zone with rsasha256 algorithm with default key size
Zone secured
Erasing NSEC3 ordering since we are narrow, only setting 'auth' fields

$ ls -l geo.keys/
-rw-rw-r--. 1 jpm jpm  939 Nov 12 11:56
-rw-rw-r--. 1 jpm jpm 1703 Nov 12 11:56

(It's probably worth pointing out at this time that the pdnssec utility will be renamed to pdnsutil very soon.)

I don't have to restart PowerDNS to obtain DNSSEC-signed responses:

;; flags: qr aa; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags: do; udp: 1680
;   IN ANY

;; ANSWER SECTION:    60 IN RRSIG TXT 8 4 60 (
                                20151126000000 20151105000000 32029
                                oNwyIS6djdgX5NyfXSfa6Dd8fAVkjfIVzpgDsMc= )    60 IN A    60 IN TXT "Guten Tag"    60 IN RRSIG A 8 4 60 (
                                20151126000000 20151105000000 32029
                                BCIi0+GUtEt0QNioZkBlvL33N1Wf1HaZDSc2LrQ= )

I've said it before, and I'll say it again, and you can't stop me saying it: enabling DNSSEC doesn't get easier than with PowerDNS.

Back on topic: the geoip back-end is easy to configure, and it is powerful. Our friends at PowerDNS are currently discussing adding a feature which will allow me to specify subnets directly in the YAML file.

If you want to see this live, fire off a TXT query to

The PIPE back-end has been a part of PowerDNS since what feels like forever: it speaks to a program you write via stdin and stdout. PowerDNS hands it queries which your process responds to in a particular textual format, and the name server then converts those to DNS responses which it returns to its clients. This so-called coprocess is launched by the name server, and if it should die, it is re-launched.
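To make that textual format concrete, here is a minimal sketch of such a co-process speaking pipe ABI version 1 (my own illustration; the zone name and address are invented, and a real back-end would of course also answer SOA queries):

```python
import sys

def handle(line):
    """Return the reply lines for one PIPE back-end (ABI v1) request line."""
    fields = line.rstrip("\n").split("\t")
    if fields[0] == "HELO":
        return ["OK\texample pipe back-end"]
    if fields[0] == "Q":
        qname, qclass, qtype, qid, remote = fields[1:6]
        if qname == "www.example.aa" and qtype in ("A", "ANY"):
            # DATA <qname> <qclass> <qtype> <ttl> <id> <content>
            return ["DATA\t%s\t%s\tA\t60\t%s\t192.0.2.1" % (qname, qclass, qid),
                    "END"]
        return ["END"]          # no data for this name
    return ["FAIL"]

if __name__ == "__main__":
    for line in sys.stdin:
        for reply in handle(line):
            sys.stdout.write(reply + "\n")
        sys.stdout.flush()      # PowerDNS waits for our answer; don't buffer
```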

While it is slightly slower, the REMOTE back-end has a lot more features than the PIPE back-end. For one, it can do full DNSSEC signing, and it can talk to your actual back-end program via Unix sockets, pipes, or via a RESTful HTTP interface (neither authentication nor TLS is supported, but both can be added by hiding the interface behind an appropriate proxy). The RESTful interface supports GET or POST requests, and if we use POST it sends queries in JSON-formatted RPC requests.

I wanted to get a feeling for the remote back-end, so I whipped up a little something which allows me to query for IATA airport codes and their locations which we return as a DNS LOC record.

lax.airports.aa.  60 IN TXT  "Los Angeles International Airport"
lax.airports.aa.  60 IN LOC  33 56 36.233 N 118 24 29.808 W 0.00m 1m 10000m 10m

In order to activate the remote back-end, I configure the following in pdns.conf:

# gmysql-dnssec
# remote-dnssec=yes

I launch gmysql before remote because the former should be queried first (your mileage will vary), and the remote-connection-string defines how PowerDNS accesses its remote back-end -- in this case via HTTP.

The back-end process is in Python (code here) and it implements a /lookup endpoint which is used by PowerDNS to get the data. When we query for a TXT or LOC record, PowerDNS actually fires off an ANY query to our interface (SOA and NS queries are passed in with their qtypes), as in curl

{
  "result": [
    {
      "ttl": 60,
      "auth": 1,
      "qname": "LAX.airports.aa",
      "qtype": "TXT",
      "content": "Los Angeles International Airport"
    },
    {
      "ttl": 60,
      "auth": 1,
      "qname": "LAX.airports.aa",
      "qtype": "LOC",
      "content": "33 56 36.233 N 118 24 29.808 W 0.00m"
    }
  ]
}

Had I configured, for example, the HTTP queries would have had .xyz added to them (e.g. /lookup/LAX.airports.aa/), which might be useful to return something static, or when each of your queries is to be handled by its own PHP script.
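Stripped of the Web-framework plumbing, the heart of such a back-end is a function that maps (qname, qtype) to the result list PowerDNS expects; a sketch with stand-in airport data:

```python
import json

# toy data: a stand-in for the real airport database
AIRPORTS = {
    "lax.airports.aa": {
        "TXT": "Los Angeles International Airport",
        "LOC": "33 56 36.233 N 118 24 29.808 W 0.00m",
    },
}

def lookup(qname, qtype):
    """Build the reply PowerDNS expects from the /lookup endpoint."""
    data = AIRPORTS.get(qname.lower())
    if data is None:
        return {"result": False}        # name not found
    records = []
    for rtype, content in sorted(data.items()):
        if qtype in ("ANY", rtype):
            records.append({"qname": qname, "qtype": rtype,
                            "content": content, "ttl": 60, "auth": 1})
    return {"result": records}

print(json.dumps(lookup("LAX.airports.aa", "ANY"), indent=2))
```

A Web framework (Bottle, say) would route GET /lookup/&lt;qname&gt;/&lt;qtype&gt; to this function and serialize the dict as JSON.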

Using query-loc, which we've mentioned here already, we can check if this is working:

$ query-loc ibz.airports.aa
38 52 35.742 N 1 22 04.091 E 0.00m 1.00m 10000.00m 10.00m


But what about DNSSEC? I'll restart the PowerDNS server with the comments in the above configuration removed. I mentioned earlier that the remote back-end is much more capable than the pipe back-end: it is able to do full DNSSEC by itself, including delegation and key storage, and Aki Tuomi, the author of this back-end, has a complete example in Perl called autorev which demonstrates this. (BTW, Aki is the same person who implemented the PKCS#11 interface in PowerDNS.)

Now, I am far too lazy to do all this, so I'll use PowerDNS for the heavy lifting, letting it create and store the keys for me. In order to be able to do that, I have to add our zone to the domains table so that PowerDNS can associate the DNSSEC keys we create for the zone (in the cryptokeys database table) with this domain:

INSERT INTO domains (name, type) VALUES ('airports.aa', 'NATIVE');

I then use the pdnssec utility to actually create the keys (KSK and ZSK), and I set NSEC3 narrow mode. PowerDNS' narrow mode uses "additional hashing calculations to provide hashed secure denial of existence 'on the fly', without further involving the database". What this basically means is that it lies its pants off but is able to convince the client that something really doesn't exist if it doesn't. ;-)
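The hashing that narrow mode performs on the fly is ordinary NSEC3 hashing (RFC 5155): an iterated, salted SHA-1 over the wire-format owner name, encoded in base32hex. A small illustration of mine, checked against the RFC 5155 test vector:

```python
import base64
import hashlib

def nsec3_hash(name, salt_hex, iterations):
    """RFC 5155 NSEC3 hash: iterated, salted SHA-1 of the wire-format name."""
    wire = b""
    for label in name.rstrip(".").lower().split("."):
        wire += bytes([len(label)]) + label.encode("ascii")
    wire += b"\x00"
    salt = bytes.fromhex(salt_hex)
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):          # 'iterations' additional rounds
        digest = hashlib.sha1(digest + salt).digest()
    b32 = base64.b32encode(digest).decode("ascii")
    # convert standard base32 to the base32hex alphabet, lowercased
    return b32.translate(str.maketrans(
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
        "0123456789ABCDEFGHIJKLMNOPQRSTUV")).lower()

# RFC 5155 Appendix A test vector: zone "example", salt aabbccdd, 12 iterations
print(nsec3_hash("example", "aabbccdd", 12))
# -> 0p9mhaveqvm6t7vbl5lop2u3t2rp3tom
```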

$ pdnssec secure-zone airports.aa
Securing zone with rsasha256 algorithm with default key size
Zone airports.aa secured
Adding NSEC ordering information

$ pdnssec set-nsec3 airports.aa '1 0 5 DEED' narrow
NSEC3 set, please rectify-zone if your backend needs it

Now I obtain the DS for this zone:

$ pdnssec show-zone airports.aa
Zone is not presigned
Zone has NARROW hashed NSEC3 semantics, configuration: 1 0 5 deed
DS = airports.aa IN DS 220 8 2 09992c9728d682de6029bb4c3bba6a51f0976fac84c0972eab371423218814e0 ; ( SHA256 digest )

I then copy that DS record (or the DNSKEY of the KSK if you prefer) into a file which I configure in Unbound as follows:

auto-trust-anchor-file: "/usr/local/etc/unbound/airports.aa.anchor"

stub-zone:
        name: "airports.aa"
        stub-addr:  # PowerDNS

Querying this Unbound server shows us a validated response (+ad flag) and the data.

;; flags: qr rd ra ad; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

sin.airports.aa.        60 IN TXT "Singapore Changi International Airport"
sin.airports.aa.        60 IN RRSIG TXT 8 3 60 (
                                20151112000000 20151022000000 3340 airports.aa.
                                zsH5JwP4LRQIV3aik/NjBUKs4J1tN2eHPxeaJBQ= )
sin.airports.aa.        60 IN LOC 1 21 40.223 N 103 59 24.734 E 0.00m 1m 10000m 10m
sin.airports.aa.        60 IN RRSIG LOC 8 3 60 (
                                20151112000000 20151022000000 3340 airports.aa.
                                X3gPvxetr6cmfZ74rhWk+4IXsViFUPp7Dt3kqJ0= )

Not-quite-so-lazy DNSSEC

It turns out it's a bit of a mystery why this works at all, or rather it may not actually be supposed to work: our friends at PowerDNS do not actually test for the ability to have keys in one back-end and DNS data in a second. In other words, I have to overcome my laziness and attempt to do this properly.

If we configure PowerDNS to have only the one remote back-end or launch it before gmysql we have to implement more functions, in particular those which provide domain metadata and key material to the server. I've done this experimentally and it appears to work. When a client queries PowerDNS for an existing name, we are asked all of this:

GET /lookup/ibz.aereo.aa/SOA
GET /lookup/aereo.aa/SOA
GET /lookup/ibz.aereo.aa/NS
GET /lookup/ibz.aereo.aa/ANY
GET /getDomainMetadata/aereo.aa/PRESIGNED
GET /getDomainKeys/aereo.aa/0

getDomainKeys must return one or more DNS keys in BIND's Private-key-format, which are easily created with

ldns-keygen -a RSASHA256 -b 2048 -k aereo.aa

I then simply open the .private key file and return the ASCII blob I find there. (I did say "lazy", didn't I?)

def getDomainKeys(qname, kind):
    ''' all zones get the same key '''

    # ldns-keygen -a RSASHA256 -b 2048 -k aereo.aa
    privkey = open("Kaereo.aa.+008+09736.private").read()
    key = {
        "id"    : 1,
        "flags" : 257,
        "active" : True,
        "content" : privkey,
    }

    return dict(result=[ key ])

With a bit more work we'd have some sort of nice utility which creates keys and drops them into a data store from which we subsequently serve them. Alternatively, we can implement addDomainKey and use the pdnssec secure-zone utility to generate the keys and store them therein. The following command submits a PUT request to our back-end script:

$ pdnssec add-zone-key aereo.aa zsk
Added a ZSK with algorithm = 8, active=0
def addDomainKey(zone):
    ''' accept a key from pdnssec '''
    active = request.params.get('active')
    keyblob = request.params.get('content')
    flags = request.params.get('flags')  # 256/257

    print "Receiving key (%s) for %s" % (flags, zone)

    f = open("key-%s.private" % zone, "w")
    f.write(keyblob)
    f.close()

    return dict(result=True)

If we're asked for a non-existent name, NSEC/NSEC3 come into the spiel:

GET /lookup/xnada.aereo.aa/SOA
GET /lookup/aereo.aa/SOA
GET /lookup/xnada.aereo.aa/NS
GET /lookup/xnada.aereo.aa/ANY
GET /lookup/*.aereo.aa/ANY
GET /getDomainMetadata/aereo.aa/PRESIGNED
GET /getDomainKeys/aereo.aa/0
GET /getDomainMetadata/aereo.aa/NSEC3PARAM
GET /getDomainMetadata/aereo.aa/NSEC3NARROW
GET /lookup/aereo.aa/SOA
GET /lookup/aereo.aa/ANY
GET /getDomainMetadata/aereo.aa/SOA-EDIT
def getDomainMetadata(qname, kind):

    res = "0"
    if kind == 'NSEC3PARAM':
        res = "1 0 5 DEADBE"
    elif kind == 'NSEC3NARROW':
        res = "1"

    return dict(result=[res])

Here, my getDomainMetadata function says "No" when asked whether it's pre-signed, responds with 1 0 5 DEADBE when asked for the NSEC3PARAM record, and responds with 1 when asked whether to do NSEC3NARROW (for the simple reason that I cannot be bothered to implement the getBeforeAndAfterNamesAbsolute routine).

So this seems to work: when I query my validating Unbound server, I see:

;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
ibz.aereo.aa.           60      IN      TXT     "Ibiza Airport"

And if you're wondering where all this is documented, that is a very good question. The remote back-end is well documented, but the how and why of which routine is invoked by PowerDNS proper is hard to come by. I was lucky to have a longish rant session^W^Wchat with the chaps at PowerDNS who did their best to push me in the right directions -- I herewith claim all mistakes and omissions.

Aki has a Python package called remotebackend-python which helps in building scripts for the remote back-end, and a remotebackend-gem which is similar for Ruby.

View Comments :: DNS and DNSSEC :: 03 Nov 2015 :: e-mail

A participant in a DNS course asked Carsten Strotmann of Men & Mice why the comment character in a DNS zone master file is a semicolon; Carsten shrugged his shoulders and asked me. In my often snarky way, I replied authoritatively "because it was programmed to do so" and shrugged it off.

; this is a comment
www    3600 IN   A  ; the (duh!) Web server

Never before have I questioned why a particular form of comment is used in a language. Neither the * in column 7 for COBOL (I started off with punch cards), nor the REM (or the expanded :) in BASIC. (I've long forgotten what Nixdorf 8860 (not 8086) assembler used.) The /* */ in C didn't bother me, nor did the // when it appeared. Perl's and Python's #, and Lua's -- didn't hurt me either, and MS-DOS INI files used a ... (drumroll) ... semicolon (;).

However, as a trainer (which I am not in this particular case) I always try to respond to a question, so I felt obliged to at least attempt to find the reason. Whilst I went off in search of the first version of the BIND name server I could find, I asked people who could know:

Stéphane Bortzmeyer recommended I ask somebody called @svnr2000 (wow!) which I did and added in Paul Vixie as the original and long-time maintainer/author of BIND for good measure.

Responses started pouring in with all sorts of good ideas. David Ulevitch of OpenDNS fame said

old-school languages used to use ; over hash, like assembly, and Vixie found it pleasing.

which sounded rather sane to me in addition to which it was corroborated by

c'est assez classique non? Le « ; » est utilisé en Lisp et dans la plupart des assembleurs ("it's fairly classic, isn't it? The semicolon is used in Lisp and in most assemblers")

Also the fact that the semicolon is used in hosts.txt is sensible.

These answers didn't really satisfy me though, so I kept on digging. Meanwhile I had the code of BIND version 4.8 dated 1988 and was glancing through the source of db_load.c, where I quickly found the relevant spot, but I was disappointed that there was no comment as to why the semicolon. Be that as it may, I should have looked at the comments at the top of the file, and Tony Finch nailed it: it says

 * Load data base from ascii backupfile.  Format similar to RFC 883.

RFC 883, by Paul Mockapetris, is dated November 1983, and specifies

Semicolon is used to start a comment; the remainder of the line is ignored.

That same Mr. Paul Mockapetris who, quoting Wikipedia,

is an American computer scientist and Internet pioneer, who, together with Jon Postel, invented the Internet Domain Name System (DNS).

In other words, he must know, right? Well, I was thrilled to actually have Paul Mockapetris respond to my query a few hours later:

TOPS-20 goes way back to 1969, and Lisp, which uses the semicolon as a comment character, so the question is whether TOPS-20 got the semicolon from Lisp, or whether the egg came before the chicken.

According to a reply by Paul Vixie, the semicolon was chosen as a comment in zone files because that's what the PDP-10 and other macro assemblers used in those days, and Christoph points me to GAS which lists single-line comments for assemblers.

So, is there anything else you want me to ask? ;-)

A DNSSEC-validating, NTA-capable, Lua-configured, DNS resolver/recursor with RFC5011 support backed by LMDB? Yes, it was announced yesterday, and I just had to take a look, and I want to thank Marek Vavrusa for some hand-holding.


First off, the documentation for the Knot Resolver is impressive and reads well. Building the Resolver is easy enough and currently involves a few ./configure invocations on a number of dependencies, including libknot. Packages are in the making, and our friends at CZ.NIC have also got an Ansible playbook for building and installing it.

After the ubiquitous make install, I create an empty directory and launch the server, which is called kresd, passing it the path to a file containing the root DNSKEY record which kresd uses to validate DNSSEC records. (Knot should provide a utility such as Unbound's unbound-anchor to obtain that securely.)

kresd -k root.keys /var/kdns
[trust_anchors] key: 19036 state: Valid
[system] interactive mode

What we see is the CLI prompt from which we issue commands to kresd, update settings, etc. This is, in fact, a Lua interpreter which is running.

> print(env.USER)

> cache.stats() 
[hit] => 515
[delete] => 0
[miss] => 566
[txn_read] => 289
[txn_write] => 98
[insert] => 373

> net.listen("", 5353) -- add a listener

> net.list()
[] => {
    [tcp] => true
    [udp] => true
    [port] => 53
}
[] => {
    [tcp] => true
    [udp] => true
    [port] => 5353
}

According to the documentation, the key file we specify on start will roll according to RFC 5011 semantics, though I haven't tested it; I'm still recuperating from my last tests of RFC 5011.

Kresd will fork n times with the -f option, in which case it doesn't bring up the interactive console, but we can connect to a console if we know the PID:

$ pidof kresd

$ socat - UNIX-CONNECT:/tmp/kdns/tty/71378 
> modules = { 'hints' }

After adding the hints module, I obtain:

$ dig @ myhost +dnssec
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

myhost.         0   IN  A

which comes right out of /etc/hosts. (The hints module does that for local data, but I strongly disagree with the +AD flag because that was not signed data. Update: this is being changed to report +AA which is sensible.)

There are a number of resolver modules for collecting statistics, for query policies, views, and caching. For example, the memcached module uses memcached (or memcacheDB, which is backed by LMDB) to manage a cache shared by multiple Knot DNS Resolvers, and the redis module does that with Redis. The prefetch module can track records which are about to expire and fetch them in advance, i.e. before their TTL runs out. The graphite module can send metrics to Graphite or, preferably, to InfluxDB and Grafana.

For configuration, kresd can use the etcd module which watches for configuration changes with etcd -- very useful for configuring a large deployment of Resolvers.

As mentioned above, kresd's configuration is in Lua, and as such we can use the full flexibility of the language and any external Lua modules we desire, to flexibilize (that's a new word I invented not very long ago) our setup. An example copied from the documentation illustrates this.

-- Bind to all interfaces using iteration
for name, addr_list in pairs(net.interfaces()) do
    net.listen(addr_list) -- listen on each address this interface has
end

Further examples include using socket.http to download a "hot" LMDB cache database from a parent Resolver in order to avoid cold-starting this particular instance. The Knot DNS Resolver uses LMDB as a fast local cache if we don't set up caching via one of the additional modules.

If a file called config exists in rundir (which we specified above as /var/kdns), the server reads it at startup. It is here that we specify a configuration for this server.

name = "JP" -- remember: it's Lua

modules = { 'hints', 'cachectl' }

To Trust or Not To Trust

RFC 7646, Definition and Use of DNSSEC Negative Trust Anchors, specifies a method by which a resolver can be instructed to please not validate DNSSEC data for a particular zone because the resolver's operator knows the data is bogus. (RFC 7646 is also the very first RFC to mention a blog post of mine; it's in a footnote, but hey! :-)

These Negative Trust Anchors (NTA) are important. Consider the case of Comcast trying to fix the fiasco a few years ago: adding an NTA makes the resolver's clients unaware that there are problems, inasmuch as domains resolve, albeit insecurely. It's the operator's responsibility to monitor the faulty zone and remove an NTA as soon as possible. (See also: Stéphane Bortzmeyer's article on NTA (in French).)

The Knot DNS Resolver supports NTA. We create a Lua table with the names of the zones below which we want validation to be ignored. As soon as we set trust_anchors.negative, kresd treats the zone as though it were unsigned.

> trust_anchors.negative = { '' }
> trust_anchors.insecure[1]

I'm not sure whether this is on purpose or not, but I'm seeing NTA applied only on names which aren't in the cache. If I set an NTA as above, the negative trust is applied only when the TTL of expires. A workaround, at least at the moment, is to load the cachectl module and flush the cache:

> modules = { 'cachectl' }
> cachectl.clear()
[result] => true

Unfortunately this clears the whole cache. A cachectl.clear(<name>) would be a useful addition. Update: beta #2 will allow cachectl.clear('*.com').

I think things like distributed caches (with Redis or memcached) will be something large operators are going to find very attractive. From the point of view of configuration, the etcd module is brilliant, as it allows central configuration without having to resort to Ansible & co, and the use of Lua as a configuration language (and partially an internal one -- some of the modules are in Lua) is grand in terms of flexibility.

As soon as the Knot DNS Resolver matures a bit, it would be good to see how it performs in comparison to the other two Open Source validating resolvers, BIND and Unbound, in a setup similar to the benchmark done for authoritative servers.

This is the first time I've looked at the Knot DNS Resolver, and the "package" seems to be quite complete.

Further reading:

View Comments :: DNS, DNSSEC, and Knot :: 01 Oct 2015 :: e-mail
