It must have been over a year ago that somebody mentioned that Jenkins is not just a tool for developers; system administrators can also put it to good use. I recall glancing at it and subsequently forgetting about it. Anyway, I'm probably the very last person on Earth to learn this.

During Loadays in Antwerp last weekend, Fabian Arrotin mentioned this again, and I convinced him (in exchange for a beverage or two at the hotel bar) to show me. He demonstrated how he schedules Ansible jobs to run from within Jenkins, and the penny dropped as I watched what he did.

Job configuration

In the course of today's lunch break, I set up Jenkins on my laptop, installed a couple of plugins, and everything I tried just worked. Fabian showed me the color plugin and mentioned the console column plugin, which lets me open the last console output at the click of a button from the dashboard.

Dashboard

Within a few minutes I was kicking off Ansible playbook runs by checking something into a git repository; a commit hook kicks off a Jenkins build.

Console output
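
The commit hook itself needn't be anything fancy: the Jenkins Git plugin exposes a notifyCommit endpoint which tells Jenkins to poll the repository and build if something changed (the job must be configured to poll the SCM). What follows is a bare-bones sketch and not my actual hook; the Jenkins URL and the repository path are placeholders.

#!/usr/bin/env python
# post-receive hook: nudge Jenkins so that jobs which poll this repository
# fire off a build. The Jenkins URL and clone URL are placeholders.
import urllib
import urllib2

JENKINS = 'http://localhost:8080'
REPO = 'file:///home/jpm/git/ansible.git'   # hypothetical clone URL

url = '%s/git/notifyCommit?%s' % (JENKINS, urllib.urlencode({'url': REPO}))
urllib2.urlopen(url).read()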

I think for things like Ansible, scheduled "builds" (think cron) will be tremendously useful, in particular because I can browse through prior build history etc. Within the same lunch break I had MQTT notifications of failed builds coming to me via mqttwarn.
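
I'm not reproducing my exact wiring here, but the idea boils down to a post-build step publishing the build result to an MQTT topic which mqttwarn subscribes to. Something along these lines would do, assuming paho-mqtt is available and a broker runs on localhost; the topic name is an assumption as well.

#!/usr/bin/env python
# Run from a post-build step when a build fails: publish a message for
# mqttwarn to pick up and turn into a notification.
import os
import paho.mqtt.publish as mqtt

job = os.environ.get('JOB_NAME', 'unknown')       # set by Jenkins
build = os.environ.get('BUILD_NUMBER', '0')       # ditto

mqtt.single('jenkins/%s' % job,
            payload='build #%s of %s failed' % (build, job),
            hostname='localhost')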

I'm getting addicted in spite of its UI, and I can't wait for the weekend to read more about what Jenkins can do.

View Comments :: Sysadmin, Ansible, and Jenkins :: 16 Apr 2015 :: e-mail

One day after giving a one-hour presentation on what Ansible is capable of, "colleagues" flocked to my office and wanted to see stuff happen, so I showed them each a few odds and ends, in particular how Ansible can template out configuration files. I don't think I exaggerate when I say I saw tears of joy come to somebody's eyes. Lovely. Anyhow, just a few days later, I was asked to find a solution for managing the creation (and destruction) of a potential boatload of DNS zones on a rather large number of PowerDNS servers.

I whipped up an Ansible module to create, delete, and list master or slave zones on authoritative PowerDNS servers which have the REST API enabled.

Unfortunately I initially had to resort to urllib2 instead of Requests because I must not touch (i.e. install anything on) these machines; thanks to James' comment below, I now use Ansible's built-in fetch_url(). The pdns_zone module is very new, but it seems to do its job.
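
For the curious, the pattern inside such a module looks roughly like this. This is a stripped-down sketch and not the actual pdns_zone source; the default port and the URL layout are what I use against the PowerDNS REST API, but treat them as placeholders.

#!/usr/bin/env python
# Sketch: query the PowerDNS REST API from an Ansible module with fetch_url().
import json

from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.urls import fetch_url

def main():
    module = AnsibleModule(argument_spec=dict(
        zone=dict(required=True),
        api_host=dict(default='127.0.0.1'),
        api_port=dict(default=8081, type='int'),    # assumption
        api_key=dict(required=True),
    ))

    url = 'http://%(api_host)s:%(api_port)s/servers/localhost/zones/%(zone)s' % module.params
    headers = {'X-API-Key': module.params['api_key']}

    response, info = fetch_url(module, url, headers=headers, method='GET')
    if info['status'] != 200:
        module.fail_json(msg='API returned status %s' % info['status'])

    module.exit_json(changed=False, zone=json.loads(response.read()))

if __name__ == '__main__':
    main()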

Create a master zone

In order to create a master zone, I invoke the module like this:

- hosts:
  - t1.prox
  gather_facts: False
  tasks:
  - action: pdns_zone name="ansi.test" action=master
            soa="ns.example.net hostmaster.example.com 1 1800 900 604800 3602"
            nsset="ns.example.net,ns.example.org"

The API then adds the following records to the records table:

mysql> SELECT * FROM records WHERE domain_id = (SELECT id FROM domains WHERE name = 'ansi.test');
+-------+-----------+-----------+------+--------------------------------------------------------------+-------+------+-------------+----------+-----------+------+
| id    | domain_id | name      | type | content                                                      | ttl   | prio | change_date | disabled | ordername | auth |
+-------+-----------+-----------+------+--------------------------------------------------------------+-------+------+-------------+----------+-----------+------+
| 16280 |        50 | ansi.test | SOA  | ns.example.net hostmaster.example.com 1 1800 900 604800 3602 | 86400 |    0 |        NULL |        0 | NULL      |    1 |
| 16281 |        50 | ansi.test | NS   | ns.example.net                                               | 86400 |    0 |        NULL |        0 | NULL      |    1 |
| 16282 |        50 | ansi.test | NS   | ns.example.org                                               | 86400 |    0 |        NULL |        0 | NULL      |    1 |
+-------+-----------+-----------+------+--------------------------------------------------------------+-------+------+-------------+----------+-----------+------+
3 rows in set (0.00 sec)

I can specify options to control how the module connects to the API, but by default it obtains these settings from the pdns.conf file. (See the module documentation.) Simultaneously, the comments table is also modified via the API (even though I still don't quite understand the use of this; maybe somebody can help me see it):

mysql> SELECT * FROM comments WHERE domain_id = (SELECT id FROM domains WHERE name = 'ansi.test');
+----+-----------+-----------+------+-------------+---------+-----------------+
| id | domain_id | name      | type | modified_at | account | comment         |
+----+-----------+-----------+------+-------------+---------+-----------------+
| 27 |        50 | ansi.test | SOA  |  1429114613 |         | Ansible-managed |
+----+-----------+-----------+------+-------------+---------+-----------------+

Peter gave me an interesting use-case for the per-RRset comments in PowerDNS: people can add, say, issue-tracking numbers to a record's comment in order to document how the record came to exist or why it was updated. It's an interesting use-case, but it doesn't cater for deletions... ;-)

Create a slave zone

Setting up a slave zone is very similar; the API modifies the domains table and, as shown above, the comments table.

- name: Create slave zone
  action: pdns_zone zone="example.org"
          action=slave
          masters="127.0.0.2:5301"
mysql> SELECT * FROM domains WHERE name = 'example.org';
+----+-------------+-----------------+------------+-------+-----------------+---------+
| id | name        | master          | last_check | type  | notified_serial | account |
+----+-------------+-----------------+------------+-------+-----------------+---------+
| 51 | example.org | 127.0.0.2:5301  |       NULL | SLAVE |            NULL | NULL    |
+----+-------------+-----------------+------------+-------+-----------------+---------+

Deleting a zone requires specifying action=delete; the zone is then removed from the back-end database. When a master zone is deleted, all of its records are purged along with the zone proper.

List zones

We can use the module to enumerate zones and their types (a.k.a. "kind"). As a special case, when listing zones, we can specify a shell-like glob which is matched against zone names. Consider this Ansible playbook and the associated template:

- hosts:
  - t1.prox
  gather_facts: False
  tasks:
  - name: List existing .org zones
    action: pdns_zone action=list zone=*.org
    register: zl

  - name: Create report
    local_action: template src=a-zlist.j2 dest=/tmp/rep.out

The template, a-zlist.j2, looks like this:

{% for z in zl.zones | sort(attribute='name') %}
{{ "%-20s %-10s %s"|format(z.name, z.kind, z.serial) }}
{% endfor %}

The output produced looks like this:

e5.org               master     2015012203
example.org          slave      0
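
There's no magic to the glob, by the way; think of it as Python's fnmatch applied to the zone names which the API returns. A tiny illustration (not the module's actual code):

# Shell-like globbing over zone names with fnmatch.
from fnmatch import fnmatch

zones = [{'name': 'e5.org'}, {'name': 'example.org'}, {'name': 'ansi.test'}]
print([z['name'] for z in zones if fnmatch(z['name'], '*.org')])
# ['e5.org', 'example.org']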

I think the list function is very practical as it allows me to connect to an authoritative server via SSH to enumerate zones, then turn around to a second authoritative slave server (also via SSH) and create corresponding slave zones. (This is what you'd typically do with PowerDNS' superslave capability.)

pdns_zone

The diagram illustrates this: from our management console, we use Ansible via SSH to connect to one server, and use the obtained list of zones to create, via Ansible and the same module of course, appropriate slave zones on a second server. (If this doesn't make terribly much sense to you, you have my full understanding; trust me: it must be done this way in this particular case, if only because the machines are reachable via SSH only.)

- hosts:
  - t1.prox
  gather_facts: True
  tasks:
  - name: List existing zones on main PowerDNS server
    action: pdns_zone action=list zone=*.org
    register: zl

- hosts:
  - deb.prox
  gather_facts: False
  tasks:
  - name: Create slave zones on secondary PowerDNS server
    action: pdns_zone zone={{item.name}}
                action=slave
                masters="{{ hostvars['t1.prox'].ansible_default_ipv4.address }}:5301"
                api_host=127.0.0.1
                api_port=8083
                api_key="ohoh"
    with_items: hostvars['t1.prox'].zl.zones

The JSON returned by the list command looks like this, with kind forced to lower case:

{
  "zones": [
    {
      "serial": 2015012203,
      "name": "e5.org",
      "kind": "master"
    },
    {
      "serial": 0,
      "name": "example.org",
      "kind": "slave"
    }
  ]
}

Now, if only the PowerDNS BIND back-end could be driven thusly, hint, hint ;-)

If this has piqued your interest, I've made the code and a few examples available in the pdns_zone module repository.

View Comments :: PowerDNS and Ansible :: 15 Apr 2015 :: e-mail

I've been doing a lot of work with and testing of the PowerDNS authoritative DNS server lately, and I must say I quickly tire of having to create new zones in its MySQL back-end database. Yes, I can and do use the PowerDNS API or nsedit for that, as well as trivial shell scripts, but I remain an aficionado of command-line utilities such as cp and vi for zone file maintenance.

For some reason, and even though I describe it first in my book, I've been neglecting PowerDNS' so-called bind back-end (a misnomer in my opinion; it could equally well have been called the nsd back-end :-). Configured to use the bind back-end, PowerDNS reads zone master files directly off the file system without requiring a heavy-duty relational database system.

PowerDNS with the bind back-end runs in one of two modes:

  1. a hybrid mode in which it stores DNSSEC-related configuration in a separate back-end (e.g. MySQL or PostgreSQL)
  2. a non-hybrid mode in which PowerDNS uses a compiled-in version of SQLite to store DNSSEC-related configuration and metadata.

I will discuss this second form as it avoids large "moving parts", i.e. we don't require a relational database alongside the DNS server.

PowerDNS bind back-end

Let's assume the following configuration in /etc/powerdns/pdns.conf:

launch=bind
master=yes
slave=yes
security-poll-suffix=
bind-dnssec-db=/etc/powerdns/bind/dnssec.db
bind-config=/etc/powerdns/bind/named.conf
bind-check-interval=600

When PowerDNS launches, it checks its configuration and loads the zones enumerated in bind-config from master zone files, just like BIND and NSD do. It obtains the names and types of the zones it should serve from a named.conf-like file, but it requires only a minuscule subset of BIND's directives. Basically, the following suffices to configure one master and one slave zone.

options {
    directory "/etc/powerdns/bind";
};

zone "example.aa" IN {
    type master;
    file "example.aa";
};

zone "ww.mens.de" IN {
    type slave;
    masters { 192.168.1.10; };
    file "ww.mens.de";
};

PowerDNS starts up very quickly even with a very large number of zones. Slave zones are transferred in, but the file specified for a slave zone must exist and be writable by PowerDNS; if it isn't, the zone won't be transferred (AXFR) to disk, and a misleading diagnostic message is logged on the first attempt.
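
One way around this is to make sure empty, writable files exist before adding slave zones to the configuration. A minimal sketch, to be run as root; the directory, the zone list, and the pdns user/group are assumptions for my setup.

#!/usr/bin/env python
# Pre-create empty, writable zone files so incoming AXFR has somewhere to go.
import grp
import os
import pwd

BIND_DIR = '/etc/powerdns/bind'
SLAVE_ZONES = ['ww.mens.de']

uid = pwd.getpwnam('pdns').pw_uid
gid = grp.getgrnam('pdns').gr_gid

for zone in SLAVE_ZONES:
    path = os.path.join(BIND_DIR, zone)
    if not os.path.exists(path):
        open(path, 'a').close()
    os.chown(path, uid, gid)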

Furthermore, there is no need to reload the server when a zone is added or removed: simply change the file pointed to by bind-config, and PowerDNS will pick this up every bind-check-interval seconds, or explicitly when you invoke

pdns_control rediscover

One really brilliant feature of the PowerDNS bind back-end is, I can edit one of the zone master files on disk (vi, etc.), and PowerDNS will pick up that change within a second without me having to do anything. (It checks the file's modification time once per second when queried for a zone and reloads it on change.)

PowerDNS has to store DNSSEC-related data and metadata for a zone somewhere; zone master files don't cater for that. In particular the keys (or pointers to keys on an HSM) must be available, and the server uses the configured bind-dnssec-db for that. This database contains the domainmetadata, cryptokeys, and tsigkeys tables. In other words, if I create DNSSEC keys and associate them with a zone, PowerDNS looks for that data in this database, which we create before first launching PowerDNS:

pdnssec create-bind-db /etc/powerdns/bind/dnssec.db

I'm fond of hand-crafting zone master files, but there are cases in which automation is necessary. Unfortunately, the bind back-end supports neither RFC 2136 dynamic updates (even though PowerDNS has these for some back-ends) nor the REST API. (I thought I could lure one of the developers into biting, but my feeble attempt at an April fool's joke wasn't taken seriously ... even though I think having both of these features in the bind back-end would be a very good idea. ;-)

In short: on the plus side:

  • Fewer moving parts (no relational database management system)
  • Fast
  • Zone master files are immediately reloaded when modified
  • DNSSEC support
  • Did I say "fast"?

On the minus side:

  • Support for neither the PowerDNS REST API (which is built into the server, so it could be utilized) nor, and this is a big minus, RFC 2136 updates, which the server is also capable of handling.
  • Incoming AXFR fails if the zone file doesn't exist; the server could, permissions permitting, create the file itself.

All in all, this combination could become one of my favorites...

View Comments :: DNS, PowerDNS, and BIND :: 02 Apr 2015 :: e-mail

DNSSEC requires private keys for signing DNS zones, just as your SSH client needs a private key to connect to a host via SSH. These private keys can be stored on a file system (and often are, particularly in the case of your SSH keys), but they can also be stored on a cryptographically secured hardware device. In the case of SSH this is often a Smart Card, whereas with DNSSEC it is typically, in large environments anyway, one or more HSMs, each costing anywhere up to several tens of thousands of Euros. But, as we all know, a high price doesn't necessarily mean "good quality"; one of my favorite examples (true story) is this diagnostic message each time a EUR 50k HSM is accessed:

NFLog_AddFileDriver Success failed

So, what do people who want to store private keys on a secure hardware device purchase? One possibility is a Smart Card.

One of the worst "shopping experiences" I've ever had was when I wanted to purchase a high-end HSM for a project a couple of years ago; it was almost impossible to obtain documentation. Only after several phone calls to the vendor was I given access. (It's as though they keep their docs in the bloody HSMs.) Runner-up to that experience is probably trying to obtain a Smart Card and getting decent documentation for it. With the notable exception of the Yubikey, which has excellent, publicly available documents, it's a disaster. I asked Jakob Schlyter whether he knew of anything which could work for me together with OpenDNSSEC, and he recommended either a CardContact SmartCard (prices here) or a Yubico Yubikey Neo.

The CardContact SmartCard comes in several form-factors, and the “documentation” consists of a zip file with a bunch of Windows executables and a PDF with some screenshots. That really didn't look convincing, so I selected the Yubikey Neo. To cut a very long story short, the Yubikey, while being a tremendously versatile bit of very well-documented kit, doesn't support creation of keys via the PKCS#11 interface, so that was the end of that. Back to square one, i.e. the CardContact SmartCard HSM (hereafter called SmartCard HSM).

If I wrote down everything I know about smart cards and HSMs, widely spaced, in a large font, I would cover a small postage stamp. One important thing to know is that software often interfaces with an HSM via PKCS#11.

SmartCard HSM

The SmartCard HSM token is a USB thing, so I plugged it in. Obviously. As I think I've already mentioned, there is zero documentation about this beast other than a short, incorrect README tucked away in a subdirectory of the downloadable zip file, which contains an obscure shared object (compiled for i386 -- also not documented).

$ dmesg
[680574.152470] usb 5-1: USB disconnect, device number 13
[680583.884245] usb 5-1: new full-speed USB device number 14 using uhci_hcd
[680584.072654] usb 5-1: New USB device found, idVendor=04e6, idProduct=5817
[680584.072666] usb 5-1: New USB device strings: Mfr=1, Product=2, SerialNumber=5
[680584.072675] usb 5-1: Product: SCT3522CC token
[680584.072682] usb 5-1: Manufacturer: Identive
[680584.072689] usb 5-1: SerialNumber: 21121350600105

I then studied the OpenSC page on the SmartCard-HSM, but everything I tried seemed to result in the message card not present. At the bottom of this post it says:

please make sure that you are compiling and installing OpenSC 0.14

and this is confirmed here, so I installed from that PPA. Nothing doing. I then installed all the bits and pieces from source; nothing doing, so I went to sleep. During the night it occurred to me that the bit communicating with the card is pcscd, so the next morning I built that from source; nada. After scrounging around for ages, I stumbled over a blog post by smartcard-hsm called SmartCard-HSM USB-Stick with new USB Product ID, and bingo! Thanks a million to these people for hiding that information so well, and thanks a lot also to the German vendor of the card for responding to my query with

alle relevant information you will find at http://www.smartcard-hsm.com/

(I'm not usually the shame and blame type, but well, this just sucks.)

So, let's do something.

$ opensc-tool --list-readers
# Detected readers (pcsc)
Nr.  Card  Features  Name
0    Yes             CardContact SmartCard-HSM [CCID Interface] (21121350600105) 00 00
1    No              O2 Micro Oz776 01 00

$ pkcs11-tool --module opensc-pkcs11.so -L
Available slots:
Slot 0 (0xffffffffffffffff): Virtual hotplug slot
  (empty)
Slot 1 (0x1): CardContact SmartCard-HSM [CCID Interface] (21121350600105) 00 0
  token label        : SmartCard-HSM (UserPIN)
  token manufacturer : www.CardContact.de
  token model        : PKCS#15 emulated
  token flags        : rng, login required, PIN initialized, token initialized
  hardware version   : 24.13
  firmware version   : 1.2
  serial num         : DECC0100509
Slot 2 (0x5): O2 Micro Oz776 01 00
  (empty)

That looks promising, so can I now initialize the card and set the SO-PIN and the PIN? Following the very good instructions on doing so:

$ sc-hsm-tool --initialize --so-pin 0123012301230123 --pin 123456
Using reader with a card: CardContact SmartCard-HSM [CCID Interface] (21121350600105) 00 00

Now I'll try to generate a key in order to determine whether it's worth continuing to experiment with OpenDNSSEC (recall I was disappointed by the Yubikey in this respect):

$ pkcs11-tool --module opensc-pkcs11.so -l --keypairgen --key-type rsa:2048 --id 17 --label "JP first RSA keypair"
Using slot 1 with a present token (0x1)
Logging in to "SmartCard-HSM (UserPIN)".
Please enter User PIN:
Key pair generated:
Private Key Object; RSA
  label:      JP first RSA keypair
  ID:         17
  Usage:      decrypt, sign, unwrap
Public Key Object; RSA 2048 bits
  label:      JP first RSA keypair
  ID:         17
  Usage:      encrypt, verify, wrap

Yay! And a little green light on the USB thing blinks at me. Let me do another, this is fun!

$ time pkcs11-tool --module opensc-pkcs11.so -l --keypairgen --key-type rsa:2048 --id 18 --label "JP second RSA keypair"
...
real    0m11.358s
user    0m0.015s
sys     0m0.012s

What is this card actually capable of, at least in theory?

$ pkcs11-tool --module opensc-pkcs11.so --list-mechanisms
Using slot 1 with a present token (0x1)
Supported mechanisms:
  SHA-1, digest
  SHA256, digest
  SHA384, digest
  SHA512, digest
  MD5, digest
  RIPEMD160, digest
  GOSTR3411, digest
  ECDSA, keySize={192,320}, hw, sign, other flags=0x1500000
  ECDSA-SHA1, keySize={192,320}, hw, sign, other flags=0x1500000
  ECDH1-COFACTOR-DERIVE, keySize={192,320}, hw, derive, other flags=0x1500000
  ECDH1-DERIVE, keySize={192,320}, hw, derive, other flags=0x1500000
  ECDSA-KEY-PAIR-GEN, keySize={192,320}, hw, generate_key_pair, other flags=0x1500000
  RSA-X-509, keySize={1024,2048}, hw, decrypt, sign, verify
  RSA-PKCS, keySize={1024,2048}, hw, decrypt, sign, verify
  SHA1-RSA-PKCS, keySize={1024,2048}, sign, verify
  SHA256-RSA-PKCS, keySize={1024,2048}, sign, verify
  MD5-RSA-PKCS, keySize={1024,2048}, sign, verify
  RIPEMD160-RSA-PKCS, keySize={1024,2048}, sign, verify
  RSA-PKCS-KEY-PAIR-GEN, keySize={1024,2048}, generate_key_pair

So, can I "see" what's on the card? Yes:

$ pkcs11-tool --module opensc-pkcs11.so --list-objects
Public Key Object; RSA 2048 bits
  label:      JP first RSA keypair
  ID:         17
  Usage:      none
Public Key Object; RSA 2048 bits
  label:      JP second RSA keypair
  ID:         18
  Usage:      none

Now I wipe one of the key pairs, ensuring I log into the card:

$ pkcs11-tool -l --pin 123456 --module opensc-pkcs11.so --delete-object --type privkey --id 18

$ pkcs11-tool --module opensc-pkcs11.so --list-objects
Using slot 1 with a present token (0x1)
Public Key Object; RSA 2048 bits
  label:      JP first RSA keypair
  ID:         17
  Usage:      none

Thinking I was ready to use the card with OpenDNSSEC was a little premature. To cut a painful story short, I won't tell it. (No more blame and shame today.) BIND: same story. To be fair, this isn't necessarily an issue of the DNS server software; it may well be due to differing interpretations of the PKCS#11 "standard" (how I detest that word). Be that as it may, if somebody says to you "supports PKCS#11", be scared. Very scared. Update: Matthijs convinced me to file a bug report at OpenDNSSEC, which I've done.

I abandoned this project.

At just about this time, Aki Tuomi heard I was playing with PKCS#11 and asked me to test his implementation for the Authoritative PowerDNS server. I'll be honest: considering my experiences with OpenDNSSEC and BIND with PKCS#11, I was very reluctant to waste more time with this. I was wrong: in just a few hours yesterday, Aki enhanced the implementation for OpenSC support, and he got this working painlessly for me.

PowerDNS has an experimental PKCS#11 module which relies on P11-kit. According to the p11-kit manual, I must create a module file which associates a p11-kit module to a particular HSM. In order to connect to the SmartCard HSM via OpenSC, I create /etc/pkcs11/modules/opensc.module. That name is important to remember, because we'll see it referenced later. (I could have called it "blabla", but I chose a slightly more formal "opensc".)

module: /usr/opensc/lib/opensc-pkcs11.so
managed: yes
log-calls: no

I then verify that p11-kit can "see" my card; I identify this by the manufacturer:

$ p11-kit -l
...
opensc: /usr/opensc/lib/opensc-pkcs11.so
    library-description: Smart card PKCS#11 API
    library-manufacturer: OpenSC (www.opensc-project.org)
    library-version: 0.0
    token: SmartCard-HSM (UserPIN)
        manufacturer: www.CardContact.de
        model: PKCS#15 emulated
        serial-number: DECC0100509
        hardware-version: 24.13
        firmware-version: 1.2
        flags:
               rng
               login-required
               user-pin-initialized
               token-initialized

I create a test zone called cmouse.aa (cmouse is Aki's IRC handle :-) in PowerDNS which I'll use for these experiments.

The first thing I do is create two keys on my miniature HSM, one to be used as the KSK (2048 bits), the second as the ZSK (1024 bits). I'm also using the -a switch to set a label on the keys so that I can identify them on the HSM later:

$ pkcs11-tool --module opensc-pkcs11.so -l --pin 123456 -k --key-type RSA:2048 -a 'cmouse.aa-KSK'
Using slot 1 with a present token (0x1)
Key pair generated:
Private Key Object; RSA
  label:      cmouse.aa-KSK
  ID:         a55b2ad19ab156bb8ab63a2361b9abf76e30315c
  Usage:      decrypt, sign, unwrap
Public Key Object; RSA 2048 bits
  label:      cmouse.aa-KSK
  ID:         a55b2ad19ab156bb8ab63a2361b9abf76e30315c
  Usage:      encrypt, verify, wrap

$ pkcs11-tool --module opensc-pkcs11.so -l --pin 123456 -k --key-type RSA:1024 -a 'cmouse.aa-ZSK'
Using slot 1 with a present token (0x1)
Key pair generated:
Private Key Object; RSA
  label:      cmouse.aa-ZSK
  ID:         0f1e61fd8c3b85f1b653fbc3a9273434c5712853
  Usage:      decrypt, sign, unwrap
Public Key Object; RSA 1024 bits
  label:      cmouse.aa-ZSK
  ID:         0f1e61fd8c3b85f1b653fbc3a9273434c5712853
  Usage:      encrypt, verify, wrap

So far, we have two key pairs on the HSM, but PowerDNS cannot use these yet; we've yet to associate these keys with the zone we want to sign. This is done with the hsm subcommand of the pdnssec utility.

command

The parameters are:

  1. Name of the zone from domains table.
  2. Signing algorithm
  3. The type of key. Values can be "zsk" or "ksk"
  4. The name of the p11-kit module PowerDNS will use to find the HSM. Recall I called it opensc.
  5. The slot on said HSM. I identified it, above, as slot #1.
  6. The PIN for said HSM. Careful: we'll see this in clear text in a moment! Anybody who can access the MySQL cryptokeys table will be able to read the HSM's PIN.
  7. The label of the key on the HSM.

$ pdnssec hsm assign cmouse.aa rsasha256 ksk opensc 1 123456 'cmouse.aa-KSK'
Module opensc slot 1 assigned to cmouse.aa with key id 16

$ pdnssec hsm assign cmouse.aa rsasha256 zsk opensc 1 123456 'cmouse.aa-ZSK'
Module opensc slot 1 assigned to cmouse.aa with key id 17

The key id issued by pdnssec is actually the row identifier in PowerDNS' cryptokeys table. Here we go:

mysql> SELECT * FROM cryptokeys WHERE domain_id = (SELECT id FROM domains WHERE name = 'cmouse.aa');
*************************** 1. row ***************************
       id: 16
domain_id: 4
    flags: 257
   active: 1
  content: Private-key-format: v1.2
Algorithm: 8
Engine: opensc
Slot: 1
PIN: 123456
Label: cmouse.aa-KSK

*************************** 2. row ***************************
       id: 17
domain_id: 4
    flags: 256
   active: 1
  content: Private-key-format: v1.2
Algorithm: 8
Engine: opensc
Slot: 1
PIN: 123456
Label: cmouse.aa-ZSK

I warned you about the HSM PIN being available in clear text; there it is. This is necessary because PowerDNS needs to log in to the HSM to get it to sign data whenever it needs to use the key material. This also means that the HSM is a limiting factor with regard to performance: the slower the HSM, the slower PowerDNS will be at producing DNSSEC signatures.

So, if everything is set up correctly, and if everything works, we ought to be able to have PowerDNS show us a DNSKEY or two and the signed zone. Let's try.

$ pdnssec show-zone cmouse.aa
Zone is not presigned
Zone has NSEC semantics
keys:
ID = 16 (KSK), tag = 17284, algo = 8, bits = 2048       Active: 1 ( RSASHA256 )
KSK DNSKEY = cmouse.aa IN DNSKEY 257 3 8 AwEAAZOw37iBBoPHfLDjwyqGhAI00PSyVc92TfdIoYfryNtVc2nzX6p9iZMIOOGR70oicN/nIpA/9Pls7kUkb3Tf+P8TRa52SaxLIkwR/NBzBHj06q2JlJ6OUJ+BufD3WlLh5jZPHQzof5FFcRTg5Y4HwD0v+MbzuQoHOxM3PmG0qDUv+W2WZ2rFAmtEVQ2tupGHtzgtgfL7a4RU46rBpYujazHLU2A9Q9zbWJNlCeP4zPsYtDe7CiXtsEhs4c9VLF5WjOdBdUIXJNblEi6SnAxXpZrVGR1cDI+L1OshX0odOtN9e2fXF+rCxKPe2xtTGencx4H+zMyzbwBNYx0vN3UGY00= ; ( RSASHA256 )
DS = cmouse.aa IN DS 17284 8 1 0cd36f92047350aa481eaffa74ddd78197c89007 ; ( SHA1 digest )
DS = cmouse.aa IN DS 17284 8 2 cfdc012168c994ac53a83eb265db0f66e69e7fdefe4f57164ba61a7e70109f59 ; ( SHA256 digest )

ID = 17 (ZSK), tag = 5461, algo = 8, bits = 1024        Active: 1 ( RSASHA256 )

That looks wonderful, and what's even nicer is, as I ran that command, I saw blinkenlights on the SmartCard HSM. ;-)

$ dig @localhost cmouse.aa dnskey +multiline
;; ANSWER SECTION:
cmouse.aa.  300 IN DNSKEY 257 3 8 (
                    AwEAAZOw37iBBoPHfLDjwyqGhAI00PSyVc92TfdIoYfr
                    yNtVc2nzX6p9iZMIOOGR70oicN/nIpA/9Pls7kUkb3Tf
                    +P8TRa52SaxLIkwR/NBzBHj06q2JlJ6OUJ+BufD3WlLh
                    5jZPHQzof5FFcRTg5Y4HwD0v+MbzuQoHOxM3PmG0qDUv
                    +W2WZ2rFAmtEVQ2tupGHtzgtgfL7a4RU46rBpYujazHL
                    U2A9Q9zbWJNlCeP4zPsYtDe7CiXtsEhs4c9VLF5WjOdB
                    dUIXJNblEi6SnAxXpZrVGR1cDI+L1OshX0odOtN9e2fX
                    F+rCxKPe2xtTGencx4H+zMyzbwBNYx0vN3UGY00=
                    ) ; KSK; alg = RSASHA256; key id = 17284
cmouse.aa.  300 IN DNSKEY 256 3 8 (
                    AwEAAbwcLxxSvtlfPQVO7vv9cOF8KIwLnj8wb6iWwX60
                    MRMQ9jChRpjmVmIbnW+Y0No61jOwbVB9oN/+n2hj1uBT
                    jFd/4JK6I+sAqjcSOK/J3UNCFRJ5Lg7rBfegL2XNOKzz
                    54DzGE4m6AzM98gM0bItYKoqD0uN06blxk4qJTk+7Rot
                    ) ; ZSK; alg = RSASHA256; key id = 5461

More blinkenlights, and two DNSKEY records! Nice.

I can disable DNSSEC for a zone with the disable-dnssec subcommand of pdnssec which simply removes the key association from the cryptokeys table, leaving the keys on the HSM. These keys can be deleted from the device as shown above, using pkcs11-tool, or I can re-use them for a different zone or even associate them with more than one zone (i.e. shared keys).

Doing this with PowerDNS was a nice experience in spite of the fact that PKCS#11 support in PowerDNS is considered "experimental".

Was experimenting with a smart card for DNSSEC worth the effort? With the notable exception I just described in detail, it certainly was not.

View Comments :: DNSSEC :: 30 Mar 2015 :: e-mail

The Apache HTTP server allows a system administrator to configure how it should log requests. This is good in terms of flexibility, but it's horrid in terms of parsing: every installation can be different.

I was tasked with getting Apache logs into Graylog and discovered that $CUST has different Apache log formats even between Apache instances which run on a single machine. I certainly didn't want to have to write extractors for all of those, and I can't imagine people here wanting to maintain those ...

People have tried submitting JSON directly from Apache, but I find that a bit cumbersome to write, and I have the feeling it's brittle: an unexpected quote or brace in the request (which is entirely possible) could render the JSON invalid.
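
A tiny illustration of what I mean, with a made-up request line:

# Why templating JSON in a LogFormat is brittle: characters which JSON
# requires to be escaped can legitimately turn up in the request line.
import json

request = 'GET /search?q="pdns HTTP/1.1'          # hypothetical request line
doc = '{"request": "%s", "status": 404}' % request

try:
    json.loads(doc)
except ValueError as e:
    print('invalid JSON: %s' % e)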

apache-logger

I settled on what I think is a much simpler and rather flexible format: a TAB-separated (\t) list of key=value pairs configured like this in httpd.conf:

LogFormat "clientaddr=%h\trequest=%r\tstatus=%s\toctets=%b\ttime=%t\truntime=%D\treferer=%{Referer}i\tuseragent=%{User-Agent}i\tinstance=nsd9" graylog
CustomLog "|/usr/local/apache-logger.py" graylog

The apache-logger program splits those up, adds fields required for GELF, and fires that off to a Graylog server configured with an appropriate GELF input.

#!/usr/bin/env python
# JPMens, March 2015 filter for special Apache log format to GELF

import sys
import json
import gelf    # https://github.com/jspaulding/gelf-python/blob/master/gelf.py
import socket
import fileinput
from geoip import open_database    # http://pythonhosted.org/python-geoip/

my_hostname = socket.gethostname()  # GELF "host" (i.e. source)

try:
    geodb = open_database('GeoLite2-City.mmdb')
except:
    sys.exit("Cannot open GeoLite2-City database")

c = gelf.Client(server='192.168.1.133', port=10002)

def isnumber(s):
    try:
        float(s)
        return True
    except ValueError:
        pass

    return False

for line in fileinput.input():
    parts = line.rstrip().split('\t')
    data = {}
    for p in parts:
        key, value = p.split('=', 1)

        if isnumber(value):
            try:
                value = int(value)
            except:
                value = float(value)

        if value != '' and value != None:
            data[key] = value

    data['host']        = my_hostname    # overwrite with GELF source
    data['type']        = 'special'

    request = data.get('request', 'GET I dunno')
    method = request.split(' ', 1)[0]

    data['short_message']  = request
    data['method']         = method
    if 'request' in data:
        del data['request']

    try:
        g = geodb.lookup(data['clientaddr'])
        if g is not None:
            data['country_code'] = g.country
    except:
        pass

    try:
        c.log(json.dumps(data))
    except:
        pass

Graylog effectively receives something like this (the Geo-location having been added by apache-logger):

{
    "clientaddr": "62.x.x.x",
    "host": "tiggr",
    "instance": "nsd9",
    "method": "GET",
    "country_code": "GB",
    "octets": 282,
    "referer": "-",
    "runtime": 501,
    "short_message": "GET /barbo HTTP/1.1",
    "status": 404,
    "time": "[20/Mar/2015:06:41:36 +0000]",
    "type": "special",
    "useragent": "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2"
}

You'll have noted that the LogFormat allows me to specify any number of fields (e.g. instance) and values.

View Comments :: Graylog :: 20 Mar 2015 :: e-mail
