One of the first steps in an Ansible playbook run (unless you explicitly disable it) is the gathering of facts via the setup module. These facts are collected on each machine and kept in memory for the duration of the playbook run, after which they are destroyed. This means that a task which wants to reference a host variable from a different machine has to talk to that machine at least once in the playbook run so that Ansible has access to its facts, which in turn sometimes means talking to hosts although we need just a teeny weeny bit of information from them.

One interesting feature of Ansible version 1.8 is called "fact caching". It allows us to build a cache of all facts for all hosts Ansible talks to; the cache is populated with the facts of every host for which the setup module (i.e. gather_facts) runs. Enabling the cache and the optional expiry of cached entries are controlled by settings in ansible.cfg:

fact_caching = redis
fact_caching_timeout = 3600
fact_caching_connection = localhost:6379:0

By default, fact_caching is set to memory. Configuring it as above makes Ansible use a Redis instance (on the local machine) as its cache. The timeout specifies when individual Redis keys (i.e. facts on a per-machine basis) expire: 0 effectively disables expiry, and a positive value is a TTL in seconds.

The following small experiment will run over 246 machines.

---
- hosts:
  - mygroup
  gather_facts: True
  tasks:
  - action: debug msg="memfree = {{ ansible_memfree_mb }}"
PLAY [mygroup] *****************************************************************

GATHERING FACTS ***************************************************************
ok: [www01]
...
TASK: [debug msg="memfree = {{ ansible_memfree_mb }}"] ************************
ok: [www01] => {
    "msg": "memfree = 7811"
}
...

Running my sample playbook gathers all facts on each run. This playbook took just over a minute to run (1m11). So, after the run, what's in Redis?

127.0.0.1:6379> keys *
1) "ansible_cache_keys"
2) "JPM"
3) "ansible_factswww01"
...

Each of the keys in Redis contains a JSON string value -- the list of all facts collected by Ansible. Let's have a look:

#!/usr/bin/env python

import redis
import json

r = redis.StrictRedis(host='localhost', port=6379, db=0)
key = "ansible_facts" + "www01"       # one key per host
val = r.get(key)                      # a JSON string of all facts

data = json.loads(val)
print(data['ansible_memfree_mb'])     # => 7811
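
To verify that the configured expiry is actually being applied, I can ask Redis for the remaining time to live of a host's key (a negative value means the key has no expiry or doesn't exist, depending on your Redis version):

import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)
print(r.ttl("ansible_facts" + "www01"))   # seconds until the entry expires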

If I configure gather_facts: False, the setup module is not invoked in the playbook, and Ansible obtains facts from the cache. Note, of course, that the value of each fact variable will be whatever was previously cached. Also, because fact gathering doesn't take place, the playbook runs a bit faster (which may be negligible depending on what tasks it's set to accomplish). In this particular case, the play ran in just under a minute (0m50) -- a slight speedup.

A second caching mechanism exists at the time of this writing: it's called jsonfile, and it allows me to use a directory of JSON files as the cache. Expiry is supported as for Redis, even though a JSON file remains on disk after it has expired (the file's mtime is used to calculate expiry). If I alter the caching configuration in ansible.cfg, I can activate it:

fact_caching = jsonfile
fact_caching_connection = /tmp/mycachedir

The "connection" setting must point to a writeable directory in which a file containing facts in JSON format for each host are stored. A memcached plugin for the cache also exists. Any playbook which gathers facts effectively populates the cache for the machines it speaks to.

The following playbook doesn't talk to the www01 machine, but it can access that machine's facts from the cache. (The city fact isn't default in Ansible: I set this up using facts.d.)

---
- hosts:
  - ldap21
  gather_facts: False
  tasks:
  - action: debug msg="City of www01 = {{ hostvars['www01'].ansible_local.system.location.city }}"
PLAY [ldap21] **************************************************************

TASK: [debug msg="City of www01 = {{ hostvars['www01'].ansible_local.system.location.city }}"] ***
ok: [ldap21] => {
    "msg": "City of www01 = Boston"
}

As soon as a cache entry expires, these fact variables will be undefined, and the play will fail.

Populating or rejuvenating the facts cache is trivial: I'll be running the following playbook periodically in accordance with the cache timeout I've configured:

---
- hosts:
  - all
  gather_facts: True

In case of doubt, clear the cache by invoking ansible-playbook with the --flush-cache option.

Ansible, Redis, and JSON :: 29 Jan 2015

I recently introduced you to the PowerDNS REST API and wished that somebody would build a really good Web-based front-end for PowerDNS with it. Henk Jan reminded me of nsedit, which I'd simply forgotten about.

nsedit

It's worth having a look at nsedit, created by Mark Schouten. It consists of a bit of PHP I just drop on a Web server somewhere. On first invocation it creates a small SQLite database in which it stores users and zone associations, but all other operations are directed at the PowerDNS REST API and operate directly on the latter's back-end database.

The user interface is clean and modern-looking. It was created to "finally replace PowerAdmin" and bring editing of DNS zones into modern times. It can import master zone files and create native, master, and slave zones. We can create, edit, and delete records, but be careful: due to the way the PowerDNS API currently works, it is possible to insert "ugly" data into the back-end. I hope this issue will soon be solved by our friends at PowerDNS.

If you're tired of PowerAdmin's insufficiencies, as I know many are, have a look at nsedit.

DNSSEC uses keys with which it signs DNS records, and there is a school of thought which suggests DNSSEC keys should be rolled (i.e. re-created) every once in a while. The recommended frequency for doing so is specified in DNSSEC Operational Practices, Version 2, with additional information in DNSSEC Key Rollover Timing Considerations (draft). Whether or not you want to roll keys is a matter of taste (and/or paranoia?). How often have you rolled your SSH host keys? (And I don't mean unplanned rolling by, say, re-installing a host without backing up and restoring its keys.) How often has the root DNSSEC KSK been rolled? Let's look at the root trust anchor, which was published on the 15th of July 2010:

<?xml version="1.0" encoding="UTF-8"?>
<TrustAnchor id="AD42165F-3B1A-4778-8F42-D34A1D41FD93" source="http://data.iana.org/root-anchors/root-anchors.xml">
  <Zone>.</Zone>
  <KeyDigest id="Kjqmt7v" validFrom="2010-07-15T00:00:00+00:00">
    <KeyTag>19036</KeyTag>
    <Algorithm>8</Algorithm>
    <DigestType>2</DigestType>
    <Digest>49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5</Digest>
  </KeyDigest>
</TrustAnchor>

Now let us look at the current DS of the root's KSK:

dig @k.root-servers.net. . dnskey | grep 257 > root.dnskey
ldns-key2ds -n root.dnskey
.   172800   IN   DS   19036 8 2 49aac11d7b6f6446702e54a1607371607a1a41855200fd2ce1cdde32f24e8fb5

Notice a difference in the digest? No, I don't either. In other words, the root KSK has never been rolled, but ICANN is seeking volunteers for the DNSSEC Root KSK Rollover Plan Design Team...
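
If you don't trust your eyes, you can recompute the DS from the live DNSKEY RRset and compare programmatically. A small sketch using dnspython (my choice for illustration here, not what I used above; any DNS library with DS support will do):

#!/usr/bin/env python

import dns.resolver
import dns.dnssec
import dns.name

# fetch the root DNSKEY RRset and compute a SHA-256 DS for the KSK
answer = dns.resolver.query(".", "DNSKEY")
for key in answer:
    if key.flags & 0x0001:    # SEP bit set, i.e. this is the KSK
        print(dns.dnssec.make_ds(dns.name.root, key, "SHA256"))
        # => 19036 8 2 49aac11d...f24e8fb5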

The relationship between a DNSSEC-signed zone (e.g. example.org) and its parent zone (e.g. org) makes rolling key-signing keys (KSK) difficult: when I renew keys for example.org, I have to submit the public DNSKEY or its hashed DS record to the parent (org); they add or replace that record for my example.org in their zone and re-sign org, creating a chain of trust from org to example.org. This of course means I can't just roll my KSK at will; if I don't interact with my zone's parent, my zone becomes insecure and thus bogus for validating resolvers. DNSSEC basically consists of islands of trust: keys in a zone are validated by trust anchors in other zones, which is comparable to how delegation works.

In a test environment or in private DNS networks, DNSSEC is easy: I can create, change, roll, whatever DNSSEC KSK keys at my whim, copy the relevant DS records to parent zones, re-sign and all is fine. But can't this be automated?

RFC5011, Automated Updates of DNS Security (DNSSEC) Trust Anchors, specifies a mechanism by which trust anchors can be deployed automatically via the DNS. A resolver periodically queries the DNS to determine whether a new DNSKEY has been published; if so, and if it can be validated by an existing trust anchor, it's retrieved and used as a future trust anchor for that zone. RFC 5011 works because new keys are signed by old keys and thus the chain of trust is maintained.
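
To make the mechanism more concrete, here is a much-simplified sketch of the add hold-down logic in Python; real resolver implementations track more states, require the new key to be seen continuously during the hold-down period, and also handle the revoke/remove side, none of which this toy does:

import time

ADD_HOLD_DOWN = 30 * 24 * 3600        # RFC 5011: 30 days

anchors = {19036}                     # key tags we already trust
pending = {}                          # key tag -> first time seen

def observe(keytag, validated, now=None):
    """Call whenever a validated DNSKEY RRset has been fetched."""
    now = now or time.time()
    if keytag in anchors:
        return "valid"
    if not validated:                 # not signed by a current anchor
        return "ignored"
    first = pending.setdefault(keytag, now)
    if now - first >= ADD_HOLD_DOWN:  # hold-down elapsed: promote
        anchors.add(keytag)
        del pending[keytag]
        return "valid"
    return "add-pending"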

Both the BIND and Unbound name servers support RFC 5011.

The upcoming version of OpenDNSSEC will also have support for RFC 5011. (If you're new to OpenDNSSEC, read an earlier post on it and I highly recommend you print out this excellent document created by AFNIC.)

What I basically do is specify RFC 5011 for the KSK keys in my kasp.xml:

<KSK>
    <Algorithm length="4096">8</Algorithm>
    <Lifetime>P1Y</Lifetime>
    <Repository>SoftHSM</Repository>
    <Standby>0</Standby>
    <RFC5011/>
</KSK>

It's also important to note the following:

  • Your configured DelegationSignerSubmitCommand will not be invoked for RFC 5011 keys.
  • Keys jump from publish to active, bypassing the ready state.
  • The ds-seen command should not be invoked on these keys; in fact you probably won't see waiting for ds-seen as a status at all.

kasp2html

RFC 5011 support in OpenDNSSEC is very new, and it is, in theory, difficult to test because of the waiting periods involved between rolls. The good folk at NLnetLabs chose to implement a feature which you compile in when building OpenDNSSEC and which you do not use in production! I can set an environment variable to define a particular point in time; with this, rolling a KSK becomes trivial:

$ ods-ksmutil key list
Zone:                           Keytype:      State:    Date of next transition:
jp.aa                           KSK           active    2019-01-15 21:49:34
jp.aa                           KSK           publish   2019-02-14 22:05:34
$ ENFORCER_TIMESHIFT="2019-02-14:22:05:35" ./sbin/ods-enforcerd -1 -d
WARNING: Timeshift mode detected, running once only!
Jan 20 09:42:47 deb ods-enforcerd: INFO: KSK has been rolled for jp.aa

So, how do I test this other than looking at the DNSKEY records published and signed by OpenDNSSEC in the zone?

test scenario

OpenDNSSEC signs a test zone (here: jp.aa) and notifies NSD which transfers the zone in and serves it authoritatively. On the other side I have Unbound and BIND configured as recursive, validating resolvers to answer incoming queries.

key-checker

A utility I've found very helpful for following keys as they are created and revoked is Stéphane Bortzmeyer's key-checker, which uses a small SQLite3 database to keep track of new keys. (Read the paper entitled Monitoring DNSSEC zones: what, how, and when?.)

I launch key-checker as

while true; do
    ./key-store-and-report.py jp.aa 172.16.153.112
    sleep 600
done

or via cron, etc. It probes the specified DNS server for DNSKEY records for the specified zone and mails me reports:

1  F Jan 20 [DNSSEC Check of my zones] New key 55503 in zone jp.aa.
2  F Jan 20 [DNSSEC Check of my zones] New key 26931 in zone jp.aa.
3  F Jan 20 [DNSSEC Check of my zones] New key 7921 in zone jp.aa.
4  F Jan 20 [DNSSEC Check of my zones] New key 16275 in zone jp.aa.
5  F Jan 20 [DNSSEC Check of my zones] New keyset in zone jp.aa.
Subject: [DNSSEC Check of my zones] New keyset in zone jp.aa.

    The keyset RC/faWBmRTBu+nZcCPuivXe6aGQ= appeared for the first time in the zone "jp.aa.".

    Its TTL is 300 and its members are: [55503, 26931, 7921, 16275]
Subject: [DNSSEC Check of my zones] New key 16275 in zone jp.aa.

        The key 16275 appeared for the first time in the zone "jp.aa.".

        Its flags are 385 and its algorithm 8.

I keep the reports in a mailbox and use that as an archive of what happened when.
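
If you want to roll your own (much smaller) probe, the gist of what key-checker queries can be reproduced with dnspython in a few lines; this is a sketch, with the zone and server address taken from my test setup:

#!/usr/bin/env python

import dns.resolver
import dns.dnssec

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ['172.16.153.112']

# print key tag, flags, and algorithm of each DNSKEY in the zone
for key in resolver.query('jp.aa', 'DNSKEY'):
    tag = dns.dnssec.key_id(key)
    kind = "KSK" if key.flags & 0x0001 else "ZSK"
    print("%5d  flags=%d  alg=%d  (%s)" % (tag, key.flags, key.algorithm, kind))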

Unbound

Unbound has a trust anchor configured which I obtained from ods-ksmutil key export --zone jp.aa --keytype ksk and placed in the file jp.anchor.

server:
    verbosity: 1
    logfile: ""
    username: ""
    chroot: ""
    root-hints: "/home/jpm/rfc5011/root.hints"
    auto-trust-anchor-file: "/home/jpm/rfc5011/jp.anchor"

remote-control:
    control-enable: yes

Some time after the successful KSK rollover, Unbound obtained the new anchor and stored it:

jp.aa.  300     IN      DNSKEY  257 3 8 AwEAA....; {id = 13759 (ksk), size = 4096b} ;;state=2 [  VALID  ] ;;count=0 ;;lastchange=1421830033 ;;Wed Jan 20 09:47:13 2015
jp.aa.  300     IN      DNSKEY  385 3 8 AwEAA....; {id = 52347 (ksk), size = 4096b} ;;state=4 [ REVOKED ] ;;count=0 ;;lastchange=1421830033 ;;Wed Jan 20 09:47:13 2015

Note how the comments show the REVOKED key; the key's flags (385) have bit 8 set, which means it has been revoked.
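
Checking that bit programmatically is trivial: the REVOKE flag has the value 0x0080, so 385 = 256 (zone key) + 128 (revoke) + 1 (SEP):

REVOKE = 0x0080

for flags in (257, 385):
    print(flags, "revoked" if flags & REVOKE else "not revoked")
# 257 not revoked
# 385 revoked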

BIND

Unfortunately, my experiments in doing the same for BIND haven't been particularly fruitful. In theory, setting this up is just as easy as doing so for Unbound: I just configure a managed-keys stanza in named.conf, reconfigure, and that's it.

options {
    directory "/var/named";
    allow-query { any; };
    listen-on { 127.0.0.1; 172.16.153.110; };
    dnssec-enable yes;
    recursion yes;
    dnssec-validation yes;
    managed-keys-directory "/home/jpm/named";
};

managed-keys {
     "jp.aa."  initial-key 257 3 8
        "AwEAAcd8g/zOsovEQLmr/IM6Pvs3HQ9
         ...
         DrKO2DVOwe3MMzDy5L1ZJzSOH"; // {id = 52219 (ksk), size = 4096b}
};

zone "." in {
    type hint;
    file "/home/jpm/named/root.hints";
};

I'm too impatient: since I can't seem to force BIND to "go on, look now to see if there are new DNSKEYs, mate!", I have to wait until it does its thing. Also, I'm currently chasing down another little problem ...

20-Jan-2015 10:17:58.793 managed-keys-zone: Unable to fetch DNSKEY set 'jp.aa': failure

I have actually seen BIND write a revoked key (385) into managed-keys, but I haven't yet been able to test whether validation continues working.

If you want to do your own experimenting with a "fake" DNS root zone which rolls its keys (currently every 90 minutes), Warren Kumari's Keyroller is what you need, and Jakob Schlyter has created an associated toolset which downloads the Keyroller key, formats it, and launches either BIND or Unbound on it.

DNSSEC :: 21 Jan 2015

The PowerDNS Authoritative DNS server supports multiple back-ends for data storage, and I believe the most popular are the MySQL and PostgreSQL back-ends. One reason for this popularity is that people can use SQL to add or modify DNS data: by connecting to the database PowerDNS accesses, they can perform relatively simple INSERT and UPDATE manipulations on their data, which PowerDNS then serves as responses to DNS queries. I personally am not terribly fond of that because it's error-prone: "Garbage in -> Garage out" [sic], as somebody recently said ...

We've discussed some of the pitfalls of adding bad data to PowerDNS earlier, but the proposed solution is iffy in itself. It's much safer to add data to a back-end if the program which serves that data (PowerDNS) can check it as it comes in. One method for doing that is to use dynamic DNS updates (RFC 2136) with PowerDNS, but nsupdate et al. is not everybody's cup of tea.

I was telling a client about PowerDNS' REST API earlier this week and realized I'd never actually played with it myself, so here goes.

The built-in (experimental) API is available in versions 3.4 and higher, and we talk to it using HTTP and JSON. Here's the relevant part of my configuration:

launch=gmysql
gmysql-dbname=pdns
gmysql-dnssec
...
slave=yes
#
webserver-address=172.16.153.110
webserver-allow-from=127.0.0.0/8,172.16.153.0/24
webserver-port=8081
webserver=yes
#
experimental-api-key=otto
experimental-json-interface=yes

The "old" Web server interface is still there, and I can look at the statistics it issues with a Web browser.

Web server interface

The REST service listens on the TCP address/port used by the webserver component. It is enabled with experimental-json-interface, and access to it is protected by the API key you specify in the experimental-api-key setting. Let's launch PowerDNS and use the API to see what we have, using curl or resty. Note that we specify the API key (the value of the configured experimental-api-key) on each invocation, using a header.

curl -s -H 'X-API-Key: otto' http://127.0.0.1:8081/servers/localhost
{
    "config_url": "/servers/localhost/config{/config_setting}",
    "daemon_type": "authoritative",
    "id": "localhost",
    "type": "Server",
    "url": "/servers/localhost",
    "version": "git-20150108-5369-943b2f7",
    "zones_url": "/servers/localhost/zones{/zone}"
}
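
The same request is just as easy from a script. A minimal sketch with the Python requests library (my choice for illustration), using the API key from above:

import requests

headers = {'X-API-Key': 'otto'}
r = requests.get('http://127.0.0.1:8081/servers/localhost', headers=headers)
server = r.json()
print(server['daemon_type'])      # => authoritative
print(server['zones_url'])        # => /servers/localhost/zones{/zone}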

So, which zones does this server serve?

curl -s -H 'X-API-Key: otto' http://172.16.153.110:8081/servers/localhost/zones
[
    {
        "dnssec": true,
        "id": "d2.aa.",
        "kind": "Slave",
        "last_check": 1420793755,
        "masters": [
            "172.16.153.112"
        ],
        "name": "d2.aa",
        "notified_serial": 0,
        "serial": 1420793754,
        "url": "/servers/localhost/zones/d2.aa."
    },
    {
        "dnssec": false,
        "id": "ww.mens.de.",
        "kind": "Native",
        "last_check": 0,
        "masters": [],
        "name": "ww.mens.de",
        "notified_serial": 0,
        "serial": 201405202,
        "url": "/servers/localhost/zones/ww.mens.de."
    }
]

Looking at zones_url in the first response, it looks as though we can query an individual zone, and we can. (Output truncated.)

curl -s -H 'X-API-Key: otto' http://172.16.153.110:8081/servers/localhost/zones/d2.aa
{
    "comments": [],
    "dnssec": true,
    "id": "d2.aa.",
    "kind": "Slave",
    "last_check": 1420793755,
    "masters": [
        "172.16.153.112"
    ],
    "name": "d2.aa",
    "notified_serial": 0,
    "records": [
        {
            "content": "127.0.0.1",
            "disabled": false,
            "name": "a.d2.aa",
            "ttl": 60,
            "type": "A"
        },
        {
            "content": "localhost. root.localhost. 1420793754 10800 3600 604800 300",
            "disabled": false,
            "name": "d2.aa",
            "ttl": 300,
            "type": "SOA"
        },
    ],
    "serial": 1420793754,
    "soa_edit": "",
    "soa_edit_api": "",
    "type": "Zone",
    "url": "/servers/localhost/zones/d2.aa."
}

So far so good, but I really prefer looking at what my DNS servers have to say using dig. :-)

The API promises to be able to create new zones, update and delete individual records, etc. How does that turn out? We'll add a new zone. Instead of putting everything on the command line, I'll create the JSON in a file and feed that to curl. Here's the zone definition:

{
    "comments": [
        {
            "account": "JP",
            "content": "My first API-created zone",
            "name": "uhuh",
            "type": "dunno"
        }
    ],
    "kind": "Native",
    "masters": [],
    "name": "example.net",
    "nameservers": [
        "ns1.example.net",
        "ns2.example.net"
    ],
    "records": [
        {
            "content": "ns.example.net. hostmaster.example.com. 1 1800 900 604800 86400",
            "disabled": false,
            "name": "example.net",
            "ttl": 86400,
            "type": "SOA"
        },
        {
            "content": "192.168.1.42",
            "disabled": false,
            "name": "www.example.net",
            "ttl": 3600,
            "type": "A"
        }
    ]
}

So I submit that, and the API returns a confirmation:

curl -s -H 'X-API-Key: otto' --data @/tmp/zone http://172.16.153.110:8081/servers/localhost/zones
{
    "comments": [
        {
            "account": "JP",
            "content": "My first API-created zone",
            "modified_at": 1420806114,
            "name": "uhuh",
            "type": "TYPE0"
        }
    ],
    "dnssec": false,
    "id": "example.net.",
    "kind": "Native",
    "last_check": 0,
    "masters": [],
    "name": "example.net",
    "notified_serial": 0,
    "records": [
        {
            "content": "ns1.example.net",
            "disabled": false,
            "name": "example.net",
            "ttl": 3600,
            "type": "NS"
        },
        {
            "content": "ns2.example.net",
            "disabled": false,
            "name": "example.net",
            "ttl": 3600,
            "type": "NS"
        },
        {
            "content": "ns.example.net. hostmaster.example.com. 1 1800 900 604800 86400",
            "disabled": false,
            "name": "example.net",
            "ttl": 86400,
            "type": "SOA"
        },
        {
            "content": "192.168.1.42",
            "disabled": false,
            "name": "www.example.net",
            "ttl": 3600,
            "type": "A"
        }
    ],
    "serial": 1,
    "soa_edit": "",
    "soa_edit_api": "",
    "type": "Zone",
    "url": "/servers/localhost/zones/example.net."
}
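
For completeness, the same POST from Python rather than curl; a sketch assuming the zone definition shown above is stored in /tmp/zone:

import json
import requests

headers = {'X-API-Key': 'otto', 'Content-Type': 'application/json'}
zone = json.load(open('/tmp/zone'))

r = requests.post('http://172.16.153.110:8081/servers/localhost/zones',
                  headers=headers, data=json.dumps(zone))
print(r.status_code)              # expecting 201 Created
print(r.json()['id'])             # => example.net.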

If the type is specified in lowercase (e.g. soa), I get a weird error message which doesn't really make sense to me:

{"error":"Record example.net/TYPE0 ns.example.net. hostmaster.example.com. 1 1800 900 604800 86400: Unknown record was stored incorrectly, need 3 fields, got 7: ns.example.net. hostmaster.example.com. 1 1800 900 604800 86400"}

So, does it work? Yes, it does:

dig @127.0.0.1 +noall +answer example.net any
example.net.            86400   IN      SOA     ns.example.net. hostmaster.example.com. 1 1800 900 604800 86400
example.net.            3600    IN      NS      ns1.example.net.
example.net.            3600    IN      NS      ns2.example.net.

And our database tables? domains and records have been populated as they'd have been with manual SQL inserts. A new table, comments, holds the comments:

mysql> select * from comments;
+----+-----------+------+-------+-------------+---------+---------------------------+
| id | domain_id | name | type  | modified_at | account | comment                   |
+----+-----------+------+-------+-------------+---------+---------------------------+
|  2 |         6 | uhuh | TYPE0 |  1420806114 | JP      | My first API-created zone |
+----+-----------+------+-------+-------------+---------+---------------------------+

It's unclear to me what type and name are for, and the documentation isn't really clear on that either. Judging by the fact that the type column contains "TYPE0", I assume it should have been a record type ("SOA", "AAAA", etc.). And what is account? Anyway, this may be nice to have.

I can also create a slave zone, and support for zone deletion is also implemented. Assuming I want to add a new zone to a PowerDNS slave server, this snippet will do:

{
    "kind": "Slave",
    "masters": [ "172.16.153.112:53"],
    "name": "jp.aa"
}
curl  -s -XPOST -H 'X-API-Key: otto' --data @new-zone.json http://172.16.153.110:8081/servers/localhost/zones

To remove a zone, I use the DELETE method, which causes PowerDNS to drop the zone from its back-end and wipe all its associated data.

curl -XDELETE -H 'X-API-Key: otto' http://172.16.153.110:8081/servers/localhost/zones/jp.aa

As to documentation: there's a very simple introductory blurb with a few examples (the docs really need a lot of work, so you may wish to contribute as you discover features).

All in all, this is pretty useful as it ensures data introduced into the PowerDNS back-ends is "clean". Most of the API errors I encountered are pretty hard to interpret (it took me a minute of staring at my JSON output to determine why {"error":"Container was not an object."} was returned to me), but a bit of trial and error gets us going quickly enough. The API supports zone deletion as well as adding, replacing, and deleting individual resource records, and there's nothing I personally missed in terms of features. I can well imagine people are going to like this very much, and there is maybe hope that somebody takes this as a basis to build yet another, but hopefully really good, Web-based front-end for the PowerDNS database back-ends. Get to work! ;-)

Related: nsedit - a DNS zone and record editor for PowerDNS

DNS, PowerDNS, API, and REST :: 09 Jan 2015

Christmas came and went, and all you got were two lousy pairs of socks, one striped, the other printed with cupcakes. Take them back for a refund.

A neighbourhood gymnastics club (not the kind of place you'll typically find me at, but I'm friendly with them) is in dire need of replacing their aging floor mats, and they're looking for donations. I thought I'd help out a bit, and here's the deal.

You may know I wrote a book about Open Source DNS servers, and you may also know that it got a few good reviews. We then decided to give it away for free, as a PDF. (Yes, the whole book, no DRM, no strings attached!) But I'm hoping you've been wanting a hardcopy version with your name on it. The book was first published in 2009 and, although most of the software described has evolved in the meantime, the description of the servers is still current enough.

Send me EUR 20.00 plus postage (the postage hurts, I know, but there's nothing I can do about that) and I'll send you a signed copy of the book. This is a real paper book which weighs in at 730+ pages and almost 1.5 kg. Is this an offer you can refuse? I will donate all proceeds from the books to the gymnastics club.

The easiest way for you will probably be to purchase your copy via eBay. (Note: I'm only allowed to sell 9 articles of a kind at a time, so come back here for the new link if necessary.)

If that doesn't work for you, you can also send me an e-mail with the following details filled in:

I would like a signed copy of your book "Alternative DNS Servers".

I live in Germany and will pay EUR 24.90
or I live outside of Germany and will pay EUR 35.90
I want to pay by SEPA or via Paypal: ________
Please dedicate the book to: _______________

My name and address are: ___________________, _______________
(make sure JP can just copy/paste name and address!!!)

When I receive your e-mail, I'll respond with further details.

Don't just get a copy for yourself; think of your esteemed colleagues, who'll also be pleased to get a signed copy, I'm sure. And I don't just sign with a blob of bits as for DNSSEC; I use a blue pen. :-)
