BIND’s views are a feature some people like using because they allow a single server instance to respond distinctly depending on, say, the source address used to query the instance. I actually quite dislike BIND views because they complicate configuration, and as such I typically advise against using them. (If you’re ever really bored, try configuring AXFR from one view to the other; it’s possible, and it’s horrid. ;-)
Hidden in the depths of the Unbound version 1.6.0 Changelog which I’ve only just looked at, I find the word view. I actually groaned, but I thought I’d give it a peek even so.
Unbound’s views can be used to serve local data depending on the source address from which a query is received. Let’s look at a small example:
I define local-zone and local-data globally, so queries to this instance should return the following for my.aa/A:
The view named intview defines an alternative response, which is used when a query comes in from 127/8, as defined in the access-control-view statement:
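Putting the pieces together, a minimal unbound.conf sketch along these lines (the addresses returned are my choice for illustration):

```
server:
    access-control-view: 127.0.0.0/8 intview

    local-zone: "my.aa." static
    local-data: "my.aa. IN A 192.0.2.1"

view:
    name: "intview"
    local-zone: "my.aa." static
    local-data: "my.aa. IN A 127.0.0.42"
```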
There may be multiple view clauses, and options from views matching an access control statement will be used and override global options. On the other hand, global options are used if no matching view is found.
It doesn’t appear to be possible to use views other than for local data, which is fine, if you ask me. :-)
The only real difference, basically, is that the Homie class accepts a JSON file with configuration values (e.g. broker address, username, password, etc.) which can be overridden by environment variables.
I think this is going to be very useful for those who want Homie on ESP8266 and who use, say, sensors or actuators on a Linux board with Python.
You know the drill: you provision a server with a master zone, and then you have to hop over to all secondary servers and add the slave zone to their configuration. You probably do that with some form of automation, or you use something slightly convoluted like what we’ve discussed previously in automatic provisioning of slave DNS servers. If your master and slave servers are BIND, you’re in luck: catalog zones will automate this for you within the BIND code itself: there’ll no longer be a need for “hacking” this to accomplish automatic provisioning of slave BIND servers.
Catalog zones are scheduled for release in BIND 9.11, but Evan Hunt graciously let me have a peek at a preview of the code, and I must warn you: first, this is a work in progress and hasn’t been released yet, and second (and more importantly), what I’m going to show you is the little I know of catalog zones. Be warned on both accounts! Update: Catalog zones are now in BIND 9.11.0a3.
Catalog zones are based on an Internet-Draft called draft-muks-dnsop-dns-catalog-zones-01 which describes a method for automatic synchronization of zones among primary and secondary servers. The way this works is that a zone contains the names (and optional metadata) describing which zones are to be used on slave (secondary) servers; these zone names in a catalog are called member zones. The secondary server obtains a copy of this catalog via zone transfer (AXFR/IXFR) and uses it to update its internal catalog of the zones it ought to have; once it detects a change, it adds and slaves a member zone or deletes it, depending on whether the member zone was added to or removed from the catalog.
More concisely: we add one or more member zones to the catalog zone which is transferred to its slaves where it triggers the creation (or removal) of the member zones.
Let’s see how this works in practice with a preliminary copy of BIND which supports this.
Primary and secondary servers
On my primary test server, I add the following to named.conf:
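Something along these lines (the catalog zone name catalog.example is my choice; 192.168.1.189 is the secondary):

```
options {
    allow-new-zones yes;
};

zone "catalog.example" {
    type master;
    file "catalog.example.db";
    allow-transfer { 192.168.1.189; };
    also-notify { 192.168.1.189; };
};
```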
Pay attention to the allow-new-zones directive. The catalog zone proper is mostly empty: mname, rname and NS are irrelevant, and the relative version label specifies the catalog zone format version:
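Roughly like this, in my case (mname, rname, and the NS target are placeholders; the TXT value at the relative label version follows the draft’s format version):

```
$TTL 3600
@        IN SOA  invalid. invalid. 1 3600 600 86400 3600
         IN NS   invalid.
version  IN TXT  "1"
```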
On a secondary (192.168.1.189) I have the following configuration:
Here we also add allow-new-zones so that the secondary can create member zones on the fly, and we use the new catalog-zones stanza to define the primary server which holds our catalog. The default-masters statement in the catalog-zones stanza defines the default masters for its member zones.
The optional zone-directory in “catalog-zones” allows master files for slaves provisioned by catalog zones to be stored in a directory other than the server’s working directory.
If in-memory is set to yes, local copies of master files on slaves will be stored in memory only (not on the file system).
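A sketch of the secondary’s named.conf under these assumptions (192.168.1.2 standing in for my primary’s address, and names as in my example):

```
options {
    allow-new-zones yes;

    catalog-zones {
        zone "catalog.example"
            default-masters { 192.168.1.2; }
            zone-directory "catzones"
            in-memory no;
    };
};

zone "catalog.example" {
    type slave;
    file "catalog.example.db";
    masters { 192.168.1.2; };
};
```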
Adding a new member zone
On the primary server I add a new zone, either dynamically with rndc addzone or manually; I’ll use the former:
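Along these lines (the zone file name is illustrative):

```
rndc addzone example.org '{ type master; file "example.org.db"; };'
```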
From this point onwards, my primary server is ready to serve this master zone, but I want the secondary to serve the zone as well. Here’s what I do:
1. I create a hash of a value. Basically any value should do, but it must currently be a valid BIND “nzf” hash. (The pre-alpha code I have does a little oops if it isn’t – I’m reporting that as we speak.) ISC is considering loosening this rule to allow me to use any label I wish, but for interoperability with possible future implementations which comply with the draft, the hash is preferable.
2. I then add a record to the catalog zone in which I name the member zone, bump its serial, and reload the zone.
I run the above (thanks Witold!) to create the hash:
Another way of creating the hash (thanks, Peter) is:
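The computation itself is simple: as far as I understand the draft, the member zone’s label is the hexadecimal SHA-1 digest of the zone’s name in DNS wire format. A Python sketch of that scheme (my assumption of how the label is derived):

```python
import hashlib

def catalog_label(zone: str) -> str:
    """SHA-1 of the zone name in DNS wire format (per my reading of the draft)."""
    wire = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in zone.rstrip(".").split(".")
    ) + b"\x00"  # the empty root label terminates the name
    return hashlib.sha1(wire).hexdigest()

print(catalog_label("example.org"))
```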
I now use that hash for step 2 and add a record containing the member zone:
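The record looks something like this, with <hash> standing in for the value created in step 1:

```
<hash>.zones.catalog.example. IN PTR example.org.
```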
When I reload named on the primary (rndc reload), I watch the transfer logs; they look very promising:
Note how first the catalog zone was transferred, and then our new example.org zone.
So, what’s on the secondary?
The __catz__* file contains a copy of the slaved zone.
If I delete a member zone record from the catalog the process is reversed, and the secondaries remove the zone completely:
BIND can use metadata on a zone which is transferred with the catalog: for example, different master servers for a particular member zone. So, on my primary server, I can add the following to the catalog: the first record specifies a member zone, and the second record the master(s) for that zone.
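Per my reading of the draft, that would look something like this (<hash> is the member zone’s label, and the address is illustrative):

```
<hash>.zones.catalog.example.          IN PTR example.net.
masters.<hash>.zones.catalog.example.  IN A   192.168.1.190
```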
When the secondaries get the catalog, this member zone is created and configured to obtain a copy of its data from the specified master server.
Another interesting feature is the possibility of setting ACLs on particular member zones within the catalog. This is done using Address Prefix Lists (RFC 3123):
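Something like this, if I understand the mechanism correctly (<hash> again being the member zone’s label; address family 1 in the APL rdata means IPv4):

```
allow-transfer.<hash>.zones.catalog.example. IN APL 1:192.168.1.10/32 1:192.168.1.20/32
```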
The above example would allow transfers from the two addresses only.
Catalog zones may also carry TSIG signatures for member zones; this is described in the BIND ARM.
It goes without saying (so why am I saying it?) that the catalog zone can be updated dynamically; this makes provisioning easy and ensures the SOA serial number is bumped.
We should probably all wish for other Open Source DNS server implementations to embrace catalog zones and add support for them in order to increase interoperability between DNS server software brands.
I took the plunge and decided to “design” and create a small PCB intended to keep together neatly the bits I want on a couple of sensors.
Writing a book was, for me, a lot more work than designing a small printed circuit board (PCB), but even so, the design took a few hours to create, particularly since I wanted to squeeze everything as closely together as possible to save cost on the manufacturing. The book-writing gave me a lot more satisfaction, but I’ll admit to being tickled when I received a thin envelope with two very small PCBs in it.
I suppose most of you will snort at seeing that, as it’s basically just a holder for a few integrated parts and not a real board design, but that doesn’t bother me at all; snort as much as you like.
I created these using Fritzing and used their (not inexpensive) Fritzing Fab PCB factory to produce them. They’re located near Berlin, so I was hoping for a quick turnaround, which I got. The PCB has a Wemos-D1 ESP8266 on it with a DS18B20 digital temperature sensor and a digital lux meter thing which flips when luminosity goes below or above a threshold set on it. And the PCB works.
The firmware running on the Wemos-D1 mini is based on Homie because I’ve had very good experiences with it, and it’s rock stable.
There will probably be a second iteration of the Homie/LT PCB. For one, the big fat red LED will go away (yes, I was told it’d be idiotic, but I wanted blinkenlights!), and we’ll probably add a simple LDR instead of the digital light sensor as the latter is very tall and gives the whole thing a rather ungainly look.
Apropos Homie: I decided to create a small Python utility called homie-ota which comes with a really retro-looking, vintage Web interface on which you can see an inventory of Homie devices in your network, upload new firmware binaries, and issue an OTA (Over the Air) request via MQTT for a device to come and get new firmware. I’ve been laughed at for the look-and-feel of the Web pages, but Ben has nevertheless been very supportive and has provided a ton of features and patches.
Another thing which is running Homie in the house is a small WiFi IoT Relay board which I had to try: it costs $6 including enclosure and postage & handling, and it switches two 220V outputs.
Nathan Chantrell wrote an in-depth review about this and a second module while the Electrodragons were en route to me. To cut a long story short, I don’t regret this purchase, and the devices appear to be safe enough for the load I’m running them on (a lightbulb or two).
The small relay board I described above is well suited to be integrated into my OpenHAB installation which I do via MQTT of course. This items file picks up one of the relays and allows me to switch it:
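Roughly like this, using the OpenHAB 1.x MQTT binding syntax (the broker name and topics here are placeholders for my actual setup):

```
Switch Relay1 "Relay 1" {mqtt=">[broker:home/relay/1/set:command:*:default],<[broker:home/relay/1:state:default]"}
```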
As it’s all MQTT-controlled, I can switch a relay either by pressing the switch in the OpenHAB UI or with an MQTT publish:
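For example (topic and payload are placeholders; they depend on what the firmware expects):

```
mosquitto_pub -h localhost -t 'home/relay/1/set' -m 'ON'
```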
It’s a huge bit of fun to play with these ESP8266 modules, and at their extremely low cost, there’s not much you can do wrong.
Is it possible to create a miniature and very inexpensive GPS online and offline tracker which is OwnTracks-compatible? Yes, it is possible.
During a bout of craze^W inspiration, I obtained a u-blox Neo 6M GPS module (EUR 9.30) and connected it to a Wemos-D1 mini ESP8266 module (EUR 3.35). Within the hour, I was happily publishing OwnTracks JSON location messages to a local broker over WiFi. The device is so small I named it Pico. (The rubber bands you see in the photo below are part of the cough design cough to make it vehicle-ready, and the LED is simply for additional blinkenlights of course. The black box is a USB battery pack.)
The idea would have been to use mobile phone WiFi tethering or a MiFi to take the result along in a vehicle, but that would have been a bit boring. I learned during my experience with Homie that the ESP8266 has an SPIFFS file system, and you can probably guess what’s going to happen.
When the Pico loses connection to its MQTT broker or disconnects from WiFi, it continues in offline mode, recording to the file system. Once a minute it attempts to reconnect, and if it succeeds, it publishes what it’s collected so far to the configured MQTT broker. This appears to work quite reliably.
The only slight handicap is that the PubSubClient MQTT library has a default maximum MQTT message size of 128 octets, which is too little for the information we want in OwnTracks JSON. (We don’t just publish lat and lon, but also velocity, altitude, distance travelled, and a few other bits.) We could easily change that definition in PubSubClient, but in the interest of portability and of maximizing the number of offline location publishes, I opted to use OwnTracks CSV, which we developed for our Greenwich devices and which is already fully supported by the Recorder. This means that with a 3MB file system on the ESP8266 we can store in excess of 60,000 points.
The CSV content may look a bit strange because we hexalize (is that a word?) the time stamp, unfloat (is that also a word?) lat and lon (we save the space of two decimal points) and do a few other optimizations (in the interest of conserving valuable GSM data).
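To illustrate the flavour of those optimizations (this is a toy sketch of the idea, not the actual OwnTracks CSV layout):

```python
def encode_point(tst: int, lat: float, lon: float) -> str:
    """Toy sketch: hex epoch timestamp, coordinates as integer micro-degrees.

    NOT the real OwnTracks CSV format -- just an illustration of why
    "hexalizing" and "unfloating" shave several bytes off each record.
    """
    return "%X,%d,%d" % (tst, round(lat * 1e6), round(lon * 1e6))

# A decimal epoch timestamp is 10 digits; in hex it's 8. Dropping the
# decimal point from lat/lon saves another byte each.
print(encode_point(1467640000, 48.856830, 2.294520))
```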
The Pico publishes (or serializes to the file system if it’s offline) a location when it detects it has moved mindist meters (100 by default) and a 5-second interval has elapsed. When it’s stationary, it publishes a ping location (that’s what we call it) once an hour, and on booting it publishes a first location to tell us it’s alive.
A challenge would now be to take a EUR 4.40 GSM/GPRS module and allow the Pico to be a fully capable online tracker which publishes MQTT over the Internet via GPRS. Any takers?