When Ton Kersten asked me a few weeks ago whether I’d like to prepare a home automation workshop to be held in Utrecht, I quickly said yes.

It’s been four and a half years since I started using openHAB, and I haven’t regretted it once. The system has been very reliable, and I’m still running an early 2016 version. Why upgrade?

Meanwhile a lot has happened in the world of openHAB including a few very large new releases, so there are a lot of new things for me to learn and most of it is fun. Most of it.

I’m having a lot of fun putting together a nice demonstration environment for the workshop, and I’m using Ansible to set up virtual machines participants will use in the labs. Apropos labs I wanted participants to be able to connect a real physical switch and a real lamp, but investing in 20 Homematic CCU2 or in 20 Z-Wave sticks would be prohibitive, so we arrived at an alternative and fun solution:

These are Wemos D1 mini (ESP8266) boards fitted with button shields (the physical button) which Ton soldered together, and they have an on-board LED (the physical lamp). The devices will be speaking MQTT with openHAB (yes, we’ll discuss MQTT as well), and the firmware’s been flashed onto them. I’ve just completed writing all the exercises we’ll be doing with a number of different openHAB bindings.

It’s impossible to cover all details of what openHAB has to offer, but I believe we’ll get to know most of it. Also, my focus will be on keeping data within the confines of your house/office; we’ll touch upon external services, but the Intranet of Things is what it should be. :-)

Care to join us? The workshop will be in Utrecht (that link contains the description as well) and in the English language, and I can tell you I am very much looking forward to that event.

Let there be blinkenlights!

openHAB :: 23 Jun 2018 :: e-mail

A few years ago I, almost literally, grabbed a POS (Point of Sale) pole-display out of a box of equipment which was on its way to the dumpster. I have a tiny bit of history with POS systems and thought it’d be cute to own one, without even knowing whether it worked or not.

I plugged it into a USB socket and it lit up, and I was able to echo something to it, so I determined it was “worth keeping”™ and stored it safely.

I stumbled over the display on the weekend during a bit of cleaning up and decided it’d be a practical addition to the bunch of toys I’m assembling for the openHAB workshop I’m preparing. As I’m using an OpenBSD laptop with all the things I need, I plugged it into that.

To cut a long story short, I couldn’t get it to work, but wasn’t quite sure what the reason was; my suspicion was that the required device wasn’t being created in the OS.

I asked, and as that link shows, within 24 hours I had an operating system kernel patch which added the necessary bits to make the “LD220-HP” work on OpenBSD.

$ dmesg | tail -2
uplcom0 at uhub3 port 2 configuration 1 interface 0 "Prolific Technology Inc. USB-Serial Controller" rev 1.10/3.10 addr 4
ucom3 at uplcom0

Ten minutes later, the small C program which opens the cuaU3 device, subscribes to MQTT and prints messages on a particular topic onto the display was assembled and running.
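For illustration, the core of that C program can be sketched in Python; the assumed display geometry is 2 rows of 20 characters, and the real program naturally does the MQTT subscribe and the write to /dev/cuaU3 in C:

```python
# Sketch only: format an incoming MQTT payload for a 2x20 pole display.
# The geometry (2 rows, 20 columns) is an assumption about the LD220-HP.

def display_lines(payload, width=20, rows=2):
    """Split a message into fixed-width, padded lines for the display."""
    text = payload.replace("\n", " ")
    lines = [text[i:i + width] for i in range(0, width * rows, width)]
    return [line.ljust(width) for line in lines]

if __name__ == "__main__":
    for line in display_lines("Hello from MQTT"):
        print(repr(line))
```

The padding matters: writing a full row of spaces is what clears the remains of the previous, possibly longer, message.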

And the best part? The diff has been checked in, so it’ll be in the next version of OpenBSD.

Open Source: amazing!

OpenBSD :: 19 Jun 2018 :: e-mail

I’m writing this because Michael recently tweeted a link I looked at. I shivered so hard that I want to, without shaming or blaming, attempt to improve on what I saw.

The original poster was using a PAM (Pluggable Authentication Module) to notify (via Pushover) that a login had happened on a machine. Not a bad idea at all, and I learned about pam_exec. However, the notification dance was being done with a shell script which invoked curl to do the protocol thing, and Pushover’s credentials were in the shell script. Those things are what caused some ice-cold monster to walk down my spine, for a few reasons:

  • hard-coded credentials for a service like Pushover in a bunch of scripts on a whole number of systems? That’s pretty awful, nay, horrid, unless you can at the very least manage those systems automatically with, say, Ansible
  • the script I saw used curl to connect to a service. That’s fine and dandy, but what if the service isn’t available? SSH will hang. You don’t want SSH to hang on you. Well you might not mind, but I mind a lot.
  • In order to avoid that, the script backgrounded that curl portion with nohup. That could mean DDoSsing your own box by having a whole number of backgrounded processes hanging around. I’ve seen it happen.

pushover screenshot

What the original poster wanted was an alert of an SSH login to his machines. I think I can solve that with a few bits and pieces of code. Admittedly there may be a larger total of moving parts, but because of how I use them, we’re centralizing those parts, and I’m going to keep the PAM configuration (and the original alert of an SSH login) to an absolute minimum.

Here’s what’s going to happen:

  1. A very small compiled C program is going to produce a UDP datagram informing of a login. That datagram carries a short payload with the username of the person logging in, the source address, etc. That UDP datagram could possibly get lost (tough luck), but it’s not likely to in our LAN. Furthermore, if the UDP listener isn’t there, the login isn’t at all hindered or delayed; users won’t notice which is fine. For the record, if this program dies (even with a signal 11) it won’t hurt; PAM will silently ignore it which is exactly what I want to happen.
  2. A UDP listener gets the login datagrams and processes them. How it does so is up to you. I’ll have them published to MQTT.
  3. If you’ve been here before, you know what’ll then happen. :-) What I do here is to use what I once called twitter for my network: MQTT.
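Step 1, the client side, can be sketched in Python for illustration (the real hare is fifty-odd lines of C; pam_exec exports the login details, e.g. PAM_USER and PAM_RHOST, in the environment, and port 8035 matches the hared shown below; treat the rest as assumptions):

```python
#!/usr/bin/env python
# Sketch of what the C hare client does: package login details as JSON
# and fire a single UDP datagram at hared.

import json
import os
import socket
import time

def hare_payload(user, rhost, service="sshd"):
    """Build the JSON payload describing one login."""
    return json.dumps({
        "tst": int(time.time()),           # epoch timestamp
        "hostname": socket.gethostname(),
        "user": user,
        "service": service,
        "rhost": rhost,
    })

def send_hare(payload, server="127.0.0.1", port=8035):
    # Fire and forget: if nobody is listening the datagram is simply
    # lost, which is exactly the property we want for a PAM hook.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.sendto(payload.encode("utf-8"), (server, port))
    s.close()

if __name__ == "__main__":
    # pam_exec exports PAM_USER, PAM_RHOST, PAM_SERVICE in the environment
    send_hare(hare_payload(os.environ.get("PAM_USER", "nobody"),
                           os.environ.get("PAM_RHOST", ""),
                           os.environ.get("PAM_SERVICE", "sshd")))
```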

So, fifty-odd lines of C give the original version of the hare utility, which is what we’ll be invoking from PAM; we do so by configuring it in /etc/pam.d/sshd (tested on CentOS Linux and on FreeBSD 10.2) as

session    optional     pam_exec.so /usr/local/sbin/hare

(Hare, you ask? I believe Karanbir once told me, many, many years ago, that the CentOS project had a Nabaztag rabbit which alerted them of logins. I loved that story, and the system I describe here is dedicated to that rabbit; I’m calling it hare. Or is it because I saw something hairy? :-))

So whenever somebody uses the SSH subsystem (ssh, scp, sftp) our hare program will talk to its daemon, hared, on the specified server. This hared is so small I have it right here:

Update: meanwhile it’s turned into a Pip-installable package, and Juzam’s made a fully-compatible Go version of the daemon :-)

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import paho.mqtt.publish as mqtt
import socket
import json

__author__    = 'Jan-Piet Mens <jp()mens.de>'

server_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_socket.bind(('', 8035))

while True:
    message, address = server_socket.recvfrom(1024)
    host, port = address

    ''' ensure we have JSON '''
    js = json.dumps(json.loads(message))

    mqtt.single("logging/hare", js, hostname='')

All it does is wait for UDP datagrams, and it then packages each up as a JSON string which it publishes to MQTT. Update: for a number of reasons I’ve made the code configurable etc., and it is now here and installable via PyPI.

$ mosquitto_sub -v -t 'logging/hare'
logging/hare {"tst": 1522152142, "hostname": "zabb01", "user": "jjolie", "service": "sshd", "rhost": ""}
logging/hare {"tty": "ttyv0", "service": "login", "hostname": "fbsd103", "user": "root", "tst": 1522161689, "rhost": "<unknown>"}
$ cat /var/log/ssh-logins.log
login via sshd by jjolie on zabb01 from at 2018-03-27 14:02:22
login via login by root on fbsd103 from <unknown> at 2018-03-27 14:07:45

In order to actually process those messages we have mqttwarn, which is configured to produce a file with logins (as shown above; note how the login service shows up as well, since I’ve also configured that to use hare in PAM), to notify via Pushover (as shown at top), and which can easily use any number of the meanwhile 65 supported services to process these messages. It’s also mqttwarn which translates the epoch timestamp from the JSON payload into a human-readable time.

Here’s the configuration I’m using:

[defaults]
launch	 = log, pushover, file, smtp
functions = 'pamfuncs.py'

[config:file]
append_newline = True
overwrite = False
targets = {
   'mylog'     : ['/var/log/ssh-logins.log'],
   }

[config:pushover]
targets = {
    'pam'       : ['xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'],
    }

[config:smtp]
server  =  'localhost:25'
sender  =  "MQTTwarn <jpm@localhost>"
username  =  None
password  =  None
starttls  =  False
targets = {
    'admins'    : [ 'rooters@ww.mens.de' ],
    }

[logging/hare]
targets = pushover:pam, file:mylog, smtp:admins
alldata = moredata()
title = SSH login on {hostname}
format = login via {service} by {user} on {hostname} from {rhost} at {tstamp}
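The functions = 'pamfuncs.py' and alldata = moredata() lines refer to a custom function; here’s a guess at what it might look like, a sketch which derives the human-readable tstamp used in the format string from the epoch tst (the (topic, data, srv) signature follows mqttwarn’s custom-function convention):

```python
# A guess at pamfuncs.py: an mqttwarn "alldata" function which adds a
# human-readable `tstamp` derived from the epoch `tst` in the payload.
import time

def moredata(topic, data, srv=None):
    tst = int(data.get('tst', time.time()))
    return dict(tstamp=time.strftime('%Y-%m-%d %H:%M:%S',
                                     time.localtime(tst)))
```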

It’s easy for us to add additional targets without touching PAM configuration on any machine, and without changing our hare system: we add a target to mqttwarn, and there you go: you have mail:

Tue, 27 Mar 2018 14:02:22 +0200
From: MQTTwarn <jpm@localhost>
To: rooters@ww.mens.de
Subject: SSH login on zabb01
X-Mailer: mqttwarn

login via sshd by jjolie on zabb01 from at 2018-03-27 14:02:22

So what are the benefits or pitfalls of doing it the way I did?

  • Credentials for the Pushover service are securely stashed on a single machine, the machine which runs mqttwarn. If we want to or have to change them, e.g. because they got compromised, we do so centrally.
  • All nodes to which people login talk to a single machine in my network. It is from that one system, again the one running mqttwarn, that we access the external service.
  • If we want to rip out Pushover and alert/notify with some other service, or if we want to add some additional type of notification we can do this centrally.
  • Above all, we ensure that the SSH login service on the nodes will not hang or somehow be delayed.
  • One (slight) disadvantage is that we have to create architecture-dependent versions of hare. We have to distribute/install those, but that we easily do with our configuration management system.

I explain how a lot of the mqttwarn stuff works in how do your servers talk to you?. I hope some of these technologies make the lives of your systems safer and your life easier.

SSH, MQTT, daemons, and rabbit :: 25 Mar 2018 :: e-mail

For a long time I’ve been annoyed that VirtualBox issues a different DHCP lease to a virtual machine guest after a while, and there doesn’t seem to be any way to change that within VirtualBox itself. I asked on Twitter, and got a response which triggered an aha!

virtualbox dhcp disable

I disabled VirtualBox’ DHCP for the interface I’m interested in, and configured a dnsmasq on my notebook.
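The dnsmasq side needs very little; a minimal sketch of the sort of configuration I mean (interface name, addresses, and the pinned lease are assumptions):

```ini
# /etc/dnsmasq.conf (sketch): serve DHCP on the host-only interface
interface=vboxnet0
bind-interfaces
dhcp-range=192.168.56.20,192.168.56.120,12h
# pin a fixed address to a guest's MAC so its lease never changes
dhcp-host=08:00:27:7d:aa:db,cen7pdns,192.168.56.10
```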


Minutes later I saw this on the console upon launching dnsmasq and then a VirtualBox guest I already had:

dnsmasq: started, version 2.78 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack no-ipset auth no-DNSSEC loop-detect no-inotify
dnsmasq: setting --bind-interfaces option because of OS limitations
dnsmasq-dhcp: DHCP, IP range --, lease time 12h
dnsmasq: reading /etc/resolv.conf
dnsmasq: using nameserver
dnsmasq: using nameserver
dnsmasq: read /etc/hosts - 3 addresses

dnsmasq-dhcp: DHCPDISCOVER(vboxnet0) 08:00:27:7d:aa:db
dnsmasq-dhcp: DHCPOFFER(vboxnet0) 08:00:27:7d:aa:db
dnsmasq-dhcp: DHCPREQUEST(vboxnet0) 08:00:27:7d:aa:db
dnsmasq-dhcp: DHCPACK(vboxnet0) 08:00:27:7d:aa:db cen7pdns

Lovely. It works.

VirtualBox and DHCP :: 07 Mar 2018 :: e-mail

After three (or was it almost four?) years of using Slack, we took the plunge and set up our own instance of Mattermost. The reasons for doing this include wanting more control over our data and wanting an unlimited history which Slack, as a hosted service, offers only to paying customers. This is more than fair enough – it’s not their fault that we’re too stingy. Apropos stingy: Mattermost exists in three editions – we chose the Team Edition; guess why.

Web UI

If you know how to wield Slack, you know how to use Mattermost. (Oh, please don’t mind the awful colors above – this is my test installation and I need severe optical distinction in order to not mistake the installations.) There are few differences, if any. Mattermost’s use of Markdown appears to be more comprehensive, in particular because its Webhooks support Markdown.

Installing Mattermost is easy thanks to the good documentation they provide, which explains, step by step, what I have to do to install Mattermost on a machine. I chose to use PostgreSQL because I recall having read that it’s Mattermost’s primary database candidate, and because it allows me to use PostgreSQL – a reason sufficient on its own. I chose to loosely follow the advice given regarding the location of config.json, as that seemed a sensible thing to do. (config.json is Mattermost’s central configuration file which is reloaded on change.) If Ansible’s your drug, Pieter Lexis created an Ansible role for installing Mattermost on Debian/Ubuntu, and there’s also a playbook which does that and more.

I can create as many teams as I want on a server, and each team can have as many channels as I want. In the Team Edition users authenticate with password and I enforced e-mail verification. (Other editions offer 2FA and LDAP.)

Mattermost users can upload files (images, code snippets, etc.) which have to be stored somewhere. By default a configurable directory on the local file system is used, but Mattermost’s system console allows me to configure S3-compatible storage such as Minio.

Webhooks, API, Websocket, CLI, etc.

One of the things I like most about Slack is its integrations, and lo and behold, Mattermost has these as well: incoming and outgoing Webhooks as well as slash commands. Lovely.

Also very powerful is the Websocket API; there’s a Python3 driver which works very well, and next to that is Mattermost’s API with which I can create users, get their details, enumerate posts, create posts, etc. The following example using curl and jo shows how I can add a post from the command line:

json=$(jo channel=cartoons1 \
          username="my-script" \
          icon_url="" \
          text='Ha, this is _just_  an example using `curl`, :tada:')

curl -H 'Content-Type: application/json' \
     --data "$json" \

posted with curl

(For a much more flexible solution see 42wim’s mattertee.)
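If you’d rather not shell out to curl, the same incoming-webhook post can be sketched with nothing but Python’s standard library (the webhook URL is a placeholder; the JSON fields mirror the jo invocation above):

```python
# Post to a Mattermost incoming webhook using only the stdlib.
# The hook URL below is made up for illustration.
import json
import urllib.request

def webhook_request(hook_url, channel, text, username="my-script"):
    """Build the POST request for a Mattermost incoming webhook."""
    body = json.dumps({"channel": channel,
                       "username": username,
                       "text": text}).encode("utf-8")
    return urllib.request.Request(hook_url, data=body,
                                  headers={"Content-Type": "application/json"})

if __name__ == "__main__":
    req = webhook_request("https://mm.example.net/hooks/xxxxxxxx",
                          "cartoons1",
                          "Ha, this is _just_ an example, :tada:")
    # urllib.request.urlopen(req)   # uncomment to actually post
```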

Programs which use Mattermost’s API must authenticate to the service, and they can do so either with session tokens, which expire, or with Personal Access Tokens, which I create in my account preferences and which don’t expire until I revoke them. Additionally, Mattermost can act as an OAuth 2.0 provider.

Masses of messages

One of the channels we have is reserved for Nagios/Icinga-type notifications. One thing I wanted to be able to do is to delete and purge those messages; I don’t see why I need to know weeks later that something was offline for a moment. However, if I delete a message, either interactively or via the API, Mattermost soft deletes it; the message is marked as deleted with a time stamp, but it remains in the database.

So I went in search of an API to physically remove these messages, but it doesn’t exist. The solution? Use, say, the API to find the posts I want to remove, “delete” them using said API, and then use an SQL DELETE to purge:

DELETE FROM posts WHERE deleteat <> 0;
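The API half of that dance can be sketched with stdlib Python (server URL, post ID, and token are placeholders); the SQL DELETE above still does the actual purge:

```python
# Soft-delete one post via Mattermost's API v4 before purging in SQL.
# Server, post ID, and token below are placeholders for illustration.
import urllib.request

def delete_post_request(base_url, post_id, token):
    """Build the DELETE /api/v4/posts/{post_id} request (soft delete)."""
    return urllib.request.Request(
        "%s/api/v4/posts/%s" % (base_url, post_id),
        method="DELETE",
        headers={"Authorization": "Bearer " + token})

if __name__ == "__main__":
    req = delete_post_request("https://mm.example.net", "abcdef123", "s3cr3t")
    # urllib.request.urlopen(req)   # uncomment to actually delete
```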

Mattermost at a console

Mattermost has a Web UI and some mobile and fat clients, but what does a person do with just a terminal at his/her disposal? Use either matterhorn or Irssi or your favorite IRC client with matterircd.

seed in Matterhorn

What you see in the screenshot above is matterhorn showing what the first Web UI screenshot shows. The program has some really cool features including scripts – just shell scripts which are given the text I enter on stdin and the stdout they produce is posted. Matterircd on the other hand is an IRC to Mattermost gateway written also by 42wim: you connect it to your remote Mattermost installation and talk to it via your IRC client.


Do I regret leaving Slack? Not really, even though their mobile apps are quite a bit more polished than Mattermost’s are – a result of development effort obviously. I now get a warm and quite fuzzy feeling knowing that we have control over our data, how we back it up, and what we do with it. And I’m confident that (other than the NSA) no third party has it.

Apropos third party: while it’s possible to access dozens (or hundreds?) of integrations using an external service called Zapier, we will not, as that defeats the purpose of wanting to be the sole owners of said data. Similarly we’ve been discussing mobile notifications, for which we could either set up mobile push or do it ourselves; we haven’t made a final decision yet.

Do it yourself push

Do it ourselves, you ask? Yes, that’s possible by creating a notification endpoint which Mattermost uses whenever it’s about to notify a mobile device. The post I created from the command line earlier is, in this example, pushed to MQTT:

$ mosquitto_sub -v -t 'mm/#'
mm/_notif my-script in cartoons: Ha, this is _just_  an example using `curl`, :tada:

The way this happens is that Mattermost notifies a Web service I create which obtains the message and disposes of it in any way I want:

#!/usr/bin/env python

from bottle import run, request, post
import json
import paho.mqtt.publish as paho

__author__    = 'Jan-Piet Mens <jp()mens.de>'

@post('/')
def post1():

    data = json.loads(request.body.read())

    paho.single("mm/_notif", data['message'])

run(host='', port=8864)

If you’re not interested in mobile push, there’s always e-mail: when users are away or offline they can choose to be notified of new content by e-mail.

If I’d known about Mattermost before, I’d have migrated earlier.

Social :: 30 Jan 2018 :: e-mail
