Most of us monitor what our servers do, in some way or another. We might use Icinga/Nagios for some tasks, or any of what feels like a myriad of different tools for others. I assume that, in most cases, people use e-mail, SMS, etc. to send out warning or critical alerts when they occur.

Each of these monitoring services needs to be configured to send alerts via a particular channel (e.g. SMTP, SMS) to particular users, and if the need to change these associations arises, most people have to reconfigure their software in order to do so. For Icinga/Nagios, there are "recipes" on getting particular notification systems configured; that works, of course, but it's painful, error-prone, and the configuration is very static.

Some specialized tools have other mechanisms. Ansible for example, has a slew of notification modules (some of which I even wrote), which can announce to your deployment team what's going on while it runs. Here again, each time you use a particular notification module in Ansible, you have to configure it to notify a particular service's account. What if that changes? Do you really want to change all the Ansible playbooks ...? No.

Let me show you what I mean by listing the source of an alert, its type, and its destination:

icinga       disks
icinga       mail queue
icinga       dnsbl          SMS:1234567890
ansible      playbook       twilio:john18

So, when Icinga reports a problem with a DNS blacklist, an SMS will be sent, and when Icinga warns about the mail queue, an e-mail to Jane is fired off.

Suppose we want to change that: in addition to having an e-mail sent off, we wish to alert a bunch of mobile phones via Pushover. Or we want to change the content of the message, etc. For most of us, that means chasing down every instance of the configured notification method and modifying or adding to it. No thanks. Really not.

Can we set up some sort of notification "bus" which services use to send notifications to, and which then decides, using a central config, what to do with those messages, maybe even reformat them on a per-destination basis? Yes we can.


We created that recently. It's called mqttwarn, and it's pretty easy to set up. All you need is an MQTT broker, which is a lightweight "bus" for your network. Your services then "talk" to that broker (i.e. send it messages). mqttwarn grabs those messages and dispatches them to a notification service (SMTP, NNTP, Twilio, Pushover, HTTP, ...), stores them in files or in MySQL, publishes them to Redis, pipes them to programs, etc. All messages, or only some (filtered by content, say).

I'll show you a small example to hopefully whet your appetite.

mqttwarn is configured in an ini-type file. The difficult part is setting up the so-called service targets, which are notification services. I'll configure mqttwarn to provide a target called smtp:jplocal which delivers via SMTP, and another one called pushover:icinga which delivers a message to my mobile phone via Pushover.

[defaults]
hostname  = 'localhost'
port      = 1883
; name the service providers you will be using.
launch    = smtp, pushover

[config:pushover]
targets = {
    'icinga'      : ['xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'],
    }

[config:smtp]
server    = 'localhost:25'
sender    = "MQTTwarn <jpm@localhost>"
starttls  = False
targets = {
    'jplocal'     : [ 'jpm@localhost' ],
    }

[monitoring/+]
targets = smtp:jplocal, pushover:icinga

When I launch mqttwarn, it will subscribe to the MQTT topic monitoring/+. (The + is a single-level wildcard which matches exactly one topic level.)
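As an aside, the wildcard semantics can be illustrated with a toy matcher (this is not mqttwarn's code, just a sketch of the matching rules):

```python
# Toy illustration of MQTT topic matching:
# '+' matches exactly one topic level, '#' matches all remaining levels.
def topic_matches(pattern, topic):
    p_parts = pattern.split("/")
    t_parts = topic.split("/")
    for i, part in enumerate(p_parts):
        if part == "#":                      # multi-level wildcard: done
            return True
        if i >= len(t_parts):                # topic is shorter than pattern
            return False
        if part not in ("+", t_parts[i]):    # literal level must match
            return False
    return len(p_parts) == len(t_parts)      # no trailing topic levels

print(topic_matches("monitoring/+", "monitoring/warning"))   # True
print(topic_matches("monitoring/+", "monitoring/a/b"))       # False
```

So monitoring/warning matches monitoring/+, but monitoring/a/b does not, because + spans only one level.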

If I then publish a message to my MQTT broker, mqttwarn will receive that message and act upon it. In this example I'll use a command-line utility to do so, but we'd normally configure our monitoring system (or Ansible, etc.) to publish via MQTT, and there are all manner of language bindings to accomplish that.

mosquitto_pub -t monitoring/warning -m 'Disk utilization: 94%'

As you've probably guessed, -t specifies the topic name, and -m the payload to publish.

Do we have e-mail?

Date: Thu, 03 Apr 2014 09:26:36 +0200
From: MQTTwarn <jpm@localhost>
To: jpm@localhost
Subject: mqttwarn
X-Mailer: mqttwarn

Disk utilization: 94%

Simultaneously, mqttwarn delivered the message via Pushover, and my phone alerts me accordingly:

[Screenshot: the mqttwarn notification on Pushover]

Coming back to Ansible, if I wanted to post that same notification from a playbook, I would use the mqtt notification module. (The strange-looking quotes are necessary to protect the colon in the string.)

- hosts: all
  tasks:
    - local_action: 'mqtt topic=monitoring/warning
                     payload="Disk utilization: 94%"'

If I later change my mind and want to modify one of the targets, say, I want to add another e-mail recipient, store messages, etc., I just modify mqttwarn's configuration file; no need to change the system which actually initiated the notification alert (i.e. I don't have to alter my Icinga configuration).

Looking at Icinga (or Nagios), we can very easily add something similar. Consider your fugly notification command:

define command{
    command_name    notify-service-by-email
    command_line    /usr/bin/printf "%b" "***** Icinga *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
    }

and now look at mine:

define command{
    command_name notify-service-by-mqtt
    command_line /usr/bin/notify-by-mqtt
    }

The rather simple notify-by-mqtt program extracts the values I'm interested in from the environment (note: requires setting enable_environment_macros=1 in icinga.cfg) and produces a JSON payload which is published via MQTT to mqttwarn. The payload looks like this:

{
    "_type": "icinga",
    "date": "2014-04-04",
    "eventstarttime": "1396621975",
    "hostdisplayname": "localhost",
    "hostname": "localhost",
    "hoststatetype": "HARD",
    "servicedesc": "JPtest",
    "servicedisplayname": "JPtest",
    "serviceoutput": "file /tmp/f1: ENOENT",
    "servicestate": "CRITICAL",
    "servicestateid": "2",
    "shortdatetime": "2014-04-04 16:38:25",
    "time": "16:38:25",
    "timet": "1396622305"
}
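A minimal sketch of what such a notify-by-mqtt helper could look like (the real script isn't shown in this post; the macro selection below and the publishing call are assumptions):

```python
#!/usr/bin/env python
# Hypothetical sketch of a notify-by-mqtt helper: pick selected
# Icinga environment macros, build a JSON payload, and publish it.
import json
import os

# JSON keys and the Icinga environment macros they are taken from
# (available when enable_environment_macros=1 is set in icinga.cfg).
MACROS = {
    "hostname":      "ICINGA_HOSTNAME",
    "servicedesc":   "ICINGA_SERVICEDESC",
    "servicestate":  "ICINGA_SERVICESTATE",
    "serviceoutput": "ICINGA_SERVICEOUTPUT",
    "shortdatetime": "ICINGA_SHORTDATETIME",
}

def build_payload(environ):
    """Assemble the JSON payload from an environment-like mapping."""
    data = {"_type": "icinga"}
    for key, macro in MACROS.items():
        data[key] = environ.get(macro, "")
    return json.dumps(data, sort_keys=True)

if __name__ == "__main__":
    payload = build_payload(os.environ)
    # Publishing is omitted here; with the paho-mqtt package it would be
    # something like:
    #   import paho.mqtt.publish as publish
    #   publish.single("monitoring/warning", payload, hostname="localhost")
    print(payload)
```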

On the receiving end, mqttwarn can filter messages, and I can change their formatting on a per-service basis (e.g. I can be more verbose in an e-mail than in a tweet). mqttwarn can transform data on the fly, and it can apply templates to format the message.

Let me show you a simple example. I'll take the above mqttwarn configuration and add the title and format lines to it:

[monitoring/+]
targets = smtp:jplocal, pushover:icinga
title   = {servicestate} {servicedisplayname}
format  = {time}: {hostname}:{servicedisplayname} is {servicestate}: {serviceoutput}

mqttwarn attempts to decode JSON from the payload received over MQTT, and it can use the JSON elements to populate format-type strings. The result as seen on my phone via Pushover is like this:

Seen from bottom to top:

  1. our first example
  2. an Icinga alert with the raw JSON in it
  3. the final result with formatted title and body
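The substitution itself is plain Python string formatting; here's an illustration (not mqttwarn's actual source) using values from the Icinga payload above:

```python
# Illustration of how decoded JSON elements populate the title and
# format strings via Python's str.format() mechanism.
import json

title_fmt = "{servicestate} {servicedisplayname}"
body_fmt  = ("{time}: {hostname}:{servicedisplayname} "
             "is {servicestate}: {serviceoutput}")

payload = json.loads('''{
    "time": "16:38:25",
    "hostname": "localhost",
    "servicedisplayname": "JPtest",
    "servicestate": "CRITICAL",
    "serviceoutput": "file /tmp/f1: ENOENT"
}''')

title = title_fmt.format(**payload)
body  = body_fmt.format(**payload)
print(title)   # CRITICAL JPtest
print(body)    # 16:38:25: localhost:JPtest is CRITICAL: file /tmp/f1: ENOENT
```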

We've had a lot of very positive feedback on mqttwarn, and I'm confident you'll be able to put it to good use!

View Comments :: monitoring and notification :: 03 Apr 2014 :: e-mail

Do you know Usenet? I did, and I was an avid consumer of Usenet news which trickled in over UUCP, carried by a then tremendously expensive Telebit Trailblazer modem which cost me the equivalent of EUR 1300 in today's money. But we tried everything back then to squeeze every last bit through those telephone wires... I digress.


More than twenty (yes: 20!) years have passed for me since that time, and I'll admit to having assumed that Usenet news was just about dead. I was wrong. It's not dead; even though usage is perhaps on the decline, the volume of raw data carried over NNTP today is mind-boggling.

I have inherited a rather large Diablo newsfeeding and news-reading/posting environment at a customer site which used to cater for several million clients. Most of those have meanwhile gone away, and we're now trying to very heavily slim down the number of servers involved in the News operation.

Diablo was new to me (I was, at the time, very familiar with INN), so I got down and studied the (cough) documentation (cough), wading through sample configuration files, source code etc. until I felt I had a grasp of its inner workings.

Hah. Little did I know...

Diablo is typically used in large environments with specialized servers which act as so-called "feeders" and "readers", and that is the setup I find before me. But I wanted to slim all this down and put all the components on a single machine.

To cut a very long story short: I don't think I've ever given up on getting a piece of software to do what I expected it to do, but I almost did here. Most of the bits and pieces I wanted consolidated worked, but only "most" -- not all.

After a plea for help, Johan came to my rescue late last night and drummed up an example configuration which worked out of the box. Thanks again, Johan!

Meanwhile I'm getting quite familiar with Diablo, and I've done the following, for posterity:

  1. I've set up a Github repository with Diablo patched with most of the XS4ALL patches.
  2. I've added a new LDAP authentication plugin to Diablo which does away with the ridiculous "compare passwords" operation and which supports returning different so-called _readerdef_s.

The LDAP authentication plugin allows me to differentiate between, say, free users and paying customers; if a user has a particular LDAP attribute type with a particular value, that user is marked as "PAYING", and is then allowed, say, more newsgroups, etc.
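The decision the plugin makes can be sketched like this (illustrative logic only -- the actual plugin is part of Diablo and written in C; the attribute name and value here are made up):

```python
# Illustrative decision logic for the LDAP authentication plugin:
# pick a readerdef based on a user's LDAP attributes and return the
# '110 <readerdef>' reply described in the dreader.access comments.
def choose_readerdef(attributes):
    """Return '110 PAYUSER' for paying customers, '110 freeuser'
    otherwise (attribute name and value are hypothetical)."""
    if attributes.get("subscriptionStatus") == "PAYING":
        return "110 PAYUSER"
    return "110 freeuser"

print(choose_readerdef({"subscriptionStatus": "PAYING"}))  # 110 PAYUSER
```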

The configuration in dreader.access looks like this:

authdef chooseauth
   ldap    /news/etc/ldap.params

readerdef PAYUSER
  read                  yes
  post                  yes
  groups                grouppaylist
  idletimeout           900

readerdef freeuser
  read                  yes
  post                  no
  groups                groupfreelist
  idletimeout           300

# This `rcluster` readerdef will use my (JP's) new LDAP plugin
# to search for and authenticate users. The plugin returns a
# string in the form '110 yyyyyy'. The 'yyyyyy' is used to
# associate with ANOTHER readerdef. So, for example, if
# "110 PAYUSER" is returned for AUTHINFO, the user will be
# associated with the `PAYUSER' readerdef above.

readerdef rcluster
  read                  yes
  post                  no
  auth                 chooseauth

access  0/0  rcluster

The ldap.params file contains four lines of text with an LDAP URI, the search base, a DN of an entry allowed to search the directory and the latter's password:
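For illustration, the file might look like this (all four values are hypothetical placeholders):

```
ldap://127.0.0.1:389/
ou=customers,dc=example,dc=net
cn=searcher,dc=example,dc=net
s3krit
```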


The consolidation of the News infrastructure isn't complete, but I did breathe a sigh of relief late last night, when the bits started coming together!

The Ansible project has introduced a new feature called vault which allows us to encrypt configuration files with a symmetric AES key. The idea is you have playbooks or vars files which contain sensitive data, and you want to protect this data from prying eyes, say, when you check the files into a repository or generally share them.

In order to use these encrypted files, the ansible and ansible-playbook utilities have acquired a new command-line option called --ask-vault-pass which prompts for the secret key in order for Ansible to be able to decrypt and hence use the files while it does its "thing".

$ ansible-vault create jp.yml
Vault password:
Confirm Vault password:

[ ... edit file ... ]

$ cat jp.yml

$ ansible-playbook --ask-vault-pass jp.yml
Vault password:

This is useful as a first cut of the feature, but I'm disappointed, and the worst bit is: it's my fault.

I'm disappointed because we can use only one password on files which belong to a single playbook, unless I've misunderstood something. Also, this will not work in "pull mode" because there's no one there to enter the password. Update 2014-02-28: a --vault-password-file ~/.vault_pass.txt option was added in the meantime. I'm also disappointed because the whole file is encrypted, not just the individual values which I consider worthy of protection. At the time, I said:

I don't think encrypting vars files completely is useful: it makes things like grep and diff impossible to use.

I believe it was I who kicked off the discussion of a vault feature, after a number of people I spoke to had requested something able to protect sensitive data. (We also talked about this when we met in Antwerp at the Ansible day.) My original post to the Ansible mailing-list is dated June 2013, and a very long discussion ensued, with lots of good ideas being thrown around.

We closed my first pull-request, which contained a very early prototype implemented as a lookup plugin (if I recall correctly; it was maybe a Jinja2 filter ..), and further, very fruitful IMO, discussion arose.

What I was aiming for was an RPC-type system with an "agent" which is fed a bunch of named keys. At the moment the agent starts, or rather when a key is introduced to the agent, the operator is prompted for its password. From that point on, until the agent dies, Ansible invokes an RPC to the central agent in order to have it decrypt the data, and Ansible uses the clear text it receives from the agent to do whatever it must do.

What I'd also hoped for, was a possibility to encrypt just certain values, in order to keep the basic content of the YAML files legible. Something like this:

admin: Jane Jolie
dbpassword: @vault@"AHchx0a+G8mejs84tGxCNKxMFP7tM7Y7kl"
webservertype: nginx

Why's it my fault, you ask? I got sidetracked with a huge pile of work, and in spite of Michael (and several other people) asking me repeatedly about progress on the code, I just didn't get around to converting my proof of concept into decent code for inclusion in Ansible. Worse: I ignored requests to submit whatever snippets of code I already had.

I am very sorry about this.

View Comments :: Ansible :: 22 Feb 2014 :: e-mail

When Giovanni mentioned Pushover recently I was sold because of its support for iOS and Android. I had been using Prowl and a bit of NMA, but was unhappy with the distinct interfaces to those services. I quickly assembled a Python program to push notifications from MQTT to Pushover for my phone call detection in openHAB, and I thought "that's it". I was wrong.


A few hours after I'd finished, Ben modified the code to create similar programs: one for sending out notifications from MQTT via SMTP, another to notify XBMC. We then decided to consolidate these targets into a single program, and thought "that's it". We were wrong. ;-)

One thing led to another, and during a three hour drive home on Friday I dreamed up something better and more flexible. In a literal all-night session I created mqttwarn which, beware, has meanwhile mutated into a feature-packed monster.

mqttwarn subscribes to any number of MQTT topics (with wildcards) and hands incoming messages off to plugins you configure to handle them. For example, you could have a topic monitor/home which sends alerts to your e-mail address, and another called phone/calls which notifies your smartphone via Pushover.

Notifications are sent to targets. A target is a combination of a service (e.g. Twitter, Pushover, SMTP, XBMC, etc.) and an "account" on that service. So, for example, if you wish to notify both yourself and your spouse of missed phone calls, mqttwarn will do that. You could, at a later stage, also decide to log those calls to a file; no problem.
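In mqttwarn's ini-style configuration, that could look roughly like this (the keys, account names, and topic name are made up for illustration):

```ini
[config:pushover]
targets = {
    'me'     : ['application-key', 'userkey-me'],
    'spouse' : ['application-key', 'userkey-spouse'],
    }

[phone/calls]
targets = pushover:me, pushover:spouse
```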

mqttwarn will also attempt to decode incoming messages from JSON. If that succeeds, you can transform the message before it's sent out. This allows us, for example, to create interesting notifications for OwnTracks (formerly MQTTitude).

There's a wee bit of common ground between mqttwarn and Node-RED; with appropriate JavaScript knowledge you'd probably slap up something similar in no time. I couldn't.

Do have a look at mqttwarn; I hope you enjoy it.

PS: at this point in time, I'm obsoleting all the previous MQTT-to-"insert-something-here" programs I've ever written. :-)

When the phone rings in my small office, Frizzix tells me instantaneously who's calling, and I can decide whether to take the call or not. (I usually do, because few people phone me.) It's a nice utility, but I can neither automate it nor easily save the call logs, and doing that from the FritzBox Web interface is a nuisance. And when I'm not at home, I'm blissfully unaware that somebody called, but curious nonetheless: could I be notified of calls in "real-time"?

Enter openHAB.

My initial experiments with openHAB have been fruitful, and with a bit of help in understanding some of the more intricate combinations, I'm progressing in automating the Intranet of Things at Casa Mens.

openHAB has a FritzBox binding (there's also an Asterisk counterpart) which I can use to trigger switches on incoming calls. This binding updates an openHAB item when it detects a call on the FritzBox, useful for, say, automatically muting the Sonos when the phone rings. This item definition looks like this:

Call Incoming_Call_No
    "Caller No. [%2$s]"
    { fritzbox="inbound" }

That item definition suffices for openHAB to detect an incoming call, which it shows in its log:

17:22:00.095 INFO runtime.busevents[:26] - Incoming_Call_No state updated to 05556302547##1234567
17:22:49.607 INFO runtime.busevents[:26] - Incoming_Call_No state updated to ##

If I place that item on an openHAB sitemap I can see the Caller-ID (CID) on the UI (which I don't want), but what else can I do with that trigger?

When I first discussed my experiments in home automation I wrote about using MQTT extensively, and for me that has been a very good decision. So the question was: can I get openHAB to

  1. detect an incoming call with the FritzBox binding
  2. invoke a rule with that CID number
  3. perform an external lookup on the number to obtain a matching name from my address book
  4. store the result in a database
  5. publish the result to an MQTT topic

and the answer is a resounding YES!

The rule

When the incoming call triggers the Call switch, openHAB invokes this rule I patiently assembled with a lot of trial and error, and error, and help from the knowledgeable chaps on the mailing-list.

import org.openhab.core.library.types.*
import org.openhab.core.persistence.*
import org.openhab.model.script.actions.*
import org.joda.time.*

rule "Incoming Phone Call"
when
    Item Incoming_Call_No changed
then
    var CallType c = Incoming_Call_No.state
    var String caller = "" + c.origNum
    var String callee = "" + c.destNum
    var String command

    if (caller != "" && callee != "") {
        var String cid

        command = "/usr/local/bin/ '" + caller + "'"

        cid = executeCommandLine(command, 2000)

        // post the reverse-lookup result to the mqFritz item
        postUpdate(mqFritz, cid)
    }
end

The rule gets the calling number (which may be empty if a caller suppresses it) and executes an external command. (I'd be wary of having openHAB do that on a very busy "trigger", but it's not an issue in this case; I don't get many phone calls... ;-) This little program does a reverse lookup on my address book database, and it returns a string with the number and a name in parentheses.

$ /usr/local/bin/ 05556302547
05556302547 (Jane Jolie)

(Note the binding supplies an empty caller on anonymous calls, and both values are empty when the call ends.)
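A sketch of what that reverse-lookup helper could look like (the actual program isn't shown in this post; the SQLite schema and database path below are assumptions):

```python
#!/usr/bin/env python
# Hypothetical sketch of the reverse-lookup helper invoked by the rule:
# given a calling number, look it up in an address-book database and
# print "<number> (<name>)", or just the number when it's unknown.
import sqlite3
import sys

def lookup(number, conn):
    """Return 'number (name)' if the number is in the address book,
    else just the number."""
    cur = conn.execute(
        "SELECT name FROM addressbook WHERE number = ?", (number,))
    row = cur.fetchone()
    return "%s (%s)" % (number, row[0]) if row else number

if __name__ == "__main__" and len(sys.argv) > 1:
    conn = sqlite3.connect("/path/to/addressbook.db")  # hypothetical path
    print(lookup(sys.argv[1], conn))
```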

The rule then posts an update to another openHAB item, this time a String item

String mqFritz

This item is bound to an MQTT publish string which causes openHAB to publish the value to my MQTT broker. From there, believe it or not, the Caller-ID is pushed to my mobile phone. (How that works is a longish story :-)

17:22:00.095 INFO runtime.busevents[:26] - Incoming_Call_No state updated to 05556302547##1234567
17:22:00.262 INFO runtime.busevents[:26] - mqFritz state updated to 05556302547 (Jane Jolie)
17:22:49.607 INFO runtime.busevents[:26] - Incoming_Call_No state updated to ##



There is one thing missing from my five-point checklist above: I wanted the result persisted to a database, trivial to accomplish in my small Python program, but that's no fun!

As mentioned in my last post I can configure openHAB to persist any state updates and changes to different storages, including a built-in db4o database, rrd files, etc. I chose to use a MySQL database to keep track of a few values including callers.

Persistence is configured on a per-item basis in a .persist file:

Items {
    SunRiseTime : strategy = everyDay, restoreOnStartup
    mqFritz     : strategy = everyUpdate
}

This configuration results in the following tables in the database:

mysql> SELECT * FROM Items;
+--------+----------------------+
| ItemId | ItemName             |
+--------+----------------------+
|     11 | SunRiseTime          |
|     13 | mqFritz              |
+--------+----------------------+
mysql> SELECT * FROM Items13;
+---------------------+---------------------------------+
| Time                | Value                           |
+---------------------+---------------------------------+
| 2014-02-15 17:22:00 | 05556302547 (Jane Jolie)        |
+---------------------+---------------------------------+

I wouldn't put frequently-changing states in MySQL, rather relying on, say, rrd for things like that, but for my use-case this is perfect.


The WAF for this is pretty good, but the Señora doesn't appreciate the notifications whilst en casa. It ought to be easy to add an openHAB switch item which disables notifications to MQTT (and hence onto her mobile device) when she's at home. And we can automate that presence detection with the binding for OwnTracks (formerly MQTTitude).

I've said it before, and I'll say it again: openHAB and MQTT are a great combination. As Ben succinctly writes:

Using MQTT you can effectively decouple all notifications from within openHAB. This means you no longer need to load the Mail, Twitter, Prowl, NMA, or XBMC action bundles, and you can replace all your notification rules with very simple one-line statements.

Good stuff.

View Comments :: openHAB, FritzBox, and MQTT :: 15 Feb 2014 :: e-mail
