The Traccar server component has rather good support for notifying a user of particular events, for example when a device enters or leaves a geofence or, if the device supports it, when the ignition has been switched on or off. These events can be configured to be issued to Web (meaning the Traccar Web interface, where they slide in from below), Mail, or SMS (which in the case of Traccar means a configured SMPP server).

[screenshot: mail notification]

What does a guy do who wants to manipulate notifications and do other things with them? Basically there are two choices:

  • Use an SMPP server which obtains the payload and does something clever with it
  • Configure position and/or event forwarding in Traccar

The former works, and we’ve had that working for the better part of a year, but the latter is a more solid approach.

Traccar can be instructed to submit an HTTP POST whenever it receives a position report from a device and whenever it would otherwise notify via one of the built-in methods (mail, Web, SMS). So what I’m going to do is tell Traccar to give me all this data.

[diagram: Traccar → HTTP → MQTT]

Whenever Traccar notifies of an event or receives a position, it bundles up some data as JSON and POSTs this to our configured endpoint. An example for an enter event (called geofenceEnter in Traccar-speak) is (slightly shortened):

{
	"geofence": {
		"id": 7,
		"name": "blub9",
		"description": "",
		"area": "CIRCLE (49.133867934876974 8.166520803303387, 33112.6)"
	},
	"position": {
		"id": 18336,
		"attributes": {
			"t": "i",
			"ignition": true,
			"distance": 449672.22
		},
		"deviceId": 7,
		"protocol": "owntracks",
		"deviceTime": "2018-09-14T15:34:17.000+0000",
		"fixTime": "2018-09-14T15:34:17.000+0000",
		"latitude": 49.0156556,
		"longitude": 8.3975169,
		"network": null
	},
	"event": {
		"id": 1216,
		"deviceId": 7,
		"type": "geofenceEnter",
		"serverTime": "2018-09-14T15:34:17.906+0000",
		"positionId": 18336
	},
	"device": {
		"id": 7,
		"attributes": {
			"aaa": "AAAA",
			"mm": "1"
		},
		"name": "Vehicle-54",
		"uniqueId": "q54",
		"status": "online",
		"lastUpdate": "2018-09-14T15:34:17.881+0000",
		"positionId": 18335,
		"geofenceIds": [
			7
		],
		"category": "boat"
	},
	"users": [
		{
			"id": 1,
			"name": "jjolie",
			"login": "",
			"phone": "+49123456",
			"readonly": false,
			"twelveHourFormat": false
		}
	]
}

We then create an HTTP endpoint to which Traccar will transmit the POST requests containing our notifications as they fire. By the way: did you notice that the position was reported via OwnTracks? We submitted an OwnTracks protocol decoder to the Traccar project a year ago, and it can be used directly from the OwnTracks apps in HTTP mode.

[screenshot: geofence enter notification]

The Traccar configuration for this is done in conf/traccar.xml in which I can configure position forwarding and/or event forwarding.

<!-- position forwarding -->
<entry key='forward.enable'>true</entry>
<entry key='forward.json'>true</entry>
<entry key='forward.url'>http://127.0.0.1:8840/evpos</entry>

<!-- event forwarding -->
<entry key="event.forward.enable">true</entry>
<entry key='event.forward.url'>http://127.0.0.1:8840/evpos</entry>
<!-- <entry key='event.forward.header'></entry> -->

(Until Traccar 4.0 I could add additional parameters to the HTTP POST using event.forward.paramMode.additionalParams, but that feature was silently removed.)

If you prefer, Traccar can forward positions using query parameters: we can configure this by disabling forward.json and specifying the parameters we’re interested in.

<entry key='forward.enable'>true</entry>
<entry key='forward.url'>http://127.0.0.1:8840/positions?id={uniqueId}&amp;lat={latitude}&amp;lon={longitude}</entry>
<entry key='forward.json'>false</entry>         

(And because I hear you asking: the &amp; entities are actually required, as we’re adding an ampersand between the query parameters, and an ampersand is written as &amp; in XML.)

I’ve taken the list of possible query parameter values which can be interpolated from the source:

  • {name} is the name of a device
  • {uniqueId} its unique identifier
  • {protocol} the protocol through which a position was reported, e.g. "owntracks"
  • {fixTime} the time of fix
  • {latitude} and {longitude} the latitude and longitude, respectively
  • {altitude}, {speed}, {course}, and {accuracy} should be self-explanatory
  • {address} the reverse-geo-coded address if available
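
With the configuration above and the sample position shown earlier (device q54), the forwarded request would presumably look something like this:

GET /positions?id=q54&lat=49.0156556&lon=8.3975169 HTTP/1.1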

If you configure forward.json to be true, the query-string GET parameters are not substituted; instead a body containing a JSON payload is POSTed to the forward.url.

We have a small utility named from-traccar which implements an HTTP server which republishes incoming positions and events to an MQTT broker.
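
from-traccar itself isn’t shown here, but the core of the idea fits in a few lines. A rough sketch in Python, using the paho-mqtt module, could look like this; the topic names are my illustration and not necessarily what from-traccar publishes to:

#!/usr/bin/env python3
# Sketch only: accept Traccar's forwarded JSON via HTTP POST and
# republish it to an MQTT broker. Topic names are assumptions.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import paho.mqtt.publish as mqtt

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)

        data = json.loads(payload)          # Traccar POSTs a JSON body
        topic = "traccar/event" if "event" in data else "traccar/position"
        mqtt.single(topic, json.dumps(data), hostname="127.0.0.1")

        self.send_response(200)
        self.end_headers()

HTTPServer(("127.0.0.1", 8840), Handler).serve_forever()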

Why MQTT? Well, because we do lots of good things with MQTT.

GPS, OwnTracks, and Traccar :: 14 Sep 2018 :: e-mail

All I wanted for Christmas in 2009 was a Sonos. Almost nine years have passed since I purchased the first S5 player (they’re named differently now), and we enjoyed the system so much that we recommended it to quite a number of friends and acquaintances who’ve also bought Sonos equipment. Sometimes more of it, sometimes fewer parts, but Sonos it was.

Then came the time when we got miffed about the “new” and “improved” UI on the Sonos mobile apps on iPhone and iPad. To us the apps became almost useless, and I got the (probably incorrect) impression that each incarnation was completely different from the previous one. That bugged all of us here at Casa Mens, but I put it down to “getting old and farty”.

I got pro-actively upset last year, when the apps started informing me I’d need to create a Sonos account. I didn’t want a Sonos account and certainly did not believe that my “listening experience” would improve from having an online account.

I really got upset when, on the weekend, I couldn’t play music because the apps required updating, but wouldn’t do so without me having an account. OK, I thought to myself: don’t be such a wimp. One more account won’t really hurt, will it?

It does. And I am thoroughly angry.

The next day, I decided to actually log into the sonos.com site with the credentials I created, and I will admit I was shocked (yes, me, I’ve been on the Internet for a few days) to see some of this.

First of all, Sonos knows when I’m at home and when I’m not. Of course they know, but it’s none of their business, and even less so to record and store that information. Not only that, but they also tell me how much other people listen to their music on average. What’s that supposed to do? Show me theirs is bigger than mine?!

They also know what I listen to. It’s none of their bleeding business. That’s precisely the reason we have most of our music as MP3 stored on a NAS at home. I expected only us to be privy to that information.

When was one of us at home listening to music? They know that too.

I told the wife about this, and she was livid. Her exact words were “They know when I’m on the crapper in the bathroom putting on makeup?!?” Yes, dear, unfortunately they do. It’s my fault though because nine years ago I labelled the players by where they’re located; I thought it’d be practical, but I see I should have chosen names like 7354e2055eb803b3b4ccd7c2d317a064 to better protect our privacy. Please forgive me!

Oh, dear Sonos people, how long was the total playing time in my household last week? I’m sure you can tell me that too. Thank you. And I’m sorry I listened to 5% less than last week; I’ll try to improve on this.

The Sonos privacy page is full of it. Text. Lots and lots of it. Sometimes I wish I were a lawyer. I read it top to bottom. It’s a shame most people probably won’t have the pleasure of studying it. If you don’t, at least search for opt out and do that.

I opted out and the data no longer shows up on the Web page when I log in to Sonos. Whether or not the data is still being transmitted I do not know. What I also don’t know is why my account had this enabled; it appears as though others have this disabled by default. Is it because they’re newer to the game than I am? I’ll never find out.

One passage from the “privacy page” is adorable:

you will not be able to opt out from this [Functional] data collection, sharing and/or processing if you want to continue to use your Sonos Products.

These two “frequently asked questions” are also interesting:

Do I need to register my Sonos products for them to work? Yes. This is fundamental to providing a secure internet–based home sound system.

How do I delete my personal data from Sonos and what are the consequences? You can always send us an email via privacy@sonos.com or contact our Customer Care team and request that your data be deleted. Please note, however, that by deleting your personal data your Sonos products will stop working.

If I lived alone, I would now show you a photo of all my previous Sonos equipment in the boot of the car, ready to be given to somebody who wants it, and that may still happen. I don’t live alone, and we’re still thinking about how to handle this situation for ourselves.

In case you follow me on other parts of the internets you’ll know that privacy has become important to me over the course of the last several years. Call me naive for not having found out sooner, if you like, but this angers me beyond belief, and I am hugely disappointed by a company I previously admired.

Needless to say, I will begin apologizing to friends of ours who followed my advice, and I will warn them.

Reactions to this post:

  • “with the last update it was mandated to create an account with Sonos for “security” reasons. Since that moment I have removed all Sonos equipment from our house, sold it and moved to Naim and Bose speakers. Whiskey Tango Foxtrot Sonos…”
  • “After years of Sonos ownership I’m now in the market for different speakers.”
  • “Some years ago I also had a Sonos test setup at home. Somehow, I got a bad feeling about speakers directly connected to the internet, so I bought Bose stuff. Lost much convenience but won privacy.”
  • “I am @Sonos customer since 2013 or so, and, yes, I have no use whatsoever for that useless login, either.”
  • “Also, purchased the last Play:5 as a used model, because I do not want devices with microphones for this purpose (have no use for digital assistants at all, I just want good speakers).” – Yes, read their privacy statement on listening devices.
  • “re your Sonos blog-post: I hit the breaking point in September too, looked around for options and purchased a HifiBerry with case. Add RPi and my own speakers and I have better stereo sound, albeit less pretty.”

Update

I discovered a “Sonos community” and found this tidbit there which pretty well matches my history. One poster writes:

I don’t want a Sonos account. I have enough accounts out there already. I paid a lot of money for the hub and speakers and now my usage is being held hostage to Sonos’ desire for that fat data harvesting loot? The last couple of updates had a “skip” button re: Sonos account, and that was annoying but acceptable, barely. Forcing an account is just **. I bought my first Sonos speaker, a Play 5, on November 8th 2009. I didn’t have an account then. I bought my second on November 24th of the same year. No account. I bought my third in September 2010, no account. Your condescending explanation clarifies Sonos motivation for login somewhat. I don’t care. If Sonos has explained it clearly before forcing it I would have agreed, if grudgingly, but I have to hear it from some snarky tool in a forum?

whereupon somebody responds:

Yes, you did have an account, Sonos has always required you to have an account. Here’s how it worked: Your initial purchase required registering to an email address, which became your account. Each unit after that was assigned to the account automatically. They have recently replaced the automatic assignment in favor of requiring account information to prevent unauthorized devices from connecting to your system. This has nothing to do with data collection, they’ve already been collecting your data for years now.

They’ve been collecting my data for years now.

Sonos :: 11 Sep 2018 :: e-mail

When Ton Kersten asked me a few weeks ago whether I’d like to prepare a home automation workshop to be held in Utrecht, I quickly said yes.

It’s been four and a half years since I started using openHAB, and I haven’t regretted it once. The system has been very reliable, and I’m still running an early 2016 version. Why upgrade?

Meanwhile a lot has happened in the world of openHAB including a few very large new releases, so there are a lot of new things for me to learn and most of it is fun. Most of it.

I’m having a lot of fun putting together a nice demonstration environment for the workshop, and I’m using Ansible to set up the virtual machines participants will use in the labs. Apropos labs: I wanted participants to be able to connect a real physical switch and a real lamp, but investing in 20 Homematic CCU2s or in 20 Z-Wave sticks would be prohibitive, so we arrived at an alternative and fun solution:

These are Wemos D1 mini (ESP8266) boards fitted with button shields (the physical button) which Ton soldered together, and they have an on-board LED (the physical lamp). The devices will be speaking MQTT with openHAB (yes, we’ll discuss MQTT as well), and the firmware’s been flashed onto them. I’ve just completed writing all the exercises we’ll be doing with a number of different openHAB bindings.

It’s impossible to cover all details of what openHAB has to offer, but I believe we’ll get to know most of it. Also, my focus will be on keeping data within the confines of your house/office; we’ll touch upon external services, but the Intranet of Things is what it should be. :-)

Care to join us? The workshop will be in Utrecht (that link contains the description as well) and will be held in English, and I can tell you I am very much looking forward to the event.

Let there be blinkenlights!

openHAB :: 23 Jun 2018 :: e-mail

A few years ago I, almost literally, grabbed a POS (Point of Sale) pole-display out of a box of equipment which was on its way to the dumpster. I have a tiny bit of history with POS systems and thought it’d be cute to own one, without even knowing whether it worked or not.

I plugged it into a USB socket and it lit up, and I was able to echo something to it, so I determined it was “worth keeping”™ and stored it safely.

I stumbled over the display on the weekend during a bit of cleaning up and decided it’d be a practical addition to the bunch of toys I’m assembling for the openHAB workshop I’m preparing. As I’m using an OpenBSD laptop with all the things I need, I plugged the display into that.

To cut a long story short, I couldn’t get it to work, but wasn’t quite sure what the reason was; my suspicion was that the required device wasn’t being created in the OS.

I asked, and as that link shows, within 24 hours I had an operating system kernel patch which added the necessary bits to make the “LD220-HP” work on OpenBSD.

$ dmesg | tail -2
uplcom0 at uhub3 port 2 configuration 1 interface 0 "Prolific Technology Inc. USB-Serial Controller" rev 1.10/3.10 addr 4
ucom3 at uplcom0

Ten minutes later, the small C program which opens the cuaU3 device, subscribes to MQTT, and prints messages arriving on a particular topic onto the display was assembled and running.
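
The program itself is C and not shown here, but the idea is tiny; a rough Python equivalent (assuming the pyserial and paho-mqtt modules, and with a made-up topic name and baud rate) would be something like:

#!/usr/bin/env python3
# Sketch only: subscribe to a topic and write incoming messages to the
# pole display on the serial line. The real thing is a small C program;
# the topic name and baud rate here are assumptions.

import serial                           # pyserial
import paho.mqtt.subscribe as subscribe

display = serial.Serial("/dev/cuaU3", 9600)

def on_message(client, userdata, msg):
    display.write(msg.payload)          # the display shows whatever it's sent

# blocks forever, invoking on_message for each message on the topic
subscribe.callback(on_message, "display/pos", hostname="127.0.0.1")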

And the best part? The diff has been checked in, so it’ll be in the next version of OpenBSD.

Open Source: amazing!

OpenBSD :: 19 Jun 2018 :: e-mail

I’m writing this because Michael recently tweeted a link I looked at. I shivered so hard that I want to, without shaming or blaming, attempt to improve on what I saw.

The original poster was using a PAM (Pluggable Authentication Module) to notify (via Pushover) that a login had happened on a machine. Not a bad idea at all, and I learned about pam_exec. However, the notification dance was being done with a shell script which invoked curl to do the protocol thing, and Pushover’s credentials were in the shell script. Those things are what caused some ice-cold monster to walk down my spine, for several reasons:

  • hard-coded credentials for a service like Pushover in a bunch of scripts on a whole number of systems? That’s pretty awful, nay, horrid, unless you can at the very least manage those systems automatically with, say, Ansible
  • the script I saw used curl to connect to a service. That’s fine and dandy, but what if the service isn’t available? SSH will hang. You don’t want SSH to hang on you. Well you might not mind, but I mind a lot.
  • In order to avoid that, the script backgrounded that curl portion with nohup. That could mean DDoSsing your own box by having a whole number of backgrounded processes hanging around. I’ve seen it happen.

[screenshot: Pushover notification]

What the original poster wanted was an alert of an SSH login to his machines. I think I can solve that with a few bits and pieces of code. Admittedly there may be a larger number of moving parts, but because of how I use them, we’re centralizing those parts, and I’m going to keep the PAM configuration (and the original alert of an SSH login) to an absolute minimum.

Here’s what’s going to happen:

  1. A very small compiled C program is going to produce a UDP datagram informing of a login. That datagram carries a short payload with the username of the person logging in, the source address, etc. That UDP datagram could possibly get lost (tough luck), but it’s not likely to in our LAN. Furthermore, if the UDP listener isn’t there, the login isn’t at all hindered or delayed; users won’t notice which is fine. For the record, if this program dies (even with a signal 11) it won’t hurt; PAM will silently ignore it which is exactly what I want to happen.
  2. A UDP listener gets the login datagrams and processes them. How it does so is up to you. I’ll have them published to MQTT.
  3. If you’ve been here before, you know what’ll then happen. :-) What I do here is to use what I once called twitter for my network: MQTT.

So, fifty-odd lines of C give the original version of the hare utility, which is what we’ll be invoking from PAM, and the way we invoke it is by configuring it in /etc/pam.d/sshd (tested on CentOS Linux and on FreeBSD 10.2) as

session    optional     pam_exec.so /usr/local/sbin/hare 192.168.1.131
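
pam_exec(8) hands the login details to the program via environment variables (PAM_USER, PAM_RHOST, PAM_SERVICE, and so on). The original hare is C; purely to illustrate what it does, a Python sketch producing the same kind of JSON payload you’ll see below might look like this:

#!/usr/bin/env python3
# Sketch of the hare client (the original is fifty-odd lines of C):
# read the details pam_exec(8) exports in the environment, fire one
# small JSON-over-UDP datagram at the hared listener, and exit.

import json
import os
import socket
import sys
import time

payload = json.dumps({
    "tst"      : int(time.time()),
    "hostname" : socket.gethostname(),
    "user"     : os.environ.get("PAM_USER", "<unknown>"),
    "service"  : os.environ.get("PAM_SERVICE", "<unknown>"),
    "rhost"    : os.environ.get("PAM_RHOST", "<unknown>"),
})

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload.encode(), (sys.argv[1], 8035))   # hared listens on UDP 8035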

(Hare you ask? I believe Karanbir once told me, many, many years ago, that the CentOS project had a Nabaztag rabbit which alerted them of logins. I loved that story, and this system I describe here is dedicated to that rabbit; I’m calling it hare. Or is it because I saw something hairy? :-))

So whenever somebody uses the SSH subsystem (ssh, scp, sftp) our hare program will talk to its daemon, hared, on the specified server. This hared is so small I have it right here:

Update: meanwhile it’s turned into a Pip-installable package, and Juzam’s made a fully-compatible Go version of the daemon :-)

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import paho.mqtt.publish as mqtt
import socket
import json

__author__    = 'Jan-Piet Mens <jp()mens.de>'

server_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_socket.bind(('', 8035))

while True:
    message, address = server_socket.recvfrom(1024)
    host, port = address

    ''' ensure we have JSON '''
    js = json.dumps(json.loads(message))

    mqtt.single("logging/hare", js, hostname='127.0.0.1')

All it does is wait for UDP datagrams, and it then packages each one up as a JSON string which it publishes to MQTT. Update: for a number of reasons I’ve made the code configurable etc., and it is now here and installable via PyPI.

$ mosquitto_sub -v -t 'logging/hare'
logging/hare {"tst": 1522152142, "hostname": "zabb01", "user": "jjolie", "service": "sshd", "rhost": "192.168.1.130"}
logging/hare {"tty": "ttyv0", "service": "login", "hostname": "fbsd103", "user": "root", "tst": 1522161689, "rhost": "<unknown>"}
$ cat /var/log/ssh-logins.log
login via sshd by jjolie on zabb01 from 192.168.1.130 at 2018-03-27 14:02:22
login via login by root on zabb01 from <unknown> at 2018-03-27 14:07:45

In order to actually process those messages we have mqttwarn, which is configured to produce a file with logins (as shown above; note how the login service shows up as well, as I’ve also configured that to use hare in PAM), to notify via Pushover (as shown at top), and which can easily use any of the meanwhile 65 services it supports to process these messages. It’s also mqttwarn which translates the epoch timestamp from the JSON payload into a human-readable time.

Here’s the configuration I’m using:

[defaults]
launch	 = log, pushover, file, smtp
functions = 'pamfuncs.py'

[config:file]
append_newline = True
overwrite = False
targets = {
   'mylog'     : ['/var/log/ssh-logins.log'],
  }

[config:pushover]
targets = {
    'pam'       : ['xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'],
  }

[config:smtp]
server  =  'localhost:25'
sender  =  "MQTTwarn <jpm@localhost>"
username  =  None
password  =  None
starttls  =  False
targets = {
    'admins'    : [ 'rooters@ww.mens.de' ],
    }

[logging/hare]
targets = pushover:pam, file:mylog, smtp:admins
alldata = moredata()
title = SSH login on {hostname}
format = login via {service} by {user} on {hostname} from {rhost} at {tstamp}
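
The alldata = moredata() line points at a function in pamfuncs.py; a minimal sketch of such a helper, which turns the epoch tst into the human-readable {tstamp} used in the format string, and which assumes mqttwarn’s usual calling convention for alldata functions, might look like this:

# pamfuncs.py (sketch): mqttwarn merges the dict returned by an alldata
# function into the transformation data, so {tstamp} becomes available
# to the format string above. The calling convention is assumed here.

import time

def moredata(topic, data, srv=None):
    tst = int(data.get("tst", time.time()))
    return {
        "tstamp": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(tst)),
    }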

It’s easy for us to add additional targets without touching PAM configuration on any machine, and without changing our hare system: we add a target to mqttwarn, and there you go: you have mail:

Tue, 27 Mar 2018 14:02:22 +0200
From: MQTTwarn <jpm@localhost>
To: rooters@ww.mens.de
Subject: SSH login on zabb01
X-Mailer: mqttwarn

login via sshd by jjolie on zabb01 from 192.168.1.130 at 2018-03-27 14:02:22

So what are the benefits or pitfalls of doing it the way I did?

  • Credentials for the Pushover service are securely stashed on a single machine, the machine which runs mqttwarn. If we want to or have to change them, e.g. because they got compromised, we do so centrally.
  • All nodes to which people login talk to a single machine in my network. It is from that one system, again the one running mqttwarn, that we access the external service.
  • If we want to rip out Pushover and alert/notify with some other service, or if we want to add some additional type of notification we can do this centrally.
  • Above all, we ensure that the SSH login service on the nodes will not hang or somehow be delayed.
  • One (slight) disadvantage is that we have to create architecture-dependent versions of hare. We have to distribute/install those, but that we easily do with our configuration management system.

I explain how a lot of the mqttwarn stuff works in “how do your servers talk to you?”. I hope some of these technologies make the lives of your systems safer and your life easier.

SSH, MQTT, daemons, and rabbit :: 25 Mar 2018 :: e-mail
