I need a flipchart or a whiteboard while teaching, but during online sessions I resort to using a terminal and a text editor, and I quickly got tired of having to switch between shared windows in Zoom, Big Blue Button, & co. (I share individual windows, not the whole screen.) Instead I now share a single Web browser window with my presentation in one tab and a terminal, served by gotty, in the second, and I can flip between the two with CMD-1 / CMD-2.

screenshot of browser with two tabs

I stumbled over gotty some years ago and have used it to enable users to log in to systems when they can’t SSH in (think corporate firewalls). It works well and is reliable. For the new setup I’m doing for training machines I need one gotty per user, which is easily accomplished using distinct TCP port numbers. My first attempt at launching them was to template out an rc.local script, but I wanted something more elegant.

I recalled that systemd services can take an argument via the service@argument syntax (template units), which fits in well with my user numbering, and after a bit of experimentation I came up with a solution which works well.

[Unit]
Description=GoTTY Web Terminal %i
After=network.target

[Service]
User=user%i
Group=user%i

WorkingDirectory=/home/user%i
ExecStart=/usr/local/sbin/gotty -p 91%i tmux new-session -A -s user%i

[Install]
WantedBy=multi-user.target

The unit file’s WorkingDirectory is needed, otherwise gotty starts in /, and the tmux invocation automatically starts or attaches to a per-user tmux session. Users can SSH into the machine and/or use gotty and see the same screen. More importantly I can, once given consent by a student, log in and see their screen, help with error messages, give tips, etc.
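Instances of the template are addressed with the user number after the @; to start one by hand (here the one for user00 on port 9100), that would look something like this:

$ sudo systemctl enable --now gotty@00    # enable and start the instance for user00
$ systemctl status gotty@00               # %i within the unit expands to "00"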

Each user’s home directory has a ~/.gotty preferences file containing

address = "127.0.0.1"
credential = "username:password"
title_format = "Ansible:username"
permit_write = true
preferences {
    font_size = 20
    background_color = "rgb(255, 255, 255)"
    foreground_color = "rgb(0, 0, 0)"
}
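Incidentally, a quick way to check that an instance is up and answering on the loopback port with these credentials is a request along these lines (username, password, and port taken from the examples above); a 200 means all is well:

$ curl -s -o /dev/null -w '%{http_code}\n' -u username:password http://127.0.0.1:9100/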

The required number of gotty sessions is launched during setup of the machine with Ansible. (The sequence lookup produces a list of numbers 00, 01, 02, … based on the number of users I expect to welcome.)

- name: "Gotty: start gotty services"
  systemd: name="gotty@{{ item }}" enabled=true state=started
  with_sequence: '{{ nusers }}'
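The nusers variable isn’t shown above; it holds whatever parameters the sequence lookup needs, so its value would be something along these lines (purely illustrative, not my actual variable):

nusers: "start=0 count=3 format=%02d"   # produces 00, 01, 02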

This is the resulting process list:

$ ps aux|grep g[o]tty
user01      1209  ... /usr/local/sbin/gotty -p 9101 tmux new-session -A -s user01
user00      1244  ... /usr/local/sbin/gotty -p 9100 tmux new-session -A -s user00

Each gotty listens on the loopback interface. A templated-out configuration lets a TLS-protected nginx reverse proxy talk to the gottys.

location /tty00/ {
	proxy_set_header X-Real-IP $remote_addr;
	proxy_set_header X-Forwarded-For $remote_addr;
	proxy_set_header Host $http_host;
	rewrite ^/tty00/?$ / break;
	rewrite ^/tty00/(.*)$ /$1 break;
	proxy_pass http://127.0.0.1:9100;
	proxy_http_version 1.1;
	proxy_set_header Upgrade $http_upgrade;
	proxy_set_header Connection "upgrade";
}
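A quick check that the gottys really are bound to 127.0.0.1 only, and are therefore reachable solely through the proxy, could look like this:

$ sudo ss -ltnp | grep gotty    # local addresses should all read 127.0.0.1:91xx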

It’s only just now, while writing this, that I realize GoTTY, which means “tty thing written in Go”, actually reminds me of getty(8). :-)

Updates

  • There’s a newer and maintained version at https://github.com/sorenisanerd/gotty. The only way I can get it to understand my font/colour preferences is if I configure term = "hterm", but that, precisely, is what this fork appears to be dropping in favor of xterm.

terminals :: 03 May 2022 :: e-mail

I avoid using the cloud in situations where it’s not really necessary, which is why I used to have the “whole” infrastructure with me during Ansible trainings: no dependency on the Internet, no worries about connectivity failing at a customer site or firewall rules forbidding access to resources, etc.

During the second half of 2019 I dreamed up and created an Ansible training environment based on FreeBSD and spent quite a bit of time getting it just right. TL;DR: it was a laptop with FreeBSD and a number of jails on it. Students would each have three “machines” from and onto which they could perform deployments in labs during the training. The setup contained a package repository and a pip repository for select installs required during labs, a local DNS server, etc.; everything was self-contained. Perfect for me, and after its first productive use I reported how enthusiastic I was that nobody had noticed anything untoward:

They looked at me a bit curiously, and when I told them they’d all been working together on one laptop, I saw the odd jaw open.

You might have noticed that the last report is dated at the beginning of the pandemic: February 2020.

As you can imagine, all planned events for that year (and the next, and the next, but I digress) were cancelled, postponed, or requested as “online” versions. How to solve the problem of my portable data center?

Use the cloud, they said.

I found Digital Ocean offered FreeBSD as one of their Droplet operating systems, and I was quite quickly able to convert my setup to one of their Droplets. A few hours before a training I would launch my magical shell scripts (sue me!), and behold, I’d have a setup that was perfect for me for doing the labs. Once the training was over, I would destroy the Droplet and repeat the exercise on the next occasion.

Life was good. The setup performed well, and I tweaked and changed things to suit me better, but I was happy with how it all fitted together, and I don’t recall having heard a complaint during a training. (Once, a student said: “this isn’t Linux, is it?”). Then came the time when I learned that FreeBSD 12.2 was to become EOL. I contacted Digital Ocean to ask about FreeBSD’s future on their platform: hmm, nothing specific could yet be said. They were discussing it internally and couldn’t comment at this time. I thought that ominous.

Just after the 12.2 EOL in March of this year it turned out that Digital Ocean are dropping support for FreeBSD entirely. FiLiS reports Digital Ocean as saying:

the action plan is to retire FreeBSD versions through the UI starting June 2022 and through the API starting August 2022.

Damnation.

I became aware that it is possible to upload FreeBSD custom images to Digital Ocean, and a number of people responded to tweets etc. telling me likewise, but I simply am not interested in hacking that. First of all because I don’t care to actually do the work and have to maintain the lot, and secondly because storing images isn’t free of charge. It’s not a large amount of money, but there’s the odd gin&tonic to be gained from not doing so. But honestly, why should I do this if alternative operating systems are maintained? I don’t see a benefit for myself.

The good folk at BastilleBSD pointed me at Vultr who actually do have a current (13.0) version of FreeBSD, but unfortunately it’s UFS file system only, and I need ZFS because of my iocage use. I’ve asked Vultr, and they’ve logged a feature request to their development team, but it’s unknown whether it will happen and in which time frame, and that’s a problem: if Digital Ocean really kill FreeBSD in June/August, then I’m in dire need of a working solution.

I probably could, given enough time and effort, change, replace, rip out, re-insert, and whatnot (for example replace iocage with BastilleBSD), but I just don’t feel like it any more. I’ve reached an age at which it’s just no fun. I need something with a “best before” date further in the future. Preparing for a course the evening before only to realize that stuff doesn’t work, and having to stay up for hours so that a dozen students don’t notice it didn’t work, is just no longer my cup of tea. (This happened to me near Utrecht a few months ago: at midnight I set up the servers only to have the system fail because a change to the way pkg bootstrapping works was sprung upon me. Not fun.)

At the moment I don’t see an alternative to changing my setup. I will likely use individual Droplets, virtual machines, instances, or whatever you call them, launching them, getting a few things prepared and being done with it. I’m a perfectionist during trainings, and labs etc. have to be just so.

And to be quite clear, this is certainly not a reason for me to stop using Digital Ocean’s offerings, on the contrary. They’ve been good for me and will hopefully continue to be so. It’s not as though they’ll be losing a huge amount of money from my not using an 8GB machine once in a while, right? FreeBSD is likely not big business for them and hence they drop it. C’est la vie. We will gladly continue using their services for our Linux-based DNS & DNSSEC trainings.

I’ll get over it and solve the problem, but judging from what I’ve looked at over the past 24 hours it won’t be using FreeBSD.


ansible, freebsd, and linux :: 28 Apr 2022 :: e-mail

I’m regularly made fun of when teaching DNSSEC because I tell people I use a “napkin” to jot down the key tags (or key IDs) when creating DNSSEC keys, and it’s true: during trainings I also have the “napkin” – to be precise, it’s a sheet of A4 paper on which I note modifications to the schedule, time zones, whether I still owe answers to questions, and of course the key IDs of DNSSEC keys.

Here’s a partial scan of last week’s napkin:

the last napkin I used for DNSSEC signings

I don’t mind students’ grins and typically laugh last when they say they’re getting errors (e.g. during manual signing), whereupon I can victoriously respond: “if you’d used a napkin to make a note of which key tag is the KSK and which is the ZSK, that wouldn’t have occurred!” :-)

I wonder whether I ought to start a napkin business and what the design should look like …

dnssec :: 24 Jan 2022 :: e-mail

I spend a bit of time explaining the DNS Start Of Authority (SOA) record in introductory DNS trainings. This is what a DNS SOA record (the first record in a zone file and one which must exist exactly once in a zone) looks like:

example.net.   3600 IN  SOA mname rname (
                        17       ; serial
                        7200     ; refresh (2 hours)
                        3600     ; retry (1 hour)
                        1209600  ; expire (2 weeks)
                        900      ; negttl [minimum] (15 minutes)
                        )

We discuss the individual fields and scenarios for their values (also pointing out that recommended SOA values may or may not be useful). I specifically talk about the expire field and what its use is. You will know that if a secondary server for this zone cannot contact a primary for expire seconds, the secondary server will no longer respond to queries for this zone, preferring to SERVFAIL rather than to respond with stale data. That is how I learned what the field means. Quite straightforward actually.
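This is easy to observe from the outside: query the secondary directly once the timer has run out, and the status in dig’s answer header changes from NOERROR to SERVFAIL (the server name below is a placeholder):

$ dig @ns2.example.net example.net SOA | grep 'status:'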

I would not have brought up the topic had it not been for a participant who asked what happens if expire is configured to zero (0) seconds.

After saying “don’t do that!” and threatening to get a frozen trout from the fridge if further such questions arose, I put the question aside, but that evening I decided to investigate. Unfortunately, as it turns out.

DNS specifications and exceptions ... ;)

Shaft points out that Wikipedia says:

This value must be bigger than the sum of Refresh and Retry

but that there’s no source for the statement, nor is there an affirmation in RFCs 1034 and 1035.

What would actually happen if an authoritative primary provided a zone with expire=0 in its SOA?

My first thought was that the secondary server, upon receiving a transfer with expire=0, would just immediately expire the zone. Easy enough to test, and it turned out that a BIND secondary does not do that at all but continues serving the zone “for a while”. (I initially reported that BIND serves the zone for an hour before expiring it, but that is wrong.) Thanks to Evan, who directed me to the function I wasn’t able to find in the source code, I learned that expire is set to at least refresh + retry (and has been since 1999), whereby those two values each have a minimum of 5 minutes. I also learned that BIND limits expire to 14515200 seconds, or 24 weeks.
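Expressed as rough arithmetic (my reading of the description above, not BIND’s actual code):

# what an effective expire of 0 becomes, using the floors and cap described above
refresh=7200; retry=3600; expire=0      # refresh/retry from the example SOA, expire forced to 0
[ "$refresh" -lt 300 ] && refresh=300   # refresh and retry are floored at 5 minutes each
[ "$retry" -lt 300 ] && retry=300
min=$(( refresh + retry ))
[ "$expire" -lt "$min" ] && expire=$min
[ "$expire" -gt 14515200 ] && expire=14515200   # and capped at 24 weeks
echo "effective expire: ${expire}s"     # prints 10800s with these values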

The introductory training had already finished, but I contacted the participants and reported our findings. (I try to not leave questions unanswered.)

And how do the other Open Source DNS servers react?

PowerDNS and Knot DNS do not expire the zone data when receiving expire=0; the former because it doesn’t ever expire a zone (see below).

Admittedly this whole topic of expire with a value of 0 seconds is a super edge case, and there’s no reason to get involved in looking into it. (So why did I do that!?!?)

But what about “regular” expiration? Assume a zone has a valid expire field in its SOA, how will these servers handle that when operating as secondaries?

PowerDNS originally made a deliberate design choice to never expire zones. I learned about this yesterday upon submitting an issue report.

NSD implements zone expiry and logs the fact when it occurs. (Here are notes I took.)

nsd[45521]: error: xfrd: zone a1.dnslab.org has expired

Knot DNS also expires the zone when expire elapses, logging the fact (my notes).

info: [a1.dnslab.org.] zone expired

BIND also expires the zone when the SOA expire elapses (my notes), and logs the fact:

general: zone a1.dnslab.org/IN: expired

These last three respond with SERVFAIL when the zone has expired, meaning that a legitimate client such as a resolver will attempt to query a different nameserver.

I spent the better part of a day doing this. I should have left it at “don’t do that!”

DNS :: 14 Jan 2022 :: e-mail

I’ve been messing around with macOS keychains for part of the morning, and it occurred to me that I hadn’t jotted down how to use Ansible vault with generic passwords in a macOS keychain, so here goes.

I create a generic password from the CLI or via the GUI

$ security add-generic-password -a jpmens -j "vault pw for example.com" -s vpw-example-com -w
password data for new item:
retype password for new item:
$

password in keychain

A one-line shell script I place in ~/bin/vaultpw.sh obtains that generic password

#!/bin/sh

/usr/bin/security find-generic-password -a jpmens -s vpw-example-com  -w

and I configure ansible.cfg to use that executable script, which produces the vault password on stdout (or I specify it at runtime as the argument to --vault-password-file)

[defaults]
nocows = 1
vault_password_file = ~/bin/vaultpw.sh
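One detail that’s easy to forget: Ansible runs the file as a script only if it has the executable bit set (otherwise it treats the file’s contents as the password itself), so:

$ chmod +x ~/bin/vaultpw.sh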

Whenever I use Ansible vault, its password is obtained automatically.

$ EDITOR=ed ansible-vault create secrets.yml
0
a
---
dbpass: superverysecret
.
w
28
q

$ head -2 secrets.yml
$ANSIBLE_VAULT;1.1;AES256
33653339353466353561386535326537636435643338623134633036306533636338643661343866

$ ansible-vault view secrets.yml
---
dbpass: superverysecret
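The vaulted file can then be used like any other variables file; a minimal, made-up playbook to illustrate (the names here are mine, not part of the original setup):

---
- hosts: localhost
  vars_files:
    - secrets.yml                 # decrypted on the fly via vault_password_file
  tasks:
    - debug:
        msg: "dbpass is {{ dbpass }}"

Running it with ansible-playbook requires neither --ask-vault-pass nor typing a password; the keychain provides it via the script.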

Note that it’s not possible to keep the vault password secret from anyone who must be able to launch playbooks which use vaulted files from the CLI.

Ansible and macOS :: 17 Dec 2021 :: e-mail
