It’s been six years since I first looked at Gitea, and I have only now begun looking at implementing continuous integration (CI) with it.

Gitea’s Actions didn’t convince me when I toyed with them, so I looked further and found Woodpecker-CI which I set up using this guide after removing the actual Gitea container because I already have that running as a service. Woodpecker-CI’s documentation is plentiful and quite understandable, but I missed an introduction on why to use docker-compose and not the binaries. After toying around I decided to actually do the Docker dance as I’d be needing that for the Woodpecker agents anyway.

Setting up a pipeline was easy enough, and my current playground looks a bit like this:

$ cat .woodpecker.yml
steps:
  build something:
    image: alpine
    commands:
      - echo "Here we go!"
      - apk add --no-cache build-base gcc
      # ...
  notify via MQTT:
    image: wpmqtt   # the notifier plugin image built below
    settings:
      payload: "${CI_COMMIT_AUTHOR} on ${CI_REPO_NAME} is done with ${CI_COMMIT_SHA}"
    secrets: [ mqtt_host, mqtt_topic ]

$ woodpecker-cli lint
✅ Config is valid

I can create secrets (e.g. mqtt_host above) which are associated with my repository and are passed into pipeline steps and plugins. Very useful for hiding passwords and other, well, secrets, including multi-line secrets such as SSH keys. I create secrets via the UI or via woodpecker-cli. (I had the devil of a time finding the WOODPECKER_TOKEN required for this program and couldn’t for the life of me find it in the docs, but it turns out it’s trivial: in the Woodpecker UI I click on my user avatar and find the two shell export statements required.)

$ export WOODPECKER_TOKEN="..."

$ woodpecker-cli secret add \
	--repo jpm/t1 \
	--name mqtt_host \
	--value ...

$ woodpecker-cli secret add  --repo jpm/t1 --name sshkey1 --value "$(cat ...)"

I discovered Woodpecker-CI is extensible via plugins and couldn’t resist. For the plugin proper, I create the program (here a simple shell script). The variables will be populated from secrets and from command parameters (see .woodpecker.yml above):


#!/bin/sh
set -e

# settings from .woodpecker.yml arrive in the environment as PLUGIN_*
# variables; secrets arrive under their uppercased names
mosquitto_pub \
	-h "${MQTT_HOST}" \
	-i "woodpecker-ci-$(hostname)" \
	-t "${MQTT_TOPIC}" \
	-m "${PLUGIN_PAYLOAD}"

echo "MQTT notification published"
exit 0

I describe the docker/podman image:

FROM alpine
ARG BUILD_DATE
LABEL name="wpmqtt"
LABEL description="Woodpecker-CI MQTT notifier"
LABEL maintainer="Jan-Piet Mens <>"
LABEL build_date=$BUILD_DATE

# script name assumed; this is the notifier shell script from above
COPY wpmqtt.sh /bin/
RUN chmod 755 /bin/wpmqtt.sh
RUN apk -Uuv add curl ca-certificates mosquitto-clients
ENTRYPOINT ["/bin/wpmqtt.sh"]


and build and push the image:

$ podman build --build-arg BUILD_DATE="$(date -u +'%Y-%m-%dT%H:%M:%SZ')" \
	-t wpmqtt .

$ podman push ...

I then commit a change to the repository and trigger a pipeline run:

(Screenshot: Woodpecker-CI dashboard showing the pipeline run.)

and am happy when I see the notification on my terminal:

$ mosquitto_sub -h ... -v -t jp/#
jp/3 jpm on t1 is done with 25af8963f2bbb06040921bfe24cd6339c78a8050


I still have to look into how to figure out if/when something goes wrong. Other than docker compose logs -f I’ve not yet seen anything mentioned.

I also want to find a way to have a pipeline run on a specific agent. I think this might be possible using labels.

git :: 22 Sep 2023 :: e-mail

I recently spent an inordinate amount of time trying to debug why a curl-initiated Webhook POST to AWX was being rejected with the lame message

{"detail":"A server error has occurred."}

In spite of configuring debug logging and log forwarding from AWX, I couldn’t figure out what was wrong. My assumption was that the body of the POST was missing something. I looked at the source code of the API view controller, still didn’t figure it out, and basically gave up after an hour. Actual webhooks posted from Gitea worked (when configured in AWX as GitHub), but my simple curl invocation wouldn’t. (Remind me to rave about how I like Gitea and Forgejo.)

Ton then showed me how to view what’s going on in the AWX web task:

$ kubectl get pods -n awx | grep awx-web
awx-web-6b6bddcf69-75jdp                           3/3     Running

$ kubectl logs -n awx pod/awx-web-6b6bddcf69-75jdp  -c awx-web -f
2023-09-13 10:58:59,846 ERROR    [6e4b76bd6879443f9fbf29a65ccda3a7] django.request Internal Server Error: /api/v2/job_templates/9/github/
Traceback (most recent call last):
  File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/api/views/", line 185, in get_signature
    hash_alg, signature = header_sig.split('=')
ValueError: not enough values to unpack (expected 2, got 1)

I quickly recognized where I’d gone wrong: I had omitted the = sign between the string sha1 and the actual digest. GitHub now prefers SHA-256 (which AWX doesn’t support here), and its header is different.

The two possible types of header for comparison:

X-Hub-Signature: sha1=05eb9e5d74e3085fce6a93fd72ec468a75dfdb8e
X-Hub-Signature-256: sha256=6571761a59b557a1b7809ff8a687fc715daf83f23655c7a971f420ca6f40e3c2
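For what it’s worth, both header values are plain HMAC hex digests over the request body. A quick sketch in Python (secret and payload here are made up) shows how they are built:

```python
import hashlib
import hmac

secret = b"webhook-key"        # made-up; use the template's Webhook Key
payload = b'{"bla": true}'     # the POST body, byte-for-byte

# AWX splits the header value on '=' into algorithm and digest,
# which is why the 'sha1=' prefix is mandatory.
sig1 = "sha1=" + hmac.new(secret, payload, hashlib.sha1).hexdigest()
sig256 = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()

print("X-Hub-Signature:", sig1)
print("X-Hub-Signature-256:", sig256)
```

Leave out the prefix, and the split produces a single element: exactly the ValueError from the traceback above.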

Here’s the relevant bit from the AWX code.

I can now demonstrate how to launch a template from afar:


export secret="dAHQB8IS3F2gecUaIHmjJCwq9O5tG3CCoUK1ItNGWg2KdraBgB" # template Webhook Key
export payload="$(jo bla=true)"

digest1="$(printf "%s" "${payload}" | openssl dgst -sha1 -hmac "${secret}" | sed -e 's/^SHA1(stdin)= //' )"
sig1="X-Hub-Signature: sha1=$digest1"
uuid=$(python3 -c "import uuid; print(uuid.uuid4())")

curl -H 'content-type: application/json' \
        -H "${sig1}" \
        -H "X-GitHub-Event: push" \
        -H "X-GitHub-Event-Type: push" \
        -H "X-GitHub-Delivery: ${uuid}" \
        -d "${payload}" \
        https://.../api/v2/job_templates/9/github/

Logging, finding, and studying logs: so important!

awx and ansible :: 13 Sep 2023 :: e-mail

During a lunch break in Munich last week, Michael mentioned Ansible rulebooks, and I realized I had not taken the time to look into them.

Rulebooks are the system by which Ansible is told which events to act upon in Event-Driven Ansible. They are written in YAML and contain three main components: sources, which define the event sources to be used; rules, which define conditions matched against events from those sources; and actions, which define what should happen when a condition is met.

Here’s a small example I’ve cobbled together to test Event-Driven Ansible (EDA).

- name: Rulebook to do something
  hosts: localhost
  gather_facts: false

  sources:
      - ansible.eda.webhook:
          port: 6000

      - ansible.eda.file_watch:
          path: "files/"
          recursive: false
          ignore_regexes: [ '.*\.o' ]

  rules:
      - name: Launch playbook on start cmd
        condition: event.payload.cmd == 'start'
        action:
          run_playbook:
            name: jp01.yml
            extra_vars:
              dessert: "{{ event.payload.data }}"
              home: "{{ HOME }}"       # from environment
              person: "{{ person }}"   # from vars

      - name: trigger on range
        condition: event.change == 'created' # 'modified'
        action:
          run_module:
            name: copy
            module_args:
              src: "{{ event.src_path }}"
              dest: "/tmp/files"

My rulebook defines two sources: the first listens to HTTP webhooks on port 6000, and the second watches (requires pip install watchdog) a directory for new files.

Then I define two rules:

  • the first matches the cmd element in the HTTP payload against the word start and performs an action on match. The run_playbook action launches the specified playbook using the inventory we give ansible-rulebook.
  • the second matches when the event indicates a file has been created in the directory and invokes an Ansible module (copy) to copy the newly discovered file to a particular destination.

I launch the rulebook:

$ ansible-rulebook -i inventory \
	-r rulebook.yml \
	--vars v.yml \
	-E HOME

I then POST a webhook and create a new file in the watched directory:

$ curl -H 'Content-type: application/json' \
	-d "$(jo cmd='start' data='chocolate mousse')" \
	http://localhost:6000/endpoint

$ ls > $dir/files/n01

On the console I can observe the events and what they trigger:

{   'meta': {   'endpoint': '',
                'headers': {   'Accept': '*/*',
                               'Content-Length': '41',
                               'Content-Type': 'application/json',
                               'Host': 'localhost:6000',
                               'User-Agent': 'curl/7.87.0'},
                'received_at': '2023-08-14T11:09:59.195936Z',
                'source': {   'name': 'ansible.eda.webhook',
                              'type': 'ansible.eda.webhook'},
                'uuid': 'c5241a36-3c11-403e-9899-1c43209c858f'},
    'payload': {'cmd': 'start', 'data': 'chocolate mousse'}}

PLAY [localhost] ***************************************************************

TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "**** From /Users/jpm. Would you like some chocolate mousse, Jane?"
}

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

{   'change': 'created',
    'meta': {   'received_at': '2023-08-14T11:10:17.550916Z',
                'source': {   'name': 'ansible.eda.file_watch',
                              'type': 'ansible.eda.file_watch'},
                'uuid': '6878c7a9-0454-414b-a820-0ec788da599f'},
    'root_path': 'files/',
    'src_path': '/Users/jpm/take/training/rulebook/files/n01',
    'type': 'FileCreatedEvent'}
localhost | CHANGED => {
    "changed": true,
    "checksum": "fe142a4c6a82a55a1c7156fda0056c9f479c5b0c",
    "dest": "/tmp/files/n01",
    "gid": 20,
    "group": "staff",
    "md5sum": "1033373ec84317c9920fcf8e11635e1e",
    "mode": "0644",
    "owner": "jpm",
    "size": 39689,
    "src": "/Users/jpm/.ansible/tmp/ansible-tmp-1692011418.0594718-31843-165094790951764/source",
    "state": "file",
    "uid": 501
}

Internally ansible-rulebook uses the Java Drools rule engine and ansible-runner to actually invoke Ansible content. Rulebooks can gather facts which can be used in conditions. ansible-rulebook appears to ignore ansible.cfg as far as I can tell, and it requires me to pass it an inventory explicitly. As shown in my example above, I can pass variables to it (from JSON or YAML files) and have it import environment variables – both useful for introducing API keys and whatnot.

I’ve only just begun toying with Event-Driven Ansible (and have reported an issue with file_watch which may not even be one), but it appears to work quite well so far.

I think this will be most interesting when interfacing with, say, repository pushes etc. which can then trigger Ansible playbook runs.


ansible :: 14 Aug 2023 :: e-mail

I mentioned back in 2010 that my first impression of OpenDNSSEC wasn’t bad, but I changed my mind when I had to maintain three large installations a few years later and pledged I wouldn’t do so again.

It’s seldom I speak negatively about a particular piece of software; if a program is good I will say so, but otherwise I’ll simply not mention it. I’m making an exception, in spite of OpenDNSSEC having become an NLnet Labs project – the brilliant people who bring us top-notch DNS servers NSD and Unbound.

Note: I originally forgot to mention my notes pertain to a much older, no-longer maintained version of OpenDNSSEC. The software has had significant changes done to it meanwhile, so my complaints may well no longer be valid. Furthermore, I inaccurately stated SoftHSM is an NLnet Labs project: it is an independent project with its own project owners and developers.

Here were four paragraphs detailing the issues we had with the program, but: water under the bridge. I will dwell a bit on the documentation though. OpenDNSSEC’s documentation has gone from one ill-maintained wiki to the next only to meanwhile have even lost all images; I’ve lost the love to report it. For SoftHSM the situation is even worse – what’s termed ‘documentation’ is pretty useless. Just go ahead and compare NSD’s or Unbound’s documentation to that of OpenDNSSEC or SoftHSM. Apropos SoftHSM: during an unrelated experiment about a year ago, I dropped 1,000 keys into a SoftHSM key store which resulted in 655,138 stat(2) system calls when signing a three-record zone; just look at the increasing time required to generate keys. I couldn’t even be bothered to report this as a bug, nor this. I’m sorry.

My time at the company was over and I stopped consulting for them so I wasn’t directly involved, thankfully, but tangentially got to hear about countless OpenDNSSEC enforcer failures, signers which crash, and even one or two unscheduled rollovers occurring on the spur of the moment.

I know there are some people who use OpenDNSSEC for a zone or two, and I know it’s used by some TLDs (who typically have relatively few but large zones), but my experience with the software over the years has been suboptimal, and I have once or twice recommended against using it.

Roughly two years ago my customer asked me to return as they were having issues with the signing system, but I refused: I wouldn’t touch the existing installation but would gladly return to implement a new system.

“New” was easier said than done because an eventual migration came with one absolute blocking feature: the keys used for DNSSEC KSKs are stored on a slew of Thales/Entrust HSMs and an absolute prerequisite for migration was that KSKs were to be reused and not roll until ordered to.

After a bit of experimentation and different tests with BIND (for which DNSSEC Policy was on the horizon) and Knot-DNS, it turned out the latter had no apparent issues talking PKCS#11 to the installed HSMs, so the future of the project was clear: Knot-DNS would become the signer.

Knot as bump-in-the-wire signer

Knot-DNS, like BIND, can be a bump-in-the-wire signer, which is how we use it. Knot implements a Key And Signing Policy (KASP) system similar to that used in OpenDNSSEC, and we were able to configure the new signer according to our requirements, so we were basically good to go.

policy:
  - id: automatic
    keystore: thales
    manual: off
    ksk-shared: off
    algorithm: rsasha256
    ksk-size: 2048
    ksk-lifetime: 360d
    zone-max-ttl: 86400
    dnskey-ttl: 3600
    propagation-delay: 120
    rrsig-lifetime: 30d

It took a while for the project to actually kick off, enterprise being enterprise, but we’ve meanwhile successfully migrated two of three largish OpenDNSSEC installations to Knot-DNS. (Just in time BTW, because the last OpenDNSSEC installation is logging that keys can no longer be created … I’m crossing my fingers it’ll hold out another fortnight.)

New software means new features (and often also new bugs, but so far so good). One feature of the new environments we already greatly appreciate is support for Catalog Zones, which Knot implements, and in particular how Knot automates adding member zones to a catalog. BTW, DNS Catalog Zones has just been published as RFC 9432.

Knot’s logs are descriptive and easy to read (important when I’m trying to find out what the server is doing), and its documentation is good. The Knot developers react quickly and helpfully on the mailing list, and are open to new ideas:

  • code I contributed to have keymgr produce JSON was merged in short order
  • a request for a CKA_LABEL to be added to keys generated via PKCS#11 was implemented in short time

    $ preload -S cklist -n --cka-id=20ecaf71fd4c679166af99c5513721164afa2e3c | grep _LABEL
      CKA_LABEL "j01.example. KSK"

And now? I’m satisfied. The new Knot signer is performing well, and the lack of surprises to date is refreshing. It is of course early days, but I’m confident for the future.

DNS and DNSSEC :: 22 Jul 2023 :: e-mail

While explaining Ansible’s local facts to students last week, I was asked whether it’s possible to have encrypted facts on the node which get decrypted on use when read by the Ansible controller. The use-case would be some smallish data which is encrypted at rest, “invisible” to curious people on the node, but usable on the management server.

It might be feasible to place Vault-encrypted (have you seen nanvault?) data on the node and unvault it (that’s a verb I invented) on the controller (wasn’t there a filter for doing that..?), but the first thing which came to mind was to use age and to create a small filter for it, particularly as this student stayed on for the advanced Ansible course in which we discuss filter creation amongst other things.

age calls itself a “simple, modern and secure encryption tool (and Go library) with small explicit keys, no config options, and UNIX-style composability”. I’m quite fond of the concept and the utility and use a pair of Yubikeys on which I keep some of its identities (secret keys).

In order to use age we require a key pair; the public key (recipient in age terminology) is written alongside the secret identity into the key file, and it’s printed to stderr to be copy/pasted directly from the console. As is typical, the secret key must be kept secret. (Here I display one for illustration.)

$ age-keygen -o cow.key
Public key: age1mkmc34wqy8tdda58077cm2p0eg3xedg4g8dk8sqwwczxl69gyvnqq84pha

$ cat cow.key
# created: 2023-06-01T09:56:06+02:00
# public key: age1mkmc34wqy8tdda58077cm2p0eg3xedg4g8dk8sqwwczxl69gyvnqq84pha
AGE-SECRET-KEY-1...

$ age-keygen -o ansible.key   # create another because it's fun and they're cheap
Public key: age19q8pzgaxq2uynsrp3dluxv5apxmqym2pyldwpkp4s30qf4vfzqrsvvjzjv

I then encrypt the data I wish to protect to one or more public keys, first using ASCII armour, and then by base64-encoding the binary encrypted data, showing the two distinct output encodings which my Ansible filter (below) will support.

$ echo "The quick brown fox" |
     age -r "age1mkmc34wqy8tdda58077cm2p0eg3xedg4g8dk8sqwwczxl69gyvnqq84pha" -a

$ echo "Pack my box with five dozen" |
     age -r "age19q8pzgaxq2uynsrp3dluxv5apxmqym2pyldwpkp4s30qf4vfzqrsvvjzjv" |
     base64

I ensure the local fact files, which must be named *.fact on Unix/Linux, are placed on the node I wish them to be on. The first in INI format, the second in JSON:

$ cat /etc/ansible/facts.d/pangram.fact
[p1]
short = YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBpQ21s...

$ cat /etc/ansible/facts.d/other.fact
{
  "armored": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSB4Z3VVZDlnQmhHTG8zREJk\nVUJ5eHNVLzZ6WFlhVXFkNzEvaHN3QnFkdGlFCm00QnZlcktveXo0bDdzVmZXM0Iz\nQjA1SWVhVEY4dThsU2k5SStPS1dJaGcKLS0tIEdyRTduOFAvN0xHZTNMQi9RZnMz\ncUNWVHJCV293ajhRSW5NUUptTC9ndmMKI5DvYVXGNT/I+5FBZ1saqSwac9ObmYZd\nvK/exrZMqlXVwlsXXavxAPA8R9se6vpMYkkI7w==\n-----END AGE ENCRYPTED FILE-----\n"
}

This is definitely a case in which I’d like Ansible to have support for YAML fact files (quite easily implemented as a custom executable fact – yq --tojson or Python come to mind), as the following looks much more elegant:

armored: |
  -----BEGIN AGE ENCRYPTED FILE-----
  YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSB4Z3VVZDlnQmhHTG8zREJk
  ...
  -----END AGE ENCRYPTED FILE-----

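The heart of such an executable fact would be a two-line YAML-to-JSON conversion. Here’s a sketch (my assumptions: PyYAML is available on the node, and the script would read a companion YAML file before printing the result):

```python
#!/usr/bin/env python3
# Sketch of the conversion an executable fact would perform: Ansible expects
# an executable fact to print JSON on stdout, so we load YAML and dump JSON.
import json

import yaml  # PyYAML; assumed to be installed on the managed node

def yaml_to_json(text):
    """Convert a YAML document to a compact JSON string."""
    return json.dumps(yaml.safe_load(text))

# an executable fact would do something like:
#   print(yaml_to_json(open("/etc/ansible/facts.d/other.yaml").read()))
print(yaml_to_json('armored: "-----BEGIN AGE ENCRYPTED FILE-----"'))
```

Dropped into /etc/ansible/facts.d/ with a .fact name and execute permission, its JSON output would be gathered like any other local fact.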
Now we have age-encrypted data on a controlled node. Where the data was encrypted is unimportant; what matters is that it is located on the node and can be read by the controlling node during a fact-gathering dance. When Ansible obtains the local facts from a node, it will see the likes of this:

"ansible_local": {
    "other": {
        "armored": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSB4Z3VVZDlnQmhHTG8zREJk\nVUJ5eHNVLzZ6WFlhVXFkNzEvaHN3QnFkdGlFCm00QnZlcktveXo0bDdzVmZXM0Iz\nQjA1SWVhVEY4dThsU2k5SStPS1dJaGcKLS0tIEdyRTduOFAvN0xHZTNMQi9RZnMz\ncUNWVHJCV293ajhRSW5NUUptTC9ndmMKI5DvYVXGNT/I+5FBZ1saqSwac9ObmYZd\nvK/exrZMqlXVwlsXXavxAPA8R9se6vpMYkkI7w==\n-----END AGE ENCRYPTED FILE-----\n"
    },
    "pangram": {
        "p1": {
            "short": "YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBpQ21sTVN4ZDVkVEkzRWNCSkhYOGJaM3laOXlaRU5nNit3QmtDZkJUem00Cm1CdUFVSFVUNVEyVEY0S011YnJxQWdmYnljU1ZMb3VPTDN3ZzdBOUNRd0UKLS0tIERMeUdENUxFUVIvSUNxeXU5RmFYRDFRaWZpeEhjZXZDOVA2Q1F6K1VrTnMKmTG2+GrnkCezjKkTC3skuQGGEfleHh/PGfYxp0spYgITPNPXZs43cFECnyupz039QQtxljh1HBpggSe+"
        }
    }
},
"ansible_machine": "arm64"

Back on the Ansible controller machine, the playbook will use the content of the two facts which it will decrypt using an age identity; in the first instance using a default key file named ansible.key, and in the second the specified secret key, with both files located on the file system of the controller node (filters run in the templating engine which is invoked on the controller):

- hosts: alice
  tasks:
    - name: Decrypt the age-encrypted and base64-encoded pangram
      debug:
         msg: "{{ ansible_local.pangram.p1.short | age_d }} liquor jugs"

    - name: Decrypt the age-encrypted and ASCII-armoured 2nd pangram
      debug:
         msg: "{{ ansible_local.other.armored | age_d('cow.key') }}"

    - name: Use age command to encrypt the current date string ...
      shell:
         cmd: "age -e -a -r age19q8pzgaxq2uynsrp3dluxv5apxmqym2pyldwpkp4s30qf4vfzqrsvvjzjv <(date)"
         executable: /bin/bash  # yuck
      register: c

    - name: ... and decrypt it using our filter
      debug:
         msg: "{{ c.stdout | age_d }}"

The output should be predictable:

PLAY [alice] *******************************************************

TASK [Gathering Facts] *********************************************
ok: [alice]

TASK [Decrypt the age-encrypted and base64-encoded pangram] ********
ok: [alice] => {
    "msg": "Pack my box with five dozen liquor jugs"
}

TASK [Decrypt the age-encrypted and ASCII-armoured 2nd pangram] ****
ok: [alice] => {
    "msg": "The quick brown fox"
}

TASK [Use age command to encrypt the current date string ...] ******
changed: [alice]

TASK [... and decrypt it using our filter] *************************
ok: [alice] => {
    "msg": "Fri Jun  2 12:55:17 UTC 2023"
}

For decryption the filter requires a path to a file containing identities (i.e. one or more secret keys). While this file could, in theory, be Ansible-vaulted, the age CLI sensibly has no provision for passing the identity on the command line, so Ansible would have to unvault the file before giving it to age – probably quite unwise to do.

age can encrypt data to a set of recipients, making it possible to decrypt the data with distinct identities, say, when more than one Ansible controller accesses nodes and each should use an individual identity for decryption. age’s keys can also be password-protected, but I feel that would be overkill for this task.

On the train-ride back from Berlin today, I decided encryption might also be interesting, so the filter now has that too:

  vars:
    recipient: "age19q8pzgaxq2uynsrp3dluxv5apxmqym2pyldwpkp4s30qf4vfzqrsvvjzjv"

  tasks:
  - name: Encrypt to age with base64 encoding ..
    set_fact:
        secret: "{{ 'Moo 🐄' | age_e(recipient) | b64encode }}"
        armored: "{{ 'more moo 🐮' | age_e(recipient, true) }}"

  - name: .. and decrypt using age_d
    debug:
       msg: "{{ secret | age_d }}"

  - name: .. and decrypt the armored value using age_d
    debug:
       msg: "{{ armored | age_d }}"

And that produces this output:

TASK [Encrypt to age with base64 encoding ..] **************
ok: [localhost]

TASK [.. and decrypt using age_d] **************************
ok: [localhost] => {
    "msg": "Moo 🐄"
}

TASK [.. and decrypt the armored value using age_d] ********
ok: [localhost] => {
    "msg": "more moo 🐮"
}

I’m quite sure my filter’s Python code can be improved upon, so you know what to do: here it is.

ansible :: 01 Jun 2023 :: e-mail
