There’s no end to the fun that was made of me today for touching the K-word. Be that as it may, I have to do this if I want to continue giving AWX/Tower trainings, and in order to do that I need AWX to use an SSH jump host to get to nodes. The reasons for that lie hidden in here.

This post is going to be a quick and dirty collection of how I solved the particular issue I requested help on, and the last thing you want to do is ask me for help on Kubernetes & co. It took me several hours to solve this problem.

What’s the problem? I need to deploy an SSH key and an SSH conf file into the containers (or are those pods?) the AWX task (awx-task) processes are running in.
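For context, the conf file in question is an ordinary SSH client configuration that routes connections through the jump host. A sketch of the kind of thing it might contain (host names and user are invented; the key path matches where I'll mount it):

```
Host jump
    HostName jump.example.net
    User ansible

Host node-*
    ProxyJump jump
    User ansible
    IdentityFile /var/lib/awx/.ssh/id_ed25519
```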

In order to accomplish this, I was finally able to

  1. Create Kubernetes secrets from files
  2. Deploy (I hope that’s the right term) AWX, configured to use those secrets to create read-only volumes, in minikube.

Creating the secrets is easy:

$ kubectl create secret generic awx-jp01 \
	--from-file=keyfile=$HOME/umleit \
	--from-file=conf=…
secret/awx-jp01 created

$ kubectl describe secrets awx-jp01
Name:         awx-jp01
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
conf:     221 bytes
keyfile:  444 bytes
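For reference, the same secret can also be written declaratively instead of with kubectl create; a sketch (the stringData values here are placeholders, not my real key or config):

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: awx-jp01
  namespace: default
type: Opaque
stringData:
  keyfile: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
  conf: |
    Host jump
        User ansible
```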

Then comes the operator (?) deployment (?) file. (You’re noticing I don’t know the proper terminology, and I’m not quite sure I really want to know it…)

---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  tower_ingress_type: Ingress
  tower_admin_user: admin
  tower_admin_password_secret: changeme
  tower_task_extra_volume_mounts: |
    - name: "rootsshdir"
      mountPath: "/var/lib/awx/.ssh"
      readOnly: true
  tower_extra_volumes: |
    - name: "rootsshdir"
      secret:
        secretName: "awx-jp01"
        items:
          - key: keyfile
            path: "id_ed25519"
          - key: conf
            path: "config"

The magic is in tower_extra_volumes which sources the secrets created earlier, and tower_task_extra_volume_mounts which creates the actual bind-mount from the secrets. I’ve not found a stitch of documentation on this; either term produces two Google hits; quite the record. </sarcasm>
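As far as I can tell, the operator splices those two strings into the awx-task container's pod spec, so the end result ought to be equivalent to this plain Kubernetes stanza (a sketch of what I believe gets rendered, not something I've dumped from the cluster):

```yaml
# volumeMounts on the awx-task container:
volumeMounts:
  - name: rootsshdir
    mountPath: /var/lib/awx/.ssh
    readOnly: true
# volumes on the pod:
volumes:
  - name: rootsshdir
    secret:
      secretName: awx-jp01
      items:
        - key: keyfile
          path: id_ed25519
        - key: conf
          path: config
```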

So, I then apply this configuration, and watch how the new pod gets deployed. By the way, if you’re doing this, there are “ALOT” of situations in which the config is accepted but nothing happens.

$ minikube kubectl -- apply -f ~jpm/myawx.yml
awx.awx.ansible.com/awx configured

$ kubectl get pods
NAME                           READY   STATUS        RESTARTS   AGE
awx-555d75485d-nbzf5           4/4     Running       0          5s
awx-86b468c746-sbqdc           0/4     Terminating   0          17m
awx-operator-57bcb58f5-7crxs   1/1     Running       0          106m
awx-postgres-0                 1/1     Running       0          104m

And then I login to the pod itself and see my files:

$ kubectl exec -c awx-task awx-555d75485d-nbzf5 -i -t -- bash -o vi
bash-4.4$ cd /var/lib/awx/.ssh
bash-4.4$ ls -l
total 0
lrwxrwxrwx 1 root root 25 Mar 25 17:24 config ->
lrwxrwxrwx 1 root root 17 Mar 25 17:24 id_ed25519 ->

bash-4.4$ ls -lL
-rw-r--r-- 1 root root 221 Mar 25 17:24 config
-rw-r--r-- 1 root root 444 Mar 25 17:24 id_ed25519

bash-4.4$ ssh -l ansible uname -a
Failed to add the host to the list of known hosts (/var/lib/awx/.ssh/known_hosts).
Password for
FreeBSD 12.2-RELEASE-p1 FreeBSD 12.2-RELEASE-p1 GENERIC  amd64

And now I have definitely deserved a drink. Oh, and have you seen the new “look” of AWX?


Update

This works well from the command line as demonstrated above, but from within AWX it doesn’t. The only hint I’ve found so far is in the Ansible Runner documentation:

Ansible Runner will automatically bind mount your local ssh agent UNIX-domain socket (SSH_AUTH_SOCK) into the container runtime. However, this does not work if files in your ~/.ssh/ directory happen to be symlinked to another directory that is also not mounted into the container runtime

The files are indeed symlinks. Does this mean game over?
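To see locally what the Runner documentation is getting at, here's a re-creation of how Kubernetes materializes secret keys: the real files live in a hidden ..data directory and the visible names are symlinks into it (the directory and file names here are made up for the demo):

```shell
# Simulate a Kubernetes secret volume: payload in ..data, symlinks on top.
demo=/tmp/secret-symlink-demo
rm -rf "$demo" && mkdir -p "$demo/..data"
printf 'Host jumphost\n' > "$demo/..data/config"
ln -s ..data/config "$demo/config"

ls -l "$demo"      # "config -> ..data/config": a symlink (..data itself is hidden)
ls -lL "$demo"     # -L dereferences: shows the regular file behind the link
```

This reproduces the ls vs. ls -lL behaviour shown above: anything that bind-mounts the visible file without also mounting ..data ends up with a dangling symlink.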

Another update

I’ve learned that I can create actual directories and files to avoid the symlinks, but AWX doesn’t use the configuration I drop into /var/lib/awx/.ssh, so I’m a bit at my wit’s end:

  tower_task_extra_volume_mounts: |
    - name: "sshconfig"
      mountPath: "/var/lib/awx/.ssh/config"
      subPath: "config"
      readOnly: true
    - name: "sshkey"
      mountPath: "/var/lib/awx/.ssh/id_ed25519"
      subPath: "id_ed25519"
      readOnly: true
  tower_extra_volumes: |
    - name: "sshconfig"
      secret:
        secretName: "awx-jp01"
        items:
          - key: conf
            path: "config"
    - name: "sshkey"
      secret:
        secretName: "awx-jp01"
        items:
          - key: keyfile
            path: "id_ed25519"

The SSH configuration is taken neither from /var/lib/awx/.ssh nor from /root/.ssh. I’m also not able to inject a file into /etc/ssh/ssh_config.d/xxx.conf, as Kubernetes turns that into a directory, and I cannot overwrite /etc/ssh/ssh_config either: K8s complains that the container won’t start because a directory is trying to overwrite a file (I assume this is related to this “file injection”). So I’m actually capitulating at this point.

Unless somebody has a proven solution, I give up. It’s been a long time since I’ve given up in the face of a bit of software…

Another another update

This isn’t giving me peace. I set up another set of machines and an AWX to find out as which user AWX actually executes my play / tasks. Spying on it with a local id command, I see uid=1000(runner), which is not the uid=1000(awx) I was expecting, so this is all occurring on awx-ee (the execution environment). I had checked that, but maybe I used the wrong home directory? So, let’s try
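The spying can be done with a throwaway play along these lines (a sketch of the kind of play I mean; names are invented):

```yaml
---
# Report which user the job actually runs as inside the execution environment
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Who am I inside the EE?
      command: id
      register: whoami

    - name: Show it
      debug:
        var: whoami.stdout
```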


The runner is launching ansible as uid=1000, with a home directory of /home/runner, in a temporary directory, so why isn’t our SSH config being used?

It’s not using /runner/env/envvars (which it probably should be?). If I launch ansible on awx-ee, I see it’s not picking up an ansible config file (neither in ~/.ansible.cfg nor in ~/ansible.cfg), but I can provoke it into doing so by exporting ANSIBLE_CONFIG. This is better than a treasure hunt. Not.
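For the record, env/envvars in an Ansible Runner private data directory is a YAML dictionary that Runner exports into the job’s environment, so forcing the issue would look something like this (the config file path is an assumption on my part):

```yaml
---
# /runner/env/envvars — exported into the job environment by Ansible Runner
ANSIBLE_CONFIG: /runner/project/ansible.cfg
```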