I’ve been very happy running Proxmox-VE on a couple of machines back at the ranch since late last year, but that wasn’t enough: I’m on the road frequently, and need lots of “machines” to play with. I’ve used VMware and VirtualBox guests, but it is cumbersome to build them individually, and cloning/copying never really satisfied my needs. And yes, I’ve used Vagrant (which is very good), but I never really warmed to it.
After recently upgrading my Mac to lots of RAM and SSD goodness (and why didn’t anyone tell me to get SSD before?!?!?), I thought: let’s try Proxmox-VE locally (i.e. on the workstation), and with a bit of Peter’s networking advice, I’m now pleased as punch. The setup I describe should run quite the same way on Linux or Windows, if that’s your poison.
This is what my workstation now runs:
The whole Proxmox-VE setup runs on a Debian box as a VMware guest which is configured to use a NAT network device. As such, anything on my workstation can talk to the devices within VMware, and anything in the VM can talk to anybody, including accessing whichever interface is currently connected to the world. All containers within the Proxmox-VE system can speak to each other.
I need a shared directory which the OpenVZ containers can write to and read from, so I set up a shared VMware directory (they call it a “Folder”).
In the VMware guest (i.e. CT0), this directory is accessible as /mnt/hgfs/data via the guest tools.
From there, I can mount that onto each and every container with a “bind” mount from a vps.mount script I created for bootstrapping my containers.
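The script itself isn’t reproduced here; a minimal sketch of what such a global OpenVZ mount hook (/etc/vz/conf/vps.mount) could look like — the destination path /mnt inside the container is an assumption:

```shell
#!/bin/sh
# Global OpenVZ mount hook, run by vzctl on every container start.
# vzctl sets $VEID and $VE_CONFFILE; the container's config defines
# $VE_ROOT, the container's root file system as seen from CT0.
. /etc/vz/vz.conf
. "$VE_CONFFILE"

SRC=/mnt/hgfs/data        # the VMware shared folder in CT0
DST="$VE_ROOT/mnt"        # shows up as /mnt inside the container

mount -n --bind "$SRC" "$DST"
```

Because the hook runs on every start, each new container gets the shared directory without further configuration.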
Works like a charm. This allows me, within a container, to edit a file called
“/mnt/hello”, say, and I’ll find that file at the exact same path in all containers, and as
“~/proxmox/data/hello” on my workstation.
Launch at boot
In order to have this VMware guest launch at boot I created the following launch daemon control file in /Library/LaunchDaemons/ (Mac OS X, obviously).
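The plist itself isn’t shown above; a sketch of such a launch daemon — the label, the .vmx path, and the vmrun location are assumptions (newer Fusion versions keep vmrun elsewhere):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.proxmox-vm</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Applications/VMware Fusion.app/Contents/Library/vmrun</string>
        <string>start</string>
        <string>/Users/me/proxmox/proxmox.vmx</string>
        <string>nogui</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```

Loading it once with launchctl load (or rebooting) brings the guest up headless.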
Apart from an annoying dialog which pops up warning that an interface is going into promiscuous mode, this is good to go.
Using the command-line
I can create, start, stop, destroy, or tweak OpenVZ containers in Proxmox by pointing a Web browser at the Proxmox-VE host, but I prefer the command-line. I looked at ProxBash, but it doesn’t appeal to me: it uses SSH to connect to Proxmox from where it then runs pvesh.
I’m aware of two libraries which can speak to Proxmox-VE over HTTP: proxmoxia (written by a colleague of Paul’s) and pyproxmox. I had already started getting the hang of the latter, so I’m using that to create a small utility I’ll call ct (for container). It’s easy. This, for example, is all it takes to create a new container:
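My actual ct code isn’t reproduced here, but the core of its create step with pyproxmox might look like this sketch — the host, node, template, addresses, and password are all assumptions:

```python
# Sketch of the "create" part of a ct-like utility using pyproxmox.
# All concrete names (host, node, template, addresses) are assumptions.

def container_payload(vmid, hostname, ip_address,
                      nameserver='172.16.153.1',
                      ostemplate='local:vztmpl/debian-7.0-standard_7.0-2_i386.tar.gz'):
    """Build the POST data for a new OpenVZ container."""
    return {
        'vmid': str(vmid),
        'hostname': hostname,
        'ip_address': ip_address,
        'nameserver': nameserver,   # Unbound on the workstation
        'ostemplate': ostemplate,
        'memory': '512',
        'swap': '512',
        'disk': '4',
        'password': 'change-me',
    }

def create_container(host, user, password, node, payload):
    """Create the container via the Proxmox REST API (needs pyproxmox installed)."""
    from pyproxmox import prox_auth, pyproxmox
    api = pyproxmox(prox_auth(host, user, password))
    return api.createOpenvzContainer(node, payload)

# e.g. create_container('proxmox.prox', 'root@pam', 's3cret', 'proxmox',
#                       container_payload(105, 'redis', '172.16.153.105'))
```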
This is what my ct list currently displays; the output is similar to this:
Names instead of addresses
I have terrible trouble remembering IP addresses (except 127.0.0.1) so I want to use names within the OpenVZ containers and when connecting from the workstation to a container. I want to keep things as easy as possible, but I certainly don’t want to edit /etc/hosts files (even though I could distribute them into the containers with Ansible), so I add host names and their IP addresses to Unbound, which runs as part of DNSSEC-Trigger on my workstation anyway. That sounds horribly complicated, but it isn’t:
- Unbound is a caching name server with DNSSEC support (but you can disable that if you don’t want it). (Here’s a presentation I gave on Unbound.)
- Unbound can be configured to serve local data, i.e. names that you add, either because they don’t exist, or because you want to override something.
I create these names in a “domain” called .prox. Say I’m creating a container for Redis and want its host name to be redis.prox: I simply do this, and the name is immediately resolvable:
(This addition is volatile; in order to have it permanently assigned, I create a config file which I include into unbound.conf for the next reboot of the workstation.)
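For illustration: the volatile addition is a one-liner such as `unbound-control local_data 'redis.prox. IN A 172.16.153.101'`, and the persistent file included from unbound.conf might look like this — the address is an assumption, and `static` is one choice of zone type:

```
server:
    local-zone: "prox." static
    local-data: "redis.prox. IN A 172.16.153.101"
```

With `static`, Unbound answers only the names listed in local-data for the prox. zone and returns NXDOMAIN for anything else in it.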
The containers are configured to use the Unbound server on my workstation (see nameserver in the Python program above), so they, too, have access to this information.
There are some things I want installed on all OpenVZ containers within Proxmox-VE, so I bootstrap those as I’ve previously described.
My portable data center; I’m loving it!