I’ve been very happy running Proxmox-VE on a couple of machines back at the ranch since late last year, but that wasn’t enough: I’m on the road frequently and need lots of “machines” to play with. I’ve used VMware and VirtualBox guests, but building them individually is cumbersome, and cloning/copying never really satisfied my needs. And yes, I’ve used Vagrant (which is very good), but I never really warmed to it.
After recently upgrading my Mac to lots of RAM and SSD goodness (and why didn’t anyone tell me to get an SSD before?!?!?), I thought: let’s try Proxmox-VE locally (i.e. on the workstation), and with a bit of Peter’s networking advice, I’m now pleased as punch. The setup I describe should work much the same way on Linux or Windows, if that’s your poison.
This is what my workstation now runs:
The whole Proxmox-VE setup runs on a Debian box as a VMware guest which is configured to use a NAT network device. As such, anything on my workstation can talk to the machines within VMware, anything in the VM can talk to the outside world through whichever interface is currently connected, and all containers within the Proxmox-VE system can speak to each other.
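By way of illustration only, a Proxmox guest typically carries the standard vmbr0 bridge on top of its (NAT’ed) eth0; the addresses below are made up and would have to match your VMware NAT subnet (in my case 172.16.153.0/24, with the workstation at .1; VMware’s NAT gateway usually sits at .2):
# /etc/network/interfaces on the Proxmox-VE guest (CT0): the usual vmbr0
# bridge on top of eth0. Addresses are illustrative; adjust to your vmnet.
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        address 172.16.153.100
        netmask 255.255.255.0
        gateway 172.16.153.2
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0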
File systems
I need a shared directory which the OpenVZ containers can write to and read from, so I set up a shared VMware directory (they call it a “Folder”). In the VMware guest (i.e. CT0), this directory is accessible as /mnt/hgfs/data via the guest tools. From there, I can mount it into each and every container with a “bind” mount from the vps.mount script I created for bootstrapping my containers.
#!/bin/bash
# Create a bind mount into the container which is now being mounted.
# The source directory on CT0 must already exist. (In fact it is a
# VMware shared folder here, but that's irrelevant to the bind mount.)
/bin/mount -n --bind /mnt/hgfs/data /var/lib/vz/root/${VEID}/mnt
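For completeness: a matching vps.umount script can take the bind mount down again just before vzctl unmounts the container’s private area; depending on the vzctl version this avoids a “device is busy” when stopping a container. A minimal sketch:
#!/bin/bash
# Companion to vps.mount: remove the bind mount from the container
# which is about to be unmounted.
/bin/umount /var/lib/vz/root/${VEID}/mnt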
Works like a charm. This allows me to, say, edit a file called /mnt/hello within a container, and I’ll find that file at the exact same path in all containers, and as ~/proxmox/data/hello on my workstation.
Launch at boot
In order to have this VMware guest launch at boot, I created the following launch daemon control file in /Library/LaunchDaemons/ (this is Mac OS X, obviously).
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Disabled</key>
<false/>
<key>KeepAlive</key>
<false/>
<key>Label</key>
<string>net.jpmens.proxmox.headless</string>
<key>ProgramArguments</key>
<array>
<string>/Applications/VMware Fusion.app/Contents/Library/vmrun</string>
<string>-T</string>
<string>fusion</string>
<string>start</string>
<string>/Users/jpm/Documents/Virtual Machines.localized/Proxmox.vmwarevm/Proxmox.vmx</string>
<string>nogui</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>UserName</key>
<string>jpm</string>
</dict>
</plist>
Apart from an annoying dialog which pops up warning that an interface is going into promiscuous mode, this is good to go.
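To activate the daemon without waiting for a reboot, load it once by hand (assuming the file is named after its Label):
$ sudo launchctl load /Library/LaunchDaemons/net.jpmens.proxmox.headless.plist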
Using the command-line
I can create, start, stop, destroy, or tweak OpenVZ containers in Proxmox by pointing a Web browser at the Proxmox-VE host, but I prefer the command-line. I looked at ProxBash, but it doesn’t appeal to me: it uses SSH to connect to Proxmox from where it then runs pvesh.
I’m aware of two libraries which can speak to Proxmox-VE over HTTP: proxmoxia (written by a colleague of Paul’s) and pyproxmox. I had already started getting the hang of the latter, so I’m using that to create a small utility I’ll call ct (for container). It’s easy. This, for example, is all it takes to create a new container:
#!/usr/bin/env python
from pyproxmox import *
auth = prox_auth('proxmox.prox', 'root@pam', 'secret')
pm = pyproxmox(auth)
container_data = {
    'searchdomain' : 'prox',
    'hostname'     : 'redis.prox',
    'ostemplate'   : 'local:vztmpl/centos-6-standard_6.3-1_amd64.tar.gz',
    'vmid'         : '108',
    'cpus'         : '1',
    'nameserver'   : '172.16.153.1',
    'ip_address'   : '172.16.153.108',
    'onboot'       : '1',
    'disk'         : '1',
    'memory'       : '256',
    'password'     : 'secret',
    'description'  : 'quick and dirty'
}
ct = pm.createOpenvzContainer('proxmox', container_data)
# {u'data': u'UPID:proxmox:000216A0:006B0F0D:510F3D92:vzcreate:108:root@pam:'}
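createOpenvzContainer() only queues the job (the UPID in the comment identifies the task); the container still has to be started. With pyproxmox that is a one-liner; the method names below are from its documentation and may differ in your version:
# Boot the freshly created container and ask Proxmox what state it is in.
pm.startOpenvzContainer('proxmox', '108')
print(pm.getContainerStatus('proxmox', '108'))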
This is what my ct list currently displays; the output is similar to that of vzlist:
103 172.16.153.103 nsd4.prox running
101 172.16.153.101 bind993.prox running
102 172.16.153.102 pdns1.prox running
108 172.16.153.108 redis.prox stopped
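Under the hood, the listing is little more than a loop over what the Proxmox API returns for the node. A rough sketch of the idea; the method name getNodeContainerIndex and the field names are from my reading of pyproxmox and the API docs, so treat them as approximate (the IP address column actually comes from each container’s individual config):
# Enumerate the OpenVZ containers on node 'proxmox' (GET /nodes/{node}/openvz)
# and print vmid, hostname and status, roughly as vzlist would.
for c in pm.getNodeContainerIndex('proxmox')['data']:
    print("%(vmid)s  %(name)s  %(status)s" % c)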
Names instead of addresses
I have terrible trouble remembering IP addresses (except 127.0.0.1), so I want to use names within the OpenVZ containers and when connecting from the workstation to a container. I want to keep things as easy as possible, but certainly don’t want to edit /etc/hosts files (even though I could distribute them into the containers with Ansible), so I add host names and their IP addresses to Unbound, which runs as part of DNSSEC-Trigger on my workstation anyway. That sounds horribly complicated, but it isn’t:
- Unbound is a caching name server with DNSSEC support (but you can disable that if you don’t want it). (Here’s a presentation I gave on Unbound.)
- Unbound can be configured to serve local data, i.e. names that you add, either because they don’t exist, or because you want to override something.
I create these names in a “domain” called .prox. Assuming I’m creating a container for, say, Redis, and want its host name to be redis.prox, I simply do this, and the name is immediately resolvable:
$ unbound-control local_data redis.prox 10 A 172.16.153.108
$ dig redis.prox
;; ANSWER SECTION:
redis.prox. 10 IN A 172.16.153.108
(This addition is volatile; in order to have it assigned permanently, i.e. to survive the next reboot of the workstation, I create a config file which I include into unbound.conf.)
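Such an included file contains nothing more than the same data in unbound.conf syntax. For example (the file name is my choice; the include: line goes into unbound.conf):
# prox.conf -- pulled in from unbound.conf with:  include: "prox.conf"
server:
    local-data: "redis.prox.  10 IN A 172.16.153.108"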
The containers are configured to use the Unbound server on my workstation (see nameserver in the Python program above), so they, too, have access to this information.
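One detail to check, depending on how Unbound was installed: by default it listens on and answers only localhost, so for the containers to reach it, the workstation’s vmnet address has to be opened up in unbound.conf, roughly like this:
server:
    # listen on the VMware NAT interface in addition to localhost
    interface: 127.0.0.1
    interface: 172.16.153.1
    # and allow queries from the containers' subnet
    access-control: 172.16.153.0/24 allow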
There are some things I want installed on all OpenVZ containers within Proxmox-VE, so I bootstrap those as I’ve previously described.
My portable data center; I’m loving it!