I was telling you about restic the other day, and I demonstrated using its rest-server for storage. I had briefly looked at Minio, but Alexander mentioned a few possibilities I’d overlooked, so here goes.

Minio is an Open Source, Amazon S3-compatible, distributed object storage server, which is a mouthful for saying it stores photos, videos, containers, virtual machines, log files, or basically any “blob” of data. It also stores backups, because restic knows how to handle a Minio backend and can place its encrypted backups therein.

The Minio cloud storage stack consists of three major components:

  • the minio cloud storage server (a single binary)
  • the Minio client mc (a single binary)
  • a set of Minio SDKs (e.g. minio-py)


The minio storage server is designed to be minimal, and as far as I’m concerned it is: a single statically-linked Go application containing all I need to set up a storage server, dependencies included. I downloaded a version for the architecture of my NAS, and launched it.

$ mkdir config buckets
$ minio --config-dir config server buckets
Created minio configuration file successfully at /tmp/miniodemo/config
AccessKey: M83JKPPVH985R6XNR4XB 
SecretKey: JfNaBXpswthLzOAQRypLh+PIBwjg3LEkRLp/bmzg 

Browser Access:

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio M83JKPPVH985R6XNR4XB JfNaBXpswthLzOAQRypLh+PIBwjg3LEkRLp/bmzg

Drive Capacity: 90 GiB Free, 465 GiB Total

Starting the minio server for the first time generates an Access and a Secret key (unless you’ve pre-configured them in the JSON configuration file). Note the location of config.json as shown in the output, and make a note of the Access and Secret keys, because you’ll need them on the client. The buckets directory is where Minio will start writing data, and I don’t touch whatever’s in there “manually”.

With minio I can also pool multiple drives into a single object storage server, and it supports things like notification of changes in buckets using different targets (AMQP, MQTT, ElasticSearch, Redis, etc.; here’s an example payload I obtained over MQTT).
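As an illustration, here’s how such a notification event might be handled once received. This is a minimal Python sketch; the payload below is a made-up sample shaped like an Amazon S3 event record (which Minio’s notifications follow in general outline), not the actual one I obtained over MQTT.

```python
import json

# A hypothetical bucket-notification payload, shaped like an
# Amazon S3 event record; bucket/object names are invented.
payload = '''
{
  "Records": [
    {
      "eventName": "s3:ObjectCreated:Put",
      "s3": {
        "bucket": {"name": "pail"},
        "object": {"key": "AERO.axfr.gz", "size": 269312}
      }
    }
  ]
}
'''

event = json.loads(payload)
for record in event["Records"]:
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    print(f"{record['eventName']}: {bucket}/{key}")
```

A subscriber sitting on the MQTT topic would do something like this for each message it receives.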

Minio’s documentation is adequate, though it took me a bit to notice the menu selector at the top of a page.

That’s all I need to do to get an Amazon S3-compatible server running; I’ll now use mc on it, and then restic.


mc is the Minio client, and I recommend you keep a copy of its complete guide handy, even though the program has built-in help. It’s basically Minio’s answer to simple Unix commands like ls, cp, diff, etc., and it supports file systems as well as AWS-S3-type storage services like Minio.

I’ll now copy what the minio launch output said above regarding “Command-line Access” and invoke mc with that command, simply changing the alias to “demo”. I then create something called a bucket. If you’re familiar with S3 you know all about buckets; if not: a Minio bucket is a container which holds data like a real-life bucket holds water. Or gin & tonic. I digress. You can name a bucket however you wish, e.g. “data” or “gin-tonic”. I seem to be dehydrating; brb.

I use the Minio client (mc) to, say, create buckets and copy files. In order to do so, mc needs the URL and access keys of the storage server. I add those to its configuration:

$ mc config host add demo M83JKPPVH985R6XNR4XB JfNaBXpswthLzOAQRypLh+PIBwjg3LEkRLp/bmzg
Added `demo` successfully.

I can, and have, added a few storage servers to mc’s configuration using "mc config host add". (It’s not necessary to muck about in the Minio client’s configuration file, but it’s not difficult; I do recommend you verify with jq or python -mjson.tool or some such that your JSON is valid after editing.)
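A small Python check along these lines does the job too (the path in the usage comment is just an example; jq or python -mjson.tool are equally fine):

```python
import json
import sys

def check_json(path):
    """Return True if the file at `path` parses as JSON;
    otherwise report the error and return False."""
    try:
        with open(path) as f:
            json.load(f)
        return True
    except ValueError as e:
        print(f"{path}: invalid JSON: {e}", file=sys.stderr)
        return False

# Example: verify mc's configuration after hand-editing, e.g.
# check_json(os.path.expanduser("~/.mc/config.json"))
```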

Once done, I can “make a bucket” (mb) and copy some files into it:

$ mc mb demo/pail
Bucket created successfully `demo/pail`.

$ mc cp root-anchors/ tld-axfr/ demo/pail
tld-axfr/file..:  41.10 MB / 41.10 MB  ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓  100.00% 18.24 MB/s 2s

I can also launch a Web browser at the minio endpoint (URL shown above when we launched the minio server) and use said keys to login. (To be honest, this is something I don’t really need – I prefer the CLI.)

minio login

logged into minio

Multiple storage instances

I have another Minio server running here called nvx (its configuration is also in mc’s config.json), and I’m going to create a bucket on that named “cubo” (Spanish for “bucket”):

$ mc mb nvx/cubo
Bucket created successfully `nvx/cubo`.

And now for a bit of what makes this interesting for me: we’ll use mc to mirror one bucket to another:

$ mc mirror demo/pail nvx/cubo

$ mc ls nvx/cubo | head -4
[2017-09-06 12:42:30 CEST] 263KiB AERO.axfr.gz
[2017-09-06 12:42:30 CEST]  42KiB AL.axfr.gz
[2017-09-06 12:42:30 CEST]  12KiB AN.axfr.gz
[2017-09-06 12:42:30 CEST] 5.5KiB AO.axfr.gz

Can I add a file to that new bucket and then compare two buckets? Sure:

$ mc cp /etc/passwd nvx/cubo
$ mc diff demo/pail nvx/cubo
> nvx/cubo/passwd

$ mc mirror nvx/cubo demo/pail
$ mc diff demo/pail nvx/cubo

Back to restic

restic has built-in support for Minio, but as you can imagine, handling the AWS-type settings in restic for different servers and repositories can become a bit of a pain.

Thankfully Alexander enjoyed my restic post, and he decided to start using restic with Minio as a backend. In order to make his life easier, he created restic-tools which is basically a shell script wrapper around restic with support for multiple repositories. This happens with a few sourced shell scripts as configuration files, for example:

$ cat /etc/backup/demo.repo
RESTIC_REPOSITORY="s3:"                                     # note "/restic" as bucket name
AWS_ACCESS_KEY_ID="M83JKPPVH985R6XNR4XB"                    # keys from minio launch
AWS_SECRET_ACCESS_KEY="JfNaBXpswthLzOAQRypLh+PIBwjg3LEkRLp/bmzg"
RESTIC_PASSWORD='sekrit'                                    # restic's repository password

$ cat /etc/backup/local/config

With that in place, I use the backup utility to initialize restic’s repository (I don’t specify its password, because it’s already configured in demo.repo):

$ backup demo init
created restic backend 24303c79a5 at s3:

$ backup demo local
scan [/Users/jpm/docs/dir]
scanned 6 directories, 5 files in 0:00
[0:00] 100.00%  0B/s  41.907 KiB / 41.907 KiB  11 / 11 items  0 errors  ETA 0:00
duration: 0:00, 0.84MiB/s
snapshot 3460b459 saved

$ backup demo snapshots
ID        Date                 Host        Tags        Directory
3460b459  2017-09-06 12:56:29  tiggr                   /Users/jpm/Auto/docs/dir

$ backup demo monitor tiggr 10 20
OK - Last snapshot #3460b459 0h ago

Note how the same backup program is used both to perform the backup proper and to invoke any of restic’s commands. Alexander added a special monitor command which produces an Icinga-type notification indicating when the last snapshot (backup) for a particular host was taken. Also note that these files all need to be protected, as they contain Minio’s “AWS” keys and the restic repository password; assuming this is all happening on your local network, I don’t consider that a grave problem.
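The core idea of such a wrapper can be sketched in a few lines. This is my own illustrative Python rendition, not Alexander’s actual code (restic-tools itself is a shell script, and the function and file names here are assumptions):

```python
import os
import subprocess

def load_repo(repo, confdir="/etc/backup"):
    """Read a KEY="value" file such as demo.repo into a copy of the
    current environment. Parsing is deliberately naive: it strips
    comments after '#' and surrounding quotes, nothing more."""
    env = dict(os.environ)
    with open(os.path.join(confdir, repo + ".repo")) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()
            if "=" in line:
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip().strip('"').strip("'")
    return env

def backup(repo, *args):
    """Run restic with the repository's settings exported into its
    environment, e.g. backup("demo", "init") or backup("demo", "snapshots")."""
    return subprocess.call(["restic", *args], env=load_repo(repo))
```

Because restic reads RESTIC_REPOSITORY, RESTIC_PASSWORD, and the AWS keys from its environment, the wrapper never has to pass credentials on the command line.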

Not only is it now easy to create backups and restore data, we also have the added feature that, using Minio buckets, we can replicate (or mirror) those buckets off-site to another Minio instance. (Replication using the likes of rsync could also be done with files stored by rest-server or SFTP of course.)

Note that other than working with the buckets restic produces as a whole, there’s not much we can do with their content (aside from using restic, of course) because the organization within a bucket is restic’s job. In other words, an mc diff or similar will be pretty useless.

So far, my only complaint about Minio is its name: using my favorite search engine to search for information on Minio means wading through bucketfuls (!) of pointers to Minions – a different thing entirely. ;-)

Further reading:

backup and restic :: 06 Sep 2017 :: e-mail