During the course of time I’ve been through a slew of backup utilities: everything from cpio and tar, via rsync, to expensive off-site services which got me angry every time either the software or its “backend” was changed. (They call it “upgrading”, but if you force radical changes on me it feels more like a downgrade. I digress.)

I’ve been using restic for some months, and it offers all I need in terms of backup/restore. In no particular order, some of its most notable features:

  • several different local and remote storage backends, and many of the remote backends can be local to my network (e.g. SFTP, REST)
  • complete documentation (IMO good documentation is too often underrated)
  • backups can be mounted and browsed
  • restores to different directories
  • a single, statically linked, Open Source binary with easily remembered options and built-in help

In order to create a working example, I will use the REST server backend (created by the makers of restic) as a backup endpoint for restic, and we’ll take it from there.

restic at work

rest-server

The rest-server is easy to set up as it also is just a static binary I launch on a machine onto which I want to backup my data. I will assume HTTP is sufficient (TLS protects the authentication credentials in transit, but I can do without that in my small local network; restic encrypts the backups anyway).

So, on the target machine onto which I will be running backups, I launch rest-server (no root required as long as the user it runs as can write into the desired path, so please don’t gratuitously sudo) and give it a path into which it should store backup snapshots:

$ rest-server --path /mnt/bigdisk/backups
rest-server 0.9.4 compiled with go1.8.3
Data directory: /mnt/bigdisk/backups/
Authentication disabled
Starting server on :8000
Creating repository directories in /mnt/bigdisk/backups/jp1

The last line of diagnostic output appears later, when we create our first backup repository.

That completes the setup we need for remote backups to be sent to the REST server. (Interestingly, rest-server uses the same directory structure as the local backend, so you can access these files both locally and via HTTP, even simultaneously; I’ll show you this later.)

rest-server can also provide HTTP basic authentication; a simple htpasswd-type file placed in the root directory of the backup target enables it. (TLS is, as mentioned, available as well.)

$ htpasswd -s -c /mnt/bigdisk/backups/.htpasswd jjolie
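Once authentication is enabled, restic needs the credentials; they can be embedded in the repository URL. A short sketch (the user name comes from the htpasswd example above, the password is made up):

```shell
# Restart rest-server so it picks up the .htpasswd file, then point
# restic at the server with credentials in the URL (example values):
export REPO='rest:http://jjolie:s3cr3t@192.168.1.188:8000/jp1'
restic -r $REPO snapshots
```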

backends

The REST server backend we configured is but one possibility. restic will happily work with a number of different data stores for backing up your data. These include

  • local directories
  • SFTP using public keys
  • Amazon S3
  • the Open Source Minio Object Storage
  • Openstack Swift
  • Backblaze B2
  • Azure Blob Storage
  • Google Cloud Storage.

I have been running backups over SFTP (because the REST backend didn’t exist when I started).

Irrespective of the backend used, restic encrypts data before it is stored, and the storage location is assumed to be untrusted. This makes even a local disk which I periodically deposit at a friend’s house, or a backup to somebody else’s NAS, practical, because the data is protected.
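The local-directory case is the simplest to sketch: the workflow is identical, with a filesystem path as the repository (the paths here are examples):

```shell
# Initialize a repository on, say, a removable disk:
restic -r /mnt/usbdrive/restic-repo init
# Back up into it exactly as with any other backend:
restic -r /mnt/usbdrive/restic-repo backup ~/Documents
```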

restic

Assuming we’ve decided where our backups are to be stored we first have to initialize a repository for restic to use. To keep the examples easier to follow, I will create an environment variable which points to a repository called jp1 on our REST server. Note the prefix rest: on the repository name:

$ export REPO='rest:http://192.168.1.188:8000/jp1'

$ restic -r $REPO init
enter password for new backend:
enter password again:
created restic backend fddd6a95ff at rest:http://192.168.1.188:8000/jp1

The password or pass phrase we enter is also called a key, and we can create as many as we want for this repository. (Keys can also be removed.) Anybody who has access to a repository key can unlock the repository. Note that this key is for the restic data repository proper; it is quite possible that your repository’s backend needs further authentication (e.g. HTTP basic auth for REST backend if you’ve configured it, SSH private key for SFTP, S3 credentials for the S3 backend, etc.)
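Key management is done with the key subcommand; a short sketch (the key ID is a placeholder for an ID shown by `key list`):

```shell
# List the keys known to this repository:
restic -r $REPO key list
# Add a second pass phrase (restic prompts for it):
restic -r $REPO key add
# Remove a key by ID; you cannot remove the key you are currently using:
restic -r $REPO key remove <key-id>
```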

The password or key can also be passed to restic via an environment variable; I do this here to keep the output in my examples uncluttered, and it also allows automation from a script:

$ export RESTIC_PASSWORD='<clear-text-of-restic-password-here>'
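If keeping the password in an environment variable makes you uneasy, restic can also read it from a file (the path here is an example):

```shell
# Alternative to RESTIC_PASSWORD: keep the pass phrase in a file
# readable only by you:
restic -r $REPO -p ~/.restic-pass snapshots
```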

Backups are kicked off with the backup subcommand. In restic terminology, the contents of a directory at a specific point in time is called a “snapshot”, so creating a backup actually means to create a snapshot:

$ restic -r $REPO backup /usr/share
scan [/usr/share]
scanned 725 directories, 14902 files in 0:00
[0:12] 100.00%  33.039 MiB/s  399.714 MiB / 399.714 MiB  15627 / 15627 items  0 errors  ETA 0:00
duration: 0:12, 31.12MiB/s
snapshot 5ebde637 saved

$ restic -r $REPO backup /usr/local/etc
...
$ restic -r $REPO backup /usr/local/etc
...
$ restic -r $REPO backup /usr/share
...
duration: 0:01, 319.43MiB/s
snapshot d7fe3fa0 saved

The second snapshot of a directory will typically be much faster than the first: when a directory is backed up, restic finds the pertaining snapshot for that directory and stores only files which have changed; in other words, backups are always incremental if a matching snapshot exists. Similarly to how tar or rsync operate, we can also exclude particular files or directories from a snapshot.
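Exclusions work much as you would expect; a sketch (the patterns and file name are examples):

```shell
# Skip object files and the doc tree when snapshotting /usr/share:
restic -r $REPO backup --exclude '*.o' --exclude /usr/share/doc /usr/share
# Patterns can also live in a file, one per line:
restic -r $REPO backup --exclude-file=excludes.txt /usr/share
```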

restic also accepts data from standard input for, say, backing up live data from an RDBMS or whatnot.

$ echo Hello world | restic -r $REPO backup --stdin --stdin-filename greetz
[0:00] 12B  0B/s
duration: 0:00, 0.00MiB/s
archived as 2dd3a47d
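This makes dumping a live database straight into the repository easy; a sketch, assuming a MySQL server with a hypothetical database called mydb:

```shell
# Pipe a database dump directly into restic; nothing is written
# to local disk along the way:
mysqldump mydb | restic -r $REPO backup --stdin --stdin-filename mydb.sql
```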

At any moment we can see which snapshots we have; it turns out a snapshot has been created from a Windows machine as well. (I would typically use a different repository for each machine, but I wanted to demonstrate restic’s interoperability.)

$ restic -r $REPO snapshots
ID        Date                 Host              Tags        Directory
----------------------------------------------------------------------
5ebde637  2017-08-23 09:28:40  tiggr.ww.mens.de              /usr/share
547a1a76  2017-08-23 09:31:44  tiggr.ww.mens.de              /usr/local/etc
f4e3d16b  2017-08-23 09:32:48  tiggr.ww.mens.de              /usr/local/etc
d7fe3fa0  2017-08-23 09:35:59  tiggr.ww.mens.de              /usr/share
2dd3a47d  2017-08-23 09:39:19  tiggr.ww.mens.de              greetz
c8599517  2017-08-23 09:52:50  t420                          C:\Users\jpm\bin\dict

What’s a backup without a restore? Not much. Restoring snapshots with restic is a snap. We can, for example, restore into a different directory:

$ restic -r $REPO restore 547a1a76 --target /tmp/rr
restoring <Snapshot 547a1a76 of [/usr/local/etc] at 2017-08-23 09:31:44.730050452 +0200 CEST \
   by jpm@tiggr.ww.mens.de> to /tmp/rr
$ ls -l /tmp/rr/etc/unbound/unbound.conf
-rw-r--r--  1 jpm  admin  30780 Dec 15  2016 /tmp/rr/etc/unbound/unbound.conf

Instead of specifying a particular snapshot ID, I can also use the keyword latest to restore the latest backup.
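For example (the --path filter is optional and disambiguates which “latest” is meant when a repository holds snapshots of several directories):

```shell
# Restore the most recent snapshot into a fresh directory:
restic -r $REPO restore latest --target /tmp/rr
# Constrain "latest" to a particular backed-up path:
restic -r $REPO restore latest --target /tmp/rr --path /usr/local/etc
```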

Did I mention restic is multi-platform?

C:\Users\jpm>restic.exe -r %REPO% restore 2dd3a47d --target gr
restoring <Snapshot 2dd3a47d of [greetz] at 2017-08-23 09:39:19.075975641 +0200 CEST \
      by jpm@tiggr.ww.mens.de> to gr

C:\Users\jpm>type gr\greetz
Hello world

I can also mount and browse snapshots (on macOS/Linux). To do so, I use the mount subcommand which provides a FUSE file system onto a restic backup. Therein, I can browse to my heart’s content, inspect files, copy files out, etc.

$ mkdir /tmp/m
$ restic -r $REPO mount /tmp/m
Now serving the repository at /tmp/m
Don't forget to umount after quitting!
$ tree -L 3 /tmp/m
.
├── hosts
│   ├── t420
│   │   └── 2017-08-23T09:52:50+02:00
│   └── tiggr.ww.mens.de
│       ├── 2017-08-23T09:28:40+02:00
│       ├── 2017-08-23T09:31:44+02:00
│       ├── 2017-08-23T09:32:48+02:00
│       ├── 2017-08-23T09:35:59+02:00
│       └── 2017-08-23T09:39:19+02:00
├── snapshots
│   ├── 2017-08-23T09:28:40+02:00
│   │   └── share
│   ├── 2017-08-23T09:31:44+02:00
│   │   └── etc
│   ├── 2017-08-23T09:32:48+02:00
│   │   └── etc
│   ├── 2017-08-23T09:35:59+02:00
│   │   └── share
│   ├── 2017-08-23T09:39:19+02:00
│   │   └── greetz
│   └── 2017-08-23T09:52:50+02:00
│       └── dict
└── tags

22 directories, 1 file
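When I’m done browsing, I unmount the FUSE file system again:

```shell
# Detach the FUSE mount when finished:
fusermount -u /tmp/m    # Linux
# umount /tmp/m         # macOS
```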

Backup space is typically finite (at least for me), so I will want to remove snapshots occasionally and ensure the repository is intact: restic provides commands for both, including the possibility of defining a policy for removing snapshots (forget).
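A sketch of what such housekeeping might look like (the retention numbers are examples, not a recommendation):

```shell
# Keep 7 daily, 5 weekly and 12 monthly snapshots; drop the rest:
restic -r $REPO forget --keep-daily 7 --keep-weekly 5 --keep-monthly 12
# forget removes snapshot records; prune actually reclaims the space:
restic -r $REPO prune
# Verify the repository's integrity:
restic -r $REPO check
```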

As one final demonstration of why restic puts my mind at ease: remember we had a REST server accepting backups. Let us assume that server dies, but we can still access the files and directories it created. We can access them directly, using restic’s local backend:

$ restic -r /mnt/bigdisk/backups/jp1 snapshots
...

Wrapping up

restic offers me the flexibility I want. For example, I can create local snapshots and move those to a remote location later, or I can use any of the backends which work on my network (SFTP, REST, Minio) and not share the data with others. I can also use restic to back up to a friend’s NAS because I know my data will be encrypted. If you’re fond of cloud services for your backups, restic offers quite a choice.

I cannot judge whether restic is particularly fast or not – I’ve simply not tested that, because reliability and ease of restore are more important to me than backup throughput.

restic is Open Source, and its design is open. This is particularly important. When I asked a friend last night to confirm he is (or should I say “was”?) a Crashplan customer, he responded “Yes, not happy”. I can imagine: I’d be a bit miffed to hear my choice of backup software’s going down the tubes for me. If you are currently using some commercial backup software, this might be the perfect time for you to evaluate restic.

Continued in my restic backend of choice: minio.

Further reading:

backup and toolbox :: 22 Aug 2017 :: e-mail