Sat, 30 Sep 2017
Restic Systems Backup Setup, Part 2.5 - dealing with 'Unable to backup/restore files/dirs with same name'
This is Part 2.5 of my series on building a restic-based system backup setup. The rest of the articles can be found here.
You should be reading Part 3 here, but in the development of that, I ran into this restic bug: Unable to backup/restore files/dirs with same name.
Unfortunately, for historic reasons (buried in some of the oldest code in restic), only the last component of a path being backed up in a restic repository is reflected in the repo. For example, in my case, when I wanted to back up both /local and /usr/local, they would show up as local at the top of the repo, a very confusing state. Later versions of restic would rename things so there was a local-0 and a local-1, etc., but it's still very confusing.
The primary restic developer is working on resolving this, as many other people have run into it, and it is expected to be fixed in restic 0.8. Until then, the suggestion is to simply tell restic to back up / and exclude everything you don't want backed up. A workable enough solution, but I still want something where I can think in terms of what I want backed up, and something else figures out the exclusions. That way, I can just add or remove things from my list and not have to re-figure what to exclude. And things I don't care about can come and go without accidentally getting backed up.
A few hours of work and experimentation, and I had restic-unroll, which does just that. In the same directory in that git repo is an example bash script you might wrap things in to do a daily system backup.
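To give a flavor of the idea, here's a rough sketch of the core of it, not the actual restic-unroll code, and with a made-up include list: take the paths you want backed up, walk down from /, and emit an exclude for anything that neither contains nor is contained in one of those paths.

#!/bin/sh
# Rough sketch of the restic-unroll idea, not the real tool.
# Hypothetical include list; doesn't handle whitespace in paths.
want="/etc /usr/local /var/log"

excludes=""
for top in /*; do
    keep=no
    for w in $want; do
        # keep $top if it is one of the wanted paths, or leads to one
        case "$w" in
            "$top"|"$top"/*) keep=yes ;;
        esac
    done
    [ "$keep" = no ] && excludes="$excludes --exclude=$top"
done
# (A full version would also recurse into partially-included trees like /usr,
#  excluding /usr/bin, /usr/share, etc. while keeping /usr/local.)

restic backup $excludes /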
As a reminder, you can find the canonical repository of all my utility scripts in this series here.
Posted at: 17:58 | category: /computers/backups/restic-systems-backups | Link
Sat, 16 Sep 2017
Restic Systems Backup Setup, Part 2 - Running minio under runit under systemd
Part 2 of my series on building a restic-based system backup setup. Part 1 can be found here.
As described in Part 1, my general strategy is to have a centralized backup server at a particular location, running an instance of minio for each server being backed up. I'm going to want to be running N minio server --config-dir=/... instances, and I want a simple way to add and start instances and keep them running. In essence, I want a simple init service.
Fortunately, if you're looking for a simple init service, you need look no further than runit. It's an incredibly tiny init-like system, composed of some simple tools: runsv to run a service, keep it up, and optionally log its stdout output somewhere; sv to control that service by simply talking to a socket; and runsvdir to keep a collection of runsv instances going. Defining a service is simple: in a directory there is a run file, which is used by runsv to start the service. If you want to log, create a log subdirectory with its own run file — that file is executed and given the stdout of the main process as its input (the included svlogd command is a simple process for handling logs). To run a bunch of runsv instances, put them (or symlinks to them) all in a single directory, and point runsvdir at it. As a bonus, runsvdir monitors that directory, and if a runsv directory is created or goes away, runsvdir does the right thing.
It's an incredibly useful set of commands, and it allows you to manage processes fairly easily. In this case, every time I add a machine to this backup scheme, I make an appropriate runsv directory with the correct minio incantation in the run file, and just symlink it into the runsvdir directory. We've been using runit at work for quite a while now in containers, and it's an awesome tool.
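For illustration, a run file for one of those minio instances might look something like this; the instance name, config directory, and data directory here are made up, not lifted from my actual setup:

#!/bin/sh
# hypothetical: /backups/systems/conf/runit/host1-minio/run
# exec so that runsv ends up supervising minio itself
exec 2>&1
exec /usr/local/bin/minio server \
    --config-dir=/backups/systems/conf/minio/host1 \
    /backups/systems/data/host1

And, if you want the output kept somewhere, a log/run alongside it:

#!/bin/sh
# hypothetical: /backups/systems/conf/runit/host1-minio/log/run
exec svlogd -tt /backups/systems/logs/host1-minio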
My newly-minted backup server is running Debian Stretch, which uses systemd as its init system. Creating systemd unit files is still something I have to think about hard whenever I do it, so here's the one I use for runit:
[Unit]
Description=Backup Service Minio Master runsvdir

[Service]
ExecStart=/usr/bin/runsvdir -P /backups/systems/conf/runit/
Restart=always
KillMode=process
KillSignal=SIGHUP
SuccessExitStatus=111
WorkingDirectory=/backups/systems
User=backups
Group=backups
UMask=002

[Install]
WantedBy=multi-user.target
Here, systemd starts runsvdir, pointing it at my top-level directory of runsv directories. It runs it as the backups user and group, and makes it something that starts up once the system reaches "multi-user mode".
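Assuming that unit is saved as something like /etc/systemd/system/backups-runsvdir.service (the name is arbitrary), wiring it up is the usual systemd dance, and the individual minio instances can then be poked at with sv:

systemctl daemon-reload
systemctl enable backups-runsvdir.service
systemctl start backups-runsvdir.service

# check on an individual instance (hypothetical service directory)
sv status /backups/systems/conf/runit/host1-minio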
Part 3 is coming, where I'll document backing up my first system.
Posted at: 18:41 | category: /computers/backups/restic-systems-backups | Link
Current PGP Practices: GPG 2.1 and a Yubikey 4
I might write this up as a full tutorial someday, but there are already a few of those out there. That said, here's a short outline of my current usage of PGP, aided by modern GPG and the OpenPGP smartcard functionality of a Yubikey 4.
- Use GnuPG 2.1. Private keys are stored in a .d directory, it can act as an ssh key agent, and you can forward your local gpg-agent to a remote server. Oh, and it supports OpenPGP smartcards.
- Use OpenSSH > 6.7. Makes gpg-agent forwarding much easier (there's a sketch of that forwarding setup after this list).
- Use contemporary GnuPG configuration settings. Riseup used to have a good guide for this, but it's sadly vanished behind HSTS. But this YubiKey-Guide has the settings, if not the explanation. It's also a fairly comprehensive step-by-step set of instructions for the entire process.
- Keep backups of your master keypair and revocation certificates. Pretty straightforward, not only will you need this to, say, load a new Yubikey or change a subkey, you'll also need this to sign anyone else's keys. I keep three copies at all times, with one always in a bank safe deposit box.
- Generate your master key offline. A Raspberry Pi not plugged into any network is a great tool for this, although you'll most likely have to bang your hands on the keys quite a bit to generate enough entropy for key generation.
- Use an immense passphrase on your offline key. This is very easy, since you'll only actually need to use it to a) update any subkeys; b) sign anyone else's key; c) push your subkeys into your Yubikey. And the Yubikey will be protected by a) being a physical thing that b) must be plugged in, c) must be unlocked with a six digit PIN, and d) must be touched to actually do anything. Speaking of....
- Use the yubitouch utility to require touch. You can find that here. I use the mode where you have to touch the Yubikey for any use of all three subkeys, and fix it so that the setting can't be changed without re-loading the key material. This may be slightly paranoid, and I do wish it had a mode to "require a touch if it hasn't required a touch in the last N seconds". But I do like knowing that every use of my Yubikey requires me to physically touch it.
- Make a 'transfer' copy of your GNUPGHOME to load subkeys onto your Yubikey. The process of loading your subkeys into a Yubikey replaces the secret key material on disk with a pointer that says "this subkey is in the OpenPGP card with serial number...", and traps it in the Yubikey (by design). Doing the transfer from a throwaway copy means your real offline keyring keeps its subkeys intact.
- Use git to track your offline keys. This has saved me from at least one blunder, and it gives me a history of what I've been doing to the keys over time.
- Set your key expiration to a fixed date, and update it every few months. You can set a key to expire in, say, two years, and then three months later move the expiration date forward three months, etc. This has a couple of useful side effects. One, if for some reason you lose control of your key, it will at least go away sometime. Two, it forces you to touch your master key at least semi-occasionally. In my setup, I touch all three copies of my master key once every three months, so I'll be able to recover if one of the USB thumb drives decides to give up the ghost. Much better than leaving a drive in a drawer and five years later learning that it's unreadable.
- CHANGE BOTH PINS ... but after you've done all the card setup. Many of the things above will require you to enter the unlock and/or admin PIN, and it's much easier to type '123456' or '12345678' for all of this. Make a good PIN, don't make it something easily guessable, etc. etc. I used three 2d10 rolls to make mine.
- Entering the PINs too many times doesn't brick the card. We had some confusion about this at work, and thought we'd bricked a card. It turns out that entering the regular PIN wrong enough times just locks the card so that it won't do anything other than let you use the admin PIN to reset the regular PIN. And if you enter the admin PIN wrong three times, it just wipes the key material from the key and resets it to factory defaults. In fact, I'm fairly certain that the script
- Other info from Yubico
- All of this owes a great deal of debt to Alex Cabal's Generating the Perfect GPG Keypair, which got me thinking all about this in the first place a few years ago.
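As a sketch of the gpg-agent forwarding mentioned above: the socket paths vary by system and user (check yours with gpgconf --list-dirs agent-extra-socket locally and gpgconf --list-dirs agent-socket on the remote side), but the ssh side of it boils down to a RemoteForward of the agent's 'extra' socket. The paths below are examples, not my actual ones.

# ~/.ssh/config on the laptop
Host remotebox
    RemoteForward /home/me/.gnupg/S.gpg-agent /home/me/.gnupg/S.gpg-agent.extra

# in sshd_config on the remote server, so stale forwarded sockets get replaced
StreamLocalBindUnlink yes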
Posted at: 14:55 | category: /computers/gpg | Link
Sat, 09 Sep 2017
Restic Systems Backup Setup, Part 1
This is the first in what will undoubtedly be a series of posts on the new restic-based system backup setup.
As I detailed earlier this week, I've started playing around with using restic for backups. Traditionally, I've used a variant of the venerable rsync snapshots method to back up systems, wrapped in some python and make, of all things. Some slightly younger scripts slurp everything down to a machine at home so I've got at least another copy of everything.
In my previous post, I discussed my initial attempt at restic, simply replicating that home backup destination into Backblaze B2. That works, but it feels a bit brute-force, and there have been other things I've wanted to change about this for a while:
Replicating from colo to home takes an order of magnitude longer: Backing up the ten or so VMs I have on my colo machine takes about 10 minutes. Pulling that down to home takes 100 minutes or so. (I'll note here that the bulk of my 'large' data is in AFS; what I'm backing up on systems is primarily configuration files, logs, and some things that happen to live locally on a system).
Some of this is due to the fact that the replication traffic goes from Michigan to New York, while the initial backups all happen within the same physical host. But the larger part, I think, is due to the fact that in order to replicate my system backups, I have to preserve hardlinks. A bit of background here: the 'rsync snapshots' method works by using the --link-dest option to rsync. As I back up a system, if a file hasn't changed, rsync makes a hardlink to the corresponding file in the --link-dest directory. This doesn't use any additional space, and it's an easy way of keeping, say, fourteen days worth of backups while only using more space for the files that change from day to day. Most of my systems keep that many days of backups around.
Since I want to replicate all of those backups (and not, for example, only replicate the latest day's worth), but I want to keep the space savings that --link-dest gets me, I need to use the -H argument to the replicating rsync so it can scan all the files to be sent to find multiply hard-linked files. This takes a long, long time — so much so that the rsync man page warns about it:
Note that -a does not preserve hardlinks, because finding multiply-linked files is expensive. You must separately specify -H.
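For anyone who hasn't seen the pattern, a stripped-down sketch of the two steps looks roughly like this; the hostnames and paths are made up:

# nightly backup of one client: unchanged files become hardlinks into yesterday's tree
rsync -a --delete \
    --link-dest=/backups/host1/daily.1/ \
    root@host1:/ /backups/host1/daily.0/

# replicating all of those trees elsewhere needs -H, so hardlinks are
# found and preserved instead of being sent as full copies
rsync -aH /backups/ home:/backups/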
The backing-up or replicating rsync must run as root: Of course the rsync on the machine being backed up must run as root, since it needs to be able to read everything being backed up. But the destination side also has to run as root, because I want to preserve permissions and ownership, and only root can do that. I've long wished for an rsync 'server' that spoke the rsync protocol out one side and simply stored everything in some sort of object storage. Unfortunately, the rsync protocol is less a protocol and more akin to C structs shoved over a network, as far as I understand. And the protocol isn't really defined except as "here's some C that makes it go".
Restoring files is done entirely on the backup server: Because of the previous issue, I didn't want root on the client servers to ssh in as root on the backup server — I felt it was much safer and easier to isolate backups by having the backup server reach out to do backups. There's no ssh key on the client that can even get into the backup server. It's not a big issue, but if I need to restore a handful of scattered files, I've got to kinda stage them somewhere and then get them over to the client system. And because the backup server has a command-restricted ssh key on the client server, it takes some convoluted paths to get stuff moved around.
Adding additional replicas adds even more suck: Adding another replica means another 100 minutes somewhere pulling stuff down. And it also means a full-blown server, someplace where I can run rsync as root, and it's got to be some place I trust. Also, most of the really cheap storage to be found is in object storage, not disks (real or virtual) — part of what attracted me to restic in the first place.
When I started playing with restic, I saw a tool that could solve a bunch of those problems. Today I've been playing around with it, and here are my ideas so far.
Distinct restic repositories: One of the benefits of restic is the inherent deduplication it does within a repo. And if I were backing up a large number of systems, I might save something by only having one copy of, say, /etc/resolv.conf. But really, most of what I'm backing up is either small configuration files or log files, and these days the few tens of gigabytes of backups I have there aren't really worth deduplicating across systems. In addition, the largest consumer of backup space for me — stupidly unrotated log files that get a little bit appended to them every day — would still benefit from deduplication, even if it's only deduplicating within a single system.
More important than that, however, is that I want isolation between my systems. For example, the backups of my kerberos kdc are way more important than, say, web server logs. And I really don't want something that runs on a public-facing system to be able to see backups for an internal system. So, distinct repositories.
Use minio as the backend: My first thought when I was going to experiment was to use the sftp backend to restic. But to isolate things fully, I'd have to make a distinct user on the backup server to hold backups for each client, and that sounds like too damn much work.
Unrelated, I've been playing around with minio. Essentially, it's about the simplest thing you can get that exposes the 90% of S3 that you want. "Here's an ID and a KEY, list blobs, store blobs, get blobs, delete blobs". Because it's very simple, it doesn't offer multi-tenancy, so I will have to run a distinct minio for each client. That said, I think that should be easy enough, especially if I use something like runit to manage all of them.
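Restic talks to minio through its s3 backend, so I expect each client to end up with something like the following; the port, bucket name, and credentials here are placeholders:

# environment for one client's minio instance / restic repo
export AWS_ACCESS_KEY_ID=host1-access-key
export AWS_SECRET_ACCESS_KEY=host1-secret-key
export RESTIC_PASSWORD=...

# one-time repository setup, then backups against that minio instance
restic -r s3:http://backupserver:9001/host1 init
restic -r s3:http://backupserver:9001/host1 backup /etc /var/log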
Benefit from the combination of minio and restic for replication: Minio is very simplistic in how it stores objects: some/key/name is stored as the file /top/of/minio/storage/some/key/name. This has two benefits: first, because the minio storage directory is also a restic repository, I can just point a restic client at that directory, and as long as I have a repository password, I can see stuff there. Second, every file in the restic repository other than the top level 'config' file is named after the sha256 hash of the file as it exists on disk, and all files in a repository are immutable. This makes it trivial to copy a restic repository elsewhere.
While I'll likely start by simply using the b2 command line tool to sync things into B2, I think you can do it even faster. I haven't looked deeply, but my gut feeling is that the b2 sync command looks at the sha1 hash of the source file to decide if it needs to re-upload a file that already exists in B2. We don't need to do that at all; repository files are named after their sha256 hash, so if the files have the same name, they have the same contents [0]. So moving stuff around is incredibly trivial.
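Which means that replicating a repository to anything that speaks plain files can be as dumb as copying over whatever isn't already present by name, something like this sketch (paths and host are illustrative):

# everything except 'config' is immutable and named by its sha256,
# so anything already there under the same name can simply be skipped
rsync -a --ignore-existing /backups/systems/data/host1/ replica:/restic/host1/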
Future niceties. I've got a bunch of other ideas floating around in the back of my head for restic. One is a repository auditing tool: since nearly everything in restic is named for the sha256 hash of the file content, I'd like a tool I could run every day that would pull down, say, 1/30th of the files in the repository and run sha256 on them, to make sure there's no damage (there's a rough sketch of the idea below).
The second is some way of keeping a local cache of the restic metadata so operations that have to read all of it are much faster. Third, and related, a smarter tool for syncing repositories. For example, I'd love to keep three days of backups in my local repository, be able to shove new things into an S3 repository while keeping seven days there, and shove things into B2 and keep them there until my monthly bill finally makes me care.
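As for that auditing idea, a first approximation doesn't need anything restic-specific at all; over a minio-backed repository directory it could be as simple as the following sketch (the sample size and path are arbitrary):

# spot-check a random sample of pack files: each should hash to its own name
find /backups/systems/data/host1/data -type f | shuf -n 100 | while read -r f; do
    sum=$(sha256sum "$f" | cut -d' ' -f1)
    [ "$sum" = "$(basename "$f")" ] || echo "DAMAGED: $f"
done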
Anyways, this has been a brain dump of a few hours of experimentation, so I'll end this part here.
- Notes:
- [0]: Well, until sha256 is broken....
Posted at: 19:12 | category: /computers/backups/restic-systems-backups | Link
Mon, 04 Sep 2017
Techno Housekeeping
A long weekend (here in the US) combined with a few strategic days off, and I had five days off in a row. A few of those days I managed to get out of the house and down to a coffee shop, so I got a bit of work in and managed to wrap up a bunch of techno housekeeping.
First, with a new laptop and a fresh VM install of Debian 9, I've got all the components in place to reach my ideal PGP setup ‐ my day-to-day keys are on a Yubikey 4, ssh can now forward unix domain sockets, and gpg has well-defined socket locations for the agent that deals with keys. Any key operations on the remote VM tunnel back through ssh to the gpg agent running on my laptop, which passes them along to the Yubikey. PIN protected, touch required for operations, and the key material never leaves the Yubikey. This gives me a deeply warm and fuzzy feeling inside. In a year or so, when I build a new colocation box, my key material won't ever touch it.
The info for this is spread out in a few places; perhaps soon I'll put it all together, at least the parts that I do.
Attempting to straighten out the mess of cables under the TV at home caused me to plug the wrong power adapter back into the USB3 drive I have hanging off a NUC that I use as the secondary site for backups for the colo machine, which sent it into the afterlife. A spare drive and 24 hours later, I had all the material re-synced, but it gave me the gumption to start throwing together a plan to shove those backups into at least a third location. I've been doing backup stuff long enough in my career to definitely not trust stuff backed up to two different locations, and to cast a very wary eye on stuff not backed up to at least three different locations.
I'd been wanting to use the Backblaze B2 storage since I first heard about it. After fooling around with it, it's nowhere near as full featured as S3, which I've used a decent amount, but it works and you certainly can't beat the price. After coming across Filippo Valsorda's review of restic, circumstances aligned and I started shoving copies of my AFS volume dumps into B2, encrypted and tracked with restic. Things are slowly bubbling up, which I attribute to the fact that it's not the world's beefiest USB drive setup. After that's up, I'll send a copy of all my system backups there ‐ I've been using a venerable rsync backup script for over a decade now (I just checked the date in the script header). And, with a new laptop, I have a new drive on the way to use for Carbon Copy Cloner, but, owing to this new allegiance to the "at least three sites" mantra, I'll probably be shoving that into restic as well.
That said, I'm also increasingly coming to the opinion that if you use any cloud service, you should use at least two distinct ones. So, depending on what my B2 bill is like, I may end up shoving restic somewhere else as well, perhaps S3 shoved into Glacier.
Posted at: 21:46 | category: /random/2016a/09 | Link