Sun, 21 Jan 2018
Disabling Yubikey 4 OTP
Since I can never remember this:
I don't make use of the Yubikey OTP mode, so I don't want what a former co-worker called "yubidroppings" when I accidentally brush my key.
Short answer: get ykpersonalize and run ./ykpersonalize -m 5, since I only want U2F and CCID modes enabled. Tell it yes twice.
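Spelled out, since the whole point of this post is that I can't remember it:

    # Set the key to mode 5 (U2F and CCID only; OTP disabled).
    # It asks for confirmation twice; answer "yes" both times.
    ./ykpersonalize -m 5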
Posted at: 12:10 | category: /computers/yubikey | Link
Sat, 20 Jan 2018
Restic Systems Backup Setup, Part 4.5 - Why not just rclone
This is Part 4.5 of my series on building a restic-based system backup setup. The rest of the articles can be found here.
.@thomaskula nice article! Did you consider just running rclone in a loop?
— restic (@resticbackup) January 15, 2018
After I posted part 4 of my restic backup series, @resticbackup asked the above question, and I thought trying to answer it would be a good intermediate article.
As a brief background, rclone describes itself as "rsync for cloud storage". It can talk to a rather comprehensive number of storage providers as well as local storage, and can perform operations as simple as mkdir, cp, and ls and ones as complicated as syncing between two different storage providers. It, like rsync, is a useful tool in your kit.
So why not just run rclone in a loop? Actually, that might not be a bad idea, and it's certainly a simple one. Pick a loop sleep that matches your replication needs, put some error checking in, and fire away. If I were going to do this, I'd likely use rclone copy rather than rclone sync. copy will copy files from the source to the destination, but will not delete any files on the destination which do not exist on the source. sync, on the other hand, will make the source and destination look exactly the same.
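A minimal sketch of what that loop might look like (the repository path, remote name, and sleep interval are all placeholders, not anything from my actual setup):

    #!/usr/bin/env bash
    # Hypothetical "rclone in a loop" replicator.
    REPO=/srv/restic/client1        # local restic repository
    REMOTE=remote:backups/client1   # an rclone-configured remote

    while true; do
        # copy adds/updates files at the destination but never deletes
        # there; sync would make the destination match exactly.
        rclone copy "$REPO" "$REMOTE" || echo "rclone copy failed" >&2
        sleep 300                   # pick a sleep to match your needs
    done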
My preference for copy over sync is twofold. First, I like the idea of having different retention policies at different repositories. For example, some of the storage provider options are so inexpensive, for my scale of storage needs, that I basically treat them as "just shove things in there forever, I don't care", or, at least, care about once a year. On the other hand, local fast storage is so much more expensive that perhaps I can only afford to keep, say, a week or two of backups around for all of my systems. By treating the repositories as distinct, and with the append-only nature of restic, I can do that, keeping what I'm likely to need for most restore operations at hand, and keeping longer-term, much less likely to be needed data off some place where it's harder to access but cheaper to keep.
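As an illustration of that split (my own sketch, with placeholder repository locations; restic's forget command is what does the expiring):

    # Local, fast, expensive repository: keep about two weeks of dailies.
    restic -r /srv/restic/client1 forget --keep-daily 14 --prune

    # Cheap remote repository: keep years of history, think about it yearly.
    restic -r "$CHEAP_REMOTE_REPO" forget --keep-yearly 10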
The second reason for treating the repositories separately is that it helps guard against "oh shit!" moments: if you are syncing every five minutes and you accidentally delete some data you need, you've got a narrow window to realize that and stop the sync. At some point in your life, you will do this — I remarked once that "[s]ome people think I'm smart, but that's not it. I just remember exactly what each of those bite marks on my ass means."
That all said, I'm going to keep using the mechanism I outlined in the last article, of firing off a new sync job every time a new snapshot appears. There are a few reasons for this. First, it's there, and it's working. Barring some overriding need to change this setup, I don't plan on exerting energy to change it — for now.
Second, there is some amount of overhead cost here every time I do a sync. My goal is that data starts being synced within a couple minutes of a new snapshot being created. I'm still, however, mostly doing the one-backup-a-day-late-at-night model (at least for now). With that, I'll actually have work to do less than one-tenth of one percent of the time (with a one-minute wakeup, that's one useful pass out of the 1,440 in a day, about 0.07%), which just feels off. I'll admit, of course, that's just a gut feeling. In addition, even if I'm not copying data, building up a list of what I have locally and, more importantly, what's at the remote repository, has some cost. All of the storage providers charge something for operations like LIST, etc. That said, honestly, I haven't run the math on it and the charge here is almost certainly one or two epsilons within nothing, so perhaps this isn't much of a reason to care.
The two important bits in conclusion: first, I have something working, so I'm going to keep using it until it hurts to do so, which, honestly, is a good chunk of the reason I do many things. We'll be fancy and call it being "pragmatic". Second, your needs and costs and criteria are certainly different from mine, and what's best for you requires a solid understanding of those things — one size certainly doesn't fit all.
Posted at: 18:32 | category: /computers/backups/restic-systems-backups | Link
Mon, 15 Jan 2018
Restic Systems Backup Setup, Part 4 - Replication and Runsvdir
This is Part 4 of my series on building a restic-based system backup setup. The rest of the articles can be found here.
Replication
A goal from the start of this project has been replicating backup data to multiple locations. A long personal and professional history of dealing with backups leads me to the mantra that it isn't backed up until it's backed up to three different locations. Restic has several features which make this easy. First, backend storage (to a first approximation) is treated as append-only: a blob, once stored, is never touched, although it may be deleted as part of expiring snapshots. Second, everything is encrypted, so you can feel as safe spreading your data across any number of cost-effective storage providers as you trust restic's encryption setup (which I generally trust).
In general, I want the client systems to know about only one service, the server we're backing up to. Everything else, the replication to other storage, should happen on the backup server. Also, we want new snapshots to get replicated relatively soon after they are created. If I decide to make an arbitrary snapshot for whatever reason, I don't want to have to remember to go replicate it, or wait until "the daily replication job".
These criteria lend themselves to something which watches for new snapshots on the backup server. Restic makes this easy, as one of the very last things it does after a successful snapshot is make a new snapshot object. There's one directory to watch, and when a new object appears there, replicate.
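With restic's repository layout, that's the snapshots/ directory: one small file per snapshot, named by the snapshot's ID. An illustrative listing (IDs truncated):

    $ ls /srv/restic/client1/snapshots
    5c3a6bd1…  9f02e7aa…  c41d88f0…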
How to do that, though?
Minio does contain a notification system, and I strongly considered that for a while (to the point of submitting a patch to some incorrect documentation around it). But that presented two complications. First, setting up notifications involves both changing the minio configuration file and also submitting some commands to tell it what you want notifications for, which complicates setup. Second, I quickly fell down a rabbit hole of building a RESTful notification service. None of this is impossible to overcome, but it was blocking the real work I wanted to do (more on that later).
My next consideration was using the Linux kernel inotify facility to watch for events in the snapshot directory, but that ran into roughly the same problems as the previous solution, and also added some Linuxisms that I didn't want to take on at this point. Of course, that said, I do freely use bash scripts, with some bashisms in them, instead of a strictly POSIX-compliant shell, but, frankly, I'm not all that interested in running this on AIX. So, take this all with an appropriate grain of salt.
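For completeness, the inotify route would have looked something like this, using inotifywait from inotify-tools (the sync script name and paths are placeholders):

    # Linux-only sketch I decided against: block until a new snapshot
    # file appears, then fire the sync script with its name.
    inotifywait -m -e create --format '%f' /srv/restic/client1/snapshots |
    while read -r snap; do
        /usr/local/bin/sync-client1 "$snap"
    done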
The solution I finally settled on is backup-syncd, the not-as-elegant but still useful setup. It simply runs in a loop, sleeping (by default for a minute) and then looking at the files in the snapshot directory. If the contents have changed, it fires off a script to do whatever syncing you want to do. There's some extra stuff to log, be robust, and pass the sync script some idea of what's changed in case it wants to use that, but otherwise it's pretty simple.
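Stripped of the logging and robustness, the core of it is about this much (a sketch, not the actual backup-syncd; the paths and sync script are placeholders):

    #!/usr/bin/env bash
    SNAPDIR=/srv/restic/client1/snapshots
    SYNC=/usr/local/bin/sync-client1
    INTERVAL=60

    prev=""
    while true; do
        # Fingerprint the directory listing; any new snapshot changes it.
        cur=$(ls -1 "$SNAPDIR" | sha256sum)
        if [ "$cur" != "$prev" ]; then
            # Only remember the new state on success, so a failed sync
            # gets retried on the next pass.
            "$SYNC" && prev="$cur"
        fi
        sleep "$INTERVAL"
    done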
A decent part of systems engineering is fitting the solution you make to the problem you actually need to solve. I'm not expecting to back up thousands of systems to one backup server, so the overhead of a watcher script for each client waking up every minute to typically go back to sleep isn't really a consideration. And yes, depending on timing it could be almost two minutes before a system starts replicating, but that's close enough that I don't care. And while I do want to eventually build that RESTful syncing service to work with Minio's notification system, that's a desire to understand building those services robustly, and shouldn't get in the way of the fact that right now, I just want backups to work.
That said, another decent part of systems engineering is the ability to make that solution not fuck you over in the future. You have to be able to recognize that what fits now may not fit in the future, and that while what you're doing now may not scale to that next level, it at least won't be a huge barrier to moving to that next level. In this case, it's easy enough to swap out backup-syncd with something more sophisticated, should it be necessary. You could go another way, as well — for a low-priority client you could certainly configure backup-syncd to only wake up every few hours, or even forgo it completely in lieu of a classic cron-every-night solution, should the situation warrant.
Runsvdir
Now that we have more than one service running for each client, I've updated the setup to use a per-client runsvdir, which manages all the services a particular client needs to do backups. Here we have a top-level runsvdir, called by the systemd unit file, which is responsible for running the services for all clients. In turn, that top-level runsvdir runs one runsvdir for each client, which in turn runs minio and backup-syncd for that client. The idea here is that I want to treat each client as a single unit, and be able to turn it on and off at will.
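A sketch of the layout (the directory names here are my invention, not the actual paths):

    /srv/backup-sv/                  # scanned by the top-level runsvdir
        client1/
            run                      # exec runsvdir /srv/backup-clients/client1
    /srv/backup-clients/client1/     # scanned by client1's runsvdir
        minio/run                    # the minio instance for client1
        backup-syncd/run             # backup-syncd for client1

Enabling or disabling a client is then just adding or removing its directory (or, as below, a symlink to it) in the top-level tree.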
There's a small issue with the way runsv manages services. To cleanly stop runsvdir and everything it's running, you want to send it a SIGHUP. The way we start a client runsvdir is to make an appropriate symlink, which does what we expect. But when we remove that symlink, the supervising runsvdir sends the client runsvdir a SIGTERM, which makes the client runsvdir go away without touching the child runsv processes it started. You can customize what happens to the client runsvdir process, however, and I'll be doing that in a future phase of this project.
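The customization hook, as I understand runsv(8), is the service's ./control directory: an executable ./control/t (or ./control/d, depending on which command runsv is acting on) runs instead of runsv delivering the TERM itself, and exiting 0 tells runsv not to send the signal. An untested sketch of the idea:

    #!/bin/sh
    # ./control/t for the client-runsvdir service: send SIGHUP instead,
    # so the client runsvdir tears down its runsv children before exiting.
    kill -HUP "$(cat supervise/pid)"
    exit 0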
Future wants
I'll end here by outlining some future ideas and wants for this setup:
- Monitoring and sanity checking: I want some sort of audit of every storage backend for a client, to make sure that the snapshots I want are where I want them
- Restoration checking: A wise person once said that nobody wants a backup system, everybody wants a restoration system. Something which restores some set of files and does some sanity checking would be good (see the sketch after this list)
- Metamanagement: Instead of making symlinks and poking around manually, I want scripts where I can enable and disable a particular client, get the status of a particular client's backups, etc.
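On the restoration-checking front, even something as crude as this sketch would catch gross breakage (repository and file choices are placeholders):

    # Restore one known file from the latest snapshot and compare it to
    # the live copy; a first-pass sanity check, not a real audit.
    restic -r /srv/restic/client1 restore latest \
        --target /tmp/restore-check --include /etc/hostname
    cmp /etc/hostname /tmp/restore-check/etc/hostname && echo "restore OK"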
Posted at: 12:29 | category: /computers/backups/restic-systems-backups | Link
Updates and Engagement
The standard end-of-the-year party and eating season conspired to keep me from much creative work here, but I've been off work this past week and managed to wrap up a new issue of Late Night Thinking and do some work on my restic systems backup setup. Both will appear here shortly.
Also, if you're one of the small number of people who haven't found this out from any number of places, on 1 November 2016A I got engaged to E, my boyfriend of two years. Wedding is this coming November.
Posted at: 11:36 | category: /random/2016b/01 | Link