Wed, 25 Jul 2018

Using gopass with mutt

I've been using gopass for a long time as my password manager — with my GnuPG and Yubikey setup, accessing my passwords on both my laptop and my colocated box is pretty transparently the same.

I randomly came across the fact that mutt will do backtick expansion in its configuration file. With that, I can keep my mutt IMAP password in gopass and have mutt fetch it with set imap_pass=`pass mutt_imap_pass`
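For reference, the relevant muttrc line looks something like this (the entry name mutt_imap_pass is from my setup, and pass here is gopass's pass-compatible alias — adjust both to yours):

```
# in ~/.muttrc — mutt runs the backticked command at startup and
# substitutes its output, so the password never lives in the file
set imap_pass=`pass mutt_imap_pass`
```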

Posted at: 08:34 | category: /computers/nifty | Link

Sat, 14 Jul 2018

Wrapping Consul Lock

I've recently installed a Consul cluster at home, mostly to act as an HA backing store for Vault. If you've been following along, I've also been moving to Restic for my system backups so, of course, I want snapshots of Consul to end up there.

But this isn't a post about that — when I've got it running well and cleaned up, I'll post it and talk about it. What I want to talk about is the way that I'm wrapping the consul lock command.

The gist of the script is:

  1. Run consul snapshot save... and save the output to a local file
  2. Archive that file away
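Inside the locked section, those two steps might look something like this sketch (the file name, restic as the archive step, and the cleanup are my stand-ins, not the real script):

```
# runs under the consul lock, so only one server does this at a time
snapfile="/tmp/consul-$(date +%Y%m%d%H%M%S).snap"
consul snapshot save "$snapfile"   # step 1: save the snapshot locally
restic backup "$snapfile"          # step 2: archive it (stand-in command)
rm -f "$snapfile"
```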

I want that running on all three Consul servers, but I only ever want the snapshot to happen on a single server at a time. That way, if I ever have to take a Consul server down for maintenance, snapshots will still happen within the cluster. This is a common practice, and it's really easy with consul lock. The same pattern is useful any time a cronjob or task can run on more than one server but should only run on one at any given time: rather than running it on a single server and having to remember to move it elsewhere in the event that server goes down, or having to install some sort of distributed queueing system, you can use this.

If you run a command under consul lock, only one instance of it will run at a time, which is the basis of the technique I'm using. But I don't want to put the consul lock logic in my crontab; I want it embedded in the script. Here's a simple way of doing that:



function inner {
    # do stuff inside the lock
}

# parse arguments here, getopt, etc.

if [ "$1" == "inner" ]; then
    shift
    inner "$@"
elif [ "$1" == ... ]; then
    # other subcommands
    :
else
    consul lock "$KVPREFIX" "$0" $ARGUMENTS $GO $HERE inner
fi

If you call your script normally, it will use consul lock to call itself with the argument inner, which does the work that should only happen on a single host at a time.
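The crontab entry can then be identical on all three servers; something like this hypothetical cron.d fragment (the path and schedule are made up):

```
# /etc/cron.d/consul-snapshot — installed on all three Consul servers;
# the consul lock inside the script ensures only one actually runs
17 * * * *  root  /usr/local/bin/consul-snapshot-backup
```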

Posted at: 23:03 | category: /computers/consul | Link

Sun, 18 Feb 2018

Thoughts about On-Call

This month there have been a couple of interesting discussions about on-call rotations in the tech industry. The first was started by Charity Majors, who sparked a thread on Twitter:

All this heated talk about on call is certainly revealing the particular pathologies of where those engineers work. Listen:

1) engineering is about building *and maintaining* services
2) on call should not be life-impacting
3) services are *better* when feedback loops are short

— Charity Majors (@mipsytipsy) February 10, 2018

A couple days later John Barton followed up with an article that I really enjoyed, and pretty much whole-heartedly endorse. I had a few thoughts from both of these, and wanted to talk about them here.

"But that's just an incentive for engineers to weasel extra pay by building broken systems": I think this falls apart in several ways. First, that extra pay doesn't just appear with no additional consequences — the engineer on-call still has to actually fix the problem, wake up at odd hours, be bothered when they'd much rather be bowling or watching a movie or reading a book or sleeping, etc. Second, if this actually works at your company, your management is broken. Period. That's the whole point of it, to put an explicit material cost on this additional duty. If your management tolerates abuse of this pay, they either explicitly consider it part of the cost of doing business, or they're not paying close enough attention, and both of those cases are entirely on them.

To everyone that argues that an engineer's pay covers this, I'd counter by asking "Okay, how much of that pay represents the on-call expectation?" I'm guessing many places wouldn't be able to do that. And unlike many things an employer pays for that are fuzzy, hard-to-define criteria, this one is easy, all it takes is a stop-watch and a calculator to count up how many minutes are spent responding to incidents. Is what you're being paid for it worth it? As John points out, many other industries with highly trained professionals pay on-call differentials, and tech shouldn't be any different.

I'd also add a guideline to John's list: if someone gets a page, the next day someone covers for them for 24 hours. While this isn't official policy where I work, it's my own unofficial policy to offer to cover for my co-workers when they have a particularly bad on-call day. Someone who is woken at 3am, even if they can go back to sleep ten minutes later, doesn't get as good a rest and isn't as effective the next day. Having that followed by another interrupted sleep the next night both makes the problem worse and also means that the most critical person on your team, the one responding to an emergency, is in less than peak condition. Don't let people shrug this off with an "I'm fine" — there's a large body of sleep research that disagrees with them.

Like many things in tech that I think are bad, it's only going to change if expectations start changing, and expectations aren't going to change unless we start prodding them in the right direction. I think these kinds of questions, asking for the kinds of policies John advocates, need to become more standard industry-wide. If my situation warrants, I plan on making this part of the questions I ask any potential employer, and if your situation warrants, I'd ask you to do the same.

Posted at: 14:39 | category: /tech | Link

Sun, 21 Jan 2018

Disabling Yubikey 4 OTP

Since I can never remember this:

I don't make use of the Yubikey OTP mode, so I don't want what a former co-worker called "yubidroppings" when I accidentally brush my key.

Short answer: get ykpersonalize and run ./ykpersonalize -m 5, since I only want U2F and CCID modes enabled. Tell it yes twice.

Posted at: 12:10 | category: /computers/yubikey | Link

Sat, 20 Jan 2018

Restic Systems Backup Setup, Part 4.5 - Why not just rclone

This is Part 4.5 of my series on building a restic-based system backup setup. The rest of the articles can be found here.

.@thomaskula nice article! Did you consider just running rclone in a loop?

— restic (@resticbackup) January 15, 2018

After I posted part 4 of my restic backup series, @resticbackup asked the above question, and I thought trying to answer it would be a good intermediate article.

As a brief background, rclone describes itself as "rsync for cloud storage". It can talk to a rather comprehensive number of storage providers as well as local storage, and can perform operations as simple as mkdir, cp and ls and ones as complicated as syncing between two different storage providers. It, like rsync, is a useful tool in your kit.

So why not just run rclone in a loop? Actually, that might not be a bad idea, and it's certainly a simple one. Pick a loop sleep that matches your replication needs, put some error checking in, and fire away. If I were going to do this, I'd likely use rclone copy rather than rclone sync. copy will copy files from the source to the destination, but will not delete any files on the destination which do not exist on the source. sync, on the other hand, will make the source and destination look exactly the same.
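If you did go the loop route, it could be as simple as this sketch (the repository path, remote name, and five-minute interval are assumptions of mine, not anything from a real setup):

```
#!/bin/sh
# naive replication loop: copy new repo files to the remote, never delete
while true; do
    rclone copy /backups/systems/myhost/backups remote:restic-replica \
        || echo "rclone copy failed; will retry next interval" >&2
    sleep 300
done
```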

My preference for copy over sync is twofold. First, I like the idea of having different retention policies at different repositories. For example, some of the storage provider options are so inexpensive, for my scale of storage needs, that I basically treat them as "just shove things in there forever, I don't care", or, at least, care about once a year. On the other hand, local fast storage is expensive enough that perhaps I can only afford to keep, say, a week or two of backups around for all of my systems. By treating the repositories as distinct, and with the append-only nature of restic, I can do that: keeping what I'm likely to need for most restore operations at hand, and keeping longer-term, much less likely to be needed data off some place where it's harder to access but cheaper to keep.

The second reason for treating the repositories separately is that it helps guard against "oh shit!" moments: if you are syncing every five minutes and you accidentally delete some data you need, you've got a narrow window to realize that and stop the sync. At some point in your life, you will do this — I remarked once that "[s]ome people think I'm smart, but that's not it. I just remember exactly what each of those bite marks on my ass means."

That all said, I'm going to keep using the mechanism I outlined in the last article, of firing off a new sync job every time a new snapshot appears. There's a few reasons for this. First, it's there, and it's working. Barring some overriding need to change this setup, I don't plan on exerting energy to change it — for now.

Second, there is some amount of overhead cost here every time I do a sync. My goal is that data starts being synced within a couple minutes of a new snapshot being created. I'm still, however, mostly doing the one-backup-a-day-late-at-night model (at least for now). With that, I'll actually have work to do less than one-tenth of one percent of the time, which just feels off. I'll admit, of course, that's just a gut feeling. In addition, even if I'm not copying data, building up a list of what I have locally and, more importantly, what's at the remote repository, has some cost. All of the storage providers charge something for operations like LIST, etc. That said, honestly, I haven't run the math on it and the charge here is almost certainly one or two epsilons within nothing, so perhaps this isn't much of a reason to care.

The two important bits in conclusion: first, I have something working, so I'm going to keep using it until it hurts to do so, which, honestly, is a good chunk of the reason I do many things. We'll be fancy and call it being "pragmatic". Second, your needs and costs and criteria are certainly different from mine, and what's best for you requires a solid understanding of those things — one size certainly doesn't fit all.

Posted at: 18:32 | category: /computers/backups/restic-systems-backups | Link

Mon, 15 Jan 2018

Restic Systems Backup Setup, Part 4 - Replication and Runsvdir

This is Part 4 of my series on building a restic-based system backup setup. The rest of the articles can be found here.


A goal from the start of this project has been replicating backup data to multiple locations. A long personal and professional history of dealing with backups leads me to the mantra that it isn't backed up until it's backed up to three different locations. Restic has several features which make this easy. First, backend storage (to a first approximation) is treated as append-only — a blob, once stored, is never modified, although it may be deleted as part of expiring snapshots. Second, everything is encrypted, so you can feel as safe spreading your data to any number of cost-effective storage providers as you trust restic's encryption setup (which I generally trust).

In general, I want the client systems to know only about one service, the server we're backing up to. Everything else, the replication to other storage, should happen on the backup server. Also, we want new snapshots to get replicated relatively soon after they are created. If I decide to make an arbitrary snapshot for whatever reason, I don't want to have to remember to go replicate it, or wait until "the daily replication job".

These criteria lend themselves to something which watches for new snapshots on the backup server. Restic makes this easy, as one of the very last things it does after a successful snapshot is write a new snapshot object. There's one directory to watch, and when a new object appears there, replicate. How to do that, though?

Minio does contain a notification system, and I strongly considered that for a while (to the point of submitting a patch to some incorrect documentation around that). But that offered two complications. First, setting up notification involves both changing the minio configuration file and also submitting some commands to tell it what you want notifications for, which complicates setup. Second, I quickly fell down a rabbit hole of building a RESTful notification service. This isn't impossible to overcome, but it was blocking the real work I wanted to do (more on that later).

My next consideration was using the Linux kernel inotify facility to watch for events in the snapshot directory, but that ran into roughly the same problems as the previous solution, and also added some Linuxisms that I didn't want to add at this point. Of course, that said, I do freely use bash scripts, with some bashisms in them, instead of a strictly POSIX-compliant shell, but, frankly, I'm not all that interested in running this on AIX. So, take this all with an appropriate grain of salt.

The solution I finally settled on is backup-syncd, the not-as-elegant but still useful setup. This simply runs in a loop, sleeping (by default for a minute) and then looking at the files in the snapshot directory. If the contents have changed, it fires off a script to do whatever syncing you want to do. There's some extra stuff to log, be robust, and pass the sync script some idea of what's changed in case it wants to use that, but otherwise it's pretty simple.
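The core change-detection idea can be sketched in a few lines of shell — note this is my illustrative sketch, not the real backup-syncd, and the directory and echo stand in for the actual snapshot directory and sync script:

```shell
#!/bin/sh
# Compare successive listings of the snapshot directory; when they
# differ, backup-syncd would fire the sync script (we just echo here).
# In the real daemon this runs inside a "while true; sleep 60" loop.
snapdir=$(mktemp -d)     # stand-in for the repo's snapshots/ directory
prevfile=$(mktemp)       # remembers the last listing we saw

check_snapshots() {
    cur=$(ls -1 "$snapdir" | sort)
    if [ "$cur" != "$(cat "$prevfile")" ]; then
        printf '%s' "$cur" > "$prevfile"
        echo "changed"   # here you would run the sync script instead
    else
        echo "unchanged"
    fi
}

check_snapshots                  # empty directory is the baseline
touch "$snapdir/snapshot-aaaa"   # simulate restic writing a snapshot object
check_snapshots                  # the new file is detected
check_snapshots                  # quiet again until the next snapshot
```

Tracking the previous listing in a file rather than a variable keeps the state visible across invocations, which is handy when debugging.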

A decent part of systems engineering is fitting the solution you make to the problem you actually need to solve. I'm not expecting to back up thousands of systems to one backup server, so the overhead of a watcher script for each client waking up every minute to typically go back to sleep isn't really a consideration. And yes, depending on timing it could be almost two minutes before a system starts replicating, but that's close enough that I don't care. And while I do want to eventually build that RESTful syncing service to work with Minio's notification system, that's a desire to understand building those services robustly, and shouldn't get in the way of the fact that right now, I just want backups to work.

That said, another decent part of systems engineering is the ability to make that solution not fuck you over in the future. You have to be able to recognize that what fits now may not fit in the future, and while what you're doing now may not scale to that next level, it at least won't be a huge barrier to moving to that next level. In this case, it's easy enough to swap out backup-syncd with something more sophisticated, should it be necessary. You could go another way, as well — for a low-priority client you could certainly configure backup-syncd to only wake up every few hours, or even forgo it completely in lieu of a classic cron-every-night solution, should the situation warrant.


Now that we have more than one service running for each client, I've updated the setup to use a per-client runsvdir, which manages all the services a particular client needs to do backups. Here we have a top-level runsvdir, called by the systemd unit file, which is responsible for running the services for all clients. In turn, that top-level runsvdir runs one runsvdir for each client, which in turn runs minio and backup-syncd for that client. The idea here being that I want to treat each client as a single unit, and be able to turn it on and off at will.

There's a small issue with the way runsv manages services. To cleanly stop runsvdir and everything it's running, you want to send it a SIGHUP. The way we start a client runsvdir is to make an appropriate symlink, which does what we expect. But when we remove that symlink, the supervising runsvdir sends the client runsvdir a SIGTERM signal, which makes the client runsvdir go away without touching the child runsv processes it started. You can customize what happens to the client runsvdir process, however, and I'll be doing that in a future phase of this project.

Future wants

I'll end here by outlining some future ideas and wants for this setup:

Posted at: 12:29 | category: /computers/backups/restic-systems-backups | Link

Updates and Engagement

The standard end-of-the-year party and eating season conspired to keep me from much creative work here, but I've been off work this past week and managed to wrap up a new issue of Late Night Thinking and do some work on my restic systems backup setup. Both will appear here shortly.

Also, if you're one of the small number of people who haven't found this out from any number of places, on 1 November 2016 I got engaged to E, my boyfriend of two years. Wedding is this coming November.

Posted at: 11:36 | category: /random/2016b/01 | Link

Thu, 02 Nov 2017

Restic Systems Backup Setup, Part 3 - Setting up a client

This is Part 3 of my series on building a restic-based system backup setup. The rest of the articles can be found here.

We've got enough things set up that we can start backing up a client system. We'll do this in two sections: setting up the server side, and setting up the client side.

Setting up the backup server side

Using 'new-restic-server' to set up the server

You can find new-restic-server in the git repo.

/backups/bin/new-restic-server -H -p 9002

will set up all of the per-client pieces on the backup server: making a minio config and path for storage, setting up a runsv directory to run the minio server, and creating an access key and secret for the minio server. You will have to make sure the port you picked (in this example, 9002) is distinct between all clients backing up to this service.

Activating the minio server

Activating the minio server is a distinct step, but an easy one with our runsvdir setup:

ln -s /backups/systems/ /backups/systems/conf/runit/

A few seconds later, runsvdir will detect the new symlink and start the minio process.

Setting up the client side

Installing the binaries

I install these all in /usr/local/bin. You'll need to get a recent copy of restic, as well as the daily-restic-backups, restic-unroll and restic-wrapper scripts from the client directory of the git repo (handily linked at the end of this article).


First, make an /etc/restic configuration directory: sudo install -o root -g root -m 700 -d /etc/restic

Create the environ file

/etc/restic/environ contains a series of environment variables that the restic client will use to identify the repo to backup to, as well as the access keys for it. It looks like the following:

export AWS_ACCESS_KEY_ID=key goes here
export AWS_SECRET_ACCESS_KEY=secret key goes here
export RESTIC_REPOSITORY=s3:http://your.backup.server:9002/backups
export RESTIC_PASSWORD_FILE=/etc/restic/repo-password

Most of these are self-explanatory. The RESTIC_REPOSITORY is marked as s3 because that's what minio looks like to it. It ends in /backups because you have to put things in a "bucket". RESTIC_PASSWORD_FILE causes restic to read the repository password from that file, instead of prompting for one.

Create include and exclude files

Now the hardest part: deciding what to back up and what to exclude. Everything will be backed up from /, so use full paths in the include and exclude files, which go in /etc/restic/include-files and /etc/restic/exclude-files respectively.
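As a hypothetical example of what those two files might contain (these paths are illustrative, not my actual lists):

```
# /etc/restic/include-files — one full path per line
/etc
/home
/usr/local
/var

# /etc/restic/exclude-files — also full paths, pruned from the above
/var/cache
/var/tmp
```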

Configure repo password

sudo /bin/sh -c 'pwgen 32 1 > /etc/restic/repo-password'

Here, we're using the pwgen command to generate a single, 32-character-long password. YOU MUST NOT LOSE THIS. This is the encryption key used to encrypt everything in the repo, and without it, you won't be able to recover anything. I store mine in a GnuPG-encrypted git repo that I back up outside of my restic setup.

Initialize the repo

sudo /usr/local/bin/restic-wrapper init

will initialize the repo. It will spit out something like:

created restic backend bcae9b3f97 at s3: Please note that knowledge of your password is required to access the repository. Losing your password means that your data is irrecoverably lost.

Set up a cron job to do daily backups

backups-cron.d contains a useful cron.d snippet to perform daily backups, modify to your taste.


We now have a client system which backs up daily to a backup server storing data in minio. Future articles will talk about automated replication to additional repositories for redundancy.

As a reminder, you can find the canonical repository of all my utility scripts in this series here. You can also find them at github.

Posted at: 10:40 | category: /computers/backups/restic-systems-backups | Link

Sat, 30 Sep 2017

Restic Systems Backup Setup, Part 2.5 - dealing with 'Unable to backup/restore files/dirs with same name'

This is Part '2.5' of my series on building a restic-based system backup setup. The rest of the articles can be found here.

You should be reading Part 3 here, but in the development of that, I ran into this restic bug: Unable to backup/restore files/dirs with same name.

Unfortunately, for historic reasons (buried in some of the oldest code in restic), only the last component of a path being backed up is reflected in the repo. For example, in my case, when I wanted to back up both /local and /usr/local, they would both show up as local at the top of the repo, a very confusing state. Later versions of restic would rename things so there was a local-0 and a local-1, etc., but it's still very confusing.

The primary restic developer is working on resolving this, as many other people have run into it, and it is expected to be fixed in restic 0.8. Until then, the suggestion is to tell restic simply to back up /, and exclude everything you don't want to back up. A workable enough solution, but I still want something where I can think in terms of backing up what I want, and something else figures out how to do the exclusion. That way, I can just add or remove things from my list and I don't have to re-figure what to exclude. Or, things can come and go that I don't care about, and they won't accidentally get backed up.

A few hours of work and experimentation, and I had restic-unroll, which does just that. In the same directory in that git repo is an example bash script you might wrap things in to do a daily system backup.

As a reminder, you can find the canonical repository of all my utility scripts in this series here.

Posted at: 17:58 | category: /computers/backups/restic-systems-backups | Link

Sat, 16 Sep 2017

Restic Systems Backup Setup, Part 2 - Running minio under runit under systemd

Part 2 of my series on building a restic-based system backup setup. Part 1 can be found here.

As described in Part 1, my general strategy is to have a centralized backup server at a particular location, running an instance of minio for each server being backed up. In essence, I'm going to want to be running N minio server --config-dir=/... instances, and I want a simple way to add and start instances and keep them running. In other words, I want a simple init service.

Fortunately, if you're looking for a simple init service, you need look no further than runit. It's an incredibly tiny init-like system, composed of some simple tools: runsv to run a service, keep it up, and optionally log its stdout somewhere; sv to control that service by simply talking to a socket; and runsvdir to keep a collection of runsv instances going. Defining a service is simple: in a directory there is a run file, which is used by runsv to start the service. If you want to log, create a log subdirectory with its own run file — that file is executed and given the stdout of the main process as its input (the included svlogd command is a simple program for handling logs). To run a bunch of runsv instances, put them (or symlinks to them) all in a single directory, and point runsvdir at it. As a bonus, runsvdir monitors that directory, and if a runsv directory is created or goes away, runsvdir does the right thing.
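Concretely, a per-client service directory might look like this sketch (the host name and paths are made up; the minio invocation follows the --config-dir pattern above, and both run files need to be executable):

```
# /backups/systems/conf/runsv/myhost/run
#!/bin/sh
exec minio server --config-dir=/backups/systems/myhost/conf \
    /backups/systems/myhost/data

# /backups/systems/conf/runsv/myhost/log/run
#!/bin/sh
exec svlogd -tt .
```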

It's an incredibly useful set of commands, and allows you to manage processes fairly easily. In this case, every time I add a machine to this backup scheme, I make an appropriate runsv dir with the correct minio incantation in the run file, and just symlink it into the runsvdir directory. We've been using runit at work for quite a while now in containers, and it's an awesome tool.

My newly-minted backup server is running Debian Stretch, which uses systemd as its init system. Creating systemd unit files is still something I have to think about hard whenever I do it, so here's the one I use for runit:

[Unit]
Description=Backup Service Minio Master runsvdir

[Service]
User=backups
Group=backups
ExecStart=/usr/bin/runsvdir -P /backups/systems/conf/runit/

[Install]
WantedBy=multi-user.target


Here, systemd starts runsvdir, pointing it at my top-level directory of runsv directories. It runs it as the backups user and group, and makes it something that starts up once the system reaches "multi-user mode".

Part 3 is coming, where I'll document backing up my first system.

Posted at: 18:41 | category: /computers/backups/restic-systems-backups | Link