Tue, 26 Nov 2019

Cooking Pork Tenderloin on a Loaf of Bread

Inspired by this recent Chef John video.

Prep a whole pork tenderloin as you normally would (I coat with a thin layer of vegetable oil and liberally apply kosher salt). Take a loaf of Italian bread (that's what my local store calls it, it's a long loaf of white bread with sesame seeds on the top), slice in half lengthwise, and coat with butter — I use one stick (1/2 cup) for a whole loaf split in two. Sprinkle on what I consider "stuffing herbs": rosemary, thyme, sage.

Put the tenderloin on top of the buttered side of the bread and press it down a bit to make it stable. Place in a hot oven and roast as you normally would. I put the whole thing on a sheet pan with a rack in it to keep the bread elevated.

When the pork comes out, the bottom side will be a bit pale. Remove the bread and reserve. Put the tenderloin back on the rack, pale side up, and put under a broiler until the top is browned. Remove and let rest before slicing.

While that's happening, roughly chop up the bread, which has absorbed all of the tenderloin cooking juices and is full of butter and herbs, and put in a bowl or baking dish. Add a bit of stock (pork or chicken) to moisten slightly until it looks like dressing. You could serve as is, or put it in a baking dish and pop it under the broiler to crisp up the top a bit.

And as always, enjoy.

Posted at: 10:56 | category: /food/2016C | Link

Sat, 06 Apr 2019

I Dream of Tornados, Part II

For ages, my psyche has marked the start of spring by having a dream about a tornado. At least back to high school, warmer weather, trees start budding and flowers start poking out and — boom, tornado dream.

This year, however, I've had two dreams about tornados in the past week, which is the first time I can remember that happening. I'll have other dreams about weather, but only one tornado dream every spring. I'm not sure what this means.

Posted at: 10:34 | category: /random/2016c | Link

Thu, 07 Feb 2019

Securely storing environment variables in gopass

See yubi-env in my 'one-offs' repository.

Posted at: 18:03 | category: /computers/yubikey | Link

Mon, 04 Feb 2019

Vault Secrets Engines

At home I make extensive use of both the Minio object storage server and the Backblaze B2 object storage service. I've also recently started making use of HashiCorp Vault.

Given how useful it is to generate dynamic secrets with Vault, I wanted to extend that to my usage of Minio and B2, so writing a secrets engine plugin for Vault has been on my project list for quite some time. A couple weeks ago I came across David Adams's Sample Vault Secrets Plugin, and after about an hour of staring at that and the Vault source, everything clicked and I started writing plugins.
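For reference, registering a custom secrets plugin with Vault looked roughly like this at the time (the binary name, plugin directory, and mount path here are hypothetical, and the exact commands have shifted between Vault versions, so check the docs for yours):

```shell
# compute the plugin binary's checksum and add it to the catalog
SHA256=$(shasum -a 256 /etc/vault/plugins/vault-plugin-secrets-minio | cut -d' ' -f1)
vault write sys/plugins/catalog/secret/minio \
    sha_256="$SHA256" command="vault-plugin-secrets-minio"

# mount the new secrets engine at its own path
vault secrets enable -path=minio -plugin-name=minio plugin
```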

It's a testament to Vault's contributors that the plugin system is very well thought out and incredibly easy to use, making it a rather simple task to extend Vault. You can find both of mine on GitHub:

Full Disclosure: At the time of writing, I am a HashiCorp employee, although this post and the plugins were written as a personal project and are not official HashiCorp products. The views expressed in this post are entirely personal and are not statements made on behalf of HashiCorp, Inc.

Posted at: 12:42 | category: /computers/hashistack | Link

Sat, 29 Dec 2018

Restic Systems Backup Setup, Part 5 - minio multi-user support

This is Part 5 of my series on building a restic-based system backup setup. The rest of the articles can be found here.

One of the original design decisions in my restic systems backup setup was isolation between hosts. I didn't want root on one system to be able to access the backups of other hosts, even if they were storing backups on a common backup server.

At that time, Minio, the object storage server I was using on the backup server, only supported single-tenancy — there was a single "access key"/"secret key" per instance, with access to every object and every bucket in that instance. Minio's recommendation at the time was to run multiple instances, each on a distinct port, to provide isolation. That's the solution I went with at that time.

Sometime in October, when I was pre-occupied with my wedding, Minio added Multi-User Support. This support adds the ability to have multiple users per Minio instance, each with distinct access and secret keys, and adds decent support for S3-style policies. After a bit of experimentation I was able to figure out a setup where I could run a single Minio instance, put each system's backup in a distinct bucket, and create policies that kept everything separate.
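As a sketch of what that setup looks like (the user, key, bucket, and policy names here are made up, and the mc admin syntax has changed across Minio releases):

```shell
# one user per backup client, each locked to its own bucket
mc admin user add backups host1 host1-secret-key

cat > host1-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:*"],
    "Resource": [
      "arn:aws:s3:::restic-host1",
      "arn:aws:s3:::restic-host1/*"
    ]
  }]
}
EOF

mc admin policy add backups host1-policy host1-policy.json
mc admin policy set backups host1-policy user=host1
```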

This would greatly simplify my backup setup, and it ties in with some other changes I want to make. I want to make it much easier to add and remove backup clients, and to get their current status, amongst other things. I've also started eating, well, not my dogfood, but work's dogfood: I've got a Consul and Vault cluster running, and I want to start leveraging that as well.

Expect the next post in this series to talk about the new setup, while hopefully avoiding System 2.0 tarpits.

Posted at: 12:47 | category: /computers/backups/restic-systems-backups | Link

Wed, 25 Jul 2018

Using gopass with mutt

I've been using gopass for a long time as my password manager — with my GnuPG and Yubikey setup accessing my passwords on both my laptop and my colocated box is pretty transparently the same.

I randomly came across the fact that mutt will do backtick expansion in its configuration file. With that, I can keep my mutt IMAP password in gopass and have mutt fetch it with set imap_pass=`pass mutt_imap_pass`
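For the curious, the relevant muttrc lines look something like this (I'm assuming here that `pass` is gopass or an alias for it, and that the secret is named `mutt_imap_pass` — use whatever name you've stored):

```
# ~/.muttrc — mutt runs the backticked command at startup
# and uses its output as the value
set imap_user = "me@example.com"
set imap_pass = `pass mutt_imap_pass`
```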

Posted at: 08:34 | category: /computers/nifty | Link

Sat, 14 Jul 2018

Wrapping Consul Lock

I've recently installed a Consul cluster at home, mostly to act as an HA backing store for Vault. If you've been following along, I've also been moving to Restic for my system backups so, of course, I want snapshots of Consul to end up there.

But this isn't a post about that — when I've got it running well and cleaned up, I'll post it and talk about it. What I want to talk about is the way that I'm wrapping the consul lock command.

The gist of the script is:

  1. Run consul snapshot save... and save the output to a local file
  2. Archive that file away
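The steps above are nothing fancy. A sketch of them, with hypothetical paths and assuming the "archive away" step is a restic backup as in the rest of this series:

```shell
# take the snapshot to a timestamped local file
snap="/var/tmp/consul-$(date +%Y%m%d-%H%M%S).snap"
consul snapshot save "$snap"

# archive it away, then clean up the local copy
restic -r /srv/restic/consul backup "$snap"
rm -f "$snap"
```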

I want that running on all three Consul servers, but I only ever want the snapshot to happen on a single server at a time. That way, if I ever have to take a Consul server down for maintenance, snapshots will still happen within the cluster. This is a common pattern, and it's really easy with consul lock. It's also useful whenever you have a cronjob or task that can run on more than one server, but should only run on one at any time. Rather than running it on a single server and having to remember to move it elsewhere if that server goes down, or installing some sort of distributed queueing system, you can use this.

If you run your command under consul lock, only one instance of it will run at a time, which is the basis of the technique I'm using. But I don't want to put the consul lock logic in my crontab; I want it embedded in the script. Here's a simple way of doing that:



function inner {
    # do stuff inside the lock
}

# parse arguments here, getopt, etc.

if [ "$1" == "inner" ]; then
    shift
    inner "$@"
else
    # other subcommands; re-invoke ourselves under the lock
    consul lock "$KVPREFIX" "$0" inner "$@"
fi

If you call your script normally, it will use consul lock to call itself with the argument inner, which does the work that should only happen on a single host at a time.
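With that, the crontab entry can be identical on all three servers — something like this (the path and schedule are hypothetical); whichever server grabs the lock first does the work:

```
15 3 * * * /usr/local/bin/consul-snapshot backup
```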

Posted at: 23:03 | category: /computers/consul | Link

Sun, 18 Feb 2018

Thoughts about On-Call

This month there have been a couple of interesting discussions about on-call rotations in the tech industry. The first was started by Charity Majors, who sparked a thread on Twitter:

All this heated talk about on call is certainly revealing the particular pathologies of where those engineers work. Listen:

1) engineering is about building *and maintaining* services
2) on call should not be life-impacting
3) services are *better* when feedback loops are short

— Charity Majors (@mipsytipsy) February 10, 2018

A couple days later John Barton followed up with an article that I really enjoyed, and pretty much whole-heartedly endorse. I had a few thoughts from both of these, and wanted to talk about them here.

"But that's just an incentive for engineers to weasel extra pay by building broken systems": I think this falls apart in several ways. First, that extra pay doesn't just appear with no additional consequences — the engineer on-call still has to actually fix the problem, wake up at odd hours, be bothered when they'd much rather be bowling or watching a movie or reading a book or sleeping, etc. Second, if this actually works at your company, your management is broken. Period. That's the whole point of it, to put an explicit material cost to this additional duty. If your management tolerates abuse of this pay, they either explicitly consider this part of the cost of doing business, or they're not paying close enough attention, and both of those cases are entirely on them.

To everyone who argues that an engineer's pay covers this, I'd counter by asking "Okay, how much of that pay represents the on-call expectation?" I'm guessing many places wouldn't be able to answer that. And unlike many things an employer pays for that are fuzzy, hard-to-define criteria, this one is easy: all it takes is a stopwatch and a calculator to count up how many minutes are spent responding to incidents. Is what you're being paid for it worth it? As John points out, many other industries with highly trained professionals pay on-call differentials, and tech shouldn't be any different.

I'd also add a guideline to John's list: if someone gets a page, the next day someone covers for them for 24 hours. While this isn't official policy where I work, it's my own unofficial policy to offer to cover for my co-workers when they have a particularly bad on-call day. Someone who is woken at 3am, even if they can go back to sleep ten minutes later, doesn't get as good a rest and isn't as effective the next day. Having that followed by another interrupted sleep the next night both makes the problem worse and also means that the most critical person on your team, the one responding to an emergency, is in less than peak condition. Don't let people shrug this off with an "I'm fine" — there's a large body of sleep research that disagrees with them.

Like many things in tech that I think are bad, it's only going to change if expectations start changing, and expectations aren't going to change unless we start prodding them in the right direction. I think these kinds of questions, asking for the kinds of policies John advocates, need to become more standard industry-wide. If my situation warrants, I plan on making this part of the questions I ask any potential employer, and if your situation warrants, I'd ask you to do the same.

Posted at: 14:39 | category: /tech | Link

Sun, 21 Jan 2018

Disabling Yubikey 4 OTP

Since I can never remember this:

I don't make use of the Yubikey OTP mode, so I don't want what a former co-worker called "yubidroppings" when I accidentally brush my key.

Short answer: get ykpersonalize and run ./ykpersonalize -m 5, since I only want U2F and CCID modes enabled. Tell it yes twice.
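And since I'll forget the rest of the table too, here are the mode values as I understand them from the yubikey-personalization documentation (double-check against the man page for your version before running anything):

```
ykpersonalize -m mode values:
  0  OTP only            4  OTP + U2F
  1  CCID only           5  U2F + CCID
  2  OTP + CCID          6  OTP + U2F + CCID
  3  U2F only
```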

Posted at: 12:10 | category: /computers/yubikey | Link

Sat, 20 Jan 2018

Restic Systems Backup Setup, Part 4.5 - Why not just rclone

This is Part 4.5 of my series on building a restic-based system backup setup. The rest of the articles can be found here.

.@thomaskula nice article! Did you consider just running rclone in a loop?

— restic (@resticbackup) January 15, 2018

After I posted part 4 of my restic backup series, @resticbackup asked the above question, and I thought trying to answer it would be a good intermediate article.

As a brief background, rclone describes itself as "rsync for cloud storage". It can talk to a rather comprehensive number of storage providers as well as local storage, and can perform operations as simple as mkdir, cp, and ls, and ones as complicated as syncing between two different storage providers. It, like rsync, is a useful tool in your kit.

So why not just run rclone in a loop? Actually, that might not be a bad idea, and it's certainly a simple one. Pick a loop sleep that matches your replication needs, put some error checking in, and fire away. If I were going to do this, I'd likely use rclone copy rather than rclone sync. copy will copy files from the source to the destination, but will not delete any files on the destination which do not exist on the source. sync, on the other hand, will make the source and destination look exactly the same.
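A minimal version of that loop might look like this (the repository path and remote name are made up, and the sleep should match your replication needs):

```shell
#!/bin/sh
# copy (never sync) the local restic repository to a remote, so
# nothing deleted locally is ever deleted at the destination
while true; do
    rclone copy /srv/restic-repo b2:my-restic-bucket \
        || echo "rclone copy failed" >&2
    sleep 300
done
```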

My preference for copy over sync is twofold. First, I like the idea of having different retention policies at different repositories. For example, some of the storage provider options are so inexpensive, for my scale of storage needs, that I basically treat them as "just shove things in there forever, I don't care", or, at least, care about once a year. On the other hand, local fast storage is expensive enough that perhaps I can only afford to keep, say, a week or two of backups around for all of my systems. By treating the repositories as distinct, and with the append-only nature of restic, I can do that: keeping what I'm likely to need for most restore operations at hand, and keeping longer-term, much less likely to be needed data off some place where it's harder to access but cheaper to keep.

The second reason for treating the repositories separately is that it helps guard against "oh shit!" moments: if you are syncing every five minutes and you accidentally delete some data you need, you've got a narrow window to realize that and stop the sync. At some point in your life, you will do this — I remarked once that "[s]ome people think I'm smart, but that's not it. I just remember exactly what each of those bite marks on my ass means."

That all said, I'm going to keep using the mechanism I outlined in the last article, of firing off a new sync job every time a new snapshot appears. There are a few reasons for this. First, it's there, and it's working. Barring some overriding need to change this setup, I don't plan on exerting energy to change it — for now.

Second, there is some amount of overhead cost here every time I do a sync. My goal is that data starts being synced within a couple minutes of a new snapshot being created. I'm still, however, mostly doing the one-backup-a-day-late-at-night model (at least for now). With that, I'll actually have work to do less than one-tenth of one percent of the time, which just feels off. I'll admit, of course, that's just a gut feeling. In addition, even if I'm not copying data, building up a list of what I have locally and, more importantly, what's at the remote repository, has some cost. All of the storage providers charge something for operations like LIST. That said, honestly, I haven't run the math on it, and the charge here is almost certainly one or two epsilons within nothing, so perhaps this isn't much of a reason to care.

The two important bits in conclusion: first, I have something working, so I'm going to keep using it until it hurts to do so, which, honestly, is a good chunk of the reason I do many things. We'll be fancy and call it being "pragmatic". Second, your needs and costs and criteria are certainly different from mine, and what's best for you requires a solid understanding of those things — one size certainly doesn't fit all.

Posted at: 18:32 | category: /computers/backups/restic-systems-backups | Link