Sun, 11 Sep 2011
Hacking AFS Dumps for Fun and Profit
Well, for fun at least.
The traditional way of doing AFS volume dumps tends to follow a classical "Full, incremental, incremental" pattern, with occasional new full dumps so that the number of dumps one has to restore for any given point in time stays manageable (at work, we do something that is roughly "Monthly-Weekly-Daily"). This also lets you expire dumps you no longer need: if you only want to keep two months' worth of dumps, it is easy to determine which dump files can be discarded.
At home, however, doing full dumps of large volumes is painful, because my DSL connection has a rather paltry upload speed, and since I keep copies of the volume dumps both at home and at my colo location, no matter where I do the dump at least one of the transfers will be slow. What I would like, then, is a process where I do a painful full dump once and then every day simply dump whatever has changed since the previous day. On its own that gets painful quickly, since after about three days the number of dumps needed for a restore becomes unwieldy. In addition, you can never throw away any dump, since every one of them is (potentially) still needed for a restore.
My desire, then, is to have something that pulls apart dump files and keeps enough data around for each backup point that I can synthesize what appears to be a full dump file for that point. AFS volume dumps handily make that possible: for each vnode they say either "here's a vnode that's changed" or "this vnode is present but hasn't changed since your reference time". Combine that with some logic that keeps track of what the vnodes looked like in the last backup, and you suddenly have enough information to do the synthesis.
Thus the impetus to create pyafsdump, a Python module that understands and can do various things with AFS volume dumps. As a proof of concept I put together a pair of hackish scripts, one of which pulls apart volume dumps and generates some metadata, and another which reads that metadata and synthesizes a full dump. A very rough test seems to indicate that it works: I was able to pull apart a full dump and three subsequent incremental dumps, and from those generate a full dump reflecting what the volume looked like at the time the third incremental was made, which was restorable with vos restore.
A public git repository can be found at http://kula.tproa.net/code/pyafsdump.git
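To make the bookkeeping concrete, here is a toy sketch of the merging idea in C. It is not pyafsdump (which is Python), and the record layout is entirely invented for illustration; it only shows how keeping the latest copy of each vnode lets you handle both kinds of dump records and then emit something that looks like a full dump:

    /* Toy sketch of the dump-merging idea -- not pyafsdump itself.
     * Keep the most recent copy of every vnode; "changed" records
     * replace the stored copy, "unchanged" markers leave it alone,
     * and a synthesized full dump is simply everything in the table. */
    #include <stdio.h>

    #define MAX_VNODES 1024                 /* toy limit for the sketch */

    struct vnode_rec {
        int  present;                       /* vnode exists in the volume */
        char data[64];                      /* stand-in for metadata + contents */
    };

    static struct vnode_rec catalog[MAX_VNODES];   /* state as of the last dump applied */

    /* A changed vnode in a dump: store its new contents. */
    static void apply_changed(int vnode, const char *data)
    {
        catalog[vnode].present = 1;
        snprintf(catalog[vnode].data, sizeof(catalog[vnode].data), "%s", data);
    }

    /* A "present but unchanged since the reference time" marker:
     * nothing to do, the copy saved from an earlier dump is still current. */
    static void apply_unchanged(int vnode)
    {
        (void)vnode;
    }

    /* Synthesize a "full dump": emit every vnode currently known. */
    static void emit_full(void)
    {
        int i;
        for (i = 0; i < MAX_VNODES; i++)
            if (catalog[i].present)
                printf("vnode %d: %s\n", i, catalog[i].data);
    }

    int main(void)
    {
        /* the one painful full dump */
        apply_changed(1, "root directory, version 1");
        apply_changed(2, "some file, version 1");

        /* a later incremental: only vnode 2 changed */
        apply_unchanged(1);
        apply_changed(2, "some file, version 2");

        emit_full();                        /* behaves like a full dump at the later time */
        return 0;
    }

The real dump format also carries the volume header and has to cope with vnodes that disappear between dumps, but the core bookkeeping is just a table like this.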
Posted at: 22:57 | category: /computers/afs | Link
Wed, 14 Jan 2009
Adding gzip support to the dumpscan suite
I recently started looking seriously at the dumpscan suite from the folks at CMU SCS. It's a fairly useful set of libraries and tools for examining AFS volume dumps, which have been a fascination of mine for a while.
Both for use at home, where I want to write a utility to merge several volume dumps into one, and at work, where it would be neat to do some sort of cataloging of dumps, this is a windfall, making such tools pretty easy to write. At both places, however, volume dumps tend to be gzipped right after they are created (or even as they are created). The dumpscan suite includes a generic library for file-like objects (called XFILE) that is easily extensible, and after thinking about it for 10 minutes while trying to fall asleep I got out of bed and just added gzip support.
It's completely cargo-cultish, and entirely and utterly untested, but it works well enough that I can run afsdump_scan and have it scan a gzipped volume dump directly. Find it here.
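For the curious, the underlying trick is just zlib. The snippet below is a standalone sketch of the read path such an XFILE hook has to wrap; it is not the dumpscan XFILE interface itself, but it shows how gzread() hands back decompressed bytes so that whatever sits above it never needs to know the file was compressed:

    /* Standalone zlib sketch, not the actual XFILE gzip hooks:
     * gzopen()/gzread() decompress transparently, so the code above
     * them just sees the plain volume dump bytes. */
    #include <stdio.h>
    #include <zlib.h>

    int main(int argc, char **argv)
    {
        gzFile in;
        unsigned char buf[8192];
        int n;
        unsigned long total = 0;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <volume.dump.gz>\n", argv[0]);
            return 1;
        }
        in = gzopen(argv[1], "rb");
        if (in == NULL) {
            fprintf(stderr, "cannot open %s\n", argv[1]);
            return 1;
        }
        while ((n = gzread(in, buf, sizeof(buf))) > 0)
            total += n;                 /* a real XFILE hook would hand these bytes up */
        gzclose(in);
        printf("%lu decompressed bytes\n", total);
        return 0;
    }

Build it with something like cc -o gzscan gzscan.c -lz. A nice property of zlib here is that gzread() also reads files that were never compressed, so the same hook can handle both cases.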
Posted at: 01:08 | category: /computers/afs/2009/01 | Link
Fri, 09 Jan 2009
Administratively Read-Only Volumes
At work we have a couple of occasions where we want to make a volume read-only. For example, when we do a restore for a user, we don't want the user to be able to write stuff there. Or, when we get a request from the User Advocate or ITSS (IT Security Services) office to freeze an account, it would be useful to freeze the afs volume associated with that account.
Right now, when we do a restore, we run the restore with the -readonly flag, which marks the volume as an RO volume. This, of course, isn't really a read-only volume; rather, in the classic afs sense, it is a read-only version of a replicated volume. While it works in a pragmatic sense, there's no corresponding read-write volume, which really confuses vos (and me, when I try examining the volume and have to remind myself why I'm seeing weird output). Plus, this doesn't handle the "we need to lock this volume now" case. Sure, we could change the volume's top-level ACL instead, but again that is doing some other operation to fake what we really want to do.
Now, there is a -readonly flag to the fileserver, which basically makes all write operations to any volume on that fileserver fail with VREADONLY. This actually does exactly what we want, but it means you have to maintain at least one fileserver that is specifically a read-only server, and move or restore any volume you want to be read-only onto it.
What I really want, however, is a vos command that simply locks a volume. Since the term "read-only" is already used for something else in afs, I think I'd like it to be something like vos writelock -lock and vos writelock -unlock.
After staring at the code, I think it would be fairly simple to do. In vol/volume.h, take the first element of the reserved2 array to be the flag, and add a macro to test whether that field is set, alongside the other (vp)->header->diskstuff macros. Then in viced/afsfileprocs.c, everywhere you see if (readonlyServer) return (VREADONLY); add another check to see if the flag above is set and, if it is, also return VREADONLY. This part would be fairly trivial.
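In sketch form, and with the caveat that V_writeLocked is a name I'm making up here and that the variable holding the volume pointer may differ from handler to handler, the two pieces would look something like this:

    /* vol/volume.h: claim the first slot of the reserved2 array as the
     * write-lock flag, alongside the other diskstuff accessor macros.
     * (V_writeLocked is an invented name for this sketch.) */
    #define V_writeLocked(vp)   ((vp)->header->diskstuff.reserved2[0])

    /* viced/afsfileprocs.c: everywhere a write path currently has
     *     if (readonlyServer)
     *         return (VREADONLY);
     * also refuse writes to a write-locked volume (assuming the handler's
     * volume pointer is called volptr): */
    if (readonlyServer || V_writeLocked(volptr))
        return (VREADONLY);

Flipping the flag on and off is then just a matter of rewriting that field in the volume header, which is what the vos side described next would drive.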
The other side of this would be adding yet more stuff to volser/vos.c: support for the writelock command, and support for having vos examine report whether the volume is write-locked. This is a little more complicated, because it involves adding another RPC and changing what UV_ListOneVolume returns. I may also want to overload the offline message in the volume header and allow writelock to set some small string in there.
I started looking at the 1.4.8 code, but given the previous paragraph, I should probably look at whatever the latest 1.5 release is and put it in there, since this is likely too big a change to go into the maintenance series.
Posted at: 21:19 | category: /computers/afs/2009/01 | Link
Mon, 19 Nov 2007
Salvage, Me Pretties!
Last Tuesday my colo provider knocked the power cord out of my machine there whilst doing some power work. Since then, although I didn't make the connection until just now, my nightly afs backups have been taking much, much longer than usual. Since I was also running out of space on the disk at home that holds one copy of the dumps, I just turned off backups until I could look at it tonight.
Investigating tonight, I noticed that the BosConfig file was gibberish, and by looking in backups I could tell it got messed up the day my machine fell over. I had also noticed a bunch of "trans X on volume Y is older than Z seconds" messages in the VolserLog, particularly for the volumes that change the most each day. A tiny voice in my head whispered, "I bet I have a bunch of volumes that need salvaging." Fortunately, at home a complete salvage of a fileserver takes less than a minute (unlike at work, where it takes a couple of hours at minimum). It's only incidental evidence, but making the clone volume to back up my web page volume took a split second, instead of the several seconds it took when I tried earlier this evening.
So, in closing:
- Backups are good, okay. Thanks to my rsync backup system for stuff outside of afs, I could pinpoint the day BosConfig on service-5 changed, as well as easily restore it.
- Since salvages at home take no time, I should really turn off fast-restart there. I'm hoping this is a runtime flag rather than a compile-only option (I know you have to build with a flag to turn the option on; I just hope that also enables a runtime flag with which you can turn it off).
- I should really try the demand-attach and demand-salvage stuff in 1.5.
Posted at: 19:32 | category: /computers/afs/2007/11 | Link