Disk Hack

One of the things I enjoy about Unix system administration is the MacGyver aspect of it: when something goes pear-shaped, and your preferred tools aren’t available because they’re on the disk that just died, or on the other side of the pile of smoking ashes that used to be a router, you have to figure out how to recover with what you’ve got left. It’s a bit like that scene in Apollo 13 when they realize that the space capsule has a round hole for the air filter, but only square filters, and the engineer dumps all the equipment the astronauts have available onto the table and says, “We’ve got to find a way to make this fit into the hole for this, using nothing but that.”

So anyway, my mom’s Mac recently died. And, naturally, there are no available backups. But I said I’d do what I could, and took the machine home.

I’m glad we wrote off the old machine as a total loss, since it was (the tense should give you an idea of what’s coming) an iMac, one of those compact everything-in-the-monitor models that tries oh-so-hard to fit everything into as small a space as possible. Of course, this compactness means that there’s no room to do anything: the disk is behind the LCD display, and wedged in a tight slot between the graphics card and the DVD drive. And the whole thing is wrapped in — I kid you not — foil, most likely to help control airflow. So basically I wound up ripping things out with little or no grace or elegance. If it wasn’t totaled then, it certainly is now.

At any rate, that left me with a disk that, thankfully, turned out to be unharmed. The next question was how to hook it up. I was pretty sure it had an HFS or HFS+ filesystem, which meant that the obvious thing to do would be to put it in a Mac to read. But I don’t have a Mac that I could put a second internal drive in. I toyed briefly with the idea of finding an enclosure and whatever conversion hardware would be necessary to turn an internal SATA drive into an external USB drive, but figured that was too hard for a one-shot. Then I found that my Linux box has hfs and hfsplus filesystem kernel modules, so hey.

(Of course, I don’t know how stable the Linux HFS driver is, so I figured it’d be best to write-protect the disk. At which point I discovered that this model only has one hardware jumper slot, and it doesn’t write-protect the disk. Fuck you very much, Maxtor/Apple.)
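Lacking a hardware write-protect, the next best thing was to mount the thing read-only and trust the driver to respect that. A minimal sketch, assuming the HFS+ data partition shows up as /dev/sdb2 (which, as it turned out, it did):

# mount the HFS+ partition read-only so the driver never writes to the old disk
mkdir -p /mnt
mount -t hfsplus -o ro /dev/sdb2 /mnt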

The Linux HFS driver turned out to be good enough for reading, and I could mount the disk and read files, so yay. The next question was how to get the files from there to a laptop that I could bring over to my parents’. This is complicated by the fact that on HFS, a file isn’t just a stream of bytes, the way it is in Unix; it has two “forks”: the data fork contains the actual data of the file (e.g., a JPEG image), and the resource fork contains metadata about the file, such as its icon, the application that should open the file by default, and so on. I didn’t want to lose that if I could help it.

The way the Linux HFS driver deals with resource forks is to create virtual (or transient, or whatever you want to call them) files: if you open myfile, you’ll get the data fork, which looks just like any other file. But if it has a resource fork, you can also open myfile/rsrc and read the contents of that. This meant that in the worst case, I could 1) copy over the data forks to a directory on the Mac, then 2) find which files have resource forks, with something like

find /mnt -type f |
while IFS= read -r filename; do
    # -s: true only if the rsrc pseudo-file exists and is non-empty
    if [ -s "$filename/rsrc" ]; then
        echo "$filename"
    fi
done

and 3) somehow re-graft the resource forks onto the files on the Mac end. But that seemed like a lot of work.
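If it had come to that, step 3 would presumably have gone through ..namedfork, which as far as I know is the Mac-side equivalent of the Linux driver’s /rsrc trick. A purely hypothetical sketch, assuming each resource fork had been stashed next to its copied file with a .rsrc suffix:

# on the Mac: graft a stashed resource fork back onto the copied file
# (photo.jpg and photo.jpg.rsrc are made-up names)
cat photo.jpg.rsrc > photo.jpg/..namedfork/rsrc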

Apple software is often distributed on .dmg (disk image) files, which are mounted as virtual disks. I figured that’d be the obvious way to package up the contents of the disk. So I dd’ed the raw disk device (/dev/sdb rather than /dev/sdb2, which was the mounted partition, in order to get the entire disk, including partition map and such), but when I tried to mount that, it didn’t work, so presumably there’s more to a .dmg file than just the raw disk data. I tried a couple of variations on that theme, but without success.
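That attempt was nothing more elaborate than reading the whole device into a file and hoping the Mac would accept the result as an image (the output filename here is made up):

# on the Linux box: copy the entire disk, partition map and all, into a file
dd if=/dev/sdb of=imac.dmg bs=2M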

(In passing, I also noticed that rsync supports “extended attributes” on both my Mac and my Linux box, so I tried using that to copy files (with resource forks) over, only to find that the two implementations use different options to say “turn on extended attributes”, so the client couldn’t start the remote server correctly.)
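If I remember the versions right, the mismatch is that the rsync Apple ships (an old 2.6.x) spells it -E / --extended-attributes, while rsync 3.x on the Linux side spells it -X / --xattrs; the paths below are just for illustration:

# Apple's bundled rsync 2.6.x: copy extended attributes / resource forks
rsync -avE source/ dest/
# rsync 3.x: the same idea is spelled differently
rsync -avX source/ dest/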

Eventually, I realized that dd could be used not just to read a disk image, but to write one. Yes, I said above that reading from the disk and writing to a file didn’t produce a usable disk image. But I also said that .dmg files are mounted like disks, and that implies that there has to be a device to mount.

So on the Mac, I created a disk image file with hdiutil create (details below), then opened it with Disk Utility. Forcing “Verify disk” made Disk Utility attach the image as /dev/disk2s9, just before telling me that there was no usable filesystem on the disk image. That was fine; all I wanted was for it to create a /dev/disk* device that I could write to. Then I was able to

ssh linuxbox 'dd if=/dev/sdb bs=2M' | dd bs=2048k of=/dev/disk2

to transfer the raw contents of the disk to the “disk data” portion of the disk image.
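The hdiutil step itself was nothing special. I don’t remember the exact invocation, but it would have been along these lines, with no filesystem and no partition map of its own, since the dd above supplies the real Apple partition map and HFS+ volume (the filename is invented):

# create a 32GB image with no partition map and no filesystem
hdiutil create -size 32g -layout NONE scratch.dmg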

To my slight surprise, the transfer actually worked. Yes, I had to repair the disk image, but from the log messages, that appeared to be because some of the superblock copies were missing (the disk is 120GB, but I only created a 32GB image).
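For what it’s worth, the equivalent repair from the command line would be something like this, assuming the image was still attached with its HFS+ volume at disk2s9:

# repair the HFS+ volume inside the attached image
diskutil repairVolume disk2s9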

The final problem would be getting the disk image from my locked-down(-ish) laptop to my mom’s new vanilla Mac, but I don’t think I’ll bother. It’ll be a lot easier, and better in the long run, to put the old disk image onto an external drive that can then double as a backup disk, so I don’t have to do this again.

Because while it can be fun to solve a puzzle and figure out how to fix something with suboptimal tools, there’s also wisdom in avoiding getting into such situations in the first place.