All things come to an end at some point, and uSD cards are no exception. And they tend to die just about when you LEAST expect them to.
Anyway, at my country house, away from the noise of the big city, I had a cheap cellphone tethering the internet over an OpenVPN connection. The operator doesn't offer a proper external IP service, so I have to run an OpenVPN tunnel to get access to my surveillance setup.
I have a bunch of cams here and there, mostly watching over these guys:
The cellphone itself runs rooted Android and a Debian chroot with OpenVPN off an SD card. The SD card died this weekend, and at some point I realised that I didn't have a recent backup. In theory it was no big deal, just a Debian rootfs plus a bunch of OpenVPN config files. But I had spent a while now and then perfecting those configs and tuning OpenVPN for performance over the cellular network, and none of that was backed up. Oops.
Anyway, this note covers data recovery from such an SD card and the common pitfalls.
How SD cards die
SD cards can die in different and weird ways. Here's my summary, hardly the most comprehensive one, but it comes from personal experience:
- They become read-only if something fails inside them. Perhaps the most harmless way.
- Some blocks become unreadable, unless you write something there to remap the bad blocks. <- My case!
- Some cards just don't show up anywhere (card readers and mmc hosts alike) and show no signs of life: if some internal connection under the epoxy breaks, the interface stops working altogether. Sometimes data can still be recovered in SPI mode.
My case and the action plan.
My SD card had some unreadable blocks, and the mounted chroot filesystem ended up missing crucial files required to start. My initial data recovery plan was:
- Create an image of the card with dd/ddrescue
- loop-mount a copy of the image and run e2fsck
- If that doesn’t work – think of something else 😉
After all, the crucial parts were just a few OpenVPN configs and scripts, so I stood a decent chance.
Why you shouldn’t use a card-reader
SD card readers are complex devices that basically map the SDIO bus (or sometimes other buses as well, e.g. Memory Stick) to USB storage, and there's a lot going on inside them. They are built around microcontrollers that run their own firmware, do buffering, error handling, etc., and we know very little about any of it. When I first plugged the card into a card reader I couldn't do a thing: the reader just disconnected itself from the USB bus whenever it hit a bad block. A different reader simply hung for a long time before finally reporting an unreadable block of data.
Lesson learned: SD card readers do a lousy job when it comes to error handling. Most likely the people who made them never bothered to test them with BAD cards.
What makes things worse, most laptops have their built-in card-readers connected over USB internally.
In any case, all we need is to get rid of that buggy hardware abstraction layer called the card reader before we can do any actual recovery. A Raspberry Pi or any other SBC with a built-in SD card slot would do fine, as long as it can boot off some other device.
The SD card slots in them are usually hooked to the kernel via the mmc_host subsystem and therefore don't have that bug, thanks to the perfectionists in charge of the Linux kernel.
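If you're not sure how a given slot is wired, a quick sketch like this can tell you (the grep pattern is just a guess at how a reader might name itself, so treat it as an example):

```shell
# Native SD/MMC slots register under the kernel's mmc_host class:
ls /sys/class/mmc_host/ 2>/dev/null || echo "no native mmc_host found"
# USB card readers show up as usb-storage devices instead:
lsusb 2>/dev/null | grep -i "card reader" || echo "no USB reader listed"
```

If `/sys/class/mmc_host/` is empty but the card appears as `/dev/sdX`, you're going through a USB reader.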
But to make the situation worse, I had no ready-to-use SBC at my disposal at that moment. I did, however, have a rooted Android phone and a shitty internet connection 😉
ddrescue to the rescue.
I had an ARM Linux cross-toolchain at my disposal, but no Android NDK. The latter would be a huge download over my shitty internet connection. So I grabbed the ddrescue tarball, cross-compiled it as a static binary (Android has a different, incompatible libc, so we have to be creative) and pushed it to the device.
Here’s the whole procedure in a nutshell:
```
wget http://mirror.tochlab.net/pub/gnu/ddrescue/ddrescue-1.22.tar.lz
tar vxpf ddrescue-1.22.tar.lz
cd ddrescue-1.22
nano configure # Or whatever your editor of choice is.
```
You’ll need to open up ddrescue’s configure script with your editor and adjust a few variables somewhere near the beginning of the file. Mine ended up being like this:
```
CXX=arm-rcm-linux-gnueabihf-g++
CPPFLAGS=
CXXFLAGS='-Wall -W -O2 -static'
```
Replace arm-rcm-linux-gnueabihf with your cross-compiler triplet, which should obviously be in your PATH.
Then just run
```
./configure
make
adb push ddrescue /sdcard
adb shell
```
Now, on the smartphone shell run
```
su
busybox mount -o remount,rw /system
cp /sdcard/ddrescue /system/bin/
chmod 755 /system/bin/ddrescue
ddrescue
```
If it outputs a summary – we have it up and running!
Now comes the most vital part of the show. Make sure your internal phone storage is big enough to hold all the recovered data, insert the damaged SD card and fire up ddrescue. By default ddrescue will scrape the unreadable blocks on the SD card, so you may want to use one of these flags:
```
-n, --no-scrape      skip the scraping phase
-N, --no-trim        skip the trimming phase
```
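If you want to be gentle with a dying card, a common pattern (sketched here with the device and paths from my session; `-r` is ddrescue's standard retry option) is a fast first pass followed by retries of just the bad areas:

```shell
# Pass 1: copy everything that reads cleanly, skip the slow scraping phase.
ddrescue -n /dev/block/mmcblk1 /sdcard/recovered.img /sdcard/recovered.map
# Pass 2: the mapfile remembers what failed; retry only those areas,
# up to 3 times each.
ddrescue -r3 /dev/block/mmcblk1 /sdcard/recovered.img /sdcard/recovered.map
```

The mapfile is what makes this safe to interrupt: you can stop and resume the rescue at any point without re-reading what was already recovered.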
In the end I was fine with the defaults, and the run looked something like this:
```
root@OUKITEL:/ # ddrescue /dev/block/mmcblk1 /sdcard/recovered.img /sdcard/recovered.map
GNU ddrescue 1.22
     ipos:    2253 MB, non-trimmed:        0 B,  current rate:       0 B/s
     opos:    2253 MB, non-scraped:        0 B,  average rate:   2967 kB/s
non-tried:        0 B,  bad-sector:    1556 kB,    error rate:     512 B/s
  rescued:    3962 MB,   bad areas:      380,        run time:     22m 14s
pct rescued:   99.96%, read errors:     3320,  remaining time:         0s
                              time since last successful read:         1s
Finished
root@OUKITEL:/ #
```
As you can see from the summary above, most of the card was readable. But just 0.04% of bad blocks out of 4 GB, sitting in a few vital areas, was enough to make every card reader I had go nuts.
After pulling the image file from the cellphone I loop-mounted it and ran e2fsck on it.
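For reference, attaching a partitioned image looks roughly like this (a sketch assuming util-linux's losetup; the image path is the one from my session, yours will differ):

```shell
# Attach the image and let the kernel scan its partition table (-P),
# which exposes the partitions as /dev/loopNp1, /dev/loopNp2, ...
# -f picks the first free loop device, --show prints its name.
sudo losetup -P -f --show /home/necromant/arvale.img
# Then check the rootfs partition (the second one in my case):
sudo e2fsck /dev/loop2p2
```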
```
0 ✓ necromant @ sylwer ~ $ sudo losetup
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0         0      0         1  0 /var/lib/docker/devicemapper/devicemapper/data
/dev/loop1         0      0         1  0 /var/lib/docker/devicemapper/devicemapper/metadata
/dev/loop2         0      0         0  0 /home/necromant/arvale.img
0 ✓ necromant @ sylwer ~ $ sudo e2fsck /dev/loop2p2
e2fsck 1.42.12 (29-Aug-2014)
arvale: recovering journal
arvale: clean, 15687/235712 files, 149756/941824 blocks
0 ✓ necromant @ sylwer ~ $
```
WTF??? The filesystem reported itself clean, so I had to add the -f flag (along with -y and -v) to force a verbose check without any stupid questions. This time e2fsck found a bunch of errors and fixed them.
```
0 ✓ necromant @ sylwer ~ $ sudo e2fsck -f -y -v /dev/loop2p2
```
/usr/bin was no more, and perhaps some other parts were missing too, but /etc with all my OpenVPN scripts was intact, which I consider a win. Once I applied those configs on top of the most recent backup, everything went back to normal.