The expansion of my NAS started off very nicely: I hot-swapped one disk after the other, giving the system time to re-sync the RAID after each swap, until it had recognized all four new disks — a process that took approximately five hours per disk. As per instructions sent to me via e-mail by the NAS, I then rebooted to allow the ReadyNAS firmware to perform the actual RAID expansion. All was fine and dandy. Or so I thought.

To make sure everything really was fine, I forced the NAS to reboot and run a file-system check, which it duly started doing.

I got a bit suspicious upon noticing that the progress display still read 69% after two hours. After about seven hours the fsck was still running. I attached strace to it and saw blocks of data being shuffled around…

The fsck’ing fsck has now been running for over 24 hours, and it is still moving bits of data from left to right (a.k.a. from left to write :), albeit at a slower rate: the NAS started swapping yesterday evening. I’m crossing my fingers that the file-system check will come to completion sometime in the near future, and that the data on the NAS will resemble the data I initially stored on it…

I’ve waited out the odd fsck in my time, but this one beats them all.

Update: After letting the fsck run for 8 days and 2 hours (?!?!?) I pulled the plug: there was definitely something wrong with at least one of the disks, I thought. I ran the SMART disk test, which alone took 6 hours, with no problems reported. Degraded (pun intended) trust in the file system on the NAS caused me to wipe it and start over. Easier said than done, because I first had to back up the data on it, merging existing backups with stuff I hadn’t yet got around to saving — yes, it happens in the best of families… I collected a bunch of disks, hooked them up to a Linux box, created a fat RAID-0 stripe, and rsynced the data from the NAS onto that before factory-resetting the NAS and copying everything back onto it. A very long-running process, not to be repeated.

All’s well that ends well.