Warning: This is not a tutorial. It is just a story that explains a problem and provides the solution.
I had a few “D’oh!” *slap your forehead* moments today when I increased the size of my data volume. Since I didn’t find any good information on some of the issues during my Google searches, here is a short blog post to commemorate the occasion and help fellow users.
I am using LVM, Linux’s logical volume manager, on this machine, and one of the volume groups (VG) sits on a software RAID, /dev/md0, that used to span 3 physical hard disk drives and now spans those three drives plus a partition on a 4th drive. This is not ideal, but the three old drives are of the same type and 750GB each, whereas the new one is a single 1TB drive of a different make. Buying the same 750GB drive again would have been more expensive than buying a new drive with more than 1TB of capacity, and I had one of those spare. So I partitioned the 1TB drive into a 750GB partition plus an additional partition for miscellaneous purposes, and added the 750GB partition to the software RAID. This was no problem: after adding the 750GB partition as a spare device, I grew the RAID to span 4 active devices (3 HDDs + 1 partition).
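The mdadm steps just described look roughly like this; the device names are examples (I am assuming /dev/sdd1 is the new 750GB partition on the 1TB drive, your names will differ):

```shell
# Add the new partition to the array; it joins as a spare at first.
mdadm --add /dev/md0 /dev/sdd1

# Grow the array from 3 to 4 active devices; the reshape runs in the background.
mdadm --grow /dev/md0 --raid-devices=4

# Watch the reshape progress.
cat /proc/mdstat
```

The reshape can take many hours on drives this size, and the extra space only appears once it has finished.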
At this point, the raid had additional space, but I could not yet use it.
If I weren’t using LVM, I could simply have grown my file system partition to cover the additional space available on the RAID device (/dev/md0). Since I am using LVM, however, the next thing to resize is the LVM volume group that sits on the RAID device, in my case “dat”, i.e. /dev/dat/.
Calling vgdisplay as root before and after the grow should show that the free space in the volume group has increased.
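A quick way to check this, assuming your volume group is called “dat” like mine:

```shell
# Show the free extents / free size of the volume group.
vgdisplay dat | grep -i free

# Or more compactly, look at the VFree column.
vgs dat
```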
Once LVM sees the whole of /dev/md0, the additional space may be allocated to any logical volume in that volume group, in my case /dev/dat/data. (Note: since /dev/md0 was already a physical volume in the group, the command that makes LVM pick up its new size is “pvresize /dev/md0”; “vgextend” is only for adding a new physical volume to a group.) Use the
lvextend -l +100%FREE /dev/yourvg/yourlv
command to increase the size of yourlv to cover all the free space in the volume group.
Now I had a logical volume with 2.05TB of space, but Nautilus still showed only 1.4TB, as before. This is where, unlike in the previous steps, I got stuck. Posts on the web talked about resizing your file system, which in my case sits in a logical volume, which sits in the volume group, which sits on the RAID device. However, when I tried to resize the file system, I got the following error message:
resize2fs -p /dev/mapper/dat-data
resize2fs 1.42.4 (12-June-2012)
resize2fs: Device or resource busy while trying to open /dev/mapper/dat-data
Couldn't find valid filesystem superblock.
You can see that I did not make the mistake of calling resize2fs on /dev/dat/data, which is the logical volume; instead I called it (correctly, I think) on the file system inside, accessible through /dev/mapper/. Unmounting the data volume did not help, and I could not find a helpful post online about this problem either.
Ultimately, the “Couldn’t find valid filesystem…” message made me realise that since I am using an encrypted logical volume, my file system is in fact mapped under /dev/mapper/cr_data. With this epiphany, it was no problem to call e2fsck and resize2fs on /dev/mapper/cr_data, after which the additional space was available in my file manager.
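For reference, the final sequence looked roughly like this, assuming the encrypted mapping is opened as cr_data as on my machine. One caveat I should flag: if the mapping stayed open while the logical volume grew, dm-crypt may still report the old size, in which case an extra “cryptsetup resize” is needed first.

```shell
# Unmount the volume so e2fsck can check it.
umount /dev/mapper/cr_data

# If the LUKS mapping was open during the lvextend, tell dm-crypt
# to extend the mapping to the new size of the logical volume.
cryptsetup resize cr_data

# Check the file system, then grow it to fill the decrypted device.
e2fsck -f /dev/mapper/cr_data
resize2fs -p /dev/mapper/cr_data
```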