Hello,

I've been running an LVM RAID 5 on my home lab for a while, and recently it's been getting awfully close to 100% full, so I decided to buy a new drive to add to it. However, growing an LVM RAID is more complicated than I thought! I found very little documentation on how to do this, and settled on following some user's notes on the Arch Wiki [0]. I should've used mdadm!...
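
For context, the procedure from the wiki boiled down to roughly the following (the device name /dev/sde here is just an example stand-in for the new drive; note that lvconvert counts data stripes only, so 3 stripes means 4 drives for raid5):

```shell
# Register the new drive as a physical volume and add it to the VG
pvcreate /dev/sde
vgextend vgraid /dev/sde

# Reshape the raid5 LV from 2 to 3 data stripes (i.e. from 3 to 4 drives);
# lvs -a -o+sync_percent shows the reshape/sync progress afterwards
lvconvert --stripes 3 vgraid/lvraid

# Once the reshape is done, grow the LV over the new space
lvextend -l +100%FREE vgraid/lvraid
```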

My RAID 5 consisted of 3x6TB drives, giving me a total of 12TB of usable space. I am now trying to grow it to 18TB (4x6TB, minus one drive's worth for parity).

I seem to have done everything in order, since all 4 drives show up as used when I run the vgdisplay command, and lvdisplay tells me that there is 16.37 TiB of usable space in the logical volume.
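
(Sanity-checking the units: three data drives' worth, 3 x 6TB = 18TB, works out to 16.37 TiB, so that figure is what I'd expect:)

```python
# A 4x6TB raid5 leaves 3 drives' worth of data capacity
usable_bytes = 3 * 6e12
print(round(usable_bytes / 2**40, 2))  # 16.37 TiB
```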

In fact, running fdisk -l on the LV confirms this as well:

Disk /dev/vgraid/lvraid: 16.37 TiB

However, the filesystem on it is still at 12TB (or a little bit less in HDD units) and I am unable to expand it.

When I run resize2fs on the logical volume, I can see that it's doing something, and I can hear the disks working, but after just a few minutes (perhaps seconds) the disks go quiet, and a few minutes later resize2fs halts with the following error:

doas resize2fs /dev/vgraid/lvraid
resize2fs 1.46.4 (18-Aug-2021)
Resizing the filesystem on /dev/vgraid/lvraid to 4395386880 (4k) blocks.
resize2fs: Input/output error while trying to resize /dev/vgraid/lvraid
Please run 'e2fsck -fy /dev/vgraid/lvraid' to fix the filesystem
after the aborted resize operation.

A few seconds after resize2fs gives the "input/output" error, I can see lines like the following appearing multiple times in dmesg:

Feb 5 12:35:50 gentoo kernel: Buffer I/O error on dev dm-8, logical block 2930769920, lost async page write
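
What strikes me, if my arithmetic is right, is where that block sits: at 4 KiB per block it lands just past the old 12TB end of the filesystem (which was 2930257920 blocks before the resize, per the e2fsck output in [1]), as if it's the writes beyond the old size that are failing:

```python
# Figures taken from the dmesg line and the e2fsck summary in [1]
failing_block = 2930769920  # logical block in the Buffer I/O error
old_fs_blocks = 2930257920  # filesystem size before the resize
block_size = 4096           # 4k blocks

print(failing_block * block_size / 1e12)  # ~12.0, i.e. right at the old 12TB mark
print(failing_block - old_fs_blocks)      # 512000 blocks past the old end
```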

At first I was worried about data corruption or a defective drive, but I ran a smartctl test on all 4 drives and they all come back healthy. I am also still able to mount the LVM volume and access all the data without any issue.

I then tried running the e2fsck command as instructed, which fixes some things [1], and then running resize2fs again, but it fails the same way every time.

My Google skills don't seem to be good enough for this one, so I am hoping someone here has an idea of what is wrong...

Thanks!

Julien

[0] https://wiki.archlinux.org/title/User:Ctag/Notes#Growing_LVM_Raid5

[1] doas e2fsck -fy /dev/vgraid/lvraid
e2fsck 1.46.4 (18-Aug-2021)
Resize inode not valid.  Recreate? yes

Pass 1: Checking inodes, blocks, and sizes
Inode 238814586 extent tree (at level 1) could be narrower.  Optimize? yes

Pass 1E: Optimizing extent trees
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences:  -(2080--2096) +(2304--2305) +(2307--2321)
Fix? yes

Free blocks count wrong for group #0 (1863, counted=1864).
Fix? yes

/dev/vgraid/lvraid: ***** FILE SYSTEM WAS MODIFIED *****
/dev/vgraid/lvraid: 199180/366284800 files (0.8% non-contiguous), 2768068728/2930257920 blocks