Gentoo Archives: gentoo-user

From: William Kenworthy <billk@×××××××××.au>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] terrible performance with btrfs on LVM2 using a WD 2TB green drive
Date: Tue, 15 Mar 2011 10:24:03
Message-Id: 1300184527.12835.14.camel@rattus
In Reply to: Re: [gentoo-user] terrible performance with btrfs on LVM2 using a WD 2TB green drive by Matthew Marlowe
On Mon, 2011-03-14 at 23:50 -0700, Matthew Marlowe wrote:
> > My problem is that LVM2 is not supported in parted which is the
> > recommended tool to deal with this.
> >
> > I suspect I only need to map the individual PE to a particular start
> > sector on each drive, not btrfs, but then there are stripe/block sizes to
> > consider as well ... WD also are recommending 1MB sector boundaries for
> > best performance - I can see a reinstall coming up :)
> >
>
> I have on my workstation:
> 2 WD 2TB Black Drives
> 5 WD 2TB RE4 Drives
>
> Some notes:
> - The black drives have horrible reliability, poor sector remapping, and
> lack certain standard drive features, which makes them unusable in RAID.
> I would not buy them again. I'm not sure how similar the green drives are.
> - Many of the recent WD drives have a tendency to power down/up frequently,
> which can reduce drive lifetime (research it and ensure it is set
> appropriately for your needs - see the example commands after these notes).
> - Due to reliability concerns, you may need to run smartd to get adequate
> pre-failure warnings.
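>
> For example (device name illustrative; idle3-tools may or may not
> support a given model):
> smartctl -A /dev/sdd | grep Load_Cycle_Count  # head park/unpark count
> idle3ctl -g /dev/sdd                          # query the WD idle timer
> # and a typical smartd.conf line for pre-failure warnings:
> /dev/sdd -a -o on -S on -s (S/../.././02|L/../../6/03) -m root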
>
> Anyhow, in my config I have:
>
> 1 RE4 Drive as Server Boot Disk
> 4 RE4 Drives in SW RAID10 (extremely good performance and reliability)
> 2 Black Drives in LVM RAID0 for disk-to-disk backups (that's about all I
> trust them with).
>
> When I set up the LVM RAID0, I used the following commands to get good
> performance:
> fdisk (remove all partitions, you don't need them for lvm)
> pvcreate --dataalignmentoffset 7s /dev/sdd
> pvcreate --dataalignmentoffset 7s /dev/sdf
> vgcreate -s 64M -M 2 vgArchive /dev/sdd /dev/sdf
> lvcreate -i 2 -l 100%FREE -I 256 -n lvArchive -r auto vgArchive
> mkfs.ext4 -c -b 4096 -E stride=64,stripe_width=128 -j -i 1048576 \
>     -L /archive /dev/vgArchive/lvArchive
>
> I may have the ext4 stride/stripe settings wrong above, as I didn't have
> my normal notes when I selected them - but the rest of the config I
> scrounged from other blogs and it seemed to make sense (the
> --dataalignmentoffset 7s seems to be the key).
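>
> (If I have the arithmetic right, those settings may actually be
> consistent: with -I 256 each stripe segment is 256KiB, so stride =
> 256KiB / 4KiB block = 64, and stripe_width = stride * 2 data disks
> = 128.)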
>
> My RAID10 drives are configured slightly differently, with one partition
> that starts on sector 2048, if I remember right, and extends to the end
> of the drive.
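>
> Something like this should give the same alignment on a new disk
> (illustrative, not the exact commands I used):
> parted -s -a optimal /dev/sdX mklabel gpt
> parted -s /dev/sdX mkpart primary 1MiB 100%   # 1MiB = sector 2048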
>
> The 4-disk SW RAID10 array gives me 255MB/s reads, 135MB/s block writes,
> and 98MB/s rewrites (old test, may need to rerun for latest changes/etc).
>
> The 2-disk LVM RAID0 gives 303MB/s reads, 190MB/s block writes, and
> 102MB/s rewrites (test ran last week).
>
> Regards,
> Matt

Thanks Matthew,
some good ideas here. I have other partitions on the disks, such as
swap and rescue, so LVM doesn't get all the space. I have steered away
from striping as I have lost an occasional disk over the years and worry
that a stripe will take out a larger block of data than a linear JBOD,
but your performance numbers look ... great!

As the stripe size is hard to change after creation, it looks like I'll
have to migrate the data and recreate from scratch to get the best out
of the hardware.

In the short term, I'll just do some shuffling, then delete and re-add
the LVM partition on the green drive to the volume group, which should
improve the performance a lot. If I am reading it right, I have to get
the disk partitioning right first, then make sure the PV is also created
at the right boundaries in LVM. Then I will see how to tune btrfs, which
I am becoming quite sold on - solid, and an online fsck is better than
reiserfs, which is just as solid but has to be taken offline to check -
not that either corrupts often.
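
Roughly what I have in mind for the green drive (untested, and the
volume group and partition names are guesses for my own setup):
pvmove /dev/sdc2                   # push its extents onto the other PVs
vgreduce vg /dev/sdc2              # drop it from the volume group
pvremove /dev/sdc2
# repartition so the LVM partition starts on a 1MiB boundary, then:
pvcreate --dataalignment 1m /dev/sdc2
vgextend vg /dev/sdc2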

BillK