Gentoo Archives: gentoo-amd64

From: thegeezer <thegeezer@×××××××××.net>
To: gentoo-amd64@l.g.o
Cc: Mark Knecht <markknecht@×××××.com>
Subject: Re: [gentoo-amd64] Is my RAID performance bad possibly due to starting sector value?
Date: Sun, 23 Jun 2013 11:31:18
Message-Id: 51C6DC7B.7090406@thegeezer.net
In Reply to: [gentoo-amd64] Is my RAID performance bad possibly due to starting sector value? by Mark Knecht
Howdy,
My own 2c on the issue is to suggest LVM.
It looks at things in a slightly different way and allows me to treat
all my disks as one large volume I can carve up.
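Getting the pool going is only a couple of commands; a minimal sketch,
assuming one big partition per disk and a volume group called vg0
(names made up):

  # turn each partition into an LVM physical volume
  pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1
  # pool them into one volume group to carve up later
  vgcreate vg0 /dev/sda1 /dev/sdb1 /dev/sdc1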
It supports multi-way mirroring, so I can choose to create a volume for
all my pictures which lives on at least 3 drives.
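On a reasonably recent lvm2 that is something like this (size and names
made up):

  # 3 copies of the data (1 original + 2 mirrors), needs 3 PVs in the VG
  lvcreate --type raid1 -m 2 -L 200G -n pictures vg0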
It supports volume striping (RAID0) so I can put swap and scratch
files there.
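Roughly like this, assuming 3 PVs to stripe over (numbers untested):

  # stripe across 3 PVs with a 64KiB stripe size
  lvcreate -i 3 -I 64 -L 8G -n swap vg0
  mkswap /dev/vg0/swap && swapon /dev/vg0/swap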
It does support other RAID levels, but I can't find where the scrub option is.
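If I remember right, newer lvm2 releases do expose scrubbing for the
raid-type LVs through lvchange; something along these lines (untested
here, LV name made up):

  # kick off a scrub of a raid LV, then see what it found
  lvchange --syncaction check vg0/pictures
  lvs -o +raid_sync_action,raid_mismatch_count vg0/pictures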
It supports volume concatenation so I can just keep growing my MythTV
recordings volume by adding another disk.
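Growing is the usual three-step dance; a sketch assuming an ext4
recordings volume (names made up):

  # add the new disk to the pool, grow the LV, then grow the filesystem
  vgextend vg0 /dev/sdd1
  lvextend -L +500G /dev/vg0/mythtv
  resize2fs /dev/vg0/mythtv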
It supports encrypted volumes so I can put all my guarded stuff in there.
It supports (with some magic) nested volumes, so I can have an encrypted
volume sitting inside a mirrored volume and my secrets are protected.
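The "magic" is really just layering LUKS on top of a mirrored LV; a
sketch with made-up names:

  # mirrored LV first, then LUKS inside it, then a filesystem inside that
  lvcreate --type raid1 -m 1 -L 20G -n secrets vg0
  cryptsetup luksFormat /dev/vg0/secrets
  cryptsetup luksOpen /dev/vg0/secrets secrets
  mkfs.ext4 /dev/mapper/secrets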
I can partition my drives into 3 parts, so that I can create fast,
medium and slow volume groups based on where on the disk each partition
sits (start of the disk ~150MB/sec, end of the disk ~60MB/sec; numbers
sort of remembered, sort of made up).
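In other words, with the same three partitions on every disk, something
like this (made-up devices):

  # make each partition a PV first (same layout on every disk)
  pvcreate /dev/sd[abc][123]
  # then group the outer/fast, middle and inner/slow partitions separately
  vgcreate vg_fast   /dev/sda1 /dev/sdb1 /dev/sdc1
  vgcreate vg_medium /dev/sda2 /dev/sdb2 /dev/sdc2
  vgcreate vg_slow   /dev/sda3 /dev/sdb3 /dev/sdc3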
I can have a bunch of disks for long-term storage and hdparm can spin
them down whenever they're idle.
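For example (the timeout value is just a guess at what's sensible):

  # spin down after 10 minutes idle (-S counts in 5-second units), or right now with -y
  hdparm -S 120 /dev/sdd
  hdparm -y /dev/sdd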
Live movement, even of a root volume, also means that I can keep moving
data onto the storage drives, or decide to use a fast disk as a storage
disk and have that spin down too.
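That movement is just pvmove, and it happens online; a sketch with
made-up devices:

  # move everything off sda2 onto sdd1 while the volumes stay mounted
  pvmove /dev/sda2 /dev/sdd1
  # or move just one LV's extents
  pvmove -n root /dev/sda2 /dev/sdd1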
I think the crucial aspect is to also consider what you wish to put on
the drives.
If it is just pr0n, do you really care if it gets lost?
If it is just scratch areas that need to be fast, ditto.
Where the different RAID levels are good is the use of parity, so you
don't lose half of your potential storage size as you would with a mirror.
Bit rot is real: all it takes is a single misaligned charged particle
from that nuclear furnace in the sky to knock a single bit out of
magnetic alignment, so it will require regular scrubbing, maybe from cron.
https://wiki.archlinux.org/index.php/Software_RAID_and_LVM#Data_scrubbing
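For md arrays the scrub is started through sysfs, so a tiny cron job
covers it; a sketch assuming the array is md0:

  #!/bin/sh
  # e.g. /etc/cron.weekly/raid-scrub: start a check pass of md0
  # progress shows up in /proc/mdstat
  echo check > /sys/block/md0/md/sync_action

Afterwards /sys/block/md0/md/mismatch_cnt tells you whether it found
anything.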
Specifically on the bandwidth issue, I'd suggest:
1. Take all the drives out of the RAID if you can and run a benchmark
against them individually (first sketch after this list); I like the
benchmark tool in palimpsest, but that's me.
2. Concurrently run dd if=/dev/zero of=/dev/sdX on all the drives and
see how it compares to the individual scores (second sketch below);
this will show you the mainboard/chipset effect.
3. You might find
https://raid.wiki.kernel.org/index.php/RAID_setup#Calculation a good
starting point for calculating stride and stripe-width (third sketch
below), and http://forums.gentoo.org/viewtopic-t-942794-start-0.html
shows the benefit of adjusting the numbers.
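First sketch, the one-drive-at-a-time read test (non-destructive,
device names made up):

  # quick sequential read number
  hdparm -t /dev/sda
  # or a longer read with the page cache bypassed
  dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct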
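Second sketch, the concurrent write test; note this scribbles zeros
over the drives, so only once they really are out of the array:

  # write to all five drives at once and compare the per-drive numbers
  for d in sda sdb sdc sdd sde ; do
      dd if=/dev/zero of=/dev/$d bs=1M count=4096 oflag=direct &
  done
  wait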
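Third sketch, the stride/stripe-width sums for your case, if I've read
your numbers right (5-drive RAID6 = 3 data disks, 16KiB chunk, 4KiB
filesystem blocks):

  # stride       = chunk / block size   = 16KiB / 4KiB = 4
  # stripe-width = stride * data disks  = 4 * 3        = 12
  mkfs.ext4 -b 4096 -E stride=4,stripe-width=12 /dev/md0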
hope this helps!

On 06/20/2013 08:10 PM, Mark Knecht wrote:
> Hi,
> Does anyone know of info on how the starting sector number might
> impact RAID performance under Gentoo? The drives are WD-500G RE3
> drives shown here:
>
> http://www.amazon.com/Western-Digital-WD5002ABYS-3-5-inch-Enterprise/dp/B001EMZPD0/ref=cm_cr_pr_product_top
>
> These are NOT 4k sector sized drives.
>
> Specifically I'm running a 5-drive RAID6 with about 1.45TB of storage. My
> benchmarking seems abysmal at around 40MB/S using dd copying large
> files. It's higher, around 80MB/S, if the file being transferred is
> coming from an SSD, but even 80MB/S seems slow to me. I see a LOT of
> wait time in top. And my 'large file' copies might not be large enough
> as the machine has 24GB of DRAM and I've only been copying 21GB so
> it's possible some of that is cached.
>
> Then I looked again at how I partitioned the drives originally and
> see the starting sector of partition 3 as 8594775. I started wondering if
> something like 4K block sizes at the file system level might be
> getting munged across 16k chunk sizes in the RAID. Maybe the blocks
> are being torn apart in bad ways for performance? That led me down a
> bunch of rabbit holes and I haven't found any light yet.
>
> Looking for some thoughtful ideas from those more experienced in this area.
>
> Cheers,
> Mark
>