On Thu, 2013-06-20 at 12:10 -0700, Mark Knecht wrote:
> Hi,
> Does anyone know of info on how the starting sector number might
> impact RAID performance under Gentoo? The drives are WD-500G RE3
> drives shown here:
>
> http://www.amazon.com/Western-Digital-WD5002ABYS-3-5-inch-Enterprise/dp/B001EMZPD0/ref=cm_cr_pr_product_top
>
> These are NOT 4k-sector drives.
>
> Specifically, I'm running a 5-drive RAID6 for about 1.45TB of storage.
> My benchmarking seems abysmal at around 40MB/s using dd to copy large
> files. It's higher, around 80MB/s, if the file being transferred is
> coming from an SSD, but even 80MB/s seems slow to me. I see a LOT of
> wait time in top. And my 'large file' copies might not be large
> enough, as the machine has 24GB of DRAM and I've only been copying
> 21GB, so it's possible some of that is cached.
>
> Then I looked again at how I partitioned the drives originally and
> saw the starting sector of partition 3 as 8594775. I started wondering
> if something like 4K block sizes at the file system level might be
> getting munged across the 16k chunk size of the RAID. Maybe the
> blocks are being torn apart in ways that are bad for performance?
> That led me down a bunch of rabbit holes and I haven't found any
> light yet.
>
> Looking for some thoughtful ideas from those more experienced in this area.
>
> Cheers,
> Mark
>
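On the alignment question: the arithmetic is at least easy to check. A start of 8594775 512-byte sectors is an odd sector count, so the partition start is not 4KiB-aligned, and it isn't a multiple of 32 sectors (16KiB) either. Two caveats, though: these drives have 512-byte physical sectors, so an odd start shouldn't cost anything at the platter level, and md lays its chunks out from its own data offset, so I'm honestly not sure the partition start is what decides whether filesystem blocks get torn across chunks. A quick sketch of the check, plain arithmetic with the numbers taken from your message:

```python
# Alignment check for a partition starting at sector 8594775
# (512-byte logical sectors, per the message above).
SECTOR = 512
start_sector = 8594775
start_bytes = start_sector * SECTOR

for name, size in [("4KiB filesystem block", 4096), ("16KiB RAID chunk", 16384)]:
    off = start_bytes % size
    print(f"{name}: start offset {off} bytes -> "
          f"{'aligned' if off == 0 else 'NOT aligned'}")
```

Also, with 24GB of RAM a 21GB copy really can be mostly page cache; adding conv=fdatasync to the dd command, or doing "echo 3 > /proc/sys/vm/drop_caches" between runs, should give more honest numbers.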

Not necessarily the kind of answer you are looking for, but a year or so
back I converted my NAS from hardware RAID1 to Linux software RAID1, and
then to RAID1 on ZFS. Before the conversion to ZFS I had issues with the
NAS being unable to keep up with requests. Since then I have been able
to hit the NAS relatively hard with no visible effect. To give an idea,
a normal load involves streaming an HD movie to the TV, streaming music
to a second system, and serving as shared storage for four computers,
two of which hit the shared drive almost constantly (it holds the
distfile directory for all the systems and acts as the local rsync
mirror), plus, once a month, transferring data to removable storage
devices. All of this goes over cat6 Ethernet and occasionally USB2.

I'm unsure how I would go about measuring the throughput, mainly because
I never cared in the past as long as files transferred at a reasonable
pace and the video/audio didn't stutter. By no means is my NAS a
high-end system. Its stats are:

AMD64 X2 4200
ASUS A8V MoBo (I think)
4GB RAM
2 x Silicon Image Sil 3114 SATA RAID cards (4-port PCI cards)
3 x 1.5TB Seagate drives (on the RAID cards)
4 x 2TB Western Digital drives (on the RAID cards)
2 x antique 80GB Western Digital drives (mirrored on the motherboard for the OS)
Marvell GigE network card (I have a second card to add once I figure out
how to automatically load-balance across two cards)
Case with 2 x 120mm fans on top, 3 x 120mm fans on the front, and 1 x
240mm fan on the side

Total storage available is 6.3TB, of which 3.4TB is used. An image of
the pool is created daily via cron jobs, and the images are overwritten
every 3 days (image of Day 1, Day 2, Day 3, then Day 4 overwrites Day
1). The pool started with 5 x 750GB drives and has been grown slowly as
I find deals on better drives.
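For what it's worth, that kind of three-slot rotation fits in a single cron entry. This is only a sketch of one way to do it with ZFS snapshots, not my literal setup: the pool name "tank" is a placeholder, and remember that % has to be escaped as \% inside a crontab.

```
# Hypothetical crontab entry: three rotating daily snapshots of pool "tank".
# Day-of-epoch mod 3 picks which slot gets recycled; the destroy fails
# harmlessly the first time each slot is used.
0 3 * * * slot=$(( $(date +\%s) / 86400 \% 3 )); zfs destroy -r tank@day$slot 2>/dev/null; zfs snapshot -r tank@day$slot
```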

The main advantage of using ZFS on Linux is the ease of growing your
pools. As long as you know the ID of the drive (preferably the hardware
ID, not the delegated one), it's so simple even I can manage it. Since
I'm nowhere near the technical level of most folk here, anyone can do
it. For what it's worth (very little, I know), I think ZFS has too many
advantages over Linux software RAID for it to be a real contest.
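The growing itself is just a couple of zpool commands. A sketch, with the pool name "tank" and the device names as placeholders (the /dev/disk/by-id/ paths are the "hardware IDs" I mean, since they stay stable across reboots):

```
# Placeholder pool/device names -- adjust for the real layout.
ls -l /dev/disk/by-id/        # find the stable hardware IDs of the drives
zpool status tank             # check the current vdev layout first

# Grow the pool by adding another mirrored pair:
zpool add tank mirror /dev/disk/by-id/ata-DRIVE_A /dev/disk/by-id/ata-DRIVE_B

# Or swap a small drive for a bigger one in place:
zpool replace tank /dev/disk/by-id/ata-OLD_DRIVE /dev/disk/by-id/ata-NEW_DRIVE
```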

YMMV

B. Vance