On Tue, Nov 29, 2011 at 9:10 AM, Mark Knecht <markknecht@×××××.com> wrote:
> On Mon, Nov 28, 2011 at 8:10 PM, Michael Mol <mikemol@×××××.com> wrote:
>
> Hi Michael,
> Welcome to the world of whatever sort of multi-disk environment
> you choose. It's a HUGE topic and a conversation I look forward to
> having as you dig through it.
>
> My main compute system here at home has six 500GB WD RE3 drives.
> Five are in use with one as a cold spare. I'm using md. It's pretty
> mature and you have good access to the main developer through the
> email list. I don't know much about dm. If this is your first time
> putting RAID on a box (it was for me) then I think md is a good
> choice. On the other hand, you're more system software savvy than I am,
> so go with what you think is best for you.
|
Last time I set up RAID was three or four years ago. Two volumes: one
RAID5 of three 1.5TB drives (Seagate econo drives, but they worked
well enough for me), and one RAID0 of three 1TB drives (WD Caviar Black).

The RAID0 was for some video-munging scratch space. The RAID5 I
mounted as /home. Those volumes lasted a couple of years before I
rebuilt them all as two LVM volume groups, using the same drive sets.
|
>
> 1) First lesson - not all hard drives make good RAID hard drives. I
> started with six 1TB WD Green drives and found they made _terrible_
> RAID units so I took them out and bought _real_ RAID drives. They were
> only half as large for the same price but they have worked perfectly
> for nearly 2 years.
|
What makes a good RAID unit, and what makes a terrible RAID unit?
Unless we're talking rapid failure, I'd think anything striped would
be faster than the bare drive alone.
|
>
> 2) Second lesson - prepare to build a few RAID configurations and
> TEST, TEST, TEST __BEFORE__ (BEFORE!!!) you make _ANY_ decision about
> what sort of RAID you really want. There are a LOT of parameter
> choices that affect performance, reliability, and capacity, and I
> think to some extent your ability to change RAID types later on. To
> name a few: the obvious RAID type (0,1,2,3,4,5,6,10, etc.) but also
> chunk size, metadata type, physical layout for certain RAID types,
> etc. I strongly suggest building 5-10 different configurations and
> testing them with bonnie++ to gauge speed. I didn't do enough of this
> before I built this system and I've been dealing with the effects
> ever since.
|
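Mark's build-several-and-benchmark advice can be sketched roughly like
this. The device names and mount point are hypothetical, and these
commands destroy data on the listed partitions, so treat it as an
illustration rather than a recipe:

```shell
# Sketch: create one candidate layout, benchmark it with bonnie++,
# tear it down, and repeat with a different chunk size.
# /dev/sd[bcd]1 and /mnt/test are example names.
for chunk in 64 128 256 512; do
    mdadm --create /dev/md0 --level=5 --chunk=$chunk \
          --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mkfs.ext4 /dev/md0 && mount /dev/md0 /mnt/test
    # -d: directory to test in; -u: bonnie++ refuses to run as root
    bonnie++ -d /mnt/test -u nobody > /root/bonnie-chunk-$chunk.txt
    umount /mnt/test
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1
done
```

The same loop works for comparing RAID levels or metadata versions;
only the `mdadm --create` arguments change between runs.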
I'm familiar with the different RAID types and how they operate. I'm
familiar with some of the impacts of chunk size, and what it can mean
for caching and sector alignment (for SSDs and 2TB+ drives, at
least).
|
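As a concrete example of how chunk size feeds into filesystem layout,
ext4 can be told the array geometry via its stride and stripe-width
options; a minimal sketch of the arithmetic, assuming a 512 KiB chunk,
4 KiB filesystem blocks, and a 4-disk RAID5 (3 data disks):

```shell
# Hypothetical geometry: 512 KiB chunk, 4 KiB fs blocks, 4-disk RAID5
# (one disk's worth of parity per stripe, so 3 data disks).
chunk_kib=512
block_kib=4
data_disks=3
stride=$((chunk_kib / block_kib))        # fs blocks per chunk: 128
stripe_width=$((stride * data_disks))    # fs blocks per full stripe: 384
echo "mkfs.ext4 -E stride=$stride,stripe-width=$stripe_width /dev/md0"
```

Matching these to the md geometry lets the filesystem try to issue
full-stripe writes, which avoids read-modify-write parity updates.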
The purpose of this array (or set of arrays) is volume aggregation
with a touch of redundancy. Speed is a tertiary concern, and if it
becomes a real issue, I'll adapt; I've got 730GB left free on the
system's primary disk which I can throw into the mix any which way
(use it raw as I currently am, or stripe a logical volume into it...).
|
> 3) Third lesson - think deeply about what happens when 1 drive goes
> bad and you are in the process of fixing the system. Do you have a
> spare drive ready?
|
I don't plan to, but then I don't plan on storing vital or
operations-dependent data in the volume without backup. These are
going to be volumes of convenience.
|
> Is it in the box? Hot or cold? What happens if a
> second drive in the system fails while you're rebuilding the RAID?
|
Drop the failed drives, rebuild with the remaining drives, copy back a backup.
|
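For reference, the replace-and-rebuild dance with md looks roughly
like this (hypothetical device names; adapt to whichever member
actually failed):

```shell
# Mark the failed member, remove it, then add the replacement;
# md resyncs onto the new disk automatically.
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1
# ...physically swap the disk and partition it to match...
mdadm /dev/md0 --add /dev/sdc1
# Watch the rebuild progress:
cat /proc/mdstat
```

The rebuild window is exactly when a second-drive failure hurts, which
is the scenario Mark raises below.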
> It's from the same manufacturing lot so it probably suffers from the
> same weaknesses. My decision for the most part was (for data or system
> drives) 3-drive RAID1 or 5-drive RAID6. For backup I went with 5-drive
> RAID5. It all makes me feel good, but it's too complicated.
>
> 4) Lastly - as they say all the time on the mdadm list: RAID is not a backup.
|
Absolutely. I've had discussions of RAID and disk storage many times
with some rather adept and experienced friends, but dmraid and btrfs are
relatively new on the block, and the gentoo-user list is a new,
mostly-untapped resource of expertise. I wanted to pick up any
additional knowledge or references I hadn't heard before. :)
|
> Personally I like your idea of one big RAID with lvm on top but I
> haven't done it myself. I think it's what I would look at today if I
> was starting from scratch, but I'm not sure. It would take some study.
|
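The one-big-RAID-with-LVM-on-top idea is straightforward with the
standard tools; a minimal sketch, with made-up devices, sizes, and
volume names:

```shell
# One md array as the single LVM physical volume, carved into LVs.
mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[b-f]1
pvcreate /dev/md0                 # make the array an LVM PV
vgcreate vg0 /dev/md0             # one volume group on top of it
lvcreate -L 200G -n home vg0      # carve out logical volumes
lvcreate -L 100G -n scratch vg0
mkfs.ext4 /dev/vg0/home
```

The appeal is that resizing, adding, or snapshotting volumes later is
an LVM operation and never touches the underlying RAID geometry.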
It's probably the simplest way forward. I notice there are some
network-syncing block devices in the kernel (acting as RAID1 over a
network) I'd like to play with, but I haven't done anything with OCFS2
(or whatever other multi-operator filesystems are in the 3.0.6 kernel)
before.
|
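DRBD is the usual in-kernel example of RAID1 over a network; a rough
sketch of a two-node resource, with hostnames, addresses, and devices
entirely made up:

```shell
# Hypothetical two-node DRBD resource; the same file goes on both nodes.
cat > /etc/drbd.d/r0.res <<'EOF'
resource r0 {
  protocol  C;             # synchronous replication
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;
  on alpha { address 10.0.0.1:7789; }
  on beta  { address 10.0.0.2:7789; }
}
EOF
drbdadm create-md r0   # initialize DRBD metadata on the backing disk
drbdadm up r0          # bring the resource up (run on each node)
```

With only one node writing at a time an ordinary filesystem suffices;
a cluster filesystem like OCFS2 is what allows both nodes to mount it
simultaneously.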
>
> Hope this helps even a little,
> Mark
|
Certainly does. Also, your email has a permanent URL through at least
a couple of mailing-list archivers, so it'll be a good thing to link to
in the future. :)
|
|
-- |
:wq |