hi - i'm about to set up my 1st RAID, and
i'd appreciate it if any of you volunteers
some time to share your valuable experience
on this subject.

my scenario
-----------

0. i don't boot from the RAID.

1. read is as important as write. i don't
   have any application-specific scenario
   that makes me favor one over the other.
   so a RAID that speeds up reads (or
   writes) while significantly harming
   writes (or reads) is not welcome.

2. replacing failed disks may take a week
   or two. so i guess that several disks
   may fail one after another within those
   1-2 weeks (especially if they were
   bought at about the same time).

3. i would like to be able to grow the
   RAID's total space (as needed), and to
   increase its reliability (i.e.
   duplicates/parities) as needed.

   e.g. suppose that i've got a 2TB RAID
   that tolerates 1 disk failure. i'd
   like, at some point, to have the
   following options:

   * only increase the total space (e.g.
     make it 3TB), without increasing the
     failure tolerance (so a 2-disk
     failure would result in data loss).

   * or, only increase the failure
     tolerance (e.g. such that a 2-disk
     failure would not lead to data loss),
     without increasing the total space
     (i.e. space remains 2TB).

   * or, increase both the space and the
     failure tolerance at the same time.

4. only interested in software RAID.

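for point (3), the knobs behind those options in a RAID10 are essentially the device count and the copies count. here's a rough sketch of the trade-off (my own illustration with a made-up helper, assuming equal-size disks; an n-copies layout guarantees surviving copies-1 failures):

```python
def raid10_shape(disks, copies, disk_tb):
    # rough capacity/tolerance model of a raid10 array built
    # from equal-size disks (illustration only, not an mdadm
    # interface)
    usable_tb = disks * disk_tb / copies
    guaranteed_failures = copies - 1   # worst-case guarantee
    return usable_tb, guaranteed_failures

print(raid10_shape(4, 2, 1.0))  # (2.0, 1): 2TB, any 1 failure ok
print(raid10_shape(6, 2, 1.0))  # (3.0, 1): more space, same tolerance
print(raid10_shape(6, 3, 1.0))  # (2.0, 2): same space, more tolerance
```
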
my thought
----------

i think these are not suitable:

* RAID 0: fails to satisfy point (3).

* RAID 1: fails to satisfy points (1) and (3).

* RAIDs 4 to 6: fail to satisfy point (3)
  since they are stuck with a fixed
  tolerance towards failing disks (i.e.
  RAIDs 4 and 5 tolerate only 1 disk
  failure, and RAID 6 tolerates only 2).


this leaves me with RAID 10. e.g.
--layout=n2 (the "near" layout) keeps two
copies of each chunk, so it is guaranteed
to survive one disk failure, --layout=n3
keeps three copies (surviving two), etc.
or is it? (i'm not sure).

my questions
------------

Q1: which RAID setup would you recommend?

Q2: how would the total number of disks in
    a RAID10 setup affect the tolerance
    towards failing disks?

    if the total number of disks is even,
    then it is easy to see how this is
    equivalent to the classical RAID 1+0
    as shown in md(4), where any disk
    failure is tolerated as long as each
    RAID1 group has at most 1 failed disk.

    so, with 4 disks, we get the following
    combinations of disk failures that, if
    they happen, won't lose any data:

        RAID0
    ------^------
    RAID1 RAID1
    --^-- --^--
    F . . .   < cases with
    . F . .   < single disk
    . . F .   < failures
    . . . F   <

    F . F .   < cases with
    F . . F   < two disk
    . F F .   < failures
    . F . F   <

    this gives us 4+4=8 possible disk
    failure scenarios that we can survive
    without any data loss.

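the table can be double-checked with a short script — my own sketch, modelling a 4-disk RAID1+0 as two mirrored pairs and counting the failure sets that keep every pair alive:

```python
from itertools import combinations

# 4 disks; disks 0+1 form one RAID1 pair, disks 2+3 the other
PAIRS = [(0, 1), (2, 3)]

def survives(failed):
    # data survives as long as no mirrored pair lost both disks
    return all(not (a in failed and b in failed) for a, b in PAIRS)

survivable = [set(f)
              for n in (1, 2, 3, 4)
              for f in combinations(range(4), n)
              if survives(set(f))]

by_size = {}
for f in survivable:
    by_size[len(f)] = by_size.get(len(f), 0) + 1

print(by_size)  # prints {1: 4, 2: 4}
```

it also shows that no 3- or 4-disk failure is survivable here, since any 3 disks always contain a complete mirrored pair.
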
    but when the number of disks is odd,
    written bytes and their duplicates
    start to wrap around, and it is
    difficult for me to intuitively see
    how this would affect the total number
    of scenarios where i survive a disk
    failure.

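for the odd case it may be easier to just simulate the placement. a sketch, assuming my reading of md(4)'s "near" layout is right (a chunk's copies go on consecutive devices, wrapping around): with 5 disks and 2 copies, every pair of adjacent disks (mod 5) then shares some chunk, so exactly those 5 pairs are fatal:

```python
from itertools import combinations

def copy_sets(disks, copies, chunks=30):
    # near layout (assumption based on md(4)): the copies of
    # chunk c sit at linear positions c*copies .. c*copies +
    # copies-1, taken mod the number of disks
    return [{(c * copies + j) % disks for j in range(copies)}
            for c in range(chunks)]

def data_survives(disks, copies, failed):
    # data is lost iff some chunk has all copies on failed disks
    return all(not s <= failed for s in copy_sets(disks, copies))

ok = [f for f in combinations(range(5), 2)
      if data_survives(5, 2, set(f))]
print(len(ok), "of 10 two-disk failures survivable")  # 5 of 10
```
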
Q3: what are the future growth/shrinkage
    options for a RAID10 setup? e.g. with
    respect to these:

    1. read/write speed.
    2. tolerance guarantee towards failing
       disks.
    3. total available space.

rgrds,
cm.