rane        05/09/20 18:35:24

  Added:                xml/htdocs/doc/en/articles software-raid-p1.xml
                        software-raid-p2.xml
  Log:
  two new articles from #104198

Revision  Changes    Path
1.1                  xml/htdocs/doc/en/articles/software-raid-p1.xml

file : http://www.gentoo.org/cgi-bin/viewcvs.cgi/xml/htdocs/doc/en/articles/software-raid-p1.xml?rev=1.1&content-type=text/x-cvsweb-markup&cvsroot=gentoo
plain: http://www.gentoo.org/cgi-bin/viewcvs.cgi/xml/htdocs/doc/en/articles/software-raid-p1.xml?rev=1.1&content-type=text/plain&cvsroot=gentoo

Index: software-raid-p1.xml
===================================================================
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE guide SYSTEM "/dtd/guide.dtd">
<!-- $Header: /var/cvsroot/gentoo/xml/htdocs/doc/en/articles/software-raid-p1.xml,v 1.1 2005/09/20 18:35:24 rane Exp $ -->

<guide link="/doc/en/articles/software-raid-p1.xml">
<title>Software RAID in the new Linux 2.4 kernel, Part 1</title>

<author title="Author">
<mail link="drobbins@g.o">Daniel Robbins</mail>
</author>
<!-- xmlified by: Joshua Saddler (jackdark@×××××.com) -->

<abstract>
In his two-part series on Linux 2.4 Software RAID, Daniel Robbins
introduces the new technology that's used to increase disk performance
and reliability by distributing data over multiple disks. This first
installment covers Software RAID setup (kernel and tools installation)
and shows you how to create linear and RAID-0 volumes.
</abstract>

<!-- The original version of this article was first published on IBM
developerWorks, and is property of Westtech Information Services. This
document is an updated version of the original article, and contains
various improvements made by the Gentoo Linux Documentation team -->

<version>1.0</version>
<date>2005-08-29</date>

<chapter>
<title>Installation and a general introduction</title>
<section>
<title>The wonders of RAID</title>
<body>

<note>
The original version of this article was first published on IBM
developerWorks, and is property of Westtech Information Services. This
document is an updated version of the original article, and contains
various improvements made by the Gentoo Linux Documentation team.
</note>

<p>
The 2.4 kernel has a number of nifty features and additions. One of
these is the inclusion of a modern Software RAID implementation -- yay!
Software RAID allows you to dramatically increase Linux disk IO
performance and reliability without buying expensive hardware RAID
controllers or enclosures. And because it's implemented in software,
Linux's RAID implementation is flexible, fast... and fun!
</p>

<p>
The concept behind Software RAID is simple -- it allows you to combine
two or more block devices (usually disk partitions) into a single RAID
device. So let's say you have three empty partitions:
<path>hda3</path>, <path>hdb3</path>, and <path>hdc3</path>. Using
Software RAID, you can combine these partitions and address them as a
single RAID device, <path>/dev/md0</path>. <path>md0</path> can then
be formatted to contain a filesystem and used like any other
partition. There are also a number of different ways to configure a
RAID volume -- some maximize performance, others maximize
availability, while others provide a mixture of both.
</p>

<p>
The two simplest forms of "RAID" are linear mode and RAID-0. Neither
one is technically a form of RAID at all, since RAID stands for
"redundant array of inexpensive disks", and neither RAID-0 nor linear
mode provides any kind of data redundancy. However, both modes --
especially RAID-0 -- are very useful. After giving you a quick
overview of these two forms of "AID", I'll step you through the
process of getting Software RAID set up on your system.
</p>

</body>
</section>
</chapter>

<chapter>
<title>Introduction to linear mode</title>
<section>
<body>

<p>
Linear mode is one of the simplest methods of combining two or more
block devices into a RAID volume -- the method of simple
concatenation. If you have three partitions, <path>hda3</path>,
<path>hdb3</path>, and <path>hdc3</path>, each about 2GB in size,
combining them will create a 6GB linear volume. The first third of the
linear volume will reside on <path>hda3</path>, the middle third on
<path>hdb3</path>, and the last third on <path>hdc3</path>.
</p>

<p>
To configure a linear volume, you'll need at least two partitions
that you'd like to join together. They can be different sizes, and
they can even all reside on the same physical disk without
negatively affecting performance.
</p>
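
<p>
With the 2.4-era raidtools, a linear volume is described in
<path>/etc/raidtab</path> and then initialized with <c>mkraid</c>.
The following is only a minimal sketch -- the device names are
illustrative examples, not a prescription:
</p>

```conf
# /etc/raidtab -- illustrative sketch of a two-partition linear volume
raiddev /dev/md0
        raid-level            linear
        nr-raid-disks         2
        chunk-size            32      # not used by linear mode, but the tools expect it
        persistent-superblock 1
        device                /dev/hdb1
        raid-disk             0
        device                /dev/hdb3
        raid-disk             1
```

Once the file is in place, running <c>mkraid /dev/md0</c> would
assemble the volume, after which it can be formatted (for example with
<c>mke2fs /dev/md0</c>) and mounted like any ordinary partition.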

</body>
</section>
<section>
<title>Linear applications</title>
<body>

<p>
Linear mode is the best way to combine two or more partitions on the
same disk into a single volume. While doing this with any other RAID
technique would cause a dramatic loss of performance, linear mode
avoids this problem because it doesn't write to its constituent
partitions in parallel (as all the other RAID modes do). But for the
same reason, linear mode's performance doesn't scale the way RAID-0,
RAID-4, RAID-5, and to some extent RAID-1 do.
</p>

<p>
In general, linear mode doesn't provide any kind of performance
improvement over traditional non-RAID partitions. In fact, if you
spread your linear volume over multiple disks, your volume is more
likely to become unavailable due to a random hard drive failure: the
probability of failure of a linear volume is roughly equal to the sum
of the probabilities of failure of its constituent physical disks and
controllers. If one physical disk dies, the linear volume is
generally unrecoverable. Linear mode does not offer any additional
redundancy over using a single disk.
</p>

<p>
But linear mode is a great way to avoid repartitioning a single disk.
For example, say your second IDE drive has two unused partitions,
<path>hdb1</path> and <path>hdb3</path>, and you're unable to
repartition the drive due to critical data living on
<path>hdb2</path>. You can still combine <path>hdb1</path> and
<path>hdb3</path> into a single, cohesive whole using linear mode.
</p>

<p>
Linear mode is also a good way to combine partitions of different
sizes on different disks when you just need a single big partition
(and don't really need to increase performance). For any other job,
though, there are better RAID technologies you can use.
</p>

</body>
</section>
</chapter>

<chapter>
<title>Introduction to RAID-0 mode</title>
<section>
<body>

<p>
RAID-0 is another one of those "RAID" modes that doesn't have any
"R" (redundancy) at all. Nevertheless, RAID-0 is immensely useful.
This is primarily because it offers the highest performance
potential of any form of RAID.
</p>

<p>
To set up a RAID-0 volume you'll need two or more equally (or
almost equally) sized partitions. The RAID-0 code will evenly
distribute writes (and thus reads) between all constituent
partitions. By parallelizing reads and writes across all
constituent devices, RAID-0 multiplies IO performance. Ignoring
the complexities of controller and bus bandwidth, you can expect
a RAID-0 volume composed of two partitions on two separate
identical disks to offer nearly double the performance of a
traditional partition. Crank your RAID-0 volume up to three disks,
and performance will nearly triple. This is why a RAID-0 array of
IDE disks can outperform the fastest SCSI or FC-AL drive on the
market. For truly blistering performance, you can set up a bunch
of SCSI or FC-AL drives in a RAID-0 array. That's the beauty of
RAID-0.
</p>

<p>
To create a RAID-0 volume, you'll need two or more equally sized
partitions located on separate disks. The capacity of the volume
will be equal to the combined capacity of the constituent
partitions. As with linear mode, you can combine block devices
from various sources (such as IDE and SCSI drives) into a single
volume with no problems.
</p>
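
<p>
In raidtools terms, a two-disk RAID-0 array might be sketched out
like this -- the device names and the 32k chunk size here are
illustrative assumptions, not requirements:
</p>

```conf
# /etc/raidtab -- illustrative sketch of a two-disk RAID-0 array
raiddev /dev/md0
        raid-level            0
        nr-raid-disks         2
        chunk-size            32      # stripe width in KB; worth tuning for your workload
        persistent-superblock 1
        device                /dev/hda3
        raid-disk             0
        device                /dev/hdc3
        raid-disk             1
```

As with linear mode, <c>mkraid /dev/md0</c> would then assemble the
array, and <c>cat /proc/mdstat</c> is the usual way to confirm that
it came up correctly. The chunk size sets how much data lands on one
disk before writes move to the next.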

<p>
If you're creating a RAID-0 volume using IDE disks, you should try
to use UltraDMA-compliant disks and controllers for maximum
reliability. And you should use only one drive per IDE channel to
avoid sluggish performance -- a slave device, especially if it's
also part of the RAID-0 array, will slow things down so much as to
nearly eliminate any RAID-0 performance benefit. You may also need
to add an off-board IDE controller so that you have the extra IDE
channels you require.
</p>

<p>
If you're creating a RAID-0 volume out of SCSI devices, be aware

1.1                  xml/htdocs/doc/en/articles/software-raid-p2.xml

file : http://www.gentoo.org/cgi-bin/viewcvs.cgi/xml/htdocs/doc/en/articles/software-raid-p2.xml?rev=1.1&content-type=text/x-cvsweb-markup&cvsroot=gentoo
plain: http://www.gentoo.org/cgi-bin/viewcvs.cgi/xml/htdocs/doc/en/articles/software-raid-p2.xml?rev=1.1&content-type=text/plain&cvsroot=gentoo

Index: software-raid-p2.xml
===================================================================
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE guide SYSTEM "/dtd/guide.dtd">
<!-- $Header: /var/cvsroot/gentoo/xml/htdocs/doc/en/articles/software-raid-p2.xml,v 1.1 2005/09/20 18:35:24 rane Exp $ -->

<guide link="/doc/en/articles/software-raid-p2.xml">
<title>Software RAID in the new Linux 2.4 kernel, Part 2</title>

<author title="Author">
<mail link="drobbins@g.o">Daniel Robbins</mail>
</author>
<!-- xmlified by: Joshua Saddler (jackdark@×××××.com) -->

<abstract>
In this two-part series, Daniel Robbins introduces you to Linux 2.4
Software RAID, a technology used to increase disk performance and
reliability by distributing data over multiple disks. In this article,
Daniel explains what software RAID-1, 4, and 5 can and cannot do for
you and how you should approach the implementation of these RAID
levels in a production environment. In the second half of the article,
Daniel walks you through the simulation of a RAID-1 failed-drive
replacement.
</abstract>

<!-- The original version of this article was first published on IBM
developerWorks, and is property of Westtech Information Services. This
document is an updated version of the original article, and contains
various improvements made by the Gentoo Linux Documentation team -->

<version>1.0</version>
<date>2005-08-30</date>

<chapter>
<title>Setting up RAID-1 in a production environment</title>
<section>
<title>Real-world RAID</title>
<body>

<note>
The original version of this article was first published on IBM
developerWorks, and is property of Westtech Information Services. This
document is an updated version of the original article, and contains
various improvements made by the Gentoo Linux Documentation team.
</note>

<p>
In my <uri link="/doc/en/articles/software-raid-p1.xml">previous
article</uri>, I introduced you to Linux 2.4's software RAID
functionality, showing you how to set up linear, RAID-0, and RAID-1
volumes. In this article, we look at what you need to know in order to
use RAID-1 to increase availability in a production environment. This
requires a lot more understanding and knowledge than just setting up
RAID-1 on a test server or at home -- specifically, you'll need to
know exactly what RAID-1 will protect you against, and how to keep
your RAID volume up and running in case of a disk failure. We'll cover
these topics, starting with an overview of what RAID-1, 4, and 5 can
and can't do for you, and ending with a complete test simulation of a
failed RAID-1 drive replacement -- something that you should actually
do (with this article as your guide) if at all possible. After going
through the simulation, you'll have all the experience you need to
handle a RAID-1 failure in a real-world environment.
</p>

</body>
</section>
<section>
<title>What RAID doesn't do</title>
<body>

<p>
The fault-tolerant features of RAID are designed to protect you from
the negative impacts of a spontaneous, complete drive failure. That's
a good thing. But RAID isn't a perfect fix for every kind of
reliability problem. Before implementing a fault-tolerant form of RAID
(1, 4, or 5) in a production environment, it's extremely important
that you know exactly what RAID will and <b>will not</b> do for you.
When you're depending on RAID to perform, you don't want to make any
false assumptions about what it does. Let's start by dispelling some
common myths about RAID-1, 4, and 5.
</p>

<p>
A lot of people think that if they place all their important data on a
RAID 1/4/5 volume, then they won't have to perform regular backups.
This is completely false -- here's why. RAID 1/4/5 helps to protect
against unplanned <e>downtime</e> caused by a random drive failure.
However, it offers no protection against accidental or malicious
<e>data corruption</e>. If you type <c>cd /; rm -rf *</c> as root on a
RAID volume, you'll lose a lot of very important data in a matter of
seconds, and the fact that you have a 10-drive RAID-5 configuration
will be of little help. Also, RAID won't help you if your server is
physically stolen or if there's a fire in your building. And of
course, if you don't implement a backup strategy, you won't have an
archive of past data -- if someone in your office deletes a bunch of
important files, you won't be able to recover them. That alone should
be enough to convince you that, in most circumstances, you should plan
and implement a backup strategy <e>before</e> even thinking about
tackling RAID-1, 4, or 5.
</p>

<p>
Another mistake is to implement software RAID on a system composed of
low-quality hardware. If you're putting together a server that's going
to do something important, it makes sense to purchase the
highest-quality hardware that's still comfortably within your budget.
If your system is unstable or improperly cooled, you'll run into
problems that RAID can't solve. Similarly, RAID obviously can't give
you any additional uptime in the case of a power outage. If your
server is going to be doing anything relatively important, make sure
that it's equipped with an uninterruptible power supply (UPS).
</p>

<p>
Next, we move on to filesystem issues. The filesystem exists "on top"
of your software RAID volume. This means that using software RAID does
not allow you to escape filesystem issues, such as long and
potentially problematic <c>fsck</c>s if you happen to be using a
non-journalled or flaky filesystem. So, software RAID isn't going to
make the ext2 filesystem more reliable; that's why it's so important
that the Linux community has ReiserFS, as well as JFS and XFS in the
works. Software RAID and a reliable journalling filesystem make a
great combination.
</p>

</body>
</section>
<section>
<title>RAID - intelligent implementation</title>
<body>

<p>
Hopefully, the previous section dispelled any RAID myths that you
might have had. When you implement RAID-1, 4, or 5, it's very
important that you view the technology as something that enhances
<e>uptime</e>. When you implement one of these RAID levels, you're
protecting yourself against a very specific situation -- a
spontaneous, complete (single or multiple) drive failure. If you
experience this situation, software RAID will allow the system to
continue running while you make arrangements to replace the failed
drive with a new one. In other words, by implementing RAID 1, 4, or 5,
you reduce your risk of a long, unplanned downtime due to a complete
drive failure. Instead, you can have a short, planned downtime -- just
enough time to replace the dead drive. Obviously, this means that if
having a highly available system isn't a priority for you, then you
shouldn't be implementing software RAID, unless you plan to use it
primarily as a way to boost file I/O performance.
</p>

<p>
A smart system administrator uses software RAID for a specific purpose
-- to improve the reliability of an already very reliable server. If
you're a smart sysadmin, you've already covered the basics. You've
protected your organization against catastrophe by implementing a
regular backup plan. You've hooked your server up to a UPS, and have
the UPS monitoring software up and running so that your server will
shut down safely in the case of an extended power outage. Maybe you're
using a journalling filesystem such as ReiserFS to reduce <c>fsck</c>
time and increase filesystem reliability and performance. And
hopefully, your server is well-cooled and composed of high-quality
hardware, and you've paid close attention to security issues. Now, and
only now, should you consider implementing software RAID-1, 4, or 5 --
by doing so, you'll potentially give your server a few more percentage
points of uptime by guarding it against a complete drive failure.
Software RAID is that added layer of protection that makes an already
rugged server even better.
</p>

</body>
</section>
</chapter>

<chapter>
<title>A RAID-1 walkthrough</title>
<section>
<body>

<p>
Now that you've read about what RAID can and can't do, I hope you have
reasonable expectations and the right attitude. In this section, I'll
walk you through the process of simulating a disk failure and then
bringing your RAID volume back out of degraded mode. If you have the
ability to set up a RAID-1 volume on a test machine and follow along
with me, I highly recommend that you do so. This kind of simulation
can be fun, and having a little fun right now will help to ensure that
when a drive really fails, you'll be calm and collected and know
exactly what to do.
</p>

<impo>
To perform this test, it's essential that you set up your RAID-1 volume
so that you can still boot your Linux system with one hard drive
unplugged, because this is how we're going to simulate a drive failure.
</impo>

<p>
OK, our first step is to set up a RAID-1 volume; refer to my <uri
link="/doc/en/articles/software-raid-p1.xml">previous article</uri> if
you need a refresher on how to do this. Once you've set up your volume,
you'll see something like this if you <c>cat /proc/mdstat</c>:
</p>
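
<p>
For reference, a healthy, freshly synced two-disk RAID-1 volume
typically reports something along these lines under a 2.4 kernel --
the device names and block count here are illustrative, not taken from
the article:
</p>

```text
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda1[0]
      976640 blocks [2/2] [UU]
unused devices: <none>
```

The <c>[UU]</c> marker indicates that both mirrors are up; a failed or
missing member would show as an underscore in that position.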



--
gentoo-doc-cvs@g.o mailing list