On Tue, Nov 29, 2011 at 1:20 PM, Florian Philipp <lists@×××××××××××.net> wrote:
> On 29.11.2011 14:44, Michael Mol wrote:
>> On Tue, Nov 29, 2011 at 2:07 AM, Florian Philipp <lists@×××××××××××.net> wrote:
>>> On 29.11.2011 05:10, Michael Mol wrote:
>>>> I've got four 750GB drives in addition to the installed system drive.
>>>>
>>>> I'd like to aggregate them and split them into a few volumes. My first
>>>> inclination would be to raid them and drop lvm on top. I know lvm well
>>>> enough, but I don't remember md that well.
>>>>
>>>> Since I don't recall md well, and this isn't urgent, I figure I can look
>>>> at the options.
>>>>
>>>> The obvious ones appear to be mdraid, dmraid and btrfs. I'm not sure I'm
>>>> interested in btrfs until it's got a fsck that will repair errors, but
>>>> I'm looking forward to it once it's ready.
>>>>
>>>> Any options I missed? What are the advantages and disadvantages?
>>>>
>>>> ZZ
>>>>
>>>
>>> Sounds good so far. Of course, you only need mdraid OR dmraid (md
>>> recommended).
>>
>> dmraid looks rather new on the block. Or, at least, I've been more
>> aware of md than dm over the years. What's its purpose, as compared to
>> mdraid? Why is mdraid recommended over it?
>>
>
> dmraid being new? Not really. Anyway: under the hood, md and dm use
> exactly the same code in the kernel; they just provide different
> interfaces. mdraid is a Linux-specific software RAID implemented on top
> of ordinary single-disk controllers. It works like a charm, and any
> Linux system with any disk controller can work with it (if you ever
> change your hardware).
>
> dmraid provides a "fake RAID": a software RAID with support from (or
> rather, under the control of) a cheap on-board RAID controller.
> Performance-wise, it usually doesn't provide any kind of advantage,
> because the kernel driver still has to do all the heavy lifting (which
> is why it uses the same code base as mdraid). Its most important
> disadvantage is that it binds you to the vendor of the chipset, who
> determines the on-disk layout. Apparently, this has gotten better over
> the last few years because of some pretty major consolidation in the
> chipset market. It might be helpful if you consider dual-booting
> Windows on the same RAID (both systems ought to use the same disk
> layout by means of their respective drivers).
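The difference between the two interfaces can be sketched with a couple of commands (a rough sketch; the device names /dev/sd[b-e] are assumptions, and everything here needs root):

```shell
# mdraid: the array is created and managed entirely by Linux via mdadm.
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
cat /proc/mdstat                  # the kernel's view of md arrays

# dmraid: the BIOS/option ROM defines the RAID set in vendor metadata;
# Linux merely discovers and activates it.
dmraid -r                         # list RAID sets found on the disks
dmraid -ay                        # activate them as /dev/mapper/* devices
```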
>
>
>>> What kind of RAID level do you want to use, 10 or 5? You can also
>>> split it: use a smaller RAID 10 for performance-critical partitions
>>> like /usr and the more space-efficient RAID 5 for bulk data like
>>> videos. You can handle this with one LVM volume group consisting of
>>> two physical volumes. Then you can decide on a per-logical-volume
>>> basis where each LV should allocate space, and also migrate LVs
>>> between the two PVs.
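The split layout described above might look like this (a sketch only; the partition names /dev/sd[b-e]1 and /dev/sd[b-e]2 and the vg0/usr/videos names are assumptions):

```shell
# Two md arrays over partitions on the same four disks:
# a RAID 10 for speed and a RAID 5 for capacity.
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
mdadm --create /dev/md1 --level=5  --raid-devices=4 /dev/sd[b-e]2

# One volume group spanning both arrays as separate physical volumes.
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1

# Pin each logical volume to a specific PV by naming the PV explicitly.
lvcreate -L 20G  -n usr    vg0 /dev/md0   # fast RAID 10
lvcreate -L 500G -n videos vg0 /dev/md1   # space-efficient RAID 5

# Later, an LV's extents can be migrated between the PVs:
# pvmove -n videos /dev/md0 /dev/md1
```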
>>
>> Since I've got four disks for the pool, I was thinking raid10 with lvm
>> on top: a single lvm pv covering the whole array.
>>
>
> Yeah, that would also be my recommendation. But if storage efficiency
> is more relevant, RAID 5 with 4 disks brings you 750GB more usable
> storage (three disks' worth of capacity instead of RAID 10's two).
>
>
|
It looks like I'll want to try two different configurations, RAID5 and
RAID10. Not because of different storage requirements, but because I
want to see exactly what the performance drop is.
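A quick way to compare the two, assuming the candidate arrays are mounted at the hypothetical mount points /mnt/raid5 and /mnt/raid10, is a crude sequential read/write pass with dd:

```shell
# Crude sequential-throughput check; oflag=direct / iflag=direct bypass
# the page cache so the numbers reflect the arrays, not RAM.
for target in /mnt/raid5 /mnt/raid10; do
    echo "== $target =="
    dd if=/dev/zero of="$target/bench.tmp" bs=1M count=1024 oflag=direct 2>&1 | tail -n 1
    dd if="$target/bench.tmp" of=/dev/null bs=1M iflag=direct 2>&1 | tail -n 1
    rm -f "$target/bench.tmp"
done
```

For anything beyond a sanity check, a dedicated tool such as bonnie++ or iozone would give more representative numbers.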

I wish lvm striping supported data redundancy. But, then, I wish btrfs
were ready...

--
:wq