Thank you all! :) Everything is clear now.

I'm going to go with raid10. Anyway, I'm going to run a benchmark before
installing.

Thank you! ;)

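For what it's worth, the idealized scaling being discussed can be sketched in a few lines of Python (a rough model only, with hypothetical helper names; it assumes perfectly parallel I/O across all disks and ignores the chunk-size effect mentioned below):

```python
# Idealized throughput scaling for n disks, relative to a single disk.
# Rough model only: assumes perfectly parallel I/O and ignores chunk size.
def read_speedup(level: str, n: int) -> float:
    # In the ideal case, raid0 stripes reads over all n disks, raid1 can
    # read from every mirror in parallel, and raid10 combines both.
    if level not in ("raid0", "raid1", "raid10"):
        raise ValueError(level)
    return float(n)

def write_speedup(level: str, n: int) -> float:
    if level == "raid0":
        return float(n)   # each write lands on only one stripe member
    if level == "raid1":
        return 1.0        # every disk must receive a copy of every write
    if level == "raid10":
        return n / 2.0    # striped across n/2 mirror pairs
    raise ValueError(level)

# 4 disks in raid10: ~4x reads, ~2x writes versus a single disk.
print(read_speedup("raid10", 4), write_speedup("raid10", 4))
```

Real numbers depend heavily on the controller, queue depth and workload, which is why running a benchmark first is worthwhile.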
2014-02-24 14:03 GMT-03:00 Jarry <mr.jarry@×××××.com>:

> On 24-Feb-14 7:27, Facundo Curti wrote:
>
>> n = number of disks
>>
>> reads:
>> raid1: n*2
>> raid0: n*2
>>
>> writes:
>> raid1: n
>> raid0: n*2
>>
>> But in real life, the read scaling from raid0 doesn't hold at all: if
>> you use a "chunk size" of 4k and you only need to read 2kB (most binary
>> files, txt files, etc.), the read speed should be just n.
>>
>
> Definitely not true. You very rarely need to read just one small file.
> Mostly you need many small files (i.e. compilation) or a few big files
> (i.e. a database). I do not know what load you expect, but in my case
> raid0 (with SSDs) gave me about twice the r/w speed on a heavily-loaded
> virtualization platform with many virtual machines. And not only is the
> speed higher, the IOPS are also split across two disks (nearly doubled).
>
> I did some testing with 2xSSD/512GB in raid1, 2xSSD/256GB in raid0 and
> 3xSSD/256GB in raid5 (I used 840/pro SSDs with a quite good HW controller,
> but I think with mdadm it would be similar). Raid0 was way ahead of the
> other two configurations in my case.
>
> In the end I went for 4xSSD/256GB in raid10, as I needed both speed and
> redundancy...
>
> Jarry
>
> --
> _______________________________________________________________
> This mailbox accepts e-mails only from selected mailing-lists!
> Everything else is considered to be spam and therefore deleted.
>
>