On Jul 20, 2013 9:27 PM, "Tanstaafl" <tanstaafl@×××××××××××.org> wrote:
>
> On 2013-07-19 3:02 PM, Paul Hartman <paul.hartman+gentoo@×××××.com> wrote:
>>
>> I think you are. Unless you are moving massive terabytes of data
>> across your drive on a constant basis I would not worry about regular
>> everyday write activity being a problem.
>
>
> I have a question regarding the use of SSDs in a VM SAN...
>
> We are considering buying a lower-end SAN (two actually, one for each of
> our locations), with lots of 2.5" bays, and using SSDs.
>
> The two questions that come to mind are:
>
> Is this a good use of SSDs? I honestly don't know if the running VMs
> would benefit from the faster IO or not (I *think* the answer is a
> resounding yes)?
>
|
Yes, the I/O would be faster, although how significant the gain is depends
entirely on your workload pattern.
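
If you have a test box, a quick way to gauge that is to profile the workload
with fio; a minimal sketch, assuming a scratch file on the filesystem under
test (the job parameters here are purely illustrative):

    # random 4k reads, roughly what a pile of VMs looks like to the storage
    fio --name=vmish-randread --filename=/mnt/test/fio.dat --size=4g \
        --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
        --numjobs=4 --runtime=60 --time_based --group_reporting

Run the same job against the current spinning-disk array and against an SSD,
and the IOPS difference tells you how much the VMs would actually gain.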
|
The bottleneck would be the LAN, though. SATA III peaks at 6 Gb/s (roughly
600 MB/s) per drive, so a shelf full of SSDs can aggregate to several tens
of Gbps. You'll need active/active multipathing and/or bonded interfaces to
cater for that firehose.
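
On the bonding front, here's a minimal sketch with iproute2, assuming two
NICs (eth0/eth1 are placeholders) and a switch that supports LACP:

    # create an 802.3ad (LACP) bond and enslave both NICs
    ip link add bond0 type bond mode 802.3ad
    ip link set eth0 down
    ip link set eth1 down
    ip link set eth0 master bond0
    ip link set eth1 master bond0
    ip link set bond0 up
    ip addr add 192.0.2.10/24 dev bond0    # example address only

Keep in mind that LACP won't push a single TCP stream past the speed of one
member link; it helps when many clients/flows hit the SAN at once, which is
exactly the VM case.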
|
> Next is RAID...
>
> I've avoided RAID5 (and RAID6) like the plague ever since I almost got
> bit really badly by a multiple drive failure... luckily, the RAID5 had just
> finished rebuilding successfully after the first drive failed, before the
> second drive failed. I can't tell you how many years I aged that day while
> it was rebuilding after replacing the second failed drive.
>
> Ever since, I've always used RAID10.
>
|
Ahh, the Cadillac of RAID arrays :-)
|
> So... with SSDs, I think another advantage would be much faster rebuilds
> after a failed drive? So I could maybe start using RAID6 (would survive two
> simultaneous disk failures), and not lose so much available storage (50%
> with RAID10)?
>
|
If you're using ZFS with spinning disks as its vdev 'elements', resilvering
(rebuilding the RAID array) would be somewhat faster because ZFS knows what
needs to be resilvered (i.e., used blocks) and skips over the parts that
don't (i.e., unused blocks).
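
For illustration, swapping out a failed disk and watching the resilver is
just (the pool and device names below are placeholders):

    # replace the dead disk; ZFS resilvers only the blocks that are in use
    zpool replace tank sdc sdk
    # shows resilver progress, estimated completion time, and any errors
    zpool status tank

On a pool that is half empty, that resilver touches roughly half the data a
plain block-level RAID rebuild would have to copy.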
|
> Last... while researching this, I ran across a very interesting article
> that I'd appreciate hearing opinions on.
>
> "The Benefits of a Flash Only, SAN-less Virtual Architecture":
>
> http://www.storage-switzerland.com/Articles/Entries/2012/9/20_The_Benefits_of_a_Flash_Only,_SAN-less_Virtual_Architecture.html
>
> or
>
> http://tinyurl.com/khwuspo
>
> Anyway, I look forward to hearing thoughts on this...
>
|
Interesting...
|
Another alternative for performance is to buy a bunch of spinning disks
(say, 12 'enterprise'-grade drives), join them into a ZFS pool of 5 mirrored
vdevs (that is, RAID10 a la ZFS) plus 2 hot spares, and then use 4 SSDs to
hold the ZFS cache (L2ARC) and intent log (SLOG).
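
As a minimal sketch, assuming 12 spinners sda..sdl and 4 SSDs sdm..sdp (all
placeholder names; in practice you'd use /dev/disk/by-id paths), that layout
is roughly:

    zpool create tank \
        mirror sda sdb \
        mirror sdc sdd \
        mirror sde sdf \
        mirror sdg sdh \
        mirror sdi sdj \
        spare sdk sdl \
        log mirror sdm sdn \
        cache sdo sdp

Mirroring the log (SLOG) SSDs protects in-flight sync writes if one of them
dies; the cache (L2ARC) devices need no redundancy, since losing one just
means a cache miss.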
|
The capital expenditure for the capacity you gain should be lower, while the
performance stays very acceptable.
|
Rgds,
--