On Thu, Dec 7, 2017 at 9:53 AM, Frank Steinmetzger <Warp_7@×××.de> wrote:
>
> I see. I'm always looking for ways to optimise expenses and cut down on
> environmental footprint by keeping stuff around until it really breaks. In
> order to increase capacity, I would have to replace all four drives, whereas
> with a mirror, two would be enough.
>

That is a good point. Though I would note that you can always replace
the raidz2 drives one at a time - you just get zero benefit until
they're all replaced. So, if your space use grows at a rate lower
than the typical hard drive turnover rate, that is an option.
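
A sketch of that rolling upgrade - the pool name "tank" and the
ata-* device names are placeholders; the real ones come out of
zpool status:

    zpool set autoexpand=on tank       # let the pool grow on its own
    zpool replace tank ata-OLD1 ata-NEW1
    zpool status tank                  # wait for the resilver to finish
    # ...repeat for the other three drives, one at a time...
    zpool list tank                    # extra space appears after the last one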

>
> When I configured my kernel the other day, I discovered network block
> devices as an option. My PC has a hotswap bay[0]. Problem solved. :) Then I
> can do zpool replace with the drive-to-be-replaced still in the pool, which
> improves resilver read distribution and thus lessens the probability of a
> failure cascade.
>
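
For reference, that style of replace is just the two-argument form.
A sketch, with hypothetical device names, assuming the new disk is
exported from another box over NBD (nbd-client flags vary a bit by
version):

    modprobe nbd
    nbd-client otherhost 10809 /dev/nbd0   # remote disk appears locally
    zpool replace tank ata-OLDDISK /dev/nbd0
    zpool status tank   # old drive keeps serving reads until resilver ends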

If you want to get into the network storage space, I'd keep an eye on
cephfs. I don't think it is quite to the point where it is a
zfs/btrfs replacement option, but it could get there. I don't think
the checksums are quite end-to-end, but they're getting better.
Overall stability for cephfs itself (as opposed to ceph object
storage) is not as good from what I hear. The biggest issue with it
though is RAM use on the storage nodes. They want 1GB of RAM per TB
of storage, which rules out a lot of the cheap ARM-based solutions.
Maybe you can get by with less, but finding ARM systems with even 4GB
of RAM is tough, and even that means only one hard drive per node,
which means a lot of $40+ nodes to go on top of the cost of the
drives themselves.
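
To put numbers on that guideline (my arithmetic, not an official
sizing table):

    1 x 4TB drive  -> ~4GB RAM  -> the whole budget of a 4GB ARM board
    1 x 8TB drive  -> ~8GB RAM  -> already beyond most ARM boards
    4 x 8TB drives -> ~32GB RAM -> small x86 server territory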

Right now cephfs mainly seems to appeal to the scalability use case.
If you have 10k servers accessing 150TB of storage and you want that
all in one managed, well-performing pool, that is something cephfs
could probably deliver that almost any other solution can't (and the
ones that can cost WAY more than just one box running zfs on a couple
of RAIDs).

--
Rich