On Thu, Dec 07, 2017 at 10:26:34AM -0500, Rich Freeman wrote:
> On Thu, Dec 7, 2017 at 9:53 AM, Frank Steinmetzger <Warp_7@×××.de> wrote:
> >
> > I see. I'm always looking for ways to optimise expenses and cut down on
> > environmental footprint by keeping stuff around until it really breaks. In
> > order to increase capacity, I would have to replace all four drives, whereas
> > with a mirror, two would be enough.
> >
>
> That is a good point. Though I would note that you can always replace
> the raidz2 drives one at a time - you just get zero benefit until
> they're all replaced. So, if your space use grows at a rate lower
> than the typical hard drive turnover rate that is an option.
>
> >
> > When I configured my kernel the other day, I discovered network block
> > devices as an option. My PC has a hotswap bay[0]. Problem solved. :) Then I
> > can do zpool replace with the drive-to-be-replaced still in the pool, which
> > improves resilver read distribution and thus lessens the probability of a
> > failure cascade.
> >
>
> If you want to get into the network storage space I'd keep an eye on
> cephfs.

No, I was merely talking about the use case of replacing drives on the fly
with the limited hardware available (all slots are occupied). It was not
about expanding my storage beyond what my NAS case can provide.

Resilvering is risky business, more so with big drives and especially once
they get older. That's why I was talking about adding the new drive
externally, which lets me keep all the old drives in the pool during the
resilver. Once the new drive is resilvered, I install it physically.
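
In case it helps anyone, the rough procedure I have in mind (a sketch
only: the pool name "tank", the host name and the device paths are made
up; also, older nbd-server versions take port and device on the command
line, while newer ones want an entry in the config file instead):

  # On the desktop PC with the hotswap bay: export the new drive
  nbd-server 10809 /dev/sdX

  # On the NAS: load the client module and attach the export
  modprobe nbd
  nbd-client desktop.lan 10809 /dev/nbd0

  # Start the replacement while the old drive is still in the pool,
  # so resilver reads are spread across all members instead of only
  # the remaining ones
  zpool replace tank <old-drive-id> /dev/nbd0

  # Watch the resilver progress
  zpool status tank

Afterwards I can detach the NBD device and move the drive into the bay;
ZFS identifies pool members by their on-disk labels rather than device
paths, so it should find the disk again after the swap.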

> […] They want 1GB/TB RAM, which rules out a lot of the cheap ARM-based
> solutions. Maybe you can get by with less, but finding ARM systems with
> even 4GB of RAM is tough, and even that means only one hard drive per
> node, which means a lot of $40+ nodes to go on top of the cost of the
> drives themselves.

No need to overshoot. It's a simple media archive and I'm happy with what I
have, apart from a few shortcomings of the case regarding quality and space.
My main goal was reliability, hence ZFS, ECC, and a Gold-rated PSU. They say
RAID is not a backup. For me it is one -- at least against disk failure,
which is my main dread.

You can't really get ECC on ARM, right? So M-ITX was the next best choice. I
have a tiny (probably one of the smallest available) M-ITX case with four
3.5″ bays and an internal 2.5″ mount:
https://www.inter-tech.de/en/products/ipc/storage-cases/sc-4100

Tata...
--
I cna ytpe 300 wrods pre mniuet!!!