Gentoo Archives: gentoo-user

From: Robert David <robert.david@××××××.net>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ
Date: Thu, 01 Jul 2021 13:47:20
Message-Id: 7622209.dfAi7KttbR@robert-notebook
In Reply to: [gentoo-user] [OT] Using an odd number of drives in ZFS RaidZ by Frank Steinmetzger
Hi Frank,
On Tuesday, June 29, 2021 3:56:49 PM CEST Frank Steinmetzger wrote:
> Hello fellows
>
> This is not really a Gentoo question, but at least my NAS (which this mail
> is about) is running Gentoo. :)
>
> There are some people amongst this esteemed group that know their stuff
> about storage and servers and things, so I thought I might try my luck here.
> I’ve already looked on the Webs, but my question is a wee bit specific and
> I wasn’t able to find the exact answer (yet). And I’m a bit hesitant to ask
> this newbie-ish question in a ZFS expert forum. ;-)
>
> Prologue:
> Due to how records are distributed across blocks in a parity-based ZFS vdev,
> it is recommended to use 2^n data disks. Technically, it is perfectly fine
> to deviate from it, but for performance reasons (mostly space efficiency)
> it is not the recommended way. That’s because the (default) maximum record
> size of 128 k itself is a power of 2 and thus can be distributed evenly on
> all drives. At least that’s my understanding. Is that correct?
>
> So here’s the question:
> If I had three data drives, (c|w)ould I get around that problem by setting a
> record size that is divisible by 3, like 96 k, or even 3 M?
I would not bother with this. 128k is a good default for general use, and
even with three data disks the actual loss is too small to be worth worrying
about (assuming you have 4k-sector disks).
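To put a rough number on that loss, here is a back-of-the-envelope sketch (my own illustration, not code from ZFS) of the commonly cited RaidZ allocation model: each record is split into sector-sized chunks, every stripe row of data gets its parity sectors, and the whole allocation is padded up to a multiple of (parity + 1) sectors:

```python
import math

def raidz_alloc_sectors(record_bytes, sector_bytes, ndisks, parity):
    """Estimate sectors allocated for one record on a RaidZ vdev.

    Each stripe row holds up to (ndisks - parity) data sectors plus
    `parity` parity sectors; the total is then padded to a multiple of
    (parity + 1) sectors so freed space remains allocatable.
    """
    data = math.ceil(record_bytes / sector_bytes)
    rows = math.ceil(data / (ndisks - parity))
    total = data + rows * parity
    pad = (-total) % (parity + 1)
    return total + pad

def efficiency(record_kib, sector_bytes, ndisks, parity):
    """Fraction of the allocation that is actual data."""
    data = math.ceil(record_kib * 1024 / sector_bytes)
    return data / raidz_alloc_sectors(record_kib * 1024,
                                      sector_bytes, ndisks, parity)
```

With four disks in RaidZ1, 4k sectors and 128k records, this gives 32 data sectors inside a 44-sector allocation, i.e. about 72.7 % space efficiency versus the ideal 75 %; a larger recordsize such as 1M closes most of that remaining gap. So the penalty for three data disks really is marginal.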
>
>
>
> Here’s the background of my question:
> Said NAS is based on a Mini-ITX case which has only four drive slots (which
> is the most common configuration for a case of this formfactor). I started
> with two 6 TB drives, running in a mirror configuration. One year later
> space was running out and I filled the remaining slots. To maximise
> reliability, I went with RaidZ2.
>
> I reached 80 % usage (which is the recommended maximum for ZFS) and am
> now evaluating my options for the coming years.
> 1) Reduce use of space by re-encoding. My payload is mainly movies, among
> which are 3 TB of DVDs which can be shrunk by at least ⅔ by re-encoding.
> → this takes time and computing effort, but is a long-term goal anyway.
In such cases I always ask myself whether I really need the data. Quite
often, after honest consideration, I find I can remove half of it without
any pain. It is like cleaning my home: lots of extra things lying around
while space for the really valuable ones is missing. With disk data it is
the same.
> 2) Replace all drives with bigger ones. There are three counter arguments:
> • 1000 € for four 10 TB drives (the biggest size available w/o helium)
> • they are only available with 7200 rpm (more power, noise and heat)
> • I am left with four perfectly fine 6 TB drives
> 3) Go for 4+2 RaidZ2. This requires a bigger case (with new PSU due to
> different form factor) and a SATA expansion card b/c the Mobo only has
> six connectors (I need at least one more for the system drive), costing
> 250 € plus drives.
> 4) Convert to RaidZ1. Gain space of one drive at the cost of resilience. I
> can live with the latter; the server only runs occasionally and not for
> very long at a time. *** This option brings me to my question above,
> because it is easy to achieve and costs no €€€.
On all of my data arrays I migrated off RaidZ to mirrors (RAID10) long ago.
You will eventually find that RaidZ is slow and not very flexible; the only
thing you gain is extra space in a constrained array. With RAID10 it is much
easier to grow the pool: just resilver onto new, bigger disks, remove the
old ones, and expand. Resilvering is also an order of magnitude faster, and
recovery in case of failure is much easier.
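As an illustration, growing a mirror pool disk-by-disk looks roughly like this (the pool name `tank` and the device paths are placeholders; the commands are standard OpenZFS, but check zpool(8) on your system before running anything):

```shell
# Let the pool grow automatically once all devices in the vdev are larger
zpool set autoexpand=on tank

# Swap one half of the mirror for a bigger disk and let it resilver
zpool replace tank /dev/disk/by-id/old-disk-1 /dev/disk/by-id/new-disk-1
# ...wait for the resilver to finish, then repeat for the other half...
zpool replace tank /dev/disk/by-id/old-disk-2 /dev/disk/by-id/new-disk-2

# If autoexpand was off, trigger the expansion explicitly
zpool online -e tank /dev/disk/by-id/new-disk-1
```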
If you really need the additional space, consider adding a second JBOD
enclosure with more disks.
Robert.

