Gentoo Archives: gentoo-user

From: "J. Roeleveld" <joost@××××××××.org>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] Hard drive storage questions
Date: Fri, 09 Nov 2018 09:02:47
Message-Id: 44053795.EIAGmacMho@eve
In Reply to: Re: [gentoo-user] Hard drive storage questions by Rich Freeman
On Friday, November 9, 2018 3:29:52 AM CET Rich Freeman wrote:
> On Thu, Nov 8, 2018 at 8:16 PM Dale <rdalek1967@×××××.com> wrote:
> > I'm trying to come up with a
> > plan that allows me to grow more easily and without having to worry
> > about running out of motherboard-based ports.
>
> So, this is an issue I've been changing my mind on over the years.
> There are a few common approaches:
>
> * Find ways to cram a lot of drives on one host
> * Use a patchwork of NAS devices or improvised hosts sharing over
> samba/nfs/etc and end up with a mess of mount points.
> * Use a distributed FS
>
> Right now I'm mainly using the first approach, and I'm trying to move
> to the last. The middle option has never appealed to me.

I'm actually in the middle, but have a single large NAS.

> So, to do more of what you're doing in the most efficient way
> possible, I recommend finding used LSI HBA cards. These have mini-SAS
> ports on them, and one of these can be attached to a breakout cable
> that gets you 4 SATA ports. I just picked up two of these for $20
> each on ebay (used) and they have 4 mini-SAS ports each, which is
> capacity for 16 SATA drives per card. Typically these have 4x or
> larger PCIe interfaces, so you'll need a large slot, or one with a
> cutout. You'd have to do the math but I suspect that if the card+MB
> supports PCIe 3.0 you're not losing much if you cram it into a smaller
> slot. If most of the drives are idle most of the time then that also
> demands less bandwidth. 16 fully busy hard drives obviously can put
> out a lot of data if reading sequentially.

I also recommend LSI HBA cards; they work really well and are well
supported by Linux.

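To put rough numbers on the bandwidth question: PCIe 3.0 carries about
985 MB/s per lane, so an x4 link gives roughly 3.9 GB/s, while 16
spinning drives at ~250 MB/s sequential would peak around 4 GB/s. Even
a cut-down x4 slot is close to enough for the worst case. You can check
the link a card actually negotiated with lspci (the bus address here is
just an example):

  lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'
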
> You can of course get more consumer-oriented SATA cards, but you're
> lucky to get 2-4 SATA ports on a card that runs you $30. The mini-SAS
> HBAs get you a LOT more drives per PCIe slot, and your PCIe slots are
> your main limiting factor assuming you have power and case space.
>
> Oh, and those HBA cards need to be flashed into "IT" mode - they're
> often sold this way, but if they support RAID you want to flash the IT
> firmware that just makes them into a bunch of standalone SATA slots.
> This is usually a PITA that involves DOS or whatever, but I have
> noticed some of the software needed in the Gentoo repo.

Even with RAID firmware, they can be configured for JBOD.

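To see which firmware a card is currently running, something like the
LSI sas2flash tool works (assuming a SAS2008-family card; newer
generations use sas3flash or storcli instead):

  sas2flash -list

The "Firmware Product ID" line in the output indicates IT or IR mode.
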
> If you go that route it is just like having a ton of SATA ports in
> your system - they just show up as sda...sdz and so on (no idea where
> it goes after that).

I tested this once and ended up getting sdaa, sdab, ...

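With that many disks it pays to refer to drives by stable identifiers
rather than sdX names, which can change across reboots:

  ls -l /dev/disk/by-id/

The by-id symlinks stay tied to the physical drive no matter how the
kernel happens to enumerate it.
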
> Software-wise you just keep doing what you're
> already doing (though you should be seriously considering
> mdadm/zfs/btrfs/whatever at that point).

I would suggest ZFS or BTRFS over mdadm. They give you more flexibility
and are a logical follow-up to LVM.

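For example, either of these creates redundant storage directly on the
raw disks (pool name and device paths are purely illustrative):

  zpool create tank raidz2 \
      /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
      /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

  mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
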
> That is the more traditional route.
>
> Now let me talk about distributed filesystems, which is the more
> scalable approach. I'm getting tired of being limited by SATA ports,
> and cases, and such. I'm also frustrated with some of zfs's
> inflexibility around removing drives.

IMHO, ZFS is nice for large storage devices, not so much for regular
desktops. This is why I am hoping BTRFS will solve the resilver issues.
(I haven't kept up; is this still not working?)
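
On the removal point: btrfs can already shrink a multi-device
filesystem in place, which is exactly where ZFS has historically been
inflexible (the mount point below is illustrative):

  btrfs device remove /dev/sdg /srv/storage

This migrates the data off /dev/sdg onto the remaining devices before
releasing the disk.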

--
Joost