Gentoo Archives: gentoo-user

From: Rich Freeman <rich0@g.o>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] Rasp-Pi-4 Gentoo servers
Date: Mon, 02 Mar 2020 14:48:29
Message-Id: CAGfcS_=EOdX9qmwcyizw1yzPrr57LcJ1OAJZNkzbZeyC4dw4QA@mail.gmail.com
In Reply to: Re: [gentoo-user] Rasp-Pi-4 Gentoo servers by William Kenworthy
On Mon, Mar 2, 2020 at 12:22 AM William Kenworthy <billk@×××××××××.au> wrote:
>
> I thought lizardfs was much more community minded
> but you are characterising it as similar to moosefs - a taster offering
> by a commercial company holding back some of the non-essential but
> juicier features for the paid version - is that how you see them?

I don't see much of an active community. It seems like most actual
development happens outside of the public repo, with big code drops by
the private team doing the work (which seems to be associated with a
company). A bit like the Android model. I'm sure they'll accept pull
requests, but that isn't how most of the work is getting done.

It seems like the main difference between them and moosefs is that
they're making more stuff FOSS to entice users over. Shadow masters
are FOSS, as opposed to just having metadata loggers. HA is FOSS in
the latest RC.

So, it seems like their model is to trickle out the non-free stuff and
make it free after a delay.

It really seems like Ceph is the best fully open platform out there,
but the resource requirements just make it impractical. I have no
doubt that it can scale FAR better with its design, but that design
basically forces every node to be a bit of a powerhouse, versus
Lizardfs where you just have one daemon with all the intelligence and
the rest are just dumping files on disks. And you really don't need
much CPU/RAM for the master if you're serving large files - the
demands would go up with IOPS and number of files, and multimedia is
low on both.

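As a rough illustration of why multimedia stays cheap on the master:
the master keeps all metadata in RAM, so its footprint scales with the
number of files and chunks rather than total bytes stored. The
per-object byte counts in the sketch below are assumptions for a
back-of-envelope estimate, not figures from the LizardFS docs; the
64 MiB chunk size is the MooseFS/LizardFS default as far as I know.

# Rough, assumption-laden estimate of master metadata RAM.
def master_ram_estimate(num_files, avg_file_size_bytes,
                        bytes_per_file=300, bytes_per_chunk=150,
                        chunk_size=64 * 2**20):
    """Very rough master RAM footprint in bytes (metadata lives in RAM)."""
    chunks_per_file = max(1, -(-avg_file_size_bytes // chunk_size))  # ceiling
    total_chunks = num_files * chunks_per_file
    return num_files * bytes_per_file + total_chunks * bytes_per_chunk

# A media library: 50k files averaging 4 GiB each.
media = master_ram_estimate(50_000, 4 * 2**30)
# A small-file workload: 20 million files averaging 4 KiB.
mail = master_ram_estimate(20_000_000, 4 * 2**10)

print(f"media library: ~{media / 2**20:.0f} MiB of master RAM")
print(f"tiny files:    ~{mail / 2**30:.1f} GiB of master RAM")

With numbers like these, tens of thousands of multi-gigabyte media
files fit in a few hundred MiB of master RAM, while tens of millions
of tiny files push into multi-GiB territory.
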
> By the way, to keep to the rpi subject, I did have a rpi3B with a usb2
> sata drive attached but it was hopeless as a chunkserver impacting the
> whole cluster. Having the usb data flow and network data flow through
> the same hub just didn't go well

Hard drives plus 100 Mbps LAN sharing a single USB 2.0 hub is
definitely not a recipe for NAS success...

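To put rough numbers on that: on a Pi 3B the Ethernet MAC and the
USB-SATA bridge both hang off the same USB 2.0 bus, so every byte
served crosses that bus twice (disk to RAM, then RAM to NIC). The
usable-throughput figure below is an assumption for illustration;
real adapters and kernels vary.

USB2_USABLE = 35.0       # MB/s, practical USB 2.0 bulk throughput (assumed)
FAST_ETHERNET = 100 / 8  # MB/s, a saturated 100 Mbit/s link

# Serving a chunk means reading it from the USB disk and sending it out
# the USB-attached NIC, so the shared bus carries each byte twice.
bus_load_to_fill_nic = 2 * FAST_ETHERNET
headroom = USB2_USABLE - bus_load_to_fill_nic

print(f"bus load just to saturate the NIC: {bus_load_to_fill_nic:.1f} MB/s "
      f"({bus_load_to_fill_nic / USB2_USABLE:.0%} of the bus)")
print(f"headroom left for replication, scrubbing, etc: {headroom:.1f} MB/s")

So just filling the 100 Mbit link already eats most of the bus, and
replication writes or any other device on the same hub push it over
the edge - which fits the cluster-wide impact you saw.
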
When I upgraded to UniFi switches I noticed for the first time just
how many hosts I have that aren't gigabit, and they're mostly Pis at
this point. They're nice little project boards, but for anything
IO-intensive they're almost always the wrong choice.

The RockPro64 I'm using has gigabit plus PCIe 3.0 x8 plus USB3 and as
far as I can tell they don't have any contention. Maybe they're all
on a PCIe bus or something but obviously that can handle quite a bit.
The only issue was that the rk3399 PCIe drivers were not the most
robust in the kernel, but ayufan and the IRC channel were both helpful
and his kernel branch is actively maintained, so I was able to get
everything sorted (some delays were needed during link training to let
the card initialize, and I had power issues in the beginning). Much
of the rk3399 support in the kernel was pushed by Google for
Chromebooks, and LSI HBAs weren't exactly on their list of things to
test with those - not sure if the Chromebooks put much of anything on
PCIe.

--
Rich