Gentoo Archives: gentoo-user

From: William Kenworthy <billk@×××××××××.au>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] Rasp-Pi-4 Gentoo servers
Date: Mon, 02 Mar 2020 01:52:21
Message-Id: 80b66b18-f923-02d2-7e6f-f7059cf6a984@iinet.net.au
In Reply to: Re: [gentoo-user] Rasp-Pi-4 Gentoo servers by Daniel Frey
I am going to answer multiple points below:


On 1/3/20 11:36 pm, Daniel Frey wrote:
> On 3/1/20 6:33 AM, Rich Freeman wrote:
>> On Sun, Mar 1, 2020 at 2:13 AM William Kenworthy <billk@×××××××××.au>
>> wrote:
>>>
>>> Keep in mind that rpi are not the only cheap, capable arm hardware out
>>> there.
>>>
>>
I am in Oz; delivery from HardKernel (the South Korean company behind
the Odroid line) takes ~1 week.  Shipping is mostly via FedEx, who are a
bit pricey, and for me that means a 30-minute drive to collect packages
from a delivery centre as I can't be home to take delivery - though the
last delivery used a different, much more flexible courier.

lizardfs and moosefs are very similar (lizardfs was originally a fork of
moosefs) - I went with moosefs as its community was the better of the
two a few months ago, but lizardfs sounds like it is getting back on
track after some infighting (moosefs apparently went through the same
thing).  I find the moosefs documentation and community help adequate,
but ultimately it is a commercial project with a free taster offering,
hence some design limitations in the community version (such as a single
master with no shadow masters) that lizardfs doesn't have.  Because of
this I will move to lizardfs when I am satisfied they have their act
together - having a single master means taking the whole cluster offline
when the master needs maintenance or fails, which is painful.
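
With only the one master it is worth having something else watching it -
purely a minimal sketch of the idea, assuming a stock install where the
master answers clients on the default port 9421 and resolves as
"mfsmaster" (adjust both for your own setup):

#!/usr/bin/env python3
# Minimal single-master liveness check - a sketch, not a recipe.
# Assumes the default mfsmaster client port (9421) and that the master
# resolves as "mfsmaster"; change both for your own network.
import socket
import sys

MASTER_HOST = "mfsmaster"   # assumption: hostname of the single master
MASTER_PORT = 9421          # assumption: default client port

def master_is_up(host=MASTER_HOST, port=MASTER_PORT, timeout=5):
    """Return True if the master accepts a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if master_is_up():
        print("mfsmaster reachable")
    else:
        print("mfsmaster not reachable - the cluster is effectively down")
        sys.exit(1)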
 
For those wanting to run a lot of drives on a single host - that
defeats the main advantage of using a chunkserver based filesystem:
redundancy.  It's far more common to have a host fail than a disk drive.
Losing the major part of your storage in one go means the cluster is
effectively dead - hence having a lot of completely separate systems is
much more reliable.  Yes, I did try having 4 sata drives on an atom
board and found it was easy to justify two more HC2's for the
reliability (note, it's not the Atom board's reliability I am pointing
out, but the effect on the whole storage system of losing such a large
percentage of capacity in one go - think maintenance, failure to start
up etc. - it's more common than you may think).
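
To put rough numbers on it - purely illustrative, with made-up drive
counts and sizes, and assuming chunks are spread evenly with a copy goal
of 2:

#!/usr/bin/env python3
# Rough illustration of the capacity hit when a whole host dies, for two
# layouts with the same total number of drives.  Example figures only.

def host_failure_impact(drives_per_host, hosts, tb_per_drive, used_fraction):
    """Fraction of capacity lost and TB left single-copy (goal=2)."""
    total_tb = drives_per_host * hosts * tb_per_drive
    lost_tb = drives_per_host * tb_per_drive      # one whole host offline
    lost_fraction = lost_tb / total_tb
    # With goal=2, data that had a copy on the dead host is down to one
    # copy until it is re-replicated or the host comes back:
    single_copy_tb = lost_tb * used_fraction
    return lost_fraction, single_copy_tb

# Same 8 drives, either concentrated on 2 hosts or spread over 8 hosts.
for label, drives, hosts in [("4 drives on each of 2 hosts", 4, 2),
                             ("1 drive on each of 8 hosts", 1, 8)]:
    frac, single = host_failure_impact(drives, hosts,
                                       tb_per_drive=4, used_fraction=0.5)
    print(f"{label}: one host down = {frac:.0%} of capacity offline, "
          f"~{single:.0f} TB left with a single copy")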
 
Dale: there is no single reference - I just designed on the fly and did
what was necessary to move from my existing KVM/Qemu architecture on two
powerful intel systems, each with 8TB storage, to an lxc container based
system backed by moosefs.  It uses an arm based Odroid N2 for lxc, an
arm based xu4 for management and software compilation (it's not that
powerful, but is adequate), the HC2's (very similar to the xu4 - same
arm architecture) as chunkservers, and an H2 to run the mfsmaster, and I
am moving to ansible for management.  Overall, I am finding less
maintenance due to better backups and better reliability, and
considerably lower power consumption, while overall performance seems
better.
 
Gotchas:
 
1. I originally got the 4GB ram N2 to run the mfsmaster software - tests
were excellent - until I added multiple copies of a mailserver with
nearly 20 years of history - it hit swap and slowed to a crawl.  So I
now have an intel based Odroid H2 with 32GB ram - it is currently using
~4.2GB ram for 7TB of 20TB used, but has gone over 32GB and well into
swap while converging.  One admin in our local LUG is using 4xHC2 with
the master running on one of the HC2's as a media server with no
problems - it's millions of small files added in a short time that
causes the grief - there is a formula in the moosefs documentation
allowing resources to be estimated - on the mailing list I saw a mention
of a data centre using something like 150GB ram and having problems!
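
As a very rough sketch of that kind of estimate - the bytes-per-object
figure below is a placeholder assumption, so take the real numbers from
the moosefs documentation:

#!/usr/bin/env python3
# Back-of-the-envelope mfsmaster RAM estimate - a sketch only.
# The real formula is in the moosefs documentation; the bytes-per-object
# figure here is a placeholder assumption, not an official number.
BYTES_PER_OBJECT = 300          # placeholder - check the moosefs docs

def master_ram_gib(n_files, n_chunks, bytes_per_object=BYTES_PER_OBJECT):
    """Estimate resident RAM for the master from object counts.

    The master keeps all metadata in RAM, so usage scales with the
    number of files/directories and chunks, not with data volume.
    """
    objects = n_files + n_chunks
    return objects * bytes_per_object / 2**30

# Example: a mail store with 10 million small files, roughly 1 chunk each.
print(f"~{master_ram_gib(10_000_000, 10_000_000):.1f} GiB")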
 
2. The Odroid HC2 has a single sata port and a single usb port - I have
5 of them; two have a sata + a usb3 drive attached.  I did try (on the
N2) using multiple usb3 drives on the internal hub - a disaster, with
way too much traffic going through the one hub - don't do it!
 
3. Storage options are an SD card (the faster the better - swap on an SD
card ... sucks!) or eMMC (5x faster than even a good SD card) for the N2
and xu4 - HC2's can only do SD card or sata.  The H2 can do SD card,
sata, eMMC, or M.2 NVMe - this last really flies!  Note that almost all
arm systems max out at 2 to 4GB of ram, so swap is usually needed for
safety - depending on the OOM killer for resource management on a
SAN-type storage system is asking for corruption like I saw in one
recent gentoo email.
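
A small watchdog on each box gives some warning before the OOM killer
steps in - a sketch only, with arbitrary thresholds you would want to
tune:

#!/usr/bin/env python3
# Warn before memory pressure gets bad enough for the OOM killer to act.
# A sketch only - the thresholds are arbitrary assumptions.
MIN_AVAILABLE_MIB = 256   # assumption: warn below this much available ram
MIN_SWAP_FREE_MIB = 512   # assumption: warn when swap is nearly full

def meminfo_mib():
    """Return /proc/meminfo values (kB fields) converted to MiB."""
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.split()[0]) / 1024
    return values

if __name__ == "__main__":
    m = meminfo_mib()
    if m["MemAvailable"] < MIN_AVAILABLE_MIB:
        print(f"low memory: {m['MemAvailable']:.0f} MiB available")
    if m.get("SwapTotal", 0) and m["SwapFree"] < MIN_SWAP_FREE_MIB:
        print(f"low swap: {m['SwapFree']:.0f} MiB free of {m['SwapTotal']:.0f}")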
 
4. xu4/HC2 are 32 bit arm v7, the N2 is 64 bit and runs aarch64 nicely -
I copied my rpi images across and repurposed them (hooray for
emerge/portage!).  I do not use, and have not done any work on, their
graphics/multimedia capabilities.
 
5. I want to move to all gentoo-sources kernels - the xu4/HC2's are
still on the odroid kernel until I get around to it.  The N2 was a lot
of work, but ultimately successful; the H2 is standard amd64 EFI.
 
Have fun!
 
BillK

Replies

Subject Author
Re: [gentoo-user] Rasp-Pi-4 Gentoo servers Rich Freeman <rich0@g.o>