Gentoo Archives: gentoo-user

From: Rich Freeman <rich0@g.o>
To: gentoo-user@l.g.o
Subject: Re: Living in NGL: was: [gentoo-user] NAS and replacing with larger drives
Date: Tue, 20 Dec 2022 00:00:33
Message-Id: CAGfcS_kPhDHYJS3j9g6ZPiJ3etP5QKmQ3__8qKUo4_-qvFVtJg@mail.gmail.com
In Reply to: Re: Living in NGL: was: [gentoo-user] NAS and replacing with larger drives by Mark Knecht
On Mon, Dec 19, 2022 at 11:43 AM Mark Knecht <markknecht@×××××.com> wrote:
>
> On Mon, Dec 19, 2022 at 6:30 AM Rich Freeman <rich0@g.o> wrote:
> <SNIP>
> > My current solution is:
> > 1. Moosefs for storage: amd64 container for the master, and ARM SBCs
> > for the chunkservers which host all the USB3 hard drives.
>
> I'm trying to understand the form factor of what you are mentioning above.
> Presumably the chunkservers aren't sitting on a lab bench with USB
> drives hanging off of them. Can you point me toward an example of
> what you are using?

Well, a few basically are just sitting on a bench, but most are in
half-decent cases (I've found that Pi4s really benefit from a decent
case, as they will thermal throttle otherwise). I then have USB3 hard
drives attached. I actually still have one RockPro64 with an LSI HBA
on a riser card, but I'm going to be moving those drives to USB3-SATA
adapters because dealing with the kernel patches needed to fix the
broken PCIe driver is too much fuss, and the HBA uses a TON of power,
which I didn't anticipate when I bought it (ugh, server gear).

Really, at this point 2GB Pi4s in Argon ONE v2 cases are my go-to for
anything new. Then I just get USB3 hard drives on sale at Best Buy
for ~$15/TB if possible. USB3 will handle a few hard drives depending
on how much throughput they're getting, but this setup is more
focused on capacity/cost than performance anyway.

The low memory requirement for the chunkservers is a big part of why
I went with MooseFS instead of Ceph. Ceph recommends something like
4GB of RAM per OSD (i.e. per hard drive), which adds up very fast -
eight drives would already call for ~32GB.

The USB3 hard drives do end up strewn about a fair bit, but they have
their own enclosures anyway. I just label them well.

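For reference, the chunkserver's drive list is just
/etc/mfs/mfshdd.cfg, one mount point per line, so adding a labeled
drive is a one-line change. A minimal sketch (the mount-point paths
here are made up to match whatever the labels say):

  # /etc/mfs/mfshdd.cfg - one storage path per line
  /mnt/usb-wd-8tb-a
  /mnt/usb-seagate-8tb-b
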
>
> I've been considering some of these new mini-computers that have
> a couple of 2.5Gb/s Ethernet ports and 3 USB 3 ports but haven't
> moved forward because I want it packaged in a single case.

Yeah, better ethernet would definitely be on my wish list. I'll take
a look at the state of those the next time I add a node.

> Where does the master reside? In a container on your desktop
> machine or is that another element on your network?

In an nspawn container on one of my servers. It is pretty easy to set
up or migrate so it can go anywhere, but it does benefit from a bit
more CPU/RAM. Running it in a container creates obvious dependency
challenges if I want to mount moosefs on the same server - that can
be solved with systemd dependencies, but it won't figure that out on
its own.

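For example, on that box the moosefs mount gets a systemd drop-in so
it orders itself after the master's container. A sketch, assuming the
container runs as systemd-nspawn@mfsmaster.service and the client is
a mnt-mfs.mount unit (both names illustrative):

  # /etc/systemd/system/mnt-mfs.mount.d/after-master.conf
  [Unit]
  Wants=systemd-nspawn@mfsmaster.service
  After=systemd-nspawn@mfsmaster.service
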
> > 2. Plex server in a container on amd64 (looking to migrate this to k8s
> > over the holiday).
>
> Why Kubernetes?

I run it 24x7. This is half an exercise to finally learn and grok
k8s, and half an effort to just develop better container practices in
general. Right now all my containers run in nspawn, which is actually
a very capable engine, but it does nothing for image management, so
my containers are more like pets than cattle. I want to get to a
point where everything is defined by a few trivially backed-up config
files.

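For Plex that would boil down to something like this - a very rough
sketch, where the image tag, port, and media path are placeholders
for whatever yours actually are:

  # plex.yaml - rough sketch, names and paths are placeholders
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: plex
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: plex
    template:
      metadata:
        labels:
          app: plex
      spec:
        containers:
        - name: plex
          image: plexinc/pms-docker:latest
          ports:
          - containerPort: 32400
          volumeMounts:
          - name: media
            mountPath: /data
        volumes:
        - name: media
          hostPath:
            path: /mnt/mfs/media  # moosefs mount on the node
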
One thing I REALLY prefer about nspawn is its flexibility around
networking. An nspawn container can use a virtual interface attached
to any bridge, which means you can give containers their own IPs,
routes, gateways, VLANs, and so on. Docker and k8s are pretty decent
about giving containers a way to listen on the network for
connections (especially k8s ingress or load balancers), but they do
nothing really to manage the outbound traffic, which just uses the
host network config. On a multi-homed network, or when you want to
run services for VLANs and so on, that seems like a lot of trouble.
Sure, you can go crazy with iptables and iproute2 and so on, but I
used to do that with non-containerized services and hated it.

With nspawn it is pretty trivial to set that stuff up and give any
container whatever set of interfaces you want, bridged however you
want them. I actually fussed a little with running a k8s node inside
an nspawn container so that I could just tie pods to nodes to do
exotic networking, but a cluster inside a container (using microk8s,
which runs in snapd) was just a bridge too far...

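For example, putting a container on a dedicated bridge (and thus its
own VLAN) is just a couple of lines in its .nspawn file - container
and bridge names made up here:

  # /etc/systemd/nspawn/myservice.nspawn
  [Network]
  Bridge=br-vlan20
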
--
Rich