Rich Freeman wrote:
> On Fri, Aug 26, 2022 at 7:26 AM Dale <rdalek1967@×××××.com> wrote:
>> I looked into the Raspberry and the newest version, about $150 now, doesn't even have SATA ports.
> The Pi4 is definitely a step up from the previous versions in terms of
> IO, but it is still pretty limited. It has USB3 and gigabit, and they
> don't share a USB host or anything like that, so you should get close
> to full performance out of both. The CPU is of course pretty limited,
> as is RAM. The biggest benefit is the super-low power consumption, and
> that is something I take seriously: for a lot of cheap hardware that
> runs 24x7, the power cost rapidly exceeds the purchase price. I see
> people buying old servers for $100 or whatever, and those things will
> often go through $100 worth of electricity in a few months.
>
> How many hard drives are you talking about? There are two general
> routes to go for something like this. The simplest and most
> traditional way is a NAS box of some kind, with RAID. The issue with
> that approach is that you're limited by the number of hard drives
> you can run off of one host, and of course if anything other than a
> drive fails you're offline. The other approach is a distributed
> filesystem. That ramps up the learning curve quite a bit, but for
> something like media, where IOPS doesn't matter, it eliminates the
> need to try to cram a dozen hard drives into one host. Ceph can also
> do IOPS, but then you're talking 10GbE + NVMe and big bucks, and that
> is how modern server farms would do it.
>
> I'll describe the traditional route since I suspect that is where
> you're going to end up. If you only had 2-4 drives total you could
> probably get away with a Pi4 and USB3 drives, but if you want
> encryption or anything CPU-intensive you're probably going to
> bottleneck on the CPU. It would be fine if you're more concerned with
> capacity than performance.
>
> For more drives than that, or just to be more robust, any standard
> amd64 build will be fine. Obviously a motherboard with lots of SATA
> ports will help here. However, the port count is almost always a
> bottleneck on consumer gear, and the typical solution to that for SATA
> is a host bus adapter. They're expensive new, but cheap on ebay (I've
> had them fail though, which is probably why companies tend to sell
> them while they're still working). They also use a ton of power -
> I've measured them using upwards of 60W - they're designed for servers
> where nobody seems to care. A typical HBA can provide 8-32 SATA
> ports, via mini-SAS breakout cables (one mini-SAS port can provide 4
> SATA ports). HBAs tend to use a lot of PCIe lanes - you don't
> necessarily need all of them if you only have a few drives and they're
> spinning disks, but it is probably easiest if you get a CPU with
> integrated graphics and use the 16x slot for the HBA. That, or get a
> motherboard with two large slots (the second slot usually isn't 16x,
> and even 4x-8x slots aren't super-common on consumer motherboards).
>
> For software I'd use mdadm plus LVM. ZFS or btrfs are your other
> options, and those can run on bare metal, but btrfs is immature and
> ZFS cannot be reshaped the way mdadm can, so there are tradeoffs. If
> you want to reuse your existing drives and don't have a backup to
> restore from (or you want to do the migration live), then the easiest
> option is to add one new drive to the system to expand capacity. Put
> mdadm on that drive as a degraded raid1 or whatever, then put LVM on
> top, and migrate data from an existing disk live over to the new one,
> freeing up one or more existing drives. Then put mdadm and LVM on
> those and migrate more data onto them, and so on, until everything is
> running on top of mdadm. Of course you need to plan how you want the
> array to look and have enough drives that you get the desired level
> of redundancy. You can start with degraded arrays (which is no worse
> than what you have now), then when enough drives are freed up they
> can be added as pairs to fill it out.
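
If I'm reading that right, the commands would be roughly along these
lines - device names (/dev/sdX, /dev/sdY, /dev/md0) and the volume
group name (myvg) are just placeholders, and this assumes the old
drives are already LVM PVs so pvmove can do the move live; if not,
that step is a plain file copy instead:

  # create a degraded raid1 with only the new drive for now
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX missing

  # put LVM on top of the new array
  pvcreate /dev/md0
  vgextend myvg /dev/md0     # or vgcreate myvg /dev/md0 for a new VG

  # move extents off an old disk while everything stays mounted,
  # then drop the emptied disk from the volume group
  pvmove /dev/sdY /dev/md0
  vgreduce myvg /dev/sdY
  pvremove /dev/sdY

  # once a drive is freed up, it can complete the mirror
  # (may need wipefs -a /dev/sdY first to clear old signatures)
  mdadm --manage /dev/md0 --add /dev/sdY
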
65 |
> |
66 |
> If you want to go the distributed storage route then CephFS is the |
67 |
> canonical solution at this point but it is RAM-hungry so it tends to |
68 |
> be expensive. It is also complex, but there are ansible playbooks and |
69 |
> so on to manage that (though playbooks with 100+ plays in them make me |
70 |
> nervous). For something simpler MooseFS or LizardFS are probably |
71 |
> where I'd start. I'm running LizardFS but they've been on the edge of |
72 |
> death for years upstream and MooseFS licensing is apparently better |
73 |
> now, so I'd probably look at that first. I did a talk on lizardfs |
74 |
> recently: https://www.youtube.com/watch?v=dbMRcVrdsQs |
75 |
> |
76 |
|
77 |
|
This is some good info. It will likely start off with just a few hard
drives but that will grow over time. I also plan to keep a large drive
on hand as a spare, in case one starts having issues and needs
replacing quickly. I'd really like to be using RAID, at least the
mirrored two-copies kind, but that may take time, plus I've got to
learn how to do the thing. ;-)
83 |
|
84 |
I may use NAS software. I've read/seen people use that and it is on a |
85 |
USB stick or something. They say once it is booted up, it doesn't need |
86 |
to access the USB stick much. I guess it is set to load into memory at |
87 |
boot up, like some rescue media does. I think that uses ZFS for the |
88 |
file system which is a little like LVM. The bad thing about using that |
89 |
tho, I may not be able to just move the drives over to the new NAS since |
90 |
it may not accept LVM well. I don't know much on that yet. I may end |
91 |
up having to buy drives and just rsync them over. One good thing, I |
92 |
have a 1GB capable router. It even has fast wifi if I get that. Plus, |
93 |
I'll have extra drives depending on how I work this. |
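
If it comes to the rsync route, I'm guessing the copy would look
something like this (the paths and the "nas" host name are just
placeholders I made up):

  # -a keeps permissions and times, -H preserves hard links,
  # -x stays on one filesystem, --progress shows how far along it is
  rsync -aHx --progress /data/media/ nas:/pool/media/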

My first thing is to buy a case to organize all this. I looked at a
Fractal Design Node 804 case. It is a NAS case. I think it can handle
a wide variety of mobos too. It can also hold a lot of drives, eight
if I recall. If I put the OS on a USB stick or something, that leaves
a lot of drive bays. I might be able to add a drive cage if needed.
It's a fair sized case and certainly a good start.
101 |
|
102 |
It will take me a while to build all this. Thing is, since I'll use it |
103 |
like a backup tool, I need to be able to put it in a safe place. I wish |
104 |
I could get a fire safe to stick it into. The biggest part is 16" I |
105 |
think. Plenty of cooling as well. I wish I had another Cooler Master |
106 |
HAF-932 like I have now but that almost certainly won't fit in a fire |
107 |
safe. Dang thing is huge. ROFL |

Lots to think on.

Dale

:-)  :-)