On 19/12/22 21:30, Rich Freeman wrote:
> On Mon, Dec 19, 2022 at 7:51 AM Wols Lists <antlists@××××××××××××.uk> wrote:
>> On 19/12/2022 12:00, Rich Freeman wrote:
>>> On Mon, Dec 19, 2022 at 12:11 AM Dale <rdalek1967@×××××.com> wrote:
>>>> If I like these Raspberry things, may make a media box out of one. I'd
>>>> like to have a remote tho. 😉
>>> So, I've done that. Honestly, these days a Roku is probably the
>>> better option, or something like a Google Chromecast or the 47 other
>>> variations on this theme.
>> Where do you put that 2TB drive on your Roku or Chromecast?
>>
>> I'm thinking of building a media server, not to drive the TV, but to
>> record and store. I thought that was what a media server was!
> So, he said "media box," which I assumed meant the client that
> attaches to the TV. There are some canned solutions for media servers
> - I think the NVidia Shield can run Plex server for example. However,
> in general server-side I'd go amd64.
>
> My current solution is:
> 1. Moosefs for storage: amd64 container for the master, and ARM SBCs
> for the chunkservers which host all the USB3 hard drives. With a
> modest number of them performance is very good, though certainly not
> as good as Ceph or local storage. (I do have moosefs in my overlay -
> might try to get that into the main repo when I get a chance.)
> 2. Plex server in a container on amd64 (looking to migrate this to k8s
> over the holiday).
> 3. Rokus or TV apps for the clients.

Very similar to what I have (Intel/ARM for moosefs) - I am effectively
using moosefs as a distributed NAS (FUSE-mounted onto whatever system(s)
I am using) with built-in data protection and redundancy. LVM and
similar pooling is discouraged as it defeats some of the built-in data
protection. To increase storage, just add a disk, format it, add it to
the config and reload - moosefs automatically redistributes the data.
Similarly, you can add/remove disks or whole storage systems while live
with no risk to your data (within limits!). With LVM, if a drive fails,
you are SOL and offline until you can recover and restore the data. On a
recent holiday, an SD card failed and a moosefs ARM SBC in AU went
offline - I discovered this the next morning when doing status checks
from a ship in the Mediterranean(!) - it had already backfilled and
protection was back to normal; moosefs was just missing 2TB of storage
space. Five weeks later, when I got home, I replaced the SD card,
rebooted and re-added the system, all with no risk to the data.
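For anyone following along, the add-a-disk step above is roughly the
following on a chunkserver - a sketch only, with the device name, the
filesystem choice and the mount point being examples, not my actual
setup:

```shell
# Add a new disk to a MooseFS chunkserver (example device/mount point).
mkfs.xfs /dev/sdb1                    # format the new disk
mkdir -p /mnt/mfschunk2
mount /dev/sdb1 /mnt/mfschunk2
chown mfs:mfs /mnt/mfschunk2          # chunkserver runs as the mfs user

# Register the disk in the chunkserver's disk list and reload;
# the master then redistributes chunks onto it automatically.
echo '/mnt/mfschunk2' >> /etc/mfs/mfshdd.cfg
mfschunkserver reload
```

Removing a disk gracefully works the same way in reverse: prefix its
line in mfshdd.cfg with `*` to mark it for removal, reload, and wait
for moosefs to migrate the chunks off before unmounting it.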
Dale, I was where you are about 10 or so years ago and was forced to
move on when that design hit its limits - forget LVM etc.; these days
there are lots of better ways to do what you want with less risk to
your data. Another factor is power - moosefs is currently 1 Intel and
7 ARM SBCs that use 90-110 W (most of which is due to using ancient WD
and Seagate hard drives) - whereas my Intel desktop draws 90 W when
idle, or over 300 W when compiling etc., so it's off unless it's being
used. Power is important to me as it's expensive!
BillK |