
From: Rich Freeman <rich0@g.o>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] Hard drive storage questions
Date: Fri, 09 Nov 2018 13:25:35
Message-Id: CAGfcS_mO8X+ao4apOi5de1jm70L=OC9gumiV+v6FZ+nwPZVDYQ@mail.gmail.com
In Reply to: Re: [gentoo-user] Hard drive storage questions by Bill Kenworthy
On Fri, Nov 9, 2018 at 3:17 AM Bill Kenworthy <billk@×××××××××.au> wrote:
>
> I'll second your comments on ceph after my experience - great idea for
> large-scale systems, otherwise performance is quite poor on small
> systems. Needs at least gigabit connections with two networks, as well
> as only one or two drives per host, to work properly.
>
> I think I'll give lizardfs a go - an interesting read.
>

So, ANY distributed/NAS solution is going to want a good network
(gigabit or better) if you care about performance. With Ceph and its
rebuilds/etc it probably makes an even bigger difference, but lizardfs
still shuttles data around. With replication, any kind of write is
multiplied, so even moderate use is going to use a lot of network
bandwidth. If you're talking about hosting OS images for VMs it is a
big deal. If you're talking about hosting TV shows for your Myth
server or whatever, it probably isn't as big a deal unless you have 14
tuners and 12 clients.

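As a rough back-of-the-envelope (illustrative numbers only, assuming a
plain 2-copy goal and chained replication):

   50 MB/s  client -> first chunkserver
 + 50 MB/s  first chunkserver -> replica chunkserver
 ----------
 ~100 MB/s  of cluster traffic, which is already most of what a single
            gigabit link (~110-118 MB/s in practice) can carry
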
Lizardfs isn't without its issues. For my purposes it is fine, but it
is NOT as robust as Ceph. Finding direct comparisons online is
difficult, but here are some of my observations (I use lizardfs
myself; for Ceph I've mostly just read up on it):

* Ceph (esp for obj store) is designed to avoid bottlenecks. Lizardfs
has a single master server that ALL metadata requests have to go
through. When you start getting into dozens of nodes that will start
to be a bottleneck, but it also eliminates some of the rigidity of
Ceph, since clients don't have to know where all the data is. I
imagine it adds a bit of latency to reads.

* Lizardfs defaults to acking writes after the first node receives
them, then replicates them. Ceph defaults to acking after all
replicas are made. For any application that takes transactions
seriously there is a HUGE data-safety difference, but it of course
lowers write latency for lizardfs.

* Lizardfs makes it a lot easier to tweak storage policy at the
directory/file level. Cephfs basically does this more at the
mountpoint level.

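For instance (just a sketch - the paths are made up and I'm assuming
the stock lizardfs CLI with the default numeric goal names):

  lizardfs setgoal -r 3 /mnt/lizardfs/photos    # 3 copies, recursive
  lizardfs setgoal -r 2 /mnt/lizardfs/scratch   # 2 copies is enough
  lizardfs getgoal /mnt/lizardfs/photos         # check what is set
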
* Ceph CRUSH maps are much more configurable than Lizardfs goals.
With Ceph you could easily say that you want 2 copies, and they have
to be on hard drives from different vendors, and in different
datacenters. With Lizardfs, combining tags like this is less
convenient, and while you could say that you want one copy in rack A
and one in rack B, you can't just ask for two different racks without
caring which two they are.

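To illustrate (sketches, not drop-in configs - the rule/goal names and
rack labels are invented): a CRUSH rule spreading copies across
datacenters looks something like

  rule replicated_across_dc {
      id 1
      type replicated
      step take default
      step chooseleaf firstn 0 type datacenter
      step emit
  }

while the closest lizardfs equivalent is a custom goal in mfsgoals.cfg
that pins each copy to a specific chunkserver label:

  # one copy on a rackA-labelled chunkserver, one on rackB
  10 rack_ab : rackA rackB
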
* The lizardfs high-availability stuff (equiv of Ceph monitors) only
recently went FOSS, and probably isn't stabilized on most distros.
You can have backup masters that are ready to go, but you need your
own solution for promoting them.

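The backup ("shadow") master itself is just a config switch; it's the
automatic promotion you have to supply yourself. Roughly (assuming the
stock mfsmaster.cfg; the failover logic is up to you):

  # mfsmaster.cfg on the standby box
  PERSONALITY = shadow

  # manual failover: flip it to master and restart/reload the service
  PERSONALITY = master
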
* Lizardfs security seems to be non-existent. Don't stick it on your
intranet if you are a business. It's fine for home, or maybe for a
segregated SAN, or you could stick it all behind some kind of VPN and
roll your own security layer. Ceph security seems pretty robust, but
watching what the ansible playbook did to set it up makes me shudder
at the thought of doing it myself: lots of keys that all need to be
in sync so that everything can talk to everything else. I'm not sure
whether clients can outsource authentication to kerberos/etc - not a
need for me, but I wouldn't be surprised if it is supported. The key
syncing makes a lot more sense within the cluster itself.

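For a sense of the Ceph (cephx) side, creating a client key scoped to
a single pool looks like this (client and pool names are made up):

  ceph auth get-or-create client.media \
      mon 'allow r' \
      osd 'allow rw pool=media'
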
* Lizardfs is MUCH simpler to set up. For Ceph I recommend the
ansible playbook, though if I were using it in production I'd want to
do some serious config management, as it seems rather complex and
like the sort of thing that could take out half a datacenter if it
had a bug. For Lizardfs, if you're willing to use the suggested
hostnames, about 95% of it is auto-configuring: storage nodes just
reach out to the default master DNS name and report in, and
everything trusts everything (not just by default - I don't think you
even can lock it down, unless you stick every node behind a VPN to
limit who can talk to whom).

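Concretely, a storage node needs little more than the default
chunkserver config plus a list of disks (defaults shown; the data
paths are made up):

  # mfschunkserver.cfg - default is to look up the name "mfsmaster"
  MASTER_HOST = mfsmaster

  # mfshdd.cfg - directories this node donates to the cluster
  /data/chunks1
  /data/chunks2
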
--
Rich