On 20/01/15 05:10, Rich Freeman wrote:
> On Mon, Jan 19, 2015 at 11:50 AM, James <wireless@×××××××××××.com> wrote:
>> Bill Kenworthy <billk <at> iinet.net.au> writes:
>>
>> I was wondering what my /etc/fstab should look like using uuids, raid 1 and
>> btrfs.
>
> From mine:
> /dev/disk/by-uuid/7d9f3772-a39c-408b-9be0-5fa26eec8342 /      btrfs  noatime,ssd,compress=none
> /dev/disk/by-uuid/cd074207-9bc3-402d-bee8-6a8c77d56959 /data  btrfs  noatime,compress=none
>
> The first is a single disk, the second is 5-drive raid1.
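
(Side note for anyone copying this: the by-uuid values can be read straight
off the devices - roughly like this, device names and mount points being
examples only:

  blkid /dev/sda1                # prints the filesystem UUID to use in fstab
  btrfs filesystem show /data    # shows the one UUID shared by all raid1 members

so a single by-uuid entry is enough to cover the whole array.)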
>
> I disabled compression due to some bugs a few kernels ago. I need to
> look into whether those were fixed - normally I'd use lzo.
>
> I use dracut - obviously you need to use some care when running root
> on a disk identified by uuid since this isn't a kernel feature. With
> btrfs as long as you identify one device in an array it will find the
> rest. They all have the same UUID though.
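
(To make the "not a kernel feature" point concrete: with dracut the uuid is
resolved inside the initramfs, so the kernel command line ends up looking
something like this - the uuid here is just the root one from the fstab above:

  root=UUID=7d9f3772-a39c-408b-9be0-5fa26eec8342 rootfstype=btrfs

and the initramfs runs something like "btrfs device scan" before mounting,
which is how naming one member of a multi-device filesystem pulls in the rest.)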
>
> Probably also worth noting that if you try to run btrfs on top of lvm
> and then create an lvm snapshot, btrfs can cause spectacular breakage
> when it sees two devices whose metadata identify them as being the
> same - I don't know where that discussion went, but there was talk of
> trying to use a generation id/etc to keep track of which ones are old
> vs recent in this scenario.
>
>>
>> Eventually, I want to run CephFS on several of these raid one btrfs
>> systems for some clustering code experiments. I'm not sure how that
>> will affect, if at all, the raid 1-btrfs-uuid setup.
>>
>
> Btrfs would run below CephFS I imagine, so it wouldn't affect it at all.
>
> The main thing keeping me away from CephFS is that it has no mechanism
> for resolving silent corruption. Btrfs underneath it would obviously
> help, though not for failure modes that involve CephFS itself. I'd
> feel a lot better if CephFS had some way of determining which copy was
> the right one other than "the master server always wins."
>

Forget ceph on btrfs for the moment - the COW kills it stone dead after
real use. When I was running a small handful of VMs on a raid1 with ceph -
sloooooooooooow :)

You can turn off COW and go single-profile on btrfs to speed it up, but bugs
in ceph and btrfs lose data real fast!
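
(If you want to try it anyway, the usual approach - as far as I know - is to
mark the data directories NOCOW before any files land in them and to convert
the data profile back to single, along these lines, paths being examples only:

  chattr +C /srv/ceph/osd                         # new files in this dir skip copy-on-write
  btrfs balance start -dconvert=single /srv/ceph  # rewrite existing data chunks as single

or mount with -o nodatacow - but as above, I wouldn't trust it with data you
care about.)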

ceph itself (my last setup trashed itself 6 months ago and I've given
up!) will only work under real use/heavy loads with lots of discrete
systems, ideally a 10G network, and small disks to spread the failure
domain. Using 3 hosts and 2x2g disks per host wasn't nearly big enough :(
Its design means that small-scale trials just won't work.

It's not designed for small-scale/low-end hardware, no matter how
attractive the idea is :(

BillK