Andrew Savchenko <bircoph <at> gmail.com> writes:

> Ceph is optimized for btrfs by design, it has no configure options
> to enable or disable btrfs-related stuff:
> https://github.com/ceph/ceph/blob/master/configure.ac
> No configure option => no use flag.

Good to know; nice script. |
> Just use the latest (0.80.7 ATM). You may just rename and rehash
> the 0.80.5 ebuild (usually this works fine). Or you may stay with
> 0.80.5, but with fewer bug fixes.

So I just download from ceph.com, put it in distfiles, and copy-edit
a ceph-0.80.7 ebuild into my /usr/local/portage, or is there an
overlay somewhere I missed?
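For the archives, the local bump I have in mind would look roughly like this (a sketch, assuming the 0.80.5 ebuild works unchanged for 0.80.7 and that /usr/local/portage is already configured as a local overlay; category/paths are my setup):

```shell
# Copy the existing ebuild into the local overlay under the new version
mkdir -p /usr/local/portage/sys-cluster/ceph
cp /usr/portage/sys-cluster/ceph/ceph-0.80.5.ebuild \
   /usr/local/portage/sys-cluster/ceph/ceph-0.80.7.ebuild
cp -r /usr/portage/sys-cluster/ceph/files \
   /usr/local/portage/sys-cluster/ceph/

# Regenerate the Manifest: fetches the 0.80.7 tarball and rehashes it
cd /usr/local/portage/sys-cluster/ceph
ebuild ceph-0.80.7.ebuild manifest

# Then install the bumped version from the overlay
emerge -av =sys-cluster/ceph-0.80.7
```

The `ebuild ... manifest` step is what Andrew's "rename and rehash" refers to, if I understand correctly.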
> If the raid is supposed to be read more frequently than written to,
> then my favourite solution is raid-10-f2 (2 far copies, perfectly
> fine for 2 disks). This will give you the read performance of raid-0
> and the robustness of raid-1, though write i/o will be somewhat
> slower due to more seeks. It also depends on workload: if you'll
> have a lot of independent read requests, raid-1 will be fine too.
> But for large read i/o from a single client or a few clients,
> raid-10-f2 is the best imo.
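For reference, creating that far-layout array with mdadm would look like this (a sketch; the md device and partition names are placeholders for my two disks):

```shell
# Create a 2-disk RAID-10 with 2 far copies (raid-10-f2):
# reads stripe across both disks like RAID-0, while every block
# still has a copy on each disk, like RAID-1
mdadm --create /dev/md0 --level=10 --layout=f2 \
      --raid-devices=2 /dev/sda1 /dev/sdb1

# Verify the layout took effect (the detail output should report
# a far=2 layout) before putting a filesystem on it
mdadm --detail /dev/md0
mkfs.ext4 /dev/md0
```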
Interesting. For now I'm going to stay with simple mirroring. After
some time I might migrate to a more aggressive FS arrangement, once
I have a better idea of the i/o needs. With Spark (RDD) on top of
Mesos, I'm shooting for mostly "in-memory" usage, so i/o is not very
heavily used. We'll just have to see how things work out.

One last point: I'm using openrc, not systemd, at this time. Are
there any known ceph issues with openrc? I do see systemd-related
items in ceph.
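In case it helps anyone searching the archives later, the openrc side I'm expecting to use looks like this (the single "ceph" service name is an assumption on my part; check what the ebuild actually installs under /etc/init.d):

```shell
# See which init scripts the ceph ebuild actually installed
ls /etc/init.d/ | grep ceph

# Start the daemons now and add them to the default runlevel
# (assumes a single "ceph" init script; adjust to the real name)
rc-service ceph start
rc-update add ceph default
```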

> Andrew Savchenko

Very good advice.
Thanks,
James