Hi,

On Tue, 23 Dec 2014 16:36:25 +0100 Stefan G. Weichinger wrote:
> On 23.12.2014 at 16:20, Andrew Savchenko wrote:
[...]
> > We used it about a year ago for our infrastructure (backup and live
> > sync of HA systems); obviously both servers and clients were used,
> > both on Gentoo. We stopped because of numerous kernel panics, not
> > to mention that it was quite slow even after tuning. So we switched
> > to another solution for data sync and backups: clsync. (It was
> > developed from scratch for our needs; it is not a filesystem, but
> > may be considered a more powerful alternative to lsyncd.)
> >
> > Though this was a year ago or so. Your mileage may vary, and it is
> > likely that stability has improved during this past year. Ceph is
> > very promising in both design and capabilities.
>
> I agree!
>
> I expect that there were many changes over the course of a year ...
> they went from v0.72 (5th stable release) in Nov 2013 to v0.80 (6th
> stable release) in May 2014 ... and v0.87 (7th) in Oct 2014.
>
> We get 0.80.7 in ~amd64 now ... I will see.
>
> Regarding "slow": what kind of hardware did you use and how many
> nodes/OSDs?

We used 3 servers, where each server was both a node and an OSD
(that's our hardware limitation). Each machine had hardware along the
lines of 2x Xeon E5450, 16 GB of RAM and 2 Gbps network connectivity
(via bonding of two 1 Gbps interfaces).
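
For reference, a bond like ours can be set up with iproute2 roughly
as follows; the interface names and bonding mode here are just an
example (802.3ad needs LACP support on the switch), not necessarily
what we ran:

    modprobe bonding
    ip link add bond0 type bond mode 802.3ad miimon 100
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set bond0 up
    ip addr add 192.168.0.11/24 dev bond0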

We went through a lot of software and kernel tuning; this helped to
solve many issues, but not all of them: Ceph nodes still got kernel
panics once in a while. This was unacceptable, so we moved to other
approaches for our problems.
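
Just to give an idea (these are example values, not an exact record
of what we tuned), the kernel side of such tuning typically means
network buffer sysctls, e.g. in /etc/sysctl.conf:

    # example values only
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216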

Best regards,
Andrew Savchenko