On 24.12.2014 at 02:02, Andrew Savchenko wrote:

>> Ad "slow": what kind of hardware did you use and how many nodes/OSDs?
>
> We used 3 servers, where each server was both a node and an OSD (that's
> our hardware limitation). Each machine had hardware like 2x
> Xeon E5450, 16 GB RAM and 2 Gbps network connectivity (via bonding of
> two 1 Gbps interfaces).
>
> We went through a lot of software and kernel tuning; this helped to
> solve many issues, but not all of them: the Ceph nodes still got kernel
> panics once in a while. This was unacceptable, so we moved on to
> other approaches to our issues.

Hmm, that dampens my enthusiasm ;-)

I watched a presentation on YouTube yesterday where they recommended one
SSD as journal per ~4 hard disks ... and 4-8 hard disks per OSD node
maximum (if I remember correctly). Plus ~1 GHz / 1 core of CPU per OSD
... as a rule of thumb. And ~500 MB RAM per OSD ... those were the
recommendations in

http://youtu.be/C3lxGuAWEWU

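For what it's worth, those rules of thumb reduce to a quick back-of-the-envelope calculation. Here's a small sketch (my own helper, not anything official from Ceph or the talk) that turns a disk count into rough per-node requirements:

```python
# Rough OSD node sizing from the rule-of-thumb numbers above:
# 1 journal SSD per ~4 HDDs, ~1 core / ~1 GHz of CPU per OSD,
# ~500 MB RAM per OSD, and at most 4-8 HDDs (= OSDs) per node.
import math

def size_osd_node(num_hdds):
    """Return rough per-node requirements for num_hdds OSD disks."""
    assert 1 <= num_hdds <= 8, "talk recommended at most 4-8 HDDs per node"
    return {
        "journal_ssds": math.ceil(num_hdds / 4),  # 1 SSD per ~4 HDDs
        "cpu_cores": num_hdds,                    # ~1 core / 1 GHz per OSD
        "ram_mb": 500 * num_hdds,                 # ~500 MB RAM per OSD
    }

# e.g. a full 8-disk node needs 2 journal SSDs, 8 cores and ~4 GB RAM
print(size_osd_node(8))
```

So even a maxed-out node stays modest on RAM and CPU; the journal SSDs are the main added cost.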
Did you have the journals separated onto SSDs?
I think that would make quite a difference in both performance and cost ;)

Do you remember the kernel version and Ceph version?

How many disks / OSDs?

Sorry for being so curious ...

Thanks, Stefan