Matthias Bethke wrote:
> The slowness is the same on SuSE and Gentoo based clients. The previous
> installation handled the same thing without any problems, which I'd
> certainly expect from a dual Xeon @3 GHz with 4 GB RAM, a Compaq
> SmartArray 642 U320 host adapter and some 200 GB in a RAID5, connected
> to the clients via GBit ethernet.
>
RAID-5? Ouch.
RAID-10 offers much better raw performance: since the individual mirrors
are striped, you get at least 4/3 the seek performance of a 4-disk
RAID-5 out of a 4-disk RAID-10 setup - depending on the controller, this
could even be double that figure (if the controller cannot do perfect
alignment in RAID-5).
Also, the throughput the clients see for linear reads/writes is far
superior to RAID-5, typically giving >20% improvement.
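For comparison, if you ever benchmark this with Linux software RAID
instead of the SmartArray, a 4-disk RAID-10 is quickly set up (device
names below are only placeholders, adjust to your setup):

    # 4-disk RAID-10: mirrored pairs striped together (placeholder devices)
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1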

> Definitely not good for GBit, but not so bad either considering it will
> have taken half a minute just to open that file. The file is complete
> despite the I/O error but the error is definitely related to the server
> load, it never happens normally (and I get 9-11s for the 100 MB).
>
>
LoadAvg of over 10 for I/O only? That is a serious problem.
I repeat, that is a *problem*, not bad performance.

Since you say the box has 4 GB of RAM, what happens when you do a linear
read of 2 or 3 GB of data, first uncached and then cached?
That should not be affected by the I/O subsystem at all.
Also, test your network speed by running netperf or iperf between
client and server.
Get some baseline values for maximum performance first!
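Something along these lines will do (file path and hostname are just
placeholders, and netserver has to be running on the server for the
netperf test):

    # uncached linear read of ~2 GB straight off the array, then repeat it
    # immediately - the second run should be served from the page cache
    dd if=/srv/bigfile of=/dev/null bs=1M count=2048
    dd if=/srv/bigfile of=/dev/null bs=1M count=2048
    # raw TCP throughput between client and server, 30-second run
    netperf -H nfs-server.example.com -l 30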

> I have 16 nfsd processes running but the problem is there even if only a
> single client is active. nfsstat on the server shows a huge number of
>
> On the client, however, I get some retransmissions and very strange
> read/write values compared to the server's. I thought of 32-bit overflow
> but the value is obviously longer, I can drive it beyond 2^32 on the
>
> I noticed a few things about the setup: the SA 642 adapter still has a
> stoneage firmware, V1.30, but we never saw a need to upgrade as it
> worked nicely with the kernel 2.4.21 cciss driver. Any known issues with
> the 2.6 kernel with this one? I just flashed the latest version and will
>

And more bla I don't understand about NFS - what about the basics?
Which versions are the server and client running?
Since both could run either v2 or v3 and in-kernel or userspace, that's
4 x 4 = 16 possible combinations right there - and that is assuming they
both run the *same* minor versions of the NFS software.
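Checking that takes about a minute from both ends, e.g. (the hostname is
a placeholder):

    # client side: mounted NFS filesystems with the negotiated options (vers=, proto, rsize/wsize)
    nfsstat -m
    # either side: which NFS versions the server actually registers with the portmapper
    rpcinfo -p nfs-server.example.com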


> And one parameter I haven't tried to tweak is the IO scheduler. I seem
> to remember a recommendation to use noop for RAID5 as the cylinder
> numbers are completely virtual anyway so the actual head scheduling
> should be left to the controller. Any opinions on this?
>
>
I have never heard of the I/O scheduler being able to influence or get
data directly from disks.
In fact, as far as I know that is not even possible with IDE or SCSI,
which both have their own abstraction layers.
What you probably mean is the way the scheduler is allowed to interface
with the disk subsystem - which is solely determined by the disk
subsystem itself.
"Heads" and "cylinders" play no part in this - it's *all* virtual as far
as the scheduler is concerned.
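That said, if you still want to try noop, recent 2.6 kernels let you
switch it at runtime (the device name below assumes the array shows up
as cciss/c0d0; older 2.6 kernels only take elevator=noop on the kernel
command line):

    # show the available schedulers, the active one is in brackets
    cat /sys/block/cciss!c0d0/queue/scheduler
    # hand all request reordering to the controller
    echo noop > /sys/block/cciss!c0d0/queue/scheduler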


I'd recommend reading the specs for the RAID controller - twice.
Also dive into the module source if you're up for it - it can reveal a
lot more than just plugging it in and adding disks.
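A quick first look at the driver is cheap:

    # module parameters and version of the cciss driver you are actually running
    modinfo cciss

The source itself lives in drivers/block/cciss.c in the kernel tree.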
J

--
gentoo-server@g.o mailing list