Gentoo Archives: gentoo-server

From: Jeroen Geilman <jeroen@××××××.nl>
To: gentoo-server@l.g.o
Subject: Re: [gentoo-server] Atrocious NFS performance
Date: Sat, 22 Apr 2006 20:31:39
Message-Id: 444A9183.4040400@adaptr.nl
In Reply to: Re: [gentoo-server] Atrocious NFS performance by Matthias Bethke
Matthias Bethke wrote:
> Hi Jeroen,
>
> To make a long story short (I'll tell it anyway for the archives' sake),
> the problem seems solved so far. After recompiling the kernel with 100Hz
> ticks I get a slight increase in latency when doing the same things as
> yesterday on the server but it feels just as responsive as before.
> Lacking any benchmarks from before the change, that's the best measure I
> have. It might have been the firmware upgrade, but I doubt it. If I have
> to reboot any time soon, maybe I'll do another test to verify it.
>
Rrright.. that's the big disadvantage of having to sort this stuff out
on a production machine :)
We're only now starting to convince (read: force) our developers *not*
to put live code on production systems /before/ we have had a chance to
test it.
You know how it goes...
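
(Incidentally, if you ever want to double-check which tick rate a running
kernel was actually built with, without digging up the old .config, you can
grep the config out of the kernel itself - assuming CONFIG_IKCONFIG_PROC
was enabled so /proc/config.gz exists:

    zgrep CONFIG_HZ /proc/config.gz
    # or, against the source tree the kernel was built from:
    grep CONFIG_HZ /usr/src/linux/.config

Just a sanity check, nothing more.)
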
> on Wednesday, 2006-04-19 at 20:42:09, you wrote:
>
>>> The slowness is the same on SuSE and Gentoo based clients. The previous
>>> installation handled the same thing without any problems, which I'd
>>> certainly expect from a dual Xeon @3 GHz with 4 GB RAM, a Compaq
>>> SmartArray 642 U320 host adapter and some 200 GB in a RAID5, connected
>>> to the clients via GBit ethernet.
>> RAID-5? Ouch.
>> RAID-10 offers a much better raw performance; since individual mirrors
>> are striped, you get at least 4/3 the seek performance of a 4-disk
>>
>
> Yeah, but also at 2/3 the capacity. I know RAID5 isn't exactly
> top-notch, but as long as the controller takes care of the checksumming
> and distribution and the CPU doesn't have to, it's good enough for our
> site. That's mostly students doing their exercises, webbrowsing, some
> programming, usually all with small datasets. The biggest databases are
> about two gigs and the disks write at just above 40 MB/s.
>
Okay, all valid, but still - that's not even close to the practical
maximum of GbE (as opposed to its theoretical limit), which tops out at
around 80~90 MB/sec.
You can and will reach that in linear reads from a RAID-10 :)
Not only that, but the I/O *latency* also decreases substantially when
/not/ using RAID-5.
As I think I already mentioned, but heh.

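If you want a quick, filesystem-free baseline for what the array itself
can sustain, a raw sequential read straight off the block device works -
the device name is an assumption (SmartArray logical drives usually show
up as /dev/cciss/c0d0), and pick a quiet moment since it hammers the disks:

    dd if=/dev/cciss/c0d0 of=/dev/null bs=1M count=2048

Compare that to what you see over NFS and you know roughly how much the
network/NFS side is costing you.
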
>> LoadAvg of over 10 for I/O only? That is a serious problem.
>> I repeat, that is a *problem*, not bad performance.
>>
>
> Huh? No, 9 to 11 seconds, i.e. ~10 MB/s. I don't see how this benchmark
> could possibly bring my load up that much, after all it's just one
> process on the client and one on the server.
>
Okay, slight misunderstanding there, then.

>> Since you say the box has 4GB of RAM, what happens when you do a linear
>> read of 2 or 3 GB of data, first uncached and then cached?
>> That should not be affected by the I/O subsystem at all.
>>
>
> Writing gives me said 40 MB/s, reading it back (dd to /dev/null in 1 MB
> chunks) is 32 MB/s uncached (*slower* than writes? Hm, controller
> caching maybe...), ~850 MB/s cached.
>
The 850MB/sec is about what you'd expect when the data only has to go
from memory through the VFS layer.
But writing being faster than reading... hmmm... I remember that usually
only happening with RAID-1 - but perhaps that's one of the
idiosyncrasies of that controller.
Still, it should at least reach the 30MB/sec you get when using any
other protocol.
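
For what it's worth, this is roughly how I'd run that comparison so the
write number isn't flattered by the page cache and the first read is
genuinely uncached (the mount point and file name are just placeholders,
and conv=fsync needs a reasonably recent GNU dd):

    # write test; conv=fsync makes dd wait until the data is on disk
    dd if=/dev/zero of=/data/ddtest bs=1M count=3072 conv=fsync
    # remount to drop the page cache, then read it back uncached
    umount /data && mount /data
    dd if=/data/ddtest of=/dev/null bs=1M
    # run the same read again for the cached figure
    dd if=/data/ddtest of=/dev/null bs=1M
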
>> Also, test your network speed by running netperf or ioperf between
>> client and server.
>> Get some baseline values for maximum performance first!
>>
>
> I didn't test it as the only thing I changed was the server software and
> it was just fine before. And it *is* fine as long as the server disks
> aren't busy. Theoretically it could be that the Broadcom NIC driver
> started sucking donkey balls in kernel 2.6, but ssh and stuff are just
> fine and speedy (~30 MB/s for a single stream of zeroes).
>
Still only ~300 Mbit/s, though... a server like that should be able to
handle SSH at near-line speed.
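
If you do end up wanting a baseline, netperf makes it painless - the
hostname below is just a placeholder:

    # on the server
    netserver
    # on the client; TCP_STREAM is the default bulk-transfer test,
    # -l is the test length in seconds
    netperf -H fileserver -l 30

That takes the disks out of the picture entirely, so whatever it reports
is your real network ceiling.
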
>> And more bla I don't understand about NFS - what about the basics?
>> Which versions are the server and client running?
>> Since both could run either v2 or v3 and in-kernel or userspace, that's
>> 4 x 4 = 16 possible combinations right there - and that is assuming they
>> both run the *same* minor versions of the NFS software.
>>
>
> It's v3, that's why I snipped the unused v2 portions of nfsstat output.
> Both server and client are in-kernel---the client could only be
> userspace via FUSE, right?---and the latest stable versions,
> nfs-utils-1.0.6-r6, gentoo-sources-2.6.15-r1 on the client and
> hardened-sources-2.6.14-r7 on the server.
>
>
Okay, forget I mentioned it - you seem to have that part covered :)
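
One thing still worth pinning down for the archives is the mount options,
since rsize/wsize and TCP vs. UDP make a real difference with v3. The
export path and sizes below are only an example, not necessarily what
you run:

    # on the client: see what the existing mount actually negotiated
    nfsstat -m
    # an explicit v3-over-TCP mount with larger transfer sizes
    mount -t nfs -o nfsvers=3,tcp,rsize=32768,wsize=32768,hard,intr \
        fileserver:/export /mnt/nfs
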
>>> And one parameter I haven't tried to tweak is the IO scheduler. I seem
>>> to remember a recommendation to use noop for RAID5 as the cylinder
>>> numbers are completely virtual anyway so the actual head scheduling
>>> should be left to the controller. Any opinions on this?
>>>
>>>
>> I have never heard of the I/O scheduler being able to influence or get
>> data directly from disks.
>> In fact, as far as I know that is not even possible with IDE or SCSI,
>> which both have their own abstraction layers.
>> What you probably mean is the way the scheduler is allowed to interface
>> with the disk subsystem - which is solely determined by the disk
>> subsystem itself.
>>
>
> OK, that was a bit misleading, I meant that assumptions about the flat
> file the scheduler sees of the disk, like that offsets in the file sort
> of linearly correspond to cylinders
That *may* be true for old IDE drives, but it isn't even remotely true
for SCSI, which is its own higher-level abstraction on top of the
physical drive interface already, not to mention the layer the
SmartArray puts on top of /that/.
> ---which is what it does to
> implement things like the elevator algorithm---are virtually always
> right for simple drives but may not be for a RAID.
>
Erm.. that's what I said :)
One thing I've noticed when working with SmartArray controllers is that
they truly are rather smart, i.e. it's hard to peek under the hood and
understand exactly what is happening.
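
If you do want to experiment with noop anyway, on a 2.6 kernel you can
switch it per device at runtime, provided the noop elevator was compiled
in. The device name is a guess - SmartArray logical drives appear under
/sys/block as cciss!c0d0, and the path needs quoting in an interactive
shell because of the '!':

    # show the available schedulers; the active one is in brackets
    cat '/sys/block/cciss!c0d0/queue/scheduler'
    # switch to noop on the fly
    echo noop > '/sys/block/cciss!c0d0/queue/scheduler'
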

Unless the money has to come out of a tight budget I would seriously
recommend you invest in one or both of:
A. the BBWC available for most SmartArray controllers (128MB of r/w
cache), or
B. a slew of 146GB 10k rpm drives to fuel a RAID-10 set of, say, 300 GB
(4 drives).
It's a hell of a lot more redundant as well...
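(Rough numbers, assuming four 146 GB spindles either way: RAID-5 gives
you 3 x 146 = ~438 GB usable, RAID-10 gives you 2 x 146 = ~292 GB usable
but can lose one disk in each mirror and serve any read from either half
of a mirror - which is where the seek advantage comes from.)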

Good luck!

J

--
gentoo-server@g.o mailing list

Replies

Subject Author
Re: [gentoo-server] Atrocious NFS performance Matthias Bethke <matthias@×××××××.de>