Well, the weekend was the real test of this change. The result was
over 1.1TB written and 15.2TB served in a 3-day period, with a peak
rate of 117MB/s over copper gigabit. So it survived its first
beating.

Someone mentioned that in some cases NOOP would be a better choice
for external storage units. I have 3 external storage units which
all run some sort of Linux kernel internally, using LVM to expose
RAID volumes to the host via SCSI. Do you think these would be good
candidates for NOOP? If so, is there a way to set the default IO
scheduler to CFQ for everything except these units? Or would I just
have to do it with bash in the local init.d script?
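If it comes down to the init script route, here's a rough sketch of
what I had in mind, assuming the units show up as sdb and sdc
(hypothetical names; /proc/partitions would show the real ones) and
that the kernel is booted with elevator=cfq so everything else
defaults to CFQ:

    #!/bin/bash
    # elevator=cfq on the kernel command line makes CFQ the default
    # for every block device; flip only the external units to noop.
    # Relies on the runtime scheduler switching that went into the
    # kernel around 2.6.10 (/sys/block/<dev>/queue/scheduler).
    for dev in sdb sdc; do          # hypothetical device names
        echo noop > /sys/block/$dev/queue/scheduler
    done
    # Reading the file back lists the available schedulers with the
    # active one in brackets:
    cat /sys/block/sdb/queue/scheduler

No idea whether that's the cleanest way, so corrections are welcome.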

--David

On 8/5/05, David Miller <david3d@×××××.com> wrote:
> I just had a breakthrough with my main file server here in its
> ability to handle a high load with lots of reads/writes going on
> simultaneously, so I figured I would pass this on in hopes that it
> will help someone with a similar problem.
>
> The setup:
> Dual Xeon 2.8GHz, 2GB RAM, LSI 320 PCI-X dual-channel SCSI
> controller with about 18TB of SATA-over-SCSI attached storage, all
> in RAID 5. All storage is managed using LVM2.
>
> The workload:
> This server does a lot more reads than writes and is almost always
> serving files over the network (copper gigabit) at rates of 100 to
> 110MB/s. This is done using Samba, as the client PCs are
> Windows-based.
>
> The problem:
> When the server was serving files at high rates, any writes would
> drive up IO waits and server load to the point that Samba would
> become unhappy. In some cases the entire SCSI IO system in the
> kernel would become unhappy, causing SCSI bus resets, file systems
> unmounting, etc. Luckily this never resulted in any data loss or
> corruption, but it did stop some of the processing that we were
> doing on the files.
>
> My solution:
> The solution seems to be CFQ IO scheduling. I had previously used
> the Deadline and Anticipatory schedulers, but neither of these
> solved the problem. CFQ seems to keep everything happy even under a
> mixed IO load of multiple writes (35MB/s or so) and multiple reads
> (70MB/s or so). The server load is still high, in the 12 to 14
> range, but the server and its services seem to be much more stable.
>
> So what type of IO scheduler have you found to handle your typical
> load the best? Are there any other IO schedulers that haven't made
> it to the mainstream source trees yet that are worth a look? How
> long until we have an IO scheduler that can adjust itself to deal
> with various loads dynamically? I'm sure this is an ongoing area of
> research.
>
> -David
>

--
gentoo-server@g.o mailing list