Gentoo Archives: gentoo-user

From: Alan McKinnon <alan.mckinnon@×××××.com>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] [OT] Fragmentation of my drives. Curious mostly
Date: Sun, 30 Nov 2008 12:27:32
Message-Id: 200811301427.22305.alan.mckinnon@gmail.com
In Reply to: Re: [gentoo-user] [OT] Fragmentation of my drives. Curious mostly by Daniel Iliev
On Sunday 30 November 2008 11:50:18 Daniel Iliev wrote:

> > > real    0m26.747s
> > > user    0m2.110s
> > > sys     0m1.450s
> > > localhost test # time cat test2 > /dev/null
> > >
> > > real    0m29.825s
> > > user    0m1.780s
> > > sys     0m1.690s
> >
> > This is not a test, unfortunately. You did one run on one file and one
> > run on another file. We do not know what else the machine was doing
> > at the time, and that unknown is a considerable one.
>
> This result is from the last of three repetitions. Its values were in
> the middle (not the average). The deviations were in the range of 1 to 1.5
> seconds. Every time, reading the more fragmented file was slower.

That's a HUGE deviation; I assume it's real (wall-clock) time. What's the
difference in user and sys time? Those are much more important here.

> The system was idle in runlevel 3, no X running. I used iostat before
> the tests and it registered several tens of KB written per minute before
> the test. Compared to the size of the test files and the speed of the
> disk, that is insignificant.

You didn't remove the effect of buffers and caches. The only way I know of to
test this correctly is to reboot the machine after every run, start no
services, and run the test with a cold cache. Otherwise you don't really know
what you are measuring.

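For what it's worth, on Linux 2.6.16 and later there is a middle ground
between rebooting and trusting the cache: the kernel can be told to drop its
clean caches directly. A minimal sketch (needs root; `test1` is the file name
from the thread):

```shell
# Flush dirty pages to disk, then drop the page cache, dentries and
# inodes, so the next read really comes from the platters (needs root).
sync
echo 3 > /proc/sys/vm/drop_caches
time cat test1 > /dev/null
```
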
> > Repeat your test eliminating this factor. Preferably, remount the
> > filesystems after each run and repeat 1000 times. Then analyze the
> > statistical distribution of your results. This should eliminate most
> > random factors and give a more realistic real-world view.
>
> I'm not willing to waste time on 1000 repetitions, but why don't you
> do the test yourself just a couple of times and see if there is ever a
> case where the more fragmented file gets read faster?

You are the one making these claims! You do the test!
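The repeated test proposed above can be sketched roughly as follows; N=1000
is what was suggested, and N, the file name and the log path here are
illustrative assumptions (dropping caches needs root):

```shell
# Run the same cold-cache read N times, collecting the wall-clock time
# of each run for later statistical analysis.
N=1000
: > times.log
for i in $(seq 1 "$N"); do
    sync
    echo 3 > /proc/sys/vm/drop_caches           # start each run cold
    /usr/bin/time -f '%e' cat test2 > /dev/null 2>> times.log
done
# Median wall-clock time over all runs:
sort -n times.log | awk '{ t[NR] = $1 } END { print "median:", t[int(NR/2)+1] }'
```
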

> Actually my results are a little lower than what I expected, but enough
> for me to say that fragmentation matters, at least until proven
> otherwise. Fragmentation leads to seeks. The average seek time on
> modern disks is several milliseconds. Yes, there are algorithms that
> reorder I/O requests to minimize the seek penalty, but the seeks are
> still there and they hurt performance.
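A rough sanity check of the quoted numbers, assuming an average seek of about
10 ms (the per-seek cost is an assumption, not a measurement):

```shell
# The ~3 s real-time gap between the two runs quoted earlier, divided by
# an assumed 10 ms average seek, suggests roughly 300 extra seeks.
awk 'BEGIN { printf "%.0f extra seeks\n", (29.825 - 26.747) / 0.010 }'
```
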

Buffers and cache? You have two files of 1.2G each. I have more free RAM than
that available for buffers and cache for easily 90% of the time my notebook
is on.
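The effect being pointed at here is easy to see for yourself: once a file
fits in the page cache, a second read never touches the disk. A quick
illustration (the file size and path are arbitrary):

```shell
# Create a 100 MB scratch file, then read it twice; the second read is
# served from the page cache and is typically far faster than the first.
dd if=/dev/urandom of=/tmp/bigfile bs=1M count=100 2>/dev/null
time cat /tmp/bigfile > /dev/null   # first read: disk (unless already cached)
time cat /tmp/bigfile > /dev/null   # second read: page cache
rm /tmp/bigfile
```
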

--
alan dot mckinnon at gmail dot com