Gentoo Archives: gentoo-amd64

From: Mark Knecht <markknecht@×××××.com>
To: Gentoo AMD64 <gentoo-amd64@l.g.o>
Subject: Re: [gentoo-amd64] Re: Is my RAID performance bad possibly due to starting sector value?
Date: Sun, 23 Jun 2013 01:03:07
Message-Id: CAK2H+edpKpAx1nq+GM+PNEWfz1muDH27DPb42TCXEqMKrABgmQ@mail.gmail.com
In Reply to: [gentoo-amd64] Re: Is my RAID performance bad possibly due to starting sector value? by Duncan <1i5t5.duncan@cox.net>
On Sat, Jun 22, 2013 at 7:23 AM, Duncan <1i5t5.duncan@×××.net> wrote:
> Mark Knecht posted on Fri, 21 Jun 2013 10:40:48 -0700 as excerpted:
>
>> On Fri, Jun 21, 2013 at 12:31 AM, Duncan <1i5t5.duncan@×××.net> wrote:
>> <SNIP>
<SNIP>
>
> ... Assuming $PWD is now on the raid. You had the path shown too, which
> I snipped, but that doesn't tell /me/ (as opposed to you, who should know
> based on your mounts) anything about whether it's on the raid or not.
> However, the above including the drop-caches demonstrates enough care
> that I'm quite confident you'd not make /that/ mistake.
>
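
(Side note, just so we're talking about the same thing: by drop-caches I
assume we both mean the usual

  sync && echo 3 > /proc/sys/vm/drop_caches

run as root; that's what I plan to use between tomorrow's runs too.)
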
>> 4) As a second test I read from the RAID6 and write back to the RAID6.
>> I see MUCH lower speeds, again repeatable:
>>
>> dd if=SDDCopy of=HDDWrite
>> 97656250+0 records in 97656250+0 records out
>> 50000000000 bytes (50 GB) copied, 1187.07 s, 42.1 MB/s
>
>> 5) As a final test, and just looking for problems if any, I do an SSD to
>> SSD copy which clocked in at close to 200 MB/s
>>
>> dd if=random1 of=SDDCopy
>> 97656250+0 records in 97656250+0 records out
>> 50000000000 bytes (50 GB) copied, 251.105 s, 199 MB/s
>
>> So, given that this RAID6 was grown yesterday from something that
>> has existed for a year or two, I'm not sure of its fragmentation, or
>> even how to determine that at this time. However, it seems my problem is
>> RAID6 reads, not RAID6 writes, at least to new and probably never-used
>> disk space.
>
> Reading all that, one question occurs to me. If you want to test read
> and write separately, why the intermediate step of dd-ing from /dev/
> random to ssd, then from ssd to raid or ssd?
>
> Why not do direct dd if=/dev/random (or urandom, see note below)
> of=/desired/target ... for write tests, and then (after dropping caches),
> if=/desired/target of=/dev/null ... for read tests? That way there's
> just the one block device involved, not both.
>

1) I was a bit worried about using it in a way it might not have been
intended to be used.

2) I felt that if I had a specific file then results should be
repeatable, or at least not dependent on what's in the file.


<SNIP>
>
> Meanwhile, dd-ing either from /dev/urandom as source, or to /dev/null as
> sink, with only the test-target block device as a real block device,
> should give you "purer" read-only and write-only tests. In theory it
> shouldn't matter much given your method of testing, but as we all know,
> theory and reality aren't always well aligned.
>

Will try some tests this way tomorrow morning.

>
> Of course the next question follows on from the above. I see a write to
> the raid, and a copy from the raid to the raid, so read/write on the
> raid, and a copy from the ssd to the ssd, read/write on it, but no test
> of a read from the raid alone.
>
> So
>
> if=/dev/urandom of=/mnt/raid/target ... should give you raid write.
>
> drop-caches
>
> if=/mnt/raid/target of=/dev/null ... should give you raid read.
>
> *THEN* we have good numbers on both to compare the raid read/write to.
>
> What I suspect you'll find, unless fragmentation IS your problem, is that
> both read (from the raid) alone and write (to the raid) alone should be
> much faster than read/write (from/to the raid).
>
> The problem with read/write is that you're on "rotating rust" hardware
> and there's some latency as it repositions the heads from the read
> location to the write location and back.
>

If this lack of performance is truly driven by the drive's rotational
issues, then I completely agree.
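
So, to be concrete, if I'm reading you right, tomorrow's run would look
something like this (not run yet; /mnt/raid/target just stands in for
wherever the file lands on my real mount point, and bs/count are my
guesses, sized to roughly match the 50 GB I used before):

  dd if=/dev/urandom of=/mnt/raid/target bs=1M count=50000

  sync && echo 3 > /proc/sys/vm/drop_caches     (as root)

  dd if=/mnt/raid/target of=/dev/null bs=1M

and then compare both of those numbers against the 42.1 MB/s I got for
the raid-to-raid copy.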

> If I'm correct and that's what you find, a workaround specific to dd
> would be to specify a much larger block size, so it reads in far more
> data at once, then writes it out at once, with far fewer switches between
> modes. In the above you didn't specify bs (or the separate input/output
> equivalents, ibs/obs respectively) at all, so it's using 512-byte
> blocksize defaults.
>

So help me clarify this before I do the work and find out I didn't
understand. Whereas earlier I created a file using:

dd if=/dev/random of=random1 bs=1000 count=0 seek=$[1000*1000*50]

if what you are suggesting is more like this very short example:

mark@c2RAID6 /VirtualMachines/bonnie $ dd if=/dev/urandom of=urandom1 bs=4096 count=$[1000*100]
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 25.8825 s, 15.8 MB/s
mark@c2RAID6 /VirtualMachines/bonnie $

then the results for writing this 400 MB file are very slow, but either
I don't understand what you're asking, or urandom is the limiting
factor here.
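
One check I can do first (just a thought, in case urandom itself is the
bottleneck) is to time urandom with no disk involved at all:

dd if=/dev/urandom of=/dev/null bs=4096 count=$[1000*100]

If that also comes in around 15 MB/s, then the 400 MB test above was
really measuring /dev/urandom rather than the RAID6 write speed, and
pre-generating the file (as I was doing) and timing only the copy onto
the raid might be the better way to isolate the write side.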

I'll look for a reply (from you or anyone else who understands Duncan's
idea better than I do) before I do much more.

Thanks!

- Mark
