On Wednesday 30 Mar 2016 19:36:57 Meino.Cramer@×××.de wrote:
> Neil Bothwick <neil@××××××××××.uk> [16-03-30 17:12]:
> > On Wed, 30 Mar 2016 06:36:15 +0100, Mick wrote:
> > > Also worth mentioning is dcfldd which unlike dd can show progress of
> > > the bit stream and also produce hashes of the transferred output. It
> > > has the same performance as the dd command though.
> >
> > I can't find the reference right now, but I did read that dcfldd
> > determines the best block size on the fly if none is given. It's
> > certainly faster than dd when copying images to USB drives (my main use
> > for it) when given no block size.
|
This is good to know! I'll try to remember to use it more often.
|

> Sounds like it will be the tool of choice for that purpose, Neil! :)
>
> Best regards,
> Meino

dd defaults to bs=512, so it is going to be slow when transferring anything
more than a few megabytes. To find the optimum block size with dd you could
run something like this:

$ dd if=/dev/zero bs=512 count=2000000 of=~/1GB.file
2000000+0 records in
2000000+0 records out
1024000000 bytes (1.0 GB) copied, 13.8359 s, 74.0 MB/s

$ dd if=/dev/zero bs=1024 count=1000000 of=~/1GB.file
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 10.4439 s, 98.0 MB/s

$ dd if=/dev/zero bs=2048 count=500000 of=~/1GB.file
500000+0 records in
500000+0 records out
1024000000 bytes (1.0 GB) copied, 9.57416 s, 107 MB/s

$ dd if=/dev/zero bs=4096 count=250000 of=~/1GB.file
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 9.0178 s, 114 MB/s

$ dd if=/dev/zero bs=8192 count=125000 of=~/1GB.file
125000+0 records in
125000+0 records out
1024000000 bytes (1.0 GB) copied, 9.47107 s, 108 MB/s

$ rm 1GB.file
|
On an old spinning disk of mine it seems that bs=4096 is a good size to select
for writing data to it.
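
If you'd rather script the sweep instead of typing each run, a loop along
these lines should do the job (a sketch, not tested on this box); it also
removes the file between runs:

$ for bs in 512 1024 2048 4096 8192; do
>   echo "bs=$bs"
>   dd if=/dev/zero bs=$bs count=$(( 1024000000 / bs )) of=~/1GB.file 2>&1 | tail -n 1
>   rm ~/1GB.file
> done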


NOTES: |
======

1. If you rm the 1GB.file between tests you'll get more representative
numbers, but I've been lazy here; other than the first test with bs=512, the
relative comparison between the remaining tests stays consistent anyway.

2. In the above test the dcfldd command gives similar transfer times to dd,
if you use the same block size. For larger files some difference may become
apparent.
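
For reference, a dcfldd run which shows progress and hashes the output would
look something like this (from memory, so check the man page; image.iso and
/dev/sdX are placeholders for your actual image and USB device):

$ dcfldd if=~/image.iso of=/dev/sdX bs=4096 hash=sha256 hashlog=~/image.sha256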
3. In your case you can use the intended input/output devices, rather than
/dev/zero, in order to get representative cumulative read and write times.
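
For example, to time reads of roughly 1GB from the source device (again,
/dev/sdX is only a placeholder):

$ dd if=/dev/sdX of=/dev/null bs=4096 count=250000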
|
--
Regards,
Mick |