On Tue, Jan 8, 2013 at 2:06 PM, Florian Philipp <lists@×××××××××××.net> wrote:

> On 08.01.2013 18:35, Volker Armin Hemmann wrote:
>> On Tuesday, 8 January 2013, 08:27:51, Florian Philipp wrote:
>>> On 08.01.2013 00:20, Alan McKinnon wrote:
>>>> On Mon, 07 Jan 2013 21:11:35 +0100
>>>>
>>>> Florian Philipp <lists@×××××××××××.net> wrote:
>>>>> Hi list!
>>>>>
>>>>> I have a use case where I am seriously concerned about bit rot [1]
>>>>> and I thought it might be a good idea to start looking for it in my
>>>>> own private stuff, too.
>>>
>>> [...]
>>>
>>>>> [1] http://en.wikipedia.org/wiki/Bit_rot
> [...]
>>>> If you mean disk file corruption, then doing it file by file is a
>>>> colossal waste of time IMNSHO. You likely have >1,000,000 files. Are
>>>> you really going to md5sum each one daily? Really?
>>>
>>> Well, not daily but often enough that I likely still have a valid copy
>>> as a backup.
>>
>> and who guarantees that the backup is the correct file?
>>
>
> That's why I wanted to store md5sums (or sha2sums).
>
>> btw, the solution is zfs and weekly scrub runs.
>>
>
> Seems so.
>

And, while it's not exceptionally likely, there is always the possibility
that the corruption sits in the checksum table rather than in the file
being checked, which means that whenever a discrepancy turns up you have
to verify the table as well. The odds of exactly the right bits flipping
so that corrupted data matches a corrupted hash, within the time between
checks, are low enough that I would gamble on never seeing it in a
reasonable lifetime.
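
For the plain-files approach, a minimal sketch of that idea looks something
like the following (the paths and filenames are just placeholders): build a
checksum table, then checksum the table itself so that a rotted table is
caught before it casts suspicion on good files.

```shell
# Placeholder paths; point DATA at the tree you want to watch.
DATA="$HOME/data"
SUMS="$HOME/sums.txt"

# Build the checksum table (sha256 of every regular file).
find "$DATA" -type f -print0 | xargs -0 sha256sum > "$SUMS"

# Checksum the table itself, so corruption in the table is detected too.
sha256sum "$SUMS" > "$SUMS.sha256"

# Later: verify the table first, then the files; -c reports FAILED lines.
sha256sum -c "$SUMS.sha256" && sha256sum -c "$SUMS"
```

Verifying the table before the files means a single mismatch points you at
the right suspect instead of leaving both possibilities open.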

--
Poison [BLX]
Joshua M. Murphy