(Thanks to everyone for the replies so far!)

On Sun, 12 Jul 2015 08:48:48 -0400, Rich Freeman <rich0@g.o> wrote:

> On Sun, Jul 12, 2015 at 8:35 AM, Marc Joliet <marcec@×××.de> wrote:
> >
> > My question is how precisely the disks should be cleared. From various sources
> > I know that overwriting them with random data a few times is enough to render
> > old versions of data unreadable. I'm guessing 3 times ought to be enough, but
> > maybe even that small amount is overly paranoid these days?
> >
> > As to the actual command, I would suspect something like "dd if=/dev/urandom
> > of=/dev/sdx bs=4096" should suffice, and according to
> > https://wiki.archlinux.org/index.php/Random_number_generation#.2Fdev.2Furandom,
> > /dev/urandom ought to be random enough for this task. Or are cat/cp that much
> > faster?
>
> I'd probably just use a tool like shred/wipe, but you have the general idea.

Ah, I had overlooked that shred can operate on device files! Thanks. I
especially trust shred, since my main source was an article by its author
(https://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html).

With regard to the other replies: I think physical destruction is unnecessary,
and I don't really want to go through the trouble. The key passage in the above
article is:

"[...]. If these drives require sophisticated signal processing just to read
the most recently written data, reading overwritten layers is also
correspondingly more difficult. A good scrubbing with random data will do about
as well as can be expected."

And this was in 1996! Drives have only gotten denser since then (e.g.,
perpendicular recording), and the epilogues (which reiterate the above) suggest
that nothing has changed to make old data more recoverable. I noticed that the
info manual for shred even says:

"On modern disks, a single pass should be adequate, and it will take one third
the time of the default three-pass approach."

The Arch wiki also arrives at the same conclusion (see
https://wiki.archlinux.org/index.php/Securely_wipe_disk#Residual_magnetism)
and provides some additional references.
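In case it helps anyone else, a single-pass shred run can be tried out safely
on a scratch file first; the file name here is made up, and for a real disk
you would point shred at the device node instead:

```shell
# Create a small scratch file to practice on (name is arbitrary)
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=4 2>/dev/null

# -n 1: a single random-data pass, per the shred info manual
# -v:   show progress, which is useful for multi-hour whole-disk runs
shred -v -n 1 /tmp/scratch.img

# Against a real drive the invocation would be the same, e.g.:
#   shred -v -n 1 /dev/sdX
```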

> I'd probably follow it up with an ATA secure erase - for an SSD it is
> probably the only way to be sure (well, to the extent that you trust
> the firmware authors).

Yeah, that sounds like a good idea. In the case of HDDs, even if I can't trust
the firmware, I've already wiped what I can. With regard to SSDs, I've been
meaning to read http://www.cypherpunks.to/~peter/usenix01.pdf.

So my intermediate summary is: I'll probably use shred with one pass, followed
by ATA (Enhanced) Secure Erase to erase the reallocated sectors (though I'll
have to fiddle with my BIOS to do that). I'll be sure to read
https://ata.wiki.kernel.org/index.php/ATA_Secure_Erase first.
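For reference, my understanding of the sequence from that wiki page is roughly
the following; the device name and password are placeholders, and the hdparm
commands are only echoed here so that nothing is actually erased:

```shell
DISK=/dev/sdX   # placeholder device name -- substitute the real disk
PASS=dummy      # throwaway password; the erase itself clears it again

# 1. Check the drive's security state is "not frozen"
#    (this is where the BIOS fiddling / suspend-resume trick comes in)
echo hdparm -I "$DISK"

# 2. Set a temporary user password to unlock the security feature set
echo hdparm --user-master u --security-set-pass "$PASS" "$DISK"

# 3. Issue the Enhanced Secure Erase, which should also cover
#    reallocated sectors that shred can't reach
echo hdparm --user-master u --security-erase-enhanced "$PASS" "$DISK"
```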

> If it weren't painful to set up and complicated for rescue attempts,
> I'd just use full-disk encryption with a strong key on a flash drive
> or similar. Then the disk is as good as wiped if separated from the
> key already.

Plus you don't have to worry about reallocated sectors (which might only
contain single-bit errors). Currently I'm planning on waiting for btrfs to
support it. Chris Mason recently mentioned that it's definitely something they
want to look at (https://youtu.be/W3QRWUfBua8?t=631), and it's not something
so important to me personally that I have to have it right this instant.
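For the record, my rough understanding of your keyfile-on-a-flash-drive setup
with dm-crypt/LUKS would be something like the following; the device and
keyfile paths are made up, and the commands are only echoed so that nothing
gets formatted:

```shell
DISK=/dev/sdX          # placeholder: the disk to encrypt
KEY=/mnt/usb/disk.key  # placeholder: keyfile on the flash drive

# Generate a strong random key on the flash drive
echo dd if=/dev/urandom of="$KEY" bs=512 count=8

# Format the disk so it can only be opened with that keyfile;
# wipe (or just lose) the keyfile and the disk is as good as wiped
echo cryptsetup luksFormat --key-file "$KEY" "$DISK"
echo cryptsetup open --key-file "$KEY" "$DISK" cryptdata
```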

--
Marc Joliet
--
"People who think they know everything really annoy those of us who know we
don't" - Bjarne Stroustrup