On Tue, Feb 24, 2015 at 6:54 AM, Bob Wya <bob.mt.wya@×××××.com> wrote:
> I would always recommend a secure erase of an SSD - if you want a "fresh
> start". That will mark all the NAND cells as clear of data. That will
> benefit the longevity of your device / wear levelling.

Not a bad idea, though if you're trimming your filesystem (and it
supports trim), that shouldn't be necessary, and of course a log-based
filesystem like f2fs should provide excellent wear leveling by design.
Granted, that doesn't help you if an f2fs bug eats your data.
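
For anybody following along, the usual tools here are fstrim from
util-linux for periodic trim, and hdparm for an ATA secure erase.
A rough sketch, assuming a SATA drive - the device name and mountpoint
below are placeholders, and the erase wipes the entire drive, so don't
paste it blindly:

    # trim free space on a mounted filesystem
    fstrim -v /mnt/point

    # ATA secure erase - destroys ALL data on /dev/sdX; check
    # "hdparm -I /dev/sdX" first to see the drive isn't frozen
    hdparm --user-master u --security-set-pass NULL /dev/sdX
    hdparm --user-master u --security-erase NULL /dev/sdX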

> Personally having been burned by btrfs I would not try one of these
> "experimental" file systems again...

Well, trying them is one thing; relying on them is something else.
I've had a few issues with btrfs in the last year, but they've all
been of the uptime/availability nature and none has actually caused
unrecoverable data loss. It has caused me to start moving back to
the longterm stable branch, though, as the level of regressions has
been fairly high of late.

However, right now I keep everything on btrfs backed up onto ext4
using rsnapshot daily (an rsync-based tool I recommend if you're the
sort that likes rsync for backups). So, the impact of a total
filesystem failure is limited to availability (granted, quite a bit
of downtime to completely restore multiple TB). That risk is
acceptable for what I'm using it for. Another risk would be a silent
corruption that persists longer than the number of backups I retain,
but I think that is unlikely: silent failures are one of the things
btrfs is designed to be good at detecting/preventing, and I've yet to
see any reports of this kind of failure. If anything, I suspect there
is more risk of a silent corruption impacting my backups (i.e. I'm
contrasting the risk of btrfs quietly storing the wrong contents of a
file against the risk of a hard-drive bit flip corrupting data, which
ext4 can't detect).
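
If you haven't seen rsnapshot, the whole setup is one config file
plus cron entries. A minimal sketch - the paths and retention counts
here are made up, and note that rsnapshot.conf fields must be
separated by tabs, not spaces:

    # /etc/rsnapshot.conf (tab-separated fields)
    snapshot_root   /backup/snapshots/
    retain  daily   7
    retain  weekly  4
    backup  /home/  localhost/
    backup  /etc/   localhost/

    # crontab: rotate daily and weekly snapshots
    0 3 * * *     /usr/bin/rsnapshot daily
    30 3 * * 1    /usr/bin/rsnapshot weekly

On the btrfs side, checksums are only verified when blocks are
actually read, so a periodic scrub is worth scheduling too:

    btrfs scrub start /mnt/btrfs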

In general, though, there is a reason that sysadmins tend to be very
conservative with filesystems. I doubt most jumped onto ext4 all that
quickly, even though it was quite stable from the moment it was
declared such. You really need to look at your use case, understand
the risks and benefits, and decide how you plan to mitigate the
risks. Something being experimental isn't a reason to automatically
avoid using it if it brings some significant benefit to your design,
as long as you've mitigated the risks. And, of course, if your goal
is to better understand an experimental technology in a non-critical
setting, you should probably just get your feet wet.

However, what you shouldn't do is pick an experimental anything as
your go-to default for something you never want to have to fuss with.

--
Rich