I'll add my anecdotes :)

On Tue, Apr 23, 2013 at 3:40 PM, Alan McKinnon <alan.mckinnon@×××××.com> wrote:
> In over 10 years, I have never had a file system failure with any of
> these (all used a lot):
>
> ext2
> ext3
> ext4
> zfs
> reiser3

ext2, ext3, ext4, btrfs here.

ext4 for years (ever since it lost the dev suffix in the kernel)
without a single hiccup, and btrfs on a laptop with no battery
monitor, meaning the battery would die with no warning (unclean
shutdowns x1000), and it never had an issue that prevented it from
mounting on the next reboot.

I've also used btrfs on a mobile phone running Mer development
snapshots, which tend to crash, reboot, freeze, and require the
battery to be pulled; it also never failed to remount after that
constant abuse.

btrfs has some features similar to zfs, reiser, lvm, dm... I still
haven't decided whether that feature creep makes me think "oh cool!"
or "oh no!" :)

> I have had failures with these (used a lot):
>
> Oh wait, there aren't any of those.

JFS is on my "never again" list. I have used it on a few drives, and
two of them ended in catastrophic failure after an unexpected
shutdown. "journal replay failed" is a phrase I still see in my
nightmares... The recovery stripped names from inodes, resulting in
millions of files like I01039130.RCN or something like that... not
sorted into directories or anything, though the timestamps survived,
strangely. It has been several years since then, and I've avoided JFS
ever since.

I actually had a third JFS incident, but by then I had disabled
auto-fsck. I was unable to mount the volume read-only, but I found a
shareware tool for OS/2 that was able to recover files from a corrupt
JFS volume, complete with filenames and directories. I slapped the
drive into an OS/2 machine and it took several DAYS to complete the
recovery, but it did in fact complete, and I happily sent the guy ten
dollars. It looks like nowadays there is an open-source tool for Linux
called jfsrec which does the same kind of recovery from broken JFS
volumes.

I used XFS on a drive which had a bad cable, and it wound up
unmountable and unfixable by fsck, though (after replacing the cable)
I was able to do a read-only dump of all the files from it using the
XFS utils, after which I reformatted and copied everything back. I
can't fault the filesystem for a bad cable, but any time fsck is
unable to fix an unmountable filesystem, it scares me.

So, for me the rule of thumb is: ext4 on "important" drives (servers,
my main desktop system, RAID array, backups), and btrfs on drives
where I'm more willing to experiment and take a chance at something
weird happening (laptop, web-surfing workstation, mobile phone,
virtual machines).