On Sun, May 17, 2015 at 6:08 PM, Stefan G. Weichinger <lists@×××××.at> wrote:
>
> There were problems with btrfs and the kernel a few months ago (Rich
> Freeman was hit by that, maybe he chimes in here), but in general for me
> it is still a very positive experience.
>

It is nowhere near the stability of ext4. In the last year I've
probably had 2-3 periods of time where I was getting frequent panics,
or panics anytime I'd mount my filesystems rw. That said, I've never
had an occasion where I couldn't mount the filesystem ro, and I've
never had an actual loss of committed data - just downtime while I
sorted things out. I do keep a full daily rsnapshot backup on ext4
right now since I consider btrfs experimental. However, if I were too
cheap to do that I wouldn't have actually lost anything yet.

On the other hand, both btrfs and zfs will get you a level of data
security that you simply won't get from ext4+lvm+mdadm - protection
from silent corruption. The only time I've ever had a filesystem eat
my data on linux was on ext4+lvm+mdadm, actually - when I googled for
the specific circumstances I think I ran into one guy on a list
somewhere who had the same problem, but it is pretty rare (and one
piece of advice I would give to anybody using lvm is to back up your
metadata - if I had done that and been more careful about running fsck
in repair mode I probably could have restored everything without
issue). (For the curious, the issue was that I repaired a bunch of
fsck-detected problems in one filesystem and lost a lot of data in
another one. I suspect that LVM got its mapping messed up somehow,
and it might have had to do with operating in degraded mode (perhaps
due to a crash and need for rebuild).)
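For anyone taking that metadata-backup advice: LVM ships the tooling for it
already. A minimal sketch (the VG name vg0 and the /root path are placeholders
for this example; the commands need root and an actual volume group):

```shell
# Save the current metadata of all VGs under /etc/lvm/backup
# (LVM also does this automatically on most metadata changes):
vgcfgbackup

# Better: write a copy somewhere off the disks the VG lives on;
# %s is expanded to each VG's name:
vgcfgbackup -f /root/lvm-meta-%s.vg

# Before trusting a repair, list which saved copies exist for vg0:
vgcfgrestore --list vg0

# Restore vg0's LV-to-extent mapping from a chosen backup:
vgcfgrestore -f /root/lvm-meta-vg0.vg vg0
```

That only recovers the mapping of LVs onto physical extents, not file
data - but a scrambled mapping is exactly the failure mode described above.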
A big advantage of btrfs/zfs is that everything is checksummed on
disk, and the filesystem is not going to rely on anything that isn't
internally consistent. In the event of a rebuild/etc it can always
tell which copies are good/bad, unless you do something really crazy
like split an array onto two PCs, rebuild both, and then try to
mix the disks back together - from what I've heard btrfs lacks the
generation numbers/etc needed to detect this kind of problem.
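The mechanism is easy to demonstrate outside a filesystem. A rough sketch
with two "mirror" copies and a checksum recorded at write time (file names
are made up for the example; run it in an empty directory):

```shell
# Write the same block to two mirror copies and record its sha256:
printf 'committed data\n' > copy1
cp copy1 copy2
sha256sum copy1 | awk '{print $1}' > stored.sha

# Silent corruption: one copy changes with no I/O error reported:
printf 'bit rot\n' > copy1

# On read, verify each copy against the stored checksum and serve the
# first good one. Plain mdadm can't do this step - with no checksum it
# has no way to know which side of the mirror is the bad one.
for f in copy1 copy2; do
  if [ "$(sha256sum "$f" | awk '{print $1}')" = "$(cat stored.sha)" ]; then
    cat "$f"
    break
  fi
done
```

The loop prints the intact copy (copy2) and never returns the rotted one,
which is the read-side behavior btrfs/zfs give you per extent.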
For personal use btrfs is great for playing around with the likely
future default linux filesystem. I wouldn't go installing it on
production servers in a workplace unless it was a really niche
situation, and then only with appropriate steps to mitigate the risks
(lots of testing of new kernel releases, backups or an ability to
regenerate the system, etc). I wouldn't go so far as to say that
there are no circumstances where it is the right tool for the job.
You should understand the pros/cons before using it, as with any tool.

--
Rich