On 05/06/14 18:18, Marc Joliet wrote:
> Hi all,
>
> I've become increasingly motivated to convert to btrfs. From what I've seen,
> it has become increasingly stable; enough so that it is apparently supposed to
> become the default FS on OpenSuse in 13.2.
>
> I am motivated by various reasons:
....

My btrfs experience:

I have been using btrfs seriously (as opposed to just testing) for a while
now, with mixed results, but the latest kernel and tools seem to be holding
up quite well.

~2 years on an Apple/Gentoo laptop (I handed it back to work a few months
ago) - never a problem! (mounted with discard/TRIM)
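
For anyone wanting the same setup: continuous TRIM is just a mount option on
btrfs. A minimal sketch (the device and mount point here are placeholders;
periodic fstrim is the common alternative):

```
# Mount with continuous TRIM: btrfs issues discard commands to the
# SSD as blocks are freed. /dev/sda2 and /mnt are placeholders.
mount -o discard /dev/sda2 /mnt

# Alternative: mount without "discard" and trim in one batch (e.g.
# from cron) instead - often cheaper than per-deletion discards.
fstrim -v /mnt
```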

btrfs on a 128GB Intel SSD (Linux root drive): I had to secure-erase it a
few times, as btrfs said the filesystem was full when there was 60G+ free -
this happened after multiple crashes, and it seemed the btrfs metadata and
the SSD disagreed on what was actually in use - reset the drive and restore
from backups :( Now running ext4 on that drive with no problems - I will
move back to btrfs at some point.
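
In case it saves someone else a secure erase: the "full with 60G+ free"
symptom is often btrfs chunk allocation rather than real usage, and it is
worth checking before wiping. A sketch, assuming the filesystem is mounted
at / (adjust the path):

```
# Show allocated vs. actually-used space, split by data / metadata /
# system chunks. ENOSPC with apparent free space often means all raw
# space is allocated to chunks even though the chunks are mostly empty.
btrfs filesystem df /
btrfs filesystem show

# Reclaim under-used data chunks (here: those below 50% usage) so the
# allocator can hand the space back. Needs root; can take a while.
btrfs balance start -dusage=50 /
```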

cephfs - a rolling disaster, but that was more down to not giving the
system adequate resources and using what are, from ceph's point of view,
bad practices (running ceph on the same machine used for VMs and mounts) -
it mostly resulted in gradually corrupted and unrecoverable btrfs
partitions over time.

3 x raid 0+1 (btrfs raid 1 with 3 drives) - working well for about a month.

~10+ Gentoo VMs, one Ubuntu and 3 Windows VMs, with kvm/qemu storage on
btrfs - regular scrubs show an occasional VM problem after a crash of the
VM server, but otherwise problem free since moving to pure btrfs from ceph.
The Gentoo VMs were btrfs in raw qemu images and have since been converted
to qcow2 - no problems since moving off ceph. Fragmentation of the VM
images is a problem, but "cp --reflink vm1 vm2" for VMs is really,
really cool!
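
For anyone who hasn't tried it, the reflink copy mentioned above is a
one-liner; the file names here are just placeholders:

```shell
# Create a stand-in "VM image" (placeholder file for demonstration).
dd if=/dev/zero of=vm1.img bs=1M count=4 status=none

# Clone it with copy-on-write: on btrfs the new file shares all data
# blocks with the original until one of them is written to, so even a
# multi-GB "copy" completes almost instantly. --reflink=auto falls
# back to a plain copy on filesystems without reflink support (e.g.
# ext4), so the same command stays portable.
cp --reflink=auto vm1.img vm2.img
```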

I have a clear impression that btrfs has been incrementally improving, and
the current kernel and recovery tools are quite good, but it's still
possible to end up with an unrecoverable partition (in the sense that you
might be able to get at some of the data using recovery tools, but the
btrfs mount itself is toast).
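
By "get at some of the data using recovery tools" I mean roughly this (the
device and target directory are placeholders; run against the unmounted
filesystem, ideally from a rescue system):

```
# Check the filesystem read-only first (btrfs check, formerly btrfsck).
btrfs check /dev/sdb1

# Copy whatever files are still reachable off the broken filesystem
# into a directory on a healthy one - this reads the device directly
# and does not require the filesystem to be mountable.
btrfs restore -v /dev/sdb1 /mnt/recovery
```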

Backups using dirvish - I was getting occasional corruption (mainly
checksum errors) that seemed to coincide with network problems during a
backup run, but I have not seen it for a couple of months now. Only lost a
whole partition once :( Dirvish really hammers a filesystem, and ext4
usually dies very quickly under it, so even now btrfs is far better here.

The comments on ceph only hold for my use case, i.e. don't do it this way!
Even after the experience and problems, I would still choose ceph for its
proper use case (it's actually way cool!) - though note that the ceph
people do not recommend btrfs for production use.

I am slowly moving my systems from reiserfs to btrfs as my confidence in it
and its tools builds. I really dislike ext4 and its ability to lose
valuable data (though that has improved dramatically), but it still seems
better than btrfs on solid state under hard use - though after getting
burnt I am avoiding that scenario, so I need to retest.

BillK