On 05/07/14 07:51, Marc Joliet wrote:

> Am Wed, 07 May 2014 06:56:12 +0800
> schrieb William Kenworthy <billk@×××××××××.au>:
>
>> On 05/06/14 18:18, Marc Joliet wrote:
>>> Hi all,
>>>
>>> I've become increasingly motivated to convert to btrfs. From what I've seen,
>>> it has become increasingly stable; enough so that it is apparently supposed to
>>> become the default FS on OpenSuse in 13.2.
>>>
>>> I am motivated by various reasons:
>> ....
>>
>> My btrfs experience:
>>
>> I have been using btrfs seriously (vs. testing) for a while now with
>> mixed results, but the latest kernel/tools seem to be holding up quite well.
>>
>> ~2 yrs on an Apple/gentoo laptop (I handed it back to work a few months
>> back) - never a problem! (mounted with discard/trim)
> That's one HDD, right? From what I've read, that's the most tested and stable
> use case for btrfs, so it doesn't surprise me that much that it worked so well.
>
Yes, light duty, using the built-in SSD chips on the motherboard.
>> btrfs on a 128GB Intel SSD (linux root drive) - had to secure reset a few
>> times as btrfs said the filesystem was full, but there was 60G+ free -
>> happens after multiple crashes, and it seemed the btrfs metadata and the
>> SSD disagreed on what was actually in use - reset the drive and restored from
>> backups :( Now running ext4 on that drive with no problems - will move
>> back to btrfs at some point.
> All the more reason to stick with EXT4 on the SSD for now.
I have had very poor luck with ext-anything and would hesitate to
recommend it except for this very specific case, where there is little
alternative - reiserfs is far better on platters, for instance.
>
> [snip interesting but irrelevant ceph scenario]
It's relevant because it keeps revealing bugs in btrfs by stressing it -
one of those reported by me to ceph was reported upstream by the ceph
team and fixed last year - bugs still exist in btrfs!
>> 3 x raid 0+1 (btrfs raid 1 with 3 drives) - working well for about a month
> That last one is particularly good to know. I expect RAID 0, 1 and 10 to work
> fairly well, since those are the oldest supported RAID levels.
>
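For reference, a three-drive btrfs raid1 like the one above can be created with a single mkfs call. This is only a sketch - the device names are placeholders, and these commands destroy whatever is on the devices, so they need three empty, dedicated disks:

```shell
# Illustrative only - /dev/sdb, /dev/sdc and /dev/sdd are placeholders
# for three empty, dedicated block devices.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd

# Mounting any one member device assembles the whole array.
mount /dev/sdb /mnt/data

# Inspect how data and metadata chunks are spread across the devices.
btrfs filesystem df /mnt/data
btrfs filesystem show /mnt/data
```

Note that btrfs "raid1" always keeps exactly two copies of each chunk, regardless of how many devices are in the pool - so three drives give roughly 1.5x the capacity of one drive, not a 3-way mirror.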
>> ~10+ gentoo VMs, one ubuntu and 3 x Win VMs with kvm/qemu storage on
>> btrfs - regular scrubs show an occasional VM problem after a system crash
>> (VM server), otherwise problem-free since moving to pure btrfs from
>> ceph. Gentoo VMs were btrfs in raw qemu containers and are now
>> converted to qcow2 - no problems since moving from ceph. Fragmentation
>> on VMs is a problem, but "cp --reflink vm1 vm2" for VMs is really
>> really cool!
> That matches the scenario from the ars technica article; the author is a huge
> fan of file cloning in btrfs :) .
>
> And yeah, too bad autodefrag is not yet stable.
Not that it's not stable, but that it can't deal with large files that
change randomly on a continual basis, like VM virtual disks.
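For anyone who hasn't seen it, the reflink cloning mentioned above is easy to try. A small sketch - the file names are made up, and `--reflink=auto` falls back to a plain copy on filesystems without reflink support, so the instant, space-free clone behaviour only actually kicks in on btrfs (and other reflink-capable filesystems):

```shell
# Work in a scratch directory (stand-in for a VM storage directory).
workdir=$(mktemp -d)
cd "$workdir"

# Fake 64 KiB "VM image" (placeholder for a real raw/qcow2 file).
dd if=/dev/urandom of=vm1.img bs=1K count=64 status=none

# On btrfs the clone shares all data extents with the original: it is
# created instantly and consumes no extra space until either file is
# modified (copy-on-write). --reflink=auto degrades gracefully to a
# normal copy elsewhere; --reflink=always would fail instead.
cp --reflink=auto vm1.img vm2.img

# Either way, the clone is byte-identical to the source.
cmp vm1.img vm2.img && echo "clone OK"
```

The clones then diverge independently, which is what makes "one golden image, many VMs" so cheap on btrfs.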
>
>> I have a clear impression that btrfs has been incrementally improving,
>> and the current kernel and recovery tools are quite good, but it's still
>> possible to end up with an unrecoverable partition (in the sense that
>> you might be able to get to some of the data using recovery tools,
>> but the btrfs mount itself is toast)
>>
>> Backups using dirvish - was getting an occasional corruption (mainly
>> checksum) that seemed to coincide with network problems during a backup
>> sequence - have not seen it for a couple of months now. Only lost the whole
>> partition once :( Dirvish really hammers a file system, and ext4 usually
>> dies very quickly, so even now btrfs is far better here.
> I use rsnapshot here with an external hard drive formatted to EXT4. I'm not
> *that* worried about the FS dying, more that it dies at an inopportune moment
> where I can't immediately restore it.
>
> [again, snip interesting but irrelevant ceph scenario]
As I said above - if it fails under ceph, it's likely going to fail under
similar stresses using other software - I am not talking about ceph bugs (of
which there are many) but actual btrfs corruption.
>> I am slowly moving my systems from reiserfs to btrfs as my confidence in
>> it and its tools builds. I really dislike ext4 and its ability to lose
>> valuable data (though that has improved dramatically), but it still seems
>> better than btrfs on solid state under hard use - though after getting burnt
>> I am avoiding that scenario, so I need to retest.
> Rising confidence: good to hear :) .
>
> Perhaps this will turn out similarly to when I was using the xf86-video-ati
> release candidates and bleeding edge gentoo-sources/mesa/libdrm/etc. (for 3D
> support in the r600 driver): I start using it shortly before it starts truly
> stabilising :) .
>
More exposure means more bugs will surface and be fixed - it's getting there.

BillK