On Sat, May 17, 2014 at 3:53 AM, Mick <michaelkintzios@×××××.com> wrote:
> I am not clear on one thing: is the corruption that you show above *because*
> of btrfs, or it would occur silently with any other fs, like e.g. ext4?

That is something I'm curious about as well, since I stumbled on this
thread. I've been running btrfs on a 5-drive array set to raid1 for
both data and metadata for several months now, and I've yet to see a
single error in my weekly scrubs. This is on a system that is up
24x7, running mysql, mythtv, postfix, and a daily rsync backup -
basically light disk activity at all times, and heavy activity
moderately often. The only issues I've had with btrfs are ENOSPC when
it manages to allocate all of its chunks (more of a problem on a
smaller ssd running btrfs for /), and panics when I try to remove
several snapshots at once.
|
I'm not sure how easy it would be to test for silent corruption on
another fs, unless you tried using ZFS instead, or used tripwire or
some other integrity checker. Testing the drive itself would be
straightforward if you didn't need to use it in any kind of production
capacity - write patterns to it and try to read them back in a few
days.

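A minimal sketch of that kind of read-back test, assuming a scratch
mount point like /mnt/test (the path and file names are just placeholders;
only the test files it creates are touched):

```shell
# Write a known pattern and record its checksum.
dd if=/dev/urandom of=/mnt/test/pattern.bin bs=1M count=1024
sha256sum /mnt/test/pattern.bin > /mnt/test/pattern.sha256

# ...come back in a few days, then verify the data read back unchanged:
sha256sum -c /mnt/test/pattern.sha256
```

Any "FAILED" line from the final check would mean the bits on disk no
longer match what was written, without relying on the filesystem's own
checksumming.
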
Rich |