Gentoo Archives: gentoo-dev

From: Richard Yao <ryao@×××××××××××××.edu>
To: "gentoo-dev@l.g.o" <gentoo-dev@l.g.o>
Subject: Re: [gentoo-dev] preserve_old_lib and I'm even more lazy
Date: Sat, 25 Feb 2012 21:57:08
Message-Id: CABDyM6Qm1W71N2Qc8oChE+bwyZe07O6ZZB801FxhDkY66p45ng@mail.gmail.com
> Why would btrfs be inferior to ZFS on multiple disks?  I can't see how
> its architecture would do any worse, and the planned features are
> superior to ZFS (which isn't to say that ZFS can't improve either).

ZFS uses ARC as its page replacement algorithm, which is superior to
the LRU page replacement algorithm used by btrfs. ZFS also has L2ARC
and SLOG. L2ARC is a second-level cache that holds entries evicted
from ARC that would have remained cached had ARC been larger. SLOG is
a separate log device that records synchronous writes before they are
committed to the main pool. This provides the benefits of write
sequentialization and protection against data inconsistency in the
event of a kernel panic. Furthermore, data is striped across vdevs, so
the more vdevs you have, the higher your performance goes.

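For illustration, attaching an L2ARC cache device and a SLOG device to
an existing pool looks roughly like this (the pool name `tank` and the
device paths are hypothetical):

```shell
# Hypothetical pool and device names; a sketch of attaching L2ARC and
# SLOG vdevs to an existing pool.
zpool add tank cache /dev/sdc   # fast SSD as a second-level read cache
zpool add tank log /dev/sdd     # separate intent log for sync writes
zpool status tank               # verify the cache and log vdevs appear
```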
These features enable ZFS performance to reach impressive heights,
and as far as I have seen, the btrfs developers have shown no
intention of following it that far.

> Beyond the licensing issues ZFS also does not support reshaping of
> raid-z, which is the only n+1 redundancy solution it offers.  Btrfs of
> course does not yet support n+1 at all aside from some experimental
> patches floating around, but it plans to support reshaping at some
> point in time.  Of course, there is no reason you couldn't implement
> reshaping for ZFS, it just hasn't happened yet.  Right now the
> competition for me is with ext4+lvm+mdraid.  While I really would like
> to have COW soon, I doubt I'll implement anything that doesn't support
> reshaping as mdraid+lvm does.

raidz comes in three varieties: single parity (raidz1), double parity
(raidz2) and triple parity (raidz3). As for reshaping, ZFS is a
logical volume manager; you can set and resize limits on ZFS datasets
as you please.

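As a sketch, creating a double-parity pool and resizing a dataset
limit on the fly looks like this (pool, dataset and device names are
hypothetical):

```shell
# Hypothetical names; a sketch of raidz2 creation and dataset quotas.
zpool create tank raidz2 sda sdb sdc sdd   # double parity across 4 disks
zfs create tank/data
zfs set quota=100G tank/data               # cap the dataset at 100G
zfs set quota=200G tank/data               # resize the limit later
```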
As for competing with ext4+lvm+mdraid, I recently migrated a server
from that exact configuration. It had 6 disks in RAID 6. I had a VM on
it running Gentoo Hardened in which I ran a benchmark using dd to
write zeroes to the disk. Nothing I could do with ext4+lvm+mdraid
could get performance above 20MB/sec. After switching to ZFS,
performance went to 205MB/sec; the worst performance I observed was
92MB/sec. This used 6 Samsung HD204UI hard drives.

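The benchmark was of the following kind; the path and sizes here are
illustrative, and `conv=fdatasync` forces the data to stable storage
before dd reports its throughput:

```shell
# Illustrative dd throughput test; path and sizes are not the
# original benchmark's parameters.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
```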
> I do realize that you can add multiple raid-zs to a zpool, but that
> isn't quite enough.  If I have 4x1TB disks I'd like to be able to add
> a single 1TB disk and end up with 5TB of space.  I'd rather not have
> to find 3 more 1TB hard drives to hold the data on while I redo my
> raid and then try to somehow sell them again.

You would probably be better served by making your additional drive
into a hot spare, but if you insist on using it, you can make it a
separate vdev, which will provide more space. To be honest, anyone
who wants to upgrade such a configuration is probably better off
getting 4x2TB disks, doing a scrub, and then replacing the disks in
the pool one at a time, resilvering after each replacement. After you
have finished this process, you will have doubled the amount of space
in the pool.
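The one-disk-at-a-time upgrade described above can be sketched like
this (pool and device names are hypothetical):

```shell
# Hypothetical pool/device names; a sketch of the replace/resilver
# upgrade cycle.
zpool scrub tank              # verify all data before touching disks
zpool replace tank sda sde    # swap one old 1TB disk for a new 2TB one
zpool status tank             # wait until resilvering completes
# ...repeat the replace/resilver cycle for sdb, sdc and sdd...
zpool online -e tank sde      # expand the pool to the new capacity
```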

Replies

Subject Author
Re: [gentoo-dev] preserve_old_lib and I'm even more lazy Rich Freeman <rich0@g.o>