> Oh, if you need a safe COW filesystem today I'd definitely recommend
> ZFS over btrfs for sure, although I suspect the people who are most
> likely to take this sort of advice are also the sort of people who are
> most likely to not be running Gentoo. There are a bazillion problems
> with btrfs as it stands.

There is significant interest in ZFS in the Gentoo community,
especially on freenode. Several veteran users are evaluating it, and
others have already begun switching from other filesystems, volume
managers and RAID solutions.

> However, fundamentally there is no reason to think that ZFS will
> remain better in the future, once the bugs are worked out. They're
> still focusing on keeping btrfs from hosing your data - tuning
> performance is not a priority yet. However, the b-tree design of
> btrfs should scale very well once the bugs are worked out.

ZFSOnLinux performance tuning is not a priority either, but there
have been a few patches and performance is already good. btrfs might
one day outperform ZFS in terms of single disk performance, assuming
that it does not already, but I question the usefulness of single
disk performance as a metric. If I add an SSD to a machine with a
ZFS pool to complement the disk, system performance will increase
many-fold. As far as I can tell, that will never be possible with
btrfs without external solutions like Facebook's flashcache, which
about a month ago killed an OCZ Vertex 3 within 16 days; Wyatt in
#gentoo-chat on freenode had to replace it. I imagine that its death
could have been delayed through write rate limiting, which is what
ZFS uses for the L2ARC, but until you can replace the Linux page
replacement algorithm with either ARC or something comparable,
flashcache will be inferior to the ZFS L2ARC. You can read more
about this topic at the following link:

http://linux-mm.org/AdvancedPageReplacement

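Adding an SSD to an existing pool, as described above, is a one-line
administrative operation. A sketch with placeholder pool and device
names:

```shell
# Add an SSD as an L2ARC (read cache) device to an existing pool.
# "tank" and the device path are placeholders for your own pool/SSD.
zpool add tank cache /dev/disk/by-id/ata-EXAMPLE-SSD

# Verify the cache device and watch it warm up.
zpool iostat -v tank
```

The write rate limiting mentioned above is how fast ZFS allows the
L2ARC to fill; in ZFSOnLinux it is controlled by the l2arc_write_max
module parameter.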

ZFS at its core is a transactional object store, and everything that
enables its use as a filesystem is implemented on top of that. ZFS
supports raidz3, zvols, L2ARC, SLOG/ZIL and endian independence,
which, as far as I can tell, are things that btrfs will never
support. ZFS also has either first-party or third-party support on
Solaris, FreeBSD, Linux, Mac OS X and Windows, while btrfs appears
to have no future outside of Linux.
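The features listed above all map onto ordinary zpool/zfs commands.
A sketch, with every device name a placeholder:

```shell
# Triple-parity raidz3 vdev; survives any three disk failures.
zpool create tank raidz3 d0 d1 d2 d3 d4

# A zvol: a block device backed by the pool (appears under /dev/zvol).
zfs create -V 10G tank/vol0

# Cache (L2ARC) and log (SLOG/ZIL) devices attach to the same pool.
zpool add tank cache ssd0
zpool add tank log slc0
```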

Lastly, ZFS' performance scaling exceeds that of any block
device-based filesystem I have seen (which excludes comparisons with
tmpfs/ramfs and lustre/gpfs). The following benchmark is of a SAN
device using ZFS:

http://www.anandtech.com/show/3963/zfs-building-testing-and-benchmarking/2

While ZFS performance in that benchmark is impressive, ZFS can scale
far higher with additional disks and more SSDs. SuperMicro has a
hot-swappable 72-disk enclosure that should enable ZFS to far exceed
the performance of the system that Anandtech benchmarked, provided
that it is configured with a large ARC, multiple vdevs each with
multiple disks, some SSDs for L2ARC and an SLC SSD-based SLOG/ZIL. I
would not be surprised if ZFS performance were to exceed 1 million
IOPS on such hardware. Nothing that I have seen planned for btrfs
can perform comparably, in any configuration.
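To make that configuration concrete, here is one way such a pool
might be laid out; the vdev width and every device name are my own
guesses for illustration, not a recommendation:

```shell
# Hypothetical layout for a large chassis: a stripe of raidz2 vdevs,
# SSDs for L2ARC and a mirrored SLC SLOG. All names are placeholders;
# on a 72-bay enclosure you would repeat the raidz2 lines until the
# data bays are full.
zpool create tank \
    raidz2 c0d0 c0d1 c0d2 c0d3 c0d4 c0d5 c0d6 c0d7 \
    raidz2 c1d0 c1d1 c1d2 c1d3 c1d4 c1d5 c1d6 c1d7 \
    cache ssd0 ssd1 \
    log mirror slc0 slc1
```

Reads then spread across all the data vdevs and both cache SSDs,
while synchronous writes land on the mirrored log devices first.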