On 27.10.2014 at 17:52, Pandu Poluan wrote:
>
>
> On Oct 27, 2014 10:40 PM, "Rich Freeman" <rich0@g.o
> <mailto:rich0@g.o>> wrote:
> >
> > On Mon, Oct 27, 2014 at 11:22 AM, Mick <michaelkintzios@×××××.com
> > <mailto:michaelkintzios@×××××.com>> wrote:
> > >
> > > Thanks Rich, I have been reading your posts about btrfs with
> > > interest, but have not yet used it on my systems. Is btrfs agreeable
> > > with SSDs, or should I be using f2fs:
> > >
> >
> > Btrfs will auto-detect SSDs and optimize itself differently, and is
> > generally considered to be fine on SSDs. Of course, btrfs itself is
> > experimental and may eat your data, especially if you get it too full,
> > but you'll be no worse off for running it on an SSD.
> >
> > I doubt you'll find any general-purpose filesystem that works as well
> > overall on an SSD as something like f2fs, as f2fs is log-based and
> > designed with SSDs in mind. However, f2fs is also very immature and
> > carries risks of its own, and the last time I checked it was missing
> > some features, such as xattrs. It also doesn't have anything like
> > btrfs send to serialize your data.
> >
> > ZFS on Linux might be another option. I don't know how well it
> > handles SSDs in general, and you have to fuss with FUSE and a boot
> > partition, as I don't think GRUB supports it - it could be a bit of a
> > PITA for a single-drive system. However, it is probably more mature
> > than btrfs overall, and it certainly supports send.
> >
> > I just had a btrfs near-miss which caused me to rethink how I'm
> > managing my own storage. I was half-tempted to blog about it - it is a
> > bit frustrating, as I believe we're right in the middle of the shift
> > from the traditional filesystems to the next-generation ones.
> > Sticking with the old means giving up a lot of potential benefits, but
> > there are a lot of issues with jumping ship as well, since the new
> > systems all lack maturity or are not yet feature-complete. I looked at
> > f2fs, btrfs, and zfs again this weekend, and the issues I struggle
> > with are the immaturity of btrfs and f2fs, the lack of working parity
> > RAID on btrfs, the lack of many features on f2fs, and the inability to
> > resize vdevs on zfs, which means that on a system with few drives you
> > get locked in. I suspect all of those will change in time, but not yet!
> >
> > --
> > Rich
> >
>
> ZoL (ZFS on Linux) is nowadays implemented using DKMS instead of FUSE,
> thus running in kernel space, and is (relatively) easy to put into an
> initramfs.
>
> Updating is a beeyotch on binary-based distros, as it requires a
> recompile. Not a big deal for us Gentooers :-)
>
> vdevs can grow, but they can't (yet) shrink. And putting ZFS on
> SSDs... not recommended. Rather, ZFS can employ SSDs to act as a
> 'write cache' for the spinning HDDs.
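For concreteness: what the paragraph above calls a 'write cache' corresponds to a ZFS log device (SLOG), which absorbs synchronous writes; ZFS can additionally use an SSD as an L2ARC read cache. A minimal sketch, assuming a pool named `tank` and SSD partitions `/dev/sdx1` and `/dev/sdx2` (both names hypothetical; requires root and an existing pool):

```shell
# Attach an SSD partition as a dedicated log device (SLOG)
# to absorb synchronous writes for the spinning-disk pool.
zpool add tank log /dev/sdx1

# Optionally attach another SSD partition as an L2ARC read cache.
zpool add tank cache /dev/sdx2

# Verify the new log and cache vdevs appear in the pool layout.
zpool status tank
```

Note that the SLOG only helps synchronous write latency; asynchronous writes still go straight to the main vdevs.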
>
> In my personal opinion, the 'killer' feature of ZFS is that it's built
> from the ground up to provide maximum data integrity. The second
> feature is its high-performance COW snapshot ability. You can take an
> obscene number of snapshots if you want (but don't actually do it;
> managing more than a hundred snapshots is a Royal PITA). ZFS is also
> able to serialize snapshots, allowing perfect delta replication to
> another system. This saves a lot of time doing bit-perfect backups,
> because only changed blocks are transferred. And you can ship a
> snapshot instead of the whole filesystem, allowing online backup.
>
> (And yes, I actually deployed ZoL on my previous employer's email
> system, with the aforementioned snapshot-shipping backup strategy.)
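The snapshot-shipping strategy described above boils down to `zfs send`/`zfs receive`. A sketch, assuming a dataset `tank/mail` and a backup host reachable as `backuphost` with a pool `backup` (all names hypothetical; requires root on both ends):

```shell
# Initial full replication: snapshot the dataset and ship it whole.
zfs snapshot tank/mail@monday
zfs send tank/mail@monday | ssh backuphost zfs receive backup/mail

# Later: take a new snapshot and send only the delta since @monday.
# -i makes this an incremental stream, so only changed blocks travel.
zfs snapshot tank/mail@tuesday
zfs send -i tank/mail@monday tank/mail@tuesday \
    | ssh backuphost zfs receive backup/mail
```

Both snapshots must still exist on the sender for the incremental send to work, so a rotation script has to keep at least the last shipped snapshot around.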
>
> Other features include: much easier mounting (no need to mess with
> fstab), built-in NFS support for higher throughput, and the ability to
> rebuild a pool merely by installing the drives (in any order) into a
> new box and letting ZFS scan for all the metadata.
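The pool-rebuild trick mentioned above is the export/import cycle: pool configuration lives in on-disk labels, so the new machine can reassemble it by scanning the drives. A sketch, assuming a pool named `tank` (hypothetical; requires root):

```shell
# On the old box: cleanly detach the pool before pulling the drives.
zpool export tank

# On the new box, with the drives installed in any order:
# list pools discoverable from on-disk labels...
zpool import

# ...then import by name; ZFS reconstructs the vdev layout itself.
zpool import tank
```

If the old box died before a clean export, `zpool import -f tank` forces the import despite the "pool was in use" flag.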
>
> The most serious drawback, in my opinion, is ZoL's nearly insatiable
> appetite for RAM. Unless you purposely limit its RAM usage, ZoL's
> cache will consume nearly all available memory, causing memory
> fragmentation and ending with OOM.
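The RAM limit the paragraph alludes to is the ARC cap, set via the `zfs_arc_max` module parameter (in bytes). A config sketch, using 4 GiB as an arbitrary example value:

```shell
# Persistent: cap the ARC at 4 GiB via a modprobe option,
# picked up the next time the zfs module loads.
echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf

# Runtime: apply the same cap immediately on a live system.
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
```

The right value depends on the workload; the point is simply to leave enough headroom for applications so the ARC never has to be reclaimed under pressure.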
>
> Rgds,
> --
>

I haven't run into OOM situations caused by ZFS.

Unlike OOMs caused by konqueror, chromium, or gcc...