Rich Freeman <rich0@g.o> writes:

> On Mon, Dec 29, 2014 at 8:55 AM, lee <lee@××××××××.de> wrote:
>>
>> Just why can't you? ZFS apparently can do such things --- yet what's
>> the difference in performance of ZFS compared to hardware raid?
>> Software raid with MD makes for quite a slowdown.
>>
>
> Well, there is certainly no reason that you couldn't serialize a
> logical volume as far as design goes. It just isn't implemented (as
> far as I'm aware), though you certainly can just dd the contents of a
> logical volume.

You can use dd to make a copy. Then what do you do with this copy? I
suppose you can't just use dd to write the copy into another volume
group and have it show up as desired. You might destroy the volume
group instead ...

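For what it's worth, dd'ing into a pre-created logical volume of at least
the source's size should be safe; it's writing over the PVs (and hence the
VG metadata) that would wreck the group. A rough sketch -- all the VG/LV
names here are made up:

```shell
# Destination LV must be at least as large as the source LV:
lvcreate -L 10G -n dst vg1

# Copy into the LV's device node -- this never touches vg1's
# on-disk metadata, which lives on the PVs, not inside LVs:
dd if=/dev/vg0/src of=/dev/vg1/dst bs=4M conv=fsync

# The clone keeps the source filesystem's UUID, so give it a
# new one before mounting both at once (ext4 example):
tune2fs -U random /dev/vg1/dst
```
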
> ZFS performs far better in such situations because you're usually just
> snapshotting and not copying data at all (though ZFS DOES support
> serialization which of course requires copying data, though it can be
> done very efficiently if you're snapshotting since the filesystem can
> detect changes without having to read everything).

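To make the snapshot-vs-serialization distinction concrete, a sketch with
made-up pool/dataset names:

```shell
# Snapshots are copy-on-write: near-instant, no data copied.
zfs snapshot tank/data@monday
zfs snapshot tank/data@tuesday

# Serialization ("zfs send") does copy data -- a full stream:
zfs send tank/data@monday > /backup/data-full.zfs

# ...or an incremental stream containing only the blocks that
# changed between the snapshots, found without a full scan:
zfs send -i tank/data@monday tank/data@tuesday > /backup/data-incr.zfs

# Replay a stream into another pool:
zfs receive backup/data < /backup/data-full.zfs
```
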
How's the performance of software raid vs. hardware raid vs. ZFS raid
(which is also software raid)?

> Incidentally, other than lacking maturity btrfs has the same
> capabilities.

IIRC, there are things that btrfs can't do and ZFS can, like sending a
FS over the network.

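Sending over the network is just the same send stream piped through ssh
(host and dataset names made up). As far as I know btrfs gained an
equivalent send/receive pair around kernel 3.6, so the gap may be maturity
rather than capability:

```shell
# Replicate a snapshot to another machine over ssh:
zfs send tank/data@monday | ssh backuphost zfs receive backup/data

# btrfs equivalent (requires a read-only snapshot):
# btrfs send /snap/monday | ssh backuphost btrfs receive /mnt/backup
```
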
> The reason ZFS (and btrfs) are able to perform better is that they
> dictate the filesystem, volume management, and RAID layers. md has to
> support arbitrary data being stored on top of it - it is just a big
> block device which is just a gigantic array. ZFS actually knows what
> is in all those blocks, and it doesn't need to copy data that it knows
> hasn't changed, protect blocks when it knows they don't contain data,
> and so on. You could probably improve on mdadm by implementing
> additional TRIM-like capabilities for it so that filesystems could
> inform it better about the state of blocks, which of course would have
> to be supported by the filesystem. However, I doubt it will ever work
> as well as something like ZFS where all this stuff is baked into every
> level of the design.

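The layering difference shows up directly in the commands (device names
made up; an illustration, not a recommendation):

```shell
# md: RAID assembled below, filesystem stacked on top.  md sees
# only anonymous blocks, so a rebuild must copy the whole array:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0

# ZFS: RAID, volume management and filesystem are one layer, so
# a resilver (or a scrub) only walks blocks that are allocated:
zpool create tank mirror /dev/sda /dev/sdb
zpool scrub tank
```
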
Well, I'm planning to run some tests with ZFS. In particular, I want to
see how it performs when NFS clients write to an exported ZFS file
system.

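For that test, two things worth knowing: ZFS can share datasets itself,
and NFS clients default to synchronous writes, which tends to dominate
such a benchmark. A sketch with made-up names:

```shell
# Export a dataset via the sharenfs property instead of /etc/exports:
zfs create tank/export
zfs set sharenfs=on tank/export

# Sync-heavy NFS writes stress the ZFS intent log; a dedicated
# log device can matter more than the RAID layout here:
zpool add tank log /dev/sdc1   # small partition on an SSD
```
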
How about ZFS as root file system? I'd rather create a pool over all
the disks and create file systems within the pool than use something
like ext4 to get the system to boot.

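The pool-over-all-disks layout itself is straightforward; booting is the
only awkward part. A sketch with made-up names:

```shell
# One pool across the disks, datasets instead of partitions:
zpool create -o ashift=12 rpool mirror /dev/sda2 /dev/sdb2
zfs create -o mountpoint=/     rpool/root
zfs create -o mountpoint=/var  rpool/var
zfs create -o mountpoint=/home rpool/home
```

A small separate ext2 or vfat /boot partition sidesteps most of the
bootloader trouble if GRUB's zfs support turns out to be lacking.
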
And how do I convert a system installed on an ext4 FS (on a hardware
raid-1) to ZFS? I can plug in another two disks, create a ZFS pool from
them, make file systems (like for /tmp, /var, /usr ...) and copy
everything over. But how do I make it bootable?

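I can't claim this is battle-tested, but the usual recipe for such a
migration looks roughly like this (device and pool names made up):

```shell
# Build the new pool on the two extra disks, mounted off to the side:
zpool create -R /mnt/new rpool mirror /dev/sdc /dev/sdd
zfs create -o mountpoint=/ rpool/root    # lands at /mnt/new via -R

# Copy the running system across, staying on the root filesystem:
rsync -aHAX --one-file-system / /mnt/new/

# Making it bootable is the real work: GRUB2 needs its zfs module,
# the initramfs needs the ZFS tools to import rpool, and the kernel
# command line points at the dataset, e.g. root=ZFS=rpool/root.
```
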

--
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us. Finally, this fear has become reasonable.