On Sun, Jan 11, 2015 at 1:42 PM, lee <lee@××××××××.de> wrote:
> Rich Freeman <rich0@g.o> writes:
>>
>> Generally you do backup at the filesystem layer, not at the volume
>> management layer. LVM just manages a big array of disk blocks. It
>> has no concept of files.
>
> That may require downtime, while the whole idea of taking snapshots
> and then backing up the volume is to avoid the downtime.

Sure, which is why btrfs and zfs support snapshots at the filesystem
layer. You can do an LVM snapshot, but it requires downtime unless you
want to mount an unclean snapshot for backups.
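
To make the difference concrete, something along these lines (the
volume, pool, and path names are made up for illustration):

  # LVM: snapshots the blocks under the filesystem (the "unclean"
  # case above -- the fs never gets a say in when the copy is taken)
  lvcreate --snapshot --size 5G --name home-snap /dev/vg0/home

  # btrfs: read-only snapshot of a live subvolume (assumes /home
  # is a subvolume and /snapshots exists)
  btrfs subvolume snapshot -r /home /snapshots/home-20150111

  # ZFS: consistent snapshot of a live dataset
  zfs snapshot tank/home@20150111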

>
>>>> Just create a small boot partition and give the rest to zfs. A
>>>> partition is a block device, just like a disk. ZFS doesn't care if it
>>>> is managing the entire disk or just a partition.
>>>
>>> ZFS does care: You cannot export ZFS pools residing on partitions, and
>>> apparently ZFS cannot use the disk cache as efficiently when it uses
>>> partitions.
>>
>> Cite? This seems unlikely.
>
> ,---- [ man zpool ]
> | For pools to be portable, you must give the zpool command
> | whole disks, not just partitions, so that ZFS can label the
> | disks with portable EFI labels. Otherwise, disk drivers on
> | platforms of different endianness will not recognize the
> | disks.
> `----
>
> You may be able to export them, and then you don't really know what
> happens when you try to import them. I didn't keep a bookmark for the
> article that mentioned the disk cache.
>
> When you read about ZFS, you'll find that using the whole disk is
> recommended while using partitions is not.

Ok, I get the EFI label issue if zfs works across platforms of
different endianness and only stores that setting in the EFI label
(which seems like an odd way to do things). You didn't cite anything
for the disk cache claim, and it seems unlikely that using partitions
vs whole drives is going to matter here.
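
For what it's worth, zpool accepts both forms (device names here are
just examples):

  # whole disk: zpool partitions it itself and writes the EFI (GPT)
  # label the man page is talking about
  zpool create tank /dev/sdb

  # partition: works too; only the portability caveat above applies
  zpool create tank /dev/sdb2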

Honestly, I feel like there is a lot of cargo cult mentality with many
in the ZFS community. Another one of those "must do" things is using
ECC RAM. Sure, you're more likely to end up with data corruption
without it than with it, but the same is true with ANY filesystem.
I've yet to hear any reasonable argument as to why ZFS is more
susceptible to memory corruption than ext4.

>
>>> Caching in memory is also less efficient because another
>>> file system has its own cache.
>>
>> There is no other filesystem. ZFS is running on bare metal. It is
>> just pointing to a partition on a drive (an array of blocks) instead
>> of the whole drive (an array of blocks). The kernel does not cache
>> partitions differently from drives.
>
> How do you use a /boot partition that doesn't have a file system?

Oh, I thought you meant that the memory cache of zfs itself is less
efficient. I'd be interested in a clear explanation as to why ten
100GB filesystems use the cache differently than one 1TB filesystem if
file access is otherwise the same. However, even if having a 1GB boot
partition mounted did waste cache space, that problem is easily solved
by just not mounting it except when doing kernel updates.
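
That's one line in fstab (the device name is just an example):

  # /etc/fstab: don't mount /boot automatically; mount it by hand
  # before a kernel update and unmount it afterwards
  /dev/sda1   /boot   ext2   noauto,noatime   1 2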

>
>>> On top of that, you have the overhead of
>>> software raid for that small partition unless you can dedicate
>>> hardware-raided disks for /boot.
>>
>> Just how often are you reading/writing from your boot partition? You
>> only read from it at boot time, and you only write to it when you
>> update your kernel/etc. There is no requirement for it to be raided
>> in any case, though if you have multiple disks that wouldn't hurt.
>
> If you want to accept that the system goes down or has to be brought
> down or is unable to boot because the disk you have your /boot partition
> on has failed, you may be able to get away with a non-raided /boot
> partition.
>
> When you do that, what's the advantage other than saving the software
> raid? You still need to either dedicate a disk to it, or you have to
> leave a part of all the other disks unused and cannot use them as a
> whole for ZFS because otherwise they will be of different sizes.

Sure, when I have multiple disks available and need a boot partition, I
RAID it with software RAID. So what? Updating your kernel /might/ be
a bit slower when you do that twice a month or whatever.
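
Setting that up is a one-off anyway (device names are examples;
metadata 1.0 puts the md superblock at the end of the partition, so a
boot loader that knows nothing about md can still read it as a plain
filesystem):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        --metadata=1.0 /dev/sda1 /dev/sdb1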

>> Better not buy an EFI motherboard. :)
>
> Yes, they are a security hazard and a PITA. Maybe I can sit it out
> until they come up with something better.

Security hazard? How is being able to tell your motherboard to only
boot software of your own choosing a security hazard? Or are you
referring to something other than UEFI?

I think the pain is really only there because most of the
utilities/etc haven't been updated to the new reality.

In any case, I don't see it going away anytime soon.

>
>>>>> With ZFS at hand, btrfs seems pretty obsolete.
>>>>
>>>> You do realize that btrfs was created when ZFS was already at hand,
>>>> right? I don't think that ZFS is likely to make btrfs obsolete
>>>> unless it adopts more dynamic desktop-oriented features (like being
>>>> able to modify a vdev), and is relicensed to something GPL-compatible.
>>>> Unless those happen, it is unlikely that btrfs is going to go away,
>>>> unless it is replaced by something different.
>>>
>>> Let's say it seems /currently/ obsolete.
>>
>> You seem to have an interesting definition of "obsolete" - something
>> which holds potential promise for the future is better described as
>> "experimental."
>
> Can you build systems on potential promises for the future?

I never claimed that you could. I just said that something which is
"experimental" is not "obsolete."

Something which is obsolete has no future promise. Something which is
experimental is full of promise.

Would a potential cure for cancer be considered "obsolete" even if it
isn't suitable for general use yet?

>
> If the resources it takes to develop btrfs were put towards improving
> ZFS, or the other way round, wouldn't that be more efficient? We might
> even have a better solution available now. Of course, it's not a good
> idea to remove variety, so it's a dilemma. But are the features
> provided, or intended to be provided, and the problems both btrfs and
> ZFS are trying to solve so different that each of them needs to
> re-invent the wheel?

Uh, you must be new around here. :)

This argument has been raised countless times. Sure, Debian would be
a bit nicer if everybody on this list quit wasting their time working
on Gentoo, but it just doesn't work that way.

>
> What are these licensing issues good for other than preventing
> solutions?

You do realize that the licensing issue was created SPECIFICALLY to
prevent solutions, right? It isn't like Sun didn't know what they
were doing when they created the CDDL.

>
>> I'd actually be interested in a comparison of the underlying btrfs vs
>> zfs designs. I'm not talking about implementation (bugs/etc), but the
>> fundamental designs. What features are possible to add to one which
>> are impossible to add to the other, what performance limitations will
>> the one always suffer in comparison to the other, etc? All the
>> comparisons I've seen just compare the implementations, which is
>> useful if you're trying to decide what to install /right now/ but less
>> so if you're trying to understand the likely future of either.
>
> The future is not predictable, and you can only install something
> /now/. What you will be able to install in the future and what your
> requirements are in the future aren't very relevant.

Sure it is. You can compare the design of a bubble sort against the
design of a merge sort completely independently of any implementation
of either. If you're invested in a sort program that was implemented
with the former, with the goal of applying it to large datasets, you
can then realize that it makes far more sense to spend your time
rewriting it to use a better algorithm than to try to optimize every
instruction in the program to squeeze an extra 10% out of your O(n^2)
solution.
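
(Back-of-the-envelope: for n = 10^6 items, O(n^2) means on the order
of 10^12 comparisons, while O(n log n) is around 2x10^7. A 10%
micro-optimization doesn't begin to close a gap of four to five orders
of magnitude.)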

> So what's the benefit you'd get from the comparison you're interested
> in?

Well, for starters I'd like to understand how they both work. You'd
think I run Gentoo or something. :)

Understanding the maturity of the code today is certainly important.
However, you don't really think that a rigorous examination of the
design of something as fundamental as a filesystem isn't important for
understanding the long-term direction the Linux world is going to move
in, right?

I'm sure those working on both zfs and btrfs understand the other
solution reasonably well, or at least I'd hope they would. Otherwise
we're doing what you previously suggested we shouldn't do -
reinventing the wheel.

--
Rich