On Thursday 29 March 2007 03:09:33 Remy Blank wrote:
> Boyd Stephen Smith Jr. wrote:
> >> <troll>
> >> ZFS?
> >> </troll>
> >
> > You say troll, I say possibility; I'll certainly consider it.
>
> Actually, I would be very interested in using ZFS for my data.
>
> The "troll" was more about the fact that the ZFS license was explicitly
> designed to be GPL-2 incompatible, hence preventing it from being
> included into Linux (it would require a clean-room rewrite from the
> specs).
>
> > However, the demos that I've seen about ZFS stress how easy it is to
> > administer, and all the LVM-style features it has. Personally,
> > I'm /very/ comfortable with LVM and am of the opinion that such
> > features don't actually belong at the "filesystem" layer.
>
> I haven't made the switch to LVM and am still using a plain old RAID-1
> mirror. I'm not that comfortable adding one more layer to the data path,
> and one more difficulty in case of hard disk failure.
>
> > I need a good general-purpose filesystem; what matters most to me is:
> > 1) Online growing of the filesystem; with LVM I use this a lot, and I
> > won't consider a filesystem I can't grow while it is in active use.
> > 2) Journaling or other techniques (FFS from the *BSD world does
> > something they don't like to call journaling) that reduce the
> > frequency of full fscks.
> > 3) All-round performance; I don't mind it using extra CPU time or
> > memory to make filesystem performance better, as I have both to spare.
> > 4) Storage savings (like tail packing or transparent compression)
>
> I completely agree with 1) and 2), and 3) and 4) are nice-to-haves. What
> I like in ZFS is the data integrity check, i.e. every block gets a
> checksum, and it can auto-repair in a RAID-Z configuration, something
> that RAID-1 cannot.
|
RAID-3(?)/5/6 can self-repair like this, but the checking is done at the
stripe level rather than the inode level. Since I use HW RAID-6 across 10
drives, I'm not that concerned with having this done at the filesystem
level as well. Even without the extra disks, you can use SW RAID across
partitions on a single disk (or a small number of disks). [(Ab)uses of SW
RAID like this are not something I'd always recommend, but they can
provide the integrity checks you desire.]
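
To illustrate what I mean by stripe-level (a toy Python sketch, nothing
like how real RAID code is written): with single-parity RAID, the parity
block is just the XOR of the data blocks in the stripe, so any one lost
block can be rebuilt from the survivors:

def xor_blocks(a, b):
    # XOR two equal-sized blocks byte by byte.
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(blocks):
    # Parity block for a stripe = XOR of all its data blocks.
    parity = bytes(len(blocks[0]))  # all zeros
    for block in blocks:
        parity = xor_blocks(parity, block)
    return parity

def rebuild_missing(survivors, parity):
    # XORing the parity with the surviving blocks yields the lost one.
    missing = parity
    for block in survivors:
        missing = xor_blocks(missing, block)
    return missing

stripe = [b"block on disk 0!", b"block on disk 1!", b"block on disk 2!"]
parity = make_parity(stripe)

# Pretend disk 1 died: rebuild its block from the other disks + parity.
assert rebuild_missing([stripe[0], stripe[2]], parity) == stripe[1]

Note the rebuild assumes you already know which disk failed; parity alone
can't tell you which block went bad, which is the gap ZFS's per-block
checksums are meant to fill.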
|
Also, EVMS provides a BBR (bad block relocation) target that can work
around isolated disk failures.
|
> 5) Reliable data integrity checks and self-healing capability.
|
Overall, this is something I'd rather see done at the block device level
than at the filesystem level. Certainly, a filesystem shouldn't shy away
from sanity checks that can be done with little overhead besides CPU
time, but adding a checksum to every block might be a little overkill.
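
For what it's worth, the per-block scheme itself isn't complicated.
Here's a toy Python sketch (my rough reading of the idea, not ZFS's
actual on-disk logic) of why a checksum stored apart from the data lets a
mirror pick the good copy and heal the bad one, where plain RAID-1 just
sees two copies that disagree:

import hashlib

def checksum(block):
    return hashlib.sha256(block).digest()

def read_with_repair(copies, stored_sum):
    # Return a verified copy, rewriting any copy that fails its checksum.
    good = next(b for b in copies if checksum(b) == stored_sum)
    for i, b in enumerate(copies):
        if checksum(b) != stored_sum:
            copies[i] = good  # self-heal the corrupt copy in place
    return good

block = b"important data"
stored_sum = checksum(block)   # kept with the block pointer, not the data
mirror = [b"bit-rotted junk", block]  # copy 0 silently corrupted

assert read_with_repair(mirror, stored_sum) == block
assert mirror[0] == block  # the bad copy was repaired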
|
--
Boyd Stephen Smith Jr.                   ,= ,-_-. =.
bss03@××××××××××.net                    ((_/)o o(\_))
ICQ: 514984 YM/AIM: DaTwinkDaddy          `-'(. .)`-'
http://iguanasuicide.org/                     \_/