Rich Freeman <rich0@g.o> writes:

> On Sat, Jan 10, 2015 at 1:22 PM, lee <lee@××××××××.de> wrote:
>> Rich Freeman <rich0@g.o> writes:
>>>
>>> You can dd from a logical volume into a file, and from a file into a
>>> logical volume. You won't destroy the volume group unless you do
>>> something dumb like trying to copy it directly onto a physical volume.
>>> Logical volumes are just block devices as far as the kernel is
>>> concerned.
>>
>> You mean I need to create an LV (of the same size) and then use dd to
>> write the backup into it? That doesn't seem like a safe method.
>
> Doing backups with dd isn't terribly practical, but it is completely
> safe if done correctly. The LV would need to be the same size or
> larger, or else your filesystem will be truncated.

Yes, my impression is that it isn't very practical or a good method, and
I find it strange that LVM is still lacking some major features.
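
For what it's worth, the round-trip Rich describes is mechanical. A
minimal sketch, where a plain file stands in for a real LV such as
/dev/vg0/root (an illustrative name) so it runs unprivileged:

```shell
# Stand-in for dd backup/restore of an LV. In real use the source and
# restore target would be a block device like /dev/vg0/root; a plain
# file is used here so the sketch runs without root.
dd if=/dev/zero of=lv.img bs=1M count=4 2>/dev/null   # fake 4 MiB "LV"
dd if=lv.img of=lv.backup bs=1M 2>/dev/null           # back it up
# Restore: the target must be at least as large as the backup image,
# otherwise the filesystem inside it gets truncated.
dd if=lv.backup of=lv.img bs=1M 2>/dev/null
cmp -s lv.img lv.backup && echo "restore verified"
```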

>>>> How about ZFS as root file system? I'd rather create a pool over all
>>>> the disks and create file systems within the pool than use something
>>>> like ext4 to get the system to boot.
>>>
>>> I doubt zfs is supported by grub and such, so you'd have to do the
>>> usual in-betweens as you're alluding to. However, I suspect it would
>>> generally work. I haven't really used zfs personally other than
>>> tinkering around a bit in a VM.
>>
>> That would be a very big disadvantage. When you use zfs, it doesn't
>> really make sense to have extra partitions or drives; you just want to
>> create a pool from all drives and use that. Even if you accept a boot
>> partition, that partition must be on a raid volume, so you either have
>> to dedicate at least two disks to it, or you're employing software raid
>> for a very small partition and cannot use the whole device for ZFS as
>> recommended. That just sucks.
>
> Just create a small boot partition and give the rest to zfs. A
> partition is a block device, just like a disk. ZFS doesn't care if it
> is managing the entire disk or just a partition.

ZFS does care: You cannot export ZFS pools residing on partitions, and
apparently ZFS cannot use the disk cache as efficiently when it uses
partitions. Caching in memory is also less efficient because another
file system has its own cache. On top of that, you have the overhead of
software raid for that small partition unless you can dedicate
hardware-raided disks for /boot.
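
For concreteness, the small-/boot-plus-pool layout Rich suggests would
look roughly like this. Everything here is assumed for illustration
(the disk names sda/sdb, the pool name rpool, the partition sizes), the
commands need root, and they destroy data, so treat it as a sketch, not
a recipe:

```shell
# Illustrative sketch: small mdraid-mirrored /boot, the rest to ZFS.
# /dev/sda, /dev/sdb, "rpool", and the sizes are assumed values.
for disk in /dev/sda /dev/sdb; do
    parted -s "$disk" mklabel gpt \
        mkpart boot 1MiB 513MiB \
        mkpart zfs 513MiB 100%
done
# Software raid, but only for this one small partition:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0
# ZFS takes the large second partitions; to the kernel they are
# ordinary block devices, same as whole disks.
zpool create rpool mirror /dev/sda2 /dev/sdb2
```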

> This sort of thing was very common before grub2 started supporting
> more filesystems.

That doesn't mean it's a good setup. I find it totally undesirable.
Having a separate /boot partition has always been a crutch.

>> Well, I don't want to use btrfs (yet). The raid capabilities of btrfs
>> are probably one of its most unstable features. They are derived from
>> mdraid: Can they compete with ZFS both in performance and, more
>> importantly, reliability?
>>
>
>
> Btrfs raid1 is about as stable as btrfs without raid. I can't say
> whether any code from mdraid was borrowed, but btrfs raid works
> completely differently and has about as much in common with mdraid as
> zfs does.

Hm, I might have misunderstood an article I've read.
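
For the record, btrfs raid1 is configured inside the filesystem itself
rather than layered on mdraid. A minimal sketch, using loop devices so
no spare disks are needed (needs root; file names are illustrative):

```shell
# btrfs mirrors data (-d raid1) and metadata (-m raid1) natively,
# with no md layer underneath. Illustrative; losetup/mkfs need root.
truncate -s 1G disk0.img disk1.img
loop0=$(losetup -f --show disk0.img)
loop1=$(losetup -f --show disk1.img)
mkfs.btrfs -f -d raid1 -m raid1 "$loop0" "$loop1"
```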

> I can't speak for zfs performance, but btrfs performance isn't all
> that great right now - I don't think there is any theoretical reason
> why it couldn't be as good as zfs one day, but it isn't today.

Give it another 10 years, and btrfs might be the default choice.

> Btrfs is certainly far less reliable than zfs on solaris - zfs on
> linux has less long-term history of any kind but most seem to think it
> works reasonably well.

It seems that ZFS does work (I can't say anything about its reliability
yet), and it provides a solution unlike any other FS. Btrfs doesn't
fully work yet; see [1].


[1]: https://btrfs.wiki.kernel.org/index.php/RAID56

>> With ZFS at hand, btrfs seems pretty obsolete.
>
> You do realize that btrfs was created when ZFS was already at hand,
> right? I don't think that ZFS will be likely to make btrfs obsolete
> unless it adopts more dynamic desktop-oriented features (like being
> able to modify a vdev), and is relicensed to something GPL-compatible.
> Unless those happen, it is unlikely that btrfs is going to go away,
> unless it is replaced by something different.

Let's say it seems /currently/ obsolete. It's not fully working yet,
its reliability is very questionable, and it's not as easy to handle as
ZFS. By the time btrfs has matured to the point where it isn't obsolete
anymore, chances are that there will be something else to replace it.

Solutions are needed /now/, not in about 10 years when btrfs might be
ready.


--
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us. Finally, this fear has become reasonable.