On Sat, Jan 10, 2015 at 1:22 PM, lee <lee@××××××××.de> wrote:
> Rich Freeman <rich0@g.o> writes:
>>
>> You can dd from a logical volume into a file, and from a file into a
>> logical volume. You won't destroy the volume group unless you do
>> something dumb like trying to copy it directly onto a physical volume.
>> Logical volumes are just block devices as far as the kernel is
>> concerned.
>
> You mean I need to create a LV (of the same size) and then use dd to
> write the backup into it? That doesn't seem like a safe method.

Doing backups with dd isn't terribly practical, but it is completely
safe if done correctly. The LV would need to be the same size or
larger, or else your filesystem will be truncated.
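A rough illustration of that size caveat (all names here are made up, and plain files stand in for LVs so this is safe to try anywhere):

```shell
# Stand-in for an LV such as /dev/vg0/home (hypothetical name);
# on a real system you would point dd at the device node instead.
dd if=/dev/zero of=lv-orig.img bs=1M count=4 2>/dev/null

# Back up the "LV" to an image file.
dd if=lv-orig.img of=backup.img bs=1M 2>/dev/null

# Restore into a same-sized target: the copy is byte-identical.
dd if=backup.img of=lv-restore.img bs=1M 2>/dev/null
cmp -s lv-orig.img lv-restore.img && echo "restore is identical"

# Restore into a *smaller* target (count=2 caps it at 2 MiB, the way
# a too-small LV would): the tail of the filesystem is silently lost.
dd if=backup.img of=lv-small.img bs=1M count=2 2>/dev/null
```

On a real restore target, dd stops writing when the device ends, with no error that tells you the filesystem inside is now incomplete.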

>
>>> How about ZFS as root file system? I'd rather create a pool over all
>>> the disks and create file systems within the pool than use something
>>> like ext4 to get the system to boot.
>>
>> I doubt zfs is supported by grub and such, so you'd have to do the
>> usual in-betweens as you're alluding to. However, I suspect it would
>> generally work. I haven't really used zfs personally other than
>> tinkering around a bit in a VM.
>
> That would be a very big disadvantage. When you use zfs, it doesn't
> really make sense to have extra partitions or drives; you just want to
> create a pool from all drives and use that. Even if you accept a boot
> partition, that partition must be on a raid volume, so you either have
> to dedicate at least two disks to it, or you're employing software raid
> for a very small partition and cannot use the whole device for ZFS as
> recommended. That just sucks.

Just create a small boot partition and give the rest to zfs. A
partition is a block device, just like a disk. ZFS doesn't care if it
is managing the entire disk or just a partition. This sort of thing
was very common before grub2 started supporting more filesystems.

>
> Well, I don't want to use btrfs (yet). The raid capabilities of btrfs
> are probably one of its most unstable features. They are derived from
> mdraid: Can they compete with ZFS both in performance and, more
> importantly, reliability?
>

Btrfs raid1 is about as stable as btrfs without raid. I can't say
whether any code from mdraid was borrowed, but btrfs raid works
completely differently and has about as much in common with mdraid as
zfs does. I can't speak for zfs performance, but btrfs performance
isn't all that great right now - I don't think there is any
theoretical reason why it couldn't be as good as zfs one day, but it
isn't today. Btrfs is certainly far less reliable than zfs on Solaris
- zfs on linux has less long-term history of any kind, but most seem
to think it works reasonably well.

> With ZFS at hand, btrfs seems pretty obsolete.

You do realize that btrfs was created when ZFS was already at hand,
right? I don't think ZFS is likely to make btrfs obsolete unless it
adopts more dynamic desktop-oriented features (like being able to
modify a vdev) and is relicensed to something GPL-compatible. Unless
those happen, btrfs is unlikely to go away, except perhaps to be
replaced by something different.


--
Rich