Rich Freeman posted on Sun, 15 Jul 2012 08:30:31 -0400 as excerpted:

> Looking at the docs it seems like you'd need a hook for the cmdline
> stage that sets rootok (assuming it gets that far without a root, or if
> you set it to something like root=TMPFS). Then you'd install a hook at
> the mount stage to mount the tmpfs, and then use the fstab-sys module
> to mount everything else. You'd need to create mountpoints for
> everything of course, and not just the boot-critical stuff, since
> otherwise openrc won't be able to finish mounting everything.
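
The cmdline-stage test Rich describes might be sketched roughly like
this (a guess at the shape of such a hook, not a tested dracut module;
the file name and the root=TMPFS convention are assumptions):

```shell
#!/bin/sh
# Rough sketch of a dracut cmdline-stage hook, e.g. installed as
# /usr/lib/dracut/hooks/cmdline/20-tmpfs-root.sh (path hypothetical).
# In a real initramfs, dracut sets $root from root= on the kernel
# command line; it's assigned inline here so the logic runs standalone.
root=TMPFS

case "$root" in
    TMPFS|tmpfs)
        rootok=1   # tell dracut a root is accounted for; don't wait for a block device
        ;;
esac

echo "rootok=${rootok}"
```

The companion mount-stage hook would then do the actual
`mount -t tmpfs` before fstab-sys picks up the rest.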

The last bit I had already anticipated, as I'm doing something similar
with my tmpfs-based /tmp and /var/tmp (symlinked to /tmp). Nothing
mounted on top, but I'm creating subdirs inside it, setting permissions,
etc. A critical difference is that this is on a full rootfs so I don't
have to worry about not having the necessary tools available yet, but I
do have the general ideas down. And I'm doing some bind-mounts as well,
which require a remount to let all the options take effect, and of course
there's mount ordering to worry about, etc. So I have the general idea,
but doing it from an initr* with limited tools available will be
interesting.
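
The subdir-and-permissions part of that /tmp setup can be sketched
against a scratch directory (so it runs unprivileged; the .private
subdir is a made-up example, and a real setup would operate on an
actual tmpfs mount):

```shell
#!/bin/sh
# Sketch of populating a fresh tmpfs-backed /tmp; uses a scratch dir
# instead of a real tmpfs mount so it needs no root. The /var/tmp ->
# /tmp symlink mirrors the setup described in the text; the .private
# subdir is invented for illustration.
ROOT=$(mktemp -d)

mkdir -p "$ROOT/tmp"
chmod 1777 "$ROOT/tmp"          # world-writable with the sticky bit, like /tmp

mkdir -p "$ROOT/tmp/.private"   # hypothetical restricted subdir
chmod 0700 "$ROOT/tmp/.private"

# /var/tmp as a symlink to /tmp (here: var/tmp inside the scratch root)
mkdir -p "$ROOT/var"
ln -s ../tmp "$ROOT/var/tmp"

ls -ld "$ROOT/tmp" "$ROOT/var/tmp"
```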

As for the tmpfs rootfs itself, I have the vague idea that I'd
"simply" (note the scare-quotes) use what's normally the initial root
that's essentially thrown away, only I'd not throw it away, I'd just
mount everything on top, keep using it, and /somehow/ ensure that
anything running from it directly terminates one way or another, so that
I don't have old processes stuck around using the mounted-over points.

>> The big problem with btrfs subvolumes from my perspective is that
>> they're still all on a single primary filesystem, and if that
>> filesystem develops problems... all your eggs/data are in one big
>> basket, good luck if the bottom drops out of it!
>
> Maybe, but does it really buy you much if you only lose /lib, and not
> /usr? I guess it is less data to restore from backup, but...

Which is why I keep /usr (and /lib64 and /usr/lib64) on rootfs currently,
tho the traditional /usr/src, /usr/local, and /usr/portage are either
pointed elsewhere with the appropriate vars or mountpoints/symlinks to
elsewhere. Of course that'd have to change a bit for a tmpfs rootfs,
since /lib64, /usr and /etc would obviously be mounted from elsewhere,
but they could still be either symlinked or bind-mounted to the
appropriate location on the single (read-only) system-filesystem.
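
Repointing those trees is a matter of a few Portage variables; a
hypothetical make.conf excerpt (the /mnt/sys paths are invented for
illustration, and these are the pre-repos.conf-era knobs):

```shell
# Hypothetical /etc/make.conf excerpt: point the traditional trees
# elsewhere. Variable names are Portage's own; the /mnt/sys paths are
# made up for this example.
PORTDIR="/mnt/sys/portage"
DISTDIR="/mnt/sys/distfiles"
PKGDIR="/mnt/sys/packages"
PORTAGE_TMPDIR="/tmp"
```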

FWIW I remember being truly fascinated with the power of symlinks when I
first switched from MS. Now I consider them normal, but the power and
flexibility of bind-mounts still amazes me, especially since, as with
symlinks, it's possible to bind-mount individual files, but unlike
symlinks (more like hard-links but cross-filesystem), it's possible to
have some of the bind-mounts read-write (or dev, exec, etc) while others
are read-only (or nodev, noexec...).
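
As an illustration, per-file bind mounts with differing options might
look like this in fstab form (all paths invented; and per the remount
caveat earlier, older mount versions apply ro and friends only on a
separate remount pass, not on the initial bind):

```
# Hypothetical /etc/fstab entries: bind-mounting individual files and a
# dir, with different options per mount (all paths invented)
/data/shared.conf  /etc/app.conf          none  bind,ro        0 0
/data/state.db     /var/lib/app/state.db  none  bind           0 0
/data/drop         /srv/drop              none  bind,nodev,ro  0 0
```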

> The beauty of btrfs subvolumes is that it lets you manage all your
> storage as a single pool, even more flexibly than LVM. Sure, chopping
> it up does reduce the impact of failure a bit, but I'd hate to have to
> maintain such a system. Filesystem failure should be a very rare
> occurrence for any decent filesystem (of course, this is why I won't be
> using btrfs in production for a while).

Very rare, yes. Hardware issues happen tho. I remember the a/c failing
at one point, thus causing ambient temps (Phoenix summer) to reach 50C or
so, and who knows how much in the computer. Head-crash time. But after
cooling off, the unmounted-at-the-time filesystems were damaged very
little, while a couple of the mounted filesystems surely had physical
grooves in the platter. Had that been all one filesystem, the damage
would have been far less confined. That's one example.

Another one happened back when I was beta testing IE4 on MS, due to a
system software error on their part. IE started bypassing the
filesystem and writing to the cache index directly, but the file wasn't
marked with the system attribute, so the defragger moved it and put
something else in that physical disk location. I had my temp-inet-files
on tmp, which was its own partition and didn't have significant issues,
but some of the other betatesters lost valuable data, overwritten by IE,
which was still bypassing the filesystem and writing directly to what it
thought was its cache index file.

So it's not always a failure of the filesystem itself. But I tried btrfs
for a bit just to get an idea what it was all about, and agree totally
with you there. I'm off of it entirely now, and won't be touching it
again until I'd guess early next year at the earliest. The thing simply
isn't ready for the expectations I have of my filesystems, and anybody
using it now without backups is simply playing Russian Roulette with
their data.

-- 
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman