On 2013-07-21, at 13:42:17, Pacho Ramos <pacho@g.o> wrote:

> On Sat, 31-03-2012 at 17:33 -0700, Zac Medico wrote:
> > On 03/31/2012 04:25 PM, Walter Dnes wrote:
> > > On Sat, Mar 31, 2012 at 10:42:50AM -0700, Zac Medico wrote:
> > >> On 03/31/2012 06:34 AM, Pacho Ramos wrote:
> > >>> About the wiki page, I can only document reiserfs+tail usage, as it's
> > >>> the one I use and know; as for other alternatives like squashfs or
> > >>> loop mounts... I cannot promise anything, as I simply don't know how
> > >>> to set them up.
> > >>
> > >> Squashfs is really simple to use:
> > >>
> > >> mksquashfs /usr/portage portage.squashfs
> > >> mount -o loop portage.squashfs /usr/portage
> > >
> > > Don't the "space-saving filesystems" (squashfs, reiserfs-with-tail,
> > > etc.) run more slowly due to their extra finicky steps to save space?
> > > If you really want to save a gigabyte or two, run "eclean -d distfiles"
> > > and "localepurge" after every emerge update. I've also cobbled together
> > > my own "autodepclean" script that checks for, and optionally unmerges,
> > > unneeded stuff that was pulled in as a dependency of a package that has
> > > since been removed.
> >
> > Well, in this case squashfs is more about improving access time than
> > saving space. You end up with the whole tree stored in a mostly
> > contiguous chunk of disk space, which minimizes seek time.
>
> Would it be possible to generate and provide squashed files at the same
> time the portage tree snapshot tarballs are generated? mksquashfs can
> take a lot of resources depending on the machine, but providing the
> squashed images would still benefit people by letting them download and
> mount them.

I've been experimenting with squashfs lately, and here are a few notes:

1. I didn't find a good way of generating incremental images with
squashfs itself. I didn't try tools like diffball (the ones that were
used in emerge-delta-webrsync), but I recall they were very slow (you'd
need a 56K modem for them to be faster than rsync), and I doubt they'd
fit squashfs's specifics.

2. squashfs is best used with a union filesystem like aufs3. However,
that basically requires patching the kernel, since the FUSE-based union
filesystems simply don't work:

a) unionfs-fuse doesn't support replacing files from the read-only branch,

b) funionfs somehow gets broken by rsync.

I haven't tested the old unionfs, but I got aufs3 working great; a rough
sketch of the mount setup is below.
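
For reference, this is more or less what I mean; the image path, branch
directories and mountpoint are just example names:

# read-only squashfs image, loop-mounted as the bottom branch
mount -o loop,ro /var/cache/squash/portage.sqfs /var/cache/squash/ro0

# aufs3 union: a small writable branch on top of the read-only one
mount -t aufs -o br=/var/cache/squash/rw=rw:/var/cache/squash/ro0=ro \
    none /usr/portage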

3. squashfs+aufs3 really benefits from the '--omit-dir-times' rsync
option. Otherwise, rsync recreates the whole directory structure on
each sync. The option also results in much less output. We should think
about making it the default.
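
In the meantime, a one-liner in make.conf should do; this is just a
sketch, assuming the default rsync-based sync, since the variable is
appended to portage's stock rsync options:

# /etc/portage/make.conf
PORTAGE_RSYNC_EXTRA_OPTS="--omit-dir-times"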

4. 'emerge --sync' is ultra-fast with this combo; even a very big sync
finishes in less than a minute.

5. I have doubts about 'emerge -1vDtu @world' speed. It's a very
subjective impression, but I feel like reiserfs was actually faster in
this regard. However, the space savings would surely benefit our users.

6. If we're to do squashfs+aufs3, we need a clean directory structure
for all of it, including the squashfs files, intermediate mounts and
r/w branches, along the lines of the layout sketched below.
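
Purely as an illustration (none of these paths are a proposal), it could
look like:

/var/cache/squash/portage.sqfs   # the base squashfs image
/var/cache/squash/ro0/           # loop mount of the base image
/var/cache/squash/rw/            # writable aufs branch
/usr/portage/                    # the aufs union mount users see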

7. We could probably get incremental squashfs+aufs3 by squashing the old
r/w branches and adding new ones on top of them. But considering the
'emerge --sync' speed gain, I don't know whether this is really worth the
effort, and whether the growing number of branches wouldn't make it slow.
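
Roughly, each cycle would look like this (same example paths as above;
the exact branch juggling is only a sketch):

# squash the accumulated r/w branch into a new read-only layer
mksquashfs /var/cache/squash/rw /var/cache/squash/delta1.sqfs
mount -o loop,ro /var/cache/squash/delta1.sqfs /var/cache/squash/ro1

# rebuild the union with the new layer in the middle and a fresh,
# empty r/w branch on top
umount /usr/portage
mount -t aufs \
    -o br=/var/cache/squash/rw-new=rw:/var/cache/squash/ro1=ro:/var/cache/squash/ro0=ro \
    none /usr/portage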

--
Best regards,
Michał Górny