Beso <givemesugarr@×××××.com> posted
d257c3560802131127g51bccaf6s920e73d1f2416766@××××××××××.com, excerpted
below, on Wed, 13 Feb 2008 19:27:44 +0000:
|
> i'll try out duncan's speedups for shm and but for the dev one i don't
> use baselayout 2 and i'm still with the 1st version, since i don't feel
> like upgrading to it yet. but i'd like to know some more about what are
> the improvements of the second version.
|
Well, among other things it's possible to /really/ do parallel startup
now. The option was there before, but it didn't parallelize that much.
Of course, if you have any startup scripts whose dependencies aren't
declared correctly, that may be hidden now, but it will very likely show
itself once the system actually does try things in parallel. (For those
who prefer traditional serial startup, that remains the safer default.)
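
To try it, parallel startup is a one-line toggle. The exact option name
has varied across versions, so treat this as a sketch: under
baselayout-2/OpenRC it lives in /etc/rc.conf, while under baselayout-1
the equivalent was RC_PARALLEL_STARTUP in /etc/conf.d/rc.

```shell
# /etc/rc.conf (baselayout-2/OpenRC) -- start services in parallel
# where dependencies allow. Leave it at "NO" (the default) if you
# suspect any of your initscripts have undeclared dependencies.
rc_parallel="YES"
```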
|
Startup's also much faster as certain parts are written in C now, instead
of as scripts. Of course, the various service scripts remain just that,
scripts.
|
With baselayout-1, the very early core scripts, clock, modules, lvm,
raid, etc, were actually ordered based on a list rather than on their
normal Gentoo dependencies (before, after, uses, needs, etc). That was
because the dependency resolver didn't work quite right that early on.
That has now been fixed, and all scripts, including the early stuff like
clock, start in the order the dependencies would indicate, not based on
an arbitrary list.
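
Those dependency keywords are declared in each initscript's depend()
function. Here's an illustrative sketch; the service name "foo" and the
particular dependencies are made up, not taken from any real script:

```shell
# Sketch of a depend() block in an OpenRC/baselayout-2 initscript,
# e.g. /etc/init.d/foo (hypothetical service):
depend() {
    need localmount    # hard requirement: local filesystems first
    use logger         # soft: order after logger if it's in the runlevel
    before netmount    # make sure foo starts before netmount
}
```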
|
Various settings are in more logical/traditional locations with
baselayout 2. An example is the /dev filesystem, mounted if you have
udev active. Previously, its configuration was in one or more of the
baselayout config files (probably /etc/conf.d/rc, but that was quite a
while ago here, and I've forgotten the details so can't be sure). Now,
the setting in /etc/fstab for that filesystem is honored, as one might
ordinarily expect for any filesystem.
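
As an illustration only (the mount options here are an example, not a
recommendation), an /etc/fstab line for a tmpfs-backed /dev might look
something like this:

```shell
# Illustrative /etc/fstab entry for the udev-managed /dev filesystem;
# size and mode are example values, tune to taste.
# <fs>  <mountpoint>  <type>  <opts>                    <dump> <pass>
none    /dev          tmpfs   nosuid,size=10M,mode=0755  0      0
```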
|
Former addons like lvm and raid now have their own initscripts, just as
any other boot service.
|
> 5. as for the raid stuff i cannot do it since i've only got one disk.
> i'll try to see what happens with swap set to 100.
|
For a single disk, it's possible you'll actually want it set the other
way, toward 0 from the default of 60, especially if you are annoyed at
the time it takes to swap programs back in after big compiles or a
night's system scan for slocate (I forget what the scanner is called,
as I don't have slocate installed here so don't run the database
updater/scanner). On a system such as mine, with swap roughly twice as
fast as the main filesystems, however, keeping cache and swapping out
apps makes much more sense, since it's faster to read the apps back in
from swap than it is to reload the data I'd have dumped from cache to
keep the apps in memory. I actually don't even notice swap usage unless
I happen to look at ksysguard, or unless it's swapping over a gig,
which doesn't happen too often with 8 gigs of RAM. Still, it's quite
common to have a quarter to three-quarters of a gig of swapped-out
apps, since I've set swappiness to 100, thereby telling the kernel to
keep cache if at all possible, and I routinely do -j12 compiles, often
several at a time even, so several gigs of tmpfs plus several gigs of
gcc instances in memory, thereby forcing minor swapping even with 8
gigs of RAM, isn't unusual. (Of course, I use emerge --pretend to
ensure the packages I'm emerging in parallel don't conflict with or
depend on each other, so the merges remain parallel.)
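
Swappiness is exposed as the vm.swappiness sysctl, so experimenting is
cheap. A quick sketch (the temporary change needs root):

```shell
# Check the current value; the kernel default is 60:
sysctl vm.swappiness
# Try a new value temporarily (reverts on reboot):
sysctl -w vm.swappiness=100
# To make it permanent, add this line to /etc/sysctl.conf:
#   vm.swappiness = 100
```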
|
> 6. if i use some
> other programs while compiling into tmpfs bigones i need to nice the
> process or i'll get some misbehaviour from the other programs.
|
Here's another hint. Consider setting PORTAGE_NICENESS=19 in make.conf.
Not only does it stop the hogging; the system then considers portage
batch priority and gives it longer timeslices. Thus, counter-intuitively
for lowering the priority, a merge can actually take less real clock
time, because the CPU is spending less time shuffling processes around
and more time actually doing work. Of course, if you run folding@home or
the like, you'll either want to turn that off when doing this, or set
portage to 18 or numerically lower instead of 19, so it's not competing
directly with your idle-time client.
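
The make.conf side of that is just one line:

```shell
# /etc/make.conf -- run all of portage's build work at nice 19
# (batch priority); drop to 18 if you also run an idle-priority
# client such as folding@home.
PORTAGE_NICENESS=19
```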
|
With kernel 2.6.24, there's also a new scheduling option that groups by
user. By default, root gets 2048 while other users get 1024, half the
share of root. If you have portage set to compile with userprivs, it'll
be using the portage user, with its own 1024 default setting, but if
it's using root privs, it'll be getting root's default 2048 share.
Particularly if you use a -j setting of more than 2 jobs per cpu/core,
you may wish to tweak this some. I use FEATURES=userpriv and userfetch,
so most of the work gets done as the portage user altho the actual
qmerge step is obviously done as root, and MAKEOPTS="-j20 -l12" (twenty
jobs, but don't start new ones if the load is above 12), and my load
average runs pretty close, actually about 12.5-13 when emerging stuff.
With four cores (two CPUs times two cores each), that's a load average
of just over three jobs per core. As mentioned, I have
PORTAGE_NICENESS=19. Given all that, I find increasing my regular user
to 2048 works about the best, keeping X and my various scrollers
updating smoothly, amarok playing smoothly (it normally does) and
updating its voiceprint almost smoothly (it doesn't, if I don't adjust
the user share to 2048), etc.
|
This setting can be found under /sys/kernel/uids/<uid>/cpu_share. Read
it to see what a user's current share is; write it to set a new share
for that user. (/etc/passwd lists the user/uid correspondence.)
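
A sketch of that read/write cycle, assuming the kernel 2.6.24 CFS
spelling of the file (cpu_share) and assuming uid 1000 is your regular
user; the writes need root:

```shell
# Look up a user's uid from /etc/passwd (here, the portage user):
grep '^portage:' /etc/passwd | cut -d: -f3
# Read the current share for uid 1000 (assumed regular user):
cat /sys/kernel/uids/1000/cpu_share
# Double it to match root's default; note the file (and this change)
# vanishes once nothing runs as that user any longer:
echo 2048 > /sys/kernel/uids/1000/cpu_share
```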
|
In order to have these files and this feature, you'll need to have Fair
group CPU scheduling (FAIR_GROUP_SCHED), under General setup, and then
its suboption, Basis for grouping tasks, set to user id
(FAIR_USER_SCHED). Unfortunately, the files appear as a user logs in (or
is invoked by the system, for daemon users) and disappear as they log
out, so it's not as simple as setting an initscript to set these up at
boot and forgetting about it. There's a sample script available that's
supposed to automate setting and resetting these shares based on kernel
events, but I couldn't get it to work last time I tried it, which was
during the rcs I should say, so it's possible it wasn't all working
quite right yet. However, altering the share files manually works, and
isn't too bad for simple changes, as long as you don't go logging in and
out a lot, thus losing the file, and the changes made to it, every time
there's nothing running as a particular user any longer.
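
In .config terms, the two options above come out as:

```shell
# Kernel .config fragment for per-user fair group scheduling (2.6.24):
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_FAIR_USER_SCHED=y
```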
|
Another setting that may be useful is the kernel's I/O scheduler. That's
configured under Block layer, I/O Schedulers. CFQ (IOSCHED_CFQ) is what
I choose. Among other things, it automatically prioritizes I/O requests
based on CPU job priority (altho less granularly, as there's only a
handful of I/O priority levels compared to the 40 CPU priority levels).
You can tweak priorities further if desired, but this seems a great
default, and is something deadline definitely doesn't have and I don't
believe anticipatory has either. With memory as tight as a gig, having
the ability to batch-priority schedule I/O along with CPU is a very good
thing, particularly when PORTAGE_NICENESS is set. Due to the nature of
disk I/O, it's not going to stop the worst thrashing entirely, but it
should significantly lessen its impact, and at less than worst case, it
will likely make the system significantly more workable than it might be
otherwise under equivalent load.
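
The manual tweaking is done with ionice from util-linux. A sketch (the
pid and package name below are illustrative):

```shell
# Show the I/O class and priority of a running process:
ionice -p 1234
# Start a command in the idle I/O class, so it only gets disk time
# when nothing else wants it:
ionice -c3 emerge --oneshot somepackage
# Or best-effort class (-c2) at priority 7, the lowest of 0-7:
ionice -c2 -n7 -p 1234
```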
|
Of course, YMMV, but I'd look at those, anyway. They could further
increase efficiency.
|
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
|
--
gentoo-amd64@l.g.o mailing list