On Tue, Aug 12, 2008 at 8:05 AM, Beso <givemesugarr@×××××.com> wrote:
>
>
> 2008/8/12 Duncan <1i5t5.duncan@×××.net>
>>
>> Beso <givemesugarr@×××××.com> posted
>> d257c3560808120130o55c0c805n69bda3ed4cb9a823@××××××××××.com, excerpted
>> below, on Tue, 12 Aug 2008 08:30:44 +0000:
>>
>> > if you're still using something, the kernel won't kill anything. the
>> > behaviour you're referring to is the kernel's page cache: when you use
>> > something, it gets loaded into memory, and after you finish using it the
>> > kernel keeps those pages in ram as cached pages (if there's enough
>> > space) to speed up any future reuse of that particular object.
>>
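FWIW, the used-vs-cached split Beso describes is easy to check yourself; a
rough sketch reading /proc/meminfo (same numbers free(1) reports, field names
as on current 2.6 kernels):

```shell
# How much RAM is merely cache (reclaimable on demand by the kernel)
# versus truly free -- cached pages shouldn't count as "used up"
# when judging memory pressure:
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```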
>> Beso, I think he was referring to being totally out of memory+swap, thus
>> triggering the kernel OOM (out of memory) killer.
>>
>> Yes, that can happen. However, in practice, at least in my experience,
>> before the kernel ever gets to the point of actually killing anything,
>> the system becomes basically unresponsive anyway, as the kernel searches
>> for every last bit of memory it can recover for whatever is taking it
>> all. I've never had that happen since I switched to /tmp on tmpfs, so I
>> don't know how it works in that regard -- presumably it'd consider the
>> tmpfs contents temporary and kill them before killing applications, but
>> I don't know that for sure. I did have it happen once, though, when I
>> had swap turned off and only a gig of memory, and tried to scan
>> something at an incredibly high resolution that would have used over a
>> gig of memory for the scan data alone, had I had it there to use! Even
>> with swap turned off, the system was unusable: the kernel was still
>> looking for every bit of memory it could find some 15 minutes into the
>> unresponsiveness, when I gave up and hit reset. I don't know how much
>> longer it would have continued before triggering the OOM killer, but it
>> wasn't worth waiting around to find out.
>>
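If anyone is curious who the OOM killer would actually pick, 2.6 kernels
expose a per-process score under /proc. A quick sketch (the output format
here is mine, nothing standard):

```shell
# List the five processes the OOM killer considers the "best" victims
# (higher oom_score = more likely to be killed when memory runs out):
for p in /proc/[0-9]*; do
  score=$(cat "$p/oom_score" 2>/dev/null) || continue
  cmd=$(tr '\0' ' ' < "$p/cmdline" 2>/dev/null)
  # kernel threads have an empty cmdline, so fall back to the pid:
  printf '%6s  %s\n' "$score" "${cmd:-${p##*/}}"
done | sort -rn | head -5
```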
>> BTW, I did have a runaway process once, some time later (before I set
>> system per-process memory limits using ulimit; see "help ulimit" at the
>> bash prompt), after I had upgraded to 8 gigs RAM, with 16 gigs swap as 4
>> partitions of 4 gigs each on 4 different hard drives (with priority set
>> equal so the kernel striped them for 4X swap speed). That worked much
>> better, as I didn't let it get quite out of memory before killing it,
>> but I did let the process go long enough to eat up the 8 gigs of regular
>> memory plus 15 gigs or so of swap before I killed it, just to see how
>> responsive the system remained while nearly 16 gigs into swap after I
>> had 4-way striped it. The system was a bit draggy at that, but it was
>> certainly WAY more responsive than the time I let it get totally out of
>> memory with NO swap, and responsive enough that I could still kill the
>> runaway process when I decided it was getting too close to leaving me in
>> the same situation again. (While I let it run until 15 out of 16 gigs of
>> swap were used, I had set up a high-priority root shell with the kill -9
>> command waiting for me to hit enter before it got far into swap, just in
>> case.) I'd have hated to be 16 gigs into swap on a single-spindle swap
>> system, that's for sure!
>>
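For anyone wanting to copy that setup: equal pri= values in fstab are what
make the kernel stripe swap, and ulimit is a bash builtin. A sketch -- the
devices and the 4 GiB cap are examples, not Duncan's exact values:

```shell
# /etc/fstab -- the same pri= on each swap area makes the kernel
# round-robin (stripe) pages across them, roughly multiplying swap
# bandwidth by the number of spindles:
#   /dev/sda2  none  swap  sw,pri=1  0 0
#   /dev/sdb2  none  swap  sw,pri=1  0 0
#   /dev/sdc2  none  swap  sw,pri=1  0 0
#   /dev/sdd2  none  swap  sw,pri=1  0 0

# Per-process cap so one runaway can't eat all of memory+swap:
# limits this shell and its children to 4 GiB of virtual memory
# (ulimit -v takes KiB).
ulimit -v 4194304
```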
>> So anyway, make sure you have enough memory+swap to compile OOo, and you
>> shouldn't have any major problems. FWIW, I set my max capacity on the
>> tmpfs to 6 GB, since I knew OOo took 5+ gigs as the largest package, tho
>> I've never actually compiled it. And of course with 8 gigs RAM and 16
>> gigs swap, I have 24 gigs mem+swap to play with, and the 6 gig max tmpfs
>> doesn't get anywhere near that, so I'm fine.
>>
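Capping the tmpfs is a one-line fstab option; a sketch (the 6G figure is
just Duncan's choice above, size it to your own mem+swap):

```
# /etc/fstab -- mount /tmp as tmpfs with a 6 GB ceiling:
tmpfs   /tmp   tmpfs   size=6G   0 0
```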
>> BTW, Chris G, one of the devs in the game herd, has mentioned that there
>> are a couple of game-data packages that actually require more scratch
>> space to merge than OOo, but of course they aren't compiled, so if the
>> system runs out of room installing them it's no big deal: just create a
>> sufficiently large temporary swap file or switch PORTAGE_TMPDIR back to
>> disk temporarily, and retry. It's not like losing hours of work, as
>> would be possible if OOo ran out of room while emerging. Plus, at least
>> personally, I don't have to worry about that, since the games in
>> question aren't freedomware anyway, so I'd never install them in the
>> first place.
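The retry Duncan describes would look something like this -- the sizes,
paths, and package atom are examples only:

```shell
# Option 1: add a temporary 8 GB swap file (as root), retry the merge,
# then remove it again:
dd if=/dev/zero of=/swapfile bs=1M count=8192
mkswap /swapfile
swapon /swapfile
#   ...emerge, then: swapoff /swapfile && rm /swapfile

# Option 2: point portage's scratch space back at disk for just this run:
PORTAGE_TMPDIR=/var/tmp emerge --oneshot app-office/openoffice
```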
>
> is it really worth compiling OOo instead of just downloading the bin
> version?! the last time i tried it, the amount of space taken "hostage",
> the slowness of compilation and the really small improvement in speed (as
> well as the other deps to install) made me choose the 32bit precompiled
> bin package.
>
>
> dott. ing. beso
>

Every time you re-install the -bin package you need to re-accept their
license (or whatever it is -- registration, perhaps?) at first run. Annoys
me enough to compile it myself.