Dan Armak posted <200410192203.49686.danarmak@g.o>, excerpted
below, on Tue, 19 Oct 2004 22:03:41 +0200:

>> From earlier, I remember you (?) saying what it DID cache, it
>> invalidated entirely if there was just one change. I can see how this
>> would be simpler to implement. However, if you are caching everything,
>> I'd hope you are doing it in segments
>
> Caching is a builtin feature of configure scripts generated by autoconf.
> configure --help | grep cache will show you. All we do in confcache is
> store the cache after every run and give it to the next run.

Ahh... I'd seen occasional (cached) entries in the various "checking ..."
messages. This now makes sense.

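For anyone following along, those "(cached)" markers come from configure consulting its cache file before re-running a test. Here's a toy sketch of that pattern in plain shell (simplified; real configure scripts use ac_cv_* variable names and more careful quoting, and my_cv_have_foo is made up for illustration):

```shell
# Simulate two separate configure runs sharing one cache file.
cache=$(mktemp)

run_configure() (
  # Each "run" is a subshell, like a fresh configure invocation.
  [ -f "$cache" ] && . "$cache"
  if [ -z "${my_cv_have_foo+set}" ]; then
    my_cv_have_foo=yes                        # pretend this was an expensive test
    echo "my_cv_have_foo=$my_cv_have_foo" >> "$cache"
    echo "checking for foo... $my_cv_have_foo"
  else
    echo "checking for foo... (cached) $my_cv_have_foo"
  fi
)

run_configure    # first run actually performs the check
run_configure    # second run reuses the stored answer and says "(cached)"
rm -f "$cache"
```

Confcache, as described above, just persists that file across emerges instead of letting each package start from scratch.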
> So we have to invalidate it entirely and we can't segment it. Either
> behaviour would require basically replacing/rewriting autoconf. And if
> someone did do that, I'd be very happy :-) (Maybe the unsermake people?)
> But for now, this is all we can do.

Aye... that /does/ sound like an unsermake project.

> That said if the existing bugs are worked out (like not panicking when
> /proc/cpuinfo's checksum changes due to cpu clock changes) our confcache
> is pretty good too in terms of performance.

Ugly! Is the cache kept in such a way that for specific things like this,
one can go in and excise them using sed and the like? That would seem the
best solution if so. Simply make it as if the cpuinfo stuff hadn't been
cached yet, so it'd run that test fresh each time.
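
If the cache is a plain autoconf-style file of ac_cv_* shell assignments, sed can do exactly that. A minimal sketch, assuming the checksum lands in the cache under a name matching "cpuinfo" (the ac_cv_host_cpuinfo_checksum entry below is a guess on my part, not confcache's actual variable name):

```shell
# Build a stand-in cache file in the usual autoconf assignment style;
# the cpuinfo entry name is hypothetical.
cache=$(mktemp)
cat > "$cache" <<'EOF'
ac_cv_header_stdc=${ac_cv_header_stdc=yes}
ac_cv_host_cpuinfo_checksum=${ac_cv_host_cpuinfo_checksum=deadbeef}
EOF

# Excise just the cpuinfo line; every other result stays cached, and
# configure simply re-runs the one missing test on the next pass.
sed -i '/cpuinfo/d' "$cache"

cat "$cache"
rm -f "$cache"
```

The nice part of this approach is that nothing is invalidated wholesale; only the volatile entry gets re-tested.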

Or... perhaps even better, intercept the call for the cpuinfo checksum and
return the existing checksum, or the existing data, whichever is easier,
so it doesn't see it as changed. Of course there'd have to be a way to
manually trigger a real check, say, if the CPU was replaced or whatever.

In another discussion, on the AMD64 list, we were talking about how
power management for multiple CPUs might ultimately be accomplished by
using the developing CPU hotplug code to turn off one CPU at a time until
only one was left, then speed-throttling that last CPU. (I've seen kernel
discussion on using the hotplug code for suspend, though not specifically
for power management; still, if they're already talking about extending it
to suspend, using it for throttling as well is a logical extension, given
that existing throttling power management doesn't work at all with
multiple CPUs.) If speed throttling throws a wrench in the current system,
consider what hotplugging CPUs will do to it as well! <g> Perhaps whatever
solution is found for the first can address the second too, when the time
comes, if it's planned for in advance.

-- 
Duncan - List replies preferred. No HTML msgs.
"They that can give up essential liberty to obtain a little
temporary safety, deserve neither liberty nor safety." --
Benjamin Franklin


-- 
gentoo-dev@g.o mailing list