On Fri, Nov 04, 2005 at 01:01:19AM +0900, Jason Stubbs wrote:
> +++ bin/emerge (working copy)
> + cm = portage.settings.load_best_module("portdbapi.metadbmodule")(myportdir,
> "metadata/cache",
> + filter(lambda x: not x.startswith("UNUSED_0"), portage.auxdbkeys))
>
> Anything wrong with getting rid of UNUSED_0 from auxdbkeys now? They only seem
> to be used in the flatlist code. Couldn't the flatlist key count just be
> dropped?

Could be chucked, offhand.

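For illustration, the effect of that filter call can be sketched without portage installed; `auxdbkeys` below is a hypothetical stand-in for the real `portage.auxdbkeys` tuple, not its actual contents:

```python
# Hypothetical stand-in for portage.auxdbkeys; the real tuple lives in portage.
auxdbkeys = ("DEPEND", "RDEPEND", "SLOT", "UNUSED_01", "UNUSED_02", "EAPI")

# Same idiom as the quoted patch: drop the UNUSED_0* placeholder keys
# before handing the key list to the metadata cache module.
wanted = list(filter(lambda x: not x.startswith("UNUSED_0"), auxdbkeys))

print(wanted)  # ['DEPEND', 'RDEPEND', 'SLOT', 'EAPI']
```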
> +++ pym/portage.py (working copy)
> + self.auxdb[mylocation][mycpv] = mydata
>
> I know that cache memory usage goes down with this patch, but can you explain
> why that is? The above line seems to be in direct contradiction with that.

Memory usage drops because the entire eclass cache db is no longer kept
in memory, which is what stable was doing. With the shift of eclass
storage, that's not required under my patch. The drop in memory usage is
an ancillary benefit (hadn't thought about it, tbh).

The isolated line from above doesn't really have anything to do with
memory usage; I'm guessing you're assuming it's a dict of dicts. It's
actually a dict of cache backends, with each backend implementing
__getitem__/__setitem__. So mydata is handed into the backend, which
translates it into whatever format it uses internally and writes it to
whatever long-term storage it uses.

The only way that particular line could result in accumulating
references and jacking up memory would be if the backend stored dicts in
memory; even then, it wouldn't jack up the references, since the author
of such a class would be damn wise to copy any passed-in dicts to avoid
the possibility of external code modifying them.
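A minimal sketch of that layout, with hypothetical names rather than the actual portage cache module, might look like:

```python
import copy

class DictCacheBackend:
    """Toy in-memory backend. Real backends would write to long-term
    storage (flat files, anydbm, sqlite, ...) inside __setitem__."""
    def __init__(self):
        self._store = {}

    def __setitem__(self, cpv, metadata):
        # Copy the passed-in dict so external code can't mutate our copy later.
        self._store[cpv] = copy.copy(metadata)

    def __getitem__(self, cpv):
        return copy.copy(self._store[cpv])

# auxdb maps a repository location to a backend instance, so
# auxdb[mylocation][mycpv] = mydata dispatches to the backend's __setitem__.
auxdb = {"/usr/portage": DictCacheBackend()}

mydata = {"SLOT": "0"}
auxdb["/usr/portage"]["sys-apps/portage-2.0"] = mydata
mydata["SLOT"] = "mutated"  # does not affect the stored copy
print(auxdb["/usr/portage"]["sys-apps/portage-2.0"])  # {'SLOT': '0'}
```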
~harring