On Thu, Sep 6, 2012 at 10:07 AM, Dale <rdalek1967@×××××.com> wrote:
> Neil Bothwick wrote:
>> On Thu, 06 Sep 2012 07:48:59 -0500, Dale wrote:
>>
>>> I don't think that is correct. I am clearing the files in RAM. That's
>>> the point of drop_caches: to clear the kernel's cached files. See my
>>> post to Nicolas Sebrecht a bit ago.
>> Take a step back, Dale, and read the posts again. This is not about the
>> state of the cache at the start of the emerge but during it. You may
>> clear the cache before starting, but that doesn't stop it filling up
>> again as soon as the emerge reaches src_unpack().
>>
>> This has nothing to do with caching the data from the previous emerge
>> run; it is all from the currently running emerge. You may think you are
>> unpacking the tarball to disk and then loading those files into the
>> compiler, but you are only using the copies that are cached when you
>> unpack.
>>
>
> Then look at it this way. If I emerge seamonkey with portage's work
> directory on disk, it takes 12 minutes the first time. Then I clear
> the caches and emerge seamonkey again with portage's work directory
> on tmpfs, and it takes 12 minutes. Then I repeat that process a few
> more times. If the outcome of all those emerges is 12 minutes,
> regardless of the order, then putting portage's work directory on
> tmpfs makes no difference at all in that case. The emerge times are
> exactly the same whether or not emerge uses the cache and whether or
> not portage's work directory is on tmpfs. I don't care if emerge uses
> the cache DURING the emerge process, because it is always enabled in
> both tests. The point is whether putting portage's work directory on
> tmpfs makes emerges faster.
>
> You are saying that I ran those tests with the files in memory. What
> I am saying is that this is not the case. I am clearing that memory
> with the drop_caches command between each test.
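
For reference, the cache clearing Dale describes is done through a kernel
knob rather than a standalone command. A minimal sketch (the writes need
root, so they are shown as comments; only the existence check runs):

```shell
# /proc/sys/vm/drop_caches is the knob behind "clearing the caches".
# Writing to it needs root, so this sketch only checks that the knob
# exists and documents the usual recipe in comments.
knob=/proc/sys/vm/drop_caches
if [ -e "$knob" ]; then
    echo "knob present: $knob"
else
    echo "knob not found (non-Linux system?)"
fi
# As root, the usual sequence is:
#   sync                      # flush dirty pages to disk first
#   echo 1 > "$knob"          # drop the page cache only
#   echo 3 > "$knob"          # drop page cache plus dentries and inodes
```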
|
Dale, here's what you're missing:

emerge first downloads the source tarball and drops it on disk. Once
the tarball has been placed on disk, the time required to read it back
into memory is negligible; it's a streamed format.

The next step is what's important: the tarball gets extracted into
PORTAGE_TMPDIR. From that moment on, all the files that were inside
that tarball are in your file cache until something bumps them out.
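
You can watch this effect without Portage at all. A small sketch
(Linux-only; the temp directory and 64 MB size are arbitrary choices)
that reads the kernel's page-cache figure from /proc/meminfo before and
after writing and re-reading some files:

```shell
# Read the kernel's "Cached:" figure (in kB) from /proc/meminfo.
cached_kb() { awk '/^Cached:/ {print $2}' /proc/meminfo; }

before=$(cached_kb)
demo=$(mktemp -d)                           # stand-in for an unpack dir
dd if=/dev/zero of="$demo/blob" bs=1M count=64 2>/dev/null
cat "$demo/blob" > /dev/null                # re-read: served from cache
after=$(cached_kb)
echo "cached before: ${before} kB, after: ${after} kB"
rm -rf "$demo"
```

The freshly written file is read back from the page cache, not the disk,
which is exactly what happens to a tarball's contents after src_unpack().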
|
If you have enough RAM, those files will not be bumped out as a
consequence of build-time memory usage, so you won't see much (if any)
difference in build times when comparing tmpfs to a normal
filesystem... which means tmpfs (for you) won't have any benefit beyond
being self-cleaning on a reboot or remount.

So your drop_caches has no influence over build times, since the only
cache behavior that matters is whatever happens between the time emerge
unpacks the tarball and the time emerge exits.
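
Dale's 12-minute experiment can be mimicked in miniature with tar
instead of emerge (a hypothetical stand-in; sizes and paths are
arbitrary): unpack the same tarball onto a disk-backed directory and
onto tmpfs and compare. With enough free RAM the two times come out
close, because the disk-backed unpack is absorbed by the page cache too:

```shell
# Build a small tarball, then time unpacking it twice: once into a
# normal (disk-backed) directory, once into tmpfs.
workdir=$(mktemp -d)                    # disk-backed target
dd if=/dev/urandom of="$workdir/data" bs=1M count=32 2>/dev/null
tar -cf "$workdir/src.tar" -C "$workdir" data

time tar -xf "$workdir/src.tar" -C "$workdir"

# tmpfs target (falls back to a normal temp dir if /dev/shm is absent)
shmdir=$(mktemp -d -p /dev/shm 2>/dev/null || mktemp -d)
time tar -xf "$workdir/src.tar" -C "$shmdir"

rm -rf "$workdir" "$shmdir"
```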

To see the difference, try something like running drop_caches under
"watch" and leave that going while you let a few builds fly. You
should see an increase in build times.
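
A concrete form of that suggestion (the interval is arbitrary, and the
real commands need root, so this sketch only prints what a root shell
would run alongside the build):

```shell
# Print the cache-dropping commands a root shell would run in a loop
# next to the build; swap the echo for real execution when root.
run() { echo "+ $*"; }
for i in 1 2 3; do        # in practice: loop while the build is running
    run sync
    run "echo 3 > /proc/sys/vm/drop_caches"
done
```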
|
--
:wq