Nicolas Sebrecht wrote:
> The 06/09/12, Dale wrote:
>
>> Then explain to me why it was at times slower while on tmpfs? Trust me,
>> I ran this test many times and in different orders and it did NOT make
>> much if any difference.
> As explained, this is expected if you have enough RAM.
>
> I didn't check but I would expect that files stored in tmpfs are NOT
> duplicated in the kernel cache in order to save RAM. So, the
> different times could come from the fact that the kernel will first look
> up in the kernel cache and /then/ look up in the tmpfs.
>
> In the scenario without tmpfs and lots of RAM, every unpacked file is
> stored in the _kernel cache_ with really fast access, well before hitting
> the disk or even the disk cache (RAM speed and very little processor
> calculation required). While retrieving, the file is found on the first
> look-up from the kernel cache.

The point you are missing is this: between those tests, I CLEARED that
cache. The thing you and Neil claim makes a difference does not exist
after you clear the cache. I CLEARED that cache between EACH and every
test that was run, whether using tmpfs or not. I did this instead of
rebooting my system after each test.

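For reference, the standard Linux mechanism for clearing the kernel page
cache between benchmark runs is the drop_caches sysctl (that this is how
the cache was cleared here is my assumption):

```shell
# Flush dirty pages to disk first so dropping the cache loses no data,
# then drop the page cache, dentries and inodes (value 3 = all three).
# Writing to drop_caches requires root; the fallback keeps this runnable
# without it.
sync
echo 3 > /proc/sys/vm/drop_caches 2>/dev/null \
    || echo "need root to drop caches"
```

This only empties clean caches; it does not affect data already safely on
disk, which is why it is a reasonable stand-in for a reboot in this kind
of test.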
>
> In the other scenario with tmpfs and lots of RAM, every unpacked file is
> stored in the tmpfs, allowing very fast access (due to RAM speed) but
> with the price of a first negative result from the kernel cache (and
> perhaps additional time needed by the kernel to access the file
> through the tmpfs filesystem driver).
>
> Using tmpfs will still be better as it prevents writes to the disk
> in the spare times, avoiding unnecessary mechanical movement and
> extending disk lifetime.

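For anyone wanting to reproduce this, a typical way to put Gentoo's build
directory on tmpfs is sketched below (the mount point is Portage's usual
default, but the size and mode values are illustrative assumptions):

```shell
# Mount a tmpfs over Portage's build directory so unpacking and compiling
# happen in RAM (requires root; size/mode are assumptions):
#   mount -t tmpfs -o size=8G,mode=775 tmpfs /var/tmp/portage
# Or make it permanent with an /etc/fstab line:
#   tmpfs  /var/tmp/portage  tmpfs  size=8G,mode=775  0 0
# Check whether the directory is currently backed by tmpfs:
grep /var/tmp/portage /proc/mounts \
    || echo "/var/tmp/portage is not on tmpfs"
```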
The thing is, this was tested because people wanted to see what the
improvement was. When tested, it turned out that there was very little,
if any, difference. So, in theory, I would say that using tmpfs would
result in faster compile times. After testing, theory left the building
and reality showed that it did not make much, if any, difference.

>> I might add, the cache on the drive I was using is nowhere near large
>> enough to cache the tarball for the package. Heck, the cache on my
>> current system drive is only 8MB according to hdparm. That is not much,
>> since I tested using much larger packages. You can't cache files larger
>> than the cache.
> The disk cache is out of scope.

True, I just wanted to make sure we were talking about the same cache here.

>
>> Do I need to run a test, reboot, run the test again to show this is not
>> making much if any difference? I mean, really? o_O
> It won't make any difference from the drop-cache configuration but it is
> still not the point!
>

Well, why say that caching makes a difference, then say it doesn't matter
when those caches are cleared? Either caches matter or they don't.

Dale

:-)  :-)

--
I am only responsible for what I said ... Not for what you understood or
how you interpreted my words!