>>>>> On Wed, 05 Jan 2022, Sam James wrote:

>> On 5 Jan 2022, at 08:28, Ulrich Mueller <ulm@g.o> wrote:
>>
>> Where does this number 2 GB come from? The amount of RAM strongly
>> depends on the programming language and other factors, so I don't
>> believe that there's one number that can be used for everything.
>> (If only considering C and C++, 2 GB seems to be excessive, i.e. it
>> will limit parallel builds more than necessary. When linking, the
>> number may be _much_ larger.)

> This is essentially "common law" (or "common lore" if you like!) and
> is well-accepted as a good rule of thumb.

Well, if 2 GiB were a universal constant, then it would be valid for
_all_ builds, not just the random subset inheriting check-reqs.eclass.

> The number being larger doesn't really make any difference here;
> the point is to help in cases where we're pretty sure there's going
> to be a problem (only users of check-reqs.eclass, and we can
> introduce a variable to give ~RAM per job too if needed).

Yeah, having the ebuild specify an estimate of the average memory usage
per job would be more correct.
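
To illustrate the idea, a minimal sketch of deriving a job cap from such
a per-job estimate (all names here are invented for illustration; this is
not check-reqs.eclass API, nor any existing eclass variable):

```shell
#!/bin/sh
# Hypothetical sketch: cap the number of parallel build jobs by an
# ebuild-supplied average-memory-per-job estimate. Names are invented.
per_job_mib=2048   # assumed average memory per compiler job, in MiB

jobs_for_memory() {
    # $1 = available memory in MiB; never drop below one job
    jobs=$(( $1 / per_job_mib ))
    [ "$jobs" -lt 1 ] && jobs=1
    echo "$jobs"
}

# On Linux, MemAvailable from /proc/meminfo could feed this, e.g.:
#   avail_mib=$(awk '/MemAvailable/ { print int($2 / 1024) }' /proc/meminfo)
#   emake -j"$(jobs_for_memory "$avail_mib")"
```

This only caps the job count up front, of course; it cannot react to
memory pressure once the build is running.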

These are all band-aids, however. What we would really need are options
like GNU parallel's --memfree and --memsuspend.
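
For reference, these options work roughly as follows (the limits and the
compile_one command are arbitrary placeholders, not a recommendation; see
parallel(1) for the exact semantics):

```shell
# Illustrative only: start a new job only while at least 2 GiB is free,
# and suspend jobs when free memory drops below the --memsuspend limit.
# "compile_one" stands in for whatever command does one unit of work.
parallel --memfree 2G --memsuspend 4G compile_one ::: src/*.c
```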

> This is also about reducing the number of support queries about
> builds which failed for trivial reasons, as someone who has to handle
> quite a lot of those (until such a time as we can implement better
> detection, but this is also a generally nice UX improvement -- as
> per what Florian/flow said).

But it would defeat the YAFIYGI principle that we commonly apply. :)

Ulrich