Gentoo Archives: gentoo-catalyst

From: Matt Turner <mattst88@g.o>
To: Michael 'veremitz' Everitt <gentoo@×××××××.xyz>
Cc: gentoo-catalyst@l.g.o
Subject: Re: [gentoo-catalyst] [PATCH 19/21] catalyst: Set jobs/load-average via catalyst.conf
Date: Thu, 21 May 2020 01:19:18
Message-Id: CAEdQ38Ef=FfjAACnktoGhEuC96qx842+6Swwr+HP9bXKS81m=Q@mail.gmail.com
On Wed, May 20, 2020 at 5:37 PM Michael 'veremitz' Everitt
<gentoo@×××××××.xyz> wrote:
>
> On 21/05/20 01:18, Matt Turner wrote:
> > On Wed, May 20, 2020 at 4:00 PM Michael 'veremitz' Everitt
> > <gentoo@×××××××.xyz> wrote:
> >> On 20/05/20 04:42, Matt Turner wrote:
> >>> We currently have two mechanisms of setting MAKEOPTS: in spec files and
> >>> in catalystrc.
> >>>
> >>> Setting makeopts in spec files doesn't make sense. The spec should
> >>> describe the thing that's being built and not contain options that are
> >>> specific to the build system.
> >>>
> >>> Setting makeopts via catalystrc is better, but it only applies to the
> >>> actual build system invocations, leaving emerge to run jobs serially or
> >>> again requiring configuration specific to the build machine to be put
> >>> into the spec file. For example:
> >>>
> >>> update_seed_command: ... --jobs 5 --load-average 5
> >>>
> >>> With jobs and load-average specified in catalyst.conf, catalyst has the
> >>> information required to configure both emerge and the build systems
> >>> emerge executes.
> >>>
> >>> This removes the undocumented makeopts spec file option and replaces it
> >>> with jobs and load-average settings in catalyst.conf.
> >>>
> >>> Signed-off-by: Matt Turner <mattst88@g.o>
> >>> ---
> >>>  catalyst/base/stagebase.py          | 12 +++++-------
> >>>  catalyst/defaults.py                |  2 ++
> >>>  doc/catalyst-config.5.txt           | 15 ++++++++++++---
> >>>  etc/catalyst.conf                   |  8 ++++++++
> >>>  etc/catalystrc                      |  3 ---
> >>>  targets/support/chroot-functions.sh |  8 ++++++++
> >>>  6 files changed, 35 insertions(+), 13 deletions(-)
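(A rough sketch of what the commit message describes; the exact syntax is
whatever the patch to etc/catalyst.conf defines, and the values here are
made up:

    # etc/catalyst.conf (illustrative only)
    jobs = 5
    load-average = 5.0

With those two settings, catalyst can pass --jobs 5 --load-average 5 to
emerge and export a matching MAKEOPTS such as "-j5 -l5" for the build
systems that emerge invokes.)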
> >> NACK. This and patch 20 make it impossible to customise specs for different
> >> arches/subarches/libc/etc using different distcc_hosts with varying numbers
> >> of hosts and/or cores available. In many cases, you may not be able to set
> >> up identical distcc 'farms' or 'clusters' with the complete set of
> >> toolchains/chroots applicable to a 'standard' installation, due to hardware
> >> limitations.
> > I don't follow. You mention varying distcc hosts per arch/subarch/libc.
> >
> > Let's say there are 4 systems on the network: 3x amd64 systems and 1
> > mips system.
> >
> > If I'm compiling for amd64 on a system and I'm using distcc, there's a
> > fixed number of distcc hosts on the network, isn't there?
> Yes, but do you feed your amd64 jobs to your mips system? How do you filter
> between them?

I wouldn't... but I don't think that matters. You just set
distcc_hosts to not send jobs to that system.
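(Concretely, and with made-up addresses: wherever distcc_hosts ends up being
set, listing only the hosts that should receive work is enough, e.g.

    distcc_hosts: localhost 10.0.0.2 10.0.0.3

and the mips box is simply never listed.)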

> > If I'm building for mips on the mips system with cross compilers
> > installed on, say, two of the amd64 systems, then there's a fixed
> > number of distcc hosts on the network, isn't there?
> >
> > So, I don't think arches (or subarches) have any bearing.
> >
> > And for different libcs, distcc doesn't have any bearing on that, does
> > it? distcc (without pump mode, which has been removed from Gentoo)
> > sends preprocessed source files across the network, so what libc you
> > have doesn't have any bearing either.
> I have one or two systems with musl toolchains, but not all. Again, how do
> I choose between them, from a common build host, for each different spec
> file variant I wish to run?

Like I said, distcc sends a preprocessed file across the network, so
there's no dependence on the libc. I.e., a musl system running
distccd can compile files just fine for a system running glibc.
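(Roughly what plain, non-pump distcc does: the preprocessing step, the only
part that needs the system headers and libc, runs on the submitting machine,
and only the preprocessed unit is shipped out. Something like:

    # on the machine running catalyst (has the headers and libc):
    gcc -E foo.c -o foo.i
    # on the distcc helper (needs only a suitable compiler, not the libc):
    gcc -c foo.i -o foo.o

so the helper's own libc never enters the picture.)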

> > You seem to be thinking that setting a single system-wide distcc_hosts
> > is somehow limiting, but I can't see how. Please explain.
> >
> I fail to see how you can select from a pool of distcc slaves for a given
> job type/configuration.
>
> I'm assuming you've used distcc before? With the cluster project's help I
> actually have several distcc daemons running on a given host, if I should
> require, e.g., gcc-8, 9 or (in the near future) 10. These run on different
> ports on each slave system.
> How would you propose selecting between these separate daemons for a given
> build in your new schema?
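(For reference, distcc's host syntax, per distcc(1), does let each entry name
a port and a job limit, HOST:PORT/LIMIT, so daemons for different compilers
can at least be addressed individually. With made-up hosts and ports:

    # gcc-9 daemons listening on 3633
    distcc_hosts: 10.0.0.2:3633/8 10.0.0.3:3633/8
    # gcc-10 daemons listening on 3634
    distcc_hosts: 10.0.0.2:3634/8 10.0.0.3:3634/8

Whether and where such a per-build selection would still be configured is the
question being raised here.)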

Yes, I use distcc to cross compile for slow mips and arm systems.


I don't understand why it takes so much effort on the receiver's part to
interpret your messages, but good grief.

I think the case Michael is thinking of is, e.g., a system building
both rel_type: default and rel_type: hardened stages. The set of
distcc slaves that have a non-hardened toolchain might be different
from the set of distcc slaves with a hardened toolchain.
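(In spec-file terms, with made-up host names, that scenario amounts to wanting
something like:

    # default/stage3.spec
    rel_type: default
    distcc_hosts: hostA/8 hostB/8

    # hardened/stage3.spec
    rel_type: hardened
    distcc_hosts: hostC/8

which a single machine-wide distcc_hosts setting can't express per spec.)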

I'm unsure how this works, so I'm asking #gentoo-toolchain.

So all of the previous email exchange was a collection of red herrings...