Gentoo Archives: gentoo-user

From: Andreas Fink <finkandreas@×××.de>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] Glibc and binpackages
Date: Tue, 17 Jan 2023 17:08:01
Message-Id: 20230117180740.588d99fd@web.de
In Reply to: Re: [gentoo-user] Glibc and binpackages by John Blinka
On Fri, 13 Jan 2023 11:17:29 -0500
John Blinka <john.blinka@×××××.com> wrote:

> On Thu, Jan 12, 2023 at 12:54 PM Laurence Perkins <lperkins@×××××××.net>
> wrote:
>
> > I’m not sure if I’m doing something horribly wrong, or missing something
> > blindingly obvious, but I’ve just had to boot a rescue shell yet again, so
> > I’m going to ask.
> >
> > To save time and effort, I have my big, powerful machine create
> > binpackages for everything when it updates, and then let all my smaller
> > machines pull from that. It works pretty well for the most part.
> >
>
> I do something quite similar, but have never had a glibc problem. Maybe the
> problem lies in differences between the specific details of our two
> approaches.
>
> I have 3 boxes with different hardware but identical portage setup,
> identical world file, identical o.s., etc., even identical CFLAGS, CPPFLAGS
> and CPU_FLAGS_X86 despite different processors. Like you, I build on my
> fastest box (but offload work via distcc), and save binpkgs. After a world
> update (emerge -DuNv --changed-deps @world), I rsync all repositories and
> binpkgs from the fast box to the others. An emerge -DuNv --changed-deps
> --usepkgonly @world on the other boxes completes the update. I do this
> anywhere from daily to (rarely) weekly. Portage determines when to update
> glibc relative to other packages. There hasn’t been a problem in years with
> glibc.
>
> I believe there are more sophisticated ways to supply updated portage trees
> and binary packages across a local network. I think there are others on
> the list using these more sophisticated techniques successfully. Just a
> plain rsync satisfies my needs.
>
> It’s not clear to me whether you have the problem on your big, powerful
> machine or on your other machines. If it’s the other machines, that
> suggests that portage knows the proper build sequence on the big machine
> and somehow doesn’t on the lesser machines. Why? What’s different?
>
> Perhaps there’s something in my update frequency or maintaining an
> identical setup on all my machines that avoids the problem you’re having?
>
> If installing glibc first works, then maybe put a wrapper around your
> emerge? Something that installs glibc first if there’s a new binpkg, then
> goes on to the remaining updates.
>
> Just offered in case there’s a useful hint from my experience - not arguing
> that mine is the one true way (tm).
>
> HTH,
>
> John Blinka
>
> >
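John’s two-step flow above can be sketched roughly as follows; the
hostname and the paths are illustrative (modern Portage defaults), not
taken from his mail:

```shell
# On the fast box: build and install everything, keeping binpkgs
# (assumes FEATURES="buildpkg" or --buildpkg is in effect).
emerge -DuNv --changed-deps @world

# Push the repository tree and the binpkgs to a slower box.
# "slowbox" and the default paths here are illustrative.
rsync -a /var/db/repos/ slowbox:/var/db/repos/
rsync -a /var/cache/binpkgs/ slowbox:/var/cache/binpkgs/

# On the slow box: complete the update from the synced binpkgs only.
emerge -DuNv --changed-deps --usepkgonly @world
```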
In case it is not clear what the underlying problem is:

The slow machine is on the same set of packages as the fast machine,
but updates less often.

The fast machine updates glibc to a new version at time T1.
The fast machine updates app-arch/xz-utils to a new version at time T2.
This version of xz CAN have glibc symbols from the very newest glibc
version that was merged at time T1. Everything is fine on the fast
machine.

Now the slow machine starts its update process at a time T3 > T2. The
list of packages includes glibc AND xz-utils; however, xz-utils is often
pulled in before glibc, which ends in disaster.
Now you have an xz decompression tool on your slow machine that cannot
run, because some library symbols from glibc are missing (because
glibc has not been merged yet), and you’re pretty much stuck in the
middle of the update with a broken system.

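On the slow machine this typically surfaces as an unresolved versioned
symbol reported by the dynamic linker. A hypothetical session (the
glibc version number in the message is made up for illustration):

```shell
# Mid-update: the new xz binpkg is already merged, but glibc is not.
xz --version
# xz: /usr/lib64/libc.so.6: version `GLIBC_2.36' not found (required by xz)
```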
I have only seen this kind of behaviour when the slow machine had not
been updated for a very long time (i.e. no update for a year).

Anyway, I think a reasonable default for emerge would be to merge glibc
as early as possible, because all the other binary packages could have
been built against the newer glibc version, and could potentially
fail to run on the slow machine until glibc is updated.

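Along the lines of the wrapper John suggested, one way to get this
ordering today is to merge glibc from the binhost on its own before the
rest of the world update. A minimal, untested sketch (assumes the
binpkgs have already been synced to the slow machine):

```shell
#!/bin/sh
set -e
# Merge the new glibc binpkg first, on its own, so everything merged
# afterwards can resolve its versioned symbols.
emerge --oneshot --update --usepkgonly sys-libs/glibc
# Then complete the normal world update from the remaining binpkgs.
emerge -DuNv --changed-deps --usepkgonly @world
```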
Hope that clears up what happens, and why the update fails / breaks
the slow machine.

Andreas