On 26 June 2011 14:59, Stuart Longland <redhatter@g.o> wrote:
> Hi all,
>
> I've been busy for the past month or two, busy updating some of my
> systems. In particular, the Yeeloong I have, hasn't seen attention in a
> very long time. Soon as I update one part however, I find some swath of
> packages break because of a soname change, anything Python-related stops
> working because of a move from Python 2.6 to 2.7, or Perl gets updated.
>

I have a system I use/developed which tries to improve consistency. It
greatly increases the number of compile/test runs that get done in the
process, but for my cases that's ok because I *want* things to be
rebuilt needlessly, "just to make sure they can still be built".

<Massive rant follows>

I make some assumptions:

Take a timestamp.

Record all packages that need to be updated or reinstalled due to USE changes.

Assume that there is a chance that any direct dependent of those
packages may become broken by this update, either at install time or
at runtime (i.e. .so breakages, ABI breakages, $LANGUAGE breakages,
etc.).

Consider that all these packages were installed before that timestamp.

You can then mostly assume all packages installed after that timestamp
are built with consideration of the changes that have occurred in
their dependencies.

For the most part, this principle appears to cover a very large range
of scenarios, and is somewhat like a proactive "revdep-rebuild" that
assumes every new install/upgrade breaks everything that uses it.

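A minimal sketch of that timestamp principle using plain coreutils
(every path and package name below is invented for illustration; the
real toolset presumably consults the package database instead):

```shell
# Toy model: directories stand in for installed packages, their mtimes
# for install times. Anything not newer than the stamp predates this
# pass, so it is a rebuild candidate.
vdb=$(mktemp -d)                                    # stand-in for /var/db/pkg
mkdir -p "$vdb/dev-lang/perl-5.12.3" "$vdb/kde-base/kdelibs-4.6.5"
touch -d '2011-06-01' "$vdb/dev-lang/perl-5.12.3"   # installed long ago
stamp=$(mktemp)
touch -d '2011-06-15' "$stamp"                      # the recorded timestamp
touch -d '2011-06-20' "$vdb/kde-base/kdelibs-4.6.5" # rebuilt after the stamp
find "$vdb" -mindepth 2 -maxdepth 2 -type d ! -newer "$stamp" \
  | sed "s|$vdb/||"                                 # prints dev-lang/perl-5.12.3
```

The same `! -newer` comparison is what makes the "still old" list
shrink as packages get reinstalled.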
To this end, I have conjured a very naïve toolset which does what I
want it to:

https://github.com/kentfredric/treebuilder

Note that a lot of the paths are hardcoded in the source, and it's only
really on GitHub for educational value.

Workflow proceeds as follows (nb: I /do/ use paludis, so I'll be
using its terms for the sake of accuracy):

./mk_timestamp.sh # this sets our timestamp values.

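I haven't looked at what mk_timestamp.sh actually writes; a minimal
stand-in, assuming it only needs to record "now" in a form that later
`find -newer` style comparisons can use, might be:

```shell
# Hypothetical mk_timestamp.sh: remember when this pass started.
dir=$(mktemp -d)                # stand-in for the tool's state directory
date '+%s' > "$dir/timestamp"   # epoch seconds, readable by other scripts
touch "$dir/stamp"              # bare mtime marker, usable with `find -newer`
cat "$dir/timestamp"
```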
echo >| rebuild.txt # start over the list of things that need rebuilding

Then I sync portage and overlays. (I have a custom script that calls
cave sync as well as doing a few other tasks.)

{

Then I do a "deep update of system including new uses", but don't
execute it:

cave resolve -c system

and I record each updated/reinstalled package line-by-line in rebuild.txt.

Then I perform it:

cave resolve -c system --continue-on-failure if-independent -x

}

and then repeat that with 'world'.

Then I run ./sync.sh, which works out all the "still old" packages,
computes some simple regexps, and emits 'rebuilds.out': a list of
packages that "might be broken", which I'll reinstall "just in case".

Very often this is completely needless, i.e. an -r bump to kdelibs
triggers me rebuilding everything in KDE, and an -r bump to perl has me
rebuilding every perl module in existence, plus everything that uses
perl (incidentally including everything in KDE, as well as GHC and a
few other very large nasties).

Once this list is complete, there are 2 approaches I generally take:

1. If the list is small enough, I'll pass the whole thing to cave/paludis:

cave resolve -c -1 $( cat /root/rebuilder/rebuilds.out )

and record any significant changes (i.e. new uses/updates of
dependents for things that are "orphans" for whatever reason),

and then:

cave resolve -c -1 $( cat /root/rebuilder/rebuilds.out ) -x
--continue-on-failure if-independent

or

2. If this list looks a bit large, I'll pass the things to reinstall
randomly, in groups:

dd if=/dev/urandom of=/tmp/rand count=1024 bs=1024 # generate a
random file for 'shuf' to produce a random but repeatable sort

cave resolve -c -1 $( shuf --random-source=/tmp/rand -n 200
/root/rebuilder/rebuilds.out )

again noting updates/etc., and then:

cave resolve -c -1 $( shuf --random-source=/tmp/rand -n 200
/root/rebuilder/rebuilds.out ) -x --continue-on-failure
if-independent

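The point of the fixed --random-source is determinism: shuf draws its
randomness from that file, so as long as the file doesn't change, every
invocation picks the same "random" subset, and the dry run and the -x
run agree on which 200 packages to build. A toy demonstration with
made-up package names:

```shell
# Two shuf calls with the same random source select identically.
src=$(mktemp)
head -c 1024 /dev/urandom > "$src"   # plays the role of /tmp/rand
list=$(mktemp)
printf '%s\n' app-a/one app-b/two app-c/three app-d/four app-e/five > "$list"
first=$(shuf --random-source="$src" -n 2 "$list")
second=$(shuf --random-source="$src" -n 2 "$list")
[ "$first" = "$second" ] && echo repeatable   # prints "repeatable"
```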
After each build run, sync.sh is re-run, updating the list of things
which this code still considers "broken", and then I continue the
build/build/build/sync pattern until every item in rebuilds.out is
either failing or skipped.

At this point, all the packages still listed in rebuilds.out are deemed
"somewhat broken". This list is then concatenated with "brokens.out"
and the process is repeated until all results fail/skip.

Then brokens.out and rebuilds.out are concatenated together and
replace broken.txt, which is a list of "things to check later".

At this point I consider that merging is as good as it's going
to get, and the entire process starts over: update the timestamp,
sync portage, and so on.

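That bookkeeping amounts to simple list concatenation; a sketch with
throwaway files standing in for the real ones (the sort -u dedup is my
own addition, not necessarily what the scripts do):

```shell
# Fold this pass's leftovers into the persistent "check later" list.
work=$(mktemp -d)
printf '%s\n' dev-perl/Foo app-misc/bar > "$work/brokens.out"
printf '%s\n' app-misc/bar sci-libs/baz > "$work/rebuilds.out"
sort -u "$work/brokens.out" "$work/rebuilds.out" > "$work/broken.txt"
cat "$work/broken.txt"   # three unique entries survive
```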
Over time, broken.txt adapts itself, growing and shrinking as new
things start being broken due to dependencies, or being resolved due
to fixes entering portage.

Long story short, all of the above is mostly insanity, but it's
reasonably successful. And after all, I am an insane kinda guy =)

--
Kent

perl -e "print substr( \"edrgmaM SPA NOcomil.ic\\@tfrken\", \$_ * 3,
3 ) for ( 9,8,0,7,1,6,5,4,3,2 );"

http://kent-fredric.fox.geek.nz