
From: Kent Fredric <kentnl@g.o>
To: gentoo-project@l.g.o
Subject: Re: [gentoo-project] Call for agenda items - Council meeting 2016-08-14
Date: Tue, 09 Aug 2016 10:06:34
Message-Id: 20160809220545.6f1c7dd3@katipo2.lan
In Reply to: Re: [gentoo-project] Call for agenda items - Council meeting 2016-08-14 by Rich Freeman
On Tue, 9 Aug 2016 01:59:35 -0400
Rich Freeman <rich0@g.o> wrote:

> While I think your proposal is a great one, I think this is actually
> the biggest limitation. A lot of our packages (most?) don't actually
> have tests that can be run on every build (most don't have tests, some
> have tests that take forever to run or can't be used on a clean
> install).

IMHO, that's not "ideal", but we don't need idealism to be useful here.

Tests passing give one useful kind of quality check.

But "hey, it compiles" gives useful data in itself.

By easy counter-example, "it doesn't compile" is in itself useful
information ( and the predominant supply of bugs filed is compilation
failures ).

Hell, sometimes I hit a compile failure and I just go "eeh, I'll look
into it next week". How many people are doing the same?

The beauty of the automated datapoint is it doesn't have to be "awesome
quality" to be useful, it's just guidance for further investigation.

> While runtime testing doesn't HAVE to be extensive, we do want
> somebody to at least take a glance at it.

Indeed, I'm not hugely in favour of abolishing manual stabilization
entirely, but sometimes it just gets to a point where it's a bit beyond
a joke, with requests languishing untouched for months.

If there were even data saying "hey, look, it's obvious this isn't ready
for stabilization", we could *remove*, or otherwise mark for
postponement, stabilization requests that were failing according to
crowd-sourced metrics.

This means it could also be used to focus existing stabilization efforts
and reduce the number of things being thrown in the face of manual
stabilizers.
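
To put that roughly in code (a throwaway sketch; StableRequest, the
failure_rate field and the 25% threshold are all made up, nothing like
this exists today):

from dataclasses import dataclass

@dataclass
class StableRequest:
    bug_id: int
    cpv: str            # e.g. "dev-lang/perl-5.24.0"
    failure_rate: float  # crowd-sourced failure rate over some recent window

def triage(queue: list[StableRequest], threshold: float = 0.25):
    """Split the stable-request queue into 'proceed' and 'postpone' piles."""
    proceed = [r for r in queue if r.failure_rate <= threshold]
    postpone = [r for r in queue if r.failure_rate > threshold]
    return proceed, postpone

Arches would then spend their manual effort on the 'proceed' pile first.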

>
> If everything you're proposing is just on top of what we're already
> doing, then we have the issue that people aren't keeping up with the
> current workload, and even if that report is ultra-nice it is actually
> one more step than we have today. The workload would only go down if
> a machine could look at the report and stabilize things without input
> at least some of the time.

Indeed, it would require the crowd service to be automated, the
relevant usage of the data to be as automated as possible, and humans
would only go looking at the data when interested.

For instance, when somebody manually files a stable request, some
watcher could run off and scour the reports in a given window and
comment "Warning: above-threshold failure rates for target in last
n-days, proceed with caution", and that would only enhance the existing
stabilization workflow.
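
Roughly, such a watcher could boil down to something like this (again a
sketch only; the BuildReport shape, the window and the threshold are my
assumptions, not any existing report store or API):

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BuildReport:
    cpv: str             # e.g. "dev-lang/perl-5.24.0"
    arch: str            # e.g. "amd64"
    succeeded: bool      # did the build (and any tests run) pass?
    submitted: datetime

FAILURE_THRESHOLD = 0.25   # assumed cut-off: warn above 25% failures
WINDOW_DAYS = 30           # assumed reporting window

def stabilization_warning(cpv: str, arch: str,
                          reports: list[BuildReport]) -> str | None:
    """Return a warning comment for a stable request, or None if it looks OK."""
    cutoff = datetime.utcnow() - timedelta(days=WINDOW_DAYS)
    relevant = [r for r in reports
                if r.cpv == cpv and r.arch == arch and r.submitted >= cutoff]
    if not relevant:
        return None  # no data is not the same as "known good"
    failures = sum(1 for r in relevant if not r.succeeded)
    rate = failures / len(relevant)
    if rate > FAILURE_THRESHOLD:
        return (f"Warning: {rate:.0%} failure rate for {cpv} on {arch} "
                f"in the last {WINDOW_DAYS} days "
                f"({failures}/{len(relevant)} reports), proceed with caution")
    return None

The watcher would just post that string on the stable request bug and
otherwise stay out of the way.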
