On Tue, Aug 9, 2016 at 8:11 PM, Brian Dolbec <dolsen@g.o> wrote:

> On Tue, 9 Aug 2016 22:05:45 +1200
> Kent Fredric <kentnl@g.o> wrote:
>
> > On Tue, 9 Aug 2016 01:59:35 -0400
> > Rich Freeman <rich0@g.o> wrote:
> >
> > > While I think your proposal is a great one, I think this is actually
> > > the biggest limitation. A lot of our packages (most?) don't
> > > actually have tests that can be run on every build (most don't have
> > > tests, some have tests that take forever to run or can't be used on
> > > a clean install).
> >
> > IMHO, that's not ideal, but we don't need idealism to be useful
> > here.
> >
> > Passing tests give one useful kind of quality signal.
> >
> > But "hey, it compiles" gives useful data in itself.
> >
> > By easy counter-example, "it doesn't compile" is in itself useful
> > information (and the predominant supply of bugs filed is compilation
> > failures).
> >
> > Hell, sometimes I hit a compile failure and I just go "eh, I'll look
> > into it next week". How many people are doing the same?
> >
> > The beauty of the automated datapoint is that it doesn't have to be
> > "awesome quality" to be useful; it's just guidance for further
> > investigation.
> >
> > > While runtime testing doesn't HAVE to be extensive, we do want
> > > somebody to at least take a glance at it.
> >
> > Indeed, I'm not hugely in favour of abolishing manual stabilization
> > entirely, but sometimes it gets to a point where it's a bit beyond
> > a joke, with requests languishing untouched for months.
> >
> > If there were even data saying "look, it's obvious this isn't ready
> > for stabilization", we could *remove*, or otherwise mark for
> > postponement, stabilization requests that were failing according to
> > crowd-sourced metrics.
> >
> > This means it could also be used to focus existing stabilization
> > efforts and reduce the number of things being thrown in the face of
> > manual stabilizers.
> >
> > > If everything you're proposing is just on top of what we're already
> > > doing, then we have the issue that people aren't keeping up with the
> > > current workload, and even if that report is ultra-nice it is
> > > actually one more step than we have today. The workload would only
> > > go down if a machine could look at the report and stabilize things
> > > without input at least some of the time.
> >
> > Indeed, it would require the crowd service to be automated, the
> > relevant usage of the data to be as automated as possible, and humans
> > would only go looking at the data when interested.
> >
> > For instance, when somebody manually files a stable request, some
> > watcher could run off, scour the reports in a given window, and
> > comment "Warning: above-threshold failure rates for target in last
> > n days, proceed with caution", and it would only enhance the existing
> > stabilization workflow.
>
> This whole thing you are proposing has been a past stats project many
> times in GSoC for Gentoo. The last time, it produced a decent system
> that was functional and __NEVER__ got deployed and turned on. It ran
> for several years on the Gentoo GSoC student server (vulture). It
> never gained traction with the infra team due to a lack of infra
> resources and infra personnel to maintain it.
>
> Perhaps with the new hardware recently purchased to replace the failed
> server from earlier this year, we should have some hardware resources.
> If you can dedicate some time to work on the code, which I'm sure will
> need some updating by now, I would help as well (not that I already
> can't keep up with all the project coding I'm involved with).
>
> This is of course if we can get a green light from our infra team to
> implement a stats VM on the new Ganeti system.
>
> We will also need some help from security people to ensure the system
> is secure, nginx/lighttpd configuration, etc.
>
> So, are you up for it? Any Gentoo dev willing to help admin such a
> system, please reply with your area of expertise and ability to help.
>
> Maybe we can finally get a working and deployed stats system.
>
> --
> Brian Dolbec <dolsen>
>
>
The similar GSoC project this year is in fact my project, named
[Continuous Stabilization]. I would be very interested to know what I
can do to ensure that the system is deployed and used this time.
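
For what it's worth, the watcher Kent describes (comment a warning on a
stable request when crowd-sourced reports show too many failures in the
window) could be sketched roughly like this. All names, thresholds, and the
report shape are hypothetical, just to make the idea concrete:

```python
from dataclasses import dataclass

# Hypothetical sketch of a stable-request watcher: given crowd-sourced
# build reports for a package within the last n days, decide whether to
# post a warning comment. Names and thresholds are illustrative only.

@dataclass
class Report:
    package: str   # e.g. "dev-lang/perl-5.24.0"
    success: bool  # did the build (and any enabled tests) succeed?

def failure_rate(reports):
    """Fraction of failed reports, or None when there is no data."""
    if not reports:
        return None
    failures = sum(1 for r in reports if not r.success)
    return failures / len(reports)

def stabilization_warning(reports, threshold=0.2, window_days=30):
    """Return a warning comment for the stable request, or None if OK."""
    rate = failure_rate(reports)
    if rate is None:
        return ("Warning: no crowd-sourced reports in last %d days."
                % window_days)
    if rate > threshold:
        return ("Warning: above-threshold failure rate (%.0f%%) for "
                "target in last %d days, proceed with caution."
                % (rate * 100, window_days))
    return None

if __name__ == "__main__":
    reports = [Report("dev-lang/perl-5.24.0", ok)
               for ok in (True, False, False, True)]
    print(stabilization_warning(reports))
```

The point of the design is Kent's: the bot never blocks anything, it only
annotates the existing workflow, so noisy data degrades gracefully into
"guidance for further investigation" rather than wrong automatic decisions.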