On Wed, 10 Aug 2016 06:45:16 +0530
Pallav Agarwal <pallavagarwal07@×××××.com> wrote:

> On Tue, Aug 9, 2016 at 8:11 PM, Brian Dolbec <dolsen@g.o>
> wrote:
>
> > On Tue, 9 Aug 2016 22:05:45 +1200
> > Kent Fredric <kentnl@g.o> wrote:
> >
> > > On Tue, 9 Aug 2016 01:59:35 -0400
> > > Rich Freeman <rich0@g.o> wrote:
> > >
> > > > While I think your proposal is a great one, I think this is
> > > > actually the biggest limitation. A lot of our packages (most?)
> > > > don't actually have tests that can be run on every build (most
> > > > don't have tests, some have tests that take forever to run or
> > > > can't be used on a clean install).
> > >
> > > IMHO, that's not "ideal", but we don't need idealism to be useful
> > > here.
> > >
> > > Tests passing give one useful kind of quality test.
> > >
> > > But "hey, it compiles" gives useful data in itself.
> > >
> > > By easy counter-example, "it doesn't compile" is in itself useful
> > > information (and the predominant share of bugs filed are
> > > compilation failures).
> > >
> > > Hell, sometimes I hit a compile failure and I just go "eeh, I'll
> > > look into it next week". How many people are doing the same?
> > >
> > > The beauty of the automated datapoint is that it doesn't have to
> > > be "awesome quality" to be useful; it's just guidance for further
> > > investigation.
> > >
> > > > While runtime testing doesn't HAVE to be extensive, we do want
> > > > somebody to at least take a glance at it.
> > >
> > > Indeed, I'm not hugely in favour of abolishing manual
> > > stabilization entirely, but sometimes it just gets to a point
> > > where it's a bit beyond a joke, with requests languishing
> > > untouched for months.
> > >
> > > If there was even data saying "hey, look, it's obvious this isn't
> > > ready for stabilization", we could *remove*, or otherwise mark
> > > for postponement, stabilization requests that were failing due to
> > > crowd-sourced metrics.
> > >
> > > This means it can also be used to focus existing stabilization
> > > efforts and reduce the number of things being thrown in the face
> > > of manual stabilizers.
> > >
> > > >
> > > > If everything you're proposing is just on top of what we're
> > > > already doing, then we have the issue that people aren't
> > > > keeping up with the current workload, and even if that report
> > > > is ultra-nice it is actually one more step than we have today.
> > > > The workload would only go down if a machine could look at the
> > > > report and stabilize things without input at least some of the
> > > > time.
> > >
> > > Indeed, it would require the crowd service to be automated, the
> > > relevant usage of the data to be as automated as possible, and
> > > humans would only go looking at the data when interested.
> > >
> > > For instance, when somebody manually files a stable request, some
> > > watcher could run off, scour the reports in a given window, and
> > > comment "Warning: above-threshold failure rates for target in
> > > last n days, proceed with caution", and it would only enhance the
> > > existing stabilization workflow.
> >
> > This whole thing you are proposing has been a past stats project
> > many times in GSoC for Gentoo. The last time, it produced a decent
> > system that was functional and __NEVER__ got deployed and turned
> > on. It ran for several years on the Gentoo GSoC student server
> > (vulture). It never gained traction with the infra team due to
> > lack of infra resources and infra personnel to maintain it.
> >
> > Perhaps with the new hardware recently purchased to replace the
> > failed server from earlier this year, we should have some hardware
> > resources. If you can dedicate some time to work on the code,
> > which I'm sure will need some updating now, I would help as well
> > (not that I can already keep up with all the project coding I'm
> > involved with).
> >
> > This is, of course, if we can get a green light from our infra
> > team to implement a stats VM on the new Ganeti system.
> >
> > We will also need some help from security people to ensure the
> > system is secure, nginx/lighttpd configuration, etc.
> >
> > So, are you up for it? Any Gentoo dev willing to help admin such a
> > system, please reply with your area of expertise and ability to
> > help.
> >
> > Maybe we can finally get a working and deployed stats system.
> >
> > --
> > Brian Dolbec <dolsen>
> >
> >
> The similar GSoC project this year is in fact my project, named
> [Continuous Stabilization]. I would be very interested to know what
> I can do to ensure that the system is deployed and used this time.

Yes, I am familiar with your project.

The best way to ensure that it is set up and used is to become a
Gentoo developer after GSoC is over, keep working on your code, and
push to get it installed and running on a server.

The stats project, when we can get it going, could be used to assist
your code and help determine the stable candidates. I don't remember
if it got implemented, but I proposed an optional/configurable method
for the stats project to make users aware of a questionnaire about a
pkg they have installed. That way a maintainer can ask users of the
pkg some questions to help determine any issues that might prevent
stabilization (all anonymous, of course, if the user chooses).

--
Brian Dolbec <dolsen>