On Sun, 12 Apr 2020 11:21:29 +0200
Michał Górny <mgorny@g.o> wrote:

> On Sun, 2020-04-12 at 10:43 +0200, Agostino Sarubbo wrote:
> > Hello all,
> >
> > If you work on the stabilization workflow, you may have noticed that:
> >
> > - There are people who rant if you don't run src_test against their packages;
> > - There are people who rant if you open a test-failure bug against their
> > packages and block the stabilization.
> >
> > So, unless there is a clear policy about that, I'd suggest introducing a
> > way to establish whether a package is expected to pass src_test or not.
> >
> > A simple way to achieve this goal would be to introduce a new Bugzilla
> > custom flag (like the "Runtime testing required" flag we already have),
> > maybe called "src_test pass" or similar, that defaults to yes and that
> > the maintainer sets to no when needed.
> >
> > In case of 'yes', the arch team member must compile with FEATURES="test"
> > and is allowed to block the stabilization in case of a test failure.
> >
> > In case of a test failure there are two choices:
> > 1) The maintainer fixes the test failure;
> > 2) The maintainer does not want to take care of it, so he can simply
> > remove the blocker and set "src_test pass" to no.
> >
> > Both of the above are fine for me.
> >
> > Obviously, if there are already test-failure bugs open, the flag
> > "src_test pass" should be set to 'no' when the stabilization bug is filed.
> >
>
> This is not a problem that can be solved by a binary flag.
>
> If a package's test suite is entirely broken and unmaintained, the package
> should use RESTRICT=test and not silently ask arch teams to ignore it.
>
> If a package's test suite is only slightly broken, then I'd prefer saying
> 'please report but ignore *these* test failures' because I can't fix
> them right now but they don't seem major. Skipping the test suite
> entirely is not a solution because it doesn't disambiguate between 'few
> tests fail' and 'every single test explodes'.
>

I agree with this; the way to mark an ebuild as "tests don't work,
please ignore them" is RESTRICT=test. Usually, for packages that have
a *few* tests that fail and don't seem major, I would suggest trying
to patch out the tests, or asking the test harness to skip the known
bad tests. This way the tinderbox will (hopefully) still succeed.
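
For illustration, a minimal ebuild sketch of both situations; the test
path, test name and pytest call below are made-up assumptions for a
hypothetical pytest-based package:

  # Whole test suite broken/unmaintained: don't ask arch teams to run it.
  RESTRICT="test"

  # Only a few known-bad tests: keep FEATURES=test usable by skipping
  # just those tests (--deselect is pytest syntax; other test harnesses
  # have their own skip mechanisms).
  src_test() {
          pytest -vv --deselect tests/test_network.py::test_remote_fetch \
                  || die "tests failed"
  }

With the second approach, an arch team member running something like
FEATURES="test" emerge --oneshot <package> still gets a meaningful
pass/fail result for the rest of the suite, which is the outcome Michał
describes above.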