On Thu, Nov 17, 2016 at 4:37 AM, Michael Palimaka <kensington@g.o> wrote:
> On 17/11/16 20:16, Rich Freeman wrote:
>> On Thu, Nov 17, 2016 at 2:16 AM, Michael Palimaka <kensington@g.o> wrote:
>>> * A leaf package such as {{package|kde-apps/kcalc}} may not require any
>>> runtime testing at all
>>
>> I'm not really a big fan of this, but if we REALLY didn't want to do
>> any runtime testing on a package then we should have some way to tag
>> the package as such, and then have some way to do automated
>> build-test-only stabilization. If you aren't doing runtime testing
>> then manual stabilization adds zero value.
>
> How much value do you think we gain from runtime testing a package like
> kcalc as part of the stabilisation process, considering that it already
> sat in ~arch for at least 30 days?

We ensure that it actually runs at all with non-~arch dependencies?

The 30 days spent in ~arch tells you very little about whether the
package works with stable dependencies, since only those running mixed
keywords would be testing that.
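
For reference, the mixed-keyword setup in question is a one-liner;
something like this (the arch is purely illustrative) is all it takes
for an otherwise-stable user to test kcalc against stable dependencies:

    # /etc/portage/package.accept_keywords
    # Accept ~arch for a single package on an otherwise stable
    # system, so it is built and run against stable dependencies.
    kde-apps/kcalc ~amd64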

>
> Also, based on conversations with various arch team members, my
> understanding is that a lot of stabilisations that happen right now are
> already build-only.
>

Certainly this isn't the documented process, and it's the first I've
heard of it.

I think one of two things makes sense:

1. Manual runtime testing followed by stabilization.
2. Automated build testing followed by stabilization, with no human involved.

What doesn't make sense is manual build testing. The person is adding
zero value.
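
To make option 2 concrete: the per-package step could be as simple as
the rough sketch below (the atom, paths, and arch are purely
illustrative, and FEATURES="test" handling etc. is omitted):

    # Rough sketch: keyword one candidate on an otherwise stable box
    # and see whether it builds against stable dependencies.
    import subprocess

    def build_test(atom, arch="amd64"):
        """Return True if the atom builds with stable dependencies."""
        # Mixed keywords: accept ~arch for just this one atom.
        path = "/etc/portage/package.accept_keywords/stablereq"
        with open(path, "w") as f:
            f.write("%s ~%s\n" % (atom, arch))
        # --oneshot keeps the world file untouched; dependencies
        # still resolve from the stable tree.
        return subprocess.call(["emerge", "--oneshot", atom]) == 0

    if __name__ == "__main__":
        print(build_test("=kde-apps/kcalc-16.08.3") and "OK" or "FAILED")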

>>
>> Overall though the writeup was good and maybe it will trigger some
>> discussion. I tend to think that if we want to do things like testing
>> permutations and such then automated build-only tools might be the way
>> to address this. Manual effort should be focused on things like
>> runtime testing where it adds the most value. This also strikes me as
>> the sort of thing that could probably be assigned out to volunteers
>> who do not have commit access.
>
> There's a few tools for this out there already. I've personally been
> working to update app-portage/tatt for git - see
> https://asciinema.org/a/cqsy983t9jimszvypcjr3zg5m for a demo.
>
|
Assuming we decide we don't care about runtime testing (which I'm not
sure I'm a fan of), it sounds like the only things this is missing are:
1. Running as a service on Gentoo infra without any person at the keyboard.
2. Automatically monitoring the bug queue for anything that can be
stabilized, taking into account blockers/dependencies/etc.
3. Posting failures to the bug, and then removing that bug from the
queue until somebody marks it as ready to go back in.
4. On success, updating the bug (including closing it if it's the last
arch) and doing the commit using an infra account.
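
Roughly, I'd picture the service itself looking like the sketch below.
The Bugzilla REST endpoints are the standard ones, but the exact search
parameters are from memory, and the API key, the atom parsing, and the
commit step are all hand-waving on my part (build_test() is the helper
from the earlier sketch):

    import requests

    BUGZILLA = "https://bugs.gentoo.org/rest"
    API_KEY = "..."  # an infra-owned key; placeholder

    def stabilization_queue():
        # Open STABLEREQ bugs; blocker/dependency handling omitted.
        r = requests.get(BUGZILLA + "/bug",
                         params={"keywords": "STABLEREQ",
                                 "resolution": "---"})
        r.raise_for_status()
        return r.json()["bugs"]

    def comment(bug_id, text):
        # Post results (pass or fail) back to the bug.
        requests.post("%s/bug/%d/comment" % (BUGZILLA, bug_id),
                      params={"api_key": API_KEY},
                      json={"comment": text}).raise_for_status()

    for bug in stabilization_queue():
        for atom in extract_atoms(bug):  # imaginary summary parser
            if build_test(atom):
                comment(bug["id"], atom + ": build test passed")
                # commit the keywording here, closing if last arch
            else:
                comment(bug["id"], atom + ": build test FAILED")
                # drop from the queue until marked ready again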

However, I'd be interested in metrics on failures discovered during
runtime testing, or missed for lack of it. I'll admit that a lot of
runtime testing tends to be fairly shallow, but I do think there is
something to be said for doing some kind of runtime testing.
69 |
|
70 |
I think we need to think about why we actually have a stable branch. |
71 |
Does it offer any value if all we do is build testing, when I'm sure |
72 |
the maintainers at least build their packages in ~arch before they |
73 |
commit them? |
74 |
|
75 |
-- |
76 |
Rich |