On Wed, 2 Oct 2013 10:12:00 +1300
Kent Fredric <kentfredric@×××××.com> wrote:

> On 2 October 2013 08:51, Peter Stuge <peter@×××××.se> wrote:
>
> > I agree, but I think the problem is basically that many people
> > consider it impossible for "newer" to ever be shitty.
> >
> > Even if they are intimately familiar with the details of a package
> > upstream, they may still not be capable of determining what is
> > shitty in the context of a distribution.
> >
> > I guess most stabilization requesters, as well as the actual
> > stabilizers, are even less familiar with upstream details, so
> > determining what is shitty and what is not is really hard.. :\
> >
>
> The other really annoying thing you have to consider is that most
> people out there are using all-(~ARCH) or all-(ARCH) keywording,
> not a mix of the two.
>
> ^ This has the fun side effect that a package which is (~ARCH), has
> (ARCH) dependents, and is pending stabilization has likely had
> nobody test it at all except for arch testers.

AFAIK this is moot, since users running testing are expected to expect
breakage regardless of what the breakage is, as long as it is caused
by a testing package.

BTW, I'm one of those running a mixed stable/testing system, and based
on my limited experience I believe this is a rare scenario.

> So if you're relying on the presence of filed bugs to give some sort
> of coverage metric, you're going to be out of luck from time to
> time. For instance, that fun bug where stabilising a version of
> libraw broke the things depending upon it that were already stable.
>
> It's ironic, really: that's essentially a hidden bug that exists as
> a result of having two tiers of testing.
>
> https://bugs.gentoo.org/show_bug.cgi?id=458516
>
> I've long been wishing there was a FEATURE I could turn on that
> would just report installs to a database somewhere, showing
> success/fail/success+tests/fail+tests, with dependencies, USE flags,
> and other relevant context, so you'd at least have a dataset of
> *success* rates, and you could do more heavy testing on things where
> there were fail results, or an absence of results.

+1
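
To make the idea concrete: the success/fail/success+tests/fail+tests
dataset described above could be as little as one structured record
per install. A minimal sketch in Python, where the field names and the
example package data are purely illustrative (no such FEATURES hook
exists in Portage today):

```python
import json

def build_install_report(package, version, result, use_flags, deps, tests_run):
    """Assemble one hypothetical per-install report record.

    All field names are made up for illustration; a real hook would
    presumably also capture arch, profile, and toolchain context.
    """
    assert result in ("success", "fail")
    return {
        "package": package,
        "version": version,
        # Combined outcome, e.g. "fail+tests" when the test phase ran.
        "outcome": result + ("+tests" if tests_run else ""),
        "use": sorted(use_flags),
        "depends": sorted(deps),
    }

# Illustrative only: a failed install with FEATURES=test enabled.
report = build_install_report(
    "media-libs/libraw", "0.0.0", "fail",
    use_flags={"jpeg", "lcms"}, deps={"media-libs/lcms2"}, tests_run=True,
)
print(json.dumps(report, sort_keys=True))
```

The missing piece would be shipping each record off to a collector,
which is where the aggregate success-rate dataset would come from.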

> CPAN has a comparable service that leverages end users reporting
> test runs on code while they install it, and some end users, given
> this power, go out and set up mass automated testing boxes, which I
> find useful on a daily basis: a user is far more likely to let an
> automated success/fail report be sent to a server, and far *less*
> likely to want to set time aside for the rigmarole of filing a bug
> report.
>
> Some users are also inclined to just try a few versions on either
> side and never file a bug report, or to twiddle USE flags or
> disable problematic FEATURES until they find a version that works
> for them, and you may never see a bug report for that.
>
> An automated "X combination failed" report at the very least gives
> you a datapoint where a failure occurred.

I'd be cautious about involving users in this, as it very often
happens to me that something breaks, I get mad, and then figure out
it was my own fault (e.g. messing with CFLAGS I shouldn't mess with).

> I'm not saying we should make any automated decisions *based* on
> those reports, but it would be far more useful to have a list of
> known failure cases up front than to ask a tester to get lucky and
> find one by looking for it.
>
> ( After all, bugs often arise when you're not looking for them, as
> opposed to when you are, and some bugs, when looked for, vanish )
>