On 02.05.2007, at 02:32, Marius Mauch wrote:

> a) cost (in terms of runtime, resource usage, additional deps)

Tools for this could be implemented in the package manager. The
package has to be installed and tested by the developer anyway, so if
Portage showed the times for each stage, or at least the test time
versus the rest, the developer could get an idea: if the test time is
less than the build time (or less than half of the total time), it's
not that much cost. If the test time is less than 1 hour (or whatever
threshold), it's not that much cost. In any other case, it's a lot of
cost.
13 |
|
14 |
> b) effectiveness (does a failing/working test mean the package is |
15 |
> broken/working?) |
16 |
|
17 |
To figure this out before releasing a package to the tree might be |
18 |
lots of work. so this could be figured out later. If there are bugs |
19 |
about tests failing, try to reproduce it or ask the reporter to do |
20 |
some tests if everything is working as expected. |
21 |
|
22 |
> c) importance (is there a realistic chance for the test to be useful?) |
23 |
|
24 |
This can easily be decided, as mentioned in other posts (scientific |
25 |
packages, core packages, cryptographic packages,...) |
26 |
|
27 |
> d) correctness (does the test match the implementation? overlaps a bit |
28 |
> with effectiveness) |
29 |
|
30 |
This might be a lot of work. I think this cannot be tested in a sane |
31 |
way for every package. So it's probably up to the maintainer/herd or |
32 |
upstream to decide if he/they sould take care of this |
33 |
|
34 |
Philipp |
35 |
|
36 |
|
37 |
-- |
38 |
gentoo-dev@g.o mailing list |