On Wed, Sep 5, 2012 at 12:44 PM, Rich Freeman <rich0@g.o> wrote:
> On Wed, Sep 5, 2012 at 3:19 AM, Fabio Erculiani <lxnay@g.o> wrote:
>> The complexity would become:
>> O((b + r + p) * P)
>> b = amount of buildtime dependencies
>> r = amount of runtime dependencies
>> p = amount of post-dependencies
>> P = amount of packages i need to apply the filter to
>>
>> Instead of just:
>> O(b * P)
>
> Well, actually, in both cases the complexity is O(n), assuming you
> only need to make a constant number of passes through the deps per
> package.
>
> The whole point of O notation is that it is about how it scales, not
> how long it takes. An O(n) algorithm can actually be slower than an
> O(n^n) algorithm even on a large dataset. However, the key is that at
> some point if the dataset gets large enough the O(n) algorithm will
> always become faster.
>
> I tend to agree with Cirian though - the time for some code to run
> through the dependencies array and do something isn't going to be very
> high. If a piece of code has to do it many times there is nothing
> that says the package manager can't index it.

I don't want to diverge (again) from the main topic, but I think that
you're oversimplifying it.
When parsing an ebuild is hidden behind a lot of abstraction layers,
O(n) vs O(n/2) is a big practical difference, even if both, once
normalized, are still O(n). And I would never design an API that
assumes O(n/2) equals O(n), because you don't know how it is going
to be used in upper layers: today, tomorrow, and in five years.
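To make the trade-off concrete, here is a minimal sketch (illustrative names only, not real Portage code): scanning DEPEND, RDEPEND, and PDEPEND per package stays linear but roughly triples the per-package work, while a one-time reverse index, as Rich suggests, amortizes repeated lookups.

```python
# Hypothetical package metadata; the structure is an assumption
# for illustration, not the actual package manager's data model.
packages = {
    "app-misc/foo": {
        "DEPEND": ["dev-libs/a", "dev-libs/b"],
        "RDEPEND": ["dev-libs/a"],
        "PDEPEND": ["app-misc/bar"],
    },
    "app-misc/bar": {
        "DEPEND": ["dev-libs/b"],
        "RDEPEND": ["dev-libs/b", "dev-libs/c"],
        "PDEPEND": [],
    },
}

def filter_build_only(pkgs):
    # O(b * P): one pass over build-time deps per package.
    return {name: [d for d in meta["DEPEND"] if d.startswith("dev-libs/")]
            for name, meta in pkgs.items()}

def filter_all(pkgs):
    # O((b + r + p) * P): same asymptotic class, but about three
    # times the constant-factor work per package.
    return {name: [d
                   for key in ("DEPEND", "RDEPEND", "PDEPEND")
                   for d in meta[key] if d.startswith("dev-libs/")]
            for name, meta in pkgs.items()}

def build_reverse_index(pkgs):
    # One O(n) pass; afterwards "who depends on X?" is a dict lookup,
    # which is the "package manager can index it" point.
    index = {}
    for name, meta in pkgs.items():
        for key in ("DEPEND", "RDEPEND", "PDEPEND"):
            for dep in meta[key]:
                index.setdefault(dep, set()).add(name)
    return index

index = build_reverse_index(packages)
print(sorted(index["dev-libs/b"]))  # -> ['app-misc/bar', 'app-misc/foo']
```

Whether the 3x constant matters depends entirely on how hot the code path is, which is exactly the disagreement above.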

>
> Rich
>

--
Fabio Erculiani