On Wed, Sep 5, 2012 at 7:27 AM, Ciaran McCreesh
<ciaran.mccreesh@××××××××××.com> wrote:
> Uhm. O(n) == O(n/2). Anything assuming they're different is just wrong.

We're basically debating definitions. O notation is used to indicate
how algorithms scale, and as a result nobody writes things like O(n/2).

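(For the record, that equality is just the textbook definition at work; this
is the standard argument, written in the same informal notation as above.
f(n) is O(g(n)) iff there are constants c > 0 and n0 with f(n) <= c*g(n) for
all n >= n0. If f(n) <= c*(n/2) then f(n) <= (c/2)*n, and if f(n) <= c*n then
f(n) <= (2c)*(n/2), so O(n) and O(n/2) contain exactly the same functions.
Any constant factor disappears the same way.)
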
An algorithm that is twice as slow is still twice as slow. That might
or might not matter. However, this isn't the domain where O notation
is used. You use O notation when your algorithm takes 30 seconds to
run and you want to know what happens when the dataset doubles 3000
times. It generally doesn't matter if the result is that it will take
1 trillion years to operate or 2 trillion years. You care more about
whether it will take minutes, hours, weeks, years, or whatever.

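To put rough numbers on that, here's a back-of-the-envelope sketch (Python,
with entirely made-up figures other than the 30-second baseline above):

    # Rough projection only: how a 30 s baseline grows as the input doubles k times.
    baseline = 30.0                     # seconds on the original dataset
    for k in (1, 10, 40):               # number of doublings
        linear = baseline * 2 ** k      # an O(n) algorithm grows by roughly 2^k
        quadratic = baseline * 4 ** k   # an O(n^2) algorithm grows by roughly 4^k
        print("doubled %2d times: O(n) ~ %.3g s, O(n^2) ~ %.3g s"
              % (k, linear, quadratic))

A constant factor of 2 in front just shifts either column by one doubling,
which is exactly why nobody bothers writing it down.
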
I can't really think of any practical examples where the difference
between parsing one list of maybe 50 items and parsing 5 lists of 10
items is going to matter much. They're just lines in a text file -
your CPU can compare a few billion characters per second. Sure, if you
add 75 layers of abstraction you might be able to find just the right
point where a factor of 5 is intolerable but a factor of 1 is almost
acceptable, but add or remove a few layers and suddenly it is all fine
or all horrible anyway. That is a bit contrived. That's why everybody
ignores constant factors in O notation anyway.

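If anyone wants to convince themselves, here's a throwaway Python sketch
(invented data, not a real benchmark): scanning 50 short lines takes on the
order of microseconds per pass in either layout, so a factor of 5 on top of
that is never going to be noticeable.

    import time

    # Fifty made-up lines: one flat list, and the same lines as five lists of ten.
    lines = ["item-%d" % i for i in range(50)]
    chunks = [lines[i:i + 10] for i in range(0, 50, 10)]

    def scan(groups):
        # Trivial "parse": look at every character of every line in every group.
        return sum(1 for group in groups for line in group for ch in line)

    start = time.perf_counter()
    for _ in range(1000):
        scan([lines])   # one list of 50
        scan(chunks)    # five lists of 10
    print("1000 rounds of both layouts: %.3f ms" % ((time.perf_counter() - start) * 1e3))
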
Rich