On Wed, Sep 5, 2012 at 2:46 PM, Rich Freeman <rich0@g.o> wrote:
> On Wed, Sep 5, 2012 at 7:27 AM, Ciaran McCreesh
> <ciaran.mccreesh@××××××××××.com> wrote:
>> Uhm. O(n) == O(n/2). Anything assuming they're different is just wrong.
>
> We're basically debating definitions. O notation is used to indicate
> how algorithms scale, and as a result nobody writes O(n/2) and the like.
>
> An algorithm that is twice as slow is still twice as slow. That might
> or might not matter. However, this isn't the domain where O notation
> is used. You use O notation when your algorithm takes 30 seconds to
> run and you want to know what happens when the dataset doubles 3000
> times. It generally doesn't matter if the result is that it will take
> 1 trillion years to run or 2 trillion years. You care more about
> whether it will take minutes, hours, weeks, years, or whatever.
>
> I can't really think of any practical examples where multiplying the
> time to parse a list of maybe 50 items vs 5 lists of 10 items is going
> to make that big of a difference. They're just lines in a text file -
> your CPU can compare a few billion characters per second. Sure, if
> you add 75 layers of abstraction you might be able to find just the
> right point where a factor of 5 is going to make it intolerable while
> a factor of 1 is almost acceptable, but add or remove a few layers
> and suddenly it is all fine or all horrible anyway. That is a bit
> contrived. That's why everybody ignores constant factors in O
> notation anyway.

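(To spell out Ciaran's point for anyone following along - this is just
the standard textbook definition of big-O, not something from either
mail: f is in O(g) iff f(n) <= c*g(n) for some constant c > 0 and all
sufficiently large n. A minimal worked statement:

\[
  \frac{n}{2} \le \frac{1}{2}\,n \;\Rightarrow\; \frac{n}{2} \in O(n),
  \qquad
  n \le 2 \cdot \frac{n}{2} \;\Rightarrow\; n \in O\bigl(\tfrac{n}{2}\bigr),
\]

so O(n) and O(n/2) are literally the same set of functions.)
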
so tl;dr, this doesn't matter because string comparison is very fast.

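If anyone wants numbers for the 50-items-vs-5x10 case, here's a rough
micro-benchmark sketch (Python, with made-up package names; absolute
timings vary by machine, the point is only the order of magnitude):

    # Time parsing one 50-item list vs 5 lists of 10 items.
    # Item names below are invented for illustration.
    import timeit

    one_list = " ".join("sys-apps/pkg-%d" % i for i in range(50))
    five_lists = [" ".join("sys-apps/pkg-%d" % i
                           for i in range(j, j + 10))
                  for j in range(0, 50, 10)]

    def parse_one():
        return one_list.split()

    def parse_five():
        items = []
        for chunk in five_lists:
            items.extend(chunk.split())
        return items

    for name, fn in (("one 50-item list", parse_one),
                     ("five 10-item lists", parse_five)):
        # timeit.timeit returns total seconds for `number` calls.
        secs = timeit.timeit(fn, number=100000)
        print("%s: %.3f us per parse" % (name, secs * 10))

Both should land in the low microseconds on anything recent, which is
the whole point: a 5x constant factor on that is still nothing.
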
-A

>
> Rich
>