On Tue, 24 Feb 2009 18:24:16 +0000
Ciaran McCreesh <ciaran.mccreesh@××××××××××.com> wrote:

> On Tue, 24 Feb 2009 18:16:54 +0100
> Luca Barbato <lu_zero@g.o> wrote:
> > > You're doubling the number of files that have to be read for an
> > > operation that's almost purely i/o bound. On top of that, you're
> > > introducing a whole bunch of disk seeks in what's otherwise a nice
> > > linear operation.
> >
> > I see words, not numbers.
>
> Number: double. That's a '2 times'.
15 |
That only means you're increasing the constant factor in the |
16 |
complexity of the thing... which may very well be completely negligible |
17 |
unless someone provides real benchmarks. I would be very surprised if |
18 |
that "2 times" factor happens to be true, because finding a string in a |
19 |
file is an order of magnitude simpler than sourcing said file with |
20 |
bash. Moreover this doesn't take into account disk and os cache. |
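A benchmark along the lines argued for above could be sketched roughly like
this (the `files/` directory and its contents are made up for illustration;
real repository cache files would obviously differ, and `time` output is
read by eye rather than parsed):

```shell
#!/bin/sh
# Hypothetical micro-benchmark: grepping one key out of each file
# vs. sourcing each file with bash.

# Set up 100 small bash-sourceable files (illustrative data only).
mkdir -p files
for i in $(seq 1 100); do
    printf 'KEY_%d="value %d"\nOTHER="stuff"\n' "$i" "$i" > "files/f$i"
done

echo "grep a string out of each file:"
time sh -c 'for f in files/*; do grep -q "^KEY_" "$f"; done'

echo "source each file with bash:"
time sh -c 'for f in files/*; do bash -c ". \"$f\""; done'
```

Run warm (a second time) to factor out the disk cache effect mentioned
above; the spread between the two loops, not the absolute numbers, is
what matters here.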

Alexis.