Excuse me butting in... I'm just a little confused.
Not that this is anything new, I'm just ... well, confused.

On 01/14/2010 12:49 PM, Nirbheek Chauhan wrote:
> In theory, yes. In practice, git is too slow to handle 30,000 files.
> Even simple operations like git add become painful even if you put the
> whole of portage on tmpfs since git does a stat() on every single file
> in the repository with every operation.
>
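
Side note: if anyone wants to see that per-file stat() traffic for themselves,
something like the following should show it. This is only a sketch -- /usr/portage
is just an example path, the tree has to actually be a git checkout, and git mostly
issues lstat() rather than stat(), which is what will show up in the summary:

    cd /usr/portage                  # or any large git working tree
    strace -f -c git status > /dev/null
    # strace's syscall-count summary (printed on stderr) should show roughly
    # one lstat per tracked file
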
My understanding is that git was developed as the SCM for the kernel
project.
A quick check in an arbitrary untouched kernel in /usr/src/ suggests a
file [1] count of 25300.

Assuming that my figure isn't out by an order of magnitude, how does the
kernel team get along with git and 25k files when it is deathly slow for
our 30k?
Or, to phrase the question better... what are they doing that allows
them to manage?

Regards,
Daniel

[1] `find -type f | wc -l`, so all regular files