Dave Oxley <dave <at> daveoxley.co.uk> writes:

> Yeah, its a hub. I'll stop trying to set it to full duplex now.
> It is 40 times quicker in one direction than the other. Can you
> give me a hint where to go from here.

Try connecting the two systems with only a cross-over cable, run the
applications, and make measurements. If this results in an increase in
bandwidth in either direction, you may want to put the systems back on
the hub and have a third system run ethereal. Look at your data traffic
and see if anything else is using bandwidth from either of these two
systems, or what else is plugged into the hub/switch.
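To quantify the asymmetry independently of your applications, here is a minimal sketch of a one-way TCP throughput test (hypothetical — the port, chunk size, and byte count are arbitrary; run the receiver part on one host and the sender on the other, then swap roles; shown on loopback here for brevity):

```python
import socket, threading, time

PORT = 5001                 # assumed free port
CHUNK = 64 * 1024           # per-send buffer
TOTAL = 8 * 1024 * 1024     # bytes to push per direction

def receiver(ready):
    # Accept one connection and drain TOTAL bytes from it.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    got = 0
    while got < TOTAL:
        data = conn.recv(CHUNK)
        if not data:
            break
        got += len(data)
    conn.close()
    srv.close()

def sender():
    # Time how long it takes to push TOTAL bytes; return MB/s.
    start = time.time()
    s = socket.create_connection(("127.0.0.1", PORT))
    buf = b"x" * CHUNK
    sent = 0
    while sent < TOTAL:
        s.sendall(buf)
        sent += len(buf)
    s.close()
    return sent / (time.time() - start) / 1e6

ready = threading.Event()
t = threading.Thread(target=receiver, args=(ready,))
t.start()
ready.wait()
rate = sender()
t.join()
print("throughput: %.1f MB/s" % rate)
```

Run it in both directions between the two boxes: a large, repeatable difference in the two rates with a test this dumb points below the application layer.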

Is the hub a 10 Mbps-only hub/switch? Check that. On 10 Mbps ethernet
hubs you can never reach the full 10 Mbps; in fact, with many systems
chattering, the practical throughput is only around 33%.

If you get similarly poor results over the cross-over cable, then the
problem may be in the ethernet driver code, kernel, IRQ settings, or
some other low-level part of the kernel/modules, especially if you get
the same skewed results with several different applications moving data
between the systems. But if you move data between these two isolated
systems and get different bandwidth in each direction, then the problem
is most likely in the applications or a bottleneck in the application
code (a poor data structure, for example).

Make sure your computers are not resource limited, thus blocking the
process that you are running to move the data. top and ntop are just a
couple of the tools that can help track down these sorts of issues.
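If you want something scriptable alongside top, a Linux-specific sketch (not from this thread, just an illustration) is to compare the load average against the CPU count via /proc:

```python
import os

# /proc/loadavg starts with the 1-, 5-, and 15-minute load averages.
with open("/proc/loadavg") as f:
    load1 = float(f.read().split()[0])
ncpus = os.cpu_count() or 1
print("1-min load %.2f on %d CPU(s)" % (load1, ncpus))
if load1 >= ncpus:
    # A sustained load at or above the CPU count suggests the copy
    # process may be starved for CPU rather than network limited.
    print("system looks busy; transfer may be resource limited")
```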

Sadly, you may have a complex mix of some or all of these
aforementioned issues...

hth,
James

--
gentoo-user@g.o mailing list