> On a remote network I run ethtool on both cards and I got both 1000Mb/s
> speed

As the 20-odd MB/s you're getting is above what is possible on 100M
ethernet, you can rule out any ethernet interfaces running at 100M.
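For reference, the quick arithmetic behind ruling that out (framing overhead ignored):

```shell
# 100 Mbit/s converted to MB/s: well below the ~20 MB/s observed
echo $(( 100 * 1000 * 1000 / 8 / 1000000 ))   # prints 12
```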

Can you describe the network between the two systems with the slow transfer?

If there is a fast WAN from one side of the globe to the other, it could be
latency related. OpenSSH used to have a fixed internal window size that
made it slow on high-bandwidth, high-latency links, and I notice the hpn USE
flag still exists in the openssh ebuild, which implies the issue with
openssh still exists. Rsync can use either ssh or its own protocol, so if
there's a high-latency link between the two boxes and rsync is using ssh,
you could investigate rebuilding openssh with +hpn.
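The arithmetic behind that: a fixed window caps throughput at roughly window / RTT, no matter how fast the link is. The 2 MiB window and 150 ms RTT below are illustrative numbers, not measurements from your setup:

```shell
# Fixed-window throughput ceiling: window_bytes / round_trip_time.
# Hypothetical figures: 2 MiB channel window, 150 ms WAN round trip.
window=$((2 * 1024 * 1024))                     # bytes
rtt_ms=150                                      # milliseconds
echo $(( window * 1000 / rtt_ms / 1000000 ))    # MB/s ceiling, prints 13
```

So a long enough RTT alone can pin a gigabit transfer in the low tens of MB/s, which is why the latency number matters.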

What does ping show the latency as?

Otherwise I'd be thinking about packet loss. The first place to start for
that is the endpoint interfaces:

ifconfig enp35s0f0 | grep err
RX errors 0  dropped 0  overruns 0  frame 0
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
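If ifconfig isn't installed on one of the boxes, the same counters can be read straight from sysfs; enp35s0f0 here is just the interface name from the example above, so substitute your own:

```shell
# Per-interface error/drop counters straight from the kernel's sysfs tree
for f in rx_errors rx_dropped tx_errors tx_dropped; do
    printf '%s: %s\n' "$f" "$(cat /sys/class/net/enp35s0f0/statistics/$f)"
done
```

Non-zero and climbing counters on either end would point at loss rather than latency.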