>> >> A while back I was having networking issues. I eventually tried
>> >> drastically lowering the MTU of all the systems onsite and the
>> >> issues disappeared. I always thought the issue was due to the MTU
>> >> on our modem/router. Today I read that AT&T DSL requires a 1492
>> >> MTU, so I increased the MTU of our systems up to 1492 and haven't
>> >> had any issues. Do certain ISPs require you to change the MTU of
>> >> your entire network, or is this likely due to our AT&T
>> >> modem/router itself?
>> >
>> > Are you using tunnels or a firewall that blocks related "icmp
>> > fragmentation needed" packets?
>>
>> I have this in shorewall/rules:
>>
>> ACCEPT all all icmp - - - 10/sec:20
>>
>> Which I believe accepts all icmp packets but throttles them to 10/sec
>> to avoid being flooded. Is that OK?
>
> You should probably add a rule before that one to let all icmp traffic
> related to active connections pass. I think this can be done with
> conntrack or "-m related". I haven't used iptables in a long time, so
> I can't give you exact instructions.
>
> On the other hand, you're probably only interested in limiting icmp
> echo-request. So you may want to stick with that and not limit icmp
> altogether. ICMP is a very important part of a functional IP stack;
> you should not play with it before fully understanding the
> consequences.
>
> I don't think it makes much sense to worry about icmp flooding anyway.
> In case of congestion, icmp is usually dropped first - and limiting it
> on your interfaces doesn't help at all against flooding (your provider
> still delivers the packets to your router through your downstream, so
> it's too late to do limiting at that stage). It just saves you from
> replying to all those packets (and thus saves upstream bandwidth,
> which on standard asymmetric lines is ridiculously small compared to
> the massive downstream).
>
> But again: you really (!) should not limit icmp traffic with such a
> general rule. Instead, limit only specific types of icmp (like
> echo-request), and maybe even block a few other types completely on
> the external interface (redirect, source quench, a few others).
>
> Most other types are important for announcing problems on the packet
> route - like a smaller MTU (path MTU discovery), unreachable
> destinations, etc. Unselectively blocking or limiting them will result
> in a lot of timeouts and intermittent connection freezes which are
> hard to understand.
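
For my own understanding, I believe the raw iptables version of that
advice would look roughly like this (an untested sketch based on the
conntrack and limit match documentation - I'm on shorewall, so I won't
actually run these):

```
# Let through icmp related to existing connections (e.g. frag-needed).
iptables -A INPUT -p icmp -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Rate-limit only echo-request instead of all icmp.
iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 10/sec --limit-burst 20 -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
# Accept the remaining icmp types (frag-needed, unreachable, etc.).
iptables -A INPUT -p icmp -j ACCEPT
```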

I removed the limiting yesterday so that I'm simply allowing icmp packets:

ACCEPT all all icmp

It didn't help with my TCP queueing problem, but I'll leave it as-is
because I'm sure you're right about the problems limiting could cause.
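
If I do want some protection back later, I think the shorewall version
of your advice would be to limit only echo-request - a sketch of the
rules-file syntax, where (if I read the docs right) the DEST PORT
column holds the icmp type, so 8 is echo-request:

```
#ACTION  SOURCE  DEST  PROTO  DPORT  SPORT  ORIGDEST  RATE
ACCEPT   all     all   icmp   8      -      -         10/sec:20
DROP     all     all   icmp   8
ACCEPT   all     all   icmp
```

The DROP in the middle is needed so that echo-requests over the limit
don't just fall through to the catch-all ACCEPT below it.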

>> > Please also try to find out if you're experiencing packet loss. If
>> > fragmented packets cannot be reassembled due to some packets being
>> > lost, you will probably find connections freezing or going really
>> > slow.
>>
>> I will watch the output of ifconfig today to see if there are any RX
>> or TX errors.
>
> I almost expect you won't see any numbers there, but will instead see
> the counter of your limit rule rise during the periods where you see
> the problems. TX and RX errors only catch layer 1 or layer 2 losses;
> you are probably experiencing packet loss at or above layer 3 (and I
> guess due to your limit rule).
>
> Maybe run a ping to a destination you are having problems with, then
> reproduce the problem (with the network otherwise idle). You should
> see ping packets dropped only then.
>
> You can also ping with increasing packet sizes (see the ping man
> page) and see when the packet becomes too big for the path MTU. But
> instead of lowering your MTU then, you should let
> icmp-fragmentation-needed come through reliably. Lowering the MTU
> only makes sense to stop excessive fragmentation in the first place
> and optimize for a specific packet path (like traffic through one or
> multiple VPN tunnels) where fragmentation would otherwise increase
> latency a lot, or where icmp-frag-needed does not work correctly.

I'll try pinging today once the issue pops up.
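
For reference, the 1492 figure matches PPPoE eating 8 bytes of the
1500-byte Ethernet payload - that's my assumption about how our AT&T
DSL line is set up. The sizes I plan to probe with:

```shell
# PPPoE carries IP inside Ethernet with an 8-byte PPPoE/PPP header,
# which is presumably why AT&T DSL wants a 1492 MTU.
ethernet_payload=1500
pppoe_overhead=8
mtu=$((ethernet_payload - pppoe_overhead))
echo "MTU: $mtu"

# Largest ICMP payload that fits without fragmentation:
# subtract the 20-byte IPv4 header and the 8-byte ICMP header.
max_ping_payload=$((mtu - 20 - 8))
echo "max ping payload: $max_ping_payload"

# Then, with "don't fragment" set (Linux iputils ping), this size
# should succeed and anything bigger should fail:
#   ping -M do -s $max_ping_payload <host>
#   ping -M do -s $((max_ping_payload + 1)) <host>
```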

- Grant