Bug 603575 - MTU calculation is wrong and may cause vpn connection to hang
Status: RESOLVED OBSOLETE
Product: NetworkManager
Classification: Platform
Component: VPN: vpnc
Version: 0.7.x
OS: Other Linux
Priority: Normal   Severity: normal
Target Milestone: ---
Assigned To: Dan Williams
QA Contact: NetworkManager maintainer(s)
Depends on:
Blocks: nm-patch
 
 
Reported: 2009-12-02 10:23 UTC by John Haxby
Modified: 2020-11-12 14:31 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
Patch to correctly discover the vpn tun device mtu (4.96 KB, patch)
2009-12-02 10:23 UTC, John Haxby
Updated patch to support IPv6 (5.86 KB, patch)
2010-05-11 13:29 UTC, John Haxby
[patch] patch rebased on current master (5.69 KB, patch)
2014-07-23 20:46 UTC, Thomas Haller
[patch] patch rebased on current master (v2) (5.55 KB, patch)
2016-09-07 12:14 UTC, Thomas Haller

Description John Haxby 2009-12-02 10:23:49 UTC
Created attachment 148889
Patch to correctly discover the vpn tun device mtu

The NetworkManager-vpnc backend uses the environment variable INTERNAL_IP4_MTU (if set) to determine the correct MTU for the tun device that will be created by vpnc.  This is the same environment variable that the default vpnc script uses, and it is one that the vpnc program never actually sets!

This is not normally a problem because, in most cases, the MTU for the device that will route to the VPN gateway is 1500 and the correct MTU for the tun device is then 1412 (1500-88).  However, if 

  ip route get $VPNGATEWAY

reports an MTU less than 1500 (say, 1458, which is often optimal for ADSL, at least around here, or 976, which is typical for ppp over serial lines), then the MTU will be too large.  The upshot is that while ping reports a working connection and, generally speaking, an ssh session will work, anything that produces MTU-sized packets will fail because the packets are too big to fit in a single frame.  The easy way to spot this is that ping works and scp of a largish file doesn't.

The correct way to determine the MTU is as above: extract the MTU from "ip route get $VPNGATEWAY" and subtract 88 from it.   You can do this by hand after the vpn connection has been established with "ifconfig tun0 mtu 1370" (for an original MTU of 1458) or "ifconfig tun0 mtu 888" (for an original MTU of 976) and scp will no longer hang.
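
A rough way to script the by-hand fix (a sketch: it finds the device used to reach the gateway and reads that device's MTU; the tun0 name is an assumption):

  # device that routes to the VPN gateway
  DEV=$(ip route get "$VPNGATEWAY" | sed -n '1s/^.* dev \([^ ]*\).*$/\1/p')
  # its MTU, minus the 88 bytes of tunnel overhead
  MTU=$(cat "/sys/class/net/$DEV/mtu")
  ip link set dev tun0 mtu $((MTU - 88))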

The attached patch does exactly that -- it uses rtnetlink to do "ip route get $VPNGATEWAY" and extract the MTU from that.  It seems to work perfectly for me and I've tested the new addr2mtu() routine extensively outside the nm-service-vpnc-helper code so I'm reasonably confident it's correct!
Comment 1 John Haxby 2009-12-02 10:33:11 UTC
See also bug 584200
Comment 2 Dan Williams 2009-12-04 19:52:18 UTC
Nice patch; though since NM requires libnl to be around (and NM-vpnc requires NM), perhaps we could use libnl to reduce some of the open-coded netlink stuff there?  I can try to do that if you don't get there first :)  NM has a bunch of examples of libnl code, for example here:

(some utility code)
http://cgit.freedesktop.org/NetworkManager/NetworkManager/tree/src/nm-netlink.c

(some actual route-manipulation code; see check_one_route() and flush_routes())
http://cgit.freedesktop.org/NetworkManager/NetworkManager/tree/src/NetworkManagerSystem.c
Comment 3 John Haxby 2009-12-08 18:42:49 UTC
Unless I'm much mistaken, libnl 1.1 (and even libnl 2.0) doesn't provide a way to construct this type of query:

    /* "req" bundles a netlink header (n), a route message (r) and one
       attribute (rta), followed by room for the attribute payload. */
    memset(&req, 0, sizeof(req));
    req.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct rtmsg));
    req.n.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
    req.n.nlmsg_type = RTM_GETROUTE;           /* the "ip route get" query */
    req.r.rtm_family = AF_INET;
    req.r.rtm_dst_len = 32;                    /* a full host address, /32 */
    len = RTA_LENGTH(sizeof(addr));
    req.rta.rta_type = RTA_DST;                /* destination: the gateway */
    req.rta.rta_len = len;
    memcpy(RTA_DATA(&req.rta), &addr, sizeof(addr));
    req.n.nlmsg_len = NLMSG_ALIGN(req.n.nlmsg_len) + RTA_ALIGN(len);
    req.n.nlmsg_seq = time(NULL);              /* arbitrary sequence number */

The critical thing there is that the RTM_GETROUTE request includes an address family (libnl seems to use AF_UNSPEC), an address prefix (32, a complete address) and, last but by no means least, an address (addr).

As I said, unless I'm much mistaken, libnl allows one to get the complete routing table and manipulate it, but there seems to be no way to do "ip route get $VPNGATEWAY".

I'd like to be wrong -- libnl would be a better way of doing this kind of query.
Comment 4 Pierre Ossman 2010-02-03 12:01:35 UTC
Is the overhead value of 88 correct when using NAT-T, though?  An MTU of 1412 does not work here (MTU to the server is 1500).
Comment 5 John Haxby 2010-02-03 12:21:49 UTC
> Is the overhead value of 88 correct when using NAT-T, though?  An MTU of
> 1412 does not work here (MTU to the server is 1500).

Yes, the calculation is correct.  The script in vpnc uses a similar calculation (paraphrased slightly):

  MTU=$(($(ip link show $DEV | grep mtu |
              sed 's/^.* mtu \([[:digit:]]\+\).*$/\1/') - 88))

As you probably know, the reason the MTU is reduced is that the individual IP packets making up the VPN stream must not be fragmented: fragmented packets are simply discarded (I believe), and connections stall when you try to transfer large amounts of data.

You should check whether vpnc works without NetworkManager.  You should also check that small packets (from ping) make it through while large packets (e.g. using scp to copy a file) stall.  If you're seeing the stalling problem caused by a wrong MTU calculation, then chances are packets are being fragmented because your local MTU (1500) differs from the MTU of the link to your ISP (e.g. 1458 for PPPoA or 976 for PPP over serial links).
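
For example (a sketch: "inside-host" stands for a machine reached through the VPN, and the sizes assume a 1412-byte tunnel MTU with 28 bytes of IP and ICMP headers on top of the payload):

  ping -c 3 -s 56 inside-host            # small packets: should work
  ping -c 3 -M do -s 1384 inside-host    # 1384 + 28 = 1412 bytes, DF set:
                                         # fails if the real path MTU is lower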
Comment 6 John Haxby 2010-02-03 12:23:01 UTC
Dan -- what more information do you need?
Comment 7 Pierre Ossman 2010-02-03 13:40:16 UTC
The calculation done by vpnc's own scripts also results in a non-functional MTU, unfortunately.

From what I can gather on the web, Cisco seems to like an MTU value of 1300 rather than "outgoing interface MTU - 88", so I would advocate using that in the name of interoperability.

Both client and server are on fiber connections with an MTU of 1500 to the ISP.
Comment 8 John Haxby 2010-02-03 13:47:18 UTC
> The calculation done by vpnc's own scripts also results in a non-functional
> MTU, unfortunately.

Can you take this up on the vpnc lists and see what they say?

In the meantime, try reducing your MTU to, say, 1388 (which will give you the Cisco-recommended MTU once the calculation is complete).   I suspect that your ISP internally uses a small MTU and that's what is causing you trouble.  If you can establish the maximum MTU you can safely use locally, you may be able to determine what the ISP is up to.  (You may be able to ask the ISP; I know BT has -- or had -- specific recommendations for the MTU.)
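
For example (a sketch; the device name is an assumption, use whichever interface routes to the gateway), so that 1388 - 88 gives the Cisco-recommended 1300:

  ip link set dev eth0 mtu 1388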
Comment 9 Pierre Ossman 2010-05-10 12:40:05 UTC
(In reply to comment #8)
> Can you take this up on the vpnc lists and see what they say?
> 

I've sent a mail.  We'll see what they have to say.  Still, I'd say the current hard-coded value should be lowered until a proper fix is in place.  Better to take a performance hit than to leave some users completely without functionality.

> In the meantime, try reducing your MTU to, say, 1388 (which will give you
> the Cisco-recommended MTU once the calculation is complete).   I suspect
> that your ISP internally uses a small MTU and that's what is causing you
> trouble.  If you can establish the maximum MTU you can safely use locally,
> you may be able to determine what the ISP is up to.  (You may be able to
> ask the ISP; I know BT has -- or had -- specific recommendations for the MTU.)

BT must be one of the last few holdouts, then, because an MTU of 1500 is more or less guaranteed these days.  Besides, it's trivial to check your MTU anyway:

$ tracepath ping.sunet.se
 1:  192.168.128.27 (192.168.128.27)                        0.270ms pmtu 1500
 1:  192.168.128.1 (192.168.128.1)                          1.907ms 
 1:  192.168.128.1 (192.168.128.1)                          1.914ms 
 2:  swipnet-gw.cendio.se (193.12.253.65)                   2.976ms 
 3:  lin-ncore-2.gigabiteth1-1s3.swip.net (130.244.206.2)   2.834ms 
 4:  avk-core-1.pos3-2.swip.net (130.244.205.29)            6.026ms 
 5:  netnod-ix-ge-b-sth.sunet.se (194.68.128.19)            6.411ms asymm  6 
 6:  x3tug-xe-2-3-0.sunet.se (130.242.82.134)              13.247ms asymm  9 
 7:  ping.sunet.se (192.36.125.18)                          6.508ms reached
     Resume: pmtu 1500 hops 7 back 247
Comment 10 David Woodhouse 2010-05-11 08:22:12 UTC
(In reply to comment #2)
> Nice patch; 

Pfft. Lacking IPv6 support -- please don't add more Legacy-IP-only code to NetworkManager.
Comment 11 John Haxby 2010-05-11 08:52:05 UTC
> Pfft. Lacking IPv6 support -- please don't add more Legacy-IP-only code to
> NetworkManager.

That's a good point; I'll re-work the patch accordingly.  I wasn't aware that vpnc supported IPv6 (it seems it does).
Comment 12 John Haxby 2010-05-11 13:29:38 UTC
Created attachment 160819
Updated patch to support IPv6

This updated patch calculates the MTU even if $VPNGATEWAY is an IPv6 address.  Without an IPv6 vpn gateway to test against, I can't check that it works completely, but the mtu-calculation code works in isolation for both IPv4 and IPv6, and the helper works for my IPv4 vpn gateway.

I'd be most grateful if someone could merge this.
Comment 13 John Haxby 2010-05-11 13:47:40 UTC
(In reply to comment #9)
> (In reply to comment #8)
> > Can you take this up on the vpnc lists and see what they say?
> > 
> 
> I've sent a mail.  We'll see what they have to say.  Still, I'd say the
> current hard-coded value should be lowered until a proper fix is in place.
> Better to take a performance hit than to leave some users completely
> without functionality.
> 

Did you ever get a reply from the vpnc lists?   Unless I've missed something, the scripts are still subtracting 88 from the MTU, which is right according to the protocol definition.

I notice that the vpnc scripts have (or had) a bug raised against them because they were using the default route to calculate the MTU, but I don't see them changing from $mtu-88, and that does seem to work for everyone except you :-)
Comment 14 Pierre Ossman 2010-05-11 15:28:57 UTC
(In reply to comment #13)
> 
> Did you ever get a reply from the vpnc lists?   Unless I've missed
> something, the scripts are still subtracting 88 from the MTU, which is
> right according to the protocol definition.

Nope.

What's the official protocol definition here?

> 
> I notice that the vpnc scripts have (or had) a bug raised against them
> because they were using the default route to calculate the MTU, but I don't
> see them changing from $mtu-88, and that does seem to work for everyone
> except you :-)

Are you implying that no one cares about me? ;)

Still, I'm not alone.  For example, here's a bug report against Fedora:

https://bugzilla.redhat.com/show_bug.cgi?id=486113
Comment 15 John Haxby 2010-05-11 16:24:18 UTC
I can't remember the RFC, but the protocol being carried is ESP.  There's some negotiation that vpnc does with the vpn gateway to handle authentication and key exchange, but the traffic itself is carried as ESP, which is where the 88-byte overhead comes from.

What I think is happening to you, and may be happening to others, is that something along the path is fragmenting IP packets and hiding the true maximum MTU.  This may be why setting the MTU to something small works for a fair number of people.

I'd advocate calculating the MTU, as that works for most people (on the grounds that if it didn't, vpnc would have changed its scripts), and giving NetworkManager-vpnc a means of reducing the MTU in the GUI for those who get stuck.
Comment 16 Pierre Ossman 2010-08-11 11:22:06 UTC
ESP is not used in this case as NAT-T is involved, and that might be why things are getting fouled up.
Comment 17 Dan Williams 2010-08-11 18:03:07 UTC
The NM-vpnc plugin is pretty naive about MTU calculation because vpnc itself doesn't report what MTU should be used.  So it's all up to NM-vpnc to figure out what that should be.

I'd be happy to take a patch that adds an "mtu-overhead" key to the VPN dict that the plugin can set based on the options of the connection (i.e., if ESP, set it to 88; if Cisco-UDP, set it to 12 or whatever it's supposed to be), and then in NM we can set the MTU of the tunnel to (MTU of underlying interface - mtu-overhead).  Shouldn't be too hard, actually.

Note that you can already set the MTU of the underlying device; it's just that that value doesn't affect the MTU of the tunnel yet.
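
Sketched in shell terms (hypothetical: no "mtu-overhead" key exists yet, and $ENCAPSULATION and the device names are placeholders for illustration):

  case "$ENCAPSULATION" in
      esp)       OVERHEAD=88 ;;   # native ESP
      cisco-udp) OVERHEAD=12 ;;   # the Cisco-UDP figure suggested above
  esac
  # tunnel MTU = MTU of the underlying interface minus the overhead
  ip link set dev tun0 mtu $(( $(cat /sys/class/net/eth0/mtu) - OVERHEAD ))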
Comment 18 Tobias Mueller 2010-10-01 10:11:09 UTC
Closing this bug report as no further information has been provided. Please feel free to reopen this bug if you can provide the information asked for.
Thanks!
Comment 19 Pierre Ossman 2010-10-01 10:37:18 UTC
What information?  There have been suggestions to a) add a GUI for manually overriding the MTU, and b) investigate the overhead under different configurations and make NM take it into account.  Both are development issues, and I can't see any requests for more information needed to solve them.
Comment 20 John Haxby 2010-10-01 10:44:44 UTC
In comment #2, when Dan Williams set NEEDINFO, I asked "what information?"  I never received a reply to that.

Re-opening.  Dan, if you need more information from me, you need to tell me what information you need.
Comment 21 Thomas Haller 2014-07-23 20:46:53 UTC
Created attachment 281507
[patch] patch rebased on current master

I rebased the patch on current master (it applies on top of commit 4cb88cc6f990c8277b23712ff5d20226984016b0).

vpnc still does not set the INTERNAL_IP4_MTU environment variable, so the current MTU code does not work (and that is unlikely to change, as vpnc development is not very active).

I changed the patch to first try reading INTERNAL_IP4_MTU, so you can still override the detection by setting the environment variable (see the sketch at the end of this comment).

Also, the code (still) does not use libnl.  But since the current approach seems to work (and nobody has rewritten it so far), I think that is acceptable.

Also, this patch adds detection of the MTU, which is not the same as comment 17, where a configuration option is requested instead.  Detection should be performed in the absence of a configuration option.
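
The precedence, rendered as a shell sketch (detect_mtu_from_route is a hypothetical stand-in for the patch's rtnetlink lookup):

  if [ -n "$INTERNAL_IP4_MTU" ]; then
      MTU=$INTERNAL_IP4_MTU                       # explicit value wins
  else
      MTU=$(detect_mtu_from_route "$VPNGATEWAY")  # fall back to detection
  fi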
Comment 22 Thomas Haller 2016-09-07 12:14:59 UTC
Created attachment 334980
[patch] patch rebased on current master (v2)

rebased again to current master (8fe9cff381493e79b71a7ff5c9d23be87592daf5)
Comment 23 Thomas Haller 2016-09-07 12:15:33 UTC
(In reply to Thomas Haller from comment #22)
> rebased again to current master (8fe9cff381493e79b71a7ff5c9d23be87592daf5)

should be: b2da16dd568f17dbc25c0de5ff39ecb9b8451f04
Comment 24 John Haxby 2016-09-07 12:42:10 UTC
Looks good to me.
Comment 25 Beniamino Galvani 2016-09-09 07:23:46 UTC
The vpnc-scripts have since changed, and the MTU is now obtained as
the MTU of the device associated with the route to the gateway:

http://git.infradead.org/users/dwmw2/vpnc-scripts.git/blob/HEAD:/vpnc-script#l160

This is different from what the patch does (obtaining the MTU of the
route), since the route MTU can be undefined if it wasn't specifically
added by the administrator, while the device MTU is always set.

Perhaps the patch should fall back to the device MTU if the
route MTU is not present?
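
Something along these lines, perhaps (a sketch, assuming $VPNGATEWAY is set):

  ROUTE=$(ip route get "$VPNGATEWAY")
  # prefer an explicit route MTU, if the route carries one
  MTU=$(echo "$ROUTE" | sed -n 's/^.* mtu \([0-9]*\).*$/\1/p' | head -n 1)
  if [ -z "$MTU" ]; then
      # otherwise fall back to the MTU of the device the route uses
      DEV=$(echo "$ROUTE" | sed -n '1s/^.* dev \([^ ]*\).*$/\1/p')
      MTU=$(cat "/sys/class/net/$DEV/mtu")
  fi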
Comment 26 John Haxby 2016-09-09 08:08:49 UTC
I'm pretty sure that's not the way it works.

I think VPNGATEWAY is the far end of the vpn tunnel, so getting the MTU for that route will give the correct value.

Originally I said that you could get the MTU from "ip route get $VPNGATEWAY", but of course that doesn't actually show the MTU, it just tells you which device will be used.   You need to follow that up with "ip link show $DEV", which is exactly what the vpnc scripts do.

In C, you don't need to do that because the information is there; it's just that ip route get doesn't show it.
Comment 27 Beniamino Galvani 2016-09-09 11:57:31 UTC
(In reply to John Haxby from comment #26)
> I'm pretty sure that's not the way it works.
> 
> I think VPNGATEWAY is the far end of the vpn tunnel, so getting the MTU for
> that route will give the correct value.
> 
> Originally I said that you could get the MTU from "ip route get $VPNGATEWAY",
> but of course that doesn't actually show the MTU, it just tells you which
> device will be used.   You need to follow that up with "ip link show $DEV",
> which is exactly what the vpnc scripts do.
> 
> In C, you don't need to do that because the information is there; it's just
> that ip route get doesn't show it.

Hmm, in my tests with the patch applied I don't see the METRICS attribute in the netlink response received by the plugin, so the MTU is always set to 1412 (even when the physical interface has an MTU below 1500).
Comment 28 André Klapper 2020-11-12 14:31:20 UTC
bugzilla.gnome.org is being shut down in favor of a GitLab instance. 
We are closing all old bug reports and feature requests in GNOME Bugzilla which have not seen updates for a long time.

If you still use NetworkManager and if you still see this bug / want this feature in a recent and supported version of NetworkManager, then please feel free to report it at https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues/

Thank you for creating this report and we are sorry it could not be implemented (workforce and time are unfortunately limited).