GNOME Bugzilla – Bug 758772
NetworkManager openVPN connection causes DNS leak
Last modified: 2016-09-05 08:03:08 UTC
I am using Ubuntu 15.10 with - network-manager 1.0.4-0ubuntu5.1 - network-manager-openvpn 0.9.10.0-1ubuntu2 and I configured my ethernet connection to automatically bring up my OpenVPN connection when connecting. If I go to dnsleaktest.com, I find that NetworkManager leaks the DNS servers of my ISP and does not use the DNS servers of the VPN. However, if I manually close and reopen the VPN connection from the NetworkManager applet, the DNS leak no longer occurs. So the bug occurs only when the VPN connection is set up automatically at login or when resuming from sleep: the DNS servers are not updated to those of the VPN and stay the ones previously defined.
No, I was wrong to assume it's because of the automatic connection. Once the connection is up, DNS can be fine but then leaks 5 minutes later without the connection ever changing!
To make things clear: the connection uses the LAN DHCP DNS servers instead of the DNS servers of the VPN tunnel set up by NetworkManager, but the behavior seems erratic.
From the NetworkManager log, it's clear that the previous DNS settings (from before the VPN connection was established) are retained by dnsmasq and not removed. So you end up with 3 DNS server entries defined in dnsmasq after the VPN connection is set up, one of which is still the LAN/ISP DNS server that causes the random DNS leak. That DNS entry should have been deleted. I don't know whether it's an upstream bug or an Ubuntu one. I have opened https://bugs.launchpad.net/ubuntu/+source/network-manager-openvpn/+bug/1520771, which shows a workaround for the bug.
I have observed this problem as well. Related: https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1211110
I've filed a bug in Red Hat's Bugzilla, as I was unable to find this one at first: https://bugzilla.redhat.com/show_bug.cgi?id=1322932 I also experience this problem, and it is not limited to NetworkManager configurations that use dnsmasq; the bug also appears when NetworkManager manages /etc/resolv.conf directly. Additional info: I tested denying access to the 1st and 2nd name servers on the machine running the OpenVPN server, and the result was that the local name server was used, despite the checkbox 'Use this connection only for resources on its network' being unchecked. This leads to a DNS leak if the first two name servers cannot be reached.
This DNS issue has some similarities with route management, where users want that, once the VPN connects, no traffic is routed via other devices (bug 749376). NetworkManager currently has no concept that activating a certain connection "vpn0" should prevent DNS/routing via another connection "eth0". For NM there are just two active connections, both with DNS/routing configured (with differing priorities), and such is the outcome; their existence/priorities don't affect each other. As a current workaround, you have to ensure that when activating "vpn0" the untrusted connection "eth0-no-dns" doesn't itself provide the conflicting DNS/routes. That would, for example, mean you cannot specify the VPN gateway via DNS (because "eth0-no-dns" could not resolve it). Anyway, this is certainly a valuable feature.
This could be fixed, for example, by adding a checkbox "Don't use local DNS when connected to this VPN". As far as I know, resolving via the local DNS could be done before establishing the VPN connection; after that, once we have the IP address of the VPN server, the local DNS could safely be replaced for the duration of the VPN session. When using multiple wireless networks, it wouldn't be convenient to disable the local DNS servers for every new Wi-Fi network. If there is an option to configure the VPN so that, no matter which connection is used, it replaces the local DNS, this will be easy for the user and more reliable than configuring every single connection in advance.
*** Bug 765235 has been marked as a duplicate of this bug. ***
What do you guys think about this: we could add an ipv[4,6].dns-priority property and use it to sort the IP configurations in the DNS manager, so that the ones with a lower value appear first in resolv.conf. The default value would be 0, which means "use the default value depending on connection type", corresponding to:

50 for VPNs
100 for connections with the default route
200 for other connections

to keep the old behavior. The special value -1 would mean "use exclusively this (and others with the same value)". With this we would address both problems: (a) DNS leaks, and (b) allowing users to specify an ordering between name servers when multiple connections are active.
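To illustrate the proposal, here is a minimal standalone sketch of the intended semantics (this is not NetworkManager code; the struct and helper names are invented for the example): an explicit dns-priority of 0 falls back to a type-dependent default, and configurations are then sorted so that lower values come first in resolv.conf.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the real NM objects, illustration only. */
typedef enum { CONFIG_TYPE_VPN, CONFIG_TYPE_DEFAULT_ROUTE, CONFIG_TYPE_OTHER } ConfigType;

typedef struct {
    const char *iface;
    ConfigType  type;
    int         dns_priority;   /* proposed ipv[4,6].dns-priority; 0 = default */
} IPConfig;

/* 0 means "use the default value depending on connection type". */
static int
effective_priority (const IPConfig *c)
{
    if (c->dns_priority != 0)
        return c->dns_priority;
    switch (c->type) {
    case CONFIG_TYPE_VPN:           return 50;
    case CONFIG_TYPE_DEFAULT_ROUTE: return 100;
    default:                        return 200;
    }
}

/* Lower effective priority sorts first in resolv.conf. */
static int
compare_configs (const void *pa, const void *pb)
{
    return effective_priority (pa) - effective_priority (pb);
}

int
main (void)
{
    IPConfig configs[] = {
        { "eth0", CONFIG_TYPE_DEFAULT_ROUTE, 0 },
        { "tun0", CONFIG_TYPE_VPN,           0 },
        { "eth1", CONFIG_TYPE_OTHER,         0 },
    };
    size_t i, n = sizeof (configs) / sizeof (configs[0]);

    qsort (configs, n, sizeof (configs[0]), compare_configs);
    for (i = 0; i < n; i++)
        printf ("%s (priority %d)\n", configs[i].iface, effective_priority (&configs[i]));
    return 0;
}

With all priorities left at 0, this prints tun0 (50), eth0 (100), eth1 (200), i.e. the VPN's name servers come first.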
(In reply to Beniamino Galvani from comment #9)
> What do you guys think about this: we could add an
> ipv[4,6].dns-priority property and use it to sort the IP
> configurations in the DNS manager, so that the ones with a lower value
> appear first in resolv.conf. The default value would be 0, which
> means "use the default value depending on connection type",
> corresponding to:
>
> 50 for VPNs
> 100 for connections with the default route
> 200 for other connections
>
> to keep the old behavior. The special value -1 would mean "use
> exclusively this (and others with the same value)".

This sounds good. But we could allow ~all~ negative values to mean: those connections require exclusive DNS (which would disable all settings that have a value >= 0). Then you could also have differing priorities among exclusive connections. For example: VPN_1a and VPN_1b could be -1, and VPN2 could be -2. If you connect either VPN_1a or VPN_1b (but not VPN2), then only those get DNS configuration. Once you connect VPN2, VPN_1x is disabled as well.

> With this we would address both problems: (a) DNS leaks, and (b)
> allowing users to specify an ordering between name servers when
> multiple connections are active.
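A minimal sketch of these proposed semantics for negative values (again invented names, not NM code): if any active configuration has a negative priority, only the configurations sharing the lowest (most negative) value contribute DNS servers, and everything else is excluded.

#include <stdbool.h>
#include <stddef.h>

/* Returns the most negative priority present, or 0 if there is none. */
static int
lowest_negative_priority (const int *priorities, size_t n)
{
    int min = 0;
    size_t i;

    for (i = 0; i < n; i++) {
        if (priorities[i] < min)
            min = priorities[i];
    }
    return min;
}

/* Whether the configuration with @priority contributes DNS servers. */
static bool
config_contributes_dns (int priority, const int *priorities, size_t n)
{
    int min = lowest_negative_priority (priorities, n);

    if (min < 0)
        return priority == min;  /* exclusive mode: only the winners */
    return true;                 /* no exclusive config: all contribute */
}

With { VPN_1a = -1, VPN_1b = -1, eth0 = 100 } only the two -1 configurations contribute; adding VPN2 = -2 excludes VPN_1a and VPN_1b as well.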
I like the dns-priority idea a lot. Moreover, with Thomas's extension added, I cannot imagine any DNS configuration scenario left uncovered.
(In reply to Thomas Haller from comment #10)
> (In reply to Beniamino Galvani from comment #9)
> > What do you guys think about this: we could add an
> > ipv[4,6].dns-priority property and use it to sort the IP
> > configurations in the DNS manager, so that the ones with a lower value
> > appear first in resolv.conf. The default value would be 0, which
> > means "use the default value depending on connection type",
> > corresponding to:
> >
> > 50 for VPNs
> > 100 for connections with the default route

This is a bit strange. In this case the default value depends on a quite peculiar runtime condition (i.e. whether the connection has the default route, and it isn't even clear whether that means IPv4 or IPv6). Usually our default fallback values are based either on some external/global (static) configuration, on some very generic (usually static) device properties, or on the (static) connection itself -- yes, there is some runtime element there too, but it's usually more clearly defined.

> > 200 for other connections

I would choose 50 for VPNs and 100 for everything else. In the face of identical priorities, policy would do what it does now: sort somehow -- and prefer the best device.
(In reply to Thomas Haller from comment #10)
> This sounds good. But we could allow ~all~ negative values to mean: those
> connections require exclusive DNS (which would disable all settings that
> have a value >= 0).
> Then you could also have differing priorities among exclusive connections.

(In reply to Thomas Haller from comment #12)
> I would choose 50 for VPNs and 100 for everything else.
>
> In the face of identical priorities, policy would do what it does now: sort
> somehow -- and prefer the best device.

These proposals sound fine to me. A caveat of this dns-priority approach is that when using dnsmasq the prioritization would not work, because dnsmasq forwards queries to all name servers at the same time. There is a 'strict-order' option, but it seems broken (according to dnsmasq's author himself), since it basically means "if the first server times out, return an error to the client and then switch to the next server for future queries". The exclusive mode, on the other hand, would also work with dnsmasq. There is a related request (bug 746422) to set the split mode for VPN DNS servers according to the never-default setting; that is independent of prioritization, but should probably be handled together with it.
Pushed branch bg/dns-priority-bgo758772, please review.
>> dns: use a single list to store configurations

I generally dislike the use of g_object_get_data() and GSList; often when they are used, they'd better not be. For nm_dns_plugin_update() we construct 3 separate GSList arguments (vpn_configs, dev_configs, other_configs). On the one hand this involves additional cloning of the lists. But more importantly, the information passed there is not even sufficient, so we hack around it by passing the interface name via NM_DNS_IP_CONFIG_DATA_TAG. After that, the required information is there (for now), but we have lost information which might be necessary in the future (like the sorting order).

How about replacing GSList *configs with GArray *configs, where @configs contains a list of NMDnsIPConfigData? NMDnsIPConfigData of course gets extended to also contain the pointer to the NMIPxConfig object. Then, drop build_plugin_config_lists() and pass priv->configs->data directly to:

void nm_dns_plugin_update (NMDnsPlugin *plugin,
                           const NMDnsIPConfigData *dns_config_data,
                           const NMGlobalDnsConfig *global_config,
                           const char *hostname);
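For illustration, here is a rough sketch of what the extended per-config entry could look like (the forward declaration stands in for the real NM types, and any field not named in the review above is an assumption):

/* Sketch only, not the actual NetworkManager definition. */
typedef struct _NMIPConfig NMIPConfig;    /* NMIP4Config or NMIP6Config */

typedef enum {
    NM_DNS_IP_CONFIG_TYPE_VPN,
    NM_DNS_IP_CONFIG_TYPE_BEST_DEVICE,
    NM_DNS_IP_CONFIG_TYPE_DEFAULT,
} NMDnsIPConfigType;

typedef struct {
    NMIPConfig        *config;  /* back-pointer to the IP configuration, so
                                 * nothing is lost when passing one array */
    char              *iface;   /* previously smuggled in via
                                 * g_object_set_data() with
                                 * NM_DNS_IP_CONFIG_DATA_TAG */
    NMDnsIPConfigType  type;    /* replaces the three separate GSLists */
} NMDnsIPConfigData;

With the type (and later the priority) stored per entry, the plugin receives one ordered array instead of three lists plus out-of-band object data.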
The default value for dns-priority cannot be configured as a global default value. I think it should be, and that is the place where the fallback value of 50 or 100 should be chosen.
(In reply to Thomas Haller from comment #15)
> >> dns: use a single list to store configurations

Fixed the points above and repushed.
>> dns: use a single array for all configurations

I like it.

_LOGE ("merge config: [ %-7s v%c %s ]",
^^^

>> libnm-core: add dns-priority to NMSettingIPConfig

NMSettingIPConfig:dns-priority:

+ g_param_spec_int (NM_SETTING_IP_CONFIG_DNS_PRIORITY, "", "",
+                   G_MININT, G_MAXINT, 0,

How about making the valid range int32? Yes, on most (all?) platforms gint is equal to gint32, but let's be explicit. Also, on D-Bus we have it as "i" (32 bit). nm_setting_ip_config_get_dns_priority() can stay at gint (as glib already requires sizeof(int) >= 4). As you prefer... The same applies to the next commit:

priority = svGetValueInt64 (ifcfg, "IPV6_DNS_PRIORITY", 10, G_MININT32, G_MAXINT32, 0);

and here:

+ g_param_spec_int (NM_IP4_CONFIG_DNS_PRIORITY, "", "",
+                   G_MININT, G_MAXINT, 0,

>> dns: use DNS priority from IP configuration

dispose() must be re-entrant:

if (priv->configs) {
    for (i = 0; i < priv->configs->len; i++) {
        ...
    }
    g_ptr_array_free (priv->configs, TRUE);
    priv->configs = NULL;
}

rest lgtm
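For context, a minimal self-contained sketch of a re-entrant dispose() (assuming GLib/GObject; the ExampleObject type is a placeholder, not a NetworkManager type): testing the pointer, freeing, and setting it to NULL means a second dispose() invocation is a harmless no-op instead of a double free.

#include <glib-object.h>

#define EXAMPLE_TYPE_OBJECT (example_object_get_type ())
G_DECLARE_FINAL_TYPE (ExampleObject, example_object, EXAMPLE, OBJECT, GObject)

struct _ExampleObject {
    GObject    parent_instance;
    GPtrArray *configs;
};

G_DEFINE_TYPE (ExampleObject, example_object, G_TYPE_OBJECT)

static void
dispose (GObject *object)
{
    ExampleObject *self = EXAMPLE_OBJECT (object);

    /* dispose() may run more than once: check, free, and clear the
     * pointer so a repeated call skips this block. */
    if (self->configs) {
        g_ptr_array_free (self->configs, TRUE);
        self->configs = NULL;
    }

    G_OBJECT_CLASS (example_object_parent_class)->dispose (object);
}

static void
example_object_class_init (ExampleObjectClass *klass)
{
    G_OBJECT_CLASS (klass)->dispose = dispose;
}

static void
example_object_init (ExampleObject *self)
{
    self->configs = g_ptr_array_new ();
}

The same pattern can be written in one line with g_clear_pointer (&self->configs, (GDestroyNotify) g_ptr_array_unref).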
(In reply to Thomas Haller from comment #18)
> >> dns: use a single array for all configurations
>
> _LOGE ("merge config: [ %-7s v%c %s ]",

Fixed.

> How about making the valid range int32? Yes, on most (all?) platforms
> gint is equal to gint32, but let's be explicit.
> Also, on D-Bus we have it as "i" (32 bit).

Makes sense, changed.

> dispose() must be re-entrant:

Fixed this too, thanks. Also added a new commit "dns: don't use the global configuration to compute initial hash" and repushed.
>> dns: use a single array for all configurations

_LOGE ("merge config: [ %-7s v%c %s ]",

still with level <error> :)

rest lgtm
Looks all fine to me.
Merged to master: https://cgit.freedesktop.org/NetworkManager/NetworkManager/commit/?id=8da3e658f7313f56928d22cfe13f9ab78cc1dd3c
According to

git tag --contains 8da3e658f7313f56928d22cfe13f9ab78cc1dd3c

and

git branch --contains 8da3e658f7313f56928d22cfe13f9ab78cc1dd3c
* master

this is still not merged. Looking at the comments, will this be included in 1.4?
(In reply to Mincho Gaydarov from comment #23)
> According to
>
> git tag --contains 8da3e658f7313f56928d22cfe13f9ab78cc1dd3c
> and
> git branch --contains 8da3e658f7313f56928d22cfe13f9ab78cc1dd3c
> * master
>
> this is still not merged.
>
> Looking at the comments, will this be included in 1.4?

It is merged to "master", as you can see in `git branch --contains`. There is no tag because there has been no upstream release since then; 1.4.0 will be the next upstream release containing this feature.
*** Bug 738647 has been marked as a duplicate of this bug. ***