Bug 712392 - Delay server availability checks on network change
Status: RESOLVED FIXED
Product: evolution-data-server
Classification: Platform
Component: Calendar
Version: 3.12.x (obsolete)
OS: Other Linux
Importance: Normal major
Target Milestone: ---
Assigned To: evolution-calendar-maintainers
QA Contact: Evolution QA team
Depends on:
Blocks:
Reported: 2013-11-15 19:48 UTC by Milan Crha
Modified: 2014-10-24 12:20 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description Milan Crha 2013-11-15 19:48:45 UTC
Moving this from a downstream bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1030579

Description of problem:

After resuming from suspend, evolution-data-server creates hundreds of google.com DNS queries. See the attached wireshark screenshot. The only way I found to stop these queries is to "killall evolution-calendar-factory". If I don't, my dsl router crashes after 20-30 seconds, so it's very annoying.

Note that I don't have google set as an online account (anymore). And I don't have evolution installed. I'm just a gnome shell user.

Version-Release number of selected component (if applicable):
evolution-data-server-3.10.1-1.fc20.x86_64

How reproducible:
It happens after I resume from suspend, but not every time. After the killall, the problem disappears.

Actual results:
Flood of DNS queries making my dsl router crash

Expected results:
One or zero queries...
Comment 1 Milan Crha 2013-11-20 11:56:47 UTC
Here's a snapshot (backtrace) of a calendar factory when it gets to the flood state.

[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f7d6ebdba8d in poll () at ../sysdeps/unix/syscall-template.S:81
81	T_PSEUDO (SYSCALL_SYMBOL, SYSCALL_NAME, SYSCALL_NARGS)
Traceback (most recent call last):
  File "/usr/share/gdb/auto-load/usr/lib64/libgobject-2.0.so.0.3800.1-gdb.py", line 9, in <module>
    from gobject import register
  File "/usr/share/glib-2.0/gdb/gobject.py", line 3, in <module>
    import gdb.backtrace
ImportError: No module named backtrace

Thread 1 (Thread 0x7f7d744db840 (LWP 1833))

  • #0 poll
    at ../sysdeps/unix/syscall-template.S line 81
  • #1 g_main_context_poll
    at gmain.c line 4006
  • #2 g_main_context_iterate
    at gmain.c line 3707
  • #3 g_main_loop_run
    at gmain.c line 3906
  • #4 dbus_server_run_server
    at e-dbus-server.c line 222
  • #5 ffi_call_unix64
    from /lib64/libffi.so.6
  • #6 ffi_call
    from /lib64/libffi.so.6
  • #7 g_cclosure_marshal_generic_va
    at gclosure.c line 1550
  • #8 _g_closure_invoke_va
    at gclosure.c line 840
  • #9 g_signal_emit_valist
    at gsignal.c line 3238
  • #10 g_signal_emit
    at gsignal.c line 3386
  • #11 e_dbus_server_run
    at e-dbus-server.c line 411
  • #12 main
    at evolution-calendar-factory.c line 140

Comment 2 fback 2014-01-28 12:51:34 UTC
(In reply to comment #1)
> Here's a snapshot (backtrace) of a calendar factory when it gets to the flood
> state.
> 
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> 0x00007f7d6ebdba8d in poll () at ../sysdeps/unix/syscall-template.S:81
> 81    T_PSEUDO (SYSCALL_SYMBOL, SYSCALL_NAME, SYSCALL_NARGS)
> Traceback (most recent call last):
>   File "/usr/share/gdb/auto-load/usr/lib64/libgobject-2.0.so.0.3800.1-gdb.py",
> line 9, in <module>
>     from gobject import register
>   File "/usr/share/glib-2.0/gdb/gobject.py", line 3, in <module>
>     import gdb.backtrace
> ImportError: No module named backtrace
> 
> 

I observe the same behavior on debian/sid with gnome 3.8, but it keeps flooding DNS with queries for the exchange server. The Exchange account is created with gnome-online-accounts, and it's the only one existing in the system. Each query receives a successful answer. No action is required for this to happen.

Feel free to contact me, I'm willing to help with debugging the issue.
Comment 3 Milan Crha 2014-01-29 20:02:09 UTC
Thanks for the update. The thing is to figure out which process does the flood, and why. When you say exchange, do you connect to it with evolution-ews? If so, then please run these processes as shown below, in this order, each in a separate console, with the evolution and evolution-alarm-notify processes closed:

   $ EWS_DEBUG=2 /usr/libexec/evolution-addressbook-factory &>eaf.txt
   $ EWS_DEBUG=2 /usr/libexec/evolution-source-registry &>esr.txt
   $ EWS_DEBUG=2 /usr/libexec/evolution-calendar-factory &>ecf.txt
   $ EWS_DEBUG=2 evolution &>evo.txt

If you get the flood, kill all the processes and see which one is doing it (its log should contain some repeating libsoup requests towards the server, with a reply which may eventually explain what is failing and why the request is being repeated).
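A quick way to spot the offender is to count outgoing requests in each log; a rough sketch, assuming the libsoup debug format, which prefixes outgoing request lines with '> ':

   $ grep -cE '^> (GET|POST|PROPFIND|REPORT)' eaf.txt esr.txt ecf.txt evo.txt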

If this is for Google/CalDAV, the commands are the same; only write CALDAV_DEBUG=all instead of EWS_DEBUG=2.
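Spelled out, the CalDAV variant would be:

   $ CALDAV_DEBUG=all /usr/libexec/evolution-addressbook-factory &>eaf.txt
   $ CALDAV_DEBUG=all /usr/libexec/evolution-source-registry &>esr.txt
   $ CALDAV_DEBUG=all /usr/libexec/evolution-calendar-factory &>ecf.txt
   $ CALDAV_DEBUG=all evolution &>evo.txt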

Beware, the logs may contain private information, like server addresses and email addresses (probably not passwords), and definitely email/event/contact content from the server, if any of the processes is able to connect to the server and get data from it; the raw data are printed into the logs.
Comment 4 fback 2014-02-06 11:21:19 UTC
(In reply to comment #3)
> Thanks for the update. The thing is to figure out which process does the flood,
> and why. When you say exchange, do you connect to it with evolution-ews? 

Yes, with evolution-ews plugin.


If so,
> then please run these processes like is shown below in this order, each in a
> separate console, with evolution and evolution-alarm-notify processes closed:
> 
>    $ EWS_DEBUG=2 /usr/libexec/evolution-addressbook-factory &>eaf.txt

This one exits very quickly, with nothing interesting in the log. It spawned evolution-source-registry, so I'm not sure which of them sent the queries for autodiscover and the exchange A and AAAA records.

Killed ESR before proceeding.

>    $ EWS_DEBUG=2 /usr/libexec/evolution-source-registry &>esr.txt

This one keeps running after it is started.

>    $ EWS_DEBUG=2 /usr/libexec/evolution-calendar-factory &>ecf.txt

This one exits after some time, leaving nothing interesting in the log.


Following this produces some repeated queries for the exchange server in a short time, but it seems they stop after things settle down. There are queries that could possibly be avoided, but I'm far from calling them flooding; just single A and AAAA queries about 2-5 minutes apart.

I had to run those tests under xfce. Gnome respawns those services again and again right after they are killed -- maybe that's the cause? Any ideas how to explore/verify this?
Comment 5 Milan Crha 2014-02-07 09:16:11 UTC
(In reply to comment #4)
> >    $ EWS_DEBUG=2 /usr/libexec/evolution-addressbook-factory &>eaf.txt
> 
> This ends very shortly, nothing interesting in the log.

Oh, I'm sorry, I forgot to add a parameter for both factories, '-w', to leave them waiting for client connections.
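For reference, the corrected invocations would then be:

   $ EWS_DEBUG=2 /usr/libexec/evolution-addressbook-factory -w &>eaf.txt
   $ EWS_DEBUG=2 /usr/libexec/evolution-calendar-factory -w &>ecf.txt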

> I had to run those tests under xfce. Gnome respawns those services again and
> again right after they are killed -- maybe that's the cause? Any ideas how to
> explore/verify this?

Right, gnome-shell's calendar "makes sure" the evolution-calendar-factory will be kept running, and when it's closed/killed/crashed/whatever, it respawns it automatically. It's possible that this respawn causes the flood, especially if the calendar crashes during the update of the local copy of one of your remote calendars (like Exchange) and it cannot get past this crash, thus it cycles through:
   a) gnome-shell's calendar server runs evolution-calendar-factory
   b) gnome-shell's calendar opens remote (EWS) calendar(s)
   c) evolution-calendar-factory runs update of the remote calendar
   d) the update causes a crash of the evolution-calendar-factory
   e) gnome-shell's calendar server notices the corresponding D-Bus service
      is gone and goes to step a)

Whether the factory really crashed or not I cannot tell; I would expect that some crash-catcher/ABRT/bug-buddy/... would catch the crash and tell you about it.

The ~/.cache/evolution/calendar/<some-weird-uid>/ directory contains local copies of your remote calendars; if you delete it, the whole content will be downloaded from the server again, maybe causing the crash when it reaches the offending event, if any.
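As a sketch (the uid directory name differs per calendar; double-check before deleting anything):

   $ ls ~/.cache/evolution/calendar/
   $ rm -rf ~/.cache/evolution/calendar/<some-weird-uid>/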

By the way, even under gnome-shell, instead of killing the evolution-calendar-factory, you can just run it again, which will replace the currently running instance.
Comment 6 fback 2014-02-23 19:57:25 UTC
First, sorry for not answering right away, I had a rather busy week...

> > I had to run those tests under xfce. Gnome respawns those services again and
> > again right after they are killed -- maybe that's the cause? Any ideas how to
> > explore/verify this?

On second thought, that idea was just silly... If it were true, there would be a process with a changing PID, but that's not the case.

> By the way, even under gnome-shell, instead of killing the
> evolution-calendar-factory, you can just run it, which will replace the current
> running instance.

I did this, but the logs are perfectly fine. No warnings, no errors, just messages exchanged with the exchange server.

The only thing I noticed so far is that those queries always happen in threes, like this:

20:45:31.534401 IP 192.168.1.164.58518 > 192.168.1.1.53: 15425+ A? exchange.example.net. (33)
20:45:31.534782 IP 192.168.1.164.38791 > 192.168.1.1.53: 11553+ A? exchange.example.net. (33)
20:45:31.535096 IP 192.168.1.164.52285 > 192.168.1.1.53: 45571+ A? exchange.example.net. (33)
20:45:31.535148 IP 192.168.1.164.38791 > 192.168.1.1.53: 37681+ AAAA? exchange.example.net. (33)
20:45:31.535201 IP 192.168.1.164.52285 > 192.168.1.1.53: 36341+ AAAA? exchange.example.net. (33)
20:45:31.535250 IP 192.168.1.164.58518 > 192.168.1.1.53: 54134+ AAAA? exchange.example.net. (33)
20:45:31.595746 IP 192.168.1.1.53 > 192.168.1.164.38791: 11553 1/0/0 A 172.16.31.10 (49)
20:45:31.685937 IP 192.168.1.1.53 > 192.168.1.164.38791: 37681 0/1/0 (84)
20:45:31.696726 IP 192.168.1.1.53 > 192.168.1.164.52285: 36341 0/1/0 (84)
20:45:31.706097 IP 192.168.1.1.53 > 192.168.1.164.58518: 54134 0/1/0 (84)

(*) server name and IP obviously changed.

During "normal" operation there are sporadic single queries:

20:50:06.493449 IP 192.168.1.164.42436 > 192.168.1.1.53: 24205+ A? exchange.example.net. (33)
20:50:06.495283 IP 192.168.1.1.53 > 192.168.1.164.42436: 24205 1/0/0 A 172.16.31.10 (49)
20:50:06.495386 IP 192.168.1.164.42436 > 192.168.1.1.53: 5903+ AAAA? exchange.example.net. (33)
20:50:06.544401 IP 192.168.1.1.53 > 192.168.1.164.42436: 5903 0/1/0 (84)

During the last hour or so I didn't notice any AAAA query before the A reply, but this might be just a coincidence.
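For the record, a capture along these lines would produce dumps like the above (the exact invocation isn't stated; just a sketch):

   $ tcpdump -n port 53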
Comment 7 fback 2014-02-24 08:27:20 UTC
(In reply to comment #6)

Update:

I straced evolution-calendar-factory and noticed that:

 * there is PID "rolling"; the process spawns child processes
 * every one of them queries for the exchange server (but why only this one? I have other online accounts with calendars)
 * they die after a while like this:

[pid  5414] <... futex resumed> )       = -1 ETIMEDOUT (Connection timed out)
[pid  5414] futex(0x7fb03a458390, FUTEX_WAKE_PRIVATE, 1) = 0
[pid  5414] futex(0x7fb03a47ad94, FUTEX_WAIT_BITSET_PRIVATE, 767, {3177, 424096000}, ffffffff <unfinished ...>
[pid  5411] <... futex resumed> )       = -1 ETIMEDOUT (Connection timed out)
[pid  5411] futex(0x7fb03a458390, FUTEX_WAKE_PRIVATE, 1) = 0
[pid  5411] madvise(0x7fafe77ff000, 8368128, MADV_DONTNEED) = 0
[pid  5411] _exit(0)                    = ?
Process 5411 detached
[pid  5415] <... futex resumed> )       = -1 ETIMEDOUT (Connection timed out)
[pid  5415] futex(0x7fb03a458390, FUTEX_WAKE_PRIVATE, 1) = 0
[pid  5415] madvise(0x7fb001fca000, 8368128, MADV_DONTNEED) = 0
[pid  5415] _exit(0)                    = ?
Process 5415 detached
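For reference, pid-prefixed output like the above comes from an attach along these lines ('-f' follows the child processes; the exact options used aren't stated):

   $ strace -f -p `pidof evolution-calendar-factory`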


Nothing appears in the log when e-c-f is spawned from the command line with debugging enabled.

Any ideas what to do next?
Comment 8 Milan Crha 2014-02-24 09:16:47 UTC
I guess the spawned evolution-ews related subprocess is /usr/bin/ntlm_auth, which is used for NTLM-based authentication to the configured Exchange server. It may do its own DNS queries when negotiating authentication with the server. It is not supposed to do that repeatedly, or at least not too often, once it logs in to the server successfully. You can run the evolution-calendar-factory with EWS-related debugging on, to see what it does:
   $ EWS_DEBUG=2 /usr/libexec/evolution-calendar-factory -w
which prints the communication between the server and evolution-ews on the console.
Comment 9 fback 2014-02-24 13:46:02 UTC
(In reply to comment #8)
> I guess the spawned evolution-ews related subprocess is /usr/bin/ntlm_auth,
> which is used with NTLM based authentications to the configured Exchange
> server. it may do its own DNS queries when negotiating authentication with the
> server. It is not supposed to do it repeatedly or at least not too often, once
> it logins to the server successfully. You can run the
> evolution-calendar-factory with an EWS related debugging on, to see what it
> does:
>    $ EWS_DEBUG=2 /usr/libexec/evolution-calendar-factory -w
> which prints the communication between the server and the evolution-ews on the
> console.

Nothing new appears on the console when the DNS is being flooded.

The log seems quite normal to me:

Registering ECalBackendCalDAVEventsFactory ('caldav:VEVENT')
Registering ECalBackendCalDAVJournalFactory ('caldav:VJOURNAL')
Registering ECalBackendCalDAVTodosFactory ('caldav:VTODO')
Registering ECalBackendHttpEventsFactory ('webcal:VEVENT')
Registering ECalBackendHttpJournalFactory ('webcal:VJOURNAL')
Registering ECalBackendHttpTodosFactory ('webcal:VTODO')
Registering ECalBackendWeatherEventsFactory ('weather:VEVENT')
Registering ECalBackendFileEventsFactory ('local:VEVENT')
Registering ECalBackendFileJournalFactory ('local:VJOURNAL')
Registering ECalBackendFileTodosFactory ('local:VTODO')
Registering ECalBackendContactsEventsFactory ('contacts:VEVENT')
Registering ECalBackendEwsEventsFactory ('ews:VEVENT')
Registering ECalBackendEwsJournalFactory ('ews:VJOURNAL')
Registering ECalBackendEwsTodosFactory ('ews:VTODO')
Server is up and running...
Bus name 'org.gnome.evolution.dataserver.Calendar4' acquired.

and later successful Sync packets:
> POST /EWS/Exchange.asmx HTTP/1.1
> Soup-Debug-Timestamp: 1393248743
> Soup-Debug: SoupSessionAsync 1 (0x7fe8c34346a0), ESoapMessage 1 (0x7fe888073130), SoupSocket 1 (0x7fe87c0083b0)
> Host: 
> User-Agent: Evolution/3.8.5
> Connection: Keep-Alive
> Content-Type: text/xml; charset=utf-8
> Authorization: NTLM 
> 
> <?xml version="1.0" encoding="UTF-8" standalone="no"?>
> <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" [...]

< HTTP/1.1 200 OK
< Soup-Debug-Timestamp: 1393248743
< Soup-Debug: ESoapMessage 1 (0x7fe888073130)
< Cache-Control: private
< Transfer-Encoding: chunked
< Content-Type: text/xml; charset=utf-8
< Server: Microsoft-IIS/7.5
< Set-Cookie: 
< X-AspNet-Version: 2.0.50727
< Persistent-Auth: true
< X-Powered-By: ASP.NET
< Date: Mon, 24 Feb 2014 13:32:23 GMT
  
<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Header> [...]

then a few more exchanges with HTTP/1.1 200 OK codes. They appear sporadically, and not in sync with those DNS floods.
Comment 10 Milan Crha 2014-02-24 15:38:42 UTC
OK, in that case the flood is unrelated to evolution-ews, and it's true that this is about Google queries, not exchange queries.

I'm out of ideas currently. I would guess that either repeated login attempts are being made against the Google server and failing for some reason, or a network change notification was delivered in the background, which made the backends re-check server availability.

One more logging option is specific to Google calendars: as I mentioned in comment #3, use CALDAV_DEBUG=all instead of EWS_DEBUG=x when running the calendar factory process, which will print the actual activity against the Google calendar server.
Comment 11 fback 2014-02-24 16:18:53 UTC
(In reply to comment #10)
> OK, in that case the flood is unrelated to evolution-ews and it's true this is
> about Google queries, not exchange queries.

No, unfortunately. DNS is queried only for my exchange server. I have an additional google account, but this flood never happened for that account.

I tried this on a freshly installed system. Exchange and Google accounts were added from gnome-online-accounts. I also tried removing all gnome-related dot directories from ${HOME} and adding the accounts again.

> I'm out of idea currently.

I'm going to create only a google account on a fresh install and see what happens. It's possible the exchange account was always added as the first one; maybe this is important. But I will not have access to hardware that allows running virtual machines before Wednesday.
Comment 12 fback 2014-03-10 19:52:15 UTC
(In reply to comment #11)
> 
> I'm going to create only google account on a fresh install and see what
> happens. It's possible exchange account was always added as the first one,
> maybe this is important. But I will not have access to hadware that allows to
> run virtual machines before wednesday.

Finally had some time to push things a little:

I tried more or less what I described earlier, and the results are:

 * google account, no evolution -> no flooding
 * google account, evolution, but no ews plugin -> no flooding
 * google + exchange, no evolution -> no flooding (I guess; I'm able to configure this exchange account, but there's no client to use this data until evolution-ews is installed)
 * google + exchange, evolution, but no -ews plugin -> no flooding
 * google + exchange, evolution + ews plugin -> flooding starts
 * google account, evolution + ews plugin -> no flooding

Just adding the exchange account + ews plugin is not enough; I have to start evolution once. But after that, I don't have to start evolution at all (after logout/login, or reboot) for the flood to happen. All queries are for the A and AAAA records of the exchange server.

Closing evolution and then removing the account using g-o-a stops the flooding.

Is that enough to blame the -ews plugin?

Anybody with gdb knowledge and an idea of what to check next?
Comment 13 Milan Crha 2014-03-11 10:45:30 UTC
(In reply to comment #12)
> Is that enough to blame -ews plugin?

Right. The initial (downstream) reporter had a problem with google.com flooding, while you have the same issue with the evolution-ews plugin.

> Anybody with gdb knowledge and idea what to check next?

The question is which process does the flood. The initial downstream reporter has an issue with the evolution-calendar-factory, but for you, in the case of the ews plugin, it can even be the evolution-source-registry process (or the calendar or book factory); the easiest might be to start with the source registry.

Please:
a) install debuginfo packages for evolution-data-server and evolution-ews
b) kill the currently running evolution-source-registry
c) run evolution-source-registry from a console with an EWS debugging on:
   $ EWS_DEBUG=2 /usr/libexec/evolution-source-registry

You may see at least some initial checking for available folders there, which might eventually be repeated due to some failure. If you want to start another process, the best would be /usr/libexec/evolution/3.10/evolution-alarm-notify, which runs the evolution-calendar-factory for you and begins the download of events in the calendar(s). Do this only if there is no flooding from the source registry process.
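If the source registry stays quiet, that next step might look like this (adjust the version directory to the installed evolution):

   $ EWS_DEBUG=2 /usr/libexec/evolution/3.10/evolution-alarm-notify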

By the way, which authentication type do you use for the EWS account? (Basic/NTLM/...)
Comment 14 Milan Crha 2014-10-23 16:33:38 UTC
I'm sorry I aimed this in the wrong direction. The flood can still happen; it depends on how many connection changes are notified by the GNetworkManager (I've just got one such flood myself, finally). I will cope with it in the code.
Comment 15 Milan Crha 2014-10-24 07:27:25 UTC
A network change notification could be fired multiple times at a rapid rate, but still with the idle callback being reached each time, which caused the trouble when each idle callback tried to reach its server. Postponing the check by 5 seconds avoids these rechecks.
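A minimal sketch of the idea, not the actual commit: coalesce bursts of network-change notifications into a single delayed recheck. It assumes GLib's GNetworkMonitor "network-changed" signal as the trigger and a hypothetical recheck_servers_cb() doing the actual availability check:

   #include <gio/gio.h>

   static guint recheck_id = 0;

   static gboolean
   recheck_servers_cb (gpointer user_data)
   {
           recheck_id = 0;
           /* contact the configured server(s) here */
           return G_SOURCE_REMOVE;
   }

   static void
   network_changed_cb (GNetworkMonitor *monitor,
                       gboolean network_available,
                       gpointer user_data)
   {
           /* restart the 5-second delay on every notification, so a burst
              of network changes results in only one availability recheck */
           if (recheck_id)
                   g_source_remove (recheck_id);
           recheck_id = g_timeout_add_seconds (5, recheck_servers_cb, user_data);
   }

   /* during setup:
      g_signal_connect (g_network_monitor_get_default (), "network-changed",
                        G_CALLBACK (network_changed_cb), NULL); */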

Created commit e862aaf in eds master (3.13.7+) [1]
Created commit d5d4e25 in eds evolution-data-server-3-12 (3.12.8+)

[1] https://git.gnome.org/browse/evolution-data-server/commit/?id=e862aaf
Comment 16 Milan Crha 2014-10-24 11:46:22 UTC
One overlooked detail:

Created commit 13d7c86 in eds master (3.13.7+) [2]
Created commit cdbe6fd in eds evolution-data-server-3-12 (3.12.8+)

[2] https://git.gnome.org/browse/evolution-data-server/commit/?id=13d7c86