GNOME Bugzilla – Bug 660926
Mysql "Server Disappeared" errors
Last modified: 2018-06-29 23:01:33 UTC
"Christopher X. Candreva" <chris@westnet.com> reports: I've connected both with the mysql command line and phpMyAdmin, and dumped the database with mysqldump, so I'm sure the user can access it. Moreover, I've been writing to the same database for over a month. It's not dieing on the initial connect, it does quite a bit of work before it goes into the reconnect loop. This same server runs my Mythtv backend, which makes heavy use of the mysql server and is not having any problems. (I shutdown mythv while testing so I know it's not putting a load that is causing the problem) Recapping, saving to SQL from a month old XML file will work. I'm guessing it's either some data I entered last time, or something about the size I've reached. I've placed both the gnucash.trace and mysql transaction log files at http://www.westnet.com/~chris/gnucash/ If this helps. They are from the same run, so the time stamps should match. You can see in the mysql log that 55 querys are performed successfully before it enters the reconnect loop. The trace ends with me hitting Ctrl-C
It turns out there's a MySQL bug [1] behind the 2006 "server has gone away" error, and it has to do with sending the server a query that's too big. I had been thinking the problem was in the other direction, that the *response* was too big and the client was choking. That bug was supposed to have been fixed in 5.0, but you're experiencing it on 5.1, so it looks like they've got a regression. You might want to report that. So, try jacking up max_allowed_packet on the server side (apparently in /etc/my.cnf) and see if that makes a difference.

I also had a look at the error-handling code. We just try to reconnect on a 2006, and the message we emit doesn't report the server's error message... so I think the server is responding to our connection attempts with another 2006. We could close the connection and re-initialize, but that's really not going to help much with the actual error -- we'll still get another 2006 as soon as we retry whatever query caused the problem, so I guess bailing with a helpful error message is the best we can do. If jacking up max_allowed_packet works, then we can make that a recommendation in the error message.

[1] http://bugs.mysql.com/bug.php?id=1011
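As an illustration of the logic described above (a sketch only, not GnuCash's actual backend code; run_query() is a hypothetical wrapper), here is how a client using the MySQL C API could detect error 2006 (CR_SERVER_GONE_ERROR) and bail out with a message that mentions max_allowed_packet instead of looping on reconnect attempts:

/* Sketch only -- not GnuCash code. */
#include <stdio.h>
#include <string.h>
#include <mysql.h>
#include <errmsg.h>   /* defines CR_SERVER_GONE_ERROR (2006) */

static int run_query (MYSQL *conn, const char *sql)
{
    if (mysql_real_query (conn, sql, (unsigned long) strlen (sql)) == 0)
        return 0;                          /* query succeeded */

    if (mysql_errno (conn) == CR_SERVER_GONE_ERROR)
    {
        /* Retrying the same oversized query would just produce another
           2006, so report the error and hint at max_allowed_packet. */
        fprintf (stderr,
                 "MySQL error %u: %s\n"
                 "The query may be larger than the server's max_allowed_packet; "
                 "try raising it (e.g. in /etc/my.cnf) and retry.\n",
                 mysql_errno (conn), mysql_error (conn));
        return -1;
    }

    fprintf (stderr, "MySQL error %u: %s\n",
             mysql_errno (conn), mysql_error (conn));
    return -1;
}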
Chris reports that increasing max_allowed_packet as follows permits his file to load. From within the mysql command line client, to view the current value:

mysql> show global variables like 'max_allowed_packet';
+--------------------+---------+
| Variable_name      | Value   |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+
1 row in set (0.00 sec)

To set a new value:

mysql> set global max_allowed_packet=(1048576 * 2);

or

mysql> set global max_allowed_packet=2097152;
The value can also be written with a suffix, e.g. 1M, when it is set in the server's option file or on the mysqld command line, and values up to 1G are permitted for this parameter. See http://dev.mysql.com/doc/refman/5.5/en/packet-too-large.html . This is a server setting that GnuCash, as a client application, cannot change on its own, so I'm closing the bug as NOTGNOME.
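Note that a value changed with SET GLOBAL only lasts until the server is restarted. To make the change permanent it has to go into the server's option file -- assuming the usual /etc/my.cnf location and a purely illustrative value, something like:

[mysqld]
# Largest packet the server will accept from a client; suffixes K/M/G
# are allowed here, up to a maximum of 1G.
max_allowed_packet = 16M

After editing the file, restart mysqld so the new limit takes effect.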
GnuCash bug tracking has moved to a new Bugzilla host. This bug has been copied to https://bugs.gnucash.org/show_bug.cgi?id=660926. Please update any external references or bookmarks.