I am using SQLAlchemy with two MySQL databases. One is my development database hosted locally on my machine; the other is a MySQL server provided by ClearDB on Heroku for production.
I have a long-running session open with the database while it performs a syncing operation with another service. On my local machine this finishes fine, but in production I get the error (2013, 'Lost connection to MySQL server during query').
I've read other posts that say the cause can be either a request that is too large or a pool recycle setting that needs adjusting. I don't believe the transaction payload is that large, and setting an arbitrary pool recycle value didn't help. I compared the server timeouts on both databases with:

```sql
select @@interactive_timeout, @@wait_timeout
```
The two databases appeared to be using the same timeouts and general config, so it turned out to be a timeout enforced elsewhere by ClearDB.
ClearDB monitors connections and kills any that have been open for over a minute. I wasn't originally able to find this documented.
The fix was setting the pool_recycle parameter to
pool_recycle=60 when creating the engine. In my previous attempt I had used an arbitrary number higher than this, because I didn't know ClearDB's timeout.
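For reference, a minimal sketch of the engine setup. The connection URL is a placeholder, and pool_pre_ping is an optional extra safeguard I'm adding for illustration, not part of the original fix:

```python
from sqlalchemy import create_engine, text


def make_engine(url: str):
    # Recycle pooled connections after 60 seconds, matching
    # ClearDB's one-minute kill on open connections.
    # pool_pre_ping (optional) also tests each connection with a
    # lightweight ping before handing it out, replacing stale ones.
    return create_engine(url, pool_recycle=60, pool_pre_ping=True)


# Placeholder URL -- substitute your own ClearDB credentials:
# engine = make_engine("mysql+pymysql://user:password@host/dbname")
```

With pool_recycle=60, SQLAlchemy discards and replaces any pooled connection older than a minute instead of handing it back to the session, so the long-running sync never reuses a connection ClearDB has already killed.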