MySQL transactions -- strategy for unreliable connections
Posted on 2014-01-14
I have developed a C++ Qt application which interfaces with a MySQL database.
Currently, the database is local, on the same machine that runs the application. I'm now scaling up to multiple instances of the application running on several machines.
It's necessary to get all the data from the various clients into a single database that's used by an Apache/MySQL/PHP setup, which builds and updates web pages when the application performs SQL inserts and deletes. It's desirable to have the data in that single database immediately after the applications run their SQL transactions (call it "day of"), but that's not absolutely required.
The venues where the application will be run don't always have internet connectivity, and when they do, it's often unreliable -- sometimes only cell-phone access with a weak signal.
So, I'm trying to devise a strategy that will always allow the application to run and save data locally, regardless of whether an external connection exists. If a connection doesn't exist, local data could be sent to the remote server(s) at a later time when a reliable connection is available.
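That save-locally, replay-later idea could be sketched roughly like this (a hypothetical sketch; `execLocal`/`execRemote` stand in for the real database calls, e.g. `QSqlQuery::exec` in Qt, and all names are mine, not from any real API):

```cpp
#include <functional>
#include <string>
#include <vector>

// Local-first strategy: every statement is committed to the local
// database unconditionally; the remote copy is attempted only if a
// connection is up, and queued for later replay otherwise.
class SqlForwarder {
public:
    // execRemote returns false when the connection is down or the
    // statement fails on the server.
    SqlForwarder(std::function<bool(const std::string&)> execLocal,
                 std::function<bool(const std::string&)> execRemote)
        : execLocal_(std::move(execLocal)),
          execRemote_(std::move(execRemote)) {}

    // Run the statement locally; if the remote attempt fails, remember
    // it so it can be replayed when connectivity returns.
    bool run(const std::string& sql) {
        bool localOk = execLocal_(sql);   // the local DB must always work
        if (!execRemote_(sql))
            pending_.push_back(sql);      // replay later
        return localOk;
    }

    const std::vector<std::string>& pending() const { return pending_; }

private:
    std::function<bool(const std::string&)> execLocal_;
    std::function<bool(const std::string&)> execRemote_;
    std::vector<std::string> pending_;
};
```

The application only ever depends on the local commit succeeding; the remote side is strictly best-effort.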
I see three scenarios:
1) No connectivity of any kind "day of". The application should do inserts to the local database. Data is uploaded to the web at a later time.
2) Connectivity via wireless LAN is available, but that network isn't connected to the internet. The wireless LAN could go down, but the application still needs to keep working. Data is uploaded to a common server later, when an internet connection is available.
3) A connection to the internet is available. But, like the wireless LAN, I don't want it to be a dependency for using the application "day of".
Seems like this is probably a common situation, so I thought I'd ask for input on the best way to do it. Here's what I'm thinking so far:
1) Make sure any insert that uses an auto-incremented id from a prior insert is wrapped in the same transaction as that prior insert.
2) Check every transaction for success -- on failure, save the SQL text to a file for replay once server connectivity becomes available.
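One wrinkle with point 1: if the saved SQL hard-codes the auto-increment id captured from the local database, it will be wrong when replayed on the remote server, whose counter will differ. Keeping the reference symbolic with `LAST_INSERT_ID()` inside the transaction avoids that. A hypothetical helper (table and column names `results`, `event_id`, `score` are invented for illustration):

```cpp
#include <string>
#include <vector>

// Build one self-contained transaction: a parent insert plus child rows
// that reference it. The parent id is captured into a MySQL user
// variable rather than baked in as a literal, so the same SQL text is
// valid both locally and when replayed later on the remote server.
std::string buildEventTransaction(
        const std::string& parentInsert,
        const std::vector<std::string>& childValueLists) {
    std::string sql = "START TRANSACTION;\n";
    sql += parentInsert + ";\n";
    // Capture immediately, before any child insert can overwrite
    // LAST_INSERT_ID() with its own auto-generated id.
    sql += "SET @parent_id = LAST_INSERT_ID();\n";
    for (const std::string& values : childValueLists) {
        sql += "INSERT INTO results (event_id, score) "
               "VALUES (@parent_id, " + values + ");\n";
    }
    sql += "COMMIT;";
    return sql;
}
```

Capturing into `@parent_id` first matters because each child insert with its own auto-increment column would otherwise change what `LAST_INSERT_ID()` returns.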
So, I'm planning on having connections to up to three databases: a local one, one on the local network, and one on the internet. The latter two will each have an associated file holding the SQL for every transaction that didn't succeed against that server.
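The later catch-up pass over one of those files might look something like this (a sketch under the assumption that each line of the file holds one complete transaction with no embedded newlines; `execRemote` again stands in for the real database call):

```cpp
#include <fstream>
#include <functional>
#include <string>

// Replay one per-server journal file. Transactions that now succeed are
// dropped; any that still fail are kept for the next pass. Returns the
// number of transactions successfully replayed.
int replayJournal(const std::string& path,
                  std::function<bool(const std::string&)> execRemote) {
    std::ifstream in(path);
    std::string line, stillPending;
    int replayed = 0;
    while (std::getline(in, line)) {
        if (line.empty()) continue;
        if (execRemote(line))
            ++replayed;                   // made it to the server
        else
            stillPending += line + '\n';  // keep for the next attempt
    }
    in.close();
    // Rewrite the journal with only the leftovers.
    std::ofstream out(path, std::ios::trunc);
    out << stillPending;
    return replayed;
}
```

Since failures partway through are possible, it also seems safest to make each journal entry idempotent (or at least a full transaction), so replaying it twice can't duplicate rows.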
Does this make sense? Is there a better way?