
Solved

MySQL transactions -- strategy for unreliable connections

Posted on 2014-01-14
Medium Priority
118 Views
Last Modified: 2016-05-29
I have developed a C++ Qt application which interfaces with a MySQL database.

Currently, the database is local, on the same machine that runs the application.  I'm now scaling up to have multiple instances of the application running on several machines.

It's necessary to get all the data from the various clients into a single database that's used by an Apache/MySQL/PHP setup, which builds and updates web pages when the application performs SQL inserts and deletes.  It's desirable to have the data in the single database immediately after the applications complete their SQL transactions (call it "day of"), but that's not absolutely required.

The venues where the application will be run don't always have internet connectivity.  And when they do have connectivity, it's often unreliable; sometimes there's only cell phone access with a weak signal.

So, I'm trying to devise a strategy that will always allow the application to run and save data locally, regardless of whether an external connection exists.  If a connection doesn't exist, local data could be sent to the remote server(s) at a later time when a reliable connection is available.

I see three scenarios:

1) No connectivity of any kind "day of".  The application should do inserts to the local database.  Data is uploaded to the web at a later time.

2) Connectivity via wireless LAN is available, but that network isn't connected to the internet.  The wireless LAN could go down, but the application still needs to keep working.  Data is uploaded to a common server later, when an internet connection is available.

3) A connection to the internet is available.  But, like the wireless LAN, I don't want it to be a dependency for using the application "day of".
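
To make the three scenarios concrete, here's a rough Qt sketch of how I'd register all three candidate connections up front and probe which ones are reachable.  The connection names, host addresses, and database name are placeholders I made up for illustration:

#include <QSqlDatabase>
#include <QSqlError>
#include <QString>
#include <QDebug>

// Register the three candidate databases once at startup.
void registerDatabases()
{
    QSqlDatabase local = QSqlDatabase::addDatabase("QMYSQL", "local");
    local.setHostName("127.0.0.1");      // scenario 1: always available
    local.setDatabaseName("eventdata");

    QSqlDatabase lan = QSqlDatabase::addDatabase("QMYSQL", "lan");
    lan.setHostName("192.168.1.10");     // scenario 2: wireless LAN server
    lan.setDatabaseName("eventdata");

    QSqlDatabase web = QSqlDatabase::addDatabase("QMYSQL", "web");
    web.setHostName("db.example.com");   // scenario 3: internet server
    web.setDatabaseName("eventdata");
}

// Returns true if the named connection is open or can be opened right now.
// Only used to decide whether a sync attempt is worth making; local inserts
// never wait on this.
bool probeConnection(const QString &name)
{
    QSqlDatabase db = QSqlDatabase::database(name, /*open=*/false);
    if (db.isOpen())
        return true;
    if (!db.open()) {
        qWarning() << name << "unreachable:" << db.lastError().text();
        return false;
    }
    return true;
}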

Seems like this is probably a common situation, so I thought I'd ask for input on the best way to do it.  Here's what I'm thinking so far:

1) Make sure inserts that use an auto-incremented id from a prior insert are included in a transaction with those inserts.
2) Check for success of every transaction; on failure, save the SQL text to a file for later use when server connectivity becomes available.

So, I'm planning on having connections to up to three databases: a local one, one on the local network, and one on the internet.  The latter two will each have an associated file holding the SQL commands for all the transactions that didn't succeed.  Here's a rough sketch of what I mean by points 1 and 2, using the Qt SQL classes the application already uses.  The table names, columns, function names, and journal format are all made up for the example:
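
#include <QSqlDatabase>
#include <QSqlQuery>
#include <QSqlError>
#include <QString>
#include <QFile>
#include <QTextStream>
#include <QVariant>

// Append enough information to re-run the failed transaction later.
// Journaling the bound values (rather than literal SQL text) avoids
// quoting problems and lets the remote server assign its own ids.
static void journalForLater(const QString &path, const QString &name, int score)
{
    QFile f(path);
    if (f.open(QIODevice::Append | QIODevice::Text))
        QTextStream(&f) << name << '\t' << score << '\n';
}

// Point 1: the second insert uses the auto-increment id from the first,
// so both run inside one transaction.  Point 2: on any failure, roll back
// and journal the data for replay once connectivity returns.
bool insertCompetitorScore(const QString &connName, const QString &name,
                           int score, const QString &journalPath)
{
    QSqlDatabase db = QSqlDatabase::database(connName);
    if (!db.isOpen() || !db.transaction()) {
        journalForLater(journalPath, name, score);
        return false;
    }

    QSqlQuery q(db);
    q.prepare("INSERT INTO competitor (name) VALUES (?)");
    q.addBindValue(name);
    bool ok = q.exec();

    if (ok) {
        QVariant competitorId = q.lastInsertId();  // id from the prior insert
        q.prepare("INSERT INTO score (competitor_id, points) VALUES (?, ?)");
        q.addBindValue(competitorId);
        q.addBindValue(score);
        ok = q.exec();
    }

    if (ok && db.commit())
        return true;

    db.rollback();
    journalForLater(journalPath, name, score);
    return false;
}

One wrinkle I'm aware of: if the journal stored literal SQL text that included local auto-increment ids, those ids could collide with ids the remote server has already handed out.  Journaling the logical data and letting each server assign its own ids seems safer.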

Does this make sense?  Is there a better way?

Thanks

Dave Thomas
Question by:DaveThomasPilot
4 Comments
 
LVL 39

Assisted Solution

by: Aaron Tomosky (earned 1000 total points)
ID: 39789275
High level view:
Save everything locally, marked dirty by default.
Make something that sends the local stuff to the central db.
Mark things as clean when you receive a good return value.

Periodically check for dirty stuff.

If you always design this way, you never have to worry about different versions for different setups.
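
Something like this, sketched with the Qt SQL classes the asker is already using.  Table, column, and connection names are placeholders (the connection names just follow the sketch in the question):

#include <QSqlDatabase>
#include <QSqlQuery>
#include <QVariant>

// Push dirty local rows to the central db; mark a row clean only after
// the central insert reports success.
void syncDirtyRows()
{
    QSqlDatabase local = QSqlDatabase::database("local");
    QSqlDatabase central = QSqlDatabase::database("web");
    if (!local.isOpen() || !central.isOpen())
        return;                            // try again on the next pass

    QSqlQuery select(local);
    select.exec("SELECT id, name, points FROM score WHERE dirty = 1");
    while (select.next()) {
        QSqlQuery push(central);
        push.prepare("INSERT INTO score (name, points) VALUES (?, ?)");
        push.addBindValue(select.value(1));
        push.addBindValue(select.value(2));
        if (push.exec()) {                 // good return value -> clean
            QSqlQuery clean(local);
            clean.prepare("UPDATE score SET dirty = 0 WHERE id = ?");
            clean.addBindValue(select.value(0));
            clean.exec();
        }
    }
}

Driving syncDirtyRows() from a QTimer gives the periodic check.  One caveat: if the app dies between the push and the clean-up UPDATE, the row gets pushed twice on the next pass, so a unique key on the central table (or an otherwise idempotent insert) is worth adding.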
 
LVL 7

Accepted Solution

by: Phil Davidson (earned 1000 total points)
ID: 40463821
MySQL isn't ACID compliant when using the MyISAM storage engine; its InnoDB engine is, and PostgreSQL is ACID compliant throughout.  Mission-critical databases have successfully run on MySQL, however.

It sounds like you have thought about this a fair amount.  I think you'll succeed based on your preparation.  I would read about:

1) Asynchronous replication and/or long-distance replication (which is designed for intermittent breaks in connectivity) for this project: http://www.clusterdb.com/mysql-cluster/setting-up-mysql-asynchronous-replication-for-high-availability

2) MySQL clustering: http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster.html
