

PHP / MySQL - highly transactional programming & thousands of concurrent users?

Posted on 2009-04-07
Medium Priority
Last Modified: 2012-06-27
# Strong understanding of highly transactional programming
# Code optimization to support thousands of concurrent users


I've just seen these included in the specs for a job advertisement for a PHP / MySQL Web Developer. Could you give me some ideas or examples of common solutions / patterns for these problems?

What is highly transactional programming? Is this just a fancy term for a web app / site that will have to do a lot of DB updates?

What are the normal methods to accommodate high numbers of concurrent users in web apps? What can I do in PHP / MySQL, or is it really down to Apache / MySQL Server and the hardware to deal with this?

Question by:damoth1
LVL 34

Accepted Solution

Beverley Portlock earned 1000 total points
ID: 24087777
There are too many factors to fit a realistic answer in, but....

Transactional database updates usually refer to the practice of using journalled databases. These allow you to apply updates to several tables and then, when a consistent point is reached, "commit" all the updates in one go. It means that if a user backs out of a transaction then any uncommitted changes can simply be dropped. In MySQL this is usually done via the InnoDB engine.
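As a rough sketch of what that looks like in PHP using PDO (the DSN, credentials, and table are made up for illustration; you need a running MySQL server and InnoDB tables for this to work):

```php
<?php
// Hypothetical connection details -- adjust for your environment.
$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $pdo->beginTransaction();

    // Two related updates that must succeed or fail together
    $pdo->prepare('UPDATE accounts SET balance = balance - ? WHERE id = ?')
        ->execute([100, 1]);
    $pdo->prepare('UPDATE accounts SET balance = balance + ? WHERE id = ?')
        ->execute([100, 2]);

    $pdo->commit();      // make all the changes permanent in one go
} catch (Exception $e) {
    $pdo->rollBack();    // drop every uncommitted change
    throw $e;
}
```

Note that the tables involved must use the InnoDB engine; with MyISAM the `beginTransaction` / `commit` calls are silently ignored and each statement takes effect immediately.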

To be honest, if you have ask these sorts of questions then I'm not sure you should be going for the job! (sorry!)

To deal with thousands of users it is a combination of high-speed connections, fail-over setups, large, fast servers, connection pooling, optimised database design... it needs a book, not an EE answer.

Author Comment

ID: 24087872
Thanks bportlock,

These 2 are listed as preferred skills - I have all the required skills on the job spec. I just wanted to understand what the two terms meant.

Anyway, now I know what is meant by transactional database updates. Could I get a little more information about concurrent users, just in terms of PHP / MySQL - I just want a very high-level overview like the one given for transactional databases above.
LVL 34

Assisted Solution

by:Beverley Portlock
Beverley Portlock earned 1000 total points
ID: 24088533
A lot of what has to be done will depend on the precise configuration being used. For instance, there may be multiple Apache servers being fed on a load-balanced basis. The speed of each Apache server would depend on the memory available, the number of children the server process can spawn, how many are created initially, and how many are held in reserve.

It may be possible for the databases to be held separately with remote processes consolidating the data in an off-line manner. Probably the most important consideration would be the number of updates, as this can drastically affect database response. It may be that queries are queued and processed by a "dispatcher engine", which I have used in fast-processing environments with multi-processor computers (normally IBM AS/400s).

One way to increase responsiveness is to open 20 or 30 connections to the database and make them persist so that they exist in a pool to be allocated. This saves the overhead of re-establishing a server connection every time a transaction starts, but you need an efficient connection pool manager or else the overhead is worse than making a fresh connection.
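In plain PHP the closest built-in equivalent is a persistent connection, where PHP keeps the connection open between requests and hands it back out of a per-process pool instead of opening a fresh one each time. A minimal sketch (DSN and credentials are placeholders):

```php
<?php
// PDO::ATTR_PERSISTENT asks PHP to reuse an already-open connection
// from its pool rather than establishing a new one on every request.
$pdo = new PDO(
    'mysql:host=localhost;dbname=app',
    'user',
    'secret',
    [PDO::ATTR_PERSISTENT => true]
);
```

The caveat is exactly the one above: a persistent connection carries its state (open transactions, locks, session variables) over to the next request that receives it, so sloppy code can make the pool worse than fresh connections.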

There are in-memory tables for fast lookup, query optimisation, data denormalisation.... there are dozens of techniques.
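For the in-memory lookup technique, MySQL provides the MEMORY storage engine, which holds the whole table in RAM. A sketch (the table and columns are invented for illustration; MEMORY tables are emptied on server restart, so they suit caches and lookups, not durable data):

```sql
-- A lookup / cache table held entirely in RAM
CREATE TABLE session_cache (
    session_id CHAR(32) NOT NULL PRIMARY KEY,
    user_id    INT      NOT NULL,
    expires_at DATETIME NOT NULL
) ENGINE = MEMORY;
```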

At a guess, your first headache is getting the users to the server at a reasonable speed - that means lots of bandwidth on the networks into the server room. Next the servers will need a good chunk of memory - say 128MB per Apache process, and if you are running 30 Apache processes then you are looking at nearly 4GB of memory, so you want a 64-bit server, probably with a minimum of 2 cores per chip. I would imagine some form of RAID - RAID 5 is resilient but slow if it loses a disc. RAID 0 is faster and may suffice if you have the right controllers, though it offers no redundancy at all. The discs will be your biggest bottleneck since, being mechanical, they are the slowest component. I would want a multi-disc setup with the machine partitioned onto different discs to separate data / caches / OS / programs.
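The Apache child-process sizing mentioned above is controlled by the prefork MPM directives. A sketch with the figures from this answer plugged in (the numbers are illustrative, not a recommendation - tune them against your own per-process memory footprint):

```apache
# Apache prefork MPM sizing
<IfModule mpm_prefork_module>
    StartServers           10   # children created at startup
    MinSpareServers         5   # idle children held in reserve
    MaxSpareServers        15
    MaxRequestWorkers      30   # cap: 30 x ~128MB = ~4GB of RAM
</IfModule>
```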

I'm skimming the surface here, but you get the idea.

