• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 257
  • Last Modified:

High volume

Can anyone share concerns, suggestions, typical issues, or related tips on high-volume transactions? I have basic knowledge but have never been exposed to a high-volume transaction system. Can anyone point me in the right direction so I can develop solutions for high-volume, enterprise-class transaction systems in commercial settings?
Thanks a lot.
5 Solutions
Aneesh Retnakaran, Database Administrator, commented:
Check this: https://secure.logmein.com/welcome/visualization/fullscreen/
That's an example of a high-volume application.
There are multiple variables to consider.
First, the main performance bottleneck in a database is usually storage: disks can only seek so fast. To deal with this, use many disks and a partitioning scheme that balances the daily load across all of them.
Also, do not make SQL do work the app/middle tier can handle, such as sorting, or joins that merely translate IDs to names of non-changing data. Avoid using SQL to generate identity values as well; with proper data design, IDs can be generated by the client, and the database only has to store them.
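As a minimal sketch of those two ideas, here is client-side ID generation plus an app-tier lookup of non-changing reference data, so the database neither generates identities nor joins IDs to names. The table contents and function names are hypothetical, purely for illustration:

```python
import uuid

# Hypothetical cache of non-changing reference data, held in the app tier
# so the database never has to join status IDs to names.
STATUS_NAMES = {1: "pending", 2: "shipped", 3: "delivered"}

def new_order(status_id: int) -> dict:
    """Build an order row with a client-generated ID (no SQL identity column)."""
    return {
        "order_id": str(uuid.uuid4()),           # generated on the client; the DB just stores it
        "status_id": status_id,
        "status_name": STATUS_NAMES[status_id],  # resolved in the app tier, not in SQL
    }

order = new_order(2)  # order["status_name"] is "shipped"
```

The trade-off is that client-generated GUIDs are larger than integer identities and index less compactly, but they remove a serialization point on the database.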
You can partition your data between servers, e.g. last names A-M on one server and the rest on another, using as many servers as you like. But programming for this can be tricky, and it may not be agile enough to move the boundaries later to keep things properly load balanced.
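A toy illustration of that last-name range partitioning; the server names and boundaries are made up, and the hard-coded boundaries show why rebalancing later is the painful part:

```python
# Minimal sketch of range sharding by last-name initial.
# Boundaries and server names are illustrative only.
SHARDS = [
    ("A", "M", "sqlserver-1"),  # last names A-M
    ("N", "Z", "sqlserver-2"),  # last names N-Z
]

def shard_for(last_name: str) -> str:
    """Route a row to a server based on the first letter of the last name."""
    initial = last_name[0].upper()
    for low, high, server in SHARDS:
        if low <= initial <= high:
            return server
    raise ValueError(f"no shard covers initial {initial!r}")

shard_for("Smith")  # routes to "sqlserver-2"
```

Note that every boundary change means physically moving rows between servers, which is why the comment above warns that this approach may not be agile enough.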
If your budget is really big, you can look at MS Data Warehouse, where they mainly throw hardware at the problem and give you an effectively load-balanced SQL/storage solution.
For high volume you're going to need a few things. This is my personal order of priority.

1. Memory. Lots of memory dedicated to buffer space. Without any real metrics, I would suggest at least 25% of the database size available as buffer. Note that this must be backed by real RAM; paging is your enemy on a database server.
2. Dedicated servers. Nothing is worse than needing the CPU while something else has it, or waiting for someone else's write to complete.
3. Storage. The faster the better, and the good news is that staggeringly fast storage is now available through SSDs. Your targets for SSD are tempdb and the .LDF files. Don't waste SSD storage on the .MDF files; they'll be fine on slower storage.
4. CPU. Enough that you are consistently below 60% busy.

There are lots of forums out there that will help (including this one), but to get a more specific answer, you'll need to ask a more specific question!
minglelinch (Author) commented:
Thanks for the very helpful comments.

How about high availability strategies?
Eugene Z commented:
The answer is "it depends": on your budget, on the technology and software, and on what you plan to use it for.

Select the kind of system you plan to build (e.g. real time) and read up on transaction processing systems.

It also depends on your software edition. As this is a SQL forum, I will recommend checking the features of the various SQL Server editions: http://www.microsoft.com/en-us/sqlserver/editions.aspx
Certainly there are a lot of options for you and as others have noted it "depends" on a lot of things... especially the budget.


1) Is your goal high transaction volume (writes: inserts/updates/deletes) or high capacity, working with lots of collected data? If you're after transaction rate, then IO latency/IOPS are going to be your bottlenecks. If it's high-capacity processing, then you'll likely need RAM and CPU power.

2) What is your ballpark budget for this?


1) Since this is posted in MS SQL I'll assume the solution you want is Microsoft SQL Server based.

2) If high transaction volume is required you'll need storage with low-latency to achieve the highest IOPS. As mentioned in other comments the FusionIO IODrive 2 is pretty wicked fast in terms of IO latency and IOPS... those are your tickets to high transaction throughput.

3) The RAID controller in your SQL Server(s) may have a battery-backed RAM cache that allows you to do write caching. This can greatly increase your ability to handle bursts of transactions... although doesn't help much in terms of sustained high transaction rates.

4) SQL Server 2014, soon to be released, will be interesting for transaction volume. If you can tolerate some potential transaction data loss (the last few seconds) in the case of a catastrophic server failure, and that's a BIG IF, then SQL 2014 has a new feature called "delayed durability" which can greatly increase transaction throughput. I'm getting much higher rates in my tests.

5) Test, test, test. If you can, try different hardware, and properly configure and test each setup.

6) If you need high availability and extreme transaction rates, you may want to use mirroring instead of clustering. I use clustering in many of my installations, but for those who need extreme transaction rates, transaction volume is often limited by IO latency.

I'm currently on a project that requires extreme transactions-per-second rates; hardware-wise, I am spending $110K on a pair of servers (for high availability) and lots of PCIe-based storage.
minglelinch (Author) commented:
Thanks for all helpful comments.