
High IOWait time on Linux ES3 running Databases.

Last Modified: 2013-12-20
I am running Linux ES3 on an IBM BladeCenter, which is connected to a Clariion CX500 via 2 fabric switches (4 paths). Linux is running a large Progress database with about 75-100 users connected at any given time. From time to time we see an enormous amount of IOwait on Linux, and the DBAs keep pointing back to the disks being too slow. Can anyone give me a direction on how I could find out exactly which process is causing the high IOwait? I've used SAR and iostat, but nothing really details the IOwait portion. My ultimate goal is to show that the disks aren't the bottleneck. I feel this all stems from poor performance inside the database itself, but I can't find a way to prove it convincingly.
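One way to narrow down which processes are behind the iowait (and this works even on the older 2.4 kernels that ES3 ships with) is to watch for processes stuck in uninterruptible sleep (state "D") while sampling per-device utilization. This is a sketch using standard procps and sysstat tools:

```shell
# Per-device utilization and service times, sampled every 5 seconds
# (iostat -x comes with the sysstat package).
iostat -x 5 3

# Processes in state "D" (uninterruptible sleep) are blocked on I/O;
# sampling this repeatedly shows which processes rack up the waits.
ps -eo stat,pid,args | awk '$1 ~ /^D/ {print $2, $3}'
```

Running the `ps` line in a loop during a slow report should show whether it is the database server processes or something else sitting in I/O wait.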




There is a program called Iometer (http://www.iometer.org) on SourceForge. Originally an Intel application, it is now maintained under OSDL.
It should tell you what you want to know about your I/O subsystem.

I'm not sure it isn't about the disks. I have little respect for EMC, but that is not the point.

The problem shows up at the disks; however, it can be caused by incorrect DB settings, such as too small a cache (which results in too many I/O operations), too many commits (you would overwhelm even a larger storage array and a larger server if you were to commit after each transaction), or plain incorrect allocation of disks and arrays for the DB. Did you separate the logs from the DB? Did you alter cache behavior (which is recommended in RH3) to disable readahead?

As said - the problem is with the disks, but they are not necessarily the cause.
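On the readahead point, a minimal sketch for checking and disabling it with `blockdev` from util-linux (the device name /dev/sdb is a placeholder for whichever device maps to the Clariion LUN on your system):

```shell
# Show current readahead in 512-byte sectors
# (e.g. 256 sectors = 128 KB read ahead per request).
blockdev --getra /dev/sdb

# Disable readahead for random-access database I/O (requires root).
blockdev --setra 0 /dev/sdb
```

Readahead helps sequential scans but wastes disk time on the random access patterns typical of an OLTP database, which is why disabling it is sometimes recommended.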


The logs are indeed on a separate spindle from the DB. I am not sure about the readahead setting since I didn't build these systems, but I will look into that.

To profile the system and see where the delays come from, I would use a variety of tools. First I would put top up in a window, sorted by CPU usage.
At the same time I would have the performance meter running (System, Admin, System Monitor), and I would run Iometer in a third window, then watch as a user accesses the database - running a report or select or whatever it is they think is taking too long. This way you can see which process and which subsystem are using up the time.
If CPU usage goes to 100% but disk does not, it's the DB software; if both disk and CPU go to 100%, it's how the DB is accessing the disk.
While watching CPU, also watch memory and swap usage. It's possible you need more RAM or a larger swap (is swap on a separate spindle?).
If memory is not the issue and CPU is not, but disk throughput maxes out, then your disk subsystem is the culprit.
Hope this helps.
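The watch-several-windows approach above can also be captured from one terminal; vmstat's "wa" column (CPU time in I/O wait) and "si"/"so" columns (swap-in/out) cover the CPU, memory, and swap checks in a single snapshot:

```shell
# Periodic snapshot while the slow report runs.
# vmstat: "wa" = % CPU in iowait, "si"/"so" = swap activity.
vmstat 5 3

# Top CPU consumers at the same moment (batch mode, one iteration).
top -b -n 1 | head -15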


I'm having issues "make"ing iometer. Any thoughts for this?

I can see that when we run reports on the database, iowait jumps very high. I'm sure this is a database/Linux tuning issue, since during that time I can copy/move a 1GB file to the DB partition in seconds while the iowait is going on.

Sorry, no thoughts on building it; I used the binaries for x86, which are not going to work for you. But I think you just answered your initial question: it's not the disks, since you are able to do the copy while the iowait is on.

So what in Progress is causing the iowait? Maybe when two users access the same record, the first gets it and the second is given an I/O wait instead of a locked-record error. In that case it's most likely a timeout wait, where the wait is x time and you can change it to x/2. Higher iowaits may occur when more than two people are trying to get at the same record, each wait becoming a factor of x longer.
I would look to see where this might be configured and see if changing it gives you better performance.

Do you have cache hit/miss measurements from the Clariion?
Are you familiar with NaviSphere Analyzer?  It's a product they charge for (or it comes free with the enterprise license of NaviSphere)

If you are licensed for it, I suggest going to http://powerlink.emc.com and downloading the manual for it.

You can get there from within NaviSphere by right clicking over a LUN, disk, pretty much anything and going to analyzer. Statistics logging has to be enabled on your SP in order to use Analyzer.  The values are different for LUN, disk, or SP, so don't try to mix those when you are looking at values.

I would start by looking at the physical disks (not the LUN level) that the DB uses and whether they max out on IOPS frequently (roughly 140 IOPS for a 10k drive, 180 IOPS for a 15k drive, per individual disk). If the disks hit that value frequently, you need to span more spindles or change the RAID type. If everything looks OK, I would check utilization on the storage processors, and also check cache (forced flushes, high watermark, etc.)
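For reference, the per-spindle numbers above translate into an aggregate ceiling for the RAID group. A back-of-the-envelope check (the disk count here is a made-up example, not your actual configuration):

```shell
# Hypothetical example: 6 x 10k-rpm spindles at ~140 IOPS each.
spindles=6
iops_per_disk=140
max_iops=$((spindles * iops_per_disk))
echo "Aggregate ceiling: $max_iops IOPS"   # prints 840 IOPS

# If Analyzer shows the group sustaining close to this number,
# the spindles are saturated: add disks or change the RAID type.
```

Remember that parity RAID (RAID 5) multiplies the back-end writes per front-end write, so the usable ceiling for a write-heavy database is lower than this raw figure.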

Check out this long discussion of the iowait issue in RHEL3: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=121434
Do you know how big the database is, and what the value of -B (the database startup parameter) is?
The database blocksize should be 4K on Linux.
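Since -B is the number of database buffers in blocks, its memory footprint is -B times the blocksize. A quick sanity check (the 100000 value is just an illustrative example, not a recommendation):

```shell
# Hypothetical: starting the broker with -B 100000 and a 4 KB blocksize.
b_param=100000
blocksize_kb=4
buffer_pool_mb=$((b_param * blocksize_kb / 1024))
echo "Buffer pool: ${buffer_pool_mb} MB"   # prints "Buffer pool: 390 MB"
```

An undersized buffer pool relative to the working set forces the database to hit the disks for reads it could have served from memory, which shows up directly as iowait.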

It's easy to prove that it is the database. Just monitor the database for a while with promon, a tool installed with the Progress database.
promon <path/to/database/databasename>
Choose option 5 and look at the buffer hit percentage. This should be 95% or more. If it is less, make some more thorough measurements of the database.

You could also be facing a problem with checkpoints: you can see this under option 5 as well, in the "Buffers flushed" section. If there are rather large numbers here, then this is the problem.

You get a more detailed picture from
promon <path/to/database/databasename>
then R&D, 3, 1 --> performance indicators. Maybe you could post those results here?

So far everyone's been focusing on the disks or the DB, but it could indeed be the software accessing the database. Is this a problem that seemed to start after code was changed? A few programs that access large tables and/or are used frequently, and don't use the correct indices, can have a major impact on I/O. There are also a lot of client startup parameters that can help memory and I/O management; I would have assumed your DBAs had those tuned correctly.

Also, a proper RAID configuration can have a major impact if your disks are on a RAID controller. See: http://progress.atgnow.com/esprogress/documentDisplay.do?clusterName=CombinedCluster&preview=1&groupId=2&page=http://progress.atgnow.com/esprogress/docs/Solutions/Progress/ESERVER_P102208.xhtml&docType=1006&resultType=5002&docProp=$solution_id&docPropValue=21152