
  • Status: Solved

How do I tune UNIX

I am using Tru64 UNIX V5.1A with an Oracle database sitting on it. Both the OS and the DB are now running fairly slowly. To improve performance, I am assuming one starts by tuning the hardware, then the OS, and lastly the DB. (First question: is this assumption valid?) However, at the present moment I do not have the resources to upgrade the hardware, and that gets me to the OS.

Currently disk usage runs at between 80% and 98% of the 177 GB RAID 5 disks. (The system really gets worse above 96%, and I clean up some unnecessary log files to bring it back to 80%, but that is still not good enough.) CPU usage is usually below 40%. However, I am not sure how to analyse the 2 GB of memory that I have. Second question: how do I analyse the memory? Third question: are there other aspects of the OS worth looking at before moving to the DB? (My DB questions will be in the Oracle area!) Thanks.
1 Solution
Actually, I would examine the database before tuning the O/S, other than to look at generic actions that may improve the situation, e.g. shutting down unneeded services and programs.

Filesystems are typically designed for performance when at less than 90% of capacity.  At 95% of capacity, performance will degrade, and no amount of tuning will resolve this.  You need to get disk usage under control, or increase capacity.
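One quick way to keep an eye on this is a one-liner over `df` (a sketch: the field positions assume the usual `df -k` column layout of Filesystem, blocks, Used, Available, Capacity, Mounted on, and may need adjusting on your system):

```shell
# Flag any mounted filesystem above 90% capacity. Field 5 is assumed
# to be the "Capacity" percentage and field 6 the mount point, as in
# the common df -k layout; adjust if your df prints columns differently.
df -k | awk 'NR > 1 { use = $5; sub(/%/, "", use); if (use + 0 > 90) print $6 " is at " use "%" }'
```

Run from cron, this gives you a warning before the filesystem drifts back toward the 96% pain point.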

RAM is always a good addition, as it will allow you to be more effective with your tuning.  That said, I would look at tuning Oracle to better use the available memory for what you're specifically doing with it, and then look at the O/S to determine what is being stressed the most, and see what tuning can be done.

As I don't have an intimate knowledge of Oracle or Tru64 UNIX, I'm afraid I can't offer more than general advice on this topic, but I can state the obvious. :) Memory, paging and disk I/O are going to be the primary things you want to focus on. But I would deal with the disk usage situation first: if you're at 80% after trimming the fat, your space requirements are only going to continue to grow and reduce the gap. If you're working towards a fixed event, that's one thing; but if you're just trying to delay as long as possible, you're likely to spend more time baby-sitting it than it would cost to solve the problem directly.

Hope that's of some help.
Your assumption is completely invalid.
Most likely you allocated more memory to the Oracle SGA than is available, and now your system crawls because of this.
And, frankly, Oracle can be quite particular about how its operations are ordered. A stored procedure with Tasks A - B - C - D may have lousy performance, but if the order of the Tasks is changed to A - B - D - C, suddenly there is a dramatic improvement. Your DBA needs to make sure that they don't have anything in Oracle that's dragging it down.

RAID 5 is slow for databases.
AdvFS is slow for databases because it writes twice.
A filesystem is slower than raw disk.

Filesystems are not optimized to be run nearly full.

Once a UNIX filesystem is full it can no longer allocate files cleanly and becomes suboptimal (the same goes for your favourite, Windows XP).

2nd: to analyse memory, run

vmstat 1
ipcs -a

Most likely you (or the DBA) killed Oracle, or it crashed, and you now have too many IPC shared memory segments left behind.
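To see whether leftover segments are the culprit, you can count them (a sketch: the leading `m` on shared memory lines matches classic `ipcs` output, and the exact columns vary by platform; the segment id in the ipcrm comment is illustrative only):

```shell
# Count System V shared memory segments; a crashed Oracle instance
# can leave its SGA segment(s) behind. Lines beginning with "m" are
# assumed to be shared memory entries, as in classic ipcs output.
ipcs -m | awk '/^m/ { n++ } END { print n + 0, "shm segments" }'

# Once you are sure no running instance still needs a segment,
# remove it by id (illustrative id only):
# ipcrm -m 4523
```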

3rd: please post the results from the two commands above first.

I'd focus on the following...

1. RAID 5 is slow; try to move to RAID 10 if you can (with as many disks as you can spare).
2. Check your swap: if your box is swapping you need to fix that,
   - either by adding more memory to the box, or by checking your Oracle SGA and making sure you are not running too many instances on the box.
3. Never go above 85% full on a filesystem; it makes it very slow(ish).

And I'll also guess at what is happening... (might be risky! :-) )

Your Oracle instance has a very large SGA (buffer cache, java pool and so on), and/or your shmmax is very high (>2 GB).
If the above is correct, your box is probably swapping, which is very bad...

A very quick check is to run top (or swapon -s / swapinfo if your box hasn't got top).
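vmstat gives the same answer without top (a sketch: the page-out column number below, $9, is an assumption and differs between platforms, so check your vmstat header line first):

```shell
# Watch five one-second vmstat samples and flag any non-zero
# page-out activity. Column 9 is assumed to be the "po" (page-out)
# field; verify against your vmstat header before trusting it.
vmstat 1 5 | awk 'NR > 2 && $9 + 0 > 0 { swapping = 1 }
                  END { print (swapping ? "page-outs seen: likely swapping" : "no page-out activity seen") }'
```

Sustained non-zero page-outs under normal load is the classic sign of an oversized SGA.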


I'd tend to disagree regarding RAID 5 being slow compared to RAID 10.

RAID 5 does striping (RAID 0) and calculates parity using XOR.  The parity stripe is written to the third disk, assuming three disks total.  The "expense" here is having to calculate (and write) parity; disks being equal, the only expense should be the CPU time.  Assuming a good RAID controller with on-board processors, RAID 5 should be able to keep up just fine.
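The XOR parity idea fits in a few lines of shell (a toy demo with two data bytes; real controllers do this bytewise across whole stripes):

```shell
# RAID 5 parity is the XOR of the data blocks. With parity stored,
# any single lost block can be rebuilt by XOR-ing the survivors.
d1=170                                # data byte 0xAA
d2=85                                 # data byte 0x55
parity=$((d1 ^ d2))                   # 0xFF
echo "parity: $parity"                # prints 255
echo "rebuilt d1: $((parity ^ d2))"   # prints 170
```

This is also why losing any single disk in a RAID 5 set is survivable: the missing stripe is just the XOR of the remaining ones.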

RAID 10 is striping (RAID 0) plus mirroring (RAID 1).  You are writing the same data, twice, to two striped arrays.  The only difference here is instead of calculating parity, you're writing twice (in parallel), which should have a lower CPU cost.  But again, this should only matter if your RAID controller is not doing the work and/or not keeping up.

So yes, RAID 10 should be slightly faster.  However, RAID 5 can be expanded to an infinite number of hard drives.  As you expand the array to include more drives, the speed will increase significantly.  This is something that RAID 10 cannot do.  What you run into instead is RAID 15; RAID 1 (mirroring) between RAID 5 arrays.

These are minor details, but the devil is in the details...
Oracle writes 8 KB blocks.

This means that RAID 5 pulls one stripe from each member disk (16 KB typical) and writes two stripes; in the same case RAID 10 reads one stripe and writes two.

Feel the difference.

If the stripes are too small, the SCSI bus for the RAID member disks gets overloaded, so this gets even worse.

My ten years of advanced computing experience shows that all RAID 5 controllers heat up faster and work slower than a normal single disk, due to sub-optimal, RAM-consuming store-read-write buffering that no operating system or database can accommodate. Some recent controllers have still not caught up with the command queuing in SCSI-2 disks, or even SATA (the same queuing has been available for eleven years or so), which makes things even worse.
The problem is that they are so expensive that SMEs buy them and cannot ensure quality using their own resources.
Go figure it out yourself: pull a disk from the RAID and dump the modes afterwards.

I was under the impression that RAID 5 did not write in parallel, but rather in sequence...

And striping will write in parallel?!

Thus RAID 10 would have better write performance but about the same read performance?!

Or do I need to go back to the books... ;-)


That is the essence.

Let us imagine a RAID 5 array of X disks with a 128 KB stripe size.
Now you write 4 KB (typical for a database).
It reads 128 KB from X disks.
After that it writes 128 KB to two disks.
Is this what you want???
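The small-write penalty the thread keeps circling can also be put in per-request numbers (a sketch using the classic read-modify-write model, which charges RAID 5 one read of old data and old parity per small write; whole-stripe reads as described above would be worse still):

```shell
# I/O cost of one small database write:
#   RAID 5 : read old data + read old parity + write new data + write new parity
#   RAID 10: write the block to each of the two mirrors
raid5_ios=$((2 + 2))
raid10_ios=2
echo "RAID 5 : $raid5_ios I/Os per small write"    # prints 4
echo "RAID 10: $raid10_ios I/Os per small write"   # prints 2
```

So for a small-block random-write workload like an OLTP database, RAID 5 costs roughly twice the back-end I/Os of RAID 10, independent of controller quality.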

