
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 167

Slow database performance

After moving to the new data center, overall database performance is noticeably slower. A process that used to finish in 3 minutes now takes 6-7 minutes. How can I determine whether it is the database or the network that is slow? The database is now in a new data center, but the database itself is unchanged: same server, memory, CPU, SGA, PGA, etc.
This is Oracle Database 11g Release 2 on Sun Solaris.
2 Solutions
Joseph Gan (System Admin) commented:
Check the network configuration. The database is just software running on the server; it shouldn't change after relocation, and the server hardware is the same, as you mentioned.

The only thing that changed here is the network. Check ports, switches, cables, etc.
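The network check suggested above can be sketched as a quick latency probe. This is a minimal sketch, not a definitive diagnostic: `DB_HOST` is a placeholder (127.0.0.1 is used only so the script runs anywhere), and ping's summary format varies slightly between Solaris and Linux.

```shell
#!/bin/sh
# Hedged sketch: measure round-trip latency from the client to the database
# server before blaming the database itself. DB_HOST is a placeholder --
# substitute the real database server's hostname.
DB_HOST=${DB_HOST:-127.0.0.1}

if ping -c 5 "$DB_HOST" > /tmp/ping.out 2>&1; then
    # ping's summary line looks like "min/avg/max = a/b/c ms"; a jump from
    # sub-millisecond (same LAN) to tens of milliseconds (WAN) can easily
    # double the runtime of a chatty SQL*Net job.
    avg=$(awk -F/ '/min\/avg\/max/ {print $5}' /tmp/ping.out)
    echo "avg rtt to $DB_HOST: ${avg:-unknown} ms" | tee /tmp/net_check.txt
else
    echo "cannot reach $DB_HOST" | tee /tmp/net_check.txt
fi
```

A large round-trip time on its own does not prove the network is the bottleneck, but comparing this number between the old and new data centers is a cheap first discriminator.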
If this process is local to the machine it runs on (i.e., it does not communicate with any other piece of equipment across the network), then despite your claim that the hardware is the same, I would have to disagree.  One factor to take into consideration is the disk configuration of the device.  Assuming you are running everything natively on local drives, there can be a rather significant impact from how the hard drives are configured (RAID 1 vs. 5 vs. 10, etc.).
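The disk-configuration point above can be checked with a crude sequential-write test. This is only a rough sketch: `TARGET_DIR` is a placeholder that should point at the filesystem holding the Oracle datafiles (`/tmp` is a stand-in so the script runs anywhere), and `dd` throughput is a blunt instrument compared to a real I/O benchmark.

```shell
#!/bin/sh
# Hedged sketch: compare raw sequential-write throughput on the old and new
# servers. RAID 5 typically shows much lower write throughput than RAID 10
# on the same disks because of parity overhead.
TARGET_DIR=${TARGET_DIR:-/tmp}

# Write 100 MB and let dd report its own throughput figure on stderr.
dd if=/dev/zero of="$TARGET_DIR/dd_test.img" bs=1048576 count=100 2> /tmp/dd.out
cat /tmp/dd.out
rm -f "$TARGET_DIR/dd_test.img"
```

Run the same command on both servers against the datafile filesystems; a large difference points at storage layout rather than the database or the network.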

Additionally, unless you actually moved the physical server from the old to the new datacenter, then it is not on the same hardware.  There could very likely be something different about the server it is on vs. the one it was moved from.  Even something as seemingly insignificant as the OS version and the versions of the software running on the server can make a significant impact on a system.

Now if we assume it is the identical hardware, software, and OS patch levels and the ONLY thing different is the network and the process that is running slow communicates across the network, then I would agree 100 percent with ganjos.  Check the bandwidth (run some tests) to ensure that you are getting what you expect between the client and the database server.  

My gut says you have either a network hardware problem, as ganjos suggested; a network bandwidth problem (possibly a device between the server and the client, if the client is not located in the data center as well); or, what I would truly look at, a difference in the server specifications.  I have seen data centers boast of identical hardware, only to find they were quoting virtualized servers, which may or may not run as well as they would on native hardware.
slightwv (䄆 Netminder) Commented:
>> it shouldn't change after relocation,

In addition to the last post about poorly configured disk systems,

I also have to disagree a bit with the first post.
How was the database moved?

Could be that statistics are now stale.
Could be invalid objects/indexes.  Maybe some indexes didn't get recreated?
Are you 100% sure ALL patches from the old system have been applied to the new system?  "11gR2" alone might not be good enough.
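The stale-statistics and invalid-object checks above can be gathered into one SQL*Plus script. This sketch only generates the script; the queries use the standard 11g data dictionary views (`DBA_OBJECTS`, `DBA_TAB_STATISTICS`), and `APP_SCHEMA` in the commented-out regather step is a placeholder schema name.

```shell
#!/bin/sh
# Hedged sketch: write a SQL*Plus script covering the two checks above.
# Run it as a DBA:  sqlplus / as sysdba @/tmp/post_move_checks.sql
cat > /tmp/post_move_checks.sql <<'EOF'
-- Objects left INVALID after the move (recompile or investigate)
SELECT owner, object_type, object_name
  FROM dba_objects
 WHERE status = 'INVALID';

-- Tables whose statistics the optimizer considers stale
SELECT owner, table_name
  FROM dba_tab_statistics
 WHERE stale_stats = 'YES';

-- If statistics are stale, regather for the affected schema, e.g.:
-- EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APP_SCHEMA');
EOF
echo "wrote /tmp/post_move_checks.sql"
```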

Geert Gruwez (Oracle DBA) commented:
All the same? It should be on newer hardware, so technically it should run faster.
When moving or upgrading a database, I often reorganize some tables using ALTER TABLE ... MOVE.

Indexes get the status UNUSABLE in that case and also need to be rebuilt.

Besides that, is any software running, such as an antivirus, that is badly (or not) configured?
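The unusable-index check mentioned above is commonly done by generating the rebuild statements from the dictionary. This sketch only writes the SQL*Plus script; review the generated statements before running any of them, since rebuilds take time and locks.

```shell
#!/bin/sh
# Hedged sketch: generate ALTER INDEX ... REBUILD statements for any index
# left UNUSABLE (e.g. after ALTER TABLE ... MOVE), using DBA_INDEXES.
cat > /tmp/rebuild_unusable.sql <<'EOF'
SET PAGESIZE 0 FEEDBACK OFF
SPOOL /tmp/rebuild_unusable_run.sql
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' REBUILD;'
  FROM dba_indexes
 WHERE status = 'UNUSABLE';
SPOOL OFF
-- Review /tmp/rebuild_unusable_run.sql, then run it:
-- @/tmp/rebuild_unusable_run.sql
EOF
echo "wrote /tmp/rebuild_unusable.sql"
```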
Oranew (Author) commented:
I appreciate all of you for your valuable time and advice. We did not rebuild any indexes, as the database was shut down and all files (including redo, temp, and control files, and init.ora) were copied to the new server using scp, and the database was then opened. The OS is exactly the same, with all OS patches, kernel parameters, ulimit settings, etc., and the database home is the same; the opatch lsinventory results match.
slightwv (䄆 Netminder) Commented:
I would definitely look at underlying hardware differences first.

I never thought about virus scanning but fully agree with the above post, that would kill a system if every update to a datafile caused the datafile to be rescanned.

Look at the OS for processes using excessive cpu and/or disk io.
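That OS-level check can be sketched as a quick snapshot of the top CPU consumers. On Solaris you would normally reach for prstat (and `iostat -xn` for disk I/O); portable `ps` options are used here only as a stand-in so the sketch runs anywhere.

```shell
#!/bin/sh
# Hedged sketch: list the ten processes using the most CPU right now.
# On Solaris, prefer:  prstat -s cpu -n 10   and   iostat -xn 5
ps -eo pcpu,pid,comm | sort -rn | head -10 | tee /tmp/top_cpu.txt
```

An antivirus scanner or a runaway agent near the top of this list, present on the new server but not the old one, would explain the slowdown without the database being at fault.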

Can you explain what the process that now takes 6 minutes does?

If it is pulling Gigabytes of data from the server to a local machine and it went from a local Gigabit network connection to a WAN connection, then it may be network.

I would look at the database server first, then keep expanding until you find the 'issue'.

For example:
If you are pulling massive amounts of data, try it directly on the database server (write the data locally not using the WAN).  If that performance is acceptable, then try ftp'ing/copying a file from the database server to the local machine NOT using any database tools/products.
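The isolation test described above can be sketched as two timed steps: a bulk copy done locally on the database server, then the same file transferred over the network with no database tools involved. `CLIENT_HOST` is a placeholder and the scp step is left commented out, since it needs a reachable client machine.

```shell
#!/bin/sh
# Hedged sketch: if the local step is fast but the transfer is slow, the
# network (not the database) is the bottleneck.
SRC=/tmp/bulk_test.dat
dd if=/dev/zero of="$SRC" bs=1048576 count=50 2>/dev/null

# Step 1: local copy on the server (no network involved)
t0=$(date +%s)
cp "$SRC" /tmp/bulk_test_local.dat
t1=$(date +%s)
echo "local copy: $((t1 - t0))s" | tee /tmp/isolate.txt

# Step 2: move the same file to the client over the network
# (uncomment and set CLIENT_HOST to run it, then compare the timings)
# scp "$SRC" "$CLIENT_HOST:/tmp/"
rm -f "$SRC" /tmp/bulk_test_local.dat
```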
Oranew (Author) commented:
We moved back to the old data center until we can thoroughly test what the issue is.
I will update this question with what we find. I appreciate everyone who participated in the discussion and tried to help me with this issue.
