Solved

Cost is high, but the query is faster. How?

Posted on 2014-09-07
314 Views
Last Modified: 2014-09-15
Dear Experts,

My application is full of queries with cost > 2000.

But the application is still faster.

Which criteria and which queries should I consider for tuning?
Why is the cost high while the query is fast?
Question by:sakthikumar
6 Comments
 
LVL 73

Expert Comment

by:sdstuber
Cost doesn't represent speed.

It's an internal number that has no meaning outside of the optimizer.

You could consider it a design bug that the number is exposed at all.


Also, even if you try to decipher some meaning from cost, remember that the optimizer works from a collection of estimates, and sometimes those estimates are wrong. So, again, don't rely on it.
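If you want to see where that number actually shows up, here is a minimal sketch (the table name and predicate are placeholders, not from your application):

  EXPLAIN PLAN FOR
    SELECT * FROM orders WHERE customer_id = 100;

  -- The COST column in this output is the estimate being discussed;
  -- it is not a timing.
  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Compare that with an actual run (SET TIMING ON in SQL*Plus) and you will often find no useful correlation between the two.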
 
LVL 37

Expert Comment

by:Gerwin Jansen
>> But the application is still faster
Faster than what? Before you made some changes? If so, what were the changes? Do you have indexes? Do you gather statistics on a regular basis?
 

Author Comment

by:sakthikumar
Yes, statistics are gathered on a regular basis, but indexes might not be available.

So how can I evaluate the performance of a query and decide whether it needs tuning?
Where can I find the poorly performing queries?
 
LVL 34

Expert Comment

by:johnsone
The best place to find "poorly performing" queries is the users.  If they aren't complaining about it, then it isn't a problem.

What you think is statistically a poorly performing query may be within an acceptable limit to a user.  Why waste time improving a query whose result nobody will see or care about?

You could go through every query that comes through the database and try to improve it.  That seems like a waste of effort.
 
LVL 73

Assisted Solution

by:sdstuber
sdstuber earned 250 total points
In the absence of user input, ask your system admins.  If the system load were to grow, what would break first?  Would you run out of memory, CPU, or I/O?  Then check your AWR, Statspack, or other monitoring tools to find the queries that are the biggest consumers of those constrained resources.
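For example, a rough sketch of pulling the top buffer-get consumers out of the shared pool (swap the ORDER BY to disk_reads or cpu_time if I/O or CPU is your constrained resource):

  SELECT *
    FROM (SELECT sql_id,
                 executions,
                 buffer_gets,
                 disk_reads,
                 ROUND(cpu_time / 1e6, 1)     AS cpu_secs,
                 ROUND(elapsed_time / 1e6, 1) AS elapsed_secs
            FROM v$sql
           ORDER BY buffer_gets DESC)  -- order by whichever resource hurts most
   WHERE ROWNUM <= 20;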

Or, rather than looking at specific queries to tune, look at design issues like parse-to-execute ratios on an OLTP system.  If you have a high parse ratio, then you probably have developers who need to learn to use bind variables.
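A quick sketch of checking the instance-wide parse-to-execute ratio from v$sysstat (session-level figures live in v$sesstat):

  SELECT ROUND(100 * MAX(CASE WHEN name = 'parse count (total)' THEN value END)
                   / MAX(CASE WHEN name = 'execute count'       THEN value END), 1)
           AS parse_to_execute_pct
    FROM v$sysstat
   WHERE name IN ('parse count (total)', 'execute count');

A figure close to 100% on an OLTP system usually points at literals being used instead of bind variables.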

But these are secondary to what johnsone said.  Go by your users first, specifically the processes that are costing your company money.  If you see a query that you think is bad but the customers don't care, then move on to something they do care about.

If you have a shared system, then things might get a little more complicated.  If users in db1 are complaining about a memory-intensive query, you might want to look at db2 on the same server and tune some queries there that are memory hogs.
 
LVL 4

Accepted Solution

by:jtrifts
jtrifts earned 250 total points
Cost is essentially the optimizer's estimate of the resources a query will consume, the number you see in its explain plan. There are no true units to the cost.

The problem with the notion of cost is that it gives the developer or DBA the false impression that small means fast and large means slow. That is not the case.

Nor is it necessarily the case that small means good and large means bad -- though most developers will aim for a lower number in general.

It is entirely possible, and in fact happens all the time, that queries with higher costs return results in less time than those with smaller costs. However, high-cost queries place a greater burden on the system when many of them are executed at the same time, which is why the usual practice is to seek lower-cost queries.
As part of performance tuning of the wider DB, attacking high-cost queries can have an overall positive impact on your DB performance (but, as I say, it might not actually speed up specific queries).
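As an illustration only (ORDERS and CUSTOMER_ID are made-up names, not from your schema), you can compare the two yourself:

  EXPLAIN PLAN FOR
    SELECT /*+ INDEX(o) */ * FROM orders o WHERE customer_id = 100;
  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);   -- note the COST column

  EXPLAIN PLAN FOR
    SELECT /*+ FULL(o) */ * FROM orders o WHERE customer_id = 100;
  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);   -- typically shows a higher COST

If CUSTOMER_ID = 100 matches a large share of the rows, the "expensive" full scan can easily finish first when you time both runs with SET TIMING ON.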

In terms of criteria for tuning, you should consider addressing the problem areas. How do you determine those? Do you have processes that are taking a long time to run? Are they getting longer and longer, such that jobs are at risk of not completing within their available time windows?
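If you have AWR (Diagnostics Pack licence required), one way to sketch that trend is to chart a job's main statement across snapshots; '&sql_id' below is just a placeholder:

  SELECT sn.begin_interval_time,
         st.executions_delta,
         ROUND(st.elapsed_time_delta
               / NULLIF(st.executions_delta, 0) / 1e6, 2) AS avg_elapsed_secs
    FROM dba_hist_sqlstat  st
    JOIN dba_hist_snapshot sn
      ON  sn.snap_id         = st.snap_id
      AND sn.dbid            = st.dbid
      AND sn.instance_number = st.instance_number
   WHERE st.sql_id = '&sql_id'
   ORDER BY sn.begin_interval_time;

If the per-execution time keeps creeping up from snapshot to snapshot, that job is a tuning candidate before it blows its window.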

Do you have users complaining that certain actions take too long (or that they execute something then have to get up and go get a coffee)?

Tuning activities can often be a process of user management -- what the focus areas should be often comes down to who is evaluating the benefit.
End users want screens to get data returned to them quickly.  
Accountants / the finance department would want you to ensure that they do not have to upgrade tin (hardware).  
Managers often want their reports to get to them faster.
Executives want their BI data loaded quickly so they are better able to get their strategic reporting from warehouses.

At the end of the day -- you will get the most value out of addressing the issues that affect the most (or the most *important*) people.
If it is the end users who hate using your system because their most common tasks take ages to complete due to poor performance, that is a good candidate.
If the end users love the system, but managers are frustrated because the batch jobs often don't finish before the batch window slips into office hours, you might need to tackle the batch jobs.
If your BI team is complaining that they cannot load the warehouse because your ODS load takes forever, go there.

Knowing your users and communicating with them to understand what makes them tick (and what pisses them off) is key to your role as a consultant. Happy users mean more business, more $$$, more contracts.

From a tuning point of view, it has commonly been believed that 80% of performance issues are found at the application layer. In our case that means in SQL, PL/SQL, etc.

After that, you can look at tuning the DB itself. After that you can look at the OS and then H/W.

With any tuning exercise, you need to have a goal in mind. You can tune until the cows come home -- but if the system is already *good enough*, leave it alone and find something to do that makes your client happy.