
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1301
  • Last Modified:

Oracle JDBC Performance : Query Tuning

Oracle 8.1.6, JDBC, JSP, Tomcat3.2.3, JDK1.3.1, 10-15 Concurrent Users

My application is like this:
"JSP -> JDBC -> Oracle_Thin/OCI8_Driver -> Oracle8.1.6"

I am facing a performance problem and don't know what specifically is causing it. I have the following questions:

[1]. When the application has been running for some time, the performance keeps going down. How can I find out which queries are consuming how much time? Is there any way to learn from Oracle what time each statement (select, insert, update, trigger, etc.) has taken (a history table), so that I can analyze it?

[2]. Since my application allows the users to build their own queries, users sometimes end up writing queries that take a huge amount of time, causing the system to almost hang so that the other users suffer. How do I set a query timeout at the Oracle level as well as at the Java level (setQueryTimeout(int seconds)?).

[3]. Will the Oracle resources be released automatically when a query execution is timed out?

[4]. How can I find out from Oracle how many cursors are open at a given point in time?

[5]. In general, what steps should one follow to fine-tune the database communication (Java + Oracle)? I have around 15 main tables that are being populated. Currently the largest of these tables hold only about 3,000 records. There are 15 corresponding audit tables that I populate for every record update in those 15 tables. Apart from this I have about 40 master tables of 5-20 records each, used for choice lists in the GUI and for referencing from the 15 main tables. I have created no indexes; just plain vanilla tables. Please suggest performance tuning points, tools, etc.

Best Regards
1 Solution
Guy Hengel [angelIII / a3]Billing EngineerCommented:
[1] set up the TopSession utility from the Oracle CD; this should give you great insights...

[2] you should not let the users build the complete query; only give them the option to add parameters. Otherwise you will always end up in a situation where resources are badly used. Using a timeout doesn't really help, because some queries WILL legitimately take more time than others...

[3] yes, although not necessarily immediately. Some items can be shared; rows that were read from disk into memory will stay there until replaced by other data.

[4] you can count open cursors per session from v$open_cursor:

SELECT sid, user_name, COUNT(*)
    FROM v$open_cursor
    GROUP BY sid, user_name
    ORDER BY COUNT(*) DESC;   -- or ORDER BY user_name

[5] tuning is a difficult and ongoing process. That said, you should have indexes on your tables, at least the implicit index created by the primary key.
If you have the option, you should have your database files into several tablespaces on different disk drives/controllers.
Ensure your application only queries for data it REALLY needs, limiting the query in rows and columns the user will see at some point. Don't query the whole table to finally display only 1 record...
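For example, a user-built query can be capped to the rows actually displayed with a ROWNUM filter around an ordered inline view (a sketch; orders is a hypothetical table):

```sql
-- Fetch only the first 50 rows of the (ordered) result:
SELECT *
  FROM (SELECT order_id, status FROM orders ORDER BY order_id)
 WHERE ROWNUM <= 50;
```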


If my understanding is correct...

Your tables are very small, so you may not need indexes at all!
Your problem may be in memory resources.

You may consider tracing the heaviest sessions:
SQL> connect internal
SQL> alter system set timed_statistics=true;
SQL> execute dbms_system.set_sql_trace_in_session(SID, SERIAL#, TRUE);

The SID and SERIAL# values can be selected from the v$session view.

After that you get a dump file (its location is the user_dump_dest init.ora parameter), for example 1234.trc. Format it with:

tkprof 1234.trc 1234.txt

After that you can see in the 1234.txt file statistics about your SQL statements and their execution times.

But in the general case you should read the related Oracle documentation, "Designing and Tuning for Performance".

Best regards!
for [2] and [3], you can limit the resource utilization per Oracle user account by creating profiles and assigning them to users.

see below for more...

for [4],

select a.sql_text
from v$session s, v$open_cursor o, v$sqlarea a
where s.saddr=o.saddr and
      s.sid=o.sid and
      o.address=a.address and
      o.hash_value=a.hash_value and
      s.schemaname='<the name of the schema>';

You can also limit the maximum number of open cursors by setting open_cursors in your init<sid>.ora. This takes some pressure off your system resources, but it will keep sessions from executing new queries once the limit is exceeded.
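To see how close each session is to that open_cursors limit, a query like the following against the dynamic performance views should work (assuming SELECT access to the v$ views):

```sql
SELECT s.sid, s.username, st.value AS cursors_open
  FROM v$sesstat st, v$statname n, v$session s
 WHERE st.statistic# = n.statistic#
   AND n.name = 'opened cursors current'
   AND s.sid = st.sid
 ORDER BY st.value DESC;
```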


In addition to the fine comments in the above posts, as you rightly guessed indexes on those tables having > 3000 records might help - but it is really hard to say offhand.

SGA size, shared pool size, system real memory, I/O contention, rollback segment sizing - to name just a few - are what one would look at in the tuning process.


Mark GeerlingsDatabase AdministratorCommented:
It sounds like you have a relatively small Oracle database and relatively few users, so it should be possible to get very good response times even on a modest server.  Oracle tables with only a dozen or two dozen rows may not benefit from indexes (at least from a performance perspective; unique indexes can still be handy for data integrity, to prevent duplicates or to support referential integrity).  All Oracle tables that have a couple hundred or more rows, though, should have at least one index, or performance will suffer.

Can you give us some details about your database server: number and speed of CPUs; amount of RAM; number of disk drives; RAID or non-RAID; number of other applications running on the server; etc.?  Also please tell us your Oracle SGA size and the SGA breakdown (how much for shared_pool (variable size), how much for db_block_buffers (database buffers), etc.).
javaq092999Author Commented:
p_yaroslav and All

I am not able to execute SQL> dbms_system.set_sql_trace_in_session(1,1)

Still Looking For
How to set the query timeout before running a query at the SQL prompt.

My Oracle Details
(Plain Vanilla Desktop :)) Windows 2000, 1 Celeron 466 CPU, 384MB RAM, 10GB HDD, Non-RAID, Oracle 8.1.6, No application running apart from Tomcat, Oracle and my JSP application inside Tomcat

Total System Global Area            68162828 bytes
Fixed Size                             70924 bytes
Variable Size                       40464384 bytes
Database Buffers                    27549696 bytes
Redo Buffers                           77824 bytes
db_files = 1024
open_cursors = 100
db_file_multiblock_read_count = 8
db_block_buffers = 3363
db_block_size = 8192
shared_pool_size = 15728640

1) To execute dbms_system.set_sql_trace_in_session(1,1) you must have administrative privileges or an execute grant on this package.
A second way to obtain a trace is to alter the session by putting additional code in it:

execute immediate 'alter session set sql_trace=true';

2) You may set the following parameter in the server's sqlnet.ora file:
SQLNET.EXPIRE_TIME=10 (the server then probes each client every 10 minutes and cleans up connections whose clients have died)

javaq092999Author Commented:

1) I tried the following and got the error below (note: I am not an Oracle expert)

SVRMGR> connect internal/oracle
SVRMGR> alter system set timed_statistics=true;
Statement processed.
SVRMGR> dbms_system.set_sql_trace_in_session(1,1);
ORA-00900: invalid SQL statement

Similar error comes when tried on SQL*Plus SQL>

2) No, you have got my question wrong. I want a Query Execution Time-Out. That is, I want to set at the SQL> prompt a timeout of 120 seconds, so that if I execute a query that takes Oracle more than 120 seconds to run, Oracle aborts the query execution.


1) In more detail:
SQL> execute dbms_system.set_sql_trace_in_session(SID, SERIAL#, TRUE);

2) For a query timeout you need to use profiles for users.
In a profile you may set:
-Connect time
-Idle time
-Reads/Session ....

I think what is happening is that

1.  Cursors are not getting closed, eventually leading to memory problems.  I know that users will get errors on a Unix system, but I am not sure about NT.

2.  The shared pool is behaving like a queue (SQL statements entering and exiting).  Are you using bind variables for your queries?  (You can still use bind variables for queries generated on the fly.)

3.  Do you have enough memory?  Check whether your memory is paging/swapping.

Just monitor the open cursors and other memory use for some time and you will eventually find the culprit.
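The bind-variable point above can be sketched in Java. This is illustrative modern-Java code (the thread's JDK 1.3 would need StringBuffer instead of StringBuilder); the orders table and its column names are hypothetical, and the commented JDBC calls need a live Connection:

```java
import java.util.List;

public class QueryBuilder {

    // Build a filter query with ? placeholders instead of concatenating
    // user input into the SQL text. With bind variables, Oracle can reuse
    // the parsed statement in the shared pool instead of hard-parsing
    // every user-built variant.
    static String buildQuery(List<String> columns) {
        StringBuilder sql = new StringBuilder("SELECT * FROM orders WHERE 1=1");
        for (String col : columns) {
            sql.append(" AND ").append(col).append(" = ?");
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        String sql = buildQuery(List.of("status", "customer_id"));
        System.out.println(sql);
        // Executing it would then look like (sketch -- needs a live Connection conn):
        // PreparedStatement ps = conn.prepareStatement(sql);
        // try {
        //     ps.setString(1, status);
        //     ps.setInt(2, customerId);
        //     ps.setQueryTimeout(120);  // the Java-side timeout from question [2]
        //     ResultSet rs = ps.executeQuery();
        //     ...
        // } finally {
        //     ps.close();  // always close, or cursors leak (point 1 above)
        // }
    }
}
```

Whether setQueryTimeout is honored depends on the JDBC driver; with the old Oracle thin/OCI8 drivers it may not be reliable, which is why the server-side profile limits discussed in this thread matter.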
Mark GeerlingsDatabase AdministratorCommented:
An Oracle database running on a single 466 MHz CPU with 384MB of RAM and just one hard drive will be *SLOW*, especially if that machine also has to run the client interface!

Oracle on NT will be much faster if you can put it on a dedicated server (running nothing else) with at least three SCSI hard drives (six to 10 drives would be nice) and 512MB of RAM.  Ideally you spread out the Oracle data tablespace(s), index tablespace(s), rollback tablespace, temp tablespace, system tablespace, redo logs, and the NT swap file all on separate disks.  If they all have to contend for just one disk, nothing will be fast.

If you have to keep it on that single-disk machine with just 384MB of RAM and run other applications at the same time, you will really need to do a thorough job of tuning Oracle and use NT's Performance Monitor to check on where the bottleneck is (CPU, memory or disk).  It is most likely disk access.  To see the disk counters in Performance Monitor you will need to do:
diskperf -y
from a DOS prompt and reboot, if that hasn't ever been done on that machine.

Tuning Oracle precisely is a complex task.  The two most important variables in the init*.ora file are: shared_pool_size and db_block_buffers.  You may have to adjust those down to reduce (or eliminate if possible) NT's use of the swap file, since that is a performance killer.  Then you need to check the statements that are executed in Oracle with a tool like Oracle's trace utility (or a third-party tool) to see the execution plan and adjust them, or adjust indexes to make them as efficient as you can.  None of this is simple.
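To look at an execution plan without extra tools, SQL*Plus's EXPLAIN PLAN can be used (a sketch; it assumes the plan_table created by utlxplan.sql exists, and the query shown is a hypothetical example):

```sql
-- Create the plan table once, e.g. with @?/rdbms/admin/utlxplan.sql
EXPLAIN PLAN FOR
  SELECT * FROM orders WHERE status = 'OPEN';

-- Display the plan as an indented tree:
SELECT LPAD(' ', 2 * level) || operation || ' ' || options || ' ' || object_name AS step
  FROM plan_table
 START WITH id = 0
 CONNECT BY PRIOR id = parent_id;
```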

I had posted the URL earlier for you to see how timeouts and resources can be implemented using profiles - but since you refuse to look inside those URLs, here is a snippet:

CREATE PROFILE <profile_name> LIMIT
   CPU_PER_SESSION            <hundredths of a second>
   CPU_PER_CALL               <hundredths of a second>
   CONNECT_TIME               <total elapsed time in mins>
   PRIVATE_SGA                15K;

You can then assign this profile to different users

ALTER USER <user_name> PROFILE <profile_name>;
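One caveat worth adding: to the best of my knowledge, profile resource limits are only enforced when the resource_limit initialization parameter is enabled; otherwise the CREATE PROFILE limits are silently ignored. With enforcement on, a call that exceeds CPU_PER_CALL is aborted with an ORA-02393 error, which is about as close as Oracle 8i gets to the query timeout asked for in [2]:

```sql
-- Turn enforcement on dynamically (or set resource_limit = true in init<sid>.ora):
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
```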

By the way, did you note markgeer's comments on why a system with your configuration is not one where you would expect things to be lightning fast?

Moderators have gone a long way to close over 20 questions you left open.

Do not let this happen here.

E-E Moderator
javaq092999Author Commented:
Thank you all for the comments, especially markgeer. I picked up ideas from almost all the comments, but I am accepting this one as the answer because it is the one I used. The comments from markgeer are also worth an answer for me. Can we select multiple comments as the answer :) ?
You can post a separate question titled "Points for markgeer" and award him any number of points as you see fit. When markgeer posts a comment to the new question, just accept that as an answer

javaq092999Author Commented:
;) Good idea! But will not do that!