chriseebh

asked on

Sybase Performance

We hit a deadlock situation roughly every two weeks; the application keeps moving, but very slowly, and I could not verify why.
I have no access to the box. At the time, the transaction log had filled up. It is currently set to 1 GB in size, but I feel that increasing it to 2 GB will not resolve the situation.

Sybase recovers by itself after the peak hours. The server is running with default settings.

The only maintenance tasks are:
1.      Update statistics followed by sp_recompile on all tables in the apps database
2.      DBCC checkstorage done on the SSR database
3.      Normal DBCCs done on the other databases, e.g. master
4.      reorg compact run on all DOL tables that meet the criteria (1% or 1000 rows can be reclaimed)
5.      As of next weekend, reorg rebuild will be run on specified DOL tables

The database has no scheduled restarts.

The Sybase configuration for deadlocks is the default configuration, shown below (the "Configuration option is not unique" message is sp_configure reporting that the name given matched several options).
 
 Parameter Name              Default  Memory Used  Config Value  Run Value
 --------------------------  -------  -----------  ------------  ---------
 deadlock checking period        500            0           500        500
 deadlock pipe active              0            0             0          0
 deadlock pipe max messages        0            0             0          0
 deadlock retries                  5            0             5          5
 print deadlock information        0            0             0          0
 
(1 row affected)
(return status = 1)

CPU averaged 20-30%, rising to 30-40% at peak hours.

Number of 'open databases' configured:  12
Actual number of databases in server:  9
'OpenDbRemaining' parameter's average:  9.98

Average connections used: 49.236486 (9%)
Average connections free: 550.763514 (91%)
Total 'number of user connections' configured: 600
       
Maximum connections remaining: 596
Minimum connections remaining: 0

During that period the transaction log filled up, and in the end the application was aborted because of the db option "abort tran on log full".

Is there anything I might have missed? Any suggestions are welcome.
The last option I have not tried is turning on sp_sysmon. The problem occurs roughly every two weeks with no fixed day, so when it does occur, will turning on sp_sysmon help? In three incidents so far, a deadlock was encountered and my application hung.
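From what I have read, sp_sysmon samples the server over a fixed window, so I could kick it off from isql when the slowdown is noticed; something like this (the 10-minute window is just a guess on my part):

    -- sample server activity for 10 minutes; the Lock Management
    -- section of the report includes deadlock counts
    exec sp_sysmon "00:10:00"
    go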
ChrisKing

re the logs:
are you running with "trunc log on chkpt" set? if yes, what is the checkpoint period (maybe increase the frequency of this)
if not, how often are you running a "dump tran"? (maybe increase the frequency of this)

of course, if you are running very large transactions, neither of these will help; you either need to increase the log size or reduce the transaction size. Reducing the transaction size will probably reduce the deadlocks too.

can you give us an example of the transactions being run, and the number of rows and duration that they run for?
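you can check both of the above from isql; a rough sketch, assuming the database is the "apps" one you mentioned:

    -- "trunc log on chkpt" will show in the status column if it is set
    exec sp_helpdb apps
    go
    -- then from inside the database, see how full the log is
    use apps
    go
    exec sp_helpsegment logsegment
    go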
chriseebh

ASKER

I may need to check "trunc log on chkpt" again, but won't increasing the checkpoint frequency cause additional checks and thus more deadlocks? Correct me if I am wrong.

dump tran frequency - how do I check? any command to run?

Could you elaborate on what you mean by transaction size? Are you referring to the file that is fed into the system, which determines how many rows get "inserted"? How do I check?
ChrisKing

deadlock checks are tied to the "deadlock checking period" you mention above, not the checkpoint
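both values can be inspected with sp_configure, and "print deadlock information" can be switched on to get the deadlock chains written to the errorlog; a sketch using the option names from your output above:

    -- how often (in ms) the server searches for deadlocks
    exec sp_configure "deadlock checking period"
    go
    -- write details of each deadlock to the errorlog
    exec sp_configure "print deadlock information", 1
    go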

the "dump tran" will be someting you will be running from a crontab (or may an 'at' command on windows)
if you are not aware of it, I am sure you won't be running it.
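such a job would be little more than this statement fed to isql from the crontab (the dump file path here is made up, use whatever device you dump to):

    -- back up and truncate the apps transaction log
    dump transaction apps to "/backups/apps_tran.dmp"
    go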

the transaction size is the volume of inserts/updates/deletes performed within a single transaction (begin tran ... commit)
eg DELETE FROM mytable WHERE date < '1-jan-2003'
might delete 2 million rows and take 15 minutes to run
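the usual trick to cut a statement like that down is to cap each transaction with set rowcount and loop until nothing is left; a sketch against the mytable example above:

    declare @rows int
    select @rows = 1
    set rowcount 10000            -- touch at most 10,000 rows per pass
    while @rows > 0
    begin
        begin tran
        delete from mytable where date < '1-jan-2003'
        select @rows = @@rowcount -- capture before anything resets it
        commit tran               -- short transactions hold the log (and locks) briefly
    end
    set rowcount 0                -- back to the default, no limit
    go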

How are you feeding the file into the system? BCP or some other program?
Is this what is causing the deadlocks?
chriseebh

ASKER

Thanks for the info. I am not really good at Sybase; I am more comfortable and familiar with Oracle.
It is a SunOS.

I will try to run a transaction and see how long it takes... I am working on a script to time the system.

We feed the file into Sybase through an external program.

The program is not the one causing the deadlocks. Rather, numerous events trigger the problem, and in the end the system becomes very slow; after the peak hours it recovers by itself without any changes made to the database. I suspect there is contention but do not know how to find the culprit.
ASKER CERTIFIED SOLUTION
ChrisKing
ps. don't run your own transaction; try to get the stats of the job they are running.

BTW, a deadlock every 2 weeks is nothing, but it sounds like the app does not handle them correctly. What is your app doing when it gets a deadlock error?
chriseebh

ASKER

the apps do handle the deadlocks: they wait till the lock on the table is released before using it again

I believe deadlocks are normal, so I am out of wits, as this behaviour is unusual.
ChrisKing

that sounds like the issue!

that is not how to handle a deadlock; there should be no waiting for locks to be released.

On getting the deadlock the app should:
1. realise that the current transaction has been lost (rolled back by the server)
2. if processing under multiple connections, take an appropriate action on those connections
3. increment its internal deadlock retry count
4. if the maximum retries has been reached, abort
5. begin a new transaction
6. start processing again from the last begin tran

it sounds like your app is waiting indefinitely for locks to be released, hence "hanging" (and it is probably waiting for a lock that it is itself holding on another connection).

Let Sybase do the deadlock handling for you; don't wait for the locks to be released. Sybase will tell you if there is still an issue, or will make you wait until the other app releases those locks.

When the app next hangs, run sp_lock and determine who is holding the locks that it is waiting for.
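besides sp_lock, the blocked column in sysprocesses pairs each waiting spid with the spid holding the lock (sp_who shows the same in its blk column); a small query to run at the moment of the hang:

    -- who is waiting, and which spid is blocking them
    select spid, blocked, cmd, status
    from master..sysprocesses
    where blocked != 0
    go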
chriseebh

ASKER

but when I run sp_lock, I cannot see anything "locking", even when the error is prompted, so I believe this is not the case.

I am in the midst of writing a script to run every 15 mins and pipe to a log. I will place it there for next week; it should tell me something.

Before trimming the script, is there anything I should look out for that will help me investigate?
ChrisKing

what is in the script?

if sp_lock shows nothing and the app is still waiting, then I say that is the bug: remove the wait logic from the app, it is not working and is not required
chriseebh

ASKER

doing some count(*) of the rows on the tables, which tells me if my apps are still running, though "slow" I can't tell;
checking the database for locks and transaction log size at 15-minute intervals

I have an issue at another site with Sybase too, where the system was never shut down.
The only thing I noticed was that the server's physical memory was running low, as the Sybase server "consumed" almost 75%... so I recommended rebooting, and after the reboot I have had no problem with them ever since. That is on an HP box.

I always have a bad time with Sybase and Sun together...

I may recommend that the DBA reboot, as he has never done it before on this box.

Looking back at the code, the "wait" logic was found to be "sybase" handling... not in our apps.

ChrisKing

select count(*) will take a while to run and might cause blocking
instead use:- exec sp_spaceused 'tablename'

for logs, use:- exec sp_helpsegment logsegment
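so the 15-minute script could be little more than the following, piped through isql from cron ('mytable' is a placeholder for whichever tables your app hits):

    use apps
    go
    exec sp_helpsegment logsegment     -- log space free vs used
    go
    exec sp_spaceused 'mytable'        -- row count without a table scan
    go
    select spid, blocked, cmd, status  -- anything blocked right now?
    from master..sysprocesses
    where blocked != 0
    go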

> Looking back at the code, the "wait" logic was found to be "sybase" handling... not on our apps.
can you explain any further?
any luck?
is there anything else?
chriseebh

ASKER

No luck at all. The problem, as expected, occurred again.

Went on site and could not find anything. Only 6% of deadlocks were utilised.
Found deadlock errors in the apps log, but the apps were still functioning. Number of deadlocks = 50,000
ChrisKing

maybe you should look at DOL (data-only locking).

it is great at reducing deadlocks BUT it does slow down the locking (so be warned). Don't go overboard; start with the top 1 or 2 tables in regard to deadlocks.
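switching a table's locking scheme is a one-line alter; a sketch with a placeholder table name (note the server rebuilds the table when going from allpages to datarows, so do it in a quiet window):

    -- move one hot table from allpages to row-level locking
    alter table mytable lock datarows
    go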
chriseebh

ASKER

Problem identified: the server's memory is low, and there are some system performance issues.

No. of deadlocks: 6% used; 50,000 to increase another 5%
The transaction log should be 2GB instead of 1GB