Solved

SQL database is not responding!  Help please...

Posted on 2004-10-05
400 Views
Last Modified: 2007-12-19
Hello,

We're having some major issues with our database.  We can navigate the website just fine, but when we attempt to do updates, the requests time out.  Some tables in the database are over 5 million records.  That's not really huge, so we don't understand what's going on.

Someone suggested dumping the database, deleting the database from the SQL Server group in Enterprise Manager, recreating the database, and then attaching the database files again.  Will that resolve anything?

How about doing a full backup and restore?  Does that defragment the database?  Or is that a waste?

Any suggestions would be great.  

Oh I already tried optimizing/shrinking/etc...seems to help for 2 minutes, then it's back to a flat-line :(

Question by:xtreme416
22 Comments
 

Author Comment

by:xtreme416
ID: 12226568
added more points :)
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 12226641
Please answer ALL of these questions:
*) do the tables have indexes/primary keys (I assume yes, just making sure)
*) what are the settings and size for the transaction log files of the database(s)
*) what are the sizes and free space for the disks the data and transaction log files reside on
*) what are the size and free space of the disks where the tempdb, master etc. databases reside
*) what is the size of the current ERRORLOG file of SQL Server
*) what is the RAM size for the server, and what is the memory config for SQL Server
*) what other applications/services are running on this server

Cheers

 

Author Comment

by:xtreme416
ID: 12226794
*) do the tables have indexes/primary keys (I assume yes, just making sure)
Yes, but not all of them.

*) what are the settings and size for the transaction log files of the database(s)
After I run the DBCC SHRINKDATABASE command, it goes down to 1 MB.  After a re-indexing command it jumps to 5 GB!!

*) what are the sizes and free space for the disks the data and transaction log files reside on
disk space is 67.7 GB with 28 GB free

*) what are the size and free space of the disks where the tempdb, master etc. databases reside
same as above - same disk.

*) what is the size of the current ERRORLOG file of SQL Server
there are a few errorlogs in the SQL Data\MSSQL$database\LOG folder.  Most are under 50 KB.

*) what is the RAM size for the server, and what is the memory config for SQL Server
the server has 2 GB RAM.  Memory for SQL is configured to "dynamic" with min at 0 and max at 2 GB.

*) what other applications/services are running on this server
nothing else, this is a dedicated SQL server.

 
LVL 21

Expert Comment

by:mastoo
ID: 12226802
I'll add a couple:

1. Run sp_who while you have an update in progress and look at the blk column for anything other than 0.
2. Check your SQL Server and system logs.  Some disk problems might show up there.

P.S. And the answer is No - a backup and restore would be pointless.
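For item 1, something like this from Query Analyzer (run it while an update is hanging; 53 below is just a placeholder for whatever spid shows up in the blk column):

  EXEC sp_who
  -- any non-zero value in the blk column is the spid doing the blocking;
  -- to see what that spid is executing:
  DBCC INPUTBUFFER (53)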
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 12226998
*) what is the RAM size for the server, and what is the memory config for SQL Server
>>the server has 2 GB RAM.  Memory for SQL is configured to "dynamic" with min at 0 and max at 2 GB.
Here you should "cut" the memory assigned to SQL Server to 1.8 GB. The reason is to leave some RAM for the OS, in order to avoid swapping :-)
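For example, from Query Analyzer (1843 MB is roughly 1.8 GB; 'max server memory' is an advanced option, so it has to be made visible first):

  EXEC sp_configure 'show advanced options', 1
  RECONFIGURE
  EXEC sp_configure 'max server memory (MB)', 1843
  RECONFIGURE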

*) what is the size of the current ERRORLOG file of SQL Server
>>there are a few errorlogs in the SQL Data\MSSQL$database\LOG folder.  Most are under 50 KB.
There is actually only 1 file called ERRORLOG; the others are called ERRORLOG.x, where x typically goes from 1 to 5.
So unless the ERRORLOG file is huge (12 GB, as I saw on another live system yesterday), there is no trouble.
Remember to check this file size from time to time, and to restart SQL Server in order to roll the ERRORLOG file.

*) do the tables have indexes/primary keys (I assume yes, just making sure)
>>Yes, but not all of them.
Why not?

*) what are the settings and size for the transaction log files of the database(s)
>>After I run the DBCC SHRINKDATABASE command, it goes down to 1 MB.  After a re-indexing command it jumps to 5 GB!!
Thanks for these explicit indications. So some tables have no indexes, and yet you are reindexing.
What is the reindexing meant for? On what schedule? What table sizes?

I assume you have a database that is not small (~20 GB I guess), and that most of this data is in a few tables. Does the reindexing occur on exactly those tables?

You should NOT set the transaction log file to a small size. If your DB has indeed 20 GB, your transaction log file(s) should also have at least 5 GB. I would recommend at least 3 transaction log files of 2 GB, each allowed to extend to a max of 5 GB in chunks of 1 GB (forget the % increase, which is rubbish).
Actually, if the reindexing requires 5 GB, and your initial transaction log file is 1 MB with 10% increase, I leave you with the exercise of how many times the transaction log file has to grow before 5 GB have been reached :-)
(since 1.1^n = 5120 gives n ≈ 90, it is roughly ninety times)
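As a sketch, the same settings in T-SQL (the database name, logical file name, and path below are made up; adjust them to yours):

  -- add one 2 GB log file that can grow in 1 GB chunks up to 5 GB
  ALTER DATABASE MyDatabase
  ADD LOG FILE (NAME = MyDatabase_log2,
                FILENAME = 'D:\SQLData\MyDatabase_log2.ldf',
                SIZE = 2048MB, MAXSIZE = 5120MB, FILEGROWTH = 1024MB)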

Check which queries behave "badly": use SQL Profiler to find the worst queries, and check/tune them with Query Analyzer. Note that sometimes the tuning can/should occur in the calling application; indexes are often the solution, but not always.
Finally, you need to check whether you have locks produced by your system (sp_lock).
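A quick check from Query Analyzer:

  EXEC sp_lock
  -- rows with Status = WAIT are blocked requests; the spid column
  -- shows which connection holds or is waiting for each lock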

Cheers
 

Author Comment

by:xtreme416
ID: 12227222
The database has many tables, but yes, probably a dozen account for most of the data.  The entire database is around 12 GB.  And we noticed that any time the log file goes over 1 GB, everything slows down considerably, to the point that the ASP scripts start to time out.

How do I set up the log file in order to optimize performance?  Where do I change the settings?  In the maintenance plan, or somewhere else?
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 12227255
Edit the database settings: there you have the settings for the data files and the transaction log files, and you will immediately see where to put the values I gave above.
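You can also see the current file sizes and growth settings from Query Analyzer with sp_helpfile (MyDatabase is a placeholder for your database name):

  USE MyDatabase
  EXEC sp_helpfile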

Again, you might also have an ASP script that needs tuning, if it generates more transaction data than necessary...

Cheers
 

Author Comment

by:xtreme416
ID: 12227334
oh ok...right now the log file is set to automatically grow by percent (10%) with Unrestricted file growth.  I'm running a SHRINKDATABASE command right now, and the log file has grown to 5 GB!
 

Author Comment

by:xtreme416
ID: 12227666
the SHRINKDATABASE command just finished...it took 45 minutes.  Is that normal?  The log file grew to 7 GB!  Should I truncate it or do anything at this point?  The website seems responsive though.

 
LVL 4

Expert Comment

by:strangelynormal1
ID: 12230728
You may want to check your update code.  You may have a bad trigger or a cartesian join.
 
LVL 34

Expert Comment

by:arbert
ID: 12232919
Personally, I'd allocate even less than 1.8 GB to SQL Server.  Are you running IIS on the same box, or does SQL have the box to itself?

Also, do you have the transaction logs on different disks/RAID groups than the data?  Are you running RAID at all?
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 12234514
>>the SHRINKDATABASE command just finished...it took 45 minutes.  Is that normal?
For a 12 GB online database this is normal. As I said: DON'T USE SHRINKDATABASE or SHRINKFILE in a normal production environment. Assign a good working size to both the data files and the transaction log files, and let the database work.
Leave some free space inside the files, because shrinking and extending the files is a VERY VERY expensive operation.
Best is: FORGET about that command!!!

>>oh ok...right now the log file is set to automatically grow by percent (10%) with Unrestricted file growth.  I'm running a SHRINKDATABASE command right now, and the log file has grown to 5 GB!
NO, those are bad settings. You might keep the autogrow setting, but DO NOT KEEP UNRESTRICTED FILE GROWTH. Set a Max Size, and change the growth from 10% to 1 GB.
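In T-SQL that would be something like this (MyDatabase and MyDatabase_log are placeholders; get the logical file name from sp_helpfile):

  ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 1024MB)
  ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_log, MAXSIZE = 5120MB)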

Don't try to cut down the transaction log file. Here's a comparison:
Imagine you have to transport tons of material. What would you use (question for 50€)?
A your private car
B a small truck
C a big truck
D a series of trucks
OK, you choose D, obviously. Now imagine your boss FORCES you to take your private car all the time. Would you be happy about that? The same applies here: let SQL Server use a well-sized transaction log, ideally several files.

Now, let's continue the comparison for the autogrow:
OK, your boss accepts that you can eventually upgrade the transportation medium if it's really not enough, while you start with 1 big truck. Obviously, if he didn't give you a MAX BUDGET, you could upgrade to anything, say 100 big trucks, which would kill the company. For SQL Server it's similar: if you let SQL Server grow the data/transaction log files without control, they WILL grow and fill the disk, which will halt SQL Server. It's YOUR job as DBA to check whether the files really need to grow, and if so, to grow the disk storage before growing the data files.

Cheers
 

Author Comment

by:xtreme416
ID: 12238054
thanks AngelIII, the analogy helped :)

I'm not a DBA, just a programmer.  But I do everything around here, which explains why nothing works! hehehe  

So in order to change the setting for the transaction log...there are 2 boxes under the Automatically grow file property.  One for file growth and the other for max file size.  If I understand correctly, I should change the max file size setting to restrict growth (MB) to 1000.  But how about File Growth?  Should I change that to "In megabytes" or keep it as percent?  If I change it to megabytes, what value should I use?  That's the incremental increase, right?  So change it to something like 10 MB?  (You said 1 GB above, which seems high, unless you were talking about the max file size.)

thanks for all the help so far!

 

Author Comment

by:xtreme416
ID: 12238073
oh and would setting a max size mean that once the log file reaches that size it would roll over and start over?  Or would it mean someone having to clean things up and shrink it??
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 12238531
If you have an initial size of 3 GB, incrementing in bits of 10 MB until reaching 5 GB is rather pointless.
The increment should be large enough to let the file grow without causing high fragmentation on disk.
So in megabytes, use values like 1024 (= 1 GB) in your case.

>>oh and would setting a max size mean that once the log file reaches that size it would roll over and start over?  Or would it mean someone having to clean things up and shrink it??
You will need to implement full and/or transaction log backups regularly (if the database is set to recovery model=full). If the database has recovery model=simple, you don't need transaction log backups and don't need to worry about the transaction log file. If the transaction log writer pointer reaches the end of the file, it will cycle back to the beginning of the file(s) again.
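For example (database name and paths are placeholders):

  -- full backup, e.g. once per day
  BACKUP DATABASE MyDatabase TO DISK = 'D:\Backup\MyDatabase.bak'
  -- transaction log backup, e.g. once per hour (recovery model=full only)
  BACKUP LOG MyDatabase TO DISK = 'D:\Backup\MyDatabase_log.bak'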

Cheers
 

Author Comment

by:xtreme416
ID: 12240645
thanks angel, that helps.

One question though:  what if we have a large running transaction set that would balloon the log file over 1 GB?  Sometimes we archive records out of the main tables into "archive" tables within the same database, just to speed up query response.

The log file is basically there if you need to back out transactions or the system needs to do a rollback for something, right?

Would the log file recycle to the beginning of the file during the running transaction set?  And what impact would setting a maximum file size have in this case?

 
LVL 142

Accepted Solution

by:
Guy Hengel [angelIII / a3] earned 400 total points
ID: 12241010
>>Would the log file recycle to the beginning of the file during the running transaction set?
No. A single transaction needs to fit into the transaction log file, although the transaction log file will expand (if allowed) to the necessary space. This growth will be restricted to the max size, if defined.

This means that you will need a transaction log file that is large enough to store the biggest transaction, but also the total maximum amount of transactions between full and transaction log backups:
if you do a full backup only once per day, the transaction log file needs to cope with an entire day of transactions, i.e. the busiest one you have in the week/month/year, whatever.
If you also do transaction log backups once per hour, the transaction log file only needs to handle 1 hour of transactions, but again the busiest of all.
All this explains why you should provide a well-sized transaction log file, without shrinking/growing it all the time, on a managed production system.

>>One question though:  what if we have a large running transaction set that would balloon the log file over 1 GB?
Actually, you won't be able to cut that total size down. You can only split the transaction into several smaller transactions and distribute them over time (each part in a different part of the day). A DBA needs to be patient!!!
Say you want to move 1 month of data to a history table. Of course, you could start 1 transaction, insert the data into the history table, delete from the main table, and commit the transaction if no errors occurred.
But you might instead start 31 smaller transactions, 1 per hour, each doing 1 day of the month....
Just to give you an idea, see the sketch below.
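A minimal sketch of that idea (Orders, OrdersArchive, and OrderDate are made-up names; adjust to your schema):

  DECLARE @day datetime
  SET @day = '20040901'
  WHILE @day < '20041001'
  BEGIN
      BEGIN TRAN
      -- copy one day into the archive table
      INSERT INTO OrdersArchive
      SELECT * FROM Orders
      WHERE OrderDate >= @day AND OrderDate < DATEADD(day, 1, @day)
      -- then remove that day from the main table
      DELETE FROM Orders
      WHERE OrderDate >= @day AND OrderDate < DATEADD(day, 1, @day)
      COMMIT TRAN
      SET @day = DATEADD(day, 1, @day)
  END

Each iteration is its own small transaction, so the log never has to hold more than one day of changes at a time.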


 

Author Comment

by:xtreme416
ID: 12245260
ok you've been a great help...one last question (promise)

do you recommend running a defrag job on the database tables?  I ran DBCC SHOWCONTIG, and some of the tables are over 90-95% fragmented...with this one below being the worst!

Table: 'someTable' (1536724527); index ID: 1, database ID: 7

TABLE level scan performed.
- Pages Scanned................................: 23731
- Extents Scanned..............................: 2977
- Extent Switches..............................: 2976
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.66% [2967:2977]
- Logical Scan Fragmentation ..................: 99.63%
- Extent Scan Fragmentation ...................: 0.94%
- Avg. Bytes Free per Page.....................: 767.4
- Avg. Page Density (full).....................: 90.52%

Any suggestions?
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 12246439
>> Extent Scan Fragmentation ...................: 0.94%
Actually, this is the relevant figure, and it means that there is very low fragmentation.

>> Avg. Page Density (full).....................: 90.52%
This means that the pages are on average 90.52 percent full, which is good.

>> Scan Density [Best Count:Actual Count].......: 99.66% [2967:2977]
This is also OK; the higher this value, the better.

>>and some of the tables are over 90-95% fragmented
So how do you see that they are fragmented?

Don't forget that SQL Server doesn't have the same ideas as Oracle:
In Oracle you could create a table with an initial extent of any size you want, and let the table "autogrow" up to X extents, specifying the growth ratio.
In SQL Server, you do nothing. SQL Server allocates the first 8 pages of 8 KB one at a time, and only by adding records will the table then grow in uniform extents of a contiguous 8x8 KB = 64 KB (if possible).

Cheers
 

Author Comment

by:xtreme416
ID: 12247694
>>and some of the tables are over 90-95% fragmented
So how do you see that they are fragmented?

I was going by:
 Logical Scan Fragmentation ..................: 99.63%

Is this not relevant?




 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 12247794
You are right, I missed that row!!

Here we go with a good and complete article about this issue:
http://www.sql-server-performance.com/rd_index_fragmentation.asp
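Based on your SHOWCONTIG output (database ID 7, table 'someTable', index ID 1), something like this should reduce the logical fragmentation (INDEXDEFRAG works online; DBREINDEX takes locks, so schedule it off-hours):

  -- online defragmentation of the clustered index
  DBCC INDEXDEFRAG (7, 'someTable', 1)
  -- or a full rebuild of all indexes on the table
  DBCC DBREINDEX ('someTable')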

Cheers
 

Author Comment

by:xtreme416
ID: 12278091
thanks angel.  All your suggestions were great.  We still have lots of work to do, but your comments and suggestions have given us lots to think about :)

