xtreme416 asked:

SQL database is not responding! Help please...

Hello,

We're having some major issues with our database. We can navigate the website just fine, but when we attempt updates, the requests time out. Some tables in the database have over 5 million records. That's not really huge, so we don't understand what's going on.

Someone suggested dumping the database, deleting it from the SQL Server group in Enterprise Manager, recreating it, and then attaching the database files again. Will that resolve anything?

How about doing a full backup and restore? Does that defragment the database, or is that a waste of time?

Any suggestions would be great.  

Oh, I already tried optimizing/shrinking/etc. It seems to help for two minutes, then it's back to a flat line :(

xtreme416 (ASKER):
added more points :)
Guy Hengel [angelIII / a3]:
Please answer ALL of these questions:
*) do the tables have indexes/primary keys (I assume yes, just to be sure)
*) what are the settings and size for the transaction log files of the database(s)
*) what are the sizes and free space for the disks the data and transaction log files reside on
*) what is the size and free space of the disks where the tempdb, master etc databases reside on
*) what is the size of the current ERRORLOG file of SQL Server
*) what is the RAM size for the server, what is the memory config for the SQL Server
*) what are other applications/services running on this server

Cheers

*) do the tables have indexes/primary keys (I assume yes, just to be sure)
Yes, but not all of them.

*) what are the settings and size for the transaction log files of the database(s)
After I run the DBCC SHRINKDATABASE command, it goes down to 1 MB. After a re-indexing command it jumps to 5 GB!!
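
(A quick way to watch this growth: DBCC SQLPERF reports the size and percent used of every database's transaction log.)

-- Size and percent used of each database's transaction log:
DBCC SQLPERF (LOGSPACE)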

*) what are the sizes and free space for the disks the data and transaction log files reside on
Disk space is 67.7 GB with 28 GB free.

*) what is the size and free space of the disks where the tempdb, master etc databases reside on
Same as above; same disk.

*) what is the size of the current ERRORLOG file of SQL Server
There are a few error logs in the SQL Data\MSSQL$database\LOG folder. Most are under 50 KB.


*) what is the RAM size for the server, what is the memory config for the SQL Server
The server has 2 GB of RAM. Memory for SQL Server is configured as "dynamic" with the minimum at 0 and the maximum at 2 GB.

*) what are other applications/services running on this server
Nothing else; this is a dedicated SQL Server.

I'll add a couple:

1. Run sp_who while you have an update in progress and look at the blk column for anything other than 0 (see the sketch after this list).
2. Check your SQL Server and system logs. Some disk problems may manifest themselves there.
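
A minimal sketch of the blocking check from point 1:

-- Run while an update is hanging; the blk column shows the blocking spid:
EXEC sp_who

-- For any row where blk <> 0, see what the blocker is running
-- (53 is a made-up example; substitute the spid reported in blk):
DBCC INPUTBUFFER (53)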

P.S. And the answer is no: a backup and restore would be pointless.
*) what is the RAM size for the server, what is the memory config for the SQL Server
>>The server has 2 GB of RAM. Memory for SQL Server is configured as "dynamic" with the minimum at 0 and the maximum at 2 GB.
Here you should cap the memory assigned to SQL Server at 1.8 GB. The reason is to leave some RAM for the OS, in order to avoid swapping :-)
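
If you prefer T-SQL over Enterprise Manager, a minimal sketch (the value is in MB):

-- 'max server memory' is an advanced option, so expose those first:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- Cap SQL Server at roughly 1.8 GB, leaving the rest for the OS:
EXEC sp_configure 'max server memory', 1843
RECONFIGURE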

*) what is the size of the current ERRORLOG file of SQL Server
>>There are a few error logs in the SQL Data\MSSQL$database\LOG folder. Most are under 50 KB.
There is actually only 1 file called ERRORLOG; the others are called ERRORLOG.x, where x typically goes from 1 to 6.
So unless the ERRORLOG file is huge (I saw a 12 GB one on another live system yesterday), there is no trouble.
Remember to check this file size from time to time, and to restart SQL Server in order to roll the ERRORLOG file over.
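
A restart isn't strictly required; the error log can also be cycled on demand:

-- Closes the current ERRORLOG and starts a new one,
-- shifting the old files to ERRORLOG.1, ERRORLOG.2, ...
EXEC sp_cycle_errorlog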

*) do the tables have indexes/primary keys (I assume yes, just to be sure)
>>Yes, but not all of them.
Why not?

*) what are the settings and size for the transaction log files of the database(s)
>>After I run the DBCC SHRINKDATABASE command, it goes down to 1 MB. After a re-indexing command it jumps to 5 GB!!
Thanks for these explicit indications. So some tables have no indexes, and yet you are reindexing.
What is the reindexing meant for? On what schedule? On which table sizes?

I assume you have a database that is not small (~20 GB, I guess), and that most of this data is in a few tables. Does the reindexing occur on exactly those tables?

You should NOT set the transaction log file to a small size. If your DB indeed has 20 GB, your transaction log file(s) should also total at least 5 GB. I would recommend at least 3 transaction log files of 2 GB each, which can extend to a max of 5 GB each in chunks of 1 GB (forget the % increase, which is rubbish).
Actually, if the reindexing requires 5 GB, and your initial transaction log file is 1 MB with a 10% increase, I leave you with the exercise of how many times the transaction log file has to extend before 5 GB have been reached :-)
(roughly ninety times: 1 MB x 1.1^n reaches 5 GB, about 5120 MB, only around n = 90)
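
As a hedged T-SQL sketch of those settings (MyDb and the logical file names are placeholders, and SQL Server 2000 accepts only one property per MODIFY FILE statement):

-- Pre-size the existing log and give it a sane growth increment and cap:
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, SIZE = 2048MB)
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, MAXSIZE = 5120MB)
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, FILEGROWTH = 1024MB)

-- An additional log file can be added with a full filespec in one statement:
ALTER DATABASE MyDb ADD LOG FILE
  (NAME = MyDb_log2, FILENAME = 'D:\MSSQL\Data\MyDb_log2.ldf',
   SIZE = 2048MB, MAXSIZE = 5120MB, FILEGROWTH = 1024MB)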

Check which queries behave badly: use SQL Profiler to find the worst queries, and check/tune them with Query Analyzer. Note that sometimes the tuning can/should happen in the calling application; indexes are often the solution, but not always.
Finally, you need to check whether your system is producing locks (sp_lock).
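
A minimal sketch of the lock check:

-- Lists current locks by spid, object and lock type:
EXEC sp_lock

-- Or go straight to the blocked sessions (SQL Server 2000 system table):
SELECT spid, blocked, waittime, lastwaittype, cmd
FROM master..sysprocesses
WHERE blocked <> 0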

Cheers
The database has many tables, but yes, probably a dozen account for most of the data. The entire database is around 12 GB. And we've noticed that any time the log file goes over 1 GB, everything slows down considerably, to the point that the ASP scripts start to time out.

How do I set up the log file to optimize performance? Where do I change the settings? In the maintenance plan, or somewhere else?
Edit the database settings: you will find the settings for the data files and the transaction log files there, and you will immediately see where to put the values I gave above.

Again, you might also have an ASP script that needs tuning, if it generates more transaction data than necessary...

Cheers
oh ok... right now the log file is set to automatically grow by percent (10%) with unrestricted file growth. I'm running a SHRINKDATABASE command right now, and the log file has grown to 5 GB!
The SHRINKDATABASE command just finished... it took 45 minutes. Is that normal? The log file grew to 7 GB! Should I truncate it or do anything else at this point? The website seems responsive, though.

You may want to check your update code. You may have a bad trigger or a Cartesian join.
Personally, I'd allocate even less than 1.8 GB to SQL Server. Are you running IIS on the same box, or does SQL Server have the box to itself?

Also, do you have the transaction logs on different disks/RAID groups than the data? Are you running RAID at all?
>>The SHRINKDATABASE command just finished... it took 45 minutes. Is that normal?
For a 12 GB online database this is normal. As I said: DON'T USE SHRINKDATABASE or SHRINKFILE in a normal production environment. Assign a good working size for both the data files and the transaction log files, and let the database work.
Leave some free space inside the files, because shrinking and extending the files is a VERY, VERY expensive operation.
Best is: FORGET about that command!!!

>>oh ok... right now the log file is set to automatically grow by percent (10%) with unrestricted file growth. I'm running a SHRINKDATABASE command right now, and the log file has grown to 5 GB!
NO, those are bad settings. You may keep the autogrow setting, but DO NOT KEEP UNRESTRICTED FILE GROWTH. Set a max size, and change the growth increment from 10% to 1 GB.

Don't try to cut down the transaction log file. Here is a comparison:
Imagine you have to transport tons of material. What would you use (question for 50 euros):
A your private car
B a small truck
C a big truck
D a series of trucks
OK, you choose D, obviously. Now imagine your boss FORCES you to take your private car all the time; would you be happy about that? It's the same thing: let SQL Server use a well-sized transaction log, ideally spread over several files.

Now, let's continue the comparison for autogrow:
OK, your boss accepts that you can eventually upgrade the transportation medium if it's really not enough, so you start with one big truck. Obviously, if he didn't give you a MAX BUDGET, you could upgrade to anything, say 100 big trucks, which would kill the company. For SQL Server it's similar: if YOU let SQL Server grow the data/transaction log files without control, they WILL grow and fill the disk, which will halt SQL Server. It's YOUR job as DBA to check whether the files really need to grow, and if so, to grow the disk storage before growing the data files.

Cheers
thanks AngelIII, the analogy helped :)

I'm not a DBA, just a programmer.  But I do everything around here, which explains why nothing works! hehehe  

So, to change the settings for the transaction log: there are two boxes under the "Automatically grow file" property, one for file growth and the other for maximum file size. If I understand correctly, I should change the maximum file size setting to restrict growth (MB) to 1000. But what about file growth? Should I change that to "In megabytes" or keep it as a percent? If I change it to megabytes, what value should I use? That's the incremental increase, right? So change it to something like 10 MB? (You said 1 GB above, which seems high, unless you were talking about the max file size.)

thanks for all the help so far!

oh, and would setting a max size mean that once the log file reaches that size, it would roll over and start again? Or would it mean someone having to clean things up and shrink it?
If you have an initial size of 3 GB, incrementing in bits of 10 MB until reaching 5 GB is rather pointless.
The increment should be a size that allows the file to grow without causing high fragmentation on disk.
So, in megabytes, use values like 1024 (= 1 GB) in your case.

>>oh, and would setting a max size mean that once the log file reaches that size, it would roll over and start again? Or would it mean someone having to clean things up and shrink it?
You will need to implement full and/or transaction log backups regularly (if the database is set to recovery model = full). If the database has recovery model = simple, you don't need to back up the log and don't need to worry about the transaction log file. When the transaction log writer pointer reaches the end of the file, it will cycle back to the beginning of the file(s).
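
A sketch of both checks (MyDb and the backup path are placeholder names):

-- Shows FULL, SIMPLE or BULK_LOGGED:
SELECT DATABASEPROPERTYEX('MyDb', 'Recovery')

-- Under the full recovery model, regular log backups free the log space for reuse:
BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_log.trn'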

Cheers
thanks angel, that helps.

One question, though: what if we have a large running transaction set that would balloon the log file over 1 GB? Sometimes we archive records out of the main tables into "archive" tables within the same database, just to speed up query response.

The log file is basically there in case you need to back out transactions, or the system needs to do a rollback for something, right?

Would the log file recycle to the beginning of the file during the running transaction set?  And what impact would setting a maximum file size have in this case?

ASKER CERTIFIED SOLUTION
Guy Hengel [angelIII / a3]
ok, you've been a great help... one last question (promise):

do you recommend running a defrag job on the database tables? I ran DBCC SHOWCONTIG, and some of the tables are over 90-95% fragmented, with this one below being the worst!

Table: 'someTable' (1536724527); index ID: 1, database ID: 7

TABLE level scan performed.
- Pages Scanned................................: 23731
- Extents Scanned..............................: 2977
- Extent Switches..............................: 2976
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.66% [2967:2977]
- Logical Scan Fragmentation ..................: 99.63%
- Extent Scan Fragmentation ...................: 0.94%
- Avg. Bytes Free per Page.....................: 767.4
- Avg. Page Density (full).....................: 90.52%

Any suggestions?
>> Extent Scan Fragmentation ...................: 0.94%
Actually, this is the relevant figure, and it means there is very low fragmentation.

>> Avg. Page Density (full).....................: 90.52%
This means that the pages are on average 90.52 percent full, which is good.

>> Scan Density [Best Count:Actual Count].......: 99.66% [2967:2977]
This also means it is OK; the higher this value, the better.

>>and some of the tables are over 90-95% fragmented
so how do you see that they are fragmented?

Don't forget that SQL Server doesn't follow the same ideas as Oracle:
In Oracle you can create a table with an initial extent of any size you want, and let the table autogrow up to X extents, specifying the growth ratio.
In SQL Server, you do nothing. SQL Server creates the table starting from a single 8 KB page; only as records are added does it first grow by seven more 8 KB pages (from mixed extents), and then by uniform extents of 8 contiguous 8 KB pages = 64 KB (if possible).

Cheers
>>and some of the tables are over 90-95% fragmented
>>so how do you see that they are fragmented?

I was going by:
 Logical Scan Fragmentation ..................: 99.63%

Is this not relevant?
You are right, I missed that row!!

Here we go with a good and complete article about this issue:
http://www.sql-server-performance.com/rd_index_fragmentation.asp
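
For completeness, a hedged sketch of the two SQL Server 2000 options for defragmenting that index ('someTable' and index ID 1 come from the output above; MyDb is a placeholder database name):

-- Online defrag of the index; low impact, can run during working hours:
DBCC INDEXDEFRAG (MyDb, someTable, 1)

-- Or a full rebuild; heavier, takes locks and generates more log:
DBCC DBREINDEX ('someTable')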

Cheers
thanks angel.  All your suggestions were great.  We still have lots of work to do, but your comments and suggestions have given us lots to think about :)