Solved

Memory Upgrade (SQL 2000)

Posted on 2006-10-20
38
320 Views
Last Modified: 2008-02-01
I've recently upgraded a SQL 2000 Server from 4GB to 8GB of RAM. PAE, 3GB and AWE were enabled - AWE was set to ~6GB. The target SQL memory was around 4.1GB and the total SQL memory in use matched that. The VERY odd thing is this did nothing at all to help performance. It almost seems that it hampered it. Any ideas out there?
0
Question by:af500
38 Comments
 
LVL 29

Expert Comment

by:Nightman
ID: 17775972
What version of SQL Server?
What version of windows OS?

Are you SURE that memory was your problem, and not I/O?
Have you updated statistics on tables since the upgrade?
0
 
LVL 4

Author Comment

by:af500
ID: 17776078
- SQL 2000 EE
- Windows Server 2003 EE

Not any more... what perf counters could I use to debug this?

No, I can't say for sure that the statistics on the tables were updated after the upgrade... how is that done?
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17776119
to update statistics: sp_updatestats
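A minimal sketch (run it in the context of the database in question - the database name here is hypothetical):

USE YourDatabase
GO
-- Refreshes out-of-date statistics on every user table in the current database
EXEC sp_updatestats
GO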

Performance counters for SQL 2005: http://www.microsoft.com/technet/prodtechnol/sql/2005/tsprfprb.mspx

Looking for a 2000 equivalent now (had a link somewhere).

In the meantime, look at I/O - % disk time and average disk queue length (the counters mentioned in that article will also apply, but not all of the SQL 2005 counters are available in 2000).
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17776204
Have a look at http://www.microsoft.com/technet/prodtechnol/sql/2000/plan/sql2kcon.mspx

Not all of the counters are available in 2000 (an interesting article nonetheless), but the same concepts apply.

Also:
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops5.mspx
and
http://www.microsoft.com/technet/prodtechnol/sql/2000/reskit/part4/c1461.mspx

Lots of articles at http://www.sql-server-performance.com/
Will find something specific for 2000 soon.
0
 
LVL 4

Author Comment

by:af500
ID: 17776383
Higher CPU is showing alongside high % disk time... I assume the two go hand in hand?
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17776413
Not always.

I have had an experience where high % disk time due to heavy I/O load actually resulted in low CPU utilisation (which makes sense - the processes couldn't finish because they were waiting on the disk, and everything else was then waiting on them).

sp_updatestats will probably help (and should really be run frequently).

What is your drive configuration like?
e.g.
TempDB, data and transaction logs all on one disk?
Or split onto dedicated RAID containers (or LUNs on a SAN)?

What does your average disk queue length look like?
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17776428
Also, what else is running on the server? You have 8GB of RAM, but only half is allocated to SQL?
0
 
LVL 4

Author Comment

by:af500
ID: 17776800
* There is nothing else on the SQL servers - they are dedicated... there are two in an active/passive config
* The logs and data are on separate RAIDs.
* AVG disk queue is 0.27 / AVG % disk time is 30 (this is the data array)
0
 
LVL 4

Author Comment

by:af500
ID: 17776809
Of the 8GB, 6GB was set in AWE however SQL was only using 4GB.

The size of the DB is only 4GB too.
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17776834
I would configure SQL to use the full 6GB - why waste it?
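A minimal sketch of raising the cap via sp_configure ('max server memory' is in MB, so 6144 is just the ~6GB target here):

-- Show advanced options so the memory settings are visible
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- AWE should already be on in your case; shown for completeness (needs a restart)
EXEC sp_configure 'awe enabled', 1
RECONFIGURE
GO
-- Let SQL Server use up to ~6GB
EXEC sp_configure 'max server memory', 6144
RECONFIGURE
GO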

Where is tempdb? And what is the avg disk queue on that drive?
Any AV running?

Have you run sp_updatestats yet?

ALSO, run DBCC SHOWCONTIG - it will help you determine what your index fragmentation is like. You may need to consider reindexing (high page fragmentation will be a performance problem).
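A minimal sketch (table name hypothetical; with no arguments it reports on every table in the current database):

-- Fragmentation report for all indexes on one table
DBCC SHOWCONTIG ('YourTableName') WITH ALL_INDEXES
GO
-- Or, for the whole database
DBCC SHOWCONTIG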
0
 
LVL 4

Author Comment

by:af500
ID: 17776949
Sorry, I did configure AWE to use all 6GB... but SQL only took 4.

- have not run updatestats - just reading a bit more about it...
- No AV running
- will run DBCC showcontig shortly.
- checking on tempdb
0
 
LVL 4

Author Comment

by:af500
ID: 17776960
- tempdb is on the same array as my database
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17776977
You say your database is 4GB.

Is it set to autogrow? If so, by a percentage or a fixed size? (Growing database files in real time is also a performance overhead.)
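If it is growing by a percentage, a fixed increment is usually kinder - a minimal sketch (database and logical file names are hypothetical; sp_helpfile lists yours):

-- Grow the data file in fixed 100MB steps rather than percentage jumps
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = 'YourDatabase_Data', FILEGROWTH = 100MB)
GO
-- Same for the log file
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = 'YourDatabase_Log', FILEGROWTH = 100MB)
GO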

sp_updatestats is 100% safe (it will not harm your server at all, although it could cause a performance hit if run during periods of heavy load).

Consider increasing the amount of memory that SQL can use.
0
 
LVL 4

Author Comment

by:af500
ID: 17776980
DBCC showcontig ramped up the % disk time to about 1280 for about 40 seconds.

Are there some values in here I should be looking out for?
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17776997
Look at:
Scan Density [Best Count:Actual Count] (should be high!)
Logical Scan Fragmentation (Should be low)
Extent Scan Fragmentation: (Should be low)
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17777015
Most DBCC commands are heavy on tempdb, but for a small database like this I imagined (and was right) that it would be quite quick.

Reference material on SHOWCONTIG:
http://www.databasejournal.com/features/mssql/article.php/1467801
0
 
LVL 4

Author Comment

by:af500
ID: 17777020
- The .mdf is ~4.8GB, the .ldf is ~8.8GB
- Both the mdf and the ldf are set to autogrow by 10%
- Will have to run the sp_updatestats later on :-)
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17777028
Do you back up your transaction log?
0
 
LVL 4

Author Comment

by:af500
ID: 17777090
Yes - back that up nightly... should it be backed up before the actual db is backed up? (You're getting all the points, by the way.)
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17777133
:)

Doesn't really make an enormous difference. We back ours up every 20 minutes (we need REALLY granular recovery).
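For reference, a minimal sketch of a log backup (database name and path are hypothetical). Frequent log backups also truncate the inactive portion of the log, which stops that 8.8GB .ldf from growing further (truncation won't shrink the file itself, though):

-- Back up (and truncate the inactive portion of) the transaction log
BACKUP LOG YourDatabase TO DISK = 'D:\Backups\YourDatabase_log.trn'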

Run sp_updatestats later on.
I can't remember whether or not the SQL memory counters are available in 2000. Key ones to look at are buffer cache hit ratio (i.e. SQL has enough memory and retrieved pages stay in memory for a long time) and page life expectancy.
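In 2000 you can also read these straight from master.dbo.sysperfinfo - a minimal sketch (the hit ratio is stored as a ratio/base pair, so divide the two for a percentage):

-- Buffer Manager counters relevant to memory pressure
SELECT [object_name], counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name IN ('Buffer cache hit ratio',
       'Buffer cache hit ratio base',
       'Page life expectancy')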

What size is the swap file on the OS? Ideally it should be 1.5 times the physical memory (so 12GB).

If all counters are looking good, and sp_updatestats does not significantly improve things, you should look at reindexing:

-- Script to automatically reindex all tables in a database

DECLARE @TableName nvarchar(512)

-- Owner-qualify and quote the names so tables with spaces or non-dbo owners work
DECLARE TableCursor CURSOR FOR
SELECT QUOTENAME(table_schema) + '.' + QUOTENAME(table_name)
FROM INFORMATION_SCHEMA.TABLES
WHERE table_type = 'BASE TABLE'

OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'Reindexing ' + @TableName
    -- '' rebuilds all indexes on the table; 0 keeps each index's original fill factor
    DBCC DBREINDEX(@TableName, '', 0)
    FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor

DEALLOCATE TableCursor

This will defragment your indexes and reorganise the data pages and extents to make them as contiguous as possible. Note that in SQL 2000 this will LOCK the data pages while running (best to do after hours!). In SQL 2005 you can do this online in real time with non-text-based (i.e. varchar, char, nvarchar, etc.) indexes.

Let me know how it goes.
0
 
LVL 4

Author Comment

by:af500
ID: 17777158
Auto update statistics is on - is that any different from manually firing the sproc? How often would that fire?
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17777172
Auto update is not 100% reliable. Running the sproc forces it to happen.

ALSO, you can update and resample all of the data to get really accurate stats, but I wouldn't do that - it will happen when reindexing anyway (and that is something you SHOULD do on a regular basis - weekly, if you can. You should be able to schedule it with SQL Agent maintenance plans).
0
 
LVL 4

Author Comment

by:af500
ID: 17777272
Sorry, I'm not sure I mentioned this above... I can't see it... but it does look like I've seen a significant CPU increase since the upgrade...
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17777284
Strongly recommend sp_updatestats
and if this doesn't improve then REINDEX

I would REINDEX anyway. See what effect this has.

Any changes to the schema? (e.g. tables, triggers, views or stored procedures modified)?
0
 
LVL 4

Author Comment

by:af500
ID: 17788032
Ran sp_updatestats - will try the reindex this evening...
0
 
LVL 4

Author Comment

by:af500
ID: 17788093
There are a handful of tables with high extent scan fragmentation (>60%) and low scan density (<30%)... would reindexing help this? How?
0
 
LVL 29

Accepted Solution

by:
Nightman earned 500 total points
ID: 17788119
DBCC DBREINDEX('YourTableName', '', 0)

Alternatively, you can run the previous script that I posted, which will do all of them (reposted below)

DECLARE @TableName nvarchar(512)

-- Owner-qualify and quote the names so tables with spaces or non-dbo owners work
DECLARE TableCursor CURSOR FOR
SELECT QUOTENAME(table_schema) + '.' + QUOTENAME(table_name)
FROM INFORMATION_SCHEMA.TABLES
WHERE table_type = 'BASE TABLE'

OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'Reindexing ' + @TableName
    -- '' rebuilds all indexes on the table; 0 keeps each index's original fill factor
    DBCC DBREINDEX(@TableName, '', 0)
    FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor

DEALLOCATE TableCursor

0
 
LVL 4

Author Comment

by:af500
ID: 17790838
Before:

TABLE level scan performed.
- Pages Scanned................................: 47
- Extents Scanned..............................: 11
- Extent Switches..............................: 10
- Avg. Pages per Extent........................: 4.3
- Scan Density [Best Count:Actual Count].......: 54.55% [6:11]
- Logical Scan Fragmentation ..................: 4.26%
- Extent Scan Fragmentation ...................: 72.73%
- Avg. Bytes Free per Page.....................: 3201.7
- Avg. Page Density (full).....................: 60.44%

After:
- Pages Scanned................................: 32
- Extents Scanned..............................: 5
- Extent Switches..............................: 4
- Avg. Pages per Extent........................: 6.4
- Scan Density [Best Count:Actual Count].......: 80.00% [4:5]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 0.00%
- Avg. Bytes Free per Page.....................: 906.1
- Avg. Page Density (full).....................: 88.80%
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17790878
Quite a big difference.

Scan density improved significantly.
Extent fragmentation down to 0 from 73%!

Have you done this for all of your key tables?
0
 
LVL 4

Author Comment

by:af500
ID: 17791050
On my dev site - yes :)

Would this affect writes as well as reads? Or mostly reads?
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17791133
Both, although more read than write.

You need to look into why you have high levels of fragmentation. Maintenance operations such as batch deletes could end up fragmenting your tables and indexes. Likewise, incorrectly clustered indexes with out-of-sequence inserts (and high fill factors) will force page splits, significantly affecting insert performance (and therefore any reads that are happening at the same time).

Also, the longer your inserts, updates and deletes take to execute, the longer they hold locks on the data pages, which means that your reads will suffer further. You CAN address this with table hints (e.g. SELECT * FROM Table WITH(NOLOCK)) - this reads uncommitted data and ignores any locks (and does not take locks when reading, either).

This is to be used with caution - e.g. you will be able to read data from a transaction that has not been committed, and may even be in the process of rolling back. This is risky from the perspective of data integrity, but only you can determine where (or if) it can be used in any given procedure.

Next step after the reindexing and statistics updates: look at the perfmon stats on memory, and make sure that your virtual memory is correctly sized (a 12GB swap file, preferably on the OS drive - though I usually create a logical partition for it, so that if C: fills up with logs or other rubbish for whatever reason the server can still boot, because there is dedicated space for the swap file - no space means no boot!).

And if performance is still poor after all of this, you need to look into using SQL Profiler to identify long-running transactions and other locks (and consider improving indexing, and possibly a redesign).

I imagine that since I got the points you are happy with the answer - has your performance improved?
0
 
LVL 4

Author Comment

by:af500
ID: 17791266
One of the devs has been going through sprocs and statements, tuning where possible... I will run the updates when the server quiets down a little - that script you gave above took a while to run :-) Any negative effects from firing that?
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17791283
Just that it will lock the data pages on the tables it is running against, so updates, inserts and reads WILL time out while it is in progress.

As I said, best to run off-peak (or notify users of scheduled downtime if you can).
0
 
LVL 4

Author Comment

by:af500
ID: 17791404
off-peak it is ;-)
0
 
LVL 4

Author Comment

by:af500
ID: 17791429
If all the results show extent fragmentation down by 30-60% - should I see an immediate difference?
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17791445
Depends on the table - if it holds large amounts of data and is a key table for your queries, then yes, pretty quickly. However, you may want to force an sp_updatestats (instead of waiting for the auto update to kick in).

Also remember that if there are many other tables in need of re-indexing, you may need to do them all (difficult to tell from here - you know your application better than I do).
0
 
LVL 4

Author Comment

by:af500
ID: 17791482
Should the defrag occur after the stats update? Is there any rule of thumb for that?
0
 
LVL 29

Expert Comment

by:Nightman
ID: 17791501
Theoretically the reindex forces a statistics update for that table. However, I have never been convinced (and haven't been able to prove or disprove it one way or the other), so I always run sp_updatestats afterwards anyway.
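As a rule of thumb, then, the off-peak sequence looks like this (table name hypothetical - or use the cursor script above to cover every table):

-- 1. Rebuild the indexes first
DBCC DBREINDEX('YourTableName', '', 0)
GO
-- 2. Then refresh statistics
EXEC sp_updatestats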
0
