Solved

auto statistics

Posted on 2007-04-08
521 Views
Last Modified: 2010-05-18
Hi there
On maintenance plans I just increased the "update statistics used by query optimizer" sampling to 40%, running every night.

I was about to turn it off on the database and saw two options:
auto create statistics and auto update statistics.

I got confused about which one to turn off.

Our system is extremely busy, used all day long with many transactions.

Which one do I turn off during the day?

If I turn it off, I want to make sure that I update statistics at night
with a script... which script would this be?
dbcc updatestatistics ('dbname')
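(For reference: there is no DBCC UPDATESTATISTICS command in SQL Server 2000. The usual nightly script is sp_updatestats, or a per-table UPDATE STATISTICS. A minimal sketch, with a placeholder table name:)

USE dbname
EXEC sp_updatestats   -- runs UPDATE STATISTICS against every user table

-- or, for a specific heavily-used table, sample every row:
UPDATE STATISTICS dbo.YourBigTable WITH FULLSCAN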


Thanks
0
Comment
Question by:TRACEYMARY
17 Comments
 
LVL 8

Assisted Solution

by:Brain2000
Brain2000 earned 200 total points
ID: 18873508
Turn off the auto update statistics checkbox on the database. However, keep in mind that this can actually lead to slower-performing queries, depending on how much data is added to or removed from the database. Statistics are used for things such as determining which side of a join would return less data, so the optimizer knows how to perform the join in the most efficient manner.

If you are looking to reduce contention, I would recommend looking into other things, such as making sure AutoClose is off, AutoShrink is off, and that you have proper indexes on the database. In addition, make sure that you are using client-side cursors for all of your queries that return only a handful of rows and are not doing paging operations.
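Something like this checks and flips those settings (a sketch using the SQL 2000 sp_dboption syntax; substitute your own database name):

-- check the current settings (1 = on, 0 = off)
SELECT DATABASEPROPERTYEX('dbname', 'IsAutoUpdateStatistics') AS auto_update_stats,
       DATABASEPROPERTYEX('dbname', 'IsAutoClose')            AS auto_close,
       DATABASEPROPERTYEX('dbname', 'IsAutoShrink')           AS auto_shrink

-- turn them off
EXEC sp_dboption 'dbname', 'auto update statistics', 'false'
EXEC sp_dboption 'dbname', 'autoclose', 'false'
EXEC sp_dboption 'dbname', 'autoshrink', 'false'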

What resource is getting choked on the SQL Server: CPU, RAM, or drive I/O?
0
 
LVL 7

Author Comment

by:TRACEYMARY
ID: 18873743
Thanks. I'll probably leave them as they are for now and see if increasing the optimization to 40% sampling, which is going to run each night, works.

What is getting choked... I see a lot of PAGEIOLATCH and network I/O waits; memory is only 1.5 GB.
We probably have 300 processes, with at least 10 runnable at once.
I'm trying to benchmark CPU, RAM, and disk I/O. I have seen the CPU go up to 100% many times, and I've seen the disk queue length go way over 90 to 110 (yes, I know it should ideally be under 2); sometimes I think it's never going to finish.
We have deadlocks all over the place too... pretty crazy.

I just looked at all my main ERP databases and did a full reindex on certain tables, i.e. some of them I gave a fill factor of 0, and some 70 for massive tables doing a lot of inserts.

Monday I'm going to run Profiler for long-running queries and page splits to get that under control.

Deadlocks: I just read I can do DBCC TRACEON with flags 1204 and 1205 and it tells me exactly what is blocked. That looks great.
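(A sketch of the usual SQL 2000 setup: 1204 reports the deadlock participants, and 3605 routes the output to the error log:)

DBCC TRACEON (1204, -1)   -- deadlock participants and resources, server-wide
DBCC TRACEON (3605, -1)   -- send the trace output to the SQL Server error log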

I want to get a list of the main stored procedures called in a day too... working on that one.

I would like to get a memory breakdown of all the processes. If I do SELECT * FROM sysprocesses, will the memusage column give me the memory currently in use, and what are the units... 8 KB pages?

Not sure how to find out what's using the CPU over a period of time.
Drive I/O: not sure how to measure this either.
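(For a rough starting point, sysprocesses carries cumulative CPU and I/O per connection; sample it twice and compare the two snapshots to see activity over a period. A sketch:)

SELECT TOP 10 spid, loginame, hostname, program_name,
       cpu,           -- cumulative CPU time in milliseconds
       physical_io    -- cumulative disk reads and writes
FROM master..sysprocesses (NOLOCK)
WHERE spid > 50       -- skip system spids
ORDER BY cpu DESC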

The boss wants the number of connections, but I think he really wants the number of processes (someone who connects could be using many processes, and that much memory). What do you think?

He also wants me to capture the entire set of database transactions, then replay them all back again...
and tell him what's wrong with SQL... ouch.

Any pointers are greatly appreciated.
0
 
LVL 27

Assisted Solution

by:ptjcb
ptjcb earned 300 total points
ID: 18878069
http://www.sql-server-performance.com/

One of my favorite sites for information on how to tune SQL. Which version of SQL Server?
0
 
LVL 7

Author Comment

by:TRACEYMARY
ID: 18878137
2000... 2005 will be here soon.
0
 
LVL 7

Author Comment

by:TRACEYMARY
ID: 18878149
Auto update statistics is the one I can turn off, and then I can run the statistics update myself at night (sp_updatestats),
so I'm going to see if taking it off during heavy loads helps.

0
 
LVL 27

Accepted Solution

by:ptjcb
ptjcb earned 300 total points
ID: 18878514
Cool, I assume that this is SP4.

Are you doing bulk inserts? Those cause index fragmentation. Perfmon is your friend. This might help:
http://support.microsoft.com/kb/897284/en-us

Do your index defrags at the lowest-I/O times you can.
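A sketch of the SQL 2000 way to check and then defrag (the table and index names here are placeholders):

-- scan density close to 100% means little fragmentation
DBCC SHOWCONTIG ('YourBigTable') WITH FAST

-- INDEXDEFRAG holds only short locks, so it is gentler than DBCC DBREINDEX
DBCC INDEXDEFRAG ('dbname', 'YourBigTable', 'PK_YourBigTable')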

Don't worry too much about the fill factor. It only applies when an index is built or rebuilt; the pages fill back up as data changes anyway.

http://www.sql-server-performance.com/gv_baselining_tips.asp

(also read the other articles in that series)

You have to know what you are looking at first.
You will need a baseline to measure everything against. That way, when management calls and says SQL is running slow (whatever that means), you can check, using perfmon and whatever other tools your company can afford, how your current setup compares to the baseline.

http://www.sql-server-performance.com/sql_server_performance_audit10.asp

You have to find the long-running queries.
I have been in the situation where I had a mixture of SQL code in stored procedures and inline calls from the application. Spending the time to identify long-running stored procedures may be a moot point if inline calls are really the issue.

http://www.sql-server-performance.com/sql_server_performance_audit.asp
Set up a SQL performance audit.

The boss wants the number of connections, but I think he really wants the number of processes (someone who connects could be using many processes, and that much memory). What do you think?

He also wants me to capture the entire set of database transactions, then replay them all back again...
and tell him what's wrong with SQL... ouch.

The number of connections by itself will be fairly meaningless (a single user can open many connections), but sure, if that's what he wants....

Your boss should be informed that running Profiler (which will capture all of the transactions) will also add drag to a system that is already overloaded. It can be done, but I believe that capturing and fixing the long-running queries will solve the issue. Working through every procedure will give you job security for a long time.

I assume that you have a group of developers who are constantly updating, changing, and generally adding to the code. One person keeping up with a group is frustrating, to say the least.



0
 
LVL 8

Expert Comment

by:Brain2000
ID: 18879127
It sounds like you may be missing some indexes if you're hitting 100% CPU with a very high I/O load going on, unless you really have maxed out the box relative to the amount of work it must do.

I've found that latching problems can also happen if you are doing extremely large table joins through a primary key (i.e. more than 100,000 rows) and doing ORDER BYs on columns from both tables. This can cause massive havoc, and can be somewhat relieved through carefully placed clustered indexes.
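A hypothetical example (the table and column names are made up): if two big tables are joined and sorted on the same key, clustering each side on that key lets the join read both tables in order instead of sorting:

CREATE CLUSTERED INDEX IX_OrderDetail_OrderID
    ON dbo.OrderDetail (OrderID)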
0
 
LVL 7

Author Comment

by:TRACEYMARY
ID: 18879525
Thanks everyone, those are some good threads and good reading.

Basically it's overloaded... bulk copy is going on all the time (they use imports and exports all day long). Then there's time collection; I was monitoring that today through Profiler and it gets over 4000 duration, if not 10000, as well, so I'm investigating that too.

The indexes, yes, they go down to really low numbers (around 24 on the fragmentation checks) over the space of a week... I will monitor daily to see what happens and whether I need to defrag nightly...


0
 
LVL 7

Author Comment

by:TRACEYMARY
ID: 18882200
I created this:

SELECT 'SQL1', program_name, dbid, loginame, hostname,
       SUM(memusage) AS memusage, SUM(cpu) AS cpu, COUNT(*) AS connections
FROM master..sysprocesses (NOLOCK)
WHERE ecid = 0 AND spid > 50
GROUP BY program_name, dbid, hostname, loginame

Are the memusage and cpu values here cumulative, or what is used at the current time?
I have

      memusage     cpu        connections
      11660        267040     5            2007-04-10 00:00:02.617

So is 11660 in 8 KB pages, i.e. * 8 / 1024 to get MB?
And is cpu in seconds or milliseconds (about 4 minutes being used)?
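(Per the SQL 2000 documentation for sysprocesses, memusage is a count of 8 KB pages and cpu is cumulative CPU time in milliseconds, so 11660 pages is about 91 MB and 267040 ms is about 4.5 minutes. A conversion sketch:)

SELECT spid, loginame,
       memusage * 8 / 1024.0 AS approx_mem_mb,  -- memusage counts 8 KB pages
       cpu / 1000.0          AS cpu_seconds     -- cpu accumulates in milliseconds
FROM master..sysprocesses (NOLOCK)
WHERE ecid = 0 AND spid > 50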

Thanks
0
 
LVL 7

Author Comment

by:TRACEYMARY
ID: 18885985
I just read this in your threads:

Example 1: A log write that is stuck for 45 seconds
A SQL Server log file write attempt periodically became stuck for approximately 45 seconds. The log write was not finishing in a timely manner. This behavior created a blocking condition that led to 30-second client time-outs.

I think I did have blocking all of a sudden after this....

That seems to be what I have encountered... all of a sudden this happens and no one knows where.
Do I turn on DBCC TRACEON (830, -1)?
0
 
LVL 27

Expert Comment

by:ptjcb
ID: 18888938
Yes, log writes can block other processes.

"While not recommended, stalled and stuck I/O detection can be completely disabled with trace flag 830." Basically, you would be saying that you don't want to detect stalled I/O at all. You could try it if you think that will clear up your problem.

You will have to stop and restart SQL Server to test this.
0
 
LVL 7

Author Comment

by:TRACEYMARY
ID: 18889655
I did not want to turn it off; I just want a message to appear if it happens again.
Any way to detect it?
0
 
LVL 27

Expert Comment

by:ptjcb
ID: 18890068
In SQL 2000 with SP4, I believe it is detected and written to the SQL error log automatically. Check your logs to be sure.
0
 
LVL 8

Expert Comment

by:Brain2000
ID: 18911469
Even if you turn off the detection, it's still going to happen. Based on the high disk queue you describe, the timeout has a chance of happening. But 30 seconds is a long time even for a disk queue to flush, which makes me believe it might be something else....

With the amount of data that is being imported, there is a good chance the blocking is coming from the database file growing. Say you have a 40 GB database with growth set to 10% (the default): when it needs more room, everything waits while SQL writes out 4 GB of new space. I would recommend changing the growth setting from 10% to a fixed 20 MB.
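A sketch of the change (assuming a logical data file name like 'dbname_Data'; run sp_helpfile first to see the real names):

USE dbname
EXEC sp_helpfile   -- lists logical file names and current growth settings

ALTER DATABASE dbname
MODIFY FILE (NAME = 'dbname_Data', FILEGROWTH = 20MB)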

Even if that solves the problem, you still have a high disk queue, which also needs solving. If your drives are starved for I/O cycles, then perhaps more spindles would help. You could go as simple as buying one more drive and moving the log file and tempdb to that spindle. Or you could go all out and buy a 16-drive enclosure making a large RAID 0+1 (I wouldn't go RAID 5, because it sounds like you have an extremely high write volume) for the database. How many spindles are you currently running this database on?
0
 
LVL 7

Author Comment

by:TRACEYMARY
ID: 18912011
Thank you for replying.
The file growth on my main 40 GB databases is set to 1024; some of the smaller 6 GB databases are at 10%, and I will change that to 20 MB. What do you think about the 1024, should I make it a bit smaller?
I did not even think about that.

High disk queue: I just asked for a copy of the SAN configuration. We have our data on the first 5 LUNs, which they told me are RAID 5. As for the spindles, how do I find that information out?
0
 
LVL 7

Author Comment

by:TRACEYMARY
ID: 18927928
Thanks for all your help on this question.
We've got 12 GB of memory now...
I set up perfmon yesterday and now I can't open the file... too large (back to the drawing board on that one).
I may just write the counters to a database here and do the reporting from that.
I ran Profiler on long durations and got some results, but it didn't give me enough information to figure out what was being reported; working on that.
Put some new indexes on the databases (that's always good, ha).

Have a great day
0
 
LVL 8

Expert Comment

by:Brain2000
ID: 18936789
To find out your disk spindles, just count the hard drives. If you have RAID 5, then you have at least 3. With a very high write volume, I would recommend going with RAID 0+1 on at least 6 spindles (you'll get the capacity of 3). That will boost your write speed a bit and give you tremendous read speed. But use the RAID level you are most comfortable with and that works best for your scenario.

By the way, if you are not running an x64 machine, make sure you have AWE enabled, or SQL will not use more than 2 GB of your 12 GB of RAM. Microsoft has an article on how to enable AWE for SQL to get past the 32-bit barrier and utilize up to 64 GB(?) of RAM.
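(A sketch of enabling it on 32-bit SQL 2000 Enterprise; this needs a service restart, the service account needs the Lock Pages in Memory privilege, and the memory cap value here is just an example:)

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'awe enabled', 1        -- takes effect after restart
RECONFIGURE
-- with AWE on, set an explicit cap so the OS keeps some memory:
EXEC sp_configure 'max server memory', 10240
RECONFIGURE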
0
