Identifying and Monitoring Long-Running Stored Procedures and Queries in SQL Server 2005

Michael-Thomas asked:
I would like to create a stored procedure that reports on queries and stored procedures that run longer than X time (15 s in my case). I am looking for all the usual performance details for each long-running query/SP, such as logical/physical reads and writes along with the running times. The idea is to insert the results into a table and email daily reports later.
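(For the logging-and-email half of this, a minimal sketch, assuming Database Mail is already configured; the table layout, mail profile name, and recipient below are placeholders, and the report would be scheduled via a SQL Agent job:)

```sql
-- Placeholder logging table for long-running statements
CREATE TABLE dbo.LongRunningLog (
    LogTime       datetime      NOT NULL DEFAULT GETDATE(),
    SqlText       nvarchar(max) NULL,
    DurationMs    bigint        NULL,
    LogicalReads  bigint        NULL,
    LogicalWrites bigint        NULL,
    PhysicalReads bigint        NULL,
    CpuMs         bigint        NULL
);
GO
-- Daily report via Database Mail (profile and recipient are placeholders)
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'MyMailProfile',
    @recipients   = N'dba@example.com',
    @subject      = N'Queries over 15 s - last 24 h',
    @query        = N'SELECT * FROM dbo.LongRunningLog
                      WHERE LogTime >= DATEADD(day, -1, GETDATE())';
```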

Thanks in advance
Racim BOUDJAKDJI, Database Architect - DBA - Data Scientist
Commented:
Run a server-side trace with a Duration filter of 15 seconds (note that SQL Server 2005 server-side traces record Duration in microseconds, so the filter value is 15,000,000). Import the content of the trace into a table for further analysis.
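A server-side trace along those lines can be scripted with the sp_trace_* procedures. This is a sketch; the file path and maximum file size are assumptions you will need to adjust for your server:

```sql
DECLARE @TraceID int, @maxfilesize bigint, @on bit
SET @maxfilesize = 100   -- MB, per rollover file
SET @on = 1

-- Assumed path: must be writable by the SQL Server service account
EXEC sp_trace_create @TraceID OUTPUT, 0, N'D:\Traces\LongRunning', @maxfilesize, NULL

-- Events: 12 = SQL:BatchCompleted, 10 = RPC:Completed
-- Columns: 1 = TextData, 13 = Duration, 16 = Reads, 17 = Writes, 18 = CPU
EXEC sp_trace_setevent @TraceID, 12, 1,  @on
EXEC sp_trace_setevent @TraceID, 12, 13, @on
EXEC sp_trace_setevent @TraceID, 12, 16, @on
EXEC sp_trace_setevent @TraceID, 12, 17, @on
EXEC sp_trace_setevent @TraceID, 12, 18, @on
EXEC sp_trace_setevent @TraceID, 10, 1,  @on
EXEC sp_trace_setevent @TraceID, 10, 13, @on
EXEC sp_trace_setevent @TraceID, 10, 16, @on
EXEC sp_trace_setevent @TraceID, 10, 17, @on
EXEC sp_trace_setevent @TraceID, 10, 18, @on

-- Duration filter: column 13, logical AND (0), >= (4);
-- Duration is stored in microseconds in SQL 2005, so 15 s = 15000000
DECLARE @duration bigint
SET @duration = 15000000
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration

EXEC sp_trace_setstatus @TraceID, 1   -- start the trace
SELECT @TraceID AS TraceID            -- keep this ID to stop/close it later
```

Stop and close the trace later with `sp_trace_setstatus @TraceID, 0` followed by `sp_trace_setstatus @TraceID, 2`.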

Commented:
You can use DMVs to query the cached plan statistics for stored procedures and queries.

To get more info on sys.dm_exec_query_stats, check out http://msdn.microsoft.com/en-us/library/ms189741%28SQL.90%29.aspx

SELECT
  t.text AS QueryText,
  s.execution_count AS ExecutionCount,
  s.max_elapsed_time AS MaxElapsedTime,
  ISNULL(s.total_elapsed_time / s.execution_count, 0) AS AvgElapsedTime,
  s.creation_time AS LogCreatedOn,
  ISNULL(s.execution_count / NULLIF(DATEDIFF(s, s.creation_time, GETDATE()), 0), 0) AS FrequencyPerSec,
  s.total_logical_reads,
  s.total_logical_writes
FROM sys.dm_exec_query_stats s
CROSS APPLY sys.dm_exec_sql_text(s.sql_handle) t
-- elapsed times in this DMV are in microseconds, so 15 s = 15000000
WHERE s.max_elapsed_time >= 15000000
ORDER BY s.max_elapsed_time DESC
GO


Commented:
Make sure to run DBCC FREEPROCCACHE to flush the cached query stats if you want to start with fresh data.

Scott Pletcher, Senior DBA

Commented:
That could be a big hit on performance on a prod box.

Instead, you might want to consider capturing the results of a system-view query, then running the same query again after a <fixed amount of time>. You can then subtract the two snapshots to get the activity that occurred during the <fixed amount of time>.
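A sketch of that snapshot-diff approach against sys.dm_exec_query_stats (the five-minute interval is just an example):

```sql
-- First snapshot of the cumulative stats
SELECT sql_handle, statement_start_offset, plan_handle,
       execution_count, total_elapsed_time,
       total_logical_reads, total_logical_writes
INTO #snap1
FROM sys.dm_exec_query_stats;

WAITFOR DELAY '00:05:00';   -- the fixed amount of time

-- Second snapshot minus first = activity during the interval
SELECT s2.sql_handle,
       s2.execution_count     - s1.execution_count     AS ExecsInInterval,
       s2.total_elapsed_time  - s1.total_elapsed_time  AS ElapsedInInterval,
       s2.total_logical_reads - s1.total_logical_reads AS ReadsInInterval
FROM sys.dm_exec_query_stats s2
JOIN #snap1 s1
  ON  s1.sql_handle             = s2.sql_handle
  AND s1.statement_start_offset = s2.statement_start_offset
  AND s1.plan_handle            = s2.plan_handle
ORDER BY ElapsedInInterval DESC;

DROP TABLE #snap1;
```

Note the join is on the full key of the DMV (sql_handle, statement_start_offset, plan_handle); plans that were evicted or compiled during the interval will simply drop out of the join.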
lcohan, Database Analyst

Commented:
Indeed, do NOT run FREEPROCCACHE or DROPCLEANBUFFERS on production databases. You can use the code below for your purpose, but SQL 2005 also has pretty good standard reports for this: just right-click the server in SSMS, go to Reports, and select a Standard report. You could also download the Performance Dashboard reports from Microsoft, which I recommend:

http://www.microsoft.com/downloads/en/details.aspx?familyid=1d3a4a0d-7e0c-4730-8204-e419218c1efc&displaylang=en


--CREATE PROC Get_ActiveProcesses @Delay smallint= 5
--AS
----
---- Description Returns list of active processes and their buffer contents (what they execute)
---- A process is considered as active if it has some changes of cpu time consumed or number
---- of io operation in specified period.
---- Input (optional) @Delay - Time interval to catch activity
---- Output Result set with active processes
----
SET nocount on

DECLARE @Delay smallint
SET @Delay = 120


IF @Delay > 59
      SET @Delay = 59

IF @Delay < 1
      SET @Delay = 1

PRINT @Delay

DECLARE @DelayClock CHAR(8), @Internal_Value int

SET @DelayClock = '00:00:' + LTRIM(STR(@Delay))

CREATE TABLE #tmpsysprocesses (
      EventTime DATETIME,
      DBName sysname NULL,
      ObjectName sysname NULL,
      spid smallint,
      [Host_Name] sysname NULL,
      [Program_Name] sysname NULL,
      [Login_Name] sysname NULL,
      [Start_Time] DATETIME,
      TotalReads int,
      TotalWrites int,
      TotalCPU int,
      Writes_in_TempDB int,
      CmdType VARCHAR(16),
      CmdStatus VARCHAR(30),
      SQL_Text xml,
      Blocking_spid smallint,
      Blocking_Text xml)

INSERT INTO #tmpsysprocesses
SELECT GETDATE(), (SELECT DB_NAME(dbid) FROM master.sys.dm_exec_sql_text(x.sql_handle)) AS dbname,
(SELECT OBJECT_NAME(objectid) FROM master.sys.dm_exec_sql_text(x.sql_handle)) AS objectname,
x.session_id AS spid,
x.host_name,
x.program_name,
x.login_name,
x.start_time,
x.totalReads,
x.totalWrites,
x.totalCPU,
x.writes_in_tempdb,
x.command AS cmdtype,
x.status,
(SELECT text AS [text()] FROM master.sys.dm_exec_sql_text(x.sql_handle) FOR XML PATH(''), TYPE) AS sql_text,
COALESCE(x.blocking_session_id, 0) AS blocking_spid,
(SELECT p.text FROM (SELECT MIN(sql_handle) AS sql_handle FROM master.sys.dm_exec_requests r2 WHERE r2.session_id = x.blocking_session_id) AS r_blocking
CROSS APPLY
(SELECT text AS [text()] FROM master.sys.dm_exec_sql_text(r_blocking.sql_handle) FOR XML PATH(''), TYPE) p (text)) AS blocking_text
FROM
(SELECT r.session_id, s.host_name, s.login_name, r.start_time, r.sql_handle, r.blocking_session_id,r.command,r.status,
 SUM(r.reads) AS totalReads, SUM(r.writes) AS totalWrites, SUM(r.cpu_time) AS totalCPU,
 SUM(tsu.user_objects_alloc_page_count + tsu.internal_objects_alloc_page_count) AS writes_in_tempdb, s.program_name
 FROM sys.dm_exec_requests r
 JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
 JOIN sys.dm_db_task_space_usage tsu ON s.session_id = tsu.session_id and r.request_id = tsu.request_id
 WHERE r.status IN ('running', 'runnable', 'suspended')
 GROUP BY r.session_id, s.host_name, s.login_name, r.start_time, r.sql_handle, r.blocking_session_id, r.command, r.status, s.program_name) x

WAITFOR delay @DelayClock

CREATE TABLE #tmpsysprocesses2 (
      EventTime DATETIME,
      DBName sysname NULL,
      ObjectName sysname NULL,
      spid smallint,
      [Host_Name] sysname NULL,
      [Program_Name] sysname NULL,
      [Login_Name] sysname NULL,
      [Start_Time] DATETIME,
      TotalReads int,
      TotalWrites int,
      TotalCPU int,
      Writes_in_TempDB int,
      CmdType VARCHAR(16),
      CmdStatus VARCHAR(30),
      SQL_Text xml,
      Blocking_spid smallint,
      Blocking_Text xml)

INSERT INTO #tmpsysprocesses2
SELECT GETDATE(), (SELECT DB_NAME(dbid) FROM master.sys.dm_exec_sql_text(x.sql_handle)) AS dbname,
(SELECT OBJECT_NAME(objectid) FROM master.sys.dm_exec_sql_text(x.sql_handle)) AS objectname,
x.session_id AS spid,
x.host_name,
x.program_name,
x.login_name,
x.start_time,
x.totalReads,
x.totalWrites,
x.totalCPU,
x.writes_in_tempdb,
x.command AS cmdtype,
x.status,
(SELECT text AS [text()] FROM master.sys.dm_exec_sql_text(x.sql_handle) FOR XML PATH(''), TYPE) AS sql_text,
COALESCE(x.blocking_session_id, 0) AS blocking_spid,
(SELECT p.text FROM (SELECT MIN(sql_handle) AS sql_handle FROM master.sys.dm_exec_requests r2 WHERE r2.session_id = x.blocking_session_id) AS r_blocking
CROSS APPLY
(SELECT text AS [text()] FROM master.sys.dm_exec_sql_text(r_blocking.sql_handle) FOR XML PATH(''), TYPE) p (text)) AS blocking_text
FROM
(SELECT r.session_id, s.host_name, s.login_name, r.start_time, r.sql_handle, r.blocking_session_id,r.command,r.status,
 SUM(r.reads) AS totalReads, SUM(r.writes) AS totalWrites, SUM(r.cpu_time) AS totalCPU,
 SUM(tsu.user_objects_alloc_page_count + tsu.internal_objects_alloc_page_count) AS writes_in_tempdb, s.program_name
 FROM sys.dm_exec_requests r
 JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
 JOIN sys.dm_db_task_space_usage tsu ON s.session_id = tsu.session_id and r.request_id = tsu.request_id
 WHERE r.status IN ('running', 'runnable', 'suspended')
 GROUP BY r.session_id, s.host_name, s.login_name, r.start_time, r.sql_handle, r.blocking_session_id, r.command, r.status, s.program_name) x

CREATE TABLE #xp_msver (
      [index] int,
      [name]VARCHAR(1000) NULL,
      internal_value INT NULL,
      character_value VARCHAR(1000) NULL)

INSERT INTO #xp_msver
EXEC master..xp_msver 'ProcessorCount'

SELECT @Internal_Value = internal_value
FROM #xp_msver
WHERE [name] = 'ProcessorCount'

SELECT t2.DBName, t2.ObjectName, t2.spid AS ProcessId,
       t2.TotalReads, t2.TotalWrites,
       t2.TotalReads + t2.TotalWrites AS TotalPhysical_IO,
       (t2.TotalReads + t2.TotalWrites) - (t.TotalReads + t.TotalWrites) AS Physical_IO_InTheTimeFragment,
       t2.TotalCPU, t2.TotalCPU - t.TotalCPU AS CPU_ConsumedInTheTimeFragment,
       t2.Writes_in_TempDB, UPPER(t2.CmdStatus) AS CmdStatus, t2.CmdType, t2.SQL_Text,
       t2.[Program_Name] AS Application_Name, t2.[Host_Name], t2.[Login_Name], t2.[Start_Time],
       t2.Blocking_spid, t2.Blocking_Text
FROM #tmpsysprocesses2 t2 INNER JOIN #tmpsysprocesses t
ON t2.spid = t.spid AND t2.start_time = t.start_time --AND t2.DBName IS NOT NULL

SELECT MAX(DATEDIFF(ms, t.EventTime, t2.EventTime)) AS Fragment_Duration,
       @Internal_Value AS Number_Of_CPUs,
       SUM(t2.TotalCPU - t.TotalCPU) AS [SUM CPU_ConsumedInTheTimeFragment],
       SUM((t2.TotalReads + t2.TotalWrites) - (t.TotalReads + t.TotalWrites)) AS [SUM Physical_IO_InTheTimeFragment]
FROM #tmpsysprocesses2 t2 INNER JOIN #tmpsysprocesses t ON t2.spid = t.spid

SELECT TOP 5 'Top 5 Queries by CPU' AS Comment, total_worker_time/execution_count AS [Avg_CPU_Time], execution_count AS [Execution_Count],
(SELECT text AS [text()] FROM master.sys.dm_exec_sql_text(qs.sql_handle) FOR XML PATH(''), TYPE) AS Statement_Text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY total_worker_time/execution_count DESC;

SELECT TOP 5 'Top 5 Queries by IO' AS Comment, (total_logical_writes + total_logical_reads)/execution_count AS [Avg_IO_per_Exec], execution_count AS [Execution_Count],
(SELECT text AS [text()] FROM master.sys.dm_exec_sql_text(qs.sql_handle) FOR XML PATH(''), TYPE) AS Statement_Text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY (total_logical_writes + total_logical_reads)/execution_count DESC;

DROP TABLE #tmpsysprocesses
DROP TABLE #tmpsysprocesses2
DROP TABLE #xp_msver
GO





Author

Commented:
Thanks, I definitely don't want to run FREEPROCCACHE or DROPCLEANBUFFERS on my production DBs.

I am going to try both Racimo's server-side trace (too bad there is no option to load into a table by default) and lcohan's query, and I'll post an update later this afternoon.
lcohan, Database Analyst

Commented:
If you run SQL Profiler: after you enter your credentials, on the "Trace Properties" window check the "Save to table" box and follow the steps - this way the trace data is saved into a table. Hope this is what you wanted.
Racim BOUDJAKDJI, Database Architect - DBA - Data Scientist

Commented:
<<If you run SQL Profiler: after you enter your credentials, on the "Trace Properties" window check the "Save to table" box and follow the steps - this way the trace data is saved into a table.>>
A few words of caution on that...

I would not recommend running client-side tracing against a production server, since it can be very dangerous on servers that are under heavy load. Profiler can create overhead that brings a server down under heavy concurrency. Server-side tracing, on the other hand, is very safe even under heavy conditions, but it can only be set up through scripting.
lcohan, Database Analyst

Commented:
That is all true indeed: if you are not familiar with the tool itself, you could crash your SQL Server just by running a default trace. Still, it is a built-in SQL tool that, used wisely, many other databases would love to have. For instance, we use it occasionally, with success, against a web site's SQL backend database to identify specific security and/or performance issues.

Just don't run it on the same computer as your SQL Server, and make sure the trace data itself is light.

Author

Commented:
Thanks, I'm definitely not planning on running Profiler on the server itself. I did learn that one the hard way.
Racim BOUDJAKDJI, Database Architect - DBA - Data Scientist

Commented:
<<Just don't run it on the same computer as your SQL Server, and make sure the trace data itself is light.>>
Actually, client-side tracing is dangerous whether you run the client locally or remotely. I have seen Profiler sessions bring down servers *in minutes*, even though Profiler was running remotely.

The cause is that client-side overhead is *not* only the RAM consumed by the client, which is fixed, but mainly the memory overhead caused by the buffer-size increase needed to stream data to the client in real time. To elaborate: under high transactional throughput, client-side tracing creates memory pressure, because the streaming buffer cannot purge itself fast enough to make room for new data to be streamed.

The only really safe solution is server-side tracing, which does not have that kind of overhead. Unfortunately, it cannot be configured through a GUI.

HTH
Racim BOUDJAKDJI, Database Architect - DBA - Data Scientist

Commented:
<<Thanks, I'm definitely not planning on running Profiler on the server itself. I did learn that one the hard way.>>
Do NOT use Profiler against production if you have a high transactional load. The only safe way is server-side tracing, which establishes the trace without a GUI.

In my company, it is strictly forbidden to run Profiler against certain servers that it has brought down in the past.

From MS Support
http://support.microsoft.com/kb/929728/en-us

".... do not use SQL Profiler to perform a client-side trace of the events in the instance of SQL Server. Instead, use system stored procedures to perform a server-side trace of the events...."


meaning the problem is known and identified...

More links on how to do this right...

http://www.mssqltips.com/tip.asp?tip=1035
http://vyaskn.tripod.com/server_side_tracing_in_sql_server.htm

Hope this helps
Racim BOUDJAKDJI, Database Architect - DBA - Data Scientist

Commented:
To clarify further...

When you run SQL Profiler against an instance of SQL Server 2000 or of SQL Server 2005, each active SPID of the instance requires an exclusive latch (LATCH_EX) to write the event to SQL Profiler. The SPID writes the event to the local memory buffer first. Then, the event is transferred to SQL Profiler. If the local memory buffer is full, the SPID discards the event. After the events in the local memory buffer are transferred to SQL Profiler, the SPID can write new events to the local memory buffer. If other SPIDs are waiting to write events to SQL Profiler, these SPIDs wait in a queue.
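One way to see whether this is already hurting a server is to check the trace-related wait statistics; TRACEWRITE is the wait type accumulated by sessions waiting to write trace events:

```sql
-- Cumulative time sessions have spent waiting to write trace events
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'TRACEWRITE';
```

A large and growing wait_time_ms here while a Profiler session is attached is a strong hint that the trace is throttling the workload.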
Scott Pletcher, Senior DBA

Commented:
>>  (too bad there is no option to load into a table by default) <<

There actually is, but it's too much overhead, so I strongly recommend you *not* do it.

You can load the trace file(s) into table(s) later if you need to.
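For example, a finished trace file can be pulled into a table with fn_trace_gettable (the path below is a placeholder):

```sql
-- Read the trace file (and, via DEFAULT, any rollover files) into a table
SELECT *
INTO dbo.TraceResults
FROM fn_trace_gettable(N'D:\Traces\LongRunning.trc', DEFAULT);
```

Doing this offline, after the trace has been stopped, keeps the table-load overhead off the traced server.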

Author

Commented:
Thanks everyone.  
In the end I chose to use a server-side trace, and I actually used the scripts from http://vyaskn.tripod.com/server_side_tracing_in_sql_server.htm to make reuse a bit easier. I also found a site that can really help if you're not sure of the details for creating what you want (http://sqlserverpedia.com/blog/sql-server-bloggers/how-to-create-a-server-side-trace-with-sql-profiler/).

Author

Commented:
Following the examples at the provided URL http://vyaskn.tripod.com/server_side_tracing_in_sql_server.htm helped a lot in setting up the server-side trace.
Racim BOUDJAKDJI, Database Architect - DBA - Data Scientist

Commented:
<<Following the examples at the provided URL http://vyaskn.tripod.com/server_side_tracing_in_sql_server.htm helped a lot in setting up the server-side trace.>>
Glad it helped. In the end, server-side traces are safe and highly reusable, since they can be programmed to run for specific periods of time, each time a problem arises, with almost negligible impact on the server's performance.

Good luck...
