Solved

MS SQL Stored Procedure and very slow response

Posted on 2009-07-03
362 Views
Last Modified: 2012-05-07
Hello All,
I have a web system that checks around 5,000 rows of data. For each row it performs the checks and then sends the individual row back to the database (MS SQL Server 2000) with the checks complete and a result of PASS, FAIL or NO ANSWER. This updates various things and takes around 10 seconds, which is fine as I can show the user a waiting page. However, when I add the crucial bit in the code box below, it takes 9 minutes! This bit of the query works out the totals for each ID in the 5,000 rows of data, e.g. this is sent back to the database:

10001, PASS

The query below works out the totals and then updates another table. The system then sends the next bit of data and the query runs again:

10001, PASS - run the query below
10001, FAIL - run the query below
... and so on, 5,000 times

As I only want one summary row per item, the query has to check whether a row already exists and then do either an insert or an update. I am sure there is a more efficient way of doing this, so any help at all would be greatly appreciated!
Declare @Count int

Set @Count = (SELECT COUNT(Field1) FROM db1.dbo.tb1 WHERE Field1=@intField1 AND Field2=@intField2)

If @Count = 0
Begin
	if @strState='Field5' and @Count = 0
		begin
			INSERT INTO db1.dbo.tb1 (Field1, Field2, Field3, Field4, Field5, Field6, Field7, Field8) VALUES (@intField1, @intField2, 100, 1, 1, 0, 0, @intField8)
		end

	if @strState='Field6' and @Count = 0
		begin
			INSERT INTO db1.dbo.tb1 (Field1, Field2, Field3, Field4, Field5, Field6, Field7, Field8) VALUES (@intField1, @intField2, 0, 1, 0, 1, 0, @intField8)
		end

	if @strState='Field7' and @Count = 0
		begin
			INSERT INTO db1.dbo.tb1 (Field1, Field2, Field3, Field4, Field5, Field6, Field7, Field8) VALUES (@intField1, @intField2, 0, 1, 0, 0, 1, @intField8)
		end
End

If @Count = 1
Begin
	Declare @Field4 int
	Declare @Field5 int
	Declare @Field6 int
	Declare @Field7 int

	Set @Field4 = (SELECT Field4 FROM db1.dbo.tb1 WHERE Field1=@intField1 AND Field2=@intField2)
	Set @Field4 = @Field4 + 1

	If @strState='Field5' and @Count = 1
		begin
			Set @Field5 = (SELECT Field5 FROM db1.dbo.tb1 WHERE Field1=@intField1 AND Field2=@intField2)
			Set @Field5 = @Field5 + 1

			Set @Field6 = (SELECT Field6 FROM db1.dbo.tb1 WHERE Field1=@intField1 AND Field2=@intField2)
			Set @Field7 = (SELECT Field7 FROM db1.dbo.tb1 WHERE Field1=@intField1 AND Field2=@intField2)

			If @Field6=0 and @Field7=0
				begin
					UPDATE db1.dbo.tb1 SET Field1=@intField1, Field2=@intField2, Field3=100, Field4=@Field4, Field5=@Field5, Field8=@intField8 WHERE Field1=@intField1 and Field2=@intField2
				end
			else
				begin
					UPDATE db1.dbo.tb1 SET Field1=@intField1, Field2=@intField2, Field3=0, Field4=@Field4, Field5=@Field5, Field8=@intField8 WHERE Field1=@intField1 and Field2=@intField2
				end
		end

	if @strState='Field6' and @Count = 1
		begin
			Set @Field6 = (SELECT Field6 FROM db1.dbo.tb1 WHERE Field1=@intField1 AND Field2=@intField2)
			Set @Field6 = @Field6 + 1

			UPDATE db1.dbo.tb1 SET Field1=@intField1, Field2=@intField2, Field3=0, Field4=@Field4, Field6=@Field6, Field8=@intField8 WHERE Field1=@intField1 and Field2=@intField2
		end

	if @strState='Field7' and @Count = 1
		begin
			Set @Field7 = (SELECT Field7 FROM db1.dbo.tb1 WHERE Field1=@intField1 AND Field2=@intField2)
			Set @Field7 = @Field7 + 1

			UPDATE db1.dbo.tb1 SET Field1=@intField1, Field2=@intField2, Field3=0, Field4=@Field4, Field7=@Field7, Field8=@intField8 WHERE Field1=@intField1 and Field2=@intField2
		end
End
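
To make the logic easier to follow, here is a condensed sketch of what the block above effectively does per row (illustrative only; it assumes @strState is always one of 'Field5', 'Field6' or 'Field7', and the column meanings are taken from the code above):

-- Sketch of the same insert-or-update per row (illustrative, not a drop-in replacement)
IF EXISTS (SELECT 1 FROM db1.dbo.tb1 WHERE Field1=@intField1 AND Field2=@intField2)
BEGIN
	-- A summary row already exists: bump the counters and recompute Field3 in one UPDATE
	UPDATE db1.dbo.tb1
	SET Field3 = CASE WHEN @strState='Field5' AND Field6=0 AND Field7=0 THEN 100 ELSE 0 END,
	    Field4 = Field4 + 1,
	    Field5 = Field5 + CASE WHEN @strState='Field5' THEN 1 ELSE 0 END,
	    Field6 = Field6 + CASE WHEN @strState='Field6' THEN 1 ELSE 0 END,
	    Field7 = Field7 + CASE WHEN @strState='Field7' THEN 1 ELSE 0 END,
	    Field8 = @intField8
	WHERE Field1=@intField1 AND Field2=@intField2
END
ELSE
BEGIN
	-- No summary row yet: insert the first one for this Field1/Field2 pair
	INSERT INTO db1.dbo.tb1 (Field1, Field2, Field3, Field4, Field5, Field6, Field7, Field8)
	VALUES (@intField1, @intField2,
	        CASE WHEN @strState='Field5' THEN 100 ELSE 0 END,
	        1,
	        CASE WHEN @strState='Field5' THEN 1 ELSE 0 END,
	        CASE WHEN @strState='Field6' THEN 1 ELSE 0 END,
	        CASE WHEN @strState='Field7' THEN 1 ELSE 0 END,
	        @intField8)
END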

Question by:pidge_uk
 

Author Comment

by:pidge_uk
ID: 24771464
Another thing: if I run this query on my test server it runs in 16 seconds. The only differences are that there is more data on the live site, and that live has separate web and database servers, whereas the test server has MS SQL Server running locally.

Expert Comment

by:shru_0409
ID: 24772307
Check which query is taking the extra time. Also try committing the rows in batches after some limit; I think too much transaction (redo) log is being generated.
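
If it helps, one simple way to see where the time goes is to run the slow batch in Query Analyzer with statistics switched on, for example (sketch only, wrap it around whichever statements you want to measure):

SET STATISTICS TIME ON   -- per-statement CPU and elapsed time
SET STATISTICS IO ON     -- logical/physical reads per table

-- ... the statements you want to measure ...

SET STATISTICS TIME OFF
SET STATISTICS IO OFF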

Author Comment

by:pidge_uk
ID: 24772332
Hello there,

Thanks for the help. The query which is taking the time is included above.

Author Comment

by:pidge_uk
ID: 24773126
Hello All,

I have improved things with the code snippet below (I took the loop approach from another solution), but it is still slow. tb1 has over 2,000,000 rows; could that be the issue, even though the SELECT queries all have WHERE clauses?

Thanks for any help.



 

Begin
	declare @nextid int
	declare @lastid int
	declare @selectcmd nvarchar(255)
	Declare @intPercentage as int

	CREATE TABLE #tmploop (
		id int identity(1,1) not null,
		PDID nvarchar(10) not null
	)
	CREATE INDEX [IX_loop_id] ON [#tmploop]([id]) ON [PRIMARY]

	insert into #tmploop (PDID)
	select distinct A.field1 from db1.dbo.tb1 A where field2=@intfield2

	select @lastid = max(id) from #tmploop
	set @nextid = 1

	while @nextid <= @lastid
	BEGIN
		Declare @PDID As integer
		Set @PDID = (SELECT PDID FROM #tmploop WHERE ID=@nextid)

		Declare @C_P As integer
		Set @C_P = (SELECT COUNT(Result) from db1.dbo.tb1 A where A.field1=@PDID AND field2=@intfield2 AND Result='PASS')

		Declare @C_F As integer
		Set @C_F = (SELECT COUNT(Result) from db1.dbo.tb1 A where A.field1=@PDID AND field2=@intfield2 AND Result='FAIL')

		Declare @C_N As integer
		Set @C_N = (SELECT COUNT(Result) from db1.dbo.tb1 A where A.field1=@PDID AND field2=@intfield2 AND Result='NOANSWER')

		Set @intPercentage = 100
		Declare @total As int
		Set @total = @C_F + @C_N + @C_P

		Begin
			if @C_F = 0 AND @C_N = 0
				begin
					INSERT INTO db2.dbo.tb2 (field1, field2, field3, total, pass, fail, noanswer, percentage) VALUES (@PDID, @intfield2, 100, @total, @C_P, @C_F, @C_N, @intPercentage)
				end
			else
				begin
					INSERT INTO db2.dbo.tb2 (field1, field2, field3, total, pass, fail, noanswer, percentage) VALUES (@PDID, @intfield2, 0, @total, @C_P, @C_F, @C_N, @intPercentage)
				end
		End

		Set @nextid = @nextid + 1
	END
End

Drop table #tmploop
GO



Author Comment

by:pidge_uk
ID: 24773137
Oh this is the error I get:

System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlDataReader.ConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) at System.Data.SqlClient.SqlCommand.ExecuteScalar()

Expert Comment

by:profya
ID: 24780666
There are no details available for us to test against, but your approach looks to me very much like a cursor, especially the WHILE loop. I think the key to improving the query's performance is to take advantage of aggregation.

The condition field2=@intfield2 is redundant, since the temporary table already has that condition applied when it is filled.

As far as I can see, what you actually want is to count the result types. In that case you could, for example, create another temporary table and fill it with something like:

INSERT INTO #newTempTable SELECT result, COUNT(*) AS result_count FROM db1.dbo.tb1 GROUP BY result

You will get three records, and you can then read their values with something like:

SELECT @C_P = result_count FROM #newTempTable WHERE result='PASS'

Then do your calculations and insert the results into the final table.

Of course, you can also combine the counts into a single query using subqueries, something like:

SELECT @C_P=(SELECT COUNT(*) FROM db1.dbo.tb1 WHERE result='PASS'), @C_F=(SELECT COUNT(*) FROM db1.dbo.tb1 WHERE result='FAIL'), .........

The core point, however, is to change the query plan and strategy and to avoid loops as much as possible; the database engine knows how to optimize queries that are well planned. All of this is only a suggestion, and you have to make sure it matches your logic.
I hope this helps.
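
As a rough sketch only (table, column and variable names are taken from your snippets, so double-check them against your real schema), the whole WHILE loop in your last snippet could collapse into a single set-based statement along these lines:

-- Sketch only: one set-based statement instead of the per-PDID loop.
-- Assumes Result is always 'PASS', 'FAIL' or 'NOANSWER', as in your code.
INSERT INTO db2.dbo.tb2 (field1, field2, field3, total, pass, fail, noanswer, percentage)
SELECT  A.field1,
        @intfield2,
        -- field3 is 100 only when there are no FAILs and no NOANSWERs, as in your IF/ELSE
        CASE WHEN SUM(CASE WHEN A.Result IN ('FAIL','NOANSWER') THEN 1 ELSE 0 END) = 0
             THEN 100 ELSE 0 END,
        COUNT(A.Result),                                        -- total
        SUM(CASE WHEN A.Result = 'PASS'     THEN 1 ELSE 0 END), -- pass
        SUM(CASE WHEN A.Result = 'FAIL'     THEN 1 ELSE 0 END), -- fail
        SUM(CASE WHEN A.Result = 'NOANSWER' THEN 1 ELSE 0 END), -- noanswer
        100                                                     -- percentage (fixed at 100 in your loop)
FROM    db1.dbo.tb1 A
WHERE   A.field2 = @intfield2
GROUP BY A.field1

That way SQL Server makes one pass over tb1 for the given field2 value instead of running several thousand separate COUNT queries.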

Expert Comment

by:profya
ID: 24780671
I forgot something: you can increase the query timeout. However, that is not the best solution; you should optimize your query plan first.

Author Comment

by:pidge_uk
ID: 24781145
Hello Profya,

Thanks for that. I had actually thought I needed to improve the query plan, so I made the system work out the totals as it went along and then just add the counts to the insert on the database. Am I right in thinking that a COUNT against 2 million rows of data will be slow, even though the count is only looking at a few thousand of them? Surely SQL Server can manage counts on a million rows without a problem.
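
I am guessing the answer partly depends on indexing: if tb1 has an index covering the columns in those WHERE clauses, the counts should be index seeks rather than scans of the whole table. Something along these lines, assuming the column names from my snippets and that no suitable index exists already:

-- Sketch only: a covering index for the counting queries.
-- SQL Server 2000 has no INCLUDE clause, so all three columns go into the index key.
CREATE NONCLUSTERED INDEX IX_tb1_field2_field1_Result
    ON db1.dbo.tb1 (field2, field1, Result)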


Accepted Solution

by:
profya earned 500 total points
ID: 24783335
No, I don't think so, since counting records at the database level is faster than iterating with loops or cursors. The database query optimizer selects the best plan it can find to produce the result; for example, it may apply the WHERE clause first to narrow the result set and then do the count, or it may count while it is scanning rows. I am not a database engine designer, but I have noticed that aggregate functions are faster in most cases. Using COUNT(*) is faster than COUNT(col) because the database does not need to check whether col is NULL. SQL Server 2000 does not support partitioning; if it did, partitioning would help cases like yours a lot.
What I suggest now is to get the totals using COUNT with exactly the WHERE clause needed, and then do the required calculations on those totals; I expect this to clearly improve the overall performance. I noticed duplicated WHERE clauses, an unnecessary loop and duplicated work within the loop, and these things hurt performance badly.
Before you decide, you can test the query I suggested earlier, the one that uses subqueries to calculate the totals. SQL Query Analyzer shows you the elapsed time for the query; see whether the boost is considerable, then apply your logic on top of that and go ahead. I hope this helps.
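
As a concrete sketch of that (variable and column names are taken from the earlier snippets, so adjust them to the real schema), the three counts inside the loop can be collected in a single pass over the table instead of three:

-- Sketch only: one pass instead of three separate COUNT queries per PDID.
SELECT  @C_P = SUM(CASE WHEN Result = 'PASS'     THEN 1 ELSE 0 END),
        @C_F = SUM(CASE WHEN Result = 'FAIL'     THEN 1 ELSE 0 END),
        @C_N = SUM(CASE WHEN Result = 'NOANSWER' THEN 1 ELSE 0 END)
FROM    db1.dbo.tb1
WHERE   field1 = @PDID
  AND   field2 = @intfield2

-- The SUMs come back NULL when no rows match, so guard with ISNULL if that can happen:
-- SELECT @C_P = ISNULL(@C_P, 0), @C_F = ISNULL(@C_F, 0), @C_N = ISNULL(@C_N, 0)

You can paste that into Query Analyzer alongside the original three SELECT COUNT statements and compare the elapsed times directly.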

Author Closing Comment

by:pidge_uk
ID: 31599503
Thanks for that, profya. It was indeed a badly designed query that was causing the problems. Thanks for answering.