Stored Procedure Much Slower in .NET SqlCommand than in SSMS

Posted on 2013-02-04
Last Modified: 2013-02-11
I have a stored procedure that executes an INSERT statement, and we are seeing significant delays when running it.  When calling it from our C# .NET application to insert 30 records in a row, it takes roughly 4 seconds total to complete (counting only the time spent in the SqlCommand.ExecuteNonQuery() method).  However, calling the exact same stored procedure the same number of times from within SQL Server Management Studio takes only about 0.4 seconds.  I can't figure out what's different between the two setups that would cause a 10x difference in speed.

I have tried all of the following with no noticeable change in speed:
1.  Creating the stored procedure "WITH RECOMPILE"
2.  Checking all of the "SET" values that are configured within SSMS and C#.  The only difference was SET ARITHABORT, which was ON in SSMS and OFF when called from the .NET application.  Adding "SET ARITHABORT ON" to the start of the stored procedure made no difference, though.
3.  Removing all default values from the sproc parameters
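To double-check item 2, the effective session options can be compared from both clients. DBCC USEROPTIONS shows the current session's settings, and (on SQL Server 2005) sys.dm_exec_sessions exposes another session's settings; the session_id below is a placeholder to be taken from sp_who2:

```sql
-- Run in both SSMS and via a test command from the app:
DBCC USEROPTIONS;

-- Or inspect the app's session from SSMS
-- (replace 53 with the app connection's session id from sp_who2):
SELECT arithabort, ansi_nulls, quoted_identifier
FROM sys.dm_exec_sessions
WHERE session_id = 53;
```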

The code used to call the stored procedure from the .NET application is:

using (SqlConnection newConn = new SqlConnection(connectionString))
using (SqlCommand uCmd = new SqlCommand("sproc_name", newConn))
{
   uCmd.CommandType = CommandType.StoredProcedure;

   //About 15 parameters added using:
   uCmd.Parameters.AddWithValue("@ParamName", value);

   //One output parameter
   SqlParameter paramOUT = new SqlParameter("@OutPutKey", SqlDbType.UniqueIdentifier);
   paramOUT.Direction = ParameterDirection.Output;
   uCmd.Parameters.Add(paramOUT);

   newConn.Open();
   uCmd.ExecuteNonQuery();
}


The stored procedure itself is just a list of SET commands (SET ANSI_NULLS ON, SET QUOTED_IDENTIFIER ON, SET ARITHABORT ON), a list of non-defaulted parameters, and the setting of the output variable (the new uniqueidentifier that will be inserted as the table's primary key), followed by the INSERT statement itself.

The application is built on .NET 4 and the database server is MS SQL Server 2005.
Question by:DigitechComputer
Assisted Solution

Lowfatspread earned 250 total points
ID: 38853156
If I understand you correctly, there is a significant difference between your application calling the stored procedure 30 times and looping 30 times in SSMS calling the same stored procedure...

can you post the actual stored procedure code being invoked...

Can you also explain the intention of the application process... If you have a scenario where you wish to insert/add/amend a set of data in your database, you are probably better off designing your application to pass the whole "set" of data to the database in as few steps as necessary, i.e. on a collection basis rather than an individual object basis...

Expert Comment

by:Aaron Shilo
ID: 38854005

You should trace the execution of the proc in Profiler and view the execution plan that the .NET app connection receives, then compare it with the execution plan you get when you execute the proc directly from SSMS.
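As an alternative to Profiler, the plan cache can be queried directly; two cached entries for the same procedure would suggest the two clients compiled separate plans (e.g. because of differing SET options). This is a sketch using the SQL Server 2005 DMVs, with sproc_name taken from the question:

```sql
-- Look for multiple cached plans for the same procedure:
SELECT cp.plan_handle, cp.usecounts, cp.objtype, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%sproc_name%';
```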


Assisted Solution

by:Jacques Bourgeois (James Burger)
Jacques Bourgeois (James Burger) earned 250 total points
ID: 38854177
When you send your commands one by one, there is a communication with some overhead for each command. This overhead is not there when running inside of Management Studio, which is also more closely linked to the server.

You could improve the execution by sending your commands in batch instead of one by one. Lookup UpdateBatchSize in the documentation.
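A batched version might look like the sketch below, using SqlDataAdapter.UpdateBatchSize. The DataTable, column mapping, and batch size are placeholders; note that batching requires UpdatedRowSource = None, so the @OutPutKey output parameter from the question would have to be dropped (e.g. by generating the key client-side):

```csharp
// Sketch only -- rowsToInsert is a hypothetical DataTable whose
// rows are all in the Added state.
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlDataAdapter adapter = new SqlDataAdapter())
{
    SqlCommand insert = new SqlCommand("sproc_name", conn);
    insert.CommandType = CommandType.StoredProcedure;
    // Map each sproc parameter to a source column of the DataTable:
    insert.Parameters.Add("@Field1", SqlDbType.UniqueIdentifier, 0, "Field1");
    insert.Parameters.Add("@Field2", SqlDbType.DateTime, 0, "Field2");
    // Output parameters are not supported in batched commands:
    insert.UpdatedRowSource = UpdateRowSource.None;

    adapter.InsertCommand = insert;
    adapter.UpdateBatchSize = 30;   // up to 30 calls per round trip

    adapter.Update(rowsToInsert);
}
```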

Author Comment

ID: 38857525
It turns out there was an issue in my initial testing environment, and the execution times of the insert stored procedure from SSMS and .NET are actually about the same.  Both are still slower than I'd like them to be, though.

I looked into adding them all at once as you suggested.  The application is typically saving 1-30 (usually fewer) business objects at a time, calling a different insert stored procedure for each.  I don't think I would benefit from any sort of bulk inserting, but maybe you have something else in mind.

I did try having the calls to these sprocs all share a single SqlConnection, which is only opened once at the start and closed once when all of the inserts are done.  I also changed similar business objects to share the same SqlCommand object, just modifying the parameter values instead of redeclaring them.  These changes gave me about a 25% boost in performance.  Do you have any other ideas for how this setup can be improved?
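For reference, the shared-connection/shared-command approach described above might look like this sketch (BusinessObject and objectsToSave are hypothetical names standing in for the application's types):

```csharp
// One connection, one command; only parameter values change per row.
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("sproc_name", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@Field1", SqlDbType.UniqueIdentifier);
    cmd.Parameters.Add("@Field2", SqlDbType.DateTime);
    // ...remaining parameters declared once...

    conn.Open();
    foreach (BusinessObject obj in objectsToSave)
    {
        cmd.Parameters["@Field1"].Value = obj.Field1;
        cmd.Parameters["@Field2"].Value = obj.Field2;
        cmd.ExecuteNonQuery();
    }
}
```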

Here is an example of the insert stored procedure I use:

alter procedure InsertStuff
	@Field1 uniqueidentifier,
	@Field2 datetime,
	@CreateDate datetime,
	@PrimaryKEY uniqueidentifier OUTPUT
as

declare @newCreateDate datetime 
set @newCreateDate=getDate()

set @PrimaryKEY = NEWID()

	INSERT INTO [dbo].[Table]



Assisted Solution

Lowfatspread earned 250 total points
ID: 38857587
if you are only inserting one row then there is no need for this

declare @newCreateDate datetime
set @newCreateDate=getDate()

just reference getdate() directly in the values list...

the only indication of a saving is the fact that you have an output variable defined for the key.... why do you need that?

presumably to use in some parent/child relationship....

so could you not pass that collection of parent/child data just once to sql and let a larger stored procedure handle all the necessary update in one interaction....
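Dropping the local variable as suggested, the INSERT can call getdate() inline; the column names here are placeholders, since the question's INSERT statement was truncated:

```sql
-- No @newCreateDate variable needed:
INSERT INTO [dbo].[Table] (PrimaryKEY, Field1, Field2, CreateDate)
VALUES (@PrimaryKEY, @Field1, @Field2, getdate())
```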

Accepted Solution

Jacques Bourgeois (James Burger) earned 250 total points
ID: 38857679
You cannot have a more basic INSERT than this one. And you find that it is too slow with no more than 30 inserts at a time?

I would look at the structure of the database, especially at the proper use of indexes and triggers.

Triggers are not visible in the INSERT SQL command itself, but they can slow things down in the background. Improving a trigger can thus improve your INSERT. Working in transaction mode can sometimes improve things. Most programmers will tell you that it does nothing with INSERT since you are not hitting the same record more than once, but if the triggers do end up hitting the same row(s) many times, working inside of a transaction could help a bit.

Many database designers are not aware that indexes improve things on SELECT and UPDATE, but slow down things on INSERT. I have seen a few instances of indexes that were used only a few times a year for special reports. These slow down INSERT operations all year long only to save time once or twice on yearly reports.
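Candidates for removal can be spotted with the index usage DMV available since SQL Server 2005; the table name below is a placeholder. Indexes showing many user_updates but few or no seeks/scans/lookups since the last restart are costing INSERTs without paying for themselves:

```sql
-- Compare read activity vs. write maintenance per index:
SELECT i.name, s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
    ON  s.object_id = i.object_id
    AND s.index_id = i.index_id
    AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.[Table]');
```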

I also wonder why you need the NewID function. Couldn't you use a GUID primary key instead of generating your own?

Author Comment

ID: 38860356
I will directly call getdate() in the values list. I had thought that calling a function in an insert wasn't allowed, as is the case with sproc parameters.  I should also be able to eliminate the output parameter, creating the primary key value on the .NET side.  

The table in question doesn't use triggers, but I was able to eliminate 3 mostly-unneeded indexes.  

It's very hard to tell since the more I run this query, the more SQL optimizes it, but unfortunately these changes don't seem to have made much of a difference.  It even seems to be taking slightly longer to save now, but that's almost certainly because SQL hasn't fully optimized the new structures yet.  A single call to the sproc with empty values in its parameters is currently taking about 25ms, which still seems higher than it should be.  Is there anything else I can look at to improve performance other than hardware and network setup?

Author Closing Comment

ID: 38877826
So it turns out that having a clustered index on a non-sequential uniqueidentifier field is a bad idea.  Changing the clustered index to a sequential field like CreateDate made a very noticeable improvement in save times.  Combined with your other suggestions, INSERTs are performing a lot better than they used to.  Thanks to all for your suggestions!
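For anyone landing here later, another way to get the same effect while keeping a uniqueidentifier clustered key is NEWSEQUENTIALID(), which generates ascending GUIDs so the clustered index appends rather than splitting pages. It is only valid as a column default, not as a function call inside the sproc; the table, constraint, and column names below are placeholders:

```sql
-- Sequential GUIDs via a default constraint (SQL Server 2005+):
ALTER TABLE [dbo].[Table]
    ADD CONSTRAINT DF_Table_PrimaryKEY
    DEFAULT NEWSEQUENTIALID() FOR PrimaryKEY;
```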
