Simon Cripps (United Kingdom) asked:
SQL Server 2008 Express performance differences on Windows Web Server 2008 R2

Hi,
I am running a daily batch job that updates a sizable amount of data. In testing on my development machine the job takes about 2 hours. I have migrated it to my production virtual server, and the job now takes 12 hours.
I believe the SQL Server instances are set up the same, but I am happy to check specific settings if they are pointed out.
All data is stored locally relative to the running job (either on the server or on the PC).
The development PC is used only for development. The virtual server also hosts the website application that uses this data, and while the job runs it has a crippling impact that makes the site unusable. Watching Task Manager on the server while the job runs, the CPU hovers around 20% and RAM sits at 2.5 GB of the available 4 GB. I currently have very little traffic to the website and do not want to encourage more until this is resolved.

Specs are
Development
Windows 7 Professional version 6.1.76
x64 based PC 2.93Duo CPU
4GB RAM
MS SQL Server 2008 Express 10.0.25

Virtual server
Windows Web server 2008 R2 ver. 6.1.76
x64 based
2 x quad core AMD 2.29 GHz processors
4GB RAM
MS SQL Server 2008 Express 10.0.1

Any suggestions on why there is such a difference, and how to resolve it, would be much appreciated. This is preventing my website from running effectively for 10 hours a day while the job runs.
san_spy (India) replied:
Run a Profiler trace and check the time taken by each step.
Then compare the results between your test and production environments.
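To make that concrete, here is a minimal sketch of timing one step in SSMS; the table and UPDATE here are placeholders for whatever the batch job actually does:

```sql
-- SET STATISTICS TIME/IO report CPU time, elapsed time, and page reads
-- for each statement in the Messages tab of SSMS.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- Placeholder for one step of the batch job:
UPDATE dbo.BatchData
SET    Processed = 1
WHERE  LoadDate = '20100101';

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

Running the same step on both machines and comparing logical versus physical reads will quickly show whether the slowdown is disk-bound.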
ASKER CERTIFIED SOLUTION
kmalte replied:
Yes, disk may be coming into play here, as the VPS most likely uses shared storage versus your PC's local storage. There may also be a difference in drive types and controllers. But I will leave that to the more hardware-savvy, as I have been out of that line of work for some time.

As far as SQL Server goes, since Express will only use 1 GB of memory and 1 CPU per instance, you would think they would perform the same. To confirm the versions, run "SELECT @@VERSION" on both systems via SSMS and post the results, please. Since you are dealing with 64-bit systems, it is possible that one being 2008 R2 64-bit explains it performing better than the other.
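For reference, @@VERSION and the SERVERPROPERTY built-ins return the exact build and edition:

```sql
SELECT @@VERSION;

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- e.g. 10.0.2531.0
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,    -- RTM, SP1, ...
       SERVERPROPERTY('Edition')        AS Edition;
```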

Other thoughts on disk: if that is the issue, then you want to reduce disk operations as much as possible. One thing is the sizing of the database, logs, and tempdb. Make sure they are at an optimal size to reduce the number of times they have to grow during job execution. Keep in mind that if the job does a lot of deletes and then repopulates the tables with new data, all those deletes also hit the transaction log, which is written to disk -- and if you have multiple partitions, it is a GOOD idea to have the logs on a different spindle.
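A sketch of that pre-sizing, with placeholder database and file names and sizes you would tune to your own data volumes:

```sql
-- Grow the files once, up front, instead of suffering many small
-- autogrows during the job. Names and sizes here are placeholders.
ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_Data, SIZE = 2048MB, FILEGROWTH = 256MB);

ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_Log, SIZE = 512MB, FILEGROWTH = 128MB);
```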

References:
The Difference Between Truncation and Deletion in Microsoft SQL Server
-- by chapmandew; has good information on how and why to use TRUNCATE TABLE.

Managing the Transaction Log for the Accidental DBA
Managing Fragmentation for the Accidental DBA
-- by mark_wills; both have good information. The latter is on fragmentation, which could be useful to note in your case.
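The truncate-versus-delete point boils down to logging: DELETE writes every removed row to the transaction log, while TRUNCATE TABLE only logs page deallocations. A quick illustration (the table name is a placeholder; note TRUNCATE cannot take a WHERE clause and fails if the table is referenced by a foreign key):

```sql
-- Fully logged, row by row; honors a WHERE clause:
DELETE FROM dbo.StagingData WHERE LoadDate < '20100101';

-- Minimally logged, all rows at once; also resets any identity column:
TRUNCATE TABLE dbo.StagingData;
```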

For the IIS side of things, since you have control over the application, you can look at caching.  If the data the website uses doesn't change often or doesn't need to be reflected live (i.e., you can serve stale information until the job is complete), then you may want to enable some caching on the application end.  You can use interval caching so the site only goes back to the database every 4 hours, say; otherwise it serves data already retrieved.  This will help keep IIS requests from fighting with the database during the job's execution.

Also on the disk I/O side, you can look at IIS logging.  If you are not intending to use this feature at all, turn it off.  We used to trim the logging down to the bare essential data we would actually use, which reduces the amount that has to be written to disk for each IIS request.  Additionally, we made sure the log file rolled over at a sensible cutoff point so it didn't get too long; if you have ever tried to open or manipulate an oversized file in Notepad, you will know how that slows down writes.

Hope that helps!
Simon Cripps (Asker) replied:
MW: Thanks, that is a lot of info. I'll take a few days to digest it and get back. Cheers.
Yes, I got into it a bit there, didn't I? *smile* I guess I figured if I was going to post a mark_wills article, I needed to write a post that did it justice.  As you will read, Mark gives very good detail in everything he posts ... hopefully some of this helps you in your efforts.

Take your time; just remember to post progress/feedback every 3-4 days so that your question doesn't drop into the abandoned category.

Regards,

Kevin
I think that disk performance is the issue here. I have downloaded and run DiskBench, which measures disk performance as it transfers a file. I did this several times on the dev machine and on the server, using the same file on each machine.
The difference was significant: on the dev PC, transferring the 174 MB file from one directory to another averaged 650 MB/s, while on the VPS it averaged only 2.5 MB/s. With that much of a difference, I'm surprised the job is only running six times slower on the virtual machine.
Apart from making the jobs more efficient, are there any settings I can look at (or ask my service provider to resolve) to improve the hardware performance?
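As a sanity check independent of DiskBench, a rough sequential-write benchmark can be sketched in a few lines; the sizes are arbitrary, and OS caching means a single run is only indicative:

```python
# Minimal sketch of a DiskBench-style sequential write test against
# local temp storage. File size and block size are arbitrary choices.
import os
import tempfile
import time

def write_throughput_mb_s(total_mb=64, block_kb=1024):
    """Write total_mb of zeros in block_kb chunks and return MB/s."""
    block = b"\0" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # force the data out of the OS cache
        elapsed = time.perf_counter() - start
        return total_mb / elapsed
    finally:
        os.remove(path)

if __name__ == "__main__":
    print(f"{write_throughput_mb_s():.1f} MB/s")
```

Running the same script on both machines gives a second data point to show the provider.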


Out of interest I have also run SELECT @@VERSION, with the following results:
on Dev PC
Microsoft SQL Server 2008 (SP1) - 10.0.2531.0 (X64)   Mar 29 2009 10:11:52   Copyright (c) 1988-2008 Microsoft Corporation  Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: )

on VPS

Microsoft SQL Server 2008 (RTM) - 10.0.1600.22 (X64)   Jul  9 2008 14:17:44   Copyright (c) 1988-2008 Microsoft Corporation  Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: )
kmalte replied:
I don't think that your provider will move you to a dedicated or less-used disk for free, but that's pure speculation on my side. What you could do is ask for a quote to lease a SQL Server Standard license, since the Express edition won't use more than 1 GB RAM and 1 CPU per instance, just as mwvisa wrote. Increasing the RAM available to SQL Server should result in fewer disk I/Os.
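You can see what the instance is currently allowed to use with sp_configure; on Express the buffer pool is capped at 1 GB regardless of this setting, which is the limit a Standard license would lift:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- config_value / run_value show the configured cap in MB
EXEC sp_configure 'max server memory (MB)';
```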
SOLUTION
Agreed with the previous comment.  It goes back to the use of physical RAM versus virtual memory (paging), or in SQL's case tempdb, which I alluded to in other posts: the server ends up thrashing itself because it is fighting high disk I/O, the CPU is tied up handling requests for extended periods because it can't return quickly, and under memory pressure tempdb has to grow, and so on -- it just becomes a performance mess.

Might be a good way to approach the provider as they shouldn't turn down making another sale. *smile*
From the appearance of this solution it looks like I have requested that this be closed without assigning points.
This could not be further from the truth; there were some great contributions above, and I thought I had accepted this as closed with multiple solutions. If that is the case, please ignore this OBJECT request; otherwise please reopen so that I can award points to the excellent contributions.
Apologies for any confusion.
Crippsy,

There is currently a slight bug in the close process for some users.  You originally assigned points fine, as you did above -- it is just that instead of being immediate, it will wait 4+ days as if this were an auto-accept.

I am glad you have resubmitted this though as this time you assigned more points to kmalte, which is probably more fair.

Thanks for accepting our answers.

Good luck!

Best regards and happy coding,

Kevin