Simon Cripps (United Kingdom)

asked on

Cloud server set up

Hi, I'm looking for some advice on setting up my server. I am using a cloud server and am able to tweak the CPU and memory settings; however, I want to make sure I make efficient use of my money.
The site runs OK most of the time. A couple of pages take up to 10 seconds to load, but most of the important landing pages are delivered within a few seconds. The site reads from a database with a million product rows.
Once a day, overnight, I update all million records in a scheduled batch run.
This job takes about 2 hours on my development PC, which has 4 GB RAM and a 2.93 GHz processor.
The server is currently configured with 2 GB RAM and 1 CPU core (up to 2 GHz).
As the job runs on the server, the memory seems to stabilise at almost the maximum (1.87 GB) and the CPU fluctuates between 50% and 90%; however, the job takes 13 hours, and during all that time website performance is impacted.
As the CPU does not often reach 100%, would an extra CPU core make the job run faster or free up the memory? Alternatively, would another GB of memory allow the job to run quicker?
As cost is key and an extra processor core costs the same as 1 GB of memory, what would be the priority and most efficient use? Or do I resign myself to having to pay for both CPU and memory?
Any suggestions?
Kevin Cross (United States)

I would go for the 1 GB of memory.  SQL will need the memory to prepare the data during the updates in the nightly batch.  You can see this in your statistics: memory is, for all intents and purposes, maxed out while the CPU is only around 75% utilized.  If you increase the memory, more can be processed at once, so CPU usage may shift up a bit; however, the run time should be a lot shorter.

I had a server when I started my current job (5 years ago) that kept failing every day, or at least appeared to.  The issue was that the system ran at 100% CPU for extended periods of time, blocking out clients.  By simply adding more memory, I was able to get the server to handle the client query requests and stay stable -- that system is still running today with the same CPU it had, on a box that was sized and purchased close to 10 years ago.

The moral of the story is that on the extremely large queries that previously crashed the system, the server would still spike CPU during the request, but it could organize the result set in memory almost instantaneously without having to page to disk, so the high CPU was no longer an issue.

See, the issue is not so much high CPU, but the amount of time the CPU is kept under a particular strain, so the key is to lessen that.  And keep in mind that once you go past physical memory you get into virtual memory, which creates disk I/O, which can make an already-busy CPU spin even more as it tries to handle the I/O operations at the same time.  The problem grows much worse than linearly, which is why you see a 13:2 difference in run time even though the memory difference is only 1:2 (2 GB versus 4 GB).  Likewise, adding only 1 GB more memory may seem minor, but the added throughput will show gains a lot faster than you would think, especially if Windows and SQL are performance-tuned correctly.
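If you want to see this memory pressure directly, one rough check (a sketch; it assumes SQL Server 2005 or later and VIEW SERVER STATE permission) is the buffer pool's "page life expectancy" counter:

```sql
-- Page life expectancy: roughly how long (in seconds) a data page
-- survives in the buffer pool before being evicted. Values that stay
-- very low while the batch runs suggest the working set does not fit
-- in RAM and SQL Server is churning pages to and from disk.
SELECT counter_name,
       cntr_value AS seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```

Run it a few times during the nightly job and compare against a quiet period; a collapse during the job is the signature of the paging problem described above.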

:) HTH
Simon Cripps
I've increased the memory to 3 GB, and initially this appeared to improve performance: looking at Task Manager, it freed up some of the CPU and the memory. The job still takes about 8 hours, though (compared to 2 on the development PC).
Then I increased the CPU. All that seemed to happen was that CPU usage dropped to very low levels of about 20%. So now the server is using 2.1 GB of memory (out of 3 GB) and has 2 processor cores (which should be more processing power than the development PC), yet the job is still extremely slow and can prevent the website from working.

a) Would you suspect that taking the memory up to match the development PC (4 GB) would make a difference, even though the server is currently running with about 30% spare?
b) Or could this be a more fundamental issue with the setup of SQL Server? The database is restored from a backup of the development machine, so I assumed it would be the same. Do you know if there are certain features of a database or server that are not carried across in a backup -- things such as indexing, etc.?

Your views would be very much appreciated.
What do you mean by "cloud server"? Do you use something like Amazon EC2?
Do you use a single server for both the web site and the database?

Yes, I use a 1and1 Dynamic Cloud Server, which I believe is much like Amazon EC2.
The same server is used for the database and the website. The job that runs updates the website data.

I used to update the data on the development machine, then back up and zip the database and transfer it to the production (cloud) server, but the transfer took a couple of hours and the site had to be stopped while I restored the production database. Now I have an automated job that updates the website data directly -- however, the impact on performance is appalling.
Very much agree you were right about the extra GB of memory. I think I will have to go and match the 4 GB of memory as well, though I am a bit concerned this may have no effect: it appears that I am only using 2.15 GB and therefore have 0.85 GB spare. It is now obvious that the job needs above 2 GB, but it does not seem that taking it up to 4 GB will do anything better than increase the free memory to 1.85 GB.
My knowledge of RAM is limited. If I went up to, say, 8 GB, would you expect the performance to keep on increasing? I will move up to 4 GB and let you know.

With regard to getting to the 2 hours, I still suspect I should get the same performance as the PC, as all the data is transferred to the server prior to running the job, so the job runs locally on the server.
I think I have further issues to resolve outside of this discussion regarding the efficiency of the jobs, and that even at best I am likely to take a 2-hour performance hit daily, so I may need to look at other ways to get the updated database than running against production data.
>>The same server is used for the database and the website.<<
That is not recommended.  You are better off using one server for your database and another for your website.

Also, the key questions you have not answered (or I have not seen) are:
Are you using 32-bit or 64-bit SQL Server?
Which edition of SQL Server 2008 are you using?
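If you are not sure, both answers can be read straight from the server itself (a quick check that should work on any edition):

```sql
-- Reports full version string, build number, service pack level, and
-- edition. The @@VERSION string also ends in (X64) or (X86), which
-- tells you whether the instance is 64-bit or 32-bit.
SELECT @@VERSION                        AS full_version_string,
       SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS service_pack,
       SERVERPROPERTY('Edition')        AS edition;
```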
Okay, so the job is running locally on the server...

Before moving to 8 GB of memory, it would be good to at least get this to run in the same time it did previously.  As you said, at some point more memory may not be of any use, as SQL may not be able to take advantage of it, depending on what is making the process run long.

It may ultimately help, though.  But how much memory can be addressed depends on the operating system version, so if you don't have an OS capable of addressing over 4 GB, the extra will be a waste of your money.

Once you get the system to 4 GB, a trick you can use, if this box is strictly a SQL Server, is to tune Windows to use only 1 GB for kernel memory instead of the standard 2 GB it allocates.  This leaves 3 GB for SQL.
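On the SQL Server side, you can also cap how much memory the engine itself grabs, so that IIS and the OS keep some headroom. A sketch, assuming you want to leave roughly 1.5 GB for everything else on a 4 GB box (the 2560 figure is illustrative; adjust to taste):

```sql
-- Cap SQL Server's buffer pool at 2.5 GB, leaving room for the OS and
-- IIS. 'max server memory' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2560;
RECONFIGURE;
```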

Additionally, how are you measuring memory usage?  With most VM environments there are graphical statistics available.  Can you see what the memory and other resource patterns look like during the execution of the job?  It might give you some good insight into what is going on.

If the server is using 2.15 GB at rest, then when the job runs it only has 0.85 GB to do its work -- think of it that way, rather than as spare memory.
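For a view from inside SQL Server rather than Task Manager, the memory DMV added in SQL Server 2008 shows physical memory as the engine sees it (again assuming VIEW SERVER STATE permission):

```sql
-- Physical memory as reported to SQL Server, plus whether Windows has
-- signalled memory pressure (system_memory_state_desc).
SELECT total_physical_memory_kb / 1024     AS total_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;
```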
Oh, I didn't see AC's posting, but I agree.  That goes to my point on memory usage and 8 GB.  The information requested, and the same details for the OS, will help us assist.

If this is also a web server, then keep in mind that you will need resources for the number of concurrent requests.  This is like my story about our server: the issue isn't so much the server needing all the memory or CPU for a short time, but what happens when memory and CPU are already high and then more requests come in needing more memory -- you get disk I/O, and the CPU now has to handle yet another activity.  Even without excessive paging, think about the disk I/O, CPU, and memory being used by IIS at the same time SQL is struggling to get the job done.
>>Oh, didn't see AC's posting<<
As usual I was just hijacking the thread and reinforcing your point.  :)
Very interesting, thanks for your comments.
I hadn't looked at running the site from one server and the database from another. That looks like another expense, and a few more website-design technicalities to learn.
I am using SQL 2008 R2 and I think 64-bit (my server is down at the moment, but I think that is the case).

Thanks for those tips on tuning Windows; again, something else I will have to learn, as I have no experience in this area, and I wonder what issues I may have doing this on a cloud server. As I say, maybe a discussion for another day.

All I was using to see the system resources was Task Manager; by the sounds of it, in my ignorance, this is not the best or correct tool. I believe it gives a view of the sum of the concurrent processes, and as this is solely a server for my website I am happy(ish) looking at the total. I have not yet had big volumes to see what happens then -- that will be a nice problem to have one day.

Thanks for the additional information.  Task Manager isn't necessarily wrong, just keep in mind that it is showing the resources being consumed currently by running tasks (processes); therefore, it is not taking into account specific activities that may spike the memory or cpu needed during processing of your SQL job nor is it going to simulate for you what happens when you additionally add web requests happening concurrently.  So the point is that having 0.85gb "free" when the server is at rest is misleading.  
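While the nightly job is actually running, you can also ask SQL Server what it is waiting on, which Task Manager cannot tell you (a sketch; run this from a second connection during the batch):

```sql
-- Live requests: if wait_type shows PAGEIOLATCH_* the job is stalled
-- on disk reads (a sign memory is too small); WRITELOG instead points
-- at transaction-log I/O as the bottleneck.
SELECT session_id, status, command, wait_type, wait_time,
       cpu_time, logical_reads, writes
FROM sys.dm_exec_requests
WHERE session_id > 50;  -- skip system sessions
```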

Try turning on the /3gb switch in the OS; that will do what I described -- tell Windows to use 1 GB for kernel memory, leaving 3 GB for use by the IIS/SQL processes.
>>I am using SQL 2008 R2 and I think 64bit<<
This information is critical.  Any advice we can give you depends on it.  We also need to know which Edition you are using.  For example, Workgroup Edition does not support more than 4 GB, and Express does not support more than 1 GB.
>>Try turning on the /3gb switch in the OS <<
That would make the assumption they are still using a 32-bit O/S, right?
I'm using
Windows Web Server 2008 R2
Good point, AC!
Hmmmm.  Web Server Edition -- so does that mean SQL Server 2008 R2 Express?
I'm using
Windows Web Server 2008 R2, version 6.1.7600 (build 7600)
System type: x64-based PC
I'm trying to find out which edition. In the Help and Support window it only says Windows Server 2008 R2 with no edition name underneath, and in System Information the Other OS Description is "Not available".

SQL Server 2008 Version 10.0.1600.22

As this is 64-bit, does that mean I can't configure the memory trick using /3gb?
yes it is SQL Server 2008 R2 Express
Then the maximum memory supported by MS SQL Server Express is 1GB.
Yes.  That is unfortunately a fact.
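You can watch that cap in action: however much RAM the box has, an Express instance's buffer pool should plateau around 1 GB. A rough way to see the current buffer pool data-cache size (an illustration; needs VIEW SERVER STATE):

```sql
-- Each buffered data page is 8 KB, so row count * 8 / 1024 gives the
-- buffer pool's data cache in MB. On Express this should never climb
-- much past ~1024 MB no matter how much RAM is installed.
SELECT COUNT(*) * 8 / 1024 AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors;
```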
acperkins, do you know how Windows Web Server 2008 R2 operates with respect to kernel memory?  Is the OS still allocating the first 2 GB for the kernel?  If so, would you say 1 GB for SQL + 1 GB for IIS + 2 GB for the OS is a fair assumption for recommending 4 GB of total memory?
I don't really know, but I suspect it is no different from any other 64-bit Edition.
ac, mw,
Many thanks for all your thoughts here. I think the initial question of what to increase, CPU or RAM, is resolved: RAM appears to have the greater effect in this case.

I have now set up the server with 2 processors and 4 GB. Moving from 3 to 4 GB seems to have made little difference, which matches what you were saying: SQL Server Express can only utilise 1 GB of memory, so the extra RAM (moving from 3 to 4 GB) is primarily going to waste.

My issue now, as we have touched on above, is why, with the server now set up to be more powerful than the development box, the performance of this job is still about 4 to 5 times slower.

I think, in fairness to all and to keep the questions relevant, I will close this question as resolved and open a separate one on the specifics of SQL Server performance between the development and server boxes.

Many thanks for such good and interesting input.
Many thanks -- very knowledgeable in this area; thanks for the help.