Solved

Worth spending on a faster CPU?

Posted on 2010-11-26
1,253 Views
Last Modified: 2012-06-27
We are building a rack-mountable server with 14 drives, 64 GB of memory, and other features.

The budget isn't unlimited. I found out that the previous admin was ordering hex-core CPUs (2.67 GHz Intel Xeon X5650). However, in my opinion we could use more RAM (this is a SQL Server box) given how large and active the environment is. Below are the options to choose from.

What I'm trying to figure out is whether a six-core is worth the extra cost over a quad-core, and within those, whether we'll see a big enough boost from 2.4 GHz to 2.67 GHz to justify the cost. The six-core parts also run memory at 1333 MHz vs. 1066 MHz for the quad cores.

Intel® Xeon® E5503, 2.0 GHz, 4M Cache, 800 MHz Max Mem [subtract $1,040.00]
Intel® Xeon® E5506, 2.13 GHz, 4M Cache, 800 MHz Max Mem [subtract $1,020.00]
Intel® Xeon® E5530, 2.4 GHz, 8M Cache, 5.86 GT/s QPI, Turbo, HT [subtract $590.00]
Intel® Xeon® E5620, 2.4 GHz, 12M Cache, Turbo, HT, 1066 MHz Max Mem [subtract $790.00]
Intel® Xeon® E5630, 2.53 GHz, 12M Cache, Turbo, HT, 1066 MHz Max Mem [subtract $600.00]
Intel® Xeon® E5640, 2.66 GHz, 12M Cache, Turbo, HT, 1066 MHz Max Mem [subtract $340.00]
Intel® Xeon® L5520, 2.26 GHz, 8M Cache, 5.86 GT/s QPI, Turbo, HT [subtract $590.00]
Intel® Xeon® L5609, 1.86 GHz, 12M Cache, 800 MHz Max Mem [subtract $650.00]
Intel® Xeon® L5630, 2.13 GHz, 12M Cache, Turbo, HT, 1066 MHz Max Mem [subtract $550.00]
Intel® Xeon® L5640, 2.26 GHz, 12M Cache, Turbo, HT, 1066 MHz Max Mem [Included in Price]
Intel® Xeon® X5560, 2.8 GHz, 8M Cache, 6.40 GT/s QPI, Turbo, HT [add $150.00]
Intel® Xeon® X5650, 2.66 GHz, 12M Cache, Turbo, HT, 1333 MHz Max Mem [subtract $10.00]
Intel® Xeon® X5660, 2.8 GHz, 12M Cache, Turbo, HT, 1333 MHz Max Mem [add $220.00]
Intel® Xeon® X5670, 2.93 GHz, 12M Cache, Turbo, HT, 1333 MHz Max Mem [add $500.00]
Question by:MrVault
24 Comments
 
LVL 76

Expert Comment

by:arnold
ID: 34219490
How big is the database?
Is it a dedicated database server or will it be doing other things as well?

I'd go for the E5620 and spend the extra money on additional memory, as well as making sure the hard drives are optimal, i.e. SAS with RAID.

Do you have a benchmark for the servers you have?
http://www.cpubenchmark.net/cpu_lookup.php?cpu=[Dual+CPU]+Intel+Xeon+E5620+%40+2.40GHz

Using that information you can determine whether the issue is with the CPU or with other resources on the system, i.e. I/O, memory, etc.
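As a rough starting point (just a sketch; the wait types worth excluding vary from system to system), a quick look at SQL Server's own wait statistics can hint at whether the box is CPU-, I/O-, or memory-bound:

-- Top waits since the last service restart (sketch; exclusion list is not exhaustive).
-- Heavy SOS_SCHEDULER_YIELD/CXPACKET points at CPU, PAGEIOLATCH_* at the disk
-- subsystem, RESOURCE_SEMAPHORE at memory pressure.
SELECT TOP 10
       wait_type,
       wait_time_ms,
       signal_wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP',
                        'SQLTRACE_BUFFER_FLUSH', 'WAITFOR', 'CLR_AUTO_EVENT')
ORDER BY wait_time_ms DESC;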
0
 

Author Comment

by:MrVault
ID: 34219565
The DB is roughly 400 GB, but when it gets bloated before we run our cleanup jobs it can hit 1.3 TB.

The server is pretty much dedicated. The only other thing on there is the AV client web server that's tied to it. Ideally we'd move the web server part off, but that hasn't happened yet.

We plan on using 300 or 600 GB SAS drives with appropriate RAID configurations.

What is it about the E5620 that you like? Is 2.4 GHz vs. 2.66 not that noticeable? And likewise 1066 vs. 1333?
0
 
LVL 9

Expert Comment

by:TBK-Consulting
ID: 34219578
You will see a performance boost from the hex-core processor and from the memory speed increase from 1066 to 1333. But yes, all things being equal, you first need to look at everything as a whole and make sure everything is optimized.

If you are using quad cores now but have not optimized the RAID arrays or the memory, then you'll notice speed increases with the bigger processor, but you'll also notice increases with more RAM (probably not as much difference, however), and you'd see speed increases from newer, faster hard drive technology, and even from network optimization (fiber instead of twisted pair, etc.). There are lots of places you can look to get more speed.
0
 
LVL 4

Expert Comment

by:rstjean
ID: 34219638
I would look at better drive striping and memory first before upping the CPU.
0
 

Author Comment

by:MrVault
ID: 34219667
Certainly many things can be improved, as you've both noted. Are there CPU stats I can look at to see whether getting 2.67 vs. 2.4 GHz is worth the couple hundred dollars, or whether getting 6 cores with the 1333 MHz memory speed is worth $790? By comparison, I could buy 16 GB of RAM for about $900.
0
 
LVL 9

Expert Comment

by:TBK-Consulting
ID: 34219670
1066 vs. 1333 is definitely noticeable; 2.4 vs. 2.66 is not that noticeable.
0
 

Author Comment

by:MrVault
ID: 34219691
Can you explain, TBK? I'm not too familiar with CPU mechanics, except to know that more cores are better and faster processing is better. :)
0
 
LVL 76

Expert Comment

by:arnold
ID: 34219709
I posted the link that illustrates that, but the issue is that those graphs are data points for a specific range of applications/functions, which will not be the same as what your system will be doing.

You can use perfmon/Cacti/SQL Profiler to collect data points, then compare that set of data to data collected from a system with the newer hardware, or after you make adjustments, and see what impact the change has on those counters/data points.

http://www.dbazone.com/services_tune_shdw.html?gclid=CLHHyeeyv6UCFYnc4Aod1xIxXw
http://www.sql-server-performance.com/

You could set up historic data collection so that you can see over time where the performance bottleneck is.

You should use maintenance plans/SQL jobs to keep the database in an optimal state, e.g. rebuild indexes every two months and reorganize them every two months, staggered so that some adjustment is made to the indexes every month.

http://blogs.msdn.com/b/joaol/archive/2008/01/28/script-to-rebuild-and-reorganize-database-indexes-sql-server-2005.aspx

Based on your fragmentation-threshold settings, the script will either reorganize or rebuild each index. The catch is that the script works per database.
Note: make sure to alter the temp table it uses if you set this script up for multiple databases on the same server.
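If you just want the general shape of that logic (a rough sketch, not the linked script; the 5%/30% thresholds below are only illustrative), something like this runs against the current database:

-- Sketch: reorganize lightly fragmented indexes, rebuild heavily fragmented ones.
-- Review the generated statements before running this on a production server.
DECLARE @sql nvarchar(max);
SET @sql = N'';

SELECT @sql = @sql +
    CASE
        WHEN ips.avg_fragmentation_in_percent >= 30
            THEN N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON ' +
                 QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name) +
                 N' REBUILD;' + CHAR(10)
        WHEN ips.avg_fragmentation_in_percent >= 5
            THEN N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON ' +
                 QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name) +
                 N' REORGANIZE;' + CHAR(10)
        ELSE N''
    END
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
JOIN sys.objects AS o ON o.object_id = ips.object_id
WHERE ips.index_id > 0        -- skip heaps
  AND ips.page_count > 100;   -- ignore tiny indexes

EXEC sp_executesql @sql;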

Use SQL Profiler/the tuning advisor to gather sample data and then have that data evaluated for adjustments,
i.e. you may have a table that could benefit from an additional index of a specific type, etc.
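A related quick check (a sketch using the missing-index DMVs, not what the tuning advisor itself runs) shows what the optimizer thinks it is missing; treat the output as hints, not as indexes to create blindly:

-- SQL Server's own missing-index suggestions, ordered by estimated impact.
SELECT TOP 10
       d.statement           AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       gs.user_seeks,
       gs.avg_user_impact
FROM sys.dm_db_missing_index_details     AS d
JOIN sys.dm_db_missing_index_groups      AS g  ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS gs ON gs.group_handle = g.index_group_handle
ORDER BY gs.user_seeks * gs.avg_user_impact DESC;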
0
 

Author Comment

by:MrVault
ID: 34219725
When this chart says "Dual CPU", does it mean two sockets, each with that specific processor installed? The Xeon X5650 is a hex core, but I don't see the chart mentioning that.

http://www.cpubenchmark.net/cpu_lookup.php?cpu=[Dual+CPU]+Intel+Xeon+E5620+%40+2.40GHz
0
 

Author Comment

by:MrVault
ID: 34219731
You have to pay for dbazone, right? I don't have an account, so your link just redirected to a "call us" page.
0
 
LVL 55

Expert Comment

by:andyalder
ID: 34219859
The E5630 is in the sweet spot on that price list; there's not much more saving on the 5620, and the 5640 is quite a bit more. Intel doesn't have separate plants to make different speed CPUs; they just test them and then program in the base speed. With good cooling, turbo can speed up a CPU to the extent of a 5620 being faster than a 5630.

I wouldn't worry too much about the RAM speed, with two rows of DIMMs (6 per CPU) it'll probably slow down to 1066 anyway. Try to avoid more than 6 DIMMs per CPU or it'll clock the RAM speed down even more.

>I might be able to buy 16 GB ram...
You don't populate 5500- and 5600-based servers in 16 GB increments; each CPU has three memory channels and you should populate them evenly to get the best out of interleaving. Sensible sizes are 6 GB (as shipped, with a single CPU), 12 GB, 18 GB, 24 GB, etc.

SQL loves RAM, but it also loves disks. If you use twice as many 300 GB drives as you would 600 GB ones, the disk subsystem will cost about the same but be twice as fast.
0
 

Author Comment

by:MrVault
ID: 34219878
Thanks for joining the conversation Andy. Some follow up questions:

We don't plan on overclocking. We're using the stock cooling in Dell R510 servers.

We typically have had 8x8 GB RAM DIMMs; the R510 can support that or 4x16 GB DIMMs (among other options). Our thought was to do the 4x16, putting 2 on each channel, so that we're only using 4 DIMMs instead of 8, allowing future growth if we later determine that RAM is a bottleneck worth getting more than 128 GB of. Are you saying that a 5600-series Intel chip cannot support more than 96 GB of RAM (3x16, x2)?

Lastly, can you expand on the disk drive size point? First I should say the R510 can only support 2 internal 2.5" flex drives and 12 3.5" SAS drives at most. Given the size of our database and the desire to separate out parts onto different physical disks, going 300 GB across the board is not likely.
0
 

Author Comment

by:MrVault
ID: 34219880
For that matter, are you saying the speed we'll see on a RAID 10 set with 600 GB drives will lessen because there's another RAID 1 set with 300 GB drives?
0
 
LVL 55

Expert Comment

by:andyalder
ID: 34219948
Turbo isn't overclocking (well, it is, but the CPU does it itself); see http://ark.intel.com/Product.aspx?id=47925&processor=E5620&spec-codes=SLBV4 for example.

Only 8 DIMM slots on a Dell R510? I'd get something else then; the DL380 has 18 DIMM slots and takes more disks. As you're a Dell house by the looks of it, a Dell R710 takes 18 DIMMs as well. Note that single-CPU Xeon 5500/5600-based servers can only have half the DIMM slots populated.

I think there's something wrong with Dell's website (I don't really know Dell, but Xeons are Xeons):
http://www.dell.com/us/en/enterprise/servers/rack_optimized/cp.aspx?refid=rack_optimized&s=biz&cs=555 says a maximum of 196GB for both the R510 and R710, which implies at least 12 DIMM slots. Not that I could afford to populate them with 16 GB DIMMs; 4 and 8 GB ones are more sensibly priced.

On the disk front it's not a case of smaller disks being faster, but if you've got twice as many of them it'll be twice as fast pulling data off them.
0
 

Author Comment

by:MrVault
ID: 34219974
By 8 DIMMs I meant we were only getting 64 GB to start; there are 18 slots, as you saw. The extra disks in the DL380 are 2.5-inch, which means not 15K and not as big. We will have two of these processors, not one, which makes me think even more that a faster CPU isn't necessary. The bus part I'm not sure about. That CPU benchmark site showed the X5650 scoring much better than the E5620.

With our rep, the price difference between 4x16 and 8x8 is about $200, which is essentially insurance in case you ever need more RAM than you could get with 18x8 GB (144 GB). I may be wrong, but I always thought a single large DIMM was better than multiple smaller ones at handling data; for example, 1x1 GB better than 4x256 MB. Performance-wise, not talking about future growth.

As for the disks, the faster read is true if you use RAID 5, but we're not. Per SQL best practices, we're separating out high-I/O files onto separate disks so they don't compete, rather than using one large RAID 10 set with 12 drives. So I'm not sure the more-disks factor really comes into play. (For reference, at the SQL level that separation just means pointing the files at different arrays; a rough sketch follows, with placeholder drive letters and database name.)
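-- Placeholder paths and database name; each location sits on a different array.
CREATE DATABASE BigDb
ON PRIMARY
    (NAME = BigDb_data, FILENAME = 'E:\SQLData\BigDb.mdf', SIZE = 200GB)
LOG ON
    (NAME = BigDb_log,  FILENAME = 'F:\SQLLogs\BigDb.ldf', SIZE = 50GB);

-- tempdb is relocated the same way (takes effect after the service restarts):
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'G:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'G:\TempDB\templog.ldf');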
0
 
LVL 55

Expert Comment

by:andyalder
ID: 34220127
You still shouldn't use 8 DIMMs; as I said before, you should optimize the memory interleaving by populating all memory buses the same, so use 6 or 12 DIMMs.

A single large DIMM is actually slower than two smaller ones, even on the same memory channel. That's because of things like precharge delay: if you've got two DIMMs, one can be charging up while you discharge the other (someone else might put that better than me). But your 16 GB DIMMs are near enough 2x8 GB DIMMs on the same board anyway; the 8 GB DIMMs are available as 2-rank and the 16 GB ones are 4-rank. Ranks are near enough the number of normal DIMMs they stick onto one DIMM.

As to disks, the faster read and write applies to any RAID level; the more of them you've got in the array, the faster it will be. If you have 2 for the OS, 2 for logs, and 2 for tempdb, then you've only got 6 left for data; maybe you'll need an external disk enclosure, since 6 data disks isn't very many.
0
 
LVL 76

Expert Comment

by:arnold
ID: 34220748
I'm not familiar with dbazone; I just posted a link to their discussion of optimization/tuning. The other links discuss the steps/options to configure counters and evaluate options for improving performance.

With memory, the bottleneck is rarely the speed of access, i.e. a single large DIMM holding two pieces of data versus four smaller DIMMs of equal total size with each piece on a separate DIMM. IMHO, the controller's sequential access on the larger DIMM versus the separate DIMMs is not as much of an issue as having pages of memory swapped out of RAM and onto disk.

That leads to the other point: more spindles (the individual disks that make up a RAID array) are better than fewer disks. The requirement for a large and quickly growing database forces the issue in terms of the size of the hard drives you must use and the RAID configuration you would use, i.e. RAID 10 is better than RAID 5 if you are using 600 GB hard drives, both in the event of a drive failure and for enhanced fault tolerance.

I.e. with 6x600 GB drives, you would lose 1.8 TB of storage in RAID 10 versus 600 GB in RAID 5, but you have three RAID 1 pairs where each pair can lose a drive without an impact on the system (a loss of both drives in a pair will lead to data loss). In a RAID 5 configuration you can only lose one drive, such that a second failure will lead to data loss, while at the same time seeing a significant decline in server performance. RAID 10 has a fault tolerance of three drives in this scenario, while RAID 5 has a fault tolerance of a single drive.
The rebuild process of a RAID 10 is equivalent to a RAID 1, since the rebuild only handles the data in the pair that lost a drive. In RAID 5, the rebuild will take significantly longer while the system is in a decreased-performance state, as well as being in a precarious situation, i.e. exposed to a possible second drive failure.

When a database is that large, a SAN/DAS with clustering is likely something one would look at for redundancy and high availability, i.e. high-capacity shared storage. An MD-series DAS, EqualLogic, or Dell/EMC SAN might be something to consider.

There is a point where buying new servers with ever-increasing individual disk capacity becomes more expensive than the high initial cost of a SAN combined with the lower cost of its additional storage modules.

When a dedicated SQL server is CPU bound, it is often the result of a lack of RAM driving the system into excessive swapping.
0
 

Author Comment

by:MrVault
ID: 34230143
Thanks for the insight, Arnold; well put. Some thoughts:

If the performance of the memory is not a factor when comparing 8x8 GB vs. 4x16 GB, then the question is simple: is it worth spending $250 more so I can have more DIMM slots available in case of future need? With 18 DIMM slots available, we'll have to see.

I totally agree on the RAID 10. If we can keep our DB size down I'd prefer RAID 10 over RAID 5 for any write intensive activity.

The issue we have with using a SAN/DAS for the DB is that a server with 14 drive bays dedicated to hosting just the database seems more cost effective than a SAN. Just about half the cost, actually (assuming we populated the server with all 600 GB 15K SAS drives). Clearly there are some management, growth, replication, and possibly even performance benefits to using a SAN, but I'm not sure they justify double or triple the cost. It's been a while since I've spec'd out a DAS solution, so maybe that's something we should look into. But again, that only really applies if we can't keep the storage down.

Lastly, if our server is CPU bound, then I appreciate your advice on looking at the memory angle. I hadn't realized that paging puts significant load on the CPU. What counter, and what target value, do you suggest I look at to see if paging is really an issue? We generally set our page file to 1.5x RAM from the get-go so that there's no growth overhead. In our case that's a 96 GB page file, so it's hard to tell just from its size whether there's paging going on.
0
 
LVL 76

Expert Comment

by:arnold
ID: 34231284
Look at the Task Manager Performance tab commit charge: does the peak commit charge exceed 10% of the Limit value?
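If you want to watch for the same thing from inside SQL Server (a sketch; these are standard counters, but the thresholds depend on your workload), the memory counters in sys.dm_os_performance_counters are a reasonable proxy:

-- Rough memory-pressure indicators. A consistently low Page life expectancy and a
-- non-zero Memory Grants Pending suggest the server wants RAM before it wants CPU.
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy', 'Memory Grants Pending',
                       'Total Server Memory (KB)', 'Target Server Memory (KB)');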

Placement of the page file can also be an issue when the system is overloaded,
i.e. when the system pages, it needs I/O resources to write to and read from the disks where the page file lives. With a large database, where you place the transaction logs can also impact performance, i.e. you do not have many options for where those logs can go, and they are likely on the same set of RAID disks as the database files.
As for whether paying now or paying over time is better:
is the need for the new system a result of the regular renewal cycle (3-5 years), or is it necessitated by the growth of the database?

The SAN, if you go for it, could serve multiple other purposes, e.g. storage for backups, etc.



Now, the counters that should be considered when tuning SQL:

http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/
http://www.mssqltips.com/category.asp?catid=9
http://www.mssqltips.com/tip.asp?tip=1265
http://blogs.msdn.com/b/nickmac/archive/2009/08/27/sql-2008-and-performance-tuning-material.aspx

Index usage:
http://www.sql-server-performance.com/articles/per/Obtaining_Index_Usage_Information_p1.aspx
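The same information is available directly from the DMVs; a sketch for the current database (the counts reset when the service restarts):

-- How often each index is read vs. written. Indexes with many updates and
-- no seeks/scans/lookups are candidates to review or drop.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_updates DESC;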

Another option, if you have not implemented it already, is table partitioning, which reduces the scope of each query:
http://blogs.msdn.com/b/nickmac/archive/2009/08/27/sql-2008-and-performance-tuning-material.aspx
http://www.brentozar.com/archive/2008/06/sql-server-partitioning-not-the-answer-to-everything/
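A minimal sketch of what that looks like (the table, column, and monthly boundary dates are made up for illustration; table partitioning requires Enterprise Edition):

-- Monthly range partitioning on a date column.
CREATE PARTITION FUNCTION pf_MonthRange (datetime)
AS RANGE RIGHT FOR VALUES ('2010-10-01', '2010-11-01', '2010-12-01');

CREATE PARTITION SCHEME ps_MonthRange
AS PARTITION pf_MonthRange ALL TO ([PRIMARY]);

-- New tables (or rebuilt clustered indexes) can then be placed on the scheme:
CREATE TABLE dbo.EventLog (
    EventTime datetime     NOT NULL,
    Payload   varchar(200) NULL
) ON ps_MonthRange (EventTime);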
0
 

Author Comment

by:MrVault
ID: 34231423
Those numbers in Task Manager, do you know what they correspond to in perfmon? Our peak times are in the middle of the night, not during the day, so that way I can just add them to our daily perfmon report.

The page file is usually not on a disk with tempdb, the t-logs, or the DB.

I thought having the t-logs separate from the DB would be a huge help, but in one of our setups where they are on their own spindles, the t-logs don't hit the disk too hard despite such a large DB. Perhaps because it's in simple mode? (Quick to confirm, for anyone checking the same thing; see below.)
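-- SIMPLE recovery truncates the log on checkpoint, so log writes stay comparatively light.
SELECT name, recovery_model_desc
FROM sys.databases;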

The need for a new system is there because we are a growing company. We build one of these setups every month for lots of new customers, and when I joined I was tasked with designing a better setup. Until now they've been using servers with only 6 internal drives, which leaves few options for disk setup and file placement.

I'll have to see whether the performance of the EqualLogic SAN we're using could handle backups as well as the database. We don't want them to fight for resources. Typically we have one tray with 12 usable drives and 2 hot spares. Maybe I can get Dell to lend us an extra one for a 30-day proof of concept. Maybe I'm dreaming, too. :-)

0
 

Author Comment

by:MrVault
ID: 34476727
I was hoping Arnold could tell me how to monitor what he suggested in perfmon so I can get this data and make the right decision. If anyone else knows how to do that, I'm all ears.
0
 
LVL 76

Accepted Solution

by:
arnold earned 500 total points
ID: 34490503
I posted links throughout on the perfmon counters one should monitor,
i.e. run perfmon on the local server or on a remote one, and point perfmon at the system you want to monitor (itself or a remote machine).

http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/
To establish a baseline for comparison
http://www.sql-server-performance.com/articles/per/10_baselining_tips_p1.aspx
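If you'd rather keep the baseline inside SQL Server instead of (or alongside) perfmon logs, a minimal sketch is to snapshot a few counters into a table from a scheduled SQL Agent job (the table name and counter list are just examples):

-- Example baseline table; sample it every few minutes from a SQL Agent job.
CREATE TABLE dbo.CounterBaseline (
    sample_time  datetime      NOT NULL DEFAULT GETDATE(),
    counter_name nvarchar(128) NOT NULL,
    cntr_value   bigint        NOT NULL
);

-- Note: per-second counters (e.g. Batch Requests/sec) are stored cumulatively in
-- this DMV, so the rate is the difference between two consecutive samples.
INSERT INTO dbo.CounterBaseline (counter_name, cntr_value)
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Batch Requests/sec', 'Page life expectancy',
                       'Memory Grants Pending', 'Full Scans/sec');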

IMHO, if you have to build a system every month or two, a SAN actually is more cost effective,
i.e. a SAN's capacity can be increased simply by adding a shelf.
Depending on which one you are looking at, there are those, like EqualLogic, where the cost of the first component and of additional components are close; others have a high initial cost with a lower cost per additional storage shelf.


0
 

Author Comment

by:MrVault
ID: 34491488
Thanks, Arnold. Those links are really helpful, though they don't seem to mention the commit data you referred to. Good enough though. Thanks.
0
