Mystical_Ice asked:

AMD vs Intel on Dell's website - which has greater VM performance?

Hey guys - question:

I'm configuring a server for a client that's going to be running ESXi on it, along with two virtual machines (one of them a domain controller, the other a SQL server).

On Dell's website, I configured a tower PowerEdge server with 2x Xeons (didn't see an option for a tower with AMD processors), about 8GB RAM, 2x SAS drives, and 2x SATA drives (because getting 4 SAS drives was ridiculously expensive), and no operating system, for a little over $4000...

I then went to the rack-mount PowerEdge section and configured a server with 2x six-core AMD Opterons, 12GB of RAM, and 4x SAS drives for about $3400.

In "specs" the AMDs were better than the Xeons - more cores, higher clock speed, bigger cache, etc. I know specs mean nothing, and that the Xeon eats the Opteron for breakfast these days, but still, why such a huge price gap? It seems, for one, that rack servers are a LOT cheaper overall than tower servers, and AMD processors cheaper than Intel.

I just want to get the right thing for this client - do you think the 2x six-core Opterons would be just as good as the Xeons? This server's primary VM is going to be its SQL server, and it's going to be doing a lot of work...

Just want to get some insight :/
ASKER CERTIFIED SOLUTION
David (United States)

[Accepted solution - full text available to Experts Exchange members only.]
Mystical_Ice (Asker):
I should have made myself more clear - the 2x SATA drives, mirrored, are going to run ESXi (which is not processor- or disk-intensive at ALL, I know that much) and the domain controller (which isn't going to be doing a whole lot...).

The 2x SAS drives, mirrored, are going to be dedicated to the SQL server. All work on the SQL server is going to be done on the server itself, which means no data whatsoever is going to be coming in over the network, except maybe to the domain controller (which is on the same ESXi server).

Still seem like a bad idea?

Well, that is different; you can imagine my horror at the thought of a typical DC and SQL database as a pair of virtual machines on two mirrored drive sets. I do have a suggestion, however.

Get a single industrial-class SSD instead of the 2 SATA disks. Put all the operating systems on the SSD, along with some scratch table space for the SQL server.

You will pay about the same amount of money for 2 enterprise-class SATA disks as for a small SSD, but you'll get 20,000+ random IOPS instead of the maybe 30 you'll average on a pair of SATA drives doing a mix of random and sequential work. An SSD also has better reliability than a pair of mechanical drives on a RAID1 controller.
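As a rough sanity check on those figures, here is a back-of-envelope sketch; the seek and rotational numbers are typical assumptions for a 7200 RPM SATA drive, not any specific model's spec:

```python
# Back-of-envelope random-IOPS estimate for one 7200 RPM SATA spindle.
# avg_seek_ms is an assumed typical value, not a measured spec.

RPM = 7200
avg_seek_ms = 8.5                       # assumed average seek time
avg_rot_latency_ms = 60_000 / RPM / 2   # half a revolution on average ~= 4.17 ms

service_time_ms = avg_seek_ms + avg_rot_latency_ms
hdd_iops = 1000 / service_time_ms       # ~= 79 purely random IOPS per spindle

ssd_iops = 20_000                       # low end of the SSD figure quoted above

print(f"HDD random IOPS (one spindle): {hdd_iops:.0f}")
print(f"SSD advantage: roughly {ssd_iops / hdd_iops:.0f}x")
```

Even that ~80 IOPS is the pure-random best case for one spindle; once a mixed random/sequential workload is fighting over the same heads, the effective random rate drops well below it, which is where an average like 30 comes from.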


Then you may even be able to justify SATA disks for the database, since most of the I/Os will land on the SSD.
I don't know if Dell offers anything like that within the same budget; would you be able to give me a model number or something that I can try to configure on Dell's website, by any chance?

Also - really, for the SQL server I'm going more for speed than reliability. If one of the drives in the mirror dies, we'll replace it, and in the event both die, we have backups on external drives we can use - the biggest thing they want is speed.
This is the de facto site for SSD reviews. Most of them are in the channel, so you can Google the part number and search for pricing.
If you want speed, then obviously SSD is the way to go; use a pair of SATAs for the data that won't change as much. You never mentioned the total amount of data required, so maybe a large SSD will give you everything you need. No reason to waste money and performance trying to RAID1 SSDs, either, so just get the biggest one.


http://ssd-reviews.com/
About 100GB of data total on the SQL server, and of that, the actual database is only about 50GB.

I don't even SEE SSDs in Dell's PowerEdge configurator... there was one, but it was $2200+ for 100GB... can't justify that.
Geez, you can get all of that with some GB to spare, plus 50,000 random 4KB-sized IOPS, for well under $1000 LIST. Go to the SSD site and do some shopping. They give you list price; read the reviews. If you want a SAS interface, you will be in the 800MB/sec sustained random I/O throughput range and still come out saving money. The ROI on electricity alone pays for itself :)
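For a sense of scale, that quoted random-IOPS figure alone works out to roughly 200MB/sec of sustained small-block throughput (simple arithmetic on the 50,000 x 4KB numbers above):

```python
# Convert the quoted random-IOPS figure into sustained throughput.
iops = 50_000
block_bytes = 4 * 1024   # 4KB blocks

throughput_mb_s = iops * block_bytes / 1_000_000
print(f"{iops:,} x 4KB IOPS ~= {throughput_mb_s:.0f} MB/s")   # ~= 205 MB/s
```

That number matters later, when deciding which bus or controller the drive hangs off.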
Oh, good site - so if I purchased a 3.5" SSD, would it just plug into the server (if it has a 3.5" backplane)? Where would I get a chassis for it to fit into the server? Would it work alongside regular SATA drives? I know the interface is the same, but I didn't know if it was that easy.
If it has a SATA or SAS connector, you plug it into the backplane the same as any other drive. Just get the right dimensions, of course. But there's no need to plug it into the backplane, as that may actually cause a performance hit, depending on the architecture. You don't want a RAID controller throttling its I/O; not many RAID controllers can handle that many IOPS.

Plug it into a CDROM or floppy bay and use a SATA port on the motherboard. Some SSDs can just fit somewhere inside the case; others plug directly into an IDE header, the same way you would plug in an IDE cable. Look at the dimensions.
To clarify: if you have an SSD with a SATA connector, it appears exactly like a SATA disk drive. This is by design... to make it easy :)
Finally, since you don't need all those slots, why buy an expensive chassis? Get a less expensive 1U platform, but obviously get a server-class motherboard. Many SSDs, certainly the enterprise-class ones, have the same physical dimensions as a 3.5" or 2.5" mechanical HDD, right down to the screw holes being in the right place, so you can stick one in an empty hot-swap drive bay that a naked server ships with.
OK, so get an SSD, but don't use the SATA backplane; plug it into a SATA port on the motherboard to avoid the RAID controller? OK. What if the motherboard doesn't have any SATA ports, and only the backplane/RAID controller does?
Then yank the RAID controller and put a dumb SATA controller in its place, or use a free slot.

PCIe is best, of course. Make sure it is NOT a RAID controller. The PCIe part is important: you don't want a plain PCI SATA card if you can help it, because those cards were never architected to handle the kind of throughput and IOPS you are going to see.

The SATA backplane will be OK, but the downside is that most RAID controllers don't have CPUs fast enough to handle the IOPS, so you will see a performance hit. In the grand scheme of things you'll have all the IOPS your hardware is capable of asking for, so don't buy anything you don't have to. Just know that you can probably squeeze a little more performance out of a dedicated PCIe-based SATA card.
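Here is the bus math behind that advice, using theoretical peak figures (real-world throughput is lower, and the SSD demand number is the rough estimate from earlier, not a measurement):

```python
# Theoretical peak bandwidth (MB/s) of the buses mentioned above.
# Real-world throughput is lower; this just shows the headroom gap.

pci_32bit_33mhz = 133       # classic 32-bit/33MHz PCI, shared by every device on the bus
pcie_gen1_per_lane = 250    # PCIe 1.x, per lane, per direction

ssd_demand = 205            # ~50,000 x 4KB IOPS, from the earlier estimate

print(f"PCI headroom:     {pci_32bit_33mhz - ssd_demand} MB/s")    # negative: bus saturates
print(f"PCIe x1 headroom: {pcie_gen1_per_lane - ssd_demand} MB/s")
print(f"PCIe x4 headroom: {4 * pcie_gen1_per_lane - ssd_demand} MB/s")
```

In short, a shared PCI bus can't even carry the SSD's small-block traffic, while even a single PCIe lane leaves headroom.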
 
You've been a big help. Thanks so much!