Advice on datacenter hardware

Hi,

This question does not ask for a solution but rather is a poll for opinions. I will split the points by the quality and comprehensiveness of the answers. I hope that's OK and that you will be willing to share.

We're planning a new medium-sized data center (about 5000 computing nodes). We have two main objectives: price vs. computational performance and price vs. ease of maintenance. Based on your experience, which server hardware has performed best for you against those goals? Specifically, how did your servers (pizza boxes, multi-node in a pizza-box chassis, or blades) perform out of the box, i.e. what percentage of them proved to be faulty and required hardware replacement (disks, RAM, network cards, motherboards), and then how did they hold up under an intensive compute and I/O workload? The failure rates in these two situations interest me most. The approximate spec of a typical server would be 8-16 cores, 48-96 GB RAM, IPMI, and 2 LAN ports.
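(For context, the kind of out-of-box fault tracking I mean could be scripted against the nodes' BMCs. A minimal sketch, assuming ipmitool is installed and the BMCs are reachable over LAN; the hostnames, credentials, and keyword filter are placeholders only:)

```python
#!/usr/bin/env python3
"""Rough sketch: count hardware-related SEL entries per node via IPMI.

Assumes ipmitool is installed and each BMC is reachable over LAN.
Hostnames, credentials and keywords below are illustrative placeholders.
"""
import subprocess

BMC_HOSTS = ["node%04d-bmc" % i for i in range(1, 6)]  # placeholder BMC names
IPMI_USER = "admin"        # placeholder
IPMI_PASS = "changeme"     # placeholder
FAULT_KEYWORDS = ("memory", "ecc", "drive", "power supply", "temperature")

def sel_entries(host):
    """Return the raw SEL lines for one BMC, or [] if unreachable."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host,
           "-U", IPMI_USER, "-P", IPMI_PASS, "sel", "list"]
    try:
        out = subprocess.run(cmd, capture_output=True, text=True,
                             timeout=30, check=True).stdout
    except (subprocess.SubprocessError, OSError):
        return []
    return out.splitlines()

def count_faults(lines):
    """Naive filter: count SEL lines mentioning a hardware fault keyword."""
    return sum(1 for line in lines
               if any(k in line.lower() for k in FAULT_KEYWORDS))

if __name__ == "__main__":
    for host in BMC_HOSTS:
        faults = count_faults(sel_entries(host))
        print(f"{host}: {faults} suspect SEL entries")
```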

We're talking x86 (Intel, AMD)-based servers, and the brands would naturally be:
* HP
* Dell
* IBM
* SuperMicro
* Anything else?

Thanks.
parparovAsked:
 
eeRootCommented:
The failure rates on HP and Cisco blade servers are very low. I can't recall the last time I had a bad part out of the box. If you are planning to use Cisco Nexus switches, then you should definitely look at the Cisco blades. Having blades and switches both from Cisco lets you use vPC and a few other technologies to their fullest. Or, if you're looking at HP switches and/or storage, then the HP blades would integrate well. You get the idea.
It's hard for me to answer the I/O question, since every environment is different. All I can say is that for a 5000-node server environment, I would consider 10 Gbps a necessity for network access, SAN, vMotion, etc. You should probably look into the 40 Gbps options. Or look at Cisco's vPC, which lets you bundle multiple 10 Gbps connections together in a way that's faster and more efficient than the old spanning-tree and EtherChannel options.
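One thing to keep in mind with any bundling scheme (EtherChannel, LACP, vPC): traffic is spread across member links per flow by a hash, so no single flow goes faster than one member link and the spread can be lumpy. A toy illustration with made-up flows follows; it is not a model of any real switch's hash.

```python
import random

def simulate_bundle(num_flows, num_links, seed=1):
    """Spread flows across member links by hashing a fake 5-tuple.

    Flow endpoints and sizes are random placeholders, only meant to show
    that per-flow hashing can load the members unevenly.
    """
    rng = random.Random(seed)
    loads = [0.0] * num_links
    for _ in range(num_flows):
        five_tuple = (rng.randrange(256), rng.randrange(256),   # fake src/dst
                      rng.randrange(65536), rng.randrange(65536), 6)
        size = rng.expovariate(1.0)               # arbitrary flow size
        loads[hash(five_tuple) % num_links] += size
    total = sum(loads)
    return [l / total for l in loads]

for links in (2, 4):
    shares = simulate_bundle(num_flows=50, num_links=links)
    print(f"{links} links:", " ".join(f"{s:.0%}" for s in shares))
```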
 
andyalderCommented:
Throw Cisco into the mix of server makers. I wouldn't normally recommend them, but they'd be as happy as a dog with two dicks to have a 5000-node datacenter case study, so they'd give you a huge discount.
 
parparovAuthor Commented:
andyalder,

Thanks, that's an innovative thought for me. :)
 
Rich RumbleSecurity SamuraiCommented:
I'm surprised this hasn't garnered more attention... typically these polls/questions get such varying answers, with everyone loving this over that...
I've been to 214 different data centers, and each one did something different. Equipment varies, racks vary, design techniques vary (raised floor, chilled air/forced air, "cyclone" walls...), and it's all very subjective; just ask people what their favorite TV show is or what home theater system they prefer :)

I like Dell, and many people I know like Dell too. That said, I know an equal number of people who like HP, IBM, Sun, Apple (yes, servers), and even home-grown boxes. Parts fail, and some brands have more of an issue with one thing or another; on the Dells of a few years ago we were always replacing the RAID/PERC cards. Now we don't replace those at all; much of the hardware is more reliable than it once was. HDDs fail, RAM fails, motherboards and NICs fail, nothing out of the ordinary there.
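If you want to put the "parts fail" point in numbers for 5000 nodes, a back-of-the-envelope estimate of yearly replacements is easy to do. The annualized failure rates below are placeholders you would swap for the vendor's or your own numbers, not measured data.

```python
# Rough expected-replacements estimate for a fleet; AFR values are
# illustrative placeholders, not vendor figures.
NODES = 5000

components_per_node = {
    # component: (count per node, assumed annualized failure rate)
    "disk":        (2, 0.03),
    "dimm":        (12, 0.005),
    "psu":         (2, 0.02),
    "motherboard": (1, 0.01),
    "nic":         (2, 0.005),
}

total = 0.0
for name, (count, afr) in components_per_node.items():
    expected = NODES * count * afr
    total += expected
    print(f"{name:12s} ~{expected:6.0f} replacements/year")

print(f"{'total':12s} ~{total:6.0f} replacements/year "
      f"(~{total / 52:.0f}/week)")
```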
Some data centers use liquid cooling (submerged servers), some use chillers, some use outside air in the winter, and some use DC power direct to the rack (the servers have no power supply in the traditional sense).
There are just too many variables here... You say 5000 nodes; is that 5K physical servers, or virtual? How many racks will you have, how many U can each rack hold, how much power...
I see it's a poll and you want to know about our setups; however, we do need to know more about your goals. Do you plan to have disaster recovery, at a second site/colo or at the same site? Do you have multiple power providers in the DC? You can read a lot about other people's DCs on the web. You might also consider cloud services, as they can grow very easily and you don't have to worry so much about the other variables like hardware, space, power, and cooling.
Arguably, the OS you use will affect the hardware choice as well. You can run lower in the "ring" with Citrix/Xen virtualization than with VMware or VirtualBox... so Linux may let you get by with cheaper hardware, because the host OS has less overhead (it doesn't need the full GUI environment that Windows does), which leaves more RAM for your guests.
-rich
 
kevinhsiehCommented:
Very interesting question. 5K seems like a lot of nodes unless you are a cloud provider or running an HPC cluster. What do you plan on using these machines for? With this large number of nodes, blades are going to be a lot more efficient than traditional rack servers. I know that Dell also makes a line of servers for the cloud/HPC market where the servers don't have all of the HA components such as redundant power, RAID, etc., because the failure of a single node isn't a big deal. That reduces the capital costs and power/cooling as well.
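To make the density difference concrete, here is a rough rack count either way. The figures (42U racks, 2U for top-of-rack switching, 16 half-height blades per 10U enclosure) are assumptions; check them against whatever chassis you actually quote.

```python
import math

NODES = 5000
RACK_U = 42            # assumed usable U per rack
TOR_U = 2              # assumed U reserved for top-of-rack switching

# 1U "pizza box" servers
per_rack_1u = RACK_U - TOR_U
racks_1u = math.ceil(NODES / per_rack_1u)

# Blade enclosures: assume 16 half-height blades in a 10U chassis
chassis_u, blades_per_chassis = 10, 16
chassis_per_rack = (RACK_U - TOR_U) // chassis_u
per_rack_blades = chassis_per_rack * blades_per_chassis
racks_blades = math.ceil(NODES / per_rack_blades)

print(f"1U rack servers: {racks_1u} racks ({per_rack_1u} nodes/rack)")
print(f"Blades:          {racks_blades} racks ({per_rack_blades} nodes/rack)")
```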
 
kevinhsiehCommented:
I forgot to mention that it might actually make sense for you to run x86 workloads on IBM mainframes. Much greater computing density and efficiency per watt, and probably lower capital costs as well. We really need to know what the datacenter will be used for.
 
kevinhsiehCommented:
You should probably look at InfiniBand for networking and storage as well (particularly for blades), but again, without knowing the requirements and uses for these nodes, it's all just speculation.
 
andyalderCommented:
I've always wondered whether, if you used hot-aisle containment and piped that hot air onto the roof of the datacenter instead of chilling it again, you could generate enough electricity from the updraught driving turbines to pay for cleaning the air you suck in at the bottom. A bit like EnviroMission's thermal tower without the cost of the greenhouses underneath it. Meaning the physical shape of the building is as important as the kit that goes in it.
 
Rich RumbleSecurity SamuraiCommented:
That is the "cyclone wall" I referred to above. I've been to 6 DCs that are made from silos or other round buildings, some above ground, some below. I recently visited one of the more famous ones in Canada (see the link): http://www.datacenterknowledge.com/archives/2009/12/10/wild-new-design-data-center-in-a-silo/
You may be able to reclaim some of the energy being expended, but you'll also expend energy directing the "exhaust", so the (re)gains are negligible.
-rich
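If you want a rough number for that: the ideal efficiency of a chimney-style updraft is roughly g*H/(cp*T_ambient), so even with generous assumptions (2.5 MW of IT heat, a 50 m stack, both made up for illustration) the recoverable power is tiny. This only supports the "negligible" point; it is not a design calculation.

```python
# Back-of-the-envelope updraft recovery estimate; all inputs are assumptions.
G = 9.81           # m/s^2
CP_AIR = 1005.0    # J/(kg*K), specific heat of air
T_AMBIENT = 293.0  # K (~20 C)

heat_w = 5000 * 500.0   # assumed 500 W of heat per node -> 2.5 MW
stack_height_m = 50.0   # assumed height of the exhaust stack

# Ideal chimney efficiency: eta ~= g*H / (cp * T_ambient)
eta = G * stack_height_m / (CP_AIR * T_AMBIENT)
recoverable_w = heat_w * eta

print(f"Ideal updraft efficiency: {eta:.3%}")
print(f"Recoverable power:        ~{recoverable_w/1000:.1f} kW "
      f"out of {heat_w/1e6:.1f} MW of heat")
```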
 
parparovAuthor Commented:
Thanks everyone. The data center setup is already finalized for us; it was only the hardware makers themselves I was interested in.