SERVER -- memory rank -- single, dual, or quad?

Is the below true?

 1. Single Rank Memory is faster than Dual Rank Memory; in layman's terms, when a computer accesses Single Rank Memory it only has to go around the track once, whereas with Dual Rank it would have to go around the track twice

 2. Ranks cannot be accessed simultaneously because they share the same data path

 3. Should I get "dual" rank memory for my Dell PowerEdge T630 (dual Xeon E5-2630 v3), a 100-user Windows Server 2012 R2 RAID-10 file server?
=========================================================================================================

It is important to ensure that DIMMs with the appropriate number of ranks are populated in each channel for optimal performance. Whenever possible, it is recommended to use dual-rank DIMMs in the system. Dual-rank DIMMs offer better interleaving, and hence better performance, than single-rank DIMMs.
For instance, a system populated with six 2GB dual-rank DIMMs outperforms a system populated with six 2GB single-rank DIMMs by 7% for SPECjbb2005. Dual-rank DIMMs are also better than quad-rank DIMMs because quad-rank DIMMs will cause the memory speed to be down-clocked.
Another important guideline is to populate equivalent ranks per channel. For instance, mixing one single-rank DIMM and one dual-rank DIMM in a channel should be avoided.
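As a rough illustration of these guidelines, here is a small sketch that checks a planned layout; the channel names and per-DIMM rank counts are hypothetical, not read from real hardware.

    # Hypothetical sketch: check a planned DIMM layout against the guidelines
    # above (equal ranks per channel; beware quad-rank down-clocking).
    # Channel names and rank counts are illustrative, not read from hardware.

    def check_channel_ranks(channels):
        """channels: dict mapping channel name -> list of per-DIMM rank counts."""
        problems = []
        for name, ranks in channels.items():
            if len(set(ranks)) > 1:
                problems.append(f"{name}: mixed ranks {ranks} -- avoid mixing")
            if 4 in ranks:
                problems.append(f"{name}: quad-rank DIMM may force a lower clock")
        return problems or ["layout looks consistent"]

    # Example: channel A mixes a single-rank and a dual-rank DIMM.
    for msg in check_channel_ranks({"A": [1, 2], "B": [2, 2]}):
        print(msg)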

A memory rank is, simply put, a block or area of data that is created using some or all the memory chips on a memory module.
A rank must be 64 bits of data wide; on memory modules that support Error Correction Code (ECC), the 64-bit wide data area requires an 8-bit wide ECC area, for a total width of 72 bits. Depending on how memory modules are engineered, they can contain one, two, or four of these 64-bit wide data areas (or 72-bit wide areas, where 72 bits = 64 data bits + 8 ECC bits).
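As a worked example of that arithmetic (the chip counts and widths below are illustrative, not from any particular module):

    # Arithmetic sketch of the rank definition above: a rank is 64 data bits,
    # or 72 bits with ECC. Chip counts and widths are illustrative examples.

    DATA_BITS = 64
    ECC_BITS = 8

    def ranks_on_module(num_chips, chip_width_bits, ecc=True):
        rank_width = DATA_BITS + (ECC_BITS if ecc else 0)
        total_width = num_chips * chip_width_bits
        assert total_width % rank_width == 0, "not a whole number of ranks"
        return total_width // rank_width

    print(ranks_on_module(18, 4))  # 18 x4 chips -> 72 bits  -> 1 rank (ECC)
    print(ranks_on_module(36, 4))  # 36 x4 chips -> 144 bits -> 2 ranks (ECC)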

So to sum up everything, it appears that ranks have more to do with density and pricing than actual performance. Granted, I'm working off generalized statements from a vendor and Wikipedia, but I don't think most people put much effort into researching ranks. All that matters (for most server admins) is that the RAM have matching ranks.
finance_teacher asked:

andyalder commented:
1. No, it is false. A dual-rank DIMM behaves similarly to two single-rank DIMMs in one slot (they were sometimes called double-sided because of this, which is a bit of a misnomer). It doesn't do everything twice for a single memory read/write cycle.

2. Although you can't access both ranks at the same time, there are some speed advantages to using dual-rank DIMMs. The memory controller can send read requests to both ranks and then wait for both answers to come back. Since several clock cycles are involved in fetching the data from the cells, time is saved: by requesting both at once, the second answer is in the buffer sooner than if the controller asked for one after the other. They are similar to hard disks in that respect.
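A toy timing model shows where the saving comes from; the cycle counts below are made-up illustrative numbers, not real DRAM timings.

    # Toy timing model of the overlap described above: the controller issues
    # a read to each rank back-to-back instead of serially. Cycle counts are
    # made-up illustrative numbers, not real DRAM timings.

    ACCESS_CYCLES = 30  # cycles for a rank to return data (assumed)
    ISSUE_GAP = 4       # cycles between issuing the two requests (assumed)

    serial = 2 * ACCESS_CYCLES              # ask rank 0, wait, then ask rank 1
    overlapped = ISSUE_GAP + ACCESS_CYCLES  # ask both, wait for the later reply

    print(f"serial: {serial} cycles, overlapped: {overlapped} cycles")
    print(f"cycles saved: {serial - overlapped}")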

3. Dual-rank wherever possible, so long as it doesn't slow the clock speed down due to bus loading. Three DIMMs per channel may run at a slower clock rate than two or one DIMM per channel.
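Something like the sketch below captures the trade-off; the speeds are placeholders, not Dell's real figures, so check the T630 memory population guide for the actual table.

    # Illustrative sketch of bus loading: more DIMMs per channel can force a
    # lower memory clock. Speeds below are assumed placeholders; consult the
    # server's memory population guide for the real values.

    SPEED_BY_DIMMS_PER_CHANNEL = {1: 2133, 2: 1866, 3: 1600}  # MT/s, assumed

    def effective_speed(dimms_per_channel):
        return SPEED_BY_DIMMS_PER_CHANNEL.get(dimms_per_channel, 1333)

    for n in (1, 2, 3):
        print(f"{n} DIMM(s) per channel -> {effective_speed(n)} MT/s (assumed)")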

Your summing up is pretty accurate: it's density and price, not raw speed, that matter for almost all benchmark tests and real life. You only need fast memory when doing memory-intensive graphics or running HPC calculations such as weather forecasts. With general business applications it's quantity rather than quality that gives the speed, because anything that isn't in RAM has to be fetched from slow disk.

Take a look at the white paper "Fujitsu PRIMERGY Servers: Memory Performance of Xeon E5-2600 v2 (Ivy Bridge-EP) based Systems". They deliberately nobble their servers by misconfiguring the memory so that just one channel is used, and it's only a tad slower than a well-configured one. One of the tables shows SPECint_rate_base2006 being 6% slower with one DIMM channel populated rather than all four; the STREAM benchmark shows the same config to be 61% slower, but real-life applications are not like the STREAM benchmark, they are like SPECint.

>All that matters (for most server admins) is that RAM have matching ranks.

It doesn't really matter if ranks don't match; you're in the realm of just 6% slower if it's misconfigured, and you wouldn't normally notice it.
Dan McFadden (Systems Engineer) commented:
You are diving too deep into memory configuration for a file server.

For a file server, the following are important (kinda in order):
1. HDD size and speed
2. NIC configuration and speed (multiple NICs with multiple ports in 1 or more NIC Teams)
3. Amount of RAM
4. Redundant power supplies
5. Number of CPUs

<disclaimer>
     I'm sure the order of the list could start a technical preference argument, but this list is based on my experience.
</disclaimer>

In a dedicated file server, most of the system's RAM will be used for caching by the file-sharing process. In this situation, faster RAM isn't going to get you as much value as the quantity of RAM in the server. File transfer will be limited by two factors. The first is network bandwidth, both the server's and the clients'. Where feasible, the server should have an order of magnitude greater bandwidth than the clients; meaning, the server should run at 1, 2, or 4 Gbps (teamed, aggregate bandwidth) while the clients typically run at 100 Mbps.
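As a back-of-the-envelope check on that sizing rule, using the example figures from above (not a recommendation):

    # Back-of-the-envelope sketch of the sizing rule above: how many 100 Mbps
    # clients can a teamed server link serve at full speed? Figures are the
    # example numbers from this comment, not a recommendation.

    def clients_at_full_speed(team_gbps, client_mbps=100):
        return (team_gbps * 1000) // client_mbps

    for team in (1, 2, 4):
        print(f"{team} Gbps team -> ~{clients_at_full_speed(team)} clients at 100 Mbps")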

The other limiting factor is disk speed and configuration. Buying ten 2TB 5400RPM disks will just keep you up at night. You need to understand what type of access needs what type of disk.

Backup disks can be slow, but forward-facing (client) disks should be speedy 7200RPM+ disks in a RAID configuration. With Server 2012, though, you can get away without RAID by using Storage Spaces and commodity HDDs; that is a completely separate discussion on implementation.

So, IMO, focusing so much on what RAM to buy for a file server is focusing on the wrong topic.

- Basically, buy matched pairs of sticks and place them in the matching sockets (see the sketch after this list)
- Don't mix sticks of various sizes
- Don't mix sticks of various speeds
- Don't mix sticks from different vendors
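A minimal sketch of those matching rules, using hypothetical stick attributes; a real check would read SPD data with vendor tooling.

    # Minimal sketch of the matching rules above. Stick attributes are
    # hypothetical; a real check would read SPD data via vendor tooling.

    def sticks_match(sticks):
        """sticks: list of dicts with 'size_gb', 'speed', and 'vendor' keys."""
        for key in ("size_gb", "speed", "vendor"):
            if len({s[key] for s in sticks}) > 1:
                return False, f"mismatched {key}"
        return True, "all sticks match"

    kit = [
        {"size_gb": 16, "speed": 1866, "vendor": "VendorA"},
        {"size_gb": 16, "speed": 1866, "vendor": "VendorA"},
    ]
    print(sticks_match(kit))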

Dan