
# Power and BTU ratings calculation

S Connelly asked
Hello all,

Hopefully someone can help with my confusion. I am trying to calculate the power requirements and BTUs for two servers that are identical in everything but the storage devices.

I am building two servers, where the chassis power supply specifications are:
1000W Output @ 100-140V, 12-8A, 50-60Hz
1280W Output @ 180-240V, 8-6A, 50-60Hz

All the additional components (motherboard, cpu, memory, etc) will come from different manufacturers but will be the same in both servers.

The main difference between the two servers will be:
Server 1: 12 x 4TB traditional 3.5" hard drives
Server 2: 12 x 2TB SSD drives

The confusion I am having is that I would expect the power requirements for Server 1 to be significantly higher than for Server 2, because traditional hard drives draw significantly more power (both at startup and during normal operation), while solid state drives draw a lot less. However, the manufacturer is telling me that the power ratings are the same because the power supply dictates how much power is used.

So, assuming the manufacturer is correct about the power requirements, what about the BTUs? That is definitely going to be a very different value. The problem is that BTUs are usually calculated from voltage and current.

Any advice is appreciated.

Thank you,
Doug

Commented:
First of all, the power supply numbers you are citing are the maximum rated output capabilities of the power supplies.  This is not the same as the input power requirement for two reasons: they'll not be running at full capacity (making the input requirement less) and they are not 100% efficient (making the input requirement higher).

You can look at the specs on the hard drives to see how much power they draw.  Add about 25% to that (assume 80% efficient power supply unless you have better data) and you'll get what input power they will draw.
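As a rough sketch of that estimate (the 400 W component load and the 80% efficiency are assumed example figures, not measurements):

```python
# Estimate AC input power from the DC load on the components.
# Assumed figures for illustration: 400 W of components, 80% efficient PSU.
def input_power(dc_load_w: float, efficiency: float = 0.80) -> float:
    """AC wall-power draw needed to deliver dc_load_w through the PSU."""
    return dc_load_w / efficiency

print(input_power(400))  # 400 W of components -> 500.0 W at the wall
```

The 25% markup in the text is the same arithmetic: dividing by 0.8 adds 25% on top of the DC load.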

You can use an inexpensive device to measure the actual power draw of the server.  I've used Kill-A-Watt devices that cost around \$30 or less to do that.

As far as BTUs go, 1 Watt is about equal to 3.4 BTU/hour.  It is a fairly safe assumption that ALL of the power going into the server will end up as heat, so multiply the power input on the server by 3.4 and you'll get how many BTU/hour it will produce.
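That rule of thumb is easy to put in code (the 500 W figure is just an example input):

```python
WATTS_TO_BTU_PER_HOUR = 3.412  # 1 W of continuous draw ~= 3.412 BTU/hr

def btu_per_hour(input_watts: float) -> float:
    # Safe assumption: essentially all power entering the server becomes heat.
    return input_watts * WATTS_TO_BTU_PER_HOUR

print(btu_per_hour(500))  # 500 W at the wall -> ~1706 BTU/hr
```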

Commented:
Just in case I wasn't clear, the manufacturer's comment that "the power supply dictates how much power is used" is nonsense! The only impacts the power supply has are the total limit of what it can produce (which you should stay well below) and its efficiency.
Distinguished Expert 2019
Commented:
Power supplies also have an efficiency rating. 80% is typical, and you won't usually see more than about 90%. The more efficient the power supply, the less heat it dissipates. Conservation of energy means that energy cannot be created or destroyed, only transformed into another form; whatever the PSU loses to inefficiency is given off as heat.

Intel's ATX specification only requires that a power supply be 60% efficient at 50% load. Most decent-quality power supplies made in the last decade are around 70% efficient at 50% load.

EnergyStar Power Supply Rating Minimums

| Rating           | 20% load | 50% load | 100% load |
|------------------|----------|----------|-----------|
| 80 PLUS          | 80%      | 80%      | 80%       |
| 80 PLUS Bronze   | 82%      | 85%      | 82%       |
| 80 PLUS Silver   | 85%      | 88%      | 85%       |
| 80 PLUS Gold     | 87%      | 90%      | 87%       |
| 80 PLUS Platinum | 90%      | 92%      | 89%       |

As you can see, power supplies are most efficient at 50% load. If there is no load, the PSU will draw only a very small amount of current. Also, when selecting a PSU you have to look at the 12V and 5V rail limits, and ensure that your SATA power connectors have the orange 3.3V wire (typically only needed with 1.8" drives).
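A small sketch of what those tiers mean in practice, using the 50%-load efficiencies from the 80 PLUS table and an assumed 500 W DC load:

```python
# Heat dissipated inside the PSU itself for a given DC load,
# at the 50%-load efficiency of each 80 PLUS tier.
EFFICIENCY_AT_50_PCT_LOAD = {
    "80 PLUS": 0.80,
    "80 PLUS Bronze": 0.85,
    "80 PLUS Silver": 0.88,
    "80 PLUS Gold": 0.90,
    "80 PLUS Platinum": 0.92,
}

def psu_waste_heat(dc_load_w: float, tier: str) -> float:
    """Watts the PSU itself turns into heat: input power minus useful output."""
    return dc_load_w / EFFICIENCY_AT_50_PCT_LOAD[tier] - dc_load_w

for tier in ("80 PLUS", "80 PLUS Platinum"):
    print(f"{tier}: {psu_waste_heat(500, tier):.1f} W lost as heat")
```

At a 500 W load, the plain 80 PLUS unit wastes about 125 W as heat versus roughly 43 W for a Platinum unit, which is why the efficiency tier matters for cooling as well as for the electricity bill.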
IT Manager
Commented:
Your power figures above are not completely clear to me:
1000W Output @ 100-140V, 12-8A, 50-60Hz
1280W Output @ 180-240V, 8-6A, 50-60Hz

Your voltage figures are completely different.

The first set (100-140V) look to refer to voltages in The Americas - 120V.
The second set of 180-240V are more in line with what I would expect in Europe where power is normally 220V or 240V.

You can't compare the two power supply ratings on this basis; it's apples vs oranges.

However, CompProbSolve is right: those are maximum ratings, and you will get much more realistic answers with a Kill-A-Watt meter:
http://www.p3international.com/products/p4400.html

I would also suggest that in real usage, you will not see any significant difference between the power and BTU use of these 2 servers.  Unless you're filling 100 racks with identical models, most cooling systems will cope admirably with this.  Of course, if one does run hotter, the cooling just works harder.

Our comms room had a recent blip: the chiller units went berserk, and the cooling took the room down to about 5 deg C (about 40 deg F). The normal temp is 20 deg C (70 F). While we checked into this, we looked at the environmental temps on the servers, and our Dells are rated as being happy between 5 and 35 deg C (40 F to 110 F). They can even tolerate 5 deg C *beyond* this for short periods. The point is that the cooling has a lot of flexibility in most modern setups.

If you can post the model numbers of the servers themselves, we can comment further, but I don't think you'll see any real-world differences.
Technical Writer

Commented:
So sorry it has taken this long to respond and grade this question.

Thank you so much everyone. Your answers were correct and insightful. :)
Technical Writer

Commented:
BTW, I am curious about one characteristic that I observed and I think Danny Child first predicted this...
"I would also suggest that in real usage, you will not see any significant difference between the power and BTU use of these 2 servers. "

I began this thread because I thought it was unlikely that a server using all mechanical hard drives (12 in total) would consume the same amount of power as an identical server with all flash drives (12 in total). In other words, the two servers are identical except for their storage arrays.

Well, I placed a power meter on both units and was surprised to learn that the real power consumption was roughly the same for both: each drew around 680 watts under a typical workload.

I am puzzled by these results. I always thought the SSD drives consumed significantly less power.

Any thoughts?

Thanks.
Distinguished Expert 2019

Commented:
What you probably haven't factored in is how much of those 680 watts is directly attributable to the hard drives themselves.

Commented:
I looked up some Seagate drives and found the following
ST2000VN0001 (conventional HD) has a 6.4W typical operating power consumption, 4.5W in idle
XF1230-1A1920 (SATA SSD) has a 4.5W max. active average power, 0.7W in idle

With 12 drives (assuming the specs above are realistic in both cases), you should see about 22W more draw with the conventional HDs when active and about 45W more when idling.
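The arithmetic behind those deltas, using the quoted per-drive spec figures for 12 drives per chassis:

```python
# Per-drive power figures quoted above (watts):
# ST2000VN0001 HDD vs XF1230-1A1920 SSD.
N_DRIVES = 12
HDD = {"active": 6.4, "idle": 4.5}
SSD = {"active": 4.5, "idle": 0.7}

active_delta = N_DRIVES * (HDD["active"] - SSD["active"])
idle_delta = N_DRIVES * (HDD["idle"] - SSD["idle"])
print(f"active: {active_delta:.1f} W, idle: {idle_delta:.1f} W")
```

This comes out to roughly 22.8 W active and 45.6 W idle, matching the ~22 W and ~45 W figures above.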

Those differences aren't dramatic when looking at 680W overall, but still should have been measurable.  Was the power draw varying enough that you'd not have noticed a 22W difference?
Technical Writer

Commented:
You are absolutely correct. Thank you. I just compared the manufacturer's specifications for the SSDs vs the HDDs and was surprised to learn that there is only about a 3-watt average difference per drive, or roughly 36 watts in total per chassis. I always thought the difference was much greater.
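Putting that ~36 W spec-sheet difference alongside the measured ~680 W total draw shows why it was easy to miss on the meter:

```python
total_draw_w = 680.0   # measured typical draw for either server
drive_delta_w = 36.0   # spec-sheet HDD-vs-SSD difference per chassis
share = drive_delta_w / total_draw_w
print(f"{share:.1%} of total draw")  # prints "5.3% of total draw"
```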

What puzzles me is that, physically, the mechanical hard drives seem to run at a higher temperature than the solid state drives. I just checked, and I swear the SSD drives feel physically cooler. But the SSD drives are encased in plastic or resin and the HDDs are encased in aluminum. Perhaps the different physical properties create an illusion that one is running hotter than the other?

Thank you again. :)

Commented:
"the mechanical hard drives seem to generate a higher temperature"

I'd bet that part of the HDD (where the motor is) does get significantly hotter whereas the SSD is a pretty consistent temperature throughout.

Keep in mind that the specs I found have the HDDs dissipating about 50% more energy than the SSDs.  Virtually all of that goes into heat, so you would expect the HDDs to be hotter.  3W is not insignificant when over a small volume.
Distinguished Expert 2019

Commented:
Now you are dealing with efficiency. Any energy that is given off as heat doesn't get converted to work, and SSDs convert more of their energy to work. Under a heavy load I've noticed some SSDs getting rather warm. Many years ago I had 2 x 1GB SCSI 5.25" full-height drives that drew 35 watts, and it was like having two light bulbs inside the case, even at idle.

Commented:
"SSD's convert more of their energy to work"

I think that if you look at the energy output of almost all devices, nearly 100% of the energy consumed goes into heat. What is the "work" an SSD produces that doesn't get turned into heat eventually? Unless it is transmitting electromagnetic signals, sound, or mechanical output (which would likely end up as heat eventually anyway), or storing the energy in some fashion (which would be temporary), it all ends up as heat.
Technical Writer

Commented:
This is great information and a good reminder of a basic science principle. So, when calculating BTUs, the formula is BTU/hr = P(watts) x 3.412, where 1 watt = 3.412 BTU per hour. The formula correctly assumes that nearly 100% of the consumed power will eventually be transformed into heat (as CompProbSolv stated).

Thanks again.