• Status: Solved
• Priority: Medium
• Security: Public
• Views: 807

# Integer benchmarks vs. floating-point benchmarks

Okay,

I have done a lot of work comparing the performance of two computers using two benchmarks I wrote myself (mips.java and mflops.java) and two standard benchmarks (Linpack and Dhrystone). Here are the results I got:

Running my toy integer and floating-point benchmarks on Com1 and Com2:

•     Within 1 sec, Com1 runs floating-point arithmetic operations 2.47 times faster than Com2.
•     Within 1 sec, Com1 runs integer arithmetic operations 4.3 times faster than Com2.

Running the standard Linpack (floating-point) and Dhrystone (integer) benchmarks on Com1 and Com2:

•     Within 1 sec, Com1 runs floating-point operations 7.33 times faster than Com2.
•     Within 1 sec, Com1 runs the integer arithmetic algorithm 4.92 times faster than Com2.

Running two real applications, one integer-bound and one floating-point-bound, on Com1 and Com2:

•     Within 1 sec, Com1 runs floating-point operations 11.8 times (11.8:1) faster than Com2.
•     Within 1 sec, Com1 runs integer arithmetic operations 4.5 times (4.5:1) faster than Com2.
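For reference, here is a minimal sketch of the kind of toy floating-point loop described above. This is hypothetical - the original mflops.java is not shown in the thread - but it illustrates the usual structure: time a fixed number of multiply-adds and report millions of floating-point operations per second.

```java
// Hypothetical sketch of a toy MFLOPS benchmark (the original mflops.java
// is not shown). Times n multiply-add iterations and reports MFLOPS.
public class ToyFlops {
    // Runs n multiply-add iterations and returns the measured MFLOPS.
    static double measureMflops(int n) {
        double x = 1.000001, acc = 0.0;
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            acc = acc * x + 1.0;   // one multiply + one add = 2 FLOPs
        }
        long elapsed = System.nanoTime() - start;
        // Print acc so the JIT cannot eliminate the loop as dead code.
        System.out.println("checksum: " + acc);
        return (2.0 * n) / (elapsed / 1e9) / 1e6;
    }

    public static void main(String[] args) {
        System.out.printf("%.1f MFLOPS%n", measureMflops(10_000_000));
    }
}
```

Note that in Java the first runs are interpreted before the JIT kicks in, so a loop like this typically needs warm-up iterations before the timed run to get a stable figure.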

Now my questions are:

1. Looking at the results above, the integer benchmarks track the ratio seen in the real applications more closely than the floating-point benchmarks do. Why is this the case?

2. The peak MIPS and MFLOPS I measured on both computers only reach about 20% of the published peak rates. Besides the use of Java implementations, what other factors have caused this?

THANKS.
jtcy
1 Solution

Commented:
> 1. Looking at the results above, i can see that integer benchmarks work better than floating point
> benchmark to be consistent with the ratio of real applications. Why is this the case?

Most general-purpose applications are integer-based (even financial applications).  Beyond numeric
calculations, all string manipulation and program flow-of-control instructions are considered
"integer" in nature.  A few scientific, graphical, and financial applications benefit from floating-point
support, but the vast majority of "productivity apps" do not.  For that reason, volume chip manufacturers
concentrate on integer performance.  Many scientific applications tend to gravitate toward processors
with superior floating-point performance (either multiple concurrent floating-point units, or massive
vector processors like the AltiVec).

> 2. The peak MIPS and MFLOPS i got from running on both computers only get to 20++% of the peak rates.
> Besides the reason of using JAVA implementations, what other factors have caused this to happen?

Many of the "official" published benchmarks are performed using highly optimized compilers - sometimes
optimized especially for the benchmarks.  There have been several documented cases of compiler
writers recognizing benchmark code and generating specially tuned code.  This optimization is nearly always
useless for general applications, so your average word processor doesn't run twice as fast just because
SPECint does.


Author Commented:
Um, I appreciate your comment, but I still don't fully understand the cause of No. 1 or what causes No. 2.

Commented:
To compare two benchmark programs, you should do something like this:
Run the programs on more than two computers.
Then plot the results in a graph:
x-axis for one program
y-axis for the other program
Now you can use a statistical model to find the correlation between the results.
A good correlation means the programs agree - both are either good or bad.
A bad correlation means at least one of your programs is bad.
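The correlation step above can be sketched as follows. This is an assumed illustration, not code from the thread: it computes the Pearson correlation coefficient between two programs' scores across several machines (the scores here are made up).

```java
// Sketch of the suggested correlation check (assumed approach): compute the
// Pearson correlation between two benchmark programs' scores across machines.
public class BenchCorrelation {
    // Pearson correlation coefficient r between paired samples x and y.
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double cov = 0, vx = 0, vy = 0;
        for (int i = 0; i < n; i++) {
            double dx = x[i] - mx, dy = y[i] - my;
            cov += dx * dy; vx += dx * dx; vy += dy * dy;
        }
        return cov / Math.sqrt(vx * vy);
    }

    public static void main(String[] args) {
        // Hypothetical scores of two benchmark programs on five machines.
        double[] progA = {10, 20, 30, 40, 50};
        double[] progB = {12, 19, 33, 41, 48};
        System.out.printf("r = %.3f%n", pearson(progA, progB));
    }
}
```

An r close to 1 means the two programs rank the machines consistently; an r much lower than 1 means at least one program is measuring something else.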

Author Commented:
I know - I even got the results above. That's not what I am asking.

Commented:
> I know - I even got the results above. That's not what I am asking.
What I mean is that you'll always have some kind of overhead.
Some of it is proportional to the number of operations, and some of it is independent of the number of operations.
So you cannot base your analysis simply on the ratio Computer1/Computer2.
You first need to eliminate the part of the overhead that is independent of the number of operations.
To do that, test your program on several computers, compare the results against some standard benchmark, and do a linear regression.  The intercept of that fit is the independent overhead, and the corrected ratio is (Computer1 - overhead)/(Computer2 - overhead).
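The regression described above can be sketched like this. It is a minimal illustration under assumed data (the machine scores are invented): fit own-benchmark times against a standard benchmark's scores across several machines with ordinary least squares, treat the intercept as the fixed overhead, and compute the corrected ratio.

```java
// Sketch of the suggested correction (assumed data): fit y = a + b*x by
// ordinary least squares, treat the intercept a as fixed overhead, then
// compare two machines on (time - a) instead of raw time.
public class OverheadFit {
    // Least-squares fit of y = a + b*x; returns {a, b}.
    static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; sxy += x[i] * y[i];
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double a = (sy - b * sx) / n;
        return new double[]{a, b};
    }

    public static void main(String[] args) {
        // Hypothetical results on four machines: standard-benchmark score (x)
        // vs. own-benchmark time (y), with a constant overhead of 0.5 baked in.
        double[] std = {1.0, 2.0, 3.0, 4.0};
        double[] own = {1.5, 2.5, 3.5, 4.5};
        double[] ab = fit(std, own);
        System.out.printf("overhead = %.2f, slope = %.2f%n", ab[0], ab[1]);

        // Corrected ratio for two machines' measured times.
        double t1 = 2.5, t2 = 1.5;
        System.out.printf("corrected ratio = %.2f%n", (t1 - ab[0]) / (t2 - ab[0]));
    }
}
```

With these numbers the fit recovers an overhead of 0.5, and the corrected ratio (2.0) differs noticeably from the raw ratio 2.5/1.5 ≈ 1.67, which is the point of the correction.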
Question has a verified solution.
