I've put a lot of work into comparing the performance of two computers using two benchmarks I wrote myself (mips.java and mflops.java) and two standard benchmarks (Linpack and Dhrystone). Here are the results I got:
Running the toy integer and floating-point benchmarks on Com1 and Com2:
• Com1 runs floating-point arithmetic operations 2.47 times faster than Com2.
• Com1 runs integer arithmetic operations 4.3 times faster than Com2.
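For context, the toy benchmarks were along these lines — a minimal sketch, not the actual mips.java; the class name, iteration count, and operation mix here are illustrative assumptions (the MFLOPS version would be analogous, with `double` arithmetic in the loop body):

```java
// Hypothetical sketch of a toy integer benchmark (the real mips.java may differ).
public class MipsSketch {
    // Runs `iterations` loop passes of 2 integer ops each and returns a MIPS estimate.
    static double measureMips(long iterations) {
        long x = 1;
        long start = System.nanoTime();
        for (long i = 0; i < iterations; i++) {
            x += 3;   // one integer add per pass
            x ^= i;   // one integer xor per pass (data dependence keeps the loop honest)
        }
        long elapsedNs = System.nanoTime() - start;
        if (x == 42) System.out.print("");  // keep x live so the JIT can't delete the loop
        // ops per microsecond == millions of ops per second (MIPS)
        return (2.0 * iterations) / (elapsedNs / 1000.0);
    }

    public static void main(String[] args) {
        System.out.printf("~%.1f MIPS%n", measureMips(100_000_000L));
    }
}
```

Note the dead-code guard: without it, a JIT compiler may optimize the whole loop away and report an absurdly high rate, which matters for question 2 below.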
Running the standard Linpack (floating-point) and Dhrystone (integer) benchmarks on Com1 and Com2:
• Com1 runs the floating-point workload 7.33 times faster than Com2.
• Com1 runs the integer workload 4.92 times faster than Com2.
Running two real applications, one integer-bound and one floating-point-bound, on Com1 and Com2:
• Com1 runs the floating-point application 11.8 times (11.8:1) faster than Com2.
• Com1 runs the integer application 4.5 times (4.5:1) faster than Com2.
Now my questions are:
1. Looking at the results above, the integer benchmarks track the real-application ratio much more closely than the floating-point benchmarks do (4.3 and 4.92 vs. the real integer ratio of 4.5, compared with 2.47 and 7.33 vs. the real floating-point ratio of 11.8). Why is this the case?
2. The peak MIPS and MFLOPS rates I measured on both computers reach only about 20% of the advertised peak rates. Besides the benchmarks being implemented in Java, what other factors could cause this?