In my previous article, I wrote that, "Code that works well is good code." When I say, "works well," I mean both "works correctly" and "works efficiently." We've all pored over bits of code, trying to root out inefficiencies. One of the ways we can examine Java code is by decompiling it, using the javap command from the JDK. (If you're not familiar with how to read bytecode, see a primer such as Java Bytecode Fundamentals.)
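The original demo class isn't reproduced here, so the following is a minimal reconstruction assuming only the method names discussed below; the actual bodies are placeholders.

```java
// Hypothetical reconstruction of the demo class; the real method bodies
// from the article are not shown, so these are stand-ins.
public class BytecodeDemo {

    static String computeString() {
        // Any computation that produces a String will do for this demo.
        return "computed-" + Integer.toHexString(42);
    }

    static void useString(String s) {
        // Stand-in for code that actually consumes the value.
        System.out.println(s.length());
    }

    static void computeValueAndDoNothing() {
        // The returned String is discarded, leaving a pop in the bytecode.
        computeString();
    }

    static void computeValueAndUseItLater() {
        // Stores the result in a local variable, then loads it again.
        String value = computeString();
        useString(value);
    }

    static void computeValueAndUseItImmediately() {
        // Passes the result along directly, with no store/load pair.
        useString(computeString());
    }
}
```

You can compile it and disassemble the result with `javac BytecodeDemo.java` followed by `javap -c BytecodeDemo`.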
You should see a list of the methods in the BytecodeDemo class, along with the bytecode that the Java VM would execute in order to run those methods. The bytecode definitely shows a few things that could be called "inefficiencies," but it might not be the best idea to "fix" them.
First, look at the bytecode for the computeValueAndDoNothing() method. It calls the computeString() method, but does not use the value it returns. That value is still on the JVM stack, though, so the method has to do that pop operation to get rid of it.
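On a typical JDK, the disassembly of such a method looks roughly like this (a sketch; the constant-pool indices depend on the class):

```
static void computeValueAndDoNothing();
  Code:
     0: invokestatic  #2    // Method computeString:()Ljava/lang/String;
     3: pop                 // discard the unused return value
     4: return
```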
Seems like the computeValueAndDoNothing() method would be more efficient if it didn't have to pop that value, right? The only way to pull that off would be if the computeString() method didn't return a value in the first place. We can't do that, though; that method is used in other places—places that do use that value. Let's ignore that inefficiency and move on.
Look at the bytecode for the computeValueAndUseItLater() method. It stores the value returned by computeString(), then immediately loads it again, so it can be passed to useString(). Wouldn't we be better off if we used the value immediately? That's what the computeValueAndUseItImmediately() method does, and its bytecode is two operations shorter.
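Side by side, the two methods disassemble to something like this (again a sketch; exact indices vary):

```
static void computeValueAndUseItLater();
  Code:
     0: invokestatic  #2    // Method computeString:()Ljava/lang/String;
     3: astore_0            // store the result in a local variable...
     4: aload_0             // ...then immediately load it back
     5: invokestatic  #3    // Method useString:(Ljava/lang/String;)V
     8: return

static void computeValueAndUseItImmediately();
  Code:
     0: invokestatic  #2    // Method computeString:()Ljava/lang/String;
     3: invokestatic  #3    // Method useString:(Ljava/lang/String;)V
     6: return
```

The astore_0/aload_0 pair is the two-operation difference.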
That's more efficient, right?
Well, maybe it is and maybe it isn't. If our JVM were running purely in interpreted mode (that is, if we used the -Xint command-line argument when starting Java), it would run those exact sequences of operations every time it executed those methods. That's not how the modern Java world works, though; just-in-time (JIT) compilation can kick in at any time and turn a bit of bytecode into machine code. It's entirely possible that the JIT compiler will look at the computeValueAndUseItLater() method and figure out that it doesn't actually need to worry about the value local variable at all, since it's immediately being passed to the useString() method anyway.
That means there's a chance that the JIT compiler will turn the computeValueAndUseItLater() and the computeValueAndUseItImmediately() methods into the same machine code. It might turn out we looked at the bytecode for computeValueAndUseItLater(), saw the "inefficiency," and restructured it to look like the computeValueAndUseItImmediately() method, for no good reason!
Will the JIT compiler actually make an optimization like this? Probably very few people know the real answer to that question. I don't, and I'm sure the average Java developer doesn't, either. It's not really our job to worry about stuff like that.
Java developers absolutely do have to think about efficiency; using the wrong algorithm or data structure for a situation is never a good idea. There's a point, though, where the search for optimization dives too deeply into territory that might just be reorganized by the JIT compiler anyway. So what's a better idea than examining bytecode?
1. Identify whether your application has a performance issue.
2. Profile your application to find the part of your code that performs poorly.
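To see why hand-timing bytecode-level differences is a dead end, here is a crude sanity check, not a real benchmark; serious measurement should use a profiler or a harness like JMH, which handles JIT warmup and dead-code elimination properly. The method bodies below are assumptions standing in for the demo class discussed above.

```java
// Crude timing sketch. The "later" and "immediately" variants mirror the
// store/load difference from the bytecode discussion; after JIT warmup,
// expect their timings to be essentially indistinguishable.
public class CrudeTiming {
    static int sink; // accumulate results so the JIT can't discard the work

    static String computeString() { return "x" + System.nanoTime(); }
    static void useString(String s) { sink += s.length(); }

    static void later()       { String value = computeString(); useString(value); }
    static void immediately() { useString(computeString()); }

    public static void main(String[] args) {
        // Warm up so the JIT compiles both methods before we time them.
        for (int i = 0; i < 100_000; i++) { later(); immediately(); }

        long t0 = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) later();
        long t1 = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) immediately();
        long t2 = System.nanoTime();

        System.out.printf("later: %d ms, immediately: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```

Even this harness is fragile, which is exactly the point: measure real application behavior with a profiler rather than guessing from the bytecode.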