Understand JVM and JIT Compiler — Part 3

Júlio Falbo
6 min read · Sep 12, 2020

Hello people!

Following on from the second part of this series, I'll explain in more detail what deoptimization means.

Deoptimization means that the compiler needs to “undo” a previous compilation. The effect is that the performance of the application will be reduced (at least until the compiler can recompile the code).

So, to clarify: when the JVM executes code, it does not begin compiling it immediately, and there are two basic reasons for this.

First: imagine we have code that will be executed only once. Compiling it is a completely wasted effort; it will be faster to just interpret the Java bytecode.

But if the code in question is a frequently called method or a loop that runs many times, then compiling it will bring benefits: the effort required to compile the code is offset by the time saved over many executions of the faster, compiled code.

That trade-off is one reason the JVM runs code in the interpreter first rather than compiling it immediately.

The second reason is one of optimization: the more times the JVM executes a particular method or loop, the more information it has about that code (this is also called "profiling"). This information allows the JVM to make many optimizations when compiling the code.

According to Scott Oaks, for a simple example, let's consider the equals() method. This method exists in every Java object (because it is inherited from the Object class) and is often overridden. When the interpreter encounters the statement b = obj1.equals(obj2), it must look up the type (class) of obj1 in order to know which equals() method to execute. This dynamic lookup can be somewhat time-consuming. Over time, say the JVM notices that each time this statement is executed, obj1 is of type java.lang.String. Then the JVM can produce compiled code that directly calls the String.equals() method. Now the code is faster not only because it is compiled but also because it can skip the lookup of which method to call. It's not quite as simple as that; it is possible that the next time the code is executed, obj1 refers to something other than a String. The JVM will create compiled code that deals with that possibility, which will involve deoptimizing and then reoptimizing the code in question. Nonetheless, the overall compiled code here will be faster (at least as long as obj1 continues to refer to a String) because it skips the lookup of which method to execute. That kind of optimization can be made only after running the code for a while and observing what it does.
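To make this concrete, here is a conceptual sketch, in Java, of what the compiled code effectively does once the JVM has observed that obj1 is always a String. This is only an illustration of the idea; the JIT actually emits machine code with an internal guard, not Java source:

// Conceptual sketch only: the JIT emits machine code, not Java source.
// After profiling shows that obj1 is always a String, the compiled
// version of "b = obj1.equals(obj2)" effectively becomes:
static boolean speculativeEquals(Object obj1, Object obj2) {
    if (obj1.getClass() == String.class) {
        // Direct, inlinable call to String.equals(): no dynamic lookup.
        return ((String) obj1).equals(obj2);
    }
    // Assumption broken: the real JVM would deoptimize here, fall back
    // to the interpreter, and later recompile with the new profile.
    return obj1.equals(obj2); // generic virtual dispatch
}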

This is the second reason JIT compilers wait to compile sections of code.

Now let’s talk a little bit more about Deoptimization.

Deoptimization can happen in two cases: when code is made not entrant, and when code is made zombie, as mentioned before.

Not entrant code

Two things can cause code to be made not entrant. The first is the use of polymorphism and interfaces, and the second can simply happen as part of tiered compilation (as code moves from tier 1 to tier 4).

So, let’s create 1 interface with 2 implementations, it will help us to understand better how it works.

Interface: MyInterface

Implementation: MyInterfaceImpl and MyInterfaceLoggerImpl

Let’s look at the first case.

MyInterfaceImpl is a simple implementation, while MyInterfaceLoggerImpl adds some log statements to the implemented method.
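Here is a minimal sketch of what these classes might look like (the method name addARandomNumber comes from the compilation log shown further below; the method bodies are just an assumption for illustration, with each type living in its own file under a main package):

public interface MyInterface {
    int addARandomNumber(int number);
}

public class MyInterfaceImpl implements MyInterface {
    @Override
    public int addARandomNumber(int number) {
        // Plain implementation: nothing but the computation itself.
        return number + java.util.concurrent.ThreadLocalRandom.current().nextInt(10);
    }
}

public class MyInterfaceLoggerImpl implements MyInterface {
    @Override
    public int addARandomNumber(int number) {
        int result = number + java.util.concurrent.ThreadLocalRandom.current().nextInt(10);
        // The extra log statement that distinguishes this implementation.
        System.out.println("Result: " + result);
        return result;
    }
}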

Now, let’s imagine that the first 45.000 executions will call the MyInterfaceImpl, and then the MyInterfaceLoggerImpl will be called for the rest 5.000 executions.

If you execute the code above using the flag -XX:+PrintCompilation, you will see a line like this:

152  184       3       main.MyInterfaceImpl::addARandomNumber (10 bytes)   made not entrant

The explanation is that the compiler sees that the current type of the myInterface object is MyInterfaceImpl. It then inlines code and performs other optimizations based on that knowledge.

Now, after a bunch of executions (45,000 in our example) using MyInterfaceImpl, we enter the second phase, where the implementation is MyInterfaceLoggerImpl. The assumption the compiler made about the type of the myInterface object is now wrong, so the previous optimizations are invalid. This triggers a deoptimization, and the previously optimized code is discarded. If many additional calls are then made through MyInterfaceLoggerImpl, the JVM will quickly compile that code and make new optimizations.

Now that we understand how the first scenario (polymorphism and interfaces) works, let's talk about the second one.

The second thing that can cause code to be made not entrant is how tiered compilation works. When code is compiled by the C2 (server) compiler (at tier 4), the JVM must replace the code already compiled by the C1 (client) compiler. It does this by marking the old code as not entrant and using the same deoptimization mechanism to replace it with the newly compiled (and more efficient) code. So, when a program runs with tiered compilation, the compilation log will show some methods being made not entrant. In this case, though, the "deoptimization" is in fact making the code that much faster.
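In the -XX:+PrintCompilation output, that replacement looks something like the lines below (timestamps and compilation ids are illustrative and will vary between runs):

 120   85       3       main.MyInterfaceImpl::addARandomNumber (10 bytes)
 248   92       4       main.MyInterfaceImpl::addARandomNumber (10 bytes)
 249   85       3       main.MyInterfaceImpl::addARandomNumber (10 bytes)   made not entrant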

Deoptimizing zombie code

According to Scott Oaks, when the compilation log reports that it has made zombie code, it is saying that it has reclaimed previous code that was made not entrant.

Using our example: after the test runs with the MyInterfaceLoggerImpl implementation, the code for the MyInterfaceImpl class is made not entrant. But objects of the MyInterfaceImpl class are still in memory. Eventually, all those objects will be collected by the garbage collector (GC). When that happens, the compiler notices that the compiled methods of that class can be marked as zombie code.
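In the compilation log, this shows up as a line like the following (values again illustrative):

9876   85       3       main.MyInterfaceImpl::addARandomNumber (10 bytes)   made zombie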

Performance-wise, this is a good thing, and I'll explain why.

That compiled code is kept in a fixed-size code cache. When zombie methods are identified, this code can be removed from the code cache, making room for other code to be compiled and added there.

Amazing, Julio, but let's imagine that I want to know more about my compilation than the info that -XX:+PrintCompilation provides.

It is totally possible, my dear friend!

There are two flags that will help us generate a file with much more information: -XX:+UnlockDiagnosticVMOptions and -XX:+LogCompilation.

Running with them will create a file called hotspot_pid<SOME_PID>.log, which you can open to see a lot of information about your application.

By default, this file is placed in the same folder as your project, but you can change its location using the flag -XX:LogFile=<PATH>.
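For example, a full command line combining these flags might look like this (Main is the hypothetical driver class from the sketch above):

java -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation -XX:LogFile=/tmp/hotspot_main.log Main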

But Julio, this log is really hard to understand. Is there any way to make it easier to read?

Of course there is; the Java community is amazing!

There is an amazing open-source project called JITWatch, maintained by AdoptOpenJDK, that will help us to track our JIT Compiler.

Using it is really simple: just clone the official repository and run it with Maven or Gradle, as in the example below.

git clone git@github.com:AdoptOpenJDK/jitwatch.git
cd jitwatch
mvn clean compile test exec:java
# or
./gradlew clean build run

Important note: to generate the hotspot.log file used by JITWatch, we need one more flag: -XX:+TraceClassLoading. So, in total, you need these three flags to generate a hotspot.log that JITWatch can understand:

-XX:+UnlockDiagnosticVMOptions -XX:+TraceClassLoading -XX:+LogCompilation
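So a complete run of our hypothetical example, ready for JITWatch, would be:

java -XX:+UnlockDiagnosticVMOptions -XX:+TraceClassLoading -XX:+LogCompilation Main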

Once JITWatch is running, you can open your log file (hotspot_pid<SOME_PID>.log) there and click the "Start" button; you will then see the full analysis of your HotSpot log file.

To learn more about this fantastic tool, visit its wiki page on the GitHub repository.

Now let’s imagine that, based on our analysis, we decided that we need to increase the Code Cache size, how can we do that?

I’ll explain how to do that in the next article of this series!
