Hello again, people!
If you read the first part of this series and are wondering:
How can I check whether a specific piece of code was compiled to native machine language?
You are in the right place! Now I’ll explain how we can check and analyze the JIT compilation logs. So, let’s go!
There is a JVM flag that can help us with this investigation, and this flag is “-XX:+PrintCompilation”.
Here is an example of the output:
50 1 3 java.lang.StringLatin1::hashCode (42 bytes)
53 2 3 java.lang.Object::<init> (1 bytes)
53 3 3 java.lang.String::isLatin1 (19 bytes)
54 4 3 java.util.concurrent.ConcurrentHashMap::tabAt (22 bytes)
60 5 3 java.lang.String::charAt (25 bytes)
60 6 3 java.lang.StringLatin1::charAt (28 bytes)
60 7 3 java.lang.String::coder (15 bytes)
88 40 n 0 java.lang.invoke.MethodHandle::linkToSpecial(LLLLLLLL)L (native) (static)
88 39 ! 3 java.util.concurrent.ConcurrentHashMap::putVal (432 bytes)
90 41 n 0 java.lang.System::arraycopy (native) (static)
91 42 3 java.lang.String::length (11 bytes)
129 3 3 java.lang.String::isLatin1 (19 bytes) made not entrant
138 150 n 0 java.lang.Object::getClass (native)
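To produce a log like this yourself, run any small program with the flag enabled. Here is a minimal sketch (the class name HotLoop and the iteration counts are just illustrative; the exact log lines you get depend on your JVM version and machine):

```java
// Run with: java -XX:+PrintCompilation HotLoop
// The repeated calls to sum() should make it hot enough for the
// JIT to compile it, so a line mentioning HotLoop::sum should
// eventually show up in the log.
public class HotLoop {
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) {
            s += i;
        }
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 20_000; i++) {
            total += sum(100);
        }
        // Printing the result keeps the work from being optimized away.
        System.out.println(total);
    }
}
```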
Most lines of the compilation log have the following format:
timestamp compilation_id attributes tiered_level method_name size deopt
The timestamp (in milliseconds) is the time at which the compilation finished, relative to 0, which is when the JVM started.
The compilation_id is an internal task identifier. Usually this number increases monotonically, but sometimes you will see IDs out of order. This normally happens when there are multiple compilation threads and means that those threads are finishing their work at different speeds. So, it is just a function of thread scheduling.
The attributes field is a string with five characters that indicate the state of the code that is being compiled. If a specific attribute applies to the given compilation, the character for that specific attribute will be printed, otherwise, blank will be printed. The characters are:
% - The compilation is OSR (on-stack replacement).
s - The method is synchronized.
! - The method has an exception handler.
b - Compilation occurred in blocking mode.
n - Compilation occurred for a wrapper to a native method.
The first of these attributes (%) refers to on-stack replacement (OSR). We need to remember that JIT compilation is an asynchronous process. So, when the JVM decides that a certain block of code should be compiled, that block of code is put in a queue*. Instead of waiting for the compilation to finish, the JVM continues interpreting the method, and the next time the method is called, the JVM executes the compiled version (assuming the compilation has finished).
Now let’s consider a long-running loop. According to Scott Oaks, the JVM will notice that the loop itself should be compiled and will queue* that code for compilation. But that isn’t sufficient: the JVM has to be able to start executing the compiled version of the loop while the loop is still running; it would be inefficient to wait until the loop and its enclosing method exit (which may never happen). Hence, when the code for the loop has finished compiling, the JVM replaces the code on the stack, and the next iteration of the loop executes the much faster compiled version of the code.
This is OSR: the code is now running in the most optimal way possible!
*These queues are not strictly first in, first out; methods whose invocation counters are higher have priority. So even when a program starts execution and has lots of code to compile, this priority order helps ensure that the most important code will be compiled first. (This is another reason the compilation ID in the PrintCompilation output can appear out of order.)
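A single long-running loop inside main is enough to see OSR in action. The sketch below is illustrative (the class name and loop bound are my own); running it with -XX:+PrintCompilation should eventually show a line with the % attribute:

```java
// Run with: java -XX:+PrintCompilation OsrDemo
// main is only called once, but its loop runs long enough that the
// JVM compiles the loop body and replaces it on the stack (OSR),
// which shows up in the log as a line with the '%' attribute.
public class OsrDemo {
    static long compute() {
        long acc = 0;
        for (int i = 0; i < 1_000_000; i++) {
            acc += i % 7;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(compute());
    }
}
```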
The next two attributes (s and !) are easy to understand: “s” means that the method is synchronized, and “!” means that the method has an exception handler, as mentioned before.
The blocking flag (b) will never be printed by default in current versions of Java. It indicates that the compilation did not occur in the background.
The native attribute (n) indicates that the JVM generated compiled code to facilitate the call into a native method.
If tiered compilation has been disabled (with the option -XX:-TieredCompilation), the next field (tiered_level) will be blank. Otherwise, it will be a number indicating which tier has completed compilation. This number can go from 0 to 4 and the meaning is:
At tier 0, the code was not compiled; it was just interpreted.
At tiers 1, 2, and 3, the code was compiled by C1 with different amounts of extra profiling. The most optimized of them is tier 1, since it has no profiling overhead.
At tier 4, the code was compiled by C2. This means the code has been compiled at the highest possible level of compilation and was added to the code cache.
Next comes the name of the method (method_name) being compiled, which is printed as ClassName::method.
The next field is the size (in bytes) of the code being compiled. It is important to understand that this is the size of the Java bytecodes, not the size of the compiled code, so this value cannot be used to predict the code cache size.
To finish, in some cases there is a message at the end of the compilation line indicating that some sort of deoptimization (deopt) has occurred; it can be “made not entrant” or “made zombie”.
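Putting the fields together, here is a rough sketch of how one of these lines could be picked apart programmatically. It is an assumption-laden toy, not a robust parser: it only handles the common format shown above and ignores the attributes column.

```java
public class CompilationLine {
    // Very rough sketch: extract the main fields from one line of
    // -XX:+PrintCompilation output. Only handles the common format
    // (timestamp id tier method (size) [deopt]); lines with attribute
    // characters or native wrappers would need extra handling.
    static String summarize(String line) {
        String[] parts = line.trim().split("\\s+");
        String timestamp = parts[0]; // ms since JVM start
        String id = parts[1];        // compilation_id
        String tier = parts[2];      // tiered_level
        String method = parts[3];    // ClassName::method
        boolean deopt = line.endsWith("made not entrant")
                || line.endsWith("made zombie");
        return method + " (tier " + tier + ", id " + id
                + ", t=" + timestamp + "ms, deopt=" + deopt + ")";
    }

    public static void main(String[] args) {
        System.out.println(summarize(
            "129 3 3 java.lang.String::isLatin1 (19 bytes) made not entrant"));
    }
}
```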
If you saw C1, C2, and deoptimization, and didn’t understand, don’t worry, I’ll explain what they are below!
There are actually two compilers built into the JVM, called C1 (also known as the Client Compiler) and C2 (also known as the Server Compiler).
The C1 compiler handles the first three levels of compilation, each progressively more complex than the last, and the C2 compiler is responsible for the fourth level.
The JVM decides what level of compilation to apply to a given block of code based on how often it runs and how complex or time-consuming it is. This is called “profiling” the code. For any method with a tier from 1 to 3, the code has been compiled by the C1 compiler, and the higher the number, the more “profiled” the code is.
Just to be clear, “profiling” in Java is the process of monitoring various JVM-level parameters like method execution, thread execution, object creation, and garbage collection.
So, if the code has been called enough times, it reaches level 4 and the C2 compiler is used instead.
When this happens, it means our code is even more optimized than when it was compiled by the C1 compiler. The JVM understands that this part of the code is used so much that it not only compiles it at level 4, but also puts that compiled code into the code cache.
Wait Julio, Code Cache?
Actually, it is really simple: the code cache is a special area of memory where the JVM stores compiled native code, because that is the quickest way for the code to be accessed and therefore executed.
Another common name for “level of compilation” is “compilation tier”, and the higher it is, the more optimized the compiled code should be.
So, you are probably wondering: why don’t the JVM compilers compile all code to level 4?
And the answer is not so simple. Basically, there is no free lunch when we are talking about resources; there is a tradeoff that we need to consider.
According to Mark Stoodley, the JIT aggressively speculates on the class hierarchy. In the Java language, calls are virtual by specification, which means they can be overridden by other classes. When you make a call to one of these methods, you don’t really know what the target is; it could be anything, even a class that hasn’t been loaded yet.
Note: the JIT can only make this kind of speculation because it is watching the Java program, so it knows a lot about things like loaded classes, executed methods, and profiling data.
Because of this dynamic nature of the JVM, if it compiles bytecode into native code too early, the compiler can be fooled into speculating that a call site has only one possible target. That is, the compiler will generate code pointing to a single object (target). That generated code will be right for a while, but when there are multiple targets (like multiple implementations of an interface), it becomes wrong (pointing to the “wrong” implementation, for example).
Knowing that, the JIT has to generate backup paths so it can deal with that situation when it happens. The problem is that we then have to wait for the wrong code to be recompiled before we get great performance again.
So, in other words, the JIT needs to re-optimize when a previous assumption turns out to be wrong. This is called deoptimization.
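Here is a tiny sketch of why virtual calls force this speculation (the types are made up for illustration). While only one implementation of an interface has ever been seen at a call site, the JIT may compile the call as if that were the only possible target; as soon as a second implementation shows up there, that assumption becomes invalid and the compiled code has to be thrown away:

```java
// Illustrative only: a virtual call site that starts out monomorphic.
interface Shape { double area(); }

class Circle implements Shape {
    public double area() { return Math.PI * 2 * 2; } // radius 2
}

class Square implements Shape {
    public double area() { return 3 * 3; } // side 3
}

public class SpeculationDemo {
    static double totalArea(Shape[] shapes) {
        double total = 0;
        // While only Circle instances ever reach this call site, the JIT
        // may speculate that s.area() always means Circle.area(). Once a
        // Square arrives, that speculation is wrong and the compiled code
        // must fall back and be deoptimized.
        for (Shape s : shapes) total += s.area();
        return total;
    }

    public static void main(String[] args) {
        System.out.println(totalArea(new Shape[] { new Circle() }));
        System.out.println(totalArea(new Shape[] { new Circle(), new Square() }));
    }
}
```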
I’ll explain deoptimization better in the next part (3) of this series.
Hope you are enjoying this JVM journey!
See you in the next article!