

GraalVM in Java

Introduction 

We are in an era where monolithic applications are finally being replaced by microservices. This paradigm shift has led developers to think differently about application logic as well as the well-established technologies behind it. Heavy application servers hosting sizable monolithic applications have been replaced by lightweight servers designed for running microservice applications. The adoption of microservices was accelerated even further by the rise of container runtimes such as Docker, Podman, and containerd.

Thanks to these emerging technologies, it has become relatively easy to build and deploy applications.

History of deploying applications

The figure above nicely depicts this evolution of application development. Nowadays developers focus on fast delivery, lightweight footprints, and performance-optimized applications. Microservice frameworks provide various ways to optimize applications. Containers offer a very convenient way to build and run applications, following the same philosophy as Java in the 90s with its motto "write once, run anywhere", but taking it to the next level by making it work for all kinds of applications. This breakthrough has led developers to search for ways to run Java applications more efficiently. Generally, two options have been considered.

The first is compiler and runtime optimizations and improvements to the JVM, work that has been going on for decades. The second option may sound more radical: what if we could get rid of the JVM itself?

We’ll look into both options with GraalVM, which promises to deliver on these aspects.

GraalVM image

Architecture

GraalVM is a new JDK distribution designed to accelerate the execution of applications written in JVM languages, while also allowing other languages to run on it. Its polyglot support enables multiple languages to be combined in a single application.

There are three runtime modes for GraalVM. These are:

  • JVM runtime
  • Native image
  • Polyglot using the Truffle framework

 

Let’s discuss these briefly.

 

JVM Runtime

Java applications run on the standard HotSpot VM with an advanced JIT (Just-In-Time) compiler written in Java. This brings new optimizations and performance gains compared to the standard Java distributions. The Java VM is essentially an interpreter for bytecode, translating it into machine code instruction by instruction, which has obvious performance drawbacks. The JIT compiler is an important concept that helps boost the performance of Java applications: unlike an interpreter, which translates the bytecode on the fly, a JIT compiler compiles the bytecode into machine code for subsequent executions, avoiding the re-translation each time the code is run.

HotSpot VM has a three-tiered execution system consisting of the interpreter, the quick compiler, and the optimizing compiler. Each tier represents a different trade-off between compilation delay and execution speed. Java code starts executing in the interpreter. When a method becomes warm (meaning it has been executed multiple times), it is enqueued for compilation by the quick compiler (C1, the client compiler), and execution switches to that compiled code once it is ready. If a method executing in this tier becomes hot (very frequently executed), it is enqueued for compilation by the optimizing compiler (C2, the server compiler), and execution continues in the C1-compiled code until the faster code is available. We won’t go into more detail on this topic for now.

For further reading on tiered compilation there is a nice article at https://www.baeldung.com/jvm-tiered-compilation.
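To see this tiering in action, here is a minimal sketch (not taken from the original post) of a program with one hot method. Running it with the standard HotSpot flag -XX:+PrintCompilation prints a line every time a method is compiled, including the tier it was compiled at, so you can watch sum() move from the interpreter to C1 and then to C2 as it warms up.

// HotLoop.java: a tiny program whose sum() method becomes hot.
// Run with:  java -XX:+PrintCompilation HotLoop
// In the output, sum() first shows up compiled at tier 3 (C1 with profiling)
// and, once it stays hot, again at tier 4 (C2).
public class HotLoop {

    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long result = 0;
        // Call sum() often enough for the JIT to consider it warm, then hot.
        for (int i = 0; i < 100_000; i++) {
            result += sum(10_000);
        }
        System.out.println(result);
    }
}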

Now that we know how code execution works, let’s see how GraalVM innovates on this. As you can see, performance really depends on how well optimized the JIT compiler is. In HotSpot the JIT compiler is written in C++, has seen very few changes over the years, and has become difficult to maintain. The JIT compiler optimizes code by leveraging a data structure in the form of a program dependence graph. GraalVM introduces a new JIT compiler written in Java that uses the same kind of data structure.

The following example illustrates this concept:

JIT compiler

The code above is translated to the graph below:

graalVM graph

 

 

 

By constructing such a graph, the JIT compiler can find the most optimized path, so that the resulting machine code becomes more efficient.
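As a hypothetical illustration (this is not the example from the screenshots above), consider the method below. Once the compiler has it in graph form, it can see that the Point object never leaves the method, so escape analysis can in principle remove the allocation entirely and keep the two int values in registers.

// Hypothetical example of code a graph-based JIT compiler can simplify.
public class GraphExample {

    static final class Point {
        final int x;
        final int y;

        Point(int x, int y) {
            this.x = x;
            this.y = y;
        }
    }

    static int distanceSquared(int x, int y) {
        // The Point instance never escapes distanceSquared(), so the compiler
        // can replace it with its two fields and drop the allocation.
        Point p = new Point(x, y);
        return p.x * p.x + p.y * p.y;
    }

    public static void main(String[] args) {
        System.out.println(distanceSquared(3, 4)); // prints 25
    }
}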

 

Native Image

The JIT compiler is a very nice improvement on top of the VM. By compiling the executed bytecode directly into machine code, we can make applications run faster. However, not everything is executed this way: with tiered execution, the executed code needs to be tracked and evaluated for compilation, which introduces overhead of its own.

This dynamic compilation can be turned into a static compilation, where in essence the compiler compiles the whole application at build time by doing static code analysis. This is called Ahead-of-Time (AOT) compilation. By applying AOT, the application is compiled directly to machine code and the VM is no longer needed at runtime. This has the following implications:

  • Reduced application size
  • No VM is required since everything is compiled into machine code
  • Faster executions and startup times

The following issues also need to be taken into account:

  • Long build times
  • Reflection can cause issues, since classes that are only reached through reflection are loaded at runtime and invisible to the static analysis, so they have to be registered explicitly (see the sketch below)
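As a minimal sketch of that reflection pitfall (a made-up example, not from the original post): the program below looks a class up by name at run time. On the JVM this just works, but in a native image the reflected class has to be registered at build time, for instance via a reflect-config.json file or a framework annotation such as Quarkus’ @RegisterForReflection, because the static analysis cannot see the reflective access.

import java.lang.reflect.Method;

// Works on the JVM; in a native image the Greeter class must be registered
// for reflection at build time, otherwise Class.forName() fails at run time.
public class ReflectionPitfall {

    public static class Greeter {
        public String greet() {
            return "hello from reflection";
        }
    }

    public static void main(String[] args) throws Exception {
        // In real applications the class name typically comes from
        // configuration, so the static analysis cannot predict it.
        String className = args.length > 0
                ? args[0]
                : ReflectionPitfall.class.getName() + "$Greeter";
        Class<?> clazz = Class.forName(className);
        Object instance = clazz.getDeclaredConstructor().newInstance();
        Method greet = clazz.getMethod("greet");
        System.out.println(greet.invoke(instance));
    }
}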
 

Polyglot with Truffle

The architecture allows us to run different kinds of languages through their own runtimes on top of GraalVM. This ultimately makes it possible to run different languages on one single VM, which also raises the question of whether the different languages could talk to each other; this is what we call polyglot applications.

This idea of developing polyglot applications may sound a bit strange at first, but it definitely has its advantages when we want an optimized application whose parts each focus on their own domain. For instance, when building a web application for facial recognition, we can build the web service in Java while the machine learning algorithms evaluate data in Python. This keeps the service layer minimalistic, focused only on exposing services and integration, while the Python modules focus purely on the data.

One could argue that we can achieve the same today with a service-oriented architecture in which both modules talk to each other by exposing services. That solution has two disadvantages though: we have to implement an additional service layer for the data analysis within the Python module, and there is extra communication overhead. For such a use case, polyglot could help reduce complexity and deliver better performance.
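As a small sketch of what this looks like in code (assuming a GraalVM distribution with the Python language component installed), the GraalVM polyglot API lets Java evaluate Python in the same process and call into it directly:

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

// Defines a Python function and calls it from Java in the same process.
// Requires GraalVM with the Python language installed.
public class PolyglotExample {

    public static void main(String[] args) {
        try (Context context = Context.create("python")) {
            // Evaluate some Python code in the embedded runtime.
            context.eval("python", "def square(x):\n    return x * x");

            // Look the Python function up from Java and invoke it.
            Value square = context.getBindings("python").getMember("square");
            System.out.println(square.execute(12).asInt()); // prints 144
        }
    }
}

The same Context API works for JavaScript, Ruby or R, so the Java side of a polyglot application stays the same regardless of the guest language.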

 

Performance

Now that we have an idea of what GraalVM promises, let’s have a look at the performance tests for the JIT compiler and Native image.

 

HotSpot vs Graal JIT compiler

To illustrate the difference in performance between the standard HotSpot and the Graal JIT compiler, we’ll execute a small program that concatenates spaces to a sentence. We’ll then measure the execution time in slices to illustrate the effect of compilation.
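The exact program is only shown in the screenshots below; as a rough reconstruction, the benchmark looks something like this, timing the same work in repeated slices so the effect of JIT warm-up becomes visible:

// Rough sketch of the benchmark idea: repeatedly concatenate spaces to a
// sentence and time each slice of iterations, so the speed-up after the JIT
// compiler kicks in shows up in the later slices.
public class ConcatBenchmark {

    public static void main(String[] args) {
        final int slices = 10;
        final int iterationsPerSlice = 1_000_000;
        String sentence = "GraalVM brings a JIT compiler written in Java";

        for (int slice = 0; slice < slices; slice++) {
            long start = System.nanoTime();
            long checksum = 0;
            for (int i = 0; i < iterationsPerSlice; i++) {
                // Concatenate spaces around the sentence and trim them again.
                checksum += (" " + sentence + " ").trim().length();
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("slice " + slice + ": " + elapsedMs + " ms (" + checksum + ")");
        }
    }
}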

HotSpot vs Graal JIT compiler

Execution in HotSpot:

Execution in HotSpot

Execution in Graal:

Execution in Graal

The total execution time in HotSpot seems to be better; however, when we look more closely we notice that the later slices in Graal take a lot less time than in HotSpot, due to its more optimized compilation. This means long-running applications would benefit enormously from the Graal JIT compiler.

 

Native Image using Quarkus

Building a native image speeds up startup time immensely. To illustrate this, we’ll build a native image for a small hello-world web application that exposes an API responding with “hello”. We’ll be using the “getting-started” application from the Quarkus quickstart repo: https://github.com/quarkusio/quarkus-quickstarts.git
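For reference, the endpoint in that quickstart is essentially a tiny JAX-RS resource along these lines (simplified here; depending on the Quarkus version the imports are javax.ws.rs or jakarta.ws.rs, and the exact greeting text may differ):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Simplified version of the getting-started resource:
// GET /hello returns a plain-text greeting.
@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "hello";
    }
}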

In order to build a native image, we’ll use Maven, running the build inside Docker, with the following command:

maven command docker

This takes roughly 10 minutes to build the native executable. After the build completes, we build a Docker image with the following command:

build docker image

Run the container:

run the container

Running the container shows a startup time of 0.022s. Let’s compare this to the same application running on the JVM:

JVM docker build

This results in an application startup time of 1.409s, which is much higher than that of the native image. This fast startup makes native images an ideal solution for executing jobs in clusters running in the cloud or on Kubernetes.

 

Conclusion

GraalVM is one of the biggest changes to Java since Java 8, maybe the most important one so far. It brings many new improvements and features to the Java world. The improved JIT compilation boosts performance for existing applications, while native images open up new possibilities for application development, such as small Java applications like microservices or jobs spinning up on the fly in the cloud. Another big feature is the introduction of polyglot applications, which make it easier to focus on integration while building application components in different languages tailored to specific tasks.


Yasin Koyuncu

Software Crafter

Experienced information technology consultant with a demonstrated history of working in the information technology and services industry. Development professional with a cum laude degree in Computer Science, focused on Software Engineering, from the University of Leuven.