<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Pierre Guimon</title>
    <description>The latest articles on Forem by Pierre Guimon (@pierregmn).</description>
    <link>https://forem.com/pierregmn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F908026%2F4513c1ba-f904-4444-94ba-5b2f14a7ba60.png</url>
      <title>Forem: Pierre Guimon</title>
      <link>https://forem.com/pierregmn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/pierregmn"/>
    <language>en</language>
    <item>
      <title>Migrating a Spring Boot application to Quarkus</title>
      <dc:creator>Pierre Guimon</dc:creator>
      <pubDate>Mon, 28 Nov 2022 13:36:58 +0000</pubDate>
      <link>https://forem.com/pierregmn/migrating-a-spring-boot-application-to-quarkus-5ap6</link>
      <guid>https://forem.com/pierregmn/migrating-a-spring-boot-application-to-quarkus-5ap6</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Java applications running in traditional Java Enterprise Edition environments are not well suited for cloud environments.&lt;/p&gt;

&lt;p&gt;Application server start-up times are quite high, usually above one minute, and the required memory footprint is large. Often, they also require complex cluster configuration.&lt;/p&gt;

&lt;p&gt;This is not compatible with the scale-up and scale-down model of cloud environments.&lt;/p&gt;

&lt;p&gt;A myriad of Java frameworks is available on the market.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://quarkus.io/" rel="noopener noreferrer"&gt;Quarkus&lt;/a&gt; is a Red-Hat java framework that does not require an application server, and whose goal is to support Kubernetes and Java Native Compilation using GraalVM.&lt;/p&gt;

&lt;p&gt;Quarkus allows you to reuse many existing Java libraries, offering specific extensions for native compilation.&lt;/p&gt;

&lt;p&gt;Applications built with Quarkus can start in a few seconds and, if compiled natively, have a very limited memory and disk footprint.&lt;/p&gt;

&lt;p&gt;If you have no idea what Quarkus is, I encourage you to read my &lt;a href="https://dev.to/pierregmn/quarkus-fundamentals-n77"&gt;Quarkus fundamentals&lt;/a&gt; post (a 15-minute read) on the subject.&lt;/p&gt;

&lt;p&gt;With Quarkus it is not possible to replace 100% of the features provided by Java Enterprise Edition application servers: EJB, JSP and other similar technologies will not be available to applications written for Quarkus.&lt;/p&gt;

&lt;p&gt;Migrating a Spring Boot application to Quarkus is not an immediate task, especially if targeting native compilation: many adaptations may be required.&lt;/p&gt;

&lt;p&gt;Although there are great guides out there explaining how to migrate a Spring Boot application to Quarkus, they do not really emphasize the approach for migrating a multi-service code base from Spring to Quarkus. &lt;br&gt;
Here are some examples: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developers.redhat.com/blog/2020/04/10/migrating-a-spring-boot-microservices-application-to-quarkus" rel="noopener noreferrer"&gt;https://developers.redhat.com/blog/2020/04/10/migrating-a-spring-boot-microservices-application-to-quarkus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dzone.com/articles/spring2quarkus-spring-boot-to-quarkus-migration" rel="noopener noreferrer"&gt;https://dzone.com/articles/spring2quarkus-spring-boot-to-quarkus-migration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article I will detail the approach for migrating a substantial Spring Boot code base (in other words, a monolith 😆) to Quarkus. &lt;/p&gt;

&lt;p&gt;I will also highlight some pitfalls that we ran into while migrating one of my company's service code bases to Quarkus.&lt;/p&gt;
&lt;h2&gt;
  
  
  Approach for migrating to Quarkus
&lt;/h2&gt;

&lt;p&gt;In this section I will explain how we progressed on the migration of one of my company's services to Quarkus.&lt;/p&gt;

&lt;p&gt;First of all, whatever the approach, I would recommend getting to know Quarkus by reading the post highlighted in the introduction.&lt;/p&gt;

&lt;p&gt;Once done, you should also play with the Quarkus Get Started guide on the official &lt;a href="https://quarkus.io/" rel="noopener noreferrer"&gt;website&lt;/a&gt;, so that you can get familiar with the packaging and build your first application with Quarkus in no more than an hour.&lt;/p&gt;
&lt;h3&gt;
  
  
  The hothead approach
&lt;/h3&gt;

&lt;p&gt;The first approach, which I like to call the hothead approach, consists in:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Adding the quarkus universe bom dependency to your service pom.xml file, following the guide: &lt;a href="https://quarkus.io/guides/maven-tooling#build-tool-maven" rel="noopener noreferrer"&gt;https://quarkus.io/guides/maven-tooling#build-tool-maven&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Building the service with the &lt;code&gt;mvn quarkus:dev&lt;/code&gt; command and lighting a candle, hoping that everything will work on the first try!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The build will of course generate tons of errors, most of them being related to dependency injection issues.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ERROR] Failed to execute goal io.quarkus:quarkus-maven-plugin:2.2.3.Final:build (default) on project webapp: Failed to build quarkus application: io.quarkus.builder.BuildException: Build failure: Build failed due to errors
[ERROR] [error]: Build step io.quarkus.arc.deployment.ArcProcessor#validate threw an exception: javax.enterprise.inject.spi.DeploymentException: Found 107 deployment problems:
[ERROR] [1] Unsatisfied dependency for type org.springframework.web.client.RestTemplate and qualifiers [@Default]
[ERROR] - java member: com.myapp.server_impl.ServerImpl#&amp;lt;init&amp;gt;()
[ERROR] - declared on CLASS bean [types=[com.myapp.server_impl.ServerImpl, java.lang.Object], qualifiers=[@Named(value = "serverImpl"), @Default, @Any], target=com.myapp.server_impl.ServerImpl]
[ERROR] [2] Unsatisfied dependency for type javax.ws.rs.ext.Provider and qualifiers [@Default]
[ERROR] - java member: com.myapp.server_impl.ServerImpl#&amp;lt;init&amp;gt;()
[ERROR] - declared on CLASS bean [types=[com.myapp.server_impl.ServerImpl, java.lang.Object], qualifiers=[@Named(value = "serverImpl"), @Default, @Any], target=com.myapp.server_impl.ServerImpl]
[ERROR] [3] Unsatisfied dependency for type java.util.concurrent.ExecutorService and qualifiers [@Default]
[ERROR] - java member: com.myapp.server_impl.ServerImpl#&amp;lt;init&amp;gt;()
[ERROR] - declared on CLASS bean [types=[com.myapp.server_impl.ServerImpl, java.lang.Object], qualifiers=[@Named(value = "serverImpl"), @Default, @Any], target=com.myapp.server_impl.ServerImpl]
...
...
hundreds of errors later
...
...
[ERROR] at io.quarkus.arc.processor.BeanDeployment.processErrors(BeanDeployment.java:1108)
[ERROR] at io.quarkus.arc.processor.BeanDeployment.init(BeanDeployment.java:265)
[ERROR] at io.quarkus.arc.processor.BeanProcessor.initialize(BeanProcessor.java:129)
[ERROR] at io.quarkus.arc.deployment.ArcProcessor.validate(ArcProcessor.java:418)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Did you really think it would work like that 😇 ?!&lt;br&gt;
Okay, let's take a step back and explain the basics.&lt;/p&gt;
&lt;h3&gt;
  
  
  The use-my-brain approach
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Dependency injection
&lt;/h4&gt;

&lt;p&gt;Quarkus is designed to work with the most widely used Java standards, frameworks and libraries, such as Eclipse MicroProfile, Apache Kafka, RESTEasy (JAX-RS), Hibernate ORM (JPA) and many more.&lt;/p&gt;

&lt;p&gt;The Quarkus programming model is based on another standard: the Contexts and Dependency Injection for Java 2.0 specification.&lt;/p&gt;

&lt;p&gt;If you are completely new to dependency injection, I encourage you to read Quarkus &lt;a href="https://quarkus.io/guides/cdi-reference" rel="noopener noreferrer"&gt;introduction&lt;/a&gt; to contexts and dependency injection.&lt;/p&gt;

&lt;p&gt;The first thing to know about Quarkus bean discovery and injection is that it won't scan classes from external modules.&lt;/p&gt;

&lt;p&gt;If you have a multi-module Maven project, like we do for the service we have been migrating, you will find that, by default, Quarkus does not discover classes in other modules.&lt;/p&gt;

&lt;p&gt;You have various ways to make Quarkus find your beans. They are listed here: &lt;a href="https://quarkus.io/guides/cdi-reference#bean_discovery" rel="noopener noreferrer"&gt;https://quarkus.io/guides/cdi-reference#bean_discovery&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here is an excerpt from the referenced link:&lt;/p&gt;

&lt;p&gt;"The bean archive is synthesized from:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The application classes,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dependencies that contain a beans.xml descriptor (content is ignored),&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dependencies that contain a Jandex index META-INF/jandex.idx,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dependencies referenced by quarkus.index-dependency in application.properties configuration file,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;And Quarkus integration code."&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you want an external module, or a third-party library that you do not control (meaning that you can't modify it), to be scanned by Quarkus, you should declare the dependency in the &lt;code&gt;application.properties&lt;/code&gt; configuration file.&lt;/p&gt;

&lt;p&gt;If you have control over the module/project, you can directly add an empty beans.xml file in the META-INF folder.&lt;/p&gt;
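&lt;p&gt;For illustration, a &lt;code&gt;quarkus.index-dependency&lt;/code&gt; declaration could look like this (the key and coordinates below are hypothetical placeholders):&lt;/p&gt;

```properties
# Ask Quarkus to index this third-party jar so its classes
# participate in bean discovery ("acme" is an arbitrary key).
quarkus.index-dependency.acme.group-id=org.acme
quarkus.index-dependency.acme.artifact-id=acme-lib
```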

&lt;p&gt;That said, you might want to clean your dependencies before getting your hands dirty: the smaller the code base you need to migrate, the better.&lt;/p&gt;

&lt;p&gt;We will come back to that point later.&lt;/p&gt;
&lt;h4&gt;
  
  
  Spring
&lt;/h4&gt;

&lt;p&gt;Let's focus now on Spring. In the service we have been migrating, developers have been using Spring intensively.&lt;/p&gt;

&lt;p&gt;The service grew over the years and Spring dependencies were added to the project. Spring dependency injection has been used here and there, instead of the standard &lt;a href="https://docs.jboss.org/cdi/spec/2.0/cdi-spec.html" rel="noopener noreferrer"&gt;CDI specification&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As an example, the &lt;code&gt;@Component&lt;/code&gt; Spring DI annotation might have been used instead of the &lt;code&gt;@Singleton&lt;/code&gt; CDI annotation. &lt;/p&gt;

&lt;p&gt;Another example would be the use of the &lt;code&gt;@Bean&lt;/code&gt; Spring DI annotation instead of the &lt;code&gt;@Produces&lt;/code&gt; CDI annotation.&lt;/p&gt;

&lt;p&gt;There are more examples, and you can find a conversion table (Spring DI annotation versus CDI) on the Quarkus website: &lt;a href="https://quarkus.io/guides/spring-di#conversion-table" rel="noopener noreferrer"&gt;https://quarkus.io/guides/spring-di#conversion-table&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So that the migration to Quarkus is not cumbersome, the Quarkus team came up with a set of extensions that will help you migrate Spring projects to Quarkus: spring-di, spring-web, spring-data-jpa, spring-data, spring-security, spring-cache, spring-scheduled, spring-boot-properties, spring-cloud-config-client.&lt;/p&gt;

&lt;p&gt;For instance, if you decide to use the spring-di Quarkus extension, a Spring DI processor will map Spring DI annotations to CDI annotations.&lt;/p&gt;

&lt;p&gt;That said, it is recommended to eventually migrate all your Spring beans to the CDI specification.&lt;/p&gt;
&lt;h2&gt;
  
  
  Migrating to Quarkus
&lt;/h2&gt;

&lt;p&gt;Following the above explanation of dependency injection and Spring, we came up with the following work plan, which can be applied to the migration of any Spring Boot based service to Quarkus.&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Dependencies analysis
&lt;/h3&gt;

&lt;p&gt;First of all, we want to analyze the dependencies that are required to build and run the service being migrated to the Quarkus framework.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Why ?&lt;/strong&gt; This step is fundamental to identify all the required external dependencies as well as internal dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How ?&lt;/strong&gt; We used the Class Dependency Analyzer (CDA) tool to meet this goal. You can find out how to use it on this &lt;a href="http://www.dependency-analyzer.org/" rel="noopener noreferrer"&gt;page&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can use your IDE's built-in dependency analyzer as well, but I found CDA really convenient to use, and you can also use it as a library in your project if you want to extend the tool's capabilities.&lt;/p&gt;
&lt;h3&gt;
  
  
  2. Maven modules cleaning
&lt;/h3&gt;

&lt;p&gt;Secondly, we want to clean all unwanted internal dependencies.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why ?&lt;/strong&gt; As said previously, we are migrating a monolith, and it is built with the Maven software management tool.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The code base consists of multiple Maven modules that are used in different services. &lt;/p&gt;

&lt;p&gt;Some code that is not used by the service we want to migrate is part of Maven modules that the service depends on.&lt;/p&gt;

&lt;p&gt;By cleaning all unwanted internal dependencies, through moves to new or other Maven modules, we eventually reduce the scope of the code base to be migrated to Quarkus.&lt;/p&gt;

&lt;p&gt;Following this principle, we have performed major cleaning in the code base to remove irrelevant and unwanted internal dependencies for our service.&lt;/p&gt;

&lt;p&gt;I highly encourage you to perform such cleaning prior to the migration. This preliminary step will save you time in the later steps.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How ?&lt;/strong&gt; The output of the Class Dependency Analyzer tool allows you to check all the classes that your service depends on, and then remove or move all unwanted classes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Concretely, this was performed by moving some classes that were not needed by the service to new Maven modules, or to existing Maven modules that the service does not depend on.&lt;/p&gt;

&lt;p&gt;We have also refactored some pieces of code: by splitting some classes, for instance, or by creating new classes to specialize their usage for the service we have been migrating to the Quarkus framework.&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Mocking
&lt;/h3&gt;

&lt;p&gt;The next step is to mock all the external dependencies that we highlighted in step 1, so that we can progress on the Quarkus migration of OUR code base first.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Why ?&lt;/strong&gt; By mocking all the external dependencies, we make sure to progress on our code base migration first and that we are not blocked by external dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How ?&lt;/strong&gt; Simply by implementing interfaces with mocked behavior. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For instance, in the service that we have been migrating to Quarkus, we use an external dependency interface to have access to a context. &lt;/p&gt;

&lt;p&gt;So that our application builds with Quarkus, we had to temporarily mock the context interface with a static mocked implementation.&lt;/p&gt;

&lt;p&gt;We implemented the interface and made sure that the bean follows the CDI specifications.&lt;/p&gt;
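&lt;p&gt;As a minimal sketch of what such a temporary mock can look like (all names here are hypothetical; the real interface came from the external dependency):&lt;/p&gt;

```java
// Sketch with hypothetical names: a temporary static mock of an
// external "context" interface, written as a plain bean so it can
// later be given a CDI scope (e.g. @ApplicationScoped) for injection.
import java.util.Map;

public class ContextMockSketch {

    // Stand-in for the interface provided by the external dependency.
    interface AppContext {
        String get(String key);
    }

    // Temporary mock: returns static values until the real
    // Quarkus-ready dependency is delivered and integrated.
    static class StaticAppContext implements AppContext {
        private static final Map<String, String> VALUES =
                Map.of("tenant", "mock-tenant", "region", "mock-region");

        @Override
        public String get(String key) {
            return VALUES.getOrDefault(key, "mock-default");
        }
    }

    public static void main(String[] args) {
        AppContext ctx = new StaticAppContext();
        System.out.println(ctx.get("tenant")); // mock-tenant
    }
}
```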
&lt;h3&gt;
  
  
  4. Internal/external dependencies teams support
&lt;/h3&gt;

&lt;p&gt;The previous steps should have highlighted all the dependencies that are handled inside and outside of your organization.&lt;/p&gt;

&lt;p&gt;Now you can ask the teams inside/outside your organization that own those dependencies for support, to unblock your progress.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How ?&lt;/strong&gt; Either you ask for the support of the teams inside/outside your organization or you contribute directly to the migration to Quarkus of your dependencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is done in an iterative way: you or an external team makes a dependency Quarkus-ready and delivers it, you integrate it into your code base and remove the mock associated with that external dependency, and you go on and on until there are no more external dependencies to be migrated to Quarkus.&lt;/p&gt;
&lt;h3&gt;
  
  
  5. Spring-DI Quarkus extension
&lt;/h3&gt;

&lt;p&gt;So that the migration to Quarkus is not cumbersome, the Quarkus team came up with a set of extensions that will help you migrate Spring projects to Quarkus.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Why ?&lt;/strong&gt; To comply with the CDI specification, we would need to migrate all our non-CDI compliant beans to CDI compliant beans, meaning we would need to migrate all Spring beans to CDI beans. This would be tedious work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How ?&lt;/strong&gt; To avoid this non-negligible task, we have been using the Quarkus spring-di extension, which does the job for you in the nominal cases.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find a conversion table (Spring DI annotation versus CDI) here: &lt;a href="https://quarkus.io/guides/spring-di#conversion-table" rel="noopener noreferrer"&gt;https://quarkus.io/guides/spring-di#conversion-table&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Using this extension is done simply by adding the following dependency to your service pom.xml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;dependencies&amp;gt;&lt;/span&gt;
    &lt;span class="c"&gt;&amp;lt;!-- Spring DI extension --&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;io.quarkus&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;quarkus-spring-di&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;scope&amp;gt;&lt;/span&gt;runtime&lt;span class="nt"&gt;&amp;lt;/scope&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependencies&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  6. Multi Maven modules handling
&lt;/h3&gt;

&lt;p&gt;Let's be honest, no monolith has only one Maven module.&lt;/p&gt;

&lt;p&gt;This step is about making Quarkus scan beans in all required maven modules that the service you are migrating to Quarkus depends on.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why ?&lt;/strong&gt; As explained in the dependency injection section, Quarkus won't scan classes from external modules.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want an external module, or a third-party library that you do not control (meaning that you can't modify it), to be scanned by Quarkus, you should declare the dependency in the application.properties configuration file.&lt;/p&gt;

&lt;p&gt;If you have control over the module/project, you can directly add an empty beans.xml file in the META-INF folder.&lt;/p&gt;

&lt;p&gt;This will ensure that Quarkus scans your beans.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How ?&lt;/strong&gt; Simply create an empty beans.xml file in the META-INF folder of each Maven module that you own, or declare a dependency in the application.properties configuration file for modules that you do not own.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find more details in here: &lt;a href="https://quarkus.io/guides/cdi-reference#bean_discovery" rel="noopener noreferrer"&gt;https://quarkus.io/guides/cdi-reference#bean_discovery&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fket8xm0e1okp0lyu34ni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fket8xm0e1okp0lyu34ni.png" alt="image" width="201" height="145"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bvtvz6s7wiuexm70mvt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bvtvz6s7wiuexm70mvt.png" alt="image" width="800" height="54"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  7. Migrate your code base
&lt;/h3&gt;

&lt;p&gt;This step focuses on migrating non-compliant code to comply with the Quarkus framework.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why ?&lt;/strong&gt; So far, we mocked external dependencies to progress on the migration of our code base and used the spring-di Quarkus extension to ease our migration, but some pieces of code must still be migrated to comply with Quarkus standards before your service builds.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Indeed, some pieces of software cannot be handled by the extensions provided by the Quarkus team and must be migrated, and other pieces of code do not follow the CDI specification and must be migrated as well.&lt;/p&gt;

&lt;p&gt;This step really depends on your software.&lt;/p&gt;

&lt;p&gt;We will review later on the recurrent errors we have been facing while migrating the service to Quarkus.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How ?&lt;/strong&gt; Comply with CDI specifications and migrate some Spring dependencies that cannot be handled by Quarkus extensions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Example.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the service code base we have migrated, we were using &lt;a href="https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/concurrent/ThreadPoolTaskExecutor.html" rel="noopener noreferrer"&gt;ThreadPoolTaskExecutor&lt;/a&gt;, a Java bean that allows configuring a standard Java &lt;a href="https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/ThreadPoolExecutor.html" rel="noopener noreferrer"&gt;ThreadPoolExecutor&lt;/a&gt; in bean style (through its corePoolSize, maxPoolSize, keepAliveSeconds and queueCapacity properties).&lt;/p&gt;

&lt;p&gt;This class is also well suited for management and monitoring (e.g. through JMX), providing several useful attributes: corePoolSize, maxPoolSize, keepAliveSeconds (all supporting updates at runtime); poolSize, activeCount (for introspection only).&lt;/p&gt;
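&lt;p&gt;For reference, the same pool configuration can be expressed directly against the JDK class; a minimal sketch (the values below are arbitrary):&lt;/p&gt;

```java
// Sketch: configuring a plain JDK ThreadPoolExecutor with the same
// properties that Spring's ThreadPoolTaskExecutor exposes in bean style.
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecutorConfigSketch {

    public static ThreadPoolExecutor build() {
        int corePoolSize = 4;        // threads kept alive when idle
        int maxPoolSize = 8;         // upper bound under load
        long keepAliveSeconds = 60;  // idle time before extra threads die
        int queueCapacity = 100;     // bounded work queue

        return new ThreadPoolExecutor(
                corePoolSize, maxPoolSize,
                keepAliveSeconds, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(queueCapacity));
    }

    public static void main(String[] args) {
        ThreadPoolExecutor executor = build();
        executor.submit(() -> System.out.println("task ran"));
        executor.shutdown();
    }
}
```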

&lt;p&gt;This class is part of the spring-context library.&lt;/p&gt;

&lt;p&gt;Having a quick look at the spring-di Quarkus extension &lt;a href="https://github.com/quarkusio/quarkus/blob/1.13/extensions/spring-di/runtime/pom.xml#L40-L44" rel="noopener noreferrer"&gt;pom.xml&lt;/a&gt;, you can find out quite easily that the spring-context dependency is excluded, so it is not accessible to us as end users.&lt;/p&gt;

&lt;p&gt;spring-context is excluded from the dependencies to filter its classes and keep only the ones that are necessary in the &lt;a href="https://github.com/quarkusio/quarkus-spring-api/blob/main/quarkus-spring-context-api/pom.xml" rel="noopener noreferrer"&gt;quarkus-spring-context-api&lt;/a&gt; dependency.&lt;/p&gt;

&lt;p&gt;This means that we can't use, at the same time, the spring-di Quarkus extension (which is definitely a must-have for migrating a monolith, since we do not want to migrate all our beans to the CDI specification in the first place) and the Spring ThreadPoolTaskExecutor.&lt;/p&gt;

&lt;p&gt;To mitigate this issue, we migrated our Spring ThreadPoolTaskExecutor to the &lt;a href="https://download.eclipse.org/microprofile/microprofile-context-propagation-1.0/apidocs/org/eclipse/microprofile/context/ManagedExecutor.html" rel="noopener noreferrer"&gt;ManagedExecutor&lt;/a&gt; interface of the Eclipse MicroProfile library, a standard supported by Quarkus.&lt;/p&gt;

&lt;p&gt;This example is particular to this service's code, since not everyone uses the Spring ThreadPoolTaskExecutor.&lt;/p&gt;

&lt;p&gt;In the recurrent errors and tips section, we will go through more common errors that you will most likely face while migrating to the Quarkus framework.&lt;/p&gt;
&lt;h3&gt;
  
  
  8. Optimize software to embrace GraalVM idioms
&lt;/h3&gt;

&lt;p&gt;This step is an optimization step that you should perform to embrace GraalVM idioms: boot faster, deliver smaller packages.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why ?&lt;/strong&gt; GraalVM idioms require changing the way frameworks work, not so much at runtime as at startup time. Most of the dynamicity that a framework brings actually comes at startup time, and this is what is shifted to build time with Quarkus.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Quarkus could also be described as a framework that makes frameworks start at build time.&lt;/p&gt;

&lt;p&gt;At startup time, a framework (like Hibernate or Spring, for instance) usually does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Parse config files (e.g.: persistence.xml file)&lt;/li&gt;
&lt;li&gt;Scan the classpath and classes for annotations (e.g.: &lt;code&gt;@Entity&lt;/code&gt;, &lt;code&gt;@Bean&lt;/code&gt;, etc.), getters or other metadata&lt;/li&gt;
&lt;li&gt;Build metamodel objects from all the information above, on which the framework will operate at runtime. For instance, Hibernate doesn't keep .xml files in memory but builds an internal model, and it is this model that is used at runtime to save entities, etc.&lt;/li&gt;
&lt;li&gt;Prepare reflection (get references to method and field objects to be able to perform invocations) and build proxies&lt;/li&gt;
&lt;li&gt;Start and open IO, threads, etc... (e.g.: database connection, etc...)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Conceptually, when you look at those steps, they could easily be done at build time instead of at startup time.&lt;/p&gt;

&lt;p&gt;Everything prior to the last step, and even some parts of the startup itself, can be done at build time.&lt;/p&gt;

&lt;p&gt;This is what Quarkus does: it takes a framework like Hibernate and makes it work so that as many steps as possible are performed at build time.&lt;/p&gt;

&lt;p&gt;On the following schema, you can see a typical Java framework at the top, where most of the work is performed at runtime (configuration loading, classpath scanning, model creation, start of the management layer), whereas at the bottom you can see Quarkus, where most of the work is performed at build time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0oiuj18yii9vadnt85q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0oiuj18yii9vadnt85q.png" alt="image" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That said, you might want to embrace this approach and make sure that all possible actions that can be performed at build time are taken out of the runtime and moved to build time.&lt;/p&gt;

&lt;p&gt;Let's explain those concepts with a concrete example that we faced while migrating our service to Quarkus.&lt;/p&gt;

&lt;p&gt;Once we were able to package our Quarkus application exposing our service, we started it and sent a first message to it.&lt;/p&gt;

&lt;p&gt;The first query took a long time to be processed, whereas the second query was much faster (10x faster 😮).&lt;/p&gt;

&lt;p&gt;We had to investigate why the first query took so long to process.&lt;/p&gt;

&lt;p&gt;Using the &lt;a href="https://github.com/jvm-profiling-tools/async-profiler" rel="noopener noreferrer"&gt;Async Profiler&lt;/a&gt;, we were able to build flame graphs for the first and second queries to picture the differences between the two transactions' execution paths.&lt;/p&gt;

&lt;p&gt;In the first flame graph, we saw that most of the transaction time was spent initializing a JAXB context responsible for marshalling/unmarshalling a context from the input query.&lt;/p&gt;

&lt;p&gt;This operation could be moved to build time instead of being done at startup time, since all the required information is present at build time.&lt;/p&gt;
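&lt;p&gt;The idea can be sketched in plain Java: instead of building the expensive context lazily on the first request, build it eagerly, once, when the class is initialized (and, with Quarkus, potentially at build time). All names here are hypothetical:&lt;/p&gt;

```java
// Sketch: moving a one-time expensive initialization (here a stand-in
// for building a JAXB context) out of the first-request path by doing
// it eagerly, once, when the class is loaded.
public class EagerInitSketch {

    // Eagerly built at class-initialization time; with Quarkus the
    // same idea can be pushed further, to build time.
    private static final Object CONTEXT = createExpensiveContext();

    private static Object createExpensiveContext() {
        // In the real service this was the JAXB context creation that
        // consumed most of the first transaction's time.
        return new Object();
    }

    public static Object context() {
        return CONTEXT; // every request, including the first, is warm
    }

    public static void main(String[] args) {
        // Both calls return the same pre-built instance.
        System.out.println(context() == context()); // true
    }
}
```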

&lt;p&gt;This is just one example, but I'm positive that in your code base you have some operations that could be moved from runtime to build time too!&lt;/p&gt;
&lt;h2&gt;
  
  
  Recurrent errors and tips
&lt;/h2&gt;

&lt;p&gt;In this section we will highlight some common errors that you might encounter while migrating a service to Quarkus framework, and tips to solve them.&lt;/p&gt;
&lt;h3&gt;
  
  
  Package-private
&lt;/h3&gt;

&lt;p&gt;You will see from time to time the following info message while building your Quarkus application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[INFO] [io.quarkus.arc.processor.BeanProcessor] Found unrecommended usage of private members (use package-private instead) in application beans:
    - @Inject field com.myapp.service.MyService#someBean
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a property is package-private, Quarkus can inject it directly without requiring any reflection to come into play.&lt;/p&gt;

&lt;p&gt;That is why Quarkus recommends package-private members for injection: it tries to avoid reflection as much as possible, since less reflection means better performance, which is something Quarkus strives to achieve.&lt;/p&gt;

&lt;p&gt;Quarkus is using GraalVM to build a native executable. One of the limitations of GraalVM is the usage of reflection. Reflective operations are supported but all relevant members must be registered for reflection explicitly. Those registrations result in a bigger native executable.&lt;/p&gt;

&lt;p&gt;And if Quarkus DI needs to access a private member, it has to use reflection. That’s why Quarkus users are encouraged not to use private members in their beans. This applies to injected fields, constructors and initializers, observer methods, producer methods and fields, disposers and interceptor methods.&lt;/p&gt;
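&lt;p&gt;A small plain-Java sketch illustrates the point: code in another class of the same package can read a package-private field directly, while a private field forces it through the reflection API and &lt;code&gt;setAccessible(true)&lt;/code&gt;, exactly the kind of access that must be registered for a GraalVM native image:&lt;/p&gt;

```java
// Sketch: why private members force reflection. A container cannot
// read a private field directly; it must go through the reflection
// API with an accessibility override, which GraalVM requires to be
// registered explicitly in a native image.
import java.lang.reflect.Field;

public class PrivateAccessSketch {

    static class Bean {
        private String secret = "hidden"; // private: reflection needed
        String visible = "direct";        // package-private: direct read
    }

    // What a DI container effectively has to do for a private member.
    static Object readPrivate(Object target, String fieldName) {
        try {
            Field f = target.getClass().getDeclaredField(fieldName);
            f.setAccessible(true); // accessibility override
            return f.get(target);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Bean bean = new Bean();
        System.out.println(bean.visible);                // direct
        System.out.println(readPrivate(bean, "secret")); // hidden
    }
}
```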

&lt;h3&gt;
  
  
  Bean list injection
&lt;/h3&gt;

&lt;p&gt;Bean list injection works perfectly well with Spring:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Inject&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;PaymentProcessor&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;paymentProcessor&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;but it is not part of the standard CDI specification.&lt;/p&gt;

&lt;p&gt;In certain situations, injection is not the most convenient way to obtain a contextual reference. For example, it may not be used when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the bean type or qualifiers vary dynamically at runtime, or&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;depending upon the deployment, there may be no bean which satisfies the type and qualifiers, or&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;we would like to iterate over all beans of a certain type.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In these situations, an instance of the &lt;code&gt;javax.enterprise.inject.Instance&lt;/code&gt; interface may be injected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Inject&lt;/span&gt; &lt;span class="nc"&gt;Instance&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;PaymentProcessor&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;paymentProcessor&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more details you can check out the CDI specification for the &lt;a href="https://docs.jboss.org/cdi/spec/2.0/cdi-spec.html#dynamic_lookup" rel="noopener noreferrer"&gt;Instance interface&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That is to say, you will have to migrate your Spring list-of-beans injections to a CDI-compliant solution.&lt;/p&gt;

&lt;p&gt;Usually you will use a producer pattern to produce those beans.&lt;/p&gt;
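&lt;p&gt;In Quarkus, such a producer is typically a &lt;code&gt;@Produces&lt;/code&gt; method that iterates an injected &lt;code&gt;Instance&amp;lt;PaymentProcessor&amp;gt;&lt;/code&gt;. Since &lt;code&gt;Instance&lt;/code&gt; extends &lt;code&gt;Iterable&lt;/code&gt;, the core of that producer can be sketched in plain Java (the names below are hypothetical, and a stock &lt;code&gt;Iterable&lt;/code&gt; stands in for the CDI &lt;code&gt;Instance&lt;/code&gt;):&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.List;

public class PaymentProcessorProducer {

    // Hypothetical bean contract the application wants to collect.
    interface PaymentProcessor {
        String name();
    }

    // In Quarkus this method would be annotated with @Produces and receive an
    // Instance<PaymentProcessor>; Instance extends Iterable, so iterating it
    // yields one contextual reference per resolved bean.
    static List<PaymentProcessor> allProcessors(Iterable<PaymentProcessor> resolved) {
        List<PaymentProcessor> processors = new ArrayList<>();
        for (PaymentProcessor p : resolved) {
            processors.add(p);
        }
        return processors;
    }
}
```

&lt;p&gt;Client beans can then inject the produced &lt;code&gt;List&amp;lt;PaymentProcessor&amp;gt;&lt;/code&gt; like any other bean, since a parameterized type without wildcards is a legal bean type.&lt;/p&gt;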

&lt;h3&gt;
  
  
  Unused beans
&lt;/h3&gt;

&lt;p&gt;This particular point echoes the Quarkus documentation: &lt;a href="https://quarkus.io/guides/cdi-reference#remove_unused_beans" rel="noopener noreferrer"&gt;https://quarkus.io/guides/cdi-reference#remove_unused_beans&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some of our beans were being removed at build time because Quarkus considered them unused.&lt;/p&gt;

&lt;p&gt;For example, the following Spring bean:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Component&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PaymentMapper&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;Mapper&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Payment&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
&lt;span class="o"&gt;...&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which extends from:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;abstract&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Mapper&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;T&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

  &lt;span class="nd"&gt;@Inject&lt;/span&gt;
  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;MapperFactory&lt;/span&gt; &lt;span class="n"&gt;mapperFactory&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

  &lt;span class="nd"&gt;@PostConstruct&lt;/span&gt;
  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;register&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;mapperFactory&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;type_of_the_class&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;was registered in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Named&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MapperFactory&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;static&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;Map&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Class&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Mapper&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;mappers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;HashMap&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;();&lt;/span&gt;

  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;register&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Class&lt;/span&gt; &lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Mapper&lt;/span&gt; &lt;span class="n"&gt;mapper&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;mappers&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;put&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mapper&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The bean PaymentMapper was considered unused because it was not referenced anywhere in the code apart from its definition.&lt;/p&gt;

&lt;p&gt;Unfortunately, it was actually used: it registered itself with the MapperFactory through a &lt;code&gt;@PostConstruct&lt;/code&gt; method call.&lt;/p&gt;

&lt;p&gt;The static mappers map ended up always empty, because the Mapper beans were removed as unused.&lt;/p&gt;

&lt;p&gt;For this case, we had to change the code so that the MapperFactory registers a list of beans implementing the same interface, which in any case makes much more sense.&lt;/p&gt;
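&lt;p&gt;Here is a minimal, simplified sketch of that fix (hypothetical names): instead of each mapper pushing itself into the factory from its &lt;code&gt;@PostConstruct&lt;/code&gt; callback, the factory pulls in the list of mappers it is given (in Quarkus, that list could come from an injected &lt;code&gt;Instance&lt;/code&gt; or a producer), so every Mapper bean is genuinely referenced:&lt;/p&gt;

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MapperFactory {

    // Simplified contract: each mapper declares the type it handles.
    interface Mapper {
        Class<?> type();
    }

    private final Map<Class<?>, Mapper> mappers = new HashMap<>();

    // The factory pulls the mappers in at construction, instead of each
    // mapper pushing itself in from @PostConstruct, so no bean is reachable
    // only through its own lifecycle callback.
    public MapperFactory(List<Mapper> allMappers) {
        for (Mapper m : allMappers) {
            mappers.put(m.type(), m);
        }
    }

    public Mapper forType(Class<?> type) {
        return mappers.get(type);
    }
}
```

&lt;p&gt;With this shape, the Mapper beans are reachable through the factory's injection point, so Quarkus no longer prunes them at build time.&lt;/p&gt;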

&lt;h3&gt;
  
  
  Legal bean type
&lt;/h3&gt;

&lt;p&gt;It is clearly written in the &lt;a href="https://docs.jboss.org/cdi/spec/2.0/cdi-spec.html#legal_bean_types" rel="noopener noreferrer"&gt;CDI specifications&lt;/a&gt; that: A parameterized type that contains a wildcard type parameter is not a legal bean type.&lt;/p&gt;

&lt;p&gt;I have made a small &lt;a href="https://github.com/pierregmn/quarkus_cdi_parameterized_bean_inject_reproducer" rel="noopener noreferrer"&gt;reproducer&lt;/a&gt; on GitHub that shows the injection failure for a parameterized bean containing a wildcard type parameter.&lt;/p&gt;

&lt;p&gt;Since such types are not legal bean types, Quarkus simply ignores them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Provider no-arg constructor
&lt;/h3&gt;

&lt;p&gt;In our code base we are using &lt;code&gt;@Provider&lt;/code&gt; classes to decode inputs or to encode outputs.&lt;/p&gt;

&lt;p&gt;These providers implement &lt;a href="https://docs.oracle.com/javaee/7/api/javax/ws/rs/ext/ReaderInterceptor.html" rel="noopener noreferrer"&gt;ReaderInterceptor&lt;/a&gt;/&lt;a href="https://docs.oracle.com/javaee/7/api/javax/ws/rs/ext/WriterInterceptor.html" rel="noopener noreferrer"&gt;WriterInterceptor&lt;/a&gt; interfaces.&lt;/p&gt;

&lt;p&gt;When the quarkus-resteasy library comes into play, it warns us at build time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WARN  [io.qua.res.com.dep.ResteasyCommonProcessor] (build-9) Classes annotated with @Provider should have a single, no-argument constructor, otherwise dependency injection won't work properly. Offending class is com.myapp.interceptor.BaseReaderInterceptor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The rule is the following: classes annotated with &lt;code&gt;@Provider&lt;/code&gt; must have a single, no-argument constructor, and they must be public.&lt;/p&gt;

&lt;p&gt;For instance, this piece of code fails the build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Provider&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BaseReaderInterceptor&lt;/span&gt; &lt;span class="kd"&gt;implements&lt;/span&gt; &lt;span class="nc"&gt;ReaderInterceptor&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;StatsCollector&lt;/span&gt; &lt;span class="n"&gt;statsCollector&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

  &lt;span class="nd"&gt;@Inject&lt;/span&gt;
  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;BaseReaderInterceptor&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;StatsCollector&lt;/span&gt; &lt;span class="n"&gt;statsCollector&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;statsCollector&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;statsCollector&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;...&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Whereas this one builds fine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Provider&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BaseReaderInterceptor&lt;/span&gt; &lt;span class="kd"&gt;implements&lt;/span&gt; &lt;span class="nc"&gt;ReaderInterceptor&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

  &lt;span class="c1"&gt;// Injection by constructor makes REST-EASY unable to initialize the ReaderInterceptor...!!!!&lt;/span&gt;
  &lt;span class="c1"&gt;// Hence we inject at member level&lt;/span&gt;
  &lt;span class="nd"&gt;@Inject&lt;/span&gt;
  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;StatsCollector&lt;/span&gt; &lt;span class="n"&gt;statsCollector&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

  &lt;span class="o"&gt;...&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Emmanuel Bernard &lt;a href="https://www.youtube.com/watch?v=SQDR34KoC-8" rel="noopener noreferrer"&gt;talk&lt;/a&gt; on Quarkus DEVOXX video: &lt;strong&gt;&lt;em&gt;Quarkus why, how and what&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Oleg Selaje &amp;amp; Thomas Wuerthinger &lt;a href="https://www.youtube.com/watch?v=ANN9rxYo5Hg" rel="noopener noreferrer"&gt;talk&lt;/a&gt; on GraalVM DEVOXX video: &lt;strong&gt;&lt;em&gt;Everything you need to know about GraalVM&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;For FR speakers: Emmanuel Bernard &amp;amp; Clément Escoffier &lt;a href="https://www.youtube.com/watch?v=S05WsHJZsYk" rel="noopener noreferrer"&gt;talk&lt;/a&gt; on using Quarkus with GraalVM DEVOXX video: &lt;strong&gt;&lt;em&gt;Quarkus : Comment faire une appli Java Cloud Native avec Graal VM&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>writing</category>
      <category>programming</category>
    </item>
    <item>
      <title>Kafka consumer rebalancing - impact on Kafka Streams consumers performances</title>
      <dc:creator>Pierre Guimon</dc:creator>
      <pubDate>Tue, 25 Oct 2022 15:09:39 +0000</pubDate>
      <link>https://forem.com/pierregmn/kafka-rebalancing-impact-on-kafka-streams-consumers-performances-12dn</link>
      <guid>https://forem.com/pierregmn/kafka-rebalancing-impact-on-kafka-streams-consumers-performances-12dn</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Kafka consumer rebalancing is part of the lifecycle of any Kafka consumer group.&lt;/p&gt;

&lt;p&gt;We will first define what Kafka consumer rebalancing is.&lt;/p&gt;

&lt;p&gt;Then, we will see how it can impact the performance of Kafka Streams consumers in real applications, and how we can mitigate that impact.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Kafka consumer rebalancing?
&lt;/h2&gt;

&lt;p&gt;Kafka consumers use the concept of &lt;a href="https://docs.confluent.io/platform/current/clients/consumer.html#consumer-groups" rel="noopener noreferrer"&gt;consumer groups&lt;/a&gt; to allow a pool of processes to divide the work of consuming and processing records.&lt;/p&gt;

&lt;p&gt;Each member of a consumer group consumes from a distinct set of the Kafka topic's partitions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fog89mlqpgatvrf651qpt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fog89mlqpgatvrf651qpt.png" alt="Multiple consumer groups subscribed to a topic"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Membership in a consumer group is maintained dynamically: if a process fails, the partitions assigned to it will be reassigned to other consumers in the same group. Similarly, if a new consumer joins the group, partitions will be moved from existing consumers to the new one. This is known as rebalancing the group.&lt;/p&gt;

&lt;p&gt;Group rebalance requires two actors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On server side: each group is managed by only one Kafka broker that we call a &lt;em&gt;group coordinator&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;On client side: one of the consumers is designated as the &lt;em&gt;consumer leader/group leader&lt;/em&gt; and computes the assignment using the implementation of the interface:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;org.apache.kafka.clients.consumer.ConsumerPartitionAssignor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;defined through the consumer configuration property:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;partition.assignment.strategy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can find the existing partitions assignment strategies in &lt;a href="https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#consumerconfigs_partition.assignment.strategy" rel="noopener noreferrer"&gt;Kafka consumer config properties&lt;/a&gt;.&lt;/p&gt;
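&lt;p&gt;For illustration, selecting an assignor on a plain consumer is a one-line configuration entry (the class below is one of the built-in strategies):&lt;/p&gt;

```properties
# Consumer configuration: the assignor(s) this group member supports.
# RangeAssignor is the historical default for plain Kafka consumers.
partition.assignment.strategy=org.apache.kafka.clients.consumer.RangeAssignor
```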

&lt;p&gt;Prior to Kafka 2.4, the default consumer assignor was &lt;em&gt;RangeAssignor&lt;/em&gt;, with &lt;em&gt;RoundRobinAssignor&lt;/em&gt; as a common alternative.&lt;/p&gt;

&lt;p&gt;When a new consumer instance is deployed, the group coordinator notifies the other members of the same group, through the rebalance protocol, to release their resources (commit offsets and release the assigned partitions), and the allocation of resources (the topic's partitions) is recalculated across all group members (the old ones plus the new one in this case).&lt;br&gt;
The mechanism is the same when the group coordinator detects the loss of one of the members (e.g. an instance crash, meaning no heartbeats received for &lt;code&gt;session.timeout.ms&lt;/code&gt;, or a leave-group signal when a member exceeds &lt;code&gt;max.poll.interval.ms&lt;/code&gt; while processing records).&lt;/p&gt;

&lt;p&gt;Below is an example of a rebalance when a third consumer instance (consumer 3) joins a consumer group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32j8o6dkrqwrgmuwtcum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32j8o6dkrqwrgmuwtcum.png" alt="Kafka default rebalancing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The challenging step in this rebalancing is the &lt;br&gt;
&lt;em&gt;Synchronization Barrier&lt;/em&gt;: at this step, all resources (the topic's partitions) must be released. Each instance does this before sending its &lt;code&gt;JoinGroup&lt;/code&gt; request.&lt;/p&gt;

&lt;p&gt;So until &lt;code&gt;SyncResponse&lt;/code&gt; is received, no data will be processed by the consumers, and, as a result, processing of events from a topic happens with some delay. This is what we call a stop-the-world operation.&lt;/p&gt;

&lt;p&gt;This is the most secure way to assign partitions. Some business cases, like offline/asynchronous event processing, can tolerate such rebalancing, but you can imagine that it doesn't fly for real-time event processing.&lt;/p&gt;

&lt;p&gt;Now let's imagine that we are running a Kafka Streams application on a Kubernetes cluster. There are several use cases where rebalancing would occur:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Kubernetes node eviction&lt;/em&gt;: a node is being evicted from the Kubernetes cluster, any instances of our application running on this particular node would need to be restarted on another node of the cluster. This would induce &lt;code&gt;JoinGroup&lt;/code&gt; requests from the newly deployed instances of the application, ultimately resulting in rebalancing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Kubernetes pod eviction&lt;/em&gt;: an applicative pod is being evicted from a node of the Kubernetes cluster. Same as previously, this would result in rebalancing for the newly created applicative instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Application rolling upgrade&lt;/em&gt;: Similar to what might happen unexpectedly with a failure, an intentional application rolling upgrade could be triggered for a software update for instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Application scaling up/down&lt;/em&gt;: In case the Kafka Streams application needs to scale up or down, a rebalance would occur to accommodate the added or removed consumers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Are there any alternative ways to rebalance that do not suffer from the stop-the-world effect?&lt;/p&gt;

&lt;h2&gt;
  
  
  Incremental cooperative rebalancing
&lt;/h2&gt;

&lt;p&gt;These reasons, amongst others, motivated the need for a more robust rebalancing protocol.&lt;br&gt;
Kafka 2.4 introduced the incremental cooperative rebalancing protocol, whose main aim is to avoid stopping the world when rebalancing.&lt;br&gt;
It uses the &lt;em&gt;StickyAssignor&lt;/em&gt; mechanism: it preserves, as much as possible, the same partitions for each consumer instance across rebalances.&lt;br&gt;
This protocol releases only the topic's partitions that will be processed by another consumer instance, and not the others, using two rebalance phases (two consecutive rebalances):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The aim of the first rebalance is to revoke the partitions that will move from one instance to another. The consumer leader computes the new assignment and sends each member only its current partitions minus those to revoke.&lt;/li&gt;
&lt;li&gt;On the second rebalance, consumers send a second &lt;code&gt;JoinGroup&lt;/code&gt; request, and the revoked (now unassigned) partitions are assigned.&lt;/li&gt;
&lt;/ul&gt;
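&lt;p&gt;For plain consumers, opting into this protocol is again a configuration entry (Kafka Streams applications do not set it directly: they use their own &lt;em&gt;StreamsPartitionAssignor&lt;/em&gt; internally):&lt;/p&gt;

```properties
# Opt a plain consumer group into incremental cooperative rebalancing
partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor
```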

&lt;p&gt;Below is an example of incremental cooperative rebalancing when a third consumer instance (consumer 3) joins a consumer group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15l7birxhypkl9e3fs0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15l7birxhypkl9e3fs0h.png" alt="Kafka incremental cooperative rebalancing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;More details around the motivations for implementing the incremental cooperative rebalancing are available on this &lt;a href="https://cwiki.apache.org/confluence/display/KAFKA/Incremental+Cooperative+Rebalancing:+Support+and+Policies" rel="noopener noreferrer"&gt;Kafka confluence page&lt;/a&gt;. You can find the implementation details on this other &lt;a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-429:+Kafka+Consumer+Incremental+Rebalance+Protocol#KIP429:KafkaConsumerIncrementalRebalanceProtocol-Consumer" rel="noopener noreferrer"&gt;Kafka confluence page&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Impact of rebalancing on performances
&lt;/h2&gt;

&lt;p&gt;Now that we have described what Kafka consumer rebalancing is, let's see how it impacts performance on a Kafka Streams based platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test protocol
&lt;/h3&gt;

&lt;p&gt;Let's consider the following simple example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rshxnlcdkppqd3x75ed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rshxnlcdkppqd3x75ed.png" alt="Kafka Streams microservices"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have a first microservice with 3 applicative instances, that is deployed on a Kubernetes cluster. This service consumes events from a single topic #1 using Kafka Streams 3.1 version. This topic contains 48 partitions and there are 3600 events per second published on it. &lt;br&gt;
On each of the 3 instances we use the Kafka Streams configuration property:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;num.stream.threads = 16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Hence, on a single app instance we process events from 16 partitions with 16 Kafka Streams threads.&lt;br&gt;
This microservice only enriches the records it processes with additional information, so the number of events stays the same in the second topic.&lt;br&gt;
Processing a single event takes around 1 millisecond, and the stream is stateless.&lt;/p&gt;

&lt;p&gt;We plug in a second microservice with 4 applicative instances, each consuming from a single topic #2 of 64 partitions using Kafka Streams 3.1. We use the same Kafka Streams configuration property, and, as a result, each instance processes events from 16 partitions using 16 Kafka Streams threads. It also receives 3600 events per second, coming from the processing done by the first microservice.&lt;br&gt;
Processing a single event takes around 3.5 milliseconds, and the stream is stateless.&lt;/p&gt;

&lt;p&gt;We are using Kafka Streams 3.1 version and by default the incremental cooperative rebalancing is used.&lt;/p&gt;

&lt;p&gt;We will be performing a rolling update of the first microservice, simulating a software update, to see the impact on the platform while injecting 3600 events per second on topic #1 for 30 minutes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitoring
&lt;/h3&gt;

&lt;p&gt;The microservices are monitored via &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; and they expose Kafka Streams metrics among other metrics.&lt;br&gt;
Here are the metrics used to monitor the impact of the Kafka consumer rebalancing mechanism:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;kafka_stream_processor_node_record_e2e_latency_avg&lt;/em&gt;: The average end-to-end latency of a record, measured by comparing the record timestamp when it was first created with the system time when it has been fully processed by the node. The Kafka rebalancing should impact this metric as some partitions being rebalanced won't get consumed anymore during rebalancing, and, as a result, the average end-to-end latency of a record should increase.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;kafka_consumer_coordinator_rebalance_latency_avg&lt;/em&gt;: The average time taken for a group to complete a successful rebalance, which may be composed of several failed re-trials until it succeeded.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;kafka_consumer_fetch_manager_records_lag_avg&lt;/em&gt;: The average records lag. Lag is the number of records available on the broker but not yet consumed. An increasing value over time is the best indication that the consumer group is not keeping up with the producers. The Kafka rebalancing should impact this metric as some partitions being rebalanced won't get consumed anymore during rebalancing, and, as a result, the lag should increase.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;kafka_consumer_fetch_manager_records_lag_max&lt;/em&gt;: The max records lag.&lt;/li&gt;
&lt;/ul&gt;
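&lt;p&gt;As an illustration, here is a hedged PromQL sketch for watching these metrics during a rollout (the exact label names depend on how your exporter flattens the Kafka metric tags):&lt;/p&gt;

```promql
# Worst per-topic record lag across all consumers of the application
max by (topic) (kafka_consumer_fetch_manager_records_lag_max)

# Average end-to-end record latency, per Kafka Streams processor node
avg by (processor_node_id) (kafka_stream_processor_node_record_e2e_latency_avg)
```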

&lt;h3&gt;
  
  
  Performance test
&lt;/h3&gt;

&lt;p&gt;Since the Kafka consumer incremental cooperative rebalancing is activated by default for our version of Kafka Streams, we wanted to force the Kafka consumer eager rebalancing mechanism to see the advertised positive impact of the incremental cooperative rebalancing.&lt;br&gt;
The Kafka consumer eager rebalancing is the former rebalancing protocol that was in place before the Kafka consumer incremental cooperative rebalancing was introduced and set as default.&lt;/p&gt;

&lt;p&gt;To do so we simply need to add the following Kafka Streams configuration property to the deployment configuration of our first microservice: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;upgrade_from: 2.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This ensures that Kafka cooperative rebalancing is no longer activated, since it did not exist in that version of Kafka.&lt;/p&gt;

&lt;p&gt;In the following graphs we compare the impact of Kafka incremental cooperative rebalancing to the Kafka eager rebalancing.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kafka Streams requests/sec processed per pod instance on the microservice #1 (1200 requests per pod x 3 = 3600 events/sec):&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyl1dzljo6t3zv5vda7bc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyl1dzljo6t3zv5vda7bc.png" alt="Kafka Streams requests/sec processing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kafka Streams average/max end-to-end latency time of a record seen on the microservice #2:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsl6t6js1d14a0yfstpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsl6t6js1d14a0yfstpv.png" alt="Kafka Streams avg-end-to end latency"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3o5bqh7cvbo39x7ndaoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3o5bqh7cvbo39x7ndaoc.png" alt="Kafka Streams max end-to-end latency"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The average/max end-to-end latency of a record is the same whether we use eager or cooperative rebalancing, and it is high: ~40sec. To give you more context, the average end-to-end latency seen on microservice #2 when no rebalancing is occurring is around 50 milliseconds!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kafka Streams total max lag per topic (in yellow the topic #1 corresponding to the topic from which the microservice #1 is consuming):&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpr6ys0h1qy0x22g0mqhl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpr6ys0h1qy0x22g0mqhl.png" alt="Kafka Streams total max lag per topic"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We do see more lag (~100k records) with the eager rebalancing compared to the cooperative rebalancing (~40k records). Without rebalancing, no lag was observed at all.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kafka group rebalance average time on the microservice #1:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyq156zqe11jw6hb2sumt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyq156zqe11jw6hb2sumt.png" alt="Kafka group rebalance average time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The rebalance average time is the same (~40sec) when we compare the eager rebalancing to the cooperative rebalancing.&lt;/p&gt;

&lt;p&gt;With Kafka eager rebalancing and its stop-the-world effect, we can see that the max lag goes higher during a rebalance than with cooperative rebalancing, which is expected since in incremental cooperative mode some consumers can still consume.&lt;/p&gt;

&lt;p&gt;Overall the lag is scattered over time when using the cooperative rebalancing, and the average end-to-end latency time of a record is high in both cases.&lt;/p&gt;

&lt;p&gt;What we can retain from this test is that the cooperative rebalancing allows some consumers to still consume during a rebalance, but overall the average end-to-end latency of a record is pretty high in both cases.&lt;/p&gt;

&lt;p&gt;We have seen that some consumers get rebalanced several times with the cooperative rebalancing during the rolling update. This puts pressure on some partitions, which are not consumed anymore for a significant time, and induces a high average end-to-end record latency on those partitions, whereas on other partitions, which did not get rebalanced that often, the average end-to-end record latency stays rather low.&lt;/p&gt;

&lt;p&gt;When the consumers assigned to partitions with a large backlog of messages restart after the rebalance, they have to dequeue far more items than for the other partitions, so their message consumption rate spikes for a short period of time until the lag is cleared.&lt;/p&gt;

&lt;p&gt;In the end, incremental cooperative rebalancing reduces the overall max lag on topic #1, but the average end-to-end latency time of a record remains high in both cases.&lt;br&gt;
You can easily imagine the impact Kafka rebalancing can have if each deployment rollout raises the average end-to-end latency time of a record from 50ms to 40sec! For real-time use cases with strict end-to-end latency requirements, this doesn't fly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mitigating the impact of Kafka consumer rebalancing
&lt;/h2&gt;

&lt;p&gt;Let's see how we can decrease the impact of Kafka rebalancing through various tips and how they have impacted our platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Keep up-to-date with Kafka Streams latest versions
&lt;/h3&gt;

&lt;p&gt;With the first Apache Kafka client version to ship the &lt;em&gt;CooperativeStickyAssignor&lt;/em&gt; (2.4, incremental cooperative rebalancing), no huge performance difference was observed, but it laid the groundwork for a multitude of changes delivered in later versions (2.6) and still to come. That is why it's essential to upgrade frequently to the latest release.&lt;/p&gt;

&lt;h3&gt;
  
  
  Decrease Consumer Session Expiration
&lt;/h3&gt;

&lt;p&gt;There is a Kafka Streams configuration property named:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

session.timeout.ms


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It corresponds to the timeout used to detect client failures when using Kafka's group management facility.&lt;/p&gt;

&lt;p&gt;The client sends periodic heartbeats to indicate its liveness to the broker.&lt;/p&gt;

&lt;p&gt;If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this client from the group and initiate a rebalance.&lt;/p&gt;

&lt;p&gt;Note that the value must be in the allowable range as configured in the broker configuration by &lt;code&gt;group.min.session.timeout.ms&lt;/code&gt; and &lt;code&gt;group.max.session.timeout.ms&lt;/code&gt;.&lt;/p&gt;
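&lt;p&gt;For reference, these bounds live in the broker's &lt;code&gt;server.properties&lt;/code&gt;; the values below are the Kafka defaults:&lt;/p&gt;

```properties
# server.properties (broker side)
# Any consumer session.timeout.ms must fall within this range.
group.min.session.timeout.ms=6000
group.max.session.timeout.ms=1800000
```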

&lt;p&gt;In Kafka Streams 3.1, the version we are using, the default value is &lt;a href="https://github.com/apache/kafka/blob/3.1.0/clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java#L355" rel="noopener noreferrer"&gt;45sec&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If an application instance becomes unavailable, the rebalance happens only after this default 45sec, which can induce a lag in event consumption.&lt;/p&gt;

&lt;p&gt;In conjunction with this property, there is another Kafka Streams configuration property that requires an update:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

heartbeat.interval.ms


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It corresponds to the expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities.&lt;/p&gt;

&lt;p&gt;Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group.&lt;/p&gt;

&lt;p&gt;The value must be set lower than &lt;code&gt;session.timeout.ms&lt;/code&gt;, but typically should be set no higher than 1/3 of that value.&lt;/p&gt;

&lt;p&gt;It can be adjusted even lower to control the expected time for normal rebalances.&lt;/p&gt;

&lt;p&gt;The default value of the heartbeat interval is &lt;a href="https://github.com/apache/kafka/blob/3.1.0/clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java#L360" rel="noopener noreferrer"&gt;3sec&lt;/a&gt; in Kafka Streams 3.1, the version we are using.&lt;/p&gt;

&lt;p&gt;We need to be careful with these settings, as they increase the probability of rebalances occurring on a daily basis, and consumers might hang in long rebalances, depending on network quality and stability.&lt;/p&gt;

&lt;p&gt;For our testing we have set the following values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

session.timeout.ms=6000
heartbeat.interval.ms=1500


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
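&lt;p&gt;Wired into the application, those settings are just two more entries in the properties passed to Kafka Streams. Here is a minimal, stdlib-only sketch (the &lt;code&gt;RebalanceTuning&lt;/code&gt; class name is ours for illustration); the resulting &lt;code&gt;Properties&lt;/code&gt; object would then be handed to the &lt;code&gt;KafkaStreams&lt;/code&gt; constructor as usual:&lt;/p&gt;

```java
import java.util.Properties;

public class RebalanceTuning {

    // Consumer-side tuning discussed above; the values mirror our test setup.
    public static Properties consumerTuning() {
        Properties props = new Properties();
        // Detect a dead instance after 6s instead of the 45s default.
        props.setProperty("session.timeout.ms", "6000");
        // Heartbeat every 1.5s, i.e. 1/4 of the session timeout
        // (it must stay at or below 1/3 of session.timeout.ms).
        props.setProperty("heartbeat.interval.ms", "1500");
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerTuning();
        int session = Integer.parseInt(props.getProperty("session.timeout.ms"));
        int heartbeat = Integer.parseInt(props.getProperty("heartbeat.interval.ms"));
        // Sanity-check the 1/3 rule from the Kafka documentation.
        System.out.println(session >= 3 * heartbeat);
    }
}
```

Running the sketch simply verifies that the chosen heartbeat respects the 1/3 rule before the values reach a real deployment.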
&lt;h3&gt;
  
  
  Sending leave group request
&lt;/h3&gt;

&lt;p&gt;By default, Kafka Streams doesn't send a consumer leave group request on graceful application shutdown. As a result, messages from the partitions that were assigned to the terminating instance will not be processed until that consumer's session expires (after &lt;code&gt;session.timeout.ms&lt;/code&gt;), and only after expiration will a new rebalance be triggered.&lt;/p&gt;

&lt;p&gt;By default in Kafka Streams 3.1, we have &lt;code&gt;session.timeout.ms=45000&lt;/code&gt;, so during a single instance restart, messages on some partitions may not be processed for up to 45 seconds, which is painful for real-time requirements.&lt;/p&gt;

&lt;p&gt;You can find more details about a discussion on this property in this &lt;a href="https://issues.apache.org/jira/browse/KAFKA-6995" rel="noopener noreferrer"&gt;Kafka Jira ticket&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The default value is false in Kafka Streams &lt;a href="https://github.com/apache/kafka/blob/3.1.0/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java#L925" rel="noopener noreferrer"&gt;configuration&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For our testing we have set the following values:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

internal.leave.group.on.close=true


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Tweak deployment rollout strategy
&lt;/h3&gt;

&lt;p&gt;Our application is deployed on a Kubernetes cluster, so we can easily control how many application instances are replaced at a time during a rolling update.&lt;/p&gt;

&lt;p&gt;One can use the following Kubernetes deployment properties to achieve this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

.spec.strategy.rollingUpdate.maxSurge (absolute/percentage, default: 25%)
.spec.strategy.rollingUpdate.maxUnavailable (absolute/percentage, default: 25%)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The more instances we roll out at the same time, the more partitions need to be reassigned through Kafka rebalancing, eventually leading to a high end-to-end latency time of a record and a higher max lag per topic.&lt;/p&gt;

&lt;p&gt;Decreasing the number of instances which are started at the same time can mitigate the impact of Kafka rebalancing.&lt;/p&gt;

&lt;p&gt;For our testing we have set the following values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

.spec.strategy.rollingUpdate.maxSurge=1
.spec.strategy.rollingUpdate.maxUnavailable=0


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In that case, a single application instance is replaced at a given time. The deployment process will certainly be longer, but overall we expect more stable performance on the platform, which is what we are looking for in real-time use cases.&lt;/p&gt;
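&lt;p&gt;In a Kubernetes Deployment manifest, these settings sit under the strategy block. A sketch (the deployment name and replica count are illustrative):&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-1        # illustrative name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1             # start at most one extra pod at a time
      maxUnavailable: 0       # never go below the desired replica count
```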

&lt;h3&gt;
  
  
  Results
&lt;/h3&gt;

&lt;p&gt;We applied the above recommendations and ran the same test protocol as before, except that we launched 4 rolling updates of microservice #1 in 30 minutes to gather more data points. Here are the results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kafka Streams requests/sec processed per pod instance on the microservice #1 (1200 requests per pod x 3 = 3600 events/sec):&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnejjdd4oidm13fpdmnx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnejjdd4oidm13fpdmnx0.png" alt="Kafka Streams requests/sec processing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We no longer see the impact of Kafka rebalancing on the number of events processed by microservice #1.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kafka Streams average/max end-to-end latency time of a record seen on the microservice #2:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm17m5pbcr0l1vo2k2opf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm17m5pbcr0l1vo2k2opf.png" alt="Kafka Streams avg-end-to end latency"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The average end-to-end latency time of a record is far less impacted by Kafka rebalancing than before. It went down from 40sec to ~750ms on average during a rebalance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kafka Streams total max lag per topic (topic #1 corresponding to the topic from which the microservice #1 is consuming):&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy45s438xtfact73i8w0b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy45s438xtfact73i8w0b.png" alt="Kafka Streams total max lag per topic"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have far less lag than before the optimizations. We went down from ~40k elements with incremental cooperative rebalancing to ~150 elements.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kafka group rebalance average time:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2do71j78wvx4ds7gyacd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2do71j78wvx4ds7gyacd.png" alt="Kafka group rebalance average time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The rebalance average time went down from 40sec to ~1sec with the optimizations.&lt;/p&gt;

&lt;p&gt;Here is a table summing-up the results of the optimizations on the platform:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Without optimizations&lt;/th&gt;
&lt;th&gt;With optimizations&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;microservice #1 - avg group rebalance time&lt;/td&gt;
&lt;td&gt;40sec&lt;/td&gt;
&lt;td&gt;1sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;topic #1 - max lag&lt;/td&gt;
&lt;td&gt;40,000 events&lt;/td&gt;
&lt;td&gt;150 events&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;microservice #2 - avg end-to-end latency time of a record&lt;/td&gt;
&lt;td&gt;40sec&lt;/td&gt;
&lt;td&gt;700ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The Kafka consumer rebalancing mechanism has a non-negligible impact on the performance of real-time applications. It can be mitigated by tuning some Kafka consumer properties, but it cannot be eliminated entirely.&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;LinkedIn Engineering team &lt;a href="https://www.youtube.com/watch?v=QaeXDh12EhE" rel="noopener noreferrer"&gt;talk&lt;/a&gt; on Kafka rebalancing video: &lt;em&gt;&lt;strong&gt;Consumer Group Internals: Rebalancing, Rebalancing....&lt;/strong&gt;&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Jun Rao quick &lt;a href="https://www.youtube.com/watch?v=ovdSOIXSyzI" rel="noopener noreferrer"&gt;talk&lt;/a&gt; on Kafka consumer group video: &lt;em&gt;&lt;strong&gt;Apache Kafka Consumers and Consumer Group Protocol&lt;/strong&gt;&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Gwen Shapira &lt;a href="https://www.youtube.com/watch?v=MmLezWRI3Ys" rel="noopener noreferrer"&gt;talk&lt;/a&gt; on Kafka rebalancing video: &lt;em&gt;&lt;strong&gt;The Magical Rebalance Protocol of Apache Kafka&lt;/strong&gt;&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>java</category>
      <category>kafka</category>
      <category>performance</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Quarkus fundamentals</title>
      <dc:creator>Pierre Guimon</dc:creator>
      <pubDate>Tue, 04 Oct 2022 14:24:35 +0000</pubDate>
      <link>https://forem.com/pierregmn/quarkus-fundamentals-n77</link>
      <guid>https://forem.com/pierregmn/quarkus-fundamentals-n77</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The world of Java toolkits to build applications is already rich, and Quarkus is yet another toolkit.&lt;/p&gt;

&lt;p&gt;Why should you be interested in it? We will see what makes Quarkus different from other Java toolkits.&lt;/p&gt;

&lt;p&gt;We will first focus on defining what Quarkus is.&lt;/p&gt;

&lt;p&gt;Then, we will see Quarkus's internal architecture, the technical benefits it brings, and how it can be more performant than the average Java toolkit.&lt;/p&gt;

&lt;p&gt;This article tries to give you an overview of the Quarkus framework, as well as explaining its interactions with GraalVM.&lt;/p&gt;

&lt;p&gt;It is inspired by talks on the subject by Emmanuel Bernard, a recognized Java Champion and lead of the Quarkus project at Red Hat.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Quarkus ?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Quarkus definition
&lt;/h3&gt;

&lt;p&gt;Quarkus is the contraction of the words quark and us: quark, the elementary particle and fundamental constituent of matter, and us. If you go to the Quarkus &lt;a href="https://quarkus.io/" rel="noopener noreferrer"&gt;website&lt;/a&gt;, at the top, you can read two sentences:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supersonic Subatomic Java&lt;/li&gt;
&lt;li&gt;A Kubernetes Native Java stack tailored for OpenJDK HotSpot and GraalVM, crafted from the best of breed Java libraries and standards&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ripoxkh1qzfswxtge6l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ripoxkh1qzfswxtge6l.png" alt="Quarkus official website"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Supersonic Subatomic Java
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The idea behind &lt;strong&gt;&lt;em&gt;supersonic&lt;/em&gt;&lt;/strong&gt; is speed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As we will see, Quarkus does its best to boot your application as fast as possible and respond to the first request.&lt;/p&gt;

&lt;p&gt;Another interesting aspect of the speed is the way you develop. With Quarkus, a developer codes and tests their code. It is so fast at recompiling and starting that, as a developer, you save a lot of time just by changing your code and immediately seeing it run.&lt;/p&gt;

&lt;p&gt;This tight feedback loop, with no time wasted, makes the developer experience feel much faster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The idea behind the word &lt;strong&gt;&lt;em&gt;subatomic&lt;/em&gt;&lt;/strong&gt; is the size.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The size of the memory used to run an application, but also the size of the executable once the application is packaged.&lt;/p&gt;

&lt;p&gt;Quarkus is optimized to reduce the amount of memory used by a Java application. Using several tricks that we will discover next, Quarkus reduces not just the Java heap, but the memory used by the entire process (RSS: Resident Set Size).&lt;/p&gt;

&lt;p&gt;Since the beginning of the Java platform, we have been able to package an entire application into a binary file. Thanks to GraalVM, Quarkus drastically reduces the size of that binary. All in all, Quarkus is low in resource consumption.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;And finally, &lt;strong&gt;&lt;em&gt;Java&lt;/em&gt;&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Quarkus is polyglot: it supports several JVM languages and, thanks to GraalVM, can also support languages such as C, C++, Ruby, or JavaScript. But Quarkus is Java first.&lt;/p&gt;

&lt;h4&gt;
  
  
  Kubernetes Native Java stack tailored for OpenJDK HotSpot and GraalVM, crafted from the best of breed Java libraries and standards
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes Native.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From the beginning, Quarkus has been designed around a container-first philosophy.&lt;/p&gt;

&lt;p&gt;By producing small binaries with fast startup, Quarkus is perfectly suited for orchestration platforms like &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Quarkus comes with Docker images and Kubernetes extensions to easily package and deploy applications.&lt;/p&gt;

&lt;p&gt;By supporting Kubernetes natively, Quarkus has been a so‑called Knative and cloud native platform since its creation. Of course Quarkus can deploy your application in any environment, but it keeps Kubernetes in mind.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Java HotSpot and GraalVM.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;HotSpot comes first in the definition because it's important for Quarkus to target the most widely used JVM.&lt;/p&gt;

&lt;p&gt;But Quarkus also brings support for GraalVM, which has been an important part of the design of Quarkus from the beginning.&lt;/p&gt;

&lt;p&gt;When an application is compiled down to a native binary, it starts much faster and can run with a much smaller heap than a standard JVM.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Java libraries and standards.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Quarkus is a new toolkit, but its programming model isn't new.&lt;/p&gt;

&lt;p&gt;In fact, it builds on top of proven standards such as &lt;a href="https://projects.eclipse.org/projects/technology.microprofile" rel="noopener noreferrer"&gt;Eclipse MicroProfile&lt;/a&gt; or frameworks such as &lt;a href="https://vertx.io/" rel="noopener noreferrer"&gt;Vert.x&lt;/a&gt; or &lt;a href="https://projects.eclipse.org/projects/ee4j.jaxrs" rel="noopener noreferrer"&gt;JAX‑RS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Instead of reinventing the wheel on common technical use cases, Quarkus prefers to integrate well‑known libraries.&lt;/p&gt;

&lt;p&gt;In fact, it integrates with hundreds of libraries. But Quarkus is not limited to standards or known libraries. If you have an in‑house library running on top of a JVM then it will work on Quarkus.&lt;/p&gt;

&lt;h4&gt;
  
  
  Open source
&lt;/h4&gt;

&lt;p&gt;It is important to mention that Quarkus is open source under an Apache 2 license, and all its code is on GitHub at &lt;a href="https://github.com/quarkusio" rel="noopener noreferrer"&gt;github.com/quarkusio&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Anybody can join and participate in Quarkus development.&lt;/p&gt;

&lt;p&gt;Quarkus is about supporting the best Java libraries and standards through its extension mechanism, but it is also an ecosystem of ever growing extensions.&lt;/p&gt;

&lt;p&gt;This never‑ending ecosystem is called &lt;a href="https://github.com/quarkiverse" rel="noopener noreferrer"&gt;Quarkiverse&lt;/a&gt;. Quarkiverse is a GitHub organization providing repository hosting, including build, continuous integration, and deployment of Quarkus extensions, mostly contributed by the community.&lt;/p&gt;

&lt;h4&gt;
  
  
  Operational concern: Cloud / on-premise
&lt;/h4&gt;

&lt;p&gt;Java was born in 1995, and at that time it was mostly used to write graphical applications, such as applets. The language was based on the hardware available then: single-core CPUs and multiple threads.&lt;/p&gt;

&lt;p&gt;Quickly, the language moved to the server side, and we started developing monolithic applications designed to run on huge machines 24x7 for months, even years, with lots of CPU and memory.&lt;/p&gt;

&lt;p&gt;The JVM startup time was not an issue. The memory used by the JVM was huge, but we just let the just‑in‑time compiler optimize the execution over time and let the garbage collector manage the memory efficiently.&lt;/p&gt;

&lt;p&gt;Today, we don't have huge machines; we have small ones that we can easily discard. We moved from single cores and many threads to multiple cores, and we tend to be careful about the number of threads, as they consume a lot of resources.&lt;/p&gt;

&lt;p&gt;Slow startup times and high resource consumption don't fit well in our new environment, where we need to deploy hundreds of microservices into the cloud, move them around, and stop and start them quickly.&lt;/p&gt;

&lt;p&gt;Instead of scaling an application by adding more CPU and memory, we now scale microservices dynamically by adding more instances.&lt;/p&gt;

&lt;p&gt;Today, we need small binaries with small footprints and low resource consumption.&lt;/p&gt;

&lt;p&gt;So the industry went from running a monolith on a huge machine to scaling up and down smaller microservices or functions on several small servers, to orchestrating tiny functions moving around constantly, having to start in a few milliseconds to handle a request and stopping immediately.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144470770-26566af6-2038-48df-bd8a-10946788b051.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144470770-26566af6-2038-48df-bd8a-10946788b051.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Java wasn't suited for this new environment.&lt;/p&gt;

&lt;p&gt;Today, whether in the cloud or on-premise, applications are usually deployed on a container platform, and on top of these platforms you can only run so many JVMs.&lt;/p&gt;

&lt;p&gt;JVMs consume resources, memory, startup time, so the density is low. That's why we've seen other technologies and languages emerging in the cloud, such as Node.js or native languages such as Go.&lt;/p&gt;

&lt;p&gt;For the same amount of resources, the density of applications written in Go is much higher than in Java.&lt;/p&gt;

&lt;p&gt;The goal of Quarkus is to reach the same density, but using the Java platform instead of moving to a new one or a new language. This means Quarkus is optimized for low memory usage, small binaries, and fast startup time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144470936-14694ff3-1c79-4868-bf77-0f9e1f841db4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144470936-14694ff3-1c79-4868-bf77-0f9e1f841db4.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Quarkus thus enables a better operational model, both in the cloud and in on-premise data centers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where does it come from ?
&lt;/h3&gt;

&lt;p&gt;Quarkus is driven by Red Hat. In 2006, Red Hat acquired JBoss, which was developing the JBoss application server, and this way entered the Java ecosystem.&lt;/p&gt;

&lt;p&gt;Then, Red Hat started to work on Java's Standard Edition by committing to the OpenJDK, but also the Java Enterprise Edition or the MicroProfile.&lt;/p&gt;

&lt;p&gt;More recently, Red Hat got involved in GraalVM and started distributing its own downstream distribution called &lt;a href="https://github.com/graalvm/mandrel" rel="noopener noreferrer"&gt;Mandrel&lt;/a&gt;. It also started hosting applications on its own cloud platform called OpenShift.&lt;/p&gt;

&lt;p&gt;So Quarkus comes from an open source company that ships a Linux operating system, commits to the JDK and GraalVM, and hosts applications in the cloud, the same company that provides Quarkus production support.&lt;/p&gt;

&lt;p&gt;The company JBoss was created in 1999 and started developing the JBoss Application Server, later known as JBoss EAP, for Enterprise Application Platform.&lt;/p&gt;

&lt;p&gt;In 2014, the JBoss Application Server was renamed &lt;a href="https://www.wildfly.org/" rel="noopener noreferrer"&gt;WildFly&lt;/a&gt; for the free and open source software, while the JBoss EAP name stayed for the supported product.&lt;/p&gt;

&lt;p&gt;In 2015, Red Hat created this innovative approach to packaging and running Java Enterprise Application. It was called WildFly Swarm and then renamed to Thorntail. &lt;a href="https://thorntail.io/" rel="noopener noreferrer"&gt;Thorntail&lt;/a&gt; didn't last long, and its end of life was announced in 2020.&lt;/p&gt;

&lt;p&gt;But Thorntail brought a set of new ideas that were introduced into Quarkus in 2018, the year of the first Quarkus public commit.&lt;/p&gt;

&lt;p&gt;Quarkus 1.0 was then released in 2019, and since then has evolved at a rapid pace, nearly one release per month.&lt;/p&gt;

&lt;p&gt;Quarkus entered version 2.1 in June 2021. Even if Quarkus was created in 2018, it comes from a company that has a long history of open source, Java runtimes, distributed environments, microservices, and cloud environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does Quarkus work ?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  GraalVM
&lt;/h3&gt;

&lt;p&gt;Before talking about the internal architecture of Quarkus, we need to present &lt;a href="https://www.graalvm.org/" rel="noopener noreferrer"&gt;GraalVM&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Quarkus targets the HotSpot VM of course, but it was also built with GraalVM in mind.&lt;/p&gt;

&lt;p&gt;GraalVM is an extension of the Java Virtual Machine to support more languages and several execution modes. GraalVM is itself implemented in Java.&lt;/p&gt;

&lt;p&gt;Running your application inside a JVM comes with startup and footprint costs. To improve that, GraalVM has a feature to create native images for existing Java applications.&lt;/p&gt;

&lt;p&gt;This improves the performance of Java to match the performance of native languages for fast startup and low memory footprint.&lt;/p&gt;

&lt;p&gt;This is the entire spectrum of GraalVM:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144471247-9af40e24-658e-4b4c-b2f1-019dc01fa5ef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144471247-9af40e24-658e-4b4c-b2f1-019dc01fa5ef.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Graal compiler&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the heart of GraalVM comes the Graal compiler.&lt;/p&gt;

&lt;p&gt;The Graal compiler is a high-performance compiler written in Java. It accepts JVM bytecode and supports both dynamic and static compilation to native code.&lt;/p&gt;

&lt;p&gt;Here we are not talking about javac, the compiler that takes Java source code and compiles it into bytecode.&lt;/p&gt;

&lt;p&gt;On the dynamic side, it uses the new JVM compiler interface (JVMCI) to communicate with the good old HotSpot VM.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Java HotSpot VM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;HotSpot has a just-in-time (JIT) compiler, which starts by interpreting the code and then compiles the hot paths. HotSpot supports all the well-known JVM languages, such as Java, Scala, or Groovy.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Substrate VM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On the other side, for the static compilation, the Graal compiler relies on Substrate VM. Substrate VM allows ahead‑of‑time compilation or AOT for applications written in various languages. This ahead‑of‑time compilation improves the startup time by loading precompiled classes.&lt;/p&gt;

&lt;p&gt;With Substrate VM, everything is compiled ahead of time, so neither the JIT nor the metadata it keeps about code usage is necessary. Since classes are precompiled, there is no need to keep information for dynamic compilation and linkage.&lt;/p&gt;

&lt;p&gt;It comes with a garbage collector that is simpler than the OpenJDK ones. Red Hat and other entities are currently working on GC algorithm improvements for Substrate VM.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Truffle framework&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On top of all that, you will find the Truffle framework that enables you to build interpreters and implementations for other languages, such as R, Ruby, JavaScript, or Python.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sulong&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then comes Sulong for what we call LLVM‑based languages, such as C, C++, or Fortran.&lt;/p&gt;

&lt;p&gt;When we talk about Quarkus on GraalVM we are talking about the following parts:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144471359-a6378abe-af4d-4d7a-8302-3722a67f0aac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144471359-a6378abe-af4d-4d7a-8302-3722a67f0aac.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's focus on what interests Quarkus the most: ahead‑of‑time compilation.&lt;/p&gt;

&lt;p&gt;When developing a typical Java application, you have your own business classes. You rely on external libraries and the JDK classes.&lt;/p&gt;

&lt;p&gt;To get native images, you also need to include the necessary components of Substrate VM. Typically, these components are memory management or thread scheduling.&lt;/p&gt;

&lt;p&gt;At the end, your application consists of thousands and thousands of classes. Before compiling natively, the image generation process employs static analysis to find any code reachable from the main Java method.&lt;/p&gt;

&lt;p&gt;These reachable classes form what's called the &lt;strong&gt;&lt;em&gt;closed world&lt;/em&gt;&lt;/strong&gt;. Then, it's just a matter of eliminating all the classes and methods that are not used by your application. This is called &lt;strong&gt;&lt;em&gt;dead code elimination&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Then, thanks to ahead-of-time compilation, the remaining Java code is compiled into a standalone executable, called a native executable or native binary, for the platform you are running on (macOS, Linux, etc.).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144471395-302578c3-f832-4f46-b46c-1d1788005083.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144471395-302578c3-f832-4f46-b46c-1d1788005083.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks to dead code elimination, the final executable is smaller. And thanks to ahead‑of‑time compilation, the resulting native binary contains the application in machine code ready for its immediate execution.&lt;/p&gt;

&lt;p&gt;The end result is an application that is faster to start and uses a smaller amount of memory. This is why Quarkus is a great runtime for containers, as well as cloud native and serverless deployments.&lt;/p&gt;
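&lt;p&gt;In practice, producing such a native executable from a Quarkus project is a single command. A sketch for a Maven-based project (the &lt;code&gt;native&lt;/code&gt; profile comes with the standard Quarkus scaffolding; the runner file name depends on your artifact id):&lt;/p&gt;

```shell
# Build a native executable (requires a local GraalVM installation):
./mvnw package -Pnative

# Or delegate the native compilation to a container if GraalVM is not installed:
./mvnw package -Pnative -Dquarkus.native.container-build=true

# Run the resulting binary (example name):
./target/my-app-1.0.0-SNAPSHOT-runner
```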

&lt;h3&gt;
  
  
  Internal Quarkus architecture
&lt;/h3&gt;

&lt;p&gt;Quarkus can run both on the JVM (HotSpot, with its just‑in‑time compiler) and as a native executable built with GraalVM's ahead‑of‑time compilation.&lt;/p&gt;

&lt;p&gt;Quarkus does a lot of things, from persistence to transactions to microservices, reactive messaging, and so on.&lt;/p&gt;

&lt;p&gt;So you might think that its core is huge and implements hundreds of features. Well, this is not the case: Quarkus is made of a small core on which hundreds of extensions rely, and from an end-user point of view, Quarkus is just a Maven or Gradle dependency.&lt;/p&gt;

&lt;p&gt;In fact, the power of Quarkus is its extension mechanism. Persistence, transactions, fault tolerance, and security are all external extensions that can be added to your application only if needed.&lt;/p&gt;
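&lt;p&gt;Adding such an extension to an existing Maven project can be done with the Quarkus Maven plugin (the extension name below is just an example):&lt;/p&gt;

```shell
# List the available extensions:
./mvnw quarkus:list-extensions

# Add the RESTEasy extension to the project's pom.xml:
./mvnw quarkus:add-extension -Dextensions="resteasy"
```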

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144471457-be530839-09c3-4eb8-b027-59e484f81f6b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144471457-be530839-09c3-4eb8-b027-59e484f81f6b.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And next to these extensions, you can add an infinite number of external third‑party or in‑house libraries.&lt;/p&gt;

&lt;p&gt;The core of Quarkus and this extension mechanism are heavily based on ArC, a lightweight dependency injection framework.&lt;/p&gt;

&lt;p&gt;The core of Quarkus also does the hard work of rewriting parts of the application at build time.&lt;/p&gt;

&lt;p&gt;For that, it uses a set of tools such as Jandex, a Java annotation indexer and offline reflection library that optimizes annotation processing.&lt;/p&gt;

&lt;p&gt;Gizmo is a library used to produce Java bytecode. And thanks to the Graal SDK, Quarkus can plug into native image generation, with its single class loader and dead code elimination mechanism. Quarkus uses Jandex to index classes and Gizmo to produce bytecode at build time, not at runtime.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build time vs runtime
&lt;/h3&gt;

&lt;p&gt;Quarkus is the way for frameworks to embrace GraalVM's idioms.&lt;/p&gt;

&lt;p&gt;It requires changing the way frameworks work, not really at runtime but at startup time. Most of the dynamism a framework brings actually comes at startup time, and this is what Quarkus shifts to build time.&lt;/p&gt;

&lt;p&gt;Quarkus could also be described as a framework that makes frameworks start at build time.&lt;/p&gt;

&lt;p&gt;At startup time, a framework (like Hibernate or Spring, for instance) usually does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Parse config files (e.g. the persistence.xml file for Hibernate)&lt;/li&gt;
&lt;li&gt;Scan the classpath and classes for annotations (e.g. @Entity, @Bean), getters, and other metadata&lt;/li&gt;
&lt;li&gt;Build metamodel objects from all the information above; the framework will run on this metamodel at runtime. For instance, Hibernate doesn't keep .xml files in memory but builds an internal model, and it is this model that is used at runtime to save entities, etc.&lt;/li&gt;
&lt;li&gt;Prepare reflection (obtain references to Method and Field objects so invocations can be performed later) and build proxies&lt;/li&gt;
&lt;li&gt;Start and open I/O, threads, etc. (e.g. database connections)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Conceptually, when you look at those steps, they could easily be done at build time instead of at startup time.&lt;/p&gt;

&lt;p&gt;Everything prior to the last step, and even some parts of the start-up itself, can be done at build time.&lt;/p&gt;

&lt;p&gt;This is what Quarkus does: it takes a framework like Hibernate and makes it work so that as many steps as possible are performed at build time.&lt;/p&gt;

&lt;p&gt;On the following schema, you can see a typical Java framework at the top, where most of the work is performed at runtime (configuration loading, classpath scanning, model creation, start-up management), whereas at the bottom you can see a Quarkus framework, where most of the work is performed at build time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144471998-db527b7b-5d47-489d-a812-6eeafe1e9112.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144471998-db527b7b-5d47-489d-a812-6eeafe1e9112.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This represents the Quarkus build process:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144472024-6b1585d3-6c9f-4593-8794-838aeb0c5395.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F55956993%2F144472024-6b1585d3-6c9f-4593-8794-838aeb0c5395.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are several advantages to this approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You do this work only once, at compilation time, and not at every startup.&lt;/li&gt;
&lt;li&gt;The percentage of classes/lines of code in a framework that are specialized for startup can be very high. Since the work is done at build time, these classes can be removed by dead code elimination, or simply not loaded when running on the HotSpot VM. Hence the startup time and the memory consumption are lower.&lt;/li&gt;
&lt;li&gt;In the JVM, one thing that is particularly slow is loading classes and scanning them, because loading the bytecode means it must be verified, etc., without the classes even being initialized.&lt;/li&gt;
&lt;li&gt;In Quarkus, the Jandex tool performs quick JAR indexing by reading only the class files' metadata, without loading or analyzing the bytecode: what the class is, what its subclasses are, what its methods are, what their return values are, whether there are annotations, etc.&lt;/li&gt;
&lt;li&gt;Every extension uses Jandex, so instead of having to load the classes, we only read this metadata. This saves time.&lt;/li&gt;
&lt;li&gt;Another trick is the following: since most operations are performed at build time with Quarkus, bytecode improvements/rewrites can also be done at build time, in one shot. All the improvements to the bytecode are listed and, for a given class, applied all at once, instead of re-processing the same class for each bytecode improvement as is usually done.&lt;/li&gt;
&lt;li&gt;With GraalVM, static initialization is performed at build time, and this approach is also used by Quarkus. For instance, for the Hibernate framework, everything that can be initialized before accessing the database is put in a static block that will be executed by GraalVM at compilation time. All this work is included in the binary. When the application is started, this step has already been performed, the pre-built heap is loaded into memory, and the remaining initialization steps can proceed. Quarkus really embraces this ahead-of-time compilation approach of GraalVM. This is also a game changer for the HotSpot VM, since it can benefit from these major optimizations.&lt;/li&gt;
&lt;/ul&gt;
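&lt;p&gt;The static-initialization idea from the last bullet can be sketched in plain Java (the metamodel below is a made-up stand-in for what a framework like Hibernate would build): work placed in a static initializer runs once at class initialization, and GraalVM can execute such initializers at image build time, so the resulting state is baked into the binary's heap.&lt;/p&gt;

```java
import java.util.Properties;

// Sketch: expensive start-up work moved into a static initializer.
// GraalVM native images can run class initializers at build time, so
// METAMODEL would be part of the pre-built heap instead of being
// recomputed at every application start.
public class BuildTimeInitDemo {

    static final Properties METAMODEL = buildMetamodel();

    // Stand-in for parsing config files and scanning annotations.
    static Properties buildMetamodel() {
        Properties model = new Properties();
        model.setProperty("entity.User", "table:users");
        model.setProperty("entity.Order", "table:orders");
        return model;
    }

    public static void main(String[] args) {
        // At runtime the model is only read; no parsing or scanning happens.
        System.out.println(METAMODEL.getProperty("entity.User"));
    }
}
```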

&lt;h3&gt;
  
  
  Extensions
&lt;/h3&gt;

&lt;p&gt;You can build any kind of application thanks to the Quarkus extension mechanism.&lt;/p&gt;

&lt;p&gt;As we've seen previously, Quarkus uses an extension mechanism, but Quarkus makes a distinction between extensions and external libraries.&lt;/p&gt;

&lt;p&gt;First of all, extensions are developed and maintained by the Quarkus team. You can find them on the Quarkus &lt;a href="https://github.com/quarkusio/quarkus" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; repository. They integrate seamlessly into the Quarkus architecture as they can be processed at build time and be built in native mode with GraalVM.&lt;/p&gt;

&lt;p&gt;External dependencies are any in‑house Java framework that you've developed and maintained or external Java libraries that you can find out there.&lt;/p&gt;

&lt;p&gt;Therefore, they are not maintained by the Quarkus team. GraalVM being very aggressive with dead code elimination, some of these external libraries might not work out of the box with native compilation.&lt;/p&gt;

&lt;p&gt;You might need to recompile them with the right GraalVM configuration to make them work. That's why not every external library is an extension.&lt;/p&gt;
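&lt;p&gt;The "right GraalVM configuration" typically means registering the reflective accesses the library performs. A minimal &lt;code&gt;reflect-config.json&lt;/code&gt; sketch (the class name is hypothetical) looks like this:&lt;/p&gt;

```json
[
  {
    "name": "com.example.SomeLibraryClass",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```

&lt;p&gt;Such a file can be shipped under &lt;code&gt;META-INF/native-image/&lt;/code&gt; in the library's JAR, or passed to the native image builder via &lt;code&gt;-H:ReflectionConfigurationFiles=&lt;/code&gt;.&lt;/p&gt;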

&lt;p&gt;To sum up, Quarkus works with any external library in JVM mode, so you are not restricted to Quarkus extensions. But you have no guarantee that an external library will work in native mode or be optimized at build time.&lt;/p&gt;

&lt;p&gt;Quarkus has hundreds of extensions, and every release brings new ones.&lt;/p&gt;

&lt;p&gt;One way to keep up to date is to go to &lt;a href="https://code.quarkus.io/" rel="noopener noreferrer"&gt;code.quarkus.io&lt;/a&gt; and check if the technology or framework that you are looking for has been integrated as a Quarkus extension.&lt;/p&gt;

&lt;p&gt;If that's not the case, remember that you can always use it, but as an external library. It might not compile with GraalVM or be optimized for Quarkus, but it will work.&lt;/p&gt;

&lt;p&gt;Here is an overview of available popular extensions:&lt;/p&gt;

&lt;h4&gt;
  
  
  Web extensions
&lt;/h4&gt;

&lt;p&gt;Quarkus supports the good old servlet, as well as RESTEasy to develop RESTful web services. This goes hand in hand with the JSON binding and JSON processing extensions.&lt;/p&gt;

&lt;p&gt;It also has an OpenAPI extension for documenting REST endpoints, gRPC, or GraphQL.&lt;/p&gt;

&lt;h4&gt;
  
  
  Database
&lt;/h4&gt;

&lt;p&gt;Remember that Quarkus comes from Red Hat, the company behind Hibernate and Narayana, the robust transaction manager.&lt;/p&gt;

&lt;p&gt;So you will get Hibernate ORM for relational mapping, Hibernate Validator to validate data, and Hibernate Envers to have historical data.&lt;/p&gt;

&lt;p&gt;Quarkus supports several relational database JDBC drivers, such as PostgreSQL, MariaDB, SQL Server, or H2, as well as MongoDB, Amazon S3, or Elasticsearch.&lt;/p&gt;

&lt;h4&gt;
  
  
  Messaging
&lt;/h4&gt;

&lt;p&gt;In terms of messaging, Quarkus has a JMS extension, but also supports new messaging brokers, such as Kafka or Kafka Streams.&lt;/p&gt;

&lt;p&gt;It also has an extension for AMQP, as well as MQTT, which is the standard messaging protocol for the Internet of Things.&lt;/p&gt;

&lt;h4&gt;
  
  
  Reactive
&lt;/h4&gt;

&lt;p&gt;In terms of reactive architectures, Quarkus goes all the way from the database to exposing reactive REST endpoints.&lt;/p&gt;

&lt;p&gt;This is because it uses Vert.x extensively and reactive programming to get reactive messaging, reactive REST endpoints with RESTEasy, and database access with Hibernate Reactive thanks to the support of reactive R2DBC drivers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Cloud
&lt;/h4&gt;

&lt;p&gt;Being a Kubernetes‑native stack, Quarkus comes with a few extensions to build Docker images, such as Docker itself or Jib.&lt;/p&gt;

&lt;p&gt;It also provides Kubernetes and Minikube extensions to easily orchestrate microservices in your development and production environments.&lt;/p&gt;

&lt;p&gt;It has support for cloud providers, such as OpenShift, AWS, Azure, or Google Cloud, and comes with an extension called Funqy to develop functions on top of AWS Lambda and Azure Functions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Monitoring
&lt;/h4&gt;

&lt;p&gt;When you have several microservices or functions, you need to observe them. For that, Quarkus comes with a health check extension, as well as metrics and distributed tracing.&lt;/p&gt;

&lt;h4&gt;
  
  
  Security
&lt;/h4&gt;

&lt;p&gt;In terms of security, you can use OpenID Connect or JSON Web Tokens very easily. Quarkus comes with a set of extensions to integrate Elytron, Keycloak, or Vault.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where to begin?
&lt;/h3&gt;

&lt;p&gt;I encourage you to visit the Quarkus &lt;a href="https://quarkus.io/" rel="noopener noreferrer"&gt;website&lt;/a&gt;, where you can find a link to the Quarkus application generator &lt;a href="https://code.quarkus.io/" rel="noopener noreferrer"&gt;website&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You will be able to get hands-on with Quarkus via sample applications that are packaged for you; you just need to unzip them and import them into your favorite IDE.&lt;/p&gt;
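&lt;p&gt;Alternatively, a project can be generated from the command line with the Quarkus Maven plugin (the coordinates below are example values):&lt;/p&gt;

```shell
mvn io.quarkus.platform:quarkus-maven-plugin:create \
    -DprojectGroupId=com.example \
    -DprojectArtifactId=getting-started
cd getting-started
./mvnw quarkus:dev   # dev mode with live reload
```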

&lt;p&gt;You will experience the fast startup, live reload, and low memory footprint that come with Quarkus.&lt;/p&gt;

&lt;p&gt;After having tried out some of the Quarkus sample applications, I encourage you to have a look at the Quarkus &lt;a href="https://quarkus.io/guides/" rel="noopener noreferrer"&gt;guides&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You will find all possible information about core Quarkus mechanisms such as CDI, as well as guides for the most common extensions such as RESTEasy, Hibernate, Apache Kafka, etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Emmanuel Bernard &lt;a href="https://www.youtube.com/watch?v=SQDR34KoC-8" rel="noopener noreferrer"&gt;talk&lt;/a&gt; on Quarkus DEVOXX video: &lt;strong&gt;&lt;em&gt;Quarkus why, how and what&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Oleg Šelajev &amp;amp; Thomas Wuerthinger &lt;a href="https://www.youtube.com/watch?v=ANN9rxYo5Hg" rel="noopener noreferrer"&gt;talk&lt;/a&gt; on GraalVM DEVOXX video: &lt;strong&gt;&lt;em&gt;Everything you need to know about GraalVM&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;For FR speakers: Emmanuel Bernard &amp;amp; Clément Escoffier &lt;a href="https://www.youtube.com/watch?v=S05WsHJZsYk" rel="noopener noreferrer"&gt;talk&lt;/a&gt; on using Quarkus with GraalVM DEVOXX video: Quarkus: &lt;strong&gt;&lt;em&gt;Comment faire une appli Java Cloud Native avec Graal VM&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>java</category>
      <category>quarkus</category>
      <category>framework</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
