<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Max Beckers</title>
    <description>The latest articles on Forem by Max Beckers (@maxbeckers).</description>
    <link>https://forem.com/maxbeckers</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1081200%2F67c966a7-aabe-4f2c-9006-80b4d028a70f.jpeg</url>
      <title>Forem: Max Beckers</title>
      <link>https://forem.com/maxbeckers</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/maxbeckers"/>
    <language>en</language>
    <item>
      <title>Understanding the potential of Modulith architecture</title>
      <dc:creator>Max Beckers</dc:creator>
      <pubDate>Thu, 25 Jan 2024 10:57:09 +0000</pubDate>
      <link>https://forem.com/maxbeckers/understanding-the-potential-of-modulith-architecture-3h3n</link>
      <guid>https://forem.com/maxbeckers/understanding-the-potential-of-modulith-architecture-3h3n</guid>
      <description>&lt;p&gt;In software architecture, it is always about finding the best solution for your use case and your application in your specific context. Based on that, you have the challenge to find the right balance between flexibility and simplicity. We often must decide between a monolithic system and a microservice architecture. For a long time, the monolithic architecture has been the way to go. Then, microservices came up and most architectures followed way. This blogpost is about the relatively new architecture approach of modulith that finds its place in-between the other two approaches and tries to combine their benefits.&lt;/p&gt;

&lt;h2&gt;Monoliths: The focus is simplicity&lt;/h2&gt;

&lt;p&gt;A monolithic system, with its integrated codebase and unified structure, offers a simplicity that greatly streamlines development processes. Developers find solace in the cohesive nature of a monolith, where maintaining a singular codebase eliminates the complexities associated with distributed systems. Deployment becomes a straightforward task, as there’s no need to manage the intricacies of coordinating multiple services across a network. Furthermore, the absence of network handling concerns, error propagation, bandwidth issues, and circuit breakers simplifies the development landscape, allowing teams to focus on building features rather than navigating the intricacies of distributed system challenges. While microservices tout their advantages, the monolithic approach stands as a testament to the elegance of consolidated, efficient development practices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9v9q38wemvq7w9u12trk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9v9q38wemvq7w9u12trk.jpg" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pros of Monolithic Systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simplicity&lt;/strong&gt;: With monolithic architectures, you have a single codebase for all the applications, which simplifies the development, debugging, testing and the deployment process because you do not have to manage multiple services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Development Productivity&lt;/strong&gt;: Developing monoliths is easier because they are tightly integrated, allowing developers to concentrate on different parts of the application without worrying about the interactions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Monolithic architectures benefit over distributed architectures because calls between components happen in-process and do not cause any network overhead.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transaction Handling&lt;/strong&gt;: Transactions spanning different (database) operations are easy to handle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons of Monolithic Systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Maintainability&lt;/strong&gt;: When a monolithic architecture grows, it can become unwieldy and hard to maintain, making changes or the introduction of new features challenging and risky. This often results in unintended consequences or regressions, and it makes it difficult for new developers to learn the entire codebase. The relations between the different classes of the application can degrade into a “big ball of mud” over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technology Stack Limitations&lt;/strong&gt;: Monolithic systems often require a uniform technology stack for the entire application. This can limit the ability to leverage the best tools or frameworks for specific components or functionalities of the application. Having a monolith in different languages makes it even more difficult to maintain and deploy it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment Complexity&lt;/strong&gt;: Deploying a monolithic system means deploying the entire application; it is not possible to deploy just a part or a feature. This can increase deployment complexity and limits the ability to deploy individual components independently. At the beginning it might be easy to deploy the monolith, but over time the complexity can increase. What changes with each release has to be tracked, especially when multiple teams are working on the monolith.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Monolithic applications can be challenging to scale because the entire application has to scale. Even if only one part of the monolith gets high traffic, the whole monolith has to scale and this requires a lot of resources. Scaling all components, even those with low traffic, can result in unnecessary costs and resource consumption.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing Challenges&lt;/strong&gt;: Testing a monolithic system can be more challenging due to the tight coupling between components. Changes in one part of the application may have unintended consequences on other parts, making it necessary to perform comprehensive regression testing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Single Point of Failure&lt;/strong&gt;: In a monolithic system, a failure in one component can bring down the full application. There is no isolation between components, increasing the risk of cascading failures.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Microservices: The focus is scalability&lt;/h2&gt;

&lt;p&gt;Microservices gained popularity as an alternative to monoliths, leveraging the concept of breaking down applications into small, independent services. Each microservice focuses on a specific functionality and communicates with other services through well-defined interfaces. In a microservice architecture, the development teams are often set up to run their applications in a “you build it, you run it” manner. This reflects Conway’s Law, which states that the structure of the teams and the organisation is mirrored in the software. Some benefits of microservices include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Microservices offer granular scalability, allowing specific services to scale independently based on demand. This enables efficient resource allocation and cost optimization.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technological Diversity&lt;/strong&gt;: Microservices provide the flexibility to use different technologies for each service, depending on its requirements. Developers can choose the most appropriate technology stack, programming language, or framework for each service, optimizing performance and development productivity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: Development teams can work independently on different services, using different technologies or languages, allowing greater innovation, and reducing the time-to-market.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintainability&lt;/strong&gt;: Each microservice has a low level of coupling to the other services, making the services themselves and the overall system easier to understand, modify, and maintain. Changes or updates to one service have minimal impact on others, reducing the risk of cascading failures and facilitating iterative development.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reusability&lt;/strong&gt;: Reusability is one of the main concepts of a microservice architecture. Each microservice can be used by other applications and services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, microservices also come with challenges (for this see also &lt;a href="https://martinfowler.com/bliki/MicroservicePrerequisites.html"&gt;the Microservice Prerequisites by Martin Fowler&lt;/a&gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Complexity&lt;/strong&gt;: Microservices come with inherent complexity. Developers must handle service discovery, inter-service communication, and data consistency between services. Building robust distributed systems requires careful design and management of these complexities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orchestration Complexity&lt;/strong&gt;: Orchestrating and deploying multiple microservices is one of the major concerns in microservices architectures. The resulting operational complexity requires additional infrastructure and monitoring, such as service discovery, load balancing, and fault tolerance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Challenges of Distributed Systems&lt;/strong&gt;: Microservices rely on network communication to interact with each other. Latency, network failures, and potential failure points are common risks as a result. Data consistency, service discovery, load balancing, and fault tolerance can also be challenging to implement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transaction Handling&lt;/strong&gt;: Managing transactions in a distributed system is inherently complex due to operations occurring across various services. One approach involves setting a defined timeframe for receiving asynchronous responses to events. In case of a rollback, each service must provide a dedicated endpoint to facilitate the process seamlessly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhhmddoasbbt1f8nr4xp.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhhmddoasbbt1f8nr4xp.jpg" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Modulith: Find the balance for your application&lt;/h2&gt;

&lt;p&gt;Modulith combines the best of both worlds, offering a balanced approach that mitigates the drawbacks of monoliths and microservices. Modulith is a modular monolith, where the application is divided into loosely coupled modules or domains. Each module represents a distinct area of functionality and can be developed and tested independently.&lt;/p&gt;

&lt;p&gt;When you break the Modulith into different deployable modules, they can be built with different frameworks or programming languages and scaled independently. But before you invest time in such an architecture, check whether you really need to scale the modules individually. For many systems this flexibility is not required, because all parts of the application need to scale in the same dimension as traffic increases. In that case the whole Modulith can be scaled up when traffic increases and, after the peak, scaled down again as one application to reduce costs.&lt;/p&gt;

&lt;p&gt;Here’s why Moduliths are gaining popularity:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simplicity and Maintainability&lt;/strong&gt;: Moduliths retain the simplicity and maintainability of monoliths by encapsulating the entire application within a single codebase. Developers can easily navigate the codebase and make changes without the need for complex inter-service communication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clear Boundaries&lt;/strong&gt;: Modulith emphasizes clear boundaries between modules, ensuring separation of concerns and loose coupling. This makes it easier to identify dependencies and maintain a modular architecture without the operational complexities of microservices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monolith-like performance&lt;/strong&gt;: By avoiding network communication, a Modulith can run its modules in the same process or on the same hardware and benefit from calling the modules directly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evolutionary Approach&lt;/strong&gt;: Moduliths offer an evolutionary path from a monolithic architecture. The advantage of starting with a modular monolith is that developers can gradually extract modules into independent microservices as the need arises, easing the transition to a microservices architecture.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easier Deployment&lt;/strong&gt;: Moduliths are assembled into one deployment unit, which reduces the coordination for deployments. There is no need to ensure that the other modules are deployed in the correct version as it is needed in a microservice architecture.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Less complexity&lt;/strong&gt;: Compared with microservices, it is easier to configure Moduliths, and the development overhead is reduced. Compared to a monolith, the complexity of understanding the code and testing the separated modules of a Modulith is lower. The complexity of the business logic itself remains, of course, but when it is less challenging to identify the different parts of the application, and there is a clean structure with clear boundaries, it is easier to understand the system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Effective&lt;/strong&gt;: Moduliths prove to be a cost-effective choice by consolidating modular functionalities within a single codebase, eliminating the need for complex infrastructure setups and intricate network configurations, including the management of TLS. This streamlined approach not only reduces economic overhead but also enhances development efficiency, making moduliths a pragmatic solution for optimizing both cost and operational effectiveness in software architecture. &lt;a href="https://www.primevideotech.com/video-streaming/scaling-up-the-prime-video-audio-video-monitoring-service-and-reducing-costs-by-90"&gt;Amazon&lt;/a&gt; was able to reduce the costs of one of their services by &lt;strong&gt;90%&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Moduliths provide a scalable architecture by allowing individual modules to be scaled independently. Modules that experience higher demand can be scaled without impacting other modules, achieving better resource utilization. For that, of course, the modules must be encapsulated and designed to scale individually.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: Moduliths allow for technological diversity within modules, like microservices. Developers can choose the most suitable technology stack for each module, optimizing performance and leveraging the strengths of different frameworks or programming languages. But this flexibility may add more complexity to the application than keeping it on a single technology stack.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The two points of &lt;strong&gt;scalability&lt;/strong&gt; and &lt;strong&gt;technical flexibility&lt;/strong&gt; stand out somewhat from the list above, which is why I have placed them at the end. Both naturally bring further complexity with them and should be avoided as far as possible when they are not necessary. Nevertheless, I would like to explain them a little to make them easier to understand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt; is about processing the load as well as possible when it occurs. Since in a modular application you want to avoid the network traffic of a microservice architecture, multithreading often comes into play at this point. A relatively simple option is the processing of queues. It often makes sense to move everything that is not required for synchronous processing into an asynchronous process via a message queue or similar. For asynchronous processing, the number of consumers can be scaled up relatively easily in most frameworks, so you already have a module that processes events at scale. There are also ways of scaling a module by implementing a kind of load balancer that starts the module in several threads and then distributes requests to the different threads, e.g. via round robin. This second approach is much more complex to implement, but it can help you handle frequently used modules or those with high computing demand.&lt;/p&gt;
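&lt;p&gt;The queue-based option can be sketched as follows. This is a hypothetical minimal example, not code from a real system: the class name and event handling are invented, generic type parameters are omitted for brevity, and in production a message broker would typically replace the in-memory queue.&lt;/p&gt;

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: a module drains events from a queue with a
// pool of consumer threads whose size can be tuned independently of
// the rest of the Modulith.
public class AsyncEventModule {

    private final LinkedBlockingQueue queue = new LinkedBlockingQueue();

    public void publish(String event) {
        queue.offer(event);
    }

    public int pendingCount() {
        return queue.size();
    }

    // Scaling up simply means starting more consumers.
    public ExecutorService start(int consumers) {
        ExecutorService pool = Executors.newFixedThreadPool(consumers);
        for (int i = 0; i != consumers; i++) {
            pool.submit(() -> {
                while (true) {
                    try {
                        String event = (String) queue.take();
                        handle(event);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });
        }
        return pool;
    }

    private void handle(String event) {
        System.out.println("processed " + event);
    }
}
```

&lt;p&gt;The synchronous part of the application only calls &lt;code&gt;publish&lt;/code&gt;; how many consumers run behind the queue is an internal decision of the module.&lt;/p&gt;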

&lt;p&gt;Having &lt;strong&gt;several technologies&lt;/strong&gt; in one codebase and in one Modulith is also something that should only be implemented for really good reasons. Here too, the performance of individual modules can be the driver. A relatively simple example: you have a modular application written in Java, and a file-generation part that can be implemented more easily and with better performance in Python. It is then conceivable to implement the file generation as a Python module and use this outsourced module from within the application, for example by building the Python tool as a CLI application and calling it from Java. You can also consider whether to use the same programming language for synchronous processing and asynchronous handling of individual events, or whether in certain scenarios it makes sense to write the majority of the application in Java but outsource, say, the processing of events for sending emails to a module in PHP.&lt;/p&gt;
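&lt;p&gt;Calling such a CLI-based module from Java could look like the sketch below. All names are invented for illustration ("python3 generate_report.py" is not from any real project); the point is only that the rest of the application does not need to know the implementation language of the module.&lt;/p&gt;

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Arrays;

// Hypothetical sketch: the Java core invokes a module written in
// another language as a CLI tool and captures its output.
public class ExternalToolModule {

    private final String[] command;

    public ExternalToolModule(String... command) {
        this.command = command;
    }

    // Runs the external tool with one extra argument and returns stdout.
    public String run(String argument) {
        try {
            String[] full = Arrays.copyOf(command, command.length + 1);
            full[command.length] = argument;
            Process process = new ProcessBuilder(full)
                    .redirectErrorStream(true)
                    .start();
            StringBuilder output = new StringBuilder();
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(process.getInputStream()));
            String line;
            while ((line = reader.readLine()) != null) {
                output.append(line).append('\n');
            }
            if (process.waitFor() != 0) {
                throw new IllegalStateException("external module failed");
            }
            return output.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

&lt;p&gt;The core could then call something like &lt;code&gt;new ExternalToolModule("python3", "generate_report.py").run("2024-01")&lt;/code&gt; behind the module API, keeping the language choice an internal detail.&lt;/p&gt;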

&lt;p&gt;These are just a few small examples of the two areas to make it a little clearer and show the possibilities. Each of you may have further ideas or other examples in mind. It all depends on the system and the application.&lt;/p&gt;

&lt;h2&gt;Break the Monolith&lt;/h2&gt;

&lt;p&gt;Another use case for a Modulith is the process of breaking a monolith into microservices. Splitting a monolith into microservices can be a good way to increase the maintainability of the software and to benefit from a microservice architecture, but it can be very hard. It might be helpful to use the Modulith as a guide: separate the logic of the monolith into different modules and reduce the coupling between the different parts of the code by defining APIs for the modules. These APIs might simply be interfaces in the first step. You still have one repository for the code, but you profit from benefits like simplicity and maintainability. Perhaps you can even stop your refactoring once you have a modular monolith, because you do not need the flexibility of a microservice architecture. Otherwise, you go further and split the modules up into microservices.&lt;/p&gt;
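&lt;p&gt;As a first step, such a module API can be as small as an interface plus a DTO. The names below are purely illustrative, not taken from a real codebase:&lt;/p&gt;

```java
// Hypothetical module API for a "user" module. Other modules depend
// only on this interface, never on the implementation behind it.
public interface UserModuleApi {

    // Plain data crossing the module boundary; no entities leak out.
    record UserDto(String id, String email) { }

    UserDto findByEmail(String email);
}
```

&lt;p&gt;Because only DTOs cross the boundary, the module behind the interface can later be extracted into a microservice without changing its callers conceptually.&lt;/p&gt;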

&lt;p&gt;A very important but also very hard part of the refactoring is the database. In a Modulith, each module should have its own database (or at least its own schema on the same database server), similar to microservices. Integrating the different modules via the database is not recommended. For existing, long-running monolithic applications where each part of the code uses the database directly, defining APIs between the modules and ensuring that each module owns its data will certainly be the most complicated part.&lt;/p&gt;

&lt;p&gt;In order to split the core of the monolith into modules, you must identify the domains of your application and define the bounded contexts. The reason for this is that most monolithic applications were not built with cleanly defined domains as Domain-driven Design prescribes.&lt;/p&gt;

&lt;p&gt;If you use Java and Spring, you can have a look at &lt;a href="https://spring.io/projects/spring-modulith"&gt;Spring Modulith&lt;/a&gt;. It is an experimental project by Spring to build modular monoliths with Spring. This project can help you encapsulate your modules and find a good project structure. For example, it is helpful that references to internal module packages (sub-packages) are rejected.&lt;/p&gt;

&lt;h2&gt;Module design - a dive into a module&lt;/h2&gt;

&lt;p&gt;The purpose of this section is to demonstrate how to begin creating modules for the modular monolith. You need to define the modules first and, based on that, the APIs to be used. These can consist of interfaces and data transfer objects (DTOs), or of specific services that are allowed to be called from outside the module. It is helpful to keep the rest of the module’s code in another namespace, or to have one module for the API and one for the implementation behind it. I have used both ways, and which one is better depends a bit on the context. In most cases it is best to start with one module that includes the API and an internal namespace for the module’s internal classes.&lt;/p&gt;

&lt;p&gt;In a multi-module Maven project, for example, a simplified hierarchy could look like the illustration below, but this can be adapted to your framework and programming language. It could also be done in one Maven module, separating by namespace, but the multi-module way is more explicit and ensures that you only use classes from modules you added to your pom file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwbajkd2d14nms1qhieb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwbajkd2d14nms1qhieb.jpg" alt="Image description" width="291" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a monolithic architecture we would not have such an internal namespace. There, each class would be allowed to call any public function or class from another namespace. In addition, the codebase would be structured only in namespaces, not separated into well-defined modules.&lt;/p&gt;

&lt;p&gt;For testing, you can of course define unit tests for the module’s internal logic and classes. Furthermore, it is useful to test the module API and view the rest of the application as a black box. This kind of integration test (grey-box test) is easier to implement, run, and maintain than in a microservice environment, where each called microservice must also be running.&lt;/p&gt;
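&lt;p&gt;A grey-box test of this kind exercises only the module’s public API. The sketch below is hypothetical and self-contained: &lt;code&gt;UserModule&lt;/code&gt; is an invented stand-in implementation (generic type parameters omitted for brevity), and a real project would use its test framework instead of a &lt;code&gt;main&lt;/code&gt; method.&lt;/p&gt;

```java
import java.util.HashSet;

// Hypothetical grey-box test: assertions go through the module API
// only; internal classes are treated as a black box.
public class UserModuleGreyBoxTest {

    // Invented stand-in module so the sketch is self-contained.
    static class UserModule {
        private final HashSet users = new HashSet();

        public void register(String email) {
            users.add(email);
        }

        public boolean isRegistered(String email) {
            return users.contains(email);
        }
    }

    public static void main(String[] args) {
        UserModule module = new UserModule();
        module.register("max@example.org");
        if (!module.isRegistered("max@example.org")) {
            throw new AssertionError("registered user not found");
        }
        if (module.isRegistered("unknown@example.org")) {
            throw new AssertionError("unknown user must not be registered");
        }
        System.out.println("grey-box tests passed");
    }
}
```

&lt;p&gt;Because nothing outside the API is referenced, the module’s internals can be refactored freely without touching such tests.&lt;/p&gt;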

&lt;h2&gt;Practical Example&lt;/h2&gt;

&lt;p&gt;To illustrate the concept, let’s delve into a real-world example focusing on a Company Management System. This system boasts a modular architecture reminiscent of a puzzle, with a central Core serving as the foundation. Imagine the Core as the backbone, comprising various individual modules such as User Management, Company Structure, Portal with an Administration Interface, and more. While the following diagram simplifies the structure, it effectively conveys the fundamental idea.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25vw9wl3dk4off121pen.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25vw9wl3dk4off121pen.jpg" alt="Image description" width="640" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The application revolves around the Core, which provides interfaces for seamless integration of additional modules. Each module acts like a piece of the puzzle, bringing its unique functionalities to the system. For instance, a module could introduce its own user data, widgets for the portal, or even features for the administration interface. Essentially, this system operates as a foundational framework, akin to a construction kit. Users can leverage existing interfaces to deploy and expand functionality as needed.&lt;/p&gt;

&lt;p&gt;Consider the scenario of integrating a new module, such as a Job Application Workflow or Time Tracking module. This process is straightforward and independent of other modules, thanks to the well-defined interfaces and the Core’s existing infrastructure. Whether it is adding new features or extending existing ones, the system allows for modular enhancements, providing a flexible and scalable solution.&lt;/p&gt;

&lt;p&gt;While it is conceivable to expose the interfaces as HTTP endpoints and adopt a Microservices architecture, the chosen approach aligns with the concept of a Modulith. Given the manageable traffic, even with several thousand users, this design choice prioritizes simplicity and cohesion, making the system efficient and adaptable to evolving business needs.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Modulith presents a compelling architectural alternative that combines the simplicity of monoliths with the flexibility of microservices. By providing clear module boundaries and enabling independent development and deployment, Modulith strikes a balance that suits many applications’ needs. It allows for scalability, maintainability, and technological diversity while avoiding the operational complexities of a full-fledged microservices architecture. As software systems continue to evolve, Modulith proves to be an attractive choice for building adaptable and efficient applications.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>software</category>
      <category>programming</category>
    </item>
    <item>
      <title>Spring Boot 3 and GraalVM</title>
      <dc:creator>Max Beckers</dc:creator>
      <pubDate>Fri, 09 Jun 2023 08:18:52 +0000</pubDate>
      <link>https://forem.com/maxbeckers/spring-boot-3-and-graalvm-3f9e</link>
      <guid>https://forem.com/maxbeckers/spring-boot-3-and-graalvm-3f9e</guid>
      <description>&lt;p&gt;Spring Boot 3 comes with the support for native images. This is the part for &lt;a href="https://www.graalvm.org/"&gt;GraalVM&lt;/a&gt;. GraalVM transitions from a just-in-time (JIT) compiler built into OpenJDK to an ahead-of-time (AOT) compilation. As a result, it speeds up the startup time and reduces the memory usage of (Micro-)Services, improving the efficiency for cloud environments.&lt;/p&gt;

&lt;p&gt;GraalVM brings great benefits but also has its challenges and disadvantages. In this blog post, I will highlight the features and give an in-depth overview.&lt;/p&gt;

&lt;h2&gt;Getting Started with GraalVM and Spring Boot 3&lt;/h2&gt;

&lt;p&gt;GraalVM provides a good level of documentation including a lot of examples for the first steps.&lt;/p&gt;

&lt;p&gt;However, let’s start with the prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install GraalVM: &lt;a href="https://www.graalvm.org/22.0/docs/getting-started/"&gt;GraalVM - Getting Started&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Install Native Image: &lt;a href="https://www.graalvm.org/22.2/reference-manual/native-image/"&gt;Native Image - Getting Started&lt;/a&gt;. A short hint for Windows users: Native Image builds are platform-dependent. This means they will only work in the platform-specific command line (e.g. they will not work in Git Bash).&lt;/li&gt;
&lt;li&gt;Install Docker to build and run the native images.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, it’s helpful to familiarize yourself a bit with GraalVM. There are a few prepared demos in a Git repository, and I recommend having a look at the Spring Boot example application &lt;a href="https://github.com/graalvm/graalvm-demos/tree/master/spring-native-image"&gt;GraalVM demo - spring native image&lt;/a&gt;. It’s a good first step for playing around with GraalVM a little.&lt;/p&gt;

&lt;p&gt;To start with your own Spring Boot 3 application, you just need the following plugin, which is also defined in the demo and can be copied from there.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;build&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;plugins&amp;gt;&lt;/span&gt;
    ...
    &lt;span class="nt"&gt;&amp;lt;plugin&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.graalvm.buildtools&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;native-maven-plugin&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/plugin&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/plugins&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/build&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a next step, you can try to build the native image with the Maven profile: &lt;code&gt;mvn clean package -Pnative&lt;/code&gt;. As already mentioned, native image builds are platform-dependent. This is why it is often helpful to use a Docker image to build the native image.&lt;/p&gt;

&lt;p&gt;The Spring Boot Maven plugin has three different goals for AOT processing and building the image:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mvn spring-boot:process-aot&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mvn spring-boot:process-test-aot&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mvn spring-boot:build-image&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;These three goals are combined in &lt;code&gt;mvn clean package -Pnative&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Quick tip: the profile &lt;code&gt;native&lt;/code&gt; is predefined in Spring Boot 3 for the native image creation. The same applies to the profile &lt;code&gt;nativeTest&lt;/code&gt; as a testing profile. &lt;/p&gt;

&lt;h3&gt;Generate metadata&lt;/h3&gt;

&lt;p&gt;With a clean or very small Spring Boot application, this might work out of the box. However, for most applications it will not, because reflective access is not visible to GraalVM’s ahead-of-time analysis and requires additional configuration.&lt;/p&gt;

&lt;p&gt;In this case, there will be warnings such as the following:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Warning: Could not resolve org.h2.Driver for reflection configuration. Reason: java.lang.ClassNotFoundException: org.h2.Driver.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This means that some metadata is required for the application to resolve this problem. The fix for this single warning would be very easy, but there might be a couple of such warnings. The metadata config for it goes into &lt;code&gt;reflect-config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"org.h2.Driver"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It would obviously be possible to create such a configuration file by hand and link it to the build. The more standard way (at least for larger artifacts), however, is to generate the configuration. To do so, run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;java -agentlib:native-image-agent=config-merge-dir=META-INF/native-image -jar target/my-application-1.0.0-SNAPSHOT.jar&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will generate a directory &lt;code&gt;META-INF/native-image&lt;/code&gt; (the name can be replaced) from the application &lt;code&gt;target/my-application-1.0.0-SNAPSHOT.jar&lt;/code&gt; (replace with your jar name). While running the application with this command, you should exercise as many use cases as possible to get a complete configuration. Thanks to &lt;code&gt;config-merge-dir&lt;/code&gt;, you can run the application multiple times and the results will be merged. The command generates the following files in that folder:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;jni-config.json&lt;/li&gt;
&lt;li&gt;predefined-classes-config.json&lt;/li&gt;
&lt;li&gt;proxy-config.json&lt;/li&gt;
&lt;li&gt;reflect-config.json&lt;/li&gt;
&lt;li&gt;resource-config.json&lt;/li&gt;
&lt;li&gt;serialization-config.json&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To make this metadata available during the build, add it to your classpath in a folder named &lt;code&gt;META-INF/native-image&lt;/code&gt;. Alternatively, you can configure the path to the files with one of two properties: if the config is on the classpath but not in &lt;code&gt;META-INF/native-image&lt;/code&gt;, use &lt;code&gt;-H:ConfigurationResourceRoots=path/to/resources/&lt;/code&gt;; if it is outside the classpath, use &lt;code&gt;-H:ConfigurationFileDirectories=/path/to/config-dir/&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.graalvm.buildtools&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;native-maven-plugin&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;configuration&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;buildArgs&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;buildArg&amp;gt;&lt;/span&gt;-H:ConfigurationResourceRoots=path/to/resources/&lt;span class="nt"&gt;&amp;lt;/buildArg&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/buildArgs&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/configuration&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the metadata in place, run the build command again. You should no longer see any reflection warnings. If a warning remains, the reason is usually metadata the agent did not record. To fix it, either run the application again and exercise the missing code path to generate the metadata, or add the missing config to the metadata files yourself.&lt;/p&gt;

&lt;p&gt;An alternative quick fix is the option &lt;code&gt;--allow-incomplete-classpath&lt;/code&gt;, which shifts possible linking errors from build time to run time.&lt;/p&gt;
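&lt;p&gt;Like the other build arguments, this option can be passed through the plugin configuration; a minimal sketch, following the same pattern as the plugin snippets above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&amp;lt;groupId&amp;gt;org.graalvm.buildtools&amp;lt;/groupId&amp;gt;
&amp;lt;artifactId&amp;gt;native-maven-plugin&amp;lt;/artifactId&amp;gt;
&amp;lt;configuration&amp;gt;
    &amp;lt;buildArgs&amp;gt;
        &amp;lt;buildArg&amp;gt;--allow-incomplete-classpath&amp;lt;/buildArg&amp;gt;
    &amp;lt;/buildArgs&amp;gt;
&amp;lt;/configuration&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;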

&lt;h3&gt;
  
  
  Class initialization at the wrong time
&lt;/h3&gt;

&lt;p&gt;The next challenge that might come up during the build is an error like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ERROR: Classes that should be initialized at run time got initialized during image building:…&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Most classes are initialized at build time, and GraalVM tries to determine which classes can be initialized at build time and which must be initialized at run time. This error can be fixed with the parameter &lt;code&gt;--initialize-at-run-time&lt;/code&gt;, which forces the given class or package to be initialized at runtime. Conversely, the parameter &lt;code&gt;--initialize-at-build-time&lt;/code&gt; forces initialization during the build.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.graalvm.buildtools&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;native-maven-plugin&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;configuration&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;buildArgs&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;buildArg&amp;gt;&lt;/span&gt;--initialize-at-build-time=my.build.package&lt;span class="nt"&gt;&amp;lt;/buildArg&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;buildArg&amp;gt;&lt;/span&gt;--initialize-at-build-time=my.other.build.package.SpecificClass&lt;span class="nt"&gt;&amp;lt;/buildArg&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;buildArg&amp;gt;&lt;/span&gt;--initialize-at-run-time=my.run.package&lt;span class="nt"&gt;&amp;lt;/buildArg&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;buildArg&amp;gt;&lt;/span&gt;--initialize-at-run-time=my.other.run.package.SpecificClass&lt;span class="nt"&gt;&amp;lt;/buildArg&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/buildArgs&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/configuration&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Runtime errors
&lt;/h3&gt;

&lt;p&gt;With the generated metadata and the fixed initialization time of the classes, the native image build should be successful. Nonetheless, more errors can come up at runtime. The most common one is a &lt;code&gt;ClassNotFoundException&lt;/code&gt;, which means the configuration in &lt;code&gt;reflect-config.json&lt;/code&gt; is incomplete and you should add the missing class. A similar error is a &lt;code&gt;FileNotFoundException&lt;/code&gt; because a file could not be located on the classpath; in that case, the required file is missing from &lt;code&gt;resource-config.json&lt;/code&gt;.&lt;/p&gt;
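&lt;p&gt;For illustration, the corresponding fixes could look like the following sketches, where the class name and resource pattern are placeholders for your own entries. In &lt;code&gt;reflect-config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;[
  {
    "name": "com.example.MissingClass"
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And in &lt;code&gt;resource-config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "resources": {
    "includes": [
      { "pattern": "config/missing-file\\.properties" }
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;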

&lt;h2&gt;
  
  
  Benefits of GraalVM
&lt;/h2&gt;

&lt;p&gt;GraalVM is a very powerful tool with a lot of benefits. In the following sections, I want to highlight the most important ones and then summarize the main challenges.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reduce the startup time
&lt;/h3&gt;

&lt;p&gt;Using GraalVM makes sense for applications running in the cloud. For autoscaling on load peaks, it is important to spin up new instances very fast, and the significantly shorter startup time of a native image is a huge benefit here.&lt;/p&gt;

&lt;h3&gt;
  
  
  Less memory usage
&lt;/h3&gt;

&lt;p&gt;Lower memory usage is another benefit of GraalVM native images, since it can reduce hosting costs significantly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Smaller Images
&lt;/h3&gt;

&lt;p&gt;A native executable is much smaller than the original Docker image, as it only includes the code that is actually needed, already compiled.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;p&gt;The artifact is compiled during the build, meaning that it is immutable and that it is not possible to inject insecure code at runtime.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges of using GraalVM
&lt;/h2&gt;

&lt;p&gt;Even once you have managed all the configuration issues and generated a native image, there are more challenges to deal with.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build time
&lt;/h3&gt;

&lt;p&gt;Because of the AOT compilation, the code is compiled during the build process. This slows down the build, although it is exactly what minimizes the startup time when the application runs. For instance, the build time of one of my applications increased from 3.11 minutes to 7.53 minutes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dynamic code changes
&lt;/h3&gt;

&lt;p&gt;Because of the AOT compilation there are some more challenges with migrating an existing Spring Boot application to GraalVM.&lt;/p&gt;

&lt;p&gt;GraalVM does not support the &lt;code&gt;@Profile&lt;/code&gt; annotation. The background is that compilation happens before the application runs; profiles change the behavior of the application, which cannot be handled by AOT compilation.&lt;/p&gt;

&lt;p&gt;The same reasoning applies to other configurations that change whether a bean is created, such as &lt;code&gt;@ConditionalOnProperty&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing
&lt;/h3&gt;

&lt;p&gt;So far, &lt;a href="https://site.mockito.org/"&gt;Mockito&lt;/a&gt; is not supported in native tests. This can cause problems for a large number of existing applications and result in big test refactoring projects. There are two ways to get the build running: either exclude all mocking tests or simply skip native tests by setting the configuration &lt;code&gt;skipNativeTests&lt;/code&gt; to true:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.graalvm.buildtools&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;native-maven-plugin&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;configuration&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;skipNativeTests&amp;gt;&lt;/span&gt;true&lt;span class="nt"&gt;&amp;lt;/skipNativeTests&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/configuration&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Getting started with GraalVM is a very complex topic with a lot of challenges to manage. For now, many libraries do not yet support GraalVM; growing library support will make adoption even easier.&lt;/p&gt;

&lt;p&gt;In terms of operating systems, I personally recommend using a macOS or Linux development environment. On Windows, you should use WSL2, because getting the setup for native images working natively on Windows is more complicated.&lt;/p&gt;

&lt;p&gt;Microservices in cloud environments require a short startup time and minimal memory utilization, so native images are the way forward for Spring Boot applications in this context. For this reason, it makes sense to have a look into this technology. For new projects I highly recommend using GraalVM from the start, at least for a microservice or cloud architecture.&lt;/p&gt;

&lt;p&gt;But what about existing applications? It depends. Most microservices should be fairly easy to migrate, test, and configure.&lt;br&gt;
For larger applications, it would also be very useful, but it will probably require a lot of complex refactoring and configuration.&lt;/p&gt;

</description>
      <category>java</category>
      <category>performance</category>
      <category>development</category>
    </item>
    <item>
      <title>API First: The Way of Developing Software</title>
      <dc:creator>Max Beckers</dc:creator>
      <pubDate>Fri, 12 May 2023 05:40:23 +0000</pubDate>
      <link>https://forem.com/maxbeckers/api-first-the-way-of-developing-software-1p26</link>
      <guid>https://forem.com/maxbeckers/api-first-the-way-of-developing-software-1p26</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pvEXlc6j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hi9b27jdwfrgny3qt342.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pvEXlc6j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hi9b27jdwfrgny3qt342.jpg" alt="Image by Geralt on Pixabay. Text added by author" width="768" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In recent years, software development has been constantly changing, and so has the focus when starting a new project. Nowadays, a new software project’s focus should be on the software’s APIs, which requires developers and architects to rethink their perspectives. This approach has distinct advantages and makes collaboration easier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Now, the focus is on the API
&lt;/h2&gt;

&lt;p&gt;The purpose of building software is to process data. Data flows into systems or must be made available to others. This is why we need APIs. Since APIs are the backbone of the API-first approach, the focus is on the API from the beginning.&lt;/p&gt;

&lt;p&gt;In today’s approaches, we often think in microservices and headless backend systems, which pushed API-first thinking. Microservices should be as independent as possible. Once an API is defined for the service, it does not matter how the service behind the API is developed. What matters is that API clients can start implementing when the API is defined. The same is true for the headless backends. Most of us want to consume a lot of data in our everyday lives, for instance, on the browser or mobile apps. In this case, a headless application provides an API, and then a mobile app or a JavaScript browser app can consume the same API.&lt;/p&gt;

&lt;p&gt;When developing a frontend application, developers usually use mockups from the UI and UX designer with exact specifications of what the frontend has to look like and which functionality is expected from the different buttons. This is exactly what an API definition does: creates a certain contract at the beginning of the project.&lt;/p&gt;

&lt;h2&gt;
  
  
  The API is Now a Product
&lt;/h2&gt;

&lt;p&gt;Defining the API as a product implies the API is relevant for more than developers. Product managers and product owners need to be centrally involved in API definition, especially if it is a public interface that should allow others to interact with the system.&lt;/p&gt;

&lt;p&gt;For example, from the perspective of a payment service provider (PSP), the market is very competitive. Having a great product can give you an edge over competitors. One of the first things a potential customer is looking for is the API. Does this API fit into the business processes? Is it easy to understand and implement the API? Is the API well documented? Those questions show that a good developer experience is an important criterion for a good API and that decisions are no longer only made on a management level but especially from the technical perspective.&lt;/p&gt;

&lt;h2&gt;
  
  
  API Definition and Documentation
&lt;/h2&gt;

&lt;p&gt;Documentation is a task developers do not like very much, so you must keep the hurdles as low as possible. That way, you can ensure the documentation stays maintained and up to date.&lt;/p&gt;

&lt;p&gt;The OpenAPI Specification is a helpful tool for the first step of defining HTTP/RESTful APIs. It is easy for machines to interpret, but after a short training period it can also be well maintained by humans.&lt;/p&gt;
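&lt;p&gt;To give an idea of the format, here is a minimal OpenAPI sketch; the title, path, and response are purely illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;openapi: 3.0.3
info:
  title: Petstore
  version: 1.0.0
paths:
  /pets:
    get:
      summary: List all pets
      responses:
        "200":
          description: A list of pets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;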

&lt;p&gt;The format is understood by developers and other stakeholders such as product managers and product owners. One reason it is easy to understand is that OpenAPI is a common standard and therefore offers nice possibilities for visualization. In this context, Swagger UI is widely used (a brief example can be found here), but there are other tools as well. At Worldline, we also use Redoc for API documentation (the same example with the Petstore here).&lt;/p&gt;

&lt;p&gt;When you use common standards and discuss the API with many stakeholders before implementation, you can build a well-designed API from the beginning that includes many use cases. In addition, it allows you to extend the API quickly and easily.&lt;/p&gt;

&lt;p&gt;Defining an API in advance can take time, depending on its size and complexity. This is especially true if the API is discussed with as many stakeholders as possible. Depending on the time pressure, it can make sense to start developing the system during the API definition phase. In most cases, the basics of the system can already be built before the API is finalized.&lt;/p&gt;

&lt;h2&gt;
  
  
  Improving the Development Process
&lt;/h2&gt;

&lt;p&gt;The API-first approach has an impact on the development process. Instead of starting the development in the IDE, the API-first approach starts with brainstorming, planning, and talking to stakeholders. It might take time to create API documentation that thinks through all use cases in detail and the ways a client will interact with the system.&lt;/p&gt;

&lt;p&gt;The result is well-defined API documentation, so when the development process starts, there are unlikely to be more changes (except new features).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KUbDlK_W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cqabj3ga39od5o0ejb3g.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KUbDlK_W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cqabj3ga39od5o0ejb3g.jpg" alt="Photo by ThisisEngineering RAEng on Unsplash" width="700" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An API definition (however it may look) is a contract, and a contract must be fulfilled by its implementation. This means that implementation can start on both sides of the API as soon as the definition is accepted. The API’s producer and consumers can work in parallel on this API, which decouples the work of different development teams.&lt;/p&gt;

&lt;p&gt;To illustrate the API-first approach, here is an example of how parallel working can work. In the first step, the API is designed or adapted. The approach works for new features and bug fixes as well as for new development.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Plan the API with as many stakeholders as possible. With a new feature or bug fix, there are probably not as many stakeholders involved as with new development, but that depends entirely on the use case.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Assumption: We have two frontend applications that use the same backend. Once the API is aligned, the frontend teams can generate a mock and implement it. In parallel, the backend team starts customizing the backend for the use case.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zKUnkfI_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rqzffkxav6ac7dbnxuzi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zKUnkfI_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rqzffkxav6ac7dbnxuzi.jpg" alt="image by author" width="331" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once all areas of development are complete, the integration can be tested, and the new version of the software can be released.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A clearly defined, well-aligned API makes the implementation smoother and contributes to significantly higher software quality. This is an important factor, as it pays off in terms of maintenance and maintainability. Moreover, it allows you to focus on the use cases and features during development.&lt;/p&gt;

&lt;p&gt;An OpenAPI definition also has other advantages. With OpenAPI, you automatically get mocks with examples added to the definition, simplifying API integration, especially in the development phase. You still must test the integration of the two systems later, but you don’t have to build a suitable mock for the development.&lt;/p&gt;

&lt;p&gt;When you use the OpenAPI definition to generate your code automatically, the API documentation becomes part of your development process and therefore stays up to date all the time. The generated code also directly includes the validation defined in the OpenAPI document, which further increases the automation of the development process.&lt;/p&gt;
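&lt;p&gt;One common way to wire such code generation into a Maven build is the OpenAPI Generator plugin; the following is only a sketch, and the spec location and generator name are assumptions to adapt to your project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&amp;lt;plugin&amp;gt;
  &amp;lt;groupId&amp;gt;org.openapitools&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;openapi-generator-maven-plugin&amp;lt;/artifactId&amp;gt;
  &amp;lt;executions&amp;gt;
    &amp;lt;execution&amp;gt;
      &amp;lt;goals&amp;gt;
        &amp;lt;goal&amp;gt;generate&amp;lt;/goal&amp;gt;
      &amp;lt;/goals&amp;gt;
      &amp;lt;configuration&amp;gt;
        &amp;lt;inputSpec&amp;gt;${project.basedir}/src/main/resources/openapi.yaml&amp;lt;/inputSpec&amp;gt;
        &amp;lt;generatorName&amp;gt;spring&amp;lt;/generatorName&amp;gt;
      &amp;lt;/configuration&amp;gt;
    &amp;lt;/execution&amp;gt;
  &amp;lt;/executions&amp;gt;
&amp;lt;/plugin&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;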

&lt;h2&gt;
  
  
  API First Makes Everyone Happier
&lt;/h2&gt;

&lt;p&gt;Is this true? Yes. According to the following statistics, about 80% of the respondents said API-first companies are more productive, create better software, and are happier. In addition, the API-first approach also seems to have advantages in other areas. For example, new products and features can be developed faster, and security risks can be eliminated faster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FIcpeR30--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mhrme9ydo5kks3c54jdj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FIcpeR30--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mhrme9ydo5kks3c54jdj.jpg" alt="Source: https://www.postman.com/state-of-api/api-first-strategies/#API-first-strategies" width="600" height="451"&gt;&lt;/a&gt;&lt;br&gt;
Source: &lt;a href="https://www.postman.com/state-of-api/api-first-strategies/#API-first-strategies"&gt;https://www.postman.com/state-of-api/api-first-strategies/#API-first-strategies&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The first step requires a mindset shift from developers and a rethinking within the company. The API has to be seen as a product that offers a range of advantages. However, is there anything wrong with thinking about your API from the start, even if the company is not yet ready? No, but it does not always have to be an explicit API definition if that would be inappropriate for a certain use case. For example, if we write a CLI application that reads in a text file and outputs it to the CLI, then we do not need an API specification.&lt;/p&gt;

&lt;p&gt;Even if you, as a developer, write a small script that only runs on your computer, you do not have to think about the API. You use it yourself and can change it as you like. In these cases, the data model may be the focus of the conception. However, this does not mean a domain model is not an important part of the conception in the API-first approach.&lt;/p&gt;

&lt;p&gt;Following the API-first approach is hard, but it is worth the effort in the end, because you are building a better piece of software.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tips and Tricks
&lt;/h2&gt;

&lt;p&gt;Finally, I would like to share a few tips that have made my job easier over the years.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BjaYtr41--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8wuswbdgom3myguo6inx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BjaYtr41--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8wuswbdgom3myguo6inx.jpg" alt="Tips &amp;amp; Tricks" width="768" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Helpful tools&lt;/strong&gt;&lt;br&gt;
Miro is a great tool for brainstorming, but many other tools are useful as well. Egon.io is an excellent tool for identifying use cases when doing domain storytelling.&lt;/p&gt;

&lt;p&gt;To create OpenAPI documentation, you only need a text editor. Swagger-UI, which I’ve already mentioned, displays changes and errors directly, so you can perform test requests. I use IntelliJ with the corresponding OpenAPI plugin to create the API specification, so I stay in my IDE and don’t have to context switch.&lt;/p&gt;

&lt;p&gt;As an OpenAPI linter, I like vacuum. You can simply pull its Docker image and run the linter.&lt;/p&gt;

&lt;p&gt;Finding the right tool for mocks is a bit more difficult; it depends on what you want to use the mocks for. I often use wiremock, even though it currently has no OpenAPI support, and configure the mock as needed. Another tool is Prism, which gives you a mock directly based on the OpenAPI documentation.&lt;/p&gt;

&lt;p&gt;But there are many more tools, so you must find the right one for your project. This is a sample of the tools I use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenAPI Does Not Only Work With HTTP&lt;/strong&gt;&lt;br&gt;
So far, we have only discussed the HTTP/RESTful API. While this is what the OpenAPI specification was created for, you can also use an OpenAPI definition for other APIs, like message queues, to create a unified definition of the messages.&lt;/p&gt;

&lt;p&gt;Does that make sense? In my eyes, definitely. Since you then have a uniform format for your software’s APIs, you can also reuse part of the code generation: the creation of the models.&lt;/p&gt;

&lt;p&gt;Another tool is AsyncAPI. It is similar to OpenAPI and was created to document asynchronous APIs. The current tooling includes support for common message brokers such as Apache Kafka and RabbitMQ, and for languages including Python, Java, and Node.js.&lt;/p&gt;

&lt;p&gt;But the same would apply to file import. In this case, you can also consider sticking to the XSD validation for XML files, for example, rather than implementing an OpenAPI validation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automation against manual errors&lt;/strong&gt;&lt;br&gt;
You should automate wherever possible to make your life easier. This also applies to APIs. APIs change. APIs must change and live, or they die unused. The more manual steps you have with an API change, the more errors can slip in.&lt;/p&gt;

&lt;p&gt;Generating the client or server from the API specification is now possible for several programming languages. Often, the generated code includes an important part of APIs: the input validation. Although complete validation is not always possible with an OpenAPI description (because dependencies between fields are hard to map), a solid basic validation is. For specific cases, however, validators have to be developed by hand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using standards&lt;/strong&gt;&lt;br&gt;
One final recommendation to make your API more usable is to stick to certain standards. This starts with the HTTP methods and status codes and extends to the structure of your API. RFC 7807, for example, is an interesting approach to defining error responses. The main advantage of using standards properly is that they are either already known or can easily be looked up.&lt;/p&gt;
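&lt;p&gt;A problem-details response following RFC 7807 might look like this; the field values are purely illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "type": "https://example.com/problems/insufficient-funds",
  "title": "Insufficient funds",
  "status": 403,
  "detail": "The account balance is too low to complete the payment.",
  "instance": "/payments/12345"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;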

&lt;p&gt;Thanks for reading.&lt;/p&gt;

</description>
      <category>api</category>
      <category>development</category>
      <category>architecture</category>
      <category>microservices</category>
    </item>
  </channel>
</rss>
