<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: AISSAM ASSOUIK</title>
    <description>The latest articles on Forem by AISSAM ASSOUIK (@aissam_assouik).</description>
    <link>https://forem.com/aissam_assouik</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3163649%2F354cc954-7e64-46b0-bbaa-aceb3e453864.png</url>
      <title>Forem: AISSAM ASSOUIK</title>
      <link>https://forem.com/aissam_assouik</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/aissam_assouik"/>
    <language>en</language>
    <item>
      <title>Integration: Vaadin OAuth2 and Keycloak</title>
      <dc:creator>AISSAM ASSOUIK</dc:creator>
      <pubDate>Sun, 12 Oct 2025 15:09:53 +0000</pubDate>
      <link>https://forem.com/aissam_assouik/integration-vaadin-oauth2-authentication-with-keycloak-194f</link>
      <guid>https://forem.com/aissam_assouik/integration-vaadin-oauth2-authentication-with-keycloak-194f</guid>
      <description>&lt;p&gt;Integrate your &lt;strong&gt;Vaadin Flow&lt;/strong&gt; application with an &lt;strong&gt;IAM&lt;/strong&gt; solution that supports &lt;strong&gt;OAuth2&lt;/strong&gt; protocol can be an exhaustive task since the docs are not straight forward on "How to do it?". It worth to mention that this tutorial is based on Vaadin Flow v24 docs: &lt;a href="https://vaadin.com/docs/latest/flow/security/enabling-security" rel="noopener noreferrer"&gt;Securing Spring Boot Applications&lt;/a&gt; and &lt;a href="https://vaadin.com/docs/latest/flow/integrations/spring/oauth2" rel="noopener noreferrer"&gt;OAuth2 Authentication&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keycloak Client Config
&lt;/h2&gt;

&lt;p&gt;Start with the client's general settings in Clients &amp;gt; Client details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Client ID: client-id&lt;/li&gt;
&lt;li&gt;Root URL: &lt;a href="http://VAADIN_APP_HOST:PORT" rel="noopener noreferrer"&gt;http://VAADIN_APP_HOST:PORT&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Home URL: &lt;a href="http://VAADIN_APP_HOST:PORT" rel="noopener noreferrer"&gt;http://VAADIN_APP_HOST:PORT&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Valid redirect URIs: &lt;a href="http://VAADIN_APP_HOST:PORT/*" rel="noopener noreferrer"&gt;http://VAADIN_APP_HOST:PORT/*&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Valid post logout redirect URIs: &lt;a href="http://VAADIN_APP_HOST:PORT/*" rel="noopener noreferrer"&gt;http://VAADIN_APP_HOST:PORT/*&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Client authentication: On&lt;/li&gt;
&lt;li&gt;Authorization: On&lt;/li&gt;
&lt;li&gt;Authentication Flow: Standard flow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then we create a client role in Clients &amp;gt; Client details.&lt;/p&gt;

&lt;p&gt;To make our client roles visible in the access token, go to Client scopes &amp;gt; Client scope details &amp;gt; Mapper details: in Client scopes, select Roles, then open the Mappers tab and select client roles. There, enable Add to access token and Add to userinfo so that client roles become visible in our Vaadin Flow application, retrievable from either of:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Map&amp;lt;String, Object&amp;gt; claimsFromAccessToekn = oidcUser.getClaims();
Map&amp;lt;String, Object&amp;gt; claimsFromUserInfo = oidcUser.getUserInfo().getClaims();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can do the same for realm roles if we need them in our Vaadin Flow app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dependencies
&lt;/h2&gt;

&lt;p&gt;We need to add the following two dependencies to our pom.xml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter-oauth2-client&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter-security&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Vaadin Flow App Config
&lt;/h2&gt;

&lt;p&gt;To link our application to the Keycloak OAuth2 provider, we need to set the following in our application.properties file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring.security.oauth2.client.registration.keycloak.provider=keycloak
spring.security.oauth2.client.registration.keycloak.client-id=&amp;lt;client-id&amp;gt;
spring.security.oauth2.client.registration.keycloak.client-secret=&amp;lt;client-secret&amp;gt;
spring.security.oauth2.client.registration.keycloak.authorization-grant-type=authorization_code
spring.security.oauth2.client.registration.keycloak.scope=openid,profile

spring.security.oauth2.client.provider.keycloak.issuer-uri=http://KEYCLOAK_HOST:PORT/realms/&amp;lt;REALM NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Security Config
&lt;/h2&gt;

&lt;p&gt;We need to use &lt;strong&gt;VaadinSecurityConfigurer&lt;/strong&gt; to configure the login page using the syntax /oauth2/authorization/{registrationId}, where registrationId is the OAuth2 client registration we defined in application.properties, namely keycloak. In the config below, we redirect to {baseUrl} by default as the post-logout redirect URL.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@EnableWebSecurity
@Configuration
@Import(VaadinAwareSecurityContextHolderStrategyConfiguration.class)
public class SecurityConfig {

    @Bean
    SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(authz -&amp;gt; authz
                .requestMatchers("/images/*.png").permitAll()
                .requestMatchers("/icons/*.svg").permitAll()
        );

        http.with(VaadinSecurityConfigurer.vaadin(), configurer -&amp;gt; {
            configurer.oauth2LoginPage(
                    "/oauth2/authorization/keycloak"
            );
        });

        return http.build();

    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
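The {baseUrl} post-logout redirect mentioned above relies on Spring Security's standard RP-initiated logout support. A minimal sketch of wiring it explicitly, assuming the stock Spring Security API (the bean method name is illustrative, not from the original article):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.security.oauth2.client.oidc.web.logout.OidcClientInitiatedLogoutSuccessHandler;
import org.springframework.security.oauth2.client.registration.ClientRegistrationRepository;
import org.springframework.security.web.authentication.logout.LogoutSuccessHandler;

// Sketch: a logout success handler that triggers RP-initiated logout at Keycloak
// and sends the browser back to the application root afterwards.
class LogoutConfig {

    @Bean
    LogoutSuccessHandler oidcLogoutSuccessHandler(ClientRegistrationRepository registrations) {
        OidcClientInitiatedLogoutSuccessHandler handler =
                new OidcClientInitiatedLogoutSuccessHandler(registrations);
        // "{baseUrl}" is resolved at runtime to the application's root URL;
        // it must match one of the "Valid post logout redirect URIs" in Keycloak.
        handler.setPostLogoutRedirectUri("{baseUrl}");
        return handler;
    }
}
```

This handler can then be registered via `http.logout(logout -> logout.logoutSuccessHandler(...))` in the security filter chain.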



&lt;h2&gt;
  
  
  Keycloak Oidc User Service
&lt;/h2&gt;

&lt;p&gt;The most important part is making the roles from Keycloak available in our OidcUser instance, so that we can protect views with annotations like @RolesAllowed("ROLE1").&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Component
public class KeycloakOidcUserService extends OidcUserService {

    @Override
    public OidcUser loadUser(OidcUserRequest userRequest) throws OAuth2AuthenticationException {
        OidcUser oidcUser = super.loadUser(userRequest);

        Set&amp;lt;GrantedAuthority&amp;gt; mappedAuthorities = new HashSet&amp;lt;&amp;gt;(oidcUser.getAuthorities());

        Map&amp;lt;String, Object&amp;gt; claims = oidcUser.getClaims();

        // 1) realm roles: claims.realm_access.roles
        if (claims.containsKey("realm_access")) {
            Object realmAccessObj = claims.get("realm_access");
            if (realmAccessObj instanceof Map&amp;lt;?, ?&amp;gt; realmAccess) {
                Object rolesObj = realmAccess.get("roles");
                if (rolesObj instanceof Collection) {
                    ((Collection&amp;lt;?&amp;gt;) rolesObj).forEach(r -&amp;gt; {
                        String roleName = String.valueOf(r);
                        mappedAuthorities.add(new SimpleGrantedAuthority("ROLE_" + roleName));
                    });
                }
            }
        }

        // 2) client roles: claims.resource_access.&amp;lt;client-id&amp;gt;.roles
        if (claims.containsKey("resource_access")) {
            Object resourceAccessObj = claims.get("resource_access");
            if (resourceAccessObj instanceof Map&amp;lt;?, ?&amp;gt; resourceAccess) {
                String clientId = userRequest.getClientRegistration().getClientId();
                Object clientObj = resourceAccess.get(clientId);
                if (clientObj instanceof Map&amp;lt;?, ?&amp;gt; clientMap) {
                    Object rolesObj = clientMap.get("roles");
                    if (rolesObj instanceof Collection) {
                        ((Collection&amp;lt;?&amp;gt;) rolesObj).forEach(r -&amp;gt; {
                            String roleName = String.valueOf(r);
                            mappedAuthorities.add(new SimpleGrantedAuthority("ROLE_" + roleName));
                        });
                    }
                }
            }
        }

        return new DefaultOidcUser(mappedAuthorities, oidcUser.getIdToken(), oidcUser.getUserInfo());
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
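With the authorities mapped, a Vaadin view can then be protected declaratively. A minimal sketch, assuming Vaadin Flow's standard router and Jakarta security annotations (the route name and role are illustrative):

```java
import com.vaadin.flow.component.html.H1;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.router.Route;
import jakarta.annotation.security.RolesAllowed;

// Sketch: only users whose token carries the "admin" client role
// (mapped above to the authority ROLE_admin) can open this view.
@Route("admin")
@RolesAllowed("admin")
public class AdminView extends VerticalLayout {

    public AdminView() {
        add(new H1("Admins only"));
    }
}
```

Note that @RolesAllowed takes the role name without the ROLE_ prefix; the prefix added in the user service above is what Spring Security expects on the authority side.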



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This article is a hands-on guide to hooking a Vaadin Flow app up to Keycloak using OAuth2: it walks through the Keycloak client settings you need, the Spring Boot dependencies to add, and the application.properties OAuth2 registration values. It also shows the Spring Security / Vaadin config (how to point the login page to /oauth2/authorization/keycloak) and provides a KeycloakOidcUserService that reads realm and client roles from the OIDC claims and maps them to ROLE_… authorities so you can use @RolesAllowed on Vaadin views.&lt;/p&gt;

</description>
      <category>java</category>
      <category>webdev</category>
      <category>security</category>
      <category>springboot</category>
    </item>
    <item>
      <title>Kubernetes Event Driven Autoscaling: Spring Boot &amp; RabbitMQ</title>
      <dc:creator>AISSAM ASSOUIK</dc:creator>
      <pubDate>Wed, 11 Jun 2025 17:33:13 +0000</pubDate>
      <link>https://forem.com/aissam_assouik/kubernetes-event-driven-autoscaling-21ii</link>
      <guid>https://forem.com/aissam_assouik/kubernetes-event-driven-autoscaling-21ii</guid>
      <description>&lt;p&gt;Kubernetes Event Driven Autoscaling (KEDA) enabling Kubernetes workloads (deployments, statefulsets, CRDs and Jobs) to scale horizontally in response to real world events like a RabbitMQ queue length.&lt;/p&gt;

&lt;h2&gt;
  
  
  How KEDA works
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb91nab76697a67pkiu54.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb91nab76697a67pkiu54.jpg" alt="KEDA" width="800" height="764"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;KEDA can be described as an extension of Kubernetes's native autoscaling feature, the HorizontalPodAutoscaler (HPA). Instead of using only resource metrics (like CPU and memory) or custom metrics, we can now scale horizontally on events produced by a source, which is a useful solution for cases where native HPA behavior does not work as expected.&lt;br&gt;
In today's world, where cloud-native environments are mostly microservices based, KEDA integration into Kubernetes is crucial for achieving responsiveness and efficient resource utilization by dynamically adjusting workloads in response to fluctuating traffic.&lt;br&gt;
Mainly, KEDA covers two different use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Scale Deployments, StatefulSets, and CRDs using &lt;strong&gt;ScaledObjects&lt;/strong&gt;: this is the core KEDA scaling functionality. Using ScaledObjects, we instruct KEDA how we want to scale in response to a defined event or metric. It is the suitable option for short-running processes, like web applications, where we can react quickly to high traffic by scaling the number of pods in or out.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scale jobs using &lt;strong&gt;ScaledJobs&lt;/strong&gt;: a fit for batch or long-running processes; scale by running bursts of work as Jobs that run once or multiple times.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Internally, KEDA's architecture includes the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Metrics Server (keda-operator-metrics-apiserver)&lt;/strong&gt;: Acts as a bridge between external event sources and Kubernetes' Horizontal Pod Autoscaler (HPA). It translates event metrics (for example, queue length, stream lag) into Kubernetes-compatible metrics. HPA consumes metrics from this server to scale pods beyond one replica (from one to n).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;KEDA Operator (keda-operator)&lt;/strong&gt;: Manages the lifecycle of KEDA's CRDs (for example, ScaledObject, ScaledJob) and acts as an agent for scaling deployments from zero to one when events are detected and to zero during idle periods. Uses Kubernetes controller loops to watch ScaledObject/ScaledJob resources and adjust replica counts via the Kubernetes API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Admission Webhooks (keda-admission-webhooks)&lt;/strong&gt;: Validates and mutates KEDA resources during creation/updates to prevent misconfigurations. Intercepts Kubernetes API requests for KEDA CRDs before persistence. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalers&lt;/strong&gt;: they are NOT CRDs, they are code-level components (Go implementations) within KEDA's control plane (KEDA Operator and Metrics Server). Act as adapters or connectors that translate event-source logic (for example, "get Redis queue length") into metrics Kubernetes understands. &lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Authentication with Trigger Authentication
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: rabbitmq-trigger-authentication
  namespace: spring-boot
spec:
  secretTargetRef:
    - parameter: host
      key: rabbitmq-uri
      name: tts-secrets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;TriggerAuthentication&lt;/strong&gt; is a KEDA CRD that securely provides credentials to scalers without hardcoding sensitive data (for example, passwords, tokens) in our scaling definitions (ScaledObject and ScaledJob YAMLs). It acts as a centralized &lt;strong&gt;credential bridge&lt;/strong&gt; between &lt;strong&gt;event sources&lt;/strong&gt; (for example, RabbitMQ, Kafka, AWS SQS) and &lt;strong&gt;scalers&lt;/strong&gt; (for example, the rabbitmq scaler in a ScaledObject).&lt;/p&gt;

&lt;p&gt;Key components of TriggerAuthentication YAML config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+------------------+-------------------+----------------------------------------------------------------------------+
| Field            | Value             | Purpose                                                                    |
+------------------+-------------------+----------------------------------------------------------------------------+
| secretTargetRef  | Credential source | References a Kubernetes Secret (avoids hardcoding secrets in YAML).        |
| parameter: host  | Scaler parameter  | Injects the Secret's value into the scaler's host field.                   |
| key: rabbitmq-uri| Secret data key   | Identifies which entry in the Secret to use (for example, rabbitmq-uri:    |
|                  |                   | amqp://guest:password@localhost:5672/vhost).                               |
| name: tts-secrets| Secret name       | The Kubernetes Secret where credentials are stored.                        |
+------------------+-------------------+----------------------------------------------------------------------------+

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A TriggerAuthentication must be referenced from either a ScaledObject or a ScaledJob. When it is used by the ScaledObject in our example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Scaler (RabbitMQ scaler) needs a connection URI (host) to monitor queues.&lt;/li&gt;
&lt;li&gt;Instead of embedding credentials, the ScaledObject references the TriggerAuthentication to fetch host dynamically.&lt;/li&gt;
&lt;li&gt;Behind the scenes, KEDA reads the tts-secrets Secret, extracts rabbitmq-uri, and injects it as host into the scaler.&lt;/li&gt;
&lt;/ul&gt;
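For completeness, the referenced Secret could look like the following. This is a hedged sketch (the connection URI is illustrative, not from the original article); `stringData` lets us write the value in plain text, while `data` would require base64 encoding:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tts-secrets
  namespace: spring-boot   # must match the TriggerAuthentication's namespace
type: Opaque
stringData:
  # AMQP connection URI consumed by the rabbitmq scaler's host parameter
  rabbitmq-uri: amqp://guest:guest@rabbitmq.rabbitmq.svc.cluster.local:5672/
```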

&lt;p&gt;It is worth mentioning that TriggerAuthentication is namespace scoped, which means the Secret must live in the same namespace as the TriggerAuthentication CRD. For a cluster-scoped alternative, KEDA provides us with &lt;strong&gt;ClusterTriggerAuthentication&lt;/strong&gt;, whose secrets must live in the same namespace as the KEDA components (in most cases the keda namespace). Another important feature of TriggerAuthentication and ClusterTriggerAuthentication is that scaler parameters (like the host parameter of the rabbitmq scaler) can be resolved from environment variables, Kubernetes Secrets (as in our example), HashiCorp Vault secrets, Azure Key Vault secrets, GCP Secret Manager secrets, and more.&lt;/p&gt;

&lt;p&gt;In summary, &lt;strong&gt;TriggerAuthentication&lt;/strong&gt; decouples secrets from scaling logic, acting as a secure credential broker for KEDA scalers. By referencing Kubernetes Secrets, it ensures sensitive data never appears in &lt;strong&gt;ScaledObject&lt;/strong&gt; and &lt;strong&gt;ScaledJob&lt;/strong&gt; definitions, making our event-driven infrastructure both scalable and secure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Expose a deployment with Gateway API
&lt;/h2&gt;

&lt;p&gt;We are exposing the frontend application (tts) using Gateway API, which is a set of API resources that helps us manage traffic to different backends and is considered the successor of Ingress.&lt;/p&gt;

&lt;p&gt;First, we need to install one of the implementations of Gateway API; please refer to &lt;a href="https://docs.nginx.com/nginx-gateway-fabric/installation/installing-ngf/helm/" rel="noopener noreferrer"&gt;the NGINX installation guide for installing with Helm&lt;/a&gt;. They also have a nice &lt;a href="https://docs.nginx.com/nginx-gateway-fabric/get-started/" rel="noopener noreferrer"&gt;simple example&lt;/a&gt; to get familiar with it.&lt;/p&gt;

&lt;p&gt;Then we define our Gateway instance, which is linked to our traffic-handling infrastructure, the NGINX Gateway Fabric. We need to define the network endpoint that listens on port 80 for the HTTP protocol:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: tts-gateway
  namespace: spring-boot
spec:
  gatewayClassName: nginx
  listeners:
    - name: http
      port: 80
      protocol: HTTP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we define the HTTP-specific rules that route the actual traffic from the gateway listener to our backend service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: tts-route
  namespace: spring-boot
spec:
  parentRefs:
    - name: tts-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: tts-svc
          port: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Autoscale deployment with KEDA
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: tts-analytics-rabbitmq-scaled-object
  namespace: spring-boot
spec:
  scaleTargetRef:
    name: tts-analytics
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq
      metadata:
        protocol: amqp
        queueName: post-tts-analytics.analytics-group
        mode: QueueLength
        value: "5"
      authenticationRef:
        name: rabbitmq-trigger-authentication
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;ScaledObject&lt;/strong&gt; CRD is KEDA's primary scaling configuration: it links an event source (RabbitMQ) to a Kubernetes workload (Deployment, StatefulSet, CRD) and defines the scaling rules and behavior. Mainly, it abstracts away HPA complexity while generating HPA resources under the hood.&lt;/p&gt;

&lt;p&gt;Key components of ScaledObject YAML config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+---------------+----------------------+---------------------------------------------+
| Section       | Key Field            | Function                                    |
+---------------+----------------------+---------------------------------------------+
| Scale Target  | scaleTargetRef.name  | Identifies the Deployment (tts-analytics)   |
|               |                      | to scale                                    |
| Max Replicas  | maxReplicaCount      | Sets max replicas count to prevent          |
|               |                      | over scaling                                |
| Min Replicas  | minReplicaCount      | Sets min replicas count                     |
| Event Source  | type: rabbitmq       | Uses KEDA's RabbitMQ scaler for queue       |
|               |                      | monitoring                                  |
| Queue Config  | queueName            | Specific queue to monitor                   |
|               |                      | (post-tts-analytics.analytics-group)        |
| Scaling Logic | mode: QueueLength    | Scales based on total messages in queue     |
|               |                      | (alternative: MessageRate)                  |
| Target Load   | value: "5"           | Aims for 5 messages per replica             |
|               |                      | (HPA will maintain this ratio)              |
| Security      | authenticationRef    | Links to TriggerAuthentication              |
|               |                      | for secure credentials                      |
+---------------+----------------------+---------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;How does autoscaling work? With this configuration, KEDA will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to RabbitMQ using the credentials from rabbitmq-trigger-authentication, monitor the post-tts-analytics.analytics-group queue over the AMQP protocol, and check the queue length every 30s (the default polling interval).&lt;/li&gt;
&lt;li&gt;Calculate the desired replicas with the formula desiredReplicas = ceil(currentQueueLength / value), where &lt;strong&gt;value&lt;/strong&gt; is the target queue length per replica.&lt;/li&gt;
&lt;li&gt;If current replicas ≠ desired replicas, KEDA updates the tts-analytics Deployment's replica count (under the hood, this is done with HPA). &lt;/li&gt;
&lt;li&gt;When the queue is empty, KEDA scales in to the minimum replica count (one). Since we have min replicas &amp;gt; 0, scale-in (downscale) behavior is controlled by the HPA's stabilizationWindowSeconds (default 5 minutes).&lt;/li&gt;
&lt;/ul&gt;
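The replica calculation described above can be sketched in a few lines of plain Java (the numbers are illustrative, not measured from the demo):

```java
// Sketch of KEDA's average-value scaling math: each replica should
// handle at most `value` messages, so we need ceil(queueLength / value)
// replicas, clamped between minReplicaCount and maxReplicaCount.
public class ReplicaMath {

    static int desiredReplicas(int queueLength, int value, int min, int max) {
        int desired = (int) Math.ceil((double) queueLength / value);
        return Math.max(min, Math.min(max, desired));
    }

    public static void main(String[] args) {
        // 103 messages with value=5 -> ceil(20.6) = 21, clamped to max 10
        System.out.println(desiredReplicas(103, 5, 1, 10)); // prints 10
        // 12 messages -> ceil(2.4) = 3 replicas
        System.out.println(desiredReplicas(12, 5, 1, 10));  // prints 3
        // empty queue -> clamped up to minReplicaCount
        System.out.println(desiredReplicas(0, 5, 1, 10));   // prints 1
    }
}
```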

&lt;p&gt;By setting &lt;strong&gt;minReplicaCount&lt;/strong&gt; to 1 and &lt;strong&gt;maxReplicaCount&lt;/strong&gt; to 10, KEDA keeps the HPA's minimum replica count at 1 when traffic dies down, while the maximum of 10 replicas works as a safety mechanism that prevents overscaling if the queue is flooded with messages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy RabbitMQ
&lt;/h2&gt;

&lt;p&gt;In most cases, the RabbitMQ cluster will be a cloud-managed service, but for test purposes we can deploy it inside our cluster using the bitnami/rabbitmq Helm chart with the following custom values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;auth:
  username: guest
  password: guest
persistence:
  enabled: false
env:
  - name: RABBITMQ_FORCE_BOOT
    value: "yes"
replicaCount: 2
clustering:
  enabled: true
resources:
  limits:
    memory: 1Gi
    cpu: 1
  requests:
    memory: 512Mi
    cpu: 500m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After deploying it, the result should look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;student@control-plane:~/$ kubectl -n rabbitmq get all
NAME             READY   STATUS    RESTARTS        AGE
pod/rabbitmq-0   1/1     Running   0               9m5s
pod/rabbitmq-1   1/1     Running   0               10m

NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                 AGE
service/rabbitmq            ClusterIP   10.110.197.82   &amp;lt;none&amp;gt;        5672/TCP,4369/TCP,25672/TCP,15672/TCP   24h
service/rabbitmq-headless   ClusterIP   None            &amp;lt;none&amp;gt;        4369/TCP,5672/TCP,25672/TCP,15672/TCP   24h

NAME                        READY   AGE
statefulset.apps/rabbitmq   2/2     24h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;We deploy the ScaledObject using the YAML above; we should see something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;student@control-plane:~/$ kubectl -n spring-boot get scaledobjects.keda.sh tts-analytics-rabbitmq-scaled-object
NAME                                   SCALETARGETKIND      SCALETARGETNAME   MIN   MAX   READY   ACTIVE   FALLBACK   PAUSED    TRIGGERS   AUTHENTICATIONS                   AGE
tts-analytics-rabbitmq-scaled-object   apps/v1.Deployment   tts-analytics     1     10    True    False    False      Unknown   rabbitmq   rabbitmq-trigger-authentication   23m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looking at the READY column, we can confirm that our ScaledObject is able to connect to our RabbitMQ cluster through the configured TriggerAuthentication and polls the metric (queue length) every pollingInterval (30 seconds), or whenever the HPA controller asks for metrics (every 15 seconds).&lt;/p&gt;

&lt;p&gt;For testing purposes, we are going to use the Siege load-testing tool to call the endpoint exposed by the TTS deployment. For this lab we used Gateway API to route traffic from outside the cluster to the tts service that exposes the tts deployment. For a local deployment, you can use the cluster-IP address instead of a load balancer's public IP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~$ siege -c 4 -t 60S --header="Content-Type: application/json" "http://$CLUSTER_LOAD_BALANCER_PUBLIC_IP/tts POST &amp;lt; ./payload.json"
~$ cat payload.json
{"text":"Hi this is a test!!"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below, we can see how the corresponding HPA updates the replica count for the target deployment based on the metric values shipped by KEDA:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;student@control-plane:~/$ kubectl -n spring-boot get hpa keda-hpa-tts-analytics-rabbitmq-scaled-object -w
NAME                                            REFERENCE                  TARGETS             MINPODS   MAXPODS   REPLICAS   AGE
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   &amp;lt;unknown&amp;gt;/5 (avg)   1         10        1          32s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   0/5 (avg)           1         10        1          76s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   180/5 (avg)         1         10        1          3m16s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   103500m/5 (avg)     1         10        4          3m32s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   74500m/5 (avg)      1         10        8          3m47s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   78700m/5 (avg)      1         10        10         4m2s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   79800m/5 (avg)      1         10        10         4m17s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   77300m/5 (avg)      1         10        10         4m32s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   74900m/5 (avg)      1         10        10         4m47s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   72400m/5 (avg)      1         10        10         5m2s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   69900m/5 (avg)      1         10        10         5m17s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   67400m/5 (avg)      1         10        10         5m32s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   64900m/5 (avg)      1         10        10         5m47s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   62400m/5 (avg)      1         10        10         6m2s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   60/5 (avg)          1         10        10         6m17s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   57500m/5 (avg)      1         10        10         6m32s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   55/5 (avg)          1         10        10         6m47s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   52400m/5 (avg)      1         10        10         7m3s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   49900m/5 (avg)      1         10        10         7m18s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   47400m/5 (avg)      1         10        10         7m33s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   44900m/5 (avg)      1         10        10         7m48s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   42500m/5 (avg)      1         10        10         8m3s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   39900m/5 (avg)      1         10        10         8m19s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   37400m/5 (avg)      1         10        10         8m34s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   34900m/5 (avg)      1         10        10         8m49s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   32400m/5 (avg)      1         10        10         9m4s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   29900m/5 (avg)      1         10        10         9m19s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   27400m/5 (avg)      1         10        10         9m34s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   25/5 (avg)          1         10        10         9m49s
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   22500m/5 (avg)      1         10        10         10m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   20/5 (avg)          1         10        10         10m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   17500m/5 (avg)      1         10        10         10m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   14500m/5 (avg)      1         10        10         10m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   12400m/5 (avg)      1         10        10         11m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   9900m/5 (avg)       1         10        10         11m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   7300m/5 (avg)       1         10        10         11m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   4900m/5 (avg)       1         10        10         11m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   2400m/5 (avg)       1         10        10         12m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   0/5 (avg)           1         10        10         12m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   &amp;lt;unknown&amp;gt;/5 (avg)   1         10        10         13m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   &amp;lt;unknown&amp;gt;/5 (avg)   1         10        10         13m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   0/5 (avg)           1         10        10         13m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   0/5 (avg)           1         10        10         16m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   0/5 (avg)           1         10        10         16m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   0/5 (avg)           1         10        5          17m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   0/5 (avg)           1         10        1          17m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   &amp;lt;unknown&amp;gt;/5 (avg)   1         10        1          21m
keda-hpa-tts-analytics-rabbitmq-scaled-object   Deployment/tts-analytics   &amp;lt;unknown&amp;gt;/5 (avg)   1         10        1          21m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the HPA adjusting the number of replicas in response to the increasing load until it reaches maxReplicaCount (10). When traffic drops below the target value of 5, the HPA does not scale down immediately. This behavior prevents aggressive scale-down under fluctuating traffic: the HPA waits for 5 minutes, then scales down to minReplicaCount once the queue is empty.&lt;/p&gt;
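
&lt;p&gt;The five-minute wait is the HPA's default scale-down stabilization window. It can be tuned from the ScaledObject itself through the &lt;strong&gt;advanced&lt;/strong&gt; section; below is a minimal sketch (the trigger details are omitted, and the value shown is the default, not a recommendation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: tts-analytics-rabbitmq-scaled-object
spec:
  scaleTargetRef:
    name: tts-analytics
  minReplicaCount: 1
  maxReplicaCount: 10
  advanced:
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          stabilizationWindowSeconds: 300 # default 5 minutes; lower for faster scale-down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;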

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Autoscaling in Kubernetes usually works based on CPU or memory usage, but sometimes that's not enough. What if your app is working with queues, messages, or events? That's where KEDA comes in. &lt;br&gt;
KEDA is a tool that helps your apps scale up or down based on events, like the number of messages in a queue. It's especially useful for apps that don't always need to be running full-time, like background jobs or workers. &lt;br&gt;
It works with many event sources like Kafka, RabbitMQ, AWS SQS, and others. You just define a ScaledObject in your YAML file, tell it what to watch (like a queue), and KEDA handles the rest.&lt;br&gt;
In short, KEDA makes your Kubernetes apps smarter by scaling based on real-world events, not just system resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/AnirAchmrar/kubernetes-1" rel="noopener noreferrer"&gt;Project GitHub Repo&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>springboot</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Spring Cloud Stream: write once, run anywhere</title>
      <dc:creator>AISSAM ASSOUIK</dc:creator>
      <pubDate>Mon, 26 May 2025 23:39:06 +0000</pubDate>
      <link>https://forem.com/aissam_assouik/spring-cloud-stram-write-once-run-anywhere-24hk</link>
      <guid>https://forem.com/aissam_assouik/spring-cloud-stram-write-once-run-anywhere-24hk</guid>
      <description>&lt;p&gt;In modern microservices architectures, asynchronous communication isn't just an option – it's the backbone of scalable, resilient systems. But let's face it: working directly with message brokers like RabbitMQ or Kafka often means drowning in boilerplate code, broker-specific configurations, and operational complexity.&lt;/p&gt;

&lt;p&gt;Enter &lt;strong&gt;Spring Cloud Stream&lt;/strong&gt; – the game-changing framework that transforms messaging from a technical hurdle into a strategic advantage. With its elegant abstraction layer, you can implement robust event-driven communication between services without tying your code to a specific broker.&lt;/p&gt;

&lt;p&gt;In this article, I'll demonstrate how Spring Cloud Stream enabled me to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Build broker-agnostic producers/consumers&lt;/strong&gt; with identical code for RabbitMQ and Kafka.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implement enterprise-grade messaging patterns&lt;/strong&gt; (retries, DLQs) in 3 lines of configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Switch messaging technologies&lt;/strong&gt; with a single environment variable change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintain focus on business logic&lt;/strong&gt; while the framework handles messaging plumbing.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Whether you're handling payment processing events, real-time analytics, or IoT data streams, Spring Cloud Stream turns asynchronous communication from a challenge into your system's superpower. Let’s dive in!&lt;/p&gt;

&lt;h2&gt;
  
  
  Context
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkoh7sznogjfu5w2d5ia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkoh7sznogjfu5w2d5ia.png" alt="Spring Cloud Stream" width="781" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To demonstrate the points above, we are going to use two Spring Boot applications that can communicate with each other both asynchronously and synchronously.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TTS, which exposes one endpoint that converts user-entered text to speech using the FreeTTS Java library.&lt;/li&gt;
&lt;li&gt;TTS Analytics, a microservice that performs analytics on user IP addresses and User-Agent headers to provide device and country info (mocked for this lab). In addition, it receives Post TTS Analytics messages through a queue (if we use RabbitMQ) or a topic (if we use Kafka) for the kinds of post-processing scenarios we face in real-world applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Components
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌──────────────────────────────┐
│       Business Logic         │
└──────────────┬───────────────┘
               │
┌──────────────▼───────────────┐
│    Spring Cloud Stream       │
│  ┌────────────────────────┐  │
│  │      StreamBridge      │◄───SEND
│  └────────────────────────┘  │
│  ┌────────────────────────┐  │
│  │  @Bean Supplier/       │◄───POLL
│  │    Consumer            │  │
│  └────────────────────────┘  │
└──────────────┬───────────────┘
               │
┌──────────────▼───────────────┐
│         Binders              │
│  ┌───────┐  ┌───────┐        │
│  │Rabbit │  │ Kafka │ ...    │
│  │MQ     │  │       │        │
│  └───────┘  └───────┘        │
└──────────────┬───────────────┘
               │
┌──────────────▼──────────────┐
│   Message Broker            │
│  (RabbitMQ/Kafka/PubSub)    │
└─────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key terminology in the world of Spring Cloud Stream includes, but is not limited to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Binder: Plug-in connector to specific brokers (RabbitMQ/Kafka).&lt;/li&gt;
&lt;li&gt;Binding: Configuration linking code to broker destinations.&lt;/li&gt;
&lt;li&gt;Destination: Logical address (exchange/topic) for messages.&lt;/li&gt;
&lt;li&gt;Message Channel: Virtual pipeline (input/output).&lt;/li&gt;
&lt;li&gt;StreamBridge: Imperative message sending utility.&lt;/li&gt;
&lt;li&gt;Supplier/Consumer: Functional interfaces for streams.&lt;/li&gt;
&lt;li&gt;Consumer Group: Scaling mechanism for parallel processing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Message Flow (Producer → Consumer)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                PRODUCER SIDE
┌───────────┐       ┌───────────┐       ┌───────────┐
│ Business  │       │ Stream    │       │ Message   │
│  Logic    ├──────►│ Bridge    ├──────►│ Broker    │
│           │       │           │       │           │
└───────────┘       └───────────┘       └─────┬─────┘
                                              │
                CONSUMER SIDE                 ▼
┌───────────┐       ┌───────────┐       ┌───────────┐
│ @Bean     │       │ Spring    │       │ Message   │
│ Consumer  │◄──────┤ Cloud     │◄──────┤ Broker    │
│           │       │ Stream    │       │           │
└───────────┘       └───────────┘       └───────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Business logic may include multiple methods or services that need to send messages to our exchange/topic, a perfect use case for StreamBridge. The consumer application then receives messages from the channel and processes them through a bean method that returns a Consumer.&lt;/p&gt;
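
&lt;p&gt;Conceptually, the framework just calls the registered Consumer's accept() method once per inbound payload after content-type conversion. A broker-free sketch of that dispatch in plain Java (illustrative only, not the framework's internals):&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class DispatchSketch {

    // Stand-in for the binding's message channel: deliver each inbound
    // payload to the single registered Consumer, as the framework does
    // after content-type conversion.
    static List<String> dispatch(List<String> payloads) {
        List<String> processed = new ArrayList<>();
        // The application supplies only this Consumer (like the
        // postTtsAnalytics() bean); the framework owns the channel.
        Consumer<String> postTtsAnalytics = processed::add;
        for (String payload : payloads) {
            postTtsAnalytics.accept(payload); // what happens per message
        }
        return processed;
    }

    public static void main(String[] args) {
        List<String> out = dispatch(List.of("{\"id\":\"123\"}", "{\"id\":\"456\"}"));
        System.out.println(out.size()); // each payload handled exactly once
    }
}
```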

&lt;h2&gt;
  
  
  Consumer Group Scaling
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;           ┌───────────────────────┐
           │      Message Broker   │
           │  (post-tts-analytics) │
           └───────────┬───────────┘
                       │
           ┌───────────▼───────────┐
           │    Consumer Group     │
           │  "analytics-group"    │
           └───────────┬───────────┘
         ┌─────────────┼─────────────┐
┌────────▼───┐   ┌─────▼─────┐   ┌───▼───────┐
│ Consumer   │   │ Consumer  │   │ Consumer  │
│ Instance 1 │   │ Instance2 │   │ Instance3 │
└────────────┘   └───────────┘   └───────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In real-world scenarios, we usually run multiple instances of the consumer application. These instances end up as competing consumers, and we expect each message to be handled by only one instance. This is where the consumer group concept matters: each consumer binding can specify a group name; every group subscribed to a given destination receives a copy of each message/event, and only one instance/member of that group processes it.&lt;/p&gt;
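
&lt;p&gt;The competing-consumers behavior can be pictured with a toy model in plain Java: the broker hands each message to exactly one member of the group (round-robin here; real brokers distribute via partitions or prefetch, but the invariant is the same):&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.List;

public class ConsumerGroupSketch {

    // Toy model of one consumer group: every message goes to exactly
    // one member instance (round-robin distribution for simplicity).
    static List<List<String>> distribute(List<String> messages, int instances) {
        List<List<String>> received = new ArrayList<>();
        for (int i = 0; i < instances; i++) {
            received.add(new ArrayList<>());
        }
        for (int i = 0; i < messages.size(); i++) {
            received.get(i % instances).add(messages.get(i)); // one owner per message
        }
        return received;
    }

    public static void main(String[] args) {
        List<String> msgs = List.of("m1", "m2", "m3", "m4", "m5");
        List<List<String>> group = distribute(msgs, 3);
        int total = group.stream().mapToInt(List::size).sum();
        // No message is duplicated and none is lost within the group
        System.out.println(total == msgs.size());
    }
}
```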

&lt;h2&gt;
  
  
  Binder Abstraction Layer
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌───────────────────────────────────┐
│        Your Application Code      │
│  ┌──────────────────────────────┐ │
│  │  Spring Cloud Stream         │ │
│  │  ┌───────────────────────┐   │ │
│  │  │     Bindings          │   │ │
│  │  │ (Logical Destinations)│   │ │
│  │  └───────────┬───────────┘   │ │
│  └──────────────│───────────────┘ │
│                 ▼                 │
│  ┌──────────────────────────────┐ │
│  │        Binder Bridge         │ │
│  └──────────────┬───────────────┘ │
└─────────────────│─────────────────┘
                  │
       ┌──────────┴─────────────┐
┌──────▼─────────┐    ┌─────────▼───────┐
│ RabbitMQ       │    │ Kafka           │
│ Implementation │    │ Implementation  │
└────────────────┘    └─────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Binder Service Provider Interface (SPI) is the foundation of Spring Cloud Stream's broker independence. It defines a contract for connecting application logic to messaging systems while abstracting broker-specific details.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface Binder&amp;lt;T&amp;gt; {
    Binding&amp;lt;T&amp;gt; bindProducer(String name, T outboundTarget, 
                           ProducerProperties producerProperties);

    Binding&amp;lt;T&amp;gt; bindConsumer(String name, String group, 
                           T inboundTarget, 
                           ConsumerProperties consumerProperties);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;bindProducer(): Connects output channels to broker destinations.&lt;/li&gt;
&lt;li&gt;bindConsumer(): Links input channels to broker queues/topics.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Kafka binder implementation maps the destination to a Kafka topic and the binding's group to a Kafka consumer group. The RabbitMQ binder implementation maps the destination to a TopicExchange, and for each consumer group a queue is bound to that TopicExchange.&lt;/p&gt;

&lt;h2&gt;
  
  
  Producer Application
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Service
public class PostTtsAnalyticsPublisherService {
    private final StreamBridge streamBridge;

    public void sendAnalytics(PostTtsAnalyticsRequest request) {
        streamBridge.send("postTtsAnalytics-out-0", request);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Imperative Sending: StreamBridge allows ad-hoc message sending from any business logic.&lt;/li&gt;
&lt;li&gt;Binding Abstraction: postTtsAnalytics-out-0 maps to broker destinations via configuration.&lt;/li&gt;
&lt;li&gt;Zero Broker-Specific Code: No RabbitMQ/Kafka API dependencies.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  cloud:
    stream:
      bindings:
        postTtsAnalytics-out-0:
          destination: post-tts-analytics # Exchange/topic name
          content-type: application/json
      default-binder: ${BINDER_TYPE:rabbit} # Magic switch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Binding Name: &lt;strong&gt;postTtsAnalytics-out-0&lt;/strong&gt;, defines an output binding for sending messages. The naming convention typically follows &lt;strong&gt;&amp;lt;functionName&amp;gt;-out-&amp;lt;index&amp;gt;&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;destination: &lt;strong&gt;post-tts-analytics&lt;/strong&gt;, specifies the target destination in the messaging system. For RabbitMQ, this would be an exchange named &lt;strong&gt;post-tts-analytics&lt;/strong&gt;; for Kafka, a topic with the same name.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;content-type: application/json&lt;/strong&gt;, indicates that messages sent through this binding will be serialized as JSON. Spring Cloud Stream uses this to handle message conversion appropriately.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;default-binder: ${BINDER_TYPE:rabbit}&lt;/strong&gt;, sets the default binder to use for message communication. If the &lt;strong&gt;BINDER_TYPE&lt;/strong&gt; environment variable is set, its value will be used; otherwise, it defaults to &lt;strong&gt;rabbit&lt;/strong&gt;. This flexibility enables easy switching between different messaging systems (e.g., RabbitMQ or Kafka) without changing the codebase.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-cloud-stream-binder-rabbit&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-cloud-stream-binder-kafka&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By providing binder dependencies for both Kafka and RabbitMQ, we can easily switch between the two without any codebase changes.&lt;/p&gt;
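
&lt;p&gt;Switching is then just an environment variable at launch time (assuming a Maven wrapper; the commands below are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Default binder (RabbitMQ)
./mvnw spring-boot:run

# Same code, same build, now talking to Kafka
BINDER_TYPE=kafka ./mvnw spring-boot:run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;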

&lt;h2&gt;
  
  
  Consumer Application
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Service
public class PostTtsAnalyticsConsumerService {

    @Bean
    public Consumer&amp;lt;PostTtsAnalyticsRequest&amp;gt; postTtsAnalytics() {
        return this::processAnalytics;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Declarative Consumption: Functional Consumer bean handles incoming messages.&lt;/li&gt;
&lt;li&gt;Automatic Binding: Method name &lt;strong&gt;postTtsAnalytics&lt;/strong&gt; matches binding config.&lt;/li&gt;
&lt;li&gt;Retry/DLQ Support: Configured via simple YAML.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  cloud:
    stream:
      bindings:
        postTtsAnalytics-in-0:
          destination: post-tts-analytics
          content-type: application/json
          group: analytics-group
          consumer:
            max-attempts: 3                 # Retry attempts
            back-off-initial-interval: 1000 # Retry delay
      default-binder: ${BINDER_TYPE:rabbit} # Magic switch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Binding Name: &lt;strong&gt;postTtsAnalytics-in-0&lt;/strong&gt;, defines an input binding, indicating that this application will consume messages. The naming convention typically follows &lt;strong&gt;&amp;lt;functionName&amp;gt;-in-&amp;lt;index&amp;gt;&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;destination: post-tts-analytics&lt;/strong&gt;, specifies the target destination in the messaging system. For RabbitMQ, this would correspond to an exchange named &lt;strong&gt;post-tts-analytics&lt;/strong&gt;; for Kafka, a topic with the same name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;group: analytics-group&lt;/strong&gt;, defines a consumer group. In Kafka, this ensures that messages are distributed among consumers in the same group. In RabbitMQ, it helps in creating a shared queue for the group.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;max-attempts: 3&lt;/strong&gt;, specifies that the application will attempt to process a message up to three times before considering it failed (this includes the initial attempt and two retries). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;back-off-initial-interval: 1000&lt;/strong&gt;, sets the initial delay (in milliseconds) between retry attempts. In this case, there's a 1-second delay before the first retry.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
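
&lt;p&gt;With these settings the consumer makes the initial attempt plus two retries, and each retry delay grows by the back-off multiplier (2.0 by default, capped by back-off-max-interval, 10000 ms by default). A plain-Java sketch of the resulting delays, mirroring an exponential back-off policy in spirit (not the framework's code):&lt;/p&gt;

```java
public class BackoffSketch {

    // Delay before each retry: start at initialMs, multiply by
    // `multiplier` after every retry, never exceeding maxMs.
    static long[] retryDelays(int maxAttempts, long initialMs, double multiplier, long maxMs) {
        int retries = Math.max(0, maxAttempts - 1); // the first attempt has no delay
        long[] delays = new long[retries];
        long delay = initialMs;
        for (int i = 0; i < retries; i++) {
            delays[i] = delay;
            delay = Math.min((long) (delay * multiplier), maxMs);
        }
        return delays;
    }

    public static void main(String[] args) {
        // max-attempts: 3, back-off-initial-interval: 1000, defaults otherwise
        long[] delays = retryDelays(3, 1000L, 2.0, 10000L);
        System.out.println(java.util.Arrays.toString(delays)); // [1000, 2000]
    }
}
```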

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-cloud-stream-binder-rabbit&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-cloud-stream-binder-kafka&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing Strategy
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@SpringBootTest
@Import(TestChannelBinderConfiguration.class)
class PostTtsAnalyticsPublisherServiceTest {

    @Autowired
    private OutputDestination outputDestination;

    @Autowired
    private CompositeMessageConverter converter;

    @Autowired
    private PostTtsAnalyticsPublisherService analyticsService;

    @Test
    void testSendMessage() {
        PostTtsAnalyticsRequest request = new PostTtsAnalyticsRequest(
                "123", LocalDateTime.now(), 1500
        );

        analyticsService.sendAnalytics(request);

        var message = outputDestination.receive(1000, "post-tts-analytics");
        assert message != null;

        PostTtsAnalyticsRequest received = (PostTtsAnalyticsRequest) converter.fromMessage(message, PostTtsAnalyticsRequest.class);

        assert Objects.requireNonNull(received).id().equals(request.id());
    }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This test effectively ensures that the &lt;strong&gt;PostTtsAnalyticsPublisherService&lt;/strong&gt; publishes messages as expected, validating both the sending mechanism and the message content.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;@Import(TestChannelBinderConfiguration.class)&lt;/strong&gt;: Imports the &lt;strong&gt;TestChannelBinderConfiguration&lt;/strong&gt;, which sets up the in-memory test binder provided by Spring Cloud Stream. This binder simulates message broker interactions within the JVM, eliminating the need for an actual message broker during testing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OutputDestination&lt;/strong&gt;: An abstraction provided by the test binder to capture messages sent by the application. It allows the test to retrieve messages that would have been sent to an external message broker.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CompositeMessageConverter&lt;/strong&gt;: A composite of multiple &lt;strong&gt;MessageConverter&lt;/strong&gt; instances. It facilitates the conversion of message payloads to and from different formats, such as &lt;strong&gt;JSON&lt;/strong&gt; to &lt;strong&gt;POJO&lt;/strong&gt;, based on the content type.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@SpringBootTest
@Import(TestChannelBinderConfiguration.class)
class PostTtsAnalyticsConsumerServiceTest {

    @Autowired
    private InputDestination inputDestination;

    @MockitoSpyBean
    private PostTtsAnalyticsConsumerService consumerService;

    @Test
    void testReceiveMessage() {
        PostTtsAnalyticsRequest request = new PostTtsAnalyticsRequest(
                "456", LocalDateTime.now(), 2000
        );

        inputDestination.send(new GenericMessage&amp;lt;&amp;gt;(request), "post-tts-analytics");

        // Verify the handler was called
        verify(consumerService, timeout(1000)).processAnalytics(request);
    }

    @Test
    void testRetryMechanism() throws Exception {
        PostTtsAnalyticsRequest request = new PostTtsAnalyticsRequest(
                "456", LocalDateTime.now(), 2000
        );

        // Mock failure scenario
        doThrow(new RuntimeException("Simulated processing failure"))
                .when(consumerService).processAnalytics(request);

        // Send test message
        inputDestination.send(new GenericMessage&amp;lt;&amp;gt;(request), "post-tts-analytics");

        // Verify retry attempts
        verify(consumerService, timeout(4000).times(3)).processAnalytics(request);
    }

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;testReceiveMessage()&lt;/strong&gt; validates that the consumer correctly processes an incoming message. &lt;strong&gt;testRetryMechanism()&lt;/strong&gt; tests the consumer's retry mechanism when message processing fails.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;InputDestination&lt;/strong&gt;: An abstraction provided by the test binder to send messages to the application, simulating incoming messages from a message broker.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;@MockitoSpyBean&lt;/strong&gt;: Creates a spy of the &lt;strong&gt;PostTtsAnalyticsConsumerService&lt;/strong&gt; bean, allowing the test to verify interactions with its methods while preserving the original behavior.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  RabbitMQ Demo
&lt;/h2&gt;

&lt;p&gt;We run both applications without setting the &lt;strong&gt;BINDER_TYPE&lt;/strong&gt; environment variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/

 :: Spring Boot ::                (v3.4.5)

2025-05-25T21:20:41.345+01:00  INFO 15156 --- [tts] [           main] com.example.tts.TtsApplication           : No active profile set, falling back to 1 default profile: "default"
2025-05-25T21:20:42.657+01:00  INFO 15156 --- [tts] [           main] faultConfiguringBeanFactoryPostProcessor : No bean named 'errorChannel' has been explicitly defined. Therefore, a default PublishSubscribeChannel will be created.
2025-05-25T21:20:42.666+01:00  INFO 15156 --- [tts] [           main] faultConfiguringBeanFactoryPostProcessor : No bean named 'integrationHeaderChannelRegistry' has been explicitly defined. Therefore, a default DefaultHeaderChannelRegistry will be created.
2025-05-25T21:20:43.297+01:00  INFO 15156 --- [tts] [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port 8080 (http)
2025-05-25T21:20:43.312+01:00  INFO 15156 --- [tts] [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2025-05-25T21:20:43.313+01:00  INFO 15156 --- [tts] [           main] o.apache.catalina.core.StandardEngine    : Starting Servlet engine: [Apache Tomcat/10.1.40]
2025-05-25T21:20:43.387+01:00  INFO 15156 --- [tts] [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2025-05-25T21:20:43.387+01:00  INFO 15156 --- [tts] [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1977 ms
2025-05-25T21:20:45.676+01:00  INFO 15156 --- [tts] [           main] o.s.i.endpoint.EventDrivenConsumer       : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2025-05-25T21:20:45.676+01:00  INFO 15156 --- [tts] [           main] o.s.i.channel.PublishSubscribeChannel    : Channel 'tts.errorChannel' has 1 subscriber(s).
2025-05-25T21:20:45.676+01:00  INFO 15156 --- [tts] [           main] o.s.i.endpoint.EventDrivenConsumer       : started bean '_org.springframework.integration.errorLogger'
2025-05-25T21:20:45.767+01:00  INFO 15156 --- [tts] [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port 8080 (http) with context path '/'
2025-05-25T21:20:45.794+01:00  INFO 15156 --- [tts] [           main] com.example.tts.TtsApplication           : Started TtsApplication in 5.064 seconds (process running for 5.525)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above are the producer application's startup logs with the default rabbit binder. We can see the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Auto-configures Spring Integration channels (errorChannel, integrationHeaderChannelRegistry)&lt;/li&gt;
&lt;li&gt;No explicit RabbitMQ connection logs (binder initialized lazily)&lt;/li&gt;
&lt;li&gt;Implicit errorChannel subscriber for logging
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/

 :: Spring Boot ::                (v3.4.5)

2025-05-25T21:20:46.613+01:00  INFO 23932 --- [tts-analytics] [           main] c.e.t.TtsAnalyticsApplication            : No active profile set, falling back to 1 default profile: "default"
2025-05-25T21:20:47.894+01:00  INFO 23932 --- [tts-analytics] [           main] faultConfiguringBeanFactoryPostProcessor : No bean named 'errorChannel' has been explicitly defined. Therefore, a default PublishSubscribeChannel will be created.
2025-05-25T21:20:47.903+01:00  INFO 23932 --- [tts-analytics] [           main] faultConfiguringBeanFactoryPostProcessor : No bean named 'integrationHeaderChannelRegistry' has been explicitly defined. Therefore, a default DefaultHeaderChannelRegistry will be created.
2025-05-25T21:20:48.520+01:00  INFO 23932 --- [tts-analytics] [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port 8090 (http)
2025-05-25T21:20:48.535+01:00  INFO 23932 --- [tts-analytics] [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2025-05-25T21:20:48.535+01:00  INFO 23932 --- [tts-analytics] [           main] o.apache.catalina.core.StandardEngine    : Starting Servlet engine: [Apache Tomcat/10.1.40]
2025-05-25T21:20:48.599+01:00  INFO 23932 --- [tts-analytics] [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2025-05-25T21:20:48.599+01:00  INFO 23932 --- [tts-analytics] [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1913 ms
2025-05-25T21:20:50.354+01:00  INFO 23932 --- [tts-analytics] [           main] o.s.c.s.m.DirectWithAttributesChannel    : Channel 'tts-analytics.postTtsAnalytics-in-0' has 1 subscriber(s).
2025-05-25T21:20:50.479+01:00  INFO 23932 --- [tts-analytics] [           main] o.s.i.endpoint.EventDrivenConsumer       : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2025-05-25T21:20:50.479+01:00  INFO 23932 --- [tts-analytics] [           main] o.s.i.channel.PublishSubscribeChannel    : Channel 'tts-analytics.errorChannel' has 1 subscriber(s).
2025-05-25T21:20:50.483+01:00  INFO 23932 --- [tts-analytics] [           main] o.s.i.endpoint.EventDrivenConsumer       : started bean '_org.springframework.integration.errorLogger'
2025-05-25T21:20:50.558+01:00  INFO 23932 --- [tts-analytics] [           main] o.s.c.s.binder.DefaultBinderFactory      : Creating binder: rabbit
2025-05-25T21:20:50.558+01:00  INFO 23932 --- [tts-analytics] [           main] o.s.c.s.binder.DefaultBinderFactory      : Constructing binder child context for rabbit
2025-05-25T21:20:50.760+01:00  INFO 23932 --- [tts-analytics] [           main] o.s.c.s.binder.DefaultBinderFactory      : Caching the binder: rabbit
2025-05-25T21:20:50.790+01:00  INFO 23932 --- [tts-analytics] [           main] c.s.b.r.p.RabbitExchangeQueueProvisioner : declaring queue for inbound: post-tts-analytics.analytics-group, bound to: post-tts-analytics
2025-05-25T21:20:50.804+01:00  INFO 23932 --- [tts-analytics] [           main] o.s.a.r.c.CachingConnectionFactory       : Attempting to connect to: [localhost:5672]
2025-05-25T21:20:50.876+01:00  INFO 23932 --- [tts-analytics] [           main] o.s.a.r.c.CachingConnectionFactory       : Created new connection: rabbitConnectionFactory#2055833f:0/SimpleConnection@1ff463bb [delegate=amqp://guest@127.0.0.1:5672/, localPort=54933]
2025-05-25T21:20:50.964+01:00  INFO 23932 --- [tts-analytics] [           main] o.s.c.stream.binder.BinderErrorChannel   : Channel 'rabbit-230456842.postTtsAnalytics-in-0.errors' has 1 subscriber(s).
2025-05-25T21:20:50.966+01:00  INFO 23932 --- [tts-analytics] [           main] o.s.c.stream.binder.BinderErrorChannel   : Channel 'rabbit-230456842.postTtsAnalytics-in-0.errors' has 2 subscriber(s).
2025-05-25T21:20:50.997+01:00  INFO 23932 --- [tts-analytics] [           main] o.s.i.a.i.AmqpInboundChannelAdapter      : started bean 'inbound.post-tts-analytics.analytics-group'
2025-05-25T21:20:51.020+01:00  INFO 23932 --- [tts-analytics] [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port 8090 (http) with context path '/'
2025-05-25T21:20:51.047+01:00  INFO 23932 --- [tts-analytics] [           main] c.e.t.TtsAnalyticsApplication            : Started TtsAnalyticsApplication in 5.096 seconds (process running for 5.797)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the consumer application, the startup logs show:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Binds to RabbitMQ queue: post-tts-analytics.analytics-group&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creates connection to localhost:5672&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AmqpInboundChannelAdapter : Started message listener (bean 'inbound.post-tts-analytics.analytics-group')&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;BinderErrorChannel with 2 subscribers (retry + DLQ handling)&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
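&lt;p&gt;The consumer side of this behavior is driven by binding configuration; a minimal sketch is shown below. The binding and destination names are inferred from the queue name post-tts-analytics.analytics-group in the logs, while the retry and DLQ values are illustrative assumptions:&lt;/p&gt;

```yaml
spring:
  cloud:
    stream:
      bindings:
        postTtsAnalytics-in-0:
          destination: post-tts-analytics   # RabbitMQ exchange
          group: analytics-group            # yields queue post-tts-analytics.analytics-group
          consumer:
            max-attempts: 3                 # retries before the message is given up (illustrative)
      rabbit:
        bindings:
          postTtsAnalytics-in-0:
            consumer:
              auto-bind-dlq: true           # provision a dead-letter queue (illustrative)
```

&lt;p&gt;The consumer group in the binding is what turns the anonymous queue into the durable, shared post-tts-analytics.analytics-group queue seen above.&lt;/p&gt;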

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/ # rabbitmqctl list_connections name user state protocol
Listing connections ...
name    user    state   protocol
172.17.0.1:43720 -&amp;gt; 172.17.0.2:5672     guest   running {0,9,1}
/ # rabbitmqctl list_channels name connection user confirm consumer_count
Listing channels ...
name    connection      user    confirm consumer_count
172.17.0.1:43720 -&amp;gt; 172.17.0.2:5672 (1) &amp;lt;rabbit@6332dec50309.1748204432.670.0&amp;gt;  guest   false   1
/ # rabbitmqctl list_exchanges name type durable auto_delete
Listing exchanges for vhost / ...
name    type    durable auto_delete
post-tts-analytics      topic   true    false
amq.match       headers true    false
amq.fanout      fanout  true    false
amq.rabbitmq.trace      topic   true    false
amq.headers     headers true    false
        direct  true    false
amq.topic       topic   true    false
amq.direct      direct  true    false
/ # rabbitmqctl list_queues name durable auto_delete consumers state
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
name    durable auto_delete     consumers       state
post-tts-analytics.analytics-group      true    false   1       running
/ # rabbitmqctl list_bindings source_name routing_key destination_name
Listing bindings for vhost /...
source_name     routing_key     destination_name
        post-tts-analytics.analytics-group      post-tts-analytics.analytics-group
post-tts-analytics      #       post-tts-analytics.analytics-group
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that all RabbitMQ resources are created correctly by Spring Cloud Stream.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025-05-25T21:38:52.121+01:00  INFO 15156 --- [tts] [nio-8080-exec-1] c.e.t.controller.TextToSpeechController  : textToSpeech request: TtsRequest[text=Hi this is a complete test. From RabbitMQ]
2025-05-25T21:38:52.809+01:00  INFO 15156 --- [tts] [nio-8080-exec-1] c.e.tts.service.TextToSpeechService      : do Something with analytics response: TtsAnalyticsResponse[device=Bot, countryIso=ZA]
Wrote synthesized speech to \tts\output\14449f58-7140-45a5-b024-26021229dfb3.wav
2025-05-25T21:38:53.063+01:00  INFO 15156 --- [tts] [nio-8080-exec-1] c.e.t.m.PostTtsAnalyticsPublisherService : sendAnalytics: PostTtsAnalyticsRequest[id=14449f58-7140-45a5-b024-26021229dfb3, creationDate=2025-05-25T21:38:53.063039900, processTimeMs=937]
2025-05-25T21:38:53.068+01:00  INFO 15156 --- [tts] [nio-8080-exec-1] o.s.c.s.binder.DefaultBinderFactory      : Creating binder: rabbit
2025-05-25T21:38:53.068+01:00  INFO 15156 --- [tts] [nio-8080-exec-1] o.s.c.s.binder.DefaultBinderFactory      : Constructing binder child context for rabbit
2025-05-25T21:38:53.188+01:00  INFO 15156 --- [tts] [nio-8080-exec-1] o.s.c.s.binder.DefaultBinderFactory      : Caching the binder: rabbit
2025-05-25T21:38:53.207+01:00  INFO 15156 --- [tts] [nio-8080-exec-1] o.s.a.r.c.CachingConnectionFactory       : Attempting to connect to: [localhost:5672]
2025-05-25T21:38:53.255+01:00  INFO 15156 --- [tts] [nio-8080-exec-1] o.s.a.r.c.CachingConnectionFactory       : Created new connection: rabbitConnectionFactory#54463380:0/SimpleConnection@18b2124b [delegate=amqp://guest@127.0.0.1:5672/, localPort=55076]
2025-05-25T21:38:53.293+01:00  INFO 15156 --- [tts] [nio-8080-exec-1] o.s.c.s.m.DirectWithAttributesChannel    : Channel 'tts.postTtsAnalytics-out-0' has 1 subscriber(s).
2025-05-25T21:38:53.311+01:00  INFO 15156 --- [tts] [nio-8080-exec-1] o.s.a.r.c.CachingConnectionFactory       : Attempting to connect to: [localhost:5672]
2025-05-25T21:38:53.319+01:00  INFO 15156 --- [tts] [nio-8080-exec-1] o.s.a.r.c.CachingConnectionFactory       : Created new connection: rabbitConnectionFactory.publisher#105dbd15:0/SimpleConnection@7243df50 [delegate=amqp://guest@127.0.0.1:5672/, localPort=55077]
2025-05-25T21:38:55.309+01:00  INFO 15156 --- [tts] [nio-8080-exec-3] c.e.t.controller.TextToSpeechController  : textToSpeech request: TtsRequest[text=Hi this is a complete test. From RabbitMQ]
2025-05-25T21:38:55.315+01:00  INFO 15156 --- [tts] [nio-8080-exec-3] c.e.tts.service.TextToSpeechService      : do Something with analytics response: TtsAnalyticsResponse[device=Tablet, countryIso=AU]
Wrote synthesized speech to tts\output\a4021436-fd12-486b-9d65-cff25c5611c5.wav
2025-05-25T21:38:55.400+01:00  INFO 15156 --- [tts] [nio-8080-exec-3] c.e.t.m.PostTtsAnalyticsPublisherService : sendAnalytics: PostTtsAnalyticsRequest[id=a4021436-fd12-486b-9d65-cff25c5611c5, creationDate=2025-05-25T21:38:55.400422300, processTimeMs=91]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By calling the "/tts" endpoint in the TTS application, we can see the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The application processes text-to-speech requests, generates audio files, and publishes analytics data to a RabbitMQ exchange using Spring Cloud Stream.&lt;/li&gt;
&lt;li&gt;Upon the first request, the application initializes the RabbitMQ binder, constructs the necessary context, and caches it for future use.&lt;/li&gt;
&lt;li&gt;The application establishes connections to RabbitMQ at &lt;strong&gt;localhost:5672&lt;/strong&gt;, confirming successful communication with the message broker.&lt;/li&gt;
&lt;li&gt;The application confirms that the output channel has an active subscriber, ensuring that messages are being routed correctly.&lt;/li&gt;
&lt;li&gt;Further requests are processed efficiently: the application reuses the established binder and connections, demonstrating effective resource management (processTimeMs=91 for the second request versus processTimeMs=937 for the first).&lt;/li&gt;
&lt;/ul&gt;
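&lt;p&gt;The producer side needs only an output binding pointing at the destination; a minimal sketch follows, with the binding name taken from the channel 'tts.postTtsAnalytics-out-0' in the logs (the rest of the project's configuration is an assumption):&lt;/p&gt;

```yaml
spring:
  cloud:
    stream:
      bindings:
        postTtsAnalytics-out-0:
          destination: post-tts-analytics   # exchange on RabbitMQ, topic on Kafka
```

&lt;p&gt;With this in place, a StreamBridge.send("postTtsAnalytics-out-0", payload) call publishes to the configured destination regardless of the broker behind it.&lt;/p&gt;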

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025-05-25T21:38:52.567+01:00  INFO 23932 --- [tts-analytics] [nio-8090-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring DispatcherServlet 'dispatcherServlet'
2025-05-25T21:38:52.567+01:00  INFO 23932 --- [tts-analytics] [nio-8090-exec-1] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
2025-05-25T21:38:52.567+01:00  INFO 23932 --- [tts-analytics] [nio-8090-exec-1] o.s.web.servlet.DispatcherServlet        : Completed initialization in 0 ms
2025-05-25T21:38:52.664+01:00  INFO 23932 --- [tts-analytics] [nio-8090-exec-1] c.e.t.controller.AnalyticsController     : doAnalytics request: TtsAnalyticsRequest[clientIp=0:0:0:0:0:0:0:1, userAgent=PostmanRuntime/7.44.0]
2025-05-25T21:38:53.381+01:00  INFO 23932 --- [tts-analytics] [alytics-group-1] c.e.t.m.PostTtsAnalyticsConsumerService  : processAnalytics: PostTtsAnalyticsRequest[id=14449f58-7140-45a5-b024-26021229dfb3, creationDate=2025-05-25T21:38:53.063039900, processTimeMs=937]
2025-05-25T21:38:55.313+01:00  INFO 23932 --- [tts-analytics] [nio-8090-exec-2] c.e.t.controller.AnalyticsController     : doAnalytics request: TtsAnalyticsRequest[clientIp=0:0:0:0:0:0:0:1, userAgent=PostmanRuntime/7.44.0]
2025-05-25T21:38:55.408+01:00  INFO 23932 --- [tts-analytics] [alytics-group-1] c.e.t.m.PostTtsAnalyticsConsumerService  : processAnalytics: PostTtsAnalyticsRequest[id=a4021436-fd12-486b-9d65-cff25c5611c5, creationDate=2025-05-25T21:38:55.400422300, processTimeMs=91]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the corresponding TTS Analytics logs above, which trace the messages sent by the TTS application, we see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The application acts as a consumer in a Spring Cloud Stream setup, receiving messages from a RabbitMQ exchange named &lt;strong&gt;post-tts-analytics&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Upon receiving a &lt;strong&gt;PostTtsAnalyticsRequest&lt;/strong&gt;, the &lt;strong&gt;PostTtsAnalyticsConsumerService&lt;/strong&gt; processes the message, which likely involves business logic such as storing analytics data or triggering further actions.&lt;/li&gt;
&lt;li&gt;The presence of the &lt;strong&gt;AnalyticsController&lt;/strong&gt; handling &lt;strong&gt;TtsAnalyticsRequest&lt;/strong&gt; indicates that the application also exposes REST endpoints, possibly for manual testing or additional data ingestion.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Kafka Demo
&lt;/h2&gt;

&lt;p&gt;We now run both applications with the environment variable BINDER_TYPE=kafka set. The producer application (TTS) runs exactly as in the previous example, with no difference.&lt;br&gt;
&lt;/p&gt;
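&lt;p&gt;The broker selection via BINDER_TYPE can be wired through Spring Cloud Stream's default-binder property; a minimal sketch is shown below (the exact property/variable mapping used in the project is an assumption):&lt;/p&gt;

```yaml
spring:
  cloud:
    stream:
      default-binder: ${BINDER_TYPE:rabbit}   # 'rabbit' or 'kafka'; falls back to rabbit
```

&lt;p&gt;Since both binders are on the classpath, this single property is enough to switch the whole application between brokers.&lt;/p&gt;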

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/

 :: Spring Boot ::                (v3.4.5)

2025-05-26T23:31:21.842+01:00  INFO 17204 --- [tts-analytics] [           main] c.e.t.TtsAnalyticsApplication            : No active profile set, falling back to 1 default profile: "default"
2025-05-26T23:31:24.238+01:00  INFO 17204 --- [tts-analytics] [           main] faultConfiguringBeanFactoryPostProcessor : No bean named 'errorChannel' has been explicitly defined. Therefore, a default PublishSubscribeChannel will be created.
2025-05-26T23:31:24.260+01:00  INFO 17204 --- [tts-analytics] [           main] faultConfiguringBeanFactoryPostProcessor : No bean named 'integrationHeaderChannelRegistry' has been explicitly defined. Therefore, a default DefaultHeaderChannelRegistry will be created.
2025-05-26T23:31:25.529+01:00  INFO 17204 --- [tts-analytics] [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port 8090 (http)
2025-05-26T23:31:25.556+01:00  INFO 17204 --- [tts-analytics] [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2025-05-26T23:31:25.556+01:00  INFO 17204 --- [tts-analytics] [           main] o.apache.catalina.core.StandardEngine    : Starting Servlet engine: [Apache Tomcat/10.1.40]
2025-05-26T23:31:25.673+01:00  INFO 17204 --- [tts-analytics] [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2025-05-26T23:31:25.676+01:00  INFO 17204 --- [tts-analytics] [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 3734 ms
2025-05-26T23:31:29.799+01:00  INFO 17204 --- [tts-analytics] [           main] o.s.c.s.m.DirectWithAttributesChannel    : Channel 'tts-analytics.postTtsAnalytics-in-0' has 1 subscriber(s).
2025-05-26T23:31:30.167+01:00  INFO 17204 --- [tts-analytics] [           main] o.s.i.endpoint.EventDrivenConsumer       : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2025-05-26T23:31:30.167+01:00  INFO 17204 --- [tts-analytics] [           main] o.s.i.channel.PublishSubscribeChannel    : Channel 'tts-analytics.errorChannel' has 1 subscriber(s).
2025-05-26T23:31:30.167+01:00  INFO 17204 --- [tts-analytics] [           main] o.s.i.endpoint.EventDrivenConsumer       : started bean '_org.springframework.integration.errorLogger'
2025-05-26T23:31:30.373+01:00  INFO 17204 --- [tts-analytics] [           main] o.s.c.s.binder.DefaultBinderFactory      : Creating binder: kafka
2025-05-26T23:31:30.376+01:00  INFO 17204 --- [tts-analytics] [           main] o.s.c.s.binder.DefaultBinderFactory      : Constructing binder child context for kafka
2025-05-26T23:31:30.980+01:00  INFO 17204 --- [tts-analytics] [           main] o.s.c.s.binder.DefaultBinderFactory      : Caching the binder: kafka
2025-05-26T23:31:31.055+01:00  INFO 17204 --- [tts-analytics] [           main] o.a.k.clients.admin.AdminClientConfig    : AdminClientConfig values: 
............
2025-05-26T23:31:37.664+01:00  INFO 17204 --- [tts-analytics] [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-analytics-group-2, groupId=analytics-group] Successfully joined group with generation Generation{generationId=1, memberId='consumer-analytics-group-2-2e0c644b-239a-444d-bedb-24be9d40df31', protocol='range'}
2025-05-26T23:31:37.674+01:00  INFO 17204 --- [tts-analytics] [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-analytics-group-2, groupId=analytics-group] Finished assignment for group at generation 1: {consumer-analytics-group-2-2e0c644b-239a-444d-bedb-24be9d40df31=Assignment(partitions=[post-tts-analytics-0])}
2025-05-26T23:31:37.691+01:00  INFO 17204 --- [tts-analytics] [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-analytics-group-2, groupId=analytics-group] Successfully synced group in generation Generation{generationId=1, memberId='consumer-analytics-group-2-2e0c644b-239a-444d-bedb-24be9d40df31', protocol='range'}
2025-05-26T23:31:37.692+01:00  INFO 17204 --- [tts-analytics] [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-analytics-group-2, groupId=analytics-group] Notifying assignor about the new Assignment(partitions=[post-tts-analytics-0])
2025-05-26T23:31:37.696+01:00  INFO 17204 --- [tts-analytics] [container-0-C-1] k.c.c.i.ConsumerRebalanceListenerInvoker : [Consumer clientId=consumer-analytics-group-2, groupId=analytics-group] Adding newly assigned partitions: post-tts-analytics-0
2025-05-26T23:31:37.713+01:00  INFO 17204 --- [tts-analytics] [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-analytics-group-2, groupId=analytics-group] Found no committed offset for partition post-tts-analytics-0
2025-05-26T23:31:37.736+01:00  INFO 17204 --- [tts-analytics] [container-0-C-1] o.a.k.c.c.internals.SubscriptionState    : [Consumer clientId=consumer-analytics-group-2, groupId=analytics-group] Resetting offset for partition post-tts-analytics-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:9092 (id: 1 rack: null)], epoch=0}}.
2025-05-26T23:31:37.738+01:00  INFO 17204 --- [tts-analytics] [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$2  : analytics-group: partitions assigned: [post-tts-analytics-0]
...........
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the following from logs above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spring Cloud Stream wires up the input binding channel named tts-analytics.postTtsAnalytics-in-0 and confirms it has one subscriber (consumer application).&lt;/li&gt;
&lt;li&gt;Immediately after, an error-logging endpoint is attached to the errorChannel, so any exceptions in message handling will be logged rather than silently dropped.&lt;/li&gt;
&lt;li&gt;The Kafka binder is created, its child context constructed, and cached.&lt;/li&gt;
&lt;li&gt;The Kafka consumer (clientId=consumer-analytics-group-2) successfully joins the analytics-group, receives its partition assignment (post-tts-analytics-0), syncs the group, and notifies the internal assignor.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following Kafka commands verify that all Kafka resources were created properly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/ $ /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
__consumer_offsets
post-tts-analytics
/ $ /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic post-tts-analytics
Topic: post-tts-analytics       TopicId: bq8s8k8mTJCexcXcG_s-3A PartitionCount: 1       ReplicationFactor: 1    Configs: segment.bytes=1073741824
        Topic: post-tts-analytics       Partition: 0    Leader: 1       Replicas: 1     Isr: 1  Elr:    LastKnownElr: 
/ $ /opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
analytics-group
/ $ /opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092  --describe --group analytics-group

GROUP           TOPIC              PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                                     HOST            CLIENT-ID
analytics-group post-tts-analytics 0          2               2               0               consumer-analytics-group-2-2e0c644b-239a-444d-bedb-24be9d40df31 /172.17.0.1     consumer-analytics-group-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we did earlier, we call the same endpoint and test the whole flow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025-05-26T23:59:25.601+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring DispatcherServlet 'dispatcherServlet'
2025-05-26T23:59:25.602+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
2025-05-26T23:59:25.604+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] o.s.web.servlet.DispatcherServlet        : Completed initialization in 1 ms
2025-05-26T23:59:25.785+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] c.e.t.controller.TextToSpeechController  : textToSpeech request: TtsRequest[text=Hi this is a complete test. From Kafka]
2025-05-26T23:59:26.992+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] c.e.tts.service.TextToSpeechService      : do Something with analytics response: TtsAnalyticsResponse[device=Mobile, countryIso=ZA]
Wrote synthesized speech to tts\output\7e82f0d0-050f-4e05-8f50-f39fc8549ae2.wav
2025-05-26T23:59:27.492+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] c.e.t.m.PostTtsAnalyticsPublisherService : sendAnalytics: PostTtsAnalyticsRequest[id=7e82f0d0-050f-4e05-8f50-f39fc8549ae2, creationDate=2025-05-26T23:59:27.491530100, processTimeMs=1701]
2025-05-26T23:59:27.504+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] o.s.c.s.binder.DefaultBinderFactory      : Creating binder: kafka
2025-05-26T23:59:27.504+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] o.s.c.s.binder.DefaultBinderFactory      : Constructing binder child context for kafka
2025-05-26T23:59:27.793+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] o.s.c.s.binder.DefaultBinderFactory      : Caching the binder: kafka
2025-05-26T23:59:27.815+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] o.s.c.s.b.k.p.KafkaTopicProvisioner      : Using kafka topic for outbound: post-tts-analytics
2025-05-26T23:59:27.826+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] o.a.k.clients.admin.AdminClientConfig    : AdminClientConfig values: 
..........
2025-05-26T23:59:28.970+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] o.a.k.c.t.i.KafkaMetricsCollector        : initializing Kafka metrics collector
2025-05-26T23:59:29.023+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] o.a.kafka.common.utils.AppInfoParser     : Kafka version: 3.8.1
2025-05-26T23:59:29.023+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId: 70d6ff42debf7e17
2025-05-26T23:59:29.023+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] o.a.kafka.common.utils.AppInfoParser     : Kafka startTimeMs: 1748300369023
2025-05-26T23:59:29.044+01:00  INFO 19336 --- [tts] [ad | producer-1] org.apache.kafka.clients.Metadata        : [Producer clientId=producer-1] Cluster ID: 5L6g3nShT-eMCtK--X86sw
2025-05-26T23:59:29.066+01:00  INFO 19336 --- [tts] [nio-8080-exec-2] o.s.c.s.m.DirectWithAttributesChannel    : Channel 'tts.postTtsAnalytics-out-0' has 1 subscriber(s).
2025-05-26T23:59:36.998+01:00  INFO 19336 --- [tts] [nio-8080-exec-3] c.e.t.controller.TextToSpeechController  : textToSpeech request: TtsRequest[text=Hi this is a complete test. From Kafka]
2025-05-26T23:59:37.007+01:00  INFO 19336 --- [tts] [nio-8080-exec-3] c.e.tts.service.TextToSpeechService      : do Something with analytics response: TtsAnalyticsResponse[device=SmartTV, countryIso=US]
Wrote synthesized speech to tts\output\fe504b51-5149-4f6b-a65f-86eecf6daa66.wav
2025-05-26T23:59:37.158+01:00  INFO 19336 --- [tts] [nio-8080-exec-3] c.e.t.m.PostTtsAnalyticsPublisherService : sendAnalytics: PostTtsAnalyticsRequest[id=fe504b51-5149-4f6b-a65f-86eecf6daa66, creationDate=2025-05-26T23:59:37.158350700, processTimeMs=159]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see from logs above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DefaultBinderFactory creates the binder, constructs its child context for kafka, then caches the binder instance: this is our "magic switch" picking Kafka.&lt;/li&gt;
&lt;li&gt;KafkaTopicProvisioner confirms it will use the topic post-tts-analytics, ensuring the topic exists.&lt;/li&gt;
&lt;li&gt;The producer creates the messages and sends them to the topic, where they are processed by the consumer as shown below.&lt;/li&gt;
&lt;/ul&gt;
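&lt;p&gt;To inspect the published messages directly (this requires the running Kafka container from the setup below, so it is shown for reference only), the stock console consumer can be used:&lt;/p&gt;

```shell
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic post-tts-analytics --from-beginning
```

&lt;p&gt;This prints each PostTtsAnalyticsRequest payload as it lands on the post-tts-analytics topic.&lt;/p&gt;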

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025-05-26T23:59:26.739+01:00  INFO 17204 --- [tts-analytics] [nio-8090-exec-1] c.e.t.controller.AnalyticsController     : doAnalytics request: TtsAnalyticsRequest[clientIp=0:0:0:0:0:0:0:1, userAgent=PostmanRuntime/7.44.0]
2025-05-26T23:59:29.272+01:00  INFO 17204 --- [tts-analytics] [container-0-C-1] c.e.t.m.PostTtsAnalyticsConsumerService  : processAnalytics: PostTtsAnalyticsRequest[id=7e82f0d0-050f-4e05-8f50-f39fc8549ae2, creationDate=2025-05-26T23:59:27.491530100, processTimeMs=1701]
2025-05-26T23:59:37.004+01:00  INFO 17204 --- [tts-analytics] [nio-8090-exec-2] c.e.t.controller.AnalyticsController     : doAnalytics request: TtsAnalyticsRequest[clientIp=0:0:0:0:0:0:0:1, userAgent=PostmanRuntime/7.44.0]
2025-05-26T23:59:37.168+01:00  INFO 17204 --- [tts-analytics] [container-0-C-1] c.e.t.m.PostTtsAnalyticsConsumerService  : processAnalytics: PostTtsAnalyticsRequest[id=fe504b51-5149-4f6b-a65f-86eecf6daa66, creationDate=2025-05-26T23:59:37.158350700, processTimeMs=159]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For these tests, the RabbitMQ and Kafka instances run locally as Docker containers, started with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name=kafka -p 9092:9092 apache/kafka
docker run -d --name=rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:4-management
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Broker-Agnostic Messaging: Spring Cloud Stream provides a thin abstraction over messaging systems (RabbitMQ, Kafka, etc.), so you can write your producer and consumer logic once and switch brokers simply by changing an environment variable.&lt;/li&gt;
&lt;li&gt;Key Components: It introduces Binders (plug‐ins for specific brokers), Bindings (logical links to destinations), and the StreamBridge API for imperative sends—all wired via Spring Boot auto-configuration.&lt;/li&gt;
&lt;li&gt;Lightweight Configuration: Complex patterns like retries and dead-letter queues require only a few YAML lines (max-attempts, back-off-initial-interval), enabling enterprise-grade messaging with minimal boilerplate.&lt;/li&gt;
&lt;li&gt;Testing Strategy: Demonstrates in-JVM tests using TestChannelBinderConfiguration plus InputDestination/OutputDestination and CompositeMessageConverter to verify end-to-end message publishing and consumption without a real broker.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/AnirAchmrar/spring-cloud-stream" rel="noopener noreferrer"&gt;Project GitHub Repo&lt;/a&gt;&lt;/p&gt;

</description>
      <category>springboot</category>
      <category>microservices</category>
      <category>eventdriven</category>
      <category>java</category>
    </item>
    <item>
      <title>Beginner friendly: deploy a Spring Boot application to Kubernetes</title>
      <dc:creator>AISSAM ASSOUIK</dc:creator>
      <pubDate>Sat, 17 May 2025 18:33:13 +0000</pubDate>
      <link>https://forem.com/aissam_assouik/beginner-friendly-deploy-a-spring-boot-application-to-kubernetes-17ge</link>
      <guid>https://forem.com/aissam_assouik/beginner-friendly-deploy-a-spring-boot-application-to-kubernetes-17ge</guid>
<description>&lt;p&gt;Deploying our applications to Kubernetes offloads many heavy deployment-related tasks, such as service discovery and horizontal scaling. With Kubernetes, we don't need to embed those concerns in our code; instead, they are delegated to Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use case
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkn9jrvkhk6q7p47h57k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkn9jrvkhk6q7p47h57k.png" alt="Text-To-Speech Spring Boot deploy on K8s" width="800" height="649"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are going to deploy two microservices to a Kubernetes cluster using the following Kubernetes resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Namespace&lt;/strong&gt;, spring-boot, will help us to isolate our resources within our cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment&lt;/strong&gt;, tts and tts-analytics, to manage the set of Pods running our Spring Boot applications (Text-To-Speech and Text-To-Speech Analytics microservices).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Services&lt;/strong&gt;, to expose our running applications for pod-to-pod communication using a &lt;strong&gt;ClusterIP&lt;/strong&gt; service type and a &lt;strong&gt;NodePort&lt;/strong&gt; service type for outside-to-cluster communication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HorizontalPodAutoscaler&lt;/strong&gt;, to showcase Kubernetes' native autoscaling: it targets the deployment of the TTS microservice and scales out and in based on resource utilization (such as CPU and memory) across running replicas.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What do these microservices do?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;TTS exposes one endpoint that converts user-entered text to speech using the FreeTTS Java library.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;TTS Analytics analyzes user IP addresses and User-Agent headers to provide device and country info (for this lab, we only mock this behavior).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Spring Boot
&lt;/h2&gt;

&lt;p&gt;It's worth mentioning the benefits of using the &lt;a href="https://github.com/GoogleContainerTools/jib/tree/master/jib-maven-plugin" rel="noopener noreferrer"&gt;Jib Maven Plugin&lt;/a&gt; to build our Spring Boot application images: Jib helps us with image build optimization and customization.&lt;br&gt;
We also use Kubernetes DNS records to communicate with services; for example, our TTS Analytics service is assigned the DNS name tts-analytics-svc.spring-boot.svc.cluster.local.&lt;/p&gt;
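&lt;p&gt;For instance, the TTS application could reach TTS Analytics through a property pointing at that DNS name; a sketch follows, where the property name tts.analytics.base-url is hypothetical (the port 8090 matches the service definition later in this article):&lt;/p&gt;

```yaml
tts:
  analytics:
    base-url: http://tts-analytics-svc.spring-boot.svc.cluster.local:8090
```

&lt;p&gt;Because the name is resolved by the cluster DNS, the pods behind the service can come and go without any configuration change.&lt;/p&gt;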
&lt;h2&gt;
  
  
Namespace
&lt;/h2&gt;

&lt;p&gt;We are going to create the spring-boot namespace to group and isolate our Kubernetes resources for this lab. In reality, for simple clusters we should work with the default namespace, as mentioned in &lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#when-to-use-multiple-namespaces" rel="noopener noreferrer"&gt;When to Use Multiple Namespaces&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: spring-boot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deployment
&lt;/h2&gt;

&lt;p&gt;The TTS and TTS Analytics microservices will be deployed using the next manifest files. Additionally, we can set container-level resource requests and limits with the &lt;strong&gt;spec.containers[].resources&lt;/strong&gt; field. For the TTS deployment, we configured the CPU and memory requests (the resources allocated to our container at pod scheduling) to 100 milliCPU (=0.1 CPU) and 100 mebibytes respectively, and the limits to 300 milliCPU (=0.3 CPU) and 300 mebibytes. What are we expecting by setting those limits? &lt;br&gt;
CPU limits are hard limits: when a container approaches its CPU limit, the kernel restricts its access to the CPU by throttling, which guarantees the container never uses more CPU than the configured limit. Memory limits, on the other hand, are enforced by the kernel with out-of-memory (OOM) kills, meaning a container that uses more memory than its configured limit will be terminated.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tts
  name: tts
  namespace: spring-boot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tts
  template:
    metadata:
      labels:
        app: tts
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - image: registry.hub.docker.com/${REPO_NAME}/tts:latest
        name: tts
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 300m
            memory: 300Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tts-analytics
  name: tts-analytics
  namespace: spring-boot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tts-analytics
  template:
    metadata:
      labels:
        app: tts-analytics
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - image: registry.hub.docker.com/${REPO_NAME}/ttsanalytics:latest
        name: tts-analytics
        ports:
        - containerPort: 8090
        resources:
          requests:
            cpu: 400m
            memory: 100Mi
          limits:
            cpu: 1000m
            memory: 500Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Depending on your Kubernetes cluster setup and environment, you may consider the following:&lt;br&gt;
If you are using a private registry, you may need to set &lt;strong&gt;imagePullSecrets&lt;/strong&gt; by creating a &lt;strong&gt;docker-registry&lt;/strong&gt; secret. For the image field as well, you may need to prefix the repository name with &lt;strong&gt;registry.hub.docker.com&lt;/strong&gt; if you're using Docker Hub.&lt;/p&gt;
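&lt;p&gt;Such a secret can be created with kubectl; a sketch follows (replace the credential placeholders with your own values; regcred is the secret name referenced by the manifests above):&lt;/p&gt;

```shell
kubectl create secret docker-registry regcred \
  --docker-server=registry.hub.docker.com \
  --docker-username=YOUR_USERNAME \
  --docker-password=YOUR_PASSWORD \
  --namespace=spring-boot
```

&lt;p&gt;Note that the secret must live in the same namespace as the pods that reference it.&lt;/p&gt;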
&lt;h2&gt;
  
  
  Service
&lt;/h2&gt;

&lt;p&gt;We expose the TTS Analytics microservice with a &lt;strong&gt;ClusterIP service&lt;/strong&gt;, so the corresponding pods are reachable for in-cluster communication only. The service is assigned a virtual IP address, and Kubernetes load-balances traffic across the corresponding pods.&lt;br&gt;
The TTS microservice will be exposed for communication from outside the cluster; it remains reachable inside the cluster as well, since a &lt;strong&gt;NodePort service&lt;/strong&gt; is built on top of a ClusterIP service. The only difference is that each node proxies the configured nodePort (30234) to our service; in other words, we can use a node's public IP address to access our NodePort service on port 30234.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: tts-svc
  namespace: spring-boot
spec:
  type: NodePort
  selector:
    app: tts
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30234
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: tts-analytics-svc
  namespace: spring-boot
spec:
  type: ClusterIP
  selector:
    app: tts-analytics
  ports:
    - port: 8090
      targetPort: 8090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
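&lt;p&gt;Inside the cluster, a ClusterIP service is also reachable through its stable DNS name, following the standard &lt;code&gt;&amp;lt;service&amp;gt;.&amp;lt;namespace&amp;gt;.svc.cluster.local&lt;/code&gt; convention. For example, from any pod (the endpoint path below is a hypothetical illustration, not the actual analytics API):&lt;/p&gt;

```shell
# From inside any pod in the cluster (the /analytics path is hypothetical):
curl http://tts-analytics-svc.spring-boot.svc.cluster.local:8090/analytics
```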



&lt;h2&gt;
  
  
  Horizontal Pod Autoscaler
&lt;/h2&gt;

&lt;p&gt;HPA is Kubernetes' native autoscaling feature: additional pods are deployed to match an increase in demand, and the workload scales in when traffic drops. Basically, the HPA controller (the component that implements the behavior of the HPA resource) calculates the desired replica count based on the ratio between the current metric value and the desired metric value. &lt;br&gt;
HPA supports resource metrics such as cpu and memory, for which we can set the target value, average value, or average utilization at which HPA should trigger scaling actions. It's worth mentioning that the &lt;strong&gt;Resource&lt;/strong&gt; metric type is pod-level in scope; for more granular, container-level metrics we can use the &lt;strong&gt;ContainerResource&lt;/strong&gt; metric type. &lt;br&gt;
For our example, we are going to target the TTS deployment with bounds of 1 to 5 replicas: the replica count will not exceed 5 under maximum traffic load and will be reduced to a single replica when traffic is down. We use two metrics, cpu and memory, at the same time; the HPA controller calculates the desired replica count for each metric separately, and the larger of the two values wins. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;CPU metric: using the &lt;strong&gt;Utilization&lt;/strong&gt; metric target type, the HPA controller calculates the ratio of current cpu usage to requested cpu across all pods and keeps the corresponding average utilization at 60%.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Memory metric: using the &lt;strong&gt;AverageValue&lt;/strong&gt; metric target type, the HPA controller tries to keep the average memory usage across all targeted pods at 500Mi.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
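&lt;p&gt;The scaling math itself is a one-liner: desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue), clamped to the min/max bounds. A minimal sketch with illustrative numbers (a single replica observed at 300% average cpu utilization against the 60% target):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Sketch of the HPA scaling formula:
#   desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
current_replicas=1
current_cpu=300   # observed average cpu utilization, in percent (illustrative)
target_cpu=60     # target average utilization from the HPA spec
max_replicas=5

# Ceiling division using integer arithmetic
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))

# Clamp to maxReplicas
if [ "$desired" -gt "$max_replicas" ]; then desired=$max_replicas; fi
echo "$desired"
```

With two metrics configured, this calculation runs once per metric and the largest result is used.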
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: tts
  namespace: spring-boot
spec:
  maxReplicas: 5
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tts
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 500Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;We are going to create all the corresponding resources with the following &lt;strong&gt;kubectl&lt;/strong&gt; commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;student@control-plane:~$ kubectl apply -f namespace.yaml
student@control-plane:~$ envsubst &amp;lt; tts-deploy.yaml | kubectl apply -f -
student@control-plane:~$ envsubst &amp;lt; tts-analytics-deploy.yaml | kubectl apply -f -
student@control-plane:~$ kubectl apply -f tts-hpa.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We expect the following Kubernetes resources to be created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;student@control-plane:~$ kubectl -n spring-boot get pod,deploy,svc,hpa
NAME                                 READY   STATUS    RESTARTS   AGE
pod/tts-analytics-6b5f4577b5-6c9gg   1/1     Running   0          4m28s
pod/tts-d996c687d-v9wgx              1/1     Running   0          2m43s

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tts             1/1     1            1           2m43s
deployment.apps/tts-analytics   1/1     1            1           4m28s

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/tts-analytics-svc   ClusterIP   10.97.203.149    &amp;lt;none&amp;gt;        8090/TCP         4m28s
service/tts-svc             NodePort    10.110.233.162   &amp;lt;none&amp;gt;        8080:30234/TCP   2m43s

NAME                                      REFERENCE        TARGETS                                MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/tts   Deployment/tts   cpu: 4%/60%, memory: 120954880/500Mi   1         5         1          95s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's test if TTS and TTS Analytics are able to complete a request flow and communicate with each other:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aissam@aissam:/aissam/Downloads/test$ curl -X POST http://$PUBLIC_NODE_IP:30234/tts -H "Content-Type: application/json" -d '{"text
":"Hi this is a test!!"}' -OJ
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 48778  100 48748  100    30   2363      1  0:00:30  0:00:20  0:00:10 12316
aissam@aissam:/aissam/Downloads/test$ ls
f51b3d0d-0038-42a4-8eae-2a7ba1d38ce9.wav
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;student@control-plane:~$ kubectl -n spring-boot logs tts-d996c687d-v9wgx -f

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/

 :: Spring Boot ::                (v3.4.5)

2025-05-17T16:28:07.715Z  INFO 1 --- [producer] [           main] com.example.tts.TtsApplication           : Starting TtsApplication using Java 21.0.7 with PID 1 (/app/classes started by root in /)
2025-05-17T16:28:07.883Z  INFO 1 --- [producer] [           main] com.example.tts.TtsApplication           : No active profile set, falling back to 1 default profile: "default"
2025-05-17T16:28:20.330Z  INFO 1 --- [producer] [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port 8080 (http)
2025-05-17T16:28:20.532Z  INFO 1 --- [producer] [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2025-05-17T16:28:20.533Z  INFO 1 --- [producer] [           main] o.apache.catalina.core.StandardEngine    : Starting Servlet engine: [Apache Tomcat/10.1.40]
2025-05-17T16:28:21.679Z  INFO 1 --- [producer] [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2025-05-17T16:28:21.681Z  INFO 1 --- [producer] [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 13236 ms
2025-05-17T16:28:31.145Z  INFO 1 --- [producer] [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port 8080 (http) with context path '/'
2025-05-17T16:28:31.339Z  INFO 1 --- [producer] [           main] com.example.tts.TtsApplication           : Started TtsApplication in 28.549 seconds (process running for 31.811)
2025-05-17T16:28:51.978Z  INFO 1 --- [producer] [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring DispatcherServlet 'dispatcherServlet'
2025-05-17T16:28:51.979Z  INFO 1 --- [producer] [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
2025-05-17T16:28:51.984Z  INFO 1 --- [producer] [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : Completed initialization in 5 ms
2025-05-17T16:28:52.880Z  INFO 1 --- [producer] [nio-8080-exec-1] c.e.t.controller.TextToSpeechController  : textToSpeech request: TtsRequest[text=Hi this is a test!!]
2025-05-17T16:28:55.042Z  INFO 1 --- [producer] [nio-8080-exec-1] c.e.tts.service.TextToSpeechService      : do Something with analytics response: TtsAnalyticsResponse[device=Desktop, countryIso=FR]
Wrote synthesized speech to /output/f51b3d0d-0038-42a4-8eae-2a7ba1d38ce9.wav
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;student@control-plane:~$ kubectl -n spring-boot logs tts-analytics-6b5f4577b5-6c9gg -f

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/

 :: Spring Boot ::                (v3.4.5)

2025-05-17T16:28:08.589Z  INFO 1 --- [consumer] [           main] c.e.t.TtsAnalyticsApplication            : Starting TtsAnalyticsApplication using Java 21.0.7 with PID 1 (/app/classes started by root in /)
2025-05-17T16:28:08.600Z  INFO 1 --- [consumer] [           main] c.e.t.TtsAnalyticsApplication            : No active profile set, falling back to 1 default profile: "default"
2025-05-17T16:28:11.989Z  INFO 1 --- [consumer] [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port 8090 (http)
2025-05-17T16:28:12.027Z  INFO 1 --- [consumer] [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2025-05-17T16:28:12.029Z  INFO 1 --- [consumer] [           main] o.apache.catalina.core.StandardEngine    : Starting Servlet engine: [Apache Tomcat/10.1.40]
2025-05-17T16:28:12.381Z  INFO 1 --- [consumer] [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2025-05-17T16:28:12.421Z  INFO 1 --- [consumer] [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 3677 ms
2025-05-17T16:28:14.518Z  INFO 1 --- [consumer] [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port 8090 (http) with context path '/'
2025-05-17T16:28:14.552Z  INFO 1 --- [consumer] [           main] c.e.t.TtsAnalyticsApplication            : Started TtsAnalyticsApplication in 7.325 seconds (process running for 8.222)
2025-05-17T16:28:54.054Z  INFO 1 --- [consumer] [nio-8090-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring DispatcherServlet 'dispatcherServlet'
2025-05-17T16:28:54.055Z  INFO 1 --- [consumer] [nio-8090-exec-1] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
2025-05-17T16:28:54.059Z  INFO 1 --- [consumer] [nio-8090-exec-1] o.s.web.servlet.DispatcherServlet        : Completed initialization in 3 ms
2025-05-17T16:28:54.412Z  INFO 1 --- [consumer] [nio-8090-exec-1] c.e.t.controller.AnalyticsController     : doAnalytics request: TtsAnalyticsRequest[clientIp=10.0.0.212, userAgent=curl/8.5.0]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Everything seems to work correctly! Next, we will overwhelm the TTS microservice with requests in order to increase the load, and then inspect how HPA behaves in response.&lt;br&gt;
We are going to use the following script to run 50 requests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env bash

# Replace with your actual node public IP
export PUBLIC_NODE_IP=...........

for i in $(seq 1 50); do
  curl -X POST http://"$PUBLIC_NODE_IP":30234/tts \
       -H "Content-Type: application/json" \
       -d '{"text":"Hi this is a test!!"}' \
       -OJ &amp;amp;
done

# Wait for all background requests to finish
wait
echo "All 50 POST requests have completed."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we watch the behavior of HPA before, during, and after the script finished:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;student@control-plane:~$ kubectl -n spring-boot get hpa tts -w
NAME   REFERENCE        TARGETS                                MINPODS   MAXPODS   REPLICAS   AGE
tts    Deployment/tts   cpu: 4%/60%, memory: 156020736/500Mi   1         5         1          2m21s
tts    Deployment/tts   cpu: 3%/60%, memory: 156020736/500Mi   1         5         1          2m31s
tts    Deployment/tts   cpu: 4%/60%, memory: 156020736/500Mi   1         5         1          3m16s
tts    Deployment/tts   cpu: 47%/60%, memory: 181080064/500Mi   1         5         1          3m47s
tts    Deployment/tts   cpu: 302%/60%, memory: 263831552/500Mi   1         5         1          4m2s
tts    Deployment/tts   cpu: 300%/60%, memory: 268967936/500Mi   1         5         4          4m17s
tts    Deployment/tts   cpu: 248%/60%, memory: 168945664/500Mi   1         5         5          4m32s
tts    Deployment/tts   cpu: 296%/60%, memory: 133169152/500Mi   1         5         5          4m47s
tts    Deployment/tts   cpu: 265%/60%, memory: 118334259200m/500Mi   1         5         5          5m2s
tts    Deployment/tts   cpu: 250%/60%, memory: 136432844800m/500Mi   1         5         5          5m17s
tts    Deployment/tts   cpu: 166%/60%, memory: 152010752/500Mi       1         5         5          5m32s
tts    Deployment/tts   cpu: 70%/60%, memory: 153232179200m/500Mi    1         5         5          5m48s
tts    Deployment/tts   cpu: 63%/60%, memory: 153314918400m/500Mi    1         5         5          6m3s
tts    Deployment/tts   cpu: 61%/60%, memory: 153285427200m/500Mi    1         5         5          6m18s
tts    Deployment/tts   cpu: 60%/60%, memory: 153246105600m/500Mi    1         5         5          6m33s
tts    Deployment/tts   cpu: 60%/60%, memory: 153327206400m/500Mi    1         5         5          6m48s
tts    Deployment/tts   cpu: 56%/60%, memory: 153352601600m/500Mi    1         5         5          7m3s
tts    Deployment/tts   cpu: 62%/60%, memory: 153411584/500Mi        1         5         5          7m18s
tts    Deployment/tts   cpu: 60%/60%, memory: 153486950400m/500Mi    1         5         5          7m33s
tts    Deployment/tts   cpu: 55%/60%, memory: 153413222400m/500Mi    1         5         5          7m48s
tts    Deployment/tts   cpu: 59%/60%, memory: 153544294400m/500Mi    1         5         5          8m3s
tts    Deployment/tts   cpu: 59%/60%, memory: 153555763200m/500Mi    1         5         5          8m18s
tts    Deployment/tts   cpu: 56%/60%, memory: 153517260800m/500Mi    1         5         5          8m33s
tts    Deployment/tts   cpu: 61%/60%, memory: 153539379200m/500Mi    1         5         5          8m48s
tts    Deployment/tts   cpu: 59%/60%, memory: 153495142400m/500Mi    1         5         5          9m3s
tts    Deployment/tts   cpu: 60%/60%, memory: 153457459200m/500Mi    1         5         5          9m18s
tts    Deployment/tts   cpu: 60%/60%, memory: 153468108800m/500Mi    1         5         5          9m33s
tts    Deployment/tts   cpu: 60%/60%, memory: 153416499200m/500Mi    1         5         5          9m48s
tts    Deployment/tts   cpu: 59%/60%, memory: 153436979200m/500Mi    1         5         5          10m
tts    Deployment/tts   cpu: 52%/60%, memory: 153480396800m/500Mi    1         5         5          10m
tts    Deployment/tts   cpu: 59%/60%, memory: 153458278400m/500Mi    1         5         5          10m
tts    Deployment/tts   cpu: 63%/60%, memory: 153523814400m/500Mi    1         5         5          10m
tts    Deployment/tts   cpu: 46%/60%, memory: 153424691200m/500Mi    1         5         5          11m
tts    Deployment/tts   cpu: 59%/60%, memory: 153407488/500Mi        1         5         5          11m
tts    Deployment/tts   cpu: 60%/60%, memory: 153355878400m/500Mi    1         5         5          11m
tts    Deployment/tts   cpu: 61%/60%, memory: 153372262400m/500Mi    1         5         5          11m
tts    Deployment/tts   cpu: 63%/60%, memory: 153357516800m/500Mi    1         5         5          12m
tts    Deployment/tts   cpu: 60%/60%, memory: 153169100800m/500Mi    1         5         5          12m
tts    Deployment/tts   cpu: 60%/60%, memory: 153171558400m/500Mi    1         5         5          12m
tts    Deployment/tts   cpu: 61%/60%, memory: 153098649600m/500Mi    1         5         5          12m
tts    Deployment/tts   cpu: 62%/60%, memory: 153107660800m/500Mi    1         5         5          13m
tts    Deployment/tts   cpu: 61%/60%, memory: 153069158400m/500Mi    1         5         5          13m
tts    Deployment/tts   cpu: 61%/60%, memory: 152816844800m/500Mi    1         5         5          13m
tts    Deployment/tts   cpu: 44%/60%, memory: 152460492800m/500Mi    1         5         5          13m
tts    Deployment/tts   cpu: 3%/60%, memory: 152462131200m/500Mi     1         5         5          14m
tts    Deployment/tts   cpu: 3%/60%, memory: 152467046400m/500Mi     1         5         5          14m
tts    Deployment/tts   cpu: 3%/60%, memory: 152388403200m/500Mi     1         5         5          14m
tts    Deployment/tts   cpu: 3%/60%, memory: 152393318400m/500Mi     1         5         5          14m
tts    Deployment/tts   cpu: 3%/60%, memory: 151896064/500Mi         1         5         5          15m
tts    Deployment/tts   cpu: 3%/60%, memory: 151904256/500Mi         1         5         5          15m
tts    Deployment/tts   cpu: 3%/60%, memory: 151910809600m/500Mi     1         5         5          15m
tts    Deployment/tts   cpu: 3%/60%, memory: 151916544/500Mi         1         5         5          15m
tts    Deployment/tts   cpu: 3%/60%, memory: 151923916800m/500Mi     1         5         5          16m
tts    Deployment/tts   cpu: 3%/60%, memory: 151925555200m/500Mi     1         5         5          16m
tts    Deployment/tts   cpu: 3%/60%, memory: 151929651200m/500Mi     1         5         5          16m
tts    Deployment/tts   cpu: 3%/60%, memory: 151931289600m/500Mi     1         5         5          16m
tts    Deployment/tts   cpu: 3%/60%, memory: 151934566400m/500Mi     1         5         5          17m
tts    Deployment/tts   cpu: 3%/60%, memory: 151936204800m/500Mi     1         5         5          17m
tts    Deployment/tts   cpu: 3%/60%, memory: 151939481600m/500Mi     1         5         5          17m
tts    Deployment/tts   cpu: 3%/60%, memory: 151940300800m/500Mi     1         5         5          18m
tts    Deployment/tts   cpu: 3%/60%, memory: 151942758400m/500Mi     1         5         5          18m
tts    Deployment/tts   cpu: 3%/60%, memory: 151946035200m/500Mi     1         5         5          18m
tts    Deployment/tts   cpu: 3%/60%, memory: 158087168/500Mi         1         5         4          18m
tts    Deployment/tts   cpu: 3%/60%, memory: 193366016/500Mi         1         5         2          19m
tts    Deployment/tts   cpu: 3%/60%, memory: 193372160/500Mi         1         5         2          20m
tts    Deployment/tts   cpu: 4%/60%, memory: 193372160/500Mi         1         5         2          20m
tts    Deployment/tts   cpu: 4%/60%, memory: 193396736/500Mi         1         5         2          21m
tts    Deployment/tts   cpu: 3%/60%, memory: 193396736/500Mi         1         5         2          21m
tts    Deployment/tts   cpu: 4%/60%, memory: 193396736/500Mi         1         5         2          22m
tts    Deployment/tts   cpu: 3%/60%, memory: 193396736/500Mi         1         5         2          22m
tts    Deployment/tts   cpu: 4%/60%, memory: 193396736/500Mi         1         5         2          23m
tts    Deployment/tts   cpu: 4%/60%, memory: 193398784/500Mi         1         5         2          23m
tts    Deployment/tts   cpu: 3%/60%, memory: 193398784/500Mi         1         5         2          23m
tts    Deployment/tts   cpu: 4%/60%, memory: 257855488/500Mi         1         5         1          24m
tts    Deployment/tts   cpu: 3%/60%, memory: 257855488/500Mi         1         5         1          25m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  37m   horizontal-pod-autoscaler  New size: 4; reason: cpu resource utilization (percentage of request) above target
  Normal  SuccessfulRescale  37m   horizontal-pod-autoscaler  New size: 5; reason: cpu resource utilization (percentage of request) above target
  Normal  SuccessfulRescale  22m   horizontal-pod-autoscaler  New size: 4; reason: All metrics below target
  Normal  SuccessfulRescale  22m   horizontal-pod-autoscaler  New size: 2; reason: All metrics below target
  Normal  SuccessfulRescale  17m   horizontal-pod-autoscaler  New size: 1; reason: All metrics below target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initially we had one replica of our TTS microservice; after the load increased, the HPA controller tried to match the target value by increasing the replica count. After the load went down, the HPA controller took about 5 minutes to scale in from 5 replicas to 2, and finally down to minReplicas. This period is called the &lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#stabilization-window" rel="noopener noreferrer"&gt;Stabilization Window&lt;/a&gt;; it is used to stabilize the replica count when the metric keeps fluctuating.&lt;/p&gt;
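&lt;p&gt;The scale-down stabilization window defaults to 300 seconds, which matches the roughly 5-minute delay observed here. If needed, it can be tuned through the &lt;strong&gt;behavior&lt;/strong&gt; field of the HPA spec; a minimal sketch (the 60-second value is only an illustration):&lt;/p&gt;

```yaml
# Excerpt of an autoscaling/v2 HPA spec: shorten the scale-down window
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 60
```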

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;We showed how to deploy our Spring Boot microservices on Kubernetes and expose them for external and internal communication. In addition, we showcased how Kubernetes' native autoscaling feature helps us use cluster resources efficiently by scaling out when traffic increases and scaling in when it decreases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/AnirAchmrar/kubernetes-0" rel="noopener noreferrer"&gt;Project GitHub Repo&lt;/a&gt;&lt;/p&gt;

</description>
      <category>springboot</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
