<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Matthew Casperson</title>
    <description>The latest articles on Forem by Matthew Casperson (@mcasperson).</description>
    <link>https://forem.com/mcasperson</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F334878%2F22293458-4e48-4408-aea2-38741f527614.jpeg</url>
      <title>Forem: Matthew Casperson</title>
      <link>https://forem.com/mcasperson</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mcasperson"/>
    <language>en</language>
    <item>
      <title>Private LLMs for GitHub Actions</title>
      <dc:creator>Matthew Casperson</dc:creator>
      <pubDate>Mon, 23 Dec 2024 00:15:34 +0000</pubDate>
      <link>https://forem.com/mcasperson/private-llms-for-github-actions-4nfa</link>
      <guid>https://forem.com/mcasperson/private-llms-for-github-actions-4nfa</guid>
      <description>&lt;p&gt;GitHub has been an enthusiastic adopter of AI with its Copilot platform to support developers with understanding, coding, and debugging software. However, it is not easy to use Copilot in an automated fashion in your GitHub Actions workflows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/marketplace/actions/secondbrainaction" rel="noopener noreferrer"&gt;SecondBrain&lt;/a&gt; is a new action that supports the use of LLMs inside GitHub Actions workflows. It works by deploying &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; as a Docker container to host LLMs and then calling Ollama via a custom CLI that automates the process of constructing Retrieval Augmented Generation (RAG) prompts that embed details of git commits.&lt;/p&gt;

&lt;p&gt;SecondBrain works like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You pass in the git commit SHAs you wish to query&lt;/li&gt;
&lt;li&gt;You pass in a GitHub token, which is used to query the GitHub REST API for the details of the commits&lt;/li&gt;
&lt;li&gt;You define a prompt to pass to the LLM, which can assume access to a summary of the git commits referenced by the SHAs&lt;/li&gt;
&lt;li&gt;SecondBrain queries GitHub for the details of the commits associated with the SHAs, summarizes the commit diffs, places the summaries into the prompt context, and then passes the context and your prompt to the LLM.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The following is a sample workflow YAML file that generates a summary of each commit to the main branch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Summarize the commit

on:
  workflow_dispatch:
  push:
    branches:
      - main

jobs:
  summarize:
    runs-on: ubuntu-latest
    steps:
      - name: SecondBrainAction
        id: secondbrain
        uses: mcasperson/SecondBrain@main
        with:
            prompt: 'Provide a summary of the changes from the git diffs. Use plain language. You will be penalized for offering code suggestions. You will be penalized for sounding excited about the changes.'
            token: ${{ secrets.GITHUB_TOKEN }}
            owner: ${{ github.repository_owner }}
            repo: ${{ github.event.repository.name }}
            sha: ${{ github.sha }}
      - name: Get the diff summary
        env:
            RESPONSE: ${{ steps.secondbrain.outputs.response }}
        run: echo "$RESPONSE"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output of the &lt;code&gt;Get the diff summary&lt;/code&gt; step looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Here is a summary of the changes:

The README.md file now indicates that the sha input is required and cannot have a default value [2]. The action.yml file has also removed the option to provide a default value for the sha input [1].

- [1]: The action.yml file was also updated to remove the default value for the sha input (52b40e59684d17e5fddc95c4dba3cdc82e4f7b7d)
- [2]: The README.md file was updated to add a note that the sha input is mandatory and has no default value (52b40e59684d17e5fddc95c4dba3cdc82e4f7b7d)

Links:
- [52b40e59684d17e5fddc95c4dba3cdc82e4f7b7d](https://github.com/mcasperson/SecondBrain/commit/52b40e59684d17e5fddc95c4dba3cdc82e4f7b7d)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you &lt;a href="https://github.com/mcasperson/SecondBrain/commit/52b40e59684d17e5fddc95c4dba3cdc82e4f7b7d" rel="noopener noreferrer"&gt;click the link&lt;/a&gt; from the report, you will see the commit that generated this summary. The description of the commit is accurate, and much easier to read than inspecting the diff directly.&lt;/p&gt;

&lt;p&gt;It is important to note that this action never calls any external services except for GitHub itself. There is no need to host your own LLM infrastructure, as the entire process is handled by a local, private LLM exposed by Ollama.&lt;/p&gt;

&lt;p&gt;Give it a try and &lt;a href="https://github.com/mcasperson/SecondBrain/discussions/5" rel="noopener noreferrer"&gt;let me know if this action was useful&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>github</category>
      <category>githubactions</category>
      <category>ai</category>
    </item>
    <item>
      <title>A first look at Tekton Pipelines</title>
      <dc:creator>Matthew Casperson</dc:creator>
      <pubDate>Mon, 11 May 2020 23:08:59 +0000</pubDate>
      <link>https://forem.com/octopus/a-first-look-at-tekton-pipelines-46fp</link>
      <guid>https://forem.com/octopus/a-first-look-at-tekton-pipelines-46fp</guid>
      <description>&lt;p&gt;Kubernetes is quickly evolving from a Docker orchestration platform to a general purpose cloud operating system. With &lt;a href="https://octopus.com/blog/operators-with-kotlin" rel="noopener noreferrer"&gt;operators&lt;/a&gt; Kubernetes gains the ability to natively manage high-level concepts and business processes, meaning you are no longer managing the building blocks of Pods, Services, and Deployments, but instead, describing the things those building blocks can create like web servers, databases, continuous deployments, certificate management, and more.&lt;/p&gt;

&lt;p&gt;Tekton Pipelines is one example of this evolution. When deployed to a Kubernetes cluster, Tekton Pipelines exposes the ability to define and execute build tasks, to accept inputs and produce outputs in the form of simple values or complex objects like Docker images, and to combine these resources in pipelines. These new Kubernetes resources, and the controllers that manage them, result in a headless CI/CD platform hosted by a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;In this post, we’ll take a look at a simple build pipeline running on MicroK8S.&lt;/p&gt;

&lt;h2&gt;
  Preparing the test Kubernetes cluster
&lt;/h2&gt;

&lt;p&gt;For this post, I’m using &lt;a href="https://microk8s.io/" rel="noopener noreferrer"&gt;MicroK8S&lt;/a&gt; to provide the Kubernetes cluster. MicroK8S is useful here because it offers a selection of &lt;a href="https://microk8s.io/docs/addons" rel="noopener noreferrer"&gt;official add-ons&lt;/a&gt;, one of which is a Docker image registry. Since our pipeline builds a Docker image, we need somewhere to host it, and the MicroK8S registry add-on gives us that functionality with a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s.enable registry
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to enable DNS lookups from within the MicroK8S cluster. This is done by enabling the DNS add-on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s.enable dns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
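
&lt;p&gt;With the registry add-on enabled, you can optionally confirm the registry is responding by querying the Docker registry API. This assumes the add-on's default behavior of also exposing the registry on the host at port 32000; an empty registry returns an empty repository list:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://localhost:32000/v2/_catalog
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;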



&lt;h2&gt;
  Installing Tekton Pipelines
&lt;/h2&gt;

&lt;p&gt;Installation of Tekton Pipelines is simple, with a single &lt;code&gt;kubectl&lt;/code&gt; (or &lt;code&gt;microk8s.kubectl&lt;/code&gt; in our case) command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s.kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now create Tekton resources in our Kubernetes cluster.&lt;/p&gt;
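
&lt;p&gt;You can verify the installation by confirming that the Tekton controller pods, which the release manifest creates in the &lt;code&gt;tekton-pipelines&lt;/code&gt; namespace, have reached the &lt;code&gt;Running&lt;/code&gt; state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s.kubectl get pods --namespace tekton-pipelines
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;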

&lt;h2&gt;
  A "Hello World" task
&lt;/h2&gt;

&lt;p&gt;Tasks contain the individual steps that are executed to do work. In the example below, we have a task with a single step that executes the &lt;code&gt;echo&lt;/code&gt; command with the arguments &lt;code&gt;Hello World&lt;/code&gt; in a container built from the &lt;code&gt;ubuntu&lt;/code&gt; image.&lt;/p&gt;

&lt;p&gt;The YAML below shows our &lt;code&gt;helloworldtask.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: echo-hello-world
spec:
  steps:
    - name: echo
      image: ubuntu
      command:
        - echo
      args:
        - "Hello World"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The task resource is created in the Kubernetes cluster with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s.kubectl apply -f helloworldtask.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A task describes how work is to be done, but creating the task resource does not result in any action being taken. A task run resource references the task, and the creation of a task run resource triggers Tekton to execute the steps in the referenced task.&lt;/p&gt;

&lt;p&gt;The YAML below shows our &lt;code&gt;helloworldtaskrun.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: echo-hello-world-task-run
spec:
  taskRef:
    name: echo-hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The task run resource is created in the Kubernetes cluster with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s.kubectl apply -f helloworldtaskrun.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
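
&lt;p&gt;Tekton executes the steps of a task run in a pod. Assuming the label conventions used by Tekton (pods created for a task run carry a &lt;code&gt;tekton.dev/taskRun&lt;/code&gt; label), you can read the &lt;code&gt;Hello World&lt;/code&gt; output with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s.kubectl logs --selector tekton.dev/taskRun=echo-hello-world-task-run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;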



&lt;h2&gt;
  Building a Docker image
&lt;/h2&gt;

&lt;p&gt;To move beyond this hello world example, we’ll look at the canonical use case of a Tekton build pipeline, which is to compile and push a Docker image. To demonstrate this functionality, we’ll build our &lt;a href="https://github.com/OctopusSamples/RandomQuotes-Java" rel="noopener noreferrer"&gt;RandomQuotes&lt;/a&gt; sample application.&lt;/p&gt;

&lt;p&gt;We start the pipeline with a pipeline resource. Pipeline resources provide a decoupled method of defining inputs for the build process.&lt;/p&gt;

&lt;p&gt;The first input we need is the Git repository that holds our code. Pipeline resources have a number of known types, and here we define a &lt;code&gt;git&lt;/code&gt; pipeline resource specifying the URL and branch holding our code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tekton.dev/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PipelineResource&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;randomquotes-git&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git&lt;/span&gt;
  &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;revision&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;url&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/OctopusSamples/RandomQuotes-Java.git&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we define the Docker registry holding our compiled image. This is where the MicroK8S registry add-on is useful, as it exposes a Docker registry at &lt;a href="http://registry.container-registry.svc.cluster.local:5000" rel="noopener noreferrer"&gt;http://registry.container-registry.svc.cluster.local:5000&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here is the pipeline resource of type &lt;code&gt;image&lt;/code&gt; defining the Docker image we’ll create as &lt;code&gt;registry.container-registry.svc.cluster.local:5000/randomquotes&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tekton.dev/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PipelineResource&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;randomquotes-image&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image&lt;/span&gt;
  &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;url&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.container-registry.svc.cluster.local:5000/randomquotes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the input source code and destination Docker image defined, we can create a task to create the Docker image and push it to the repository.&lt;/p&gt;

&lt;p&gt;Traditionally, building Docker images is performed by the Docker client talking to a daemon on the host operating system. However, in Kubernetes, everything already runs inside a container, which leads to the question: how do you run Docker inside Docker?&lt;/p&gt;

&lt;p&gt;Over the last few years, there has been an explosion of tools designed to perform the processes provided by the Docker CLI and daemon, but without any dependency on Docker itself. These include tools like &lt;a href="https://github.com/openSUSE/umoci" rel="noopener noreferrer"&gt;umoci&lt;/a&gt; for unpacking and repacking Docker images, &lt;a href="https://github.com/GoogleContainerTools/kaniko" rel="noopener noreferrer"&gt;Kaniko&lt;/a&gt; and &lt;a href="https://github.com/containers/buildah" rel="noopener noreferrer"&gt;Buildah&lt;/a&gt; for building Docker images from a Dockerfile, and &lt;a href="https://podman.io/" rel="noopener noreferrer"&gt;Podman&lt;/a&gt; for running Docker images.&lt;/p&gt;

&lt;p&gt;We’ll use Kaniko in our Tekton task to build our Docker image inside the Docker container provided by Kubernetes. The YAML below shows the complete task:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tekton.dev/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Task&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build-docker-image-from-git-source&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker-source&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git&lt;/span&gt;
    &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pathToDockerFile&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;The path to the dockerfile to build&lt;/span&gt;
        &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/workspace/docker-source/Dockerfile&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pathToContext&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="s"&gt;The build context used by Kaniko&lt;/span&gt;
          &lt;span class="s"&gt;(https://github.com/GoogleContainerTools/kaniko#kaniko-build-contexts)&lt;/span&gt;
        &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/workspace/docker-source&lt;/span&gt;
  &lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;builtImage&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image&lt;/span&gt;
  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build-and-push&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gcr.io/kaniko-project/executor:v0.17.1&lt;/span&gt;
      &lt;span class="c1"&gt;# specifying DOCKER_CONFIG is required to allow kaniko to detect docker credential&lt;/span&gt;
      &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DOCKER_CONFIG"&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/tekton/home/.docker/"&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/kaniko/executor&lt;/span&gt;
      &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--dockerfile=$(inputs.params.pathToDockerFile)&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--destination=$(outputs.resources.builtImage.url)&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--context=$(inputs.params.pathToContext)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are some important aspects of this task worth pointing out.&lt;/p&gt;

&lt;p&gt;First, two properties in this task relate to the pipeline resources we created above.&lt;/p&gt;

&lt;p&gt;An input resource of type &lt;code&gt;git&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker-source&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And an output of type &lt;code&gt;image&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;builtImage&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are two additional input parameters that define paths used for the Docker build process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pathToDockerFile&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;The path to the dockerfile to build&lt;/span&gt;
        &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/workspace/docker-source/Dockerfile&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pathToContext&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="s"&gt;The build context used by Kaniko&lt;/span&gt;
          &lt;span class="s"&gt;(https://github.com/GoogleContainerTools/kaniko#kaniko-build-contexts)&lt;/span&gt;
        &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/workspace/docker-source&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the path &lt;code&gt;/workspace/docker-source&lt;/code&gt; is a convention used by &lt;code&gt;git&lt;/code&gt; resources, with the &lt;code&gt;docker-source&lt;/code&gt; directory matching the name of the input.&lt;/p&gt;
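
&lt;p&gt;If you need to confirm what the &lt;code&gt;git&lt;/code&gt; resource checked out, a hypothetical debugging step like the one below (not part of the build itself) can be added to the task's &lt;code&gt;steps&lt;/code&gt; to list the contents of that directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:
  - name: list-source
    image: ubuntu
    command:
      - ls
    args:
      - "-la"
      - "/workspace/docker-source"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;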

&lt;p&gt;We then have a single step that builds the Docker image. The build is executed in a container created from the &lt;code&gt;gcr.io/kaniko-project/executor:v0.17.1&lt;/code&gt; image, which provides Kaniko:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build-and-push&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gcr.io/kaniko-project/executor:v0.17.1&lt;/span&gt;
      &lt;span class="c1"&gt;# specifying DOCKER_CONFIG is required to allow kaniko to detect docker credential&lt;/span&gt;
      &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DOCKER_CONFIG"&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/tekton/home/.docker/"&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/kaniko/executor&lt;/span&gt;
      &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--dockerfile=$(inputs.params.pathToDockerFile)&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--destination=$(outputs.resources.builtImage.url)&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--context=$(inputs.params.pathToContext)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, a task run is used to bind the task and pipeline resources together. This resource maps the task &lt;code&gt;docker-source&lt;/code&gt; input to the &lt;code&gt;randomquotes-git&lt;/code&gt; pipeline resource and the &lt;code&gt;builtImage&lt;/code&gt; output to the &lt;code&gt;randomquotes-image&lt;/code&gt; pipeline resource.&lt;/p&gt;

&lt;p&gt;Creating this resource then triggers the build to take place:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tekton.dev/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TaskRun&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build-docker-image-from-git-source-task-run&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;taskRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build-docker-image-from-git-source&lt;/span&gt;
  &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker-source&lt;/span&gt;
        &lt;span class="na"&gt;resourceRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;randomquotes-git&lt;/span&gt;
    &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pathToDockerFile&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pathToContext&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/workspace/docker-source&lt;/span&gt;
  &lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;builtImage&lt;/span&gt;
        &lt;span class="na"&gt;resourceRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;randomquotes-image&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  Interacting with builds
&lt;/h2&gt;

&lt;p&gt;Tekton itself does not provide any kind of dashboard or GUI for interacting with jobs. However, there is a &lt;a href="https://github.com/tektoncd/cli" rel="noopener noreferrer"&gt;CLI tool&lt;/a&gt; for managing Tekton jobs.&lt;/p&gt;

&lt;p&gt;The Tekton CLI tool assumes &lt;code&gt;kubectl&lt;/code&gt; is configured, but MicroK8S maintains a separate tool called &lt;code&gt;microk8s.kubectl&lt;/code&gt;. The easiest way to configure &lt;code&gt;kubectl&lt;/code&gt; is with the following command, which copies the MicroK8S configuration file to the standard location used by &lt;code&gt;kubectl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo microk8s.kubectl config view --raw &amp;gt; $HOME/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, we can get the status of the task with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tkn taskrun logs build-docker-image-from-git-source-task-run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-05%2Fintroduction-to-tekton-pipelines%2Ftekton-logs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-05%2Fintroduction-to-tekton-pipelines%2Ftekton-logs.png" title="width=500" width="800" height="674"&gt;&lt;/a&gt;&lt;/p&gt;
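
&lt;p&gt;The &lt;code&gt;tkn&lt;/code&gt; CLI also includes listing commands, which provide a quick overview of recent runs and their outcomes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tkn taskrun list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;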

&lt;h2&gt;
  Is Tekton for you?
&lt;/h2&gt;

&lt;p&gt;The idea of a headless build server is an intriguing one.&lt;/p&gt;

&lt;p&gt;By composing builds from Docker images, Tekton removes the overhead of maintaining a suite of specialized build agents. Every tool and language provides a supported Docker image these days, making it easier to keep up with the new normal of six-month release cycles for major language versions.&lt;/p&gt;

&lt;p&gt;Kubernetes is also a natural platform to serve the elastic and short-lived requirements of software builds. Why have ten specialized agents sitting idle when you can have five nodes scheduling builds between them?&lt;/p&gt;

&lt;p&gt;However, I suspect Tekton itself is too low-level for most engineering teams. The &lt;code&gt;tkn&lt;/code&gt; CLI tool will be familiar to anyone who has used &lt;code&gt;kubectl&lt;/code&gt; before, but it is difficult to understand the overall state of your builds from the terminal. Not to mention that creating builds with &lt;code&gt;kubectl create -f taskrun.yml&lt;/code&gt; gets old quickly.&lt;/p&gt;

&lt;p&gt;There is a &lt;a href="https://github.com/tektoncd/dashboard" rel="noopener noreferrer"&gt;dashboard&lt;/a&gt; available, but it is a bare-bones user interface compared to existing CI tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-05%2Fintroduction-to-tekton-pipelines%2Fdashboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-05%2Fintroduction-to-tekton-pipelines%2Fdashboard.png" title="width=500" width="800" height="674"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That said, Tekton is a powerful foundation on which to build developer-facing tools. &lt;a href="https://jenkins-x.io/" rel="noopener noreferrer"&gt;Jenkins X&lt;/a&gt; and &lt;a href="https://www.openshift.com/learn/topics/pipelines" rel="noopener noreferrer"&gt;OpenShift Pipelines&lt;/a&gt; are two such platforms that leverage Tekton under the hood.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Kubernetes solves many of the requirements for running applications, such as authentication, authorization, CLI tooling, resource management, and health checks. The fact that a Kubernetes cluster can host a fully functional CI server with a single command is a testament to just how flexible Kubernetes is.&lt;/p&gt;

&lt;p&gt;With projects like &lt;a href="https://jenkins-x.io/" rel="noopener noreferrer"&gt;Jenkins X&lt;/a&gt; and &lt;a href="https://www.openshift.com/learn/topics/pipelines" rel="noopener noreferrer"&gt;OpenShift Pipelines&lt;/a&gt;, Tekton is at the start of a journey into mainstream development workflows. But as a standalone project, Tekton is a little too close to the metal to be something most development teams could use, if only because few people would have the experience to support it.&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>The ultimate guide to Tomcat deployments</title>
      <dc:creator>Matthew Casperson</dc:creator>
      <pubDate>Tue, 05 May 2020 20:21:56 +0000</pubDate>
      <link>https://forem.com/octopus/the-ultimate-guide-to-tomcat-deployments-5dkk</link>
      <guid>https://forem.com/octopus/the-ultimate-guide-to-tomcat-deployments-5dkk</guid>
      <description>&lt;p&gt;Continuous integration and delivery (CI/CD) is a common goal for DevOps teams to reduce costs and increase the agility of software teams. But the CI/CD pipeline is far more than simply testing, compiling, and deploying applications. A robust CI/CD pipeline addresses a number of concerns such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;High availability (HA)&lt;/li&gt;
&lt;li&gt;Multiple environments&lt;/li&gt;
&lt;li&gt;Zero downtime deployments&lt;/li&gt;
&lt;li&gt;Database migrations&lt;/li&gt;
&lt;li&gt;Load balancers&lt;/li&gt;
&lt;li&gt;HTTPS and certificate management&lt;/li&gt;
&lt;li&gt;Feature branch deployments&lt;/li&gt;
&lt;li&gt;Smoke testing&lt;/li&gt;
&lt;li&gt;Rollback strategies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How these goals are implemented depends on the type of software being deployed. In this post, I look at how to create the continuous delivery (or deployment) half of the CI/CD pipeline by deploying Java applications to Tomcat. I then build a supporting infrastructure stack that includes the Apache web server for load balancing, PostgreSQL for the database, Keepalived for highly available load balancers, and Octopus for orchestrating the deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  A note on the PostgreSQL server
&lt;/h2&gt;

&lt;p&gt;This post assumes that the PostgreSQL database is already deployed in a highly available configuration. For more information on how to deploy PostgreSQL, refer to the &lt;a href="https://www.postgresql.org/docs/current/high-availability.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The instructions in this post can be followed with a single PostgreSQL instance, with the understanding that the database represents a single point of failure.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to implement HA in Tomcat
&lt;/h2&gt;

&lt;p&gt;When talking about HA, it is important to understand exactly which components of a platform need to be managed so the system can tolerate the loss of an individual Tomcat instance, for example, during routine maintenance or after a hardware failure.&lt;/p&gt;

&lt;p&gt;For the purpose of this post, I created infrastructure that allows a traditional stateful Java servlet application to continue operating when an individual Tomcat server is no longer available. In practical terms, this means the application session state will persist and be distributed to other Tomcat instances in the cluster when the server that originally hosted the session is no longer available.&lt;/p&gt;

&lt;p&gt;As a brief recap, Java servlet applications can save data against an &lt;code&gt;HttpSession&lt;/code&gt; instance, which is then available across requests. In the example below (naïve, as it does not deal with race conditions), we have a simple counter variable that is incremented with each request to the page. This demonstrates how information can be persisted across individual requests made by a web browser:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@RequestMapping&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/pageCount"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;index&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;HttpSession&lt;/span&gt; &lt;span class="n"&gt;httpSession&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;httpSession&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setAttribute&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"pageCount"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;ObjectUtils&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;defaultIfNull&lt;/span&gt;&lt;span class="o"&gt;((&lt;/span&gt;&lt;span class="nc"&gt;Integer&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="n"&gt;httpSession&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getAttribute&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"pageCount"&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Integer&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="n"&gt;httpSession&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getAttribute&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"pageCount"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The session state is held in memory on an individual server. By default, if that server is no longer available, the session data is also lost. For a trivial example like a page count, this is not important, but it is not uncommon for more critical functionality to rely on the session state. For example, a shopping cart may hold the list of items for purchase in session state, and losing that information may result in a lost sale.&lt;/p&gt;

&lt;p&gt;To maintain high availability, the session state needs to be duplicated so it can be shared if a server goes offline.&lt;/p&gt;

&lt;p&gt;Tomcat offers three solutions to enable session replication:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Using session persistence and saving the session to a shared file system (PersistenceManager + FileStore).&lt;/li&gt;
&lt;li&gt;Using session persistence and saving the session to a shared database (PersistenceManager + JDBCStore).&lt;/li&gt;
&lt;li&gt;Using in-memory-replication, and using the SimpleTcpCluster that ships with Tomcat (&lt;code&gt;lib/catalina-tribes.jar&lt;/code&gt; + &lt;code&gt;lib/catalina-ha.jar&lt;/code&gt;).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Because our infrastructure stack already assumes a highly available database, I’ll implement option two. This is arguably the simplest solution for us, as we do not have to implement any special networking, and we can reuse an existing database. However, this solution does introduce a delay between when the session state is modified and when it is persisted to the database. This delay creates a window during which data may be lost in the case of hardware or network failure. Scheduled maintenance is still supported, though, as any session data is written to the database when Tomcat is shut down, allowing us to patch the operating system or update Tomcat itself safely.&lt;/p&gt;

&lt;p&gt;We noted the example code above is naïve because the session cache is not thread safe; even this simple example is subject to race conditions that may leave the page count incorrect. The solution is to use the traditional thread locks and synchronization features available in Java, but these features are only valid within a single JVM. We must therefore ensure that client requests are always directed to a single Tomcat instance, so that only one Tomcat instance holds the authoritative copy of the session state and can enforce consistency via thread locks and synchronization. This is achieved with sticky sessions.&lt;/p&gt;
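&lt;p&gt;To see why a single authoritative JVM matters, here is a minimal sketch of a thread-safe version of the counter (a plain class standing in for the session attribute; the &lt;code&gt;PageCounter&lt;/code&gt; name is illustrative and not part of the sample application). Synchronization makes the read-increment-write atomic, but only for requests that reach this one JVM:&lt;br&gt;
&lt;/p&gt;

```java
public class PageCounter {
    private int pageCount = 0;

    // Synchronizing the read-increment-write sequence makes the update atomic,
    // so concurrent requests cannot write back a stale count. This guarantee
    // only holds within a single JVM, which is why sticky sessions must pin
    // each client to one Tomcat instance.
    public synchronized int increment() {
        return ++pageCount;
    }
}
```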

&lt;p&gt;Sticky sessions provide a way for client requests to be inspected by a load balancer and then directed to one web server in a cluster. By default, a client of a Java servlet application is identified by a &lt;code&gt;JSESSIONID&lt;/code&gt; cookie sent by the web browser. The load balancer inspects this cookie to identify the Tomcat instance that holds the session, and the Tomcat server then uses it to associate the request with an existing session.&lt;/p&gt;
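&lt;p&gt;With the &lt;code&gt;jvmRoute&lt;/code&gt; configuration shown later in this post, Tomcat appends the worker name to the session id after a dot (for example, &lt;code&gt;A1B2C3D4.worker1&lt;/code&gt;), and mod_jk reads that suffix to pick the backend. A hypothetical helper sketching the convention:&lt;br&gt;
&lt;/p&gt;

```java
public class SessionRoute {
    // The jvmRoute suffix follows the last dot in the session id, e.g.
    // "A1B2C3D4.worker1" routes to the worker named "worker1". This helper
    // is illustrative only; mod_jk performs this parsing internally.
    public static String routeOf(String jsessionid) {
        int dot = jsessionid.lastIndexOf('.');
        return dot >= 0 ? jsessionid.substring(dot + 1) : null;
    }
}
```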

&lt;p&gt;In summary, our HA Tomcat solution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Persists session state to a shared database.&lt;/li&gt;
&lt;li&gt;Relies on sticky sessions to direct client requests to a single Tomcat instance.&lt;/li&gt;
&lt;li&gt;Supports routine maintenance by persisting session state when Tomcat is shut down.&lt;/li&gt;
&lt;li&gt;Has a small window during which a hardware or network failure may result in lost data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Keepalived for HA load balancers
&lt;/h2&gt;

&lt;p&gt;To ensure that network requests are distributed among multiple Tomcat instances and not directed to an offline instance, we need to implement a load balancing solution. These load balancers sit in front of the Tomcat instances and direct network requests to those instances that are available.&lt;/p&gt;

&lt;p&gt;Many load balancing solutions exist that can perform this role, but for this post, we use the Apache web server with the mod_jk plugin. Apache will provide the networking functionality, while mod_jk will distribute traffic to multiple Tomcat instances, implementing sticky sessions to direct a client to the same backend server for each request.&lt;/p&gt;

&lt;p&gt;In order to maintain high availability, we need at least two load balancers. But how do we split a single incoming network connection across two load balancers in a reliable manner? This is where Keepalived comes in.&lt;/p&gt;

&lt;p&gt;Keepalived is a Linux service that runs across multiple instances and elects a single master from the pool of healthy instances. Keepalived is quite flexible when it comes to determining what that master instance does, but in our scenario, we will use Keepalived to assign a virtual, floating IP address to the instance that assumes the master role. This means our incoming network traffic is sent to a floating IP address assigned to a healthy load balancer, and the load balancer then forwards the traffic to the Tomcat instances. Should one of the load balancers be taken offline, Keepalived will ensure the remaining load balancer is assigned the floating IP address.&lt;/p&gt;
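&lt;p&gt;As a concrete illustration, a minimal &lt;code&gt;keepalived.conf&lt;/code&gt; for this arrangement might look like the sketch below. The interface name, router id, priority, and virtual IP address are placeholder assumptions for your environment:&lt;br&gt;
&lt;/p&gt;

```plaintext
vrrp_instance VI_1 {
    state MASTER            # initial state; the healthy instance with the
                            # highest priority wins the VRRP election
    interface eth0          # NIC that will carry the floating IP
    virtual_router_id 51    # must be identical on every load balancer
    priority 101            # give the second load balancer a lower value
    virtual_ipaddress {
        192.168.1.30        # the floating IP that clients connect to
    }
}
```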

&lt;p&gt;In summary, our HA load balancing solution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implements Apache with the mod_jk plugin to direct traffic to the Tomcat instances.&lt;/li&gt;
&lt;li&gt;Implements Keepalived to ensure one load balancer has a virtual IP address assigned to it.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The network diagram
&lt;/h2&gt;

&lt;p&gt;Here is the diagram of the network that we will create:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fnetwork_diagram.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fnetwork_diagram.png" title="width=500" width="647" height="1091"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Zero downtime deployments and rollbacks
&lt;/h2&gt;

&lt;p&gt;A goal of continuous delivery is to always be in a state where you can deploy (even if you choose not to). This means moving away from schedules that require people to be awake at midnight to perform a deployment when your customers are asleep.&lt;/p&gt;

&lt;p&gt;Zero downtime deployments require reaching a point where deployments can be done at any time without disruption. In our example infrastructure, there are two points that we need to consider in order to achieve zero downtime deployments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The ability for customers to use the existing version of the application to complete their session even after a newer version of the application has been deployed.&lt;/li&gt;
&lt;li&gt;Forward and backward compatibility of any database changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ensuring database changes are backward and forward compatible requires some design work and discipline when pushing new application versions. Fortunately, there are tools available, including &lt;a href="https://flywaydb.org/" rel="noopener noreferrer"&gt;Flyway&lt;/a&gt; and &lt;a href="https://www.liquibase.org/" rel="noopener noreferrer"&gt;Liquibase&lt;/a&gt;, that provide a way to roll out database changes with the applications themselves, taking care of versioning the changes and wrapping any migrations in the required transactions. We’ll see Flyway implemented in a sample application later in the post.&lt;/p&gt;

&lt;p&gt;As long as the shared database remains compatible between the current and the new version of the application, Tomcat provides a feature called &lt;a href="https://tomcat.apache.org/tomcat-9.0-doc/config/context.html#Parallel_deployment" rel="noopener noreferrer"&gt;parallel deployments&lt;/a&gt; that allows clients to continue to access the previous version of the application until their session expires, while new sessions are created against the new version of the application. Parallel deployments allow a new version of the application to be deployed without disrupting any existing clients.&lt;/p&gt;
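&lt;p&gt;Parallel deployment is driven by a version string embedded in the WAR filename: everything after &lt;code&gt;##&lt;/code&gt; is the version, it is not part of the context path, and Tomcat directs new sessions to the highest version. A small sketch of the naming convention (&lt;code&gt;demo.war&lt;/code&gt; is a placeholder artifact):&lt;br&gt;
&lt;/p&gt;

```shell
# The text after "##" is the version; it is not part of the context path,
# so both files below deploy to the same /demo context.
touch demo.war
cp demo.war 'demo##001.war'
cp demo.war 'demo##002.war'
ls demo##*.war
```

&lt;p&gt;Dropping both files into the &lt;code&gt;webapps&lt;/code&gt; directory deploys two versions of the &lt;code&gt;/demo&lt;/code&gt; context side by side.&lt;/p&gt;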

&lt;p&gt;Tomcat has the ability to automatically clean up old versions that no longer have any sessions. We will not enable this feature though, as it may prevent a session for an old version from being migrated to another Tomcat instance.&lt;/p&gt;

&lt;p&gt;Ensuring database changes are compatible between the current and new version of the application means we can easily roll back the application deployment. Redeploying the previous version of the application provides a quick fallback in case the new version introduced any errors.&lt;/p&gt;

&lt;p&gt;In summary, our zero downtime deployments solution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Relies on database changes being forward and backward compatible (at least between the new and current versions of the application).&lt;/li&gt;
&lt;li&gt;Uses parallel deployments to allow existing sessions to complete uninterrupted.&lt;/li&gt;
&lt;li&gt;Provides application rollbacks by reverting to the previously installed application version.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Build the infrastructure
&lt;/h2&gt;

&lt;p&gt;The example infrastructure shown here is deployed to Ubuntu 18.04 virtual machines. Most of the instructions will be distribution agnostic, although some of the package names and file locations may change.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure the Tomcat instances
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Install the packages
&lt;/h4&gt;

&lt;p&gt;We start by installing Tomcat and the Manager application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get install tomcat tomcat-admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Add the AJP connector
&lt;/h4&gt;

&lt;p&gt;Communication between the Apache web server and Tomcat is performed with an AJP connector. AJP is an optimized binary HTTP protocol that the mod_jk plugin for Apache and Tomcat both understand. The connector is added to the &lt;code&gt;Service&lt;/code&gt; element in the &lt;code&gt;/etc/tomcat9/server.xml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;Server&amp;gt;&lt;/span&gt;
  &lt;span class="c"&gt;&amp;lt;!-- ... --&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;Service&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"Catalina"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="c"&gt;&amp;lt;!-- ... --&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;Connector&lt;/span&gt; &lt;span class="na"&gt;port=&lt;/span&gt;&lt;span class="s"&gt;"8009"&lt;/span&gt; &lt;span class="na"&gt;protocol=&lt;/span&gt;&lt;span class="s"&gt;"AJP/1.3"&lt;/span&gt; &lt;span class="na"&gt;redirectPort=&lt;/span&gt;&lt;span class="s"&gt;"8443"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/Connector&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/Service&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/Server&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Define the Tomcat instance names
&lt;/h4&gt;

&lt;p&gt;Each Tomcat instance needs a unique name added to the &lt;code&gt;Engine&lt;/code&gt; element in the &lt;code&gt;/etc/tomcat9/server.xml&lt;/code&gt; file. The default &lt;code&gt;Engine&lt;/code&gt; element looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;Engine&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"Catalina"&lt;/span&gt; &lt;span class="na"&gt;defaultHost=&lt;/span&gt;&lt;span class="s"&gt;"localhost"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The name of the Tomcat instance is defined in the &lt;code&gt;jvmRoute&lt;/code&gt; attribute. I’ll call the first Tomcat instance &lt;code&gt;worker1&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;Engine&lt;/span&gt; &lt;span class="na"&gt;defaultHost=&lt;/span&gt;&lt;span class="s"&gt;"localhost"&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"Catalina"&lt;/span&gt; &lt;span class="na"&gt;jvmRoute=&lt;/span&gt;&lt;span class="s"&gt;"worker1"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second Tomcat instance is called &lt;code&gt;worker2&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;Engine&lt;/span&gt; &lt;span class="na"&gt;defaultHost=&lt;/span&gt;&lt;span class="s"&gt;"localhost"&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"Catalina"&lt;/span&gt; &lt;span class="na"&gt;jvmRoute=&lt;/span&gt;&lt;span class="s"&gt;"worker2"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Add a manager user
&lt;/h4&gt;

&lt;p&gt;Octopus performs deployments to Tomcat via the Manager application. This is what we installed with the &lt;code&gt;tomcat9-admin&lt;/code&gt; package earlier.&lt;/p&gt;

&lt;p&gt;In order to authenticate with the Manager application, a new user needs to be defined in the &lt;code&gt;/etc/tomcat9/tomcat-users.xml&lt;/code&gt; file. I’ll call this user &lt;code&gt;tomcat&lt;/code&gt; with the password &lt;code&gt;Password01!&lt;/code&gt;, and it will belong to the &lt;code&gt;manager-script&lt;/code&gt; and &lt;code&gt;manager-gui&lt;/code&gt; roles.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;manager-script&lt;/code&gt; role grants access to the Manager API, and the &lt;code&gt;manager-gui&lt;/code&gt; role grants access to the Manager web console.&lt;/p&gt;

&lt;p&gt;Here is a copy of the &lt;code&gt;/etc/tomcat9/tomcat-users.xml&lt;/code&gt; file with the &lt;code&gt;tomcat&lt;/code&gt; user defined:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;tomcat-users&lt;/span&gt; &lt;span class="na"&gt;xmlns=&lt;/span&gt;&lt;span class="s"&gt;"http://tomcat.apache.org/xml"&lt;/span&gt;
              &lt;span class="na"&gt;xmlns:xsi=&lt;/span&gt;&lt;span class="s"&gt;"http://www.w3.org/2001/XMLSchema-instance"&lt;/span&gt;
              &lt;span class="na"&gt;xsi:schemaLocation=&lt;/span&gt;&lt;span class="s"&gt;"http://tomcat.apache.org/xml tomcat-users.xsd"&lt;/span&gt;
              &lt;span class="na"&gt;version=&lt;/span&gt;&lt;span class="s"&gt;"1.0"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;role&lt;/span&gt; &lt;span class="na"&gt;rolename=&lt;/span&gt;&lt;span class="s"&gt;"manager-gui"&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;role&lt;/span&gt; &lt;span class="na"&gt;rolename=&lt;/span&gt;&lt;span class="s"&gt;"manager-script"&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;user&lt;/span&gt; &lt;span class="na"&gt;username=&lt;/span&gt;&lt;span class="s"&gt;"tomcat"&lt;/span&gt; &lt;span class="na"&gt;password=&lt;/span&gt;&lt;span class="s"&gt;"Password01!"&lt;/span&gt; &lt;span class="na"&gt;roles=&lt;/span&gt;&lt;span class="s"&gt;"manager-script,manager-gui"&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/tomcat-users&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Add the PostgreSQL JDBC driver jar
&lt;/h4&gt;

&lt;p&gt;Each Tomcat instance will communicate with a PostgreSQL database to persist session data. In order for Tomcat to communicate with a PostgreSQL database, we need to install the PostgreSQL JDBC driver JAR file. This is done by saving the file &lt;code&gt;https://jdbc.postgresql.org/download/postgresql-42.2.11.jar&lt;/code&gt; as &lt;code&gt;/var/lib/tomcat9/lib/postgresql-42.2.11.jar&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Enable session replication
&lt;/h4&gt;

&lt;p&gt;To enable session persistence to a database, we add a new &lt;code&gt;Manager&lt;/code&gt; definition in the file &lt;code&gt;/etc/tomcat9/context.xml&lt;/code&gt;. This manager uses the &lt;code&gt;org.apache.catalina.session.PersistentManager&lt;/code&gt; to save the session details to a database defined in the nested &lt;code&gt;Store&lt;/code&gt; element.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Store&lt;/code&gt; element, in turn, defines the database to which the session information is persisted.&lt;/p&gt;

&lt;p&gt;We also need to add a &lt;code&gt;Valve&lt;/code&gt; that loads the &lt;code&gt;org.apache.catalina.ha.session.JvmRouteBinderValve&lt;/code&gt; class. This valve is important when a client is redirected from a Tomcat instance that is no longer available to another instance in the cluster. We’ll see this valve in action after we deploy our sample application.&lt;/p&gt;

&lt;p&gt;Here is a copy of the &lt;code&gt;/etc/tomcat9/context.xml&lt;/code&gt; file with the &lt;code&gt;Manager&lt;/code&gt;, &lt;code&gt;Store&lt;/code&gt;, and &lt;code&gt;Valve&lt;/code&gt; elements defined:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;Context&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;Manager&lt;/span&gt;
    &lt;span class="na"&gt;className=&lt;/span&gt;&lt;span class="s"&gt;"org.apache.catalina.session.PersistentManager"&lt;/span&gt;
    &lt;span class="na"&gt;processExpiresFrequency=&lt;/span&gt;&lt;span class="s"&gt;"3"&lt;/span&gt;
    &lt;span class="na"&gt;maxIdleBackup=&lt;/span&gt;&lt;span class="s"&gt;"1"&lt;/span&gt; &lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;Store&lt;/span&gt;
        &lt;span class="na"&gt;className=&lt;/span&gt;&lt;span class="s"&gt;"org.apache.catalina.session.JDBCStore"&lt;/span&gt;
        &lt;span class="na"&gt;driverName=&lt;/span&gt;&lt;span class="s"&gt;"org.postgresql.Driver"&lt;/span&gt;
        &lt;span class="na"&gt;connectionURL=&lt;/span&gt;&lt;span class="s"&gt;"jdbc:postgresql://postgresserver:5432/tomcat?currentSchema=session"&lt;/span&gt;
        &lt;span class="na"&gt;connectionName=&lt;/span&gt;&lt;span class="s"&gt;"postgres"&lt;/span&gt;
        &lt;span class="na"&gt;connectionPassword=&lt;/span&gt;&lt;span class="s"&gt;"passwordgoeshere"&lt;/span&gt;
        &lt;span class="na"&gt;sessionAppCol=&lt;/span&gt;&lt;span class="s"&gt;"app_name"&lt;/span&gt;
        &lt;span class="na"&gt;sessionDataCol=&lt;/span&gt;&lt;span class="s"&gt;"session_data"&lt;/span&gt;
        &lt;span class="na"&gt;sessionIdCol=&lt;/span&gt;&lt;span class="s"&gt;"session_id"&lt;/span&gt;
        &lt;span class="na"&gt;sessionLastAccessedCol=&lt;/span&gt;&lt;span class="s"&gt;"last_access"&lt;/span&gt;
        &lt;span class="na"&gt;sessionMaxInactiveCol=&lt;/span&gt;&lt;span class="s"&gt;"max_inactive"&lt;/span&gt;
        &lt;span class="na"&gt;sessionTable=&lt;/span&gt;&lt;span class="s"&gt;"session.tomcat_sessions"&lt;/span&gt;
        &lt;span class="na"&gt;sessionValidCol=&lt;/span&gt;&lt;span class="s"&gt;"valid_session"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/Manager&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;Valve&lt;/span&gt; &lt;span class="na"&gt;className=&lt;/span&gt;&lt;span class="s"&gt;"org.apache.catalina.ha.session.JvmRouteBinderValve"&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;WatchedResource&amp;gt;&lt;/span&gt;WEB-INF/web.xml&lt;span class="nt"&gt;&amp;lt;/WatchedResource&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;WatchedResource&amp;gt;&lt;/span&gt;WEB-INF/tomcat-web.xml&lt;span class="nt"&gt;&amp;lt;/WatchedResource&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;WatchedResource&amp;gt;&lt;/span&gt;${catalina.base}/conf/web.xml&lt;span class="nt"&gt;&amp;lt;/WatchedResource&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/Context&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure the PostgreSQL database
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Install the packages
&lt;/h4&gt;

&lt;p&gt;We need to initialize PostgreSQL with a new database, schema, and table. To do this, we use the &lt;code&gt;psql&lt;/code&gt; command-line tool, which is installed with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get install postgresql-client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Add the database, schema, and table
&lt;/h4&gt;

&lt;p&gt;If you look at the &lt;code&gt;connectionURL&lt;/code&gt; attribute from the &lt;code&gt;/etc/tomcat9/context.xml&lt;/code&gt; file defined above, you will see we are saving session information into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The database called &lt;code&gt;tomcat&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The schema called &lt;code&gt;session&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;A table called &lt;code&gt;tomcat_sessions&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To create these resources in the PostgreSQL server, we run a number of SQL commands.&lt;/p&gt;

&lt;p&gt;First, save the following text into a file called &lt;code&gt;createdb.sql&lt;/code&gt;. This command creates the database if it does not exist (see &lt;a href="https://stackoverflow.com/a/18389184/157605" rel="noopener noreferrer"&gt;this StackOverflow&lt;/a&gt; post for more information about the syntax):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT 'CREATE DATABASE tomcat' WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'tomcat')\gexec
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then execute the SQL with the following command, replacing &lt;code&gt;postgresserver&lt;/code&gt; with the hostname of your PostgreSQL server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat createdb.sql | /usr/bin/psql -a -U postgres -h postgresserver
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we create the schema and table. Save the following text to a file called &lt;code&gt;createschema.sql&lt;/code&gt;. Note the columns of the &lt;code&gt;tomcat_sessions&lt;/code&gt; table match the attributes of the &lt;code&gt;Store&lt;/code&gt; element in the &lt;code&gt;/etc/tomcat9/context.xml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE SCHEMA IF NOT EXISTS session;

CREATE TABLE IF NOT EXISTS session.tomcat_sessions
(
  session_id character varying(100) NOT NULL,
  valid_session character(1) NOT NULL,
  max_inactive integer NOT NULL,
  last_access bigint NOT NULL,
  app_name character varying(255),
  session_data bytea,
  CONSTRAINT tomcat_sessions_pkey PRIMARY KEY (session_id)
);

CREATE INDEX IF NOT EXISTS app_name_index
  ON session.tomcat_sessions
  USING btree
  (app_name);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then execute the SQL with the following command, replacing &lt;code&gt;postgresserver&lt;/code&gt; with the hostname of your PostgreSQL server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql -a -d tomcat -U postgres -h postgresserver -f /root/createschema.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We now have a table in PostgreSQL ready to save the Tomcat sessions.&lt;/p&gt;
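&lt;p&gt;If you want to double-check, you can list the tables in the &lt;code&gt;session&lt;/code&gt; schema, again replacing &lt;code&gt;postgresserver&lt;/code&gt; with the hostname of your PostgreSQL server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql -d tomcat -U postgres -h postgresserver -c "\dt session.*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;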

&lt;h3&gt;
  
  
  Configure the load balancers
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Install the packages
&lt;/h4&gt;

&lt;p&gt;We need to install the Apache web server, the mod_jk plugin, and the Keepalived service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get install apache2 libapache2-mod-jk keepalived
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Configure the load balancer
&lt;/h4&gt;

&lt;p&gt;The mod_jk plugin is configured via the file &lt;code&gt;/etc/libapache2-mod-jk/workers.properties&lt;/code&gt;. In this file, we define a number of workers that traffic can be directed to. The fields in this file are documented &lt;a href="https://tomcat.apache.org/connectors-doc/reference/workers.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We start by defining a worker called &lt;code&gt;loadbalancer&lt;/code&gt; that will receive all of the traffic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;worker.list=loadbalancer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then define the two Tomcat instances that were created earlier. Make sure to replace &lt;code&gt;worker1_ip&lt;/code&gt; and &lt;code&gt;worker2_ip&lt;/code&gt; with the IP addresses of the matching Tomcat instances.&lt;/p&gt;

&lt;p&gt;Note that the names of the workers defined here, &lt;code&gt;worker1&lt;/code&gt; and &lt;code&gt;worker2&lt;/code&gt;, match the value of the &lt;code&gt;jvmRoute&lt;/code&gt; attribute in the &lt;code&gt;Engine&lt;/code&gt; element in the &lt;code&gt;/etc/tomcat9/server.xml&lt;/code&gt; file. These names must match, as they are used by mod_jk to implement sticky sessions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;worker.worker1.type=ajp13
worker.worker1.host=worker1_ip
worker.worker1.port=8009

worker.worker2.type=ajp13
worker.worker2.host=worker2_ip
worker.worker2.port=8009
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we define the &lt;code&gt;loadbalancer&lt;/code&gt; worker as a load balancer that directs traffic to the &lt;code&gt;worker1&lt;/code&gt; and &lt;code&gt;worker2&lt;/code&gt; workers with sticky sessions enabled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=worker1,worker2
worker.loadbalancer.sticky_session=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is a complete copy of the &lt;code&gt;/etc/libapache2-mod-jk/workers.properties&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# All traffic is directed to the load balancer
worker.list=loadbalancer

# Set properties for workers (ajp13)
worker.worker1.type=ajp13
worker.worker1.host=worker1_ip
worker.worker1.port=8009

worker.worker2.type=ajp13
worker.worker2.host=worker2_ip
worker.worker2.port=8009

# Load-balancing behaviour
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=worker1,worker2
worker.loadbalancer.sticky_session=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Add an Apache VirtualHost
&lt;/h4&gt;

&lt;p&gt;In order for Apache to accept traffic, we need to define a &lt;code&gt;VirtualHost&lt;/code&gt;, which we create in the file &lt;code&gt;/etc/apache2/sites-enabled/000-default.conf&lt;/code&gt;. This virtual host will accept HTTP traffic on port 80, define some log files, and use the &lt;code&gt;JkMount&lt;/code&gt; directive to forward traffic to the worker called &lt;code&gt;loadbalancer&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;VirtualHost&lt;/span&gt; &lt;span class="err"&gt;*:80&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  ErrorLog ${APACHE_LOG_DIR}/error.log
  CustomLog ${APACHE_LOG_DIR}/access.log combined
  JkMount /* loadbalancer
&lt;span class="nt"&gt;&amp;lt;/VirtualHost&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
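
&lt;p&gt;The &lt;code&gt;libapache2-mod-jk&lt;/code&gt; package typically enables the mod_jk module when it is installed, but Apache needs to be restarted to pick up the new worker and &lt;code&gt;VirtualHost&lt;/code&gt; configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl restart apache2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;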



&lt;h4&gt;
  
  
  Configure Keepalived
&lt;/h4&gt;

&lt;p&gt;We have two load balancers to ensure that one can be taken offline for maintenance at any given time. Keepalived is the service that we use to assign a virtual IP address to one of the load balancer services, which Keepalived refers to as the master.&lt;/p&gt;

&lt;p&gt;Keepalived is configured via the &lt;code&gt;/etc/keepalived/keepalived.conf&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;We start by naming the load balancer instance. The first load balancer is called &lt;code&gt;loadbalancer1&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vrrp_instance loadbalancer1 {
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Setting the &lt;code&gt;state&lt;/code&gt; parameter to &lt;code&gt;MASTER&lt;/code&gt; designates the active server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;state MASTER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;interface&lt;/code&gt; parameter assigns the physical network interface to this particular virtual IP instance. You can find the interface name by running &lt;code&gt;ifconfig&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;interface ens5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;virtual_router_id&lt;/code&gt; is a numerical identifier for the Virtual Router instance. It must be the same on all LVS Router systems participating in this Virtual Router. It is used to differentiate multiple instances of Keepalived running on the same network interface:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;virtual_router_id 101
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;priority&lt;/code&gt; specifies the order in which the assigned interface takes over in a failover: the higher the number, the higher the priority. The priority value must be within the range of 0 to 255, and the server configured as state &lt;code&gt;MASTER&lt;/code&gt; must have a higher priority value than the server configured as state &lt;code&gt;BACKUP&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;priority 101
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;advert_int&lt;/code&gt; defines how often to send out VRRP advertisements:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;advert_int 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;authentication&lt;/code&gt; block specifies the authentication type (&lt;code&gt;auth_type&lt;/code&gt;) and password (&lt;code&gt;auth_pass&lt;/code&gt;) used to authenticate servers for failover synchronization. &lt;code&gt;PASS&lt;/code&gt; specifies password authentication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;authentication {
    auth_type PASS
    auth_pass passwordgoeshere
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;unicast_src_ip&lt;/code&gt; is the IP address of this load balancer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;unicast_src_ip 10.0.0.20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;unicast_peer&lt;/code&gt; lists the IP addresses of other load balancers. Since we have two load balancers total, there is only one other load balancer to list here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;unicast_peer {
  10.0.0.21
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;virtual_ipaddress&lt;/code&gt; defines the virtual, or floating, IP address that Keepalived assigns to the master node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;virtual_ipaddress {
    10.0.0.30
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is a complete copy of the &lt;code&gt;/etc/keepalived/keepalived.conf&lt;/code&gt; file for the first load balancer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vrrp_instance loadbalancer1 {
    state MASTER
    interface ens5
    virtual_router_id 101
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass passwordgoeshere
    }
    # Replace unicast_src_ip and unicast_peer with your load balancer IP addresses
    unicast_src_ip 10.0.0.20
    unicast_peer {
      10.0.0.21
    }
    virtual_ipaddress {
        10.0.0.30
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is a complete copy of the &lt;code&gt;/etc/keepalived/keepalived.conf&lt;/code&gt; file for the second load balancer.&lt;/p&gt;

&lt;p&gt;Note that the name has been set to &lt;code&gt;loadbalancer2&lt;/code&gt;, the &lt;code&gt;state&lt;/code&gt; has been set to &lt;code&gt;BACKUP&lt;/code&gt;, the &lt;code&gt;priority&lt;/code&gt; is lower at &lt;code&gt;100&lt;/code&gt;, and the &lt;code&gt;unicast_src_ip&lt;/code&gt; and &lt;code&gt;unicast_peer&lt;/code&gt; IP addresses have been swapped:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vrrp_instance loadbalancer2 {
    state BACKUP
    interface ens5
    virtual_router_id 101
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass passwordgoeshere
    }
    # Replace unicast_src_ip and unicast_peer with your load balancer IP addresses
    unicast_src_ip 10.0.0.21
    unicast_peer {
      10.0.0.20
    }
    virtual_ipaddress {
        10.0.0.30
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the &lt;code&gt;keepalived&lt;/code&gt; service on both load balancers with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl restart keepalived
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On the first load balancer, run the command &lt;code&gt;ip addr&lt;/code&gt;. This shows the virtual IP address assigned to the interface that Keepalived was configured to manage, which appears in the output as &lt;code&gt;inet 10.0.0.30/32 scope global ens5&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ip addr
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens5: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0e:2b:f9:2a:fa:a7 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.20/24 brd 10.0.0.255 scope global dynamic ens5
       valid_lft 3238sec preferred_lft 3238sec
    inet 10.0.0.30/32 scope global ens5
       valid_lft forever preferred_lft forever
    inet6 fe80::c2b:f9ff:fe2a:faa7/64 scope link
       valid_lft forever preferred_lft forever
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the first load balancer is shut down, the second load balancer assumes the virtual IP address, and the second Apache web server takes over as the load balancer.&lt;/p&gt;
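
&lt;p&gt;You can test the failover without rebooting anything by stopping the &lt;code&gt;keepalived&lt;/code&gt; service on the first load balancer and confirming that the virtual IP address moves to the second:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# On the first load balancer
systemctl stop keepalived

# On the second load balancer, the virtual IP address should now be listed
ip addr | grep 10.0.0.30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;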

&lt;h2&gt;
  
  
  Build the deployment pipeline
&lt;/h2&gt;

&lt;p&gt;Our deployment pipeline will involve deploying the &lt;a href="https://github.com/OctopusSamples/RandomQuotes-Java" rel="noopener noreferrer"&gt;Random Quotes&lt;/a&gt; sample application. This is a simple stateful Spring Boot application utilizing Flyway to manage database migrations.&lt;/p&gt;

&lt;p&gt;When you click the &lt;strong&gt;Refresh&lt;/strong&gt; button, a new quote is loaded from the database, a counter is incremented in the session, and the counter is displayed as the &lt;strong&gt;Quote count&lt;/strong&gt; field on the page. The application version is shown in the &lt;strong&gt;Version&lt;/strong&gt; field.&lt;/p&gt;

&lt;p&gt;We can use the &lt;strong&gt;Quote count&lt;/strong&gt; and &lt;strong&gt;Version&lt;/strong&gt; information to verify that existing sessions are preserved as new deployments are performed or Tomcat instances are taken offline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Frandom_quotes.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Frandom_quotes.png" title="width=500" width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Get an Octopus instance
&lt;/h3&gt;

&lt;p&gt;If you do not already have Octopus installed, the easiest way to get an Octopus instance is to &lt;a href="https://octopus.com/start/cloud" rel="noopener noreferrer"&gt;sign up for a cloud account&lt;/a&gt;. These instances are free for up to 10 targets.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the environments
&lt;/h3&gt;

&lt;p&gt;We’ll create two environments for this example: &lt;strong&gt;Dev&lt;/strong&gt; and &lt;strong&gt;Prod&lt;/strong&gt;. This means we will configure eight targets in total: four load balancers and four Tomcat instances.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy the Tentacle
&lt;/h3&gt;

&lt;p&gt;We will install a Tentacle on each of our virtual machines to allow us to perform deployments, updates, and system maintenance tasks. The instructions for installing the Tentacle software are found on the &lt;a href="https://octopus.com/downloads/tentacle#linux" rel="noopener noreferrer"&gt;Octopus download page&lt;/a&gt;. As this example uses Ubuntu as the base OS, we install the Tentacle with the commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-key adv --fetch-keys https://apt.octopus.com/public.key
add-apt-repository "deb https://apt.octopus.com/ stretch main"
apt-get update
apt-get install tentacle
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the Tentacle is installed we configure an instance with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/opt/octopus/tentacle/configure-tentacle.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The installation gives you a choice between &lt;a href="https://octopus.com/docs/infrastructure/deployment-targets/windows-targets/tentacle-communication" rel="noopener noreferrer"&gt;polling or listening Tentacles&lt;/a&gt;. Which option you choose often depends on your network restrictions: polling Tentacles require that the VM hosting the Tentacle can reach the Octopus Server, while listening Tentacles require that the Octopus Server can reach the VM. The choice usually comes down to whether the Octopus Server or the VMs have fixed IP addresses and the correct ports open in the firewall. Either option is valid and does not impact the deployment process.&lt;/p&gt;
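
&lt;p&gt;For example, if you choose listening Tentacles, the Octopus Server must be able to reach each VM on the Tentacle's default port of 10933. On an Ubuntu VM with the &lt;code&gt;ufw&lt;/code&gt; firewall enabled, you could open this port with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ufw allow 10933/tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;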

&lt;p&gt;Here is a screenshot of the &lt;strong&gt;Dev&lt;/strong&gt; environment with Tentacles for the Tomcat and Load Balancer instances. The Tomcat instances have a role of &lt;strong&gt;tomcat&lt;/strong&gt;, and the load balancer instances have a role of &lt;strong&gt;loadbalancer&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftargets.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftargets.png" title="width=500" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the external feed
&lt;/h3&gt;

&lt;p&gt;The Random Quotes sample application has been pushed to &lt;a href="https://repo.maven.apache.org/maven2/com/octopus/randomquotes/" rel="noopener noreferrer"&gt;Maven Central as a WAR file&lt;/a&gt;. This means we can deploy the application directly from a Maven feed.&lt;/p&gt;

&lt;p&gt;Create a new Maven feed pointing to &lt;code&gt;https://repo.maven.apache.org/maven2/&lt;/code&gt;. A screenshot of this feed is shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fmaven_feed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fmaven_feed.png" title="width=500" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Test the feed by searching for &lt;strong&gt;com.octopus:randomquotes&lt;/strong&gt;. Here we can see that our application is found in the repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftest_feed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftest_feed.png" title="width=500" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the deployment process
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Generate a timestamp
&lt;/h4&gt;

&lt;p&gt;In order to support zero downtime deployments, we want to take advantage of the &lt;a href="https://tomcat.apache.org/tomcat-9.0-doc/config/context.html#Parallel_deployment" rel="noopener noreferrer"&gt;parallel deployments&lt;/a&gt; feature in Tomcat. Parallel deployments are enabled by versioning each application when it is deployed.&lt;/p&gt;

&lt;p&gt;Tomcat uses string comparisons on this version label to determine the latest version. Typical versioning schemes, like SemVer, use a &lt;em&gt;major.minor.patch&lt;/em&gt; format, like &lt;em&gt;1.23.4&lt;/em&gt;, to identify versions. In many cases, these traditional versioning schemes can be compared as strings to determine their order.&lt;/p&gt;

&lt;p&gt;However, padding can introduce issues. For example, the version &lt;em&gt;1.23.40&lt;/em&gt; is lower than &lt;em&gt;01.23.41&lt;/em&gt;, but a direct string comparison returns the opposite result.&lt;/p&gt;
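
&lt;p&gt;You can see the problem with a quick shell experiment: a plain lexicographic sort ranks &lt;em&gt;01.23.41&lt;/em&gt; before &lt;em&gt;1.23.40&lt;/em&gt; because the character &lt;em&gt;0&lt;/em&gt; sorts before &lt;em&gt;1&lt;/em&gt;, even though &lt;em&gt;01.23.41&lt;/em&gt; is the later version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ printf '1.23.40\n01.23.41\n' | sort
01.23.41
1.23.40
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;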

&lt;p&gt;For this reason, we use the time of the deployment as the Tomcat version. Because the version needs to be consistent across targets, we start with a script step that generates a timestamp and saves it to an output variable with the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$timestamp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Get-Date&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Format&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"yyMMddHHmmss"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;Set-OctopusVariable&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TimeStamp"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-value&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$timestamp&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  The Tomcat deployment
&lt;/h4&gt;

&lt;p&gt;Our deployment process starts with deploying the application to each Tomcat instance using the &lt;strong&gt;Deploy to Tomcat via Manager&lt;/strong&gt; step. We’ll call this step &lt;strong&gt;Random Quotes&lt;/strong&gt; and run it on the &lt;strong&gt;tomcat&lt;/strong&gt; targets:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_step_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_step_1.png" title="width=500" width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We deploy the &lt;strong&gt;com.octopus:randomquotes&lt;/strong&gt; package from the Maven feed we set up earlier:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_step_2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_step_2.png" title="width=500" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because the Tentacle is located on the VM that hosts Tomcat, the location of the Manager API is &lt;strong&gt;&lt;a href="http://localhost:8080/manager" rel="noopener noreferrer"&gt;http://localhost:8080/manager&lt;/a&gt;&lt;/strong&gt;. We then supply the manager credentials, which were the details entered into the &lt;code&gt;tomcat-users.xml&lt;/code&gt; file when we configured Tomcat:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_step_3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_step_3.png" title="width=500" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The context path makes up the path in the URL that the deployed application is accessible on. Here we expose the application on the path &lt;strong&gt;/randomquotes&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_step_4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_step_4.png" title="width=500" width="800" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The deployment version is set to the timestamp that was generated by the previous step by referencing the output variable as &lt;strong&gt;#{Octopus.Action[Get Timestamp].Output.TimeStamp}&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_step_5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_step_5.png" title="width=500" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Smoke test the deployment
&lt;/h4&gt;

&lt;p&gt;To verify that our deployment was successful, we issue an HTTP request and check the response code. To do this, we use a community step template called &lt;strong&gt;HTTP - Test URL (Bash)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;As before, this step will run on the Tomcat instances:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fhttp_step_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fhttp_step_1.png" title="width=500" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The step will attempt to open the &lt;code&gt;index.html&lt;/code&gt; page from the newly deployed application, expecting an HTTP response code of &lt;strong&gt;200&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fhttp_step_2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fhttp_step_2.png" title="width=500" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Perform the initial deployment
&lt;/h4&gt;

&lt;p&gt;Let’s go ahead and perform the initial deployment. For this deployment, we’ll specifically select a previous version of the Random Quotes application. Version &lt;strong&gt;0.1.6.1&lt;/strong&gt;, in this case, is our second-to-last artifact version:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fdeployment_step_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fdeployment_step_1.png" title="width=500" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Octopus then downloads the WAR file from the Maven repository, pushes it to the Tomcat instances, and deploys it to Tomcat via the Manager. When complete, the smoke test runs to ensure the application can be opened successfully:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fdeployment_result.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fdeployment_result.png" title="width=500" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Inspect a request through the stack
&lt;/h2&gt;

&lt;p&gt;With the deployment done, we can access it via the load balancer.&lt;/p&gt;

&lt;p&gt;In the previous configuration examples, we had a floating IP address of 10.0.0.30, but for these screenshots, I used Keepalived to assign a public IP address to the load balancers.&lt;/p&gt;

&lt;p&gt;Here is a screenshot of the application opened with the Chrome developer tools:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fapp_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fapp_1.png" title="width=500" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are three things to note in this screenshot:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The &lt;strong&gt;JSESSIONID&lt;/strong&gt; cookie is set with a random session ID and the name of the Tomcat instance that responded to the request. In this example, the Tomcat instance whose &lt;strong&gt;jvmRoute&lt;/strong&gt; was set to &lt;strong&gt;worker1&lt;/strong&gt; responded to the request.&lt;/li&gt;
&lt;li&gt;We have opened version 0.1.6.1 of the application.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Quote count&lt;/strong&gt; is set to 1, but this will increase as we hit the &lt;strong&gt;Refresh&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;
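
&lt;p&gt;The &lt;strong&gt;JSESSIONID&lt;/strong&gt; cookie that enables this routing looks something like the example below (the session ID here is made up). mod_jk reads the worker name after the period and directs subsequent requests for this session to that instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;JSESSIONID=5A3B9F2E8C1D4E6F7A8B9C0D1E2F3A4B.worker1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;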

&lt;p&gt;Let’s increase the &lt;strong&gt;Quote count&lt;/strong&gt; by clicking the &lt;strong&gt;Refresh&lt;/strong&gt; button. This value is saved on the Tomcat server in the session associated with the &lt;strong&gt;JSESSIONID&lt;/strong&gt; cookie:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fapp_2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fapp_2.png" title="width=500" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s now shut down the &lt;strong&gt;worker1&lt;/strong&gt; Tomcat instance. After shutdown, we click the &lt;strong&gt;Refresh&lt;/strong&gt; button again:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fapp_3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fapp_3.png" title="width=500" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are three things to note in this screenshot:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The suffix on the &lt;strong&gt;JSESSIONID&lt;/strong&gt; cookie changed from &lt;strong&gt;worker1&lt;/strong&gt; to &lt;strong&gt;worker2&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;JSESSIONID&lt;/strong&gt; cookie session ID remained the same.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Quote count&lt;/strong&gt; increased to 6.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When Tomcat is gracefully shut down, it writes the contents of any sessions to the database. Because the &lt;strong&gt;worker1&lt;/strong&gt; Tomcat instance was no longer available, the request was directed to the &lt;strong&gt;worker2&lt;/strong&gt; Tomcat instance, which loaded the session information from the database and incremented the counter. The &lt;strong&gt;JvmRouteBinderValve&lt;/strong&gt; valve then rewrote the session cookie to append the name of the current Tomcat instance, and the response was returned to the browser.&lt;/p&gt;

&lt;p&gt;We can now see why it is important that the worker names in the load balancer’s &lt;code&gt;/etc/libapache2-mod-jk/workers.properties&lt;/code&gt; file match the names assigned to the &lt;strong&gt;jvmRoute&lt;/strong&gt; attribute in the &lt;code&gt;/etc/tomcat9/server.xml&lt;/code&gt; file: matching these names is what allows sticky sessions to be implemented.&lt;/p&gt;
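&lt;p&gt;As a minimal sketch, the alignment between the two files looks like this (the IP addresses and ports are illustrative, not taken from the stack above):&lt;/p&gt;

```plaintext
# /etc/libapache2-mod-jk/workers.properties (on the load balancer)
worker.list=loadbalancer
worker.worker1.type=ajp13
worker.worker1.host=10.0.0.20
worker.worker1.port=8009
worker.worker2.type=ajp13
worker.worker2.host=10.0.0.21
worker.worker2.port=8009
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=worker1,worker2

# /etc/tomcat9/server.xml (on the first Tomcat instance)
# the jvmRoute value must match a worker name above:
# <Engine name="Catalina" defaultHost="localhost" jvmRoute="worker1">
```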

&lt;p&gt;Because the &lt;strong&gt;Quote count&lt;/strong&gt; did not reset back to 1, we know that the session was persisted to the database and replicated to the other Tomcat instances in the cluster. We also know that the request was served by another Tomcat instance because the &lt;strong&gt;JSESSIONID&lt;/strong&gt; cookie shows a new worker name.&lt;/p&gt;

&lt;p&gt;Even if we brought &lt;strong&gt;worker1&lt;/strong&gt; back online, this browser session would continue to be handled by &lt;strong&gt;worker2&lt;/strong&gt; because the load balancers implement sticky sessions by inspecting the &lt;strong&gt;JSESSIONID&lt;/strong&gt; cookie. This also means that the load balancers don’t need to share state, as they only require the cookie value to direct traffic.&lt;/p&gt;
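&lt;p&gt;The routing rule itself is trivial, which is why no shared state is needed: the worker name is simply the text after the last period in the cookie value. The class and method names below are illustrative, not part of mod_jk:&lt;/p&gt;

```java
// Sketch of the sticky-session routing rule: a load balancer only needs
// the JSESSIONID cookie value to pick a worker, because the route is the
// text after the last "." in the cookie.
public class StickyRoute {
    static String routeOf(String jsessionId) {
        int dot = jsessionId.lastIndexOf('.');
        return dot == -1 ? "" : jsessionId.substring(dot + 1);
    }

    public static void main(String[] args) {
        // Hypothetical session ID; only the suffix matters for routing.
        System.out.println(routeOf("A1B2C3D4E5F6.worker2")); // prints "worker2"
    }
}
```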

&lt;p&gt;We’ve now demonstrated that the Tomcat instances support session replication and failover, making them highly available.&lt;/p&gt;

&lt;p&gt;To demonstrate failover of the load balancers, we only need to restart the instance designated as the master by Keepalived. We can then watch the events on the backup instance with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;journalctl -u keepalived -f
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Very quickly we’ll see these messages appear on the backup as it assumes the master role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Apr 01 03:15:00 ip-10-0-0-21 Keepalived_vrrp[32485]: VRRP_Instance(loadbalancer2) Transition to MASTER STATE
Apr 01 03:15:01 ip-10-0-0-21 Keepalived_vrrp[32485]: VRRP_Instance(loadbalancer2) Entering MASTER STATE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Having assumed the master role, the backup load balancer is assigned the virtual IP address and distributes traffic just as the previous master did.&lt;/p&gt;

&lt;p&gt;After the previous master instance restarts, it will re-assume the master role because it is configured with a higher priority, and the virtual IP address will be assigned back.&lt;/p&gt;
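&lt;p&gt;The failback behavior comes from the priority settings. A minimal sketch of the master’s &lt;code&gt;keepalived.conf&lt;/code&gt; looks like this (the interface, router ID, priority, and virtual IP values are illustrative):&lt;/p&gt;

```plaintext
vrrp_instance loadbalancer1 {
    state MASTER
    interface eth0
    virtual_router_id 101
    # the backup instance is configured with a lower priority (e.g. 100),
    # so this node reclaims the virtual IP when it comes back online
    priority 101
    virtual_ipaddress {
        10.0.0.30
    }
}
```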

&lt;p&gt;The whole process is seamless, and upstream clients never need to be aware that a failover and failback took place. So we have demonstrated that the load balancers can failover, making them highly available.&lt;/p&gt;

&lt;p&gt;In summary:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;JSESSIONID&lt;/strong&gt; cookie contains the session ID and the name of the Tomcat instance that processed the request.&lt;/li&gt;
&lt;li&gt;The load balancers implement sticky sessions based on the worker name appended to the &lt;strong&gt;JSESSIONID&lt;/strong&gt; cookie.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;JvmRouteBinderValve&lt;/code&gt; valve rewrites the &lt;strong&gt;JSESSIONID&lt;/strong&gt; cookie when a Tomcat instance receives traffic for a session it was not originally responsible for.&lt;/li&gt;
&lt;li&gt;Keepalived assigns a virtual IP to the backup load balancer if the master goes offline.&lt;/li&gt;
&lt;li&gt;The master load balancer re-assumes the virtual IP when it comes back online.&lt;/li&gt;
&lt;li&gt;The infrastructure stack can survive the loss of one Tomcat instance and one load balancer and still maintain availability.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Zero downtime deployments
&lt;/h2&gt;

&lt;p&gt;We have now successfully deployed version &lt;em&gt;0.1.6.1&lt;/em&gt; of our web application to Tomcat. This version of the application uses a very simple table structure to hold the names of those credited with the quotes, placing both the first name and last name into a single column called &lt;code&gt;AUTHOR&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This table structure was originally created by a Flyway database script with the following SQL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="nb"&gt;INT&lt;/span&gt; &lt;span class="n"&gt;auto_increment&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next version of our application will split the names into a &lt;code&gt;FIRSTNAME&lt;/code&gt; and &lt;code&gt;LASTNAME&lt;/code&gt; column. We add these columns with a new Flyway script that contains the following SQL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt;
&lt;span class="k"&gt;ADD&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="nb"&gt;varchar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt;
&lt;span class="k"&gt;ADD&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="nb"&gt;varchar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, we have to consider how these changes can be made in a backward compatible way. A cornerstone of the zero downtime deployment strategy is that the shared database must support both the current version of the application and the new version. Unfortunately, there is no silver bullet that provides this compatibility; it is up to us as developers to ensure that our changes don’t break any existing sessions.&lt;/p&gt;

&lt;p&gt;In this scenario, we must keep the old &lt;code&gt;AUTHOR&lt;/code&gt; column and duplicate the data it held into the new &lt;code&gt;FIRSTNAME&lt;/code&gt; and &lt;code&gt;LASTNAME&lt;/code&gt; columns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Rob'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Siltanen'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Albert'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Einstein'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Charles'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Eames'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Henry'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Ford'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Antoine'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'de Saint-Exupery'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Salvador'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Dali'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'M.C.'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Escher'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Paul'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Rand'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Elon'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Musk'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Jessica'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Hische'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Paul'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Rand'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Mark'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Weiser'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Pablo'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Picasso'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;AUTHOR&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;FIRSTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Charles'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LASTNAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Mingus'&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;14&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
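&lt;p&gt;The per-row updates above could also be expressed generically. As a sketch, assuming a MySQL compatible database, string functions can split on the first space, which keeps multi-word last names like &lt;em&gt;de Saint-Exupery&lt;/em&gt; intact:&lt;/p&gt;

```sql
UPDATE AUTHOR
SET FIRSTNAME = SUBSTRING_INDEX(AUTHOR, ' ', 1),
    LASTNAME  = SUBSTRING(AUTHOR, LOCATE(' ', AUTHOR) + 1);
```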



&lt;p&gt;In addition, the new JPA entity class needs to ignore the old &lt;code&gt;AUTHOR&lt;/code&gt; column (through the &lt;code&gt;@Transient&lt;/code&gt; annotation). The &lt;code&gt;getAuthor()&lt;/code&gt; method then returns the combined values of the &lt;code&gt;getFirstName()&lt;/code&gt; and &lt;code&gt;getLastName()&lt;/code&gt; methods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Entity&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Author&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nd"&gt;@Id&lt;/span&gt;
    &lt;span class="nd"&gt;@GeneratedValue&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;strategy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GenerationType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;AUTO&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;Integer&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="nd"&gt;@Column&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"FIRSTNAME"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;firstName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="nd"&gt;@Column&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"LASTNAME"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;lastName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="nd"&gt;@OneToMany&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;mappedBy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"author"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;cascade&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CascadeType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ALL&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;orphanRemoval&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Quote&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;quotes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ArrayList&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;();&lt;/span&gt;

    &lt;span class="kd"&gt;protected&lt;/span&gt; &lt;span class="nf"&gt;Author&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;Integer&lt;/span&gt; &lt;span class="nf"&gt;getId&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="nd"&gt;@Transient&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;getAuthor&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;getFirstName&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;" "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;getLastName&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Quote&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getQuotes&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;quotes&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;getFirstName&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;firstName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;getLastName&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;lastName&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While this is a trivial example, made easy by the fact that the &lt;code&gt;AUTHOR&lt;/code&gt; table is read-only, it demonstrates how database changes can be implemented in a backward compatible manner. Entire books could be written on strategies for maintaining backward compatibility, but for the purposes of this post, we’ll leave the discussion here.&lt;/p&gt;

&lt;p&gt;Before we perform the next deployment, reopen the existing application and refresh some quotes. This creates a session against the existing &lt;em&gt;0.1.6.1&lt;/em&gt; version, which we’ll use to test our zero downtime deployment strategy.&lt;/p&gt;

&lt;p&gt;With our migration scripts written in a backward compatible manner, we can deploy the new version of our application. For convenience this new version has been pushed to Maven Central as version &lt;em&gt;0.1.7&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fdeployment_two.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fdeployment_two.png" title="width=500" width="800" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the deployment completes, open the Manager application at &lt;code&gt;http://tomcatip:8080/manager/html&lt;/code&gt;. Note that while you could access the Manager through the load balancer, you don’t get to choose which Tomcat instance you will be managing, as the load balancer picks an instance for you. It is better to connect to a Tomcat instance directly, bypassing the load balancer:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fmanager_one.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fmanager_one.png" title="width=500" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are two things to notice in this screenshot:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We have two applications under the path &lt;code&gt;/randomquotes&lt;/code&gt;, each with a unique version.&lt;/li&gt;
&lt;li&gt;The application with the earlier version has a session associated with it. This was the session we created by accessing version &lt;em&gt;0.1.6.1&lt;/em&gt; before version &lt;em&gt;0.1.7&lt;/em&gt; was deployed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If we go back to the browser where we opened version &lt;em&gt;0.1.6.1&lt;/em&gt; of the application, we can continue to refresh the quotes. The counter increases as we expect, and the version number displayed in the footer remains at version &lt;em&gt;0.1.6.1&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;If we reopen the application in a private browsing window, we are guaranteed not to reuse an old session cookie, and we are directed to version &lt;em&gt;0.1.7&lt;/em&gt; of the application:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fprivate_window.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fprivate_window.png" title="width=500" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With that, we have demonstrated zero downtime deployments. Because our database migrations were designed to be backward compatible, version &lt;em&gt;0.1.6.1&lt;/em&gt; and version &lt;em&gt;0.1.7&lt;/em&gt; of our application can run side by side using Tomcat parallel deployments. Best of all, sessions for old deployments can still be transferred between Tomcat instances, so we retain high availability along with the parallel deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rollback strategies
&lt;/h2&gt;

&lt;p&gt;As long as database compatibility has been maintained between the last and current version of the application (versions &lt;code&gt;0.1.6.1&lt;/code&gt; and &lt;code&gt;0.1.7&lt;/code&gt; in this example), rolling back is as simple as creating a new deployment with the previous version of the application.&lt;/p&gt;

&lt;p&gt;Because the Tomcat deployment version is a timestamp calculated at deployment time, deploying version &lt;code&gt;0.1.6.1&lt;/code&gt; of the application again gives it the latest Tomcat version, and so it processes all new traffic.&lt;/p&gt;
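&lt;p&gt;This works because Tomcat’s parallel deployments use a double hash naming convention for WAR files, and the highest version string receives new sessions. The directory listing below is illustrative (the paths and timestamps are examples, not the real deployment output):&lt;/p&gt;

```plaintext
$ ls /var/lib/tomcat9/webapps
randomquotes##200401140129.war   # version 0.1.7, deployed first
randomquotes##200401150231.war   # version 0.1.6.1, redeployed later,
                                 # so it has the higher version string
```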

&lt;p&gt;Note that any existing sessions for version &lt;code&gt;0.1.7&lt;/code&gt; will be left to expire naturally thanks to Tomcat’s parallel deployments. If this version has to be taken offline (for example, if there is a critical issue and it cannot be left in service), we can use the &lt;strong&gt;Start/stop App in Tomcat&lt;/strong&gt; step to stop a deployed application.&lt;/p&gt;

&lt;p&gt;We’ll create a runbook to run this step, as it is a maintenance task that may need to be applied to any environment to pull a bad release.&lt;/p&gt;

&lt;p&gt;We start by adding a prompted variable that will be populated with the Tomcat version timestamp corresponding to the deployment we want to shut down:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_version.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_version.png" title="width=500" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The runbook is then configured with the &lt;strong&gt;Start/stop App in Tomcat&lt;/strong&gt; step. The &lt;strong&gt;Deployment version&lt;/strong&gt; is set to the value of the prompted variable:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_stop.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_stop.png" title="width=500" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the runbook is run, we are prompted for the timestamp version of the application to stop:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fstop_app_runbook_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fstop_app_runbook_1.png" title="width=500" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the runbook has completed, we can verify the application was stopped by opening the Manager console. In the screenshot below, you can see version &lt;strong&gt;200401140129&lt;/strong&gt; is not running. This version no longer responds to requests, and all future requests will then be directed to the latest version of the application:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fstopped_application.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fstopped_application.png" title="width=500" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Feature branch deployments
&lt;/h2&gt;

&lt;p&gt;A common development practice is completing a feature in a separate SCM branch, known as a feature branch.&lt;/p&gt;

&lt;p&gt;A CI server will typically watch for the presence of feature branches and create a deployable artifact from the code committed there.&lt;/p&gt;

&lt;p&gt;These feature branch artifacts are then versioned to indicate which branch they were built from. GitVersion is a popular tool for generating versions to match commits and branches in Git, and its documentation offers this example showing &lt;a href="https://gitversion.net/docs/git-branching-strategies/githubflow-examples" rel="noopener noreferrer"&gt;versions created as part of the GitHub flow&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fgithubflow_feature_branch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fgithubflow_feature_branch.png" title="width=500" width="323" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see from the image above, a commit to a feature branch called &lt;strong&gt;myfeature&lt;/strong&gt; generates a version like &lt;strong&gt;1.2.1-myfeature.1+1&lt;/strong&gt;. This, in turn, produces an artifact with a filename like &lt;code&gt;myapp.1.2.1-myfeature.1+1.zip&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Although tools like GitVersion generate SemVer version strings, the same format can be used for Maven artifacts. However, there is a catch.&lt;/p&gt;

&lt;p&gt;SemVer orders a version with a pre-release component lower than the same version without one. For example, &lt;strong&gt;1.2.1&lt;/strong&gt; is considered a higher version than &lt;strong&gt;1.2.1-myfeature&lt;/strong&gt;. This is the expected ordering, as a feature branch will eventually be merged back into the master branch.&lt;/p&gt;
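&lt;p&gt;The precedence rule can be illustrated with a minimal Python sketch. Note that this handles only the simple version strings discussed here, not the full SemVer specification:&lt;/p&gt;

```python
def semver_key(version):
    # Split "1.2.1-myfeature" into the numeric core and optional pre-release.
    core, _, prerelease = version.partition("-")
    numbers = tuple(int(part) for part in core.split("."))
    # A present pre-release (flag 0) sorts before an absent one (flag 1),
    # so 1.2.1-myfeature orders below 1.2.1, as SemVer requires.
    return (numbers, 0 if prerelease else 1, prerelease)

versions = ["1.2.1", "1.2.1-myfeature"]
print(sorted(versions, key=semver_key))  # ['1.2.1-myfeature', '1.2.1']
```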

&lt;p&gt;When a feature branch is appended to a Maven version, it is considered a qualifier. Maven allows arbitrary qualifiers but recognizes a handful of special ones like &lt;strong&gt;SNAPSHOT&lt;/strong&gt;, &lt;strong&gt;final&lt;/strong&gt;, and &lt;strong&gt;ga&lt;/strong&gt;. A complete list can be found in the blog post &lt;a href="https://octopus.com/blog/maven-versioning-explained" rel="noopener noreferrer"&gt;Maven versions explained&lt;/a&gt;. Maven versions with unrecognized qualifiers (and feature branch names are unrecognized qualifiers) are treated as later releases than unqualified versions.&lt;/p&gt;

&lt;p&gt;This means Maven considers version &lt;strong&gt;1.2.1-myfeature&lt;/strong&gt; to be a later release than &lt;strong&gt;1.2.1&lt;/strong&gt;, when clearly that is not the intention of a feature branch. You can verify this behavior with the following test in a project hosted on &lt;a href="https://github.com/mcasperson/MavenVersionTest/blob/master/src/test/java/org/apache/maven/artifact/versioning/VersionTest.java#L122" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, thanks to the functionality of channels in Octopus, we can ensure that both SemVer pre-release and Maven qualifiers are filtered in the expected way.&lt;/p&gt;

&lt;p&gt;Here is the default channel for our application deployment. Note the regular expression &lt;strong&gt;^$&lt;/strong&gt; in the &lt;strong&gt;Pre-release tag&lt;/strong&gt; field. This regular expression only matches empty strings, meaning the default channel will only ever deploy artifacts with no pre-release or qualifier string:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fdefault_channel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fdefault_channel.png" title="width=500" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we have the feature branch channel, which defines a regular expression of &lt;strong&gt;.+&lt;/strong&gt; for the &lt;strong&gt;Pre-release tag&lt;/strong&gt; field. This regular expression only matches non-empty strings, meaning the feature branch channel will only deploy artifacts with a pre-release or qualifier string:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ffeature_branch_channel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ffeature_branch_channel.png" title="width=500" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the list of versions that Octopus allows a release to be created from in the default channel. Notice that the only version displayed has no qualifier, which we take to mean it is the master release:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fdefault_channel_deployment.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fdefault_channel_deployment.png" title="width=500" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the list of versions that Octopus allows a release to be created from using the feature branch channel. All of these versions have a qualifier embedding the feature branch name:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ffeature_branch_channel_deployment.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ffeature_branch_channel_deployment.png" title="width=500" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The end result of these channels is that versions like &lt;strong&gt;1.2.1-myfeature&lt;/strong&gt; will never be compared to versions like &lt;strong&gt;1.2.1&lt;/strong&gt;, which removes the ambiguity with feature branch version numbers being considered later releases.&lt;/p&gt;
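&lt;p&gt;As a rough sketch, the two channel rules behave like the following Python snippet, where &lt;code&gt;channel_accepts&lt;/code&gt; is a hypothetical stand-in for the version filtering Octopus performs:&lt;/p&gt;

```python
import re

def channel_accepts(prerelease_tag_rule, prerelease):
    # A channel accepts a package when its Pre-release tag rule matches
    # the pre-release component of the package version.
    return re.fullmatch(prerelease_tag_rule, prerelease) is not None

# "^$" only matches an empty pre-release: the default channel.
print(channel_accepts("^$", ""))           # True
print(channel_accepts("^$", "myfeature"))  # False

# ".+" only matches a non-empty pre-release: the feature branch channel.
print(channel_accepts(".+", "myfeature"))  # True
print(channel_accepts(".+", ""))           # False
```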

&lt;p&gt;The final step is to deploy these feature branch packages to unique contexts so they can be accessed side by side on a single Tomcat instance. To do this, we modify the &lt;strong&gt;Context path&lt;/strong&gt; to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/randomquotes#{Octopus.Action.Package.PackageVersion | Replace "^([0-9\.]+)((?:-[A-Za-z0-9]+)?)(.*)$" "$2"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the regular expression above on the version &lt;code&gt;1.2.1-myfeature.1+1&lt;/code&gt; will do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;^([0-9\.]+)&lt;/code&gt; groups all digits and periods at the start of the version as group 1, which matches &lt;code&gt;1.2.1&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;((?:-[A-Za-z0-9]+)?)&lt;/code&gt; groups the leading dash and any subsequent alphanumeric characters (if any) as group 2, which matches &lt;code&gt;-myfeature&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;(.*)$&lt;/code&gt; groups any subsequent characters (if any) as group 3, which matches &lt;code&gt;.1+1&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This variable filter replaces the matched version string with just the second group from the regular expression. This results in the following context paths:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Version &lt;code&gt;1.2.1-myfeature.1+1&lt;/code&gt; generates a context path of &lt;code&gt;/randomquotes-myfeature&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Version &lt;code&gt;1.2.1&lt;/code&gt; generates a context path of &lt;code&gt;/randomquotes&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
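&lt;p&gt;If you want to experiment with the replacement locally, the following Python sketch mirrors the variable filter, with Python's &lt;code&gt;\2&lt;/code&gt; playing the role of &lt;code&gt;$2&lt;/code&gt;:&lt;/p&gt;

```python
import re

PATTERN = r"^([0-9\.]+)((?:-[A-Za-z0-9]+)?)(.*)$"

def context_path(version):
    # Replace the whole version with just group 2 (the "-branch" qualifier),
    # mirroring the Replace variable filter in the Context path field.
    return "/randomquotes" + re.sub(PATTERN, r"\2", version)

print(context_path("1.2.1-myfeature.1+1"))  # /randomquotes-myfeature
print(context_path("1.2.1"))                # /randomquotes
```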

&lt;p&gt;Here is a screenshot of the Tomcat deployment step with the new context path applied:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fcontext_path.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fcontext_path.png" title="width=500" width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The SemVer project provides a more &lt;a href="https://semver.org/#is-there-a-suggested-regular-expression-regex-to-check-a-semver-string" rel="noopener noreferrer"&gt;robust regular expression&lt;/a&gt; that reliably captures the groups in a SemVer version.&lt;/p&gt;

&lt;p&gt;The regular expression with named capture groups is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;^(?P&amp;lt;major&amp;gt;0|[1-9]\d*)\.(?P&amp;lt;minor&amp;gt;0|[1-9]\d*)\.(?P&amp;lt;patch&amp;gt;0|[1-9]\d*)(?:-(?P&amp;lt;prerelease&amp;gt;(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P&amp;lt;buildmetadata&amp;gt;[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The regular expression without named groups is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
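&lt;p&gt;As a quick check, the unnamed-group form of the expression can be exercised with Python's &lt;code&gt;re&lt;/code&gt; module:&lt;/p&gt;

```python
import re

# The unnamed-group form of the expression suggested by semver.org,
# split across lines for readability.
SEMVER = re.compile(
    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"
    r"(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)"
    r"(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?"
    r"(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$"
)

# Groups: major, minor, patch, pre-release, build metadata.
print(SEMVER.match("1.2.1-myfeature.1+1").groups())
# ('1', '2', '1', 'myfeature.1', '1')
```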



&lt;h2&gt;
  
  
  Public certificate management
&lt;/h2&gt;

&lt;p&gt;To finish configuring our infrastructure, we will enable HTTPS access via our load balancers. We can edit the Apache web server virtual host configuration to enable SSL and point to the keys and certificates we have obtained for our domain.&lt;/p&gt;

&lt;h3&gt;
  
  
  Obtain a certificate
&lt;/h3&gt;

&lt;p&gt;For this post, I have obtained a Let’s Encrypt certificate, generated through our DNS provider. The exact process for generating HTTPS certificates is not something we’ll look at here, but you can refer to your DNS or certificate provider for specific instructions.&lt;/p&gt;

&lt;p&gt;In the screenshot below, you can see the various options available for downloading the certificate. Note, while instructions and downloads are provided for Apache, we are downloading the PFX file provided under the instructions for IIS. PFX files contain the public certificate, private key, and certificate chain in a single file. We need this single file to upload the certificate to Octopus. Here we download the PFX file for the &lt;code&gt;octopus.tech&lt;/code&gt; domain:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fdns_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fdns_1.png" title="width=500" width="775" height="621"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the certificate in Octopus
&lt;/h3&gt;

&lt;p&gt;Deploying certificates is an ongoing operation. In particular, certificates provided by Let’s Encrypt expire every three months, and so need to be frequently refreshed.&lt;/p&gt;

&lt;p&gt;This makes deploying certificates a great use case for runbooks. Unlike a regular deployment, runbooks don’t need to progress through environments, and you don’t need to create a deployment. We’ll create a runbook to deploy the certificate to the Apache load balancers.&lt;/p&gt;

&lt;p&gt;First, we need to upload the PFX certificate that was generated by the DNS provider. In the screenshot below you can see the Let’s Encrypt certificate uploaded to the Octopus certificate store:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fcertificate.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fcertificate.png" title="width=500" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy the certificate
&lt;/h3&gt;

&lt;p&gt;Create a new project in Octopus, and add the certificate we just uploaded as a variable:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fcert_deploy_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fcert_deploy_1.png" title="width=500" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, create a runbook with a single &lt;strong&gt;Run a Script&lt;/strong&gt; step.&lt;/p&gt;

&lt;p&gt;The first step in the script is to enable &lt;strong&gt;mod_ssl&lt;/strong&gt; with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;a2enmod ssl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then create some directories to hold the certificate, certificate chain, and private key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"/etc/apache2/ssl"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mkdir&lt;/span&gt; /etc/apache2/ssl
&lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"/etc/apache2/ssl/private"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mkdir&lt;/span&gt; /etc/apache2/ssl/private
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The contents of the certificate variable are then saved as files into the directories above. Certificates are special variables that expose the individual components that make up a certificate with &lt;a href="https://octopus.com/docs/projects/variables/certificate-variables#expanded-properties" rel="noopener noreferrer"&gt;expanded properties&lt;/a&gt;. We need to access three properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Certificate.CertificatePem&lt;/code&gt;, which is the public certificate.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Certificate.PrivateKeyPem&lt;/code&gt;, which is the private key.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Certificate.ChainPem&lt;/code&gt;, which is the certificate chain.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We print the contents of these variables into three files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;get_octopusvariable &lt;span class="s2"&gt;"Certificate.CertificatePem"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apache2/ssl/octopus_tech.crt
get_octopusvariable &lt;span class="s2"&gt;"Certificate.PrivateKeyPem"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apache2/ssl/private/octopus_tech.key
get_octopusvariable &lt;span class="s2"&gt;"Certificate.ChainPem"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apache2/ssl/octopus_tech_bundle.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you recall, earlier in the post we created the file &lt;code&gt;/etc/apache2/sites-enabled/000-default.conf&lt;/code&gt; with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;VirtualHost&lt;/span&gt; &lt;span class="err"&gt;*:80&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  ErrorLog ${APACHE_LOG_DIR}/error.log
  CustomLog ${APACHE_LOG_DIR}/access.log combined
  JkMount /* loadbalancer
&lt;span class="nt"&gt;&amp;lt;/VirtualHost&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We want to modify this file, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;VirtualHost&lt;/span&gt; &lt;span class="err"&gt;*:443&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  ErrorLog ${APACHE_LOG_DIR}/error.log
  CustomLog ${APACHE_LOG_DIR}/access.log combined
  JkMount /* loadbalancer
  SSLEngine on
  SSLCertificateFile /etc/apache2/ssl/octopus_tech.crt
  SSLCertificateKeyFile /etc/apache2/ssl/private/octopus_tech.key
  SSLCertificateChainFile /etc/apache2/ssl/octopus_tech_bundle.pem
&lt;span class="nt"&gt;&amp;lt;/VirtualHost&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is done by echoing the desired text into the file &lt;code&gt;/etc/apache2/sites-enabled/000-default.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
&amp;lt;VirtualHost *:443&amp;gt;
  ErrorLog &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APACHE_LOG_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;/error.log
  CustomLog &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APACHE_LOG_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;/access.log combined
  JkMount /* loadbalancer
  SSLEngine on
  SSLCertificateFile /etc/apache2/ssl/octopus_tech.crt
  SSLCertificateKeyFile /etc/apache2/ssl/private/octopus_tech.key
  SSLCertificateChainFile /etc/apache2/ssl/octopus_tech_bundle.pem
&amp;lt;/VirtualHost&amp;gt;
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apache2/sites-enabled/000-default.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The final step is to restart the &lt;code&gt;apache2&lt;/code&gt; service to load the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl restart apache2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the complete script for reference:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;a2enmod ssl

&lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"/etc/apache2/ssl"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mkdir&lt;/span&gt; /etc/apache2/ssl
&lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"/etc/apache2/ssl/private"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mkdir&lt;/span&gt; /etc/apache2/ssl/private
get_octopusvariable &lt;span class="s2"&gt;"Certificate.CertificatePem"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apache2/ssl/octopus_tech.crt
get_octopusvariable &lt;span class="s2"&gt;"Certificate.PrivateKeyPem"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apache2/ssl/private/octopus_tech.key
get_octopusvariable &lt;span class="s2"&gt;"Certificate.ChainPem"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apache2/ssl/octopus_tech_bundle.pem

&lt;span class="o"&gt;{&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
&amp;lt;VirtualHost *:443&amp;gt;
  ErrorLog &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APACHE_LOG_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;/error.log
  CustomLog &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;APACHE_LOG_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;/access.log combined
  JkMount /* loadbalancer
  SSLEngine on
  SSLCertificateFile /etc/apache2/ssl/octopus_tech.crt
  SSLCertificateKeyFile /etc/apache2/ssl/private/octopus_tech.key
  SSLCertificateChainFile /etc/apache2/ssl/octopus_tech_bundle.pem
&amp;lt;/VirtualHost&amp;gt;
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apache2/sites-enabled/000-default.conf
systemctl restart apache2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Frunbook.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Frunbook.png" title="width=500" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the runbook has completed, we can verify the application is exposed via HTTPS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ffirefox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ffirefox.png" title="width=500" width="741" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Internal certificate management
&lt;/h2&gt;

&lt;p&gt;As we’ve seen, it is useful to connect directly to the Tomcat instances when using the Manager console. This connection transmits credentials, so it should be made over a secure channel. To support this, we configure Tomcat with self-signed certificates.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create self-signed certificates
&lt;/h3&gt;

&lt;p&gt;Because our Tomcat instances are not exposed via a hostname, &lt;a href="https://cabforum.org/internal-names/" rel="noopener noreferrer"&gt;we don’t have the option of getting a certificate for them&lt;/a&gt;. To enable HTTPS, we need to create self-signed certificates, which can be done with OpenSSL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl genrsa 2048 &amp;gt; private.pem
openssl req -x509 -new -key private.pem -out public.pem -days 3650
openssl pkcs12 -export -in public.pem -inkey private.pem -out mycert.pfx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
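&lt;p&gt;If you prefer to run these commands non-interactively (for example, from a script), an equivalent sketch is shown below. The subject name and export password are placeholders; substitute your own values:&lt;/p&gt;

```shell
# Generate the private key directly to a file.
openssl genrsa -out private.pem 2048

# Create a self-signed certificate valid for ten years; -subj avoids the
# interactive prompts ("tomcat.internal" is a placeholder common name).
openssl req -x509 -new -key private.pem -out public.pem -days 3650 -subj "/CN=tomcat.internal"

# Bundle the certificate and key into a PFX file with a placeholder password.
openssl pkcs12 -export -in public.pem -inkey private.pem -out mycert.pfx -passout pass:changeit
```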



&lt;h3&gt;
  
  
  Add the certificate to Tomcat
&lt;/h3&gt;

&lt;p&gt;The certificate is configured in Tomcat using the &lt;strong&gt;Deploy a certificate to Tomcat&lt;/strong&gt; step.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Tomcat CATALINA_HOME path&lt;/strong&gt; is set to &lt;code&gt;/usr/share/tomcat9&lt;/code&gt; and the &lt;strong&gt;Tomcat CATALINA_BASE path&lt;/strong&gt; is set to &lt;code&gt;/var/lib/tomcat9&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_paths.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_paths.png" title="width=500" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We reference a certificate variable for the &lt;strong&gt;Select certificate variable&lt;/strong&gt; field. The default value of &lt;strong&gt;Catalina&lt;/strong&gt; is fine for the &lt;strong&gt;Tomcat service name&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We have a few choices for how the certificate is handled by Tomcat. Generally speaking, the &lt;strong&gt;Blocking IO&lt;/strong&gt;, &lt;strong&gt;Non-Blocking IO&lt;/strong&gt;, &lt;strong&gt;Non-Blocking IO 2&lt;/strong&gt;, and &lt;strong&gt;Apache Portable Runtime&lt;/strong&gt; options have an increasing level of performance. The &lt;strong&gt;Apache Portable Runtime&lt;/strong&gt; is an additional library that Tomcat can take advantage of, and it is provided by the Tomcat packages we installed with &lt;code&gt;apt-get&lt;/code&gt;, so it makes sense to use that option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_cert.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Ftomcat_cert.png" title="width=500" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To allow Tomcat to use the new configuration, we need to restart the service with a script step using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl restart tomcat9
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now load the Manager console at &lt;code&gt;https://tomcatip:8443/manager/html&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scale up to multiple environments
&lt;/h2&gt;

&lt;p&gt;The infrastructure we have created thus far can now be used as a template for other testing or production environments. Nothing we have presented here is environment specific, meaning all the processes and infrastructure can be scaled out to as many environments as needed.&lt;/p&gt;

&lt;p&gt;By associating the Tentacles assigned to the new Tomcat and load balancer instances with additional environments in Octopus, we gain the ability to push deployments through to production:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fmultiple_environments.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.octopus.com%2Fblog%2F2020-04%2Fultimate-guide-to-tomcat-deployments%2Fmultiple_environments.png" title="width=500" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If you have reached this point, congratulations! Setting up a highly available Tomcat cluster with zero downtime deployments, feature branches, rollback, and HTTPS is not for the fainthearted. It is still up to the end user to combine multiple technologies to achieve this result, but I hope the instructions laid out in this blog post expose some of the magic that goes into real world Java deployments.&lt;/p&gt;

&lt;p&gt;To summarize, in this post we:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configured Tomcat session replication with a PostgreSQL database and session cookie rewriting with the &lt;code&gt;JvmRouteBinderValve&lt;/code&gt; valve.&lt;/li&gt;
&lt;li&gt;Configured Apache web servers acting as load balancers with the mod_jk plugin.&lt;/li&gt;
&lt;li&gt;Implemented high availability amongst the load balancers with Keepalived.&lt;/li&gt;
&lt;li&gt;Performed zero downtime deployments with Tomcat’s parallel deployment feature and Flyway performing backward compatible database migrations.&lt;/li&gt;
&lt;li&gt;Smoke tested the deployments with community steps in Octopus.&lt;/li&gt;
&lt;li&gt;Implemented feature branch deployments, taking into account the limitations of the Maven versioning strategy with Octopus channels.&lt;/li&gt;
&lt;li&gt;Looked at how applications can be rolled back or pulled from service.&lt;/li&gt;
&lt;li&gt;Added HTTPS certificates to Apache.&lt;/li&gt;
&lt;li&gt;Repeated the process for multiple environments.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>tomcat</category>
      <category>java</category>
      <category>octopus</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Creating a Kubernetes Operator with Kotlin</title>
      <dc:creator>Matthew Casperson</dc:creator>
      <pubDate>Thu, 27 Feb 2020 23:47:26 +0000</pubDate>
      <link>https://forem.com/mcasperson/creating-a-kubernetes-operator-with-kotlin-48a1</link>
      <guid>https://forem.com/mcasperson/creating-a-kubernetes-operator-with-kotlin-48a1</guid>
      <description>&lt;p&gt;Most environments will initially treat their Kubernetes cluster as a tool to orchestrate containers and configure traffic between them. Kubernetes supports this use case very well by providing declarative descriptions of the desired container state and their connections.&lt;/p&gt;

&lt;p&gt;When used in this way, developers and operations staff sit outside of the cluster, looking in. The cluster is managed with calls to &lt;code&gt;kubectl&lt;/code&gt; that are made in an ad-hoc fashion or from a CI/CD pipeline. This means Kubernetes itself is quite naïve; it understands how to reconfigure itself to match the desired state, but it has no understanding of what that state represents.&lt;/p&gt;

&lt;p&gt;For example, a common Kubernetes deployment might see three pods created: a front end web application, a backend web service, and a database. The relationship between these pods is well understood by the developers deploying them as a classic three-tier architecture, but Kubernetes literally sees nothing more than three pods to be deployed, monitored, and exposed to network traffic.&lt;/p&gt;

&lt;p&gt;The operator pattern has evolved as a way of encapsulating business knowledge and operational workflows in the Kubernetes cluster itself, allowing a cluster to implement high-level, domain-specific concepts with common, low-level resources like pods, services, and deployments.&lt;/p&gt;

&lt;p&gt;The term was originally coined by Brandon Philips in the blog post &lt;a href="https://coreos.com/blog/introducing-operators.html"&gt;Introducing Operators: Putting Operational Knowledge into Software&lt;/a&gt;, which offers this definition:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It builds upon the basic Kubernetes resource and controller concepts but includes domain or application-specific knowledge to automate common tasks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The three key components identified in this definition are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resource&lt;/li&gt;
&lt;li&gt;Controller&lt;/li&gt;
&lt;li&gt;Domain or application-specific knowledge&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, a &lt;em&gt;resource&lt;/em&gt; means a Custom Resource Definition (CRD), a &lt;em&gt;controller&lt;/em&gt; means an application integrated into and responding to the Kubernetes API, and the &lt;em&gt;application-specific knowledge&lt;/em&gt; is the logic implemented in the &lt;em&gt;controller&lt;/em&gt; to reify high-level concepts from standard Kubernetes resources.&lt;/p&gt;

&lt;p&gt;To understand the operator pattern, let’s look at a simple example written in Kotlin. The code for this operator is available from &lt;a href="https://github.com/OctopusSamples/KotlinK8SOperator"&gt;GitHub&lt;/a&gt;, and it is based on the code from this &lt;a href="https://developers.redhat.com/blog/2019/10/07/write-a-simple-kubernetes-operator-in-java-using-the-fabric8-kubernetes-client/"&gt;Red Hat blog&lt;/a&gt;. The operator extends the Kubernetes cluster with the concept of a web server, implemented as a &lt;code&gt;WebServer&lt;/code&gt; CRD and a controller that builds pods from an image known to expose a sample web server.&lt;/p&gt;

&lt;p&gt;The CRD meets the &lt;em&gt;resource&lt;/em&gt; requirement, the code we’ll write to interact with the Kubernetes API meets the &lt;em&gt;controller&lt;/em&gt; requirement, and the knowledge that a particular Docker image is used to expose a sample web server is the &lt;em&gt;application-specific knowledge&lt;/em&gt;.&lt;/p&gt;
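&lt;p&gt;To make the goal concrete, here is a sketch of what a &lt;code&gt;WebServer&lt;/code&gt; custom resource instance could look like once the CRD is registered. The &lt;code&gt;metadata&lt;/code&gt; values are hypothetical; the group, version, and kind match those defined by the operator later in this post:&lt;/p&gt;

```yaml
# A hypothetical WebServer resource requesting three web server pods.
apiVersion: demo.k8s.io/v1alpha1
kind: WebServer
metadata:
  name: example-webserver
  namespace: default
spec:
  replicas: 3
```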

&lt;h2&gt;
  
  
  The pom.xml file
&lt;/h2&gt;

&lt;p&gt;We start with the Maven &lt;code&gt;pom.xml&lt;/code&gt; file. This file defines the dependencies required for Kotlin itself and the &lt;a href="https://github.com/fabric8io/kubernetes-client"&gt;fabric8 Kubernetes client library&lt;/a&gt;. The complete &lt;code&gt;pom.xml&lt;/code&gt; file is shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;?xml version="1.0" encoding="UTF-8"?&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;project&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;modelVersion&amp;gt;&lt;/span&gt;4.0.0&lt;span class="nt"&gt;&amp;lt;/modelVersion&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;com.octopus&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;kotlink8soperator&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.0&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;

    &lt;span class="nt"&gt;&amp;lt;properties&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;kotlin.version&amp;gt;&lt;/span&gt;1.3.61&lt;span class="nt"&gt;&amp;lt;/kotlin.version&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;version.fabric8.client&amp;gt;&lt;/span&gt;4.7.0&lt;span class="nt"&gt;&amp;lt;/version.fabric8.client&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/properties&amp;gt;&lt;/span&gt;

    &lt;span class="nt"&gt;&amp;lt;dependencies&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.jetbrains.kotlin&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;kotlin-stdlib&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;${kotlin.version}&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;io.fabric8&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;kubernetes-client&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;${version.fabric8.client}&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/dependencies&amp;gt;&lt;/span&gt;

    &lt;span class="nt"&gt;&amp;lt;build&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;sourceDirectory&amp;gt;&lt;/span&gt;${project.basedir}/src/main/kotlin&lt;span class="nt"&gt;&amp;lt;/sourceDirectory&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;testSourceDirectory&amp;gt;&lt;/span&gt;${project.basedir}/src/test/kotlin&lt;span class="nt"&gt;&amp;lt;/testSourceDirectory&amp;gt;&lt;/span&gt;

        &lt;span class="nt"&gt;&amp;lt;plugins&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;plugin&amp;gt;&lt;/span&gt;
                &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.jetbrains.kotlin&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
                &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;kotlin-maven-plugin&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
                &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;${kotlin.version}&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;

                &lt;span class="nt"&gt;&amp;lt;executions&amp;gt;&lt;/span&gt;
                    &lt;span class="nt"&gt;&amp;lt;execution&amp;gt;&lt;/span&gt;
                        &lt;span class="nt"&gt;&amp;lt;id&amp;gt;&lt;/span&gt;compile&lt;span class="nt"&gt;&amp;lt;/id&amp;gt;&lt;/span&gt;
                        &lt;span class="nt"&gt;&amp;lt;goals&amp;gt;&lt;/span&gt;
                            &lt;span class="nt"&gt;&amp;lt;goal&amp;gt;&lt;/span&gt;compile&lt;span class="nt"&gt;&amp;lt;/goal&amp;gt;&lt;/span&gt;
                        &lt;span class="nt"&gt;&amp;lt;/goals&amp;gt;&lt;/span&gt;
                    &lt;span class="nt"&gt;&amp;lt;/execution&amp;gt;&lt;/span&gt;

                    &lt;span class="nt"&gt;&amp;lt;execution&amp;gt;&lt;/span&gt;
                        &lt;span class="nt"&gt;&amp;lt;id&amp;gt;&lt;/span&gt;test-compile&lt;span class="nt"&gt;&amp;lt;/id&amp;gt;&lt;/span&gt;
                        &lt;span class="nt"&gt;&amp;lt;goals&amp;gt;&lt;/span&gt;
                            &lt;span class="nt"&gt;&amp;lt;goal&amp;gt;&lt;/span&gt;test-compile&lt;span class="nt"&gt;&amp;lt;/goal&amp;gt;&lt;/span&gt;
                        &lt;span class="nt"&gt;&amp;lt;/goals&amp;gt;&lt;/span&gt;
                    &lt;span class="nt"&gt;&amp;lt;/execution&amp;gt;&lt;/span&gt;
                &lt;span class="nt"&gt;&amp;lt;/executions&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;/plugin&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;/plugins&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/build&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/project&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Anatomy of a Kubernetes resource
&lt;/h2&gt;

&lt;p&gt;Before we dive into the Kotlin code, we need to understand the common structure of all Kubernetes resources. Here is the YAML definition of a deployment resource that we’ll use as an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.7.9&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;availableReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;observedGeneration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;readyReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;updatedReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This resource can be broken down into four components.&lt;/p&gt;

&lt;p&gt;The first component is the Group, Version, and Kind (GVK). The deployment resource has a group of &lt;code&gt;apps&lt;/code&gt;, a version of &lt;code&gt;v1&lt;/code&gt;, and a kind of &lt;code&gt;Deployment&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The second component is the metadata. This is where labels, annotations, names, and namespaces are defined:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The third component is the spec, which defines the properties of the specific resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.7.9&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The fourth component is the status. The details in this component are generated by Kubernetes to reflect the current state of the resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;availableReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;observedGeneration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;readyReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;updatedReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  The CRD classes
&lt;/h2&gt;

&lt;p&gt;Now that we know the components that make up a Kubernetes resource, we can look at the code that reflects the CRD implemented by the operator.&lt;/p&gt;

&lt;p&gt;We are creating a new CRD called &lt;code&gt;WebServer&lt;/code&gt;, which is represented by a class also called &lt;code&gt;WebServer&lt;/code&gt;. This class has two properties defining the spec and the status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="nn"&gt;com.octopus.webserver.operator.crd&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;io.fabric8.kubernetes.client.CustomResource&lt;/span&gt;

&lt;span class="kd"&gt;data class&lt;/span&gt; &lt;span class="nc"&gt;WebServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="py"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;WebServerSpec&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;WebServerSpec&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
                     &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="py"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;WebServerStatus&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;WebServerStatus&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;CustomResource&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The spec for our CRD is represented in the &lt;code&gt;WebServerSpec&lt;/code&gt; class. This has a single field called &lt;code&gt;replicas&lt;/code&gt; indicating how many web server pods this CRD is responsible for creating:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.octopus.webserver.operator.crd

import com.fasterxml.jackson.databind.annotation.JsonDeserialize
import io.fabric8.kubernetes.api.model.KubernetesResource

@JsonDeserialize
data class WebServerSpec(val replicas: Int = 0) : KubernetesResource
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The status of our CRD is represented in the &lt;code&gt;WebServerStatus&lt;/code&gt; class. It contains a single field called &lt;code&gt;count&lt;/code&gt; that reports how many pods have been created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.octopus.webserver.operator.crd

import com.fasterxml.jackson.databind.annotation.JsonDeserialize
import io.fabric8.kubernetes.api.model.KubernetesResource

@JsonDeserialize
data class WebServerStatus(var count: Int = 0) : KubernetesResource
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The final two classes, called &lt;code&gt;WebServerList&lt;/code&gt; and &lt;code&gt;DoneableWebServer&lt;/code&gt;, contain no custom properties or logic, and are boilerplate code required by the fabric8 library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.octopus.webserver.operator.crd

import io.fabric8.kubernetes.client.CustomResourceList

class WebServerList : CustomResourceList&amp;lt;WebServer&amp;gt;()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;





&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.octopus.webserver.operator.crd

import io.fabric8.kubernetes.client.CustomResourceDoneable
import io.fabric8.kubernetes.api.builder.Function

class DoneableWebServer(resource: WebServer, function: Function&amp;lt;WebServer,WebServer&amp;gt;) :
        CustomResourceDoneable&amp;lt;WebServer&amp;gt;(resource, function)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  The main function
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;main()&lt;/code&gt; function is the entry point to our controller. Here is the complete code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.octopus.webserver.operator

import com.octopus.webserver.operator.controller.WebServerController
import com.octopus.webserver.operator.crd.WebServer
import com.octopus.webserver.operator.crd.WebServerList
import io.fabric8.kubernetes.api.model.Pod
import io.fabric8.kubernetes.api.model.PodList
import io.fabric8.kubernetes.api.model.apiextensions.CustomResourceDefinitionBuilder
import io.fabric8.kubernetes.client.DefaultKubernetesClient
import io.fabric8.kubernetes.client.dsl.base.CustomResourceDefinitionContext


fun main(args: Array&amp;lt;String&amp;gt;) {
    val client = DefaultKubernetesClient()
    client.use {
        val namespace = client.namespace ?: "default"
        val podSetCustomResourceDefinition = CustomResourceDefinitionBuilder()
                .withNewMetadata().withName("webservers.demo.k8s.io").endMetadata()
                .withNewSpec()
                .withGroup("demo.k8s.io")
                .withVersion("v1alpha1")
                .withNewNames().withKind("WebServer").withPlural("webservers").endNames()
                .withScope("Namespaced")
                .endSpec()
                .build()
        val webServerCustomResourceDefinitionContext = CustomResourceDefinitionContext.Builder()
                .withVersion("v1alpha1")
                .withScope("Namespaced")
                .withGroup("demo.k8s.io")
                .withPlural("webservers")
                .build()
        val informerFactory = client.informers()
        val podSharedIndexInformer = informerFactory.sharedIndexInformerFor(
                Pod::class.java,
                PodList::class.java,
                10 * 60 * 1000.toLong())
        val webServerSharedIndexInformer = informerFactory.sharedIndexInformerForCustomResource(
                webServerCustomResourceDefinitionContext,
                WebServer::class.java,
                WebServerList::class.java,
                10 * 60 * 1000.toLong())
        val webServerController = WebServerController(
                client,
                podSharedIndexInformer,
                webServerSharedIndexInformer,
                podSetCustomResourceDefinition,
                namespace)

        webServerController.create()
        informerFactory.startAllRegisteredInformers()

        webServerController.run()
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We create a &lt;code&gt;DefaultKubernetesClient&lt;/code&gt;, which gives us access to the Kubernetes API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;val client = DefaultKubernetesClient()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The client knows how to configure itself based on the environment it is executed in. We’ll run this code locally when testing, meaning the client will access the details of the Kubernetes cluster from the &lt;code&gt;~/.kube/config&lt;/code&gt; file. The namespace is then extracted from the client’s configuration, or set to &lt;code&gt;default&lt;/code&gt; if no namespace setting was found:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;val namespace = client.namespace ?: "default"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;CustomResourceDefinitionBuilder&lt;/code&gt; defines the &lt;code&gt;WebServer&lt;/code&gt; CRD that this controller manages. The resulting definition is later passed to the client when updating the status of &lt;code&gt;WebServer&lt;/code&gt; resources in the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;val podSetCustomResourceDefinition = CustomResourceDefinitionBuilder()
        .withNewMetadata().withName("webservers.demo.k8s.io").endMetadata()
        .withNewSpec()
        .withGroup("demo.k8s.io")
        .withVersion("v1alpha1")
        .withNewNames().withKind("WebServer").withPlural("webservers").endNames()
        .withScope("Namespaced")
        .endSpec()
        .build()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The controller works by listening to events that indicate the resources it should be managing have changed. To listen to events relating to the &lt;code&gt;WebServer&lt;/code&gt; CRD, we create a &lt;code&gt;CustomResourceDefinitionContext&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;val webServerCustomResourceDefinitionContext = CustomResourceDefinitionContext.Builder()
        .withVersion("v1alpha1")
        .withScope("Namespaced")
        .withGroup("demo.k8s.io")
        .withPlural("webservers")
        .build()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We are notified of events through informers, and the informers are created from a factory provided by the client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;val informerFactory = client.informers()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Here we create an informer that will notify us of events relating to pods. Because pods are a standard resource in Kubernetes, creating this informer does not require a &lt;code&gt;CustomResourceDefinitionContext&lt;/code&gt;. The final argument is the resync period in milliseconds (ten minutes here):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;val podSharedIndexInformer = informerFactory.sharedIndexInformerFor(
        Pod::class.java,
        PodList::class.java,
        10 * 60 * 1000.toLong())
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Here we create an informer that will notify us of events relating to our CRD. This requires the &lt;code&gt;CustomResourceDefinitionContext&lt;/code&gt; created previously:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;val webServerSharedIndexInformer = informerFactory.sharedIndexInformerForCustomResource(
        webServerCustomResourceDefinitionContext,
        WebServer::class.java,
        WebServerList::class.java,
        10 * 60 * 1000.toLong())
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The logic of the operator is contained in the controller. In this project, the &lt;code&gt;WebServerController&lt;/code&gt; class fulfills the role of the controller:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;val webServerController = WebServerController(
        client,
        podSharedIndexInformer,
        webServerSharedIndexInformer,
        podSetCustomResourceDefinition,
        namespace)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The controller wires up its event handlers in the &lt;code&gt;create()&lt;/code&gt; method; we then start all registered informers to begin listening for events and enter the reconcile loop by calling the &lt;code&gt;run()&lt;/code&gt; method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;webServerController.create()
informerFactory.startAllRegisteredInformers()

webServerController.run()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  The controller
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;WebServerController&lt;/code&gt; class implements the controller in our operator. Its job is to listen for changes to Kubernetes resources and reconcile the current state with the desired state. The complete code for the class is shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.octopus.webserver.operator.controller

import com.octopus.webserver.operator.crd.DoneableWebServer
import com.octopus.webserver.operator.crd.WebServer
import com.octopus.webserver.operator.crd.WebServerList
import io.fabric8.kubernetes.api.model.OwnerReference
import io.fabric8.kubernetes.api.model.Pod
import io.fabric8.kubernetes.api.model.PodBuilder
import io.fabric8.kubernetes.api.model.apiextensions.CustomResourceDefinition
import io.fabric8.kubernetes.client.KubernetesClient
import io.fabric8.kubernetes.client.informers.ResourceEventHandler
import io.fabric8.kubernetes.client.informers.SharedIndexInformer
import io.fabric8.kubernetes.client.informers.cache.Cache
import io.fabric8.kubernetes.client.informers.cache.Lister
import java.util.*
import java.util.AbstractMap.SimpleEntry
import java.util.concurrent.ArrayBlockingQueue


class WebServerController(private val kubernetesClient: KubernetesClient,
                          private val podInformer: SharedIndexInformer&amp;lt;Pod&amp;gt;,
                          private val webServerInformer: SharedIndexInformer&amp;lt;WebServer&amp;gt;,
                          private val webServerResourceDefinition: CustomResourceDefinition,
                          private val namespace: String) {
    private val APP_LABEL = "app"
    private val webServerLister = Lister&amp;lt;WebServer&amp;gt;(webServerInformer.indexer, namespace)
    private val podLister = Lister&amp;lt;Pod&amp;gt;(podInformer.indexer, namespace)
    private val workQueue = ArrayBlockingQueue&amp;lt;String&amp;gt;(1024)

    fun create() {
        webServerInformer.addEventHandler(object : ResourceEventHandler&amp;lt;WebServer&amp;gt; {
            override fun onAdd(webServer: WebServer) {
                enqueueWebServer(webServer)
            }

            override fun onUpdate(webServer: WebServer, newWebServer: WebServer) {
                enqueueWebServer(newWebServer)
            }

            override fun onDelete(webServer: WebServer, b: Boolean) {}
        })

        podInformer.addEventHandler(object : ResourceEventHandler&amp;lt;Pod&amp;gt; {
            override fun onAdd(pod: Pod) {
                handlePodObject(pod)
            }

            override fun onUpdate(oldPod: Pod, newPod: Pod) {
                if (oldPod.metadata.resourceVersion == newPod.metadata.resourceVersion) {
                    return
                }
                handlePodObject(newPod)
            }

            override fun onDelete(pod: Pod, b: Boolean) {}
        })
    }

    private fun enqueueWebServer(webServer: WebServer) {
        val key: String = Cache.metaNamespaceKeyFunc(webServer)
        if (key.isNotEmpty()) {
            workQueue.add(key)
        }
    }

    private fun handlePodObject(pod: Pod) {
        val ownerReference = getControllerOf(pod)

        if (ownerReference?.kind?.equals("WebServer", ignoreCase = true) != true) {
            return
        }

        webServerLister
                .get(ownerReference.name)
                ?.also { enqueueWebServer(it) }
    }

    private fun getControllerOf(pod: Pod): OwnerReference? =
            pod.metadata.ownerReferences.firstOrNull { it.controller }

    private fun reconcile(webServer: WebServer) {
        val pods = podCountByLabel(APP_LABEL, webServer.metadata.name)
        val existingPods = pods.size

        webServer.status.count = existingPods
        updateStatus(webServer)

        if (existingPods &amp;lt; webServer.spec.replicas) {
            createPod(webServer)
        } else if (existingPods &amp;gt; webServer.spec.replicas) {
            kubernetesClient
                    .pods()
                    .inNamespace(webServer.metadata.namespace)
                    .withName(pods[0])
                    .delete()
        }
    }

    private fun updateStatus(webServer: WebServer) =
            kubernetesClient.customResources(webServerResourceDefinition, WebServer::class.java, WebServerList::class.java, DoneableWebServer::class.java)
                    .inNamespace(webServer.metadata.namespace)
                    .withName(webServer.metadata.name)
                    .updateStatus(webServer)

    private fun podCountByLabel(label: String, webServerName: String): List&amp;lt;String&amp;gt; =
            podLister.list()
                    .filter { it.metadata.labels.entries.contains(SimpleEntry(label, webServerName)) }
                    .filter { it.status.phase == "Running" || it.status.phase == "Pending" }
                    .map { it.metadata.name }

    private fun createPod(webServer: WebServer) =
            createNewPod(webServer).let { pod -&amp;gt;
                kubernetesClient.pods().inNamespace(webServer.metadata.namespace).create(pod)
            }

    private fun createNewPod(webServer: WebServer): Pod =
            PodBuilder()
                    .withNewMetadata()
                    .withGenerateName(webServer.metadata.name.toString() + "-pod")
                    .withNamespace(webServer.metadata.namespace)
                    .withLabels(Collections.singletonMap(APP_LABEL, webServer.metadata.name))
                    .addNewOwnerReference()
                    .withController(true)
                    .withKind("WebServer")
                    .withApiVersion("demo.k8s.io/v1alpha1")
                    .withName(webServer.metadata.name)
                    .withNewUid(webServer.metadata.uid)
                    .endOwnerReference()
                    .endMetadata()
                    .withNewSpec()
                    .addNewContainer().withName("nginx").withImage("nginxdemos/hello").endContainer()
                    .endSpec()
                    .build()

    fun run() {
        blockUntilSynced()
        while (true) {
            try {
                workQueue
                        .take()
                        .split("/")
                        .toTypedArray()[1]
                        .let { webServerLister.get(it) }
                        ?.also { reconcile(it) }
            } catch (interruptedException: InterruptedException) {
                // ignored
            }
        }
    }

    private fun blockUntilSynced() {
        while (!podInformer.hasSynced() || !webServerInformer.hasSynced()) {}
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;create()&lt;/code&gt; method registers anonymous classes as informer event handlers. The event handlers identify instances of the &lt;code&gt;WebServer&lt;/code&gt; CRD that need to be processed by calling either &lt;code&gt;enqueueWebServer()&lt;/code&gt; or &lt;code&gt;handlePodObject()&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fun create() {
        webServerInformer.addEventHandler(object : ResourceEventHandler&amp;lt;WebServer&amp;gt; {
            override fun onAdd(webServer: WebServer) {
                enqueueWebServer(webServer)
            }

            override fun onUpdate(webServer: WebServer, newWebServer: WebServer) {
                enqueueWebServer(newWebServer)
            }

            override fun onDelete(webServer: WebServer, b: Boolean) {}
        })

        podInformer.addEventHandler(object : ResourceEventHandler&amp;lt;Pod&amp;gt; {
            override fun onAdd(pod: Pod) {
                handlePodObject(pod)
            }

            override fun onUpdate(oldPod: Pod, newPod: Pod) {
                if (oldPod.metadata.resourceVersion == newPod.metadata.resourceVersion) {
                    return
                }
                handlePodObject(newPod)
            }

            override fun onDelete(pod: Pod, b: Boolean) {}
        })
    }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;code&gt;enqueueWebServer()&lt;/code&gt; creates a key in the form &lt;code&gt;namespace/name&lt;/code&gt; identifying the &lt;code&gt;WebServer&lt;/code&gt; resource and adds it to the &lt;code&gt;workQueue&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private fun enqueueWebServer(webServer: WebServer) {
    val key: String = Cache.metaNamespaceKeyFunc(webServer)
    if (key.isNotEmpty()) {
        workQueue.add(key)
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;code&gt;handlePodObject()&lt;/code&gt; first determines if the pod is managed by a &lt;code&gt;WebServer&lt;/code&gt; through the ownerReference. If it is, the owning &lt;code&gt;WebServer&lt;/code&gt; is added to the &lt;code&gt;workQueue&lt;/code&gt; by calling &lt;code&gt;enqueueWebServer()&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private fun handlePodObject(pod: Pod) {
    val ownerReference = getControllerOf(pod)

    if (ownerReference?.kind?.equals("WebServer", ignoreCase = true) != true) {
        return
    }

    webServerLister
            .get(ownerReference.name)
            ?.also { enqueueWebServer(it) }
}

private fun getControllerOf(pod: Pod): OwnerReference? =
        pod.metadata.ownerReferences.firstOrNull { it.controller }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;code&gt;reconcile()&lt;/code&gt; provides the logic that ensures the cluster has as many pods as required by the &lt;code&gt;WebServer&lt;/code&gt; CRD. It calls &lt;code&gt;podCountByLabel()&lt;/code&gt; to find out how many pods exist, and updates the status of the CRD with a call to &lt;code&gt;updateStatus()&lt;/code&gt;. If there are too few pods to meet the requirements, &lt;code&gt;createPod()&lt;/code&gt; is called. If there are too many pods, one is deleted.&lt;/p&gt;

&lt;p&gt;By continually creating or deleting pods, the controller pushes the cluster towards the desired state until it satisfies the requirements of the &lt;code&gt;WebServer&lt;/code&gt; CRD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private fun reconcile(webServer: WebServer) {
    val pods = podCountByLabel(APP_LABEL, webServer.metadata.name)
    val existingPods = pods.size

    webServer.status.count = existingPods
    updateStatus(webServer)

    if (existingPods &amp;lt; webServer.spec.replicas) {
        createPod(webServer)
    } else if (existingPods &amp;gt; webServer.spec.replicas) {
        kubernetesClient
                .pods()
                .inNamespace(webServer.metadata.namespace)
                .withName(pods[0])
                .delete()
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;code&gt;updateStatus()&lt;/code&gt; uses the client to update the status component of our custom resource. The status component is unique because updating it does not trigger an update event in our code. Only a controller can update the status component of a resource, and Kubernetes has been designed to prevent status updates from triggering an infinite event loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private fun updateStatus(webServer: WebServer) =
        kubernetesClient.customResources(webServerResourceDefinition, WebServer::class.java, WebServerList::class.java, DoneableWebServer::class.java)
                .inNamespace(webServer.metadata.namespace)
                .withName(webServer.metadata.name)
                .updateStatus(webServer)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;code&gt;podCountByLabel()&lt;/code&gt; returns the names of the pods managed by the CRD that are either running or in the process of being created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private fun podCountByLabel(label: String, webServerName: String): List&amp;lt;String&amp;gt; =
        podLister.list()
                .filter { it.metadata.labels.entries.contains(SimpleEntry(label, webServerName)) }
                .filter { it.status.phase == "Running" || it.status.phase == "Pending" }
                .map { it.metadata.name }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;code&gt;createPod()&lt;/code&gt; and &lt;code&gt;createNewPod()&lt;/code&gt; create a new pod. It is here that our business logic has been codified with the use of the &lt;code&gt;nginxdemos/hello&lt;/code&gt; Docker image as our test web server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private fun createPod(webServer: WebServer) =
        createNewPod(webServer).let { pod -&amp;gt;
            kubernetesClient.pods().inNamespace(webServer.metadata.namespace).create(pod)
        }

private fun createNewPod(webServer: WebServer): Pod =
        PodBuilder()
                .withNewMetadata()
                .withGenerateName(webServer.metadata.name.toString() + "-pod")
                .withNamespace(webServer.metadata.namespace)
                .withLabels(Collections.singletonMap(APP_LABEL, webServer.metadata.name))
                .addNewOwnerReference()
                .withController(true)
                .withKind("WebServer")
                .withApiVersion("demo.k8s.io/v1alpha1")
                .withName(webServer.metadata.name)
                .withNewUid(webServer.metadata.uid)
                .endOwnerReference()
                .endMetadata()
                .withNewSpec()
                .addNewContainer().withName("nginx").withImage("nginxdemos/hello").endContainer()
                .endSpec()
                .build()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;run()&lt;/code&gt; method is an infinite loop that continually consumes the web server resource keys added to the &lt;code&gt;workQueue&lt;/code&gt; by the event handlers and passes each resource to the &lt;code&gt;reconcile()&lt;/code&gt; method. Because each key takes the form &lt;code&gt;namespace/name&lt;/code&gt;, splitting on the forward slash and taking the second element yields the resource name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fun run() {
    blockUntilSynced()
    while (true) {
        try {
            workQueue
                    .take()
                    .split("/")
                    .toTypedArray()[1]
                    .let { webServerLister.get(it) }
                    ?.also { reconcile(it) }
        } catch (interruptedException: InterruptedException) {
            // ignored
        }
    }
}

private fun blockUntilSynced() {
    while (!podInformer.hasSynced() || !webServerInformer.hasSynced()) {}
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  The CRD YAML
&lt;/h2&gt;

&lt;p&gt;The final piece of the operator is the CRD itself. A CRD is simply another Kubernetes resource, and we define it in the following YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apiextensions.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CustomResourceDefinition&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webservers.demo.k8s.io&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo.k8s.io&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1alpha1&lt;/span&gt;
  &lt;span class="na"&gt;names&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WebServer&lt;/span&gt;
    &lt;span class="na"&gt;plural&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webservers&lt;/span&gt;
  &lt;span class="na"&gt;scope&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespaced&lt;/span&gt;
  &lt;span class="na"&gt;subresources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Putting it all together
&lt;/h2&gt;

&lt;p&gt;To run the operator, we first need to apply the CRD YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f crd.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then we create an instance of our CRD with the following YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo.k8s.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WebServer&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-webserver&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The controller can then be run locally. Because the client we used in our code knows how to configure itself based on where it is run, executing the code locally means the client configures itself from the &lt;code&gt;~/.kube/config&lt;/code&gt; file. In the screenshot below, you can see the controller running directly from my IDE:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g8sEADov--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.octopus.com/blog/2020-02/operators-with-kotlin/intellij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g8sEADov--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.octopus.com/blog/2020-02/operators-with-kotlin/intellij.png" alt="" title="width=500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The controller responds to the new web server CRD and creates the required pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
example-webserver-pod92ht9   1/1     Running   0          54s
example-webserver-podgbz86   1/1     Running   0          54s
example-webserver-podk58gz   1/1     Running   0          54s
example-webserver-podkftmp   1/1     Running   0          54s
example-webserver-podpwzrt   1/1     Running   0          54s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The status of the web server resource is updated with the &lt;code&gt;count&lt;/code&gt; of the pods it has successfully created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get webservers -n default -o yaml
apiVersion: v1
items:
- apiVersion: demo.k8s.io/v1alpha1
  kind: WebServer
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"demo.k8s.io/v1alpha1","kind":"WebServer","metadata":{"annotations":{},"name":"example-webserver","namespace":"default"},"spec":{"replicas":5}}
    creationTimestamp: "2020-01-16T20:19:23Z"
    generation: 1
    name: example-webserver
    namespace: default
    resourceVersion: "112308"
    selfLink: /apis/demo.k8s.io/v1alpha1/namespaces/default/webservers/example-webserver
    uid: 9eb08575-8fa1-4bc9-bb2b-6f11b7285b68
  spec:
    replicas: 5
  status:
    count: 5
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  The power of operators
&lt;/h2&gt;

&lt;p&gt;Without an operator, the concept of a test web server lived outside of the cluster. Developers may have emailed around the YAML they used to create test pods, but more likely, everyone had their own opinion of what a test web server was.&lt;/p&gt;

&lt;p&gt;The operator we created extends our Kubernetes cluster with a specific implementation of a test web server. Encapsulating this business knowledge allows the cluster to create and manage high-level concepts specific to our environment.&lt;/p&gt;

&lt;p&gt;Creating and managing new resources is just one example of what an operator can do. Automating tasks like security scans, reporting, and load testing are all valid use cases for operators. A list of popular operators is available &lt;a href="https://github.com/operator-framework/awesome-operators"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Operators are a much hyped but often poorly understood pattern. Starting with the definition from the original blog post that described operators, we saw the three simple parts of an operator: a custom resource to define it, a controller to act on Kubernetes resources, and logic to implement application-specific knowledge. We then implemented a simple operator in Kotlin to create test web servers.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>kotlin</category>
    </item>
    <item>
      <title>Kubernetes deployment strategies visualized</title>
      <dc:creator>Matthew Casperson</dc:creator>
      <pubDate>Mon, 17 Feb 2020 21:24:42 +0000</pubDate>
      <link>https://forem.com/octopus/kubernetes-deployment-strategies-visualized-6m9</link>
      <guid>https://forem.com/octopus/kubernetes-deployment-strategies-visualized-6m9</guid>
      <description>&lt;p&gt;One of the benefits Kubernetes provides administrators and developers is the ability to intelligently manage deployments of new software or configuration.&lt;/p&gt;

&lt;p&gt;Kubernetes includes two built-in deployment strategies called &lt;em&gt;recreate&lt;/em&gt; and &lt;em&gt;rolling update&lt;/em&gt;, which are configured directly on the deployment resources. Octopus offers a third Kubernetes deployment strategy called &lt;em&gt;blue green&lt;/em&gt;, which is managed through the &lt;em&gt;Deploy Kubernetes containers&lt;/em&gt; step.&lt;/p&gt;

&lt;p&gt;But what do these strategies actually do? In this blog post, we’ll visualize these deployment strategies to highlight their differences and note why you would choose one strategy over another.&lt;/p&gt;

&lt;h2&gt;
  
  
  The test deployment
&lt;/h2&gt;

&lt;p&gt;For the videos below, we will watch a deployment as it is updated on a multi-node Kubernetes cluster. Each node in the cluster has a unique label, and the deployment is updated to place the new pods on a specific node.&lt;/p&gt;

&lt;p&gt;The end result is that the new deployment shifts pods from one node to the other. We can then watch how the pods are moved between nodes to see the effect of the different deployment strategies.&lt;/p&gt;
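&lt;p&gt;This setup can be sketched as a fragment of the test deployment. The node label key and values here are hypothetical; the point is that changing the &lt;code&gt;nodeSelector&lt;/code&gt; in the pod template is enough to trigger an update that moves the pods to the other node:&lt;/p&gt;

```yaml
# Fragment of a hypothetical test deployment's spec. Changing the
# nodeSelector value (for example, from node1 to node2) updates the
# pod template, which triggers the configured deployment strategy.
spec:
  template:
    spec:
      nodeSelector:
        node: node2    # hypothetical label; was node1 before the update
```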

&lt;h2&gt;
  
  
  Kubernetes deployment recreate strategy
&lt;/h2&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ejJs8IDQK3s"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The Kubernetes deployment &lt;em&gt;recreate&lt;/em&gt; strategy is the simplest of the three. When a deployment configured with the &lt;em&gt;recreate&lt;/em&gt; strategy is updated, Kubernetes will first delete the pods from the existing deployment, and once those pods are removed, the new pods are created.&lt;/p&gt;

&lt;p&gt;In the above video, you can see all the pods on node 1 are deleted, and only after they are removed are the new pods on the second node created.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;recreate&lt;/em&gt; strategy ensures that the old and new pods do not run concurrently. This can be beneficial when synchronizing changes to a backend datastore that does not support access from two different client versions. However, there is a period of downtime before the new pods start accepting traffic.&lt;/p&gt;
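&lt;p&gt;The &lt;em&gt;recreate&lt;/em&gt; strategy is configured directly on the deployment resource. A minimal sketch, with placeholder names and the &lt;code&gt;nginxdemos/hello&lt;/code&gt; image used for illustration:&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment    # placeholder name
spec:
  replicas: 3
  strategy:
    type: Recreate            # delete all old pods before creating new ones
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: web
          image: nginxdemos/hello
```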

&lt;h2&gt;
  
  
  Kubernetes rolling update
&lt;/h2&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/98vmfe3Jn2Q"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;As its name suggests, the Kubernetes &lt;em&gt;rolling update&lt;/em&gt; strategy incrementally deploys new pods as the old pods are removed. You can see this in the above video, where a number of the pods on the first node are deleted at the same time as new pods are created on the second node. Eventually, the pods on the first node are all removed, and all the new pods are created on the second node.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;rolling update&lt;/em&gt; strategy ensures there are some pods available to continue serving traffic during the update, so there is no downtime. However, both the old and new pods run side by side while the update is taking place, meaning any datastores or clients must be able to interact with both versions.&lt;/p&gt;
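&lt;p&gt;The pace of a rolling update can be tuned with the &lt;code&gt;maxSurge&lt;/code&gt; and &lt;code&gt;maxUnavailable&lt;/code&gt; settings. Sketched as a fragment of a deployment spec, with illustrative values:&lt;/p&gt;

```yaml
# Fragment of a deployment spec; RollingUpdate is the default strategy
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most one extra pod above the replica count
      maxUnavailable: 1     # at most one pod may be unavailable at a time
```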

&lt;h2&gt;
  
  
  Kubernetes blue green deployment
&lt;/h2&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/R2fFMX_qf5A"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Unlike the other deployment strategies, the Kubernetes &lt;em&gt;blue green&lt;/em&gt; deployment strategy is not something natively implemented by Kubernetes. It involves creating an entirely new deployment resource (i.e., a deployment resource with a new name), waiting for the new deployment to become ready, switching traffic from the old deployment to the new deployment, and finally deleting the old deployment. This process is implemented in Octopus via the &lt;em&gt;Deploy Kubernetes containers&lt;/em&gt; step.&lt;/p&gt;

&lt;p&gt;In the above video, you can see that during a blue/green deployment, the pods on the second node are deployed and initialized, and once they are ready, the pods on the first node are deleted.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;blue/green&lt;/em&gt; strategy ensures the new deployment is fully initialized and healthy before any traffic is sent to it. Should the new deployment fail, the old deployment continues to serve traffic.&lt;/p&gt;

&lt;p&gt;Like the &lt;em&gt;rolling update&lt;/em&gt; strategy, the &lt;em&gt;blue/green&lt;/em&gt; strategy deploys two versions side by side for a period of time, so any backing datastores need to support two different clients. However, by cutting all traffic over to the new deployment when it’s ready, only one version of the deployment will be accessible at any time.&lt;/p&gt;
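&lt;p&gt;The traffic cut-over at the heart of the blue/green strategy can be sketched as a service whose selector is updated once the new deployment is healthy. The names and labels below are hypothetical, and Octopus automates these steps for you:&lt;/p&gt;

```yaml
# Hypothetical sketch of a manual blue/green cut-over. Two deployments
# run side by side, distinguished by a version label on their pods.
apiVersion: v1
kind: Service
metadata:
  name: web              # placeholder service name
spec:
  selector:
    app: web
    version: green       # was blue; updating this selector cuts all
                         # traffic over to the new deployment
  ports:
    - port: 80
      targetPort: 80
```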

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Selecting the correct deployment strategy is crucial to ensuring that your Kubernetes updates are reliable and that downtime is minimized or eliminated. Visualizing the available update strategies helps in understanding the differences between them, and in this post, we saw how pods were created and destroyed with the Kubernetes native strategies of &lt;em&gt;recreate&lt;/em&gt; and &lt;em&gt;rolling update&lt;/em&gt;, and then with the &lt;em&gt;blue/green&lt;/em&gt; strategy implemented by Octopus.&lt;/p&gt;

&lt;p&gt;This post was originally published at &lt;a href="https://octopus.com" rel="noopener noreferrer"&gt;octopus.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>octopus</category>
    </item>
  </channel>
</rss>
