<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Harald Uebele</title>
    <description>The latest articles on Forem by Harald Uebele (@harald_u).</description>
    <link>https://forem.com/harald_u</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F352698%2F5d1870ea-270c-45dd-b1c8-79c372fbba0d.jpg</url>
      <title>Forem: Harald Uebele</title>
      <link>https://forem.com/harald_u</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/harald_u"/>
    <language>en</language>
    <item>
      <title>Deploy your Quarkus applications on Kubernetes. Almost automatically!</title>
      <dc:creator>Harald Uebele</dc:creator>
      <pubDate>Mon, 06 Apr 2020 09:05:42 +0000</pubDate>
      <link>https://forem.com/harald_u/deploy-your-quarkus-applications-on-kubernetes-almost-automatically-7gm</link>
      <guid>https://forem.com/harald_u/deploy-your-quarkus-applications-on-kubernetes-almost-automatically-7gm</guid>
      <description>&lt;p&gt;You want to code Java, not Kubernetes deployment YAML files? And you use Quarkus? You may have seen the &lt;a href="https://quarkus.io/blog/quarkus-1-3-0-final-released"&gt;announcement blog for Quarkus 1.3.0&lt;/a&gt;. Under "much much more" is a feature that is very interesting to everyone using Kubernetes or OpenShift and with a dislike for the required YAML files:&lt;/p&gt;

&lt;h4&gt;
  
  
  "&lt;em&gt;Easy deployment to Kubernetes or OpenShift&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;The Kubernetes extension has been overhauled and now gives users the&lt;br&gt;
ability to deploy their Quarkus applications to Kubernetes or OpenShift&lt;br&gt;
with almost no effort. Essentially the extension now also takes care of&lt;br&gt;
generating a container image and applying the generated Kubernetes manifests to a target cluster, after the container image has been&lt;br&gt;
generated.&lt;/em&gt;"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I3_h0hfD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/23epq8ab101gud85c9wl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I3_h0hfD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/23epq8ab101gud85c9wl.png" alt="(c) Quarkus.io"&gt;&lt;/a&gt;&lt;br&gt;
Two Quarkus extensions are required:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;a href="https://quarkus.io/guides/kubernetes"&gt;Kubernetes Extension&lt;/a&gt;
This extension generates the Kubernetes and OpenShift YAML (or JSON)
files and also manages the automatic deployment using these files.&lt;/li&gt;
&lt;li&gt; &lt;a href="https://quarkus.io/guides/container-image"&gt;Container Images&lt;/a&gt;
There are actually three extensions that can handle the automatic image build,
using:

&lt;ul&gt;
&lt;li&gt;Jib&lt;/li&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;OpenShift Source-to-image (s2i)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Both extensions use parameters that are placed into the&lt;br&gt;
&lt;code&gt;application.properties&lt;/code&gt; file. The parameters are listed in the respective&lt;br&gt;
guides of the extensions. Note that I use the term "listed". Some of&lt;br&gt;
these parameters are really just listed without any further explanation.&lt;/p&gt;

&lt;p&gt;You can find the list of parameters for the Kubernetes extension &lt;a href="https://quarkus.io/guides/kubernetes#configuration-options"&gt;here&lt;/a&gt;, those for the Container Image extension are &lt;a href="https://quarkus.io/guides/container-image#customizing"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I tested the functionality in 4 different scenarios: Minikube, IBM Cloud&lt;br&gt;
Kubernetes Service, and Red Hat OpenShift in the form of CodeReady&lt;br&gt;
Containers (CRC) and Red Hat OpenShift on IBM Cloud. I will describe all&lt;br&gt;
of them here.&lt;/p&gt;
&lt;h3&gt;
  
  
  Demo Project
&lt;/h3&gt;

&lt;p&gt;I use the simple example from the Quarkus Getting Started Guide as my&lt;br&gt;
demo application. The current &lt;strong&gt;Quarkus 1.3.1 uses Java 11 and requires Apache Maven 3.6.2+&lt;/strong&gt;. My notebook runs on Fedora 30 so I had to&lt;br&gt;
manually install Maven 3.6.3 because the version provided in the Fedora 30 repositories is too old.&lt;/p&gt;

&lt;p&gt;The following command creates the Quarkus Quickstart Demo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mvn io.quarkus:quarkus-maven-plugin:1.3.1.Final:create 
    -DprojectGroupId=org.acme 
    -DprojectArtifactId=config-quickstart 
    -DclassName="org.acme.config.GreetingResource" 
    -Dpath="/greeting"
$ cd config-quickstart
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can run the application locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./mvnw compile quarkus:dev
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then test it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -w "n" http://localhost:8080/hello
hello
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now add the Kubernetes and Docker container-image extensions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./mvnw quarkus:add-extension -Dextensions="kubernetes, container-image-docker"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Edit application.properties
&lt;/h3&gt;

&lt;p&gt;The Kubernetes extension will create 3 Kubernetes objects:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Service Account&lt;/li&gt;
&lt;li&gt; Service&lt;/li&gt;
&lt;li&gt; Deployment&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The configuration and naming of these objects are based on some basic parameters that have to be added in &lt;code&gt;application.properties&lt;/code&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Option&lt;/th&gt;
&lt;th&gt;Explanation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;quarkus.kubernetes.part-of=todo-app&lt;/td&gt;
&lt;td&gt;One of the Kubernetes "recommended" labels (recommended, not required)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;quarkus.container-image.registry=&lt;br&gt;quarkus.container-image.group=&lt;br&gt;quarkus.container-image.name=getting-started&lt;br&gt;quarkus.container-image.tag=1.0&lt;/td&gt;
&lt;td&gt;Specifies the container image in the K8s deployment.&lt;br&gt;Result is 'image: getting-started:1.0'. &lt;br&gt;Make sure there are no excess or trailing spaces! &lt;br&gt;I specify empty registry and group parameters to obtain predictable results.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;quarkus.kubernetes.service-type=NodePort&lt;/td&gt;
&lt;td&gt;Creates a service of type NodePort; the default would be ClusterIP, which isn't directly reachable from outside a Minikube cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Now do a test compile with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./mvnw clean package
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This should result in &lt;code&gt;BUILD SUCCESS&lt;/code&gt;. Look at the &lt;code&gt;kubernetes.yml&lt;/code&gt; file in the &lt;code&gt;target/kubernetes&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;Every object (ServiceAccount, Service, Deployment) has a set of annotations and labels. The annotations are generated automatically, e.g. from version control metadata (when the source directory is under Git) and from the time of the last compile. The labels are derived from the parameters specified in the table above. You can specify additional parameters, but the Kubernetes extension uses specific defaults:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;app.kubernetes.io/name&lt;/code&gt; and &lt;code&gt;name&lt;/code&gt; in the YAML are set to &lt;code&gt;quarkus.container-image.name&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;app.kubernetes.io/version&lt;/code&gt; in the YAML is set to the &lt;code&gt;quarkus.container-image.tag&lt;/code&gt; parameter.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The definition of the port (http, 8080) is picked up by Quarkus from the source code at compile time.&lt;/p&gt;
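&lt;p&gt;As an illustration (not the literal output), with the settings above the metadata of the generated Deployment in &lt;code&gt;kubernetes.yml&lt;/code&gt; should look roughly like this; the exact annotations depend on your Git metadata and build time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: getting-started
  labels:
    app.kubernetes.io/name: getting-started
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/part-of: todo-app
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;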

&lt;h3&gt;
  
  
  Deploy to
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cWkEoPBL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rie3o5kth2p4tea8s9tx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cWkEoPBL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rie3o5kth2p4tea8s9tx.jpg" alt="Minikube"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Minikube, we will create the Container (Docker) Image in the Docker installation that is part of the Minikube VM. So after starting Minikube (&lt;code&gt;minikube start&lt;/code&gt;) you need to point your local docker command to the Minikube environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ eval $(minikube docker-env)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The Kubernetes extension specifies &lt;code&gt;imagePullPolicy: Always&lt;/code&gt; as the default for a container image. This is a problem when using the Minikube Docker environment; it should be &lt;code&gt;never&lt;/code&gt; instead. Your&lt;br&gt;
application.properties should therefore look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;quarkus.kubernetes.part-of=todo-app
quarkus.container-image.registry=
quarkus.container-image.group=
quarkus.container-image.name=getting-started
quarkus.container-image.tag=1.0
quarkus.kubernetes.image-pull-policy=never
quarkus.kubernetes.service-type=NodePort
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now try a test build &amp;amp; deploy in the project directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./mvnw clean package -Dquarkus.kubernetes.deploy=true
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Check that everything is started with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pod 
$ kubectl get deploy
$ kubectl get svc
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Note that in the result of the last command you can see the NodePort of the getting-started service, e.g. 31304 or something in that range. Get the IP address of your Minikube cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ minikube ip
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And then test the service, in my example with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl 192.168.39.131:31304/hello
hello
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The result of this exercise:&lt;/p&gt;

&lt;p&gt;Installing 2 Quarkus extensions and adding 7 statements to the application.properties file (one of which is optional) allows you to compile your Java code, build a container image, and deploy it into&lt;br&gt;
Kubernetes with a single command. I think this is cool!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VBYU0Drm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/okn835t2rbmya91zrg4p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VBYU0Drm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/okn835t2rbmya91zrg4p.png" alt="IBM Cloud Kubernetes Service (IKS)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What I just described for Minikube also works for the IBM Cloud. IBM Cloud Kubernetes Service (or IKS) does not have an internal Container Image Registry; instead, this is a separate service and you may have&lt;br&gt;
guessed its name: IBM Cloud Container Registry (ICR). This example works on free IKS clusters, too. A &lt;a href="https://cloud.ibm.com/docs/containers?topic=containers-getting-started#clusters_gs"&gt;free IKS cluster&lt;/a&gt;&lt;br&gt;
is free of charge and you can use it for 30 days.&lt;/p&gt;

&lt;p&gt;For our example to work, you need to create a "namespace" in an ICR location; this is not the same as a Kubernetes namespace. For example, my test Kubernetes cluster (with the name: mycluster) is located in&lt;br&gt;
Houston, so I create a namespace called 'harald-uebele' in the registry location Dallas (because it is close to Houston).&lt;/p&gt;
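&lt;p&gt;Creating such a registry namespace can be done in the IBM Cloud console or, assuming the &lt;code&gt;ibmcloud&lt;/code&gt; CLI with the container-registry plugin is installed, with a command like this (the namespace name is just my example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ibmcloud cr namespace-add harald-uebele
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;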

&lt;p&gt;Now I need to log in and set up the connection using the &lt;code&gt;ibmcloud&lt;/code&gt; CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ibmcloud login
$ ibmcloud ks cluster config --cluster mycluster
$ ibmcloud cr login
$ ibmcloud cr region-set us-south
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The last command will set the registry region to us-south which is Dallas and has the URL 'us.icr.io'.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;application.properties&lt;/code&gt; needs a few changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;registry&lt;/code&gt; now holds the ICR URL (us.icr.io)&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;group&lt;/code&gt; is the registry namespace mentioned above&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;image-pull-policy&lt;/code&gt; is changed to always for ICR&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;service-account&lt;/code&gt; needs to be 'default'; the service account created by the Kubernetes extension ('getting-started') is not allowed to pull images from the ICR image registry
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;quarkus.kubernetes.part-of=todo-app
quarkus.container-image.registry=us.icr.io
quarkus.container-image.group=harald-uebele
quarkus.container-image.name=getting-started
quarkus.container-image.tag=1.0
quarkus.kubernetes.image-pull-policy=always
quarkus.kubernetes.service-type=NodePort
quarkus.kubernetes.service-account=default
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Compile &amp;amp; build as before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./mvnw clean package -Dquarkus.kubernetes.deploy=true
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Check if the image has been built:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ibmcloud cr images
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You should see the newly created image, correctly tagged, and hopefully with a 'security status' of 'No issues'. That is the result of a &lt;a href="https://cloud.ibm.com/docs/Registry?topic=va-va_index"&gt;Vulnerability Advisor scan&lt;/a&gt; that is automatically performed on every image.&lt;/p&gt;

&lt;p&gt;Now check the status of your deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get deploy
$ kubectl get pod
$ kubectl get svc
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With &lt;code&gt;kubectl get svc&lt;/code&gt; you will see the number of the NodePort of the service, in my example it is 30850. You can obtain the public IP address of an IKS worker node with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ibmcloud ks worker ls --cluster mycluster
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you have multiple worker nodes, any of the public IP addresses will do. Test your service with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl &amp;lt;externalIP&amp;gt;:&amp;lt;nodePort&amp;gt;/hello
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The result should be 'hello'.&lt;/p&gt;

&lt;h3&gt;
  
  
  All this also works on
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w85ww9CZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tvux9ss67ykgjo4z4l2b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w85ww9CZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tvux9ss67ykgjo4z4l2b.png" alt="Red Hat OpenShift"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have tested this with &lt;a href="https://haralduebele.blog/2019/09/13/red-hat-openshift-4-on-your-laptop/"&gt;CodeReady Containers&lt;/a&gt; (CRC) and on &lt;a href="https://cloud.ibm.com/docs/openshift?topic=openshift-getting-started"&gt;Red Hat OpenShift on IBM Cloud&lt;/a&gt;. CRC was a bit flaky, sometimes it would build the image, create the deployment config, but wouldn't start the pod.&lt;/p&gt;

&lt;p&gt;On OpenShift, the container image is built using &lt;a href="https://docs.openshift.com/container-platform/4.3/builds/understanding-image-builds.html#build-strategy-s2i_understanding-image-builds"&gt;Source-to-Image&lt;/a&gt;&lt;br&gt;
(s2i) and this requires a different Maven extension:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./mvnw quarkus:add-extension -Dextensions="container-image-s2i"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It seems like you can have only one container-image extension in your project. If you installed the &lt;code&gt;container-image-docker&lt;/code&gt; extension before, you'll need to remove it from the dependency section of the &lt;code&gt;pom.xml&lt;/code&gt; file, otherwise the build may fail later.&lt;/p&gt;
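&lt;p&gt;For reference, this is the kind of dependency entry the add-extension command creates and that you would remove from &lt;code&gt;pom.xml&lt;/code&gt; in this case:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
  &amp;lt;groupId&amp;gt;io.quarkus&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;quarkus-container-image-docker&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;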

&lt;p&gt;There is an OpenShift-specific section of parameters/options in the &lt;a href="https://quarkus.io/guides/kubernetes#openshift"&gt;documentation&lt;/a&gt; of the extension.&lt;/p&gt;

&lt;p&gt;Start by logging in to OpenShift and creating a new project (quarkus):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc login ...
$ oc new-project quarkus
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This is the application.properties file I used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;quarkus.kubernetes.deployment-target=openshift
quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000
quarkus.container-image.group=quarkus
quarkus.container-image.name=getting-started
quarkus.container-image.tag=1.0
quarkus.openshift.part-of=todo-app
quarkus.openshift.service-account=default
quarkus.openshift.expose=true
quarkus.kubernetes-client.trust-certs=true
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Line 1: Create an OpenShift deployment&lt;br&gt;
Line 2: This is the (OpenShift internal) image repository URL for OpenShift 4&lt;br&gt;
Line 3: The OpenShift project name&lt;br&gt;
Line 4: The image name will also be used for all other OpenShift objects&lt;br&gt;
Line 5: Image tag, will also be the application version in OpenShift&lt;br&gt;
Line 6: Name of the OpenShift application&lt;br&gt;
Line 7: Use the 'default' service account&lt;br&gt;
Line 8: Expose the service with a route (URL)&lt;br&gt;
Line 9: Needed for CRC because of self-signed certificates, don't use with OpenShift on IBM Cloud&lt;/p&gt;

&lt;p&gt;With these options in place, start a compile &amp;amp; build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./mvnw clean package -Dquarkus.kubernetes.deploy=true
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It will take a while but in the end you should see a "BUILD SUCCESS" and in the OpenShift console you should see an application called "todo-app" with a Deployment Config, Pod, Build, Service, and Route:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eJtpERIw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://haralduebele.files.wordpress.com/2020/04/image-1.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eJtpERIw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://haralduebele.files.wordpress.com/2020/04/image-1.png%3Fw%3D1024" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional and missing options
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Namespaces (Kubernetes) and Projects (OpenShift)&lt;/strong&gt; cannot be specified with an option in application.properties. With OpenShift that's not really an issue because you can specify which project (namespace) to work in with the oc CLI before starting the &lt;code&gt;mvn package&lt;/code&gt;. But it would be nice if there were a namespace and/or project option.&lt;/p&gt;
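&lt;p&gt;With OpenShift, for example, selecting the target project before running the build looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc project quarkus
$ ./mvnw clean package -Dquarkus.kubernetes.deploy=true
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;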

&lt;p&gt;The Kubernetes extension picks up which port your app is using during the build. But if you need to specify an &lt;strong&gt;additional port&lt;/strong&gt;, this is how you do it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;quarkus.kubernetes.ports.https.container-port=8443
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will add an https port on 8443 to the service and an https containerPort on 8443 to the containers spec in the deployment.&lt;/p&gt;
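&lt;p&gt;As a sketch (the exact structure may differ slightly), the resulting additional entry in the generated service looks roughly like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ports:
  - name: https
    port: 8443
    targetPort: 8443
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;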

&lt;p&gt;The &lt;strong&gt;number of replicas&lt;/strong&gt; is supposed to be defined with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;quarkus.kubernetes.replicas=4
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This results in &lt;em&gt;WARN io.qua.config Unrecognized configuration key "quarkus.kubernetes.replicas" was provided; it will be ignored&lt;/em&gt; and the replicas count remains 1 in the deployment. &lt;strong&gt;Instead use&lt;/strong&gt; the deprecated configuration option without the &lt;em&gt;quarkus.&lt;/em&gt; prefix (I am sure this will be fixed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubernetes.replicas=4
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Adding a key/value pair as an &lt;strong&gt;environment variable&lt;/strong&gt; to the deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;quarkus.kubernetes.env-vars.DB.value=local
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;will result in this YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    spec:
      containers:
      - env:
        - name: "DB"
          value: "local"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;There are many more options, for readiness and liveness probes, mounts and volumes, secrets, config maps, etc. Have a look at the &lt;a href="https://quarkus.io/guides/kubernetes"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>quarkus</category>
      <category>kubernetes</category>
      <category>openshift</category>
      <category>yaml</category>
    </item>
    <item>
      <title>How to run OpenShift 4 on your notebook</title>
      <dc:creator>Harald Uebele</dc:creator>
      <pubDate>Thu, 19 Mar 2020 13:34:48 +0000</pubDate>
      <link>https://forem.com/harald_u/how-to-run-openshift-4-on-your-notebook-562h</link>
      <guid>https://forem.com/harald_u/how-to-run-openshift-4-on-your-notebook-562h</guid>
      <description>&lt;p&gt;OpenShift is Red Hat's version of Kubernetes, simply put. It includes&lt;br&gt;
tools and features that make it very interesting for developers. But&lt;br&gt;
since it is a commercial product it normally comes with a fee.&lt;/p&gt;

&lt;p&gt;You may know Minikube, a tool to run "vanilla" Kubernetes in a virtual&lt;br&gt;
machine on your notebook. You may also know Minishift, which does the same for OKD, the open source upstream project of OpenShift. Minishift is based on OKD version 3.x, though. OpenShift version 4 is very different from OpenShift and OKD version 3. There is work underway for a version 4 of OKD but this still seems to take some time.&lt;/p&gt;

&lt;p&gt;Last year I found something called &lt;strong&gt;Red Hat CodeReady Containers&lt;/strong&gt; and this allows you to run OpenShift 4.3 in a single-node configuration on your&lt;br&gt;
workstation. For free! It operates almost exactly like Minishift and Minikube. Actually, under the covers it is completely different, but that's&lt;br&gt;
another story. &lt;/p&gt;

&lt;p&gt;CodeReady Containers (CRC) runs on Linux, MacOS, and Windows, and it only supports the native hypervisors: KVM for Linux, Hyperkit for MacOS, and HyperV for Windows. &lt;/p&gt;

&lt;p&gt;This is the place where you need to start: &lt;a href="https://cloud.redhat.com/openshift/install/crc/installer-provisioned"&gt;Install on Laptop: Red Hat&lt;br&gt;
CodeReady Containers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You need a Red Hat account to access this page, you can register right&lt;br&gt;
there and it is free. It contains a link to the &lt;a href="https://code-ready.github.io/crc/"&gt;Getting Started&lt;/a&gt; guide, the download link for CodeReady Containers (for Windows, MacOS, and Linux) and a link to download the &lt;em&gt;pull secret&lt;/em&gt; which is required during installation and is therefore the most important piece.&lt;/p&gt;

&lt;p&gt;The Getting Started guide lists the hardware requirements, they are similar to those for Minikube and Minishift:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  4 vCPUs&lt;/li&gt;
&lt;li&gt;  8 GB RAM (IMHO you need at least 16 GB to use CRC)&lt;/li&gt;
&lt;li&gt;  35 GB disk space for the virtual disk&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will also find the required versions of Windows 10 and MacOS there. &lt;/p&gt;

&lt;p&gt;I am running Fedora (F30 at the moment) on my notebook and I normally use VirtualBox as hypervisor. CRC does not support VirtualBox so I had to install KVM first, here are good &lt;a href="https://computingforgeeks.com/how-to-install-kvm-on-fedora/"&gt;instructions&lt;/a&gt;. The requirements for CRC also mention NetworkManager as required but most Linux distributions will use it, Fedora certainly does. There are additional instructions for Ubuntu/Debian/Mint users for libvirt in the&lt;br&gt;
Getting Started guide and &lt;a href="https://github.com/code-ready/crc/issues/549"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Start by downloading the CodeReady Containers archive for your OS and download the pull secret to a location you remember. Extracting the CodeReady Containers archive results in an executable 'crc' which needs to be placed in your PATH. This is very similar to the 'minikube' and 'minishift' executables.&lt;/p&gt;
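&lt;p&gt;On Linux, the steps look roughly like this; the actual archive and directory names depend on the CRC version and OS, so treat them as placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ tar xvf crc-linux-amd64.tar.xz
$ sudo cp crc-linux-*-amd64/crc /usr/local/bin/
$ crc version
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;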

&lt;p&gt;The first step is to set up CodeReady Containers:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ crc setup&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This checks the prerequisites, installs some drivers, configures the network, and creates an initial configuration in a directory '.crc' (on&lt;br&gt;
Linux).&lt;/p&gt;

&lt;p&gt;You can check the configurable options of 'crc' with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ crc config view&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Since I plan to test OpenShift Service Mesh (= Istio) on CRC I have&lt;br&gt;
changed the memory limit to 16 GB and added the path to the pull secret file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ crc config set memory 16384&lt;/code&gt;&lt;br&gt;
&lt;code&gt;$ crc config set pull-secret-file path/to/pull-secret.txt&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Start CodeReady Containers with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ crc start&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will take a while and in the end give you instructions on how to access the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, run 'oc login -u kubeadmin -p db9Dr-J2csc-8oP78-9sbmf https://api.crc.testing:6443'
INFO
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I found that you need to wait a few minutes after that because OpenShift&lt;br&gt;
is sometimes not completely started. Check with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ crc status&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Output should look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CRC VM: Running 
OpenShift: Running (v4.3.1) 
Disk Usage: 22.28GB of 32.72GB (Inside the CRC VM) 
Cache Usage: 12.3GB 
Cache Directory: /home/uebele/.crc/cache
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If your cluster is up, access it using the link in the completion message or use:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ crc console&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;User is 'kubeadmin' and the password has been printed in the completion message above. You will need to accept the self-signed certificates and then be presented with an OpenShift 4 Web Console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lort4J2H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cp2o81zrckc47bv5q2c8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lort4J2H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cp2o81zrckc47bv5q2c8.png" alt="Web Console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are some more commands that you probably need:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;code&gt;crc stop&lt;/code&gt; stops the OpenShift cluster&lt;/li&gt;
&lt;li&gt; &lt;code&gt;crc delete&lt;/code&gt; completely deletes the cluster&lt;/li&gt;
&lt;li&gt; &lt;code&gt;eval $(crc oc-env)&lt;/code&gt; correctly sets the environment for the &lt;code&gt;oc&lt;/code&gt; CLI&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I am really impressed with CodeReady Containers. It gives you the full&lt;br&gt;
OpenShift 4 experience with the new Web Console and even includes the OperatorHub catalog to get started with Operators.&lt;/p&gt;

&lt;h4&gt;
  
  
  Updates
&lt;/h4&gt;

&lt;p&gt;New CRC versions come out about once per month. You don't need to&lt;br&gt;
install them but they offer the latest version of OpenShift at that&lt;br&gt;
time. Updating means to delete the old VM and create a new one. &lt;/p&gt;

&lt;p&gt;A very good place for current information is the CRC Github issues&lt;br&gt;
page &lt;a href="https://github.com/code-ready/crc/issues"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>container</category>
      <category>openshift</category>
    </item>
  </channel>
</rss>
