<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: santoshjpawar</title>
    <description>The latest articles on Forem by santoshjpawar (@santoshjpawar).</description>
    <link>https://forem.com/santoshjpawar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F731713%2F7598557e-e891-4ee9-884e-12839aead623.jpeg</url>
      <title>Forem: santoshjpawar</title>
      <link>https://forem.com/santoshjpawar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/santoshjpawar"/>
    <language>en</language>
    <item>
      <title>Use PodDisruptionBudget (PDB) to define minimum application availability</title>
      <dc:creator>santoshjpawar</dc:creator>
      <pubDate>Thu, 04 Nov 2021 09:52:16 +0000</pubDate>
      <link>https://forem.com/santoshjpawar/use-poddisruptionbudget-pdb-to-define-minimum-application-availability-3ppe</link>
      <guid>https://forem.com/santoshjpawar/use-poddisruptionbudget-pdb-to-define-minimum-application-availability-3ppe</guid>
      <description>&lt;p&gt;A pod disruption budget is used to define number of disruptions that can be tolerated at a given time by a specific set of pods. In simple words, minimum number of pods that should be running or maximum number of pods that can be deleted during voluntary disruptions. &lt;/p&gt;

&lt;p&gt;The voluntary disruption means any action in Kubernetes cluster initiated by someone or something that results into deleting the pods running on a given node. For example, during the cluster maintenance, the cluster admin wants to drain the node that results into deleting all the pods running on that node, or the cluster auto scaler is scaling down a node that results into deleting all the pods running on that node.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implications of not having PDB defined for your application
&lt;/h3&gt;

&lt;p&gt;If your workload does not have a PDB defined, it might go offline during a cluster maintenance event or a scale-down action, introducing &lt;strong&gt;downtime&lt;/strong&gt; for the application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Defining PDB for your application
&lt;/h3&gt;

&lt;p&gt;Below is an example of a sample application that defines a pod disruption budget.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Deployment
&lt;/h4&gt;

&lt;p&gt;The Deployment object defines the application pod and its configuration. The application name and label used is &lt;em&gt;app1&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: app1
  name: app1
  namespace: ns1
spec:
  selector:
    matchLabels:
      name: app1
  template:
    metadata:
      labels:
        name: app1
    spec:
      containers:
      ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. HPA
&lt;/h4&gt;

&lt;p&gt;The HPA object defines the scaling criteria for the application pods. The HPA is associated with the deployment using the &lt;em&gt;scaleTargetRef&lt;/em&gt; field. Once the HPA is associated with a deployment, that deployment can no longer be scaled manually. The OpenShift UI disables manual scaling of the pods, as shown below; you won’t see the up and down arrows to scale the pods up or down.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1t25e9j9rolt8wdfnn1.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1t25e9j9rolt8wdfnn1.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: app1
  labels:
    app: app1
    app.kubernetes.io/instance: app1
  namespace: ns1
spec:
  minReplicas: 2
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app1
  targetCPUUtilizationPercentage: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
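As an aside, an equivalent HPA can also be created from the CLI instead of applying the yaml; a sketch, assuming the oc client and the app1 deployment above:

```shell
# Creates an autoscaling/v1 HPA targeting the app1 deployment,
# with the same bounds and CPU target as the yaml above.
oc autoscale deployment/app1 -n ns1 --min=2 --max=10 --cpu-percent=80
```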



&lt;h4&gt;
  
  
  3. PDB
&lt;/h4&gt;

&lt;p&gt;The &lt;strong&gt;PodDisruptionBudget&lt;/strong&gt; object defines the criteria for pods to handle disruptions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app1
  namespace: ns1
spec:
  minAvailable: 25%
  selector:
    matchLabels:
      name: app1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Assuming the application is running with a minimum of two pods, one pod can be deleted (and recreated on another node by the scheduler) during a voluntary disruption, as deleting one pod keeps the availability at 50%, which is greater than the 25% defined in the PDB.&lt;/p&gt;
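Once the PDB is created, its status can be checked from the CLI; a sketch, assuming the oc client and the namespace used above. The ALLOWED DISRUPTIONS column shows how many pods can currently be evicted voluntarily:

```shell
# Shows MIN AVAILABLE, MAX UNAVAILABLE and ALLOWED DISRUPTIONS for the PDB.
oc get pdb app1 -n ns1
```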

</description>
    </item>
    <item>
      <title>Spreading the pods across multiple zones</title>
      <dc:creator>santoshjpawar</dc:creator>
      <pubDate>Thu, 04 Nov 2021 03:18:11 +0000</pubDate>
      <link>https://forem.com/santoshjpawar/spreading-the-pods-across-multiple-zones-2kpo</link>
      <guid>https://forem.com/santoshjpawar/spreading-the-pods-across-multiple-zones-2kpo</guid>
      <description>&lt;p&gt;To achieve high availability and better utilization of resources, it is required to spread the pods across multiple failure domains / availability zones. You can use OpenShift’s &lt;em&gt;topologySpreadConstraint&lt;/em&gt; feature to achieve this.&lt;/p&gt;

&lt;p&gt;Here I will explain how to achieve this by labeling the nodes and defining the topology spread constraint for the pods.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Label the nodes as per the zones they are located in
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc label node worker0 zone=az0
oc label node worker1 zone=az1
oc label node worker2 zone=az1
oc label node worker3 zone=az2
oc label node worker4 zone=az2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
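The zone labels can be verified with the -L (label columns) option; a sketch, assuming the oc client:

```shell
# Shows a ZONE column containing each node's value for the "zone" label.
oc get nodes -L zone
```

Note that many clusters already label nodes with the well-known topology.kubernetes.io/zone key, which can be used as the topologyKey instead of a custom label.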



&lt;h4&gt;
  
  
  2. Define the topology spread constraint in deployment yaml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
spec:
  …
  template:
    metadata:
    …
    spec:
      containers:
      …
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: app1
…
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where,&lt;br&gt;
&lt;strong&gt;maxSkew:&lt;/strong&gt; The maximum permitted difference in the number of matching pods between any two zones. It controls how strict the pod distribution should be.&lt;br&gt;
&lt;strong&gt;topologyKey:&lt;/strong&gt; The node label used to define the zone of each node. In our example, it is &lt;em&gt;zone&lt;/em&gt;.&lt;br&gt;
&lt;strong&gt;whenUnsatisfiable:&lt;/strong&gt; What to do with a pod if it does not satisfy the spread constraint: &lt;em&gt;ScheduleAnyway&lt;/em&gt; schedules it anyway, while &lt;em&gt;DoNotSchedule&lt;/em&gt; keeps it pending.&lt;br&gt;
&lt;strong&gt;labelSelector:&lt;/strong&gt; Selects the pods to count for the distribution, based on the labels associated with the pods. In this example, the pods are created with the label &lt;em&gt;app: app1&lt;/em&gt; using the &lt;code&gt;spec.template.metadata.labels&lt;/code&gt; field in the same deployment yaml, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
spec:
…
  template:
    metadata:
      …
      labels:
        app: app1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After making these changes, the pods will be evenly distributed across the zones. The worker node selection within a zone is done based on the capacity available on those worker nodes, so it is quite possible that one worker node will have more pods than another worker node within the same zone.&lt;/p&gt;

&lt;p&gt;In the picture below, we can see how the 5 pods are distributed across 3 availability zones.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pod0&lt;/strong&gt; is assigned to &lt;strong&gt;az0&lt;/strong&gt; which contains only one worker node &lt;strong&gt;worker0&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pod1&lt;/strong&gt; and &lt;strong&gt;Pod2&lt;/strong&gt; are assigned to &lt;strong&gt;az1&lt;/strong&gt; which contains two worker nodes &lt;strong&gt;worker1&lt;/strong&gt; and &lt;strong&gt;worker2&lt;/strong&gt;. Both the pods are assigned to &lt;strong&gt;worker1&lt;/strong&gt; as per the resource requirement and capacity available.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pod3&lt;/strong&gt; and &lt;strong&gt;Pod4&lt;/strong&gt; are assigned to &lt;strong&gt;az2&lt;/strong&gt; which contains two worker nodes &lt;strong&gt;worker3&lt;/strong&gt; and &lt;strong&gt;worker4&lt;/strong&gt;. Both the pods are assigned to &lt;strong&gt;worker4&lt;/strong&gt; as per the resource requirement and capacity available.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3by66q2lw0osvkpk3cpm.png" alt="Image description"&gt;
&lt;/li&gt;
&lt;/ul&gt;
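The resulting placement can be verified from the CLI; a sketch, assuming the oc client, run in the application's namespace:

```shell
# The NODE column shows which worker each pod landed on.
oc get pods -o wide
```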

</description>
    </item>
    <item>
      <title>Use S3 compatible Object Storage in OpenShift</title>
      <dc:creator>santoshjpawar</dc:creator>
      <pubDate>Sun, 24 Oct 2021 07:33:36 +0000</pubDate>
      <link>https://forem.com/santoshjpawar/using-s3-compatible-object-storage-in-openshift-agk</link>
      <guid>https://forem.com/santoshjpawar/using-s3-compatible-object-storage-in-openshift-agk</guid>
      <description>&lt;p&gt;We have came across situations where we need to feed the custom files to the applications deployed in OpenShift. These files will be consumed by the application for the further operations.&lt;/p&gt;

&lt;p&gt;Consider a scenario - An application allows users to create custom jar files using the provided SDK, and feed it to the application to execute that customization. These custom jars should be made available to the application through Java CLASSPATH by copying it to the appropriate path.&lt;/p&gt;

&lt;p&gt;OpenShift has two options to handle such scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Persistent Volume
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Persistent Volumes (PVs) allow sharing file storage between application pods and the external world. Users can copy files to a PV to make them available to the pods (for example, configuration files), or pods can create files to make them accessible outside the OpenShift cluster (for example, log files). This sounds like a feasible approach for sharing the files described above; however, some organizations have restrictions on using or accessing PVs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Using ConfigMap
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;ConfigMaps can be mounted as volumes to make the custom files available to the pods.&lt;/li&gt;
&lt;li&gt;A ConfigMap has a 1 MiB size limit.&lt;/li&gt;
&lt;li&gt;Using ConfigMap for sharing the custom files requires access to the OpenShift cluster to create the ConfigMap containing custom files.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both approaches are tightly coupled with the OpenShift platform. They may work just fine for some users, but we might want a more generic approach, and that is what we will be discussing here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using S3 Compatible Object Storage
&lt;/h2&gt;

&lt;p&gt;Here we will see how to use S3 compatible storage to allow users to share the custom files with pods.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh2kefk2o362ex3fujpom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh2kefk2o362ex3fujpom.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup S3 compatible storage
&lt;/h3&gt;

&lt;p&gt;If you have an AWS account, you can use the AWS S3 service as object storage. If not, you can use any other S3-compatible storage. Here we will use MinIO (&lt;a href="https://min.io/" rel="noopener noreferrer"&gt;https://min.io/&lt;/a&gt;) object storage. You can check the documentation for all the supported installation scenarios. In this example, I am installing it on a &lt;strong&gt;CentOS 7 VM&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Installation
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

mkdir /opt/minio
cd /opt/minio
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Start MinIO server
&lt;/h4&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

export MINIO_REGION_NAME="us-east-1"
export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=password
./minio server /mnt/data --console-address ":9001" &amp;amp;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you don’t set the &lt;code&gt;MINIO_ROOT_USER&lt;/code&gt; and &lt;code&gt;MINIO_ROOT_PASSWORD&lt;/code&gt; environment variables, MinIO uses the default credentials &lt;strong&gt;minioadmin:minioadmin&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It is important to set the region using &lt;code&gt;MINIO_REGION_NAME&lt;/code&gt;, as we will need to specify the same region when running the AWS CLI commands later in the process.&lt;/p&gt;

&lt;p&gt;The directory &lt;code&gt;/mnt/data&lt;/code&gt; will be used as the storage space. Make sure the directory you use as the data directory has enough storage space for your requirements.&lt;/p&gt;

&lt;h4&gt;
  
  
  Create API user to be used to access the S3 storage
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Open &lt;a href="http://%3cminio-server%3e:9001" rel="noopener noreferrer"&gt;http://[minio-server]:9001&lt;/a&gt; in the browser and log in using the credentials provided while starting the server.&lt;/li&gt;
&lt;li&gt;Click on the &lt;strong&gt;Users&lt;/strong&gt; link in the LHN.&lt;/li&gt;
&lt;li&gt;Click on the &lt;strong&gt;Create User +&lt;/strong&gt; button.&lt;/li&gt;
&lt;li&gt;Specify the Access Key and Secret Key values. You can use the sample values below:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Access Key: Q3AM3UQ867SPQDA43P2G
Secret Key: zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Check the readwrite checkbox from policies list in Assign Policies section and click on Save button.

#### Create S3 bucket to keep the custom files

- Click on the **Buckets** link in LHN.
- Click on **Create Bucket +** button.
- Specify bucket name, for example `santosh-bucket-1`. Keep other values default and click on **Save** button.

#### Using S3 object storage from pods

- To use the S3 object storage from, you can install generally available **AWS CLI v2** by following the AWS documentation.
- Configure AWS CLI with Access Key and Secret Key used while creating the API user in MinIO. You can use OpenShift secret to keep the Access Key and Secret Key. Please check my other post 
https://dev.to/santoshjpawar/how-to-use-openshift-secret-securely-597c for using OpenShift secrets securely. 
- In the pods, use below AWS CLI S3 commands to get the custom files copied to the container storage from S3 object storage. The custom jar file `custom-settlement-extention.jar` kept on S3 bucket `santosh-bucket-1`.
Note that the MinIO port (9000) is different than the MinIO console port (9001)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
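To point the AWS CLI at the MinIO user and region created above, the credentials can be set from the command line; a sketch using the sample key values and region from this post:

```shell
# Store the MinIO API user's credentials and region in the default profile.
aws configure set aws_access_key_id Q3AM3UQ867SPQDA43P2G
aws configure set aws_secret_access_key zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG
aws configure set region us-east-1
```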



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --endpoint-url http://[minio-server]:9000 s3 cp s3://santosh-bucket-1/custom-settlement-extention.jar /work/custom-jars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The above command can be added in the container startup script to copy the custom jar file from S3 object storage to the container storage.

You can list the files in the bucket and then copy them in case multiple files need to be copied.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --endpoint-url http://[minio-server]:9000 s3 ls santosh-bucket-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>openshift</category>
      <category>devops</category>
      <category>s3</category>
      <category>storage</category>
    </item>
    <item>
      <title>Use OpenShift secret securely without any third party tool</title>
      <dc:creator>santoshjpawar</dc:creator>
      <pubDate>Sun, 24 Oct 2021 03:19:50 +0000</pubDate>
      <link>https://forem.com/santoshjpawar/how-to-use-openshift-secret-securely-597c</link>
      <guid>https://forem.com/santoshjpawar/how-to-use-openshift-secret-securely-597c</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt; This approach does not use any OpenShift specific features (apart from oc CLI). So it should work in any Kubernetes cluster as well.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You might have used OpenShift secrets many times by defining them on the cluster and using them inside the pods. There are different ways in OpenShift to make the secrets available to the pods, for example using volume mounts or environment variables. Those are documented &lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Have you ever wondered how secure these secrets are once you make them available inside the pod? Anyone who can sneak into the container can see them. And if your application dumps the environment variables as part of logging, the secrets could even be exposed to the outside world. &lt;strong&gt;This is exactly what we will be discussing here.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You have a use case where you need to define OpenShift secrets to keep the sensitive information used by the application pods. It could be the database credentials, certificate files or any other sensitive information.&lt;/p&gt;

&lt;p&gt;Here we will use very simple example of using the secret. There is a pod that runs a Java application. This application connects to the database which is running external to the application pod. It can be on another pod or outside of the OpenShift cluster. The application just needs the database username and password to connect to the database (along with other obvious details like host, port and so on). We will treat database password as the only sensitive information. The application reads the database password from the configuration file stored on the file system accessible to the application.&lt;/p&gt;

&lt;p&gt;Let’s try to implement this in OpenShift in a simple way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simple way
&lt;/h2&gt;

&lt;p&gt;The simplest way to use OpenShift secret in the pod is as below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an OpenShift secret that will contain the database password.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc create secret generic db-passwd-secret --from-literal=DB_PASS=password123
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- This secret is then used in the application pod as environment variable. Below is one of the few ways to load secret as environment variables.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;containers:
- name: app-container
  image: k8s.gcr.io/busybox
  command: [ "/bin/sh", "-c", "env" ]
  envFrom:
  - secretRef:
      name: db-passwd-secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
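With this approach, anyone with exec access can read the secret straight from the container environment; a sketch, assuming a pod named app-pod (hypothetical name):

```shell
# Hypothetical pod name; any container loading the secret via envFrom
# would reveal the plaintext value this way.
oc exec app-pod -- env | grep DB_PASS
```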



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Container startup script reads the database password from environment variable, and writes it to the configuration file in plaintext.
-  Application reads the database password from the configuration file and uses it to connect to the database.

Though it is very simple implementation using the features supported by OpenShift, it has many security concerns. The most important being the database password is visible in plaintext as environment variable and in the configuration file. Anyone who has access to the container can see the database password.

Les see how we can do it in better way...

## Secure way

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0s7erh72yhzhvg12eqmw.png)

No one can make the application secure in OpenShift if the application itself does not support handling of the sensitive data securely. In this example, keeping the database password in plaintext in the configuration file is basic security flaw in the application. Such flaws cannot be handled at the infrastructure or platform layer (for example in OpenShift). The application requires certain changes to adopt to the enhanced security model.

### Application changes to use encrypted config
- There are some pre-requisite changes required in the application to remove the dependency on plaintext password. The application should be updated to allow keeping database password encrypted in the configuration file and decrypting it at runtime using the provided passphrase at the time of startup. 
For that, it should support a way to encrypt and decrypt the password using a passphrase. We can keep it simple by using symmetric encryption where same passphrase is used to encrypt and decrypt. There should not be any need to have plaintext password available in the application container (for example to make database connectivity checks in the readiness probe).
- This passphrase is also kept in the OpenShift secret. This is a different secret than the secret used to keep the database password.
So create two secrets, one for database password and other for passphrase.
- Create a OpenShift secret that will contain database password. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc create secret generic db-passwd-secret --from-literal=DB_PASS=password123
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Create a OpenShift secret that will contain passphrase. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc create secret generic passphrase-secret --from-literal=PASSPHRASE=secretpassphrase
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;### Init container processing
- Init container for loading application secrets -
  - An init container is created to perform the processing of secrets. 
  - The OpenShift secrets `db-passwd-secret` and `passphrase-secret` are made available to init container using volume mounts. 
  - Init container shares the *emptyDir* volume with application container. This data in this volume is kept in-memory and never written to the disk (with `medium: Memory`).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;initContainers:
  ...
  volumeMounts:
  - name: init-secret-shared
    mountPath: /mnt/init-secret
....
containers:
  ...
  volumeMounts:
  - name: init-secret-shared
    mountPath: /mnt/init-secret
....
volumes:
- name: secret-db-pwd
  secret:
    secretName: db-passwd-secret
- name: secret-passphrase
  secret:
    secretName: passphrase-secret
- name: init-secret-shared
  emptyDir:
    medium: Memory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-   Init container’s startup script reads both the secrets from volume mounts. It copies the plaintext database password (retrieved from OCP secret `db-passwd-secret`) and passphrase (retrieved from OCP secret `passphrase-secret`) into a file on shared volume `init-secret-shared`.
-   The database password and passphrase are written to the init-shared-secret volume in `/mnt/init-secret/secrets` file.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat /mnt/init-secret/secrets
DB_PASS=password123
PASSPHRASE=secretpassphrase
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-   The init container completes it's execution and exits. It is no more accessible to get inside the container.

### Application container processing
-   The startup script of application container is updated to read the file from shared volume. Inside the application container, the file is available at `/mnt/init-secret/secrets`. The startup script reads the database password and passphrase from the shared file.
-   The application container has the `encrypt.sh` script included which encrypts the plaintext database password using the passphrase.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;encrypt.sh --passphrase secretpassphrase password123
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- This command generates the output as encrypted value of the password.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Rh75ggs4s0#j@1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
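The encryption utility is application-specific, but as a rough sketch, a symmetric encrypt/decrypt round trip can be performed with a commonly available tool such as openssl (purely illustrative; the cipher choice is an assumption, and the password and passphrase are the sample values from this post):

```shell
# Illustrative only: encrypt the sample password with the sample passphrase
# using AES-256-CBC with PBKDF2 key derivation.
enc=$(printf '%s' "password123" | openssl enc -aes-256-cbc -pbkdf2 \
      -pass pass:secretpassphrase -base64)
# "$enc" is the value that would be stored in the configuration file.
# At runtime, the same passphrase decrypts it back to the plaintext password.
dec=$(echo "$enc" | openssl enc -d -aes-256-cbc -pbkdf2 \
      -pass pass:secretpassphrase -base64)
echo "$dec"
```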



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Note: There are various tools to perform symmetric encryption and decryption. One example is [gpg](https://www.gnupg.org/gph/en/manual/x110.html).
-   It writes it to the configuration file (in encrypted format) and deletes the secrets file `/mnt/init-secret/secrets` (or makes is empty). The startup script then starts the application by passing the passphrase to it.
-   The application then reads the encrypted database password from configuration file, decrypts it using the provided passphrase, and uses it to connect to the database.

**Note:** Make sure the application container has permissions to delete the `/mnt/init-secret/secrets` file created by init container. 

## What have we achieved with this approach?
-   The application now supports using encrypted sensitive data implicitly.
-   No password or passphrase is written to anywhere on the disk or available as environment variable.
-   Init container that process the mounted OpenShift secrets is not accessible to read the files from the secret mount path once it's execution completes.
-   Application container reads the password and passphrase, and deletes the file from the shared volume (in-memory) so that even if someone logs into the application container, the secret file containing password and passphrase is not accessible. 

This is one of the possible ways to securely consume OpenShift secrets in the application pods.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>openshift</category>
      <category>kubernetes</category>
      <category>security</category>
    </item>
  </channel>
</rss>
