<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: That Cloud Expert</title>
    <description>The latest articles on Forem by That Cloud Expert (@thatcloudexpert).</description>
    <link>https://forem.com/thatcloudexpert</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1861728%2Fe18461be-a333-4988-9bfe-6440b6a7050d.png</url>
      <title>Forem: That Cloud Expert</title>
      <link>https://forem.com/thatcloudexpert</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/thatcloudexpert"/>
    <language>en</language>
    <item>
      <title>Deploying PostgreSQL on Kubernetes: 2024 Guide</title>
      <dc:creator>That Cloud Expert</dc:creator>
      <pubDate>Mon, 18 Nov 2024 10:21:53 +0000</pubDate>
      <link>https://forem.com/thatcloudexpert/deploying-postgresql-on-kubernetes-2024-guide-4hb3</link>
      <guid>https://forem.com/thatcloudexpert/deploying-postgresql-on-kubernetes-2024-guide-4hb3</guid>
      <description>&lt;p&gt;In the past, deploying PostgreSQL in your environment required a significant amount of manual configuration and management efforts. Kubernetes, the popular container orchestration platform, is making database deployment and management easier. Over the past few years, Kubernetes has placed a special emphasis on supporting stateful applications. Kubernetes can automate deployment, scaling, and management of containerized applications, together with their integrated databases.&lt;/p&gt;

&lt;p&gt;In this article, we’ll show two ways to deploy Postgres on Amazon Elastic Kubernetes Service (EKS), a popular managed Kubernetes service:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Postgres deployment with Amazon Elastic Block Storage (EBS), using the EKS default Storage Class&lt;/strong&gt;: Supports basic use cases but is less suitable for large-scale deployments (over 64 TB) or cost-optimized deployments, and does not provide storage efficiency mechanisms (thin provisioning, compression, tiering, etc.). It is also not a fit for business-critical applications, because it only supports single-AZ deployment and doesn't support read-write-many mode.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Postgres deployment with Amazon FSx for NetApp ONTAP&lt;/strong&gt;: A shared storage solution that supports Multi-AZ deployments, cost efficiency mechanisms, and petabyte-scale deployments.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: in both options we will deploy Postgres with Helm to keep things simple.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Option 1: Deploying Postgres on EKS Using EBS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s see what’s involved in deploying a Postgres database on Amazon Elastic Kubernetes Service (EKS), using EBS for persistent data storage and Helm, the Kubernetes package manager, for easier deployment.&lt;/p&gt;

&lt;p&gt;Before you begin, make sure the following tools are installed on your machine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; - AWS’s Command Line Interface (CLI). It should be configured and authenticated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://eksctl.io/installation/" rel="noopener noreferrer"&gt;Eksctl&lt;/a&gt; - AWS’s CLI interface specifically tailored for their EKS service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://eksctl.io/installation/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; - A popular Kubernetes package management system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt; - A generic kubernetes CLI interface.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create an EKS Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can create a Kubernetes cluster using the AWS management console or the eksctl utility. In this example, we’ll use eksctl.&lt;/p&gt;

&lt;p&gt;To create an EKS cluster with eksctl, first create a new file named cluster-name.yaml with the following contents, replacing the placeholder values as described below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cluster-name.yaml
# Cluster containing two managed node groups
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: &amp;lt;cluster-name&amp;gt;
  region: &amp;lt;aws-region&amp;gt;
  version: "1.30"  # You may need to update this based on what is currently supported.

managedNodeGroups:
  - name: dev-ng-1
    instanceType: t3.large
    minSize: 1
    maxSize: 1
    desiredCapacity: 1
    volumeSize: 30
    volumeEncrypted: true
    volumeType: gp3
    tags:
      Env: Dev
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
      withAddonPolicies:
        autoScaler: true


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&amp;lt;cluster-name&amp;gt;&lt;/strong&gt; with the name you want to assign to your cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&amp;lt;aws-region&amp;gt;&lt;/strong&gt; with the AWS region where you want the cluster deployed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To make the following commands easier, create two variables: one named REGION, set to the aws-region used above, and another named CLUSTER_NAME, set to the cluster-name used above. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;REGION=us-west-2
CLUSTER_NAME=eks-test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To create the cluster, run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create cluster -f cluster-name.yaml --region $REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will take about 30 minutes to complete. Once the cluster is fully provisioned, you can view the nodes using the &lt;code&gt;kubectl get nodes&lt;/code&gt; command. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get nodes
NAME                                         STATUS   ROLES    AGE     VERSION
ip-192-168-70-8.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   6d22h   v1.30.4-eks-a737599
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Set Up IAM&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before the EBS CSI Add-On can do anything, you need to create an AWS role that will allow it to perform operations on your behalf. Fortunately, there is an AWS managed policy that has all the permissions defined (arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy) so you just need to reference it when creating the role. Here are the detailed steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Associate the EKS OIDC provider with your cluster by using the following command. Note that this command depends on the REGION and CLUSTER_NAME variables being set above.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl utils associate-iam-oidc-provider --region=$REGION \
  --cluster=$CLUSTER_NAME --approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
Run the following command to create the role. Note that this command depends on the REGION and CLUSTER_NAME variables being set above.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create iamserviceaccount --name ebs-csi-controller-sa \
        --namespace kube-system --cluster $CLUSTER_NAME \
        --role-name AmazonEKS_EBS_CSI_DriverRole \
        --role-only --approve --region $REGION \
        --attach-policy-arn \
          arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NOTE: The above command will fail if a role with the name of “AmazonEKS_EBS_CSI_DriverRole” already exists so you should check to confirm it isn’t already there. If it does exist, simply use a different name, but be sure to use the same name in the next step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Add the Amazon EBS CSI Add-On&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The EBS CSI driver can be managed as an EKS add-on, which simplifies management and enhances security. To install this add-on with eksctl, run the following. Note that this command depends on the REGION and CLUSTER_NAME variables being set above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create addon --region REGION --name aws-ebs-csi-driver \
  --cluster CLUSTER_NAME --service-account-role-arn \
  arn:aws:iam::&amp;lt;account_id&amp;gt;:role/AmazonEKS_EBS_CSI_DriverRole --force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;account_id&lt;/strong&gt; with your numeric AWS account ID.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AmazonEKS_EBS_CSI_DriverRole&lt;/strong&gt; with the role name you used in step 2.&lt;/li&gt;
&lt;/ul&gt;
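
&lt;p&gt;If you don’t have your account ID handy, the AWS CLI can print it (assuming the CLI is configured with credentials for the target account):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Prints just the 12-digit account ID of the current credentials
aws sts get-caller-identity --query Account --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;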

&lt;p&gt;&lt;strong&gt;Step 4: Set a Storage Class&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You need to define a storage class for the cluster and mark it as the default, so that persistent volume claims (PVCs) that don’t specify one will use it.&lt;/p&gt;

&lt;p&gt;To create an AWS storage class for the Amazon EKS cluster, create a file with a name of “ebs-storage-class.yaml” and include the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-pg-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the following kubectl command to create a storage class from your file by executing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f ebs-storage-class.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can view the storage classes available in the cluster by using the &lt;code&gt;kubectl get storageclass&lt;/code&gt; command. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get storageclass
NAME                  PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
aws-pg-sc (default)   kubernetes.io/aws-ebs   Delete          Immediate              false                  17s
gp2                   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  50m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
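
&lt;p&gt;Because aws-pg-sc is annotated as the default class, any PVC that omits storageClassName will be provisioned from it. For example, this minimal claim (the name pg-test-claim is purely illustrative) would get an 8 GiB EBS-backed volume:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-test-claim
spec:
  # no storageClassName given, so the default class (aws-pg-sc) is used
  accessModes:
    - ReadWriteOnce   # EBS volumes attach to one node at a time
  resources:
    requests:
      storage: 8Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;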



&lt;p&gt;&lt;strong&gt;Step 5: Deploy a Helm Chart for PostgreSQL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this example, we will use the &lt;a href="https://github.com/bitnami/charts/tree/main/bitnami/postgresql/#installing-the-chart" rel="noopener noreferrer"&gt;Bitnami Helm chart for PostgreSQL&lt;/a&gt;. We’ll override some of the values in a values.yaml to enable the chart to use our provisioned storage class. Create a file called “postgresdb-values.yaml” and include the following details:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;primary:
   persistence:
      storageClass: "aws-pg-sc"
auth: 
   username: postgres 
   password: my-password
   database: my_database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you have the file created, use the following command to install the Helm chart with “pgdb” as the release name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add my-repo https://charts.bitnami.com/bitnami
helm install pgdb --values postgresdb-values.yaml my-repo/postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the database is successfully deployed, you can run these commands to verify that the PV, PVC, and pod were created properly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pv
kubectl get pvc
kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The outputs should be similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl --output=custom-columns=NAME:metadata.name,STATUS:status.phase get pv
NAME                                       STATUS
pvc-adaa2e15-aa84-4a21-befc-0c6d0de6a55a   Bound
$ kubectl --output=custom-columns=NAME:metadata.name,STATUS:status.phase get pvc
NAME                     STATUS
data-pgdb-postgresql-0   Bound
$ kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
pgdb-postgresql-0   1/1     Running   0          4m46s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that I added the &lt;code&gt;--output&lt;/code&gt; option to make the output fit on this page. Feel free to omit it to see more information.&lt;/p&gt;
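
&lt;p&gt;As a final sanity check, you can connect to the database itself. The following is a sketch based on the values set in postgresdb-values.yaml (release name pgdb, user postgres, database my_database); the client pod name pg-client is arbitrary:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Launch a throwaway client pod and run a query against the service
kubectl run pg-client --rm -it --restart=Never --image bitnami/postgresql \
  --env PGPASSWORD=my-password -- \
  psql -h pgdb-postgresql -U postgres -d my_database -c 'SELECT version();'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;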

&lt;h2&gt;
  
  
  &lt;strong&gt;Option 2: Deploying Postgres on EKS Using FSx for NetApp ONTAP&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A more advanced option is to use FSxN as your underlying storage for EKS. Amazon FSx for NetApp ONTAP (FSxN) is a fully managed file system that uses the NetApp ONTAP storage operating system, built for demanding enterprise workloads. As mentioned above, this provides a shared storage solution that supports Multi-AZ deployments, cost efficiency mechanisms, and petabyte-scale deployments.&lt;/p&gt;

&lt;p&gt;Now let’s see what’s involved to deploy your Postgres database in Kubernetes with FSxN.&lt;/p&gt;

&lt;p&gt;Before you begin, make sure the following tools are installed on your machine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; - AWS’s Command Line Interface (CLI). It should be configured and authenticated.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://eksctl.io/installation/" rel="noopener noreferrer"&gt;eksctl&lt;/a&gt; - The official CLI for Amazon EKS.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; - A popular Kubernetes package management system.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt; - The Kubernetes command-line tool.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; - A popular provisioning tool.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create EKS Cluster&lt;/strong&gt;&lt;br&gt;
Same as step 1 in the EBS tutorial above.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Deploy FSxN with Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can easily deploy FSxN using Terraform. Both Amazon and NetApp provide a Terraform module which you can reference from your local environment. &lt;/p&gt;

&lt;p&gt;In this example we are going to use the Terraform module provided by NetApp. It can be found in this GitHub repository: &lt;a href="https://github.com/NetApp/FSx-ONTAP-samples-scripts/tree/main/Terraform/deploy-fsx-ontap/module" rel="noopener noreferrer"&gt;NetApp/FSx-ONTAP-samples-scripts&lt;/a&gt;. The module will do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a FSxN file system with one SVM and one volume.&lt;/li&gt;
&lt;li&gt;Create two AWS secrets. One that contains the file system administrative credentials, and another for the SVM administrative credentials.&lt;/li&gt;
&lt;li&gt;Create a security group that will allow all the required ports to leverage a NAS (CIFS or NFS) and/or block (iSCSI) file system.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To use the module, create a file named ‘main.tf’ in an empty directory with the following contents while replacing the strings with values that make sense for your deployment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "&amp;gt;=5.25"
    }
  }
}

provider "aws" {
    region = "aws-region"
}

module "fsxontap" {
    source = "github.com/NetApp/FSx-ONTAP-samples-scripts/Terraform/deploy-fsx-ontap/module"

    name = "&amp;lt;u&amp;gt;fsxn-for-eks&amp;lt;/u&amp;gt;"

    deployment_type = "MULTI_AZ_1"
    throughput_in_MBps = 128
    capacity_size_gb = 1024

    vpc_id = "vpc-XXXXXXXXXXXXXXX"
    subnets = {
      "primarysub"   = "primary-subnet-XXXXXXXXXXXXXXXXX"
      "secondarysub" = "secondary-subnet-XXXXXXXXXXXXXXXXX"
    }
    route_table_ids = ["rtb-XXXXXXXXXXXXXXX"]

    create_sg = true
    security_group_name_prefix = "sg_for_fsxn"
    cidr_for_sg = "192.168.0.0/16"
}

output "fsxn_secret_arn" {
  value = module.fsxontap.fsxn_secret_arn
}

output "svm_secret_arn" {
  value = module.fsxontap.svm_secret_arn
}

output "file_system_management_ip" {
  value = module.fsxontap.filesystem_management_ip
}

output "file_system_id" {
  value = module.fsxontap.filesystem_id
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Values to replace:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;aws-region - The region where you deployed your EKS cluster.
To make the following commands easier, set a variable named “REGION” to the AWS region. For example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;REGION=us-west-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;fsxn-for-eks - The name to associate with the FSx for ONTAP file system.&lt;/li&gt;
&lt;li&gt;vpc-XXXXXXXXXXXXXXX - The ID of the VPC that was created when the EKS cluster was deployed. You can get this information from the AWS console (go to the EKS services page and select the cluster you deployed), or execute the following command, replacing cluster-name with the name of your cluster. Note this command depends on the REGION variable being defined.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks describe-cluster --name cluster-name --query cluster.resourcesVpcConfig.vpcId --region $REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you get the VPC ID, to make the next few commands easier, set a variable named VPC_ID to that value. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;VPC_ID=vpc-0b98eccb6404905bc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;primary-subnet-XXXXXXXXXXXXXXXXX and secondary-subnet-XXXXXXXXXXXXXXXXX - Set to two different “Public” subnet IDs in the VPC. The following command will list all the public subnets in the VPC and their names; just pick any two. Note that this command depends on the VPC_ID and REGION variables being set before running it.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-subnets --filter Name=vpc-id,Values=$VPC_ID --query "Subnets[].{SubnetId:SubnetId,Name:Tags[?Key=='Name']|[0].Value}" --output=text --region $REGION | grep -i public
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;rtb-XXXXXXXXXXXXXXX - The route table ID used by the public subnets. The following command will give you all the route table IDs in the VPC, with their associated subnets. Choose the route table ID that has the public subnets associated with it. Note this command depends on the VPC_ID and REGION variables being set.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-route-tables --filter Name=vpc-id,Values=$VPC_ID --query 'RouteTables[].{RouteTableId:RouteTableId,Associations:Associations[].SubnetId}' --region $REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;192.168.0.0/16 - Set cidr_for_sg to the VPC’s CIDR. The following command will give you that value. Note that it depends on the VPC_ID and REGION variables being set.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-vpcs --filter Name=vpc-id,Values=$VPC_ID --query 'Vpcs[0].CidrBlock' --region $REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The rest of the values you can leave as is, or adjust as needed. For more information on what values you can set, see the &lt;a href="https://github.com/NetApp/FSx-ONTAP-samples-scripts/tree/main/Terraform/deploy-fsx-ontap/module" rel="noopener noreferrer"&gt;FSxN GitHub repo&lt;/a&gt; page.&lt;/p&gt;

&lt;p&gt;To initialize the new module, run the following command. This will also initialize backends and install provider plugins:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create and preview an execution plan by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once confirmed, you can execute the Terraform code to set up your FSxN storage environment by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply --auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The process can take up to 45 minutes. You will see a lot of output, but eventually a successful run will end with something similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Apply complete! Resources: 39 added, 0 changed, 0 destroyed.

Outputs:

file_system_management_ip = "198.19.255.117"
file_system_id = "fs-00276859917feca10"
fsxn_secret_arn = "arn:aws:secretsmanager:us-west-2:759995470648:secret:fsxn-secret-6e38c2df-CKPGRm"
svm_secret_arn = "arn:aws:secretsmanager:us-west-2:759995470648:secret:fsxn-secret-cf9c75da-tgr9fW"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Create IAM role for Trident&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this example we will be using NetApp’s Astra Trident to manage the FSxN file system. Since it will be issuing AWS API calls to control the file system, it needs AWS permissions to do so. To give it the appropriate permissions, you’ll create a policy, then a role, and finally attach the policy to the role.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3a: Create an IAM Policy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a file named “policy.json” with the contents provided below. Replace &amp;lt;svm_secret_arn&amp;gt; with the ARN of the secret for the SVM. The secret ARN will be part of the final output of the “terraform apply” command. Be sure to use the one for the SVM and not the one for the file system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "fsx:DescribeFileSystems",
                "fsx:DescribeVolumes",
                "fsx:CreateVolume",
                "fsx:RestoreVolumeFromSnapshot",
                "fsx:DescribeStorageVirtualMachines",
                "fsx:UntagResource",
                "fsx:UpdateVolume",
                "fsx:TagResource",
                "fsx:DeleteVolume"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Action": "secretsmanager:GetSecretValue",
            "Effect": "Allow",
            "Resource": "&amp;lt;svm_secret_arn&amp;gt;"
        }
    ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3b: Create the policy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the following command to create the IAM policy. Replace &amp;lt;policy-name&amp;gt; with the name you want assigned to the policy. Note that the following command depends on the REGION variable being set in step 2 above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-policy --policy-name &amp;lt;policy-name&amp;gt; --output=text \
 --policy-document file://policy.json --query=Policy.Arn --region $REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output from this command will just be the ARN for this policy. That string will be used to assign the policy to the role in a command below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3c: Create the assume role policy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a file named “assume_role.json” with the following contents. Make the necessary replacements.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
     "Federated": "arn:aws:iam::account_id:oidc-provider/oidc_provider"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "oidc_provider:aud": "sts.amazonaws.com",
        "oidc_provider:sub": "system:serviceaccount:trident:trident-controller"
      }
    }
  }]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Values to replace:&lt;br&gt;
account_id - your numeric AWS account ID. You can obtain it with the following command. Note that the command depends on the REGION variable being set in step 2 above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sts get-caller-identity --region $REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;oidc_provider (all three occurrences) - with the OIDC provider ID of your EKS cluster. You can get that with the following command. Replace &amp;lt;eks_cluster_name&amp;gt; with the name you assigned to your EKS cluster. Note that the following command depends on the REGION variable being set in step 2 above.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks describe-cluster --name &amp;lt;eks_cluster_name&amp;gt; --query \
 cluster.identity.oidc.issuer --output=text --region $REGION | \
 sed -e 's,^https://,,'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3d: Create the role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Execute the following command to create the role. Replace &amp;lt;role-name&amp;gt; with the name you want assigned to the role. Note that the following command depends on the REGION variable being set in step 2 above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-role --assume-role-policy-document file://assume_role.json \
  --role-name &amp;lt;role-name&amp;gt; --query=Role.Arn --output=text --region $REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output from the above command should just be the ARN of the role that is created. You will need it in the “helm install” command below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3e: Attach the policy to the role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The final step is to attach the policy created in step 3b to the role created above. Replace &amp;lt;role-name&amp;gt; with the name you assigned to the role, and replace &amp;lt;policy-arn&amp;gt; with the ARN of the policy created in step 3b. Note that the following command depends on the REGION variable being set in step 2 above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam attach-role-policy --role-name &amp;lt;role-name&amp;gt; \
  --policy-arn  &amp;lt;policy-arn&amp;gt; --region $REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3f: Create an OIDC provider&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Trident will use OIDC to authenticate with AWS, so it needs an OIDC provider for the EKS cluster. To create one, run the following command. Replace &amp;lt;cluster_name&amp;gt; with the name of your cluster. Also, note that the following command depends on the REGION variable being set in step 2 above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl utils associate-iam-oidc-provider --cluster &amp;lt;cluster_name&amp;gt; \
  --approve --region $REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Deploy Astra Trident Operator&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Astra Trident is a Kubernetes Operator created by NetApp, which helps integrate its storage technology with Kubernetes. &lt;/p&gt;

&lt;p&gt;There are two steps to install the Trident Operator. The first step is to add the trident repo to your helm configuration. Do that by executing this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add netapp-trident https://netapp.github.io/trident-helm-chart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second step is to run the “helm install” command, but before doing that, set a variable named “CI” to the following string. Replace &amp;lt;trident_role_arn&amp;gt; with the ARN of the role you created in step 3d. Be sure to preserve all the single and double quotes and the space between “role-arn:” and the trident_role_arn; they are necessary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CI="'eks.amazonaws.com/role-arn: &amp;lt;trident_role_arn&amp;gt;'"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you are ready to run the helm install command without having to replace anything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install trident netapp-trident/trident-operator --version 100.2406.1 \
 --set cloudProvider="AWS" --set cloudIdentity="$CI" \
 --create-namespace --namespace trident
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the above command installs the version of Trident (100.2406.1) that was the latest at the time this post was published. Visit &lt;a href="https://github.com/netapp/trident/releases" rel="noopener noreferrer"&gt;https://github.com/netapp/trident/releases&lt;/a&gt; to see what the latest version is.&lt;/p&gt;

&lt;p&gt;You can confirm that Trident is up and running in your cluster by running the &lt;code&gt;kubectl get deployment&lt;/code&gt; command. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get deployment -n trident
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
trident-controller   1/1     1            1           31s
trident-operator     1/1     1            1           62s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 5: Configure Storage Backend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The next step in getting EKS to use FSxN storage is to define a backend storage provider for the Trident Operator to use. There are several ways to do that, but for this example we’re going to use the ‘kubectl’ command with a configuration file. So the first step is to create a file named ‘backend-trident.yaml’ with the contents below, replacing &amp;lt;FSX_ID&amp;gt; with the file system ID created by the ‘terraform apply’ command and &amp;lt;SVM_SECRET_ARN&amp;gt; with the ARN of the secret that was also created by the ‘terraform apply’ command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: backend-fsx-ontap-nas
  namespace: trident
spec:
  version: 1
  storageDriverName: ontap-nas
  svm: fsx
  aws:
    fsxFilesystemID: &amp;lt;FSX_ID&amp;gt;
  credentials:
    name: &amp;lt;SVM_SECRET_ARN&amp;gt;
    type: awsarn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the above assumes the SVM is named ‘fsx’. This is the default name the Terraform module uses; however, it does allow you to change it. So, if you gave the SVM a different name, replace ‘fsx’ with that name.&lt;/p&gt;

&lt;p&gt;Once you have created that file, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -n trident -f backend-trident.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To confirm that the backend was created, run the ‘kubectl describe tridentbackendconfig -n trident’ command and check that the Status field in its output reports “Success”. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe tridentbackendconfig -n trident
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the status shows “Failed”, the output should include a message explaining why. Once you have resolved the issue, the status may change from Failed to Success automatically, since Trident keeps retrying the configuration until it succeeds. However, if you want to make sure you are starting fresh, you can remove the backend by running the same command you used to create it, replacing “create” with “delete” (i.e., &lt;code&gt;kubectl delete -n trident -f backend-trident.yaml&lt;/code&gt;), and then re-running the “create” command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Define a Storage Class&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The next step is to create a storage class for FSxN storage. To do that, create a file named “storage-class.yaml” with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-gold
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
allowVolumeExpansion: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you have created the file run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f storageclass.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To confirm the storage class was created, use the ‘kubectl get storageclass’ command. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get storageclass
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2          kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  5h50m
ontap-gold   csi.trident.netapp.io   Delete          Immediate              true                   9s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The FSxN storage class is the “ontap-gold” one.&lt;/p&gt;
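
&lt;p&gt;Optionally, if you want PVCs that omit a storageClassName to land on FSxN automatically, you can mark “ontap-gold” as the cluster’s default storage class. A sketch using the standard Kubernetes default-class annotation (note that gp2 may already carry this annotation, in which case remove it from gp2 first):&lt;/p&gt;

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-gold
  annotations:
    # Standard Kubernetes annotation that marks this class as the default
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
allowVolumeExpansion: true
```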

&lt;p&gt;&lt;strong&gt;Step 7: Deploy Helm chart for PostgreSQL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that we have EKS all set up to offer FSxN storage, we are ready to deploy PostgreSQL. Similar to the EBS tutorial above, we will use &lt;a href="https://github.com/bitnami/charts/tree/main/bitnami/postgresql/#installing-the-chart" rel="noopener noreferrer"&gt;the Bitnami Helm chart for PostgreSQL&lt;/a&gt; to provision the PostgreSQL database. To use it, simply create a file named &lt;code&gt;postgres-values.yaml&lt;/code&gt; with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;primary:
   persistence:
      storageClass: "ontap-gold"
auth: 
   username: postgres 
   password: demo-password
   database: demo_database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see it sets a default user and password. It is a best practice to change the password immediately after deploying the database.&lt;/p&gt;
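
&lt;p&gt;Rather than committing a weak password like the one above, you could generate a random one first and paste it into postgres-values.yaml. A minimal sketch, assuming the openssl CLI is available:&lt;/p&gt;

```shell
# Generate a random 20-character password (15 random bytes, base64-encoded)
PG_PASSWORD="$(openssl rand -base64 15)"
echo "Generated a ${#PG_PASSWORD}-character password"
```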

&lt;p&gt;To install the database, run the following two commands. The first one just ensures the appropriate repo has been added; the second actually does the deployment. It names the database deployment “pgdb-ninja”, but that name can be anything, so feel free to name it something else.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add my-repo https://charts.bitnami.com/bitnami
helm install pgdb-ninja --values postgres-values.yaml my-repo/postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output from the “helm install” command gives information on how to access the database.&lt;/p&gt;

&lt;p&gt;After the database successfully deploys, run the following commands to check that the persistent volume (PV) and persistent volume claim (PVC) were created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pv
kubectl get pvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also run the following command to ensure the PostgreSQL database itself is up and running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The outputs of those commands should be similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl --output=custom-columns=NAME:metadata.name,STATUS:status.phase get pv
NAME                                       STATUS
pvc-d38c47ea-1daa-4e18-836a-cbeb74295910   Bound
$ kubectl --output=custom-columns=NAME:metadata.name,STATUS:status.phase get pvc
NAME                           STATUS
data-pgdb-ninja-postgresql-0   Bound
$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
pgdb-ninja-postgresql-0   1/1     Running   0          13m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that I intentionally selected only two columns from the normal output so it would fit on the page. Please feel free to execute the commands above without the --output= option to view the full output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deploying PostgreSQL on Kubernetes, specifically on Amazon Elastic Kubernetes Service (EKS), has become increasingly efficient and versatile.&lt;/p&gt;

&lt;p&gt;This article detailed two deployment strategies. The "Plain Vanilla" approach, utilizing Amazon Elastic Block Storage (EBS), offers a straightforward method for getting PostgreSQL up and running. It's suitable for those seeking a simple deployment without the complexities of high availability or large-scale performance optimization.&lt;/p&gt;

&lt;p&gt;The "Ninja" method, leveraging Amazon FSx for NetApp ONTAP (FSxN), presents a sophisticated option for enterprises requiring high performance, scalability, and advanced features such as data deduplication, compression, and automatic tiering. This approach not only addresses the limitations of the simpler EBS method but also introduces cost optimization, improved performance, and enhanced data protection capabilities, making it ideal for large-scale and critical applications.&lt;/p&gt;

&lt;p&gt;As Kubernetes continues to evolve, it's clear that its ecosystem is becoming increasingly friendly for stateful applications like PostgreSQL. Whether you're deploying a small-scale application or a large enterprise system, Kubernetes offers robust solutions to meet a wide range of needs, making it a compelling choice for modern application deployment and management.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>postgresql</category>
      <category>eks</category>
    </item>
    <item>
      <title>Storage options for EKS: Comparing Amazon EFS, EBS, S3 and FSx for ONTAP</title>
      <dc:creator>That Cloud Expert</dc:creator>
      <pubDate>Tue, 08 Oct 2024 08:45:41 +0000</pubDate>
      <link>https://forem.com/thatcloudexpert/storage-options-for-eks-comparing-amazon-efs-ebs-s3-and-fsx-for-ontap-kb3</link>
      <guid>https://forem.com/thatcloudexpert/storage-options-for-eks-comparing-amazon-efs-ebs-s3-and-fsx-for-ontap-kb3</guid>
      <description>&lt;p&gt;Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service on AWS. With the rise of s&lt;a href="https://awslabs.github.io/data-on-eks/docs/introduction/intro" rel="noopener noreferrer"&gt;tateful applications running on Kubernetes&lt;/a&gt;, it’s more important than ever to understand the role storage plays for these critical workloads. &lt;/p&gt;

&lt;p&gt;In this post I’ll try to simplify the EKS storage selection process in AWS, and explain in which scenarios it’s best to use a particular service among Amazon EFS, EBS, S3, and Amazon FSx for NetApp ONTAP (FSx for ONTAP).&lt;/p&gt;

&lt;p&gt;In this analysis I’ll touch upon six significant storage metrics/capabilities to make it easier to choose the right storage for a few different workloads running on EKS: general file storage for a SaaS application, data-intensive AI/ML and analytics workloads, NoSQL databases, web applications, and queuing systems.&lt;/p&gt;

&lt;p&gt;The metrics/capabilities we’re going to cover are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Performance, divided into two aspects: Throughput / IOPS and latency&lt;/li&gt;
&lt;li&gt;Durability and availability&lt;/li&gt;
&lt;li&gt;Scalability &lt;/li&gt;
&lt;li&gt;ReadWriteMany&lt;/li&gt;
&lt;li&gt;Supported protocols: Block, NFS, and SMB/CIFS&lt;/li&gt;
&lt;li&gt;Cost&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Read on as we cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Metrics&lt;/li&gt;
&lt;li&gt;How the different AWS storage services stack up&lt;/li&gt;
&lt;li&gt;Mapping optimal service options per workload&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Metrics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s a short description for each metric/capability that we’ll be looking for in the different storage services:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Performance describes how quickly a storage service can respond to user requests and changes. It can be measured in two ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Throughput&lt;/em&gt; is a measure of the amount of data (measured in bits or bytes) that can be processed every second. A related measure is IOPS (input/output operations per second), which counts the number of individual read and write operations completed each second. Both terms are useful in describing performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Latency&lt;/em&gt; is a measure of the time it takes a storage service to serve read requests and respond to write operations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
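
&lt;p&gt;The two performance measures are linked by I/O size: throughput is roughly IOPS multiplied by the size of each operation. A back-of-the-envelope sketch (the numbers are illustrative, not vendor figures):&lt;/p&gt;

```shell
# Throughput (MiB/s) is approximately IOPS x I/O size
IOPS=16000
IO_SIZE_KIB=64                                    # 64 KiB per operation (illustrative)
THROUGHPUT_MIBPS=$(( IOPS * IO_SIZE_KIB / 1024 ))
echo "${THROUGHPUT_MIBPS} MiB/s"
```

&lt;p&gt;The same IOPS figure at a 4 KiB I/O size would yield only about 62 MiB/s, which is why quoting IOPS without an I/O size says little about throughput.&lt;/p&gt;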

&lt;p&gt;&lt;strong&gt;Durability and availability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Durability is a measure of how safe data is from being lost. Availability refers to the best possible uptime provided by a service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By scalability, we’re talking about two abilities: scaling up, which increases storage capacity or compute power by adding hard drives, memory, etc. to the servers in use; and scaling out, which adds instances in order to handle the most demanding workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ReadWriteMany&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ReadWriteMany is the ability for multiple nodes to mount the same volume for both reading and writing.&lt;/p&gt;
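
&lt;p&gt;In Kubernetes terms, ReadWriteMany is requested as a PVC access mode. A sketch of such a claim (the storage class name is hypothetical and must point at an RWX-capable backend):&lt;/p&gt;

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany                # multiple nodes may mount the volume read/write
  storageClassName: some-rwx-class # hypothetical; must be backed by NFS or similar
  resources:
    requests:
      storage: 10Gi
```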

&lt;p&gt;&lt;strong&gt;Protocols&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Different operating systems are tied to specific protocols: Linux machines typically use NFS, while Windows requires SMB/CIFS. The block-level iSCSI protocol is also widely used in many important workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cost is a basic metric that is always a consideration.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How the different AWS storage services stack up&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The table below checks the metrics against the different AWS storage options we’ll be looking at—Amazon EBS, Amazon EFS, Amazon FSx for NetApp ONTAP, and Amazon S3. This will make it easier to show when each is best to use.&lt;/p&gt;

&lt;p&gt;In the table below, green represents options that support that feature for demanding workloads (i.e., high performance, full support, low cost, etc.), yellow represents some support, and red denotes limited to no support for that feature at the enterprise level.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vty6whlvcc4va97oxav.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vty6whlvcc4va97oxav.png" alt="Image description" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Mapping optimal service options per workload&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Each workload has its own characteristics and considerations. In this section, I’ll try to map the most important metrics for some of today’s most popular workloads—namely, general file storage for a SaaS application, data-intensive AI/ML and analytics workloads, NoSQL databases, web applications, and queuing systems—to pinpoint the best storage option for each workload.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;General files for SaaS applications&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Top considerations&lt;/strong&gt;: Durability and availability, scalability, cost&lt;/p&gt;

&lt;p&gt;For SaaS-type applications, durability and availability are in many cases the most critical factors for selecting a storage service. For that, EFS, FSx for ONTAP, and S3 all provide good options. EBS runs by default in a single AZ, which makes it less durable and more susceptible to downtime.&lt;/p&gt;

&lt;p&gt;S3 is indeed the most cost-effective option. However, some important considerations to take into account:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If used as a StorageClass you’ll need the Mountpoint for Amazon S3 CSI driver, a relatively new option from AWS. This is less reliable when used as a file system.&lt;/li&gt;
&lt;li&gt;Latency is relatively high.&lt;/li&gt;
&lt;li&gt;If you need to read/write a lot of small files, the cost might be overwhelming (PUT/GET requests).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;EFS and FSx for ONTAP address durability and availability considerations quite well. Their scalability, both scale up and out, are also notable.&lt;/p&gt;

&lt;p&gt;In terms of costs, both EFS and FSx for ONTAP provide multi-AZ availability. The difference is that FSx for ONTAP costs are based on the provisioned disk capacity, not the used capacity. If your data is compressible, FSx for ONTAP can be substantially less expensive. Both have capacity pool options that tier cold data to reduce costs.&lt;/p&gt;

&lt;p&gt;When it comes to performance of EFS and FSx for ONTAP, you should take into account that EFS has a relatively high latency. That can be a dealbreaker for many applications. That high latency plays a major role in data ingestion time and data processing tasks. Below is a customer benchmark for processing 1B records using FSx for ONTAP and EFS. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2i6dl0l9m3h11souxpe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2i6dl0l9m3h11souxpe.png" alt="Image description" width="574" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Find the full details of this benchmark test in &lt;a href="https://aws.amazon.com/blogs/apn/how-mycom-osi-optimized-saas-storage-with-amazon-fsx-for-netapp-ontap/" rel="noopener noreferrer"&gt;How MYCOM OSI Optimized SaaS Storage with Amazon FSx for NetApp ONTAP&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data-intensive AI/ML and analytics workloads&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Top considerations&lt;/strong&gt;: Performance, latency, scalability, and cost&lt;br&gt;
&lt;strong&gt;Example workloads&lt;/strong&gt;: Analytics (BI), SageMaker, Kubeflow, Airflow&lt;/p&gt;

&lt;p&gt;When running these types of workloads on EKS, runtime is a big issue. You’ll need to make sure that reading/writing data to disk is done in the most efficient way possible.&lt;/p&gt;

&lt;p&gt;For performance/latency considerations, EBS, EFS, and FSx for ONTAP all provide single-digit millisecond latency. That makes these services better suited to handle these data-intensive workloads. FSx for ONTAP is more favorable from a latency perspective than EFS and EBS. For a full discussion of these latency benchmarks, check out &lt;a href="https://sookocheff.com/post/kubernetes/benchmarking-aws-csi-drivers/" rel="noopener noreferrer"&gt;Benchmarking AWS CSI Drivers&lt;/a&gt;, which broke down the results in the following graph:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xxnpck9mccgprzq3o3l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xxnpck9mccgprzq3o3l.png" alt="Image description" width="594" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Given these benchmarks, for scalability considerations, EBS might be less ideal. EFS and FSx for ONTAP both support scale out and scale up capabilities. EFS can scale up to dozens of PBs of capacity and scale out to provide 3-30 GBps throughput (&lt;a href="https://docs.aws.amazon.com/efs/latest/ug/performance.html" rel="noopener noreferrer"&gt;source&lt;/a&gt;). FSx for ONTAP can scale up to 36 GBps throughput and dozens of PBs of capacity (&lt;a href="https://aws.amazon.com/blogs/aws/new-scale-out-file-systems-for-amazon-fsx-for-netapp-ontap/" rel="noopener noreferrer"&gt;source&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;From a cost perspective, FSx for ONTAP provides a single-AZ deployment that can be accessed from pods in different AZs. That differs from EFS, where single-AZ deployments can only be accessed by pods within the same AZ. That might be a major limitation and force you to adopt a multi-AZ deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NoSQL DBs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Top considerations&lt;/strong&gt;:  Durability and availability, latency, performance&lt;br&gt;
&lt;strong&gt;Example workloads&lt;/strong&gt;: Cassandra, Elasticsearch, Redis, MongoDB&lt;/p&gt;

&lt;p&gt;When running business-critical applications, you don’t want your database to go down, so we’ll first concentrate on the metrics that look at securely running a database in Kubernetes. Important to note: I’m not touching data protection methods in this article, just the storage backend properties of availability and durability.&lt;/p&gt;

&lt;p&gt;Since most of the databases mentioned above don't officially support S3 (that’s only possible using unsupported plugins), I’m leaving it out of our consideration for this workload. I’ll also exclude EFS from the comparison since it only supports NFS, and the best practice for deploying these databases is to attach storage as local disks via iSCSI.&lt;/p&gt;

&lt;p&gt;For the most part, EBS can be a good option, providing the best latency characteristics out of the bunch. Usually, you should be able to determine the number of replicas needed by your databases, allowing you to determine the data protection level you require. However, if your application requires near real-time consistency, that can only be achieved with a multi-AZ deployment, which would make FSx for ONTAP the only viable option.&lt;/p&gt;

&lt;p&gt;When it comes to costs, it’s important to keep in mind that running EBS at scale can become quite a significant expense. For large-scale deployments, it might be a better option to consider FSx for ONTAP. That’s because FSx for ONTAP volumes are thinly provisioned and supported by storage efficiency features—including auto-tiering cold data to a capacity pool and data deduplication, compression, compaction—all of which combine to significantly drive down storage costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Web applications&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Top considerations&lt;/strong&gt;: Durability and availability, scalability, cost&lt;/p&gt;

&lt;p&gt;This workload describes scenarios such as file storage for a web server (such as nginx) or for a web content management system such as WordPress. This workload is very similar to the considerations for running file storage for SaaS applications, and everything covered above also applies here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Queuing systems&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Top 3 considerations&lt;/strong&gt;: Latency, durability and availability, cost&lt;br&gt;
&lt;strong&gt;Example workloads&lt;/strong&gt;: RabbitMQ, Kafka&lt;/p&gt;

&lt;p&gt;Latency is the key for successful deployment of RabbitMQ or any other queuing system. In that regard, EBS can be a good option. However, for durability and cost you might want to consider FSx for ONTAP.&lt;/p&gt;

&lt;p&gt;RabbitMQ can support cluster deployment for high availability in different AZs, however, this will incur significant cross-AZ traffic costs and won’t support near real-time consistency.&lt;br&gt;
FSx for ONTAP will have a latency penalty compared to using EBS, however, it offers more cost efficiency and the option for multi-AZ deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The purpose of this article is not to replace proper evaluation and testing of the different storage options for your EKS application, but rather help you narrow down the options for choosing a storage platform for it. &lt;/p&gt;

&lt;p&gt;Below are some useful links that can help you in the deployment of the storage solutions for EKS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/fsx-ontap.html" rel="noopener noreferrer"&gt;Fsx for ONTAP on EKS docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/NetApp/FSx-ONTAP-samples-scripts/tree/main/EKS/FSxN-as-PVC-for-EKS" rel="noopener noreferrer"&gt;Deploying FSx for ONTAP&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html" rel="noopener noreferrer"&gt;EBS on EKS docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html" rel="noopener noreferrer"&gt;EFS on EKS docs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>eks</category>
      <category>fsxn</category>
      <category>efs</category>
      <category>ebs</category>
    </item>
    <item>
      <title>How can the AWS Well-Architected Framework improve your storage layer?</title>
      <dc:creator>That Cloud Expert</dc:creator>
      <pubDate>Wed, 18 Sep 2024 11:44:11 +0000</pubDate>
      <link>https://forem.com/thatcloudexpert/how-can-the-aws-well-architected-framework-improve-your-storage-layer-3cp7</link>
      <guid>https://forem.com/thatcloudexpert/how-can-the-aws-well-architected-framework-improve-your-storage-layer-3cp7</guid>
      <description>&lt;p&gt;We don’t talk much about the storage aspects of the AWS Well-Architected Framework, but that’s a big oversight. A storage layer will have a direct effect on every pillar in the Well-Architected Framework, from reliability and security to cost and performance.&lt;/p&gt;

&lt;p&gt;In this blog post we’ll take a look at the AWS Well-Architected Framework and see how the best practices it introduces at the storage layer can play a big part in AWS-based file shares and NAS migrations to AWS.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What is the AWS Well-Architected Framework?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The AWS Well-Architected Framework was designed by AWS to guide customers in building secure, high-performing, resilient, and efficient development and operations on AWS. It’s a set of best practices arranged in 6 pillars—Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, and Sustainability—which address key aspects of your AWS architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe26wxrkltz4sbvf1avc0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe26wxrkltz4sbvf1avc0.png" alt="Image description" width="502" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aligning with the framework can help you design an AWS architecture that runs more efficiently and aligns with industry-leading practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;File Sharing and the AWS Well-Architected pillars&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AWS Well-Architected Framework provides best practices that address storage challenges that can arise when planning NAS migrations or designing AWS architectures to work with file shares.&lt;/p&gt;

&lt;p&gt;There are some important storage aspects to consider in each pillar.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Operational Excellence and Performance Efficiency&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The Operational Excellence and Performance Efficiency pillars are pivotal in the AWS Well-Architected Framework. These pillars help organizations develop and operate workloads efficiently by continuously driving process improvements in close connection with both system requirements and business value.&lt;/p&gt;

&lt;p&gt;These pillars address intricate storage challenges faced by evolving cloud architectures. A great example are data-heavy applications, such as artificial intelligence/machine learning (AI/ML) experiments or data transformation and analytics. These workloads are often deployed as containerized microservices that demand persistent and highly available attached data volumes. These architectures and their storage layers require finding a delicate balance of durability and costs without compromising performance.&lt;/p&gt;

&lt;p&gt;How we store and access data in a given workload is also &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/framework/perf-data.html" rel="noopener noreferrer"&gt;key to performance&lt;/a&gt;. A practical example of this is leveraging native file sharing capabilities with a container orchestration service such as Amazon EKS or Amazon ECS instead of using ephemeral local hardware storage. In theory, such setups would be less performant due to higher network latency but more durable and resilient. In practice, we know that the performance impact is negligible and the advantages of this architecture largely outweigh its disadvantages.&lt;/p&gt;

&lt;p&gt;Another point: As user bases expand, typically storage requirements grow, leading to increased operational costs. So it’s important to maintain storage agility in step with the speed of application development—without losing sight of cost efficiency in the pay-as-you-go model. Managing these types of stateful workloads requires storage agility in both development and operations. Consider a scenario that requires hundreds of data copies, such as testing cycles, backup and disaster recovery, or migrations across environments.&lt;/p&gt;

&lt;p&gt;Another key part of operational excellence is the principle of strong data isolation between tenants and deployment environments. This segmentation principle is there to safeguard both individual and organizational interests, and it’s one that demands a real understanding of business requirements and expectations. One example is the segmentation between different customers that need to take into account not only the customer organizations but also industries (different legal requirements) and geographical locations (e.g., EU or US).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Security and Reliability&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Security and Reliability pillars play a crucial role in mitigating risks, protecting data, and enabling business continuity. These practices enable a workload to deliver value, taking into account compliance requirements in a consistent manner, even in the face of unexpected events. Every enterprise file share and storage architecture needs to maintain secure and continuous data availability—that’s non-negotiable.&lt;/p&gt;

&lt;p&gt;Great software architectures should be designed with storage solutions that are efficient but also resilient and secure. The best practices in the Security and Reliability pillars reinforce data persistence even in evolving workloads, safeguarding against potential disruptions that could compromise the integrity of stored information.&lt;/p&gt;

&lt;p&gt;Data lifecycle management, combined with data protection, is a paramount aspect to take into account. In practice, this means identifying and categorizing your data, with a special focus on sensitive data, and making sure it’s stored accordingly.&lt;/p&gt;

&lt;p&gt;Another good example is leveraging native functionalities to implement data encryption, both during transit and at rest. Also, you want to enforce encryption and other security measures through automation and security control policies, which will improve your overall security posture.&lt;/p&gt;

&lt;p&gt;Consider how these design best practices have been applied in the cases of cloud guardrails and storage policies. Both of these can be applied to the entire organization (or a subset of accounts) and substantially improve the reliability and security posture of workloads.&lt;/p&gt;

&lt;p&gt;As data footprints expand with the increased demand and usage, having a robust data infrastructure in place is a must-have. The ability to quickly recover from unexpected events such as cyberattacks or regional outages is fundamental. The most elemental data protection task one can accomplish is to set up regular backups and craft a disaster recovery strategy. &lt;/p&gt;

&lt;p&gt;However, that only takes you so far. A modern data infrastructure should include functionality that makes it easier to implement and fulfill strict recovery point objective (RPO) and recovery time objective (RTO) requirements, withstanding failures in a secure, reliable, and cost-efficient manner.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Cost Optimization and Sustainability&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Cost Optimization and Sustainability pillars are instrumental in addressing two key aspects of operating on AWS: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Responsibly using cloud resources.&lt;/li&gt;
&lt;li&gt;Building an IT culture of thriftiness and efficiency.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While one pillar focuses on best practices that help deliver business value at the lowest cost, the other offers guidance to minimize environmental impacts by using fit-for-purpose resources with efficient energy consumption.&lt;/p&gt;

&lt;p&gt;When an AWS workload uses data intensively or grows at a fast pace, sustainability and cost optimization are easy to overlook. Efficient storage keeps operational costs manageable. The typical pay-as-you-go cloud cost model demands a delicate equilibrium between responsive storage solutions and cost-conscious practices.&lt;/p&gt;

&lt;p&gt;The sustainability pillar encompasses the overall environmental impact of your entire solution infrastructure. Following the best practices this pillar recommends—such as selecting AWS Regions with smaller carbon footprints or storing and processing data geographically closer to end users—can help drive eco-friendly practices that can contribute to a greener IT approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can align with AWS Well-Architected best practices using Amazon FSx for NetApp ONTAP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon FSx for NetApp ONTAP is a fully managed AWS service built on NetApp® ONTAP® software that can serve as an indispensable component in aligning with the storage best practices in the AWS Well-Architected Framework.&lt;/p&gt;

&lt;p&gt;FSx for ONTAP addresses the intricate storage challenges encountered by AWS customers, enabling those best practices to become a reality by offering a suite of advanced features:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;- Multi-Availability Zone deployment aligns with the Reliability pillar&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Using the Multi-Availability Zone (AZ) configuration option, FSx for ONTAP mirrors your application data across two nodes located in separate AZs. If an AZ fails, an automatic and seamless failover takes place, with the node in the functional AZ taking on the workload. Once the impacted AZ recovers, a non-disruptive failback to normal dual-node operation takes place.&lt;/p&gt;

&lt;p&gt;This level of resilience mitigates risk and allows you to design for an RPO of zero and an RTO of under 60 seconds.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;- NetApp Snapshot™ and cloning technologies provide benefits for all pillars&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With FSx for ONTAP, point-in-time volume copies are created at lightning speed, using only pointers to the dataset as it existed at a specific time. This is great from a performance perspective as well as from a cost optimization and sustainability standpoint, since actual data usage is kept to a minimum.&lt;/p&gt;

&lt;p&gt;Similarly, the FlexClone® technology creates thin-clone data copies. These clone copies leverage the same pointers that Snapshot copies use, so they only consume storage capacity for changes made to the cloned copy, instead of consuming storage for an entire copy of the dataset.&lt;/p&gt;

&lt;p&gt;These technologies are game changers, enabling better business outcomes while simultaneously lowering storage footprint and driving efficiencies.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;- Cross-region replication bolsters the Reliability and Security pillars&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Cross-region replication, powered by NetApp SnapMirror® data replication technology, enhances backup and disaster recovery capabilities. It enables incremental data replication between regions, achieving an impressive RPO of less than 5 minutes, and RTO of less than 10 minutes.&lt;/p&gt;

&lt;p&gt;This makes rapid recovery possible in a consistent manner even in the face of unexpected events such as accidental deletion due to human error or regional outages, providing a very practical way to address the recommendations in the Reliability and Security, as well as the Performance and Cost Optimization pillars.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;- Features that address the Security and Operational Excellence pillars&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Security and compliance are bolstered through features like Write-Once, Read-Many (WORM) storage using NetApp SnapLock®, protecting against ransomware attacks. Additional security measures, including Vscan and NetApp FPolicy, coupled with encryption at rest and in transit, fortify the overall data security for workloads and applications with stringent compliance requirements.&lt;/p&gt;

&lt;p&gt;These advanced features mitigate storage deployment and management risks, making it easier to implement the recommendations from the Security and Operational Excellence pillars.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;- Automation and efficiency features address Operational Excellence, Sustainability, and Cost Optimization pillars&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With FSx for ONTAP, continuous cost optimization is achieved through thin provisioning, storage efficiency features including data compression, deduplication, and compaction, automated data tiering, and thin cloning.&lt;/p&gt;

&lt;p&gt;Multi-protocol data access also plays a part in reducing costs: your data can be accessed regardless of which file protocol your applications use. That avoids the duplicate storage expense and the synchronization complexity of running separate services for different file access protocols, such as SMB and NFS.&lt;/p&gt;

&lt;p&gt;Plus, FSx for ONTAP leverages NetApp storage efficiencies that reduce storage footprint and costs. These aspects collectively translate into lower overall monthly storage costs, positioning FSx for ONTAP to address several recommendations from the Sustainability, Operational Excellence, and Cost Optimization pillars.&lt;/p&gt;

&lt;p&gt;Operational Excellence is also achieved through the integration with the popular automation tools Terraform, CloudFormation, and Ansible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What would Well-Architected change about your storage environment?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Navigating the storage landscape within AWS requires a holistic approach, and the Well-Architected Framework's pillars can be your guide. From addressing data-heavy workload challenges to ensuring robust security and reliability, and optimizing costs while embracing sustainability, each pillar contributes to a well-rounded storage strategy.&lt;/p&gt;

&lt;p&gt;These best practices highlight the need for agility in storage solutions, rigorous security measures, continuous reliability, cost-conscious practices, and a commitment to sustainability. Organizations can leverage these pillars to build storage architectures that not only meet today's challenges but also remain resilient and adaptable in the ever-evolving cloud.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Implement a fully managed shared file storage for Red Hat OpenShift Service on AWS (ROSA) with Amazon FSx for NetApp ONTAP</title>
      <dc:creator>That Cloud Expert</dc:creator>
      <pubDate>Wed, 14 Aug 2024 17:43:02 +0000</pubDate>
      <link>https://forem.com/thatcloudexpert/implement-a-fully-managed-shared-file-storage-for-red-hat-openshift-service-on-aws-rosa-with-amazon-fsx-for-netapp-ontap-3io1</link>
      <guid>https://forem.com/thatcloudexpert/implement-a-fully-managed-shared-file-storage-for-red-hat-openshift-service-on-aws-rosa-with-amazon-fsx-for-netapp-ontap-3io1</guid>
      <description>&lt;p&gt;Kubernetes is a popular choice among many developers for application deployments, and many of these deployments can benefit from a shared storage layer with greater persistency. &lt;a href="https://aws.amazon.com/rosa/" rel="noopener noreferrer"&gt;Red Hat OpenShift Service on AWS (ROSA)&lt;/a&gt; is a managed OpenShift integration on AWS developed by Red Hat and jointly supported by AWS and Red Hat.&lt;/p&gt;

&lt;p&gt;ROSA clusters typically store data on locally attached &lt;a href="https://aws.amazon.com/ebs/" rel="noopener noreferrer"&gt;Amazon Elastic Block Store (EBS) volumes&lt;/a&gt;. Some customers need the underlying data to be persistent and shared across multiple containers, including containers deployed across multiple Availability Zones (AZs). These customers are looking for a storage solution that scales automatically and provides a more consistent interface to run workloads across on-prem and cloud environments.&lt;/p&gt;

&lt;p&gt;ROSA offers an integration with &lt;a href="https://aws.amazon.com/fsx/netapp-ontap/" rel="noopener noreferrer"&gt;Amazon FSx for NetApp ONTAP&lt;/a&gt; NAS – a scalable, fully managed shared storage service built on NetApp's ONTAP file system. With FSx for ONTAP, customers have access to popular ONTAP features like snapshots, FlexClones, cross-region replication with SnapMirror, and a highly available file server with seamless failover.&lt;/p&gt;

&lt;p&gt;FSx for ONTAP NAS is integrated with the NetApp &lt;a href="https://docs.netapp.com/us-en/trident/trident-use/trident-fsx.html" rel="noopener noreferrer"&gt;Trident&lt;/a&gt; driver, a dynamic &lt;a href="https://bluexp.netapp.com/blog/cvo-blg-container-storage-interface-the-foundation-of-k8s-storage" rel="noopener noreferrer"&gt;Container Storage Interface (CSI)&lt;/a&gt; to handle Kubernetes Persistent Volume Claims (PVCs) on storage disks. The Trident CSI driver manages on-demand provisioning of storage volumes across different deployment environments and makes it easier to scale and protect data for your applications.&lt;/p&gt;

&lt;p&gt;In this blog, I demonstrate the use of FSx for ONTAP as a persistent storage layer for ROSA applications. I'll walk through a step-by-step installation of the NetApp Trident CSI driver on a ROSA cluster, provision an FSx for ONTAP NAS file system, deploy a sample stateful application, and demonstrate pod scaling across multi-AZ nodes using dynamic persistent volumes. Finally, I'll cover backup and restore for your application. With this solution, you can set up a shared storage solution that scales across AZ and makes it easier to scale, protect and restore your data using the Trident CSI driver.&lt;/p&gt;

&lt;p&gt;Prerequisites&lt;br&gt;
You need the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://signin.aws.amazon.com/signin?redirect_uri=https://portal.aws.amazon.com/billing/signup/resume&amp;amp;client_id=signup" rel="noopener noreferrer"&gt;AWS account&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/auth?client_id=cloud-services&amp;amp;redirect_uri=https%3A%2F%2Fconsole.redhat.com%2F&amp;amp;response_type=code&amp;amp;scope=openid&amp;amp;nonce=bb6a8731-304e-4be3-8cd2-d6e0d104a049&amp;amp;state=24d5885c582a419aa2a4e1e7c1ffd90f&amp;amp;response_mode=fragment" rel="noopener noreferrer"&gt;A Red Hat account&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;IAM user with &lt;a href="https://www.rosaworkshop.io/rosa/1-account_setup/" rel="noopener noreferrer"&gt;appropriate permissions&lt;/a&gt; to create and access ROSA cluster&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://console.redhat.com/openshift/downloads" rel="noopener noreferrer"&gt;ROSA CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/auth?client_id=cloud-services&amp;amp;redirect_uri=https%3A%2F%2Fconsole.redhat.com%2Fopenshift%2Fdownloads&amp;amp;response_type=code&amp;amp;scope=openid+rhfull&amp;amp;nonce=8c518523-59c5-4eb4-9473-0203d2339fab&amp;amp;state=52582f8cd36349c093cac6bee1a765ff&amp;amp;response_mode=fragment" rel="noopener noreferrer"&gt;OpenShift command-line interface (oc)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/helm.html" rel="noopener noreferrer"&gt;Helm 3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;A ROSA cluster&lt;/li&gt;
&lt;li&gt;Access to Red Hat OpenShift web console&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The diagram below shows the ROSA cluster deployed in multiple availability zones (AZs). The ROSA cluster's master nodes, infrastructure nodes and worker nodes run in a private subnet of a customer's Virtual Private Cloud (VPC). You'll create an FSx for ONTAP NAS file system within the same VPC and install the Trident driver in the ROSA cluster, allowing all the subnets of this VPC to connect to the file system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpye9h26gmoumh8skhtgh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpye9h26gmoumh8skhtgh.jpg" alt="Image description" width="800" height="529"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 1 – ROSA integration with Amazon FSx for NetApp ONTAP NAS&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create a ROSA cluster and clone the GitHub repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, create a &lt;a href="https://github.com/aws-samples/rosa-fsx-netapp-ontap" rel="noopener noreferrer"&gt;ROSA cluster&lt;/a&gt;. Then use Git to clone the &lt;a href="https://github.com/aws-samples/rosa-fsx-netapp-ontap" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;. If you don't have Git, install it with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dnf install git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Clone the Git repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/aws-samples/rosa-fsx-netapp-ontap.git

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Provision FSx for ONTAP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a multi-AZ FSx for ONTAP file system in the same VPC as your ROSA cluster.&lt;/p&gt;

&lt;p&gt;If you want to provision the file system with a different storage capacity and throughput, you can override the default values by setting &lt;code&gt;StorageCapacity&lt;/code&gt; and &lt;code&gt;ThroughputCapacity&lt;/code&gt; parameters in the CFN template.&lt;/p&gt;

&lt;p&gt;The value for &lt;code&gt;FSxAllowedCIDR&lt;/code&gt; is the allowed Classless Inter-Domain Routing (CIDR) range for the FSx for ONTAP security groups ingress rules to control access. You can use &lt;code&gt;0.0.0.0/0&lt;/code&gt; or any appropriate CIDR to allow all traffic to access the specific ports of FSx for ONTAP.&lt;/p&gt;

&lt;p&gt;Also, take note of the VPC ID, the two subnet IDs corresponding to the subnets you want your file system to be in, and all route table IDs associated with the ROSA VPC subnets. Enter those values in the command below.&lt;/p&gt;

&lt;p&gt;Run this command in a terminal to create the FSx for ONTAP file system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd rosa-fsx-netapp-ontap/fsx

aws cloudformation create-stack \
  --stack-name ROSA-FSXONTAP \
  --template-body file://./FSxONTAP.yaml \
  --region &amp;lt;region-name&amp;gt; \
  --parameters \
  ParameterKey=Subnet1ID,ParameterValue=[subnet1_ID] \
  ParameterKey=Subnet2ID,ParameterValue=[subnet2_ID] \
  ParameterKey=myVpc,ParameterValue=[VPC_ID] \
  ParameterKey=FSxONTAPRouteTable,ParameterValue=[routetable1_ID,routetable2_ID] \
  ParameterKey=FileSystemName,ParameterValue=ROSA-myFSxONTAP \
  ParameterKey=ThroughputCapacity,ParameterValue=256 \
  ParameterKey=FSxAllowedCIDR,ParameterValue=[your_allowed_CIDR] \
  ParameterKey=FsxAdminPassword,ParameterValue=[Define password] \
  ParameterKey=SvmAdminPassword,ParameterValue=[Define password] \
  --capabilities CAPABILITY_NAMED_IAM 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that your file system and storage virtual machine (SVM) have been created using the Amazon FSx console, shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4bt5m58nvl0idi1w04w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4bt5m58nvl0idi1w04w.png" alt="Image description" width="800" height="285"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 2 – Amazon FSx Console&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Install and configure the Trident CSI driver for the ROSA cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use the following Helm command to install the Trident CSI driver in the "trident" namespace on the OpenShift cluster.&lt;/p&gt;

&lt;p&gt;First, add the Astra Trident Helm repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add netapp-trident https://netapp.github.io/trident-helm-chart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, use &lt;code&gt;helm install&lt;/code&gt; and specify a name for your deployment as in the following example, where &lt;code&gt;23.01.1&lt;/code&gt; is the version of Astra Trident you are installing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install &amp;lt;name&amp;gt; netapp-trident/trident-operator --version 23.01.1 --create-namespace --namespace trident
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the Trident driver installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm status trident -n trident
NAME: trident
LAST DEPLOYED: Fri Dec 23 23:17:26 2022
NAMESPACE: trident
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing trident-operator, which deploys and manages NetApp's Trident CSI storage provisioner for Kubernetes.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, verify the installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc get pods -n trident
NAME                                 READY   STATUS    RESTARTS   AGE
trident-controller-cdb6ccbc5-hfp42   6/6     Running   0          85s
trident-node-linux-7gtbr             2/2     Running   0          84s
trident-node-linux-7tjdj             2/2     Running   0          84s
trident-node-linux-gpnb9             2/2     Running   0          85s
trident-node-linux-hwj67             2/2     Running   0          85s
trident-node-linux-kq9k2             2/2     Running   0          84s
trident-node-linux-kxsct             2/2     Running   0          84s
trident-node-linux-n86rc             2/2     Running   0          84s
trident-node-linux-p2j8g             2/2     Running   0          85s
trident-node-linux-t7vpv             2/2     Running   0          84s
trident-operator-74977bc66d-xxh8n    1/1     Running   0          110s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Create a secret for the SVM username and password in the ROSA cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a new file with the SVM username and admin password and save it as &lt;code&gt;svm_secret.yaml&lt;/code&gt;. A sample &lt;code&gt;svm_secret.yaml&lt;/code&gt; file is included in the fsx folder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;svm_secret.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: backend-fsx-ontap-nas-secret
  namespace: trident
type: Opaque
stringData:
  username: vsadmin
  password: &amp;lt;SvmAdminPassword defined in Step 2&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The SVM username and its admin password were created in Step 2. You can retrieve them from AWS Secrets Manager:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxaqxlchv0kipa0bno1ci.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxaqxlchv0kipa0bno1ci.jpg" alt="Image description" width="800" height="431"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 3 – AWS Secrets Manager Console&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Add the secrets to the ROSA cluster with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc apply -f svm_secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify that the secrets have been added to the ROSA cluster, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc get secrets -n trident |grep backend-fsx-ontap-nas
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You see the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;backend-fsx-ontap-nas-secret Opaque 2 16d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Configure the Trident CSI backend to FSx for ONTAP NAS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Trident backend configuration tells Trident how to communicate with the storage system (in this case, FSx for ONTAP). You'll use the &lt;code&gt;ontap-nas&lt;/code&gt; driver to provision storage volumes.&lt;/p&gt;

&lt;p&gt;To get started, move into the &lt;strong&gt;fsx&lt;/strong&gt; directory of your cloned Git repository. Open the file &lt;code&gt;backend-ontap-nas.yaml&lt;/code&gt;. Replace the &lt;strong&gt;managementLIF&lt;/strong&gt; and &lt;strong&gt;dataLIF&lt;/strong&gt; values in that file with the &lt;strong&gt;Management DNS name&lt;/strong&gt; and &lt;strong&gt;NFS DNS name&lt;/strong&gt; of the Amazon FSx SVM, and &lt;strong&gt;svm&lt;/strong&gt; with the &lt;strong&gt;SVM name&lt;/strong&gt;, as shown in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: ManagementLIF and DataLIF can be found in the Amazon FSx Console under "&lt;strong&gt;Storage virtual machines&lt;/strong&gt;"  as shown below (highlighted as "Management DNS name" and "NFS DNS name").&lt;/p&gt;
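After substitution, the backend file has roughly the following shape. This is an illustrative sketch based on the Trident `TridentBackendConfig` format, not the repository's exact file; the three placeholder values must be replaced with your SVM's details from the Amazon FSx console:

```yaml
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: backend-fsx-ontap-nas
  namespace: trident
spec:
  version: 1
  backendName: fsx-ontap
  storageDriverName: ontap-nas            # NFS file volumes on FSx for ONTAP
  managementLIF: <Management DNS name>    # from the SVM page in the FSx console
  dataLIF: <NFS DNS name>                 # from the SVM page in the FSx console
  svm: <SVM name>
  credentials:
    name: backend-fsx-ontap-nas-secret    # the secret created in Step 4
```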

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs28bdfodl0sc5dpkft0x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs28bdfodl0sc5dpkft0x.png" alt="Image description" width="800" height="443"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 4 – Amazon FSx console – SVM page&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now execute the following commands in the terminal to configure the Trident backend in the ROSA cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd fsx 
oc apply -f backend-ontap-nas.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following command to verify backend configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc get tbc -n trident
NAME           BACKEND NAME   BACKEND UUID   PHASE   STATUS
backend-fsx…   fsx-ontap      586106c8…      Bound   Success
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that Trident is configured, you can create a storage class that uses the backend you've created. A storage class is a resource object that describes and classifies the types of storage that can be requested from the Kubernetes cluster. Review the file &lt;code&gt;storage-class-csi-nas.yaml&lt;/code&gt; in the &lt;strong&gt;fsx&lt;/strong&gt; folder.&lt;/p&gt;
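A minimal storage class for this backend looks like the following sketch (illustrative; check the repository file for the authoritative version):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: trident-csi
provisioner: csi.trident.netapp.io   # the Trident CSI driver
parameters:
  backendType: ontap-nas             # must match the backend's storageDriverName
allowVolumeExpansion: true
```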

&lt;p&gt;&lt;strong&gt;6. Create storage class in ROSA cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, create a storage class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc apply -f storage-class-csi-nas.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the status of the trident-csi storage class creation by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc get sc
NAME          PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE…
gp2           kubernetes.io/a…   Delete          WaitForFirstConsumer…
gp2-csi       ebs.csi.aws.com    Delete          WaitForFirstConsumer…
gp3           ebs.csi.aws.com    Delete          WaitForFirstConsumer…
gp3-csi       ebs.csi.aws.com    Delete          WaitForFirstConsumer…
trident-csi   csi.trident…       Retain          Immediate…
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This completes the installation of the Trident CSI driver and its connectivity to the FSx for ONTAP file system. Now you can deploy a sample MySQL stateful application on ROSA using file volumes on FSx for ONTAP.&lt;/p&gt;

&lt;p&gt;If you want to verify that applications can create PVs using the Trident operator, create a PVC using the &lt;code&gt;pvc-trident.yaml&lt;/code&gt; file provided in the &lt;strong&gt;fsx&lt;/strong&gt; folder.&lt;/p&gt;
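A PVC of roughly this shape exercises the new storage class. The name below is illustrative, not necessarily the repository's exact `pvc-trident.yaml`; note that `ontap-nas` volumes can be requested as ReadWriteMany, which EBS-backed classes cannot provide:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: basic-pvc                 # illustrative name
spec:
  accessModes:
    - ReadWriteMany               # shared access across pods and AZs
  resources:
    requests:
      storage: 10Gi
  storageClassName: trident-csi
```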

&lt;p&gt;&lt;strong&gt;Deploy a sample MySQL stateful application&lt;/strong&gt;&lt;br&gt;
In this section, you deploy a highly available MySQL application onto the ROSA cluster and have its PersistentVolume provisioned by Trident. (The sample manifest below uses a Deployment; a &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noopener noreferrer"&gt;StatefulSet&lt;/a&gt; is the typical choice for replicated stateful workloads.)&lt;/p&gt;

&lt;p&gt;A Kubernetes StatefulSet ensures the original PersistentVolume (PV) is remounted to the same pod identity when a pod is rescheduled, retaining data integrity and consistency. For more information about MySQL replication configuration, refer to the &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/replication.html" rel="noopener noreferrer"&gt;MySQL official documentation&lt;/a&gt;.&lt;/p&gt;
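For reference, a StatefulSet declares its storage through `volumeClaimTemplates` rather than a pre-created PVC; an illustrative skeleton (names assumed, not part of the repository) would look like this:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: mysql
spec:
  serviceName: mysql          # headless service that provides stable per-pod DNS
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:       # Trident provisions one PV per pod replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: trident-csi
        resources:
          requests:
            storage: 50Gi
```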

&lt;ol&gt;
&lt;li&gt;Create a MySQL secret&lt;br&gt;
Before you begin the MySQL application deployment, store the application's sensitive information, such as the username and password, in a Kubernetes &lt;strong&gt;Secret&lt;/strong&gt;. Save the following manifest to a file named &lt;code&gt;mysql-secrets.yaml&lt;/code&gt; and execute the command below to create the secret.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Create the &lt;strong&gt;mysql&lt;/strong&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc create namespace mysql

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a MySQL secret file called &lt;code&gt;mysql-secrets.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
 name: mysql-secret
type: Opaque
stringData:
 password: &amp;lt;SQL Password&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the YAML to your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc apply -f mysql-secrets.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the secret was created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; oc get secrets -n mysql | grep mysql
mysql-password opaque 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Create application PVC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, you must create the PVC for your MySQL application. Save the following manifest in a file named &lt;code&gt;mysql-pvc.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
 name: mysql-volume
spec:
 accessModes:
   - ReadWriteOnce
 resources:
   requests:
     storage: 50Gi
 storageClassName: trident-csi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the YAML to your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc apply -f mysql-pvc.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm that the PVC exists:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc get pvc
NAME           STATUS   VOLUME                                    CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-volume   Bound    pvc-26319553-f29b-4616-b2bb-c700c8416a6b   50Gi       RWO            trident-csi   7s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Run the MySQL application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To deploy your MySQL application on your ROSA cluster, save the following manifest to a file named &lt;code&gt;mysql-deployment.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
 name: mysql-fsx
 namespace: mysql
spec:
 replicas: 1
 selector:
   matchLabels:
     app: mysql-fsx
 template:
   metadata:
     labels:
       app: mysql-fsx
   spec:
     containers:
     - image: mysql:5.7
       name: mysql
       ports:
       - containerPort: 3306
       env: 
         - name: MYSQL_ROOT_PASSWORD
           valueFrom:
             secretKeyRef:
               name: mysql-secret
               key: password
       volumeMounts:
       - mountPath: /var/lib/mysql
         name: mysqlvol
     volumes:
       - name: mysqlvol
         persistentVolumeClaim:
           claimName: mysql-volume 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the YAML to your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc apply -f mysql-deployment.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Verify that the application has been deployed:
$ oc get pods -n mysql
NAME                         READY   STATUS    RESTARTS   AGE
mysql-fsx-6c4d9f6fcb-mzm82   1/1     Running       0      15d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Create a service for the MySQL application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A Kubernetes service defines a logical set of pods and a policy for accessing them. A &lt;strong&gt;StatefulSet&lt;/strong&gt; currently requires a headless service to control the domain of its pods, reaching each pod directly through stable DNS entries. By specifying &lt;code&gt;None&lt;/code&gt; for the &lt;code&gt;clusterIP&lt;/code&gt;, you create a headless service.&lt;br&gt;
&lt;/p&gt;
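
&lt;p&gt;The content of &lt;code&gt;mysql-service.yaml&lt;/code&gt; isn't shown in this guide; assuming the deployment above, a headless service for it could look like this sketch (note &lt;code&gt;clusterIP: None&lt;/code&gt;, and the service name &lt;code&gt;mysql&lt;/code&gt; in the &lt;code&gt;mysql&lt;/code&gt; namespace, which matches the DNS name used later):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
 name: mysql
 namespace: mysql
spec:
 clusterIP: None   # headless service
 selector:
   app: mysql-fsx
 ports:
   - port: 3306
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;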

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc apply -f mysql-service.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc get svc -n mysql
NAME    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)
mysql   ClusterIP   None         &amp;lt;none&amp;gt;        3306/TCP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Create a MySQL client&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You need a MySQL client to access the MySQL application you just deployed. Review the content of &lt;code&gt;mysql-client.yaml&lt;/code&gt;, and then deploy it:&lt;br&gt;
&lt;/p&gt;
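
&lt;p&gt;The content of &lt;code&gt;mysql-client.yaml&lt;/code&gt; isn't reproduced in this guide; since the steps below install the client tool with &lt;code&gt;apk&lt;/code&gt;, it is presumably an Alpine-based pod along these lines (image tag and sleep duration are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
 name: mysql-client
spec:
 containers:
 - name: mysql-client
   image: alpine:3.18
   # keep the pod running so you can exec into it
   command: ["sleep", "86400"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;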

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc apply -f mysql-client.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the pod status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc get pods
NAME           READY   STATUS
mysql-client   1/1     Running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Log in to the MySQL client pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc exec --stdin --tty mysql-client -- sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the MySQL client tool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ apk add mysql-client

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within the &lt;strong&gt;mysql-client pod&lt;/strong&gt;, connect to the MySQL server using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mysql -u root -p -h mysql.mysql.svc.cluster.local

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
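
&lt;p&gt;If you don't have the password at hand, you can read it back from the secret (assuming its data key is &lt;code&gt;password&lt;/code&gt;, as referenced by the deployment's &lt;code&gt;secretKeyRef&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc get secret mysql-secret -n mysql -o jsonpath='{.data.password}' | base64 -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;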



&lt;p&gt;Enter the password stored in the &lt;code&gt;mysql-secrets.yaml&lt;/code&gt;. Once connected, create a database in the MySQL database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MySQL [(none)]&amp;gt; CREATE DATABASE erp;
MySQL [(none)]&amp;gt; CREATE TABLE erp.Persons (ID int, FirstName varchar(255), Lastname varchar(255));
MySQL [(none)]&amp;gt; INSERT INTO erp.Persons (ID, FirstName, Lastname) VALUES (1234, "John", "Doe");
MySQL [(none)]&amp;gt; commit;

MySQL [(none)]&amp;gt; select * from erp.Persons;
+------+-----------+----------+
| ID   | FirstName | Lastname |
+------+-----------+----------+
| 1234 | John      | Doe      |
+------+-----------+----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Backup and Restore&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now create a Kubernetes VolumeSnapshotClass so you can snapshot the persistent volume claim (PVC) used for the MySQL deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create a snapshot&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Save the manifest in a file called &lt;code&gt;volume-snapshot-class.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
 name: fsx-snapclass
driver: csi.trident.netapp.io
deletionPolicy: Delete

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc create -f volume-snapshot-class.yaml
volumesnapshotclass.snapshot.storage.k8s.io/fsx-snapclass created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a snapshot of the existing PVC by creating a &lt;code&gt;VolumeSnapshot&lt;/code&gt; to take a point-in-time copy of your MySQL data. This creates an FSx snapshot that takes up almost no space in the filesystem backend. Save this manifest in a file called &lt;code&gt;volume-snapshot.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
 name: mysql-volume-snap-01
spec:
 volumeSnapshotClassName: fsx-snapclass
 source:
   persistentVolumeClaimName: mysql-volume
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc create -f volume-snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/mysql-volume-snap-01 created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And confirm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc get volumesnapshot
NAME                   READYTOUSE   SOURCEPVC      SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                    CREATIONTIME   AGE
mysql-volume-snap-01   true         mysql-volume                           50Gi          fsx-snapclass   snapcontent-bce1f186-7786-4f4a-9f3a-e8bf90b7c126   13s            14s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Delete the database erp&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now you can delete the database &lt;strong&gt;erp&lt;/strong&gt;. Log in to the container console using a new terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc exec --stdin --tty mysql-client -n mysql -- sh
mysql -u root -p -h mysql.mysql.svc.cluster.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Delete the database &lt;strong&gt;erp&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MySQL [(none)]&amp;gt;  DROP DATABASE erp;
Query OK, 1 row affected
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Restore the snapshot&lt;/strong&gt;&lt;br&gt;
To restore the volume to its previous state, you must create a new PVC based on the data in the snapshot you took. To do this, save the following manifest in a file named &lt;code&gt;pvc-clone.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: mysql-volume-clone
spec:
 accessModes:
   - ReadWriteOnce
 storageClassName: trident-csi
 resources:
   requests:
     storage: 50Gi
 dataSource:
   name: mysql-volume-snap-01
   kind: VolumeSnapshot
   apiGroup: snapshot.storage.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc create -f pvc-clone.yaml
persistentvolumeclaim/mysql-volume-clone created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And confirm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
mysql-volume         Bound    pvc-a3f98de0-06fe-4036-9a22-0d6bd697781a   50Gi       RWO            trident-csi   40m
mysql-volume-clone   Bound    pvc-9784d513-8d45-4996-abe3-7372cd879151   50Gi       RWO            trident-csi   36s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Redeploy the database&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now you can redeploy your MySQL application with the restored volume. Save the following manifest to a file named &lt;code&gt;mysql-deployment-restore.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
 name: mysql-fsx
spec:
 replicas: 1
 selector:
   matchLabels:
     app: mysql-fsx
 template:
   metadata:
      labels:
       app: mysql-fsx
   spec:
     containers:
      - image: mysql:5.7
        name: mysql
       ports:
       - containerPort: 3306
       env: 
         - name: MYSQL_ROOT_PASSWORD
           valueFrom:
             secretKeyRef:
               name: mysql-secret
               key: password
       volumeMounts:
       - mountPath: /var/lib/mysql
         name: mysqlvol
     volumes:
       - name: mysqlvol
         persistentVolumeClaim:
           claimName: mysql-volume-clone 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the YAML to your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ oc apply -f mysql-deployment-restore.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the application has been deployed with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc get pods -n mysql
NAME                         READY   STATUS    RESTARTS   AGE
mysql-fsx-6c4d9f6fcb-mzm82   1/1     Running   0          15d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To validate that the database has been restored as expected, go back to the container console and show the existing databases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MySQL [(none)]&amp;gt; SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| erp                |
+--------------------+
MySQL [(none)]&amp;gt; select * from erp.Persons;
+------+-----------+----------+
| ID   | FirstName | Lastname |
+------+-----------+----------+
| 1234 | John      | Doe      |
+------+-----------+----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this article, you've seen how quickly and easily a stateful application can be restored from a snapshot, nearly instantly. Combining rich NAS capabilities with sub-millisecond latencies and multi-AZ availability, FSx for ONTAP is a great storage option for your containerized applications running in ROSA on AWS.&lt;/p&gt;

&lt;p&gt;For more information on this solution, refer to the NetApp Trident documentation. If you would like to improve upon the solution provided in this post, follow the instructions in the GitHub repository.&lt;/p&gt;

</description>
      <category>containers</category>
      <category>managedcloud</category>
      <category>storage</category>
      <category>cloudservices</category>
    </item>
  </channel>
</rss>
