<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Vishnu Hari Dadhich</title>
    <description>The latest articles on Forem by Vishnu Hari Dadhich (@vishnuhd).</description>
    <link>https://forem.com/vishnuhd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1462935%2F343b974b-a3af-4ff5-9f8b-099787f3f8bc.png</url>
      <title>Forem: Vishnu Hari Dadhich</title>
      <link>https://forem.com/vishnuhd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/vishnuhd"/>
    <language>en</language>
    <item>
      <title>Multi-region YugabyteDB deployment on AWS EKS with Istio</title>
      <dc:creator>Vishnu Hari Dadhich</dc:creator>
      <pubDate>Thu, 02 May 2024 06:34:52 +0000</pubDate>
      <link>https://forem.com/vishnuhd/multi-region-yugabytedb-deployment-on-aws-eks-with-istio-2ng5</link>
      <guid>https://forem.com/vishnuhd/multi-region-yugabytedb-deployment-on-aws-eks-with-istio-2ng5</guid>
      <description>&lt;p&gt;In today’s distributed cloud landscape, deploying applications across multiple regions and clusters is crucial for scalability, reliability, and performance. This blog post will guide you through setting up a multi-region, multi-cluster YugabyteDB deployment on AWS EKS with Istio service mesh.&lt;/p&gt;

&lt;h2&gt;
  
  
  WHY YUGABYTEDB?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.yugabyte.com/"&gt;YugabyteDB&lt;/a&gt; is a transactional database that brings together four must-have needs of cloud native apps – namely SQL as a flexible query language, low-latency performance, continuous availability, and globally-distributed scalability. Other databases do not serve all 4 of these needs simultaneously.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monolithic SQL databases offer SQL and low-latency reads, but can neither tolerate failures nor scale writes across multiple nodes, zones, regions, and clouds.&lt;/li&gt;
&lt;li&gt;Distributed NoSQL databases offer read performance, high availability, and write scalability, but give up on SQL features such as relational data modelling and ACID transactions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  WHY AWS EKS AND ISTIO?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/eks/"&gt;AWS EKS&lt;/a&gt; provides a managed Kubernetes service, simplifying cluster management and deployment. &lt;a href="https://istio.io/"&gt;Istio&lt;/a&gt;, an open-source service mesh, enables traffic management, security, and observability across microservices.&lt;/p&gt;

&lt;p&gt;Combining Yugabyte with AWS EKS and Istio creates a robust, scalable, and secure cloud-native architecture that spans across multiple regions.&lt;/p&gt;

&lt;h2&gt;
  
  
  DEPLOYMENT OVERVIEW
&lt;/h2&gt;

&lt;p&gt;Our deployment consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Three AWS regions (Singapore, Mumbai and Hyderabad).&lt;/li&gt;
&lt;li&gt;One EKS cluster in each region.&lt;/li&gt;
&lt;li&gt;One YugabyteDB master and one YugabyteDB tserver are deployed in each EKS cluster.&lt;/li&gt;
&lt;li&gt;Istio is deployed in each cluster with an east-west gateway to provide a multi-cluster service mesh.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk54aoeavtasdf0u4myag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk54aoeavtasdf0u4myag.png" alt="Image description" width="800" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  DEPLOYMENT STEPS
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Pre-requisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AWS account with at least three regions enabled&lt;/li&gt;
&lt;li&gt;AWS user with access to create VPCs and EKS clusters using eksctl&lt;/li&gt;
&lt;li&gt;eksctl, aws cli, kubectl and git installed on your local system&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Clone the following repo to follow along with this blog:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/vishnuhd/yugabyte-multiregion-aws-eks-istio.git
cd yugabyte-multiregion-aws-eks-istio
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploy AWS EKS clusters
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Deploy EKS clusters in three different regions (namely Singapore, Mumbai and Hyderabad) using eksctl:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region in mumbai singapore hyderabad; do
    echo -e "Creating EKS cluster in ${region}...\n"
    eksctl create cluster -f ${region}/cluster-config.yaml
    echo -e "-------------\n"
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
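&lt;p&gt;For reference, each &lt;code&gt;cluster-config.yaml&lt;/code&gt; is an eksctl &lt;code&gt;ClusterConfig&lt;/code&gt;; the real files are in the repo. A minimal sketch for the Mumbai cluster (the node group name and instance size below are illustrative assumptions):&lt;/p&gt;

```yaml
# Hypothetical sketch of mumbai/cluster-config.yaml -- see the repo for the actual file.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: yb-mumbai        # cluster names follow the yb-<region> pattern
  region: ap-south-1
managedNodeGroups:
  - name: yb-nodes       # assumed node group name
    instanceType: m5.xlarge
    desiredCapacity: 3
```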



&lt;ul&gt;
&lt;li&gt;Rename the kube contexts for simplicity in this demo:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config rename-context 'yb-mumbai.ap-south-1.eksctl.io' mumbai
kubectl config rename-context 'yb-singapore.ap-southeast-1.eksctl.io' singapore
kubectl config rename-context 'yb-hyderabad.ap-south-2.eksctl.io' hyderabad
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; By default, EKS clusters do not have the EBS permissions required for dynamic PVC provisioning; enable the EBS CSI driver add-on (and its IAM permissions) in each cluster before deploying workloads that request persistent volumes.&lt;/p&gt;
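&lt;p&gt;As a sketch (the add-on name is real, but the absence of a custom IAM role here is a simplifying assumption), the EBS CSI driver add-on can be enabled per cluster with eksctl:&lt;/p&gt;

```shell
# Sketch: enable the IAM OIDC provider and the aws-ebs-csi-driver add-on
# for each cluster; --region must match the cluster's AWS region.
for pair in ap-south-1:yb-mumbai ap-southeast-1:yb-singapore ap-south-2:yb-hyderabad; do
  aws_region=${pair%%:*}    # text before the colon
  cluster=${pair##*:}       # text after the colon
  eksctl utils associate-iam-oidc-provider --cluster ${cluster} --region ${aws_region} --approve
  eksctl create addon --name aws-ebs-csi-driver --cluster ${cluster} --region ${aws_region}
done
```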

&lt;h3&gt;
  
  
  Setup Istio
&lt;/h3&gt;

&lt;p&gt;When configuring a production deployment of Istio, key considerations include whether the mesh will be in single or multiple clusters, Istio control plane setup for high availability, and the choice between a single multicluster service mesh or federated multi-mesh deployment. These factors represent independent dimensions of configuration for Istio deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://istio.io/latest/docs/ops/deployment/deployment-models/"&gt;This&lt;/a&gt; guide describes the various options and considerations when configuring your Istio deployment. For this demo, we are gonna &lt;a href="https://istio.io/latest/docs/setup/install/multicluster/multi-primary_multi-network/"&gt;Install Multi-Primary on different networks&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Download Istio:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -L https://istio.io/downloadIstio | sh -
cd istio-1.21.0
export PATH=$PWD/bin:$PATH
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Plug in CA Certificates for Istio
&lt;/h3&gt;

&lt;p&gt;In a multi-cluster environment, we want to set up a single root CA and use it to issue intermediate certificates to the Istio CA running in each cluster. This gives all clusters a common root of trust, so workloads can authenticate one another over mTLS across cluster boundaries.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a certs directory:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p istio-1.21.0/certs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Generate the root CA certificate and key:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd istio-1.21.0/certs
make -f ../tools/certs/Makefile.selfsigned.mk root-ca
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;For each cluster, generate an intermediate certificate and key for the Istio CA:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd istio-1.21.0/certs

{
  for region in mumbai singapore hyderabad; do
    echo -e "Generating certs for cluster - ${region}...\n"
    make -f ../tools/certs/Makefile.selfsigned.mk yb-${region}-cacerts
    echo -e "-------------\n"
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In each cluster, create a secret called &lt;code&gt;cacerts&lt;/code&gt; that includes all the input files: &lt;code&gt;ca-cert.pem&lt;/code&gt;, &lt;code&gt;ca-key.pem&lt;/code&gt;, &lt;code&gt;root-cert.pem&lt;/code&gt; and &lt;code&gt;cert-chain.pem&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region in mumbai singapore hyderabad; do
    echo -e "Creating namespace and secret for cluster - ${region}...\n"

    kubectl --context ${region} create namespace istio-system
    kubectl --context ${region} create secret generic cacerts -n istio-system \
          --from-file=istio-1.21.0/certs/yb-${region}/ca-cert.pem \
          --from-file=istio-1.21.0/certs/yb-${region}/ca-key.pem \
          --from-file=istio-1.21.0/certs/yb-${region}/root-cert.pem \
          --from-file=istio-1.21.0/certs/yb-${region}/cert-chain.pem

    echo -e "-------------\n"
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this step completed, we are now prepared to install Istio on every cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install Istio
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Install Istio using istioctl:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region in mumbai singapore hyderabad; do
    echo -e "Installing istio for cluster - ${region}...\n"

    istioctl install --context ${region} -f ./${region}/istio.yaml

    echo -e "-------------\n"
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
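&lt;p&gt;Each &lt;code&gt;${region}/istio.yaml&lt;/code&gt; follows the standard multi-primary, multi-network shape from the Istio docs; a sketch for Mumbai (the mesh, cluster and network names mirror the values used for the east-west gateway):&lt;/p&gt;

```yaml
# Hypothetical sketch of mumbai/istio.yaml -- the real files are in the repo.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: yb-mumbai
      network: network-mumbai
```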



&lt;ul&gt;
&lt;li&gt;Install a gateway in each cluster that is dedicated to &lt;a href="https://en.wikipedia.org/wiki/East-west_traffic"&gt;east-west&lt;/a&gt; traffic. By default, this gateway will be public on the Internet. Production systems may require additional access restrictions (e.g. via firewall rules) to prevent external attacks.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region in mumbai singapore hyderabad; do
    echo -e "Installing the east-west gateway for cluster - ${region}...\n"

    ./istio-1.21.0/samples/multicluster/gen-eastwest-gateway.sh \
        --mesh mesh1 --cluster yb-${region} --network network-${region} | \
        istioctl --context ${region} install -y -f -

    echo -e "-------------\n"
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
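&lt;p&gt;Before moving on, it is worth checking that each gateway has received an external address (a quick verification sketch; a pending EXTERNAL-IP means AWS is still provisioning the load balancer):&lt;/p&gt;

```shell
# The east-west gateway Service should get an external load balancer
# address in every cluster.
for region in mumbai singapore hyderabad; do
  kubectl --context ${region} get svc istio-eastwestgateway -n istio-system
done
```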



&lt;ul&gt;
&lt;li&gt;Since the clusters are on separate networks, we need to expose all services (&lt;code&gt;*.local&lt;/code&gt;) on the east-west gateway in all three clusters. While this gateway is public on the Internet, services behind it can only be accessed by services with a trusted mTLS certificate and workload ID, just as if they were on the same network.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region in mumbai singapore hyderabad; do
    echo -e "Exposing the services for cluster - ${region}...\n"

    kubectl --context ${region} apply -n istio-system -f \
        ./istio-1.21.0/samples/multicluster/expose-services.yaml

    echo -e "-------------\n"
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install remote secrets in each cluster that provide access to the other clusters’ Kube API servers.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region1 in mumbai singapore hyderabad; do
    for region2 in mumbai singapore hyderabad; do
      if [[ "${region1}" == "${region2}" ]]; then continue; fi
      echo -e "Create remote secret of ${region1} in ${region2}...\n"

      istioctl create-remote-secret \
        --context ${region1} \
        --name=yb-${region1} | \
        kubectl apply -f - --context ${region2}

      echo -e "-------------\n"
    done
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
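&lt;p&gt;To verify, each cluster should now hold a remote secret for each of the other two clusters (secrets created by istioctl are prefixed with &lt;code&gt;istio-remote-secret&lt;/code&gt;):&lt;/p&gt;

```shell
# Expect two istio-remote-secret-* entries per cluster, one per peer region.
for region in mumbai singapore hyderabad; do
  echo "Remote secrets in ${region}:"
  kubectl --context ${region} get secrets -n istio-system | grep istio-remote-secret
done
```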



&lt;h3&gt;
  
  
  Install YugabyteDB
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;To install YugabyteDB using helm charts, add the chart repo:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add yugabytedb https://charts.yugabyte.com
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a YugabyteDB namespace (&lt;code&gt;yb-demo&lt;/code&gt;) in each cluster and enable Istio injection:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region in mumbai singapore hyderabad; do
    echo -e "Creating namespace for cluster - ${region}...\n"

    kubectl --context ${region} create namespace yb-demo
    kubectl label --context ${region} namespace yb-demo istio-injection=enabled

    echo -e "-------------\n"
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install YugabyteDB:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region in mumbai singapore hyderabad; do
    echo -e "Installing YugabyteDB in cluster - ${region}...\n"

    helm upgrade --install ${region} yugabytedb/yugabyte \
        --version 2.19.3 \
        --namespace yb-demo \
        -f ${region}/overrides.yaml \
        --kube-context ${region} --wait

    echo -e "-------------\n"
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will install YugabyteDB in each EKS cluster with one master and one tserver, connected to each other through the Istio service mesh. At this point, it is important to understand each parameter set in the &lt;a href="https://github.com/vishnuhd/yugabyte-multiregion-aws-eks-istio/blob/main/singapore/overrides.yaml"&gt;overrides.yaml&lt;/a&gt; file. Each master and tserver pod needs to know all the master addresses in order to replicate data and elect leaders.&lt;/p&gt;
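&lt;p&gt;As an illustration of those parameters (key names follow the YugabyteDB Helm chart, but treat the values below as assumptions and consult the repo for the real file), the Singapore overrides look roughly like:&lt;/p&gt;

```yaml
# Hypothetical sketch of singapore/overrides.yaml -- see the repo for the real values.
isMultiAz: true
AZ: ap-southeast-1a
istioCompatibility:
  enabled: true            # needed when the pods sit behind Istio sidecars
masterAddresses: "mumbai-yugabyte-yb-master-0.yb-demo.svc.cluster.local,hyderabad-yugabyte-yb-master-0.yb-demo.svc.cluster.local,singapore-yugabyte-yb-master-0.yb-demo.svc.cluster.local"
replicas:
  master: 1
  tserver: 1
  totalMasters: 3          # masters across all three regions
gflags:
  master:
    placement_cloud: aws
    placement_region: ap-southeast-1
    placement_zone: ap-southeast-1a
  tserver:
    placement_cloud: aws
    placement_region: ap-southeast-1
    placement_zone: ap-southeast-1a
```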

&lt;ul&gt;
&lt;li&gt;In addition to the Istio setup, an extra step is required, which involves creating identical Kubernetes services in all clusters to enable DNS service discovery. More information can be found &lt;a href="https://istio.io/latest/docs/ops/deployment/deployment-models/#dns-with-multiple-clusters"&gt;here&lt;/a&gt;. Therefore, we need to replicate the yugabyte-master and yugabyte-tserver services present in the Mumbai region to both the Singapore and Hyderabad regions, and vice versa.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region1 in mumbai singapore hyderabad; do
    for region2 in mumbai singapore hyderabad; do
      if [[ "${region1}" == "${region2}" ]]; then continue; fi
      echo -e "Creating services of ${region2} in ${region1}...\n"

      kubectl --context ${region1} apply -f ${region1}/services-${region2}.yaml -n yb-demo

      echo -e "-------------\n"
    done
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check the YugabyteDB pods and services; all of them should be up and running:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region in mumbai singapore hyderabad; do
    echo -e "Checking YugabyteDB pods and svcs for cluster - ${region}...\n"

    kubectl --context ${region} get pods,svc -A

    echo -e "-------------\n"
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Finally, we need to configure the global data placement so that YugabyteDB distributes data correctly across regions:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl --context mumbai exec -n yb-demo mumbai-yugabyte-yb-master-0 -- bash \
-c "/home/yugabyte/master/bin/yb-admin --master_addresses mumbai-yugabyte-yb-master-0.yb-demo.svc.cluster.local,hyderabad-yugabyte-yb-master-0.yb-demo.svc.cluster.local,singapore-yugabyte-yb-master-0.yb-demo.svc.cluster.local modify_placement_info aws.ap-south-1.ap-south-1a,aws.ap-south-2.ap-south-2a,aws.ap-southeast-1.ap-southeast-1a 3"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
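&lt;p&gt;The long &lt;code&gt;--master_addresses&lt;/code&gt; list above follows a simple pattern and can be assembled with a small loop rather than typed by hand:&lt;/p&gt;

```shell
# Build the comma-separated list of master addresses used by yb-admin.
master_addresses=""
for region in mumbai hyderabad singapore; do
  master_addresses="${master_addresses}${region}-yugabyte-yb-master-0.yb-demo.svc.cluster.local,"
done
master_addresses=${master_addresses%,}   # strip the trailing comma
echo "${master_addresses}"
```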



&lt;p&gt;Voila, your YugabyteDB multi-regional setup is now complete!&lt;/p&gt;

&lt;h2&gt;
  
  
  Access the YugabyteDB UI
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Find the yb-master-ui service in the yb-demo namespace of any cluster and open its external address in a browser on port 7000:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcq5k6du2a13ov8i0qokk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcq5k6du2a13ov8i0qokk.png" alt="Image description" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see, the masters are spread across regions, with the Hyderabad master as the leader.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explore the tablet servers:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dniin0za79c9iwz469w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dniin0za79c9iwz469w.png" alt="Image description" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similarly, we can see the tablet servers distributed across multiple regions, each of them able to handle synchronous reads and writes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run a sample Yugabyte application in any of the clusters:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl run yb-sample-apps \
    -it --rm \
    --image yugabytedb/yb-sample-apps \
    --namespace yb-demo \
    --context singapore \
    --command -- sh

java -jar yb-sample-apps.jar java-client-sql \
    --workload SqlInserts \
    --nodes yb-tserver-common.yb-demo.svc.cluster.local:5433 \
    --num_threads_write 1 \
    --num_threads_read 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we are targeting the yb-tserver-common service for reads and writes, which routes each connection to a random tserver in any of the regions. This also helps load-balance the traffic across regions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can also see the tables created by this sample app:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjvq2q4b1z8pqxcbuuu7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjvq2q4b1z8pqxcbuuu7.png" alt="Image description" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  DISASTER RECOVERY
&lt;/h2&gt;

&lt;p&gt;Disasters can happen at any time. YugabyteDB guards against them through its Replication Factor (RF); configurations usually use an RF of 3. In this setup, a write to the leader requires an acknowledgement from one follower before being committed, as the leader and one follower together constitute the majority. In the event of a failure, replicas that still form a Raft majority continue to serve consistent reads and writes, while replicas separated from the consensus quorum cannot make progress. We will now see this feature of Yugabyte in action.&lt;/p&gt;
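&lt;p&gt;The majority arithmetic behind this is easy to check: a Raft quorum is RF/2 + 1 (integer division), so RF 3 commits with the leader plus one follower and tolerates one lost replica. A tiny sketch:&lt;/p&gt;

```shell
# Raft quorum size and fault tolerance for common replication factors.
for rf in 3 5 7; do
  quorum=$(( rf / 2 + 1 ))             # integer division: floor(rf/2) + 1
  echo "RF=${rf}: quorum=${quorum}, tolerated failures=$(( rf - quorum ))"
done
```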

&lt;ul&gt;
&lt;li&gt;Current leaders for master and tservers:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycv2kzitwrb3zx3jijwj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycv2kzitwrb3zx3jijwj.png" alt="Image description" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fghhd1jlktxwcwnkhfgw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fghhd1jlktxwcwnkhfgw6.png" alt="Image description" width="720" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Currently, the master pod in the Hyderabad region is the LEADER, while most of the transactions are handled by the tablet server in the Mumbai region.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let’s simulate the Hyderabad region going down:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl scale sts hyderabad-yugabyte-yb-master-0 --replicas 0 -n yb-demo --context hyderabad
kubectl scale sts hyderabad-yugabyte-yb-tserver-0 --replicas 0 -n yb-demo --context hyderabad
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As soon as we decrease the pod replicas to 0 for both master and tserver, we can see the errors in the Yugabyte master UI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgwiz5wu0pnhi2aftzsy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgwiz5wu0pnhi2aftzsy.png" alt="Image description" width="720" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tlrki89ovvb2mr6eh9x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tlrki89ovvb2mr6eh9x.png" alt="Image description" width="720" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we can see that the master from the Mumbai region has been elected as the new leader, and all transactions continue to operate even with one whole region down.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Yugabyte also keeps track of the under-replicated tablets, so that they can be re-replicated to the region as soon as it comes back online:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95zwq5jg9ygaezmr218h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95zwq5jg9ygaezmr218h.png" alt="Image description" width="720" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When the region comes online again, the data is replicated back to the Hyderabad region:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vxn3jjrhbwg6si6ot92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vxn3jjrhbwg6si6ot92.png" alt="Image description" width="720" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgge9hlllzsv94vkqs9j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgge9hlllzsv94vkqs9j.png" alt="Image description" width="720" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywcigexaok8uzcyo94j0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywcigexaok8uzcyo94j0.png" alt="Image description" width="720" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Awesome, now you have a multi-regional fault-tolerant YugabyteDB setup on AWS EKS clusters using Istio as a service mesh.&lt;/p&gt;

&lt;h2&gt;
  
  
  BONUS: SETUP KIALI FOR OBSERVABILITY
&lt;/h2&gt;

&lt;p&gt;The Istio download package ships with sample manifests for Kiali and Prometheus; let’s set them up to get a nice view of all the services in the mesh.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install Kiali and Prometheus:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region in mumbai singapore hyderabad; do
    echo -e "Installing Kiali and Prometheus for cluster - ${region}...\n"

    kubectl apply -f istio-1.21.0/samples/addons/prometheus.yaml --context ${region}
    kubectl apply -f istio-1.21.0/samples/addons/kiali.yaml --context ${region}

    echo -e "-------------\n"
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Open Kiali dashboard:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;istioctl dashboard kiali --context singapore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh05al0h097vqpocfxi7t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh05al0h097vqpocfxi7t.png" alt="Image description" width="720" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above Kiali graph shows the various clusters and services from the point of view of the EKS cluster in the Singapore region.&lt;/p&gt;

&lt;h2&gt;
  
  
  CLEANING UP THE RESOURCES
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Uninstall YugabyteDB:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region in mumbai singapore hyderabad; do
    echo -e "Un-installing YugabyteDB in cluster - ${region}...\n"

    helm uninstall ${region} --namespace yb-demo --kube-context ${region}
    kubectl delete pvc --namespace yb-demo \
      --selector component=yugabytedb,release=${region} \
      --context ${region}

    echo -e "-------------\n"
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Delete additional YB services:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region1 in mumbai singapore hyderabad; do
    for region2 in mumbai singapore hyderabad; do
      if [[ "${region1}" == "${region2}" ]]; then continue; fi
      echo -e "Deleting services of ${region2} in ${region1}...\n"

      kubectl --context ${region1} delete -f ${region1}/services-${region2}.yaml -n yb-demo

      echo -e "-------------\n"
    done
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Uninstall Kiali and Prometheus:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region in mumbai singapore hyderabad; do
    echo -e "Deleting Kiali and Prometheus for cluster - ${region}...\n"

    kubectl delete -f istio-1.21.0/samples/addons/prometheus.yaml --context ${region}
    kubectl delete -f istio-1.21.0/samples/addons/kiali.yaml --context ${region}

    echo -e "-------------\n"
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Uninstall Istio:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  for region in mumbai singapore hyderabad; do
    echo -e "Un-installing Istio in cluster - ${region}...\n"

    istioctl uninstall --purge -y --context ${region}

    echo -e "-------------\n"
  done
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Delete EKS clusters:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl delete cluster yb-mumbai --region ap-south-1
eksctl delete cluster yb-singapore --region ap-southeast-1
eksctl delete cluster yb-hyderabad --region ap-south-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  CONCLUSION
&lt;/h2&gt;

&lt;p&gt;A multi-region, multi-cluster YugabyteDB deployment on AWS EKS with Istio provides a highly available, scalable, and secure architecture for distributed applications. By leveraging YugabyteDB deployed on multiple AWS regions and EKS clusters, this setup ensures redundancy and failover capabilities, minimizing downtime and ensuring business continuity. Istio’s service mesh capabilities provide advanced traffic management, security, and observability features, allowing for fine-grained control and monitoring of the application traffic. This setup is ideal for organizations requiring a robust and resilient infrastructure for their critical applications.&lt;/p&gt;

&lt;p&gt;The original tech blog is &lt;a href="https://dvops.wordpress.com/2024/04/27/multi-region-yugabytedb-deployment-on-aws-eks-with-istio/"&gt;here&lt;/a&gt;, please follow/subscribe to get notifications directly in your inbox when new content goes live. You can also find me on LinkedIn @ &lt;a href="https://www.linkedin.com/in/vishnuhd/"&gt;in/vishnuhd&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>istio</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>yugabytedb</category>
    </item>
  </channel>
</rss>
