<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: M.M.Monirul Islam</title>
    <description>The latest articles on Forem by M.M.Monirul Islam (@monirul87).</description>
    <link>https://forem.com/monirul87</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F859666%2F0b686fdf-f7ce-47bf-81a4-e7e98eed96c1.jpeg</url>
      <title>Forem: M.M.Monirul Islam</title>
      <link>https://forem.com/monirul87</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/monirul87"/>
    <language>en</language>
    <item>
      <title>Revolutionizing Monitoring: Exploring the Power of Amazon Managed Service for Prometheus</title>
      <dc:creator>M.M.Monirul Islam</dc:creator>
      <pubDate>Wed, 06 Mar 2024 12:38:10 +0000</pubDate>
      <link>https://forem.com/monirul87/revolutionizing-monitoring-exploring-the-power-of-amazon-managed-service-for-prometheus-c0b</link>
      <guid>https://forem.com/monirul87/revolutionizing-monitoring-exploring-the-power-of-amazon-managed-service-for-prometheus-c0b</guid>
      <description>&lt;p&gt;In the dynamic landscape of cloud computing, efficient monitoring is paramount for ensuring the reliability and performance of applications. Amazon Web Services (AWS) has recently introduced a game-changer in the realm of observability – the Amazon Managed Service for Prometheus. This blog dives into the capabilities, benefits, and transformative impact of this managed service, showcasing how it empowers businesses to scale their monitoring infrastructure seamlessly.&lt;/p&gt;

&lt;h2&gt;Understanding the Need for Advanced Monitoring&lt;/h2&gt;

&lt;p&gt;In this section, we explore the evolving needs of modern applications and the challenges associated with traditional monitoring solutions. From the complexities of managing scale to the demand for real-time insights, we set the stage for the emergence of Amazon Managed Service for Prometheus.&lt;/p&gt;

&lt;h2&gt;Unveiling Amazon Managed Service for Prometheus&lt;/h2&gt;

&lt;p&gt;Amazon Managed Service for Prometheus is designed for simple setup, seamless integration with existing AWS services, and on-demand scalability. By lifting the burden of infrastructure management, the managed service lets teams focus on deriving actionable insights from their monitoring data.&lt;/p&gt;

&lt;h2&gt;Key Benefits and Use Cases&lt;/h2&gt;

&lt;p&gt;The tangible benefits of adopting Amazon Managed Service for Prometheus range from cost savings to improved reliability, and the service enhances observability for applications running on AWS. Its versatility shows across real-world scenarios, from single-cluster container monitoring to large multi-workload environments.&lt;/p&gt;

&lt;h2&gt;Seamless Integration with AWS Ecosystem&lt;/h2&gt;

&lt;p&gt;Amazon Managed Service for Prometheus integrates tightly with other AWS services, such as Amazon CloudWatch and AWS Identity and Access Management (IAM). This interoperability enriches the monitoring experience and contributes to a holistic approach to application observability.&lt;/p&gt;

&lt;h2&gt;Getting Started - A Step-by-Step Guide&lt;/h2&gt;

&lt;p&gt;This section is a practical guide to getting started with Amazon Managed Service for Prometheus, covering the essential steps, best practices, and considerations for a smooth onboarding process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Accessing the AWS Management Console&lt;/strong&gt;&lt;br&gt;
Begin by logging into the AWS Management Console. Navigate to the Amazon Managed Service for Prometheus section and initiate the setup process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Creating a Prometheus Workspace&lt;/strong&gt;&lt;br&gt;
Follow the prompts to create a new Prometheus workspace. Define the required parameters such as the workspace name, desired region, and any specific configurations based on your monitoring needs.&lt;/p&gt;
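
&lt;p&gt;If you prefer the command line, a workspace can also be created with the AWS CLI. This is a minimal sketch; the alias below is an arbitrary label of your choosing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create an AMP workspace and tag it with a human-readable alias
aws amp create-workspace --alias my-prometheus-workspace --region us-east-1

# Confirm creation and retrieve the workspace ID and ARN
aws amp list-workspaces --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;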

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4fp45dhh115vgkgz7sk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4fp45dhh115vgkgz7sk.png" alt="Image description" width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6g1zjxqjdrv0kdfoll5h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6g1zjxqjdrv0kdfoll5h.png" alt="Image description" width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8padm7b62xzf4ey4i4j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8padm7b62xzf4ey4i4j.png" alt="Image description" width="800" height="456"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;arn:aws:aps:us-east-1:726459634338:workspace/ws-420cc8c0-93ff-4bfd-9dfb-48e0e9a02d5a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Endpoint - remote write URL&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-420cc8c0-93ff-4bfd-9dfb-48e0e9a02d5a/api/v1/remote_write"&gt;https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-420cc8c0-93ff-4bfd-9dfb-48e0e9a02d5a/api/v1/remote_write&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Endpoint - query URL&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-420cc8c0-93ff-4bfd-9dfb-48e0e9a02d5a/api/v1/query"&gt;https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-420cc8c0-93ff-4bfd-9dfb-48e0e9a02d5a/api/v1/query&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Set up IAM roles for service accounts&lt;/strong&gt;&lt;br&gt;
For the method of onboarding that we are documenting, you need to use IAM roles for service accounts in the Amazon EKS cluster where the Prometheus server is running. These roles are also called service roles.&lt;/p&gt;

&lt;p&gt;With service roles, you can associate an IAM role with a Kubernetes service account. This service account can then provide AWS permissions to the containers in any pod that uses that service account. For more information, see IAM roles for service accounts. &lt;/p&gt;

&lt;p&gt;If you have not already set up these roles, follow the instructions in &lt;em&gt;Set up service roles for the ingestion of metrics from Amazon EKS clusters&lt;/em&gt; in the AWS documentation.&lt;/p&gt;

&lt;p&gt;Please make sure you have created an IAM role called amp-iamproxy-ingest-role before continuing.&lt;/p&gt;
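
&lt;p&gt;As an alternative to the helper script referenced above, eksctl can create the service account and role in one step, assuming your cluster already has an IAM OIDC provider associated. The cluster and namespace names below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a Kubernetes service account bound to an IAM role (IRSA) that
# is allowed to remote-write metrics to AMP
eksctl create iamserviceaccount \
    --name amp-iamproxy-ingest-service-account \
    --namespace prometheus \
    --cluster my-eks-cluster \
    --role-name amp-iamproxy-ingest-role \
    --attach-policy-arn arn:aws:iam::aws:policy/AmazonPrometheusRemoteWriteAccess \
    --approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;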

&lt;p&gt;&lt;strong&gt;Step 4: Upgrade your existing Prometheus server using Helm (for Prometheus version 2.26.0 or later)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The instructions in this section include setting up remote write and SigV4 to authenticate and authorize the Prometheus server to remote-write to your AMP workspace.&lt;/p&gt;

&lt;p&gt;To set up remote write from a Prometheus server using a Helm chart:&lt;br&gt;
Create a new remote write section in your Helm configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serviceAccounts:
    server:
        name: "amp-iamproxy-ingest-service-account"
        annotations:
            eks.amazonaws.com/role-arn: "arn:aws:iam::726459634338:role/amp-iamproxy-ingest-role"
server:
    remoteWrite:
        - url: https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-420cc8c0-93ff-4bfd-9dfb-48e0e9a02d5a/api/v1/remote_write
          sigv4:
            region: us-east-1
          queue_config:
            max_samples_per_send: 1000
            max_shards: 200
            capacity: 2500
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update your existing Prometheus Server configuration using Helm:&lt;/p&gt;

&lt;p&gt;First, find your Helm chart name by entering the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm ls --all-namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the output from this command, look for a chart with a name that includes "prometheus".&lt;/p&gt;

&lt;p&gt;Then enter the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm upgrade prometheus-chart-name prometheus-community/prometheus -n prometheus_namespace -f my_prometheus_values_yaml --version current_helm_chart_version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace these placeholders with the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replace prometheus-chart-name with the name of the Helm chart you found with the previous command.&lt;/li&gt;
&lt;li&gt;Replace prometheus_namespace with the name of the Kubernetes namespace where your Prometheus Server is installed.&lt;/li&gt;
&lt;li&gt;Replace my_prometheus_values_yaml with the path to your Helm configuration file.&lt;/li&gt;
&lt;li&gt;Replace current_helm_chart_version with the current version of the Prometheus Server Helm chart you found with the previous command.&lt;/li&gt;
&lt;/ul&gt;
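
&lt;p&gt;For example, with typical values substituted in (the release name, namespace, and chart version here are illustrative), the command might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm upgrade prometheus prometheus-community/prometheus \
    -n prometheus \
    -f my_prometheus_values.yaml \
    --version 15.10.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;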

&lt;h2&gt;Grafana&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Follow this procedure to set up Amazon Managed Grafana.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hvg1hkb38x86848dxqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hvg1hkb38x86848dxqu.png" alt="Image description" width="800" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2kfvmz2678nqxthr4rv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2kfvmz2678nqxthr4rv.png" alt="Image description" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzm1t5mfx8jzkujcuk2jq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzm1t5mfx8jzkujcuk2jq.png" alt="Image description" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfi3fehw1vcieb2gdhzi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfi3fehw1vcieb2gdhzi.png" alt="Image description" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37gchetgnl4yh0b0uf8n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37gchetgnl4yh0b0uf8n.png" alt="Image description" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvivkcspkhk823uqgdkrp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvivkcspkhk823uqgdkrp.png" alt="Image description" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Configuring Data Retention and Storage Options&lt;br&gt;
Fine-tune your monitoring environment by configuring data retention policies and storage options. Tailor these settings to align with your application's requirements and expected data volumes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Integrating with AWS Resources&lt;br&gt;
Connect Amazon Managed Service for Prometheus with your existing AWS resources. This could include linking it to Amazon Elastic Kubernetes Service (EKS) clusters, Amazon EC2 instances, or any other AWS services generating monitoring data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Setting Up Alerts and Notifications&lt;br&gt;
Enhance your observability by configuring alerts and notifications. Define thresholds for metrics that matter most to your applications and establish alerting mechanisms to keep your team informed of any deviations from expected behavior.&lt;/p&gt;
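
&lt;p&gt;AMP supports Prometheus-style alerting and recording rules through rule groups namespaces. The following sketch uploads a simple alerting rule with the AWS CLI; the rule name and threshold are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Write a Prometheus alerting rule to a local file
cat &amp;lt;&amp;lt;'EOF' &amp;gt; alert-rules.yaml
groups:
  - name: example-alerts
    rules:
      - alert: HighErrorRate
        expr: rate(http_requests_total{status="500"}[5m]) &amp;gt; 0.05
        for: 5m
        labels:
          severity: critical
EOF

# Upload the rules to the workspace created earlier
aws amp create-rule-groups-namespace \
    --workspace-id ws-420cc8c0-93ff-4bfd-9dfb-48e0e9a02d5a \
    --name example-alerts \
    --data fileb://alert-rules.yaml \
    --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Routing the resulting alerts to a destination such as Amazon SNS additionally requires an alert manager definition (see aws amp create-alert-manager-definition).&lt;/p&gt;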

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Exploring Querying Capabilities&lt;br&gt;
Dive into the querying capabilities of Amazon Managed Service for Prometheus. Learn how to write PromQL queries to extract valuable insights from your monitoring data. Leverage this functionality to troubleshoot issues, analyze trends, and optimize performance.&lt;/p&gt;
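
&lt;p&gt;Because AMP endpoints require SigV4-signed requests, plain curl will not work; a signing tool such as awscurl can be used instead. A minimal sketch against the query endpoint from Step 2:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run the PromQL query "up" to see which scrape targets are healthy
awscurl --service aps --region us-east-1 \
    "https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-420cc8c0-93ff-4bfd-9dfb-48e0e9a02d5a/api/v1/query?query=up"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;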

&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Integrating with AWS CloudWatch&lt;br&gt;
Explore the seamless integration between Amazon Managed Service for Prometheus and AWS CloudWatch. Understand how these services work together to provide a comprehensive monitoring solution for your AWS environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt; Scaling and Optimization&lt;br&gt;
As your application grows, ensure that your monitoring solution scales accordingly. Learn best practices for scaling Amazon Managed Service for Prometheus and optimizing its performance to meet the evolving demands of your applications.&lt;/p&gt;

&lt;p&gt;To wrap up: Amazon Managed Service for Prometheus can transform how you monitor at scale. Explore this powerful monitoring solution to elevate your observability practices and stay ahead in the ever-evolving landscape of cloud-native applications.&lt;/p&gt;

&lt;p&gt;By shedding light on the capabilities of Amazon Managed Service for Prometheus, this blog aims to guide organizations toward a more efficient and scalable approach to monitoring, ultimately enhancing their ability to deliver reliable and performant applications on AWS.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kong Gateway on AWS EKS: A Journey into Cloud-native API Management</title>
      <dc:creator>M.M.Monirul Islam</dc:creator>
      <pubDate>Tue, 16 Jan 2024 04:32:39 +0000</pubDate>
      <link>https://forem.com/monirul87/kong-gateway-on-aws-eks-a-journey-into-cloud-native-api-management-8dj</link>
      <guid>https://forem.com/monirul87/kong-gateway-on-aws-eks-a-journey-into-cloud-native-api-management-8dj</guid>
      <description>&lt;p&gt;In the ever-evolving landscape of cloud-native applications, managing APIs efficiently is crucial. An API Gateway plays a pivotal role as a bridge between clients and backends, orchestrating requests and responses while handling a myriad of tasks such as CORS validation, TLS termination, authentication, and more. In this blog post, we embark on a journey into cloud-native API management with Kong Gateway deployed on an AWS EKS cluster.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Understanding the Role of an API Gateway&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;At its core, an API Gateway acts as a proxy, mediating communication between clients and backend services. It streamlines the process by handling various tasks in transit, including CORS validation, TLS termination, JWT authentication, header injection, session management, response transformation, rate-limiting, ACLs, and much more. This intermediary layer ensures seamless and secure interactions within a microservices architecture.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Introducing Kong Gateway&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Developed by KongHQ, Kong Gateway stands out as a lightweight and decentralized API Gateway solution. Operating as a Lua application within NGINX and distributed with OpenResty, Kong Gateway sets the stage for modular extensibility through a rich ecosystem of plugins. Whether your API management needs are basic or complex, Kong Gateway provides a scalable and versatile solution.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;The Evolution: DBLess Kong Gateway&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Traditionally, Kong Gateway configurations, including routes, services, and plugins, were stored in a database. However, the landscape shifted with the advent of "DBLess" Kong Gateway, also known as the "declarative" method. In this mode, configuration management shifts entirely to code, typically saved as a declarative.yaml file. This paradigm shift brings about several advantages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Version Control:&lt;/strong&gt;&lt;br&gt;
Configuration becomes easily versionable, enabling seamless collaboration and tracking changes over time.&lt;br&gt;
&lt;strong&gt;2. Simplicity and Agility:&lt;/strong&gt;&lt;br&gt;
Eliminating the need for a separate database streamlines deployment and enhances agility in managing configurations.&lt;br&gt;
&lt;strong&gt;3. Infrastructure as Code (IaC):&lt;/strong&gt;&lt;br&gt;
The move towards a code-centric approach aligns with the principles of Infrastructure as Code, promoting consistency and reproducibility.&lt;br&gt;
&lt;strong&gt;4. Maintenance Ease:&lt;/strong&gt;&lt;br&gt;
With configurations stored as code, the need for maintaining a separate database diminishes, simplifying the overall maintenance process.&lt;/p&gt;
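
&lt;p&gt;One practical consequence of this approach: the declarative file can be linted before it ever reaches the cluster. A minimal sketch, assuming the Kong CLI or the official container image is available locally:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Validate the declarative configuration without a running data plane
kong config parse prd/declarative.yaml

# Or run the same check inside the official image
docker run --rm -v "$PWD/prd:/tmp/prd" kong:2.8 \
    kong config parse /tmp/prd/declarative.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
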
&lt;h2&gt;&lt;strong&gt;Deploying Kong Gateway on AWS EKS&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Now that we grasp the significance of Kong Gateway and the advantages of the DBLess approach, let's delve into the process of deploying Kong Gateway on an AWS EKS cluster. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Declarative Configuration (prd/declarative.yaml)&lt;/strong&gt;: Specifies the Kong services, routes, and associated plugins using a DBLess approach.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;_format_version: &lt;span class="s2"&gt;"2.1"&lt;/span&gt;
_transform: &lt;span class="nb"&gt;true

&lt;/span&gt;services:
  - name: invoicing-matrics
    url: https://prdawsinvoicing.example.com/actuator/
    routes:
      - paths:
          - /metrics
        methods:
          - GET
          - POST
          - PUT
        strip_path: &lt;span class="nb"&gt;true
        &lt;/span&gt;protocols:
          - https
          - http
        hosts:
          - k-prd.devopsmonirul.tech
  - name: businessCatalog
    url: https://prdawscatalogos.example.com/catalogs/v1.0/catalogs/catalogs/businessCatalog
    routes:
      - paths:
          - /catalogs/businessCatalog
        methods:
          - GET
          - POST
          - PUT
        strip_path: &lt;span class="nb"&gt;true
        &lt;/span&gt;protocols:
          - https
          - http
        hosts:
          - k-prd.devopsmonirul.tech
  - name: usermanagment
    url: https://prdawsusermanagment.example.com/usermanagement/v1.0/users
    routes:
      - paths:
          - /usermanagement
        methods:
          - GET
          - POST
          - PUT
        strip_path: &lt;span class="nb"&gt;true
        &lt;/span&gt;protocols:
          - https
          - http
        hosts:
          - k-prd.devopsmonirul.tech
  - name: ocr
    url: https://prdawsocr.example.com/ocr/v1.0/actuator
    routes:
      - paths:
          - /ocr
        methods:
          - GET
          - POST
          - PUT
        strip_path: &lt;span class="nb"&gt;true
        &lt;/span&gt;protocols:
          - https
          - http
        hosts:
          - k-prd.devopsmonirul.tech
  &lt;span class="c"&gt;#------------------------------------------------- CORS Plugin ----------------------------------------------&lt;/span&gt;
plugins:
  - name: prometheus
  - name: cors
    service: invoicing-matrics
    config:
      origins:
        - https://controller-ai.com
        - https://www.example.com
        - https://monirul.digital
        - https://example.com
        - http://local.devopsmonirul.com:4200
        - &lt;span class="s2"&gt;"*"&lt;/span&gt;
      methods:
        - GET
        - POST
        - PUT
      credentials: &lt;span class="nb"&gt;false
      &lt;/span&gt;max_age: 3600
      preflight_continue: &lt;span class="nb"&gt;false&lt;/span&gt;
  - name: cors
    service: businessCatalog
    config:
      origins:
        - &lt;span class="s2"&gt;"*"&lt;/span&gt;
      methods:
        - GET
        - POST
      credentials: &lt;span class="nb"&gt;false
      &lt;/span&gt;max_age: 3600
      preflight_continue: &lt;span class="nb"&gt;false&lt;/span&gt;
  - name: cors
    service: usermanagment
    config:
      origins:
        - &lt;span class="s2"&gt;"*"&lt;/span&gt;
      methods:
        - GET
        - POST
        - PUT
      credentials: &lt;span class="nb"&gt;false
      &lt;/span&gt;max_age: 3600
      preflight_continue: &lt;span class="nb"&gt;false&lt;/span&gt;
  - name: cors
    service: ocr
    config:
      origins:
        - &lt;span class="s2"&gt;"*"&lt;/span&gt;
      methods:
        - GET
        - POST
        - PUT
      credentials: &lt;span class="nb"&gt;false
      &lt;/span&gt;max_age: 3600
      preflight_continue: &lt;span class="nb"&gt;false&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Helm Configuration (prd/kong.yaml)&lt;/strong&gt;:&lt;br&gt;
Helm chart configurations for Kong deployment, including resource limits, ingress controller settings, environment variables, and autoscaling parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;deployment:
  serviceAccount:
    create: &lt;span class="nb"&gt;false

&lt;/span&gt;podAnnotations:
  &lt;span class="s2"&gt;"cluster-autoscaler.kubernetes.io/safe-to-evict"&lt;/span&gt;: &lt;span class="s2"&gt;"true"&lt;/span&gt;

resources:
  limits:
    memory: 1Gi
  requests:
    cpu: 500m
    memory: 1Gi

ingressController:
  enabled: &lt;span class="nb"&gt;false
  &lt;/span&gt;installCRDs: &lt;span class="nb"&gt;false

env&lt;/span&gt;:
  database: &lt;span class="s2"&gt;"off"&lt;/span&gt;
  nginx_worker_processes: &lt;span class="s2"&gt;"2"&lt;/span&gt;
  proxy_access_log: /dev/stdout json_analytics
  proxy_error_log: /dev/stdout
  log_level: &lt;span class="s2"&gt;"error"&lt;/span&gt;
  trusted_ips: &lt;span class="s2"&gt;"0.0.0.0/0,::/0"&lt;/span&gt;
  headers: &lt;span class="s2"&gt;"off"&lt;/span&gt;
  anonymous_reports: &lt;span class="s2"&gt;"off"&lt;/span&gt;
  admin_listen: &lt;span class="s2"&gt;"off"&lt;/span&gt;
  status_listen: 0.0.0.0:8100
  nginx_http_log_format: |
    json_analytics &lt;span class="nv"&gt;escape&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;json &lt;span class="s1"&gt;'{"msec": "$msec", "status": "$status", "request_uri": "$request_uri", "geoip_country_code": "$http_x_client_region", "client_subdivision": "$http_x_client_subdivision", "client_city": "$http_x_client_city","client_city_latlong": "$http_x_client_citylatlong", "connection": "$connection", "connection_requests": "$connection_requests", "pid": "$pid", "request_id": "$request_id", "request_length": "$request_length", "remote_addr": "$remote_addr", "remote_user": "$remote_user", "remote_port": "$remote_port", "time_local": "$time_local", "time_iso8601": "$time_iso8601", "request": "$request", "args": "$args", "body_bytes_sent": "$body_bytes_sent", "bytes_sent": "$bytes_sent", "http_referer": "$http_referer", "http_user_agent": "$http_user_agent", "http_x_forwarded_for": "$http_x_forwarded_for", "http_host": "$http_host", "server_name": "$server_name", "request_time": "$request_time", "upstream": "$upstream_addr", "upstream_connect_time": "$upstream_connect_time", "upstream_header_time": "$upstream_header_time", "upstream_response_time": "$upstream_response_time", "upstream_response_length": "$upstream_response_length", "upstream_cache_status": "$upstream_cache_status", "ssl_protocol": "$ssl_protocol", "ssl_cipher": "$ssl_cipher", "scheme": "$scheme", "request_method": "$request_method", "server_protocol": "$server_protocol", "pipe": "$pipe", "gzip_ratio": "$gzip_ratio", "http_cf_ray": "$http_cf_ray", "trace_id": "$http_x_b3_traceid", "proxy_host": "$proxy_host"}'&lt;/span&gt;

admin:
  enabled: &lt;span class="nb"&gt;true
  &lt;/span&gt;http:
    enabled: &lt;span class="nb"&gt;false
  &lt;/span&gt;tls:
    enabled: &lt;span class="nb"&gt;true

&lt;/span&gt;proxy:
  &lt;span class="nb"&gt;type&lt;/span&gt;: NodePort
  tls:
    enabled: &lt;span class="nb"&gt;false

&lt;/span&gt;autoscaling:
  enabled: &lt;span class="nb"&gt;true
  &lt;/span&gt;minReplicas: 1
  maxReplicas: 11
  metrics:
    - &lt;span class="nb"&gt;type&lt;/span&gt;: Resource
      resource:
        name: cpu
        target:
          &lt;span class="nb"&gt;type&lt;/span&gt;: Utilization
          averageUtilization: 60
    - &lt;span class="nb"&gt;type&lt;/span&gt;: Resource
      resource:
        name: memory
        target:
          &lt;span class="nb"&gt;type&lt;/span&gt;: Utilization
          averageUtilization: 80
affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/os
              operator: In
              values:
              - linux
            - key: kubernetes.io/arch
              operator: In
              values:
              - amd64
              - arm64
            - key: eks.amazonaws.com/compute-type
              operator: NotIn
              values:
              - fargate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Vertical Pod Autoscaler Configuration (prd/vpa.yaml)&lt;/strong&gt;: Configuration for the Vertical Pod Autoscaler, which adjusts resource requests and limits for pods based on their usage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: prd-kong-vpa
  namespace: prd-kong
spec:
  targetRef:
    apiVersion: &lt;span class="s2"&gt;"apps/v1"&lt;/span&gt;
    kind: Deployment
    name: prd-kong
  updatePolicy:
    updateMode: &lt;span class="s2"&gt;"Off"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Ingress Configuration (prd/ingress.yaml)&lt;/strong&gt;: Configuration for the Ingress resource, specifying rules and annotations for AWS ALB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prd-kong-ingress
  namespace: prd-kong
  annotations:
    alb.ingress.kubernetes.io/actions.ssl-redirect: &lt;span class="s1"&gt;'{"Type": "redirect", "RedirectConfig":
        { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'&lt;/span&gt;
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:234366607644:certificate/c25d25f3-78ae-4197-a806-1882f6b947dc
    alb.ingress.kubernetes.io/listen-ports: &lt;span class="s1"&gt;'[{"HTTP": 80}, {"HTTPS":443}]'&lt;/span&gt;
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/success-codes: 200,404,301,302
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: k-prd.devopsmonirul.tech
      http:
        paths:
          - path: /&lt;span class="k"&gt;*&lt;/span&gt;
            pathType: ImplementationSpecific
            backend:
              service:
                name: prd-kong-proxy
                port:
                  number: 80   

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Common Scripts (scripts/common.sh):&lt;/strong&gt; Shell script with functions for common tasks, such as printing colored text, checking the existence of commands, and defining an exit strategy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-euo&lt;/span&gt; pipefail

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;HELM_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"v3.7.2"&lt;/span&gt;

&lt;span class="k"&gt;function &lt;/span&gt;red&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="p"&gt; &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\0&lt;/span&gt;&lt;span class="s2"&gt;33[31m&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;text&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\0&lt;/span&gt;&lt;span class="s2"&gt;33[0m"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;function &lt;/span&gt;green&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="p"&gt; &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\0&lt;/span&gt;&lt;span class="s2"&gt;33[32m&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;text&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\0&lt;/span&gt;&lt;span class="s2"&gt;33[0m"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;function &lt;/span&gt;yellow&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="p"&gt; &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\0&lt;/span&gt;&lt;span class="s2"&gt;33[33m&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;text&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\0&lt;/span&gt;&lt;span class="s2"&gt;33[0m"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;function &lt;/span&gt;blue&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="p"&gt; &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\0&lt;/span&gt;&lt;span class="s2"&gt;33[36m&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;text&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\0&lt;/span&gt;&lt;span class="s2"&gt;33[0m"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;function &lt;/span&gt;exists&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;command&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;:-}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null 2&amp;gt;&amp;amp;1
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;function &lt;/span&gt;graceful_exit&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    red &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="p"&gt; &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;function &lt;/span&gt;bastion_command&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="p"&gt; &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;command&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
&lt;/span&gt;graceful_exit &lt;span class="s2"&gt;"Command can't be empty."&lt;/span&gt;
    &lt;span class="k"&gt;else
        &lt;/span&gt;gcloud compute ssh &lt;span class="nt"&gt;--project&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GCP_PROJECT_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--zone&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GCP_ZONE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GCP_USER&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GCP_SERVER_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--tunnel-through-iap&lt;/span&gt; &lt;span class="nt"&gt;--command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;function &lt;/span&gt;install_kubectl&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;exists /usr/local/bin/kubectl&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Kubectl is installed already"&lt;/span&gt;
    &lt;span class="k"&gt;else
        &lt;/span&gt;curl &lt;span class="nt"&gt;-o&lt;/span&gt; kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.23.13/2022-10-31/bin/linux/amd64/kubectl
        &lt;span class="nb"&gt;chmod&lt;/span&gt; +x ./kubectl
        &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/bin &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cp&lt;/span&gt; ./kubectl &lt;span class="nv"&gt;$HOME&lt;/span&gt;/bin/kubectl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$PATH&lt;/span&gt;:&lt;span class="nv"&gt;$HOME&lt;/span&gt;/bin
        &lt;span class="nb"&gt;cp&lt;/span&gt; ./kubectl /usr/local/bin/kubectl
        kubectl version &lt;span class="nt"&gt;--short&lt;/span&gt; &lt;span class="nt"&gt;--client&lt;/span&gt;
    &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;function &lt;/span&gt;install_helm&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;exists /usr/local/bin/helm&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Helm is installed already"&lt;/span&gt;
    &lt;span class="k"&gt;else
        &lt;/span&gt;curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nt"&gt;--version&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HELM_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;function &lt;/span&gt;setup_aws_auth&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    aws eks update-kubeconfig &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;AWS_PROJECT_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--region&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;AWS_LOCATION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--profile&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CLUSTER_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Setup Kong Script (scripts/setup-kong.sh)&lt;/strong&gt;: Script for installing kubectl, Helm, adding the Kong Helm repo, setting up AWS authentication, creating namespaces, applying VPA configuration, and deploying Kong using Helm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-euo&lt;/span&gt; pipefail

&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;dirname&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$0&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/.."&lt;/span&gt;

&lt;span class="nb"&gt;source &lt;/span&gt;scripts/common.sh

green &lt;span class="s2"&gt;"Installing Kubectl"&lt;/span&gt;
install_kubectl

green &lt;span class="s2"&gt;"Installing helm version =&amp;gt; &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HELM_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
install_helm

green &lt;span class="s2"&gt;"Setting up Kong Helm Repo"&lt;/span&gt;
helm repo add kong https://charts.konghq.com
helm repo update

green &lt;span class="s2"&gt;"Setting up AWS Auth"&lt;/span&gt;
setup_aws_auth

green &lt;span class="s2"&gt;"Creating Namespace =&amp;gt; &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-kong"&lt;/span&gt;
kubectl create namespace &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nt"&gt;-kong&lt;/span&gt; &lt;span class="nt"&gt;--dry-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;client &lt;span class="nt"&gt;-o&lt;/span&gt; yaml | kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; -

green &lt;span class="s2"&gt;"Set the current namespace"&lt;/span&gt;
kubectl config set-context &lt;span class="nt"&gt;--current&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;-kong&lt;/span&gt;

green &lt;span class="s2"&gt;"Setting up VPA Config =&amp;gt; &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-kong-vpa"&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/vpa.yaml &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true

&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; prd &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;green &lt;span class="s2"&gt;"Setting up Ingress =&amp;gt; &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-kong-ingress"&lt;/span&gt;
    kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/ingress.yaml
&lt;span class="k"&gt;fi

&lt;/span&gt;green &lt;span class="s2"&gt;"Setting up Kong =&amp;gt; &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
helm upgrade &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--install&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    kong/kong &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--namespace&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nt"&gt;-kong&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/kong.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set-file&lt;/span&gt; dblessConfig.config&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/declarative.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--version&lt;/span&gt; 2.6.3 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--wait&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--debug&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Validate Kong Script (scripts/validate-kong.sh)&lt;/strong&gt;: Script for validating the setup, including Helm diff, VPA configuration, and Ingress configuration (for prd environment).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-euo&lt;/span&gt; pipefail

&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;dirname&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$0&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/.."&lt;/span&gt;

&lt;span class="nb"&gt;source &lt;/span&gt;scripts/common.sh

green &lt;span class="s2"&gt;"Installing Kubectl"&lt;/span&gt;
install_kubectl

green &lt;span class="s2"&gt;"Installing helm version =&amp;gt; &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HELM_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
install_helm

green &lt;span class="s2"&gt;"Setting up Kong Helm Repo"&lt;/span&gt;
helm repo add kong https://charts.konghq.com
helm repo update

green &lt;span class="s2"&gt;"Installing Helm Diff Plugin"&lt;/span&gt;
helm plugin &lt;span class="nb"&gt;install &lt;/span&gt;https://github.com/databus23/helm-diff &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true

&lt;/span&gt;green &lt;span class="s2"&gt;"Setting up AWS Auth"&lt;/span&gt;
setup_aws_auth

green &lt;span class="s2"&gt;"Set the current namespace"&lt;/span&gt;
kubectl config set-context &lt;span class="nt"&gt;--current&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;-kong&lt;/span&gt;

green &lt;span class="s2"&gt;"Validating VPA Config =&amp;gt; &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-kong-vpa"&lt;/span&gt;
kubectl diff &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/vpa.yaml &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true

&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; prd &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;green &lt;span class="s2"&gt;"Validating Ingress =&amp;gt; &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-kong-ingress"&lt;/span&gt;
    kubectl diff &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/ingress.yaml &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
&lt;/span&gt;&lt;span class="k"&gt;fi

&lt;/span&gt;green &lt;span class="s2"&gt;"Validating Kong =&amp;gt; &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
helm diff upgrade &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--install&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    kong/kong &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--namespace&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NAMESPACE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/kong.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set-file&lt;/span&gt; dblessConfig.config&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/declarative.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--version&lt;/span&gt; 2.6.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;GitLab CI Configuration (.gitlab-ci.yml)&lt;/strong&gt;: CI/CD pipeline configuration for validating and deploying Kong in the prd environment.&lt;/p&gt;

&lt;p&gt;For the validation job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;validate-kong-prd:
  stage: validate-kong-prd
  environment: kong-prd
  variables:
    ENV: prd
    KONG_NAME: prd
    AWS_PROJECT_ID: monirul-digital
    CLUSTER_NAME: monirul
    AWS_LOCATION: eu-west-1
    NAMESPACE: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;-kong&lt;/span&gt;
  extends: &lt;span class="o"&gt;[&lt;/span&gt; .common-dependencies &lt;span class="o"&gt;]&lt;/span&gt;
  script:
    - aws eks update-kubeconfig &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="nv"&gt;$AWS_PROJECT_ID&lt;/span&gt; &lt;span class="nt"&gt;--region&lt;/span&gt; &lt;span class="nv"&gt;$AWS_LOCATION&lt;/span&gt; &lt;span class="nt"&gt;--profile&lt;/span&gt; &lt;span class="nv"&gt;$CLUSTER_NAME&lt;/span&gt;
    - ./scripts/validate-kong.sh
  only:
    refs:
      - master
      - merge_requests
    changes:
      - prd/&lt;span class="k"&gt;**&lt;/span&gt;/&lt;span class="k"&gt;*&lt;/span&gt;
  allow_failure: &lt;span class="nb"&gt;false&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the deployment job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;deploy-helm-kong-prd:
  stage: deploy-kong-prd
  environment: kong-prd
  variables:
    ENV: prd
    KONG_NAME: prd
    AWS_PROJECT_ID: monirul-digital
    CLUSTER_NAME: monirul
    AWS_LOCATION: eu-west-1
    NAMESPACE: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;KONG_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;-kong&lt;/span&gt;
  extends: &lt;span class="o"&gt;[&lt;/span&gt; .common-dependencies &lt;span class="o"&gt;]&lt;/span&gt;
  script:
    - aws eks update-kubeconfig &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="nv"&gt;$AWS_PROJECT_ID&lt;/span&gt; &lt;span class="nt"&gt;--region&lt;/span&gt; &lt;span class="nv"&gt;$AWS_LOCATION&lt;/span&gt; &lt;span class="nt"&gt;--profile&lt;/span&gt; &lt;span class="nv"&gt;$CLUSTER_NAME&lt;/span&gt;
    - ./scripts/setup-kong.sh
  only:
    refs:
      - master
    changes:
      - prd/&lt;span class="k"&gt;**&lt;/span&gt;/&lt;span class="k"&gt;*&lt;/span&gt;
  allow_failure: &lt;span class="nb"&gt;false
  &lt;/span&gt;when: manual

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This deployment setup follows established practices for running Kong on AWS EKS, and the GitLab CI/CD pipeline keeps deployments automated and consistent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure that your deployment scripts and configurations align with your specific requirements and AWS EKS environment.&lt;/li&gt;
&lt;li&gt;Monitor Kong Gateway's performance, logs, and metrics in the AWS EKS cluster to identify and address any issues (a quick smoke-test sketch follows this list).&lt;/li&gt;
&lt;li&gt;Consider further optimizations or enhancements based on specific use cases or evolving requirements.&lt;/li&gt;
&lt;/ul&gt;
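
&lt;p&gt;As a quick smoke test, the status listener configured in kong.yaml (status_listen: 0.0.0.0:8100) exposes node health and, with the prometheus plugin enabled, metrics. A minimal sketch, assuming kubectl access to the prd-kong namespace:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Forward the Kong status port to your workstation
kubectl port-forward -n prd-kong deployment/prd-kong 8100:8100 &amp;amp;

# Node health
curl -s http://localhost:8100/status

# Prometheus metrics exposed by the prometheus plugin
curl -s http://localhost:8100/metrics

# Exercise a route end-to-end through the ALB-fronted proxy
curl -i https://k-prd.devopsmonirul.tech/metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;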

&lt;p&gt;If you have any specific concerns or questions, feel free to ask!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>kubernetes</category>
      <category>kong</category>
    </item>
    <item>
      <title>Connecting Multiple GKE Clusters to a Single Cloud SQL Instance</title>
      <dc:creator>M.M.Monirul Islam</dc:creator>
      <pubDate>Tue, 22 Aug 2023 05:03:54 +0000</pubDate>
      <link>https://forem.com/monirul87/connecting-multiple-gke-clusters-to-a-single-cloud-sql-instance-3fa0</link>
      <guid>https://forem.com/monirul87/connecting-multiple-gke-clusters-to-a-single-cloud-sql-instance-3fa0</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdsji3wx0wvkscu0d5va.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdsji3wx0wvkscu0d5va.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
In a cloud-native environment, managing multiple Google Kubernetes Engine (GKE) clusters is common, especially when different parts of an application need to be isolated. However, these isolated clusters might still need to access a central database like Google Cloud SQL (PostgreSQL). In this blog post, we'll guide you through the process of connecting multiple GKE VPC-native clusters to a single Cloud SQL instance, even when they reside in separate VPC networks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge&lt;/strong&gt;&lt;br&gt;
Imagine you have several GKE clusters, each operating in its own Virtual Private Cloud (VPC) network, for reasons of security and isolation. On the other hand, you have a Cloud SQL instance (PostgreSQL) that serves as the central repository for your application's data. The challenge is to enable seamless connectivity between any of the GKE clusters and the shared Cloud SQL instance, maintaining both network isolation and data security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Solution&lt;/strong&gt;&lt;br&gt;
To address this challenge, we will outline the steps to establish a secure and efficient connection between your GKE VPC-native clusters and the Cloud SQL database, regardless of the VPC network they are in.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Dedicated Node Pools&lt;br&gt;
Begin by creating dedicated node pools within each GKE cluster. These node pools are optimized for backend workloads and ensure that your applications requiring database access have the necessary resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provisioning Cloud SQL&lt;br&gt;
Set up a PostgreSQL database instance in Google Cloud SQL. This instance will serve as the central database for your application's data.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are several ways to create this instance: Infrastructure as Code (IaC), manual steps in the console, or the gcloud command provided below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

gcloud sql instances create postgres-instance &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--project&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;monirul-test &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--database-version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;POSTGRES_13 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--cpu&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--memory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4GiB &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--root-password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"DummyPass765#"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--availability-type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;zonal &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-west2-a


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Configuring Service Accounts
Create a dedicated service account and assign the Cloud SQL Client role to it. To allow this service account to authenticate to Cloud SQL, you also need to generate a JSON key file; see the sketch after this list.&lt;/li&gt;
&lt;/ul&gt;
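&lt;p&gt;A minimal sketch of these steps with gcloud, assuming the monirul-test project used for the instance above and a hypothetical service account named cloudsql-client:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# Create a dedicated service account for database access
gcloud iam service-accounts create cloudsql-client \
    --project=monirul-test \
    --display-name="Cloud SQL client for GKE workloads"

# Grant it the Cloud SQL Client role
gcloud projects add-iam-policy-binding monirul-test \
    --member="serviceAccount:cloudsql-client@monirul-test.iam.gserviceaccount.com" \
    --role="roles/cloudsql.client"

# Generate a JSON key file for the service account
gcloud iam service-accounts keys create key.json \
    --iam-account=cloudsql-client@monirul-test.iam.gserviceaccount.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;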

&lt;p&gt;Procedure for creating the service-account-key Secret from the key file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl create secret generic service-account-key &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;key.json&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/monirul-test-c7a15664f347.json


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The keys create command generates a key file named key.json; the kubectl command above stores that file in a Kubernetes Secret named service-account-key, which the Cloud SQL Proxy and your GKE application will use to authenticate.&lt;/p&gt;
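&lt;p&gt;You can quickly verify that the Secret exists and holds the key:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# The output should list a key.json entry with a non-zero byte size
kubectl describe secret service-account-key

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;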

&lt;ul&gt;
&lt;li&gt;Deploying the Cloud SQL Proxy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deploying the Cloud SQL Proxy as a sidecar is a crucial step in connecting your GKE application to a Cloud SQL database securely. Below is a Kubernetes Deployment YAML manifest demonstrating how to deploy the Cloud SQL Proxy container as a sidecar alongside your main application container:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apiVersion: apps/v1
kind: Deployment
metadata:
    name: test-app-deployment
    annotations:
        linkerd.io/inject: enabled
spec:
    replicas: 1
    strategy:
        &lt;span class="nb"&gt;type&lt;/span&gt;: RollingUpdate
        rollingUpdate:
            maxSurge: 50%
            maxUnavailable: 0
    selector:
        matchLabels:
            app: test-app
    template:
        metadata:
            labels:
                app: test-app
        spec:
            affinity:
                nodeAffinity:
                    requiredDuringSchedulingIgnoredDuringExecution:
                        nodeSelectorTerms:
                        - matchExpressions:
                            - key: dedicated
                              operator: In
                              values:
                              - backend
            tolerations:
                - effect: NoSchedule
                  key: dedicated
                  operator: Equal
                  value: backend
            containers:
                - name: test-app
                  image: asia.gcr.io/monirul-test/test-app-deployment:test-app-1.0.0
                  ports:
                      - containerPort: 5000
                        name: test-app
                  resources:
                      limits:
                          cpu: &lt;span class="s1"&gt;'1'&lt;/span&gt;
                          memory: &lt;span class="s1"&gt;'3Gi'&lt;/span&gt;
                      requests:
                          cpu: &lt;span class="s1"&gt;'0.5'&lt;/span&gt;
                          memory: &lt;span class="s1"&gt;'2Gi'&lt;/span&gt;
                  &lt;span class="nb"&gt;env&lt;/span&gt;:
                      - name: DB_PASSWORD
                        valueFrom:
                            secretKeyRef:
                                name: test-app-secret
                                key: DB_PASSWORD
                      - name: APP_PROJECT_ID
                        valueFrom:
                            configMapKeyRef:
                                name: test-app-config
                                key: APP_PROJECT_ID
                      - name: DB_HOST
                        valueFrom:
                            configMapKeyRef:
                                name: test-app-config
                                key: DB_HOST
                      - name: DB_USER
                        valueFrom:
                            configMapKeyRef:
                                name: test-app-config
                                key: DB_USER
                      - name: APP_PORT
                        valueFrom:
                            configMapKeyRef:
                                name: test-app-config
                                key: APP_PORT
                      - name: DB_NAME
                        valueFrom:
                            configMapKeyRef:
                                name: test-app-config
                                key: DB_NAME
                      - name: DB_PORT
                        valueFrom:
                            configMapKeyRef:
                                name: test-app-config
                                key: DB_PORT
                      - name: DATABASE_MAX_CONNECTIONS
                        valueFrom:
                            configMapKeyRef:
                                name: test-app-config
                                key: DATABASE_MAX_CONNECTIONS
                - name: cloud-sql-proxy
                  image: gcr.io/cloudsql-docker/gce-proxy:1.22.0
                  &lt;span class="nb"&gt;command&lt;/span&gt;:
                      - &lt;span class="s1"&gt;'/cloud_sql_proxy'&lt;/span&gt;
                      - &lt;span class="s1"&gt;'-ip_address_types=PUBLIC'&lt;/span&gt;
                      - &lt;span class="s1"&gt;'-instances=monirul-test:us-west2:postgres-instance=tcp:0.0.0.0:5432'&lt;/span&gt;
                      - &lt;span class="s1"&gt;'-credential_file=/var/secrets/cloud-sql/key.json'&lt;/span&gt;
                  resources:
                      requests:
                          cpu: &lt;span class="s1"&gt;'100m'&lt;/span&gt;
                          memory: &lt;span class="s1"&gt;'128Mi'&lt;/span&gt;
                      limits:
                          cpu: &lt;span class="s1"&gt;'200m'&lt;/span&gt;
                          memory: &lt;span class="s1"&gt;'256Mi'&lt;/span&gt;
                  volumeMounts:  &lt;span class="c"&gt;# Mount the Secret as a volume&lt;/span&gt;
                      - name: service-account-key
                        mountPath: /var/secrets/cloud-sql
            volumes:  &lt;span class="c"&gt;# Define the volume that references the Secret&lt;/span&gt;
                - name: service-account-key
                  secret:
                      secretName: service-account-key



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This YAML manifest defines a Deployment for your application ("test-app-deployment") and includes a sidecar container named "cloud-sql-proxy." The Cloud SQL Proxy handles authentication and encryption for secure connections between your GKE application and the Cloud SQL database.&lt;/p&gt;

&lt;p&gt;Ensure you replace the image with your actual image. Once deployed, this configuration allows your GKE application to securely access the Cloud SQL database using the Cloud SQL Proxy.&lt;/p&gt;

&lt;p&gt;ConfigMap:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apiVersion: v1
kind: ConfigMap
metadata:
    name: test-app-config
data:
    APP_PORT: &lt;span class="s1"&gt;'5000'&lt;/span&gt;
    APP_PROJECT_ID: monirul-test
    DB_NAME: postgres
    DB_HOST: localhost
    DB_USER: postgres
    DATABASE_MAX_CONNECTIONS: &lt;span class="s1"&gt;'15'&lt;/span&gt;
    DB_PORT: &lt;span class="s1"&gt;'5432'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Secret:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apiVersion: v1
data:
  DB_PASSWORD: &lt;span class="nv"&gt;RXhDZXJlODc0Iw&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;
kind: Secret
metadata:
  name: test-app-secret
  namespace: test-app
&lt;span class="nb"&gt;type&lt;/span&gt;: Opaque


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Service.yaml&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apiVersion: v1
kind: Service
metadata:
  name: test-app-svc
spec:
  selector:
    app: test-app  # without a selector the Service has no endpoints to route to
  ports:
    - name: test-app-svc-port
      protocol: TCP
      port: 5000
      targetPort: 5000


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Make sure the values your application reads at runtime match the ones configured in the ConfigMap and Secret; with the sidecar in place, the application reaches the database at localhost:5432.&lt;/p&gt;
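&lt;p&gt;With the manifests ready, apply them to the cluster (a sketch, assuming the file names used here and the test-app namespace from the Secret):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# Create the namespace idempotently, then apply all four manifests into it
kubectl create namespace test-app --dry-run=client -o yaml | kubectl apply -f -
kubectl apply -n test-app -f configmap.yaml -f secret.yaml -f service.yaml -f deployment.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;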

&lt;ul&gt;
&lt;li&gt;Testing and Validation
Finally, test and validate the setup to confirm that the GKE to Cloud SQL connection works end to end:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;k &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; test-app-deployment-786d4bdd56-2sktd &lt;span class="nt"&gt;--&lt;/span&gt; sh
Defaulted container &lt;span class="s2"&gt;"test-app"&lt;/span&gt; out of: test-app, cloud-sql-proxy
&lt;span class="c"&gt;# curl http://localhost:5000&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"message"&lt;/span&gt;:&lt;span class="s2"&gt;"hello world"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="c"&gt;# curl http://localhost:5000/data&lt;/span&gt;
&lt;span class="o"&gt;[[&lt;/span&gt;&lt;span class="s2"&gt;"+60102098121"&lt;/span&gt;,&lt;span class="s2"&gt;"Monirul"&lt;/span&gt;,&lt;span class="s2"&gt;"Islam"&lt;/span&gt;,&lt;span class="s2"&gt;"devops.monirul@gmail.com"&lt;/span&gt;&lt;span class="o"&gt;]]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;These commands check basic connectivity and data retrieval. Also test edge cases and error scenarios, such as proxy restarts or exhausting DATABASE_MAX_CONNECTIONS, to confirm the reliability and performance of the GKE to Cloud SQL connection.&lt;/p&gt;
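&lt;p&gt;For a database-level check, you can also forward the sidecar's proxy port and connect with psql from your workstation (assuming psql is installed locally; the user and database names come from the ConfigMap):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# Forward the proxy port from the pod to the local machine
kubectl port-forward deployment/test-app-deployment 5432:5432

# In another terminal, connect through the forwarded port
psql -h localhost -p 5432 -U postgres -d postgres

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;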

&lt;p&gt;&lt;strong&gt;The Benefits&lt;/strong&gt;&lt;br&gt;
Connecting multiple GKE VPC-native clusters to a centralized Cloud SQL instance provides several advantages:&lt;/p&gt;

&lt;p&gt;Isolation: Each GKE cluster remains isolated within its own VPC network, enhancing security and separation of workloads.&lt;/p&gt;

&lt;p&gt;Centralized Data Management: Data consistency is ensured with a single Cloud SQL database instance, simplifying data management.&lt;/p&gt;

&lt;p&gt;Scalability: This architecture can scale horizontally to handle increased workloads and data storage requirements.&lt;/p&gt;

&lt;p&gt;Security: Strong access controls, encryption, and secure connections protect data confidentiality and integrity.&lt;/p&gt;

&lt;p&gt;Efficiency: The Cloud SQL Proxy streamlines database connections, reducing latency and ensuring reliable access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Connecting multiple GKE VPC-native clusters to a shared Cloud SQL instance is a critical step in building scalable, secure, and efficient cloud-native applications. With the right architecture, tools, and best practices, you can seamlessly manage your data while maintaining network isolation and data security.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Reference:&lt;/em&gt; &lt;br&gt;
&lt;a href="https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine" rel="noopener noreferrer"&gt;https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gke</category>
      <category>cloudsql</category>
      <category>cloudsqlproxy</category>
      <category>cloud</category>
    </item>
    <item>
      <title>EKS cluster Monitoring for AWS Fargate with Prometheus and managed Grafana</title>
      <dc:creator>M.M.Monirul Islam</dc:creator>
      <pubDate>Wed, 22 Mar 2023 10:33:34 +0000</pubDate>
      <link>https://forem.com/monirul87/eks-cluster-monitoring-for-aws-fargate-with-prometheus-and-managed-grafana-1h2f</link>
      <guid>https://forem.com/monirul87/eks-cluster-monitoring-for-aws-fargate-with-prometheus-and-managed-grafana-1h2f</guid>
<description>&lt;p&gt;First, we need to create a node group in our existing EKS cluster, because host-level metrics are inaccessible on Fargate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvzgp0plcwrh059v0rwre.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvzgp0plcwrh059v0rwre.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Node group for prometheus:
&lt;/h2&gt;

&lt;p&gt;I used IaC (Terraform) to create the EKS node group (worker nodes) for Prometheus.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

resource &lt;span class="s2"&gt;"aws_eks_node_group"&lt;/span&gt; &lt;span class="s2"&gt;"monirul_ec2"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  cluster_name    &lt;span class="o"&gt;=&lt;/span&gt; aws_eks_cluster.monirul.name
  node_group_name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"monirul_ec2_prometheus"&lt;/span&gt;
  node_role_arn   &lt;span class="o"&gt;=&lt;/span&gt; aws_iam_role.node.arn
  subnet_ids &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
    var.private_subnet_id_a,
    var.private_subnet_id_b
  &lt;span class="o"&gt;]&lt;/span&gt;

  scaling_config &lt;span class="o"&gt;{&lt;/span&gt;
    desired_size &lt;span class="o"&gt;=&lt;/span&gt; 2
    max_size     &lt;span class="o"&gt;=&lt;/span&gt; 5
    min_size     &lt;span class="o"&gt;=&lt;/span&gt; 1
  &lt;span class="o"&gt;}&lt;/span&gt;

  ami_type       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AL2_x86_64"&lt;/span&gt; &lt;span class="c"&gt;# AL2_x86_64, AL2_x86_64_GPU, AL2_ARM_64, CUSTOM&lt;/span&gt;
  capacity_type  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ON_DEMAND"&lt;/span&gt;  &lt;span class="c"&gt;# ON_DEMAND, SPOT&lt;/span&gt;
  disk_size      &lt;span class="o"&gt;=&lt;/span&gt; 20
  instance_types &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"m5.large"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;

  depends_on &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
    aws_iam_role_policy_attachment.node_AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.node_AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.node_AmazonEC2ContainerRegistryReadOnly,
  &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# EKS Node IAM Role&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_iam_role"&lt;/span&gt; &lt;span class="s2"&gt;"node"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Ec2-Worker-Role"&lt;/span&gt;

  assume_role_policy &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;POLICY&lt;/span&gt;&lt;span class="sh"&gt;
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
&lt;/span&gt;&lt;span class="no"&gt;POLICY
&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_iam_role_policy_attachment"&lt;/span&gt; &lt;span class="s2"&gt;"node_AmazonEKSWorkerNodePolicy"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  policy_arn &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"&lt;/span&gt;
  role       &lt;span class="o"&gt;=&lt;/span&gt; aws_iam_role.node.name
&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_iam_role_policy_attachment"&lt;/span&gt; &lt;span class="s2"&gt;"node_AmazonEKS_CNI_Policy"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  policy_arn &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"&lt;/span&gt;
  role       &lt;span class="o"&gt;=&lt;/span&gt; aws_iam_role.node.name
&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_iam_role_policy_attachment"&lt;/span&gt; &lt;span class="s2"&gt;"node_AmazonEC2ContainerRegistryReadOnly"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  policy_arn &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"&lt;/span&gt;
  role       &lt;span class="o"&gt;=&lt;/span&gt; aws_iam_role.node.name
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Alternatively, we can create it manually: go to the AWS Management Console -&amp;gt; EKS -&amp;gt; Your cluster -&amp;gt; Compute -&amp;gt; Add node group.&lt;/p&gt;
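&lt;p&gt;A third option is eksctl; here is a rough equivalent of the Terraform above (the names, sizes, and region mirror that definition, so adjust them to your cluster):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# Create an on-demand m5.large node group matching the Terraform definition
eksctl create nodegroup \
  --cluster monirul \
  --region eu-west-1 \
  --name monirul_ec2_prometheus \
  --node-type m5.large \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 5 \
  --node-volume-size 20

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;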

&lt;p&gt;Note that we have to use EC2 nodes for Prometheus, since it needs persistent volumes mounted to it.&lt;/p&gt;

&lt;p&gt;While creating node group, we have to attach an IAM role to EC2 worker nodes. For easy demonstration, I have created a new IAM role and attached policies as below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxl71jl4eeatago33599e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxl71jl4eeatago33599e.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the following command to confirm that your EC2 worker nodes are running properly (both aws-node pods should be in the Running state).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

 &lt;span class="nv"&gt;$ &lt;/span&gt;k get po &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system | &lt;span class="nb"&gt;grep &lt;/span&gt;aws-node
aws-node-jx8dh                                 1/1     Running   0          13d
aws-node-mx4gq                                 1/1     Running   0          13d


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: Node exporter runs as a daemon set and is responsible for collecting metrics of the host it runs on. Most of these metrics are low-level operating system metrics like vCPU, memory, network, disk (of the host machine, not containers), and hardware statistics, etc. These metrics are inaccessible to Fargate customers since AWS is responsible for the health of the host machine.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Install EBS CSI driver
&lt;/h2&gt;

&lt;p&gt;Prometheus and Grafana need persistent storage attached to them, known in Kubernetes as a PV (PersistentVolume).&lt;/p&gt;

&lt;p&gt;For stateful workloads to use Amazon EBS volumes as PVs, we have to add the aws-ebs-csi-driver to the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Associating IAM role to Service account
&lt;/h2&gt;

&lt;p&gt;Before we add the aws-ebs-csi-driver, we need to create an IAM role and associate it with a Kubernetes service account (IRSA).&lt;/p&gt;

&lt;p&gt;Let's use an example policy file, which you can download using the command below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

curl &lt;span class="nt"&gt;-sSL&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; ebs-csi-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/master/docs/example-iam-policy.json


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now let's create a new IAM policy with that file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;EBS_CSI_POLICY_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;AmazonEBSCSIPolicy
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"eu-west-1"&lt;/span&gt;
aws iam create-policy &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--region&lt;/span&gt; &lt;span class="nv"&gt;$AWS_REGION&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--policy-name&lt;/span&gt; &lt;span class="nv"&gt;$EBS_CSI_POLICY_NAME&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--policy-document&lt;/span&gt; file://ebs-csi-policy.json

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;EBS_CSI_POLICY_ARN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws &lt;span class="nt"&gt;--region&lt;/span&gt; eu-west-1 iam list-policies &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'Policies[?PolicyName==`'&lt;/span&gt;&lt;span class="nv"&gt;$EBS_CSI_POLICY_NAME&lt;/span&gt;&lt;span class="s1"&gt;'`].Arn'&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$EBS_CSI_POLICY_ARN&lt;/span&gt;
&lt;span class="c"&gt;# arn:aws:iam::2343123456678:policy/AmazonEBSCSIPolicy&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After that, let's attach the new policy to a Kubernetes service account with eksctl (set $EKS_CLUSTER_NAME to your cluster name first).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

eksctl create iamserviceaccount &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster&lt;/span&gt; &lt;span class="nv"&gt;$EKS_CLUSTER_NAME&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; ebs-csi-controller-irsa &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; kube-system &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--attach-policy-arn&lt;/span&gt; &lt;span class="nv"&gt;$EBS_CSI_POLICY_ARN&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--override-existing-serviceaccounts&lt;/span&gt; &lt;span class="nt"&gt;--approve&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And now, we're ready to install aws-ebs-csi-driver!&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up aws-ebs-csi-driver Helm Repo
&lt;/h2&gt;

&lt;p&gt;Assuming that Helm is installed, let's add a new Helm repository as below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo update


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After adding the repository, let's install the aws-ebs-csi-driver with the command below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; aws-ebs-csi-driver &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.2.4 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; kube-system &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; serviceAccount.controller.create&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; serviceAccount.snapshot.create&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;enableVolumeScheduling&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;enableVolumeResizing&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;enableVolumeSnapshot&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; serviceAccount.snapshot.name&lt;span class="o"&gt;=&lt;/span&gt;ebs-csi-controller-irsa &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; serviceAccount.controller.name&lt;span class="o"&gt;=&lt;/span&gt;ebs-csi-controller-irsa &lt;span class="se"&gt;\&lt;/span&gt;
  aws-ebs-csi-driver/aws-ebs-csi-driver


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
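&lt;p&gt;Before moving on, confirm that the driver pods are up in kube-system (the label below is the chart's standard one; adjust it if you overrode the chart labels):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# The EBS CSI controller and node pods should all be Running
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;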
&lt;h2&gt;
  
  
  Creating Namespace =&amp;gt; prometheus
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl create namespace prometheus &lt;span class="nt"&gt;--dry-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;client &lt;span class="nt"&gt;-o&lt;/span&gt; yaml | kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; -


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Setting up Prometheus Helm Repositories
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

helm repo add kube-state-metrics https://kubernetes.github.io/kube-state-metrics
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Setting up Prometheus
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;CHART_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"19.7.2"&lt;/span&gt;
helm upgrade &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--install&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--wait&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    prometheus prometheus-community/prometheus &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--namespace&lt;/span&gt; prometheus &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--version&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CHART_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-f&lt;/span&gt; prometheus/prometheus-values.yaml &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; alertmanager.persistentVolume.storageClass&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gp2"&lt;/span&gt;,server.persistentVolume.storageClass&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gp2"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--debug&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Verify that Prometheus pods are running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;--namespace&lt;/span&gt; prometheus
NAME                                                READY   STATUS    RESTARTS   AGE
prometheus-alertmanager-0                           1/1     Running   0          12d
prometheus-kube-state-metrics-6fcf5978bf-dssx2      1/1     Running   0          12d
prometheus-prometheus-node-exporter-677lp           1/1     Running   0          13d
prometheus-prometheus-node-exporter-mwn7j           1/1     Running   0          13d
prometheus-prometheus-pushgateway-fdb75d75f-5pfdt   1/1     Running   0          12d
prometheus-server-5d957cfd5f-thcvn                  2/2     Running   0          12d


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
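&lt;p&gt;Once the pods are up, you can reach the Prometheus UI locally with a port-forward (the service name and port below are the chart defaults; adjust if yours differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# Forward the Prometheus server service, then open http://localhost:9090 in a browser
kubectl port-forward -n prometheus svc/prometheus-server 9090:80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;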

&lt;p&gt;Here is the Prometheus values file that we can use during installation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;prometheus-values.yaml&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

rbac:
  create: &lt;span class="nb"&gt;true

&lt;/span&gt;podSecurityPolicy:
  enabled: &lt;span class="nb"&gt;false

&lt;/span&gt;imagePullSecrets: &lt;span class="o"&gt;[]&lt;/span&gt;
&lt;span class="c"&gt;# - name: "image-pull-secret"&lt;/span&gt;

&lt;span class="c"&gt;## Define serviceAccount names for components. Defaults to component's fully qualified name.&lt;/span&gt;
&lt;span class="c"&gt;##&lt;/span&gt;
serviceAccounts:
  server:
    create: &lt;span class="nb"&gt;true
    &lt;/span&gt;name: &lt;span class="s2"&gt;""&lt;/span&gt;
    annotations: &lt;span class="o"&gt;{}&lt;/span&gt;

&lt;span class="c"&gt;## Monitors ConfigMap changes and POSTs to a URL&lt;/span&gt;
&lt;span class="c"&gt;## Ref: https://github.com/jimmidyson/configmap-reload&lt;/span&gt;
&lt;span class="c"&gt;##&lt;/span&gt;
configmapReload:
  prometheus:
    &lt;span class="c"&gt;## If false, the configmap-reload container will not be deployed&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    enabled: &lt;span class="nb"&gt;true&lt;/span&gt;

    &lt;span class="c"&gt;## configmap-reload container name&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    name: configmap-reload

    &lt;span class="c"&gt;## configmap-reload container image&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    image:
      repository: jimmidyson/configmap-reload
      tag: v0.8.0
      &lt;span class="c"&gt;# When digest is set to a non-empty value, images will be pulled by digest (regardless of tag value).&lt;/span&gt;
      digest: &lt;span class="s2"&gt;""&lt;/span&gt;
      pullPolicy: IfNotPresent

    &lt;span class="c"&gt;# containerPort: 9533&lt;/span&gt;

    &lt;span class="c"&gt;## Additional configmap-reload container arguments&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    extraArgs: &lt;span class="o"&gt;{}&lt;/span&gt;
    &lt;span class="c"&gt;## Additional configmap-reload volume directories&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    extraVolumeDirs: &lt;span class="o"&gt;[]&lt;/span&gt;


    &lt;span class="c"&gt;## Additional configmap-reload mounts&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    extraConfigmapMounts: &lt;span class="o"&gt;[]&lt;/span&gt;
      &lt;span class="c"&gt;# - name: prometheus-alerts&lt;/span&gt;
      &lt;span class="c"&gt;#   mountPath: /etc/alerts.d&lt;/span&gt;
      &lt;span class="c"&gt;#   subPath: ""&lt;/span&gt;
      &lt;span class="c"&gt;#   configMap: prometheus-alerts&lt;/span&gt;
      &lt;span class="c"&gt;#   readOnly: true&lt;/span&gt;

    &lt;span class="c"&gt;## Security context to be added to configmap-reload container&lt;/span&gt;
    containerSecurityContext: &lt;span class="o"&gt;{}&lt;/span&gt;

    &lt;span class="c"&gt;## configmap-reload resource requests and limits&lt;/span&gt;
    &lt;span class="c"&gt;## Ref: http://kubernetes.io/docs/user-guide/compute-resources/&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    resources: &lt;span class="o"&gt;{}&lt;/span&gt;

server:
  &lt;span class="c"&gt;## Prometheus server container name&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  name: server

  &lt;span class="c"&gt;## Use a ClusterRole (and ClusterRoleBinding)&lt;/span&gt;
  &lt;span class="c"&gt;## - If set to false - we define a RoleBinding in the defined namespaces ONLY&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  &lt;span class="c"&gt;## NB: because we need a Role with nonResourceURL's ("/metrics") - you must get someone with Cluster-admin privileges to define this role for you, before running with this setting enabled.&lt;/span&gt;
  &lt;span class="c"&gt;##     This makes prometheus work - for users who do not have ClusterAdmin privs, but wants prometheus to operate on their own namespaces, instead of clusterwide.&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  &lt;span class="c"&gt;## You MUST also set namespaces to the ones you have access to and want monitored by Prometheus.&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  &lt;span class="c"&gt;# useExistingClusterRoleName: nameofclusterrole&lt;/span&gt;

  &lt;span class="c"&gt;## namespaces to monitor (instead of monitoring all - clusterwide). Needed if you want to run without Cluster-admin privileges.&lt;/span&gt;
  &lt;span class="c"&gt;# namespaces:&lt;/span&gt;
  &lt;span class="c"&gt;#   - yournamespace&lt;/span&gt;

  &lt;span class="c"&gt;# sidecarContainers - add more containers to prometheus server&lt;/span&gt;
  &lt;span class="c"&gt;# Key/Value where Key is the sidecar `- name: &amp;lt;Key&amp;gt;`&lt;/span&gt;
  &lt;span class="c"&gt;# Example:&lt;/span&gt;
  &lt;span class="c"&gt;#   sidecarContainers:&lt;/span&gt;
  &lt;span class="c"&gt;#      webserver:&lt;/span&gt;
  &lt;span class="c"&gt;#        image: nginx&lt;/span&gt;
  sidecarContainers: &lt;span class="o"&gt;{}&lt;/span&gt;

  &lt;span class="c"&gt;# sidecarTemplateValues - context to be used in template for sidecarContainers&lt;/span&gt;
  &lt;span class="c"&gt;# Example:&lt;/span&gt;
  &lt;span class="c"&gt;#   sidecarTemplateValues: *your-custom-globals&lt;/span&gt;
  &lt;span class="c"&gt;#   sidecarContainers:&lt;/span&gt;
  &lt;span class="c"&gt;#     webserver: |-&lt;/span&gt;
  &lt;span class="c"&gt;#       {{ include "webserver-container-template" . }}&lt;/span&gt;
  &lt;span class="c"&gt;# Template for `webserver-container-template` might looks like this:&lt;/span&gt;
  &lt;span class="c"&gt;#   image: "{{ .Values.server.sidecarTemplateValues.repository }}:{{ .Values.server.sidecarTemplateValues.tag }}"&lt;/span&gt;
  &lt;span class="c"&gt;#   ...&lt;/span&gt;
  &lt;span class="c"&gt;#&lt;/span&gt;
  sidecarTemplateValues: &lt;span class="o"&gt;{}&lt;/span&gt;

  &lt;span class="c"&gt;## Prometheus server container image&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  image:
    repository: quay.io/prometheus/prometheus
    &lt;span class="c"&gt;# if not set appVersion field from Chart.yaml is used&lt;/span&gt;
    tag: &lt;span class="s2"&gt;""&lt;/span&gt;
    &lt;span class="c"&gt;# When digest is set to a non-empty value, images will be pulled by digest (regardless of tag value).&lt;/span&gt;
    digest: &lt;span class="s2"&gt;""&lt;/span&gt;
    pullPolicy: IfNotPresent

  &lt;span class="c"&gt;## prometheus server priorityClassName&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  priorityClassName: &lt;span class="s2"&gt;""&lt;/span&gt;

  &lt;span class="c"&gt;## EnableServiceLinks indicates whether information about services should be injected&lt;/span&gt;
  &lt;span class="c"&gt;## into pod's environment variables, matching the syntax of Docker links.&lt;/span&gt;
  &lt;span class="c"&gt;## WARNING: the field is unsupported and will be skipped in K8s prior to v1.13.0.&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  enableServiceLinks: &lt;span class="nb"&gt;true&lt;/span&gt;

  &lt;span class="c"&gt;## The URL prefix at which the container can be accessed. Useful in the case the '-web.external-url' includes a slug&lt;/span&gt;
  &lt;span class="c"&gt;## so that the various internal URLs are still able to access as they are in the default case.&lt;/span&gt;
  &lt;span class="c"&gt;## (Optional)&lt;/span&gt;
  prefixURL: &lt;span class="s2"&gt;""&lt;/span&gt;

  &lt;span class="c"&gt;## External URL which can access prometheus&lt;/span&gt;
  &lt;span class="c"&gt;## Maybe same with Ingress host name&lt;/span&gt;
  baseURL: &lt;span class="s2"&gt;""&lt;/span&gt;

  &lt;span class="c"&gt;## Additional server container environment variables&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  &lt;span class="c"&gt;## You specify this manually like you would a raw deployment manifest.&lt;/span&gt;
  &lt;span class="c"&gt;## This means you can bind in environment variables from secrets.&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  &lt;span class="c"&gt;## e.g. static environment variable:&lt;/span&gt;
  &lt;span class="c"&gt;##  - name: DEMO_GREETING&lt;/span&gt;
  &lt;span class="c"&gt;##    value: "Hello from the environment"&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  &lt;span class="c"&gt;## e.g. secret environment variable:&lt;/span&gt;
  &lt;span class="c"&gt;## - name: USERNAME&lt;/span&gt;
  &lt;span class="c"&gt;##   valueFrom:&lt;/span&gt;
  &lt;span class="c"&gt;##     secretKeyRef:&lt;/span&gt;
  &lt;span class="c"&gt;##       name: mysecret&lt;/span&gt;
  &lt;span class="c"&gt;##       key: username&lt;/span&gt;
  &lt;span class="nb"&gt;env&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;

  &lt;span class="c"&gt;# List of flags to override default parameters, e.g:&lt;/span&gt;
  &lt;span class="c"&gt;# - --enable-feature=agent&lt;/span&gt;
  &lt;span class="c"&gt;# - --storage.agent.retention.max-time=30m&lt;/span&gt;
  defaultFlagsOverride: &lt;span class="o"&gt;[]&lt;/span&gt;

  extraFlags:
    - web.enable-lifecycle
    &lt;span class="c"&gt;## web.enable-admin-api flag controls access to the administrative HTTP API which includes functionality such as&lt;/span&gt;
    &lt;span class="c"&gt;## deleting time series. This is disabled by default.&lt;/span&gt;
    &lt;span class="c"&gt;# - web.enable-admin-api&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    &lt;span class="c"&gt;## storage.tsdb.no-lockfile flag controls BD locking&lt;/span&gt;
    &lt;span class="c"&gt;# - storage.tsdb.no-lockfile&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    &lt;span class="c"&gt;## storage.tsdb.wal-compression flag enables compression of the write-ahead log (WAL)&lt;/span&gt;
    &lt;span class="c"&gt;# - storage.tsdb.wal-compression&lt;/span&gt;

  &lt;span class="c"&gt;## Path to a configuration file on prometheus server container FS&lt;/span&gt;
  configPath: /etc/config/prometheus.yml

  &lt;span class="c"&gt;### The data directory used by prometheus to set --storage.tsdb.path&lt;/span&gt;
  &lt;span class="c"&gt;### When empty server.persistentVolume.mountPath is used instead&lt;/span&gt;
  storagePath: &lt;span class="s2"&gt;""&lt;/span&gt;

  global:
    &lt;span class="c"&gt;## How frequently to scrape targets by default&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    scrape_interval: 1m
    &lt;span class="c"&gt;## How long until a scrape request times out&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    scrape_timeout: 10s
    &lt;span class="c"&gt;## How frequently to evaluate rules&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    evaluation_interval: 1m
  &lt;span class="c"&gt;## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  remoteWrite: &lt;span class="o"&gt;[]&lt;/span&gt;
  &lt;span class="c"&gt;## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_read&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  remoteRead: &lt;span class="o"&gt;[]&lt;/span&gt;

  &lt;span class="c"&gt;## Custom HTTP headers for Liveness/Readiness/Startup Probe&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  &lt;span class="c"&gt;## Useful for providing HTTP Basic Auth to healthchecks&lt;/span&gt;
  probeHeaders: &lt;span class="o"&gt;[]&lt;/span&gt;
    &lt;span class="c"&gt;# - name: "Authorization"&lt;/span&gt;
    &lt;span class="c"&gt;#   value: "Bearer ABCDEabcde12345"&lt;/span&gt;

  &lt;span class="c"&gt;## Additional Prometheus server container arguments&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  extraArgs: &lt;span class="o"&gt;{}&lt;/span&gt;

  &lt;span class="c"&gt;## Additional InitContainers to initialize the pod&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  extraInitContainers: &lt;span class="o"&gt;[]&lt;/span&gt;

  &lt;span class="c"&gt;## Additional Prometheus server Volume mounts&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  extraVolumeMounts: &lt;span class="o"&gt;[]&lt;/span&gt;

  &lt;span class="c"&gt;## Additional Prometheus server Volumes&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  extraVolumes: &lt;span class="o"&gt;[]&lt;/span&gt;

  &lt;span class="c"&gt;## Additional Prometheus server hostPath mounts&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  extraHostPathMounts: &lt;span class="o"&gt;[]&lt;/span&gt;
    &lt;span class="c"&gt;# - name: certs-dir&lt;/span&gt;
    &lt;span class="c"&gt;#   mountPath: /etc/kubernetes/certs&lt;/span&gt;
    &lt;span class="c"&gt;#   subPath: ""&lt;/span&gt;
    &lt;span class="c"&gt;#   hostPath: /etc/kubernetes/certs&lt;/span&gt;
    &lt;span class="c"&gt;#   readOnly: true&lt;/span&gt;

  extraConfigmapMounts: &lt;span class="o"&gt;[]&lt;/span&gt;
    &lt;span class="c"&gt;# - name: certs-configmap&lt;/span&gt;
    &lt;span class="c"&gt;#   mountPath: /prometheus&lt;/span&gt;
    &lt;span class="c"&gt;#   subPath: ""&lt;/span&gt;
    &lt;span class="c"&gt;#   configMap: certs-configmap&lt;/span&gt;
    &lt;span class="c"&gt;#   readOnly: true&lt;/span&gt;

  &lt;span class="c"&gt;## Additional Prometheus server Secret mounts&lt;/span&gt;
  &lt;span class="c"&gt;# Defines additional mounts with secrets. Secrets must be manually created in the namespace.&lt;/span&gt;
  extraSecretMounts: &lt;span class="o"&gt;[]&lt;/span&gt;
    &lt;span class="c"&gt;# - name: secret-files&lt;/span&gt;
    &lt;span class="c"&gt;#   mountPath: /etc/secrets&lt;/span&gt;
    &lt;span class="c"&gt;#   subPath: ""&lt;/span&gt;
    &lt;span class="c"&gt;#   secretName: prom-secret-files&lt;/span&gt;
    &lt;span class="c"&gt;#   readOnly: true&lt;/span&gt;

  &lt;span class="c"&gt;## ConfigMap override where fullname is {{.Release.Name}}-{{.Values.server.configMapOverrideName}}&lt;/span&gt;
  &lt;span class="c"&gt;## Defining configMapOverrideName will cause templates/server-configmap.yaml&lt;/span&gt;
  &lt;span class="c"&gt;## to NOT generate a ConfigMap resource&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  configMapOverrideName: &lt;span class="s2"&gt;""&lt;/span&gt;

  &lt;span class="c"&gt;## Extra labels for Prometheus server ConfigMap (ConfigMap that holds serverFiles)&lt;/span&gt;
  extraConfigmapLabels: &lt;span class="o"&gt;{}&lt;/span&gt;

  ingress:
    &lt;span class="c"&gt;## If true, Prometheus server Ingress will be created&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    enabled: &lt;span class="nb"&gt;false&lt;/span&gt;

    &lt;span class="c"&gt;# For Kubernetes &amp;gt;= 1.18 you should specify the ingress-controller via the field ingressClassName&lt;/span&gt;
    &lt;span class="c"&gt;# See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress&lt;/span&gt;
    &lt;span class="c"&gt;# ingressClassName: nginx&lt;/span&gt;

    &lt;span class="c"&gt;## Prometheus server Ingress annotations&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    annotations: &lt;span class="o"&gt;{}&lt;/span&gt;
    &lt;span class="c"&gt;#   kubernetes.io/ingress.class: nginx&lt;/span&gt;
    &lt;span class="c"&gt;#   kubernetes.io/tls-acme: 'true'&lt;/span&gt;

    &lt;span class="c"&gt;## Prometheus server Ingress additional labels&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    extraLabels: &lt;span class="o"&gt;{}&lt;/span&gt;

    &lt;span class="c"&gt;## Prometheus server Ingress hostnames with optional path&lt;/span&gt;
    &lt;span class="c"&gt;## Must be provided if Ingress is enabled&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    hosts: &lt;span class="o"&gt;[]&lt;/span&gt;
    &lt;span class="c"&gt;#   - prometheus.domain.com&lt;/span&gt;
    &lt;span class="c"&gt;#   - domain.com/prometheus&lt;/span&gt;

    path: /

    &lt;span class="c"&gt;# pathType is only for k8s &amp;gt;= 1.18&lt;/span&gt;
    pathType: Prefix

    &lt;span class="c"&gt;## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.&lt;/span&gt;
    extraPaths: &lt;span class="o"&gt;[]&lt;/span&gt;
    &lt;span class="c"&gt;# - path: /*&lt;/span&gt;
    &lt;span class="c"&gt;#   backend:&lt;/span&gt;
    &lt;span class="c"&gt;#     serviceName: ssl-redirect&lt;/span&gt;
    &lt;span class="c"&gt;#     servicePort: use-annotation&lt;/span&gt;

    &lt;span class="c"&gt;## Prometheus server Ingress TLS configuration&lt;/span&gt;
    &lt;span class="c"&gt;## Secrets must be manually created in the namespace&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    tls: &lt;span class="o"&gt;[]&lt;/span&gt;
    &lt;span class="c"&gt;#   - secretName: prometheus-server-tls&lt;/span&gt;
    &lt;span class="c"&gt;#     hosts:&lt;/span&gt;
    &lt;span class="c"&gt;#       - prometheus.domain.com&lt;/span&gt;

  &lt;span class="c"&gt;## Server Deployment Strategy type&lt;/span&gt;
  strategy:
    &lt;span class="nb"&gt;type&lt;/span&gt;: Recreate

  &lt;span class="c"&gt;## hostAliases allows adding entries to /etc/hosts inside the containers&lt;/span&gt;
  hostAliases: &lt;span class="o"&gt;[]&lt;/span&gt;
  &lt;span class="c"&gt;#   - ip: "127.0.0.1"&lt;/span&gt;
  &lt;span class="c"&gt;#     hostnames:&lt;/span&gt;
  &lt;span class="c"&gt;#       - "example.com"&lt;/span&gt;

  &lt;span class="c"&gt;## Node tolerations for server scheduling to nodes with taints&lt;/span&gt;
  &lt;span class="c"&gt;## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  tolerations: &lt;span class="o"&gt;[]&lt;/span&gt;
    &lt;span class="c"&gt;# - key: "key"&lt;/span&gt;
    &lt;span class="c"&gt;#   operator: "Equal|Exists"&lt;/span&gt;
    &lt;span class="c"&gt;#   value: "value"&lt;/span&gt;
    &lt;span class="c"&gt;#   effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"&lt;/span&gt;

  &lt;span class="c"&gt;## Node labels for Prometheus server pod assignment&lt;/span&gt;
  &lt;span class="c"&gt;## Ref: https://kubernetes.io/docs/user-guide/node-selection/&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  nodeSelector: &lt;span class="o"&gt;{}&lt;/span&gt;

  &lt;span class="c"&gt;## Pod affinity&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  &lt;span class="c"&gt;# affinity: {}&lt;/span&gt;
  &lt;span class="c"&gt;# affinity:&lt;/span&gt;
    &lt;span class="c"&gt;# nodeAffinity:&lt;/span&gt;
    &lt;span class="c"&gt;#   requiredDuringSchedulingIgnoredDuringExecution:&lt;/span&gt;
    &lt;span class="c"&gt;#     nodeSelectorTerms:&lt;/span&gt;
    &lt;span class="c"&gt;#       - matchExpressions:&lt;/span&gt;
    &lt;span class="c"&gt;#         - key: eks.amazonaws.com/compute-type&lt;/span&gt;
    &lt;span class="c"&gt;#           operator: NotIn&lt;/span&gt;
    &lt;span class="c"&gt;#           values:&lt;/span&gt;
    &lt;span class="c"&gt;#             - fargate&lt;/span&gt;
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/os
              operator: In
              values:
              - linux
            - key: kubernetes.io/arch
              operator: In
              values:
              - amd64
              - arm64
            - key: eks.amazonaws.com/compute-type
              operator: NotIn
              values:
              - fargate
  &lt;span class="c"&gt;## PodDisruptionBudget settings&lt;/span&gt;
  &lt;span class="c"&gt;## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  podDisruptionBudget:
    enabled: &lt;span class="nb"&gt;false
    &lt;/span&gt;maxUnavailable: 1

  &lt;span class="c"&gt;## Use an alternate scheduler, e.g. "stork".&lt;/span&gt;
  &lt;span class="c"&gt;## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  &lt;span class="c"&gt;# schedulerName:&lt;/span&gt;

  persistentVolume:
    &lt;span class="c"&gt;## If true, Prometheus server will create/use a Persistent Volume Claim&lt;/span&gt;
    &lt;span class="c"&gt;## If false, use emptyDir&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    enabled: &lt;span class="nb"&gt;true&lt;/span&gt;

    &lt;span class="c"&gt;## Prometheus server data Persistent Volume access modes&lt;/span&gt;
    &lt;span class="c"&gt;## Must match those of existing PV or dynamic provisioner&lt;/span&gt;
    &lt;span class="c"&gt;## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    accessModes:
      - ReadWriteOnce

    &lt;span class="c"&gt;## Prometheus server data Persistent Volume labels&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    labels: &lt;span class="o"&gt;{}&lt;/span&gt;

    &lt;span class="c"&gt;## Prometheus server data Persistent Volume annotations&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    annotations: &lt;span class="o"&gt;{}&lt;/span&gt;

    &lt;span class="c"&gt;## Prometheus server data Persistent Volume existing claim name&lt;/span&gt;
    &lt;span class="c"&gt;## Requires server.persistentVolume.enabled: true&lt;/span&gt;
    &lt;span class="c"&gt;## If defined, PVC must be created manually before volume will be bound&lt;/span&gt;
    existingClaim: &lt;span class="s2"&gt;""&lt;/span&gt;

    &lt;span class="c"&gt;## Prometheus server data Persistent Volume mount root path&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    mountPath: /data

    &lt;span class="c"&gt;## Prometheus server data Persistent Volume size&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    size: 8Gi

    &lt;span class="c"&gt;## Prometheus server data Persistent Volume Storage Class&lt;/span&gt;
    &lt;span class="c"&gt;## If defined, storageClassName: &amp;lt;storageClass&amp;gt;&lt;/span&gt;
    &lt;span class="c"&gt;## If set to "-", storageClassName: "", which disables dynamic provisioning&lt;/span&gt;
    &lt;span class="c"&gt;## If undefined (the default) or set to null, no storageClassName spec is&lt;/span&gt;
    &lt;span class="c"&gt;##   set, choosing the default provisioner.  (gp2 on AWS, standard on&lt;/span&gt;
    &lt;span class="c"&gt;##   GKE, AWS &amp;amp; OpenStack)&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    &lt;span class="c"&gt;# storageClass: "-"&lt;/span&gt;

    &lt;span class="c"&gt;## Prometheus server data Persistent Volume Binding Mode&lt;/span&gt;
    &lt;span class="c"&gt;## If defined, volumeBindingMode: &amp;lt;volumeBindingMode&amp;gt;&lt;/span&gt;
    &lt;span class="c"&gt;## If undefined (the default) or set to null, no volumeBindingMode spec is&lt;/span&gt;
    &lt;span class="c"&gt;##   set, choosing the default mode.&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    &lt;span class="c"&gt;# volumeBindingMode: ""&lt;/span&gt;

    &lt;span class="c"&gt;## Subdirectory of Prometheus server data Persistent Volume to mount&lt;/span&gt;
    &lt;span class="c"&gt;## Useful if the volume's root directory is not empty&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    subPath: &lt;span class="s2"&gt;""&lt;/span&gt;

    &lt;span class="c"&gt;## Persistent Volume Claim Selector&lt;/span&gt;
    &lt;span class="c"&gt;## Useful if Persistent Volumes have been provisioned in advance&lt;/span&gt;
    &lt;span class="c"&gt;## Ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#selector&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    &lt;span class="c"&gt;# selector:&lt;/span&gt;
    &lt;span class="c"&gt;#  matchLabels:&lt;/span&gt;
    &lt;span class="c"&gt;#    release: "stable"&lt;/span&gt;
    &lt;span class="c"&gt;#  matchExpressions:&lt;/span&gt;
    &lt;span class="c"&gt;#    - { key: environment, operator: In, values: [ dev ] }&lt;/span&gt;

    &lt;span class="c"&gt;## Persistent Volume Name&lt;/span&gt;
    &lt;span class="c"&gt;## Useful if Persistent Volumes have been provisioned in advance and you want to use a specific one&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    &lt;span class="c"&gt;# volumeName: ""&lt;/span&gt;

  emptyDir:
    &lt;span class="c"&gt;## Prometheus server emptyDir volume size limit&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    sizeLimit: &lt;span class="s2"&gt;""&lt;/span&gt;

  &lt;span class="c"&gt;## Annotations to be added to Prometheus server pods&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  podAnnotations: &lt;span class="o"&gt;{}&lt;/span&gt;
    &lt;span class="c"&gt;# iam.amazonaws.com/role: prometheus&lt;/span&gt;

  &lt;span class="c"&gt;## Labels to be added to Prometheus server pods&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  podLabels: &lt;span class="o"&gt;{}&lt;/span&gt;

  &lt;span class="c"&gt;## Prometheus AlertManager configuration&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  alertmanagers: &lt;span class="o"&gt;[]&lt;/span&gt;

  &lt;span class="c"&gt;## Specify if a Pod Security Policy for node-exporter must be created&lt;/span&gt;
  &lt;span class="c"&gt;## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  podSecurityPolicy:
    annotations: &lt;span class="o"&gt;{}&lt;/span&gt;
      &lt;span class="c"&gt;## Specify pod annotations&lt;/span&gt;
      &lt;span class="c"&gt;## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor&lt;/span&gt;
      &lt;span class="c"&gt;## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp&lt;/span&gt;
      &lt;span class="c"&gt;## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl&lt;/span&gt;
      &lt;span class="c"&gt;##&lt;/span&gt;
      &lt;span class="c"&gt;# seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'&lt;/span&gt;
      &lt;span class="c"&gt;# seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'&lt;/span&gt;
      &lt;span class="c"&gt;# apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'&lt;/span&gt;

  &lt;span class="c"&gt;## Use a StatefulSet if replicaCount needs to be greater than 1 (see below)&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  replicaCount: 1

  &lt;span class="c"&gt;## Annotations to be added to deployment&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  deploymentAnnotations: &lt;span class="o"&gt;{}&lt;/span&gt;

  statefulSet:
    &lt;span class="c"&gt;## If true, use a statefulset instead of a deployment for pod management.&lt;/span&gt;
    &lt;span class="c"&gt;## This allows to scale replicas to more than 1 pod&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    enabled: &lt;span class="nb"&gt;false

    &lt;/span&gt;annotations: &lt;span class="o"&gt;{}&lt;/span&gt;
    labels: &lt;span class="o"&gt;{}&lt;/span&gt;
    podManagementPolicy: OrderedReady

    &lt;span class="c"&gt;## Alertmanager headless service to use for the statefulset&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    headless:
      annotations: &lt;span class="o"&gt;{}&lt;/span&gt;
      labels: &lt;span class="o"&gt;{}&lt;/span&gt;
      servicePort: 80
      &lt;span class="c"&gt;## Enable gRPC port on service to allow auto discovery with thanos-querier&lt;/span&gt;
      gRPC:
        enabled: &lt;span class="nb"&gt;false
        &lt;/span&gt;servicePort: 10901
        &lt;span class="c"&gt;# nodePort: 10901&lt;/span&gt;

  &lt;span class="c"&gt;## Prometheus server readiness and liveness probe initial delay and timeout&lt;/span&gt;
  &lt;span class="c"&gt;## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  tcpSocketProbeEnabled: &lt;span class="nb"&gt;false
  &lt;/span&gt;probeScheme: HTTP
  readinessProbeInitialDelay: 30
  readinessProbePeriodSeconds: 5
  readinessProbeTimeout: 4
  readinessProbeFailureThreshold: 3
  readinessProbeSuccessThreshold: 1
  livenessProbeInitialDelay: 30
  livenessProbePeriodSeconds: 15
  livenessProbeTimeout: 10
  livenessProbeFailureThreshold: 3
  livenessProbeSuccessThreshold: 1
  startupProbe:
    enabled: &lt;span class="nb"&gt;false
    &lt;/span&gt;periodSeconds: 5
    failureThreshold: 30
    timeoutSeconds: 10

  &lt;span class="c"&gt;## Prometheus server resource requests and limits&lt;/span&gt;
  &lt;span class="c"&gt;## Ref: http://kubernetes.io/docs/user-guide/compute-resources/&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  resources: &lt;span class="o"&gt;{}&lt;/span&gt;
    &lt;span class="c"&gt;# limits:&lt;/span&gt;
    &lt;span class="c"&gt;#   cpu: 500m&lt;/span&gt;
    &lt;span class="c"&gt;#   memory: 512Mi&lt;/span&gt;
    &lt;span class="c"&gt;# requests:&lt;/span&gt;
    &lt;span class="c"&gt;#   cpu: 500m&lt;/span&gt;
    &lt;span class="c"&gt;#   memory: 512Mi&lt;/span&gt;

  &lt;span class="c"&gt;# Required for use in managed kubernetes clusters (such as AWS EKS) with custom CNI (such as calico),&lt;/span&gt;
  &lt;span class="c"&gt;# because control-plane managed by AWS cannot communicate with pods' IP CIDR and admission webhooks are not working&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  hostNetwork: &lt;span class="nb"&gt;false&lt;/span&gt;

  &lt;span class="c"&gt;# When hostNetwork is enabled, this will set to ClusterFirstWithHostNet automatically&lt;/span&gt;
  dnsPolicy: ClusterFirst

  &lt;span class="c"&gt;# Use hostPort&lt;/span&gt;
  &lt;span class="c"&gt;# hostPort: 9090&lt;/span&gt;

  &lt;span class="c"&gt;## Vertical Pod Autoscaler config&lt;/span&gt;
  &lt;span class="c"&gt;## Ref: https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler&lt;/span&gt;
  verticalAutoscaler:
    &lt;span class="c"&gt;## If true a VPA object will be created for the controller (either StatefulSet or Deployemnt, based on above configs)&lt;/span&gt;
    enabled: &lt;span class="nb"&gt;false&lt;/span&gt;
    &lt;span class="c"&gt;# updateMode: "Auto"&lt;/span&gt;
    &lt;span class="c"&gt;# containerPolicies:&lt;/span&gt;
    &lt;span class="c"&gt;# - containerName: 'prometheus-server'&lt;/span&gt;

  &lt;span class="c"&gt;# Custom DNS configuration to be added to prometheus server pods&lt;/span&gt;
  dnsConfig: &lt;span class="o"&gt;{}&lt;/span&gt;
    &lt;span class="c"&gt;# nameservers:&lt;/span&gt;
    &lt;span class="c"&gt;#   - 1.2.3.4&lt;/span&gt;
    &lt;span class="c"&gt;# searches:&lt;/span&gt;
    &lt;span class="c"&gt;#   - ns1.svc.cluster-domain.example&lt;/span&gt;
    &lt;span class="c"&gt;#   - my.dns.search.suffix&lt;/span&gt;
    &lt;span class="c"&gt;# options:&lt;/span&gt;
    &lt;span class="c"&gt;#   - name: ndots&lt;/span&gt;
    &lt;span class="c"&gt;#     value: "2"&lt;/span&gt;
  &lt;span class="c"&gt;#   - name: edns0&lt;/span&gt;

  &lt;span class="c"&gt;## Security context to be added to server pods&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  securityContext:
    runAsUser: 65534
    runAsNonRoot: &lt;span class="nb"&gt;true
    &lt;/span&gt;runAsGroup: 65534
    fsGroup: 65534

  &lt;span class="c"&gt;## Security context to be added to server container&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  containerSecurityContext: &lt;span class="o"&gt;{}&lt;/span&gt;

  service:
    &lt;span class="c"&gt;## If false, no Service will be created for the Prometheus server&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    enabled: &lt;span class="nb"&gt;true

    &lt;/span&gt;annotations: &lt;span class="o"&gt;{}&lt;/span&gt;
    labels: &lt;span class="o"&gt;{}&lt;/span&gt;
    clusterIP: &lt;span class="s2"&gt;""&lt;/span&gt;

    &lt;span class="c"&gt;## List of IP addresses at which the Prometheus server service is available&lt;/span&gt;
    &lt;span class="c"&gt;## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips&lt;/span&gt;
    &lt;span class="c"&gt;##&lt;/span&gt;
    externalIPs: &lt;span class="o"&gt;[]&lt;/span&gt;

    loadBalancerIP: &lt;span class="s2"&gt;""&lt;/span&gt;
    loadBalancerSourceRanges: &lt;span class="o"&gt;[]&lt;/span&gt;
    servicePort: 80
    sessionAffinity: None
    &lt;span class="nb"&gt;type&lt;/span&gt;: ClusterIP

    &lt;span class="c"&gt;## Enable gRPC port on service to allow auto discovery with thanos-querier&lt;/span&gt;
    gRPC:
      enabled: &lt;span class="nb"&gt;false
      &lt;/span&gt;servicePort: 10901
      &lt;span class="c"&gt;# nodePort: 10901&lt;/span&gt;

    &lt;span class="c"&gt;## If using a statefulSet (statefulSet.enabled=true), configure the&lt;/span&gt;
    &lt;span class="c"&gt;## service to connect to a specific replica to have a consistent view&lt;/span&gt;
    &lt;span class="c"&gt;## of the data.&lt;/span&gt;
    statefulsetReplica:
      enabled: &lt;span class="nb"&gt;false
      &lt;/span&gt;replica: 0

  &lt;span class="c"&gt;## Prometheus server pod termination grace period&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  terminationGracePeriodSeconds: 300

  &lt;span class="c"&gt;## Prometheus data retention period (default if not specified is 15 days)&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  retention: &lt;span class="s2"&gt;"15d"&lt;/span&gt;

&lt;span class="c"&gt;## Prometheus server ConfigMap entries for rule files (allow prometheus labels interpolation)&lt;/span&gt;
ruleFiles: &lt;span class="o"&gt;{}&lt;/span&gt;

&lt;span class="c"&gt;## Prometheus server ConfigMap entries&lt;/span&gt;
&lt;span class="c"&gt;##&lt;/span&gt;
serverFiles:
  &lt;span class="c"&gt;## Alerts configuration&lt;/span&gt;
  &lt;span class="c"&gt;## Ref: https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/&lt;/span&gt;
  alerting_rules.yml: &lt;span class="o"&gt;{}&lt;/span&gt;
  &lt;span class="c"&gt;# groups:&lt;/span&gt;
  &lt;span class="c"&gt;#   - name: Instances&lt;/span&gt;
  &lt;span class="c"&gt;#     rules:&lt;/span&gt;
  &lt;span class="c"&gt;#       - alert: InstanceDown&lt;/span&gt;
  &lt;span class="c"&gt;#         expr: up == 0&lt;/span&gt;
  &lt;span class="c"&gt;#         for: 5m&lt;/span&gt;
  &lt;span class="c"&gt;#         labels:&lt;/span&gt;
  &lt;span class="c"&gt;#           severity: page&lt;/span&gt;
  &lt;span class="c"&gt;#         annotations:&lt;/span&gt;
  &lt;span class="c"&gt;#           description: '{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes.'&lt;/span&gt;
  &lt;span class="c"&gt;#           summary: 'Instance {{ $labels.instance }} down'&lt;/span&gt;
  &lt;span class="c"&gt;## DEPRECATED DEFAULT VALUE, unless explicitly naming your files, please use alerting_rules.yml&lt;/span&gt;
  alerts: &lt;span class="o"&gt;{}&lt;/span&gt;

  &lt;span class="c"&gt;## Records configuration&lt;/span&gt;
  &lt;span class="c"&gt;## Ref: https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/&lt;/span&gt;
  recording_rules.yml: &lt;span class="o"&gt;{}&lt;/span&gt;
  &lt;span class="c"&gt;## DEPRECATED DEFAULT VALUE, unless explicitly naming your files, please use recording_rules.yml&lt;/span&gt;
  rules: &lt;span class="o"&gt;{}&lt;/span&gt;

  prometheus.yml:
    rule_files:
      - /etc/config/recording_rules.yml
      - /etc/config/alerting_rules.yml
    &lt;span class="c"&gt;## Below two files are DEPRECATED will be removed from this default values file&lt;/span&gt;
      - /etc/config/rules
      - /etc/config/alerts

    scrape_configs:
      - job_name: prometheus
        static_configs:
          - targets:
            - localhost:9090

      &lt;span class="c"&gt;# A scrape configuration for running Prometheus on a Kubernetes cluster.&lt;/span&gt;
      &lt;span class="c"&gt;# This uses separate scrape configs for cluster components (i.e. API server, node)&lt;/span&gt;
      &lt;span class="c"&gt;# and services to allow each to use different authentication configs.&lt;/span&gt;
      &lt;span class="c"&gt;#&lt;/span&gt;
      &lt;span class="c"&gt;# Kubernetes labels will be added as Prometheus labels on metrics via the&lt;/span&gt;
      &lt;span class="c"&gt;# `labelmap` relabeling action.&lt;/span&gt;

      &lt;span class="c"&gt;# Scrape config for API servers.&lt;/span&gt;
      &lt;span class="c"&gt;#&lt;/span&gt;
      &lt;span class="c"&gt;# Kubernetes exposes API servers as endpoints to the default/kubernetes&lt;/span&gt;
      &lt;span class="c"&gt;# service so this uses `endpoints` role and uses relabelling to only keep&lt;/span&gt;
      &lt;span class="c"&gt;# the endpoints associated with the default/kubernetes service using the&lt;/span&gt;
      &lt;span class="c"&gt;# default named port `https`. This works for single API server deployments as&lt;/span&gt;
      &lt;span class="c"&gt;# well as HA API server deployments.&lt;/span&gt;
      - job_name: &lt;span class="s1"&gt;'kubernetes-apiservers'&lt;/span&gt;

        kubernetes_sd_configs:
          - role: endpoints

        &lt;span class="c"&gt;# Default to scraping over https. If required, just disable this or change to&lt;/span&gt;
        &lt;span class="c"&gt;# `http`.&lt;/span&gt;
        scheme: https

        &lt;span class="c"&gt;# This TLS &amp;amp; bearer token file config is used to connect to the actual scrape&lt;/span&gt;
        &lt;span class="c"&gt;# endpoints for cluster components. This is separate to discovery auth&lt;/span&gt;
        &lt;span class="c"&gt;# configuration because discovery &amp;amp; scraping are two separate concerns in&lt;/span&gt;
        &lt;span class="c"&gt;# Prometheus. The discovery auth config is automatic if Prometheus runs inside&lt;/span&gt;
        &lt;span class="c"&gt;# the cluster. Otherwise, more config options have to be provided within the&lt;/span&gt;
        &lt;span class="c"&gt;# &amp;lt;kubernetes_sd_config&amp;gt;.&lt;/span&gt;
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          &lt;span class="c"&gt;# If your node certificates are self-signed or use a different CA to the&lt;/span&gt;
          &lt;span class="c"&gt;# master CA, then disable certificate verification below. Note that&lt;/span&gt;
          &lt;span class="c"&gt;# certificate verification is an integral part of a secure infrastructure&lt;/span&gt;
          &lt;span class="c"&gt;# so this should only be disabled in a controlled environment. You can&lt;/span&gt;
          &lt;span class="c"&gt;# disable certificate verification by uncommenting the line below.&lt;/span&gt;
          &lt;span class="c"&gt;#&lt;/span&gt;
          insecure_skip_verify: &lt;span class="nb"&gt;true
        &lt;/span&gt;bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

        &lt;span class="c"&gt;# Keep only the default/kubernetes service endpoints for the https port. This&lt;/span&gt;
        &lt;span class="c"&gt;# will add targets for each API server which Kubernetes adds an endpoint to&lt;/span&gt;
        &lt;span class="c"&gt;# the default/kubernetes service.&lt;/span&gt;
        relabel_configs:
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: default&lt;span class="p"&gt;;&lt;/span&gt;kubernetes&lt;span class="p"&gt;;&lt;/span&gt;https

      - job_name: &lt;span class="s1"&gt;'kubernetes-nodes'&lt;/span&gt;

        &lt;span class="c"&gt;# Default to scraping over https. If required, just disable this or change to&lt;/span&gt;
        &lt;span class="c"&gt;# `http`.&lt;/span&gt;
        scheme: https

        &lt;span class="c"&gt;# This TLS &amp;amp; bearer token file config is used to connect to the actual scrape&lt;/span&gt;
        &lt;span class="c"&gt;# endpoints for cluster components. This is separate to discovery auth&lt;/span&gt;
        &lt;span class="c"&gt;# configuration because discovery &amp;amp; scraping are two separate concerns in&lt;/span&gt;
        &lt;span class="c"&gt;# Prometheus. The discovery auth config is automatic if Prometheus runs inside&lt;/span&gt;
        &lt;span class="c"&gt;# the cluster. Otherwise, more config options have to be provided within the&lt;/span&gt;
        &lt;span class="c"&gt;# &amp;lt;kubernetes_sd_config&amp;gt;.&lt;/span&gt;
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          &lt;span class="c"&gt;# If your node certificates are self-signed or use a different CA to the&lt;/span&gt;
          &lt;span class="c"&gt;# master CA, then disable certificate verification below. Note that&lt;/span&gt;
          &lt;span class="c"&gt;# certificate verification is an integral part of a secure infrastructure&lt;/span&gt;
          &lt;span class="c"&gt;# so this should only be disabled in a controlled environment. You can&lt;/span&gt;
          &lt;span class="c"&gt;# disable certificate verification by uncommenting the line below.&lt;/span&gt;
          &lt;span class="c"&gt;#&lt;/span&gt;
          insecure_skip_verify: &lt;span class="nb"&gt;true
        &lt;/span&gt;bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

        kubernetes_sd_configs:
          - role: node

        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_&lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_node_name]
            regex: &lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
            target_label: __metrics_path__
            replacement: /api/v1/nodes/&lt;span class="nv"&gt;$1&lt;/span&gt;/proxy/metrics


      - job_name: &lt;span class="s1"&gt;'kubernetes-nodes-cadvisor'&lt;/span&gt;

        &lt;span class="c"&gt;# Default to scraping over https. If required, just disable this or change to&lt;/span&gt;
        &lt;span class="c"&gt;# `http`.&lt;/span&gt;
        scheme: https

        &lt;span class="c"&gt;# This TLS &amp;amp; bearer token file config is used to connect to the actual scrape&lt;/span&gt;
        &lt;span class="c"&gt;# endpoints for cluster components. This is separate to discovery auth&lt;/span&gt;
        &lt;span class="c"&gt;# configuration because discovery &amp;amp; scraping are two separate concerns in&lt;/span&gt;
        &lt;span class="c"&gt;# Prometheus. The discovery auth config is automatic if Prometheus runs inside&lt;/span&gt;
        &lt;span class="c"&gt;# the cluster. Otherwise, more config options have to be provided within the&lt;/span&gt;
        &lt;span class="c"&gt;# &amp;lt;kubernetes_sd_config&amp;gt;.&lt;/span&gt;
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          &lt;span class="c"&gt;# If your node certificates are self-signed or use a different CA to the&lt;/span&gt;
          &lt;span class="c"&gt;# master CA, then disable certificate verification below. Note that&lt;/span&gt;
          &lt;span class="c"&gt;# certificate verification is an integral part of a secure infrastructure&lt;/span&gt;
          &lt;span class="c"&gt;# so this should only be disabled in a controlled environment. You can&lt;/span&gt;
          &lt;span class="c"&gt;# disable certificate verification by uncommenting the line below.&lt;/span&gt;
          &lt;span class="c"&gt;#&lt;/span&gt;
          insecure_skip_verify: &lt;span class="nb"&gt;true
        &lt;/span&gt;bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

        kubernetes_sd_configs:
          - role: node

        &lt;span class="c"&gt;# This configuration will work only on kubelet 1.7.3+&lt;/span&gt;
        &lt;span class="c"&gt;# As the scrape endpoints for cAdvisor have changed&lt;/span&gt;
        &lt;span class="c"&gt;# if you are using older version you need to change the replacement to&lt;/span&gt;
        &lt;span class="c"&gt;# replacement: /api/v1/nodes/$1:4194/proxy/metrics&lt;/span&gt;
        &lt;span class="c"&gt;# more info here https://github.com/coreos/prometheus-operator/issues/633&lt;/span&gt;
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_&lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_node_name]
            regex: &lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
            target_label: __metrics_path__
            replacement: /api/v1/nodes/&lt;span class="nv"&gt;$1&lt;/span&gt;/proxy/metrics/cadvisor

        &lt;span class="c"&gt;# Metric relabel configs to apply to samples before ingestion.&lt;/span&gt;
        &lt;span class="c"&gt;# [Metric Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs)&lt;/span&gt;
        &lt;span class="c"&gt;# metric_relabel_configs:&lt;/span&gt;
        &lt;span class="c"&gt;# - action: labeldrop&lt;/span&gt;
        &lt;span class="c"&gt;#   regex: (kubernetes_io_hostname|failure_domain_beta_kubernetes_io_region|beta_kubernetes_io_os|beta_kubernetes_io_arch|beta_kubernetes_io_instance_type|failure_domain_beta_kubernetes_io_zone)&lt;/span&gt;

      &lt;span class="c"&gt;# Scrape config for service endpoints.&lt;/span&gt;
      &lt;span class="c"&gt;#&lt;/span&gt;
      &lt;span class="c"&gt;# The relabeling allows the actual service scrape endpoint to be configured&lt;/span&gt;
      &lt;span class="c"&gt;# via the following annotations:&lt;/span&gt;
      &lt;span class="c"&gt;#&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/scrape`: Only scrape services that have a value of&lt;/span&gt;
      &lt;span class="c"&gt;# `true`, except if `prometheus.io/scrape-slow` is set to `true` as well.&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need&lt;/span&gt;
      &lt;span class="c"&gt;# to set this to `https` &amp;amp; most likely set the `tls_config` of the scrape config.&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/path`: If the metrics path is not `/metrics` override this.&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/port`: If the metrics are exposed on a different port to the&lt;/span&gt;
      &lt;span class="c"&gt;# service then set this appropriately.&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/param_&amp;lt;parameter&amp;gt;`: If the metrics endpoint uses parameters&lt;/span&gt;
      &lt;span class="c"&gt;# then you can set any parameter&lt;/span&gt;
      - job_name: &lt;span class="s1"&gt;'kubernetes-service-endpoints'&lt;/span&gt;
        honor_labels: &lt;span class="nb"&gt;true

        &lt;/span&gt;kubernetes_sd_configs:
          - role: endpoints

        relabel_configs:
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_service_annotation_prometheus_io_scrape]
            action: keep
            regex: &lt;span class="nb"&gt;true&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_service_annotation_prometheus_io_scrape_slow]
            action: drop
            regex: &lt;span class="nb"&gt;true&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_service_annotation_prometheus_io_scheme]
            action: replace
            target_label: __scheme__
            regex: &lt;span class="o"&gt;(&lt;/span&gt;https?&lt;span class="o"&gt;)&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_service_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: &lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
            action: replace
            target_label: __address__
            regex: &lt;span class="o"&gt;(&lt;/span&gt;.+?&lt;span class="o"&gt;)(&lt;/span&gt;?::&lt;span class="se"&gt;\d&lt;/span&gt;+&lt;span class="o"&gt;)&lt;/span&gt;?&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="se"&gt;\d&lt;/span&gt;+&lt;span class="o"&gt;)&lt;/span&gt;
            replacement: &lt;span class="nv"&gt;$1&lt;/span&gt;:&lt;span class="nv"&gt;$2&lt;/span&gt;
          - action: labelmap
            regex: __meta_kubernetes_service_annotation_prometheus_io_param_&lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
            replacement: __param_&lt;span class="nv"&gt;$1&lt;/span&gt;
          - action: labelmap
            regex: __meta_kubernetes_service_label_&lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_namespace]
            action: replace
            target_label: namespace
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_service_name]
            action: replace
            target_label: service
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_node_name]
            action: replace
            target_label: node

      &lt;span class="c"&gt;# Scrape config for slow service endpoints; same as above, but with a larger&lt;/span&gt;
      &lt;span class="c"&gt;# timeout and a larger interval&lt;/span&gt;
      &lt;span class="c"&gt;#&lt;/span&gt;
      &lt;span class="c"&gt;# The relabeling allows the actual service scrape endpoint to be configured&lt;/span&gt;
      &lt;span class="c"&gt;# via the following annotations:&lt;/span&gt;
      &lt;span class="c"&gt;#&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/scrape-slow`: Only scrape services that have a value of `true`&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need&lt;/span&gt;
      &lt;span class="c"&gt;# to set this to `https` &amp;amp; most likely set the `tls_config` of the scrape config.&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/path`: If the metrics path is not `/metrics` override this.&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/port`: If the metrics are exposed on a different port to the&lt;/span&gt;
      &lt;span class="c"&gt;# service then set this appropriately.&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/param_&amp;lt;parameter&amp;gt;`: If the metrics endpoint uses parameters&lt;/span&gt;
      &lt;span class="c"&gt;# then you can set any parameter&lt;/span&gt;
      - job_name: &lt;span class="s1"&gt;'kubernetes-service-endpoints-slow'&lt;/span&gt;
        honor_labels: &lt;span class="nb"&gt;true

        &lt;/span&gt;scrape_interval: 5m
        scrape_timeout: 30s

        kubernetes_sd_configs:
          - role: endpoints

        relabel_configs:
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_service_annotation_prometheus_io_scrape_slow]
            action: keep
            regex: &lt;span class="nb"&gt;true&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_service_annotation_prometheus_io_scheme]
            action: replace
            target_label: __scheme__
            regex: &lt;span class="o"&gt;(&lt;/span&gt;https?&lt;span class="o"&gt;)&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_service_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: &lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
            action: replace
            target_label: __address__
            regex: &lt;span class="o"&gt;(&lt;/span&gt;.+?&lt;span class="o"&gt;)(&lt;/span&gt;?::&lt;span class="se"&gt;\d&lt;/span&gt;+&lt;span class="o"&gt;)&lt;/span&gt;?&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="se"&gt;\d&lt;/span&gt;+&lt;span class="o"&gt;)&lt;/span&gt;
            replacement: &lt;span class="nv"&gt;$1&lt;/span&gt;:&lt;span class="nv"&gt;$2&lt;/span&gt;
          - action: labelmap
            regex: __meta_kubernetes_service_annotation_prometheus_io_param_&lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
            replacement: __param_&lt;span class="nv"&gt;$1&lt;/span&gt;
          - action: labelmap
            regex: __meta_kubernetes_service_label_&lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_namespace]
            action: replace
            target_label: namespace
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_service_name]
            action: replace
            target_label: service
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_node_name]
            action: replace
            target_label: node

      - job_name: &lt;span class="s1"&gt;'prometheus-pushgateway'&lt;/span&gt;
        honor_labels: &lt;span class="nb"&gt;true

        &lt;/span&gt;kubernetes_sd_configs:
          - role: service

        relabel_configs:
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_service_annotation_prometheus_io_probe]
            action: keep
            regex: pushgateway

      &lt;span class="c"&gt;# Example scrape config for probing services via the Blackbox Exporter.&lt;/span&gt;
      &lt;span class="c"&gt;#&lt;/span&gt;
      &lt;span class="c"&gt;# The relabeling allows the actual service scrape endpoint to be configured&lt;/span&gt;
      &lt;span class="c"&gt;# via the following annotations:&lt;/span&gt;
      &lt;span class="c"&gt;#&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/probe`: Only probe services that have a value of `true`&lt;/span&gt;
      - job_name: &lt;span class="s1"&gt;'kubernetes-services'&lt;/span&gt;
        honor_labels: &lt;span class="nb"&gt;true

        &lt;/span&gt;metrics_path: /probe
        params:
          module: &lt;span class="o"&gt;[&lt;/span&gt;http_2xx]

        kubernetes_sd_configs:
          - role: service

        relabel_configs:
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_service_annotation_prometheus_io_probe]
            action: keep
            regex: &lt;span class="nb"&gt;true&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__address__]
            target_label: __param_target
          - target_label: __address__
            replacement: blackbox
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__param_target]
            target_label: instance
          - action: labelmap
            regex: __meta_kubernetes_service_label_&lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_namespace]
            target_label: namespace
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_service_name]
            target_label: service

      &lt;span class="c"&gt;# Example scrape config for pods&lt;/span&gt;
      &lt;span class="c"&gt;#&lt;/span&gt;
      &lt;span class="c"&gt;# The relabeling allows the actual pod scrape endpoint to be configured via the&lt;/span&gt;
      &lt;span class="c"&gt;# following annotations:&lt;/span&gt;
      &lt;span class="c"&gt;#&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/scrape`: Only scrape pods that have a value of `true`,&lt;/span&gt;
      &lt;span class="c"&gt;# except if `prometheus.io/scrape-slow` is set to `true` as well.&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need&lt;/span&gt;
      &lt;span class="c"&gt;# to set this to `https` &amp;amp; most likely set the `tls_config` of the scrape config.&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/path`: If the metrics path is not `/metrics` override this.&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/port`: Scrape the pod on the indicated port instead of the default of `9102`.&lt;/span&gt;
      - job_name: &lt;span class="s1"&gt;'kubernetes-pods'&lt;/span&gt;
        honor_labels: &lt;span class="nb"&gt;true

        &lt;/span&gt;kubernetes_sd_configs:
          - role: pod

        relabel_configs:
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: &lt;span class="nb"&gt;true&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_annotation_prometheus_io_scrape_slow]
            action: drop
            regex: &lt;span class="nb"&gt;true&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_annotation_prometheus_io_scheme]
            action: replace
            regex: &lt;span class="o"&gt;(&lt;/span&gt;https?&lt;span class="o"&gt;)&lt;/span&gt;
            target_label: __scheme__
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: &lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip]
            action: replace
            regex: &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="se"&gt;\d&lt;/span&gt;+&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="o"&gt;(([&lt;/span&gt;A-Fa-f0-9]&lt;span class="o"&gt;{&lt;/span&gt;1,4&lt;span class="o"&gt;}&lt;/span&gt;::?&lt;span class="o"&gt;){&lt;/span&gt;1,7&lt;span class="o"&gt;}[&lt;/span&gt;A-Fa-f0-9]&lt;span class="o"&gt;{&lt;/span&gt;1,4&lt;span class="o"&gt;})&lt;/span&gt;
            replacement: &lt;span class="s1"&gt;'[$2]:$1'&lt;/span&gt;
            target_label: __address__
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip]
            action: replace
            regex: &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="se"&gt;\d&lt;/span&gt;+&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="o"&gt;((([&lt;/span&gt;0-9]+?&lt;span class="o"&gt;)(&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;|&lt;span class="nv"&gt;$)&lt;/span&gt;&lt;span class="o"&gt;){&lt;/span&gt;4&lt;span class="o"&gt;})&lt;/span&gt;
            replacement: &lt;span class="nv"&gt;$2&lt;/span&gt;:&lt;span class="nv"&gt;$1&lt;/span&gt;
            target_label: __address__
          - action: labelmap
            regex: __meta_kubernetes_pod_annotation_prometheus_io_param_&lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
            replacement: __param_&lt;span class="nv"&gt;$1&lt;/span&gt;
          - action: labelmap
            regex: __meta_kubernetes_pod_label_&lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_namespace]
            action: replace
            target_label: namespace
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_name]
            action: replace
            target_label: pod
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_phase]
            regex: Pending|Succeeded|Failed|Completed
            action: drop

      &lt;span class="c"&gt;# Example Scrape config for pods which should be scraped slower. An useful example&lt;/span&gt;
      &lt;span class="c"&gt;# would be stackriver-exporter which queries an API on every scrape of the pod&lt;/span&gt;
      &lt;span class="c"&gt;#&lt;/span&gt;
      &lt;span class="c"&gt;# The relabeling allows the actual pod scrape endpoint to be configured via the&lt;/span&gt;
      &lt;span class="c"&gt;# following annotations:&lt;/span&gt;
      &lt;span class="c"&gt;#&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/scrape-slow`: Only scrape pods that have a value of `true`&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need&lt;/span&gt;
      &lt;span class="c"&gt;# to set this to `https` &amp;amp; most likely set the `tls_config` of the scrape config.&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/path`: If the metrics path is not `/metrics` override this.&lt;/span&gt;
      &lt;span class="c"&gt;# * `prometheus.io/port`: Scrape the pod on the indicated port instead of the default of `9102`.&lt;/span&gt;
      - job_name: &lt;span class="s1"&gt;'kubernetes-pods-slow'&lt;/span&gt;
        honor_labels: &lt;span class="nb"&gt;true

        &lt;/span&gt;scrape_interval: 5m
        scrape_timeout: 30s

        kubernetes_sd_configs:
          - role: pod

        relabel_configs:
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_annotation_prometheus_io_scrape_slow]
            action: keep
            regex: &lt;span class="nb"&gt;true&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_annotation_prometheus_io_scheme]
            action: replace
            regex: &lt;span class="o"&gt;(&lt;/span&gt;https?&lt;span class="o"&gt;)&lt;/span&gt;
            target_label: __scheme__
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: &lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip]
            action: replace
            regex: &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="se"&gt;\d&lt;/span&gt;+&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="o"&gt;(([&lt;/span&gt;A-Fa-f0-9]&lt;span class="o"&gt;{&lt;/span&gt;1,4&lt;span class="o"&gt;}&lt;/span&gt;::?&lt;span class="o"&gt;){&lt;/span&gt;1,7&lt;span class="o"&gt;}[&lt;/span&gt;A-Fa-f0-9]&lt;span class="o"&gt;{&lt;/span&gt;1,4&lt;span class="o"&gt;})&lt;/span&gt;
            replacement: &lt;span class="s1"&gt;'[$2]:$1'&lt;/span&gt;
            target_label: __address__
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip]
            action: replace
            regex: &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="se"&gt;\d&lt;/span&gt;+&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="o"&gt;((([&lt;/span&gt;0-9]+?&lt;span class="o"&gt;)(&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;|&lt;span class="nv"&gt;$)&lt;/span&gt;&lt;span class="o"&gt;){&lt;/span&gt;4&lt;span class="o"&gt;})&lt;/span&gt;
            replacement: &lt;span class="nv"&gt;$2&lt;/span&gt;:&lt;span class="nv"&gt;$1&lt;/span&gt;
            target_label: __address__
          - action: labelmap
            regex: __meta_kubernetes_pod_annotation_prometheus_io_param_&lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
            replacement: __param_&lt;span class="nv"&gt;$1&lt;/span&gt;
          - action: labelmap
            regex: __meta_kubernetes_pod_label_&lt;span class="o"&gt;(&lt;/span&gt;.+&lt;span class="o"&gt;)&lt;/span&gt;
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_namespace]
            action: replace
            target_label: namespace
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_name]
            action: replace
            target_label: pod
          - source_labels: &lt;span class="o"&gt;[&lt;/span&gt;__meta_kubernetes_pod_phase]
            regex: Pending|Succeeded|Failed|Completed
            action: drop

&lt;span class="c"&gt;# adds additional scrape configs to prometheus.yml&lt;/span&gt;
&lt;span class="c"&gt;# must be a string so you have to add a | after extraScrapeConfigs:&lt;/span&gt;
&lt;span class="c"&gt;# example adds prometheus-blackbox-exporter scrape config&lt;/span&gt;
extraScrapeConfigs: &lt;span class="s2"&gt;""&lt;/span&gt;
  &lt;span class="c"&gt;# - job_name: 'prometheus-blackbox-exporter'&lt;/span&gt;
  &lt;span class="c"&gt;#   metrics_path: /probe&lt;/span&gt;
  &lt;span class="c"&gt;#   params:&lt;/span&gt;
  &lt;span class="c"&gt;#     module: [http_2xx]&lt;/span&gt;
  &lt;span class="c"&gt;#   static_configs:&lt;/span&gt;
  &lt;span class="c"&gt;#     - targets:&lt;/span&gt;
  &lt;span class="c"&gt;#       - https://example.com&lt;/span&gt;
  &lt;span class="c"&gt;#   relabel_configs:&lt;/span&gt;
  &lt;span class="c"&gt;#     - source_labels: [__address__]&lt;/span&gt;
  &lt;span class="c"&gt;#       target_label: __param_target&lt;/span&gt;
  &lt;span class="c"&gt;#     - source_labels: [__param_target]&lt;/span&gt;
  &lt;span class="c"&gt;#       target_label: instance&lt;/span&gt;
  &lt;span class="c"&gt;#     - target_label: __address__&lt;/span&gt;
  &lt;span class="c"&gt;#       replacement: prometheus-blackbox-exporter:9115&lt;/span&gt;

&lt;span class="c"&gt;# Adds option to add alert_relabel_configs to avoid duplicate alerts in alertmanager&lt;/span&gt;
&lt;span class="c"&gt;# useful in H/A prometheus with different external labels but the same alerts&lt;/span&gt;
alertRelabelConfigs: &lt;span class="o"&gt;{}&lt;/span&gt;
  &lt;span class="c"&gt;# alert_relabel_configs:&lt;/span&gt;
  &lt;span class="c"&gt;# - source_labels: [dc]&lt;/span&gt;
  &lt;span class="c"&gt;#   regex: (.+)\d+&lt;/span&gt;
  &lt;span class="c"&gt;#   target_label: dc&lt;/span&gt;

networkPolicy:
  &lt;span class="c"&gt;## Enable creation of NetworkPolicy resources.&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  enabled: &lt;span class="nb"&gt;false&lt;/span&gt;

&lt;span class="c"&gt;# Force namespace of namespaced resources&lt;/span&gt;
forceNamespace: &lt;span class="s2"&gt;""&lt;/span&gt;

&lt;span class="c"&gt;# Extra manifests to deploy as an array&lt;/span&gt;
extraManifests: &lt;span class="o"&gt;[]&lt;/span&gt;
  &lt;span class="c"&gt;# - apiVersion: v1&lt;/span&gt;
  &lt;span class="c"&gt;#   kind: ConfigMap&lt;/span&gt;
  &lt;span class="c"&gt;#   metadata:&lt;/span&gt;
  &lt;span class="c"&gt;#   labels:&lt;/span&gt;
  &lt;span class="c"&gt;#     name: prometheus-extra&lt;/span&gt;
  &lt;span class="c"&gt;#   data:&lt;/span&gt;
  &lt;span class="c"&gt;#     extra-data: "value"&lt;/span&gt;

&lt;span class="c"&gt;# Configuration of subcharts defined in Chart.yaml&lt;/span&gt;

&lt;span class="c"&gt;## alertmanager sub-chart configurable values&lt;/span&gt;
&lt;span class="c"&gt;## Please see https://github.com/prometheus-community/helm-charts/tree/main/charts/alertmanager&lt;/span&gt;
&lt;span class="c"&gt;##&lt;/span&gt;
alertmanager:
  &lt;span class="c"&gt;## If false, alertmanager will not be installed&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  enabled: &lt;span class="nb"&gt;true

  &lt;/span&gt;persistence:
    size: 2Gi

  podSecurityContext:
    runAsUser: 65534
    runAsNonRoot: &lt;span class="nb"&gt;true
    &lt;/span&gt;runAsGroup: 65534
    fsGroup: 65534

&lt;span class="c"&gt;## kube-state-metrics sub-chart configurable values&lt;/span&gt;
&lt;span class="c"&gt;## Please see https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics&lt;/span&gt;
&lt;span class="c"&gt;##&lt;/span&gt;
kube-state-metrics:
  &lt;span class="c"&gt;## If false, kube-state-metrics sub-chart will not be installed&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  enabled: &lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;## promtheus-node-exporter sub-chart configurable values&lt;/span&gt;
&lt;span class="c"&gt;## Please see https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-node-exporter&lt;/span&gt;
&lt;span class="c"&gt;##&lt;/span&gt;
prometheus-node-exporter:
  &lt;span class="c"&gt;## If false, node-exporter will not be installed&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  enabled: &lt;span class="nb"&gt;true

  &lt;/span&gt;rbac:
    pspEnabled: &lt;span class="nb"&gt;false

  &lt;/span&gt;containerSecurityContext:
    allowPrivilegeEscalation: &lt;span class="nb"&gt;false
  &lt;/span&gt;affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/os
              operator: In
              values:
              - linux
            - key: kubernetes.io/arch
              operator: In
              values:
              - amd64
              - arm64
            - key: eks.amazonaws.com/compute-type
              operator: NotIn
              values:
              - fargate

&lt;span class="c"&gt;## pprometheus-pushgateway sub-chart configurable values&lt;/span&gt;
&lt;span class="c"&gt;## Please see https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-pushgateway&lt;/span&gt;
&lt;span class="c"&gt;##&lt;/span&gt;
prometheus-pushgateway:
  &lt;span class="c"&gt;## If false, pushgateway will not be installed&lt;/span&gt;
  &lt;span class="c"&gt;##&lt;/span&gt;
  enabled: &lt;span class="nb"&gt;true&lt;/span&gt;

  &lt;span class="c"&gt;# Optional service annotations&lt;/span&gt;
  serviceAnnotations:
    prometheus.io/probe: pushgateway



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
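
&lt;p&gt;With this values file saved locally (assumed here as &lt;code&gt;values.yaml&lt;/code&gt;; the release name and namespace are illustrative), the chart can be installed or upgraded like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# Register the prometheus-community chart repository, then install/upgrade
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm upgrade --install prometheus prometheus-community/prometheus --namespace prometheus --create-namespace -f values.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;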

&lt;p&gt;&lt;strong&gt;Note: you have to add the following nodeAffinity for node-exporter (it keeps the DaemonSet off Fargate nodes, which do not support DaemonSets)&lt;/strong&gt;&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values:
            - linux
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64
            - arm64
          - key: eks.amazonaws.com/compute-type
            operator: NotIn
            values:
            - fargate

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Ingress for the Prometheus URL
&lt;/h2&gt;

&lt;p&gt;Let's add the Ingress for Prometheus as follows:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:2343668766434:certificate/c25d25f3-78ae-4197-a806-1882f6b947dc
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/success-codes: 200,404,301,302
    alb.ingress.kubernetes.io/target-type: ip
  finalizers:
    - ingress.k8s.aws/resources
  name: prometheus-server
  namespace: prometheus
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              service:
                name: prometheus-server
                port:
                  number: 80
            path: /
            pathType: Prefix

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
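
&lt;p&gt;Save this manifest (the filename below is illustrative), apply it, and check that the ALB address has been provisioned:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

$ kubectl apply -f prometheus-ingress.yaml
$ kubectl get ingress prometheus-server -n prometheus

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;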
&lt;h2&gt;
  
  
  AWS Managed Grafana
&lt;/h2&gt;

&lt;p&gt;Add this Prometheus URL as a data source in Grafana.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci8i5s10nfkol1k7bdbx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci8i5s10nfkol1k7bdbx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it! You are now ready to create dashboards.&lt;/p&gt;

</description>
      <category>eks</category>
      <category>kubernetes</category>
      <category>prometheus</category>
      <category>grafana</category>
    </item>
    <item>
      <title>AWS EKS Setup with eksctl &amp; Argo CD installation, configuration &amp; deploy app with ArgoCD &amp; Kustomize</title>
      <dc:creator>M.M.Monirul Islam</dc:creator>
      <pubDate>Mon, 23 May 2022 15:47:50 +0000</pubDate>
      <link>https://forem.com/monirul87/aws-eks-setup-with-eksctl-argo-cd-installation-configuration-deploy-app-with-argocd-kustomize-56pg</link>
      <guid>https://forem.com/monirul87/aws-eks-setup-with-eksctl-argo-cd-installation-configuration-deploy-app-with-argocd-kustomize-56pg</guid>
      <description>&lt;h1&gt;
  
  
  Setting up a production Kubernetes service on AWS EKS
&lt;/h1&gt;

&lt;p&gt;The easiest way to create an EKS cluster on AWS is to use eksctl. It is recommended to create a bastion server in AWS and run the eksctl CLI there rather than on a laptop.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create AWS User
&lt;/h2&gt;

&lt;p&gt;First, create an admin user in AWS IAM so that it can be used programmatically, and obtain the user's AWS Access Key ID and AWS Secret Access Key.&lt;/p&gt;
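
&lt;p&gt;As a minimal sketch, this user can also be created with the AWS CLI (the user name is illustrative, and AdministratorAccess is attached here only because the walkthrough calls for an admin user; scope the policy down where possible):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# Create the user, grant admin rights, and generate an access key pair
$ aws iam create-user --user-name eks-admin
$ aws iam attach-user-policy --user-name eks-admin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
$ aws iam create-access-key --user-name eks-admin

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;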

&lt;h2&gt;
  
  
  Create Bastion Server
&lt;/h2&gt;

&lt;p&gt;Let's create a bastion server in AWS. An instance type as small as t3.small is sufficient.&lt;/p&gt;
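
&lt;p&gt;For example, the bastion can be launched with the AWS CLI (a sketch only; the AMI ID, key pair, and subnet below are placeholders for your own values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# Launch a t3.small instance with a public IP in the chosen subnet
$ aws ec2 run-instances --image-id ami-xxxxxxxxxxxxxxxxx --instance-type t3.small --key-name my-keypair --subnet-id subnet-xxxxxxxx --associate-public-ip-address

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;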

&lt;h2&gt;
  
  
  Install kubectl
&lt;/h2&gt;

&lt;p&gt;Install kubectl for the desired Kubernetes version.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-o&lt;/span&gt; kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x ./kubectl
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/bin &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cp&lt;/span&gt; ./kubectl &lt;span class="nv"&gt;$HOME&lt;/span&gt;/bin/kubectl &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$PATH&lt;/span&gt;:&lt;span class="nv"&gt;$HOME&lt;/span&gt;/bin
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'export PATH=$PATH:$HOME/bin'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.bashrc
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl version &lt;span class="nt"&gt;--short&lt;/span&gt; &lt;span class="nt"&gt;--client&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Reference link: &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install aws cli
&lt;/h2&gt;

&lt;p&gt;To use eksctl, we need to set the credentials of the AWS user.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;unzip
&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="s2"&gt;"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"awscliv2.zip"&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;unzip awscliv2.zip
&lt;span class="nv"&gt;$ &lt;/span&gt;./aws/install


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Reference link for installing the AWS CLI:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS cli configuration settings
&lt;/h2&gt;

&lt;p&gt;Now configure the AWS CLI as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;aws configure
AWS Access Key ID &lt;span class="o"&gt;[&lt;/span&gt;None]: ~~~
AWS Secret Access Key &lt;span class="o"&gt;[&lt;/span&gt;None]: ~~~
Default region name &lt;span class="o"&gt;[&lt;/span&gt;None]: ap-southeast-1
Default output format &lt;span class="o"&gt;[&lt;/span&gt;None]: json


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Reference link: &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install eksctl cli
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;--silent&lt;/span&gt; &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s2"&gt;"https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;_amd64.tar.gz"&lt;/span&gt; | &lt;span class="nb"&gt;tar &lt;/span&gt;xz &lt;span class="nt"&gt;-C&lt;/span&gt; /tmp
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mv&lt;/span&gt; /tmp/eksctl /usr/local/bin
&lt;span class="nv"&gt;$ &lt;/span&gt;eksctl version
0.98.0


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Reference link: &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create EKS Cluster
&lt;/h2&gt;

&lt;p&gt;Create EKS Cluster with eksctl cli.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;eksctl create cluster &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--version&lt;/span&gt; 1.21 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--name&lt;/span&gt; eks-monirul-cluster &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--vpc-nat-mode&lt;/span&gt; HighlyAvailable &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--node-private-networking&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--region&lt;/span&gt; ap-southeast-1 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--node-type&lt;/span&gt; t3.medium &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--nodes&lt;/span&gt; 2 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--with-oidc&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--ssh-access&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--ssh-public-key&lt;/span&gt; monirul &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--managed&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here,&lt;br&gt;
&lt;code&gt;version&lt;/code&gt;: the Kubernetes version to use&lt;/p&gt;

&lt;p&gt;&lt;code&gt;vpc-nat-mode&lt;/code&gt;: all outbound traffic from the cluster goes out through the NAT gateway. The default mode is &lt;code&gt;Single&lt;/code&gt;, so only one NAT gateway is created. In a development environment that may not matter, but in production we should use the &lt;code&gt;HighlyAvailable&lt;/code&gt; mode, which creates one per Availability Zone.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;node-private-networking&lt;/code&gt;: If this option is not present, a node group is created in the public subnet. Use this option so that it is created in a private subnet for security.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;node-type&lt;/code&gt;: the instance type of the node to be created&lt;/p&gt;

&lt;p&gt;&lt;code&gt;nodes&lt;/code&gt;: the number of nodes to be created&lt;/p&gt;

&lt;p&gt;Check if the cluster has been successfully created&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  EKS security related settings
&lt;/h2&gt;

&lt;p&gt;Up to this point, we have created a Kubernetes cluster in a secure, HA-aware way suitable for production use. But for security we need to do one more thing: the cluster endpoint is currently open to the public.&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To restrict this, enable private access and limit the public CIDR block so that kubectl commands can be issued only from the bastion server. Enter the public IPv4 address of the bastion server.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;eksctl utils update-cluster-endpoints &lt;span class="nt"&gt;--cluster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;eks-monirul-cluster &lt;span class="nt"&gt;--private-access&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;--public-access&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;--approve&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;eksctl utils set-public-access-cidrs &lt;span class="nt"&gt;--cluster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;eks-monirul-cluster 1.1.1.1/32 &lt;span class="nt"&gt;--approve&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;1.1.1.1/32 is the address of the bastion server.&lt;/p&gt;
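&lt;p&gt;A quick way to confirm the endpoint settings took effect is to describe the cluster with the AWS CLI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

$ aws eks describe-cluster --name eks-monirul-cluster --query "cluster.resourcesVpcConfig"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output should show endpointPrivateAccess as true and publicAccessCidrs limited to the bastion address.&lt;/p&gt;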

&lt;blockquote&gt;
&lt;p&gt;In fact, the entire EKS cluster, from creation through the security-related settings above, can be created in one shot by generating a YAML config file with eksctl's --dry-run option and putting all the options in it.&lt;/p&gt;
&lt;/blockquote&gt;
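&lt;p&gt;A minimal sketch of that workflow, reusing the flags from above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# Generate a ClusterConfig file without creating anything
$ eksctl create cluster --name eks-monirul-cluster --version 1.21 --region ap-southeast-1 --dry-run &amp;gt; cluster.yaml
# Review/edit cluster.yaml, then create the cluster from it
$ eksctl create cluster -f cluster.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;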

&lt;p&gt;Now, if we install Istio on an EKS cluster created this way and deploy a service, it can be used immediately.&lt;/p&gt;

&lt;h1&gt;
  
  
  Argo CD installation, setup &amp;amp; deploy a simple app with kustomize
&lt;/h1&gt;

&lt;p&gt;Argo CD monitors changes to Kubernetes manifests managed with GitOps and keeps what is deployed in the actual cluster in sync with them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Argo CD installation
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://argo-cd.readthedocs.io/en/stable/getting_started/" rel="noopener noreferrer"&gt;https://argo-cd.readthedocs.io/en/stable/getting_started/&lt;/a&gt;&lt;br&gt;
Install Argo CD on the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;In production, install the HA version of Argo CD.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create namespace argocd
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/ha/install.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
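&lt;p&gt;Before moving on, check that the Argo CD components are all running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

$ kubectl get pods -n argocd

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;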
&lt;h2&gt;
  
  
  Install Argo CD CLI
&lt;/h2&gt;

&lt;p&gt;Install Argo CD CLI&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-sSL&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x /usr/local/bin/argocd


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Argo CD service exposure
&lt;/h2&gt;

&lt;p&gt;Argo CD does not expose its server to the outside by default. Change the service type to LoadBalancer as shown below to expose it externally.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl patch svc argocd-server &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'{"spec": {"type": "LoadBalancer"}}'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Change admin password
&lt;/h2&gt;

&lt;p&gt;Argo CD stores the initial password of the admin account as a Kubernetes secret. Retrieve the password as follows.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; argocd get secret argocd-initial-admin-secret &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.data.password}"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;echo
&lt;/span&gt;jaIyQ3MMuLnl6h0l



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Log in to Argo CD using the Argo CD CLI. First, get the address of the created Load Balancer.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc argocd-server &lt;span class="nt"&gt;-n&lt;/span&gt; argocd


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And log in. The username is admin.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;argocd login &amp;lt;ARGOCD_SERVER_DOMAIN/URL&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Update the password of the admin user after first login.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;argocd account update-password


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Argo CD polls the Git repository once every 3 minutes to check for differences from the actual Kubernetes cluster. So in the worst case we have to wait up to 3 minutes before Argo CD deploys a changed image. To eliminate this polling delay, we can configure a webhook from the Git repository to Argo CD. Here is the link:&lt;br&gt;
&lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/webhook/" rel="noopener noreferrer"&gt;https://argo-cd.readthedocs.io/en/stable/operator-manual/webhook/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Also, the Argo CD security group is usually restricted to specific inbound IPs so that only internal developers or operators can access it. If we have created a webhook as above, we must additionally allow GitHub's webhook IP ranges inbound to Argo CD's load balancer.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The link below contains information about GitHub's IP addresses:&lt;br&gt;
&lt;a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/about-githubs-ip-addresses" rel="noopener noreferrer"&gt;https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/about-githubs-ip-addresses&lt;/a&gt;&lt;br&gt;
We can check the IPs that actually need to be allowed inbound at the endpoint below; just use the list under hooks.&lt;br&gt;
&lt;a href="https://api.github.com/meta" rel="noopener noreferrer"&gt;https://api.github.com/meta&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
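&lt;p&gt;For example, the webhook source ranges can be pulled straight from that meta endpoint (assuming jq is installed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

$ curl -s https://api.github.com/meta | jq -r '.hooks[]'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;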

&lt;p&gt;If we configure Argo CD Notifications to send messages to Slack, the development team can receive a notification when a deployment fails. A rough sketch of that wiring is shown below.&lt;/p&gt;
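&lt;p&gt;This is only a sketch of the Argo CD Notifications setup; the Slack token reference and channel name are placeholders, not values from this environment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# argocd-notifications-cm: register a Slack service; the token lives in argocd-notifications-secret
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  service.slack: |
    token: $slack-token

# Then subscribe an Application with an annotation, e.g.
#   notifications.argoproj.io/subscribe.on-sync-failed.slack: my-channel

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;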

&lt;p&gt;There is also a way to set up projects using Argo CD's ApplicationSet. If you used the App of Apps pattern in the past, ApplicationSets are its more advanced successor; a minimal sketch follows.&lt;/p&gt;
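&lt;p&gt;A minimal ApplicationSet sketch using a list generator (the repo URL, paths, and names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: sample-apps
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - env: dev
      - env: prod
  template:
    metadata:
      name: 'myapp-{{env}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/gitops-repo.git
        targetRevision: HEAD
        path: 'overlays/{{env}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: 'myapp-{{env}}'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;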

&lt;p&gt;&lt;a href="https://opengitops.dev/" rel="noopener noreferrer"&gt;https://opengitops.dev/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/open-gitops/documents" rel="noopener noreferrer"&gt;https://github.com/open-gitops/documents&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Currently, kustomize is used for Kubernetes configuration management, and Argo CD deploys what kustomize renders. If we are using per-environment branches with kustomize or Helm, it might be helpful to read "Stop Using Branches for Deploying to Different GitOps Environments".&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Kustomize YAML
&lt;/h2&gt;

&lt;p&gt;kustomization.yaml&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namePrefix: kustomize-monirul-

resources:
- nginx-deployment.yaml
- nginx-svc.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
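&lt;p&gt;To preview what kustomize renders (note the kustomize-monirul- prefix applied to the resource names), or to apply it directly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

$ kubectl kustomize .
$ kubectl apply -k .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;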

&lt;p&gt;Deployment definition file: nginx-deployment.yaml&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Service definition file: nginx-svc.yaml&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: Service&lt;br&gt;
metadata:&lt;br&gt;
  labels:&lt;br&gt;
    app: nginx&lt;br&gt;
  name: nginx&lt;br&gt;
spec:&lt;br&gt;
  ports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx
&lt;span class="nb"&gt;type&lt;/span&gt;: ClusterIP&lt;/li&gt;
&lt;/ul&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Argo CD configuration to deploy a simple app
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73eolgd2dt6kxyu6dkg3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73eolgd2dt6kxyu6dkg3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Screenshot of Argo CD:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3haq6q9pfy3u659d1rse.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3haq6q9pfy3u659d1rse.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>kubernetes</category>
      <category>argocd</category>
    </item>
    <item>
      <title>Provisioning Amazon Elastic Kubernetes Service (Amazon EKS) with Terraform</title>
      <dc:creator>M.M.Monirul Islam</dc:creator>
      <pubDate>Wed, 18 May 2022 15:32:59 +0000</pubDate>
      <link>https://forem.com/monirul87/provisioning-amazon-elastic-kubernetes-service-amazon-eks-with-terraform-5cd6</link>
      <guid>https://forem.com/monirul87/provisioning-amazon-elastic-kubernetes-service-amazon-eks-with-terraform-5cd6</guid>
      <description>&lt;p&gt;AWS EKS is a managed container service to run and scale Kubernetes applications in the cloud or on-premises.&lt;/p&gt;

&lt;p&gt;HashiCorp Terraform is an Infrastructure as Code (IaC) tool that lets us define both cloud and on-prem resources in human-readable configuration files that we can version, reuse, and share.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon EKS Cluster using Terraform:
&lt;/h2&gt;

&lt;p&gt;This repository keeps the infrastructure configuration for the EKS cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisite
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS Account&lt;/li&gt;
&lt;li&gt;Terraform&lt;/li&gt;
&lt;li&gt;AWS CLI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To configure the AWS CLI, we need to enter the AWS Access Key ID, Secret Access Key, region, and output format. Please note that proper privileges are required to create EKS cluster resources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;aws configure
AWS Access Key ID &lt;span class="o"&gt;[&lt;/span&gt;None]: AWS_ACCESS_KEY_ID
AWS Secret Access Key &lt;span class="o"&gt;[&lt;/span&gt;None]: AWS_SECRET_ACCESS_KEY
Default region name &lt;span class="o"&gt;[&lt;/span&gt;None]: AWS_REGION
Default output format &lt;span class="o"&gt;[&lt;/span&gt;None]: json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Terraform Initial Setup Configuration
&lt;/h2&gt;

&lt;p&gt;We need to create an AWS provider. It allows Terraform to interact with AWS resources, such as VPC, EKS, S3, EC2, and many others.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;providers.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform &lt;span class="o"&gt;{&lt;/span&gt;
  required_version &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;gt;= 1.1.0"&lt;/span&gt;

  required_providers &lt;span class="o"&gt;{&lt;/span&gt;
    aws &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="nb"&gt;source&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      version &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 4.0"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

provider &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  region &lt;span class="o"&gt;=&lt;/span&gt; var.region

  access_key &lt;span class="o"&gt;=&lt;/span&gt; var.aws_access_key
  secret_key &lt;span class="o"&gt;=&lt;/span&gt; var.aws_secret_key

  &lt;span class="c"&gt;# other options for authentication&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
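&lt;p&gt;Hard-coding keys in variables is fine for a demo, but a safer alternative is to drop access_key/secret_key from the provider block and let the AWS provider pick up credentials from the environment or a shared profile:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

$ export AWS_ACCESS_KEY_ID="AWS_ACCESS_KEY_ID"
$ export AWS_SECRET_ACCESS_KEY="AWS_SECRET_ACCESS_KEY"
# or use a named profile from ~/.aws/credentials
$ export AWS_PROFILE=my-profile

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;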



&lt;h2&gt;
  
  
  Terraform State Setup
&lt;/h2&gt;

&lt;p&gt;Now we need to configure a Terraform backend to specify the location of the Terraform state file on S3.&lt;br&gt;
Remote state stores that state file remotely, rather than on the local filesystem.&lt;br&gt;
&lt;strong&gt;backend.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform &lt;span class="o"&gt;{&lt;/span&gt;
  backend &lt;span class="s2"&gt;"s3"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    bucket  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"mondev-terraform-states"&lt;/span&gt;
    key     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-aws-eks-mondev.tfstate"&lt;/span&gt;
    region  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ap-southeast-1"&lt;/span&gt;
    encrypt &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
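&lt;p&gt;The state bucket must exist before terraform init runs; a one-time setup sketch (the bucket name matches backend.tf, and versioning is optional but recommended):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

$ aws s3api create-bucket --bucket mondev-terraform-states --region ap-southeast-1 \
  --create-bucket-configuration LocationConstraint=ap-southeast-1
$ aws s3api put-bucket-versioning --bucket mondev-terraform-states \
  --versioning-configuration Status=Enabled

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;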



&lt;h2&gt;
  
  
  Network Infrastructure Setup
&lt;/h2&gt;

&lt;p&gt;Setting up the VPC, subnets, security groups, etc.&lt;br&gt;
Amazon EKS requires that subnets be in at least two different availability zones.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS VPC (Virtual Private Cloud).&lt;/li&gt;
&lt;li&gt;Two public and two private Subnets in different availability zones.&lt;/li&gt;
&lt;li&gt;Internet Gateway to provide internet access for services within VPC.&lt;/li&gt;
&lt;li&gt;NAT Gateway in the public subnets. It allows services in the private subnets to connect to the internet.&lt;/li&gt;
&lt;li&gt;Routing tables; associate subnets with them and add the required routing rules.&lt;/li&gt;
&lt;li&gt;Security groups with the required inbound and outbound traffic rules.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;vpc.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# VPC&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_vpc"&lt;/span&gt; &lt;span class="s2"&gt;"mondev"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  cidr_block &lt;span class="o"&gt;=&lt;/span&gt; var.vpc_cidr

  enable_dns_hostnames &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;true
  &lt;/span&gt;enable_dns_support   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;true

  &lt;/span&gt;tags &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    Name                                           &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-vpc"&lt;/span&gt;,
    &lt;span class="s2"&gt;"kubernetes.io/cluster/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-cluster"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"shared"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Public Subnets&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_subnet"&lt;/span&gt; &lt;span class="s2"&gt;"public"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  count &lt;span class="o"&gt;=&lt;/span&gt; var.availability_zones_count

  vpc_id            &lt;span class="o"&gt;=&lt;/span&gt; aws_vpc.mondev.id
  cidr_block        &lt;span class="o"&gt;=&lt;/span&gt; cidrsubnet&lt;span class="o"&gt;(&lt;/span&gt;var.vpc_cidr, var.subnet_cidr_bits, count.index&lt;span class="o"&gt;)&lt;/span&gt;
  availability_zone &lt;span class="o"&gt;=&lt;/span&gt; data.aws_availability_zones.available.names[count.index]

  tags &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    Name                                           &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-public-sg"&lt;/span&gt;
    &lt;span class="s2"&gt;"kubernetes.io/cluster/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-cluster"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"shared"&lt;/span&gt;
    &lt;span class="s2"&gt;"kubernetes.io/role/elb"&lt;/span&gt;                       &lt;span class="o"&gt;=&lt;/span&gt; 1
  &lt;span class="o"&gt;}&lt;/span&gt;

  map_public_ip_on_launch &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Private Subnets&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_subnet"&lt;/span&gt; &lt;span class="s2"&gt;"private"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  count &lt;span class="o"&gt;=&lt;/span&gt; var.availability_zones_count

  vpc_id            &lt;span class="o"&gt;=&lt;/span&gt; aws_vpc.mondev.id
  cidr_block        &lt;span class="o"&gt;=&lt;/span&gt; cidrsubnet&lt;span class="o"&gt;(&lt;/span&gt;var.vpc_cidr, var.subnet_cidr_bits, count.index + var.availability_zones_count&lt;span class="o"&gt;)&lt;/span&gt;
  availability_zone &lt;span class="o"&gt;=&lt;/span&gt; data.aws_availability_zones.available.names[count.index]

  tags &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    Name                                           &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-private-sg"&lt;/span&gt;
    &lt;span class="s2"&gt;"kubernetes.io/cluster/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-cluster"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"shared"&lt;/span&gt;
    &lt;span class="s2"&gt;"kubernetes.io/role/internal-elb"&lt;/span&gt;              &lt;span class="o"&gt;=&lt;/span&gt; 1
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Internet Gateway&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_internet_gateway"&lt;/span&gt; &lt;span class="s2"&gt;"mondev"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  vpc_id &lt;span class="o"&gt;=&lt;/span&gt; aws_vpc.mondev.id

  tags &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Name"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-igw"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  depends_on &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;aws_vpc.mondev]
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Route Table(s)&lt;/span&gt;
&lt;span class="c"&gt;# Route the public subnet traffic through the IGW&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_route_table"&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  vpc_id &lt;span class="o"&gt;=&lt;/span&gt; aws_vpc.mondev.id

  route &lt;span class="o"&gt;{&lt;/span&gt;
    cidr_block &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;
    gateway_id &lt;span class="o"&gt;=&lt;/span&gt; aws_internet_gateway.mondev.id
  &lt;span class="o"&gt;}&lt;/span&gt;

  tags &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    Name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-Default-rt"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Route table and subnet associations&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_route_table_association"&lt;/span&gt; &lt;span class="s2"&gt;"internet_access"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  count &lt;span class="o"&gt;=&lt;/span&gt; var.availability_zones_count

  subnet_id      &lt;span class="o"&gt;=&lt;/span&gt; aws_subnet.public[count.index].id
  route_table_id &lt;span class="o"&gt;=&lt;/span&gt; aws_route_table.main.id
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# NAT Elastic IP&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_eip"&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  vpc &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;true

  &lt;/span&gt;tags &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    Name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-ngw-ip"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# NAT Gateway&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_nat_gateway"&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  allocation_id &lt;span class="o"&gt;=&lt;/span&gt; aws_eip.main.id
  subnet_id     &lt;span class="o"&gt;=&lt;/span&gt; aws_subnet.public[0].id

  tags &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    Name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-ngw"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Add route to route table&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_route"&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  route_table_id         &lt;span class="o"&gt;=&lt;/span&gt; aws_vpc.mondev.default_route_table_id
  nat_gateway_id         &lt;span class="o"&gt;=&lt;/span&gt; aws_nat_gateway.main.id
  destination_cidr_block &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Security group for public subnet&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"public_sg"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  name   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-Public-sg"&lt;/span&gt;
  vpc_id &lt;span class="o"&gt;=&lt;/span&gt; aws_vpc.mondev.id

  tags &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    Name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-Public-sg"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Security group traffic rules&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_security_group_rule"&lt;/span&gt; &lt;span class="s2"&gt;"sg_ingress_public_443"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  security_group_id &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.public_sg.id
  &lt;span class="nb"&gt;type&lt;/span&gt;              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress"&lt;/span&gt;
  from_port         &lt;span class="o"&gt;=&lt;/span&gt; 443
  to_port           &lt;span class="o"&gt;=&lt;/span&gt; 443
  protocol          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
  cidr_blocks       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_security_group_rule"&lt;/span&gt; &lt;span class="s2"&gt;"sg_ingress_public_80"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  security_group_id &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.public_sg.id
  &lt;span class="nb"&gt;type&lt;/span&gt;              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress"&lt;/span&gt;
  from_port         &lt;span class="o"&gt;=&lt;/span&gt; 80
  to_port           &lt;span class="o"&gt;=&lt;/span&gt; 80
  protocol          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
  cidr_blocks       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_security_group_rule"&lt;/span&gt; &lt;span class="s2"&gt;"sg_egress_public"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  security_group_id &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.public_sg.id
  &lt;span class="nb"&gt;type&lt;/span&gt;              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"egress"&lt;/span&gt;
  from_port         &lt;span class="o"&gt;=&lt;/span&gt; 0
  to_port           &lt;span class="o"&gt;=&lt;/span&gt; 0
  protocol          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt;
  cidr_blocks       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Security group for data plane&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"data_plane_sg"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  name   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-Worker-sg"&lt;/span&gt;
  vpc_id &lt;span class="o"&gt;=&lt;/span&gt; aws_vpc.mondev.id

  tags &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    Name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-Worker-sg"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Security group traffic rules&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_security_group_rule"&lt;/span&gt; &lt;span class="s2"&gt;"nodes"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  description       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow nodes to communicate with each other"&lt;/span&gt;
  security_group_id &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.data_plane_sg.id
  &lt;span class="nb"&gt;type&lt;/span&gt;              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress"&lt;/span&gt;
  from_port         &lt;span class="o"&gt;=&lt;/span&gt; 0
  to_port           &lt;span class="o"&gt;=&lt;/span&gt; 65535
  protocol          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt;
  cidr_blocks       &lt;span class="o"&gt;=&lt;/span&gt; flatten&lt;span class="o"&gt;([&lt;/span&gt;cidrsubnet&lt;span class="o"&gt;(&lt;/span&gt;var.vpc_cidr, var.subnet_cidr_bits, 0&lt;span class="o"&gt;)&lt;/span&gt;, cidrsubnet&lt;span class="o"&gt;(&lt;/span&gt;var.vpc_cidr, var.subnet_cidr_bits, 1&lt;span class="o"&gt;)&lt;/span&gt;, cidrsubnet&lt;span class="o"&gt;(&lt;/span&gt;var.vpc_cidr, var.subnet_cidr_bits, 2&lt;span class="o"&gt;)&lt;/span&gt;, cidrsubnet&lt;span class="o"&gt;(&lt;/span&gt;var.vpc_cidr, var.subnet_cidr_bits, 3&lt;span class="o"&gt;)])&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_security_group_rule"&lt;/span&gt; &lt;span class="s2"&gt;"nodes_inbound"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  description       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow worker Kubelets and pods to receive communication from the cluster control plane"&lt;/span&gt;
  security_group_id &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.data_plane_sg.id
  &lt;span class="nb"&gt;type&lt;/span&gt;              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress"&lt;/span&gt;
  from_port         &lt;span class="o"&gt;=&lt;/span&gt; 1025
  to_port           &lt;span class="o"&gt;=&lt;/span&gt; 65535
  protocol          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
  cidr_blocks       &lt;span class="o"&gt;=&lt;/span&gt; flatten&lt;span class="o"&gt;([&lt;/span&gt;cidrsubnet&lt;span class="o"&gt;(&lt;/span&gt;var.vpc_cidr, var.subnet_cidr_bits, 2&lt;span class="o"&gt;)&lt;/span&gt;, cidrsubnet&lt;span class="o"&gt;(&lt;/span&gt;var.vpc_cidr, var.subnet_cidr_bits, 3&lt;span class="o"&gt;)])&lt;/span&gt;
  &lt;span class="c"&gt;# cidr_blocks       = flatten([var.private_subnet_cidr_blocks])&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_security_group_rule"&lt;/span&gt; &lt;span class="s2"&gt;"node_outbound"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  security_group_id &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.data_plane_sg.id
  &lt;span class="nb"&gt;type&lt;/span&gt;              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"egress"&lt;/span&gt;
  from_port         &lt;span class="o"&gt;=&lt;/span&gt; 0
  to_port           &lt;span class="o"&gt;=&lt;/span&gt; 0
  protocol          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt;
  cidr_blocks       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Security group for control plane&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"control_plane_sg"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  name   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-ControlPlane-sg"&lt;/span&gt;
  vpc_id &lt;span class="o"&gt;=&lt;/span&gt; aws_vpc.mondev.id

  tags &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    Name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-ControlPlane-sg"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Security group traffic rules&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_security_group_rule"&lt;/span&gt; &lt;span class="s2"&gt;"control_plane_inbound"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  security_group_id &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.control_plane_sg.id
  &lt;span class="nb"&gt;type&lt;/span&gt;              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress"&lt;/span&gt;
  from_port         &lt;span class="o"&gt;=&lt;/span&gt; 0
  to_port           &lt;span class="o"&gt;=&lt;/span&gt; 65535
  protocol          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
  cidr_blocks       &lt;span class="o"&gt;=&lt;/span&gt; flatten&lt;span class="o"&gt;([&lt;/span&gt;cidrsubnet&lt;span class="o"&gt;(&lt;/span&gt;var.vpc_cidr, var.subnet_cidr_bits, 0&lt;span class="o"&gt;)&lt;/span&gt;, cidrsubnet&lt;span class="o"&gt;(&lt;/span&gt;var.vpc_cidr, var.subnet_cidr_bits, 1&lt;span class="o"&gt;)&lt;/span&gt;, cidrsubnet&lt;span class="o"&gt;(&lt;/span&gt;var.vpc_cidr, var.subnet_cidr_bits, 2&lt;span class="o"&gt;)&lt;/span&gt;, cidrsubnet&lt;span class="o"&gt;(&lt;/span&gt;var.vpc_cidr, var.subnet_cidr_bits, 3&lt;span class="o"&gt;)])&lt;/span&gt;
  &lt;span class="c"&gt;# cidr_blocks       = flatten([var.private_subnet_cidr_blocks, var.public_subnet_cidr_blocks])&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_security_group_rule"&lt;/span&gt; &lt;span class="s2"&gt;"control_plane_outbound"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  security_group_id &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.control_plane_sg.id
  &lt;span class="nb"&gt;type&lt;/span&gt;              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"egress"&lt;/span&gt;
  from_port         &lt;span class="o"&gt;=&lt;/span&gt; 0
  to_port           &lt;span class="o"&gt;=&lt;/span&gt; 65535
  protocol          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt;
  cidr_blocks       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  EKS Cluster Setup
&lt;/h1&gt;

&lt;p&gt;Creating the EKS cluster. Kubernetes clusters managed by Amazon EKS make calls to other AWS services on our behalf to manage the resources that we use with the service. For example, EKS will create an Auto Scaling group for each instance group if we use managed nodes.&lt;/p&gt;

&lt;p&gt;Setting up the IAM Roles and Policies for EKS: EKS requires a few IAM Roles with relevant Policies to be pre-defined to operate correctly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IAM Role:&lt;/strong&gt; Create a role with the needed permissions that Amazon EKS will use to create AWS resources for Kubernetes clusters and to interact with AWS APIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IAM Policy:&lt;/strong&gt; Define the trust policy that allows Amazon EKS to assume this role, and attach the AWS managed policy AmazonEKSClusterPolicy to it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;eks-cluster.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# EKS Cluster&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_eks_cluster"&lt;/span&gt; &lt;span class="s2"&gt;"mondev"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  name     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-cluster"&lt;/span&gt;
  role_arn &lt;span class="o"&gt;=&lt;/span&gt; aws_iam_role.cluster.arn
  version  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.22"&lt;/span&gt;

  vpc_config &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;# security_group_ids      = [aws_security_group.eks_cluster.id, aws_security_group.eks_nodes.id] # already applied to subnet&lt;/span&gt;
    subnet_ids              &lt;span class="o"&gt;=&lt;/span&gt; flatten&lt;span class="o"&gt;([&lt;/span&gt;aws_subnet.public[&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;.id, aws_subnet.private[&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;.id]&lt;span class="o"&gt;)&lt;/span&gt;
    endpoint_private_access &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;true
    &lt;/span&gt;endpoint_public_access  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;true
    &lt;/span&gt;public_access_cidrs     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  tags &lt;span class="o"&gt;=&lt;/span&gt; merge&lt;span class="o"&gt;(&lt;/span&gt;
    var.tags
  &lt;span class="o"&gt;)&lt;/span&gt;

  depends_on &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
    aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy
  &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;span class="c"&gt;# EKS Cluster IAM Role&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_iam_role"&lt;/span&gt; &lt;span class="s2"&gt;"cluster"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-Cluster-Role"&lt;/span&gt;

  assume_role_policy &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;POLICY&lt;/span&gt;&lt;span class="sh"&gt;
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
&lt;/span&gt;&lt;span class="no"&gt;POLICY
&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_iam_role_policy_attachment"&lt;/span&gt; &lt;span class="s2"&gt;"cluster_AmazonEKSClusterPolicy"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  policy_arn &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"&lt;/span&gt;
  role       &lt;span class="o"&gt;=&lt;/span&gt; aws_iam_role.cluster.name
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;span class="c"&gt;# EKS Cluster Security Group&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"eks_cluster"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  name        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-cluster-sg"&lt;/span&gt;
  description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Cluster communication with worker nodes"&lt;/span&gt;
  vpc_id      &lt;span class="o"&gt;=&lt;/span&gt; aws_vpc.mondev.id

  tags &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    Name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-cluster-sg"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_security_group_rule"&lt;/span&gt; &lt;span class="s2"&gt;"cluster_inbound"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  description              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow worker nodes to communicate with the cluster API Server"&lt;/span&gt;
  from_port                &lt;span class="o"&gt;=&lt;/span&gt; 443
  protocol                 &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
  security_group_id        &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.eks_cluster.id
  source_security_group_id &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.eks_nodes.id
  to_port                  &lt;span class="o"&gt;=&lt;/span&gt; 443
  &lt;span class="nb"&gt;type&lt;/span&gt;                     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_security_group_rule"&lt;/span&gt; &lt;span class="s2"&gt;"cluster_outbound"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  description              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow cluster API Server to communicate with the worker nodes"&lt;/span&gt;
  from_port                &lt;span class="o"&gt;=&lt;/span&gt; 1024
  protocol                 &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
  security_group_id        &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.eks_cluster.id
  source_security_group_id &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.eks_nodes.id
  to_port                  &lt;span class="o"&gt;=&lt;/span&gt; 65535
  &lt;span class="nb"&gt;type&lt;/span&gt;                     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"egress"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Node Groups (Managed) Setup
&lt;/h2&gt;

&lt;p&gt;Creating node group(s) to run application workloads.&lt;br&gt;
&lt;strong&gt;IAM Role:&lt;/strong&gt; As with the EKS cluster, before we create the worker node group we must create an IAM role with the permissions the node group needs to communicate with other AWS services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IAM Policy:&lt;/strong&gt; Define the trust policy that allows Amazon EC2 to assume this role, then attach the AWS managed policies AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;node-groups.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# EKS Node Groups&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_eks_node_group"&lt;/span&gt; &lt;span class="s2"&gt;"mondev"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  cluster_name    &lt;span class="o"&gt;=&lt;/span&gt; aws_eks_cluster.mondev.name
  node_group_name &lt;span class="o"&gt;=&lt;/span&gt; var.project
  node_role_arn   &lt;span class="o"&gt;=&lt;/span&gt; aws_iam_role.node.arn
  subnet_ids      &lt;span class="o"&gt;=&lt;/span&gt; aws_subnet.private[&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;.id

  scaling_config &lt;span class="o"&gt;{&lt;/span&gt;
    desired_size &lt;span class="o"&gt;=&lt;/span&gt; 2
    max_size     &lt;span class="o"&gt;=&lt;/span&gt; 5
    min_size     &lt;span class="o"&gt;=&lt;/span&gt; 1
  &lt;span class="o"&gt;}&lt;/span&gt;

  ami_type       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AL2_x86_64"&lt;/span&gt; &lt;span class="c"&gt;# AL2_x86_64, AL2_x86_64_GPU, AL2_ARM_64, CUSTOM&lt;/span&gt;
  capacity_type  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ON_DEMAND"&lt;/span&gt;  &lt;span class="c"&gt;# ON_DEMAND, SPOT&lt;/span&gt;
  disk_size      &lt;span class="o"&gt;=&lt;/span&gt; 20
  instance_types &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"t2.medium"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;

  tags &lt;span class="o"&gt;=&lt;/span&gt; merge&lt;span class="o"&gt;(&lt;/span&gt;
    var.tags
  &lt;span class="o"&gt;)&lt;/span&gt;

  depends_on &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
    aws_iam_role_policy_attachment.node_AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.node_AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.node_AmazonEC2ContainerRegistryReadOnly,
  &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;span class="c"&gt;# EKS Node IAM Role&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_iam_role"&lt;/span&gt; &lt;span class="s2"&gt;"node"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-Worker-Role"&lt;/span&gt;

  assume_role_policy &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;POLICY&lt;/span&gt;&lt;span class="sh"&gt;
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
&lt;/span&gt;&lt;span class="no"&gt;POLICY
&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_iam_role_policy_attachment"&lt;/span&gt; &lt;span class="s2"&gt;"node_AmazonEKSWorkerNodePolicy"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  policy_arn &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"&lt;/span&gt;
  role       &lt;span class="o"&gt;=&lt;/span&gt; aws_iam_role.node.name
&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_iam_role_policy_attachment"&lt;/span&gt; &lt;span class="s2"&gt;"node_AmazonEKS_CNI_Policy"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  policy_arn &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"&lt;/span&gt;
  role       &lt;span class="o"&gt;=&lt;/span&gt; aws_iam_role.node.name
&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_iam_role_policy_attachment"&lt;/span&gt; &lt;span class="s2"&gt;"node_AmazonEC2ContainerRegistryReadOnly"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  policy_arn &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"&lt;/span&gt;
  role       &lt;span class="o"&gt;=&lt;/span&gt; aws_iam_role.node.name
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;span class="c"&gt;# EKS Node Security Group&lt;/span&gt;
resource &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"eks_nodes"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  name        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-node-sg"&lt;/span&gt;
  description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Security group for all nodes in the cluster"&lt;/span&gt;
  vpc_id      &lt;span class="o"&gt;=&lt;/span&gt; aws_vpc.mondev.id

  egress &lt;span class="o"&gt;{&lt;/span&gt;
    from_port   &lt;span class="o"&gt;=&lt;/span&gt; 0
    to_port     &lt;span class="o"&gt;=&lt;/span&gt; 0
    protocol    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt;
    cidr_blocks &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  tags &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    Name                                           &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-node-sg"&lt;/span&gt;
    &lt;span class="s2"&gt;"kubernetes.io/cluster/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.project&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-cluster"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"owned"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_security_group_rule"&lt;/span&gt; &lt;span class="s2"&gt;"nodes_internal"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  description              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow nodes to communicate with each other"&lt;/span&gt;
  from_port                &lt;span class="o"&gt;=&lt;/span&gt; 0
  protocol                 &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt;
  security_group_id        &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.eks_nodes.id
  source_security_group_id &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.eks_nodes.id
  to_port                  &lt;span class="o"&gt;=&lt;/span&gt; 65535
  &lt;span class="nb"&gt;type&lt;/span&gt;                     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"aws_security_group_rule"&lt;/span&gt; &lt;span class="s2"&gt;"nodes_cluster_inbound"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  description              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow worker Kubelets and pods to receive communication from the cluster control plane"&lt;/span&gt;
  from_port                &lt;span class="o"&gt;=&lt;/span&gt; 1025
  protocol                 &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
  security_group_id        &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.eks_nodes.id
  source_security_group_id &lt;span class="o"&gt;=&lt;/span&gt; aws_security_group.eks_cluster.id
  to_port                  &lt;span class="o"&gt;=&lt;/span&gt; 65535
  &lt;span class="nb"&gt;type&lt;/span&gt;                     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Terraform Variables
&lt;/h2&gt;

&lt;p&gt;Create an IAM user with administrator access to the AWS account, and get an access key and secret key for authentication.&lt;br&gt;
&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;variable &lt;span class="s2"&gt;"aws_access_key"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS access key"&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; string
&lt;span class="o"&gt;}&lt;/span&gt;

variable &lt;span class="s2"&gt;"aws_secret_key"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS secret key"&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; string
&lt;span class="o"&gt;}&lt;/span&gt;

variable &lt;span class="s2"&gt;"region"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The aws region. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html"&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; string
  default     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ap-southeast-1"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

variable &lt;span class="s2"&gt;"availability_zones_count"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The number of AZs."&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; number
  default     &lt;span class="o"&gt;=&lt;/span&gt; 2
&lt;span class="o"&gt;}&lt;/span&gt;

variable &lt;span class="s2"&gt;"project"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The name of the project"&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; string
&lt;span class="o"&gt;}&lt;/span&gt;

variable &lt;span class="s2"&gt;"vpc_cidr"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The CIDR block for the VPC."&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; string
  default     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.0/16"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

variable &lt;span class="s2"&gt;"subnet_cidr_bits"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The number of subnet bits for the CIDR. For example, specifying a value 8 for this parameter will create a CIDR with a mask of /24."&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; number
  default     &lt;span class="o"&gt;=&lt;/span&gt; 8
&lt;span class="o"&gt;}&lt;/span&gt;

variable &lt;span class="s2"&gt;"tags"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"A map of tags to add to all resources"&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; map&lt;span class="o"&gt;(&lt;/span&gt;string&lt;span class="o"&gt;)&lt;/span&gt;
  default &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Project"&lt;/span&gt;     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"MonirulProject"&lt;/span&gt;
    &lt;span class="s2"&gt;"Environment"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Development"&lt;/span&gt;
    &lt;span class="s2"&gt;"Owner"&lt;/span&gt;       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Monirul"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set the Terraform variable values as required.&lt;br&gt;
&lt;strong&gt;terraform.tfvars&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws_access_key &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"aaaaaaaaaaaaaa"&lt;/span&gt;
aws_secret_key &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"bbbbbbbbbbbbbbbbbbbbb"&lt;/span&gt;

region                   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ap-southeast-1"&lt;/span&gt;
availability_zones_count &lt;span class="o"&gt;=&lt;/span&gt; 2

project &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"MonirulProject"&lt;/span&gt;

vpc_cidr         &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.0/16"&lt;/span&gt;
subnet_cidr_bits &lt;span class="o"&gt;=&lt;/span&gt; 8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
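
&lt;p&gt;Hardcoding credentials in &lt;strong&gt;terraform.tfvars&lt;/strong&gt; is convenient for a demo, but the file should not be committed to version control. As an alternative, Terraform reads any environment variable prefixed with &lt;strong&gt;TF_VAR_&lt;/strong&gt;, so the two secrets can instead be supplied like this (the key values are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ export TF_VAR_aws_access_key="aaaaaaaaaaaaaa"
$ export TF_VAR_aws_secret_key="bbbbbbbbbbbbbbbbbbbbb"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;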



&lt;p&gt;Define the Terraform data sources as well.&lt;br&gt;
&lt;strong&gt;data-sources.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;data &lt;span class="s2"&gt;"aws_availability_zones"&lt;/span&gt; &lt;span class="s2"&gt;"available"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  state &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"available"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Launch EKS Infrastructure
&lt;/h2&gt;

&lt;p&gt;Once all the resources are declared, we can deploy them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform init
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform plan
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
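
&lt;p&gt;Note that &lt;code&gt;terraform apply&lt;/code&gt; prompts for confirmation before making changes. Optionally, the plan can be saved to a file first so that exactly the reviewed plan is applied (the plan file name here is arbitrary):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ terraform plan -out=eks.tfplan
$ terraform apply eks.tfplan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;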



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0vi5qdkankiu27uynh8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0vi5qdkankiu27uynh8.png" alt="Image description" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Output
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
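
&lt;p&gt;&lt;code&gt;terraform output&lt;/code&gt; prints every value declared in &lt;strong&gt;outputs.tf&lt;/strong&gt;. Individual values can also be read directly, which is handy in scripts; the output names below are examples and depend on what outputs.tf actually declares:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ terraform output -raw region
$ terraform output -raw cluster_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;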



&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Cluster
|-- README.md
|-- backend.tf
|-- data-sources.tf
|-- eks-cluster.tf
|-- node-groups.tf
|-- outputs.tf
|-- providers.tf
|-- terraform.tfvars
|-- variables.tf
|-- vpc.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
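
&lt;p&gt;The listing includes a &lt;strong&gt;backend.tf&lt;/strong&gt; that is not shown above. As a minimal sketch, assuming remote state is kept in an S3 bucket (the bucket and key names are placeholders), it could be created like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ cat &gt; backend.tf &lt;&lt;'EOF'
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "eks/terraform.tfstate"
    region = "ap-southeast-1"
  }
}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;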



&lt;h2&gt;
  
  
  Access the cluster and create namespaces if required
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks &lt;span class="nt"&gt;--region&lt;/span&gt; ap-southeast-1 update-kubeconfig &lt;span class="nt"&gt;--name&lt;/span&gt; MonirulProject-cluster
kubectl create ns dev &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; kubectl create ns stg &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; kubectl create ns prd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
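
&lt;p&gt;To confirm that kubectl now points at the new cluster and that the namespaces exist:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ kubectl get nodes
$ kubectl get ns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;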



&lt;h2&gt;
  
  
  Availability zone (Pod Topology)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nbhqjyld4cpyajs9plk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nbhqjyld4cpyajs9plk.png" alt="Image description" width="800" height="148"&gt;&lt;/a&gt; &lt;/p&gt;
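
&lt;p&gt;The same spread can be verified from the command line: the standard &lt;code&gt;topology.kubernetes.io/zone&lt;/code&gt; label shows which availability zone each node belongs to, and the wide pod listing shows where pods landed (the dev namespace here is just an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ kubectl get nodes -L topology.kubernetes.io/zone
$ kubectl get pods -o wide -n dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;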

&lt;h2&gt;
  
  
  Clean up workspace
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Workspaces for multiple environments
&lt;/h2&gt;

&lt;p&gt;Workspaces let us manage multiple distinct sets of infrastructure resources or environments.&lt;br&gt;
Instead of creating a new directory for each environment, we just create a workspace per environment and switch between them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform workspace new dev
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform workspace new stg
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform workspace new prd
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform workspace list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
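
&lt;p&gt;Each workspace keeps its own state, so after selecting one we can apply with environment-specific values (the per-environment tfvars files are an assumption; inside the configuration, the current workspace name is available as &lt;code&gt;terraform.workspace&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ terraform workspace select dev
$ terraform apply -var-file=dev.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;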



</description>
      <category>aws</category>
      <category>eks</category>
      <category>kubernetes</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Auto Scale Kubernetes Pods for Microservices</title>
      <dc:creator>M.M.Monirul Islam</dc:creator>
      <pubDate>Mon, 09 May 2022 09:00:50 +0000</pubDate>
      <link>https://forem.com/monirul87/auto-scale-kubernetes-pods-for-microservices-3761</link>
      <guid>https://forem.com/monirul87/auto-scale-kubernetes-pods-for-microservices-3761</guid>
      <description>&lt;p&gt;In Kubernetes, autoscaling prevents over provisioning resources for microservices running in a cluster. Here is the procedure to set up horizontal and vertical scaling.&lt;/p&gt;

&lt;p&gt;In Kubernetes, cluster capacity planning is critical to avoid overprovisioned or underprovisioned infrastructure. IT admins need a reliable and cost-effective way to keep clusters and pods operational in high-load situations and to scale infrastructure automatically to meet resource requirements.&lt;/p&gt;

&lt;p&gt;Kubernetes supports three different types of autoscaling:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Vertical Pod Autoscaler (VPA)&lt;/strong&gt;. Increases or decreases the resource limits on the pod.&lt;br&gt;
&lt;strong&gt;2. Horizontal Pod Autoscaler (HPA)&lt;/strong&gt;. Increases or decreases the number of pod instances.&lt;br&gt;
&lt;strong&gt;3. Cluster Autoscaler (CA).&lt;/strong&gt; Increases or decreases the nodes in the node pool, based on pod&lt;br&gt;
scheduling.&lt;/p&gt;

&lt;p&gt;I’m going to focus on the horizontal and vertical options, as I will be working at the pod level, not the node level.&lt;/p&gt;
&lt;h2&gt;
  
  
  Set up a microservice in a Kubernetes cluster:
&lt;/h2&gt;

&lt;p&gt;To get started, let's create a &lt;strong&gt;REST API&lt;/strong&gt; -- written in Go and packaged as the container image referenced below -- and deploy it as a microservice on Kubernetes. Save the following manifest in a file named &lt;strong&gt;deployment.yml&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: microsvcmonirul
  namespace: monirul
spec:
selector:
   matchLabels:
     run: microsvcmonirul
replicas: 1
template:
   metadata:
  labels:
    run: microsvcmonirul
spec:
  containers:
  - name: microsvcmonirul
image: &lt;span class="s2"&gt;"monirul87/microsvcmonirul-1.0.3"&lt;/span&gt; ports:
- containerPort: 8080
         resources:
          requests:
            memory: &lt;span class="s2"&gt;"64Mi"&lt;/span&gt;
            cpu: &lt;span class="s2"&gt;"125m"&lt;/span&gt;
          limits:
            memory: &lt;span class="s2"&gt;"128Mi"&lt;/span&gt;
            cpu: &lt;span class="s2"&gt;"250m"&lt;/span&gt;
&lt;span class="nt"&gt;---&lt;/span&gt;
apiVersion: v1
kind: Service
metadata:
  name: microsvcmonirul
  namespace: monirul
  labels:
    run: microsvcmonirul
 spec: ports:
  - port: 8087
    targetPort: 8080
  selector:
    run: microsvcmonirul
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, run the following command to deploy the microservice into the Kubernetes cluster:&lt;br&gt;
&lt;code&gt;[root@kmaster microservice]# kubectl apply -f deployment.yml&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Once complete, the new pod will start up in the cluster, as shown below:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0fh2odafllrsc9q2qsr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0fh2odafllrsc9q2qsr.png" alt="Image description" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To reach the microservice from outside the cluster, expose the service port on a public IP:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@kmaster microservice]# kubectl patch svc microsvcmonirul -n&lt;br&gt;
monirul -p '{"spec": {"type": "LoadBalancer",&lt;br&gt;
"externalIPs":["149.20.184.84"]}}'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2j9uwj2jsvezf3mpzlpo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2j9uwj2jsvezf3mpzlpo.png" alt="Image description" width="800" height="80"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If I access the Golang REST API from my browser, it returns the expected results, as seen below:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hp466eembauvb95ohdb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hp466eembauvb95ohdb.png" alt="Image description" width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that the application is running as a microservice in a Kubernetes cluster, let's autoscale it horizontally in response to sudden increases or decreases in resource demand.&lt;/p&gt;
&lt;h2&gt;
  
  
  Horizontal Pod Autoscaler (HPA)
&lt;/h2&gt;

&lt;p&gt;The HPA scales the number of pods in a deployment based on a custom metric or a resource metric of a pod. Kubernetes admins can also use it to set thresholds that trigger autoscaling through changes to the number of pod replicas inside a deployment controller.&lt;/p&gt;

&lt;p&gt;For example, if there is a sustained spike in CPU utilization above a designated threshold, the HPA will increase the number of pods in the deployment to manage the new load to maintain smooth application function.&lt;/p&gt;

&lt;p&gt;To configure the HPA controller to manage a workload, create a &lt;strong&gt;HorizontalPodAutoscaler&lt;/strong&gt; object, or configure it with the &lt;strong&gt;kubectl autoscale&lt;/strong&gt; subcommand. The equivalent manifest is shown below for reference; I’m going to use the subcommand.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: microsvcmonirul
  namespace: monirul
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: microsvcmonirul
  targetCPUUtilizationPercentage: 50
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the following subcommand to create the CPU-based autoscaler:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@kmaster microservice]# kubectl autoscale deployment&lt;br&gt;
microsvcmonirul -n monirul --cpu-percent=50 --min=1 --max=4&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will increase pods to a maximum of four replicas when the microservice deployment observes more than 50% CPU utilization over a sustained period.&lt;/p&gt;

&lt;p&gt;To check the HPA status in the monirul namespace, run the &lt;code&gt;kubectl get hpa -n monirul&lt;/code&gt; command, which gives us the current and target CPU consumption. Initially an "unknown" value can appear in the current column, but once the metrics server has had time to pull metrics, the percentage utilization will start to appear.&lt;/p&gt;

&lt;p&gt;For a detailed HPA status, use the &lt;strong&gt;describe&lt;/strong&gt; command to find details such as metrics, events and conditions.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl describe hpa -n monirul&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lxnjnj0b87tbty95bud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lxnjnj0b87tbty95bud.png" alt="Image description" width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since the above microservice, running in a single pod, shows less than 50% CPU utilization, there is no need to autoscale the pods yet.&lt;/p&gt;
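
&lt;p&gt;The raw numbers the HPA acts on can be inspected directly (this assumes the Kubernetes metrics server is installed in the cluster):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ kubectl top pods -n monirul
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;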
&lt;h2&gt;
  
  
  Trigger microservice autoscaling by applying load
&lt;/h2&gt;

&lt;p&gt;To introduce load on the application, we run a BusyBox image in a container, which executes a shell loop that makes infinite calls to the REST endpoint exposed by the microservice above. BusyBox is a lightweight image bundling many common Unix utilities -- like &lt;strong&gt;wget&lt;/strong&gt; -- which we use to put stress on the microservice. This stress increases resource consumption on the pods.&lt;/p&gt;

&lt;p&gt;Save the following YAML configuration to a file named &lt;strong&gt;infinite-calls-monirul.yaml&lt;/strong&gt;. At the bottom of the manifest, the &lt;strong&gt;wget&lt;/strong&gt; command calls the REST API in an infinite &lt;strong&gt;while&lt;/strong&gt; loop.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: infinite-calls-monirul
  namespace: monirul
  labels:

    app: infinite-calls-monirul
spec:
  replicas: 1
  selector:
    matchLabels:
      app: infinite-calls-monirul
template:
 &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="s2"&gt;"
metadata:
  name: infinite-calls-monirul
  labels:
    app: infinite-calls-monirul
spec:
  containers:
  - name: infinite-calls-monirul
 image: busybox
command:
- /bin/sh
- -c
- "&lt;/span&gt;&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;wget &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nt"&gt;-O-&lt;/span&gt; http://149.20.184.84:8087/employee&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy this YAML configuration with the &lt;code&gt;kubectl apply -f infinite-calls-monirul.yaml&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;Once the container is active, open a &lt;strong&gt;/bin/sh&lt;/strong&gt; shell in the container using the &lt;code&gt;kubectl exec -it &amp;lt;POD_NAME&amp;gt; -- sh&lt;/code&gt; command to verify that a process is running and making web requests to the REST endpoint in an infinite loop. These calls put load on the application and consume processor time on the container hosting this web application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5a2tixtukrfcmfwexlke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5a2tixtukrfcmfwexlke.png" alt="Image description" width="800" height="290"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxy8ysd03jborgfy60jmz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxy8ysd03jborgfy60jmz.png" alt="Image description" width="800" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After a few minutes of running under this load, the HPA observes the increase in current CPU utilization and autoscales to manage the incoming load. It adds pods to bring CPU utilization back below the 50% target -- that is why the replica count is now four, the configured maximum.&lt;br&gt;
&lt;code&gt;kubectl get hpa -w -n monirul&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgx947ebb0leq503xlruy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgx947ebb0leq503xlruy.png" alt="Image description" width="800" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To see the detailed events and activity of the HPA, run the following command and observe the highlighted section below for the events and autoscaling triggers.&lt;br&gt;
&lt;code&gt;kubectl describe hpa -n monirul&lt;/code&gt;&lt;br&gt;
 &lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2my6656hnwi6f6zkkg91.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2my6656hnwi6f6zkkg91.png" alt="Image description" width="800" height="143"&gt;&lt;/a&gt; &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F44p7xs7dqrej27r5xcxc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F44p7xs7dqrej27r5xcxc.png" alt="Image description" width="800" height="88"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7u4erqnsw6fz49rt7zm4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7u4erqnsw6fz49rt7zm4.png" alt="Image description" width="800" height="314"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjqs4r1n2nrz1vdjumvy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjqs4r1n2nrz1vdjumvy.png" alt="Image description" width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Vertical Pod Autoscaler
&lt;/h2&gt;

&lt;p&gt;The VPA increases and decreases the CPU and memory resource requests of pod containers to better match allocated cluster resources to actual usage. Container resource requests and limits are based on live metrics from a metrics server, rather than on manual benchmarking of resource utilization on the pods.&lt;/p&gt;

&lt;p&gt;In other words, a VPA frees users from manually setting up resource limits and requests for the containers in their pods to match the current resource requirements.&lt;/p&gt;

&lt;p&gt;The VPA can only replace the pods managed by a replication controller, such as deployments, and it requires the Kubernetes metrics server.&lt;/p&gt;

&lt;p&gt;A VPA has three main components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Recommender:&lt;/strong&gt; Monitors resource utilization and computes target values. In the recommendation mode, VPA will update the suggested values but will not terminate pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Updater:&lt;/strong&gt; Terminates the pods that were scaled with new resource limits. Because Kubernetes can't change the resource limits of a running pod, VPA terminates the pods with outdated limits and replaces them with pods with updated resource request and limit values.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Admission Controller:&lt;/strong&gt; Intercepts pod creation requests. If the pod is matched by a VPA config with mode not set to "off," the controller rewrites the request by applying recommended resources to the pod specification. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
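
&lt;p&gt;For reference, here is a minimal sketch of a VerticalPodAutoscaler object targeting the same deployment. It assumes the VPA components from the kubernetes/autoscaler project are installed in the cluster; with updateMode set to "Auto", the Updater is allowed to evict pods and restart them with the recommended requests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ kubectl apply -f - &lt;&lt;EOF
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: microsvcmonirul
  namespace: monirul
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: microsvcmonirul
  updatePolicy:
    updateMode: "Auto"
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;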

&lt;h2&gt;
  
  
  Conflicts, caveats and challenges in autoscaling
&lt;/h2&gt;

&lt;p&gt;Kubernetes autoscaling demonstrates flexibility and a powerful use case: It dynamically manages infrastructure scaling in production environments and enhances resource utilization, which reduces overhead.&lt;/p&gt;

&lt;p&gt;HPA and VPA are useful, and there is a temptation to use both, but this can lead to conflicts. For example, if both HPA and VPA react to the same CPU thresholds, the VPA will try to terminate pods and recreate them with updated resource values, while the HPA tries to create new pods with the old specs.&lt;/p&gt;

&lt;p&gt;This can lead to incorrect resource allocations and conflicts.&lt;br&gt;
To prevent such a situation while still using HPA and VPA in parallel, make sure they rely on different metrics to autoscale.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>eks</category>
      <category>aws</category>
      <category>go</category>
    </item>
  </channel>
</rss>
