<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Woobuntu</title>
    <description>The latest articles on Forem by Woobuntu (@woobuntu).</description>
    <link>https://forem.com/woobuntu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3010295%2F71dfaeed-3214-4093-901b-2d8d71caa69b.jpg</url>
      <title>Forem: Woobuntu</title>
      <link>https://forem.com/woobuntu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/woobuntu"/>
    <language>en</language>
    <item>
      <title>How to Enable Google OIDC Login in Vault Using Helm and Terraform</title>
      <dc:creator>Woobuntu</dc:creator>
      <pubDate>Wed, 16 Jul 2025 07:19:27 +0000</pubDate>
      <link>https://forem.com/woobuntu/how-to-enable-google-oidc-login-in-vault-using-helm-and-terraform-7h3</link>
      <guid>https://forem.com/woobuntu/how-to-enable-google-oidc-login-in-vault-using-helm-and-terraform-7h3</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This post documents a real-world example of enabling Google OIDC-based login for Vault.&lt;/p&gt;

&lt;p&gt;It is based on an environment where Vault is managed via the official Helm chart using Terraform.&lt;/p&gt;

&lt;p&gt;If you're looking to integrate a unified authentication mechanism across your organization using OIDC, this post may help.&lt;/p&gt;

&lt;p&gt;For better understanding, check out this article first:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/woobuntu/understanding-oauth-20-and-openid-connect-cfh"&gt;OAuth 2.0 and OpenID Connect&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why we introduced OIDC
&lt;/h2&gt;

&lt;p&gt;Every employee in our organization is issued a Google Workspace account. As we adopted internal tools like Vault, ArgoCD, Grafana, and Jenkins, managing access control became increasingly important.&lt;/p&gt;

&lt;p&gt;Creating and managing separate user accounts for each tool was both tedious and insecure. So we decided to unify authentication using Google OIDC, leveraging the accounts already in place.&lt;/p&gt;

&lt;p&gt;This greatly reduced the complexity of user management and brought consistency to authentication across all systems.&lt;/p&gt;




&lt;h2&gt;
  
  
  Setup Process
&lt;/h2&gt;

&lt;p&gt;Reference:&lt;br&gt;
&lt;a href="https://github.com/hashicorp/vault-guides/tree/master/identity/oidc-auth#vault-openid-demo" rel="noopener noreferrer"&gt;hashicorp/vault-guides - oidc-auth&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Set up an OAuth client in Google Cloud Console
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;This process could potentially be replaced with the Terraform resource &lt;code&gt;google_iam_oauth_client&lt;/code&gt; in the future:&lt;br&gt;&lt;br&gt;
&lt;a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_oauth_client" rel="noopener noreferrer"&gt;https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_oauth_client&lt;/a&gt;&lt;br&gt;&lt;br&gt;
I plan to refactor accordingly.&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to Google Cloud Console&lt;/li&gt;
&lt;li&gt;Select your project&lt;/li&gt;
&lt;li&gt;Navigate to APIs &amp;amp; Services → Credentials&lt;/li&gt;
&lt;li&gt;Click Create Credentials → OAuth client ID&lt;/li&gt;
&lt;li&gt;Choose Web application as the application type
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2om530g9aedfcw86av6.png" alt="Image0" width="800" height="290"&gt;
&lt;/li&gt;
&lt;li&gt;Set a name
&lt;/li&gt;
&lt;li&gt;Add your Vault origin in &lt;strong&gt;Authorized JavaScript origins&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the redirect URI as defined in the official documentation:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://&amp;lt;your-vault-domain&amp;gt;/ui/vault/auth/oidc/oidc/callback
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Once created, securely store the &lt;code&gt;client_id&lt;/code&gt; and &lt;code&gt;client_secret&lt;/code&gt; (e.g., Vault, AWS Secrets Manager)&lt;/p&gt;&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhvrr39uf9cdhum0rtchs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhvrr39uf9cdhum0rtchs.png" alt="Image1" width="543" height="897"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Configure OIDC in Vault
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// vault-config.tf
resource "vault_auth_backend" "oidc" {
  type = "oidc"
  path = "oidc"
}

resource "vault_generic_endpoint" "oidc_config" {
  path = "auth/oidc/config"

  data_json = jsonencode({
    "oidc_discovery_url" = "https://accounts.google.com"
    "oidc_client_id"     = local.vault_secrets.vault_google_client_id
    "oidc_client_secret" = local.vault_secrets.vault_google_client_secret
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// vault-users.tf
resource "vault_generic_endpoint" "woobuntu" {
  path = "auth/oidc/role/woobuntu"

  data_json = jsonencode({
    user_claim            = "email"
    oidc_scopes           = "openid email"
    bound_audiences       = [local.vault_secrets.vault_google_client_id]
    allowed_redirect_uris = ["https://vault.my-company.com/ui/vault/auth/oidc/oidc/callback"]
    policies              = [
      // your Vault policies here
    ]
    ttl = "1h"
    bound_claims = {
      "email" = "woobuntu@my-company.com"
    }
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We injected the &lt;code&gt;client_id&lt;/code&gt; and &lt;code&gt;client_secret&lt;/code&gt; issued in step 1. The remaining configuration follows the documentation below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/hashicorp/vault-guides/tree/master/identity/oidc-auth#configure-vault" rel="noopener noreferrer"&gt;https://github.com/hashicorp/vault-guides/tree/master/identity/oidc-auth#configure-vault&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.hashicorp.com/vault/api-docs/auth/jwt#create-update-role" rel="noopener noreferrer"&gt;https://developer.hashicorp.com/vault/api-docs/auth/jwt#create-update-role&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;💡 Note on role configuration&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;We chose to create a separate role per user because Vault does not currently support mapping multiple users to a single OIDC role via email matching. Initially, we tried assigning multiple emails via bound_claims, but bound_claims_type only supports string or glob—not arrays.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In the future, we plan to adopt group-based access control using the groups_claim and bound_claims.group features. This depends on broader adoption of Google Workspace groups within our organization.&lt;/em&gt;&lt;/p&gt;
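&lt;p&gt;&lt;em&gt;As a sketch of the glob option mentioned above (not what we run in production): if one set of policies is acceptable for every address in a domain, a single role with &lt;code&gt;bound_claims_type = "glob"&lt;/code&gt; can replace the per-user roles:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical shared role; grants the same policies to all matching emails
resource "vault_generic_endpoint" "employee" {
  path = "auth/oidc/role/employee"

  data_json = jsonencode({
    user_claim            = "email"
    oidc_scopes           = "openid email"
    bound_audiences       = [local.vault_secrets.vault_google_client_id]
    allowed_redirect_uris = ["https://vault.my-company.com/ui/vault/auth/oidc/oidc/callback"]
    bound_claims_type     = "glob"
    bound_claims = {
      "email" = "*@my-company.com"
    }
    ttl = "1h"
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;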




&lt;p&gt;&lt;em&gt;💡 Security Tip&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Avoid hardcoding sensitive credentials like &lt;code&gt;client_id&lt;/code&gt; and &lt;code&gt;client_secret&lt;/code&gt; in &lt;code&gt;.tfvars&lt;/code&gt; or local variables. Instead, use a secure secret manager such as Vault or AWS Secrets Manager:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  vault_secrets = jsondecode(data.aws_secretsmanager_secret_version.vault_secrets.secret_string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
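&lt;p&gt;&lt;em&gt;The &lt;code&gt;locals&lt;/code&gt; block above assumes a matching data source. A minimal sketch, where the secret name &lt;code&gt;vault/oidc&lt;/code&gt; is a placeholder:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Reads the current version of the secret holding the OAuth credentials
data "aws_secretsmanager_secret_version" "vault_secrets" {
  secret_id = "vault/oidc" // placeholder secret name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;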



</description>
      <category>vault</category>
      <category>oauth</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Enable Google OIDC Login in Grafana Using Helm and Terraform</title>
      <dc:creator>Woobuntu</dc:creator>
      <pubDate>Wed, 16 Jul 2025 06:03:40 +0000</pubDate>
      <link>https://forem.com/woobuntu/how-to-enable-google-oidc-login-in-grafana-using-helm-and-terraform-1igj</link>
      <guid>https://forem.com/woobuntu/how-to-enable-google-oidc-login-in-grafana-using-helm-and-terraform-1igj</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This post documents a real-world case of enabling Google OIDC-based login for Grafana.&lt;/p&gt;

&lt;p&gt;It is written based on an environment where Grafana is managed via the official Helm chart using Terraform.&lt;/p&gt;

&lt;p&gt;This guide may be helpful if you're looking to integrate a centralized authentication system into internal tools using OIDC.&lt;/p&gt;

&lt;p&gt;For better understanding, check out this post first:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/woobuntu/understanding-oauth-20-and-openid-connect-cfh"&gt;OAuth 2.0 and OpenID Connect&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why we introduced OIDC
&lt;/h2&gt;

&lt;p&gt;All members in our organization are issued Google Workspace accounts. As we adopted several internal tools—Grafana, Argo CD, Vault, Jenkins, and others—user access management became increasingly important.&lt;/p&gt;

&lt;p&gt;Managing separate user accounts for each tool was not only cumbersome but also introduced security concerns. To unify authentication across all services, we adopted Google OIDC as a centralized login mechanism.&lt;/p&gt;

&lt;p&gt;This allowed us to simplify user management and establish a consistent authentication flow across the system.&lt;/p&gt;




&lt;h2&gt;
  
  
  Setup Process
&lt;/h2&gt;

&lt;p&gt;Refer to the Grafana documentation:&lt;br&gt;
&lt;a href="https://grafana.com/docs/grafana/latest/setup-grafana/configure-security/configure-authentication/google/#enable-google-oauth-in-grafana" rel="noopener noreferrer"&gt;Configure Google OAuth authentication | Grafana docs&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Register OAuth Client in Google Cloud Console
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;This process could potentially be replaced with the Terraform resource &lt;code&gt;google_iam_oauth_client&lt;/code&gt; in the future:&lt;br&gt;&lt;br&gt;
&lt;a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_oauth_client" rel="noopener noreferrer"&gt;https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_oauth_client&lt;/a&gt;&lt;br&gt;&lt;br&gt;
I plan to refactor accordingly.&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to Google Cloud Console&lt;/li&gt;
&lt;li&gt;Select your project&lt;/li&gt;
&lt;li&gt;Navigate to APIs &amp;amp; Services → Credentials&lt;/li&gt;
&lt;li&gt;Click Create Credentials → OAuth client ID&lt;/li&gt;
&lt;li&gt;Choose Web application as the application type
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2om530g9aedfcw86av6.png" alt="Image0" width="800" height="290"&gt;
&lt;/li&gt;
&lt;li&gt;Set a name
&lt;/li&gt;
&lt;li&gt;Add your Grafana origin in &lt;strong&gt;Authorized JavaScript origins&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the redirect URI as defined in the official documentation:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://&amp;lt;your-grafana-domain&amp;gt;/login/google
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Once created, securely store the &lt;code&gt;client_id&lt;/code&gt; and &lt;code&gt;client_secret&lt;/code&gt; (e.g., Vault, AWS Secrets Manager)&lt;/p&gt;&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrad24uxzdkc6uddef98.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrad24uxzdkc6uddef98.png" alt="Image1" width="571" height="922"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Configure &lt;code&gt;grafana.ini&lt;/code&gt; via Helm and Terraform
&lt;/h3&gt;

&lt;p&gt;We configured the &lt;code&gt;grafana.ini&lt;/code&gt; values through Terraform like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// modules/grafana/main.tf
resource "helm_release" "default" {
  ...
  chart      = "grafana"
  repository = "https://grafana.github.io/helm-charts"

  values = [
    file("${path.module}/values.yaml"),
    jsonencode({
      ...
      "grafana.ini" = merge({
        ...
      }, var.grafana_ini)
      env = var.grafana_env
    })
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// grafana.tf
module "grafana" {
  source             = "../modules/grafana"
  storage_class_name = "grafana"
  ...

  grafana_ini = {
    ...
    "auth.google" = {
      enabled         = true
      allow_sign_up   = true
      auto_login      = true
      client_id       = local.grafana_secrets.grafana_google_client_id
      client_secret   = local.grafana_secrets.grafana_google_client_secret
      scopes          = "openid profile email"
      auth_url        = "https://accounts.google.com/o/oauth2/v2/auth"
      token_url       = "https://oauth2.googleapis.com/token"
      api_url         = "https://openidconnect.googleapis.com/v1/userinfo"
      allowed_domains = "my-company.com gmail.com"
      use_pkce        = true
    }
    server = {
      root_url = "https://grafana.my-company.com"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;client_id&lt;/code&gt; and &lt;code&gt;client_secret&lt;/code&gt; are injected using the values created in step 1. The other settings follow the &lt;a href="https://grafana.com/docs/grafana/latest/setup-grafana/configure-security/configure-authentication/google/#enable-google-oauth-in-grafana" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;
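&lt;p&gt;For reference, the &lt;code&gt;grafana_ini&lt;/code&gt; map above corresponds to the following &lt;code&gt;grafana.ini&lt;/code&gt; sections once rendered by the chart (credential values elided):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[auth.google]
enabled = true
allow_sign_up = true
auto_login = true
client_id = ...
client_secret = ...
scopes = openid profile email
auth_url = https://accounts.google.com/o/oauth2/v2/auth
token_url = https://oauth2.googleapis.com/token
api_url = https://openidconnect.googleapis.com/v1/userinfo
allowed_domains = my-company.com gmail.com
use_pkce = true

[server]
root_url = https://grafana.my-company.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;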

&lt;p&gt;&lt;em&gt;💡 Security Tip: Avoid hardcoding sensitive credentials like &lt;code&gt;client_id&lt;/code&gt; and &lt;code&gt;client_secret&lt;/code&gt; in your .tfvars or local values. Instead, load them from a secure secret manager such as Vault or AWS Secrets Manager:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  grafana_secrets = jsondecode(data.aws_secretsmanager_secret_version.grafana_secrets.secret_string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>devops</category>
      <category>grafana</category>
      <category>oauth</category>
    </item>
    <item>
      <title>Implementing Google OIDC Login for Argo CD (Helm + Terraform Setup)</title>
      <dc:creator>Woobuntu</dc:creator>
      <pubDate>Tue, 15 Jul 2025 08:13:28 +0000</pubDate>
      <link>https://forem.com/woobuntu/implementing-google-oidc-login-for-argo-cd-helm-terraform-setup-2p1n</link>
      <guid>https://forem.com/woobuntu/implementing-google-oidc-login-for-argo-cd-helm-terraform-setup-2p1n</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This post documents a real-world example of integrating Google OIDC login into Argo CD.&lt;/p&gt;

&lt;p&gt;It's written based on an infrastructure where the Argo CD Helm chart is managed via Terraform.&lt;/p&gt;

&lt;p&gt;If you're looking to unify authentication across your organization using OIDC, this guide may help.&lt;/p&gt;

&lt;p&gt;👉 For better understanding, check out this related post first:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/woobuntu/understanding-oauth-20-and-openid-connect-cfh"&gt;OAuth2.0 and OpenID Connect&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Three Ways to Apply Google OIDC in Argo CD
&lt;/h2&gt;

&lt;p&gt;🔗 &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/user-management/google/" rel="noopener noreferrer"&gt;Official Docs: Google - Argo CD - Declarative GitOps CD for Kubernetes&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;OpenID Connect using Dex&lt;br&gt;
This method does not support Google Workspace group claims (i.e., you can’t authorize users based on group membership).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;💡 Since our company doesn’t actively use Google Workspace groups, we chose this method.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;SAML App Auth using Dex&lt;br&gt;
This method is discouraged by the Dex maintainers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;OpenID Connect + Google Groups using Dex&lt;br&gt;
Supports access control based on group claims (if you actively use Google Groups).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  What is Dex?
&lt;/h3&gt;

&lt;p&gt;🔗 &lt;a href="https://dexidp.io/docs/connectors/" rel="noopener noreferrer"&gt;Dex Connectors Documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9iv5u1h75jhoo33gwkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9iv5u1h75jhoo33gwkl.png" alt="Image0" width="760" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Dex is an identity provider (IdP) that connects to various authentication backends such as SAML, LDAP, GitHub, and Google, and exposes a unified OpenID Connect (OIDC) interface to clients.&lt;br&gt;
In other words, Dex acts as a bridge that standardizes diverse authentication sources under a single OIDC protocol.&lt;/p&gt;


&lt;h2&gt;
  
  
  Implementation Steps
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Reference: &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/user-management/google/" rel="noopener noreferrer"&gt;Argo CD Docs - OpenID Connect using Dex&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  1. Creating OAuth Client in Google Cloud Console
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;This process could potentially be replaced with the Terraform resource google_iam_oauth_client in the future:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_oauth_client" rel="noopener noreferrer"&gt;https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_oauth_client&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I plan to refactor accordingly.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to Google Cloud Console
&lt;/li&gt;
&lt;li&gt;Select a project
&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;APIs &amp;amp; Services&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Go to &lt;strong&gt;Credentials&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create credentials → OAuth client ID&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2om530g9aedfcw86av6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2om530g9aedfcw86av6.png" alt="Image0" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;Select &lt;strong&gt;Web application&lt;/strong&gt; as the Application type
&lt;/li&gt;
&lt;li&gt;Set a name
&lt;/li&gt;
&lt;li&gt;Add your ArgoCD origin in &lt;strong&gt;Authorized JavaScript origins&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the redirect URI as defined in the official documentation:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://&amp;lt;argocd domain&amp;gt;/api/dex/callback
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Once created, securely store the &lt;code&gt;client_id&lt;/code&gt; and &lt;code&gt;client_secret&lt;/code&gt; (e.g., Vault, AWS Secrets Manager)&lt;/p&gt;&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zkuh76pegh4laviipsc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zkuh76pegh4laviipsc.png" alt="Image1" width="566" height="907"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Configuring &lt;code&gt;argocd-cm&lt;/code&gt; via Helm + Terraform
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// modules/argocd/main.tf
resource "helm_release" "default" {
  ...
  chart            = "argo-cd"
  repository       = "https://argoproj.github.io/argo-helm"
  ...

  values = [
    file("${path.module}/values.yaml"),
    yamlencode({
      ...
      configs = {
        ...
        rbac = {
          "policy.csv" = join("\n", concat([
            "p, role:terraform, repositories, *, *, allow",
            "g, terraform-user, role:terraform"
          ], var.additional_policies))
          scopes = var.rbac_scopes // This corresponds to OIDC scopes
        }
        cm = {
          "server.rbac.log.enforce.enable" = true
          ...
          "dex.config"                     = yamlencode(var.dex_config)
          url                              = "https://${local.argocd_ingress_domain}" // required for redirect URI
          ...
        }
      }
      ...
    })
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// argocd.tf

module "argocd" {
  source                     = "../modules/argocd"
  cluster_name               = var.cluster_name
  argocd_ingress_root_domain = "my-company.com"

    ...
  // Google OIDC uses email as the username
  rbac_scopes = "[email]"

  dex_config = {
    "connectors" = [
      {
        config = {
          issuer       = "https://accounts.google.com"
          clientId     = local.argocd_secrets.argocd_google_client_id
          clientSecret = local.argocd_secrets.argocd_google_client_secret
        }
        type = "oidc"
        id   = "google"
        name = "Google"
      }
    ]
  }
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
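&lt;p&gt;&lt;em&gt;Because &lt;code&gt;rbac_scopes = "[email]"&lt;/code&gt;, RBAC subjects can be Google account emails. A hypothetical extra line for &lt;code&gt;additional_policies&lt;/code&gt;, granting one user Argo CD's built-in admin role:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;g, woobuntu@my-company.com, role:admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;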



&lt;p&gt;The &lt;code&gt;clientId&lt;/code&gt; and &lt;code&gt;clientSecret&lt;/code&gt; were injected using the values generated in the previous step.&lt;br&gt;
Other configuration values were referenced from the following documentation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/user-management/google/#configure-argo-to-use-openid-connect" rel="noopener noreferrer"&gt;https://argo-cd.readthedocs.io/en/stable/operator-manual/user-management/google/#configure-argo-to-use-openid-connect&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dexidp.io/docs/connectors/oidc/" rel="noopener noreferrer"&gt;https://dexidp.io/docs/connectors/oidc/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
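&lt;p&gt;For reference, the &lt;code&gt;dex_config&lt;/code&gt; variable above renders into &lt;code&gt;argocd-cm&lt;/code&gt; roughly as follows (the URL is illustrative and the credential values are elided):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data:
  url: https://argocd.my-company.com
  dex.config: |
    connectors:
      - type: oidc
        id: google
        name: Google
        config:
          issuer: https://accounts.google.com
          clientId: ...
          clientSecret: ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;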

&lt;p&gt;&lt;em&gt;💡 Since &lt;code&gt;clientId&lt;/code&gt; and &lt;code&gt;clientSecret&lt;/code&gt; are sensitive credentials,&lt;br&gt;
it is highly recommended not to hardcode them in &lt;code&gt;.tfvars&lt;/code&gt; or &lt;code&gt;locals&lt;/code&gt; blocks. Instead, use a secret manager such as Vault or AWS Secrets Manager to securely retrieve and inject these values.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
    ...
    argocd_secrets  = jsondecode(data.aws_secretsmanager_secret_version.argocd_secrets.secret_string)
    ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>argocd</category>
      <category>oauth</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Enable Google OIDC Login in Jenkins Using Helm, JCasC, and Terraform</title>
      <dc:creator>Woobuntu</dc:creator>
      <pubDate>Tue, 15 Jul 2025 05:18:22 +0000</pubDate>
      <link>https://forem.com/woobuntu/how-to-enable-google-oidc-login-in-jenkins-using-helm-jcasc-and-terraform-51o9</link>
      <guid>https://forem.com/woobuntu/how-to-enable-google-oidc-login-in-jenkins-using-helm-jcasc-and-terraform-51o9</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;📌 This post describes a real-world case of applying Google OIDC-based login to Jenkins.&lt;/p&gt;

&lt;p&gt;The environment assumes Jenkins is installed via Helm and managed as a Terraform resource, with configuration handled using the &lt;code&gt;JCasC&lt;/code&gt; plugin.&lt;/p&gt;

&lt;p&gt;If you're aiming to unify authentication across tools using OIDC in your organization, this post should be a helpful reference.&lt;/p&gt;

&lt;p&gt;🔗 You may want to read the following post first for better context:&lt;br&gt;
&lt;a href="https://dev.to/woobuntu/understanding-oauth-20-and-openid-connect-cfh"&gt;Understanding OAuth 2.0 and OpenID Connect&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why We Introduced OIDC Authentication
&lt;/h2&gt;

&lt;p&gt;All employees in our organization are issued Google Workspace accounts, and we have multiple internal tools requiring access control — including Jenkins, ArgoCD, Vault, and Grafana.&lt;/p&gt;

&lt;p&gt;Managing separate user accounts for each tool was both time-consuming and a potential security risk.&lt;/p&gt;

&lt;p&gt;So we introduced Google OIDC to unify authentication based on existing Google accounts.&lt;br&gt;
This allowed us to reduce the complexity of user management and apply a consistent authentication model across our systems.&lt;/p&gt;


&lt;h2&gt;
  
  
  Implementation Steps
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Jenkins OIDC plugin: &lt;a href="https://plugins.jenkins.io/oic-auth/" rel="noopener noreferrer"&gt;OpenId Connect Authentication&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  🧩 Prerequisites
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Jenkins is installed via Helm; Helm charts are managed using Terraform.&lt;/li&gt;
&lt;li&gt;Jenkins configuration is handled through the &lt;code&gt;JCasC&lt;/code&gt; plugin.&lt;/li&gt;
&lt;/ol&gt;


&lt;h3&gt;
  
  
  1. Creating OAuth Client in Google Cloud Console
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;This process could potentially be replaced with the Terraform resource &lt;code&gt;google_iam_oauth_client&lt;/code&gt; in the future:&lt;br&gt;&lt;br&gt;
&lt;a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_oauth_client" rel="noopener noreferrer"&gt;https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_oauth_client&lt;/a&gt;&lt;br&gt;&lt;br&gt;
I plan to refactor accordingly.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Reference:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/jenkinsci/oic-auth-plugin/blob/master/docs/configuration/GOOGLE.md#provider-configuration" rel="noopener noreferrer"&gt;https://github.com/jenkinsci/oic-auth-plugin/blob/master/docs/configuration/GOOGLE.md#provider-configuration&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to Google Cloud Console
&lt;/li&gt;
&lt;li&gt;Select a project
&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;APIs &amp;amp; Services&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Go to &lt;strong&gt;Credentials&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create credentials → OAuth client ID&lt;/strong&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2om530g9aedfcw86av6.png" alt="Image0" width="800" height="290"&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Web application&lt;/strong&gt; as the Application type
&lt;/li&gt;
&lt;li&gt;Set a name
&lt;/li&gt;
&lt;li&gt;Add your Jenkins origin in &lt;strong&gt;Authorized JavaScript origins&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the redirect URI as defined in the official documentation:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://&amp;lt;your-jenkins-domain&amp;gt;/securityRealm/finishLogin
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Once created, securely store the &lt;code&gt;client_id&lt;/code&gt; and &lt;code&gt;client_secret&lt;/code&gt; (e.g., Vault, AWS Secrets Manager)&lt;/p&gt;&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzm0f6lwiktj0pf9ifdh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzm0f6lwiktj0pf9ifdh.png" alt="Image1" width="552" height="880"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqsnaut3x22wk3pdwgp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqsnaut3x22wk3pdwgp2.png" alt="Image2" width="515" height="676"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The screenshots above are for demonstration purposes only. In real projects, you should never expose client information — especially the Client Secret — in public.&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Configuring the &lt;code&gt;oic-auth&lt;/code&gt; Plugin
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// modules/jenkins/main.tf
resource "helm_release" "default" {
  ...
  chart            = "jenkins"
  repository       = "https://charts.jenkins.io"
  ...

  values = [
    file("${path.module}/values.yaml"),
    yamlencode({
      controller = {
          ...
          installPlugins = concat([
              ...
          ], var.extra_plugins)
        ...
        JCasC = {
          securityRealm         = var.yaml_encoded_security_realm
          ...
        }
      }
      ...
    })
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// jenkins_helm.tf
module "jenkins" {
  source                      = "../modules/jenkins"
    ...
  yaml_encoded_security_realm = yamlencode({
    oic = {
      serverConfiguration = {
        wellKnown = {
          wellKnownOpenIDConfigurationUrl = "https://accounts.google.com/.well-known/openid-configuration"
          scopesOverride                  = "openid profile email"
        }
      }
      clientId       = local.jenkins_secrets.jenkins_google_client_id
      clientSecret   = local.jenkins_secrets.jenkins_google_client_secret
      userNameField  = "email"
      emailFieldName = "email"
      pkceEnabled    = true
    }
  })

  extra_plugins = [
      ...
    "oic-auth:4.494.v6b_f419104767",
    ...
  ]
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;clientId&lt;/code&gt; and &lt;code&gt;clientSecret&lt;/code&gt; were injected from the values issued earlier,&lt;br&gt;
and the rest of the config values were based on the official documentation:&lt;br&gt;
&lt;a href="https://github.com/jenkinsci/oic-auth-plugin/blob/master/docs/configuration/GOOGLE.md#jcasc" rel="noopener noreferrer"&gt;https://github.com/jenkinsci/oic-auth-plugin/blob/master/docs/configuration/GOOGLE.md#jcasc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;⚠️ &lt;em&gt;In the documentation above, &lt;code&gt;wellKnownOpenIDConfigurationUrl&lt;/code&gt; and &lt;code&gt;scopesOverride&lt;/code&gt; are placed directly under &lt;code&gt;oic&lt;/code&gt;, but this caused OIDC to fail in my case.&lt;br&gt;
Placing them under &lt;code&gt;serverConfiguration.wellKnown&lt;/code&gt; — as shown in the general configuration reference — worked correctly:&lt;br&gt;
&lt;a href="https://github.com/jenkinsci/oic-auth-plugin/blob/master/docs/configuration/README.md#jcasc-configuration-reference" rel="noopener noreferrer"&gt;https://github.com/jenkinsci/oic-auth-plugin/blob/master/docs/configuration/README.md#jcasc-configuration-reference&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;🔐 Since &lt;code&gt;clientId&lt;/code&gt; and &lt;code&gt;clientSecret&lt;/code&gt; are sensitive credentials, they should not be hardcoded in &lt;code&gt;.tfvars&lt;/code&gt; or &lt;code&gt;locals&lt;/code&gt;.&lt;br&gt;
It is strongly recommended to retrieve them securely from Vault or AWS Secrets Manager instead:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
    ...
  jenkins_secrets = jsondecode(data.aws_secretsmanager_secret_version.jenkins_secrets.secret_string)
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
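&lt;p&gt;&lt;em&gt;For completeness, the &lt;code&gt;jsondecode(...)&lt;/code&gt; call above needs a matching data source. This is a minimal sketch; the secret name jenkins/oidc is an assumption, not the name used in the original setup:&lt;/em&gt;&lt;/p&gt;

```terraform
# Hypothetical wiring for the jsondecode(...) call above.
# The secret name "jenkins/oidc" is an assumption; substitute your own.
data "aws_secretsmanager_secret" "jenkins_secrets" {
  name = "jenkins/oidc"
}

data "aws_secretsmanager_secret_version" "jenkins_secrets" {
  secret_id = data.aws_secretsmanager_secret.jenkins_secrets.id
}
```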



</description>
      <category>devops</category>
      <category>oauth</category>
      <category>jenkins</category>
      <category>googlecloud</category>
    </item>
    <item>
      <title>Understanding OAuth 2.0 and OpenID Connect</title>
      <dc:creator>Woobuntu</dc:creator>
      <pubDate>Tue, 15 Jul 2025 01:55:52 +0000</pubDate>
      <link>https://forem.com/woobuntu/understanding-oauth-20-and-openid-connect-cfh</link>
      <guid>https://forem.com/woobuntu/understanding-oauth-20-and-openid-connect-cfh</guid>
      <description>&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/996OiexHze0"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  OAuth 2.0
&lt;/h2&gt;

&lt;p&gt;OAuth 2.0 is an &lt;strong&gt;authorization&lt;/strong&gt; protocol that allows a client application to access a user's resources with limited permissions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Background: Why OAuth 2.0 was introduced
&lt;/h3&gt;

&lt;p&gt;In this article, the term “service” refers to the &lt;strong&gt;Client&lt;/strong&gt; in the OAuth 2.0 context.&lt;/p&gt;

&lt;p&gt;Let’s say a service (the &lt;code&gt;Client&lt;/code&gt;) supports Google login, and it needs access to the logged-in user's contact list.&lt;br&gt;&lt;br&gt;
If OAuth 2.0 didn't exist, the service (&lt;code&gt;Client&lt;/code&gt;) would have to log in to Google &lt;strong&gt;using the user’s (&lt;code&gt;Resource Owner&lt;/code&gt;) own ID and password&lt;/strong&gt;, just like the user would.&lt;/p&gt;

&lt;p&gt;In other words, the service (&lt;code&gt;Client&lt;/code&gt;) would have to &lt;strong&gt;ask for the user's password&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Obviously, this is not acceptable in real-world services.&lt;br&gt;&lt;br&gt;
A protocol was needed to allow the service (&lt;code&gt;Client&lt;/code&gt;) to access the user’s Google contacts &lt;strong&gt;without ever knowing the user's password&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
That’s the background behind the creation of OAuth 2.0.&lt;/p&gt;


&lt;h3&gt;
  
  
  How OAuth 2.0 works
&lt;/h3&gt;

&lt;p&gt;Since only the user (&lt;code&gt;Resource Owner&lt;/code&gt;) should know their Google password, only the user should log in to Google.&lt;br&gt;&lt;br&gt;
So, the service (&lt;code&gt;Client&lt;/code&gt;) must provide an interface that lets the user log in to Google directly, and during that process, ask for permission to access the user’s Google resources.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📌 This sequence diagram is written in &lt;a href="https://mermaid.js.org/" rel="noopener noreferrer"&gt;Mermaid&lt;/a&gt; syntax.&lt;br&gt;&lt;br&gt;
Unfortunately, dev.to does not support rendering Mermaid diagrams directly.&lt;br&gt;&lt;br&gt;
You can &lt;a href="https://woobuntu2024.notion.site/OAuth2-0-OpenID-Connect-230919ecc9e98027b453c6532c265aff?pvs=74" rel="noopener noreferrer"&gt;view the full visual version here on Notion&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sequenceDiagram 
    box blue FrontChannel
        participant Client
        participant Google
        participant Resource Owner
    end
    box red BackChannel
        participant Client's Server
        participant Google's Authorization Server
        participant Google's Resource Server
    end
    Client-&amp;gt;&amp;gt;Resource Owner: Provide an interface like "Login with Google"
    Resource Owner -&amp;gt;&amp;gt; Google: Redirected to Google login page
    Google-&amp;gt;&amp;gt;Resource Owner: Ask for consent to access resources in the given scope
    Resource Owner-&amp;gt;&amp;gt;Google: Give consent
    Google-&amp;gt;&amp;gt;Client: Redirect to pre-registered redirect URI or callback URL with an authorization code (OAuth 2.0 Authorization Grant)
    Client-&amp;gt;&amp;gt;Client's Server: Send the authorization code received from Google to the backend server

    Client's Server--&gt;&gt;Google's Authorization Server: Send the code along with the client secret issued by Google
    Google's Authorization Server--&gt;&gt;Client's Server: If valid, issue an access token
    Client's Server--&gt;&gt;Google's Resource Server: Use the access token to access user resources on Google
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
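&lt;p&gt;&lt;em&gt;The front-channel authorization request at the top of the diagram can be sketched in a few lines of Python. This is a minimal illustration; the client ID, redirect URI, and state value are placeholders:&lt;/em&gt;&lt;/p&gt;

```python
from urllib.parse import urlencode

# Minimal sketch of the front-channel authorization request in the
# diagram above. client_id and redirect_uri are placeholder values.
def build_auth_url(client_id, redirect_uri, scope, state):
    params = {
        "response_type": "code",  # ask for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,           # CSRF protection, echoed back on the callback
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

url = build_auth_url("my-client-id", "https://example.com/callback",
                     "openid email", "xyz")
```

&lt;p&gt;&lt;em&gt;The user is sent to this URL, gives consent, and Google redirects back to the registered callback with the authorization code.&lt;/em&gt;&lt;/p&gt;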



&lt;h2&gt;
  
  
  OpenID Connect
&lt;/h2&gt;

&lt;p&gt;In OAuth 2.0, the client can gain access to a user's resources,&lt;br&gt;
but it is not guaranteed to know who the user is (i.e., no identity information is provided by default).&lt;/p&gt;

&lt;p&gt;It’s technically possible to include scopes that return some identity-related info,&lt;br&gt;
but that’s more of a workaround — not a proper authentication mechanism.&lt;/p&gt;

&lt;p&gt;OpenID Connect (OIDC) was introduced to solve this problem.&lt;br&gt;
It extends OAuth 2.0 to handle user authentication properly.&lt;/p&gt;




&lt;h3&gt;
  
  
  How OpenID Connect works
&lt;/h3&gt;

&lt;p&gt;If you include the &lt;code&gt;openid&lt;/code&gt; scope in your OAuth 2.0 request,&lt;br&gt;
Google’s authorization server will return not only an access token but also an ID token.&lt;/p&gt;

&lt;p&gt;The ID token is a JWT (JSON Web Token) that contains identity information such as email, name, etc.&lt;br&gt;
By verifying or decoding this token, the client can securely identify the user.&lt;/p&gt;
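&lt;p&gt;&lt;em&gt;As a rough illustration, the claims in an ID token's payload can be read like this. Note that this sketch skips signature verification entirely; a real client must first validate the JWT against the provider's published keys (JWKS):&lt;/em&gt;&lt;/p&gt;

```python
import base64
import json

# Read the identity claims out of a JWT's payload segment.
# Sketch only: a real client must verify the signature first.
def id_token_claims(id_token):
    payload_b64 = id_token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Fabricated token with only the payload segment filled in:
fake_payload = base64.urlsafe_b64encode(
    json.dumps({"email": "user@example.com", "sub": "12345"}).encode()
).rstrip(b"=").decode()
claims = id_token_claims("header." + fake_payload + ".signature")
```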

</description>
      <category>oauth</category>
      <category>security</category>
      <category>webdev</category>
      <category>identity</category>
    </item>
    <item>
      <title>DevOps Interview Practice #6</title>
      <dc:creator>Woobuntu</dc:creator>
      <pubDate>Sun, 15 Jun 2025 14:33:10 +0000</pubDate>
      <link>https://forem.com/woobuntu/devops-interview-practice-6-4cih</link>
      <guid>https://forem.com/woobuntu/devops-interview-practice-6-4cih</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;I'm a Korean DevOps engineer preparing for international opportunities. Since English isn’t my first language, I’ve been practicing both my language skills and technical knowledge at the same time. &lt;/p&gt;

&lt;p&gt;As part of my daily English practice, I asked ChatGPT (acting as my senior DevOps engineer) to give me one interview-style question each day. I try to answer in English based on what I know, and then improve my explanation through feedback and correction.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here’s today’s question and my answer:&lt;/p&gt;




&lt;h2&gt;
  
  
  Question
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Have you ever set up CI/CD pipelines using GitHub Actions or another tool like Jenkins or ArgoCD? If so, can you briefly describe what kind of pipeline you built?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  My First Answer (Raw)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;I build a pipelinen that adopt jenkins for CI, argoCD for CD ar my current job.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;at CI stage, when there are changes to the source code, jenkins build application image and push it to ecr. then apply it's image tag to the values.yaml file in the mirroring branch. and it has name prefixed argocd/ for original branch name. this branch is used for argocd sync.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;there are some reasons that is use this kind of branch strategy.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;since argocd is a gitops tool, it syncs kubernetes resources based on git. so a new commit which reflects new image tag is created, argocd carry out deployment based on it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;if image tag is written in original branch, there could be some difficulties for cooperation. all coworkers should pull every time even if there are no&lt;br&gt;&lt;br&gt;
meaningful changes to the source code.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;so i chose to make a dedicated branch for argocd sync. when there are changes to the source code, jenkins commits image tag generated by original commit to the dedicated branch. i make the commit message to contain original branch name and original commit message, so that which commit argocd is syncing are visible.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Refined Answer (with feedback)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;I built a CI/CD pipeline at my current job, using Jenkins for CI and ArgoCD for CD.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In the CI stage, when changes are made to the source code, Jenkins builds the application image and pushes it to ECR. Then, it applies the new image tag to a values.yaml file in a dedicated mirroring branch. This branch is prefixed with argocd/ followed by the original branch name, and is used exclusively for ArgoCD synchronization.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I chose this strategy because ArgoCD is a GitOps tool—it syncs Kubernetes resources based on the Git repository. So, whenever a new commit containing the updated image tag is created, ArgoCD triggers a deployment based on that commit.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If we were to update the image tag directly in the original development branch, it could disrupt collaboration, as developers would have to pull new commits even when there were no meaningful code changes.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To avoid that, a dedicated sync branch is automatically created by the CI pipeline. Jenkins commits the new image tag to this branch, including the original branch name and commit message in the commit. This makes it easy to trace which source commit ArgoCD is deploying.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>interview</category>
      <category>english</category>
    </item>
    <item>
      <title>DevOps Interview Practice #5</title>
      <dc:creator>Woobuntu</dc:creator>
      <pubDate>Sun, 15 Jun 2025 14:27:26 +0000</pubDate>
      <link>https://forem.com/woobuntu/devops-interview-practice-5-2cp</link>
      <guid>https://forem.com/woobuntu/devops-interview-practice-5-2cp</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;I'm a Korean DevOps engineer preparing for international opportunities. Since English isn’t my first language, I’ve been practicing both my language skills and technical knowledge at the same time. &lt;/p&gt;

&lt;p&gt;As part of my daily English practice, I asked ChatGPT (acting as my senior DevOps engineer) to give me one interview-style question each day. I try to answer in English based on what I know, and then improve my explanation through feedback and correction.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here’s today’s question and my answer:&lt;/p&gt;




&lt;h2&gt;
  
  
  Question
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Let’s say your production application is running on EKS, and suddenly, the application becomes unresponsive. No pods are restarting, and there are no obvious error logs. How would you start troubleshooting this issue?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  My First Answer (Raw)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;If there are no error logs and nothing special about metrics of resources, then there's a hig chance that traffic couldn't be routed to the application, i guess.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;in such cases, it would be appropriate to check the reason from client to application.&lt;br&gt;
First, you should check whether the domain resolves to the right ip using tools like dig or nslookup. it should point a load balancer, because it's the only way to expose application on eks.&lt;br&gt;
if the domain points right loadbalancer. then you should check if the traffic can arrive to it. there's a chance that the traffic is blocked by security group or acl.&lt;br&gt;
if you're sure that traffic can arrive to the loadbalancer, then you should check if the traffic can be forwarded from loadbalancer to a node. the security group of nodes should allow one of loadbalancer. &lt;br&gt;
then you should check if the kube proxy can forward the traffic to pods. if you are using ingress, then the traffic is first routed to ingress controller, then is routed again to the appropriate service.&lt;br&gt;
if there's no problem until here, then it must be pod's problem. since there are no problems of logs and metrics, it would be readiness problem, not liveness.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Refined Answer (with feedback)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;If there are no error logs and nothing unusual in the resource metrics, there's a high chance that the traffic is not reaching the application.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In such cases, it's appropriate to troubleshoot from the outside in — from the client to the application.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;First, I would check whether the domain resolves to the correct IP address using tools like dig or nslookup. In EKS, the domain should typically point to a load balancer, since that's the standard way to expose services externally.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If the domain is pointing to the correct load balancer, then I would verify whether traffic is actually reaching it. It's possible that the traffic is blocked by a security group or a network ACL.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If I'm sure that the traffic reaches the load balancer, the next step is to check whether the load balancer can forward the traffic to the nodes. The nodes’ security groups must allow traffic from the load balancer's security group.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;After that, I would check whether the traffic is successfully routed from the node to the pods.&lt;br&gt;
If you're using a LoadBalancer service, kube-proxy handles the routing directly to the pods. If you're using Ingress, then the traffic first goes to the Ingress Controller, which forwards it to the correct Service, and then to the pods.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If everything seems fine up to this point, then the issue may be at the pod level. Since there are no error logs and resource usage is normal, it’s more likely a readiness issue than a liveness one.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>interview</category>
      <category>english</category>
    </item>
    <item>
      <title>DevOps Interview Practice #4: What is the difference between a rolling update and a recreate strategy in Kubernetes Deployments?</title>
      <dc:creator>Woobuntu</dc:creator>
      <pubDate>Fri, 06 Jun 2025 14:17:44 +0000</pubDate>
      <link>https://forem.com/woobuntu/devops-interview-practice-4-what-is-the-difference-between-a-rolling-update-and-a-recreate-58o7</link>
      <guid>https://forem.com/woobuntu/devops-interview-practice-4-what-is-the-difference-between-a-rolling-update-and-a-recreate-58o7</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;I'm a Korean DevOps engineer preparing for international opportunities. Since English isn’t my first language, I’ve been practicing both my language skills and technical knowledge at the same time. &lt;/p&gt;

&lt;p&gt;As part of my daily English practice, I asked ChatGPT (acting as my senior DevOps engineer) to give me one interview-style question each day. I try to answer in English based on what I know, and then improve my explanation through feedback and correction.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here’s today’s question and my answer:&lt;/p&gt;




&lt;h2&gt;
  
  
  Question
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is the difference between a rolling update and a recreate strategy in Kubernetes Deployments?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  My First Answer (Raw)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Recreate is a deploymeny strategy that ensure version persistency but sacrifice availability during version update. On the other hand, Rolling Update is one that ensure availability but sacrifice version persistency during update.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Recreate strategy ends all existing pods immediately, and when it's done, then starts scale out of new replicaset.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;RollingUpdate strategy ensures availability by maxUnavailable and maxSurge. existing pods are decremended under the maxUnavailable, new pods are created under the maxSurge.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Refined Answer (with feedback)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Recreate is a deployment strategy that ensures version consistency but sacrifices availability during the update. On the other hand, RollingUpdate prioritizes availability, even if that means temporarily running different versions of the application.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;With the Recreate strategy, all existing pods are terminated first. Only after that process is complete does the new ReplicaSet begin scaling up.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;RollingUpdate maintains availability using two key parameters: &lt;code&gt;maxUnavailable&lt;/code&gt; and &lt;code&gt;maxSurge&lt;/code&gt;. Existing pods are scaled down according to the &lt;code&gt;maxUnavailable&lt;/code&gt; setting, while new pods are created within the limit defined by &lt;code&gt;maxSurge&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;
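&lt;p&gt;&lt;em&gt;In manifest form, the two strategies differ only in this fragment of the Deployment spec. The numbers below are illustrative, not recommendations:&lt;/em&gt;&lt;/p&gt;

```yaml
# Deployment .spec fragment; values are examples only.
strategy:
  type: RollingUpdate        # or: Recreate (no rollingUpdate block)
  rollingUpdate:
    maxUnavailable: 1        # at most 1 old pod down at a time
    maxSurge: 1              # at most 1 extra pod above the desired count
```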

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>english</category>
      <category>interview</category>
    </item>
    <item>
      <title>DevOps Interview Practice #3: What happens when you scale a StatefulSet from 2 replicas to 3?</title>
      <dc:creator>Woobuntu</dc:creator>
      <pubDate>Thu, 29 May 2025 13:01:22 +0000</pubDate>
      <link>https://forem.com/woobuntu/devops-interview-practice-3-what-happens-when-you-scale-a-statefulset-from-2-replicas-to-3-18di</link>
      <guid>https://forem.com/woobuntu/devops-interview-practice-3-what-happens-when-you-scale-a-statefulset-from-2-replicas-to-3-18di</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;I'm a Korean DevOps engineer preparing for international opportunities. Since English isn’t my first language, I’ve been practicing both my language skills and technical knowledge at the same time. &lt;/p&gt;

&lt;p&gt;As part of my daily English practice, I asked ChatGPT (acting as my senior DevOps engineer) to give me one interview-style question each day. I try to answer in English based on what I know, and then improve my explanation through feedback and correction.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here’s today’s question and my answer:&lt;/p&gt;




&lt;h2&gt;
  
  
  Question
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What happens when you scale a StatefulSet from 2 replicas to 3?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  My First Answer (Raw)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;When you scale out statefulset's replicas from 2 to 3, a new pod is generated with index 2. statefulset manages pods the way it can distinguish each pod by it's name and index, a newly populated pod has expectable name. &lt;br&gt;
you can access the pod with the address like 'pod name.namespace.svc cluster.local' which is offered by headless service, and this is how it can be uniquely identifiable and accesable in  a cluster. &lt;br&gt;
Also, if volumeclaimtemplates are defined, a pvc is generated with a new pod.&lt;br&gt;
If the storage class supports dynamic provisioning, condition matching pv is automatically made and bound to the pvc. If condition matching pv already exists, it could be bound.&lt;br&gt;
the pvc is uniquley linked to the pod's name so that data can be persistent even if a pod is removed and regenerated. This ensure statefulset can maintain data persistency and pod's uniqueness.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Refined Answer (with feedback)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;When you scale a StatefulSet from 2 to 3 replicas, a new pod with the index 2 is created.&lt;br&gt;
StatefulSet manages pods in a way that each one is uniquely identifiable by its name and index, so the newly created pod gets a predictable name.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can access this pod using a DNS address like pod-name.headless-service-name.namespace.svc.cluster.local, which is provided by a headless service.&lt;br&gt;
This allows the pod to be uniquely addressable within the cluster.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If volumeClaimTemplates are defined, a new PVC is automatically created for the new pod.&lt;br&gt;
If the associated StorageClass supports dynamic provisioning, a new PV that matches the claim’s requirements will be provisioned and bound automatically.&lt;br&gt;
If a matching PV already exists, it will be used instead.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Since the PVC is uniquely associated with the pod’s name, the pod can reconnect to the same volume even after it is deleted and recreated.&lt;br&gt;
This ensures data persistence and stable identity, which are key characteristics of StatefulSets.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>english</category>
      <category>interview</category>
    </item>
    <item>
      <title>DevOps Interview Practice #2: What’s the difference between a Deployment and a StatefulSet in Kubernetes?</title>
      <dc:creator>Woobuntu</dc:creator>
      <pubDate>Thu, 29 May 2025 12:58:56 +0000</pubDate>
      <link>https://forem.com/woobuntu/devops-interview-practice-2-whats-the-difference-between-a-deployment-and-a-statefulset-in-4go5</link>
      <guid>https://forem.com/woobuntu/devops-interview-practice-2-whats-the-difference-between-a-deployment-and-a-statefulset-in-4go5</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;I'm a Korean DevOps engineer preparing for international opportunities. Since English isn’t my first language, I’ve been practicing both my language skills and technical knowledge at the same time. &lt;/p&gt;

&lt;p&gt;As part of my daily English practice, I asked ChatGPT (acting as my senior DevOps engineer) to give me one interview-style question each day. I try to answer in English based on what I know, and then improve my explanation through feedback and correction.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here’s today’s question and my answer:&lt;/p&gt;




&lt;h2&gt;
  
  
  Question
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What’s the difference between a Deployment and a StatefulSet in Kubernetes?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  My First Answer (Raw)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Deployment is a controller which manages pods that don't need to be distinguishable because they are stateless. On the other hand, Statefulset is a controller which manages pods that need to be distinguishable becauae theu are stateful. sharded database is well known example for statefulset. each pod has to mount different volume so they must be distinguished.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Refined Answer (with feedback)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;A Deployment is a controller that manages pods which don’t need to be distinguishable because they are stateless.&lt;br&gt;
On the other hand, a StatefulSet is used for managing pods that must be distinguishable, since they are stateful.&lt;br&gt;
A sharded database is a well-known example of a workload suited for StatefulSets, as each pod needs to mount a different volume and maintain a unique identity.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>english</category>
      <category>interview</category>
    </item>
    <item>
      <title>DevOps Interview Practice #1: What tool do you use to check if a pod is healthy?</title>
      <dc:creator>Woobuntu</dc:creator>
      <pubDate>Thu, 29 May 2025 12:55:57 +0000</pubDate>
      <link>https://forem.com/woobuntu/devops-interview-practice-1-what-tool-do-you-use-to-check-if-a-pod-is-healthy-5an7</link>
      <guid>https://forem.com/woobuntu/devops-interview-practice-1-what-tool-do-you-use-to-check-if-a-pod-is-healthy-5an7</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;I'm a Korean DevOps engineer preparing for international opportunities. Since English isn’t my first language, I’ve been practicing both my language skills and technical knowledge at the same time. &lt;/p&gt;

&lt;p&gt;As part of my daily English practice, I asked ChatGPT (acting as my senior DevOps engineer) to give me one interview-style question each day. I try to answer in English based on what I know, and then improve my explanation through feedback and correction.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here’s today’s question and my answer:&lt;/p&gt;




&lt;h2&gt;
  
  
  Question
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What tool do you use to check if a pod is healthy?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  My First Answer (Raw)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;There are two options to check pod's health, which are readiness probe and liveness probe. First, readiness probe is literally the option that determine whether the pod can receive traffic. If it fails, the pod shouldn't receive traffic anymore, so kube proxy eliminates it's endpoint from service. Liveness probe is literally the option which check whether the pod is alive. If the pod is running but it can not behave properly for any reason, kubelet restarts it. With their nature in my mind, they should run repeateadly in a pod's lifecycle. However, It's important to note that they are mutually independent. It's possible to pass the liveness check but readiness check, or vice verca. So you should pay attention configuring liveneas check. Because it is possible for the pod can't be ready forever because the kubelet restarts the pod before readiness probe succeeds. &lt;br&gt;
You can prepare this kind of accident using startup probe. It ensures if the pod is ready to accept readiness and liveness check. if the pod have long warming up process, it's useful to configure startup probe to delay readiness and liveness check.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Refined Answer (with feedback)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;There are two main mechanisms to check a pod’s health: the readiness probe and the liveness probe.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The readiness probe determines whether a pod is ready to receive traffic.&lt;br&gt;
If it fails, the pod is removed from the service’s endpoints, so that kube-proxy no longer routes traffic to it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The liveness probe, on the other hand, checks whether the pod is still alive.&lt;br&gt;
Even if the container is technically running, the kubelet will restart it if the liveness probe fails repeatedly—assuming it's no longer functioning properly.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Because of their nature, both probes typically run repeatedly throughout a pod's lifecycle.&lt;br&gt;
It's important to note, however, that these probes operate independently:&lt;br&gt;
a pod may pass one while failing the other.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This can be problematic—especially if the liveness probe is too aggressive.&lt;br&gt;
In such cases, the pod might get restarted before it even has a chance to become ready, causing it to remain in a perpetual unready state.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To handle this, you can configure a startup probe, which is designed to delay the execution of readiness and liveness checks until the application has finished initializing.&lt;br&gt;
For pods with a long startup time, this is an effective way to prevent premature restarts and ensure proper lifecycle management.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>interview</category>
      <category>english</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
