<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Sujay Pillai</title>
    <description>The latest articles on Forem by Sujay Pillai (@sujaypillai).</description>
    <link>https://forem.com/sujaypillai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F182151%2Fe5d29e0b-7544-4e67-b85e-571eca9fe645.jpeg</url>
      <title>Forem: Sujay Pillai</title>
      <link>https://forem.com/sujaypillai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sujaypillai"/>
    <language>en</language>
    <item>
      <title>I Let Three AI Agents Argue About My Architecture — Here's What Happened</title>
      <dc:creator>Sujay Pillai</dc:creator>
      <pubDate>Wed, 18 Feb 2026 05:05:15 +0000</pubDate>
      <link>https://forem.com/sujaypillai/i-let-three-ai-agents-argue-about-my-architecture-heres-what-happened-4d3</link>
      <guid>https://forem.com/sujaypillai/i-let-three-ai-agents-argue-about-my-architecture-heres-what-happened-4d3</guid>
      <description>&lt;p&gt;If you've ever tried to find details about a specific CNCF certification exam, you know the pain. The official &lt;a href="https://www.cncf.io/training/courses/" rel="noopener noreferrer"&gt;CNCF training page&lt;/a&gt; lists dozens of courses, certifications, and workshops — all in one endless scroll. There's no way to search by exam domain, compare certification weightings side by side, or filter by difficulty level. You're left bouncing between PDFs, GitHub repos, and Linux Foundation pages just to answer a simple question like "What percentage of the CKA exam covers cluster architecture?"&lt;/p&gt;

&lt;p&gt;I wanted to fix that — build a clean, searchable hub for all 15 CNCF certification exams. But the design space was wide open. What frontend framework? What search technology? Where to host? How to keep the data in sync with the upstream &lt;a href="https://github.com/cncf/curriculum" rel="noopener noreferrer"&gt;curriculum repo&lt;/a&gt;? The kind of project where you can burn an entire day debating "static site generator or full server?" before writing a single line of code. (Spoiler: the finished site is live at &lt;a href="https://cncfexamguide.sujaypill.ai" rel="noopener noreferrer"&gt;cncfexamguide.sujaypill.ai&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;Instead, I opened a terminal, typed one prompt, and watched three AI agents debate the architecture &lt;em&gt;for&lt;/em&gt; me. Three and a half minutes later, I had a synthesized design document that was better than anything I'd have written alone — because one of the agents was specifically hired to &lt;strong&gt;tear the plan apart&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is the story of that session, and a deep dive into Claude Code's &lt;strong&gt;Agent Teams&lt;/strong&gt; feature — the experimental capability that lets you spawn multiple specialized agents that work in parallel, each with a distinct role and perspective.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenge: A CNCF Exam Search Website
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/cncf/curriculum" rel="noopener noreferrer"&gt;CNCF curriculum repository&lt;/a&gt; contains exam materials for 15 cloud-native certifications — CKA, CKAD, CKS, KCNA, and more. Each exam has PDFs, markdown READMEs listing domains and weightings, and links to study resources.&lt;/p&gt;

&lt;p&gt;The goal: a clean, searchable website where someone preparing for any CNCF exam can quickly find exam domains, compare certifications, and discover study resources. Sounds straightforward, but as noted earlier, the open questions pile up fast: frontend framework, search technology, hosting, and how much infrastructure is too much.&lt;/p&gt;

&lt;p&gt;Rather than spiraling into analysis paralysis, I decided to let Claude Code's Agent Teams feature do what it's designed for — &lt;strong&gt;run multiple perspectives in parallel and synthesize the results&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assembling the Team
&lt;/h2&gt;

&lt;p&gt;Agent Teams lets you launch specialized "teammates" directly from your prompt using the &lt;code&gt;@teammate-name&lt;/code&gt; syntax. Each teammate runs as an independent Claude Code instance with its own context, tools, and instructions. Here's the prompt I used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I'm designing a website that will run on Azure
Container Apps which will help users to search for
all the CNCF exams and its guide. The curriculum for
all the exams are found in this github repo -
https://github.com/cncf/curriculum.git
This has been cloned into current directory as
curriculum folder. Create an agent team to explore
this from different angles: one teammate on UX, one
on technical architecture, one playing devil's
advocate.

@ux-designer — Focus on information architecture,
search UX, user journeys, exam comparison features,
mobile-first, accessibility.

@architect — Focus on Azure Container Apps
deployment, frontend/backend stack, data pipeline
from GitHub, search implementation, CI/CD,
infrastructure-as-code.

@devils-advocate — Challenge every assumption.
Is ACA overkill? Are we over-engineering? Cost
analysis. Competitive landscape. What are simpler
alternatives?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0gks7lkkam5gkkdasx6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0gks7lkkam5gkkdasx6.png" alt="The prompt that started it all — one input, three agents about to spin up"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The moment I hit Enter, Claude Code spawned three independent agents — each one receiving the full project context plus their specialized role instructions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🎨 &lt;strong&gt;@ux-designer&lt;/strong&gt; — Information architecture, search patterns, user journeys, exam comparison features, mobile &amp;amp; accessibility.&lt;/li&gt;
&lt;li&gt;🏗️ &lt;strong&gt;@architect&lt;/strong&gt; — Azure Container Apps, frontend/backend stack, data pipeline, search implementation, CI/CD, IaC with Bicep.&lt;/li&gt;
&lt;li&gt;😈 &lt;strong&gt;@devils-advocate&lt;/strong&gt; — Challenge assumptions. Is ACA overkill? Cost analysis. Simpler alternatives. What could go wrong?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6tcxklt5f9zj5u3bu2t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6tcxklt5f9zj5u3bu2t.png" alt="Three agents launched in parallel with role descriptions visible"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Watching Them Work: The In-Process Display Mode
&lt;/h2&gt;

&lt;p&gt;Here's where it gets interesting. Claude Code offers two &lt;strong&gt;display modes&lt;/strong&gt; for agent teams — and the one I used, &lt;strong&gt;in-process mode&lt;/strong&gt;, keeps everything in a single terminal window.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;In-Process&lt;/th&gt;
&lt;th&gt;Split Panes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Terminal setup&lt;/td&gt;
&lt;td&gt;Single window&lt;/td&gt;
&lt;td&gt;Requires tmux or iTerm2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Navigation&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;Shift+↑/↓&lt;/code&gt; to switch agents&lt;/td&gt;
&lt;td&gt;Each agent in its own pane&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Visibility&lt;/td&gt;
&lt;td&gt;One agent at a time, status bar for all&lt;/td&gt;
&lt;td&gt;All agents visible simultaneously&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Quick sessions, any terminal&lt;/td&gt;
&lt;td&gt;Long-running parallel work&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Configuration&lt;/td&gt;
&lt;td&gt;&lt;code&gt;--teammate-mode in-process&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;--teammate-mode split-panes&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In in-process mode, a &lt;strong&gt;teammate navigation bar&lt;/strong&gt; appears at the bottom of the terminal showing all active agents. You can see each agent's name, switch between them, and monitor their progress — all without leaving your terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t4lpqnzb9t54o1khvm1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t4lpqnzb9t54o1khvm1.png" alt="All three agents working in parallel — note the teammate bar at the bottom"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Pro Tip:&lt;/strong&gt; Press &lt;code&gt;Ctrl+T&lt;/code&gt; to show all teammates and their status. Press &lt;code&gt;Shift+↑&lt;/code&gt; to expand and read a specific agent's output. The navigation bar shows token usage and elapsed time per agent.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To configure the display mode, add this to your &lt;code&gt;.claude/settings.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"teammateMode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"in-process"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or pass it as a CLI flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use in-process mode (works in any terminal)&lt;/span&gt;
claude &lt;span class="nt"&gt;--teammate-mode&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="nt"&gt;-process&lt;/span&gt;

&lt;span class="c"&gt;# Use split panes (requires tmux or iTerm2)&lt;/span&gt;
claude &lt;span class="nt"&gt;--teammate-mode&lt;/span&gt; split-panes

&lt;span class="c"&gt;# Auto-detect: split panes if in tmux, otherwise in-process&lt;/span&gt;
claude &lt;span class="nt"&gt;--teammate-mode&lt;/span&gt; auto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Three Minutes, Three Perspectives
&lt;/h2&gt;

&lt;p&gt;While I watched, each agent dove deep into the problem space. They independently read through the CNCF curriculum repository, analyzed the 15 exam PDFs, and produced their recommendations — all running in parallel.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;0:00 – 0:30&lt;/strong&gt; — Agents spawn and begin reading the CNCF curriculum repository. Each one scans all 15 exam directories, PDFs, and markdown files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;0:30 – 2:00&lt;/strong&gt; — Parallel deep analysis. The UX designer maps user journeys. The architect designs infrastructure. The devil's advocate researches alternatives.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2:00 – 3:00&lt;/strong&gt; — Agents complete their analysis and report back to the main agent with detailed findings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3:00 – 3:38&lt;/strong&gt; — Main agent synthesizes all three reports into a unified design document.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🏗️ The Architect's Blueprint
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;@architect&lt;/code&gt; agent came back with a comprehensive infrastructure plan:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6a1l6ye5bcb2cyti9z2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6a1l6ye5bcb2cyti9z2o.png" alt="The architect delivers: data pipeline, search strategy, Azure Container Apps setup, and a cost estimate"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Key recommendations from the architect:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Pipeline:&lt;/strong&gt; GitHub Actions ETL process, curriculum repo as a git submodule, Node.js scripts for markdown + PDF extraction, output as static JSON API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search:&lt;/strong&gt; &lt;a href="https://pagefind.app" rel="noopener noreferrer"&gt;Pagefind&lt;/a&gt; for client-side search — free, zero-latency, sub-100ms results&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure:&lt;/strong&gt; Azure Container Apps with scale-to-zero, Azure Front Door CDN, Bicep IaC&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; ~$52-60/month at low traffic, ~$300/month at 10K concurrent users&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Roadmap:&lt;/strong&gt; MVP in 2-3 weeks, enhancements in 1-2 weeks, optimization in 1 week&lt;/li&gt;
&lt;/ul&gt;
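The data-pipeline step above can be sketched in a few lines of Node.js. To be clear, this is a hypothetical illustration rather than the script from the session: the parser assumes domains appear as "Heading - NN%" lines, and the real curriculum READMEs vary in format per exam.

```javascript
// Hypothetical sketch of the ETL step: parse "Domain - NN%" lines
// from a CNCF exam README into a static JSON record. The input
// format is an assumption; real curriculum files vary per exam.
function parseDomains(markdown) {
  const domains = [];
  for (const line of markdown.split("\n")) {
    // Match lines like "## Troubleshooting - 30%"
    const m = line.match(/^#*\s*(.+?)\s*-\s*(\d+)%\s*$/);
    if (m) domains.push({ domain: m[1], weight: Number(m[2]) });
  }
  return domains;
}

const sample = [
  "# CKA Curriculum",
  "## Cluster Architecture, Installation and Configuration - 25%",
  "## Workloads and Scheduling - 15%",
].join("\n");

// Emit the static JSON the site would serve
console.log(JSON.stringify(parseDomains(sample), null, 2));
```

A GitHub Actions job would run a script like this against the submoduled curriculum repo and commit the resulting JSON alongside the site source.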

&lt;h3&gt;
  
  
  😈 The Devil's Advocate Strikes
&lt;/h3&gt;

&lt;p&gt;And then there was the &lt;code&gt;@devils-advocate&lt;/code&gt;. This is where things got &lt;em&gt;really&lt;/em&gt; interesting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzgaaeiou7a8t9qzum46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzgaaeiou7a8t9qzum46.png" alt="The devil's advocate delivering critical analysis"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The devil's advocate didn't just nitpick — it fundamentally challenged the core assumptions:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🔥 "Is Azure Container Apps overkill for a site that's essentially serving static content? You're paying for container orchestration to serve files that change once a quarter."&lt;/p&gt;

&lt;p&gt;"Why build a custom search solution when the data fits in a single JSON file? Pagefind works, but have you considered that a simple &lt;code&gt;ctrl+F&lt;/code&gt; on a well-structured page might be enough for an MVP?"&lt;/p&gt;

&lt;p&gt;"The CNCF already has a certifications page. What's your differentiation? If it's just 'better search,' that's a feature, not a product."&lt;/p&gt;
&lt;/blockquote&gt;
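The point about dataset size is easy to sanity-check: with only 15 exams, even a naive in-memory filter is effectively instant, no search service required. A minimal sketch, with an assumed data shape:

```javascript
// Naive client-side search over a small exam dataset. With ~15
// exams, a linear scan is effectively instant, which is the
// devil's advocate's point. The data shape here is an assumption.
const exams = [
  { code: "CKA", domains: ["Cluster Architecture", "Troubleshooting"] },
  { code: "CKAD", domains: ["Application Design", "Services and Networking"] },
];

function searchExams(query) {
  const q = query.toLowerCase();
  return exams.filter(
    (e) =>
      e.code.toLowerCase().includes(q) ||
      e.domains.some((d) => d.toLowerCase().includes(q))
  );
}

console.log(searchExams("troubleshoot").map((e) => e.code)); // [ 'CKA' ]
```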

&lt;h2&gt;
  
  
  The Synthesis: Where the Magic Happens
&lt;/h2&gt;

&lt;p&gt;Once all three agents reported back, the main agent — the orchestrator — did something I didn't expect. It didn't just concatenate the three reports. It &lt;strong&gt;synthesized&lt;/strong&gt; them, identifying consensus points, resolving debates, and producing a unified design document that was stronger than any individual report.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5umudv11kkih1ldn7ucv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5umudv11kkih1ldn7ucv.png" alt="The main agent synthesizing findings from all three teammates"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The synthesis document had clear sections: &lt;strong&gt;Consensus Points&lt;/strong&gt; (where all three agents agreed), &lt;strong&gt;Key Debates&lt;/strong&gt; (where they disagreed and how the debates were resolved), and a &lt;strong&gt;Recommended Approach&lt;/strong&gt; that incorporated the best ideas from each perspective.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Plot Twist: What the Devil's Advocate Changed
&lt;/h3&gt;

&lt;p&gt;This was my favorite part of the entire session. The synthesis included a dedicated section titled &lt;strong&gt;"What the Devil's Advocate Changed"&lt;/strong&gt; — a list of concrete ways the critical review &lt;em&gt;actually improved&lt;/em&gt; the final design:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1lpx0qkqjkbd4g8k2m9d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1lpx0qkqjkbd4g8k2m9d.png" alt="The payoff — the devil's advocate made the design genuinely better"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key changes driven by the devil's advocate:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Static generation over SSR&lt;/strong&gt; — Why run a server when the data changes quarterly? The final design uses Astro as a static site generator instead of a full server-side rendered app.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pagefind over Azure AI Search for MVP&lt;/strong&gt; — The architect initially suggested an upgrade path to Azure AI Search. The devil's advocate argued that client-side search with Pagefind handles the dataset size (15 exams) perfectly, saving ~$200/month.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scale-to-zero validated&lt;/strong&gt; — The devil's advocate confirmed that Azure Container Apps' scale-to-zero was appropriate here, but pushed for a cost ceiling and monitoring alerts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Differentiation defined&lt;/strong&gt; — Forced the team to articulate why this site needs to exist beyond the CNCF's own page: structured comparison, cross-exam search, and study path recommendations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the real power of agent teams. It's not just about parallelism — it's about &lt;strong&gt;productive disagreement&lt;/strong&gt;. The devil's advocate made the architect's plan cheaper, simpler, and more focused. Without it, we'd have an over-engineered system solving the wrong problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Watch the Full Demo
&lt;/h2&gt;

&lt;p&gt;Here's the complete 3.5-minute recording of the session described above — from the initial prompt to the final synthesis. Watch three agents analyze, debate, and converge on a design in real time:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/yXgHI6qg7JU"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with Agent Teams
&lt;/h2&gt;

&lt;p&gt;Agent Teams is currently an experimental feature in Claude Code. Here's how to enable it and start your first team:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Enable the Feature
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Set the environment variable to enable agent teams&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Launch Teammates from Your Prompt
&lt;/h3&gt;

&lt;p&gt;Use the &lt;code&gt;@teammate-name&lt;/code&gt; syntax directly in your prompt. Give each teammate a clear role and specific focus areas:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Refactor the authentication module.

@security-reviewer — Audit the current auth flow for
vulnerabilities. Check token handling, session management, CSRF.

@implementer — Refactor the code to use JWT with refresh
tokens. Update all middleware and route handlers.

@tester — Write comprehensive tests for the new auth flow.
Cover edge cases: expired tokens, concurrent sessions, revocation.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Choose Your Display Mode
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Option A: In-process (default, works everywhere)&lt;/span&gt;
claude &lt;span class="nt"&gt;--teammate-mode&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="nt"&gt;-process&lt;/span&gt;

&lt;span class="c"&gt;# Option B: Split panes (tmux or iTerm2 required)&lt;/span&gt;
claude &lt;span class="nt"&gt;--teammate-mode&lt;/span&gt; split-panes

&lt;span class="c"&gt;# Option C: Auto-detect (recommended)&lt;/span&gt;
claude &lt;span class="nt"&gt;--teammate-mode&lt;/span&gt; auto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Control Your Team
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check status of all teammates&lt;/span&gt;
/teammate status

&lt;span class="c"&gt;# Add a new teammate mid-session&lt;/span&gt;
/teammate add

&lt;span class="c"&gt;# Remove a specific teammate&lt;/span&gt;
/teammate remove security-reviewer

&lt;span class="c"&gt;# Emergency stop: press Escape to halt all agents&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Best Practices
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Limit to 3-5 teammates&lt;/strong&gt; — More agents means more context and higher costs. Keep teams focused.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Give distinct roles&lt;/strong&gt; — Overlapping responsibilities lead to redundant work. Each agent should have a clear lane.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Always include a critic&lt;/strong&gt; — A devil's advocate or reviewer agent consistently improves outcomes by challenging assumptions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use for parallel independent tasks&lt;/strong&gt; — Agent teams shine when work can be divided without constant coordination.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Takeaway
&lt;/h2&gt;

&lt;p&gt;What struck me most about this session wasn't the speed — though getting a comprehensive design in 3.5 minutes is remarkable. It was the &lt;strong&gt;quality of the disagreement&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In a real team, getting a UX designer, a systems architect, and a skeptical reviewer to sit in the same room and hash out a design takes days of calendar wrangling. The feedback loops are slow. The politics are real. And nobody wants to be the one who says "this is over-engineered" in front of the person who designed it.&lt;/p&gt;

&lt;p&gt;Agent teams compress all of that into minutes. The devil's advocate has no ego to protect. The architect doesn't take the criticism personally. The UX designer's input is weighted equally. And the synthesis agent — the orchestrator — has the superhuman ability to hold all three perspectives in memory simultaneously and find the intersection.&lt;/p&gt;

&lt;p&gt;Is this a replacement for real human collaboration? No. But as a &lt;strong&gt;first pass&lt;/strong&gt; — a way to rapidly explore a design space, surface assumptions, and identify the real decisions that need human judgment — it's the most productive 3.5 minutes I've spent on architecture in a long time. And the proof is in the result — the design that came out of this session is now live at &lt;a href="https://cncfexamguide.sujaypill.ai" rel="noopener noreferrer"&gt;cncfexamguide.sujaypill.ai&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://code.claude.com/docs/en/agent-teams" rel="noopener noreferrer"&gt;Claude Code Agent Teams Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://code.claude.com/docs/en/agent-teams#choose-a-display-mode" rel="noopener noreferrer"&gt;Display Mode Configuration Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cncf/curriculum" rel="noopener noreferrer"&gt;CNCF Curriculum Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cncfexamguide.sujaypill.ai" rel="noopener noreferrer"&gt;CNCF Exam Guide — The finished site built from this session&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>claudecode</category>
      <category>ai</category>
      <category>kubernetes</category>
      <category>architecture</category>
    </item>
    <item>
      <title>docker init - create docker related assets</title>
      <dc:creator>Sujay Pillai</dc:creator>
      <pubDate>Mon, 15 May 2023 16:28:43 +0000</pubDate>
      <link>https://forem.com/docker/docker-init-create-docker-related-assets-1akh</link>
      <guid>https://forem.com/docker/docker-init-create-docker-related-assets-1akh</guid>
      <description>&lt;p&gt;One of the latest features that got released with &lt;a href="https://docs.docker.com/desktop/release-notes/#4180" rel="noopener noreferrer"&gt;Docker Desktop 4.18.0&lt;/a&gt; is &lt;code&gt;docker init&lt;/code&gt; which helps to create docker-related assets in the project folder. This post will explain how to utilize the new command in a &lt;code&gt;Next.js&lt;/code&gt; project for generating a &lt;code&gt;Dockerfile&lt;/code&gt; and &lt;code&gt;docker-compose.yml&lt;/code&gt; file.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://nextjs.org" rel="noopener noreferrer"&gt;&lt;code&gt;Next.js&lt;/code&gt;&lt;/a&gt; is a popular choice for building server-side rendered React applications. Its features such as automatic code splitting, server-side rendering, and static site generation have made it a powerful tool for building performant web applications.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;One of the prerequisites for working with Next.js is having &lt;code&gt;Node.js&lt;/code&gt; installed in your development environment. Let's see how we can scaffold the project without installing Node locally, by making use of the docker image &lt;code&gt;node:18.15.0&lt;/code&gt; instead.&lt;/p&gt;

&lt;p&gt;To create a Next.js app, open your terminal, cd into the directory you’d like to create the app in, and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -v $(pwd):/nextjsapp node:18.15.0 \
     npx create-next-app nextjsapp --use-npm \
     --example "https://github.com/vercel/next-learn/tree/master/basics/learn-starter"

npm WARN exec The following package was not found and will be installed: create-next-app@13.3.0
Creating a new Next.js app in /nextjsapp.

Downloading files from repo https://github.com/vercel/next-learn/tree/master/basics/learn-starter. This might take a moment.

Installing packages. This might take a couple of minutes.


added 20 packages, and audited 21 packages in 8s

3 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities

Success! Created nextjsapp at /nextjsapp
Inside that directory, you can run several commands:

  npm run dev
    Starts the development server.

  npm run build
    Builds the app for production.

  npm start
    Runs the built app in production mode.

We suggest that you begin by typing:

  cd nextjsapp
  npm run dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let us break the above command into smaller chunks to understand what it is doing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Since I do not have Node.js installed on my local system, I am using the docker image &lt;code&gt;node:18.15.0&lt;/code&gt; to run a temporary container in which to scaffold the project.&lt;/li&gt;
&lt;li&gt;To persist the generated project on the host filesystem, the current working directory is mounted into the container with the parameter &lt;code&gt;-v $(pwd):/nextjsapp&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Once the container exits, the directory from which you executed the above command will contain the files below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tree -L 1
.
├── README.md
├── node_modules
├── package-lock.json
├── package.json
├── pages
├── public
└── styles
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;npx create-next-app nextjsapp --use-npm --example "https://github.com/vercel/next-learn/tree/master/basics/learn-starter"&lt;/code&gt; overrides the default &lt;code&gt;CMD&lt;/code&gt; of the base image, so the container runs the scaffolding command instead. &lt;code&gt;npx&lt;/code&gt; is a CLI tool that downloads and executes packages hosted in the npm registry.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, with a single &lt;code&gt;docker init&lt;/code&gt; command, you can generate a &lt;code&gt;Dockerfile&lt;/code&gt;, &lt;code&gt;compose.yaml&lt;/code&gt; &amp;amp; &lt;code&gt;.dockerignore&lt;/code&gt; file based on the framework used in your project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker init

Welcome to the Docker Init CLI!

This utility will walk you through creating the following files with sensible defaults for your project:
  - .dockerignore
  - Dockerfile
  - compose.yaml

Let's get started!

? What application platform does your project use? Node
? What version of Node do you want to use? 18.15.0
? Which package manager do you want to use? npm
? Do you want to run "npm run build" before starting your server? Yes
? What directory is your build output to? (comma-separate if multiple) .next
? What command do you want to use to start the app? npm start
? What port does your server listen on? 3000

CREATED: .dockerignore
CREATED: Dockerfile
CREATED: compose.yaml

✔ Your Docker files are ready!

Take a moment to review them and tailor them to your application.

When you're ready, start your application by running: docker compose up --build

Your application will be available at http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upon successful execution of the above command, you will see that your Next.js project now has Docker support in its project structure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsey4ewlzm2hzhcqrz9a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsey4ewlzm2hzhcqrz9a.png" alt="Image description" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker init&lt;/code&gt; thus makes it easy to add Docker support to an existing project without writing the &lt;code&gt;Dockerfile&lt;/code&gt; and Compose file from scratch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;docker init is currently in beta; do not use it in production environments.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>javascript</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Ingest AWS CloudTrail Logs to Microsoft Sentinel</title>
      <dc:creator>Sujay Pillai</dc:creator>
      <pubDate>Wed, 06 Apr 2022 17:46:36 +0000</pubDate>
      <link>https://forem.com/aws-builders/ingest-aws-cloudtrail-logs-to-microsoft-sentinel-2jmn</link>
      <guid>https://forem.com/aws-builders/ingest-aws-cloudtrail-logs-to-microsoft-sentinel-2jmn</guid>
      <description>&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/sentinel/" rel="noopener noreferrer"&gt;Microsoft Sentinel&lt;/a&gt; is a Cloud Native &lt;em&gt;security information and event management&lt;/em&gt; (SIEM) and &lt;em&gt;security orchestration, automation, and response&lt;/em&gt; (SOAR) solution with built-in AI for analytics. It removes the cost and complexity of achieving the central and focused near real time view of the active threats in your environment.&lt;/p&gt;

&lt;p&gt;The Data connectors page, accessible from the Microsoft Sentinel navigation menu, shows the full list of connectors that Microsoft Sentinel provides, and their status. We will use the &lt;code&gt;Amazon Web Services S3&lt;/code&gt; connector to pull AWS CloudTrail logs into Microsoft Sentinel. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe38ddjb9vnk56n5nt489.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe38ddjb9vnk56n5nt489.png" alt="Microsoft Sentinel Connector" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this connector to work we need to grant Microsoft Sentinel access to the AWS CloudTrail logs that we configured previously. Setting up this connector establishes a trust relationship between Amazon Web Services and Microsoft Sentinel. This is achieved by creating a role that grants Microsoft Sentinel permission to access the CloudTrail logs.&lt;/p&gt;

&lt;p&gt;In the previous &lt;a href="https://dev.to/aws-builders/configuring-amazon-sqs-queues-using-terraform-9g2"&gt;blog&lt;/a&gt; we already created that role with the necessary permissions to access CloudTrail logs. &lt;/p&gt;

&lt;p&gt;The Role ARN and SQS queue URL from the Terraform output will come in handy for the connector configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Changes to Outputs:
  + sentinelrole = "arn:aws:iam::123456789012:role/AzureSentinelRole"
  + sqsurl       = "https://sqs.ap-southeast-1.amazonaws.com/123456789012/awscbcloudtrailqueue"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On the Microsoft Sentinel blade navigate to Data connectors. Select &lt;strong&gt;&lt;code&gt;Amazon Web Services S3&lt;/code&gt;&lt;/strong&gt; and in the details page click on &lt;strong&gt;&lt;code&gt;Open connector page&lt;/code&gt;&lt;/strong&gt; to configure the connector.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3r7aiti13ojy9ap7841.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3r7aiti13ojy9ap7841.png" alt="S3 Connector" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ROLE ARN&lt;/td&gt;
&lt;td&gt;arn:aws:iam::123456789012:role/AzureSentinelRole&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SQS URL&lt;/td&gt;
&lt;td&gt;&lt;a href="https://sqs.ap-southeast-1.amazonaws.com/123456789012/awscbcloudtrailqueue" rel="noopener noreferrer"&gt;https://sqs.ap-southeast-1.amazonaws.com/123456789012/awscbcloudtrailqueue&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Terraform code for automating the whole setup on the AWS side can be found &lt;a href="https://github.com/sujaypillai/awscb001" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can check the status of the connector from the connector page as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn09bt8lzc3j9zx3l797v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn09bt8lzc3j9zx3l797v.png" alt="Connector Status" width="413" height="647"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;&lt;code&gt;AWSCloudTrail&lt;/code&gt;&lt;/strong&gt; or navigate to the Log Analytics workspace to see the CloudTrail logs from your AWS account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnewdkado7jz0bopj6znp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnewdkado7jz0bopj6znp.png" alt="CloudTrail query" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On successful connection Microsoft Sentinel creates a table called &lt;strong&gt;&lt;code&gt;AWSCloudTrail&lt;/code&gt;&lt;/strong&gt; with the columns documented &lt;a href="https://docs.microsoft.com/en-us/azure/azure-monitor/reference/tables/AWSCloudTrail" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can write custom queries using the &lt;a href="https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/" rel="noopener noreferrer"&gt;Kusto Query Language (KQL)&lt;/a&gt; on top of this data and return results as shown below:&lt;/p&gt;
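
&lt;p&gt;As an illustration, a query along these lines (a hypothetical example; adjust the time range and fields to your needs) summarizes the most frequent CloudTrail event names seen over the past day:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSCloudTrail
| where TimeGenerated &amp;gt; ago(1d)
| summarize Count = count() by EventName
| top 10 by Count desc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;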

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wcysymgm91ijl6ej6sy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wcysymgm91ijl6ej6sy.png" alt="KustoQuery" width="614" height="774"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Microsoft Sentinel allows you to create custom workbooks across your data, and also comes with built-in workbook templates that let you quickly gain insights across your data. One such workbook is the &lt;code&gt;AWS S3 Workbook&lt;/code&gt; built by the Microsoft Sentinel community.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwamgnw2tcbzk66hehqgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwamgnw2tcbzk66hehqgo.png" alt="SentinelWorkbook" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fja0q3g026lvglnwklgyu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fja0q3g026lvglnwklgyu.png" alt="SentinelWorkbook1" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;SentinelHealth&lt;/strong&gt; data table provides insight into health drifts, such as the latest failure events per connector or connectors whose state changed from success to failure, which you can use to create alerts and other automated actions.&lt;/p&gt;
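
&lt;p&gt;For example, a KQL query along these lines (illustrative only; column values may differ in your workspace) surfaces the most recent failure per resource over the last three days:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SentinelHealth
| where TimeGenerated &amp;gt; ago(3d)
| where Status == "Failure"
| summarize LastFailure = max(TimeGenerated) by SentinelResourceName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;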

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxif4q2rexl3tmr7hev7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxif4q2rexl3tmr7hev7.png" alt="SentinelHealth" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Configuring Amazon SQS queues using terraform</title>
      <dc:creator>Sujay Pillai</dc:creator>
      <pubDate>Thu, 31 Mar 2022 01:54:05 +0000</pubDate>
      <link>https://forem.com/aws-builders/configuring-amazon-sqs-queues-using-terraform-9g2</link>
      <guid>https://forem.com/aws-builders/configuring-amazon-sqs-queues-using-terraform-9g2</guid>
      <description>&lt;p&gt;&lt;a href="https://aws.amazon.com/sqs/" rel="noopener noreferrer"&gt;Amazon SQS&lt;/a&gt; is a lightweight, fully-managed message queuing service. We can use SQS to decouple and scale microservices, &lt;br&gt;
serverless applications, and distributed systems. &lt;br&gt;
SQS makes it easy to store, receive, and send messages between software components.&lt;/p&gt;

&lt;p&gt;In this blog you will see how we can configure an S3 bucket as an event source for an SQS queue to be consumed by &lt;a href="https://docs.microsoft.com/en-us/azure/sentinel/overview" rel="noopener noreferrer"&gt;Microsoft Sentinel&lt;/a&gt;, a scalable, cloud-native security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution. In our case we will showcase how to use SQS to push all the CloudTrail data generated in our account to Microsoft Sentinel, thereby establishing communication between two major cloud providers.&lt;/p&gt;

&lt;p&gt;For this to happen we will need an &lt;strong&gt;IAM assumed role&lt;/strong&gt; with the necessary permissions to grant Microsoft Sentinel access to your CloudTrail logs stored in the S3 bucket and to the messages generated in SQS as a result of object creation in the bucket.&lt;/p&gt;

&lt;p&gt;Resource: &lt;code&gt;aws_iam_role&lt;/code&gt; is used to create an assumed role &lt;code&gt;AzureSentinelRole&lt;/code&gt; to grant permissions to your Microsoft Sentinel account (ExternalId) to access your AWS resources. We also need to attach appropriate IAM permissions policies to grant Microsoft Sentinel access to the appropriate resources such as S3 bucket, SQS etc.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_iam_policy_document" "assume_role" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::197857026523:root"]
    }
    condition {
      test     = "StringEquals"
      variable = "sts:ExternalId"
      values   = ["65d3595c-c730-4a11-5e37-5115bae05e5e"]
    }
  }
}

resource "aws_iam_role" "this" {
  name                  = "AzureSentinelRole"
  description           = "Azure Sentinel Integration"
  assume_role_policy    = data.aws_iam_policy_document.assume_role.json
  managed_policy_arns = [
    "arn:aws:iam::aws:policy/AmazonSQSReadOnlyAccess",
    "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    "arn:aws:iam::aws:policy/service-role/AWSLambdaSQSQueueExecutionRole"
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;65d3595c-c730-4a11-5e37-5115bae05e5e&lt;/code&gt; : &lt;a href="https://docs.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-workspace-overview" rel="noopener noreferrer"&gt;Log Analytics workspace&lt;/a&gt; id &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;197857026523&lt;/code&gt; : Microsoft Sentinel's service account ID for AWS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AmazonSQSReadOnlyAccess, AWSLambdaSQSQueueExecutionRole, AmazonS3ReadOnlyAccess permission policies attached to the Sentinel role.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Resource: &lt;code&gt;aws_sqs_queue&lt;/code&gt; is used to create the SQS queue named &lt;code&gt;awscbcloudtrailqueue&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;Resource: &lt;code&gt;aws_sqs_queue_policy&lt;/code&gt; is used to create SQS Policy that grants &lt;code&gt;AzureSentinelRole&lt;/code&gt; necessary permission to carry out required actions on the newly created SQS queue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_sqs_queue" "sqs_queue" {
  name                      = var.trailQueueName
  delay_seconds             = 90
  max_message_size          = 2048
  message_retention_seconds = 86400
  receive_wait_time_seconds = 10
  kms_master_key_id         = aws_kms_key.primary.arn

  depends_on = [
    aws_s3_bucket.cloudtrailbucket,
    aws_kms_key.primary
  ]
}

resource "aws_sqs_queue_policy" "sqs_queue_policy" {
  queue_url = aws_sqs_queue.sqs_queue.id
  policy    = &amp;lt;&amp;lt;POLICY
{
  "Version": "2012-10-17",
  "Id": "sqspolicy",
  "Statement": [
    {
      "Sid": "CloudTrailSQS",
      "Effect": "Allow",
      "Principal": {
          "Service": "s3.amazonaws.com"
      },
      "Action": [
          "SQS:SendMessage"
      ],
      "Resource": "${aws_sqs_queue.sqs_queue.arn}",
      "Condition": {
          "ArnLike": {
              "aws:SourceArn": "${aws_s3_bucket.cloudtrailbucket.arn}"
          },
          "StringEquals": {
              "aws:SourceAccount": "${data.aws_caller_identity.current.account_id}"
          }
      }
    },
    {
      "Sid": "CloudTrailSQS",
      "Effect": "Allow",
      "Principal": {
           "AWS": "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/AzureSentinelRole"
      },
      "Action": [
        "SQS:ChangeMessageVisibility",
        "SQS:DeleteMessage",
        "SQS:ReceiveMessage",
        "SQS:GetQueueUrl"
      ],
      "Resource": "${aws_sqs_queue.sqs_queue.arn}" 
    }
  ]
}
POLICY
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jnluq2quut2utmymbg7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jnluq2quut2utmymbg7.png" alt="SQS Queue access policy" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to configure the CloudTrail S3 bucket &lt;code&gt;awscbcloudtrail&lt;/code&gt; to send notifications to the SQS queue when an object is created in it. &lt;/p&gt;

&lt;p&gt;Resource: &lt;code&gt;aws_s3_bucket_notification&lt;/code&gt; is used to create a notification named &lt;code&gt;awscbtrail-log-event&lt;/code&gt; on the bucket &lt;code&gt;awscbcloudtrail&lt;/code&gt; with the destination as the SQS queue we created above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.cloudtrailbucket.id
  queue {
    id        = "${var.trailName}-log-event"
    queue_arn = aws_sqs_queue.sqs_queue.arn
    events    = ["s3:ObjectCreated:*"]
  }
  depends_on = [
    aws_sqs_queue.sqs_queue
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcrdv4wc1tgwjh8m4j7k5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcrdv4wc1tgwjh8m4j7k5.png" alt="s3 bucket notification" width="800" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the S3 bucket notification is in place and the proper permissions are set, we will see messages arriving in the queue. The screenshot below shows the queue after receiving one message:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhaybhjojc8el6no5atkj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhaybhjojc8el6no5atkj.png" alt="SQS Queue receiving message" width="800" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's expose the URL of the SQS queue and the ARN of the Sentinel role that we created above as Terraform outputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "sentinelrole" {
    value = aws_iam_role.this.arn
}

output "sqsurl" {
  value = aws_sqs_queue.sqs_queue.url
}

....
Changes to Outputs:
  + sentinelrole = "arn:aws:iam::123456789012:role/AzureSentinelRole"
  + sqsurl       = "https://sqs.ap-southeast-1.amazonaws.com/123456789012/awscbcloudtrailqueue"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Source code for the above setup is available &lt;a href="https://github.com/sujaypillai/awscb001/tree/sqs" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the next blog we will see how to connect Microsoft Sentinel to your AWS account to consume the messages created in the SQS queue, allowing us to ingest the CloudTrail data into Azure.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Encrypt CloudTrail logs with multi-region Key with Terraform</title>
      <dc:creator>Sujay Pillai</dc:creator>
      <pubDate>Wed, 23 Mar 2022 02:20:03 +0000</pubDate>
      <link>https://forem.com/aws-builders/encrypt-cloudtrail-logs-with-multi-region-key-with-terraform-1hln</link>
      <guid>https://forem.com/aws-builders/encrypt-cloudtrail-logs-with-multi-region-key-with-terraform-1hln</guid>
      <description>&lt;p&gt;&lt;strong&gt;Who did what, where and when?&lt;/strong&gt; in my AWS account(s) through the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs is what AWS CloudTrail answerable to you as account owner. It enables auditing, security monitoring, operational troubleshooting, records user activity and API usage across AWS services as Events. These events can be viewed from the &lt;code&gt;Event History&lt;/code&gt; page in the AWS CloudTrail console and are available for up to 90 days after they occur. &lt;/p&gt;

&lt;p&gt;CloudTrail records three types of events: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Management events&lt;/em&gt;&lt;/strong&gt; capturing control plane actions on resources such as creating or deleting Amazon Simple Storage Service (Amazon S3) buckets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Data events&lt;/em&gt;&lt;/strong&gt; capturing data plane actions within a resource, such as reading or writing an Amazon S3 object.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Insight events&lt;/em&gt;&lt;/strong&gt; capturing unusual API activity in your AWS account compared to historical API usage.&lt;/li&gt;
&lt;/ol&gt;
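
&lt;p&gt;Besides the console, the same event history can be queried from the AWS CLI. For example, the following command (illustrative; adjust the filter to the events you care about) returns the five most recent events matching an event name:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=CreateBucket \
    --max-results 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;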

&lt;p&gt;By default, CloudTrail event log files are encrypted using Amazon S3 server-side encryption (SSE). You can also choose to encrypt your log files with an AWS KMS key. In the previous &lt;a href="https://dev.to/aws-builders/creating-a-multi-region-key-using-terraform-51o4"&gt;blog&lt;/a&gt; we saw how to build a multi-region key (MRK) using Terraform. Here we will use the same MRK to encrypt the CloudTrail log files and store them in an S3 bucket.&lt;/p&gt;

&lt;p&gt;Resource: &lt;code&gt;aws_cloudtrail&lt;/code&gt; is used to create a trail for your account/organization.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudtrail" "default" {
  name                          = var.trailName
  s3_bucket_name                = var.trailBucket
  is_organization_trail         = true
  is_multi_region_trail         = true
  include_global_service_events = true
  kms_key_id                    = aws_kms_key.primary.arn
  depends_on = [
    aws_s3_bucket.cloudtrailbucket,
    aws_kms_key.primary
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, we are enabling the trail at the organization level and across multiple regions, though you can also do this for a single region and a single account.&lt;/p&gt;

&lt;p&gt;For CloudTrail to write the log files it needs an S3 bucket, which is why the resource block has a &lt;code&gt;depends_on&lt;/code&gt; clause to create the S3 bucket first.&lt;/p&gt;

&lt;p&gt;Resource: &lt;code&gt;aws_s3_bucket&lt;/code&gt; is used to create an S3 bucket using terraform.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "cloudtrailbucket" {
  bucket = var.trailBucket
  depends_on = [
    aws_kms_key.primary
  ]
  force_destroy = true
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.primary.id
        sse_algorithm     = "aws:kms"
      }
      bucket_key_enabled = "false"
    }
  }
  object_lock_configuration {
    object_lock_enabled = "Enabled"
  }

  policy = &amp;lt;&amp;lt;POLICY
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {
              "Service": "cloudtrail.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": [
                "arn:aws:s3:::${var.trailBucket}"
            ]
        },
        {
            "Sid": "AWSCloudTrailWriteAccount",
            "Effect": "Allow",
            "Principal": {
              "Service": "cloudtrail.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::${var.trailBucket}/AWSLogs/${data.aws_caller_identity.current.account_id}/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control",
                    "AWS:SourceArn" : "arn:aws:cloudtrail:ap-southeast-1:${data.aws_caller_identity.current.account_id}:trail/${var.trailName}"
                }
            }
        },
        {
            "Sid" : "AWSCloudTrailWriteOrganization",
            "Effect" : "Allow",
            "Principal" : {
                "Service" : "cloudtrail.amazonaws.com"
            },
            "Action" : "s3:PutObject",
            "Resource" : "arn:aws:s3:::${var.trailBucket}/AWSLogs/${data.aws_organizations_organization.myorg.id}/*",
            "Condition" : {
                "StringEquals" : {
                    "s3:x-amz-acl" : "bucket-owner-full-control",
                    "AWS:SourceArn" : "arn:aws:cloudtrail:ap-southeast-1:${data.aws_caller_identity.current.account_id}:trail/${var.trailName}"
                }
            }
        }
    ]
}
POLICY
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you add only the first statement to the bucket policy and configure CloudTrail to write to this bucket, you can see how CloudTrail logs themselves surface valuable diagnostic information. The screenshot below shows the CloudTrail service reporting an &lt;code&gt;InsufficientS3BucketPolicyException&lt;/code&gt; error while trying to create the trail.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6jin5zbuju1fywin1wc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6jin5zbuju1fywin1wc.png" alt="InsufficientBucketPolicyException" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CloudTrail also needs explicit permission to use the KMS key to encrypt logs on behalf of specific accounts. If the KMS key policy is not correctly configured for CloudTrail, CloudTrail cannot deliver logs. The IAM global condition key &lt;code&gt;aws:SourceArn&lt;/code&gt; helps ensure that CloudTrail uses the KMS key only for the specific organization trail we are configuring. We have to update the policy of the KMS key we previously created with the statement below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  statement {
    sid       = "Allow CloudTrail to encrypt logs"
    effect    = "Allow"
    actions   = ["kms:GenerateDataKey*"]
    resources = ["*"]
    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
    condition {
      test     = "StringLike"
      variable = "kms:EncryptionContext:aws:cloudtrail:arn"
      values   = ["arn:aws:cloudtrail:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:trail/${var.trailName}"]
    }
    condition {
      test     = "StringEquals"
      variable = "aws:SourceArn"
      values   = ["arn:aws:cloudtrail:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:trail/${var.trailName}"]
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you run &lt;code&gt;terraform apply&lt;/code&gt; with all of the above configuration, the trail will be created as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94nziz0kreg6vksekn46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94nziz0kreg6vksekn46.png" alt="EncryptedTrail" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Terraform source code for the above setup can be found &lt;a href="https://github.com/sujaypillai/awscb001/tree/cloudtrail" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudtrail</category>
      <category>encrypted</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Creating a multi-region key using terraform</title>
      <dc:creator>Sujay Pillai</dc:creator>
      <pubDate>Thu, 17 Mar 2022 18:41:41 +0000</pubDate>
      <link>https://forem.com/aws-builders/creating-a-multi-region-key-using-terraform-51o4</link>
      <guid>https://forem.com/aws-builders/creating-a-multi-region-key-using-terraform-51o4</guid>
      <description>&lt;p&gt;For organizations to encrypt their data in a cloud-native approach AWS provides a fully managed service &lt;a href="https://aws.amazon.com/kms/" rel="noopener noreferrer"&gt;AWS KMS&lt;/a&gt;, a high-performance key management system with the “pay as you go” model to lower costs and reduce their administration burden compared to self-managed &lt;a href="https://csrc.nist.gov/glossary/term/Hardware_Security_Module_HSM" rel="noopener noreferrer"&gt;hardware security module (HSM)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;By choosing AWS KMS organizations get three options for encryption key management:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS KMS with customer or AWS-managed keys&lt;/li&gt;
&lt;li&gt;AWS KMS with BYOK &lt;/li&gt;
&lt;li&gt;AWS KMS with a custom key store backed by &lt;a href="https://aws.amazon.com/cloudhsm/" rel="noopener noreferrer"&gt;CloudHSM&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/hashicorp/terraform-provider-aws" rel="noopener noreferrer"&gt;Terraform AWS provider&lt;/a&gt; version &lt;a href="https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.64.0" rel="noopener noreferrer"&gt;3.64.0&lt;/a&gt; introduced new resource &lt;code&gt;aws_kms_replica_key&lt;/code&gt; by which we can create &lt;a href="https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt" rel="noopener noreferrer"&gt;Customer Managed Key (CMK)&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;In this blog post, we will walkthrough the steps for creating a &lt;em&gt;multi-region&lt;/em&gt; CMK using the resource &lt;code&gt;aws_kms_replica_key&lt;/code&gt; which was introduced newly in &lt;a href="https://github.com/hashicorp/terraform-provider-aws" rel="noopener noreferrer"&gt;Terraform AWS provider&lt;/a&gt; version &lt;a href="https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.64.0" rel="noopener noreferrer"&gt;3.64.0&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Multi-region keys come in handy in several data security scenarios: disaster recovery, global data management, distributed signing applications, and active-active applications that span multiple regions.&lt;/p&gt;

&lt;p&gt;As we need the resource type &lt;code&gt;aws_kms_replica_key&lt;/code&gt; from the Terraform AWS provider, the block below adds it to our project. Make sure you are using at least version 3.64.0.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 3.64.0"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Multi-region keys are not global. You create a multi-region primary key and then replicate it into regions that you select within an AWS partition. You then manage the multi-region key in each region independently.&lt;/p&gt;

&lt;p&gt;In our case we will create the primary key in the Singapore region, with replicas in Sydney and Jakarta respectively.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Singapore
provider "aws" {
  region = "ap-southeast-1"
}

# Sydney
provider "aws" {
  alias  = "secondary"
  region = "ap-southeast-2"
}

# Jakarta
# 3.70.0 Terraform AWS Provider release will use AWS SDK v1.42.23 
# which adds ap-southeast-3 to the list of regions for the standard AWS partition.
# https://github.com/hashicorp/terraform-provider-aws/issues/22252
provider "aws" {
  alias  = "tertiary"
  region = "ap-southeast-3"
  skip_region_validation = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt; If you try to create a KMS replica in the &lt;code&gt;JAKARTA&lt;/code&gt; region you will encounter an error like the one below:&lt;br&gt;
  Error: Invalid AWS Region: ap-southeast-3&lt;br&gt;
  with provider["registry.terraform.io/hashicorp/aws"].tertiary,&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is because support for &lt;code&gt;ap-southeast-3&lt;/code&gt; was added in AWS SDK v1.42.23, which is only used from Terraform AWS Provider v3.70.0 onwards. The temporary workaround is to add the &lt;code&gt;skip_region_validation = true&lt;/code&gt; argument to the provider block.&lt;/p&gt;

&lt;p&gt;Unlike other AWS resource policies, an AWS KMS key policy does not automatically give permission to the account or any of its users. To give permission to account administrators, the key policy must include an explicit statement that grants this permission, like the one below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_iam_policy_document" "kms" {
  # Allow root users full management access to key
  statement {
    effect = "Allow"
    actions = [
      "kms:*"
    ]
    resources = ["*"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"]
    }
  }

  # Allow other accounts limited access to key
  statement {
    effect = "Allow"
    actions = [
      "kms:CreateGrant",
      "kms:Encrypt",
      "kms:Decrypt",
      "kms:ReEncrypt*",
      "kms:GenerateDataKey*",
      "kms:DescribeKey",
    ]

    resources = ["*"]

    # AWS account IDs that need access to this key
    principals {
      type        = "AWS"
      identifiers = var.account_ids
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
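
&lt;p&gt;The policy document above references &lt;code&gt;data.aws_caller_identity.current&lt;/code&gt; and &lt;code&gt;var.account_ids&lt;/code&gt;, which must be declared elsewhere in the project. A minimal sketch of those declarations (the variable type and default shown here are assumptions) could look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Identity of the account running Terraform,
# used to build the root principal ARN in the key policy
data "aws_caller_identity" "current" {}

# AWS account IDs that should get limited access to the key
variable "account_ids" {
  type    = list(string)
  default = []
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;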



&lt;h3&gt;
  
  
  Creating multi-region primary key
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_kms_key" "primary" {
  description         = "CMK for AWS CB Blog"
  enable_key_rotation = true
  policy              = data.aws_iam_policy_document.kms.json
  multi_region        = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Resource&lt;/strong&gt;: &lt;code&gt;aws_kms_key&lt;/code&gt; is used to create a single-region or multi-region primary KMS key.&lt;/p&gt;

&lt;p&gt;As this is a multi-region key, the &lt;code&gt;id&lt;/code&gt; &amp;amp; &lt;code&gt;key_id&lt;/code&gt; attributes have an &lt;em&gt;mrk-&lt;/em&gt; prefix.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform show -json terraform.tfstate | jq '.values.root_module.resources[0].values.id'
"mrk-01641fdcadec421f9ed2665c7d78ef9c"

terraform show -json terraform.tfstate | jq '.values.root_module.resources[0].values.key_id'
"mrk-01641fdcadec421f9ed2665c7d78ef9c"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also refer to a KMS key using its alias. &lt;strong&gt;Resource&lt;/strong&gt;: &lt;code&gt;aws_kms_alias&lt;/code&gt; is used to create an alias.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_kms_alias" "alias" {
  target_key_id = aws_kms_key.primary.id
  name          = format("alias/%s", lower("AWS_CB_CMK"))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt; "name" must begin with 'alias/' and be comprised of only [a-zA-Z0-9:/_-]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiok7uemk2v6dfkrho5cm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiok7uemk2v6dfkrho5cm.png" alt="AWS KMS CMK" width="800" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating multi-region replica keys
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_kms_replica_key" "secondary" {
  provider = aws.secondary

  description             = "Multi-Region replica key"
  deletion_window_in_days = 7
  primary_key_arn         = aws_kms_key.primary.arn
}

resource "aws_kms_replica_key" "tertiary" {
  provider = aws.tertiary

  description             = "Multi-Region replica key"
  deletion_window_in_days = 7
  primary_key_arn         = aws_kms_key.primary.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Resource&lt;/strong&gt;: &lt;code&gt;aws_kms_replica_key&lt;/code&gt; is used to create a multi-region replica key. Here we explicitly pass the provider aliases (aws.secondary &amp;amp; aws.tertiary) to create the keys in the Sydney &amp;amp; Jakarta regions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1pk3hu1k1lvc27qdkw4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1pk3hu1k1lvc27qdkw4x.png" alt="MRK" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You must set a waiting period of between 7 (minimum) and 30 (maximum, default) days before the KMS key is deleted.&lt;/p&gt;
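
&lt;p&gt;Note that KMS aliases are regional, so the alias created earlier exists only in Singapore. If you want to refer to the replicas by the same name in their own regions, you need one alias per region; a sketch of that (the resource names here are assumptions) would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Same alias name, created once per replica region
resource "aws_kms_alias" "alias_secondary" {
  provider      = aws.secondary
  target_key_id = aws_kms_replica_key.secondary.key_id
  name          = format("alias/%s", lower("AWS_CB_CMK"))
}

resource "aws_kms_alias" "alias_tertiary" {
  provider      = aws.tertiary
  target_key_id = aws_kms_replica_key.tertiary.key_id
  name          = format("alias/%s", lower("AWS_CB_CMK"))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;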

&lt;p&gt;We thus have a primary key in the Singapore region with replicas in Sydney &amp;amp; Jakarta.&lt;/p&gt;
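
&lt;p&gt;To make these keys easy to consume from other configurations (such as the CloudTrail and SQS setups linked below), you could also expose their ARNs as outputs; a minimal sketch, with output names chosen here for illustration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "primary_key_arn" {
  value = aws_kms_key.primary.arn
}

output "replica_key_arns" {
  value = [
    aws_kms_replica_key.secondary.arn,
    aws_kms_replica_key.tertiary.arn,
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;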

&lt;blockquote&gt;
&lt;p&gt;As you implement this, and in continuation of this blog, we will see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/encrypt-cloudtrail-logs-with-multi-region-key-with-terraform-1hln"&gt;How we can consume this key in encrypting an S3 bucket to configure AWS CloudTrail&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/configuring-amazon-sqs-queues-using-terraform-9g2"&gt;Configure an SQS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Integrating Azure Sentinel to consume AWS CloudTrail data.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;The source code for the above setup is available &lt;a href="https://github.com/sujaypillai/awscb001/tree/mrk" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>kms</category>
      <category>keys</category>
    </item>
    <item>
      <title>Enable Kubernetes Metrics Server on Docker Desktop</title>
      <dc:creator>Sujay Pillai</dc:creator>
      <pubDate>Mon, 20 Sep 2021 13:43:48 +0000</pubDate>
      <link>https://forem.com/docker/enable-kubernetes-metrics-server-on-docker-desktop-5434</link>
      <guid>https://forem.com/docker/enable-kubernetes-metrics-server-on-docker-desktop-5434</guid>
      <description>&lt;p&gt;The steps below in this blog will help you setup &lt;a href="https://github.com/kubernetes-sigs/metrics-server" rel="noopener noreferrer"&gt;Kubernetes Metrics Server&lt;/a&gt; on &lt;strong&gt;Docker Desktop&lt;/strong&gt; which provides a standalone instance of Kubernetes running as a Docker container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kubernetes-sigs/metrics-server" rel="noopener noreferrer"&gt;Kubernetes Metrics Server&lt;/a&gt; is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines. Metrics Server collects resource metrics from Kubelets and exposes them in Kubernetes apiserver through Metrics API for use by &lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noopener noreferrer"&gt;Horizontal Pod Autoscaler&lt;/a&gt; and &lt;a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler" rel="noopener noreferrer"&gt;Vertical Pod Autoscaler&lt;/a&gt;. &lt;/p&gt;

&lt;h4&gt;
  
  
  Metrics Server offers:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;A single deployment that works on most clusters&lt;/li&gt;
&lt;li&gt;Scalable support up to 5,000 node clusters&lt;/li&gt;
&lt;li&gt;Resource efficiency: Metrics Server uses 1m core of CPU and 3 MB of memory per node&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  You can use Metrics Server for:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;CPU/Memory based horizontal autoscaling&lt;/li&gt;
&lt;li&gt;Automatically adjusting/suggesting resources needed by containers&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/docker-for-mac/install/" rel="noopener noreferrer"&gt;Install Docker Desktop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Enable Kubernetes on Docker Desktop&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you have enabled Kubernetes on Docker Desktop, running the commands below should produce messages like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl top node 
error: Metrics API not available
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl top pod -A
error: Metrics API not available
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Metrics Server isn't included with Docker Desktop's installation of Kubernetes. To install it, download the latest &lt;code&gt;components.yaml&lt;/code&gt; file from the &lt;a href="https://github.com/kubernetes-sigs/metrics-server/releases" rel="noopener noreferrer"&gt;Metrics Server&lt;/a&gt; releases page and open it in your text editor.&lt;/p&gt;

&lt;p&gt;If you execute the command &lt;code&gt;kubectl apply -f components.yaml&lt;/code&gt; as-is, you will see the pods get created, but with some errors, as highlighted below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fidwso3o6d1e490a2w659.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fidwso3o6d1e490a2w659.png" alt="MetricsServer_Setup" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the line &lt;code&gt;--kubelet-insecure-tls&lt;/code&gt; under the args section as shown below [L136]:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foftoy932a02jeammjkgy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foftoy932a02jeammjkgy.png" alt="MetricsServer_Setup02" width="800" height="668"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Execute the command &lt;code&gt;kubectl apply -f components.yaml&lt;/code&gt; to apply the changes:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0c12qtgslxae6o9llwf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0c12qtgslxae6o9llwf.png" alt="MetricsServer_Setup03" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now if you execute the &lt;code&gt;kubectl top node&lt;/code&gt; &amp;amp; &lt;code&gt;kubectl top pod -A&lt;/code&gt; commands you should see the output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl top node 
NAME             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
docker-desktop   1310m        32%    1351Mi          71%       
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl top pod -A
NAMESPACE     NAME                                     CPU(cores)   MEMORY(bytes)   
cpu-example   cpu-demo                                 1003m        1Mi             
kube-system   coredns-f9fd979d6-g2rfx                  9m           9Mi             
kube-system   coredns-f9fd979d6-wndgm                  6m           9Mi             
kube-system   etcd-docker-desktop                      35m          36Mi            
kube-system   kube-apiserver-docker-desktop            55m          325Mi           
kube-system   kube-controller-manager-docker-desktop   41m          47Mi            
kube-system   kube-proxy-s72fj                         1m           25Mi            
kube-system   kube-scheduler-docker-desktop            9m           17Mi            
kube-system   metrics-server-56c59cf9ff-jndxd          10m          14Mi            
kube-system   storage-provisioner                      4m           5Mi             
kube-system   vpnkit-controller                        1m           15Mi 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also use &lt;a href="https://github.com/kubernetes/dashboard" rel="noopener noreferrer"&gt;Kubernetes Dashboard&lt;/a&gt; to view the above data (and more information) in a web UI. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself.&lt;/p&gt;

&lt;p&gt;To deploy Dashboard, execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To access Dashboard from your local workstation you must create a secure channel to your Kubernetes cluster. Run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl proxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get the token to log in to the dashboard using the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret |grep default-token | awk '{print $1}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To access the HTTPS endpoint of the dashboard, go to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Log in to the dashboard using the token from the step above, and you should see a dashboard like the one below:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjo8n2aleue4cm6c4gqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjo8n2aleue4cm6c4gqq.png" alt="Metrics_Dashboard" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;This setting should only be used for the local Docker Desktop Kubernetes cluster; it is not recommended for any hosted or production clusters.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>docker</category>
      <category>desktop</category>
      <category>kubernetes</category>
      <category>metrics</category>
    </item>
  </channel>
</rss>
