<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Luigi Di Fraia</title>
    <description>The latest articles on Forem by Luigi Di Fraia (@luigidifraia).</description>
    <link>https://forem.com/luigidifraia</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3897065%2F763ac504-5167-4ee9-8a94-292a62134eeb.jpeg</url>
      <title>Forem: Luigi Di Fraia</title>
      <link>https://forem.com/luigidifraia</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/luigidifraia"/>
    <language>en</language>
    <item>
      <title>Why Every Platform Team Shouldn't Build Their AI Standards From Scratch</title>
      <dc:creator>Luigi Di Fraia</dc:creator>
      <pubDate>Tue, 28 Apr 2026 05:20:01 +0000</pubDate>
      <link>https://forem.com/luigidifraia/why-every-platform-team-shouldnt-build-their-ai-standards-from-scratch-2lma</link>
      <guid>https://forem.com/luigidifraia/why-every-platform-team-shouldnt-build-their-ai-standards-from-scratch-2lma</guid>
      <description>&lt;p&gt;&lt;em&gt;This is Part 0 of a series on building agentic AI workflows for platform engineering. &lt;a href="https://dev.to/luigidifraia/transformative-ai-powered-platform-engineering-2902"&gt;Part 1&lt;/a&gt; jumped straight into the practical how-to: your first steering file, the workspace structure, getting started with Terraform. This article takes a step back to ask a broader question: why is every team building this from scratch?&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Something odd is happening across the industry right now. Thousands of platform engineering teams are independently discovering the same thing: AI coding assistants are competent at the language level but ignorant at the team level.&lt;/p&gt;

&lt;p&gt;They know Terraform syntax but not your Terraform conventions. They know Kubernetes but not your Kubernetes workflow. They'll generate a perfectly valid IAM policy using &lt;code&gt;jsonencode()&lt;/code&gt; when your team exclusively uses &lt;code&gt;data.aws_iam_policy_document&lt;/code&gt;. They'll suggest &lt;code&gt;kubectl apply&lt;/code&gt; when your team is GitOps-first and everything goes through ArgoCD.&lt;/p&gt;

&lt;p&gt;The fix is straightforward. You encode your standards into files that AI agents read automatically (steering files, skills, and agent definitions), so every conversation starts with your team's conventions already loaded. I'll cover the mechanics in detail later in this series.&lt;/p&gt;
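
&lt;p&gt;As a sketch of what such a rule looks like in practice (the file name and wording here are hypothetical, not from any published pack), a steering file is just markdown that states a convention and the reason behind it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# terraform-conventions.md (hypothetical example)

## IAM policies
- Always use `data.aws_iam_policy_document`; never inline JSON via `jsonencode()`.
- Rationale: readability, composability, and static-analysis compatibility.

## Deployment
- Never suggest `kubectl apply`; all changes go through ArgoCD (GitOps-first).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;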

&lt;p&gt;But here's the question nobody seems to be asking: &lt;strong&gt;why is every team writing these from scratch?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Terraform Module Problem, Again
&lt;/h2&gt;

&lt;p&gt;We've been here before. Before the Terraform Registry existed, every team wrote their own VPC module. Most of them were 80% identical: the same subnets, the same route tables, the same NAT gateway pattern, with 20% of team-specific customisation on top.&lt;/p&gt;

&lt;p&gt;The Terraform Registry didn't eliminate custom modules. It eliminated the redundant 80%. Teams could start from a community module and customise the rest.&lt;/p&gt;

&lt;p&gt;AI workspace configurations have the same problem today. Every platform team that adopts agentic AI tooling starts from zero:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Always use &lt;code&gt;data.aws_iam_policy_document&lt;/code&gt;", discovered independently by every AWS/Terraform team&lt;/li&gt;
&lt;li&gt;"Conventional commits with semantic release", written into a steering file by every team that uses them&lt;/li&gt;
&lt;li&gt;"No &lt;code&gt;0.0.0.0/0&lt;/code&gt; ingress unless documented", encoded as a rule by every security-conscious team&lt;/li&gt;
&lt;li&gt;"Pin provider versions, pin Terraform versions", rediscovered after the first breaking upgrade&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is collective knowledge being individually rediscovered. It's wasteful, and it's solvable.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Layered Model for Shared Standards
&lt;/h2&gt;

&lt;p&gt;The solution looks like what already works for linting, CI templates, and infrastructure modules: composable, shareable, layered configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer 1: Industry baseline.&lt;/strong&gt; Universal best practices that are true regardless of your organisation. AWS Well-Architected principles as steering rules. CIS Benchmarks as security baselines. Terraform style conventions. Git hygiene. These should be published, versioned, and consumable, not rediscovered by every team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer 2: Organisation standards.&lt;/strong&gt; Your company's specific opinions layered on top: naming conventions, tagging standards, provider version pins, CI template references, security baselines. Shared across teams within the organisation, maintained centrally, consumed as a dependency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer 3: Team customisation.&lt;/strong&gt; The 20% that's genuinely unique: your specific module structure, your environment names, your repo layout. This is the only layer teams should write from scratch, and it's small because the heavy lifting was done by the layers below.&lt;/p&gt;

&lt;p&gt;The key property is that each layer extends and can override the previous one, exactly like ESLint's shareable configs, where you extend &lt;code&gt;eslint-config-recommended&lt;/code&gt;, then your org's config, then add your team's overrides.&lt;/p&gt;

&lt;p&gt;The tooling is already moving in this direction. &lt;a href="https://kiro.dev" rel="noopener noreferrer"&gt;Kiro&lt;/a&gt; provides an explicit layered model through its &lt;code&gt;.kiro/&lt;/code&gt; directory: steering files, skills, and agent definitions at different scopes. &lt;a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; supports a similar pattern natively through &lt;code&gt;CLAUDE.md&lt;/code&gt; files scoped at the project root, subdirectory, or global level. The concept of layered, overridable context isn't hypothetical; it's how multiple tools already work. What's missing is the sharing and distribution layer on top.&lt;/p&gt;




&lt;h2&gt;
  
  
  This Is Already Starting to Happen
&lt;/h2&gt;

&lt;p&gt;The pattern isn't theoretical. Early efforts are emerging that treat standards as machine-readable tooling rather than documents.&lt;/p&gt;

&lt;p&gt;In the UK Government space, Version 1 (my employer) open-sourced a &lt;a href="https://github.com/Version1/uk-gov-tech-standards-mcp" rel="noopener noreferrer"&gt;proof-of-concept MCP server wrapping 102 curated UK Government technology standards&lt;/a&gt; from GDS, NCSC, Cabinet Office, and ICO as searchable, context-aware tools. Instead of an engineer reading through the Service Manual to find applicable accessibility standards, their AI assistant can query them directly, filtered by work type, development phase, and priority level. The repo hasn't seen active development since its initial release, but the concept holds: standards as something AI agents consume, not something humans remember to check.&lt;/p&gt;

&lt;p&gt;In the US, &lt;a href="https://legismcp.com/" rel="noopener noreferrer"&gt;LegislMCP&lt;/a&gt; takes a similar approach for government data: an open-source MCP server spanning 29 federal and state data sources with 161 tools, covering everything from Congressional records to FDA and EPA data.&lt;/p&gt;

&lt;p&gt;These are point solutions, individual MCP servers for specific domains. The bigger opportunity is the layered model: composable packs of steering files, skills, and agent definitions that teams can extend rather than rebuild.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where Standards Bodies Fit
&lt;/h2&gt;

&lt;p&gt;This is where it gets interesting for organisations with centralised standards: government departments, regulated industries, large enterprises.&lt;/p&gt;

&lt;p&gt;Take UK Government as an example. GDS already publishes the Technology Code of Practice and the Service Standard. NCSC publishes cloud security guidance. These are well-maintained, authoritative, and widely referenced. They're also PDFs and web pages that people read once during onboarding and then forget.&lt;/p&gt;

&lt;p&gt;What if they were published as AI workspace configurations instead?&lt;/p&gt;

&lt;p&gt;A GDS steering pack could encode the Service Standard as rules that AI agents follow automatically. Not "here's a document about accessibility" but "every frontend component you generate must meet WCAG 2.2 AA, and here's how." An NCSC security pack could encode their cloud security principles as non-negotiable guardrails that every agent respects by default.&lt;/p&gt;

&lt;p&gt;The distribution mechanism already exists. Git repos with semantic versioning. Teams declare a dependency, pin a version, and get updates when the standards evolve:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;extends&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@gds/service-standard:^2.0"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@ncsc/cloud-security:^1.5"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@myorg/platform-standards:^3.0"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Updates to security guidance would propagate to every team's AI agents on the next version bump. That's fundamentally different from emailing a PDF and hoping people read it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Gap Today
&lt;/h2&gt;

&lt;p&gt;The ecosystem is moving faster than you might expect. AWS recently launched &lt;a href="https://aws.amazon.com/blogs/machine-learning/the-future-of-managing-agents-at-scale-aws-agent-registry-now-in-preview/" rel="noopener noreferrer"&gt;Agent Registry&lt;/a&gt; (preview) as part of Amazon Bedrock AgentCore: a centralised place to discover, share, and govern agents, tools, MCP servers, and agent skills across an enterprise. It supports MCP and A2A natively, includes approval workflows and lifecycle management, and indexes agents regardless of where they're built.&lt;/p&gt;

&lt;p&gt;This solves an important piece of the puzzle: finding and reusing agents and tools at scale. But there's a layer above where the tooling doesn't exist yet. While individual teams are starting to encode standards as machine-readable tooling (as the examples above show), there's no mechanism for sharing &lt;em&gt;workspace configurations&lt;/em&gt; at scale: the steering files, skills, and agent definitions that shape how agents behave within a specific domain. An agent registry tells you "this MCP server exists and here's how to invoke it." A shareable steering pack tells you "when writing Terraform for AWS, here are the rules your agents should follow."&lt;/p&gt;

&lt;p&gt;The building blocks are falling into place. The npm registry solved package discovery and versioning. ESLint solved layered configuration with extends chains and override rules. The Terraform Registry solved module sharing with documentation and version pinning. Agent registries are now solving agent and tool discovery. The remaining gap is the configuration layer that ties it all together.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You Can Do Now
&lt;/h2&gt;

&lt;p&gt;You don't need to wait for the ecosystem. You can structure your workspace to be ready for it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Separate universal rules from team-specific ones.&lt;/strong&gt; Even within your own steering files, keep a clear boundary between "this is true for any AWS Terraform project" and "this is specific to us." When shareable packs exist, you'll know exactly which rules to replace with a dependency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Version your workspace config.&lt;/strong&gt; It's already in git. Treat it like a product: tag releases, write changelogs, make it consumable by other teams in your organisation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Share within your org first.&lt;/strong&gt; If you have multiple platform teams, publish your Layer 2 config as an internal package. Get feedback. Iterate. This is the fastest way to discover what's genuinely universal and what's team-specific opinion.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Contribute upstream.&lt;/strong&gt; If you've written a good AWS Terraform baseline, open-source it. The community will tell you quickly what's universal and what's opinionated.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  The Series So Far and What's Next
&lt;/h2&gt;

&lt;p&gt;This article covered the why: why shared, layered AI workspace configurations matter and why no team should have to build them from scratch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Already published:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://dev.to/luigidifraia/transformative-ai-powered-platform-engineering-2902"&gt;Part 1: Your First Steering File: Teaching AI Your Terraform Conventions&lt;/a&gt;&lt;/strong&gt;: the practical getting-started guide covering workspace structure, your first steering file, and the tooling choice&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Coming next:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Part 2&lt;/strong&gt;: Steering files in depth, encoding Terraform, Git, and CI/CD standards as non-negotiable rules&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Part 3&lt;/strong&gt;: Skills and agents, deep reference material and purpose-built agents for different roles&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Part 4&lt;/strong&gt;: Tool integrations and the refinement loop, connecting agents to your workflow and making the system compound over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each part is practical and hands-on, with real examples from an AWS platform engineering stack (Terraform, GitLab, EKS, Control Tower).&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you haven't read Part 1 yet, &lt;a href="https://dev.to/luigidifraia/transformative-ai-powered-platform-engineering-2902"&gt;start there&lt;/a&gt; for the hands-on setup.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you're building agentic AI workflows into platform engineering, or thinking about shared standards for your organisation, I'd love to hear your approach.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>automation</category>
      <category>devops</category>
    </item>
    <item>
      <title>Transformative AI-Powered Platform Engineering</title>
      <dc:creator>Luigi Di Fraia</dc:creator>
      <pubDate>Sat, 25 Apr 2026 06:36:38 +0000</pubDate>
      <link>https://forem.com/luigidifraia/transformative-ai-powered-platform-engineering-2902</link>
      <guid>https://forem.com/luigidifraia/transformative-ai-powered-platform-engineering-2902</guid>
      <description>&lt;p&gt;&lt;em&gt;This is Part 1 of a series on building agentic AI workflows for platform engineering teams. The series covers workspace design, encoding standards, agent architecture, tool integrations, and the refinement loop that makes it all compound over time.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;If you're running a platform engineering team in 2026 and your AI tooling still consists of "paste Terraform into ChatGPT and hope for the best," you're leaving serious velocity on the table.&lt;/p&gt;

&lt;p&gt;But here's the thing most people get wrong: the answer isn't better prompts. It's better structure.&lt;/p&gt;

&lt;p&gt;In my current engagement, we've been building agentic AI workflows into platform engineering for a while now. The stack starts where most platform teams start: AWS, Terraform for IaC, GitLab for source control and CI/CD. Multiple accounts, multiple environments, and a growing collection of modules that encode your team's opinions about how infrastructure should look.&lt;/p&gt;

&lt;p&gt;No single person holds all of those opinions in their head. And neither does an LLM, at least not without help.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem With Ad-Hoc AI
&lt;/h2&gt;

&lt;p&gt;Every platform engineer has done this: you're writing a Terraform module, you ask your AI assistant to generate an IAM policy, and it hands you a &lt;code&gt;jsonencode()&lt;/code&gt; block with inline JSON. It works. It's also wrong: your team uses &lt;code&gt;data.aws_iam_policy_document&lt;/code&gt; exclusively, for good reasons (readability, composability, Checkov compatibility). But the AI doesn't know that.&lt;/p&gt;
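
&lt;p&gt;To make the contrast concrete, here's a minimal sketch of the two styles (resource names and the bucket ARN are illustrative, not from any real project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# What the AI typically generates: valid, but against team convention
resource "aws_iam_policy" "read_bucket_inline" {
  name   = "read-bucket"
  policy = jsonencode({
    Version   = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject"]
      Resource = ["arn:aws:s3:::example-bucket/*"]
    }]
  })
}

# The team convention: a composable, lintable policy document
data "aws_iam_policy_document" "read_bucket" {
  statement {
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::example-bucket/*"]
  }
}

resource "aws_iam_policy" "read_bucket" {
  name   = "read-bucket"
  policy = data.aws_iam_policy_document.read_bucket.json
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both produce the same policy; the second version is the one that tools like Checkov can reason about and other modules can compose.&lt;/p&gt;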

&lt;p&gt;You correct it. It apologises. Next session, it does the same thing again.&lt;/p&gt;

&lt;p&gt;Or this: you ask it to create an EKS add-on configuration, and it generates a &lt;code&gt;kubectl apply&lt;/code&gt; command. Your team is GitOps-first: everything goes through ArgoCD. But the AI doesn't know that either.&lt;/p&gt;

&lt;p&gt;The pattern is always the same. The AI is competent at the language level but ignorant at the team level. It knows Terraform syntax but not your Terraform conventions. It knows Kubernetes but not your Kubernetes workflow.&lt;/p&gt;

&lt;p&gt;Most teams try to fix this with longer prompts, or by pasting their standards into the chat window. That works for about ten minutes, until the context window fills up or you start a new session.&lt;/p&gt;




&lt;h2&gt;
  
  
  What If Your Standards Were Built Into the Tools?
&lt;/h2&gt;

&lt;p&gt;Imagine this instead: every time an AI agent writes Terraform in your workspace, it has already read your module structure conventions, your naming rules, your IAM policy patterns, your provider configuration, and your security baseline. Not because someone pasted them in; because they're part of the workspace itself.&lt;/p&gt;

&lt;p&gt;Every time it creates a merge request, it knows your commit message format, your branch naming convention, your CI template patterns, and your cross-linking strategy between tickets and code.&lt;/p&gt;

&lt;p&gt;Every time it designs a new feature, it can check your existing codebase for similar patterns, identify which repos are affected, and plan the work in the right order.&lt;/p&gt;

&lt;p&gt;That's what an AI-powered workspace gives you. Not smarter AI but better-informed AI.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Big Picture
&lt;/h2&gt;

&lt;p&gt;Over this series, I'll walk through how to build this from scratch. Here's what we'll cover:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The foundation&lt;/strong&gt;: steering files that encode your non-negotiable rules. These are loaded into every AI conversation automatically. Your Terraform patterns, your git conventions, your CI/CD standards. Write them once, enforce them forever.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deep reference material&lt;/strong&gt;: skills that agents opt into when they need domain-specific knowledge. Your landing zone structure, your account vending patterns, your CI template library. Too detailed for every conversation, essential for the right ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Specialised agents&lt;/strong&gt;: purpose-built agents for different roles: one that writes infrastructure code, one that reviews merge requests from security and compliance perspectives, one that blueprints features into implementation tasks, one that ships code end-to-end. Each with its own tools, context, and boundaries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tool integrations&lt;/strong&gt;: connecting your agents to the systems they need: your ticket tracker for work management, AWS documentation for reference, your CI/CD pipelines for deployment status. Agents that can only read and write files are useful. Agents that participate in your actual workflow are transformative.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The refinement loop&lt;/strong&gt;: the part that makes it all compound. Every time the AI gets something wrong, you encode the correction in the workspace. Next session, it gets it right. Over weeks and months, your workspace accumulates the team's collective judgement.&lt;/p&gt;

&lt;p&gt;And here's the part that doesn't get talked about enough: &lt;strong&gt;onboarding becomes trivial&lt;/strong&gt;. A new engineer clones the workspace and immediately has access to every convention, every pattern, every hard-won lesson the team has learned; not as a Confluence page they'll never read, but as active rules built into the tools they use from minute one. No more three-month ramp-up. No more "ask Sarah, she knows how we do IAM policies." The workspace &lt;em&gt;is&lt;/em&gt; the institutional knowledge.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Tech Stack
&lt;/h2&gt;

&lt;p&gt;To keep this concrete, the series assumes a specific (but common) platform engineering stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cloud&lt;/strong&gt;: AWS, multi-account (Control Tower for landing zone)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IaC&lt;/strong&gt;: Terraform, multi-environment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source Control &amp;amp; CI/CD&lt;/strong&gt;: GitLab with shared CI templates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secret Management&lt;/strong&gt;: AWS Secrets Manager, never in code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Later in the series, we'll layer on Kubernetes (EKS), a developer portal (Backstage), and GitOps (ArgoCD). But the foundation starts here: with Terraform and the rules your team already has but hasn't encoded yet.&lt;/p&gt;

&lt;p&gt;If your stack differs, the principles still apply. The workspace structure is stack-agnostic; only the content of the steering files and skills changes.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Tooling Choice
&lt;/h2&gt;

&lt;p&gt;The workspace structure in this series is built around &lt;a href="https://kiro.dev" rel="noopener noreferrer"&gt;Kiro&lt;/a&gt;, an AI-powered IDE from AWS. It's an opinionated choice, and deliberately so.&lt;/p&gt;

&lt;p&gt;Kiro provides a layered context model through its &lt;code&gt;.kiro/&lt;/code&gt; directory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Steering files&lt;/strong&gt;: always injected into every conversation, non-negotiable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skills&lt;/strong&gt;: deeper reference material that specific agents opt into&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent definitions&lt;/strong&gt;: role-specific behaviour, tools, and context&lt;/li&gt;
&lt;/ul&gt;
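
&lt;p&gt;As a rough sketch of the layout (the &lt;code&gt;steering/&lt;/code&gt; directory appears later in this article; the other subdirectory and file names are illustrative and may differ in your Kiro setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.kiro/
├── steering/   # always-on rules, e.g. terraform.md, git.md
├── skills/     # opt-in reference material, e.g. landing-zone.md
└── agents/     # role definitions, e.g. reviewer, implementer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;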

&lt;p&gt;This enforced separation of concerns is what makes the system scale. Your Terraform rules don't bloat every conversation with Kubernetes context. Your CI patterns are available when needed but not loaded when irrelevant.&lt;/p&gt;

&lt;p&gt;If your team uses &lt;a href="https://www.anthropic.com/product/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;, the same principles apply with native support. Your &lt;code&gt;CLAUDE.md&lt;/code&gt; file (scoped at the project root, subdirectory, or global level) is loaded into every session automatically, giving you a layered context model without additional configuration. Claude Code also builds auto-memory as it works, saving learnings across sessions without manual encoding. Keep your &lt;code&gt;CLAUDE.md&lt;/code&gt; concise: Claude Code's system prompt already consumes a significant portion of the model's reliable instruction-following capacity, so prioritise rules that are truly universal.  &lt;/p&gt;

&lt;p&gt;If your team uses a different AI tool, the &lt;code&gt;AGENTS.md&lt;/code&gt; file at the workspace root serves as a portable fallback: it's a plain markdown file that tools like Cursor and others pick up automatically. You won't get the layered context model, but you'll get the basics.&lt;/p&gt;




&lt;h2&gt;
  
  
  Getting Started Today, Before Part 2
&lt;/h2&gt;

&lt;p&gt;You don't need to wait for the rest of this series to start. Here's what you can do right now:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create one steering file.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pick the area where your AI assistant causes the most damage. For most platform teams, that's Terraform. Write down the rules you find yourself repeating:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What's your module file structure?&lt;/li&gt;
&lt;li&gt;How do you write IAM policies? (&lt;code&gt;data.aws_iam_policy_document&lt;/code&gt;? &lt;code&gt;jsonencode()&lt;/code&gt;? Something else?)&lt;/li&gt;
&lt;li&gt;What's your naming convention?&lt;/li&gt;
&lt;li&gt;What provider version do you pin?&lt;/li&gt;
&lt;li&gt;What security rules are non-negotiable?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Put it in &lt;code&gt;.kiro/steering/terraform.md&lt;/code&gt; (or whatever your AI tool's equivalent is). It doesn't need to be perfect. It needs to exist.&lt;/p&gt;
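
&lt;p&gt;A first pass might look like this, answering the questions above (every value below is a placeholder for your team's actual answer):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Terraform Conventions

## IAM
- Write policies with `data.aws_iam_policy_document`, never `jsonencode()`.

## Naming
- Resources: `&amp;lt;project&amp;gt;-&amp;lt;env&amp;gt;-&amp;lt;purpose&amp;gt;`, lowercase, hyphen-separated.

## Versions
- Pin the AWS provider (e.g. `~&amp;gt; 5.0`) and Terraform (e.g. `&amp;gt;= 1.7`).

## Security
- No `0.0.0.0/0` ingress unless documented in the module README.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;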

&lt;p&gt;&lt;strong&gt;2. Create an AGENTS.md file.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At your workspace root, write a plain markdown file that describes your project: what it is, how it's structured, how to build it, and the three or four rules that matter most. This works with any AI tool, no configuration required.&lt;/p&gt;
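
&lt;p&gt;A skeleton along these lines is enough to start (the structure, commands, and rules below are placeholders for your own):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# AGENTS.md

This repo contains Terraform for our AWS platform (multi-account, multi-env).

## Structure
- `modules/`: reusable modules
- `environments/`: per-environment root configurations

## Build and verify
- `terraform fmt -check`, `terraform validate`, then plan via GitLab CI.

## Rules that matter most
1. IAM policies use `data.aws_iam_policy_document`.
2. Conventional commits; every MR links to a ticket.
3. Never commit secrets; use AWS Secrets Manager.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;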

&lt;p&gt;&lt;strong&gt;3. Test it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ask your AI assistant to generate something it usually gets wrong: an IAM policy, a CI pipeline, a Kubernetes manifest. See if the steering file corrects the behaviour. If it doesn't, tighten the rule. If it does, you've just experienced the refinement loop.&lt;/p&gt;

&lt;p&gt;That's the foundation. In Part 2, we'll go deep on steering files, the specific rules that prevent the most common AI-generated mistakes in Terraform, GitLab CI, and git workflows.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Next in the series: &lt;strong&gt;Steering Files: Teaching AI Your Non-Negotiable Rules&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Follow along for the rest of the series, or connect if you're building something similar. I'd love to compare notes.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>automation</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
