<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: viola</title>
    <description>The latest articles on Forem by viola (@idncod).</description>
    <link>https://forem.com/idncod</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1849208%2Fdcc26d98-5682-4f23-9aaa-4905b1d97009.jpg</url>
      <title>Forem: viola</title>
      <link>https://forem.com/idncod</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/idncod"/>
    <language>en</language>
    <item>
      <title>Cat Web Services (CWS): A dashboard your cat uses to manage you</title>
      <dc:creator>viola</dc:creator>
      <pubDate>Sun, 12 Apr 2026 22:58:46 +0000</pubDate>
      <link>https://forem.com/idncod/cat-web-services-cws-a-dashboard-your-cat-uses-to-manage-you-332</link>
      <guid>https://forem.com/idncod/cat-web-services-cws-a-dashboard-your-cat-uses-to-manage-you-332</guid>
      <description>&lt;h1&gt;
  
  
  CWS: Cat Web Services (meow...)
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aprilfools-2026"&gt;DEV April Fools Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built &lt;strong&gt;CWS, Cat Web Services&lt;/strong&gt;, a fully managed cloud platform that helps cats monitor, control, and escalate issues with the humans they own.&lt;/p&gt;

&lt;p&gt;Modern cats face serious infrastructure challenges. Humans are inconsistent. Treat delivery is unreliable. Door-opening latency remains unacceptable. Lap availability can degrade without warning. Existing cloud platforms were not designed for feline-first operations.&lt;/p&gt;

&lt;p&gt;So I fixed that by building a platform that solves absolutely nothing.&lt;/p&gt;

&lt;p&gt;CWS is a fake cloud console where cats can manage human behavior at scale through a suite of highly specialized services, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CatOps&lt;/strong&gt; for human workforce monitoring&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Identity and Meowment (IAM)&lt;/strong&gt; for access control over petting, feeding, and sofa privileges&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ClawedWatch&lt;/strong&gt; for real-time incident monitoring&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Snack Notification Service&lt;/strong&gt; for mission-critical treat escalation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scratch, Sleep, Store&lt;/strong&gt; for durable storage of naps, grudges, and box-related assets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Route 9 Lives&lt;/strong&gt; for low-latency movement between operational rooms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The platform displays totally useless enterprise metrics such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;treat response latency&lt;/li&gt;
&lt;li&gt;lap readiness score&lt;/li&gt;
&lt;li&gt;sunbeam occupancy&lt;/li&gt;
&lt;li&gt;meow acknowledgement rate&lt;/li&gt;
&lt;li&gt;vacuum threat level&lt;/li&gt;
&lt;li&gt;blanket warmth compliance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, this is a serious cloud product for unserious cats.&lt;/p&gt;

&lt;p&gt;My favorite part is that whenever a human attempts to access cat-only controls, the platform rejects the request with an intentional &lt;strong&gt;HTTP 418&lt;/strong&gt;, because if you are going to build nonsense, you should do it with standards.&lt;/p&gt;
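
&lt;p&gt;As a sketch of what that rejection could look like in TypeScript (the &lt;code&gt;guardCatOnly&lt;/code&gt; name and the request/response shapes are illustrative, not the actual CWS code):&lt;/p&gt;

```typescript
// Illustrative cat-only guard; every name here is hypothetical.
type Caller = { species: "cat" | "human" };

type CwsResponse = { status: number; body: string };

function guardCatOnly(caller: Caller): CwsResponse {
  if (caller.species !== "cat") {
    // HTTP 418 "I'm a teapot" (RFC 2324): the requester is fundamentally
    // unqualified to operate feline infrastructure.
    return { status: 418, body: "I'm a teapot (and you are not a cat)" };
  }
  return { status: 200, body: "meow: access granted" };
}
```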

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Live App:&lt;/strong&gt; &lt;a href="//cws.idncod.com"&gt;CWS Console&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source Code:&lt;/strong&gt; &lt;a href="https://github.com/idncod/cat-web-services" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Demo Video / GIF:&lt;/strong&gt; &lt;a href="//cws.idncod.com/demo"&gt;Video demo&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;The project is built as a fake enterprise cloud dashboard, with each service behaving like a parody of a real cloud product, except every metric is tailored to the emotional and operational needs of cats.&lt;/p&gt;

&lt;p&gt;A few examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CatOps&lt;/strong&gt; shows the current human status, including response time, usefulness, and cuddle uptime&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM&lt;/strong&gt; controls permissions such as &lt;code&gt;can_open_tuna&lt;/code&gt;, &lt;code&gt;can_interrupt_nap&lt;/code&gt;, and &lt;code&gt;can_prevent_zoomies&lt;/code&gt;, most of which are denied by default&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ClawedWatch&lt;/strong&gt; logs critical production incidents like:

&lt;ul&gt;
&lt;li&gt;Human entered kitchen and returned empty-handed&lt;/li&gt;
&lt;li&gt;Bathroom door was closed without approval&lt;/li&gt;
&lt;li&gt;Laptop occupied preferred sitting zone&lt;/li&gt;
&lt;li&gt;Vacuum detected in production&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
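
&lt;p&gt;The deny-by-default behavior of IAM can be sketched in a few lines of TypeScript. The permission names come from the post; the lookup itself is an assumed, simplified implementation:&lt;/p&gt;

```typescript
// Deny-by-default permission check: anything not explicitly granted is refused.
// The grant list is hypothetical; in the post, most permissions are denied.
const grantedToHumans = ["can_open_tuna"];

function isHumanAllowed(permission: string): boolean {
  return grantedToHumans.includes(permission);
}
```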

&lt;p&gt;The whole product is intentionally polished enough to look real for a second, which makes the joke hit harder.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;The inspiration came from my Sphynx cats... and CWS itself is a frontend-first parody dashboard built with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;React&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;TypeScript&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Vite&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SCSS Modules&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Framer Motion&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I chose this stack because it let me move fast and spend more time on the important engineering problems, such as how to represent catastrophic blanket misalignment in a way that feels enterprise-ready.&lt;/p&gt;

&lt;p&gt;The app is structured like a fake cloud console with reusable cards, service panels, incident feeds, status badges, and absurd metric widgets. I wanted it to feel like the kind of dashboard that a very demanding cat product manager would insist on shipping before the end of the quarter.&lt;/p&gt;

&lt;p&gt;I also leaned into the writing and naming as much as the UI, because half the joke here is the contrast between dead-serious platform language and the completely ridiculous problem domain.&lt;/p&gt;

&lt;p&gt;A few things I focused on while building it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;making the interface feel weirdly believable&lt;/li&gt;
&lt;li&gt;writing service names that sound close enough to cloud products to be instantly recognizable&lt;/li&gt;
&lt;li&gt;making the metrics specific enough to feel like a real system&lt;/li&gt;
&lt;li&gt;adding intentional &lt;strong&gt;418&lt;/strong&gt; responses when humans attempt privileged actions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This project is proudly overengineered for a problem that should never have existed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prize Category
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Best Ode to Larry Masinter&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I intentionally worked &lt;strong&gt;HTTP 418&lt;/strong&gt; into the experience as a first-class feature. In CWS, when a human tries to perform cat-only administrative actions, the system returns a 418-style rejection because the platform recognizes that the requester is fundamentally not qualified to operate feline infrastructure.&lt;/p&gt;

&lt;p&gt;This project is not a teapot. It is, however, spiritually adjacent.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>418challenge</category>
      <category>showdev</category>
    </item>
    <item>
      <title>A Year of Building snappycart in Public or 7 Mistakes I Learned the Hard Way</title>
      <dc:creator>viola</dc:creator>
      <pubDate>Wed, 08 Apr 2026 23:54:34 +0000</pubDate>
      <link>https://forem.com/idncod/a-year-of-building-snappycart-in-public-or-7-mistakes-i-learned-the-hard-way-67k</link>
      <guid>https://forem.com/idncod/a-year-of-building-snappycart-in-public-or-7-mistakes-i-learned-the-hard-way-67k</guid>
      <description>&lt;p&gt;A year ago, I thought the hardest part of open source would be building the package....&lt;/p&gt;

&lt;p&gt;It was not.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwji5o7sjx0ojq3nkhg6e.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwji5o7sjx0ojq3nkhg6e.gif" alt="Animated GIF test" width="380" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code mattered, of course. But building &lt;a href="https://github.com/idncod/snappycart" rel="noopener noreferrer"&gt;&lt;strong&gt;snappycart&lt;/strong&gt;&lt;/a&gt; in public taught me that writing the package is only one layer of the job. The harder part is making the project visible, understandable, trustworthy, and worth coming back to. And knowing what I knew did not mean that everyone could get the same vibe and go along with it. So what actually happened? Read on!&lt;/p&gt;

&lt;p&gt;snappycart started as a practical React cart package, something developers could integrate into ecommerce and SaaS products without dragging in a messy setup. The main reason I created it was to save money on cart and checkout integrations, and I immediately saw a gap there, which prompted me to release snappycart to the world. Over time, though, it became much more than a package. It became a product, a repository, a contributor space, a testing surface, and a public signal of how seriously I take quality. And this can be easily proven by how much time snappycart contributors invest in testing and developing the package. Surely anyone can release a package, but would that anyone be able to build momentum with contributors around it? Probably, but not instantly. &lt;/p&gt;

&lt;p&gt;That changed how I think about open source.&lt;/p&gt;

&lt;p&gt;Here are seven mistakes I learned the hard way that I am eager to share with you, because I want our open source community to thrive and produce more free solutions for all of us devs out there. Enjoy...&lt;/p&gt;

&lt;h2&gt;
  
  
  1. I thought publishing the package meant people would find it
&lt;/h2&gt;

&lt;p&gt;This was the first big misunderstanding. Man, if I only knew...&lt;/p&gt;

&lt;p&gt;Like a lot of developers, I assumed that once the package was live on npm and GitHub, the right people would eventually come across it. I thought the quality of the thing itself would do more of the work, just like they show us in the movies.&lt;/p&gt;

&lt;p&gt;It does not.😬&lt;/p&gt;

&lt;p&gt;Publishing is not distribution in the same way as shipping is not visibility. Open source is not some magical meritocracy where good code automatically floats to the top just because one day you woke up, had some coffee, sat in your wonderful London garden and DEVELOPED IT... Most people are busy, distracted, and already overloaded with libraries, tools, and repos they have not had time to evaluate. But even worse, not many devs are even considering other tools. I mean who really wants to trade some tool they learnt back in 2020 during the COVID-19, when they had the time, to something that Viola (&lt;a class="mentioned-user" href="https://dev.to/idncod"&gt;@idncod&lt;/a&gt;) decided to release to 'solve' their problem? What problem?...&lt;/p&gt;

&lt;p&gt;That means if you build something useful, you still have to explain why it exists, who it is for, and why someone should care right now; for everyone else, it's fine to pass it by.&lt;/p&gt;

&lt;p&gt;The lesson for me was simple: if I want snappycart to grow (organically), I cannot just build it and leave it. I have to talk about it, demo it, document it, and repeat the message consistently. Otherwise it just becomes another repo in the pile and nobody gets to see those cute big odd eyes!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vukhivb9pl17gdnpq4g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vukhivb9pl17gdnpq4g.png" alt="snappycart demo preview" width="800" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. I underestimated how much people need examples
&lt;/h2&gt;

&lt;p&gt;I originally put too much faith in the package API doing the talking.&lt;/p&gt;

&lt;p&gt;In reality, developers do not want to imagine how something works. They'd rather see it working. They (actually, we!) want to know how the provider wraps the app, how state moves through the UI, what the cart drawer looks like, how quantity updates behave, and whether the integration feels smooth or annoying. You know, the sacred DEMO that we desire to play with and get to know the package.&lt;/p&gt;

&lt;p&gt;That means examples are not a nice extra. They are part of adoption! (knowing myself too well, I must have thought about this from day one but just didn't feel the urge yet. Well, too bad.)&lt;/p&gt;

&lt;p&gt;One of the most important shifts I made with snappycart was thinking beyond the package internals and focusing more on the actual developer journey. A demo app, realistic usage flows, and clear visual examples do far more than a technically correct API description can on its own.&lt;/p&gt;

&lt;p&gt;A lot of open-source projects lose people because they make users do too much interpretation, which is a big mistake. If someone has to mentally assemble your product before they can trust it, you are already asking for too much without providing any value upfront.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. I treated docs like support material instead of part of the product
&lt;/h2&gt;

&lt;p&gt;This is such an easy trap. And I am glad I get it now.&lt;/p&gt;

&lt;p&gt;At the start, it is tempting to think: let me finish the code first, then I will polish the docs later. That sounds reasonable, but it is wrong.&lt;/p&gt;

&lt;p&gt;For most users, the docs are the first product they touch.&lt;/p&gt;

&lt;p&gt;They are not meeting your architecture first. They are meeting your README, your install instructions, your project structure, your usage examples, and the speed at which they can get to their first success. If that experience feels messy, the whole project feels messy.&lt;/p&gt;

&lt;p&gt;Working on snappycart forced me to take documentation more seriously, not as a cleanup task but as part of the interface. The docs are where confidence starts. They are where people decide whether this project feels maintained, understandable, and safe to try.&lt;/p&gt;

&lt;p&gt;Bad docs do not always create loud complaints. More often they create silent drop-off, which is worse.&lt;/p&gt;

&lt;p&gt;Now that I have made some drastic changes to my README in the latest version 1.2.3, I am over the moon to hear things like this from contributors and from people seeing the product for the first time: 'Well, this README is actually well-structured and it's very easy to understand how I can contribute' (-Zinaida, contributor) and 'The README is so detailed that I can tell what I am dealing with' (-Jay Saadana). &lt;/p&gt;

&lt;h2&gt;
  
  
  4. I assumed contributors would just figure the repo out
&lt;/h2&gt;

&lt;p&gt;That was naive.&lt;/p&gt;

&lt;p&gt;If you want contributions, you have to design for contribution. People need a way in. They need to understand the structure, the setup, the workflows, the standards, the frameworks, and the straightforward release process. They need to know where the package lives, where the demo app lives, how to run tests, what sort of changes are welcome, and how not to break everything.&lt;/p&gt;

&lt;p&gt;A public repo is not automatically a collaborative project. It only becomes collaborative when it is legible.&lt;/p&gt;

&lt;p&gt;That was a big lesson for me with snappycart. The more I thought about contributors seriously, the more obvious it became that open source is not just about making code available. It is about reducing the friction of joining the effort. If you're into React or Nextjs, &lt;a href="https://t.me/qa_english_time/6827" rel="noopener noreferrer"&gt;join us on Telegram&lt;/a&gt; and help us build a better tool!&lt;/p&gt;

&lt;p&gt;Good contributor experience is not accidental. It is engineered.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. I underestimated how much release discipline matters
&lt;/h2&gt;

&lt;p&gt;This one became way more obvious over time.&lt;/p&gt;

&lt;p&gt;It is easy to think the main signal of progress is writing new features. But in open source, release hygiene matters a lot more than many people admit. Versioning, changelogs, package structure, visible updates, and a clean release process all shape how serious the project feels.&lt;/p&gt;

&lt;p&gt;People do not just look at what your project can do. They look at how it moves.&lt;/p&gt;

&lt;p&gt;When a package has chaotic releases, unclear versioning, or no visible update trail, it feels unstable. When it has structure, clear releases, and a proper rhythm, it feels alive and trustworthy.&lt;/p&gt;

&lt;p&gt;That matters because open source is not just consumed technically. It is judged operationally too.&lt;/p&gt;

&lt;p&gt;With snappycart, I learned that keeping the package, demo, docs, and release flow aligned is part of the product experience. It tells people this is not abandoned, not random, and not being held together with luck.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. I thought momentum would take care of itself
&lt;/h2&gt;

&lt;p&gt;It does not. Momentum is fragile.&lt;/p&gt;

&lt;p&gt;Even if good work is happening, the project can still look dead from the outside if there is no visible cadence. No updates. No release notes. No discussion. No signs that someone is actively steering the thing forward.&lt;/p&gt;

&lt;p&gt;That was a hard lesson because silence can erase progress very quickly.&lt;/p&gt;

&lt;p&gt;Open source needs continuity. Not fake hype, not constant noise, but rhythm. Small updates. Honest posts (like this one). Improvements people can actually see. A sense that the project is being maintained with intent rather than occasional bursts of energy.&lt;/p&gt;

&lt;p&gt;For me, this also reinforced that community is not built off one good week. It is built through repetition and over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. I started by thinking like an engineer instead of a maintainer
&lt;/h2&gt;

&lt;p&gt;This is probably the biggest lesson of the whole year.&lt;/p&gt;

&lt;p&gt;An engineer asks whether the package works.&lt;/p&gt;

&lt;p&gt;A maintainer has to ask much more than that.&lt;/p&gt;

&lt;p&gt;Can people understand it fast? Can they trust it? Can they integrate it without pain? Can they contribute without feeling lost? Can they tell whether the project is healthy? Can they see where it is going?&lt;/p&gt;

&lt;p&gt;That mindset changes everything.&lt;/p&gt;

&lt;p&gt;With snappycart, I started to see that the project was not just a code artifact. It was also a public product, a contributor surface, a QA playground, and a reflection of how I build things in the open. That means the responsibility is bigger than "it works on my machine" or even "the API is clean."&lt;/p&gt;

&lt;p&gt;Maintaining open source properly means owning the whole experience around the code, not just the code itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing thoughts
&lt;/h2&gt;

&lt;p&gt;A year of building snappycart in public taught me that open source is not just about writing useful software or having the complete remedy to all the devs' problems.&lt;/p&gt;

&lt;p&gt;It is about making that software understandable, visible, maintainable, and easy to trust. Trust is the part that so many top quality npm packages are missing even if they have an excellent testing suite in place.&lt;/p&gt;

&lt;p&gt;Some of the hardest lessons were not about engineering in the narrow sense. They were about discoverability, examples, contributor clarity, release discipline, documentation, and consistency. Those are the parts that often look secondary at the start, but in practice they shape whether a project actually grows.&lt;/p&gt;

&lt;p&gt;One thing I am glad I took seriously early was testing and quality. But even that fits the bigger pattern: the strongest open-source projects are not just built well. They are presented well, maintained well, and made easy for other people to believe in.&lt;/p&gt;

&lt;p&gt;That is the standard I want snappycart to keep growing into.&lt;/p&gt;

&lt;p&gt;If you are building in open source right now, my advice is simple: do not just ship the package. Build the path around it too.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruxhrzsf8kozw5zx2ip6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruxhrzsf8kozw5zx2ip6.png" alt="snappycart demo preview" width="800" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>npm</category>
      <category>typescript</category>
      <category>opensource</category>
      <category>react</category>
    </item>
    <item>
      <title>How to secure MCP tools on AWS for AI agents with authentication, authorization, and least privilege</title>
      <dc:creator>viola</dc:creator>
      <pubDate>Sun, 05 Apr 2026 01:28:10 +0000</pubDate>
      <link>https://forem.com/aws-builders/how-to-secure-mcp-tools-on-aws-for-ai-agents-with-authentication-authorization-and-least-privilege-50ea</link>
      <guid>https://forem.com/aws-builders/how-to-secure-mcp-tools-on-aws-for-ai-agents-with-authentication-authorization-and-least-privilege-50ea</guid>
      <description>&lt;p&gt;Model Context Protocol (or MCP) makes it easier for AI agents to access your existing backend capabilities. It allows AI agents to have access to your system's call services and to use tools such as Lambda functions. That convenience comes with a huge trade-off, a raised bar for security, because it demands a much stronger access model around those interactions. The problem is that once an agent can reach tools, you should be questioning who is calling what, on whose behalf, with which scope, through which boundary, and, most importantly, how to stop the whole thing from becoming an overprivileged mess and ruining the experience for real humans using your product.  &lt;/p&gt;

&lt;p&gt;The issue is clearly there and AWS is already building for this through Bedrock AgentCore Gateway and AgentCore Identity, while the MCP roadmap is moving in the same direction with enterprise-managed auth, audit trails, gateway patterns, and more fine-grained least-privilege scopes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuhwlps39l4vjsatzztr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuhwlps39l4vjsatzztr.png" alt=" " width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But authentication is no longer the main event, even though a lot of teams still treat it like it is. Authentication answers who got in, authorization answers what they can do, and least privilege answers how much damage is possible when things go sideways. And here it is very important to think in layers: in MCP-based agent systems, you usually need all three at multiple layers; inbound authentication to the agent or gateway, outbound authentication from the gateway to the tool, and policy decisions on whether a given tool call should be allowed at all. AWS's current guidance reflects that layered split. Moreover, their product model is literally built around those layers and that exact order. For instance, AgentCore Gateway supports inbound OAuth-based authorization for incoming tool calls and multiple outbound authorization modes depending on target type, including IAM-based auth with SigV4, OAuth client credentials, authorization code grants, and API keys. We will dive deeper into this later in this article.&lt;/p&gt;

&lt;h2&gt;
  
  
  So why does MCP change the way we should think about security?
&lt;/h2&gt;

&lt;p&gt;In large part, because it gives AI agents a standard way to reach tools, services, and execution paths that sit outside the model itself. Once that access exists, the problem stops being just about connectivity and starts becoming an access-control problem, since you need to know who is calling which tool, under what identity, with which scope, and across which boundary.&lt;/p&gt;

&lt;p&gt;That gets messy quickly when the same system has to support both user-delegated actions and machine-to-machine actions. Without a tight identity model, agents can end up with broad standing access, weak separation between human and non-human callers, and very little control over how requests move from one system to another.&lt;/p&gt;

&lt;p&gt;This is where the AWS model starts to make sense. Bedrock AgentCore Gateway gives you a controlled entry point between agents and tools, while AgentCore Identity adds a dedicated layer for identity and credential handling for agents and automated workloads. The direction of the MCP ecosystem also reflects that reality, with current roadmap priorities including enterprise-managed auth, audit trails, gateway patterns, and more fine-grained least-privilege scopes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The security model I'd use on AWS if I were setting it up today
&lt;/h2&gt;

&lt;p&gt;The cleanest way to secure MCP tool access on AWS is to stop treating it as one big authentication problem. In reality, it breaks down into four separate control layers: inbound authentication, outbound authentication, authorization, and infrastructure-level least privilege.&lt;/p&gt;

&lt;p&gt;That split matters because an MCP tool call is not a single trust decision. First, you need to control who is allowed to reach the agent-facing layer. Then you need a safe way for the gateway or runtime to authenticate downstream to the target tool. After that, you still need to decide whether the exact action should be allowed in context. Finally, the underlying AWS roles, scopes, and permissions need to stay narrow enough that a mistake in one layer does not turn into broad access everywhere else.&lt;/p&gt;

&lt;p&gt;This is the structure I find easiest to reason about while still keeping it close to how these systems behave in production on AWS:&lt;/p&gt;

&lt;p&gt;First, inbound authentication for the caller.&lt;br&gt;
Second, outbound authentication for downstream tool access.&lt;br&gt;
Third, authorization for the action itself.&lt;br&gt;
Fourth, least privilege for the infrastructure underneath it all, always.&lt;/p&gt;

&lt;p&gt;Breaking it down like that gives you a much cleaner outline of the problem before even starting the implementation.&lt;/p&gt;
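
&lt;p&gt;The four-layer order can be sketched as a decision pipeline. This is a minimal sketch of the check order only; every type and flag name is illustrative, not an AWS API:&lt;/p&gt;

```typescript
// Each layer is a separate trust decision; failing any one denies the call.
// Layer 4 (least privilege) is not a runtime check: it is how narrowly the
// IAM roles and scopes behind these flags were provisioned in the first place.
type ToolCall = {
  inboundTokenValid: boolean;     // layer 1: caller authenticated at the gateway
  outboundCredsResolved: boolean; // layer 2: gateway can authenticate to the tool
  actionPermitted: boolean;       // layer 3: policy allows this action in context
};

function decide(call: ToolCall): "allow" | "deny" {
  if (!call.inboundTokenValid) return "deny";
  if (!call.outboundCredsResolved) return "deny";
  if (!call.actionPermitted) return "deny";
  return "allow";
}
```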

&lt;h2&gt;
  
  
  Number one: Inbound authentication to the agent-facing layer
&lt;/h2&gt;

&lt;p&gt;The first control point is the agent-facing layer itself. Before an agent can reach a tool, you need to decide who is allowed to invoke the gateway or runtime in the first place.&lt;/p&gt;

&lt;p&gt;On AWS, AgentCore Gateway follows the MCP authorization model for inbound requests and can validate incoming calls against an OAuth provider such as Amazon Cognito, Okta, Auth0, or another compatible provider. That gives you a clear front-door identity check before any downstream tool access happens. AWS also supports different inbound flows depending on the caller, including authorization code flow for user-delegated access and client credentials for service-to-service access, with the ability to restrict access by approved client IDs and audiences.&lt;/p&gt;

&lt;p&gt;Distinguishing between types of calls matters for a very good reason: different MCP calls represent different kinds of trust. Some calls may come from a signed-in user acting through an application, while others may come from a background service, automated workload, or non-human agent. In production this can get out of hand quickly, to the point where fixing it requires more resources than setting it up correctly from the start. That is why these different types of calls must never be treated as equivalent: not only do they not carry the same identity context, they also carry completely different levels of user intent.&lt;/p&gt;

&lt;p&gt;This is where it starts to snowball into a problem, because when every inbound caller is simply labelled as authenticated, the distinction between a human user session and a machine credential vanishes. A safer model keeps the concerns separated from the beginning: user-delegated access goes through the authorization code flow with PKCE, while machine-to-machine access uses client credentials.&lt;/p&gt;
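
&lt;p&gt;One simple way to preserve that distinction downstream is to classify callers from the token claims. The claim shape below loosely follows common Cognito access tokens (where a &lt;code&gt;username&lt;/code&gt; claim is present only on user tokens), but treat the exact claim names as an assumption to verify against your identity provider:&lt;/p&gt;

```typescript
// Hypothetical claim shape; verify exact claim names against your provider.
type AccessTokenClaims = {
  client_id: string;
  scope: string;
  username?: string; // present for user-delegated (authorization code + PKCE) tokens
};

function callerType(claims: AccessTokenClaims): "user-delegated" | "machine-to-machine" {
  // Client-credentials tokens represent a workload, not a person:
  // there is no end-user identity to delegate from.
  return claims.username ? "user-delegated" : "machine-to-machine";
}
```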

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpki49tbptl66n1h8sqgw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpki49tbptl66n1h8sqgw.png" alt=" " width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Number two: Outbound authentication from the gateway to the tool
&lt;/h2&gt;

&lt;p&gt;This is usually the point where things start getting messy. A team does a decent job on inbound authentication, then quietly lets the gateway or agent call downstream tools with whatever credentials happen to work. That might get the system running, but it is not much of a security model.&lt;/p&gt;

&lt;p&gt;Outbound authentication needs to be treated as its own control layer. Once the gateway, runtime, or agent starts talking to tools, APIs, or MCP servers, it still needs a clear and deliberate way to prove its identity to those downstream targets.&lt;/p&gt;

&lt;p&gt;AWS separates that part properly. Depending on the target, AgentCore Gateway can use IAM-based authorization with a service role and SigV4, OAuth flows, or API keys. For MCP server targets, OAuth client credentials are supported as well. That matters because not every downstream target should be trusted in the same way, and not every tool should accept the same type of credential.&lt;/p&gt;

&lt;p&gt;This is also where AgentCore Identity starts to become genuinely useful. Instead of scattering tokens, secrets, and auth logic across runtimes, tools, and bits of glue code, you can centralize that machinery in a service designed for agent identity and credential handling. That is a much cleaner setup, especially once the number of tools starts growing.&lt;/p&gt;

&lt;p&gt;The main thing I would avoid here is treating downstream access as an implementation detail. It is not. If the gateway or runtime can call tools, then the way it authenticates to those tools needs to be deliberate, narrow, and easy to reason about.&lt;/p&gt;

&lt;h2&gt;
  
  
  Number three: Authorization inside the application path
&lt;/h2&gt;

&lt;p&gt;Being authenticated does not automatically mean being allowed to use every tool or perform every action. That sounds obvious, but this is exactly where systems start to drift. A token is valid, the request gets through, and before long the system is treating "known caller" as if it means "allowed to do whatever comes next."&lt;/p&gt;

&lt;p&gt;That is why I would treat authorization as its own layer. Once the caller is authenticated and the gateway can reach the tool, the system still needs to decide whether the specific action should be allowed in that context.&lt;/p&gt;

&lt;p&gt;Cognito gives you a good starting point for broad API permissions through resource servers and custom scopes. That works well when you want to express coarse-grained capabilities such as &lt;code&gt;billing.read&lt;/code&gt;, &lt;code&gt;orders.update&lt;/code&gt;, or &lt;code&gt;reports.export&lt;/code&gt;. It gives you a cleaner way to separate what a token can generally do from who the caller is.&lt;/p&gt;
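&lt;p&gt;A coarse scope check is small enough to sketch directly. This assumes a space-delimited &lt;code&gt;scope&lt;/code&gt; claim, which is how OAuth access tokens (including Cognito's) typically carry scopes; the scope names are the illustrative ones from above.&lt;/p&gt;

```python
def has_scope(token_claims: dict, required: str) -> bool:
    """Coarse-grained check: is the required scope present in the token?

    Assumes a space-delimited "scope" claim, as in OAuth 2.0 access tokens.
    This says what the token can broadly do, not whether this specific
    caller may touch this specific resource.
    """
    granted = set(token_claims.get("scope", "").split())
    return required in granted
```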

&lt;p&gt;But scopes only get you so far. The moment the decision depends on tenant membership, resource ownership, role, environment, or some other piece of context, you are no longer dealing with simple scope checks. You are in fine-grained authorization territory.&lt;/p&gt;

&lt;p&gt;That is where something like Amazon Verified Permissions starts to make more sense. Instead of burying authorization logic across handlers, services, and bits of application code, you can move those decisions into a more explicit policy layer. That tends to be easier to reason about and much easier to change later without creating a mess.&lt;/p&gt;
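&lt;p&gt;To show why that shifts you beyond scopes, here is a toy in-process stand-in for a policy layer. The tenant and role attributes are invented for illustration; a real system would express these rules as Cedar policies and ask Verified Permissions for the decision instead of hand-rolling them.&lt;/p&gt;

```python
from dataclasses import dataclass


@dataclass
class AccessRequest:
    principal_tenant: str
    principal_role: str
    action: str
    resource_tenant: str


def is_allowed(req: AccessRequest) -> bool:
    """Toy context-dependent decision: same tenant, and only admins delete.

    A stand-in for a policy engine; the attributes are illustrative.
    """
    if req.principal_tenant != req.resource_tenant:
        return False  # never cross tenant boundaries
    if req.action == "delete":
        return req.principal_role == "admin"
    return True  # reads/updates allowed within the caller's own tenant
```

&lt;p&gt;Even this toy version makes the point: the decision depends on who the principal is, what they are touching, and in what context, none of which fits into a scope string.&lt;/p&gt;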

&lt;h3&gt;
  
  
  The split I would use reflects that:
&lt;/h3&gt;

&lt;p&gt;Authentication establishes identity, OAuth scopes establish broad capabilities, and policy checks make the final call on whether the exact action should be allowed in context. That separation is much healthier than trying to force every decision into token validation or scope checks alone, because not everything can be solved with a token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frc12qams86ydg2rovqtq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frc12qams86ydg2rovqtq.png" alt=" " width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Number four: Least privilege for the infrastructure layer
&lt;/h2&gt;

&lt;p&gt;Even if the identity and authorization layers look good on paper, the system is still weak if the underlying roles and permissions are too broad.&lt;/p&gt;

&lt;p&gt;This is the part teams often underestimate. They spend time on tokens, OAuth flows, and gateway design, then quietly give the runtime or supporting services far more access than they actually need. At that point, the front door may look secure, but the blast radius behind it is still too large.&lt;/p&gt;

&lt;p&gt;Least privilege matters here because MCP-based systems usually involve several moving parts: the gateway, runtimes, identity services, tokens, and the downstream AWS APIs or tools those components need to reach. If one of those layers is over-scoped, the whole system becomes easier to abuse.&lt;/p&gt;

&lt;p&gt;AWS's own AgentCore examples point in a much healthier direction. In the FinOps agent architecture, the gateway uses IAM authentication to call runtimes, AgentCore Identity handles the OAuth credential lifecycle, and Cognito client credentials are used so the gateway can obtain tokens for MCP runtimes. The runtime roles themselves are then scoped to the AWS APIs they actually need, such as billing or pricing. That is a much better model than handing the entire agent stack one broad role and hoping the application layer keeps everything under control.&lt;/p&gt;

&lt;p&gt;In practice, least privilege here means a few simple things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the gateway role should only have permission to invoke the specific targets it actually needs&lt;/li&gt;
&lt;li&gt;MCP runtimes should only have access to the AWS APIs required for their own domain&lt;/li&gt;
&lt;li&gt;tokens and scopes should stay as narrow as they can while still allowing the system to work&lt;/li&gt;
&lt;/ul&gt;
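&lt;p&gt;The first of those points can be illustrated with a generic IAM policy shape. This example uses a Lambda-backed tool because the action name is widely known; the function name and account ID are made up, and the exact actions for AgentCore gateway targets will differ.&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "InvokeOnlyTheBillingTool",
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:billing-tool"
    }
  ]
}
```

&lt;p&gt;The shape that matters is a single named action against a single named resource, rather than a wildcard that quietly grants the gateway the whole account.&lt;/p&gt;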

&lt;p&gt;The same applies to flow selection. If you use the wrong OAuth flow for the wrong kind of caller, the access model gets messy very quickly. Keeping human and non-human access paths separate from the start makes it much easier to keep scopes, roles, and permissions under control later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Cognito actually fits in this design
&lt;/h2&gt;

&lt;p&gt;Cognito is useful in this design, but it is not the whole story.&lt;/p&gt;

&lt;p&gt;It fits well at the front of the system when you want an OAuth provider for inbound gateway authorization, especially if you want familiar OAuth flows, JWT validation, app-client control, and support for machine-to-machine access through client credentials. That makes it a practical option when the gateway, agent, or MCP runtime needs token-based identity rather than a human session.&lt;/p&gt;

&lt;p&gt;Where people get confused is assuming Cognito solves the whole access model on its own. It does not. It can help establish identity and broad access boundaries, but it does not automatically solve downstream tool authentication, cross-system credential handling, or fine-grained authorization decisions.&lt;/p&gt;

&lt;p&gt;That is why I see Cognito as one building block in the overall design, not the design itself. It works well for inbound identity and token issuance, but once you move into downstream credentials, agent-to-tool access, or context-heavy policy decisions, you need other layers around it. That is where services like AgentCore Identity and a proper authorization layer start to matter much more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Don't ignore private connectivity
&lt;/h2&gt;

&lt;p&gt;If your gateway or tool layer is reachable over the public internet by default, you are increasing exposure before you even get to identity or policy. That does not automatically make the design wrong, but it does mean you need to be much more deliberate about where your control paths actually live.&lt;/p&gt;

&lt;p&gt;Private connectivity is not the whole security model, but it should still be part of it. Once you have agents, gateways, policy services, and downstream tools talking to each other, it makes sense to keep the more sensitive service-to-service paths inside private network boundaries where you can. That reduces unnecessary exposure and gives you a cleaner production shape around the parts of the system that matter most.&lt;/p&gt;

&lt;p&gt;It is also worth being clear about what private connectivity does and does not solve. It does not replace authentication, authorization, or least privilege. It does not make a bad access model good. What it does do is reduce the attack surface and make those identity and policy controls easier to enforce within tighter boundaries.&lt;/p&gt;

&lt;p&gt;So I would treat private connectivity as a supporting layer in the design: not the main event, but definitely not something to bolt on at the very end either.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8wiomkgdxcj01kfcf1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8wiomkgdxcj01kfcf1w.png" alt=" " width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  A practical reference architecture
&lt;/h2&gt;

&lt;p&gt;If I were putting this together for a real team, the shape would be fairly straightforward.&lt;/p&gt;

&lt;p&gt;A user signs into the application through Cognito or another OIDC provider, and the application calls the agent-facing layer with a token that matches that user journey. AgentCore Gateway then validates the inbound token and checks that the client and audience are actually allowed. From there, downstream tool calls use the auth mode that makes sense for the target, whether that is IAM SigV4 for AWS-native targets or OAuth client credentials for MCP runtimes and APIs.&lt;/p&gt;
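&lt;p&gt;The inbound validation step can be sketched as claim checks. This decodes the payload only and deliberately skips signature verification, which a real gateway must do against the issuer's JWKS before trusting any claim; the issuer and audience values are placeholders.&lt;/p&gt;

```python
import base64
import json


def jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT WITHOUT verifying it.

    For inspection only: a real gateway must first verify the signature
    against the issuer's published JWKS.
    """
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))


def audience_ok(claims: dict, issuer: str, audience: str) -> bool:
    """Check that iss and aud match what this gateway expects."""
    aud = claims.get("aud", [])
    auds = [aud] if isinstance(aud, str) else aud
    return claims.get("iss") == issuer and audience in auds
```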

&lt;p&gt;AgentCore Identity handles the OAuth client setup and token retrieval so that each component does not have to manage its own secrets and token logic. On top of that, the application or policy layer still needs to decide whether a sensitive action should be allowed, ideally using narrow scopes and more fine-grained rules where the context matters. Underneath all of that, the IAM roles for the gateway and runtimes should stay tightly scoped to their real responsibilities.&lt;/p&gt;

&lt;p&gt;That is much closer to a production-grade setup than the very loose model where an agent has a token and can just start calling things.&lt;/p&gt;

&lt;h2&gt;
  
  
  The mistakes I would actively avoid
&lt;/h2&gt;

&lt;p&gt;The first mistake is using one broad machine credential for every tool call. Inbound access and outbound access are not the same thing, and different targets should not automatically inherit the same trust model. Once one credential starts working for everything, the system gets convenient very quickly and safe very slowly.&lt;/p&gt;

&lt;p&gt;The second mistake is mixing user-delegated access with autonomous machine access without being explicit about the difference. Authorization code and client credentials exist for different reasons. If you blur those paths together, it becomes much harder to tell who actually authorized the action and what kind of trust you are relying on.&lt;/p&gt;

&lt;p&gt;The third mistake is assuming JWT validation equals authorization. It does not. Token validation tells you the token is valid. It does not tell you whether the caller should be allowed to perform a specific action on a specific tool or resource in the current context. That gap is where a lot of bad access decisions get hidden.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qimb2clzeo4sgvgn2x8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qimb2clzeo4sgvgn2x8.png" alt=" " width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The fourth mistake is relying on static secrets when a managed OAuth or IAM-based pattern is available. Static credentials tend to spread, linger, and get reused in places they should not. The more agent and tool integrations you add, the worse that gets.&lt;/p&gt;

&lt;p&gt;The fifth mistake is treating networking as irrelevant. Identity and authorization matter more, but that does not mean network boundaries stop mattering. If you can keep sensitive control paths private, you should. It is a sensible extra layer, especially once you have multiple services making sensitive calls to each other.&lt;/p&gt;

&lt;p&gt;The common thread across all of these mistakes is pretty simple: they make the system easier to build in the short term, but much harder to trust later.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The dangerous version of an AI agent is not the one that can call tools. It is the one that can call tools with vague identity, broad standing access, and no policy boundary.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Final take
&lt;/h2&gt;

&lt;p&gt;Once AI agents can reach tools, the security model has to get more serious. At that point, this stops being just an integration problem and becomes an access-control problem across identity, credentials, authorization, and least privilege.&lt;/p&gt;

&lt;p&gt;AWS already gives you the right building blocks to design for that more deliberately: Cognito for inbound identity, AgentCore Gateway for controlled MCP tool access, AgentCore Identity for agent credential handling, IAM for scoped AWS permissions, and Verified Permissions when broader token scopes are no longer enough.&lt;/p&gt;

&lt;p&gt;The main thing is not to collapse all of that into one vague idea of "auth." Authenticate every boundary, authorize every sensitive action, and keep every identity and permission narrower than feels convenient. That is the difference between an agent system you can trust and one that only looks tidy until something goes wrong.&lt;/p&gt;

&lt;p&gt;If you're working on securing agent tool access on AWS, I'd be curious to hear how you're handling inbound auth, downstream credentials, and policy checks in practice.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>mcp</category>
      <category>ai</category>
      <category>security</category>
    </item>
  </channel>
</rss>
