<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Allen Helton</title>
    <description>The latest articles on Forem by Allen Helton (@allenheltondev).</description>
    <link>https://forem.com/allenheltondev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F153157%2Fd03699fc-2a88-4555-97ee-f2c325294133.jpg</url>
      <title>Forem: Allen Helton</title>
      <link>https://forem.com/allenheltondev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/allenheltondev"/>
    <language>en</language>
    <item>
      <title>Your AI agents are a security nightmare</title>
      <dc:creator>Allen Helton</dc:creator>
      <pubDate>Fri, 27 Mar 2026 15:04:48 +0000</pubDate>
      <link>https://forem.com/allenheltondev/your-ai-agents-are-a-security-nightmare-4omp</link>
      <guid>https://forem.com/allenheltondev/your-ai-agents-are-a-security-nightmare-4omp</guid>
      <description>&lt;p&gt;I've noticed something about the tech industry over the past couple of months. Through the use of agentic code editors, developers are building small, custom solutions to problems they're having. Recently, there's been a surge in developers &lt;a href="https://www.readysetcloud.io/newsletter/206/" rel="noopener noreferrer"&gt;writing their own content platforms&lt;/a&gt;, abandoning bigger platforms like Hashnode and Medium. While I think this is genuinely amazing, we're creating a gap that we have to address sooner rather than later.&lt;/p&gt;

&lt;p&gt;That gap, of course, is maintenance. Even though AI coding is genuinely good now, new development is only a fraction of the software lifecycle. What happens when you get your app to prod and it suddenly breaks?&lt;/p&gt;

&lt;p&gt;Do you know the codebase like you did back when everything was hand coded? Would you even know where to begin troubleshooting if your API started returning a bunch of 500s?&lt;/p&gt;

&lt;p&gt;Let's assume the answer is yes to both of those. Now let me ask the hard-hitting one: &lt;em&gt;do you have time to troubleshoot and fix issues?&lt;/em&gt; This is a side project, not your day job. Many of us don't have the luxury of dropping the “real work” tasks for a few hours while we figure out what's going on.&lt;/p&gt;

&lt;p&gt;I'm no exception. I've been building &lt;a href="https://github.com/allenheltondev/community-garden" rel="noopener noreferrer"&gt;Good Roots Network&lt;/a&gt;, an app that connects local gardeners with people in their communities who need food. Growers list produce, gatherers like social workers and non-profits request it, and the system handles the coordination from availability through pickup. It's API-driven from the ground up. State machines, role-based access, real-time inventory transitions. Real production infrastructure running in my personal AWS account.&lt;/p&gt;

&lt;p&gt;There's a lot of surface area to cover for a solo project. And when something goes wrong, I don't have the bandwidth to dig in and figure out how to get it back on its feet - which means I'm potentially blocking people from getting access to fresh produce for themselves or others.&lt;/p&gt;

&lt;p&gt;So I built an agent to be on call.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Before I continue, thank you to &lt;a href="https://fandf.co/4bBLhpV" rel="noopener noreferrer"&gt;Teleport&lt;/a&gt; for sponsoring this post. All opinions are my own.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Building an agent that runs locally&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/allenheltondev/oncall-agent" rel="noopener noreferrer"&gt;oncall-agent&lt;/a&gt; is a local, persistent process that automatically responds to production incidents. It subscribes to a Momento topic for real-time notifications and when something fires, it gets to work on its own. No need to wait for me to react.&lt;/p&gt;

&lt;p&gt;The agent has an internal state machine: receive the incident signal, authenticate against AWS, investigate, then post a summary to Slack. During investigation, the agent loop runs on Amazon Bedrock, using the AWS CLI to query CloudWatch logs and metrics, inspect Lambda functions and DynamoDB tables. It also has access to my source code and deployments via a GitHub app to diagnose root cause and open a PR with the recommended code change.&lt;/p&gt;
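&lt;p&gt;That flow can be sketched as a tiny state machine. The state names and injected actions below are my own illustration, not code lifted from the oncall-agent repo:&lt;/p&gt;

```javascript
// Hypothetical sketch of the agent's four-step state machine. The
// "actions" are injected so the flow can be exercised without touching
// AWS, Bedrock, or Slack.
const STATES = ['RECEIVED', 'AUTHENTICATED', 'INVESTIGATED', 'REPORTED'];

function handleIncident(incident, actions) {
  let state = STATES[0];
  const trail = [state];
  for (let i = 1; i !== STATES.length; i += 1) {
    const next = STATES[i];
    if (next === 'AUTHENTICATED') actions.authenticate(incident);
    if (next === 'INVESTIGATED') incident.findings = actions.investigate(incident);
    if (next === 'REPORTED') actions.report(incident);
    state = next;
    trail.push(state);
  }
  return trail; // every incident walks the same ordered path
}
```

&lt;p&gt;Keeping each step behind an injected action is also what makes it easy to swap the auth step later (for example, moving from a manual MFA approval to Machine ID) without touching the rest of the loop.&lt;/p&gt;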

&lt;p&gt;I run it locally (rather than as a deployed service) for a couple of reasons. First, I specifically want it to make code changes if they're needed, and pulling the source into a Lambda function in my account doesn't seem like a great use of resources. Second, and more importantly, it touches production infrastructure directly. I want full control over the process, and I want the blast radius of any security issue limited to my machine, not a cloud-hosted process that's running 24/7.&lt;/p&gt;

&lt;p&gt;By the time I check Slack in the morning, the investigation and proposed solution are done and awaiting my approval.&lt;/p&gt;

&lt;h2&gt;Keeping the agent secure for a change&lt;/h2&gt;

&lt;p&gt;It's tempting to just open up the doors on your machine and give the agent everything it needs to be successful (and then some). But as we've seen throughout 2026, &lt;a href="https://www.businessinsider.com/meta-ai-alignment-director-openclaw-email-deletion-2026-2" rel="noopener noreferrer"&gt;doing so is a security nightmare&lt;/a&gt;. I don't want to be another statistic who let an agent wreak havoc on my system because it was easy.&lt;/p&gt;

&lt;p&gt;When it came time to give the agent AWS access, my first instinct was to put an access key in the &lt;code&gt;.env&lt;/code&gt; file. Long-lived credentials. It's the familiar pattern, it takes 30 seconds, and it works. But the more I sat with that approach, the more it felt like I was solving the wrong problem.&lt;/p&gt;

&lt;p&gt;My issue with that approach was that the agent didn't really exist as an &lt;em&gt;identity&lt;/em&gt; in my system. It was just a privileged background process acting on my behalf with standing access.&lt;/p&gt;

&lt;p&gt;So, I reconsidered the agent as a first-class principal. If it was going to investigate production issues autonomously, it needed to prove who it was at runtime, operate within a scoped session, and leave behind a traceable chain of actions.&lt;/p&gt;

&lt;p&gt;Once I started thinking about the design this way, long-lived keys stopped making sense entirely. 👎&lt;/p&gt;

&lt;h3&gt;Identity is not the same thing as access&lt;/h3&gt;

&lt;p&gt;With this in mind, I needed something more than a vault. I needed a way for the agent to establish its identity at runtime.&lt;/p&gt;

&lt;p&gt;That's where &lt;a href="https://fandf.co/3NOF8Ob" rel="noopener noreferrer"&gt;Teleport&lt;/a&gt; fits into the design. Instead of access tokens being defined in a config file, the agent starts every investigation by authenticating into a new session. From there, it can request scoped AWS access and perform specific actions with clear attribution.&lt;/p&gt;

&lt;p&gt;What I can now see in CloudTrail is a stable GUID that uniquely identifies the agent as its own principal. I can filter to that identity and see everything the agent has ever touched.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80g0lcsissbvi64q7xuh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80g0lcsissbvi64q7xuh.jpg" alt="Amazon CloudTrail logs showing a unique identifier for actions taken" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If something unexpected shows up, I know immediately it was the agent, and not a human or another process. That's already a huge improvement over a shared access key where attribution is basically impossible.&lt;/p&gt;

&lt;p&gt;Now, I have a legitimate session-bound identity operating on my system. Something that only has the permissions I gave it, but more importantly, is independently auditable.&lt;/p&gt;

&lt;p&gt;When the agent runs &lt;code&gt;tsh login&lt;/code&gt;, I set it up so I have to approve the MFA challenge myself. Fully autonomous credential rotation via &lt;a href="https://fandf.co/40GRJ92" rel="noopener noreferrer"&gt;Teleport Machine ID&lt;/a&gt; is available–and it's the natural next step–but I'm not there yet. I want a few more practice reps with the agent before I remove myself from the auth flow entirely. &lt;a href="https://www.readysetcloud.io/blog/allen.helton/trust-will-make-or-break-ai-agents/" rel="noopener noreferrer"&gt;Trust has to be earned&lt;/a&gt;, and I'd rather manually approve an MFA prompt than wake up to an agent that went sideways with no human checkpoint anywhere in the chain.&lt;/p&gt;

&lt;p&gt;Agent security posture is something I'm really homing in on. I'm changing my mental approach from "does this agent have the right key?" to "was this agent authorized to do this specific thing, in this specific context, at this specific moment?" It might feel like a subtle difference, but it's how we need to be thinking before we give agents the keys to our production systems. And a static credential sitting in an env file isn't the way to answer it.&lt;/p&gt;

&lt;h2&gt;This isn't a side project problem&lt;/h2&gt;

&lt;p&gt;The most common pattern I see these days is the same one I almost fell into: drop a long-lived token into the environment, grant it the permissions it needs, and ship it. Even if you do it as a “temporary stopgap” and swear you'll come back to fix it, &lt;a href="https://www.linkedin.com/posts/allenheltondev_weve-all-shipped-a-temporary-fix-we-planned-activity-7432825143221108736-ag0U?utm_source=share&amp;amp;utm_medium=member_desktop&amp;amp;rcm=ACoAAArWvkYBD1-hWpf7w_0jiGdn8x3RQeihlmc" rel="noopener noreferrer"&gt;you won't&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It always feels like it's fine until something unexpected happens. The problem with unexpected behavior in autonomous systems is that you need &lt;em&gt;more&lt;/em&gt; audit resolution, not less. Imagine an agent made unusual calls to your production services. Was it a bug? A prompt injection? A legitimate investigation path you just didn't anticipate? With static credentials you can't tell. The trail goes cold at "this IAM principal made these calls."&lt;/p&gt;

&lt;p&gt;Agentic AI is exposing some serious issues with the access model many of us are running. IAM was originally designed around humans doing episodic work. Even automated pipelines are episodic in that they run, finish, and stop. But that doesn't map cleanly to processes that persist indefinitely, reason autonomously, and act faster than the blink of an eye.&lt;/p&gt;

&lt;p&gt;When I think about what agents need to operate safely, I land on a few things. They need a cryptographic identity of their own, meaning something that can be uniquely identified (much like an individual developer). They need access that's scoped and enforced at runtime, not granted once like an API key and left open. They need some way to detect when they're behaving outside expected parameters, or when the operating context has been tampered with. And if they're coordinating with other agents or MCP servers, those connections need to be governed, too (you can't have a secured agent talking to an unsecured tool).&lt;/p&gt;

&lt;p&gt;These are all things Teleport is building with their &lt;a href="https://fandf.co/4lHOZSu" rel="noopener noreferrer"&gt;Agentic Identity Framework&lt;/a&gt;. My agent only scratches the surface of all this with its short-lived credentials and isolated identity. But starting there forces you to think about the agent as a principal that needs its own security model, which is the mental shift we need heading into mid-2026.&lt;/p&gt;

&lt;h2&gt;This probably sounds a little familiar&lt;/h2&gt;

&lt;p&gt;We've all seen this play out before in different forms. When containerization became mainstream, teams that had good practices with VM security often skipped the basics on containers because it felt like a "smaller" problem. When Lambda came out, teams that carefully managed server permissions dropped service roles for wildcard policies because it was a new execution model and they were moving fast (hopefully I'm not the only one who did this 😬).&lt;/p&gt;

&lt;p&gt;Agentic AI is the same thing happening again. A new execution model, a lot of excitement, and a temptation to skip the identity and access fundamentals–because it feels like a side project, a POC, or just something you're trying out.&lt;/p&gt;

&lt;p&gt;The lesson is the same every time: the important components in production are still important even if the thing running is new, or small, or experimental. An agent with standing access and no attribution is a liability, whether it's running inside a Fortune 500 or on your laptop handling incidents for a community gardening app.&lt;/p&gt;

&lt;p&gt;The oncall-agent repo is &lt;a href="https://github.com/allenheltondev/oncall-agent" rel="noopener noreferrer"&gt;open source&lt;/a&gt;. The Teleport integration is in the state machine if you want to see how it fits together in practice. And if you're experimenting with agents at all, it's worth triple-checking how they establish identity and access production systems. Move a small workflow to session-bound credentials to help you make better architectural decisions and make your automations easier to trust.&lt;/p&gt;

&lt;p&gt;Tools like &lt;a href="https://fandf.co/4bBLhpV" rel="noopener noreferrer"&gt;Teleport&lt;/a&gt; make that change practical without completely reworking your stack. It's a small change in implementation, but a big change in how much autonomy you're willing to give your systems.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>security</category>
    </item>
    <item>
      <title>When serving images from S3 stopped being good enough</title>
      <dc:creator>Allen Helton</dc:creator>
      <pubDate>Wed, 07 Jan 2026 15:12:39 +0000</pubDate>
      <link>https://forem.com/aws-heroes/when-serving-images-from-s3-stopped-being-good-enough-233f</link>
      <guid>https://forem.com/aws-heroes/when-serving-images-from-s3-stopped-being-good-enough-233f</guid>
      <description>&lt;p&gt;I published my first blog post 7 years ago. I wrote on &lt;a href="https://medium.com/@allenheltondev" rel="noopener noreferrer"&gt;Medium&lt;/a&gt; for about a year before I built &lt;a href="https://readysetcloud.io" rel="noopener noreferrer"&gt;Ready, Set, Cloud&lt;/a&gt;. For most of its life, the site hasn't had much of a facelift or performance updates. It's primarily served as a home for my writing, newsletter, and podcast.&lt;/p&gt;

&lt;p&gt;I've been running this site for six years. I'm usually the kind of person who can't leave things alone for long, yet this has been largely unchanged for years. It worked. Until it didn't.&lt;/p&gt;

&lt;p&gt;When I finally took a look at site performance, something was painfully obvious. Images. &lt;a href="https://pagespeed.web.dev/" rel="noopener noreferrer"&gt;PageSpeed Insights&lt;/a&gt; showed that large, unoptimized images were dominating load time. A single full-size file per image served directly from S3, no format negotiation, and no caching were all starting to add up.&lt;/p&gt;

&lt;p&gt;That setup wasn't wrong when I built it. It was a perfectly reasonable tradeoff at the time. It was simple, low maintenance, and honestly, &lt;em&gt;good enough&lt;/em&gt;. The site had a couple dozen readers, and small-scale simplicity won. But as the site grew, both in content and audience, performance expectations changed. Serving raw images straight from S3 no longer cut it. The site outgrew its initial build.&lt;/p&gt;

&lt;h2&gt;What I needed from image delivery&lt;/h2&gt;

&lt;p&gt;The knee-jerk reaction is to jump into S3 and manually optimize a bunch of images. That fixes a symptom, but it doesn't solve the underlying problem. I needed to step back and rethink what image delivery should look like for a content-heavy site serving tens of thousands of monthly visitors.&lt;/p&gt;

&lt;p&gt;Manually resizing images, converting them to web-friendly formats, and deciding which size to upload per post was never going to scale. More importantly, I didn't want to change my workflow. I wanted to continue to write, publish, and move on without adding in layers of performance checking.&lt;/p&gt;

&lt;p&gt;A &lt;em&gt;good&lt;/em&gt; system should make that possible. It should optimize images automatically in the background. It should serve modern formats like &lt;a href="https://developers.google.com/speed/webp" rel="noopener noreferrer"&gt;WebP&lt;/a&gt; when the browser supports them. It should provide multiple image sizes so mobile devices aren't downloading desktop-sized assets. And it should be aggressively cacheable, because image delivery is a solved problem, and CDNs already do this exceptionally well.&lt;/p&gt;

&lt;p&gt;Thinking about these requirements, I realized I wasn't really looking to build a new image optimization tool (like I would have done in the past). Instead, I was updating the site's image handling capabilities so these decisions were made once, centrally, in an industry-standard way.&lt;/p&gt;

&lt;h2&gt;Fitting a CDN into the workflow&lt;/h2&gt;

&lt;p&gt;I upload images to Ready, Set, Cloud using a small JavaScript script that lives in the repo. It takes a file prefix, scans a local folder for matching files, and uploads them directly to S3. There's no web UI and no automation that tries to interpret content. It's simple by design, and I wanted to keep it that way.&lt;/p&gt;

&lt;p&gt;That simplicity turns out to be a feature. Because images already flow through S3, the system can react to uploads without changing the workflow at all.&lt;/p&gt;

&lt;p&gt;Image uploads automatically trigger a Rust Lambda function (&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventBridge.html" rel="noopener noreferrer"&gt;through EventBridge&lt;/a&gt;) that converts the original image to WebP and generates a handful of standard sizes. Those optimized versions are written back to S3 alongside the original.&lt;/p&gt;
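&lt;p&gt;The naming scheme for those derived objects might look something like this. The real Lambda is written in Rust; the widths and the suffix convention here are my assumptions, chosen to match the &lt;code&gt;srcset&lt;/code&gt; example later in the post:&lt;/p&gt;

```javascript
// Assumed naming scheme for the optimized variants written back to S3.
// The actual Rust function may differ; this only shows the key layout.
const WIDTHS = [480, 960, 1600];

function derivedKeys(originalKey) {
  const dot = originalKey.lastIndexOf('.');
  const base = dot === -1 ? originalKey : originalKey.slice(0, dot);
  // One resized WebP per width, plus a full-size WebP beside the original.
  const keys = WIDTHS.map(function (w) { return base + '-' + w + '.webp'; });
  keys.push(base + '.webp');
  return keys;
}
```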

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftply36ibqknjhipcvlmz.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftply36ibqknjhipcvlmz.webp" alt="Small architecture diagram showing the new workflow" width="718" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This does a few important things. First, it makes image optimization deterministic - meaning every image goes through the same process every time. Second, it keeps work off the critical path. All of this happens asynchronously and doesn't interfere with the rest of the publishing workflow like &lt;a href="https://www.readysetcloud.io/blog/allen.helton/how-i-built-a-serverless-automation-to-cross-post-my-blogs"&gt;cross-posting&lt;/a&gt;, &lt;a href="https://www.readysetcloud.io/blog/allen.helton/blog-level-up-writer-analytics-and-text-to-speech"&gt;writing analytics&lt;/a&gt;, or &lt;a href="https://www.readysetcloud.io/blog/allen.helton/serverless-post-scheduler-for-static-sites"&gt;scheduling&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Finally, I added a CloudFront distribution in front of the S3 bucket. That keeps delivery fast and predictable by serving cached assets from points of presence around the world. Because it points at the existing bucket, there's no data migration involved. Image delivery improves without changing where anything lives.&lt;/p&gt;

&lt;h2&gt;Serving the right format without changing URLs&lt;/h2&gt;

&lt;p&gt;Once image processing was handled, delivery became the next concern. All optimized assets were already in S3, but I wasn't interested in changing 2,000+ image links across existing content just to take advantage of them.&lt;/p&gt;

&lt;p&gt;The answer was to push that behavior into the CDN. By updating the distribution to look for WebP support in the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Accept" rel="noopener noreferrer"&gt;&lt;code&gt;Accept&lt;/code&gt; header&lt;/a&gt;, requests for images could be routed to the optimized versions without changing a single URL. Modern browsers already advertise their supported formats, so content negotiation becomes a routing concern rather than a frontend one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ozogejfrmitgmkptahr.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ozogejfrmitgmkptahr.webp" alt="Small architecture diagram showing how the Accept header works with CloudFront functions" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A CloudFront function runs on the &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-cloudfront-trigger-events.html" rel="noopener noreferrer"&gt;viewer request event&lt;/a&gt; and rewrites the request path whenever the browser advertises WebP support. It doesn't check whether the WebP file exists - that's resolved later when CloudFront fetches from the origin (the S3 bucket). The important part is that the rewrite happens consistently at the edge, and the resulting response is cached, so subsequent requests follow the same path.&lt;/p&gt;
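&lt;p&gt;A reconstruction of that viewer-request logic might look like the function below. This is not the author's actual code; the extension list and rewrite rule are my assumptions. CloudFront Functions run a restricted JavaScript runtime where the handler receives the viewer request and returns it, possibly rewritten:&lt;/p&gt;

```javascript
// Sketch of an Accept-header rewrite at the edge. CloudFront lowercases
// header names and exposes each value under `.value`.
function handler(event) {
  var request = event.request;
  var accept = '';
  if (request.headers.accept) {
    accept = request.headers.accept.value;
  }
  // Only rewrite image requests when the browser advertises WebP support.
  var isImage = /\.(png|jpe?g)$/.test(request.uri);
  if (isImage) {
    if (accept.indexOf('image/webp') !== -1) {
      request.uri = request.uri.replace(/\.(png|jpe?g)$/, '.webp');
    }
  }
  return request;
}
```

&lt;p&gt;Because the rewrite keys off the &lt;code&gt;Accept&lt;/code&gt; header, the cache policy has to include that header in the cache key (or normalize it first), or WebP and non-WebP clients would be served each other's cached responses.&lt;/p&gt;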

&lt;h3&gt;Letting the browser choose the right size&lt;/h3&gt;

&lt;p&gt;Using WebP was only one part of the solution. Once multiple sizes of every image existed automatically, the next question was which one to serve. Mobile screens don't need desktop-sized images, and high-DPI displays can benefit from larger ones. Not to mention saving on load times for thumbnails on the home page.&lt;/p&gt;

&lt;p&gt;Using &lt;code&gt;srcset&lt;/code&gt; allows the browser to make that choice on its own. The markup stays the same, there's no runtime logic, and each client downloads only what it actually needs. The responsibility for choosing the right size moves to the client, where that decision can be made with full context.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;img&lt;/span&gt;
  &lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"/images/diagram.png"&lt;/span&gt;
  &lt;span class="na"&gt;srcset=&lt;/span&gt;&lt;span class="s"&gt;"
    /images/diagram-480.webp 480w,
    /images/diagram-960.webp 960w,
    /images/diagram-1600.webp 1600w"&lt;/span&gt;
  &lt;span class="na"&gt;sizes=&lt;/span&gt;&lt;span class="s"&gt;"(max-width: 768px) 100vw, 768px"&lt;/span&gt;
  &lt;span class="na"&gt;alt=&lt;/span&gt;&lt;span class="s"&gt;"..."&lt;/span&gt;
&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ready, Set, Cloud is a static site generated by &lt;a href="https://gohugo.io/" rel="noopener noreferrer"&gt;Hugo&lt;/a&gt;. In my case, adding &lt;code&gt;srcset&lt;/code&gt; support meant overriding the &lt;a href="https://gohugo.io/render-hooks/images/" rel="noopener noreferrer"&gt;render-image hook&lt;/a&gt; and adding a small amount of logic to create the additional attribute.&lt;/p&gt;
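&lt;p&gt;In JavaScript terms (the actual hook is a Hugo Go template, and the width list is an assumption), the generated attribute is just a join over the available widths:&lt;/p&gt;

```javascript
// Hypothetical equivalent of what the render-image hook emits for the
// srcset attribute; widths mirror the HTML example above.
function srcsetFor(basePath, widths) {
  return widths
    .map(function (w) { return basePath + '-' + w + '.webp ' + w + 'w'; })
    .join(', ');
}
```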

&lt;h2&gt;Performance improvements and an unexpected bonus&lt;/h2&gt;

&lt;p&gt;My primary goal with all this was to decrease load times so my site felt snappy without adding anything to my existing workflow. With the image optimization component plus the CDN for cached delivery (and the client-side &lt;code&gt;srcset&lt;/code&gt; addition), I'm pleased to say page payloads dropped significantly. Pages render faster, Largest Contentful Paint (LCP) improved, and overall performance is more predictable.&lt;/p&gt;

&lt;p&gt;On my homepage, the First Contentful Paint (FCP) dropped from &lt;em&gt;4.5 seconds to under a second&lt;/em&gt;. Load times are much more consistent across desktop and mobile as well, which has been an ongoing challenge.&lt;/p&gt;

&lt;p&gt;The unexpected bonus was that these same changes also helped with search engine rankings. Faster pages, smaller downloads, and aggressively cached assets all feed into &lt;a href="https://developers.google.com/search/docs/appearance/core-web-vitals" rel="noopener noreferrer"&gt;Core Web Vitals&lt;/a&gt;. So without targeting SEO explicitly, the site became easier to crawl, faster to index, and began ranking higher in search results simply by prioritizing user experience.&lt;/p&gt;

&lt;h2&gt;Making this easy to reuse&lt;/h2&gt;

&lt;p&gt;If you've built your own blog, you've probably run into these performance bottlenecks just like I have, and that's okay. Serving assets out of S3 is a valid solution and has worked well for me for six years as the site grew.&lt;/p&gt;

&lt;p&gt;When my constraints changed, I wanted to extend the system without changing the deployment workflow. If you're interested in the same improvements I've described here, this setup is available in the &lt;a href="https://serverlessrepo.aws.amazon.com/applications/us-east-1/745159065988/image-optimizer-for-blogs" rel="noopener noreferrer"&gt;Serverless Application Repository&lt;/a&gt; as a drop-in addition to an existing system. The full source is also &lt;a href="https://github.com/allenheltondev/image-downscaler" rel="noopener noreferrer"&gt;available on GitHub&lt;/a&gt; if you want to dig into the details or adapt it further.&lt;/p&gt;

&lt;p&gt;Above all else, this was about taking an entire class of decisions around image formats, sizes, and caching, and keeping them out of the day-to-day workflow. With everything in place, performance simply happens by default.&lt;/p&gt;

&lt;p&gt;This was as fun as it was important for Ready, Set, Cloud. I wanted something easy to adopt, easy to remove, and something that quietly does the right thing as everything else moves around it.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>blogging</category>
    </item>
    <item>
      <title>The Real Cost of Swapping Infrastructure</title>
      <dc:creator>Allen Helton</dc:creator>
      <pubDate>Mon, 15 Dec 2025 20:43:40 +0000</pubDate>
      <link>https://forem.com/allenheltondev/the-real-cost-of-swapping-infrastructure-3a73</link>
      <guid>https://forem.com/allenheltondev/the-real-cost-of-swapping-infrastructure-3a73</guid>
      <description>&lt;p&gt;I've gone through enough infrastructure evaluations as an architect to recognize the moment when the energy leaves the room. It's not when someone questions the performance numbers or the cost model. It's when someone pulls up the codebase and starts counting how many services need to change.&lt;/p&gt;

&lt;p&gt;The infrastructure might be more reliable, easier to operate, or have better economics, but it doesn't matter if getting there means touching stable production code across dozens of services. The conversation shifts from "should we do this?" to "can we afford to do this?" and the answer is usually no.&lt;/p&gt;

&lt;p&gt;That gap between "this is better" and "we can actually adopt this" is where many decisions stall or get turned down.&lt;/p&gt;

&lt;h2&gt;The real cost of infrastructure change&lt;/h2&gt;

&lt;p&gt;Architecture discussions tend to follow a familiar pattern. The whiteboard fills up with boxes and arrows, the tradeoffs look reasonable, everyone agrees that you'll come out the other end better. Then someone asks: how much code do we have to touch?&lt;/p&gt;

&lt;p&gt;That question isn't about features or benchmarks. &lt;em&gt;It's about risk&lt;/em&gt;. Architects evaluate the blast radius of change alongside performance and reliability. Every line of application code that needs to move, every client library that needs to be swapped, every behavior that needs to be re-learned increases the cost before you can even run a proof of concept.&lt;/p&gt;

&lt;p&gt;For systems already in production, touching stable code introduces uncertainty. It stretches review cycles, kicks off regression testing, and makes rollback complicated. Good ideas often don't make it past this point because weaving them into existing applications costs too much.&lt;/p&gt;

&lt;p&gt;This is especially relevant for infrastructure on the hot path. When caching misbehaves, it takes other systems down with it. Teams are rightfully cautious about changes here, even when the infrastructure side of the proposal is compelling.&lt;/p&gt;

&lt;h2&gt;What teams actually trust&lt;/h2&gt;

&lt;p&gt;Teams trust behavior they've observed in production, like how commands serialize, how errors surface, or how retries behave under load. That behavior has been exercised millions of times. It's hardened by real traffic, load testing, and years of incremental fixes. In practice, this behavior acts as a contract between the app code and infrastructure.&lt;/p&gt;

&lt;p&gt;This is why client changes feel expensive even when two libraries look similar on the surface. Timeouts, connection handling, pipelining behavior, and edge cases around failures all shape how systems respond when stressed. At scale, subtle differences show up as tail latency spikes or incident tickets that are hard to explain.&lt;/p&gt;

&lt;p&gt;For cache-heavy systems built on Redis or Valkey, this contract is often the wire protocol itself – &lt;a href="https://docs.momentohq.com/cache/resp" rel="noopener noreferrer"&gt;RESP&lt;/a&gt;, the wire format the client already speaks. The application doesn't depend on “a cache,” it depends on this specific way of talking to one.&lt;/p&gt;

&lt;h2&gt;Changing what sits behind the contract&lt;/h2&gt;

&lt;p&gt;When you hold the contract constant and change what sits behind it, the risk potential drops dramatically.&lt;/p&gt;

&lt;p&gt;Instead of rewriting cache layers or swapping SDKs across services, teams can &lt;a href="https://docs.momentohq.com/cache/resp#code-it-your-way" rel="noopener noreferrer"&gt;point existing Redis or Valkey clients at Momento&lt;/a&gt;, authenticate, and issue the same commands they already use. The infrastructure changes. The operational model changes. The application code largely does not.&lt;/p&gt;
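&lt;p&gt;As a sketch, the application-side change can be as small as the client configuration. The option names below follow the node-redis style, and the hostname and token are placeholders, not a real Momento endpoint:&lt;/p&gt;

```javascript
// Illustrative config builder: the commands and call sites stay the same;
// only the connection target (plus TLS and auth) changes.
function cacheConfig(endpoint, authToken) {
  return {
    url: 'rediss://' + endpoint + ':6379', // rediss:// selects TLS
    password: authToken,
  };
}

// Before: cacheConfig('redis.internal.example.com', oldSecret)
// After:  cacheConfig('cache.example.momentohq.com', momentoToken)
// Rollback is the same one-line change in reverse.
```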

&lt;p&gt;That distinction turns evaluation from a refactor into a configuration change. It lets teams observe real production behavior without committing to a rewrite upfront. More importantly, it makes rollbacks boring. Simply change an endpoint back and that's the extent of it.&lt;/p&gt;

&lt;p&gt;This doesn't eliminate &lt;em&gt;all&lt;/em&gt; risk. RESP compatibility has edges and limitations worth understanding, and not every Redis command is supported. But the approach shifts evaluation risk from application code to infrastructure, where it's far easier to observe and reason about.&lt;/p&gt;

&lt;h2&gt;Lowering the cost of evaluation&lt;/h2&gt;

&lt;p&gt;I've noticed the infrastructure platforms that gain real adoption share a common trait: they meet teams where they already are. They respect existing contracts, existing mental models, and the realities of systems that have been running in production for years.&lt;/p&gt;

&lt;p&gt;"Better" isn't enough if getting there requires destabilizing code no one wants to touch. The platforms that succeed make it easy to start small, observe real behavior, and back out safely when something doesn't line up. When evaluation feels reversible, teams engage honestly with the tradeoffs instead of inventing reasons to stay put.&lt;/p&gt;

&lt;p&gt;RESP compatibility fits this philosophy. It doesn't ask teams to abandon the clients or patterns they rely on. It allows them to keep the contract they trust while changing the parts that benefit most from being managed: scaling, availability, and operational complexity.&lt;/p&gt;

&lt;p&gt;In practice, that's often what separates interesting technology from technology that actually gets adopted.&lt;/p&gt;

</description>
      <category>infrastructure</category>
      <category>architecture</category>
      <category>platformengineering</category>
    </item>
    <item>
      <title>Breakthroughs are just boring improvements that pile up</title>
      <dc:creator>Allen Helton</dc:creator>
      <pubDate>Mon, 24 Nov 2025 21:04:05 +0000</pubDate>
      <link>https://forem.com/allenheltondev/breakthroughs-are-just-boring-improvements-that-pile-up-299h</link>
      <guid>https://forem.com/allenheltondev/breakthroughs-are-just-boring-improvements-that-pile-up-299h</guid>
      <description>&lt;p&gt;We romanticize big engineering wins.&lt;/p&gt;

&lt;p&gt;The performance chart that hockey-sticks up. The keynote slide with the impossible number. The "we hit a billion requests per second" humble brag.&lt;/p&gt;

&lt;p&gt;The truth behind those moments is never one big breakthrough. It's dozens, possibly even hundreds, of small changes and enhancements. The kind that look ordinary when they land in a PR. The kind that your eyes casually skip over in release notes. The kind nobody celebrates… until they compound into something revolutionary.&lt;/p&gt;

&lt;p&gt;I was recently reminded of this while listening to the &lt;a href="https://youtu.be/c57lAopUh0k" rel="noopener noreferrer"&gt;&lt;em&gt;Cache It&lt;/em&gt; podcast&lt;/a&gt;. Valkey project maintainer &lt;a href="https://www.linkedin.com/in/harkrishn-patro/" rel="noopener noreferrer"&gt;Harkrishn Patro&lt;/a&gt; described how his team scaled their system to &lt;a href="https://valkey.io/blog/1-billion-rps/" rel="noopener noreferrer"&gt;handle a billion requests per second&lt;/a&gt;. He wasn't describing a feature launch, but rather the result of continuous refinement across every corner of the system.&lt;/p&gt;

&lt;p&gt;Engineers, myself included, often home in on small pieces of a system and lose sight of the forest for the trees. And with bold headlines and sensational marketing, we forget the forest is made of trees. As a result, major innovations overshadow the hard work done in the months and years leading up to "the big reveal."&lt;/p&gt;

&lt;p&gt;So what happens if we take a stroll through that forest to see what the big announcement was &lt;em&gt;really&lt;/em&gt; about?&lt;/p&gt;

&lt;h2&gt;
  
  
  Innovation removes friction
&lt;/h2&gt;

&lt;p&gt;We tend to think innovation shows up wearing a cape. A disruptive new framework. A patented algorithm. A major architectural redesign. But most real innovation is far less glamorous – it's the process of shaving off tiny sources of drag until the entire system starts to glide. The "death by a thousand paper cuts" phenomenon is real.&lt;/p&gt;

&lt;p&gt;Distributed systems make this painfully clear. The moment you push scale beyond "normal," you reveal dozens of inefficiencies that rarely show up in happy-path benchmarks. And this doesn't just mean hyperscale! Problems change at every tier of scale. From POC to startup. From startup to regional. From regional to global. Suddenly, small issues aren't small anymore – they're multiplied across thousands of functions, nodes, and containers processing millions of requests per second.&lt;/p&gt;

&lt;p&gt;That's exactly what the Valkey team discovered while pushing cluster sizes past limits the system was never designed for. Instead of rewriting it from scratch, they focused on identifying and removing friction one piece at a time. For example, early pub/sub workloads broadcast messages to every node in the cluster, regardless of who needed the data. Reducing that via &lt;a href="https://valkey.io/topics/pubsub/#sharded-pub-sub" rel="noopener noreferrer"&gt;sharded messaging&lt;/a&gt; meant drastically less traffic flowing through the cluster bus, which freed up capacity for the work that mattered.&lt;/p&gt;

&lt;p&gt;Or the size and frequency of &lt;a href="https://valkey.io/topics/cluster-spec/#heartbeat-packet-content" rel="noopener noreferrer"&gt;internal coordination messages&lt;/a&gt;. By making those packets smaller and more efficient, Valkey stopped wasting cycles on communication overhead, freeing up capacity for serving actual requests.&lt;/p&gt;

&lt;p&gt;Individually, neither of these changes would get a developer on stage at a conference. Nobody ships a "lighter gossip protocol!" feature release. Stack those improvements long enough, and the system starts performing at a level that once felt impossible.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Innovation isn't about adding more. It's about taking away.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you remove enough drag, the system suddenly feels like it's leaping forward… even though it got there one subtle change at a time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reliability is a force multiplier
&lt;/h2&gt;

&lt;p&gt;A fast system under perfect conditions is just a prototype. The moment you introduce chaos, like what happens regularly in production, performance either collapses or compounds. Reliability determines which way it goes.&lt;/p&gt;

&lt;p&gt;Just like removing friction creates capacity, improving reliability creates opportunity. In distributed systems, fragility multiplies just as quickly as performance. One node degrades, then another, and suddenly a perfectly healthy cluster wastes capacity on recovery instead of serving requests. Failover becomes a performance bottleneck waiting to happen if handled poorly.&lt;/p&gt;

&lt;p&gt;Back to the small wins with Valkey – when multiple primaries failed simultaneously (like during an Availability Zone outage), the system would struggle to elect new leaders because candidate requests flooded in all at once. Leadership battles caused downstream stalls that made the whole cluster feel unstable under pressure. The solution was introducing strategic delays to election behavior, allowing replicas to take over cleanly and confidently without overwhelming the system. &lt;/p&gt;

&lt;p&gt;But that's the pattern – resilience opens up throughput. A cluster that can route around failure without spiking CPU or tail latency stays fast when users need it most. In Valkey's case, cleaner failover kept cycles free to serve requests even in unfavorable situations. Another blip in the release notes that quietly added up to the breakthrough.&lt;/p&gt;

&lt;p&gt;Reliability amplifies every other improvement in the system. It creates breathing room. It enables aggressive scaling. It gives teams the confidence to grow beyond today's limits. When recovery becomes routine and uneventful, progress accelerates. &lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling is an operational behavior
&lt;/h2&gt;

&lt;p&gt;Contrary to what you might think, scaling isn't a hardware problem. It's a behavior problem. If changing scale feels risky, disruptive, or expensive, teams simply don't do it – even when they need to. They take the safe route: over-provisioning up front and praying they don't get surprised later.&lt;/p&gt;

&lt;p&gt;There's nothing innovative about survival mode.&lt;/p&gt;

&lt;p&gt;Real scalability emerges when the cost of adjusting capacity falls so low that it becomes a non-event (that's one of the reasons people like serverless so much). The faster and safer adding nodes, redistributing data, or shifting workload patterns feel, the more frequently teams perform them. And the more frequently teams perform them, the closer infrastructure can track real-world demand without waste, stress, and cost.&lt;/p&gt;

&lt;p&gt;Once again, the Valkey team learned this firsthand. Since the early days of Redis, data migration has been a scary operation. Moving slots between nodes meant risking performance hits, stalled operations, or complicated (and error-prone) manual intervention. So people avoided it, and clusters either stayed small or stayed oversized. Innovation would stall because operating the system at its potential felt like walking a tightrope.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gomomento.com/blog/cache-rebalancing-was-broken-heres-how-valkey-9-0-fixed-it/" rel="noopener noreferrer"&gt;Atomic slot migration&lt;/a&gt; changed the behavior. Now, moving data around a cluster is predictable, controlled, and boring – exactly what operators need when managing production systems. This feature opened up capabilities that already existed but weren't safe to reach for.&lt;/p&gt;

&lt;p&gt;When scaling becomes effortless, teams stop treating it like a last resort and start doing it as a reflex. Progress begins to compound. Capacity and confidence reinforce one another. And suddenly, what once felt like a milestone becomes just another Tuesday.&lt;/p&gt;

&lt;h2&gt;
  
  
  Breakthroughs happen gradually… then suddenly
&lt;/h2&gt;

&lt;p&gt;From the outside, breakthroughs look like a leap. One day the headline appears. The graph turns vertical. Everyone wonders how a system could possibly jump from "pretty good" to "borderline unbelievable" in such a short time.&lt;/p&gt;

&lt;p&gt;But from the inside, it looks like months, sometimes years, of sanding down rough edges. Fixing the tiny things that don't feel worthy of a celebration. Optimizing behavior that only shows up in profiling. Eliminating friction one pull request at a time.&lt;/p&gt;

&lt;p&gt;That's exactly what happened with Valkey. A billion RPS was never the goal, but it was the receipt of countless invisible improvements. The milestone only looks magical because we didn't see the work it took to make it boring.&lt;/p&gt;

&lt;p&gt;And this isn't unique to distributed systems or caching engines. It's universal. Every major engineering accomplishment is built from the moments where someone looked at a sharp corner and decided it didn't have to stay sharp.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Progress compounds. Confidence compounds. Capability compounds. Breakthroughs happen gradually… then suddenly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So if you're in the middle of the grind, whether you're shipping incremental changes, cleaning up edge cases, or improving reliability and operability in small ways, take solace. You're stacking. You're creating the conditions where something extraordinary is going to look effortless.&lt;/p&gt;

&lt;p&gt;The day is coming when the big announcement arrives, the system holds steady, and people assume it was easy. But you'll know better. You'll know it was earned one small improvement at a time.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>engineering</category>
      <category>innovation</category>
    </item>
    <item>
      <title>Cache Rebalancing Was Broken. Here's How They Fixed It.</title>
      <dc:creator>Allen Helton</dc:creator>
      <pubDate>Tue, 18 Nov 2025 22:16:54 +0000</pubDate>
      <link>https://forem.com/allenheltondev/cache-rebalancing-was-broken-heres-how-they-fixed-it-hh5</link>
      <guid>https://forem.com/allenheltondev/cache-rebalancing-was-broken-heres-how-they-fixed-it-hh5</guid>
      <description>&lt;p&gt;Few things make SREs more nervous than rebalancing a cache cluster.&lt;/p&gt;

&lt;p&gt;You know the feeling. You add a node, trigger a rebalance, and suddenly latency graphs start jumping. It's a familiar risk of the job, especially when your cache sits between your users and your database. A small configuration mistake here can suddenly unleash a storm of GET requests on your primary data store.&lt;/p&gt;

&lt;p&gt;I admit, I never really understood the concept of a slot or why it was needed. But after listening to a recent episode of the Cache It podcast on the new &lt;a href="https://www.youtube.com/watch?v=q5L2oW3YRZQ" rel="noopener noreferrer"&gt;atomic slot migration feature in Valkey 9.0&lt;/a&gt;, I finally decided to dig in. The deeper I went (and the more times I replayed the episode), the more it clicked.&lt;/p&gt;

&lt;p&gt;My learning adventure led me to ask, and finally answer, four important questions regarding slots and what happens to them when a cluster resizes. Let's dive in.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are slots?
&lt;/h2&gt;

&lt;p&gt;Most caching systems rely on consistent hashing to decide where data lives. It keeps keys evenly balanced across nodes while allowing clusters to grow or shrink with minimal reshuffling.&lt;/p&gt;

&lt;p&gt;Valkey uses a fixed hash-slot model – specifically 16,384 slots – that together represent the entire keyspace. The number might seem arbitrary, but it's 2¹⁴, a power of two chosen to balance routing precision against metadata efficiency. With that many slots, data can be distributed across large clusters without adding unnecessary overhead to routing or metadata.&lt;/p&gt;

&lt;p&gt;Every key hashes to one of those slots, and each node in a cluster owns a subset of them. Rather than mapping every key directly to a node, Valkey maps slots to nodes. Since keys are deterministically hashed to slots, this makes scaling predictable. When you add or remove a node, Valkey only has to move the slots it owns, not millions of individual keys.&lt;/p&gt;

&lt;p&gt;When the Valkey client library hashes a key, it already knows which node owns the corresponding slot. If the topology changes (slots reassigned to a different node after a scaling event, for example), Valkey issues a quick redirect so the client can retry against the correct node.&lt;/p&gt;
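
&lt;p&gt;That deterministic mapping is easy to sketch. The cluster spec defines a key's slot as CRC16 of the key, mod 16,384 (hash tags like &lt;code&gt;{user}&lt;/code&gt; are omitted here for brevity). A minimal Python version:&lt;/p&gt;

```python
# Minimal sketch of cluster key routing: every key hashes to one of
# 16,384 slots (CRC16 of the key, mod 2**14), and each node owns a
# subset of slots. This is the XMODEM CRC16 (poly 0x1021, init 0).

def crc16(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # The same key always lands in the same slot, so a client can
    # route a request without asking the server where the key lives.
    return crc16(key.encode()) % 16384
```

&lt;p&gt;Note that only the key name feeds the hash – nothing about the node layout does – which is exactly why slot-to-node assignments can move around without changing where any key hashes to.&lt;/p&gt;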

&lt;h2&gt;
  
  
  What was broken about the old migration model?
&lt;/h2&gt;

&lt;p&gt;The old migration model moved one key at a time between nodes, triggering a flurry of redirects and topology changes. Clients were constantly told, "&lt;em&gt;Sorry, this key moved, try over there.&lt;/em&gt;"&lt;/p&gt;

&lt;p&gt;It was essentially a brute-force way of moving slots from one node to another. It worked, but it wasn't elegant – and it definitely wasn't fast.&lt;/p&gt;

&lt;p&gt;Each key transfer required multiple round trips, and every slot migration forced clients to refresh cluster topology. Large values could even block threads during serialization.&lt;/p&gt;

&lt;p&gt;When you're working with millions of keys across your cluster, that adds up to a resource-intensive process that can take minutes to complete, all while the cluster remains live and serving traffic.&lt;/p&gt;

&lt;p&gt;The result was instability and slower tail latencies during migration – which made it something you'd postpone unless you absolutely had to.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does atomic slot migration fix it?
&lt;/h2&gt;

&lt;p&gt;Valkey 9.0 redesigned the slot migration process using the same principles that power replication. Instead of moving keys one by one, atomic slot migration runs in three distinct phases:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Snapshot&lt;/strong&gt; – The source node forks a background process and captures a point-in-time snapshot of the slots being migrated while continuing to serve live traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streaming&lt;/strong&gt; – Any writes that happen during the snapshot are captured in a buffer and streamed incrementally to the target node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finalization&lt;/strong&gt; – Once all data is synchronized, Valkey briefly pauses new writes, sends a final marker, and performs a single, atomic handover.&lt;/p&gt;

&lt;p&gt;This three-phase approach eliminates the multiple round trips, client redirects, and fragile user experience that used to come with slot migration. Because the process runs in the background and atomically switches ownership when complete, there is no risky in-between state. Once it finishes, your slots are already assigned to the target nodes without interruption or confusion.&lt;/p&gt;
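
&lt;p&gt;A toy simulation makes the flow concrete. This is &lt;em&gt;not&lt;/em&gt; Valkey's implementation – just the shape of the idea, with plain Python dicts standing in for the source and target nodes:&lt;/p&gt;

```python
# Toy simulation of atomic slot migration's three phases -- NOT Valkey
# internals, just the concept, with dicts standing in for nodes.

source = {"user:1": "a", "user:2": "b"}
target = {}

# Phase 1: snapshot -- capture point-in-time state while the source
# keeps serving live traffic.
snapshot = dict(source)
target.update(snapshot)

# Phase 2: streaming -- writes that land during migration go into a
# buffer on the source...
buffer = []
source["user:3"] = "c"            # a live write arrives mid-migration
buffer.append(("user:3", "c"))

# ...and are replayed incrementally on the target.
for key, value in buffer:
    target[key] = value

# Phase 3: finalization -- briefly pause writes, then flip ownership
# in one atomic step. Clients never see a half-migrated slot.
owner = "target"
```

&lt;p&gt;The key property is that ownership changes exactly once, at the end – everything before that point is invisible preparation.&lt;/p&gt;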

&lt;h2&gt;
  
  
  Why is this better?
&lt;/h2&gt;

&lt;p&gt;For operators, this is a big deal.&lt;/p&gt;

&lt;p&gt;Fewer round trips, fewer topology changes, and no split-brain state during migration. You can rebalance entire clusters without disturbing workloads or waking up the on-call engineer (hooray!).&lt;/p&gt;

&lt;p&gt;Both models still exist today, but atomic slot migration represents a new standard for reliability. It shows how thoughtful engineering can make the hardest operational tasks feel invisible.&lt;/p&gt;

&lt;p&gt;As &lt;a href="https://www.linkedin.com/in/kshams/" rel="noopener noreferrer"&gt;Khawaja&lt;/a&gt; said in the latest Cache It podcast episode with &lt;a href="https://www.linkedin.com/in/jacob-murphy-801078127/" rel="noopener noreferrer"&gt;Jacob Murphy&lt;/a&gt;, "&lt;em&gt;This moves us closer to a world where scaling a cache never means taking it offline.&lt;/em&gt;"&lt;/p&gt;

&lt;p&gt;And that's a world every SRE wants to live in.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>caching</category>
      <category>distributedsystems</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Designing smarter caches with Valkey 9.0's numbered databases</title>
      <dc:creator>Allen Helton</dc:creator>
      <pubDate>Fri, 07 Nov 2025 16:54:32 +0000</pubDate>
      <link>https://forem.com/allenheltondev/designing-smarter-caches-with-valkey-90s-numbered-databases-4d18</link>
      <guid>https://forem.com/allenheltondev/designing-smarter-caches-with-valkey-90s-numbered-databases-4d18</guid>
      <description>&lt;p&gt;I recently watched a podcast episode where &lt;a href="https://www.linkedin.com/in/kshams/" rel="noopener noreferrer"&gt;Khawaja Shams&lt;/a&gt;, CEO of &lt;a href="https://gomomento.com" rel="noopener noreferrer"&gt;Momento&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/in/kyle-davis-linux/" rel="noopener noreferrer"&gt;Kyle Davis&lt;/a&gt;, Developer Advocate at Valkey, talked about &lt;a href="https://www.youtube.com/watch?v=Q0kqS3s2cAQ&amp;amp;list=PLeRsXz8i6Cw-gloqAjW42WfJR49BHHZZV" rel="noopener noreferrer"&gt;everything BUT performance of Valkey 9.0&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;"That's the first time anybody's asked me to talk about Valkey and not talk about performance," Kyle laughed when Khawaja asked him to skip the speed benchmarks. What followed was one of those conversations where you realize a seemingly simple feature opens up architectural patterns you didn't even know were possible.&lt;/p&gt;

&lt;p&gt;I didn't quite understand it at first. Kyle mentioned &lt;a href="https://valkey.io/blog/numbered-databases/" rel="noopener noreferrer"&gt;numbered databases in Valkey 9.0&lt;/a&gt;. Then he mentioned clustering and how numbered databases get a boost when used together. The terminology was throwing me off until I heard it explained a different way. &lt;/p&gt;

&lt;p&gt;Kyle referred to numbered databases as "namespaces" (which makes way more sense), and explained that they let you logically separate your keys. The same key name can exist in database 0 and database 42 with completely different data, and they'll never collide.&lt;/p&gt;

&lt;p&gt;"This is a feature that goes way back to Redis one," Kyle explained. "They had this concept of numbered databases… but it allows you to take the keys that you have and separate them up logically into, by default, 16 databases."&lt;/p&gt;

&lt;p&gt;Up until v9, this only worked in standalone mode. The moment you needed cluster mode (which basically every production workload does), you were stuck with just database 0 – and that was true for over a decade. An update was long overdue.&lt;/p&gt;

&lt;p&gt;When Valkey calculates which slot a key belongs to, it only looks at the key name. The database number doesn't matter. So if you have a key called &lt;code&gt;user:100&lt;/code&gt; in database 0, and another key called &lt;code&gt;user:100&lt;/code&gt; in database 5000, &lt;em&gt;they both live in the same slot&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;On the surface, this might not seem like a big deal. But dig into what it means and you'll see you can move data between databases almost for free. Moving keys between databases is literally a pointer change, not a data migration. You can move a sorted set with 3 million elements between databases with no network transfer. Just… instant.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problems this solves
&lt;/h2&gt;

&lt;p&gt;I immediately started thinking about real problems this solves. And I found a few that made me really excited.&lt;/p&gt;

&lt;h3&gt;
  
  
  No more key prefixing
&lt;/h3&gt;

&lt;p&gt;The obvious use case we jump to when we hear "multiple databases" is multi-tenancy. In a multi-tenant SaaS app, you probably build your keys like this (if you're anything like me):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;tenant_12345:user:100&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;user_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;tenant_12345:session:abc&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;session_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;tenant_12345:cart:xyz&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;cart_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every key you get, set, and delete has the tenant id prefixed on it. This is a (generally) safe way to make sure you logically separate customer data. But if you have 100 million keys with a 15-byte prefix, that's 1.5GB of RAM just storing the same string over and over. It's an incredibly wasteful pattern!&lt;/p&gt;
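
&lt;p&gt;A quick back-of-the-envelope check on that number, using the figures above:&lt;/p&gt;

```python
# Back-of-the-envelope cost of repeating a tenant prefix on every key.
keys = 100_000_000               # 100 million cached keys
prefix_bytes = 15                # roughly len("tenant_12345:") + change
overhead_gb = keys * prefix_bytes / 1_000_000_000
# overhead_gb == 1.5 -- gigabytes of RAM storing one repeated string
```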

&lt;p&gt;But now, we can save tenant data in a tenant-specific database for a set of keys. It not only makes the code cleaner, it also frees up tons of memory to be used on… more keys!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;select&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;12345&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;// Tenant 12345's namespace&lt;/span&gt;
&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user:100&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;user_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;session:abc&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;session_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cart:xyz&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;cart_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In other words, we've effectively expanded our potential working set size without any hardware changes. Same isolation. Zero prefix overhead. And remember, Valkey 8.0 made headlines for saving 8 bytes per key. We're talking about eliminating 10-20 bytes here.&lt;/p&gt;

&lt;h3&gt;
  
  
  Black Friday-scale price switches
&lt;/h3&gt;

&lt;p&gt;Kyle described a use case that made me immediately think of every e-commerce system I've ever seen:&lt;/p&gt;

&lt;p&gt;"Let's say you have something that you want to instantly switch over, atomically switch over from one set of data to another. So you've said, here it has a million elements and then at 12:01 AM you want to have a different set of million elements."&lt;/p&gt;

&lt;p&gt;Sounds exactly like holiday pricing to me. Businesses have their entire catalog of products and prices being served out of their cache cluster. The midnight after Thanksgiving, they need to flip to sale prices. All of them. Instantly.&lt;/p&gt;

&lt;p&gt;Typically, you'd have a batch job that updates prices in bulk or update your data with feature flags, or some other process equally prone to error that results in customers seeing random mixes of old and new prices.&lt;/p&gt;

&lt;p&gt;But with multiple databases, you can build your catalog in a different database and instantly move it over with a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;select&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;// holiday pricing database&lt;/span&gt;
&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;move&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;product:prices&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// moved to production database&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No data copying. No network overhead. Really, just a configuration change and suddenly your entire pricing model flips. This is the kind of thing that sounds impossible until someone shows you it's not.&lt;/p&gt;

&lt;h3&gt;
  
  
  Parallel test environments
&lt;/h3&gt;

&lt;p&gt;This one hit home for me. If you're running tests in CI/CD, you've probably tried to run multiple parallel test jobs, all needing isolated data – but all stepping on each other's toes.&lt;/p&gt;

&lt;p&gt;Of course, you could get around this by not running your tests in parallel and resetting your test data after every job, but nobody has time for that anymore. Multiple databases make this a trivial problem.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;test_db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CI_JOB_ID&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt; 
&lt;span class="n"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Valkey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;valkey-cluster&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
&lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;select&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_db&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
&lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user:test@example.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user_json&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
&lt;span class="k"&gt;assert&lt;/span&gt; &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user:test@example.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;user_json&lt;/span&gt; 

&lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;flushdb&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And it scales, too! Kind of. The podcast talked about running a cluster with 10 million databases just to see what would happen. Turns out, it worked fine! Valkey lazily allocates databases, so they only consume resources when you actually put data in them. You can assign unique database numbers to thousands of concurrent test runs, and it costs you nothing until you write data.&lt;/p&gt;

&lt;p&gt;This is the kind of engineering that makes you want to high-five someone through the internet 🙌&lt;/p&gt;

&lt;h2&gt;
  
  
  More is (probably) coming
&lt;/h2&gt;

&lt;p&gt;In the podcast, Kyle hinted at – but didn't commit to – some cool feature enhancements that would make numbered databases feel like a first-class citizen in Valkey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Named databases&lt;/strong&gt; – Instead of &lt;code&gt;SELECT 42&lt;/code&gt;, imagine &lt;code&gt;SELECT "production"&lt;/code&gt; or &lt;code&gt;SELECT "tenant_acme_corp"&lt;/code&gt;. This is a huge boost to developer experience for obvious reasons, and it also simplifies multi-tenant scenarios since you no longer have to track a mapping from tenant ids to database numbers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Per-database memory limits&lt;/strong&gt; – Prevent your test database from eating all your RAM. Sandbox test environments so they don't take up all the resources. It also fights noisy neighbors in multi-tenant production scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Per-database rate limiting&lt;/strong&gt; – True multi-tenant resource isolation at the namespace level.&lt;/p&gt;

&lt;p&gt;Numbered databases in cluster mode give us something new – predictable, lightweight isolation built right into the cache layer. That's a big deal for enterprise workloads. Multi-tenant apps get simpler. A/B testing and CI environments get safer. Rollouts and migrations get less risky.&lt;/p&gt;

&lt;p&gt;Kyle summed it up perfectly:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Somebody can roll in and just take something, start saving memory with it, or have new use cases entirely by going SELECT 4 instead of not worrying about a database at all.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It's a subtle-yet-powerful new feature that might end up shaping how we architect distributed caching in the future.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>caching</category>
      <category>valkey</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>This is the best new feature from POST/CON</title>
      <dc:creator>Allen Helton</dc:creator>
      <pubDate>Wed, 01 May 2024 19:05:16 +0000</pubDate>
      <link>https://forem.com/allenheltondev/this-is-the-best-new-feature-from-postcon-340</link>
      <guid>https://forem.com/allenheltondev/this-is-the-best-new-feature-from-postcon-340</guid>
      <description>&lt;p&gt;I make it no secret that I think APIs will literally &lt;a href="///blog/allen.helton/seriously-write-your-spec-first/"&gt;shape the future of tech&lt;/a&gt;. APIs give us access to everything - the lights in our homes, our favorite books, the weather, inventory of the grocery store down the street. Being able to use these APIs as tools enables us to build automations and abstractions that take care of everything in our daily lives without lifting a finger.&lt;/p&gt;

&lt;p&gt;My daily routine starts with an alarm that wakes me up within a 20-minute window at the best possible time based on my sleep cycle. I get out of bed and start brushing my teeth to some energizing music that turned on automatically after my alarm went off.&lt;/p&gt;

&lt;p&gt;I work my way into the kitchen where the lights turn on when my presence is detected. I drink my pre-workout as an email hits my phone with the &lt;a href="https://dev.to/aws-heroes/solo-saas-how-i-built-a-serverless-workout-app-by-myself-1c62"&gt;generated workout of the day&lt;/a&gt;. I make it over to my home gym where a catered workout playlist is already playing. I have all these conveniences follow me around my house without having to do anything. It just does it.&lt;/p&gt;

&lt;p&gt;How does all this happen? &lt;strong&gt;With APIs&lt;/strong&gt;. Not just APIs though, I've built this particular set of capabilities as a workflow. This happens, then that happens, and as a result, these other two things kick off.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Chaining actions together to create a workflow is where the power of APIs really shines.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Think of APIs as LEGO bricks. An individual brick doesn't do much. But when you put them together in a meaningful way, you quickly go from a bunch of random squares to the Taj Mahal.&lt;/p&gt;

&lt;p&gt;This is why I'm so excited about related requests inside of Postman. There were a bunch of cool announcements at POST/CON, but related requests have the potential for some serious disruption and incredible opportunities.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are related requests?
&lt;/h2&gt;

&lt;p&gt;Inside of Postman, when you add a request from a &lt;a href="https://www.gomomento.com/blog/verified-in-postman" rel="noopener noreferrer"&gt;verified API&lt;/a&gt; to your collection, you'll see a new little icon on the right-hand toolbar. Click it, and you'll see recommendations of other requests to use alongside it. You can click one of these recommendations and have it brought into your collection automatically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fpostcon_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fpostcon_1.png" alt="Related requests in the toolbar"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've built so many workflows in Postman where I add a request and think to myself, "now what?" I have to figure out what to do next to accomplish my task end-to-end. Usually this means going back to the &lt;a href="///blog/allen.helton/why-api-specs-are-the-backbone-of-successful-development/"&gt;API spec&lt;/a&gt; - if there is one - and trying to identify the next endpoint to add to my workflow. But with related requests, Postman will identify what you're working on and offer intelligent recommendations to move you along faster. These recommendations show you full documentation so you can make informed decisions on what you're adding to your workflow.&lt;/p&gt;

&lt;p&gt;Sometimes when I'm building a workflow I know exactly what I want to do. The hard part isn't knowing what I want the outcome to be, but rather the &lt;em&gt;steps to get there&lt;/em&gt;. Chances are high the workflow I'm building is not reinventing the wheel - meaning I'm probably trying to create something that has been done before.&lt;/p&gt;

&lt;p&gt;API vendors (usually) know how customers use their products. In addition to an API reference, they regularly include tried and true patterns and best practices for using their services in the documentation. It's often up to developers to read the docs, learn the pattern, and implement it themselves. There's nothing wrong with this at all - in fact, it's been a standard practice in the tech industry for years. I often refer to it as the "copy, paste, replace" method.&lt;/p&gt;

&lt;p&gt;Related requests take that a step further. Postman sees what you're building and offers recommendations from the vendor to get you to your end goal faster. Not only does it offer suggestions, it also adds functional requests to your collection. Think of this almost like &lt;a href="https://docs.aws.amazon.com/codewhisperer/latest/userguide/what-is-cwspr.html" rel="noopener noreferrer"&gt;Amazon CodeWhisperer&lt;/a&gt; or &lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;GitHub Copilot&lt;/a&gt;, but for Postman - except it's trained exclusively on reputable sources, with data aimed at doing exactly what you're already trying to do.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where I can see this going
&lt;/h2&gt;

&lt;p&gt;I wish I had insider information on the roadmap for Postman. But I don't. So I'm going to offer some speculation about what I see this turning into.&lt;/p&gt;

&lt;p&gt;This initial release is step one of three:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a searchable index of requests from reputable API vendors&lt;/li&gt;
&lt;li&gt;Update the index with cross-vendor related requests&lt;/li&gt;
&lt;li&gt;Unleash PostBot on the index&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are pretty big steps with a lot of implications, so let's quickly touch on each one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Searchable vendor-specific index
&lt;/h3&gt;

&lt;p&gt;This is what we have right now. Postman has indexed collection data from verified APIs and is using it to offer recommendations for the next request to use. These collections are maintained by individual companies and generally include only requests specific to their own products - which is a totally acceptable and understandable thing to do!&lt;/p&gt;

&lt;p&gt;But because of this, you likely won't be getting recommendations of how to turn on your lights after adding a request to play a specific playlist on Spotify. To do that, we need to progress to step two.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cross-vendor collections from trusted sources
&lt;/h3&gt;

&lt;p&gt;Right now the only data being indexed for related requests is coming from verified teams. These teams represent individual companies like &lt;a href="https://mastercard.com" rel="noopener noreferrer"&gt;Mastercard&lt;/a&gt;, &lt;a href="https://www.hubspot.com/" rel="noopener noreferrer"&gt;HubSpot&lt;/a&gt;, and &lt;a href="https://gomomento.com" rel="noopener noreferrer"&gt;Momento&lt;/a&gt;. But what if Postman started indexing data from verified builders, like the individuals in the &lt;a href="https://www.postman.com/company/supernovas-program/" rel="noopener noreferrer"&gt;Supernova program&lt;/a&gt;?&lt;/p&gt;

&lt;p&gt;You can't reasonably expect companies to build and maintain collections that contain requests from other companies. But you &lt;em&gt;can&lt;/em&gt; expect that behavior from builders. If open-source software has taught us anything, it's that builders love finding and sharing creative ways to do something.&lt;/p&gt;

&lt;p&gt;So if I were to build a public collection containing all the requests from my morning routine, it could potentially be added to the related requests index. Then when you start building a collection that turns your lights on with the Kasa API, you might see a recommendation for using the Spotify API to start a playlist of your choice. This would bring clever use cases to light and possibly spark new, innovative ideas in the process.&lt;/p&gt;

&lt;p&gt;Doing this involves a lot of &lt;strong&gt;trust-building&lt;/strong&gt;. Trust from Postman in individual builders. Trust from Postman's users in the related-requests functionality. Open-source projects have also taught us that not everything you see on the Internet is trustworthy, so establishing trust early with reputable builders is going to be pre-req #1.&lt;/p&gt;

&lt;p&gt;You also need to balance priorities - likely by ranking vendor collections above community collections in the recommendations. Maybe the user building a collection wants to turn their lights off next instead of playing a playlist. They'd find that easily if the vendor-specific recommendations were at the top.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI recommendations
&lt;/h3&gt;

&lt;p&gt;Postman has been heavily investing in generative AI with &lt;a href="https://www.postman.com/product/postbot/" rel="noopener noreferrer"&gt;PostBot&lt;/a&gt;. It's a built-in companion that can help you debug issues, generate documentation, write tests, &lt;a href="///blog/allen.helton/i-didnt-know-it-did-that-postman-postbot"&gt;and much more&lt;/a&gt;. But what if we started training it on the recommended requests index?&lt;/p&gt;

&lt;p&gt;Assuming we have verified builders contributing to related requests in addition to the vendors themselves, there's a lot of data to train on. By then, maybe the public API network has also evolved to the point where vendors can classify themselves as different categories, like Spotify being classified as a music and podcast API, Stripe as a payment processor, and Momento as a serverless cache and storage provider. This type of context can provide an LLM with everything it needs to offer next-gen recommendations.&lt;/p&gt;

&lt;p&gt;And why stop at recommendations? If PostBot is trained on thousands of verified collections and workflows and knows what each set of APIs is useful for, it could absolutely generate and build entire workflows automatically. We know PostBot has this capability, as it was announced at POST/CON that it can now help you build flows.&lt;/p&gt;

&lt;p&gt;But with the power of the entire public API network behind it, imagine what would happen if you gave it a prompt of "Build me a payment processing app that sends confirmation emails to the recipient, updates a spreadsheet, posts the transaction to Slack, and plays a 'tada' sound every time a transaction is successful."&lt;/p&gt;

&lt;p&gt;In seconds, you could have a flow that uses the Stripe API to post a payment, the Google Sheets API to add a row for internal bookkeeping, SendGrid to send emails, Slack for internal messaging, and Spotify for the fun sound. All you have to do is authenticate (which just got easier as well)!&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;The game has just been changed. Related requests are a huge step in the right direction for developer experience and time to market. Knowing what to do next as instructed by the vendor is a big deal.&lt;/p&gt;

&lt;p&gt;APIs can be intimidating - especially when an API has hundreds of calls you can make. Knowing what to do next might only be intuitive to the developers of the API, which makes it hard to consume. Back to the LEGO analogy, if you dumped all the bricks out from a set into a big pile and told someone to go build it without any instructions, they might look at you in disbelief. They'd try, fail a few times, try again, and &lt;em&gt;maybe&lt;/em&gt; get it eventually. But if you give them the instructions, chances are good they'll come out the other side of it with what they wanted.&lt;/p&gt;

&lt;p&gt;This is what related requests are doing for builders! They're providing clear instructions on what to do next. Once we get additional data in there for building workflows across multiple APIs and the help of an LLM, software might actually be easy to build 🤔&lt;/p&gt;

&lt;p&gt;I envision a future where capabilities like this enable both non-technical and technical people alike to build what they want with minimal effort. What a time to be alive!&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>tech</category>
      <category>postman</category>
      <category>api</category>
    </item>
    <item>
      <title>Serverless Postgres with Neon - My first impression</title>
      <dc:creator>Allen Helton</dc:creator>
      <pubDate>Wed, 24 Apr 2024 13:04:40 +0000</pubDate>
      <link>https://forem.com/aws-heroes/serverless-postgres-with-neon-my-first-impression-1n3p</link>
      <guid>https://forem.com/aws-heroes/serverless-postgres-with-neon-my-first-impression-1n3p</guid>
      <description>&lt;p&gt;When you decide to go serverless, be it a personal decision or enterprise-wide, you're signing up to be a forever student. Modern technology moves fast and keeping up with all the new features, services, and offerings week after week is something that you need to do to stay effective. Cloud vendors are continuously releasing higher and higher abstractions and integrations that make your job as a builder easier. So rather than reinventing the wheel and building something you'll have to maintain, if you keep up with the new releases, someone may have already done that for you and is offering to maintain it themselves.&lt;/p&gt;

&lt;p&gt;Such is the case with &lt;a href="https://neon.tech/"&gt;Neon&lt;/a&gt;, a serverless Postgres service that went generally available on April 15. Congrats &lt;a href="https://twitter.com/nikitabase"&gt;Nikita Shamgunov&lt;/a&gt; and team on the launch. When I saw the announcement, I knew I had to try it out for myself and report back with my findings.&lt;/p&gt;

&lt;p&gt;I host a weekly show with &lt;a href="https://twitter.com/andmoredev"&gt;Andres Moreno&lt;/a&gt; for the &lt;a href="https://believeinserverless.com"&gt;Believe in Serverless&lt;/a&gt; community called &lt;em&gt;Null Check&lt;/em&gt;. Andres and I find newly released services or buzzing features and try them out ourselves live on-air for a true "this is what it is" experience. We try to build some silly stuff for a &lt;a href="https://twitter.com/AllenHeltonDev/status/1780977768274002015"&gt;little bit of fun&lt;/a&gt; while we're at it.&lt;/p&gt;

&lt;p&gt;Last week, we did a &lt;a href="https://www.youtube.com/watch?v=dE-74qeyxgQ"&gt;full assessment on Neon&lt;/a&gt;, looking at pricing, developer experience, elasticity, and serverless-ness (we're going to pretend that's a word). Let's go over what we found.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing
&lt;/h2&gt;

&lt;p&gt;Neon has two pricing metrics - &lt;em&gt;storage&lt;/em&gt; and &lt;em&gt;compute&lt;/em&gt;, which feels spot-on for a managed database service. On top of the two pricing metrics, there are four plans to choose from ranging from free tier to enterprise.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_HLi-0a9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://readysetcloud.s3.amazonaws.com/neon_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_HLi-0a9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://readysetcloud.s3.amazonaws.com/neon_1.png" alt="Pricing chart for Neon with the different plans" width="800" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Storage is a metric we should all be familiar with. It refers to the amount of data you're storing inside of Neon. Each plan includes a certain amount of data, and if you exceed it you pay for the overage. The overage cost decreases the higher the plan - meaning you pay more per GB of data on the &lt;em&gt;Free&lt;/em&gt; tier than you do at the &lt;em&gt;Scale&lt;/em&gt; tier.&lt;/p&gt;

&lt;p&gt;Compute, on the other hand, is similar to what we see in other serverless services, but also kind of different. It's the same in that you're charged for the amount of compute consumed (hours x vCPUs), but it's different from other managed DB services in that the meter only runs while queries/operations are executing. With something like DynamoDB, you're charged for read and write capacity units - which are determined by the size of the data you're handling. With Neon, it's all about the effort the machines are putting in, with seemingly no direct charge for the amount of data handled.&lt;/p&gt;
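&lt;p&gt;To make that math concrete, here's a hypothetical bill for a small compute endpoint. The $0.16 per compute-unit-hour rate below is made up purely for illustration - check Neon's pricing page for real numbers:&lt;/p&gt;

```python
# Hypothetical compute bill: vCPUs x active hours x rate.
# The rate is invented for illustration, not Neon's actual price.
vcpus = 0.25              # a quarter-vCPU compute endpoint
active_hours = 40         # hours the endpoint actually ran this month
rate_per_cu_hour = 0.16   # $ per compute-unit-hour (hypothetical)

compute_unit_hours = vcpus * active_hours       # 10.0
cost = compute_unit_hours * rate_per_cu_hour    # 1.60
print(f"${cost:.2f} for the month")             # $1.60 for the month
```

&lt;p&gt;The key point is that idle time adds nothing here - when the endpoint scales to zero, &lt;code&gt;active_hours&lt;/code&gt; simply stops accumulating.&lt;/p&gt;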

&lt;h2&gt;
  
  
  Developer experience
&lt;/h2&gt;

&lt;p&gt;Developer experience (DX) &lt;a href="https://dev.to/aws-heroes/5-tips-for-building-the-best-developer-experience-possible-43m8"&gt;refers to a lot of things&lt;/a&gt;. Onboarding, ease of use, clarity of documentation, intuitiveness, and more all play a role when talking about how a developer uses your service. Andres and I tried to see how far we could get without needing to dive into the docs - which is a great indicator of how easy and intuitive the experience is.&lt;/p&gt;

&lt;p&gt;We started in the console, where a wizard guided us through setting up our first project and database. This was a simple form that required us to come up with a name for everything and select a region to use. After that, it was set up and ready to use! We were even presented with a quickstart for getting connected to the DB we just created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8O_DL2z5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://readysetcloud.s3.amazonaws.com/neon_2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8O_DL2z5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://readysetcloud.s3.amazonaws.com/neon_2.png" alt="Dialog with quick start code" width="792" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this popup, there was a language picker that allowed us to select from psql, Next.js, Prisma, Node.js, Django, Go, and several others. As far as quickstarts go, this one was fast. I put the provided code as-is in a JavaScript file and was able to connect and query data within seconds.&lt;/p&gt;

&lt;p&gt;Since the database was created without any tables or data, my query didn't do anything 😅 so we used the integrated SQL editor in the console to create tables and the Neon CLI to insert data in bulk from a CSV. Overall, we went from idea to queryable data inside of tables in less than 5 minutes!&lt;/p&gt;

&lt;p&gt;The Neon SDK was easy to use as well. Granted, all I did was use raw SQL with the SDK - which is ripe for a &lt;a href="https://owasp.org/www-community/attacks/SQL_Injection"&gt;SQL injection attack&lt;/a&gt; - but it still did exactly what I needed without 30 minutes of digging through docs. In fact, I could have easily used the industry-standard &lt;a href="https://www.npmjs.com/package/pg"&gt;node-postgres package&lt;/a&gt; instead of the one from Neon and communicated with my database the same way I always have. This means if I migrate from another service to this one, all I'd need to do is update the connection string!&lt;/p&gt;
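&lt;p&gt;The fix for that injection risk is parameterized queries. The placeholder syntax differs by driver (&lt;code&gt;$1&lt;/code&gt; in node-postgres, &lt;code&gt;%s&lt;/code&gt; in psycopg2), but the idea is the same everywhere. Here's a sketch using Python's built-in sqlite3 purely to illustrate the pattern:&lt;/p&gt;

```python
import sqlite3

# Parameterized queries keep user input out of the SQL text itself.
# Shown with stdlib sqlite3 (`?` placeholders); node-postgres ($1) and
# psycopg2 (%s) apply the same idea with different placeholder syntax.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Allen",))

malicious = "'; DROP TABLE users; --"
rows = conn.execute(
    "SELECT id FROM users WHERE name = ?", (malicious,)
).fetchall()
print(rows)  # [] -- the input is treated as data, not executable SQL
```

&lt;p&gt;The table survives because the driver binds the value instead of splicing it into the query string - the same guarantee you get from the Neon SDK or node-postgres when you use placeholders.&lt;/p&gt;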

&lt;p&gt;Overall, I was delighted at how easy the developer experience was to get started with this service. It also feels like a breeze to maintain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Elasticity
&lt;/h2&gt;

&lt;p&gt;A service is only as good as its elasticity. If your app has more traffic than a service can handle, you're forced to either throttle requests or go with a service that &lt;em&gt;can&lt;/em&gt; handle it. So we ran some tests to see how Neon could scale with traffic spikes.&lt;/p&gt;

&lt;p&gt;For this benchmark, we ran two tests: one for &lt;em&gt;heavy reads&lt;/em&gt; and another for &lt;em&gt;heavy writes&lt;/em&gt;. We built a small web server inside of a Lambda function and created a function URL for public access to the internet. We created an endpoint that would do a large read, joining across several tables and returning several hundred results, and another for a write operation that created a single row in one table.&lt;/p&gt;

&lt;p&gt;Then, we used the &lt;a href="https://learning.postman.com/docs/collections/performance-testing/testing-api-performance/"&gt;Postman load generator&lt;/a&gt; to push load onto each of these endpoints, running at a sustained 75 requests per second (RPS) for two minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MPmks6fS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://readysetcloud.s3.amazonaws.com/neon_3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MPmks6fS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://readysetcloud.s3.amazonaws.com/neon_3.png" alt="Postman performance test runner" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Granted, this isn't a ridiculously high load, but keep in mind I was testing the free tier, which has limited compute usage. So 9,000 requests over 2 minutes will have to do. And it did well!&lt;/p&gt;

&lt;p&gt;Funnily enough, the only time we got errors in this test was when the Lambda function was trying to keep up with scaling. When we sent burst requests out of nowhere, we'd get back &lt;code&gt;429 - Too Many Requests&lt;/code&gt; status codes from Lambda as it scaled up. But neither the write nor the read tests overwhelmed Neon - even at the lowest tier.&lt;/p&gt;

&lt;p&gt;From what I can tell, there is a little bit of a cold start when Neon is initiating compute. The documentation says the compute instances wind down after 5 minutes of inactivity in the free tier. Running a large read query on an inactive database resulted in about 800ms round-trip time (RTT) for the endpoint. This included the Lambda cold start as well, so overall not a terrible latency. Subsequent runs dropped to about 350ms.&lt;/p&gt;

&lt;p&gt;It appears to me that Neon will cache reads for about 1-2 seconds. After a warm start, running the same query resulted in an 80ms RTT for a couple of seconds, then would spike up to 300ms for a single request, then back down to 80ms again. Given that pattern, I imagine there's built-in caching with a short &lt;a href="https://docs.momentohq.com/cache/learn/courses/cache-concepts/time-to-live"&gt;time-to-live (TTL)&lt;/a&gt;.&lt;/p&gt;
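&lt;p&gt;That latency pattern - fast for a couple of seconds, one slow request, then fast again - is exactly what a read cache with a short TTL produces. Here's a minimal sketch of the concept. This is purely illustrative; Neon hasn't documented its internals at this level of detail:&lt;/p&gt;

```python
import time

# Read-through cache with a short TTL -- the kind of behavior the
# 80ms/300ms latency pattern suggests. Purely illustrative.
class TTLCache:
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self.store = {}  # key -> (value, expires_at)

    def get(self, key, compute):
        value, expires_at = self.store.get(key, (None, 0.0))
        if self.clock() < expires_at:
            return value, "hit"      # served from cache: the ~80ms case
        value = compute()            # full query: the ~300ms case
        self.store[key] = (value, self.clock() + self.ttl)
        return value, "miss"

# With a 2-second TTL, repeated reads are hits until the entry expires:
now = [0.0]
cache = TTLCache(2, clock=lambda: now[0])
print(cache.get("big-read", lambda: "rows")[1])  # miss
print(cache.get("big-read", lambda: "rows")[1])  # hit
now[0] = 3.0
print(cache.get("big-read", lambda: "rows")[1])  # miss again
```

&lt;p&gt;A short TTL like this keeps reads fresh-ish while absorbing bursts of identical queries, which would explain the periodic 300ms spikes we observed.&lt;/p&gt;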

&lt;h2&gt;
  
  
  Serverless-ness
&lt;/h2&gt;

&lt;p&gt;Any time something is branded as serverless, I give it a bit of a skeptical eye roll. Depending on who you believe, &lt;a href="https://dev.to/aws-heroes/i-dont-know-what-serverless-is-anymore-k3a"&gt;serverless has no meaning&lt;/a&gt;. Rather than thinking about serverless as a "thing" I like to approach it as a &lt;a href="https://www.youtube.com/watch?v=GmCn9c_w4ak"&gt;set of capabilities&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Usage-based pricing with no minimums&lt;/li&gt;
&lt;li&gt;No instances to provision or manage&lt;/li&gt;
&lt;li&gt;Always available and reliable&lt;/li&gt;
&lt;li&gt;Instantly ready with a single API call&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Given what we've already discussed about Neon, I feel like it checks the boxes on the serverless test. For pricing, we're only paying for what we're using - the amount of data stored in the service and the amount of compute time we're consuming in our read and write operations. There is no minimum cost and it scales linearly with the amount I use it.&lt;/p&gt;

&lt;p&gt;It took me a few days to land on my opinion for &lt;em&gt;no instances to provision or manage&lt;/em&gt;. There definitely is compute with an on/off status and it's made clearly visible to end users. However, users can't do anything with it. It's managed completely by Neon. If more compute is needed, it is automatically added. I'm not responsible for configuring how it scales, it just does it. I initially didn't like that I was given a peek behind the curtain, but that's really all it is - a peek. I'm not managing anything, Neon is.&lt;/p&gt;

&lt;p&gt;Always available and reliable is a big one for serverless. If I need to use it now - by golly I need it now. Even though compute instances start idling after 5 minutes, they're still available at a moment's notice when traffic comes in. There are no maintenance windows or planned downtime, so this one is checked as another win in my book.&lt;/p&gt;

&lt;p&gt;The onboarding experience for Neon was minimal. I typed in a name for my database and hit the "Go" button and it was immediately available. This feels serverless to me. I don't have to know the amount of traffic I expect or configure auto-scaling groups or decide what operating system I want the compute to run on. Again, this is all managed for me by Neon.&lt;/p&gt;

&lt;p&gt;By my count, that makes this service &lt;strong&gt;definitely serverless&lt;/strong&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;This was a great test of a super cool service. It's nice to see SQL catching up a bit in the serverless game. One of the really cool features I liked about Neon was &lt;a href="https://neon.tech/docs/introduction/branching"&gt;data branching&lt;/a&gt;. They treat database data similar to code in a GitHub repository. You can branch your data and use it in a sandbox environment, like a CI/CD pipeline or a trial run for an ETL job. When you're done with the branch, you can either discard it or merge it back into its parent.&lt;/p&gt;

&lt;p&gt;This is a huge capability for any managed database. It simplifies many workflows and provides an easy, instant way to get a snapshot of data. It follows a copy-on-write principle, meaning data is only copied when it's modified in your branch. If you're just doing reads, it uses a pointer to the parent branch's data. This helps reduce storage costs and increase availability of data when you branch.&lt;/p&gt;

&lt;p&gt;Overall, I think this is a great product and I will be building more with it. It's a nice alternative to something like Aurora Serverless, which &lt;a href="https://aws.amazon.com/rds/aurora/pricing/"&gt;has a usage minimum&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; requires a VPC 😬&lt;/p&gt;

&lt;p&gt;If you're into relational databases and are a fan of Postgres, it's worth a shot.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>database</category>
      <category>dx</category>
    </item>
    <item>
      <title>Show your personality</title>
      <dc:creator>Allen Helton</dc:creator>
      <pubDate>Wed, 17 Apr 2024 14:37:54 +0000</pubDate>
      <link>https://forem.com/aws-heroes/show-your-personality-7f2</link>
      <guid>https://forem.com/aws-heroes/show-your-personality-7f2</guid>
      <description>&lt;p&gt;Yesterday was my birthday. I just turned 34, landing me on the soft side of the "mid-30s". I've always had a hard time with birthdays because in my head I have that feeling of "I'm going to be young forever." But aches, pains, and hangovers are beginning to tell me otherwise.&lt;/p&gt;

&lt;p&gt;When I turned 30, I had someone tell me that your 30's are much more fulfilling than your 20's. I didn't know what that meant at the time, but 4 years into it, I'm starting to understand what he meant. When I was in my 20's, believe it or not, I was very shy. I fully identified as an introvert and wouldn't give you an opinion unless I was specifically asked for it.&lt;/p&gt;

&lt;p&gt;I didn't know much about myself, and I would constantly watch others to see what they did and how they reacted to things. I was learning.&lt;/p&gt;

&lt;p&gt;In my 30's, with my kids starting to walk, talk, and go to school, plus a few &lt;a href="https://dev.to/aws-heroes/be-an-enabler-592c"&gt;nudges from some enablers&lt;/a&gt;, I stopped watching all the time. I knew what I liked and to be honest, &lt;em&gt;I cared a lot less about what other people thought&lt;/em&gt;. Sounds like a harsh statement, but hear me out.&lt;/p&gt;

&lt;p&gt;Once I fully realized the things that made me happy, I wanted to do those things. All the time.&lt;/p&gt;

&lt;p&gt;Before, I would be nervous about being judged or ridiculed and back off doing some of the things that made me happy because &lt;em&gt;I cared too much about what other people thought of me&lt;/em&gt;. I'd act reserved or not do something because I didn't want to stand out at the risk of being embarrassed. Now though, I do it anyway and try to get other people to have some fun with me because &lt;em&gt;it doesn't matter&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;A person passing you in the grocery store might hear you singing along to your favorite song that just came on over the speakers. Sing it! It doesn't matter if they laugh at you! You know what? You probably just brightened their day.&lt;/p&gt;

&lt;p&gt;I like making people laugh. That's one of my strongest personality traits. For too long, I've held that back to just my friends and family because I was too shy to do it to strangers. But something about being in my 30's changed my outlook on life. I know what I like, and by golly I'm gonna do it.&lt;/p&gt;

&lt;h2&gt;
  
  
  How things changed by showing some personality
&lt;/h2&gt;

&lt;p&gt;As I became more and more extroverted with age, other aspects of my life changed considerably as well - from how I produced content to who I was hanging out with. Leaning into your personality has a huge ripple effect.&lt;/p&gt;

&lt;h3&gt;
  
  
  Content creation
&lt;/h3&gt;

&lt;p&gt;The first time I ever published a blog post, I almost threw up with anxiety. The imposter syndrome was &lt;strong&gt;fierce&lt;/strong&gt;. I wrote an article about &lt;a href="https://dev.to/allenheltondev/8-steps-to-facilitating-a-captivating-retrospective-53ij"&gt;the way I ran Scrum retrospectives&lt;/a&gt; for my team. I had come up with a way to get the other introverted developers on my team to talk and I was excited to share that with the world.&lt;/p&gt;

&lt;p&gt;But I was &lt;em&gt;so nervous&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Classic imposter syndrome hit me like a bag of doorknobs. "Why should people listen to you?" "There are tons of articles on retrospectives out there, the world doesn't need another one." "Nobody is going to read this, it's just by some random dev."&lt;/p&gt;

&lt;p&gt;But whether I knew it or not, I wrote something unique. Something with my personality embedded directly in it. Even if someone had written about my exact topic before, they didn't write it the way I did - because I showed my personality.&lt;/p&gt;

&lt;p&gt;It was something I didn't know I was doing at first, then something I intentionally did and was extremely nervous about. The first time I ever wrote &lt;a href="https://dev.to/aws-builders/what-does-the-future-hold-for-serverless-2nlj"&gt;an opinion piece&lt;/a&gt; I remember telling my wife at the dinner table, "we're about to see how the community views my opinion." I was intimidated, to say the least, because it was very bluntly showing my opinions and personality.&lt;/p&gt;

&lt;p&gt;Turns out it was received really well. As was the next opinion piece I wrote. And the next one. And the next one. With each post, my confidence went up as did my understanding of what I liked.&lt;/p&gt;

&lt;p&gt;This led to my passion for content creation. I took a big leap of faith to &lt;a href="https://dev.to/aws-heroes/growing-your-career-in-the-evolving-tech-field-3de"&gt;leave enterprise architecture and become a developer advocate&lt;/a&gt;. I'm thankful every day for that decision. But if I hadn't started showing my personality and begun to build a portfolio of blog posts, newsletters, podcast episodes, and conference talks, I wouldn't have been qualified to make the move.&lt;/p&gt;

&lt;p&gt;Now, I've discovered that personal touches make all the difference. I moved from generic vector graphics to expressive pictures of my face for blog post header images. I switched from generated text-to-speech to recording myself read my blog posts. I try to throw as much of myself into everything I create - not just for fun, but to show that there's a person behind the content, not some random LLM.&lt;/p&gt;

&lt;h3&gt;
  
  
  So many friends!
&lt;/h3&gt;

&lt;p&gt;It's good to be different. People feel connected to others they believe are genuine. I show lots of emotion and make strong claims in the content I create, and as a result I've had lots of people reach out to me for help, to meet up for coffee, or to challenge my opinions. I take people up on offers like these every chance I get.&lt;/p&gt;

&lt;p&gt;I recently released an episode on my podcast about &lt;a href="https://readysetcloud.io/podcast/season-2/6"&gt;the value of meeting in person&lt;/a&gt;, where I have a discussion with my guest about the tremendous value of networking. Making friends and building connections not only helps you professionally, but personally as well.&lt;/p&gt;

&lt;p&gt;I have friends all over the world that I try to visit any time I travel. Not to talk tech, but to chat, experience their culture, and build stronger friendships. If I hadn't shown my personality and started with trust-building in my content, I'd have a completely different experience when travelling - usually a much less fulfilling one!&lt;/p&gt;

&lt;p&gt;Fun observation - because I share so much, many of my new-found friendships have been accelerated! People feel like they know me already when we meet in person, which is a fun and surreal feeling to say the least 😅&lt;/p&gt;

&lt;h2&gt;
  
  
  Growing into it
&lt;/h2&gt;

&lt;p&gt;Showing your personality is all about comfort. Are you comfortable showing who you are to the "open internet?" Now, you don't have to go and tell your deepest darkest secrets, but throwing a little bit of your flair into the mix when writing or recording a video is absolutely the way to get started. Tell a joke! Share a personal story. Throw in something that makes you uniquely you.&lt;/p&gt;

&lt;p&gt;Over time your comfort will grow. Like I said earlier, I like making people laugh. I started by telling funny little anecdotes in my blogs. Now I dress up like a farmer and post stuff like this on social media.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1778788914708332740-325" src="https://platform.twitter.com/embed/Tweet.html?id=1778788914708332740"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;But it was a journey to get here. A combination of age, confidence, and lots of practice have all led to how I create content and interact with people on social media. And let me tell you, &lt;em&gt;I'm having a blast&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Start with what you're comfortable with; it doesn't have to be much. If you're anything like me, you're probably already showing your personality in whatever you're creating and you might not realize it. People like opinions, and it's totally ok if they don't agree with yours. I often talk about &lt;em&gt;diversity of thought&lt;/em&gt;, which is just another way to say gathering a bunch of other opinions before you do something. Hearing the opinions of others and considering them in your decisions makes you more balanced and generally leaves you with a more complete view of things.&lt;/p&gt;

&lt;p&gt;So be funny. Or witty. Or terse. The point is to be &lt;em&gt;you&lt;/em&gt;. You don't need to spend effort to abstract away your personality from the content you create. Share it! Over time, you'll really pick up on what you like and you'll want to do it more and more. And that's why we're all here anyway, isn't it?&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>personalgrowth</category>
      <category>career</category>
    </item>
    <item>
      <title>How to trigger events every 30 seconds in AWS</title>
      <dc:creator>Allen Helton</dc:creator>
      <pubDate>Wed, 10 Apr 2024 13:09:18 +0000</pubDate>
      <link>https://forem.com/aws-heroes/how-to-trigger-events-every-30-seconds-in-aws-3957</link>
      <guid>https://forem.com/aws-heroes/how-to-trigger-events-every-30-seconds-in-aws-3957</guid>
      <description>&lt;p&gt;Software engineering is one of those industries that likes to periodically humble you. You think you know exactly how to build something, come up with a plan &lt;a href="https://dev.to/allenheltondev/the-beginner-s-guide-to-software-estimation-3bpp"&gt;and an estimate&lt;/a&gt;, sit down to start writing some code, and BOOM. It does not at all work like you thought it did.&lt;/p&gt;

&lt;p&gt;That happened to me the other day.&lt;/p&gt;

&lt;p&gt;I was building a &lt;a href="https://www.gomomento.com/blog/yes-we-built-a-multiplayer-squirrel-themed-replica-of-flappy-bird-on-momento" rel="noopener noreferrer"&gt;basic multiplayer puzzle game&lt;/a&gt; that required all the players to start playing at the same time. The game took roughly 20 seconds to play, so I figured I'd send out a &lt;a href="https://dev.to/aws-heroes/websockets-grpc-mqtt-and-sse-which-real-time-notification-method-is-for-you-52n7"&gt;broadcast push notification&lt;/a&gt; every 30 seconds to start the next round. I've used the EventBridge Scheduler a bunch of times over the past year, this sounded like an easy task. I just needed to &lt;em&gt;create a recurring schedule every 30 seconds that triggers a Lambda function to randomize some data and send the broadcast&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;It wasn't that easy. The smallest interval you can set on a recurring job via the scheduler, or any cron trigger in AWS for that matter, is &lt;strong&gt;1 minute&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I asked &lt;a href="https://aws.amazon.com/q/" rel="noopener noreferrer"&gt;Amazon Q&lt;/a&gt; what my options were and had mixed feelings about what it said.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://readysetcloud.s3.amazonaws.com/sub_minute_schedules_1.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fsub_minute_schedules_1.png" alt="Mix of accurate and inaccurate answers from Amazon Q"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As I just mentioned, the first option Q gave me is wrong. If you put &lt;code&gt;rate(30 seconds)&lt;/code&gt; into a schedule, you get an error saying the smallest rate it can do is 1 minute. So that option was out. Option 2 with Step Functions was intriguing and we'll talk more about that one in a second.&lt;/p&gt;

&lt;p&gt;Option 3 where you recursively call a Lambda function seems like a terrible idea. Not only would you need to pay for the 30-second sleep time in the execution, but this method is also imprecise. You have your variable time of compute for the logic, then a sleep time to bring you up to 30 seconds. What's more, this puts you in a loop, which &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/invocation-recursion.html" rel="noopener noreferrer"&gt;AWS detects and shuts down automatically&lt;/a&gt;. You can request to turn it off, but the docs make me feel like you really shouldn't.&lt;/p&gt;

&lt;p&gt;The last option given to us by Q was triggering the Lambda function off a database stream - which in itself sounds like a reasonable idea, except you'd have to have a mechanism that &lt;em&gt;writes to the database&lt;/em&gt; every 30 seconds, which is what we're trying to do in the first place. So that doesn't solve our problem at all. This leaves us with Step Functions as our only real option according to Amazon Q. Let's take a look at that.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Before we continue, if you're here looking for the code you can &lt;a href="https://github.com/allenheltondev/sub-minute-serverless-schedules" rel="noopener noreferrer"&gt;jump straight to GitHub&lt;/a&gt; and grab it there.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Step Functions for 30-second timers
&lt;/h2&gt;

&lt;p&gt;Step Functions itself doesn't allow you to trigger things at sub-minute intervals either; it's constrained to the same 1-minute minimum. However, what Step Functions has that other services do not is the &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-wait-state.html" rel="noopener noreferrer"&gt;wait state&lt;/a&gt;. You can tell Step Functions to run your Lambda function, wait 30 seconds, then run the Lambda function again.&lt;/p&gt;
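&lt;p&gt;As a sketch, the Amazon States Language definition for that pattern could look something like this - the state and function names here are illustrative, not the exact ones from my project:&lt;/p&gt;

```json
{
  "Comment": "Run the broadcast, wait 30 seconds, run it again",
  "StartAt": "Broadcast",
  "States": {
    "Broadcast": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": { "FunctionName": "${BroadcastFunction}" },
      "Next": "Wait30Seconds"
    },
    "Wait30Seconds": {
      "Type": "Wait",
      "Seconds": 30,
      "Next": "BroadcastAgain"
    },
    "BroadcastAgain": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": { "FunctionName": "${BroadcastFunction}" },
      "End": true
    }
  }
}
```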

&lt;p&gt;This is what I did in my first iteration and received a wealth of mixed opinions on Twitter 😬&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1761138903874928761-962" src="https://platform.twitter.com/embed/Tweet.html?id=1761138903874928761"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;The premise was simple. I would trigger this state machine to run every minute. It would immediately execute my Lambda function that did the broadcast, wait 30 seconds, then run the Lambda function again. Easy enough, right?&lt;/p&gt;

&lt;p&gt;After being told time and time again that this solution seemed really expensive, I wanted to see how much it cost to run for a month. I was using standard Step Functions workflows, which are billed by the number of state transitions, as opposed to Lambda and &lt;a href="https://aws.amazon.com/step-functions/pricing/" rel="noopener noreferrer"&gt;express workflows&lt;/a&gt;, which are billed by execution time and memory.&lt;/p&gt;

&lt;p&gt;Standard workflows cost $0.000025 per state transition. As far as costs go in AWS serverless services, this one is generally viewed as expensive. I don't disagree, especially if you make heavy use of Step Functions on a monthly basis. But what you're paying for here is much more than state transitions; you're also paying for top-of-the-line visibility into your orchestrated workflows and significantly easier troubleshooting. It's an easy trade-off for me.&lt;/p&gt;

&lt;p&gt;My workflow above has three states, &lt;em&gt;but five state transitions&lt;/em&gt;! We are always charged a transition from "start" to our first state and from our last state to "end". Triggering the workflow every minute for 24 hours (each execution covers two 30-second broadcasts), we can calculate the math like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;5 transitions x 60 minutes x 24 hours x $0.000025 = $0.18 per day&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Okay, $0.18 per day isn't that bad. That's roughly $66 per year for the ability to run something every 30 seconds. But I'm confident we can do better.&lt;/p&gt;
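&lt;p&gt;As a sanity check, the arithmetic is easy to script (a quick sketch; the per-transition price comes from the pricing page linked above):&lt;/p&gt;

```javascript
// Daily cost of a standard workflow with 5 state transitions, triggered once per minute
const pricePerTransition = 0.000025; // USD per state transition, standard workflows
const transitionsPerExecution = 5;   // start -> 3 states -> end
const executionsPerDay = 60 * 24;    // one execution per minute, all day

const dailyCost = transitionsPerExecution * executionsPerDay * pricePerTransition;
const yearlyCost = dailyCost * 365;

console.log(dailyCost.toFixed(2));  // "0.18"
console.log(yearlyCost.toFixed(2)); // "65.70"
```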

&lt;h3&gt;
  
  
  Minimizing state transitions
&lt;/h3&gt;

&lt;p&gt;We know that state transitions translate directly into cost with Step Functions standard workflows. We also know that every time you start a workflow you're charged 2 transitions for the start and end states. So what if we just added more wait states and function calls?&lt;/p&gt;

&lt;p&gt;I started down this path and &lt;em&gt;very&lt;/em&gt; quickly realized it was not a good idea. It's probably not very hard to see why based on the picture below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fsub_minute_schedules_2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fsub_minute_schedules_2.png" alt="Workflow diagram alternating between wait states and executions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This workflow is a nightmare to maintain. It's the same function being repeated over and over again with the same wait state between each execution. If I wanted to extend it with more executions, I'd add more states. If I ever changed the name of the function, then I'm in for a lot of unnecessary work updating that definition in X number of spots.&lt;/p&gt;

&lt;p&gt;However, this did cut down on the number of state transitions. Instead of triggering the workflow every 60 seconds, I could now trigger it every 2 minutes, eliminating half of the start/end transitions every day. I liked that.&lt;/p&gt;

&lt;p&gt;Doing the math, it ended up saving $.02 per day. What I needed was to queue up something like that over the course of an hour, but I wasn't about to maintain a workflow that used the same function + wait combo 120 times in it. So I made it dynamic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dynamic loops
&lt;/h3&gt;

&lt;p&gt;My goal here was to minimize the number of times I ran the workflow each day so I could keep the number of state transitions to a minimum.&lt;/p&gt;

&lt;p&gt;Turns out I can use a &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-map-state.html" rel="noopener noreferrer"&gt;Map state&lt;/a&gt; to loop over the function execution and wait states. &lt;strong&gt;A &lt;code&gt;Map&lt;/code&gt; state does not charge you a state transition when it loops back to the start&lt;/strong&gt;. This is perfect! Exactly what we needed to reduce the number of start/end transitions. It's also a much, much simpler workflow to maintain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fsub_minute_schedules_3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fsub_minute_schedules_3.png" alt="Workflow diagram using map state"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But there are a few things we need to keep in mind.&lt;/p&gt;

&lt;h4&gt;
  
  
  Maps aren't "for loops"
&lt;/h4&gt;

&lt;p&gt;As much as I'd like to say "run this loop 100 times," in a &lt;code&gt;Map&lt;/code&gt; state, it doesn't work that way. These states iterate over an array of values, passing the current iteration value into the execution. Think of it like a &lt;code&gt;foreach&lt;/code&gt; rather than a &lt;code&gt;for&lt;/code&gt;. To get around this, we need to pass in a dummy array with N number of values in it, with N being the number of executions we want to run. For our workflow above, that means an initial input state like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "iterations": [1,2,3,4,5,6,7,8,9,10]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This would result in 10 executions of the &lt;code&gt;Map&lt;/code&gt; state. We can include as many iterations as we want here to reduce costs with one exception. More on that in a minute.&lt;/p&gt;

&lt;h4&gt;
  
  
  Threading
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;Map&lt;/code&gt; states are concurrent execution blocks. The Step Functions team recommends going &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/concepts-asl-use-map-state-inline.html" rel="noopener noreferrer"&gt;up to 40 concurrent executions&lt;/a&gt; in your &lt;code&gt;Map&lt;/code&gt; states for normal workflows. But we don't want multi-threaded execution in this workflow. We want the iterations to go one after another to give us that &lt;em&gt;every 30 second&lt;/em&gt; behavior.&lt;/p&gt;

&lt;p&gt;To do this, we must set the &lt;code&gt;MaxConcurrency&lt;/code&gt; property to 1 so it behaves synchronously. Without this, our state machine would be starting games all over the place!&lt;/p&gt;
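&lt;p&gt;Putting those two pieces together, a sketch of the &lt;code&gt;Map&lt;/code&gt; portion of the definition (again with illustrative names) could look like this:&lt;/p&gt;

```json
{
  "StartAt": "BroadcastLoop",
  "States": {
    "BroadcastLoop": {
      "Type": "Map",
      "ItemsPath": "$.iterations",
      "MaxConcurrency": 1,
      "Iterator": {
        "StartAt": "Broadcast",
        "States": {
          "Broadcast": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": { "FunctionName": "${BroadcastFunction}" },
            "Next": "Wait30Seconds"
          },
          "Wait30Seconds": { "Type": "Wait", "Seconds": 30, "End": true }
        }
      },
      "End": true
    }
  }
}
```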

&lt;h4&gt;
  
  
  Execution event history service quota
&lt;/h4&gt;

&lt;p&gt;Did you know there is a service quota for a state machine execution that &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/bp-history-limit.html" rel="noopener noreferrer"&gt;limits the number of history events&lt;/a&gt; it can have? Me neither until I ran into it the first time.&lt;/p&gt;

&lt;p&gt;You can have up to 25,000 entries in the execution event history for a single state machine execution. Before you ask - no, that &lt;strong&gt;does not mean state transitions&lt;/strong&gt;. These are actions the Step Functions service takes while executing your workflow. Take a look at the entries for our state machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fsub_minute_schedules_4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fsub_minute_schedules_4.png" alt="Example entries for state machine execution"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that each &lt;code&gt;Map&lt;/code&gt; iteration adds 3 entries to the execution history. On top of that, each Lambda function execution adds 5 entries, and the wait state adds 2. So each &lt;code&gt;Map&lt;/code&gt; iteration for our specific workflow adds 10 entries. There are also entries for starting and stopping the execution and the &lt;code&gt;Map&lt;/code&gt; state itself. To figure out the maximum number of loops we can pass into our map, we need to take all of these things into consideration.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;(25,000 max entries - 5 start/stop entries) / 10 loop entries per iteration = 2499 iterations&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So we can safely loop 2499 times without exceeding the service quota. At 30 seconds an iteration, that will get us through a little over 20.5 hours. To give it a little buffer and to make nice, round numbers we can safely run this state machine for 12 hours. This would result in a starting payload like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "iterations": [1,2,3,4,5...1440]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will result in our &lt;code&gt;Map&lt;/code&gt; state looping 1440 times over the course of 12 hours, putting us well within our execution event history service quota and reducing the number of start/end state transitions by 2,876 every day (from 2,880 down to 4), or roughly $0.07 - a &lt;strong&gt;39% cost reduction&lt;/strong&gt;!&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment and configuration
&lt;/h3&gt;

&lt;p&gt;Now that we understand conceptually how this works, it would be nice to look at an example so we can understand the moving parts. If we want our state machine to run every twelve hours, we can add a &lt;code&gt;ScheduleV2&lt;/code&gt; trigger in our IaC along with the &lt;em&gt;iterations&lt;/em&gt; array we defined earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;HeartbeatImplicitStateMachine:
  Type: AWS::Serverless::StateMachine
  Properties:
    Type: STANDARD
    DefinitionUri: state machines/heartbeat-implicit.asl.json
    DefinitionSubstitutions:
      LambdaInvoke: !Sub arn:${AWS::Partition}:states:::lambda:invoke
      HeartbeatFunction: !GetAtt HeartbeatFunction.Arn
    Events:
      StartExecution:
        Type: ScheduleV2
        Properties:
          ScheduleExpression: rate(12 hours)
          ScheduleExpressionTimezone: America/Chicago
          Input: "{\"iterations\": [1,2,3,4,5...1440]}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the real implementation, you would include all the numbers in the array and not use &lt;em&gt;5...1440&lt;/em&gt; like I did. I did it that way for brevity. Nobody wants to read an array with 1440 items in it 😂&lt;/p&gt;
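&lt;p&gt;If you'd rather not type the array by hand either, a couple of lines of script can generate the stringified input for you (a quick sketch):&lt;/p&gt;

```javascript
// Build the "iterations" input for a 12-hour run at 30-second intervals
const iterations = Array.from({ length: 1440 }, (_, i) => i + 1);
const input = JSON.stringify({ iterations });

// Paste the result into the schedule's Input property
console.log(input);
```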

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-property-function-schedulev2.html" rel="noopener noreferrer"&gt;This type of trigger in SAM&lt;/a&gt; sets up an EventBridge Schedule with the rate you configure and passes in whatever stringified JSON you have for input to the execution. Depending on your use case, you can change the rate and iteration count to whatever you want - as long as it doesn't surpass the execution event history quota.&lt;/p&gt;

&lt;p&gt;There is no magic with the &lt;a href="https://github.com/allenheltondev/sub-minute-serverless-schedules/blob/main/state%20machines/heartbeat-implicit.asl.json" rel="noopener noreferrer"&gt;state machine itself&lt;/a&gt;. It is looping over the function execution and wait state as many times as we tell it to.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced usage of this pattern
&lt;/h2&gt;

&lt;p&gt;This is a great pattern with straightforward implementation if you want to build something like a heartbeat that runs at a consistent interval around the clock. But what if you don't want it to run all the time?&lt;/p&gt;

&lt;p&gt;The use case I had for my game fit this description. I wanted it to trigger every 30 seconds between 8 am and 6 pm every day, not 24/7. So I needed to come up with a way to turn the trigger on/off on a regular interval. This made the problem feel much harder.&lt;/p&gt;

&lt;p&gt;To get the behavior we want, we need to &lt;em&gt;turn on and off the recurring EventBridge schedule for our state machine&lt;/em&gt; at specific times. So I built another Step Function workflow that simply toggles the state of the schedule given an input.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fsub_minute_schedules_5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fsub_minute_schedules_5.png" alt="Schedule Toggle State Machine"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This state machine is triggered by two EventBridge schedules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every day at 8 am that &lt;em&gt;enables&lt;/em&gt; the 30-second timer state machine schedule&lt;/li&gt;
&lt;li&gt;Every day at 6 pm that &lt;em&gt;disables&lt;/em&gt; the 30-second timer state machine schedule&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To change the behavior of each execution, I pass in either &lt;code&gt;ENABLED&lt;/code&gt; or &lt;code&gt;DISABLED&lt;/code&gt; into the input parameters in the schedule.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EnableDisableScheduleStateMachine:
  Type: AWS::Serverless::StateMachine
  Properties:
    Type: EXPRESS
    DefinitionUri: state machines/enable-disable-schedule.asl.json
    DefinitionSubstitutions:
      GetSchedule: !Sub arn:${AWS::Partition}:states:::aws-sdk:scheduler:getSchedule
      UpdateSchedule: !Sub arn:${AWS::Partition}:states:::aws-sdk:scheduler:updateSchedule
      ScheduleName: !Ref HeartbeatSchedule
    Events:
      Enable:
        Type: ScheduleV2
        Properties:
          ScheduleExpression: cron(0 8 * * ? *)
          ScheduleExpressionTimezone: America/Chicago
          Input: "{\"state\":\"ENABLED\"}"
      Disable:
        Type: ScheduleV2
        Properties:
          ScheduleExpression: cron(0 18 * * ? *)
          ScheduleExpressionTimezone: America/Chicago
          Input: "{\"state\":\"DISABLED\"}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
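&lt;p&gt;For reference, here is a sketch of what the toggle state machine's definition could look like. I'm inferring the Scheduler SDK field names here, so treat it as a starting point and check the actual definition in the GitHub repo linked earlier:&lt;/p&gt;

```json
{
  "StartAt": "GetSchedule",
  "States": {
    "GetSchedule": {
      "Type": "Task",
      "Resource": "arn:aws:states:::aws-sdk:scheduler:getSchedule",
      "Parameters": { "Name": "${ScheduleName}" },
      "ResultPath": "$.schedule",
      "Next": "UpdateScheduleState"
    },
    "UpdateScheduleState": {
      "Type": "Task",
      "Resource": "arn:aws:states:::aws-sdk:scheduler:updateSchedule",
      "Parameters": {
        "Name": "${ScheduleName}",
        "State.$": "$.state",
        "ScheduleExpression.$": "$.schedule.ScheduleExpression",
        "ScheduleExpressionTimezone.$": "$.schedule.ScheduleExpressionTimezone",
        "FlexibleTimeWindow.$": "$.schedule.FlexibleTimeWindow",
        "Target.$": "$.schedule.Target"
      },
      "End": true
    }
  }
}
```

&lt;p&gt;Note that &lt;code&gt;UpdateSchedule&lt;/code&gt; replaces the schedule's definition rather than patching it, which is why the workflow fetches the schedule first and echoes the existing fields back alongside the new state.&lt;/p&gt;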



&lt;p&gt;By doing this, I also need to change the iterations of my 30-second state machine since it isn't running 24/7 anymore. Since it's only running for 10 hours, I can update the schedule to &lt;code&gt;rate(5 hours)&lt;/code&gt;, shrink my &lt;em&gt;iterations&lt;/em&gt; array from 1440 entries to 600, and I'm all set!&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;There is more than one way to trigger tasks in sub-minute intervals. In fact, there is an entire chapter on the concept in the &lt;a href="https://www.manning.com/books/serverless-architectures-on-aws-second-edition" rel="noopener noreferrer"&gt;Serverless Architectures on AWS, Second Edition&lt;/a&gt; book by Peter Sbarski, Yan Cui, and Ajay Nair. I highly recommend that book in general, not just for the content on scheduling events.&lt;/p&gt;

&lt;p&gt;Everything is a trade-off. You'll often find scenarios like this where cost and complexity become your trade-off. Figure out a balance to make things maintainable over time. Human time almost always costs more than the bill you receive at the end of the month. Time spent troubleshooting and debugging things that are overly complex results in lost opportunity costs, meaning you aren't innovating when you should be.&lt;/p&gt;

&lt;p&gt;The solution I outlined above balances cost and "workarounds". I had to use a workaround to get the Step Functions &lt;code&gt;Map&lt;/code&gt; state to behave like a for loop. I had to work around not being able to schedule tasks more frequently than 1 minute. I worked around scheduling tasks over different parts of the day. All of these things add up to make for an unintuitive design if you're looking at it for the first time. Somebody will look at all the components that went into it and say "this is just to trigger an event every 30 seconds? It's way too much!"&lt;/p&gt;

&lt;p&gt;In the end, it worked out for me and my use case. I was able to save some money while getting the features I was looking for.&lt;/p&gt;

&lt;p&gt;Do you have another way to do this? What have you found that works for you? Let me know on &lt;a href="https://twitter.com/AllenHeltonDev" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/allenheltondev/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, I'd love to hear about it!&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>stepfunctions</category>
    </item>
    <item>
      <title>I didn't know it did that - Postman Postbot</title>
      <dc:creator>Allen Helton</dc:creator>
      <pubDate>Wed, 03 Apr 2024 15:31:14 +0000</pubDate>
      <link>https://forem.com/aws-heroes/i-didnt-know-it-did-that-postman-postbot-3e8b</link>
      <guid>https://forem.com/aws-heroes/i-didnt-know-it-did-that-postman-postbot-3e8b</guid>
      <description>&lt;p&gt;Last week I was updating the &lt;a href="https://www.postman.com/gomomento/workspace/momento-http-api/overview" rel="noopener noreferrer"&gt;public workspace for Momento&lt;/a&gt; in Postman when I saw a little helmet icon in the corner of my screen. I hadn't seen that icon before, and being curious by nature, I stopped what I was doing and clicked on it. Much to my surprise, it opened up &lt;a href="https://www.postman.com/product/postbot/" rel="noopener noreferrer"&gt;Postbot&lt;/a&gt; - the native AI helper inside of Postman.&lt;/p&gt;

&lt;p&gt;I've been a Postman user for a long time, and since Postbot is relatively new, I always forget it exists. So I took this as an opportunity to procrastinate on my work task and do a bit of exploration. I wanted to see how good it actually was. I knew Postbot was supposed to help you build your tests and answer questions about your requests, but I didn't know how involved I could get or how much time it would save me in real life. I started off by asking it to &lt;a href="https://www.linkedin.com/posts/allenheltondev_i-cant-get-over-how-awesome-this-is-i-was-activity-7178813053587664896-q3-X" rel="noopener noreferrer"&gt;visualize some data for me&lt;/a&gt; by turning a JSON list into a bar graph grouped by location. Then I followed up by asking it to create a JSON schema for the response and using it in the documentation and &lt;a href="https://www.linkedin.com/posts/allenheltondev_day-two-of-what-can-the-postman-postbot-activity-7179218677093842945-fiHL" rel="noopener noreferrer"&gt;tests for validation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It did surprisingly well on both tasks, so it was time to really put it through its paces. I gave &lt;a href="https://twitter.com/andmoredev" rel="noopener noreferrer"&gt;Andres Moreno&lt;/a&gt;, my &lt;a href="https://dev.to/aws-heroes/be-an-enabler-592c"&gt;enabler friend&lt;/a&gt; who loves APIs as much as I do, and &lt;a href="https://twitter.com/SilverJaw82" rel="noopener noreferrer"&gt;Sterling Chin&lt;/a&gt;, engineering manager over Postbot, a call and we hopped on a live stream in the &lt;a href="https://believeinserverless.com" rel="noopener noreferrer"&gt;Believe in Serverless community&lt;/a&gt; to figure out exactly how far we could push it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Before I continue, I want to share this is &lt;strong&gt;not&lt;/strong&gt; a paid endorsement. I'm just really excited about a new-ish feature in an app I know and use on a daily basis.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Postbot does well
&lt;/h2&gt;

&lt;p&gt;In the live stream, we went through several key areas of Postman, putting Postbot to the test to see just how much time it can save developers and how much work you can offload to it.&lt;/p&gt;

&lt;h4&gt;
  
  
  Test writing
&lt;/h4&gt;

&lt;p&gt;I can't tell you how much time I've spent in Postman over the years writing tests. Not necessarily difficult or tricky ones, but simple validation tests or tests that set chaining variables to flow into the next request. It's a lot of typing the same thing over and over again with minor changes in each request. With Postbot, &lt;em&gt;that all changes&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I found great success typing phrases like "create a test that validates there is at least one record with an id of X" or "verify the response does not have any records where the person is older than Y." Postbot would look at the response of the request I'm on, get context of the schema, then write JavaScript and tests that do exactly what I asked. If there was a problem with the test or I needed to make an update to the criteria, conversation history is preserved so I could follow up with "fix that test to verify no people are older than Z." The test is updated in place without needing to touch anything 🔥&lt;/p&gt;
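&lt;p&gt;To make that concrete, here's roughly the shape of the JavaScript Postbot writes for prompts like those. The payload and the tiny stand-in for Postman's &lt;code&gt;pm&lt;/code&gt; sandbox are my own illustrative assumptions so the snippet runs on its own; inside Postman, &lt;code&gt;pm&lt;/code&gt; is provided for you:&lt;/p&gt;

```javascript
// Stand-in for Postman's sandbox `pm` object so this snippet runs outside Postman.
// The response payload here is hypothetical, shaped like the data being tested.
const pm = {
  response: {
    json: () => ({ records: [{ id: 42, age: 31 }, { id: 7, age: 28 }] })
  },
  test: (name, fn) => { fn(); console.log(`passed: ${name}`); },
  expect: (actual) => ({
    to: {
      equal: (expected) => { if (actual !== expected) throw new Error(`expected ${expected}, got ${actual}`); },
      be: { above: (n) => { if (!(actual > n)) throw new Error(`expected ${actual} > ${n}`); } }
    }
  })
};

// Roughly what Postbot generates for "validate there is at least one record with an id of 42"
pm.test("response has at least one record with id 42", () => {
  const matches = pm.response.json().records.filter((r) => r.id === 42);
  pm.expect(matches.length).to.be.above(0);
});

// And for "verify the response has no records where the person is older than 65"
pm.test("no person is older than 65", () => {
  const tooOld = pm.response.json().records.filter((r) => r.age > 65);
  pm.expect(tooOld.length).to.equal(0);
});
```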

&lt;p&gt;It also has autocomplete based on the name of your test! Similar to inline coding assistants like GitHub Copilot and Amazon CodeWhisperer, all you need to do is give your test an intuitive name and the code for it shows up automatically. Hit &lt;em&gt;Tab&lt;/em&gt; to accept the suggestion and you're done!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fpostman_postbot_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fpostman_postbot_1.png" alt="Suggested implementation of a test from Postbot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;Schema creation and validation&lt;/h4&gt;

&lt;p&gt;While I'm a firm believer in &lt;a href="https://dev.to/aws-heroes/seriously-write-your-api-spec-first-4hji"&gt;API-first development&lt;/a&gt; and fully building your API spec before anything else, I do realize many people prefer a code-first approach and might not have a strongly defined specification at all. Turns out that might not be a problem anymore.&lt;/p&gt;

&lt;p&gt;You can tell Postbot to "create a JSON schema for the response and add it to the documentation with meaningful definitions for each property" followed by "now add a test that validates the response against this schema" to fully document and validate your payloads. I was genuinely impressed by the quality of the produced schema and how quickly I could get a starting point for meaningful documentation. If you have flexible objects in your schemas defined by a &lt;code&gt;oneOf&lt;/code&gt;, &lt;code&gt;anyOf&lt;/code&gt;, or &lt;code&gt;allOf&lt;/code&gt;, you &lt;em&gt;might&lt;/em&gt; not get the best results right now, but I assume that will improve over time as LLMs continue to get better.&lt;/p&gt;
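&lt;p&gt;To make that concrete, here's roughly the shape of schema it can generate for a simple person payload, with a minimal manual check standing in for the validation test so this sketch runs in plain Node. In Postman, the follow-up test is typically a one-liner built on Postman's JSON Schema assertion; the property names here are illustrative.&lt;/p&gt;

```javascript
// Roughly the kind of schema Postbot generated, with meaningful descriptions
// per property. The manual check below stands in for Postman's built-in
// JSON Schema assertion so the sketch runs standalone. Names are illustrative.
const schema = {
  type: "object",
  required: ["id", "name", "age"],
  properties: {
    id:   { type: "number", description: "Unique identifier for the person" },
    name: { type: "string", description: "The person's display name" },
    age:  { type: "number", description: "Age in years" }
  }
};

const body = { id: 7, name: "Ada", age: 34 };

// Minimal required-field and type check (Postman uses a full validator)
const valid = schema.required.every(k =>
  typeof body[k] === schema.properties[k].type
);

console.log("schema valid:", valid); // schema valid: true
```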

&lt;p&gt;The main takeaway here is to use it as a starting point and make sure you double-check the schemas for minor details.&lt;/p&gt;

&lt;h4&gt;Visualizations&lt;/h4&gt;

&lt;p&gt;I always forget the &lt;a href="https://learning.postman.com/docs/sending-requests/response-data/visualizer/" rel="noopener noreferrer"&gt;visualizer&lt;/a&gt; exists. When I do remember it's a thing, I always struggle to get it working properly. It's super flexible in what it can do, but when something is flexible that usually means it's complex, too.&lt;/p&gt;

&lt;p&gt;If you want to take your JSON payloads and turn them into meaningful graphs and charts, Postbot can do that for you in seconds. Not only can you ask it for exactly what you want, but you also have a wizard that will walk you through options if you don't know what you're looking for. Bar charts, line graphs, and pie charts are some of the options you can generate with a single command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fpostman_postbot_2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fpostman_postbot_2.png" alt="Pie graph visualization in Postman"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I did find it difficult to update the visualizations in chained commands. So if you ask it to "create a bar chart of the results" followed by "now group it by city/state" then "update the colors to various pastels", it struggles a bit. Your best bet is to figure out exactly what you want and ask it in a single, detailed command.&lt;/p&gt;
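&lt;p&gt;For the curious, the scripts Postbot writes for the visualizer pair an HTML chart template with data shaping like the following. This sketch is plain Node so it runs standalone; in Postman, data like this is handed to &lt;code&gt;pm.visualizer.set&lt;/code&gt; along with a template. The records are made up.&lt;/p&gt;

```javascript
// The data shaping behind a "bar chart of results grouped by city" request,
// sketched in plain Node. In Postman, Postbot hands rolled-up data like this
// to pm.visualizer.set along with a chart template. Records are made up.
const records = [
  { city: "Dallas", sales: 5 },
  { city: "Austin", sales: 3 },
  { city: "Dallas", sales: 2 }
];

// Roll the records up into one bar per city
const byCity = records.reduce((totals, r) => {
  totals[r.city] = (totals[r.city] || 0) + r.sales;
  return totals;
}, {});

console.log(byCity); // { Dallas: 7, Austin: 3 }
```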

&lt;h4&gt;Documentation&lt;/h4&gt;

&lt;p&gt;Ah, documentation. The thing developers hate writing yet get so angry about when it doesn't exist. Meaningful documentation goes a long way with APIs, and it's much more involved than simple schema definition. Explaining context and building a story around use cases turns flat, code-generated documentation into an engaging, effective tool for developers. This can be hard to do for many of us! While developers tend to be creative, it's not often the &lt;a href="https://dev.to/aws-builders/the-mighty-metaphor-your-new-secret-weapon-in-tech-30a2"&gt;storytelling type of creativity&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fpostman_postbot_3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Freadysetcloud.s3.amazonaws.com%2Fpostman_postbot_3.png" alt="Generated documentation for a request"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since Postbot uses generative AI to come up with responses, you can seed it with anything and have it come up with a consistent story across all of your requests. To me, this is probably the &lt;em&gt;most significant yet underused capability of Postbot&lt;/em&gt;. Using it to elevate your documentation is a game-changer, and I expect it will open up a swath of possibilities down the road.&lt;/p&gt;

&lt;h4&gt;Fixing things&lt;/h4&gt;

&lt;p&gt;I'll be honest, when I saw "fix tests" as a suggestion in Postbot, I glanced right over it. Something as generic as "fix it" doesn't typically fill me with optimism - it sounds too vague to do anything meaningful.&lt;/p&gt;

&lt;p&gt;But it works!&lt;/p&gt;

&lt;p&gt;Per Sterling's suggestion, I intentionally wrote broken code and ran a request. After it broke, I told Postbot "fix the tests". It was able to analyze my code, find my error (I changed a variable from &lt;code&gt;responseData&lt;/code&gt; to &lt;code&gt;responseData2&lt;/code&gt;), and update it to use the correct variable name. So it worked for syntax/code errors!&lt;/p&gt;

&lt;p&gt;I also tried another test that's all too real. I wrote a status code test to verify the response was a &lt;code&gt;201 Created&lt;/code&gt;. My endpoint actually returns a 200 upon success instead of a 201. When my test failed, I asked Postbot to fix it and it updated my test to check for a &lt;code&gt;200 OK&lt;/code&gt; instead of a 201. I've had so many copy/paste test failures like this over the years, it's really cool to see something that can find and fix them for me automatically.&lt;/p&gt;

&lt;h2&gt;What it does not do&lt;/h2&gt;

&lt;p&gt;One of the fun things I did on the live stream was simply see where my assumptions were wrong. I was tempted many times to ask Sterling "what would happen if I..." but decided to just try it myself. I learned real quick that several of my assumptions were flat-out wrong.&lt;/p&gt;

&lt;h4&gt;Work with API definitions&lt;/h4&gt;

&lt;p&gt;Postbot doesn't do anything with API definitions. I tried asking it things like "Does this specification meet OAS best practices?" and "Update the endpoint descriptions to be more meaningful" to no avail. It doesn't take the spec as context; for now, it seems to be limited to requests.&lt;/p&gt;

&lt;h4&gt;Create or work with other resources&lt;/h4&gt;

&lt;p&gt;You can't ask Postbot to create a new collection for you. Or a request. It's intentionally not a creation tool. It's intended to be used as a pair programmer working on a single request at a time. With that in mind, you also can't ask it to update a request you don't currently have open on screen. The conversation history in Postbot is scoped to a specific request, so keep your conversations focused on the tab in front of you.&lt;/p&gt;

&lt;h4&gt;Undo&lt;/h4&gt;

&lt;p&gt;While I was trying to update my visualization to render bar graphs in various pastel colors for Easter, something went wrong and put my request in a bad state. My tests were full of syntax errors and the visualization completely broke. I had definitely stumbled across a bug, which is fine - everything has bugs. But the part that irked me was that there is no undo button. I pressed ctrl + Z (or command + Z for you Mac folk) to get back into a working state, but nothing happened. This led me to realize that &lt;em&gt;everything Postbot does is destructive&lt;/em&gt;. Kinda scary phrased that way, but technically it's true. So be careful when prompting, because it's possible to lose your work.&lt;/p&gt;

&lt;h4&gt;Answer random questions&lt;/h4&gt;

&lt;p&gt;Postbot is designed to help you build things inside of Postman. It knows how to write and fix tests, create documentation, and build rich visualizations. It's not meant for answering things like "What is the weather today?" It will be the first to tell you that! Ask it a random question and you'll get back an "I can only help you with Postman and API related queries" response. But if you ask it questions about APIs, like "What status code should I return on a POST?" or "How should I implement idempotency on a PUT?", you'll get meaningful answers and links to documentation or blog posts on Postman's website.&lt;/p&gt;

&lt;p&gt;I had incorrectly assumed that Postbot only worked when asking it to change something on a request. Knowing it can also be a great reference for API-related questions is a huge win!&lt;/p&gt;

&lt;h2&gt;What I learned&lt;/h2&gt;

&lt;p&gt;Postbot is a big deal. If you haven't used it before, it's easy to shrug off as Postman's answer to the AI boom. Now having used it for real, I realize that this can fundamentally change how I use Postman. It not only has me moving faster when it comes to API creation, but it has me creating more complete, thorough tests and documentation.&lt;/p&gt;

&lt;p&gt;Some "insider information" I discovered on the stream was that Postbot is powered by OpenAI. I wasn't able to get more info from Sterling on this, but I'm always willing to speculate. It appears to me that Postbot has a layer of Postman-specific context that it gathers and sends to OpenAI when someone types a message. The response from OpenAI is then processed by this layer and either turned into an action, like updating tests or other request information, or into a simple answer to your question. Based on the quality of the generated code and Postman-specific influence, I'd guess that they are using LangChain with GPT-3.5 and a Postman Knowledge Base for a RAG query. But again, &lt;em&gt;it's just speculation&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I get periodic errors with extensive use of Postbot. Randomly I'll get a response back that tells me there was an unexpected error and that I should try again. When this happens, I send the same message again and usually get a working response. Remember, there's a ton of stuff going on when you hit that Enter button. Errors happen - especially in new tech. Try it again if you get an error. If you get an error again, try once more.&lt;/p&gt;

&lt;p&gt;Postman seems to be investing a lot of time and energy into Postbot, and I can see why. It's quickly becoming an enabler for both technical and non-technical users. I expect to see it doing things like fetching requests from the public API network and building workflows in the not-too-distant future. It's an exciting time to be a developer and an API builder.&lt;/p&gt;

&lt;p&gt;Personally, I'm sold. While it's not perfect, it certainly beats doing everything yourself. So go give it a shot and tell me how you use it. I'm always looking for new, creative ways to build.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>api</category>
      <category>postman</category>
      <category>ai</category>
    </item>
    <item>
      <title>Be an enabler</title>
      <dc:creator>Allen Helton</dc:creator>
      <pubDate>Wed, 27 Mar 2024 13:08:24 +0000</pubDate>
      <link>https://forem.com/aws-heroes/be-an-enabler-592c</link>
      <guid>https://forem.com/aws-heroes/be-an-enabler-592c</guid>
      <description>&lt;p&gt;I'm an enabler.&lt;/p&gt;

&lt;p&gt;Let me explain. Some of you might see that word and think of it negatively. While it's true that being an enabler often refers to encouraging others to do bad or self-destructive things, like lending money to someone you know who has a gambling addiction, it doesn't always mean that.&lt;/p&gt;

&lt;p&gt;You can enable people to empower themselves, too. Encourage them to take the risk they're too afraid to commit to or provide them with an opportunity they couldn't normally get on their own. Get them out of their comfort zone so they have a chance to learn and grow. &lt;em&gt;This is the type of enabler I am&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;Where it started&lt;/h3&gt;

&lt;p&gt;I had a really hard time mentally when I turned 30. On top of being a milestone birthday people generally associate with "starting to get old," we were two months into the COVID lockdown, meaning I wasn't able to get much reassurance from friends or family that life wasn't rapidly passing me by. Instead, my wife and I ate pizza in the yard watching cars drive by. I had a lot of time to think about what I was doing with my life and decide if I was happy with the diversity I had and the opportunities I'd fallen into.&lt;/p&gt;

&lt;p&gt;To be honest, &lt;strong&gt;the answer was no&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Don't get me wrong, I've lived an absolutely blessed life. I've had the opportunity to do what I want, when I want, for most of my life. I've traveled all over the globe for both fun and work. I've competed on world stages for running, triathlons, and obstacle course racing. I've had content I created go viral. As far as the highlight reel goes, &lt;em&gt;my life is awesome&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;But it was safe. At 30, I had been at the same job for 9 years, climbing the corporate ladder in a large enterprise in the software engineering department. There was no risk. It was cushy. I rarely, if ever, woke up with a rush of adrenaline wondering what the day was going to bring and how I was going to be pushed out of my comfort zone. At the time, cushy felt nice. But 30 did something to me. I needed something else. Something &lt;em&gt;more&lt;/em&gt;. But I didn't know what.&lt;/p&gt;

&lt;h3&gt;Where it needed to go&lt;/h3&gt;

&lt;p&gt;What I needed &lt;a href="https://readysetcloud.io/blog/allen.helton/getting-started-with-mentorship-what-is-it-all-about-ce475ff4710"&gt;was a mentor&lt;/a&gt;. Someone to tell me, "What are you still doing here?" or "Why don't you go try X, Y, and Z?"&lt;/p&gt;

&lt;p&gt;At least, so I thought. Throughout my career, I've had two amazing mentors - Mike and Mark. I attribute a lot of who I am today to these two. They've steered me in the right direction time and time again and provided me with advice that I hold near to my heart. Mike and Mark… thank you.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Buuuuuuut&lt;/em&gt; that wasn't always what I needed. At a certain point, I felt like I hit a glass ceiling. I could see where I needed to go but there was no way for me to get there. The advice I was getting wasn't bad, but it wasn't helping me grow in the way I wanted to grow anymore. &lt;em&gt;I was ready to take a risk&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Historically I've been a risk-averse guy. Everything I chose to do was a well-thought-out calculated risk. Once something passed a certain threshold, my internal voice would say "&lt;em&gt;Don't do it. What if you fail? What if it leaves you without a job? You have a family that depends on you having a job.&lt;/em&gt;" And I would listen to that voice. It made some good points.&lt;/p&gt;

&lt;p&gt;What I &lt;em&gt;actually&lt;/em&gt; needed was an enabler. Somebody to simply tell me "Yes. Do it, I'll go with you." I needed to be pushed past my doubts and into the unknown with someone I trusted who would go along with me, regardless of the risk. To me, this differs wildly from a mentor. A mentor gives you fantastic advice from their past experiences and helps you from a comfortable position. It's a tremendous way to grow and learn. An enabler is someone who understands there's risk or that what you need to do is going to be uncomfortable but says "&lt;em&gt;Hell yeah, let's do it!&lt;/em&gt;"&lt;/p&gt;

&lt;p&gt;They aren't just "yes men" though. Not every risk is worth taking. But they have your best interest at heart and are willing to do everything they can to back you up and get you there.&lt;/p&gt;

&lt;h2&gt;Be comfortable being uncomfortable&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://twitter.com/andmoredev"&gt;Andres Moreno&lt;/a&gt; was my first enabler. A close friend I've had for over 10 years and trust beyond measure. If you've met me at re:Invent before, chances are you've met Andres too. We're always together... for good reason! I can't tell you how many times he's told me "You should go talk to that person" or even something as simple as "Why not?" We always walk around enabling each other to be uncomfortable. Ultimately it was his nudges that gave me the courage to leave my cushy job for something completely different.&lt;/p&gt;

&lt;p&gt;I was a full-time enterprise architect who did content creation as a hobby. I had been &lt;a href="https://dev.to/aws-heroes/how-i-became-an-aws-serverless-hero-i17"&gt;named an AWS Hero&lt;/a&gt; for that content creation, but the thought of doing it as my job scared me to death. In my head, moving to developer advocacy was a career change. Not only would I no longer be building production code or planning long-term technical strategy, but I also wanted to go fast. I wanted a startup. When I told Andres about &lt;a href="https://gomomento.com"&gt;Momento&lt;/a&gt;, he said "I will be sad to see you go, but you need to do it."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/aws-heroes/growing-your-career-in-the-evolving-tech-field-3de"&gt;So I did&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And I love it. I took a tremendous leap of faith and couldn't be happier. I've learned so much in the past year and have built so many strong relationships that have been genuinely life-changing. Now I get the opportunity to be an enabler for hundreds of people in tech - AS PART OF MY JOB!!&lt;/p&gt;

&lt;p&gt;To think, all this because I had someone help me recognize and realize I wasn't doing what I really wanted. And they gave me the courage to go out and do it.&lt;/p&gt;

&lt;h3&gt;Pay it forward&lt;/h3&gt;

&lt;p&gt;I am always willing to be an enabler. I love helping people get out of their comfort zones and encouraging them to live the life they want to live. This is about more than just your day job. It's about finding out who you are and what makes you happy. It's about learning and growing in completely new ways. It's about you.&lt;/p&gt;

&lt;p&gt;It's hard to take risks by yourself. Self-doubt always rears its ugly head and dashes your confidence, leaving you questioning if something is the right move or not. There's power in numbers. Get an enabler to ride along or pave the way to an opportunity you couldn't reach yourself. You're helping them as much as they're helping you. &lt;strong&gt;Trust me&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Over the past year, I've surrounded myself with as many enablers as possible, and what a ride it has been. Find the people who encourage you to do hard things, say "&lt;em&gt;why not&lt;/em&gt;", and are willing to ride along with you. You'll find the journey is just as fulfilling as the destination.&lt;/p&gt;

&lt;p&gt;That's the end of my story. If you walk away from this with only one thing, I want you to take an inward look at yourself and figure out if what you're doing both in and out of work is making you happy. Are you getting the thrills you need? Do you even want thrills? Maybe long-term job security is what makes you the happiest. Find who you really want to be and find the people who are willing to go with you to get there.&lt;/p&gt;

&lt;p&gt;Thank you to all my enablers, you know who you are. You really have made me a happier person.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>personalgrowth</category>
      <category>career</category>
    </item>
  </channel>
</rss>
