<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Sathiesh Veera</title>
    <description>The latest articles on Forem by Sathiesh Veera (@sathieshveera).</description>
    <link>https://forem.com/sathieshveera</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2461397%2F482123c1-3f96-476b-bbc4-0755fe5fbad3.jpg</url>
      <title>Forem: Sathiesh Veera</title>
      <link>https://forem.com/sathieshveera</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sathieshveera"/>
    <language>en</language>
    <item>
      <title>Things I Wish I Knew Before I Started Using DynamoDB</title>
      <dc:creator>Sathiesh Veera</dc:creator>
      <pubDate>Sun, 01 Feb 2026 16:44:26 +0000</pubDate>
      <link>https://forem.com/aws-builders/things-i-wish-i-knew-before-i-started-using-dynamodb-5hbm</link>
      <guid>https://forem.com/aws-builders/things-i-wish-i-knew-before-i-started-using-dynamodb-5hbm</guid>
      <description>&lt;p&gt;If there's one database that promises both performance and cost and also delivers, then it's Amazon DynamoDB. Single digit millisecond latency, fully managed with automatic scaling, pay per use - DynamoDB is genuinely impressive. But here's the thing, there are rarely any mentions about what happens when your data model doesn't fit DynamoDB's worldview, or when you discover a limitation (I would rather say understand DynamoDB architecture better) three months into production that forces a complete redesign.&lt;/p&gt;

&lt;p&gt;I've spent considerable time working with DynamoDB across various projects… some successful, some educational. Along the way, I've collected a list of things I desperately wish someone had told me before I started. These aren't the basics you can find in tutorials; these are the gotchas that surface when your table has a few million items and your access patterns have evolved beyond the original design. Let’s dive in.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Items &amp;amp; Attributes are NOT Rows and Columns
&lt;/h2&gt;

&lt;p&gt;In DynamoDB, every record is an item with attributes. If you come from a SQL background, it’s very easy and tempting to relate them to rows and columns. Even the table view in the AWS console looks like a SQL table, with rows and columns.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rax5kwwobsxxzqvau9p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rax5kwwobsxxzqvau9p.png" alt=" " width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But they are NOT! And this fundamental difference shapes everything else about how DynamoDB works.&lt;/p&gt;

&lt;p&gt;In SQL, a row is just… a row. It has a primary key, a unique identifier for that single row, and we can query it however we want. Need to find all rows that have &lt;code&gt;year = 1994&lt;/code&gt;? Sure, just use a &lt;code&gt;WHERE&lt;/code&gt; clause. Need to join two tables on a particular column? Simple.&lt;/p&gt;

&lt;p&gt;But none of this is possible in DynamoDB. DynamoDB does have a primary key, but its partition key gets hashed, and that hash decides which partition the item lives on. This isn't just a storage optimization detail we can ignore; it fundamentally constrains how we can access our data. We cannot query items without knowing their partition key, because DynamoDB literally doesn't know where to look.&lt;/p&gt;
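&lt;p&gt;The routing can be pictured with a toy sketch. DynamoDB's real hash function is internal, so the MD5 hash and the fixed partition count here are purely illustrative:&lt;/p&gt;

```python
import hashlib

NUM_PARTITIONS = 4  # illustrative only; DynamoDB manages real partition counts itself

def partition_for(partition_key: str) -> int:
    """Toy stand-in for DynamoDB's internal hash-based routing."""
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

# The same partition key always hashes to the same partition, which is
# why DynamoDB needs the full, exact key value: no key, no hash, no lookup.
assert partition_for("org#001") == partition_for("org#001")
```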

&lt;p&gt;In SQL databases, columns are uniform across rows. Every row has the same columns (nullable or not), and we can add an index on any column. In DynamoDB, attributes are per item: each item can have different attributes, and there is no enforced schema. That doesn’t make DynamoDB a document database like MongoDB or Couchbase, though. Attributes that aren't part of the key structure or a secondary index are essentially invisible to queries. If you're wondering "what about GSIs?", hold that thought for a few minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why does this matter?&lt;/strong&gt; Because every "limitation" I'm about to describe flows from this fundamental architecture. If you take one thing from this section, let it be this: &lt;strong&gt;stop thinking about DynamoDB as "SQL but NoSQL."&lt;/strong&gt; It's a key-value store with some nice features. The moment you internalize that, the rest of its behavior starts making sense.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. The Partition Key is Not Just a Primary Key
&lt;/h2&gt;

&lt;p&gt;If you have a SQL background, it's tempting to think of DynamoDB's partition key as just another primary key. Pick something unique and you are done. Right?&lt;/p&gt;

&lt;p&gt;But that’s so very wrong.&lt;/p&gt;

&lt;p&gt;Here's what actually happens: when we write an item to DynamoDB, it takes the partition key value and feeds it through an internal hash function. The output of that hash determines which physical partition the data lands on. All items with the same partition key end up on the same partition, stored together in what is called an "item collection." This isn't just a storage detail, it fundamentally shapes what we can and cannot do with our data.&lt;/p&gt;

&lt;h4&gt;
  
  
  The limitations that bite:
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Partition key queries require exact equality.&lt;/strong&gt; We cannot do &lt;code&gt;LIKE&lt;/code&gt;, &lt;code&gt;CONTAINS&lt;/code&gt;, &lt;code&gt;BEGINS_WITH&lt;/code&gt;, or range operations on partition keys. DynamoDB needs the complete partition key value to compute the hash and locate the partition. There's no negotiating here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sort key operations are limited too.&lt;/strong&gt; We get &lt;code&gt;=&lt;/code&gt;, &lt;code&gt;&amp;lt;&lt;/code&gt;, &lt;code&gt;&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;=&lt;/code&gt;, &lt;code&gt;&amp;gt;=&lt;/code&gt;, &lt;code&gt;BETWEEN&lt;/code&gt;, and &lt;code&gt;BEGINS_WITH&lt;/code&gt;. Notably missing? &lt;code&gt;CONTAINS&lt;/code&gt;, &lt;code&gt;ENDS_WITH&lt;/code&gt;, and anything resembling SQL's &lt;code&gt;LIKE&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let me give an example of how this might become an issue. Say we're building an employee management system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Table: employees
Partition Key: orgId
Sort Key: empId
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This seems reasonable. However, if this table holds data from companies that vary wildly in size - say one company with 5-10 employees and another with hundreds of thousands - we already have partition skew.&lt;/p&gt;

&lt;p&gt;To make things worse, what if empId is a non-sortable ID, i.e., a UUID? Now we have many more restrictions. We cannot do any range-based queries to fetch only a subset of employees unless we know their full IDs. That is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM "employees" WHERE "orgid" = 'org#001' AND BEGINS_WITH("empid", '100')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;vs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * WHERE "employees" WHERE "orgid" = 'org#001' AND "empid" IN ('ca17b525-c67b-4351-bddc-724efba7f966', 'b2f78338-17c5-47d4-8fb0-4732446cb598', 'c0af48b2-1101-4eb5-ad4d-ccd535e4a7ff')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why does this happen?&lt;/strong&gt; Sort keys are stored in a B+ tree structure. DynamoDB can efficiently traverse ranges (slide from point A to point B) but cannot teleport to arbitrary non-sequential values. It's a fundamental architectural constraint, not a missing feature.&lt;/p&gt;

&lt;p&gt;Now add a skewed partition: if I need to find a couple of employee records in an organization with 5,000 employees, and I don't have their empIds (the sort key in this case), I have to read all 5,000 items and filter at the application layer for the 2 items I need.&lt;/p&gt;

&lt;p&gt;This is why knowing the access patterns ahead of time and choosing the right keys is crucial for a DynamoDB model. Best practices for managing these situations include write sharding to avoid hot partitions, and composite keys with prefixes such as &lt;code&gt;STATUS#ACTIVE#USER#1001&lt;/code&gt; designed around anticipated access patterns. But the key point is: use DynamoDB only if it's the right fit for your project.&lt;/p&gt;
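&lt;p&gt;A minimal sketch of the composite-key idea (the key format and helper name are made up for illustration):&lt;/p&gt;

```python
def make_sort_key(status: str, user_id: str) -> str:
    """Build a composite sort key like 'STATUS#ACTIVE#USER#1001' so that
    anticipated access patterns can be served with BEGINS_WITH."""
    return f"STATUS#{status.upper()}#USER#{user_id}"

# "All active users" becomes BEGINS_WITH('STATUS#ACTIVE#') on the sort key
# instead of a full read plus an application-side filter.
assert make_sort_key("active", "1001") == "STATUS#ACTIVE#USER#1001"
```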

&lt;h2&gt;
  
  
  3. Filter Expressions are not a traditional WHERE clause
&lt;/h2&gt;

&lt;p&gt;Remember how I said attributes that aren't keys are "invisible to queries"? Let's dig into what that actually means. DynamoDB has a feature called FilterExpression that lets us filter query results by any attribute. Sounds just like a WHERE clause, right? Except it isn't. Filter expressions are applied &lt;strong&gt;after&lt;/strong&gt; DynamoDB reads the data, not before.&lt;/p&gt;

&lt;p&gt;When we run a Query with a FilterExpression:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;DynamoDB uses the key condition to find matching items&lt;/li&gt;
&lt;li&gt;DynamoDB reads those items from storage (consuming RCUs)&lt;/li&gt;
&lt;li&gt;DynamoDB applies the filter&lt;/li&gt;
&lt;li&gt;DynamoDB returns only the filtered results&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Steps 2 and 3 are the key insight. &lt;strong&gt;We pay for everything read in step 2&lt;/strong&gt;, even though step 3 throws most of it away.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Query: Get orders for user123 where status = 'PENDING'
Key condition: userId = 'user123' (returns 10,000 orders)
Filter: status = 'PENDING' (matches 50 orders)

We pay for: 10,000 items read
We receive: 50 items
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's a 99.5% waste rate on read capacity. 😬&lt;/p&gt;

&lt;p&gt;The 1MB page limit also applies before filtering. So if we're filtering a lot of data, we might need multiple pagination requests to get a small number of results. Each page reads up to 1MB, filters it down, returns maybe a handful of items, and we go back for another page.&lt;/p&gt;

&lt;p&gt;This goes back to our fundamental point: attributes aren't columns. In SQL, a WHERE clause on any column can use indexes and query planning to minimize data read. In DynamoDB, if the attribute isn't in the key structure, we're doing a read-then-filter operation.&lt;/p&gt;
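&lt;p&gt;The read-then-filter behavior can be modeled with a toy sketch (in-memory data; the item counts are hypothetical) that mirrors the cost accounting:&lt;/p&gt;

```python
def query_with_filter(items, key_match, post_filter):
    """Toy model of Query + FilterExpression cost accounting: DynamoDB
    reads (and bills) every item matching the key condition, then applies
    the filter before returning results."""
    read = [i for i in items if key_match(i)]        # what we pay for
    returned = [i for i in read if post_filter(i)]   # what we actually get
    return returned, len(read)

# Hypothetical data: 10,000 orders for one user, 50 of them PENDING.
orders = [
    {"userId": "user123", "status": "PENDING" if n % 200 == 0 else "SHIPPED"}
    for n in range(10_000)
]
result, items_read = query_with_filter(
    orders,
    key_match=lambda i: i["userId"] == "user123",
    post_filter=lambda i: i["status"] == "PENDING",
)
assert items_read == 10_000  # billed reads
assert len(result) == 50     # items returned
```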

&lt;p&gt;And this is where the composite keys mentioned above come in. A sort key such as &lt;code&gt;STATUS#PENDING#2024-01-15&lt;/code&gt; encodes several filterable attributes directly into the key, so the key condition can do the filtering a FilterExpression would otherwise do after the read.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Secondary Indexes Are Complete Data Copies
&lt;/h2&gt;

&lt;p&gt;So far we've talked about querying with partition keys, filtering on attributes. The natural question becomes: "What if we need to query by a different attribute efficiently?" The answer is secondary indexes.&lt;/p&gt;

&lt;p&gt;If we're used to SQL databases, we might think of indexes as lightweight pointers to the main table. A few extra bytes per row, nothing dramatic. DynamoDB's Global Secondary Indexes (GSIs) are a completely different beast. They're not pointers. They're not lightweight. They're &lt;strong&gt;entire separate tables&lt;/strong&gt; with their own partition infrastructure.&lt;/p&gt;

&lt;p&gt;Think about this table.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;orgId (PK)&lt;/th&gt;
&lt;th&gt;empId (SK)&lt;/th&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Department&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Org#001&lt;/td&gt;
&lt;td&gt;emp#001&lt;/td&gt;
&lt;td&gt;John&lt;/td&gt;
&lt;td&gt;Engineering&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Org#001&lt;/td&gt;
&lt;td&gt;emp#002&lt;/td&gt;
&lt;td&gt;Dave&lt;/td&gt;
&lt;td&gt;Sales&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you had to represent that as key-value pairs, the simplest way would be&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "org#001+emp#001" = {'Name': 'John', 'Department': 'Engineering'},
  "org#001+emp#002" = {'Name': 'Dave', 'Department': 'Sales'}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now if you have to query these employees by department, the only option is to create another map like the one below, isn’t it?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "org#001+dept#engineering" = {'Name': 'John', 'id'': 'emp#001'},
  "org#001+dept#sales" = {'Name': 'Dave', 'id': 'emp#002'}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is essentially what happens when we create a GSI in DynamoDB. DynamoDB abstracts these details away and makes the index simple and easy to use, but behind the scenes the data is copied.&lt;/p&gt;

&lt;p&gt;So, when we write to the base table, DynamoDB replicates the relevant attributes to every affected GSI. This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Every write to the base table triggers writes to all GSIs&lt;/strong&gt; that include that item&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Updating an indexed attribute costs 2 GSI writes&lt;/strong&gt; — one to delete the old entry, one to insert the new one&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage is duplicated&lt;/strong&gt; for every projected attribute across every GSI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GSIs have their own throughput&lt;/strong&gt; that needs to be provisioned (or paid for in on-demand mode)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's do some quick math that nobody tells us upfront:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Base table write: 1 WCU
+ GSI #1 (ALL projection): 1 WCU
+ GSI #2 (ALL projection): 1 WCU  
+ GSI #3 (ALL projection): 1 WCU
= 4 WCUs for a single item write
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three GSIs with ALL projection means &lt;strong&gt;4x the write costs&lt;/strong&gt;. And if we update an attribute that's a key in one of those GSIs? That GSI alone costs 2 WCUs for the update. Surprise! 😬&lt;/p&gt;
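&lt;p&gt;A tiny helper makes the write amplification explicit. This is a deliberate simplification: it ignores items larger than 1 KB, and applies the 2x penalty to every GSI rather than only the ones whose keys changed:&lt;/p&gt;

```python
def write_cost_wcus(gsi_count: int, updates_gsi_key: bool = False) -> int:
    """Rough write cost for an item of at most 1 KB: 1 WCU for the base
    table plus 1 WCU per GSI copy, or 2 per GSI when the update moves the
    item within an index (delete the old entry, insert the new one).

    Simplified: real costs also scale with item size, and only the GSIs
    whose keys actually changed pay the 2x penalty.
    """
    per_gsi = 2 if updates_gsi_key else 1
    return 1 + gsi_count * per_gsi

assert write_cost_wcus(3) == 4  # the "4 WCUs for a single item write" above
```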

&lt;p&gt;I've seen teams add GSIs like they're free, then wonder why their DynamoDB bill tripled.&lt;/p&gt;

&lt;p&gt;Some good practices for using GSIs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use&lt;/strong&gt; &lt;code&gt;KEYS_ONLY&lt;/code&gt; or &lt;code&gt;INCLUDE&lt;/code&gt; projection instead of &lt;code&gt;ALL&lt;/code&gt;. Only project what's actually needed in queries against that index.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit GSIs regularly&lt;/strong&gt;. Remove any that aren't being queried — they cost money even when unused.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Design sparse indexes&lt;/strong&gt; where possible. Only items with the GSI's key attributes get indexed, so we can filter out items that don't need to be in the index.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consider single-table design patterns&lt;/strong&gt; before adding multiple GSIs. Sometimes a well-designed sort key eliminates the need for additional indexes entirely.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. Strongly Consistent Reads Don't Work on GSIs
&lt;/h2&gt;

&lt;p&gt;Another key point that most people miss: &lt;strong&gt;Global Secondary Indexes only support eventually consistent reads&lt;/strong&gt;. We cannot request strongly consistent reads from a GSI. Period.&lt;/p&gt;

&lt;p&gt;If your code does something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;QueryRequest request = QueryRequest.builder()
    .tableName("MyTable")
    .indexName("MyGSI")
    .consistentRead(true)  // This will fail!
    .build();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;DynamoDB will reject this with a validation error.&lt;/p&gt;

&lt;p&gt;But it gets worse. GSI reads are also &lt;strong&gt;not monotonic&lt;/strong&gt;. What does that mean? During replication lag, we can read a value, then read an older value, then read the newer value again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Base table: Update item.status from "PENDING" to "COMPLETED"
GSI Read #1: Returns "COMPLETED" 
GSI Read #2: Returns "PENDING"   // Wait, what?
GSI Read #3: Returns "COMPLETED" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is completely valid DynamoDB behavior. The GSI is eventually consistent, and "eventually" means exactly that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why can't GSIs support strong consistency?&lt;/strong&gt; Because GSIs live on completely separate partition infrastructure from the base table. Updates are replicated asynchronously. DynamoDB could theoretically coordinate this, but it would destroy the performance characteristics that make DynamoDB valuable.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Single Table Design: Powerful When Done Right, Painful When Not
&lt;/h2&gt;

&lt;p&gt;If we've spent any time researching DynamoDB best practices, we've probably encountered the concept of "single table design." AWS documentation even states that we should "maintain as few tables as possible in a DynamoDB application."&lt;/p&gt;

&lt;p&gt;But here's the thing: single table design has become one of the most misunderstood and misapplied patterns in the DynamoDB ecosystem. Some teams implement it brilliantly and reap massive benefits. Others cargo-cult the pattern without understanding the "why," ending up with a convoluted mess that's harder to maintain than the multi-table design they were trying to avoid.&lt;/p&gt;

&lt;h4&gt;
  
  
  What Single Table Design Actually Solves
&lt;/h4&gt;

&lt;p&gt;The core problem single table design addresses is this: DynamoDB doesn't support joins. In a relational database, if we need a Customer and their Orders, we join two tables. In DynamoDB with separate tables, we'd need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Query the Customers table for the customer&lt;/li&gt;
&lt;li&gt;Take the customerId from that response&lt;/li&gt;
&lt;li&gt;Query the Orders table using that customerId&lt;/li&gt;
&lt;li&gt;Combine the results in application code&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's two sequential network round trips. At scale, this pattern gets slower and more expensive. The latency compounds.&lt;/p&gt;

&lt;p&gt;Single table design solves this by &lt;strong&gt;pre-joining&lt;/strong&gt; related data. We store Customers and Orders in the same table, using the same partition key (e.g., &lt;code&gt;customerId&lt;/code&gt;). Now one Query retrieves both the Customer record and all their Orders in a single request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Query: pk = 'CUSTOMER#12345'
Returns:
  - { pk: 'CUSTOMER#12345', sk: 'PROFILE', name: 'Alice', email: '...' }
  - { pk: 'CUSTOMER#12345', sk: 'ORDER#001', total: 50.00, ... }
  - { pk: 'CUSTOMER#12345', sk: 'ORDER#002', total: 75.00, ... }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One request. One network round trip. All related data together. This is the &lt;strong&gt;item collection&lt;/strong&gt; pattern, and it's genuinely powerful.&lt;/p&gt;
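&lt;p&gt;A toy in-memory model of the item collection pattern (the data is purely illustrative):&lt;/p&gt;

```python
# Toy in-memory model of a single-table item collection.
table = [
    {"pk": "CUSTOMER#12345", "sk": "PROFILE", "name": "Alice"},
    {"pk": "CUSTOMER#12345", "sk": "ORDER#001", "total": 50.00},
    {"pk": "CUSTOMER#12345", "sk": "ORDER#002", "total": 75.00},
]

def query(pk):
    """One 'request' returns the entire item collection for a partition key."""
    return [item for item in table if item["pk"] == pk]

items = query("CUSTOMER#12345")
profile = next(i for i in items if i["sk"] == "PROFILE")
orders = [i for i in items if i["sk"].startswith("ORDER#")]
assert profile["name"] == "Alice" and len(orders) == 2
```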

&lt;h4&gt;
  
  
  When Single Table Design Adds Real Value
&lt;/h4&gt;

&lt;p&gt;Single table design shines when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;We frequently need related entities together.&lt;/strong&gt; Customer + Orders, User + Preferences + Sessions, Product + Reviews — if the access pattern is "get parent and children together," single table design eliminates the join problem.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access patterns are well-defined and stable.&lt;/strong&gt; Single table design requires knowing our queries upfront. If we can confidently list the access patterns, we can model the table to serve them efficiently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;We're operating at scale where latency matters.&lt;/strong&gt; The difference between 1 request and 3 sequential requests might not matter at 100 QPS. At 100,000 QPS, it's the difference between a responsive app and a sluggish one.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Entities share a natural relationship.&lt;/strong&gt; Orders belong to Customers. Comments belong to Posts. When there's a clear hierarchical relationship, modeling them in the same item collection makes intuitive sense.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  When Single Table Design Adds Zero Value (Or Makes Things Worse)
&lt;/h4&gt;

&lt;h4&gt;
  
  
  1: Storing unrelated data in the same table
&lt;/h4&gt;

&lt;p&gt;If our application has Users, Products, Inventory, and Analytics Events, do they all need to be in one table? Probably not. If we never query Users and Inventory together, putting them in the same table gains us nothing. We're just making the table harder to understand and operate.&lt;/p&gt;

&lt;p&gt;Worse, we lose operational flexibility. Different data types might need different backup strategies, different capacity modes, different storage classes. With separate tables, we can configure each appropriately. With one mega-table, we're stuck with one-size-fits-all.&lt;/p&gt;

&lt;h4&gt;
  
  
  2: Implementing single table design but still making multiple requests
&lt;/h4&gt;

&lt;p&gt;I've seen this pattern too many times: a team implements single table design with complex composite keys and overloaded GSIs, but their application code still makes separate queries for each entity type. They've added all the complexity of single table design without any of the benefits.&lt;/p&gt;

&lt;p&gt;If the code does &lt;code&gt;GetItem(customer)&lt;/code&gt; followed by &lt;code&gt;Query(orders)&lt;/code&gt; followed by &lt;code&gt;Query(addresses)&lt;/code&gt;, it doesn't matter that they're all in the same table. We're still making three requests. We've gained nothing except confusion.&lt;/p&gt;

&lt;h4&gt;
  
  
  3: Ignoring the "adjacent" multi-table alternative
&lt;/h4&gt;

&lt;p&gt;Here's a secret: we can get most of the latency benefits without full single table design. If Customer and Orders are in separate tables but both keyed by &lt;code&gt;customerId&lt;/code&gt;, we can fetch them &lt;strong&gt;in parallel&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Parallel requests - not sequential!
customer_future = executor.submit(get_customer, customer_id)
orders_future = executor.submit(get_orders, customer_id)

customer = customer_future.result()
orders = orders_future.result()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Total latency is the max of the two requests, not the sum. For many applications, this "parallel multi-table" approach provides 80% of the benefit with 20% of the complexity.&lt;/p&gt;
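&lt;p&gt;Fleshed out as a runnable sketch, with stubbed fetchers standing in for the real DynamoDB calls (&lt;code&gt;get_customer&lt;/code&gt; and &lt;code&gt;get_orders&lt;/code&gt; here return hypothetical data):&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for real DynamoDB calls; data and signatures are hypothetical.
def get_customer(customer_id):
    return {"id": customer_id, "name": "Alice"}

def get_orders(customer_id):
    return [{"orderId": "001", "total": 50.00}]

with ThreadPoolExecutor(max_workers=2) as executor:
    customer_future = executor.submit(get_customer, "12345")
    orders_future = executor.submit(get_orders, "12345")
    # Both requests are in flight at once, so total latency is roughly
    # the max of the two, not the sum.
    customer = customer_future.result()
    orders = orders_future.result()

assert customer["name"] == "Alice"
```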

&lt;h4&gt;
  
  
  4: Forcing single table design when access patterns are unknown
&lt;/h4&gt;

&lt;p&gt;Single table design requires knowing access patterns upfront. If we're building an early-stage product where requirements change weekly, locking ourselves into a rigid single-table schema is asking for pain. We'll end up with expensive data migrations every time the product evolves.&lt;/p&gt;

&lt;p&gt;For rapidly evolving applications, a simpler multi-table design (or even a different database entirely) might be more appropriate until access patterns stabilize.&lt;/p&gt;

&lt;p&gt;While the points above focus on proper data modeling with DynamoDB, the next few cover operational issues that are worth being aware of.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. JSON Stored as String = No Partial Updates
&lt;/h2&gt;

&lt;p&gt;DynamoDB has two ways to store structured data: the &lt;strong&gt;String type&lt;/strong&gt; (JSON serialized to a string) and &lt;strong&gt;Document types&lt;/strong&gt; (native Map and List). If we're storing JSON as a String:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "address": "{\"city\":\"Seattle\",\"zip\":\"98101\"}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We &lt;strong&gt;cannot&lt;/strong&gt; do partial updates. Want to change just the city? We must:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read the entire item&lt;/li&gt;
&lt;li&gt;Deserialize the JSON string&lt;/li&gt;
&lt;li&gt;Modify the city&lt;/li&gt;
&lt;li&gt;Re-serialize to JSON&lt;/li&gt;
&lt;li&gt;Write the entire attribute back&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's a read + write for every tiny change, plus application code to handle the serialization.&lt;/p&gt;
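&lt;p&gt;The full cycle, sketched with Python's &lt;code&gt;json&lt;/code&gt; module (the &lt;code&gt;item&lt;/code&gt; dict stands in for the stored record):&lt;/p&gt;

```python
import json

# The item as stored: address is a JSON blob serialized into a String attribute.
item = {"address": '{"city": "Seattle", "zip": "98101"}'}

# Changing just the city forces the whole cycle through application code:
address = json.loads(item["address"])   # steps 1-2: read item, deserialize
address["city"] = "Portland"            # step 3: modify
item["address"] = json.dumps(address)   # steps 4-5: re-serialize, write back

assert json.loads(item["address"])["city"] == "Portland"
```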

&lt;p&gt;With native Document types (Map/List):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "address": {
    "M": {
      "city": {"S": "Seattle"},
      "zip": {"S": "98101"}
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;UpdateExpression: "SET address.city = :newCity"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One update, one attribute, done. No read required. Much cheaper, much simpler.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gotcha within the gotcha:&lt;/strong&gt; The parent map must exist before we can update nested attributes. If &lt;code&gt;address&lt;/code&gt; doesn't exist yet on an item, &lt;code&gt;SET address.city = :value&lt;/code&gt; will fail. We need to initialize the structure first or use a more complex update expression.&lt;/p&gt;
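&lt;p&gt;One common workaround (worth verifying against the current DynamoDB documentation) is to initialize the parent with &lt;code&gt;if_not_exists&lt;/code&gt; in a first update, then perform the nested update:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Step 1: create the parent map only if it's missing
UpdateExpression: "SET address = if_not_exists(address, :emptyMap)"

-- Step 2: the nested update is now safe
UpdateExpression: "SET address.city = :newCity"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;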

&lt;p&gt;&lt;strong&gt;Why do people use String for JSON anyway?&lt;/strong&gt; Often it's because we're serializing objects from application code without thinking about it. The ORM or SDK might default to JSON string serialization. Or the data came from another system as a JSON blob. Either way, we've now lost the ability to efficiently update parts of that data.&lt;/p&gt;

&lt;p&gt;The best advice here is to think about data types early and check how they are actually stored in the database, then pick the types that best fit the application's needs. If you need frequent partial updates to JSON data, a native Map is the much better option, because you can actually UPDATE a record instead of doing a READ + UPSERT.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Global Tables + DAX = Stale Cache Problem
&lt;/h2&gt;

&lt;p&gt;DynamoDB Accelerator (DAX) is fantastic for read-heavy workloads. Microsecond latency from an in-memory cache. DynamoDB Global Tables are fantastic for multi-region deployments — automatic replication across regions.&lt;/p&gt;

&lt;p&gt;Using them together? That's where things get complicated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The problem:&lt;/strong&gt; DAX is regional. It only knows about writes that happen in its region. Global Tables replicate directly to DynamoDB in other regions, &lt;strong&gt;completely bypassing DAX&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here's what happens:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Region A: Write item → Updates DynamoDB (A) → Updates DAX (A) 
          ↓
          Replicates to DynamoDB (B) → DAX (B) has no idea 

User in Region B reads via DAX → Gets stale data until TTL expires
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Region B users might be reading data that's hours old, depending on DAX TTL settings, while the actual DynamoDB table has the latest data sitting right there.&lt;/p&gt;

&lt;p&gt;Some best practices to follow when using Global Tables with DAX:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use very short TTL values (seconds, not minutes)&lt;/li&gt;
&lt;li&gt;Accept and design for eventual consistency across regions&lt;/li&gt;
&lt;li&gt;Consider implementing explicit cache invalidation for critical data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;For strong cross-region consistency&lt;/strong&gt;, skip DAX entirely and accept the latency of going directly to DynamoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  9. We Cannot Bulk Delete by Partition Key
&lt;/h2&gt;

&lt;p&gt;This one genuinely surprised me the first time I encountered it. Coming from SQL, I expected something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DELETE FROM Orders WHERE userId = 'user123'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;DynamoDB has no equivalent. The &lt;code&gt;DeleteItem&lt;/code&gt; API requires the &lt;strong&gt;complete primary key&lt;/strong&gt; — partition key AND sort key (for composite keys). There's no &lt;code&gt;DeleteByPartitionKey&lt;/code&gt; operation.&lt;/p&gt;

&lt;p&gt;So how do we delete all orders for user123 if there are 10,000 of them?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Step 1: Query(userId = 'user123') → paginate through all 10,000 items
Step 2: For each item, extract the orderId (sort key)
Step 3: Call DeleteItem for each, or use BatchWriteItem in batches of 25
Step 4: That's 400+ API calls minimum
Step 5: 😭
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;BatchWriteItem&lt;/code&gt; can delete up to 25 items per request, but we still need the complete primary key for each item. There's no shortcut.&lt;/p&gt;
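&lt;p&gt;The batching itself is mechanical. A sketch of splitting the collected keys into &lt;code&gt;BatchWriteItem&lt;/code&gt;-sized groups (the key shapes are illustrative):&lt;/p&gt;

```python
def chunks(seq, size=25):
    """Split delete keys into BatchWriteItem-sized groups (max 25 per call)."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# 10,000 full primary keys to delete (shapes illustrative).
keys = [{"userId": "user123", "orderId": f"order#{n:05d}"} for n in range(10_000)]
batches = list(chunks(keys))
assert len(batches) == 400  # the "400+ API calls minimum" above
```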

&lt;p&gt;The only truly "bulk" delete? Drop and recreate the table. I'm not joking — for large datasets, this is often faster and cheaper than iterating through millions of items.&lt;/p&gt;

&lt;p&gt;This is one reason to think about TTLs upfront and add the necessary timestamp attributes, because cleaning up obsolete data later in production can be a nightmare.&lt;/p&gt;

&lt;h2&gt;
  
  
  10. PartiQL: The SQL That can Hide Expensive Scans
&lt;/h2&gt;

&lt;p&gt;I was genuinely excited by the PartiQL support for DynamoDB - not just because of its SQL-like syntax, but because some of the operations we discussed above are possible only through PartiQL, and the Java SDK at least does not support all operations natively. And then I learned why it can actually be dangerous.&lt;/p&gt;

&lt;p&gt;PartiQL looks like SQL, feels like SQL, but it absolutely does not behave like SQL. The critical difference? &lt;strong&gt;We cannot tell by looking at a PartiQL statement whether it will execute as an efficient Query or an expensive full-table Scan.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pop quiz: which of these will scan the entire table?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Query 1
SELECT * FROM Orders WHERE OrderID = 100

-- Query 2
SELECT * FROM Orders WHERE OrderID &amp;gt; 100

-- Query 3
SELECT * FROM Orders WHERE Status = 'PENDING'

-- Query 4
SELECT * FROM Orders WHERE OrderID = 100 OR Status = 'PENDING'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If &lt;code&gt;OrderID&lt;/code&gt; is the partition key:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Query 1:&lt;/strong&gt; Efficient Query (exact partition key match)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Query 2: Full Table Scan&lt;/strong&gt; (range on partition key not allowed)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Query 3: Full Table Scan&lt;/strong&gt; (Status isn't a key)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Query 4: Full Table Scan&lt;/strong&gt; (OR breaks the partition key optimization)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They all look like plain, reasonable SQL. Three of them will read every single item in the table.&lt;/p&gt;

&lt;p&gt;With the native DynamoDB API, we'd explicitly call Scan() and feel the pain in our fingers as we type it. PartiQL lets us accidentally scan a million-item table with a query that looks completely reasonable.&lt;/p&gt;

&lt;p&gt;And even though PartiQL looks like SQL, it can only support what DynamoDB supports. That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No JOINs (it's still NoSQL)&lt;/li&gt;
&lt;li&gt;No aggregate functions (COUNT, SUM, AVG? Nope.)&lt;/li&gt;
&lt;li&gt;No subqueries or CTEs&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ORDER BY&lt;/code&gt; requires a WHERE clause on the partition key&lt;/li&gt;
&lt;li&gt;GSIs must be explicitly named in the FROM clause — no automatic index selection&lt;/li&gt;
&lt;li&gt;No &lt;code&gt;LIKE&lt;/code&gt; operator&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The easiest way to avoid this mishap is to deny table scans with an IAM policy; that way a poorly written PartiQL statement cannot accidentally scan the whole table.&lt;/p&gt;
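&lt;p&gt;AWS provides a condition key for exactly this. A sketch of such a deny policy, based on the documented &lt;code&gt;dynamodb:FullTableScan&lt;/code&gt; condition key for &lt;code&gt;PartiQLSelect&lt;/code&gt; (the account ID and table ARN below are placeholders):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyPartiQLFullTableScans",
      "Effect": "Deny",
      "Action": "dynamodb:PartiQLSelect",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
      "Condition": {
        "StringEqualsIfExists": {
          "dynamodb:FullTableScan": "true"
        }
      }
    }
  ]
}
```

&lt;p&gt;With this attached, Query 1 above still works; Queries 2, 3, and 4 fail with an access-denied error instead of silently burning read capacity.&lt;/p&gt;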

&lt;h2&gt;
  
  
  11. Transactions: Handle With Extreme Caution
&lt;/h2&gt;

&lt;p&gt;DynamoDB transactions are powerful — ACID guarantees across up to 100 items! But they come with gotchas that can turn a production environment into a debugging nightmare.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The basics we probably know:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Max 100 items per transaction (increased from 25 in September 2022)&lt;/li&gt;
&lt;li&gt;Each transactional operation consumes &lt;strong&gt;2x the capacity units&lt;/strong&gt; of a regular operation&lt;/li&gt;
&lt;li&gt;4 MB total data limit per transaction&lt;/li&gt;
&lt;/ul&gt;
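&lt;p&gt;That 2x multiplier is easy to underestimate. A quick back-of-envelope, following the documented capacity rules (a standard write costs 1 WCU per 1 KB, rounded up; a transactional write costs double):&lt;/p&gt;

```python
# Back-of-envelope capacity math for transactional writes, per DynamoDB's
# documented rules: 1 WCU per 1 KB written (rounded up), doubled inside
# a transaction.
import math

def write_units(item_size_bytes, transactional=False):
    wcu = math.ceil(item_size_bytes / 1024)
    return wcu * 2 if transactional else wcu

print(write_units(3_000))                      # 3 WCU for a plain write
print(write_units(3_000, transactional=True))  # 6 WCU inside a transaction
```

&lt;p&gt;So a transaction touching 10 such items costs 60 WCUs per attempt, and a retried conflict pays that again.&lt;/p&gt;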

&lt;p&gt;&lt;strong&gt;The part that will haunt us:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DynamoDB uses &lt;strong&gt;Optimistic Concurrency Control (OCC)&lt;/strong&gt;, not locks. This means transactions can fail due to conflicts with other concurrent operations — and debugging these failures is genuinely painful.&lt;/p&gt;

&lt;p&gt;When a transaction fails, we get a &lt;code&gt;TransactionCanceledException&lt;/code&gt; with a &lt;code&gt;CancellationReasons&lt;/code&gt; array. Sounds helpful, right? Here's the catch: in most SDKs, the full &lt;code&gt;CancellationReasons&lt;/code&gt; details surface only as a flattened, less helpful string in the exception message.&lt;/p&gt;

&lt;p&gt;And the reasons themselves? They tell us that something failed, not which specific item or why it conflicted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TransactionCanceledException: Transaction cancelled, please refer 
cancellation reasons for specific reasons [TransactionConflict, None, None, None]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great. One of the four items conflicted. Which one? Why? The error doesn't say. Good luck!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this is especially insidious:&lt;/strong&gt; In development and staging environments with low traffic, we rarely hit transaction conflicts. Everything works beautifully. Then we deploy to production with 1000x the concurrency, and suddenly &lt;code&gt;TransactionConflict&lt;/code&gt; errors are everywhere — and we have no idea why because proper logging was never instrumented for them.&lt;/p&gt;

&lt;p&gt;Some best practices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Log everything&lt;/strong&gt;. Specifically, log the complete &lt;code&gt;CancellationReasons&lt;/code&gt; array with context about which items were in the transaction:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;catch (TransactionCanceledException e) {
      log.error("Transaction failed for items: {}. Reasons: {}", 
          itemKeys,
          e.getCancellationReasons().stream()
           .map(r -&amp;gt; r.getCode() + ": " + r.getMessage())
           .collect(Collectors.toList()));
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Keep transactions small&lt;/strong&gt;. Fewer items = lower probability of conflicts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Design for idempotency&lt;/strong&gt; when possible. Sometimes transactions can be avoided entirely with conditional writes and careful operation ordering.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implement exponential backoff retry&lt;/strong&gt; specifically for &lt;code&gt;TransactionConflict&lt;/code&gt; errors — they're often transient.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load test with production-like concurrency&lt;/strong&gt; to surface timing-related conflicts before they hit production.&lt;/li&gt;
&lt;/ul&gt;
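&lt;p&gt;The retry advice above can be sketched as exponential backoff with full jitter. Here &lt;code&gt;commit&lt;/code&gt; and &lt;code&gt;TransactionConflict&lt;/code&gt; are stand-ins for the real &lt;code&gt;TransactWriteItems&lt;/code&gt; call and the SDK's conflict error:&lt;/p&gt;

```python
# Sketch of exponential backoff with full jitter for transient transaction
# conflicts. `commit` and TransactionConflict are placeholders for the real
# TransactWriteItems call and the SDK's conflict exception.
import random
import time

class TransactionConflict(Exception):
    pass

def commit_with_backoff(commit, max_attempts=5, base_delay=0.05, sleep=time.sleep):
    for attempt in range(max_attempts):
        try:
            return commit()
        except TransactionConflict:
            if attempt == max_attempts - 1:
                raise
            # Full jitter: sleep anywhere in [0, base * 2^attempt]
            sleep(random.uniform(0, base_delay * (2 ** attempt)))

# A flaky commit that conflicts twice, then succeeds:
attempts = {"n": 0}
def flaky_commit():
    attempts["n"] += 1
    if attempts["n"] >= 3:
        return "COMMITTED"
    raise TransactionConflict("concurrent writer won")

print(commit_with_backoff(flaky_commit, sleep=lambda _: None))  # COMMITTED
```

&lt;p&gt;Capping the attempt count matters: under sustained contention, unbounded retries just add more concurrent writers to the pile-up.&lt;/p&gt;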

&lt;h2&gt;
  
  
  Final Thoughts: DynamoDB Rewards the Prepared
&lt;/h2&gt;

&lt;p&gt;DynamoDB is genuinely powerful. For the right use cases — high-scale, predictable access patterns, single-digit millisecond requirements — it's hard to beat. But it's also unforgiving. The constraints I've described aren't bugs; they're fundamental to how DynamoDB achieves its performance guarantees.&lt;/p&gt;

&lt;p&gt;The engineers I've seen struggle most with DynamoDB are those who approach it like a flexible SQL database. They design their schema first, figure out queries later, and add indexes when things get slow. That approach works reasonably well with PostgreSQL. With DynamoDB, it leads to expensive rewrites and frustrated teams.&lt;/p&gt;

&lt;p&gt;The engineers who thrive with DynamoDB do the opposite: they start with their access patterns, work backward to the data model, and treat every limitation as a design constraint to work within, not around.&lt;/p&gt;

&lt;p&gt;If there's one thing I hope you take from this article, it's this: &lt;em&gt;DynamoDB's documentation tells us what we can do. Understanding what we can't do — and why — is what separates successful DynamoDB projects from painful ones.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Know the access patterns. Understand the constraints. Design accordingly. And maybe bookmark this article for the next time someone's tempted to add "just one more GSI."&lt;/p&gt;

&lt;p&gt;Happy building!&lt;/p&gt;

</description>
      <category>dynamodb</category>
      <category>singletabledesign</category>
      <category>database</category>
    </item>
    <item>
      <title>AWS Kiro: The real Development Environment</title>
      <dc:creator>Sathiesh Veera</dc:creator>
      <pubDate>Tue, 13 Jan 2026 19:11:22 +0000</pubDate>
      <link>https://forem.com/aws-builders/aws-kiro-the-real-development-environment-2p4j</link>
      <guid>https://forem.com/aws-builders/aws-kiro-the-real-development-environment-2p4j</guid>
      <description>&lt;p&gt;In the last 12 months, I have built quite a few applications outside of my work. Everything started on an AI-powered IDE and ended up with minimal help but more trouble from AI. I have been jumping between different AI-powered IDEs such as Copilot, Cursor and most recently Google's Antigravity. My observation so far, they are all good at generating code. They're all bad at understanding what I actually wanted to build.&lt;/p&gt;

&lt;p&gt;When I tried AWS Kiro, something clicked in a way the others never did. With Kiro, I was "Developing" an application, not just "Coding" it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Project I was working on
&lt;/h2&gt;

&lt;p&gt;I needed to build a simple Chrome extension that had some form-reading operations. Since Google's Antigravity has a built-in Chrome environment that can automatically open, test, and record videos, it seemed like the obvious choice. Great for Chrome extensions, right?&lt;/p&gt;

&lt;p&gt;Well, it was good at creating a nice-looking UI; I'll give it that. But even with the state-of-the-art Gemini 3 Pro model, I couldn't get the plugin to a working state after 3-4 days. The model kept going in circles, and I was burning time on fixes that led nowhere. I lost track of what I was doing and had no clear way out.&lt;/p&gt;

&lt;p&gt;I scrapped it. Started fresh. New project. But this time on AWS Kiro.&lt;/p&gt;

&lt;p&gt;Within 2 days, I had a working sample. So, what was different?&lt;/p&gt;

&lt;h2&gt;
  
  
  Spec-Driven Development: Think First, Code Later
&lt;/h2&gt;

&lt;p&gt;Most AI IDE tools are itching to write code the moment you type something. You give them a prompt, and boom — files everywhere. Sounds efficient until you realize you're 500 lines deep into something that doesn't match what you actually needed. We've all been there.&lt;/p&gt;

&lt;p&gt;Kiro flips this completely.&lt;/p&gt;

&lt;p&gt;When I told Kiro I wanted to build a Chrome extension with specific functionality, it didn't start generating files. Instead, it created clear specs, user stories, and acceptance criteria using EARS notation, something I am very much used to as a software engineer.&lt;/p&gt;

&lt;p&gt;Standard format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WHEN [condition/event] THE SYSTEM SHALL [expected behavior]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what a real requirement looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Requirements&lt;/span&gt;

&lt;span class="gu"&gt;### Requirement 1: Form Field Detection&lt;/span&gt;
&lt;span class="gs"&gt;**User Story:**&lt;/span&gt; As a user, I want the extension to automatically detect form fields on any webpage, so that I can interact with them programmatically.

&lt;span class="gs"&gt;**Acceptance Criteria:**&lt;/span&gt;
&lt;span class="p"&gt;1.&lt;/span&gt; WHEN a webpage loads THE SYSTEM SHALL scan for all input, select, and textarea elements
&lt;span class="p"&gt;2.&lt;/span&gt; WHEN a form field is detected THE SYSTEM SHALL highlight it with a visual indicator
&lt;span class="p"&gt;3.&lt;/span&gt; WHEN no form fields exist THE SYSTEM SHALL display "No forms detected" message
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Call me old school, but this is much better than a 20-page wall of text. These are actual structured requirements, organized by category: functional requirements, security considerations, performance improvements.&lt;/p&gt;

&lt;p&gt;Here's where it gets interesting. I started with about 10 requirements. After a few rounds of back-and-forth discussion, challenging the specs, and answering clarifying questions, "we" (yes, it was no longer just me) ended up with over 50 well-defined requirements, many of which I hadn't even thought about before I started.&lt;/p&gt;

&lt;p&gt;Kiro would actually point out contradictions between user stories and call things out: "Hey, this user story says X, but this other one implies Y. Which one do you want?" That kind of debate is gold when you're trying to nail down what you're actually building.&lt;/p&gt;

&lt;p&gt;And this is key: &lt;strong&gt;Spec-driven development avoids context drift.&lt;/strong&gt; When the AI has a clear, documented understanding of what you're building, it doesn't get lost during troubleshooting and start doing absurd things. The specs become the anchor.&lt;/p&gt;

&lt;p&gt;I haven't seen any other tool do this as well as Kiro. Not even close.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design Mode and Task Lists: Finally, Some Traceability
&lt;/h2&gt;

&lt;p&gt;Once the specs were locked, Kiro generated a design document and a task list. You might already know this: these three files form the foundation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;requirements.md&lt;/code&gt; — User stories with acceptance criteria in EARS notation&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;design.md&lt;/code&gt; — Technical architecture, sequence diagrams, implementation considerations&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;tasks.md&lt;/code&gt; — Discrete, trackable tasks sequenced by dependencies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But, the key here is, this isn't just a to-do list. It's a &lt;strong&gt;traceable implementation plan&lt;/strong&gt;. Each task maps back to a requirement. When you're deep in implementation and wondering "why are we building this again?", you can trace it right back to the original spec.&lt;/p&gt;

&lt;h2&gt;
  
  
  Development is not always Linear
&lt;/h2&gt;

&lt;p&gt;Real development isn't linear. You don't always implement Task 1, then Task 2, then Task 3. Sometimes you jump ahead because you need to test something, or a dependency forces you to work on a later task first.&lt;/p&gt;

&lt;p&gt;I was shuffling around — implemented the backend API (Task 5) before the frontend changes (Task 4). When I tried to use the backend, Kiro pointed out: "You asked for the backend API, and I gave you that. But that's Task 5. We haven't done Task 4 yet, which is the frontend integration."&lt;/p&gt;

&lt;p&gt;Even though I was jumping around in the chat, not touching the task list file, Kiro kept track of what was done, what was tested, and what was pending.&lt;/p&gt;

&lt;p&gt;It's like pairing with someone who actually remembers what we talked about yesterday.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steering Files: This Changes Everything 🎯
&lt;/h2&gt;

&lt;p&gt;Here's something that frustrated me with every other AI IDE: you give feedback about one library not working, and suddenly the AI rips out your entire tech stack and introduces something completely different. You complain about a CSS issue, and next thing you know, it's migrated you from React to Vue. That's just confusion and trouble.&lt;/p&gt;

&lt;p&gt;Kiro's steering files solve this completely.&lt;/p&gt;

&lt;p&gt;Steering files are markdown documents stored in &lt;code&gt;.kiro/steering/&lt;/code&gt; that give Kiro persistent knowledge about your project. I created steering files that defined:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The technology stack Kiro must use&lt;/li&gt;
&lt;li&gt;Boundaries Kiro cannot cross during troubleshooting&lt;/li&gt;
&lt;li&gt;Libraries that are off-limits&lt;/li&gt;
&lt;li&gt;Coding conventions and patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's a sample &lt;code&gt;tech.md&lt;/code&gt; steering file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Technology&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Stack"&lt;/span&gt;
&lt;span class="na"&gt;inclusion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="gh"&gt;# Technology Stack Guidelines&lt;/span&gt;

&lt;span class="gu"&gt;## Frontend&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Framework: React 18+ with TypeScript
&lt;span class="p"&gt;-&lt;/span&gt; Styling: Tailwind CSS only (no styled-components, no CSS modules)
&lt;span class="p"&gt;-&lt;/span&gt; State: React Context for simple state, no Redux

&lt;span class="gu"&gt;## Constraints&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; DO NOT introduce new dependencies without explicit approval
&lt;span class="p"&gt;-&lt;/span&gt; DO NOT switch frameworks or major libraries during troubleshooting
&lt;span class="p"&gt;-&lt;/span&gt; DO NOT use jQuery under any circumstances

&lt;span class="gu"&gt;## Preferred Patterns&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Functional components only
&lt;span class="p"&gt;-&lt;/span&gt; Custom hooks for shared logic
&lt;span class="p"&gt;-&lt;/span&gt; Error boundaries for fault isolation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, just because I complain about a particular library, Kiro doesn't rip it out and do something completely different like other tools would. It stays within the boundaries. Closed context. No irreparable damage to the codebase.&lt;/p&gt;

&lt;p&gt;And there is much more that we can achieve with the steering files. &lt;/p&gt;

&lt;p&gt;I remember building an app with Cursor where it created two different auth flows (the first one didn't meet the full requirements) and left the files for both flows in the codebase, creating a lot of confusing routes and mappings. With Kiro, I could avoid any such scenario entirely by defining a steering file describing how to handle these cases.&lt;/p&gt;

&lt;p&gt;Another simple example: whenever I had a discussion with Kiro, gave feedback, debated an approach, or finalized a new direction, it would create an MD file documenting what changed and why. Over time, these files cluttered my workspace. I had about 20 random files like "encryption_fix.md", "testing_guide.md", and "functionality_notes.md".&lt;/p&gt;

&lt;p&gt;It was annoying. So, I created a steering file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gs"&gt;**CRITICAL RULE**&lt;/span&gt;: All feedback, status, and documentation files created by Kiro must follow this naming convention:

    feedback/{NNN}_{descriptive-name}.md

Where:
- `{NNN}` is a zero-padded 3-digit sequential number (001, 002, 003, etc.)
- `{descriptive-name}` is a kebab-case description of the content
- All files must be in the `feedback/` directory at the project root

### Examples

**Good**:
- `feedback/001_gradle-migration-complete.md`
- `feedback/002_jwt-authentication-fixed.md`

**Bad** (DO NOT USE):
- `GRADLE_MIGRATION_COMPLETE.md` (wrong location, no number)
- `JWT_AUTHENTICATION_FIXED.md` (wrong location, no number)
- `status.md` (wrong location, no number, not descriptive)
- `feedback/migration.md` (no number)
- `feedback/1_test.md` (not zero-padded)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now instead of random files scattered everywhere, I have a clear chronological track of all discussions and architectural decisions. It's actually become a feature — a decision log I can reference later.&lt;/p&gt;

&lt;p&gt;Plus, I created an agent hook to clean these up or summarize them whenever I need.&lt;/p&gt;

&lt;h2&gt;
  
  
  Agent Hooks: Automation That Runs in the Background 🤖
&lt;/h2&gt;

&lt;p&gt;Agent hooks are event-driven automations that trigger when specific events occur — saving files, creating new files, deleting files. Instead of manually asking for routine tasks, hooks handle them automatically.&lt;/p&gt;

&lt;p&gt;Here's one of the hooks I set up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Cleanup Unused Files"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Identify and remove unused files and folders"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"trigger"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"manual"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Clean Up Project"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sendMessage"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Performing project cleanup:&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;1. Scan for unused files and folders&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;2. Check for:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;   - Empty directories&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;   - Backup files (*.bak, *~)&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;   - Temporary files&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;   - Unused dependencies&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;   - Old documentation files&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;3. List files to be removed&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;4. Ask for confirmation before deletion&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;5. Document cleanup in feedback file&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;Be careful not to remove important files!"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"enabled"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I configured hooks for: auto-generating test cases, running test cases after every task completion, auto-updating documentation, cleaning up unused files, and creating summaries of discussions on any changes to the initial plan.&lt;/p&gt;

&lt;p&gt;It's like pairing with someone who actually remembers to do the boring stuff you always forget.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context Management That Just Works
&lt;/h2&gt;

&lt;p&gt;Here's something that drives me crazy with other tools: token limits. You're mid-conversation, making progress, and suddenly the context window fills up. Now you have to start a new session, re-explain everything, and hope the AI picks up where you left off.&lt;/p&gt;

&lt;p&gt;Kiro handles this automatically. When tokens exceeded the limit, it compacted the discussion internally, keeping a summary of what mattered. I didn't have to manually open new sessions and explain everything as a fresh start — Kiro did it behind the scenes.&lt;/p&gt;

&lt;p&gt;When I looked at my session history, there were 7-8 sessions. But they weren't disconnected fresh starts. They were continuations of the same conversation, with context preserved.&lt;/p&gt;

&lt;p&gt;That's exactly what I needed — long, complex development sessions without worrying about losing the thread.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Cool features
&lt;/h2&gt;

&lt;p&gt;Kiro has many other cool development features, some of which a few other IDEs are now starting to provide as well.&lt;/p&gt;

&lt;h4&gt;
  
  
  Checkpoint Restore: The "Undo" Button that we all need
&lt;/h4&gt;

&lt;p&gt;I challenged an implementation but later asked Kiro to revert my decision, which sort of worked, but some files and ideas from that discussion were still lingering.&lt;/p&gt;

&lt;p&gt;That's where checkpoints help. I went back to the checkpoint before my "why this, change it..." comment, and Kiro simply forgot everything after that point. Context intact. Memory not corrupted. Back on track.&lt;/p&gt;

&lt;h4&gt;
  
  
  Permission Control Done Right 🔐
&lt;/h4&gt;

&lt;p&gt;When Kiro runs commands, it doesn't ask for blanket permissions. It's selective, per command.&lt;/p&gt;

&lt;p&gt;For example, when it wanted to run &lt;code&gt;rm -rf&lt;/code&gt; on the distribution folder, I could approve that specific command on that specific folder. But for &lt;code&gt;curl&lt;/code&gt; commands, I could grant blanket access for &lt;code&gt;curl *&lt;/code&gt; if I trusted those.&lt;/p&gt;

&lt;p&gt;Everything shows up in the chat for easy selection. Much better than micromanaging every single command or giving away the keys to the kingdom.&lt;/p&gt;

&lt;p&gt;Pro tip: You can also configure Trusted Commands that auto-approve:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm *       # Allows all npm commands
git status  # Allow git status checks
python -m * # Allows Python module execution
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  MCP Integration: Connecting to Your World 🌐
&lt;/h4&gt;

&lt;p&gt;Kiro supports Model Context Protocol (MCP), which lets you connect to external tools and data sources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"aws-docs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"uvx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"awslabs.aws-documentation-mcp-server@latest"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"AWS_PROFILE"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"disabled"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Features I'm Still Exploring
&lt;/h2&gt;

&lt;p&gt;Kiro has even more capabilities I'm aware of but haven't deep-dived into yet:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kiro CLI&lt;/strong&gt; — Terminal-based workflows with the same steering files and MCP servers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Autonomous Agent&lt;/strong&gt; (Preview) — A frontier agent announced at re:Invent 2025 that maintains context across sessions, learns from code review feedback, and works asynchronously across multiple repositories. Matt Garman called it "orders of magnitude more efficient" than first-generation AI coding tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kiro Powers&lt;/strong&gt; — One-click packages that add specialized capabilities (Datadog, Postman, Stripe, Figma integrations)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Property-Based Testing&lt;/strong&gt; — Extracts properties from specs and tests whether generated code meets them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are on my list. But even without them, Kiro has already transformed my workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finally, The Look and Feel 👻
&lt;/h2&gt;

&lt;p&gt;A trivial thing, but I love Kiro's ghost icon. It's a nice touch.&lt;/p&gt;

&lt;p&gt;More importantly, the IDE feels like a &lt;em&gt;real&lt;/em&gt; IDE — well-integrated, cohesive, not just a VS Code wrapper with some AI bolted on (looking at you, Cursor). Kiro is built on Code OSS, so you get VS Code familiarity with your existing settings and extensions, but it feels intentional and polished.&lt;/p&gt;

&lt;p&gt;Fun fact: Amazon is now using Kiro internally as their standard AI development environment company-wide. That's a pretty strong signal.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Wish Kiro Had
&lt;/h2&gt;

&lt;p&gt;Google's Antigravity has some features I genuinely miss:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Inline comments and edits&lt;/strong&gt; — The ability to make surgical changes right in the code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple interactions with the same agent&lt;/strong&gt; — Running parallel conversations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent manager&lt;/strong&gt; — Coordinating multiple agents for complex tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If Kiro gets these features, I think it would be unmatched. The foundation is already strong — these additions would make it exceptional.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Tips for Getting Started with Kiro 💡
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start with Spec Mode&lt;/strong&gt; — Don't jump into code. Let Kiro generate requirements first, then challenge them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set up steering files early&lt;/strong&gt; — Define your tech stack and boundaries before you start building. You'll thank yourself later.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Supervised mode for risky areas&lt;/strong&gt; — Auth, payments, infrastructure. Switch to Autopilot for boilerplate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create cleanup hooks&lt;/strong&gt; — Automate the boring stuff: tests, docs, build artifacts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure MCP for AWS docs&lt;/strong&gt; — If you're building on AWS, the live documentation integration is invaluable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trust the checkpoint&lt;/strong&gt; — When things go sideways (and they will), just restore. Don't waste time with manual fixes.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Final Thoughts: Controlled VIBE Coding
&lt;/h2&gt;

&lt;p&gt;Here's the thing about AI IDEs: they're all trying to help you code faster. But faster doesn't matter if you're building the wrong thing, or if you lose your context halfway through, or if the AI keeps pivoting on you without warning.&lt;/p&gt;

&lt;p&gt;When I have Spec Mode engaged, steering files in place, hooks running in the background, and checkpoints available — it's like VIBE coding, but &lt;strong&gt;controlled&lt;/strong&gt;. It's similar to coding in Cursor or any other AI IDE, but with guardrails that give you confidence Kiro won't break things. And even if it does? Checkpoint restore is right there.&lt;/p&gt;

&lt;p&gt;If you're starting a project from scratch and want an AI that thinks before it codes, Kiro is worth your time. It's not just another autocomplete on steroids — it's the closest I've found to pairing with someone who actually understands what you're building.&lt;/p&gt;

&lt;p&gt;Happy coding! 🚀&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article reflects my personal experience using AWS Kiro for side projects. I build applications for business use cases and spend a lot of time testing these tools to understand what actually works. Your mileage may vary based on your use case and project complexity.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kiro</category>
      <category>vibecoding</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
