<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Tycko Franklin</title>
    <description>The latest articles on Forem by Tycko Franklin (@tyckofranklin).</description>
    <link>https://forem.com/tyckofranklin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F886830%2F89ecafac-3218-4bc5-babc-d4201e6ac995.jpg</url>
      <title>Forem: Tycko Franklin</title>
      <link>https://forem.com/tyckofranklin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tyckofranklin"/>
    <language>en</language>
    <item>
      <title>CDK - Using Central Register Pattern for Resource Sharing</title>
      <dc:creator>Tycko Franklin</dc:creator>
      <pubDate>Sun, 01 Feb 2026 08:00:14 +0000</pubDate>
      <link>https://forem.com/tyckofranklin/cdk-using-central-register-pattern-for-resource-sharing-27k2</link>
      <guid>https://forem.com/tyckofranklin/cdk-using-central-register-pattern-for-resource-sharing-27k2</guid>
      <description>&lt;p&gt;Props drilling, massive functions/classes in one file, and deployment deadlock are just some of the common issues you can find in AWS CDK code written by yourself or others (me included). Amazon Web Services Cloud Development Kit is the framework I work with most. AWS CDK is very powerful. Built around CloudFormation it makes architecting, building, and deploying cloud resources very easy...until it's not. This blog post explores using a central register to share resources anywhere in the CDK code base, including between stacks and pitfalls to avoid when doing so.&lt;/p&gt;

&lt;p&gt;A year ago I wrote an article, "&lt;a href="https://dev.to/tyckofranklin/a-novel-pattern-for-documenting-dynamodb-access-patterns-2mp3"&gt;A Novel Pattern for Documenting DynamoDB Access Patterns&lt;/a&gt;" (update coming soon), and I mentioned there the pattern of using a central register for my resources. So here I am, over a year later, finally writing about it. &lt;/p&gt;

&lt;p&gt;The Central Register pattern (check out &lt;a href="https://www.geeksforgeeks.org/system-design/registry-pattern/" rel="noopener noreferrer"&gt;this page&lt;/a&gt; about it on GeeksforGeeks; I like how NehalNavlani explains it) gives you one central place to register resources, so you can look those resources up from anywhere in your code base using an ID. Below is what I wrote for the Lambda central register in CDK: it lets me register resources, with error checking that verifies I'm not using the same ID twice. It also has error checking that reports when no resource exists for a provided ID, which helps prevent bad references.&lt;/p&gt;

&lt;p&gt;Here's the code, which includes three main functions: addLambda, getLambda, and validateAndReturnLambdaArn:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgjl7nvzdfmw605yrkfb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgjl7nvzdfmw605yrkfb.png" alt=" " width="800" height="655"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To add a Lambda to the central register, all you have to do is import addLambda and pass it an ID and the instantiated Lambda. I usually use the Lambda's function name, as it's unique per region and makes debugging much easier, since the name stays the same through all the steps. Here we create a Lambda and add it to the Lambda central register:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy6xkc4yn6u66hhtm7q7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy6xkc4yn6u66hhtm7q7.png" alt=" " width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To use a Lambda in, say, a Step Function within the same stack, we can simply call getLambda with the same ID (this example uses a different Lambda from the one above, but it was created the same way). The key is that the Lambda was created somewhere else, yet we can easily reference it just by importing the getLambda function and using the correct key:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cnu9fa4lq8jv74cfycl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cnu9fa4lq8jv74cfycl.png" alt=" " width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This works great: it reduces the need for prop drilling, and for adding a lot of properties to look up and keep track of in the stack class or in the environments passed from one stack to another. But it can come with an issue when deploying resources in multiple stacks: deployment deadlock. I first saw this issue explained by Lee Gilmore, an AWS Serverless Hero, in one of &lt;a href="https://blog.serverlessadvocate.com/aws-cdk-stack-dependencies-1d42a18aaec2#:~:text=app%E2%80%99s%20deployment%20with-,deployment%20deadlock,-%2C%20which%20is%20usually" rel="noopener noreferrer"&gt;his articles&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One of the key gotchas to watch out for when using the cross-stack reference approach is breaking your app’s deployment with deployment deadlock, which is usually shown with the following error “Export cannot be deleted as it is in use by another Stack”.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I used to run into this often when using getLambda in an API Gateway stack that depended on a Lambda in another stack. Then the Lambda would be renamed or deleted, or something else would happen that caused a loop: the API Gateway stack cannot deploy before the Lambda stack does, but the Lambda stack won't deploy because the API Gateway stack depends on it. A great solution I have found is to use references built with the "from ARN" style of static methods (different resources name this type of function slightly differently, but the concept is the same). Lots of CDK resources have similar methods, and because the reference is built from a plain string rather than a cross-stack export, it doesn't create a hard dependency between stacks. The resource does need to exist the first time you deploy, but after that you can delete or update either stack's resources and the other won't care. For Lambda, I created validateAndReturnLambdaArn, which acts just like getLambda, error checking included, but returns the ARN the Lambda is expected to have (not the one generated by CDK, which I believe would link the stacks as dependencies...) so that "Function.fromFunctionArn" can use it and the result can be passed to the API Gateway method creation functions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiorcqln4j5ch4bjuazdl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiorcqln4j5ch4bjuazdl.png" alt=" " width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: for resources like DynamoDB where only the table name is needed, I have a similar function, validateAndReturnDynamoDBTableName, that validates the table is registered in our code but returns just the table name, without linking to the construct. The table name can then be passed in as an environment variable, and can also be used when creating IAM policies that grant resources in other stacks access to the DynamoDB table.&lt;/p&gt;
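&lt;p&gt;The DynamoDB version isn't in the code listing at the end of this post, but a minimal sketch looks roughly like the following (assuming a tableLookup map analogous to lambdaLookup; treat the exact names as illustrative):&lt;/p&gt;

```typescript
// Illustrative DynamoDB counterpart to the Lambda central register.
// The table name doubles as the register ID, just like the Lambda name does.
export const tableLookup = new Map();

export const addTable = (tableName: string, table: unknown) => {
    if (tableLookup.has(tableName)) {
        throw Error(`Table ID has already been used for this deployment: ${tableName}`);
    }
    tableLookup.set(tableName, table);
};

// Validates the table was registered, but returns only the name, so
// consuming stacks never hold a reference to the table construct itself.
export const validateAndReturnDynamoDBTableName = (tableName: string) => {
    if (!tableLookup.has(tableName)) {
        throw Error(`Could not find table in lookup from ID: ${tableName}`);
    }
    return tableName;
};
```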

&lt;p&gt;Using this pattern has been successful on my projects; it makes it very quick to create and use resources in CDK, while also reducing headaches when deploying updates in pipelines. In the past, deployment deadlock almost always meant a painful manual process of unlinking the stacks, redeploying each, then linking them back together, something I didn't want to do in CI/CD pipelines. Now the pipelines easily keep things up to date as they change in our code!&lt;/p&gt;

&lt;p&gt;Please feel free to connect with me on LinkedIn, or join the Believe in Serverless Discord, where there are over 1000 serverless enthusiasts, ranging from people just a few days in to people who literally wrote the book on the subject.&lt;/p&gt;

&lt;p&gt;Images of code are nice, but if you want to copy and paste, here is the code as text:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Function } from "aws-cdk-lib/aws-lambda";
import { getPartitionIdentifier } from "../utilities";

const { region = "", account = "" } = process.env;

export const lambdaLookup: Map&amp;lt;string, Function&amp;gt; = new Map();

export const addLambda = (lambdaName: string, lambda: Function) =&amp;gt; {
    if (lambdaLookup.has(lambdaName)) {
        throw Error(`Lambda ID has already been used for this deployment: ${lambdaName}`);
    }
    lambdaLookup.set(lambdaName, lambda);
};

export const getLambda = (lambdaName: string) =&amp;gt; {
    const lambda = lambdaLookup.get(lambdaName);
    if (!lambda) {
        throw Error(`Could not find lambda in lookup from ID: ${lambdaName}`);
    }
    return lambda;
};

export const validateAndReturnLambdaArn = (lambdaName: string) =&amp;gt; {
    const awsPartition = getPartitionIdentifier(region);
    const lambdaExists = lambdaLookup.has(lambdaName);
    if (!lambdaExists) {
        throw Error(`Could not find lambda in lookup from ID: ${lambdaName}`);
    }
    return `arn:${awsPartition}:lambda:${region}:${account}:function:${lambdaName}`;
};
// Note: the original screenshot showed only this helper's body; the wrapping
// function and its props type below are illustrative reconstructions.
// DockerImageFunction, DockerImageCode, and LoggingFormat come from
// "aws-cdk-lib/aws-lambda"; cdk is "aws-cdk-lib".
export const createGetQueryDataLambda = ({
    construct,
    deploymentType,
    dockerfile,
    role,
}: CreateGetQueryDataLambdaProps) =&amp;gt; {
    const functionName = `${deploymentType}-getQueryData`;
    const lambdaInstance = new DockerImageFunction(construct, functionName, {
        code: DockerImageCode.fromImageAsset(dockerfile),
        functionName,
        timeout: cdk.Duration.seconds(30),
        memorySize: 256,
        role,
        loggingFormat: LoggingFormat.JSON,
    });

    // Register the new Lambda so the rest of the code base can look it up.
    addLambda(functionName, lambdaInstance);

    return lambdaInstance;
};
export const createLambdaGetIntegration = ({
    construct,
    lambdaName,
    methodName,
    parentResource,
}: CreateLambdaGetIntegrationProps) =&amp;gt; {
    const resource = parentResource.addResource(methodName);
    // Build the expected ARN rather than using the construct directly, so this
    // stack takes no hard CloudFormation dependency on the Lambda's stack.
    const lambdaIntegrateArn = validateAndReturnLambdaArn(lambdaName);
    const lambdaToIntegrate = Function.fromFunctionArn(
        construct,
        `${parentResource.path}-GET-${lambdaName}`,
        lambdaIntegrateArn
    );

    resource.addMethod("GET", new LambdaIntegration(lambdaToIntegrate));
};

new LambdaInvoke(construct, `${uniquePrefix}-update-processing-status`, {
    stateName: `${uniquePrefix} - Update Processing Status`,
    lambdaFunction: getLambda(`${deploymentType}-updateProcessingStatus`),
    retryOnServiceExceptions: false,
    resultPath: "$.updateProcessingStatus",
    resultSelector: {
        "payload.$": "$.Payload",
    },
});


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>aws</category>
      <category>devops</category>
      <category>cdk</category>
      <category>serverless</category>
    </item>
    <item>
      <title>A Novel Pattern for Documenting DynamoDB Access Patterns</title>
      <dc:creator>Tycko Franklin</dc:creator>
      <pubDate>Wed, 01 Jan 2025 07:54:40 +0000</pubDate>
      <link>https://forem.com/tyckofranklin/a-novel-pattern-for-documenting-dynamodb-access-patterns-2mp3</link>
      <guid>https://forem.com/tyckofranklin/a-novel-pattern-for-documenting-dynamodb-access-patterns-2mp3</guid>
      <description>&lt;p&gt;This blog post will introduce and explore a way of documenting Access Patterns for DynamoDB in JSON format for easy updating and use in NodeJS lambdas. I haven't seen this structure used, but if it's out there and I missed it, I would love to talk about it!&lt;/p&gt;

&lt;p&gt;As I write this on the last day of 2024, I hope those reading will have (or have had) a happy New Year!&lt;/p&gt;

&lt;p&gt;I've been working with DynamoDB for years now. It seems there is always more to learn and always more to improve when it comes to DynamoDB and using it to ever greater success. I can't give enough praise to people like Alex DeBrie, Rick Houlihan, Khawaja Shams, and Pete Naylor for sharing their knowledge over the years through blog posts, re:Invent presentations, books, chats, and in-person conversations.&lt;/p&gt;

&lt;p&gt;Anyone who has been introduced to DynamoDB through the awesome materials already out there probably knows about access patterns. Access patterns are the ways we need to access our data, and they strongly influence how we structure and store that data for both read and write operations. If you have seen any content on this process, you'll probably recognize the spreadsheets used to model those access patterns in DynamoDB. I started my DynamoDB journey using spreadsheets to model my data, but moved towards JSON instead because it was easier to go back and forth between the code and the modeling. It also felt a lot easier to refactor the access patterns and move things around in JSON than in spreadsheets.&lt;/p&gt;

&lt;p&gt;I am not sharing this JSON-style modeling to replace spreadsheets or to advocate against them; I simply find it works better for me, so I thought I would share it and see what others think. I love sharing knowledge about technology, and this fits with that. I don't think I can create new content that compares to the mountain of content already out there on access patterns and how to create them, so I will level set and say that this blog post is for those who already have a good understanding of access patterns and how to model in DynamoDB.&lt;/p&gt;

&lt;p&gt;Without further ado, onto the main content.&lt;/p&gt;

&lt;p&gt;First up, we need some data to model. I went back and forth on this, but I think a nationwide kayak rental company might give us some easy wins for modeling access patterns and showcasing the method of documenting them in JSON.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fndr4jwy72161jrrr55rq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fndr4jwy72161jrrr55rq.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's assume there is a Kayak Rental location in almost every state in the United States of America; we'll call it 40. Each location will have an address. Let's assume these addresses are constant through time and won't be changing. Each location will have on average 5 employees; each employee has a job title and compensation, a name and address, and other details that we won't worry about at this time. Employees are people, and people are likely to move, so their addresses are likely to change over time. Each store has an inventory that changes day to day, and each store has rental records. There are also customers, along with the rentals they currently have out. We could get into a lot more that is needed for actually running a kayak rental franchise, but let's stop here, as this gives us more than enough to start asking questions of our data, and therefore to define our access patterns.&lt;/p&gt;

&lt;p&gt;I prefer to have the access pattern questions as close to the data modeling as possible so things aren't spread out and hard to find. I'll switch to JSON for this next part and capture them there:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Access Patterns": [
        "get the location of all rental stores",
        "get the inventory of a store",
        "get the current employees of a store",
        "get all employees who have worked at a store",
        "get all stores an employee has worked at",
        "get all rentals a customer has out",
        "get customer rental history for a location",
        "get customer rental history for all locations"
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this blog post, I don't think we will get through everything, so let's pick some topics and start modeling! First up, let's talk about Single Table Design: all our items live in the same DynamoDB table. This means we need to track our data modeling in a way that lets all our items exist at the same time without conflicting. Being thorough in documenting the access patterns and the data models will be key to keeping things working well.&lt;/p&gt;

&lt;p&gt;The access patterns we have defined need a brief introduction to cardinality. Traditional DynamoDB data modeling is extremely good at creating patterns for data with 1:1, many:1, and many:many relationships, provided that in the many:many case each pairing of items occurs only once. Item collections model 1:1, many:1, and many(1):(1)many relationships well, and there's a lot written about them. Relationships of the many(1..n):(1..m)many kind, where each item can relate to any other item one or many times, are really hard to model with item collections, index overloading, or global secondary indexes on single items. This is getting abstract, so here's a real-world example: an employee has worked for the company for 20 years, moving to a different location about once a year, across 10 different locations through promotions or lateral moves, and they have worked at some of those locations several times. Say they have worked at location "A" 5 different times. In another situation, an employee could live close to 2 stores and might work at both at the same time. This presents an issue when you need to define the same relationship more than once.&lt;/p&gt;

&lt;p&gt;Reverse PK/SK lookups (using SK as the partition key and PK as the sort key in a global secondary index) can work quite well for many(1):(1)many relationships. I've had issues with them in many(1..n):(1..m)many situations, though, and have moved on to other patterns. For these, I have found Materialized Graphs to be an awesome pattern: generic, easy to understand, and powerful for querying. Alex DeBrie's "The DynamoDB Book" has a section on Materialized Graphs, and I would highly recommend checking it out if you haven't already. For the modeling today, we'll use them in some situations and keep to traditional item collections and index overloading for others.&lt;/p&gt;

&lt;p&gt;Long ago I made the decision to keep the indexes and keys generic. This cuts off some access patterns that are perfectly valid, but it also lets me set things up once and stay on that path without having to redo things or add customizations and worry about how they will affect previous data modeling. Thus, I have the primary index (what uniquely identifies each item) use "PK" as the partition key string and "SK" as the sort key string. I don't use local secondary indexes. I set up my tables to start with 6 Global Secondary Indexes, as this is generally 1 more GSI than I have needed in most projects as they grow. Each Global Secondary Index uses PK{x} and SK{x} as its partition and sort keys. Because Global Secondary Indexes are sparse, and are only populated when an item contains PK{x} and SK{x}, I can add these up front without being penalized in performance or cost when starting out, and they will be there when I need them. Because I overload the Global Secondary Indexes, I don't project specific fields into each one; I use the default of projecting all fields into the Global Secondary Index's shadow table.&lt;/p&gt;
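&lt;p&gt;As a rough sketch of that setup (the helper name and object shape here are illustrative, loosely mirroring a GlobalSecondaryIndexProps-style config rather than exact CDK types), generating the six generic index definitions might look like:&lt;/p&gt;

```typescript
// Illustrative: generate six generic, overloaded GSI definitions (GSI1..GSI6),
// each keyed on PK{x}/SK{x} with all attributes projected (the default).
export const buildGenericIndexes = (count: number) =>
    Array.from({ length: count }, (_, i) => ({
        indexName: `GSI${i + 1}`,
        partitionKey: { name: `PK${i + 1}`, type: "S" },
        sortKey: { name: `SK${i + 1}`, type: "S" },
        projectionType: "ALL",
    }));
```

Because the indexes are sparse, adding all six up front costs nothing until items actually carry the PK{x}/SK{x} attributes.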

&lt;p&gt;Alright, now that we have those two paragraphs of initial explanations and level setting out of the way, let's start modeling some basic entities.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "store": [
        {
            "PK": "${version}#store#storeULID#${storeULID}",
            "SK": {
                "metadata": {
                    "entityType": "storeMetadata"
                }
            }
        }
    ],
    "person": [
        {
            "PK": "${version}#person#personULID#${personULID}",
            "SK": {
                "metadata": {
                    "entityType": "personMetadata"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we have store and person. I decided to go with person as an entity because we have both employees and customers, so this will allow us to model both. We may or may not stay with this, but for starters it's a decision I can support. I will also be using ULIDs. ULIDs are like UUIDs, but encode a timestamp in a way that makes them sortable by creation time.&lt;/p&gt;

&lt;p&gt;For each PK, I have made a habit of including a version, in case I ever need to migrate to a new version of the data while still working through the old data. I have yet to really need this, but I've kept it. There are situations where you can use the version to keep the latest items as they are updated, archiving the changes to each one by changing the version on the fly, but I won't be covering that here.&lt;/p&gt;

&lt;p&gt;For each category, often an entity, I create an entry in a JSON dictionary or object with a title. This gives us something to refer to, but the naming isn't strict and could change on a whim if something else makes more sense. For each category of item (e.g. store and person) I have an array of objects, each with one PK and at least one SK. The PK is a string; the SK is a dictionary, with each unique SK pattern as a key. For now there are only two unique PK/SK pairing patterns, but we'll build more. Each unique PK/SK pair pattern is one entity, and the "entityType" field gives it a unique entity name. At one point I would run queries, parse the PK and SK, and use logic to figure out which entity each item was. This worked, but it was pretty confusing to follow, required a lot of boilerplate code, and didn't help much when working with items in the AWS DynamoDB Web Console, where those helper functions aren't available. Thus, I switched to the rule that all items in the DynamoDB table will have an entity type. We now have a storeMetadata entity as well as a personMetadata entity.&lt;/p&gt;
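&lt;p&gt;To make the conventions concrete, here is a small illustrative helper (the function names are made up for this example, not from my actual code) that builds the templated keys and enforces the rule that every item carries an entityType:&lt;/p&gt;

```typescript
// Illustrative builders for the templated keys above.
export const buildStorePK = (version: string, storeULID: string) =>
    `${version}#store#storeULID#${storeULID}`;

export const buildPersonPK = (version: string, personULID: string) =>
    `${version}#person#personULID#${personULID}`;

// Rule: every item in the table carries an entityType.
export const makeItem = (PK: string, SK: string, entityType: string, fields: object) => ({
    PK,
    SK,
    entityType,
    ...fields,
});
```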

&lt;p&gt;Next up, let's add employees.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "store": [
        {
            "PK": "${version}#store#storeULID#${storeULID}",
            "SK": {
                "metadata": {
                    "entityType": "storeMetadata"
                },
                "employee#metadata#personULID#${personULID}":{
                    "entityType": "storeEmployee"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we use the personULID, and we'll populate the item with just a few fields like name and ID, things that won't change often (and when they do, it's a quick update). We add this item when an employee is hired and remove it when they no longer work at the store.&lt;/p&gt;

&lt;p&gt;Next let's add inventory. This is very similar to employees, but we'll go ahead and take the hit of storing all the metadata for each item in inventory, even if some of it is duplicated. Later we could move these out into their own items like people, but for a rental company there are benefits to tracking details on each unit, even with some duplication. Having previously worked in retail for 11 years, I can tell you that even within a specific product there are often countless variations, which can make deduplicated single-item lookups useless.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "store": [
        {
            "PK": "${version}#store#storeULID#${storeULID}",
            "SK": {
                "metadata": {
                    "entityType": "storeMetadata"
                },
                "employee#metadata#personULID#${personULID}":{
                    "entityType": "storeEmployee"
                },
                "inventory#metadata#inventoryULID#${inventoryULID}":{
                    "entityType": "storeInventoryItem"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's review access patterns and see how the current model matches up and what queries the model would support. So far it's 2 out of 8: "get the inventory of a store" and "get the current employees of a store". For get inventory of a store, we can query for the PK &lt;code&gt;"${version}#store#storeULID#${storeULID}"&lt;/code&gt; and for the SK we would do SK begins with &lt;code&gt;"inventory#metadata#inventoryULID#"&lt;/code&gt;. For this query we would need the version and the storeULID. Very similar query for the current employees of a store: PK &lt;code&gt;"${version}#store#storeULID#${storeULID}"&lt;/code&gt; and SK begins with &lt;code&gt;"employee#metadata#personULID#"&lt;/code&gt;.&lt;/p&gt;
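&lt;p&gt;In code, the inventory query might be built like this (a sketch in the shape of an AWS SDK for JavaScript v3 DocumentClient QueryCommand input; the helper name is illustrative):&lt;/p&gt;

```typescript
// Illustrative: "get the inventory of a store" as a DocumentClient-style
// QueryCommand input. Needs the version and the storeULID, as noted above.
export const buildInventoryQuery = (tableName: string, version: string, storeULID: string) => ({
    TableName: tableName,
    KeyConditionExpression: "PK = :pk AND begins_with(SK, :sk)",
    ExpressionAttributeValues: {
        ":pk": `${version}#store#storeULID#${storeULID}`,
        ":sk": "inventory#metadata#inventoryULID#",
    },
});
```

Swapping the :sk prefix for "employee#metadata#personULID#" gives the current-employees query.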

&lt;p&gt;Now, let's use a Global Secondary Index (GSI) to grab all the stores. Adding PK1 and SK1 to the storeMetadata, we can query by the PK1 on the GSI1 index and it will give us all the stores. This is where the JSON really starts to shine through over spreadsheets for my style; each different entity has its Global Secondary Indexes explicitly defined as they might appear (templated) in DynamoDB with JSON view on. To me it makes it very easy to see what the raw data will start to look like, and we can copy and paste many parts into our code where needed. Often, we can paste in the template strings as is, and define the variables that they would use e.g. "storeULID#${storeULID}" becomes &lt;code&gt;storeULID#${storeULID}&lt;/code&gt; and would just require the variable storeULID to be present for the string to be completed. Notice that now only the metadata entity has the GSI1 values. The employee and inventory do not.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "store": [
        {
            "PK": "${version}#store#storeULID#${storeULID}",
            "SK": {
                "metadata": {
                    "entityType": "storeMetadata",
                    "PK1": "${version}#stores",
                    "SK1": "storeULID#${storeULID}",
                    "_commentGSI1": "GSI to grab all the stores at once."
                },
                "employee#metadata#personULID#${personULID}":{
                    "entityType": "storeEmployee"
                },
                "inventory#metadata#inventoryULID#${inventoryULID}":{
                    "entityType": "storeInventoryItem"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's get into how we might solve the many(1..n):(1..m)many problem with the Materialized Graph pattern. In this pattern we create edges between nodes: a relationship item in the database that points to both items and can carry some properties of its own. The nodes are the items we want to relate, e.g. a person and a store, and the edges are the relationships, e.g. Person A worked at Store B. This pattern is set up using a pair of Global Secondary Indexes, where each one lets you query the inverse of the other. Let's look at how this might be set up with a relationship grouping.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "relationships": [
        {
            "PK": "${version}#employment#employmentULID#${employmentULID}",
            "SK": {
                "metadata": {
                    "entityType": "employmentRelationship",
                    "PK2": "${version}#employment#${xULID}",
                    "SK2": "${version}#employment#${yULID}",
                    "_commentGSI2": "Can query by xULID, and get all employment for x. Company location is x, person is y",
                    "PK3": "${version}#employment#${yULID}",
                    "SK3": "${version}#employment#${xULID}",
                    "_commentGSI3": "Can query by yULID, and get all employment for y, Company location is x, person is y"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key part of this is the two Global Secondary Indexes, 2 and 3. Each is a flipped version of the other. We could put these into a specific Global Secondary Index that just swaps them for us, but I find that until you get to large scale this pattern is just fine. We define that in employment relationships, x is the company location and y is the person. Any time a person starts working at a Kayak Rental location, one of these items is created. The item can also contain many fields such as job position, salary, date range of employment, etc. For now we won't worry about filling those in; however, because we are using ULIDs we can sort on them and know when an employee started at a location compared to other locations, or, vice versa, the general order in which employees at a location were hired.&lt;/p&gt;

&lt;p&gt;Now that we have this Materialized Graph pattern, we can build queries to satisfy more of our access patterns. For &lt;code&gt;"get all employees who have worked at a store"&lt;/code&gt; we query GSI2 with PK2 &lt;code&gt;"${version}#employment#${xULID}"&lt;/code&gt;, with xULID being the store location. This gives us all the employment records of that store. We could duplicate fields from person and store on the relationship object, things unlikely to change much, and that could be the end of it. The other option is to loop through the results and run a query for each person to get their full information. This can get expensive at scale, but it's still quite efficient and does scale. DynamoDB also lets you send numerous queries in parallel, so our options are open here. For &lt;code&gt;"get all stores an employee has worked at"&lt;/code&gt; we do the exact same thing, but use GSI3 with PK3 &lt;code&gt;"${version}#employment#${yULID}"&lt;/code&gt;, where yULID is the person's ULID. With this one, we could also just query all stores once and loop through them locally for lookups, bypassing the need to make 40+ queries to get each store's information.&lt;/p&gt;
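&lt;p&gt;As a sketch, the two employment lookups differ only in which index and key they hit (again DocumentClient-style query inputs; the helper is illustrative):&lt;/p&gt;

```typescript
// Illustrative: both employment lookups from the Materialized Graph.
// byPerson=false queries GSI2 (all employment records for a store, x);
// byPerson=true queries GSI3 (all employment records for a person, y).
export const buildEmploymentQuery = (
    tableName: string,
    version: string,
    ulid: string,
    byPerson: boolean
) => ({
    TableName: tableName,
    IndexName: byPerson ? "GSI3" : "GSI2",
    KeyConditionExpression: byPerson ? "PK3 = :pk" : "PK2 = :pk",
    ExpressionAttributeValues: { ":pk": `${version}#employment#${ulid}` },
});
```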

&lt;p&gt;We have satisfied 5 of the 8 access patterns. Now we will focus on the rental invoices and history. We will essentially duplicate the employment relationship, and change it to meet the needs of rentals. We will also go back to item collections of the store, where the current rentals out will be kept. First, here is the update to the store item collection.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "store": [
        {
            "PK": "${version}#store#storeULID#${storeULID}",
            "SK": {
                "metadata": {
                    "entityType": "storeMetadata",
                    "PK1": "${version}#stores",
                    "SK1": "storeULID#${storeULID}",
                    "_commentGSI1": "GSI to grab all the stores at once."
                },
                "employee#metadata#personULID#${personULID}":{
                    "entityType": "storeEmployee"
                },
                "inventory#metadata#inventoryULID#${inventoryULID}":{
                    "entityType": "storeInventoryItem"
                },
                "activeRentals#personULID#${personULID}#inventoryULID#${inventoryULID}":{
                    "entityType": "storeActiveRental"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each time a person rents a kayak, the system will create a record with a PK of &lt;code&gt;"${version}#store#storeULID#${storeULID}"&lt;/code&gt; and an SK of &lt;code&gt;"activeRentals#personULID#${personULID}#inventoryULID#${inventoryULID}"&lt;/code&gt;. This lets us query on the PK with an SK begins_with of &lt;code&gt;"activeRentals#personULID#${personULID}#inventoryULID#"&lt;/code&gt; to see all the rentals a person has out, satisfying our access pattern. But wait, this only covers one location. We need all locations, since location isn't part of the access pattern. We can solve this with a GSI: we'll add GSI4 on the same item so we can query across every location.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "store": [
        {
            "PK": "${version}#store#storeULID#${storeULID}",
            "SK": {
                "metadata": {
                    "entityType": "storeMetadata",
                    "PK1": "${version}#stores",
                    "SK1": "storeULID#${storeULID}",
                    "_commentGSI1": "GSI to grab all the stores at once."
                },
                "employee#metadata#personULID#${personULID}":{
                    "entityType": "storeEmployee"
                },
                "inventory#metadata#inventoryULID#${inventoryULID}":{
                    "entityType": "storeInventoryItem"
                },
                "activeRentals#personULID#${personULID}#inventoryULID#${inventoryULID}":{
                    "entityType": "storeActiveRental",
                    "PK4": "${version}#activeRentals#personULID#${personULID}",
                    "SK4": "inventoryULID#${inventoryULID}",
                    "_commentGSI4": "GSI for looking up all rentals across all locations a person has out."
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
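&lt;p&gt;Putting the two lookups together, here is a sketch of the active-rental queries: per store against the base table with a begins_with on SK, and across every store via GSI4. Table and attribute names are assumptions:&lt;/p&gt;

```python
# Hypothetical sketch of the active-rental queries. "KayakRentals",
# "GSI4", and the attribute names are illustrative assumptions.
VERSION = "v1"

def active_rentals_at_store_query(store_ulid: str, person_ulid: str) -> dict:
    """Rentals a person has out at one store (base table, begins_with on SK)."""
    return {
        "TableName": "KayakRentals",
        "KeyConditionExpression": "PK = :pk AND begins_with(SK, :sk)",
        "ExpressionAttributeValues": {
            ":pk": {"S": f"{VERSION}#store#storeULID#{store_ulid}"},
            ":sk": {"S": f"activeRentals#personULID#{person_ulid}#inventoryULID#"},
        },
    }

def active_rentals_all_stores_query(person_ulid: str) -> dict:
    """Rentals a person has out across every location (GSI4)."""
    return {
        "TableName": "KayakRentals",
        "IndexName": "GSI4",
        "KeyConditionExpression": "PK4 = :pk",
        "ExpressionAttributeValues": {
            ":pk": {"S": f"{VERSION}#activeRentals#personULID#{person_ulid}"},
        },
    }
```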



&lt;p&gt;Remember, we could start overloading each Global Secondary Index and use it for different purposes, as long as the key values are distinct and we never need both uses on one entity. I do like doing that eventually, but I generally have enough access patterns satisfied by going up to 6 Global Secondary Indexes without reusing any of them.&lt;/p&gt;

&lt;p&gt;All that is left of our initial access patterns is rental history. We can again use the Materialized Graph pattern to meet it. A person at a location may rent the same kayak countless times over the years. That kayak could also be moved to another location, and the person loves it so much they drive there to rent it. More than one person can rent the same kayak, although not at the same time, so we have a many-to-many relationship with a cardinality of 0 to n on both sides (no database entry is created at 0). Back to the rental records and access patterns, we can use this to handle it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "relationships": [
        {
            "PK": "${version}#rental#rentalULID#${rentalULID}",
            "SK": {
                "metadata": {
                    "entityType": "rentalRelationship",
                    "PK2": "${version}#rentalLocationPerson#${xULID}",
                    "SK2": "${version}#rentalPersonLocation#${yULID}",
                    "_commentGSI2": "Can query by xULID, and get all rental for x. Company location is x, person is y",
                    "PK3": "${version}#rentalPersonLocation#${yULID}",
                    "SK3": "${version}#rentalLocationPerson#${xULID}",
                    "_commentGSI3": "Can query by yULID, and get all rental for y, Company location is x, person is y",
                    "PK5": "${version}#rentalInventoryPerson#${xULID}",
                    "SK5": "${version}#rentalPersonInventory#${yULID}",
                    "_commentGSI5": "Can query by xULID, and get all rental for x. Inventory is x, person is y",
                    "PK6": "${version}#rentalPersonInventory#${yULID}",
                    "SK6": "${version}#rentalInventoryPerson#${xULID}",
                    "_commentGSI6": "Can query by yULID, and get all rental for y, Inventory is x, person is y"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, we have added two more Global Secondary Indexes to handle the second set of relationships in a given rental transaction. This allows us to link a person, a company location, and the inventory being rented.&lt;/p&gt;

&lt;p&gt;Let's define the queries for the last 2 access patterns identified. &lt;code&gt;"get customer rental history for a location"&lt;/code&gt; can be satisfied by querying Global Secondary Index 3 with PK3 &lt;code&gt;"${version}#rentalPersonLocation#${yULID}"&lt;/code&gt; and SK3 &lt;code&gt;"${version}#rentalLocationPerson#${xULID}"&lt;/code&gt;, where yULID is the person and xULID is the company location. For &lt;code&gt;"get customer rental history for all locations"&lt;/code&gt;, we simply drop the sort key condition and query with just PK3 &lt;code&gt;"${version}#rentalPersonLocation#${yULID}"&lt;/code&gt;, where yULID is the person; this returns all rentals by that person across all locations.&lt;/p&gt;
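&lt;p&gt;These two history lookups can be sketched as boto3-style Query parameters as well; note the only difference is whether the sort key condition is present. Table, index, and attribute names are again assumptions:&lt;/p&gt;

```python
# Hypothetical sketch of the rental-history queries. "KayakRentals",
# "GSI3", and the attribute names are illustrative assumptions.
VERSION = "v1"

def rental_history_at_location_query(person_ulid: str, store_ulid: str) -> dict:
    """Customer rental history at one location (GSI3, y = person, x = location)."""
    return {
        "TableName": "KayakRentals",
        "IndexName": "GSI3",
        "KeyConditionExpression": "PK3 = :pk AND SK3 = :sk",
        "ExpressionAttributeValues": {
            ":pk": {"S": f"{VERSION}#rentalPersonLocation#{person_ulid}"},
            ":sk": {"S": f"{VERSION}#rentalLocationPerson#{store_ulid}"},
        },
    }

def rental_history_all_locations_query(person_ulid: str) -> dict:
    """Customer rental history everywhere: same index, no sort key condition."""
    return {
        "TableName": "KayakRentals",
        "IndexName": "GSI3",
        "KeyConditionExpression": "PK3 = :pk",
        "ExpressionAttributeValues": {
            ":pk": {"S": f"{VERSION}#rentalPersonLocation#{person_ulid}"},
        },
    }
```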

&lt;p&gt;Wrapping up, here's all the JSON put together:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Access Patterns": [
        "get the location of all rental stores",
        "get the inventory of a store",
        "get the current employees of a store",
        "get all employees who have worked at a store",
        "get all stores an employee has worked at",
        "get all rentals a customer has out",
        "get customer rental history for a location",
        "get customer rental history for all locations"
    ],
    "store": [
        {
            "PK": "${version}#store#storeULID#${storeULID}",
            "SK": {
                "metadata": {
                    "entityType": "storeMetadata",
                    "PK1": "${version}#stores",
                    "SK1": "storeULID#${storeULID}",
                    "_commentGSI1": "GSI to grab all the stores at once."
                },
                "employee#metadata#personULID#${personULID}":{
                    "entityType": "storeEmployee"
                },
                "inventory#metadata#inventoryULID#${inventoryULID}":{
                    "entityType": "storeInventoryItem"
                },
                "activeRentals#personULID#${personULID}#inventoryULID#${inventoryULID}":{
                    "entityType": "storeActiveRental",
                    "PK4": "${version}#activeRentals#personULID#${personULID}",
                    "SK4": "inventoryULID#${inventoryULID}",
                    "_commentGSI4": "GSI for looking up all rentals across all locations a person has out."
                }
            }
        }
    ],
    "person": [
        {
            "PK": "${version}#person#personULID#${personULID}",
            "SK": {
                "metadata": {
                    "entityType": "personMetadata"
                }
            }
        }
    ],
    "relationships": [
        {
            "PK": "${version}#employment#employmentULID#${employmentULID}",
            "SK": {
                "metadata": {
                    "entityType": "employmentRelationship",
                    "PK2": "${version}#employment#${xULID}",
                    "SK2": "${version}#employment#${yULID}",
                    "_commentGSI2": "Can query by xULID, and get all employment for x. Company location is x, person is y",
                    "PK3": "${version}#employment#${yULID}",
                    "SK3": "${version}#employment#${xULID}",
                    "_commentGSI3": "Can query by yULID, and get all employment for y, Company location is x, person is y"
                }
            }
        },
        {
            "PK": "${version}#rental#rentalULID#${rentalULID}",
            "SK": {
                "metadata": {
                    "entityType": "rentalRelationship",
                    "PK2": "${version}#rentalLocationPerson#${xULID}",
                    "SK2": "${version}#rentalPersonLocation#${yULID}",
                    "_commentGSI2": "Can query by xULID, and get all rental for x. Company location is x, person is y",
                    "PK3": "${version}#rentalPersonLocation#${yULID}",
                    "SK3": "${version}#rentalLocationPerson#${xULID}",
                    "_commentGSI3": "Can query by yULID, and get all rental for y, Company location is x, person is y",
                    "PK5": "${version}#rentalInventoryPerson#${xULID}",
                    "SK5": "${version}#rentalPersonInventory#${yULID}",
                    "_commentGSI5": "Can query by xULID, and get all rental for x. Inventory is x, person is y",
                    "PK6": "${version}#rentalPersonInventory#${yULID}",
                    "SK6": "${version}#rentalInventoryPerson#${xULID}",
                    "_commentGSI6": "Can query by yULID, and get all rental for y, Inventory is x, person is y"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is much more we could model, and we could certainly model some of this better, but it is New Year's and I have run out of time to publish this in 2024 as I promised myself I would. I have shared what I wanted to share about putting access patterns in JSON format, and hopefully the benefits of doing it that way come through. It works best for me, though I don't claim it is the best way. The current structure still lacks some possible (and common) patterns, so it could be improved if desired. It has served me well as it has evolved over the last 7 years. I hope those who have made it this far have also appreciated the walkthrough of another example of evaluating a project, producing the access patterns needed, and implementing them as concrete, DynamoDB-ready designs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzfaf0tahezcke6dafm7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzfaf0tahezcke6dafm7.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please feel free to connect with me on LinkedIn, or join the Believe in Serverless Discord, where there are over 1000 serverless enthusiasts ranging from people just a few days in to people who literally wrote the book on the subject.&lt;/p&gt;

&lt;p&gt;Happy New Year!&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>dynamodb</category>
      <category>datamodeling</category>
      <category>aws</category>
    </item>
    <item>
      <title>Displaying Dates on AWS</title>
      <dc:creator>Tycko Franklin</dc:creator>
      <pubDate>Mon, 18 Nov 2024 07:11:54 +0000</pubDate>
      <link>https://forem.com/tyckofranklin/displaying-dates-on-aws-4jg</link>
      <guid>https://forem.com/tyckofranklin/displaying-dates-on-aws-4jg</guid>
<description>&lt;p&gt;This is going to be a short post. It's mainly about a new(ish) AWS feature for how dates are displayed and formatted, and why that is special to a user of AWS.&lt;/p&gt;

&lt;p&gt;I haven't seen much on this topic, so please excuse this if I missed the announcement and this is old news.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0admxu2ljr1jtkftnxj3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0admxu2ljr1jtkftnxj3.png" alt="preview of cloud watch logs with the time they display in UTC only" width="470" height="213"&gt;&lt;/a&gt;&lt;br&gt;
First, we'll look at raw CloudWatch logs. In the above image, you can see that times are accurate down to the millisecond, and they are in UTC. UTC is not itself a time zone; it is the standard by which we keep time synced across the world (&lt;a href="https://en.wikipedia.org/wiki/Coordinated_Universal_Time" rel="noopener noreferrer"&gt;Wikipedia has a nice article on it&lt;/a&gt;). Even with this standard, and even though the offset is easy to calculate, it still adds cognitive load when reading dates and times in AWS logs. More than once I have messed up the timing while sifting through lots of logs, especially over a long session. Swapping between local time and UTC gets old fast.&lt;/p&gt;
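&lt;p&gt;The conversion the console makes you do in your head is simple but error-prone. A quick Python example of re-rendering a UTC log timestamp in a local zone (US Pacific here, purely as an example):&lt;/p&gt;

```python
# Re-render a UTC log timestamp in a local zone. The zone choice is
# just an example; any IANA zone name works with zoneinfo.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

utc_ts = datetime(2024, 11, 18, 7, 11, 54, tzinfo=timezone.utc)
local_ts = utc_ts.astimezone(ZoneInfo("America/Los_Angeles"))

print(utc_ts.isoformat())    # 2024-11-18T07:11:54+00:00
print(local_ts.isoformat())  # 2024-11-17T23:11:54-08:00 (same instant)
```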

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffy35kxsa4wacdbzzodol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffy35kxsa4wacdbzzodol.png" alt="Step Function logs with a date/time value clicked on with a modal popped up that has UTC, ISO, Local, and unix timestamp displayed at once" width="539" height="282"&gt;&lt;/a&gt;&lt;br&gt;
Now, we'll look at Step Functions and the "new" modal that pops up when you click a time value. In this version that AWS recently released (or at least that I recently noticed), the table still shows times in local time by default, but the modal now gives you 4+ different ways to display each one.&lt;/p&gt;

&lt;p&gt;With the modal, you can see UTC, ISO, Local (with UTC offset, which accounts for daylight saving time), and Unix timestamp. You no longer have to pick one format and see only it, and you no longer have to do conversions on the fly. &lt;/p&gt;

&lt;p&gt;This is a huge win for virtual teams across the country and especially across the globe: you can see times that make sense to you while also seeing the times others would see. If you are 4 hours apart, you can use UTC to refer to the exact moment regardless of time zone, and while looking at the same page at the same time, you can each see what it is in local time.&lt;/p&gt;

&lt;p&gt;I often work on systems that pull data from outside sources, and those sources have many different dates and times we have to keep track of. It can get complicated quickly when debugging. Having this in AWS removes some of those complications and makes things simpler to discuss.&lt;/p&gt;

&lt;p&gt;This is a very powerful update. Although it is a small addition, it has a huge impact on my team's developer experience while using and maintaining AWS resources! &lt;/p&gt;

&lt;p&gt;So far, the only service I have seen use this is Step Functions. CodeBuild has a different approach, where times swap between a few formats, but it is nowhere near as powerful or complete as the modal.&lt;/p&gt;

&lt;p&gt;I really hope AWS adds this to all of their services where dates are displayed. It would be useful, it would improve developer experience, and it would reduce the time it takes to get things done or investigate issues.&lt;/p&gt;

&lt;p&gt;What do you think of the new time modal for step functions logs?&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscommunitybuilder</category>
      <category>dx</category>
      <category>devrel</category>
    </item>
  </channel>
</rss>
