<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: James Matson</title>
    <description>The latest articles on Forem by James Matson (@kknd4eva).</description>
    <link>https://forem.com/kknd4eva</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1131340%2F91dc00f1-d712-42aa-bde8-a1332513c5b8.jpeg</url>
      <title>Forem: James Matson</title>
      <link>https://forem.com/kknd4eva</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kknd4eva"/>
    <language>en</language>
    <item>
      <title>Complex Event Filtering with AWS EventBridge Pipes, Rules and No Custom Code.</title>
      <dc:creator>James Matson</dc:creator>
      <pubDate>Thu, 09 May 2024 01:14:29 +0000</pubDate>
      <link>https://forem.com/aws-builders/complex-event-filtering-with-aws-eventbridge-pipes-rules-and-no-custom-code-2b11</link>
      <guid>https://forem.com/aws-builders/complex-event-filtering-with-aws-eventbridge-pipes-rules-and-no-custom-code-2b11</guid>
      <description>&lt;p&gt;It can be a guilty pleasure introducing AWS Lambda functions into your solution. Lambda functions are many things, but perhaps none more so than the swiss army knife of cloud development.&lt;/p&gt;

&lt;p&gt;There’s almost nothing they can’t do, owing mostly to the fact that a Lambda is just an uber-connected vessel for your awesome code. One of the most common use cases for a Lambda function is as the glue between one or more AWS services. If you’ve worked in the serverless space, I’m telling you an all-too-familiar tale: you’ve set up a DynamoDb table to record some event data, and now you want to take that event data and send it to an API after performing some logic or manipulation on the data. You might end up with a design that looks similar to the below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxvb9pgs1rxdpqj01m6b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxvb9pgs1rxdpqj01m6b.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A glorious serverless Lambda function creates the glue between the exported stream events and the downstream REST API. Magic! Or, maybe you’re the provider of the API, and you want to receive events from the outside world, perform all kinds of logic and business rules on the data, then submit different messages to different downstream services as a result. You might end up with:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbt01ft7712gxrli034w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbt01ft7712gxrli034w.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Again, Lambda functions serve to provide the glue and business logic between upstream services and downstream services. Using whatever language you’re most comfortable in, you can fill the Lambda with whatever code you need to achieve a virtually limitless variety of logic.&lt;/p&gt;

&lt;p&gt;It’s awesome! I love Lambda.&lt;/p&gt;

&lt;p&gt;Really, I do! If I saw Lambda walking on the opposite side of a busy street on a rainy day, I’d zig-zag through traffic to get to the other side, pick Lambda up right off the ground and hug it to pieces as the raindrops fell around us and the city seemed to freeze in place.&lt;/p&gt;

&lt;p&gt;Then I’d place Lambda down, kiss its nose, and proceed to walk hand-in-hand with it through the rainy streets, talking about nothing and everything at the same time.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Wow&lt;/em&gt;. That got really romantic. I’m not crying, you’re crying!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uforbsq1gqggfgwnoor.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uforbsq1gqggfgwnoor.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is me, hugging AWS Lambda. If this doesn’t melt your heart, you’re dead. You’re literally dead.&lt;/p&gt;

&lt;p&gt;But there are downsides that come with filling your environment with Lambda functions.&lt;/p&gt;

&lt;p&gt;For one thing, Lambda functions are filled with code and libraries, so irrespective of whether you’re using C#, Python, Node.js or Go, you have to face the reality of security scans, package management, code maintenance and the overall technical debt that comes with maintaining a code base (even if that codebase is distributed among lots of little Lambda functions).&lt;/p&gt;

&lt;p&gt;Even if everything is well maintained, as runtimes age out of support from AWS, you’ll be faced with the task of updating functions to the next supported runtime which — depending on how big your environment is — can range from a minor annoyance to a major headache.&lt;/p&gt;

&lt;p&gt;AWS, however, has made some great moves across its landscape to lessen the reliance today’s builders have on Lambda to form the ‘glue’ between services. There’s no replacing Lambda for its flexibility and power to encapsulate business logic for your applications, but using Lambda functions simply as a way to stitch AWS native service A to AWS native service B is a pain and, let’s be honest — after the eleventy billionth time you’ve done it — not much fun either.&lt;/p&gt;

&lt;p&gt;Now, as a software developer from way back in the olden times of Visual Basic and classic ASP, I know that a dev’s first instinct is to solve the problem with code. I get it. Code is beautiful, and in the right hands it can solve all the problems, but thinking in a ‘cloud native’ manner often means treating code less as the solution to every problem and more as the solution to the problems only code can solve.&lt;/p&gt;

&lt;p&gt;Cloud services — AWS included — have come a long way toward letting you get a lot of work done by leveraging the built-in functionality of components like S3, EventBridge, CloudWatch, Step Functions and more. This means that when you do turn to application code, it’s for the right reasons, because it’s the right tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  Less is More
&lt;/h2&gt;

&lt;p&gt;To demonstrate how taking the ‘less Lambda/code is more’ approach can work, I’m going to take you through a real world use case, showing you one reasonable way of approaching the solution using AWS Lambda, and then an alternative that uses zero (that’s right, zero — what sorcery is this?) Lambda functions to achieve the same ends.&lt;/p&gt;

&lt;p&gt;Alright, so what’s our real world use case? (Bearing in mind this is a very real world use case. As in, it happened — to me — in the real world, a place I'm rumoured to visit from time to time).&lt;/p&gt;

&lt;p&gt;I’ve been tasked with building a service as a part of a larger platform that will help provide stock on hand information to a retail website. In retail, accurate stock isn’t a nice-to-have, it’s essential. If you don’t show correct stock to your customers, you’re either losing sales by not reporting stock that is actually there, or frustrating customers by showing products as in stock when they aren’t.&lt;/p&gt;

&lt;p&gt;In order to provide useful information quickly to the website, we have a third party search index product that holds our products as well as information about whether the products are in or out of stock. The stock on hand figures are going to be held in a DynamoDb table, and my job is to take the ever-changing stock on hand figures from that database, work out whether the change results in the product:&lt;/p&gt;

&lt;p&gt;Moving from in stock to out of stock, or&lt;br&gt;
Moving from out of stock to in stock&lt;br&gt;
and then send the resulting information to the third party search index via an API call. To keep things simple, we don’t actually need to send the stock figure itself (e.g. 5 units in stock) to the search index, we just need to send the product ‘sku’ (Stock Keeping Unit) and whether it’s in or out of stock.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambdas, Lambdas Everywhere
&lt;/h2&gt;

&lt;p&gt;Let’s have a look at how we might approach this with our code and Lambda first approach:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jvyv3aitnlnei8o6g6w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jvyv3aitnlnei8o6g6w.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not too shabby. Putting aside the complexities of error handling, retries and what have you, we have a pretty simple, robust solution. Our stock table has DynamoDb streams enabled and configured to send NEW_AND_OLD_IMAGES. This means that when a value changes, the table will export some JSON data that tells us what the old value was as well as what the new value is. This is important for us to determine if the product is moving into/out of stock.&lt;/p&gt;

&lt;p&gt;We then have a Lambda function set up to be triggered by the DynamoDb stream event. This is called the ‘Filtering Service’. Its job is to examine the data and determine if it’s something we should be sending to our search index. Remember, we don’t care about movements of units up or down unless the product moves into or out of stock. Here’s a good visual reference below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fecvajvsybr9evosfk8m4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fecvajvsybr9evosfk8m4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If our filtering service says ‘yup, this looks good — send it on’ it’s going to send that data to an SQS queue. Why not directly to the next Lambda? Well, it’s a bit frowned upon to invoke Lambda directly from Lambda, and it doesn’t hurt to put a little decoupling between Lambda A and Lambda B.&lt;/p&gt;

&lt;p&gt;The queue will have a trigger set up to invoke the IndexService Lambda. Its job will be to obtain the required credentials to call the third party search index API, and send along a payload to that API that looks a bit like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "sku": "111837",
    "in_stock": false
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Nice! Easy and done. Except you’ve just introduced another 2 Lambda functions to your landscape. Functions that come with all the baggage that we talked about earlier.&lt;/p&gt;

&lt;p&gt;So, is there another way? A — and here’s a new term I’ve coined just for you — Lambdaless way?&lt;/p&gt;

&lt;p&gt;Of course.&lt;/p&gt;

&lt;h2&gt;
  
  
  This is The Way
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falvjf98xoygmafkx9y45.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falvjf98xoygmafkx9y45.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So how are we going to tackle this problem without any custom code or Lambda functions? Well, let’s look at the design visually first, then we can walk through it (including — as always — an actual repository you can play around with).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7tdyonlxcuwovf9fnb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7tdyonlxcuwovf9fnb1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whoah. Hang on a second.&lt;/p&gt;

&lt;p&gt;This looks more complicated than the other diagram, which, if I put my architect hat on, seems counterintuitive, right? Shouldn’t you seek to simplify, not complicate, an architecture?&lt;/p&gt;

&lt;p&gt;Well yes, that’s absolutely right. But try to remember that in our other diagram there are 2 Lambda functions. Inside those functions is a whole bunch of code. That code includes branching logic and different commands/methods, none of which is actually shown on the diagram.&lt;/p&gt;

&lt;p&gt;We need to replicate that logic somewhere, so we’re using native AWS services and components to do it. Hence, the diagram may look a little more busy, but in reality it’s pretty elegant.&lt;/p&gt;

&lt;p&gt;Because we don’t need any custom code packages, all of our architecture in the image will be delivered by way of a SAM (Serverless Application Model) template, a great infrastructure-as-code solution for AWS projects. You can see the full template in the repository here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kknd4eva/SohWithEventBridge" rel="noopener noreferrer"&gt;Repo&lt;/a&gt;&lt;br&gt;
But we’ll be breaking it down piece by piece below.&lt;/p&gt;

&lt;p&gt;First, let’s have a quick look at our DynamoDb table setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

Resources:
  DynamoDBStockOnHandTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: StockOnHand
      AttributeDefinitions:
        - AttributeName: sku
          AttributeType: S
      KeySchema:
        - AttributeName: sku
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 5
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We’ve defined a simple table, with ‘sku’ as the hash or partition key. We’ve set up a small amount of provisioned throughput (how much read and write ‘load’ our database can handle) and finally — most importantly — we’ve enabled ‘Streams’ with the type of NEW_AND_OLD_IMAGES which we discussed in our Lambda solution.&lt;/p&gt;

&lt;p&gt;The idea is when an upstream system inserts a new record with a SKU and a stock on hand figure, the data will be streamed out of the database to trigger downstream events.&lt;/p&gt;
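&lt;p&gt;For example, an upstream inventory system might write (or update) an item shaped like this, using DynamoDb’s attribute-value format (the figures here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "sku": { "S": "111837" },
    "soh": { "N": "4" }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;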

&lt;p&gt;Our DynamoDb table and DynamoDb stream remain the same as our Lambda based solution, but after that it’s AWS EventBridge to the rescue to pretty much take care of everything else we could possibly need.&lt;/p&gt;

&lt;p&gt;EventBridge has become — in my humble opinion — the darling of event-driven serverless architecture in AWS. In our team, we are consistently using it for any solution where we need event-driven solutions at scale and with decoupling and fine control built into the solution from the start.&lt;/p&gt;

&lt;p&gt;So we’re sending our DynamoDb stream to an EventBridge Pipe.&lt;/p&gt;

&lt;p&gt;EventBridge Pipes are a great way to take data from a range of AWS sources, and then filter, enrich, transform and target a downstream source, all without any custom code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5momygp0wzvjwdug8vbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5momygp0wzvjwdug8vbg.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our case though, we’re just using the Pipe itself as a way to get our DynamoDb stream from DynamoDb into EventBridge itself, because at the time of writing at least there’s no way to directly target an EventBridge bus with a DynamoDb stream. Some AWS services, like Lambda or API Gateway, let you integrate directly with an EventBridge bus, but DynamoDb streams isn’t one of them.&lt;/p&gt;

&lt;p&gt;Using a Pipe however, gives us the ability to get where we need to get. Let’s have a look at the components that we’ve set up to allow our ‘Stream to EventBridge’ connection:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

StockOnHandEventBus:
    Type: AWS::Events::EventBus
    Properties:
      Name: StockEventBus

  Pipe:
    Type: AWS::Pipes::Pipe
    Properties:
      Name: ddb-to-eventbridge
      Description: "Pipe to connect DDB stream to EventBridge event bus"
      RoleArn: !GetAtt PipeRole.Arn
      Source: !GetAtt DynamoDBStockOnHandTable.StreamArn
      SourceParameters:
        DynamoDBStreamParameters:
          StartingPosition: LATEST
          BatchSize: 10
          DeadLetterConfig:
            Arn: !GetAtt PipeDLQueue.Arn
      Target: !GetAtt StockOnHandEventBus.Arn
      TargetParameters:
        EventBridgeEventBusParameters:
          DetailType: "StockEvent"
          Source: "soh.event"

  PipeRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - pipes.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: SourcePolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "dynamodb:DescribeStream"
                  - "dynamodb:GetRecords"
                  - "dynamodb:GetShardIterator"
                  - "dynamodb:ListStreams"
                  - "sqs:SendMessage"
                Resource: 
                  - !GetAtt DynamoDBStockOnHandTable.StreamArn
                  - !GetAtt PipeDLQueue.Arn
        - PolicyName: TargetPolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - 'events:PutEvents'
                Resource: !GetAtt StockOnHandEventBus.Arn

  PipeDLQueue: 
    Type: AWS::SQS::Queue   
    Properties: 
      QueueName: DLQ-StockEvents


  PipeDLQPolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref PipeDLQueue
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: "events.amazonaws.com"
            Action: "sqs:SendMessage"
            Resource: !GetAtt PipeDLQueue.Arn
            Condition:
              ArnEquals:
                "aws:SourceArn": !Sub "arn:aws:events:${AWS::Region}:${AWS::AccountId}:rule/StockEventBus/*"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;There’s quite a bit going on here. First, we’ve created our custom EventBridge bus. This is just a way to separate our particular set of events so that we don’t need to pick them out from other events that might come into the default EventBridge bus. It’s our own private channel for our stock on hand service.&lt;/p&gt;

&lt;p&gt;Next we’re defining our Pipe. The source for the Pipe’s events is our DynamoDb stream and the target is our EventBridge bus. We’re sending our event from the Pipe to EventBridge with the following parameters:&lt;/p&gt;

&lt;p&gt;DetailType: “StockEvent”&lt;br&gt;
Source: “soh.event”&lt;/p&gt;

&lt;p&gt;The detail type and source are critical to allow EventBridge to properly filter and route the message where it needs to go next.&lt;/p&gt;

&lt;p&gt;You can see we’re also referencing an IAM (Identity and Access Management) role. The role specifies that the AWS service ‘pipes.amazonaws.com’ can assume it, and the policies allow the role to accept the DynamoDb stream and target the EventBridge bus, as well as send any failures (messages that for some reason don’t get to EventBridge) to our SQS DLQ (Dead Letter Queue).&lt;/p&gt;

&lt;p&gt;So now we’re getting our DynamoDb event streamed out of the database and into EventBridge via our Pipe. What does the event look like? Let’s take a look:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "version": "0",
    "id": "REDACTED-ID",
    "detail-type": "StockEvent",
    "source": "soh.event",
    "account": "REDACTED-ACCOUNT",
    "time": "2024-05-02T07:04:50Z",
    "region": "ap-southeast-2",
    "resources": [],
    "detail": {
        "eventID": "REDACTED-EVENT-ID",
        "eventName": "MODIFY",
        "eventVersion": "1.1",
        "eventSource": "aws:dynamodb",
        "awsRegion": "ap-southeast-2",
        "dynamodb": {
            "ApproximateCreationDateTime": 1714633489,
            "Keys": {
                "sku": {
                    "S": "111837"
                }
            },
            "NewImage": {
                "sku": {
                    "S": "111837"
                },
                "soh": {
                    "N": "4"
                }
            },
            "OldImage": {
                "sku": {
                    "S": "111837"
                },
                "soh": {
                    "N": "11"
                }
            },
            "SequenceNumber": "REDACTED-SEQUENCE-NUMBER",
            "SizeBytes": 37,
            "StreamViewType": "NEW_AND_OLD_IMAGES"
        },
        "eventSourceARN": "REDACTED-ARN"
    }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, we’ve got a standard DynamoDb stream event here, but pay particular attention to two areas that will be important further on. Firstly, the detail type and source. Those will be the same for every message, and will help with routing/filtering by rules.&lt;/p&gt;

&lt;p&gt;Then we have our old and new SOH figures in the OldImage and NewImage sections respectively. If you look at this specific example, you can see that based on our requirements this message shouldn’t get sent to our 3rd party search index, because it’s not a move from out of stock to in stock or vice versa.&lt;/p&gt;

&lt;p&gt;So with our event in EventBridge, what’s next? That’s where our EventBridge rules come in. EventBridge rules are sets of conditions tied to an event bus that tell EventBridge what data an event must contain in order to trigger the rule, as well as defining one or more targets to send data to when an event matches.&lt;/p&gt;

&lt;p&gt;Unsurprisingly, we have two rules set up in our SAM template. An ‘in stock’ rule and an ‘out of stock’ rule. Let’s take a look at our in stock rule carefully, because a lot of the magic that lets us replace Lambda code is contained in these rules.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

InStockRule:
    Type: AWS::Events::Rule
    Properties:
      Name: InStockRule
      EventBusName: !Ref StockOnHandEventBus
      EventPattern:
        source:
          - "soh.event"
        "detail-type":
          - "StockEvent"
        detail:
          eventSource:
            - "aws:dynamodb"
          eventName:
            - "MODIFY"
          dynamodb:
            NewImage:
              soh:
                N:
                  - "anything-but": "0"
            OldImage:
              soh:
                N:
                  - "0"
      State: ENABLED
      Targets:
        - Arn: !GetAtt EventApiDestination.Arn
          RoleArn: !GetAtt EventBridgeTargetRole.Arn
          Id: "StockOnHandApi"
          DeadLetterConfig:
            Arn: !GetAtt PipeDLQueue.Arn
          InputTransformer:
            InputPathsMap:
              sku: "$.detail.dynamodb.NewImage.sku.S"
            InputTemplate: |
              {
                "sku": &amp;lt;sku&amp;gt;,
                "in_stock": true
              }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Our event rule uses an EventPattern to determine when to trigger. If you look at our pattern you can see it closely matches the structure of our DynamoDb stream event. The rule looks for the detail type and source of our event, and then interrogates the detail of the event. But here things get a little interesting. Rather than just look for constant values, we have some logic in our rule:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

NewImage:
  soh:
    N:
      - "anything-but": "0"
OldImage:
  soh:
    N:
      - "0"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Using ‘content filtering’ in the rule, we’re able to express that we only want to trigger the rule when the event has a soh value in the NewImage section (the new figure) that is anything but 0, and an OldImage (the past figure) soh value that is exactly 0.&lt;/p&gt;

&lt;p&gt;That’s how we ensure that the rule is triggered when something is ‘in stock’.&lt;/p&gt;

&lt;p&gt;We then define an EventApiDestination, which basically tells the rule that our target is an API that exists outside of AWS (more on that later), along with the same DLQ (Dead Letter Queue) we mentioned before, for any failures or rejections from that API.&lt;/p&gt;

&lt;p&gt;Great! But, we have a problem. If you remember, our 3rd party API expects the format of data as:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "sku": "111837",
    "in_stock": false
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;But the DynamoDb stream data looks nothing like this. If we were using custom code, the transformation would be trivial, but what do we do without that option? Transformation to the rescue! EventBridge rules allow you to manipulate data and reshape it before sending it to the target.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

InputTransformer:
  InputPathsMap:
    sku: "$.detail.dynamodb.NewImage.sku.S"
  InputTemplate: |
    {
      "sku": &amp;lt;sku&amp;gt;,
      "in_stock": true
    }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This final part of the rule definition is essentially saying ‘pick out the value from the JSON path detail.dynamodb.NewImage.sku.S, assign it to the variable sku, then create a new JSON object that uses the variable and provides an in_stock value of true’.&lt;/p&gt;
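&lt;p&gt;If it helps to see the substitution spelled out, here’s a small Python simulation of what the transformer does with the sample stream event from earlier (EventBridge performs this natively; the helper below is purely illustrative):&lt;/p&gt;

```python
import json

def simulate_input_transformer(event):
    # InputPathsMap: sku = $.detail.dynamodb.NewImage.sku.S
    sku = event["detail"]["dynamodb"]["NewImage"]["sku"]["S"]
    # InputTemplate: build the payload shape the search index expects
    return json.dumps({"sku": sku, "in_stock": True})

# Trimmed-down version of the stream event shown earlier
event = {"detail": {"dynamodb": {"NewImage": {"sku": {"S": "111837"},
                                              "soh": {"N": "4"}}}}}
payload = simulate_input_transformer(event)
```

&lt;p&gt;The result is exactly the two-field JSON document the third party API expects.&lt;/p&gt;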

&lt;p&gt;Huzzah! We’re getting so close now.&lt;/p&gt;

&lt;p&gt;We won’t go through the out of stock rule in detail because it’s essentially the exact same rule with the exact same target, only our content filter is the exact inverse, and our transformation creates a JSON object with in_stock: false.&lt;/p&gt;
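&lt;p&gt;For reference, that inverse content filter would look something like this (a sketch consistent with the in stock rule above; the full rule lives in the repository):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

NewImage:
  soh:
    N:
      - "0"
OldImage:
  soh:
    N:
      - "anything-but": "0"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;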

&lt;p&gt;So let’s recap. Our database has streamed our event out, our Pipe has gotten the event to our EventBridge bus, and our rules have ensured that a) we only get the events we want and b) those events are shaped as we require.&lt;/p&gt;

&lt;p&gt;Now we just need to send the event to the third party API, and that’s where our rules ‘target’ comes in. A target can be a variety of AWS services (Lambda, SQS, Step Functions etc) but it can also be a standard API destination, even one that sits outside of AWS. To define that in our template, we use the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

  EventApiConnection:
    Type: AWS::Events::Connection
    Properties:
      Name: StockOnHandApiConnection
      AuthorizationType: API_KEY
      AuthParameters:
        ApiKeyAuthParameters:
          ApiKeyName: "x-api-key"
          ApiKeyValue: "xxx"
      Description: "Connection to API Gateway"

  EventApiDestination:
    Type: AWS::Events::ApiDestination
    Properties:
      Name: StockOnHandApiDestination
      InvocationRateLimitPerSecond: 10
      HttpMethod: POST
      ConnectionArn: !GetAtt EventApiConnection.Arn
      InvocationEndpoint: !Ref ApiDestination


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We define a connection, which holds the authentication mechanism for our API (though in our case we’re just passing some dummy values as we’re using a special ‘mock’ API Gateway our team uses for integration tests) and then the API destination itself, which is where we describe the request as a POST request, to a specific URL, and set a rate limit to ensure we don’t flood the API.&lt;/p&gt;

&lt;p&gt;And that’s it! We are done. A complete solution without any custom code. So how about we deploy it and see if it works?&lt;/p&gt;

&lt;h2&gt;
  
  
  Our Solution in Action
&lt;/h2&gt;

&lt;p&gt;Because we’ve opted to use SAM to express our IaC (Infrastructure-as-Code) in a template, we get access to all the wonders and magic of the SAM CLI. This includes ‘sam sync’. This command allows us not only to deploy our template to AWS, but when combined with the ‘watch’ parameter, means if we make any changes to our template locally, the change will be automatically synced to the cloud without us even needing to think about deploying.&lt;/p&gt;

&lt;p&gt;Awesome, let’s give it a shot.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

PS C:\Users\JMatson\source\repos\SohWithEventBridge\SohWithEventBridge&amp;gt; sam sync --watch --stack-name SohWithEventBridge --template serverless.yaml --parameter-overrides ApiDestination=https://get-your-own.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You’ll notice I’m passing in the URL of the ‘third party API’ when deploying the template. This is because the endpoint parameter in the template has no useful value, so if you decide to grab the repo and have a go yourself, you’ll need to supply an API that can accept the post request.&lt;/p&gt;

&lt;p&gt;By passing in a parameter override, we’re populating the below parameter:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

Parameters:
  ApiDestination:
    Type: String
    Default: "&amp;lt;Your API here&amp;gt;"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After a few minutes, our entire solution should be deployed to AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fph7f0wc9xswmximuh8ww.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fph7f0wc9xswmximuh8ww.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A quick spot check of our resources:&lt;/p&gt;

&lt;p&gt;DynamoDb table with stream enabled? Check&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tfaogv9x11k4s117kzl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tfaogv9x11k4s117kzl.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our Pipe set up with the right source and target? Check&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferfh6jc9ii3m4mqd9rac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferfh6jc9ii3m4mqd9rac.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our Event bus and rules?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwishgetnznu8lwm5wxfc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwishgetnznu8lwm5wxfc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s check one of the rules to make sure it has the right filtering, target and transformation set up:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdutqdffdo7oujlqcq9bt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdutqdffdo7oujlqcq9bt.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ps33fusu1ezgbh76khj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ps33fusu1ezgbh76khj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/..." class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/..." alt="Uploading image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking good, so it’s time to test this thing out.&lt;/p&gt;

&lt;p&gt;Because I’m a nice person and I’m all about developer happiness these days, I’ve included a nifty little Python script in the repository that we’re going to use to run some tests.&lt;/p&gt;

&lt;p&gt;You can grab it from:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kknd4eva/SohWithEventBridge/blob/master/SohWithEventBridge/TestScripts/test_script.py" rel="noopener noreferrer"&gt;test script&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s not elegant, but it’ll do. Its job is to simulate activity by inserting a few items into our stock on hand table with stock figures, then updating them — possibly more than once — to validate different scenarios (in stock, out of stock, and neither).&lt;/p&gt;

&lt;p&gt;The updates are:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Insert items
put_ddb_item('188273', 0)
put_ddb_item('723663', 20)
put_ddb_item('111837', 50)

# Update items
update_ddb_item('188273', 5)
update_ddb_item('723663', 15)
update_ddb_item('111837', 10)

# Additional update
update_ddb_item('111837', 0)
update_ddb_item('111837', 11)
update_ddb_item('111837', 4)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s run it, and check our results:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

(.venv) PS C:\Users\JMatson\source\repos\SohWithEventBridge\SohWithEventBridge\TestScripts&amp;gt; py test_script.py
INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials
INFO:root:Put item 188273 into DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Put item 723663 into DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Put item 111837 into DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Update item 188273 in DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Update item 723663 in DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Update item 111837 in DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Update item 111837 in DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Update item 111837 in DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Update item 111837 in DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
(.venv) PS C:\Users\JMatson\source\repos\SohWithEventBridge\SohWithEventBridge\TestScripts&amp;gt; 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Okay, so the script has run, inserting some products and updating them. Now, if all is working and we focus on product SKU 111837, we should expect the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Insert items
put_ddb_item('111837', 50) &amp;lt;- This won't count, as it's an INSERT and we filter for MODIFY only.

# Update items
update_ddb_item('111837', 10) &amp;lt;- 50 to 10 shouldn't filter through

# Additional updates
update_ddb_item('111837', 0) &amp;lt;- 10 to 0 we should see
update_ddb_item('111837', 11) &amp;lt;- 0 to 11 we should see
update_ddb_item('111837', 4) &amp;lt;- 11 to 4 shouldn't filter through

So we should get 2 of the 5 events through.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
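&lt;p&gt;That expectation is easy to sanity-check in plain Python. The snippet below mirrors the intent of the rules (only MODIFY events where the stock figure crosses zero should fire); it’s a simulation of the filtering logic, not the actual EventBridge event-pattern syntax, which lives in the SAM template.&lt;/p&gt;

```python
# Pure-Python simulation of the rule logic described above: a MODIFY event
# is interesting only when the stock figure crosses into or out of zero.

def classify(event_name, old_soh, new_soh):
    """Return 'in_stock', 'out_of_stock', or None if the event is filtered out."""
    if event_name != "MODIFY":          # INSERTs are filtered out entirely
        return None
    if old_soh == 0 and new_soh > 0:
        return "in_stock"
    if old_soh > 0 and new_soh == 0:
        return "out_of_stock"
    return None                         # e.g. 50 -> 10: still in stock, no event


# Replay the test script's activity for SKU 111837
events = [
    ("INSERT", None, 50),   # initial put: filtered (not a MODIFY)
    ("MODIFY", 50, 10),     # still in stock: filtered
    ("MODIFY", 10, 0),      # out of stock: delivered
    ("MODIFY", 0, 11),      # back in stock: delivered
    ("MODIFY", 11, 4),      # still in stock: filtered
]
delivered = [r for e in events if (r := classify(*e))]
print(delivered)  # ['out_of_stock', 'in_stock'] — 2 of the 5 events
```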

&lt;p&gt;Now, you could check on things at various points along the event-driven journey by inspecting invocation metrics for our Pipe or Rules, but we’re going straight to our final target — our API — to check its logs using a simple CloudWatch Logs Insights query.&lt;/p&gt;

&lt;p&gt;With all the transformation and filtering we’ve done previously, we should only see a nice, neat JSON payload that tells us something is in stock, or out of stock:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

CloudWatch Logs Insights
region: ap-southeast-2    
log-group-names: API-Gateway-Execution-Logs_t23ilm51j6/dev    
start-time: -300s    
end-time: 0s    
query-string:


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
fields @timestamp, @message, @logStream, @log
| sort @timestamp desc
| filter @message like '111837'
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

---
| @timestamp | @message | @logStream | @log |
| --- | --- | --- | --- |
| 2024-05-02 11:57:27.388 | {   "sku": "111837",   "in_stock": true } | 26267e5fba9c96a4989c9b712553f791 | 712510509017:API-Gateway-Execution-Logs_t23ilm51j6/dev |
| 2024-05-02 11:57:25.908 | {   "sku": "111837",   "in_stock": false } | 527de3f583aa2547d2819f2328657427 | 712510509017:API-Gateway-Execution-Logs_t23ilm51j6/dev |
---


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Huzzah! Success. SKU 111837 has posted both an in-stock and an out-of-stock event to our third party API, thus concluding the journey of our Lambdaless and codeless event-driven solution.&lt;/p&gt;

&lt;p&gt;Not too shabby, eh? Well if you’re not impressed that’s okay, I’m impressed enough for the both of us. While I’m a software engineer at heart and I’ll always enjoy the act of writing code, there’s no denying the power and flexibility that can come from being able to combine native services through configuration alone to deliver on real world use cases.&lt;/p&gt;

&lt;p&gt;What are your thoughts about leaning into native services and cutting back on the use of code itself to solve problems? Let me know in the comments and — as always — if you enjoyed the article feel free to leave a like, claps or whatever you feel is appropriate.&lt;/p&gt;

&lt;p&gt;As a reminder — you can grab the complete repo for this guide below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kknd4eva/SohWithEventBridge" rel="noopener noreferrer"&gt;https://github.com/kknd4eva/SohWithEventBridge&lt;/a&gt;&lt;/p&gt;

</description>
      <category>eventdriven</category>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
    </item>
    <item>
      <title>☁️ Yes, you should send your tech team to the next AWS summit or conference</title>
      <dc:creator>James Matson</dc:creator>
      <pubDate>Sun, 31 Dec 2023 06:21:00 +0000</pubDate>
      <link>https://forem.com/aws-builders/yes-you-should-send-your-tech-team-to-the-next-aws-summit-or-conference-17l2</link>
      <guid>https://forem.com/aws-builders/yes-you-should-send-your-tech-team-to-the-next-aws-summit-or-conference-17l2</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OtsKzo9s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s2t413kh6gcld17zmjxe.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OtsKzo9s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s2t413kh6gcld17zmjxe.jpg" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Far from being all about free USB-C cables and t-shirts, vendor expos and summits are the spark of inspiration we all need.
&lt;/h2&gt;

&lt;p&gt;It’s that time again. You’re a senior manager or leader overseeing one or more engineering functions and the email has arrived in your inbox.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Early registration now open for AWS Summit/RE:Invent/RE:Mars/Event/Expo, book now!”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You know what comes next. There’s the &lt;em&gt;ask&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;The ask to attend. &lt;/p&gt;

&lt;p&gt;The ask to go. &lt;/p&gt;

&lt;p&gt;Hotels? Plane tickets maybe? Meal vouchers? Uber costs? Has this guy seen the latest company quarterly results? We’re not doing so great. It’s not a red alert or anything, but numbers are down, departments are reining in spending, and we’ve got a lot of work we need to deliver.&lt;/p&gt;

&lt;p&gt;This just isn’t the time.&lt;/p&gt;

&lt;p&gt;It wasn’t the time last year either.&lt;/p&gt;

&lt;p&gt;Or the year before that.&lt;/p&gt;

&lt;p&gt;Maybe next year, when things pick up.&lt;/p&gt;

&lt;p&gt;Is this the kind of response you get from your workplace when you ask if they will send you to the latest AWS Summit, RE:Invent or RE:Mars? Or to that awesome lunch &amp;amp; learn that is being put on to teach people about how to migrate from on-premises servers to containerisation?&lt;/p&gt;

&lt;p&gt;Whether the reasons are financial, time constraints or simply no reason at all, it’s safe to say that you’re not alone. Every year, organisations like AWS invest time, effort and money into putting on some fairly extraordinary cloud conventions, summits and expos, not to mention the dozens of smaller, more intimate industry-vertical or technology-specific sessions that dot the professional calendar from one end to the other. And every year, people aren’t supported to attend or — worse — they don’t even consider attending.&lt;/p&gt;

&lt;p&gt;If your workplace isn’t supporting you or your teammates to attend these types of events, then feel free to use this article as a digital cudgel to beat the offenders over the head (metaphorically, of course), because what I hope to do here is give at least some clarity on why these events are important — no, more than important — &lt;em&gt;critical&lt;/em&gt;! — and why it’s important to say yes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1o6DClMh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2e33zwpyrzf046i70j9m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1o6DClMh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2e33zwpyrzf046i70j9m.jpg" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I thought about just providing a list of reasons why these events make sense. A sort of ‘dot point parade’ of the goodness of tech events, but I think anyone with a little common sense can visualise that list for themselves. It’s not hard. Rather, I think I’ll give you a personal story instead. A story that involves taking you on a little journey back in time.&lt;/p&gt;

&lt;p&gt;A journey to the &lt;em&gt;pre-cloud&lt;/em&gt; James.&lt;/p&gt;

&lt;p&gt;A journey to the before times.&lt;/p&gt;

&lt;p&gt;To the &lt;strong&gt;beginning&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(Cue a slow fade to black and a montage of calendar pages falling away, revealing the continued slide of dates back into the past. November turns into August, August into March, 2023 gives way to 2022, then 2021, pages continue to fall away faster and faster until eventually the calendar settles on 2017…)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b0Fu9ahn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l2c2n19tnq1ta4yi8yiq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b0Fu9ahn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l2c2n19tnq1ta4yi8yiq.jpg" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Day One
&lt;/h2&gt;

&lt;p&gt;Before I attended my first AWS Summit, I’d never been to a cloud convention, expo or event before.&lt;/p&gt;

&lt;p&gt;I wasn’t even particularly savvy about AWS or the cloud in general. I mean, I knew what it was, and I’d seen a few EC2 instances in my time, but it was always a sort of abstract concept and certainly not something that I really had to deal with in my day-to-day. I had a strong background in .NET engineering, but nearly always started my thinking process from the application up, and very much in a world where the environments were tangible. Servers in stores, machines in a warehouse, that sort of thing.&lt;/p&gt;

&lt;p&gt;By the time I started to tap away at the keyboard, servers had been spun up, IIS had been configured and SQL servers were ready to fill with delicious, tasty schema. Typically, whatever I was building or helping others to build was fairly traditional. Awesome work was done, mind you, but cloud just wasn’t a huge part of the vocabulary.&lt;/p&gt;

&lt;p&gt;I can’t honestly remember how it came to be that I attended the AWS Summit in Sydney that year. I think maybe someone else from my work was going, and they had a spare pass, so they asked if I wanted to attend.&lt;/p&gt;

&lt;p&gt;I didn’t know what I was in for but figured a 2 or 3 day event talking about technology could be a positive experience and besides, a trip is always fun — I’m a sucker for airports and hotels, always have been.&lt;/p&gt;

&lt;p&gt;Going into the specific happenings of the first day at the summit isn’t really the part I want you to focus on, so I’m going to skip over most of it, needless to say my head felt like it was spinning by the time the afternoon had rolled around.&lt;/p&gt;

&lt;p&gt;A few things had become immediately clear to me.&lt;/p&gt;

&lt;p&gt;Firstly, AWS clearly invested a lot of time, effort and energy into these events. Sometimes when we think of a tech summit it can be all too easy to imagine it’s just a massive excuse by vendors and tech companies to sell, sell, sell, covered by the thinnest veneer of innovation or education. It’s this view that can provide fertile ground for the “No’s” we tend to hear from senior management when we ask to attend.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Wm3N_CEv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k4j9p0eci8f2cteqqu38.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Wm3N_CEv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k4j9p0eci8f2cteqqu38.jpg" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It was clear to me from my day one experience, however, that this wasn’t the case. Sure, there were booths for companies wanting to sell you stuff — a whole floor of them in fact — like some kind of gladiatorial arena where tech companies are the lions and us devs and engineers the doe-eyed prey, but that was one very optional component. Outside of that floor, there was a wealth of presentations, learning hubs, coding hackathons and breakout sessions that covered every single type of service or technology you could imagine.&lt;/p&gt;

&lt;p&gt;For a brain in learning mode it was nothing short of a bounty.&lt;/p&gt;

&lt;h2&gt;
  
  
  Inspiration
&lt;/h2&gt;

&lt;p&gt;I told you I didn’t want you to focus on what happened during the day, right? What I want you to think about is what happened to me &lt;em&gt;later&lt;/em&gt; that evening.&lt;/p&gt;

&lt;p&gt;I felt inspired. I felt energised. In that one day I’d been awakened to entire architectures and ways of thinking about delivering value through tech that I’d just never considered. I was tired and my feet were sore, but my mind was alive. I went straight back to the hotel, opened up my laptop and did the only thing that made sense.&lt;/p&gt;

&lt;p&gt;I signed up for an AWS account.&lt;/p&gt;

&lt;p&gt;From there, I started experimenting. It was my first foray, so heavy on the ClickOps as these things so often are, but I started to play, and build, and build and play. I built a queue with the Simple Queue Service, and marvelled as I could deliver and read messages from it, learning all the while about visibility timeouts and at least once delivery.&lt;/p&gt;

&lt;p&gt;I spun up an EC2 instance, and wrote some PowerShell to send a message to the queue containing the details about the instance itself. Lego blocks were starting to fall together around me. I created a Lambda function, and connected it to some other widget — I don’t remember what it was, but I remember the feeling. Exhilarating.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w_e4hLC6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xitbu7nskvtfqfbybocj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w_e4hLC6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xitbu7nskvtfqfbybocj.jpg" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That night I think I burned maybe six hours straight just experimenting. That feeling of possibility, of innovation at my fingertips, started then and has never really stopped. I’ve done so much since, deep-dived into and built with just about every service and solution in AWS, and done similar with Azure and GCP, but that feeling is still with me as I see out the close of 2023 and the herald of 2024.&lt;/p&gt;

&lt;p&gt;I came back to work after the 3 day event, and I talked to everyone about the cloud. The possibilities, the way you could work with different languages, the opportunity so many of the services offered so that you weren’t building from scratch. Over the coming months and years I not only became as proficient as I possibly could in cloud technology in order to create as much value as I could at my work, but I brought other people along for the ride, trying to pass on some of the inspiration I felt onto others, to start their journey towards what I saw as an accelerator and multiplier of already existing talent within our engineering teams.&lt;/p&gt;

&lt;p&gt;Now, I’m not saying for a moment that I expect every person you send to a cloud summit to come back like I did: a rambling, malnourished madman who, having had some kind of religious experience out in the technology desert, has now come back with spooky software powers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tNsbW55j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mrw6g663j9iib04q1ayf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tNsbW55j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mrw6g663j9iib04q1ayf.jpg" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(Okay, okay, so I wasn’t malnourished — the food was actually pretty good — but the rest is on point…)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;But what I hope you can see — and your leadership can see — is that one of the core pieces of value that you can come away with from these summits, a piece of value you won’t see mentioned in any agenda or keynote, is the simple power of inspiration.&lt;/p&gt;

&lt;p&gt;A little inspiration or a lot, it’s still got to be worth the cost of a flight to Sydney and a couple of nights’ accommodation, right? What can inspiration do? Why, anything. For the individual it can increase their productivity, push them towards new ideas and new ways of approaching problems, and when allowed to propagate like a gentle wave through teammates and departments, it can bring about tangible positive change.&lt;/p&gt;

&lt;p&gt;It’s not even a question of some shiny new service offering or technology creating that inspiration, either; it can be sparked simply by connecting with people. Whether it be the AWS folk who staff these events and who — I can tell you with certainty — put their blood, sweat and tears into their tech demos, breakout sessions and presentations, or colleagues from other organisations: developers, data engineers, managers, leaders, security analysts — these events are full of them. From every industry vertical and corner of the landscape they come, and the possibility of conversations, shared knowledge and shared experience could be the perfect spark of inspiration your team needs.&lt;/p&gt;

&lt;p&gt;Are you going to be the one that says no to that?&lt;/p&gt;

&lt;h2&gt;
  
  
  Delivering a serverless hail mary
&lt;/h2&gt;

&lt;p&gt;Of course that initial inspiration wasn’t the only boon I got from that first summit, it was the start of a chain of events that saw me able to deliver a tangible technology solution into production — the first of many — that is still running to this day and still delivering a stack of value.&lt;/p&gt;

&lt;p&gt;A few months after that initial AWS summit (all the while me burying my head into learning more about the cloud ecosystem and all it could offer) I was part of a project to move from our (then) daily batch-driven integration of sales transactions between physical retail stores and our ERP to something more real-time.&lt;/p&gt;

&lt;p&gt;We wanted to move from delivering crucial sales information from hundreds of retail outlets in a nightly load via a hodge podge of FTP servers and flat files to something more modern, responsive and scalable.&lt;/p&gt;

&lt;p&gt;I was only on the periphery of the project at the time, involved more than anything in managing some of the resources being provided to the project to ensure physical stores weren’t negatively impacted by the changes. But as the project approached go-live, with maybe a month or so to go, a bombshell hit. It turned out the vendor-provided product we thought would give us the real-time replication of sales we needed couldn’t, in fact, deliver. What it could do was — at best — deliver the sales data in four-hourly batches, which was so far from real time and so close to the existing daily batch that we might as well have done nothing.&lt;/p&gt;

&lt;p&gt;Project status was looking less than ideal, and there was a lot of time, effort and money tied up in the delivery. My small team were approached in a bit of a ‘hail mary’ situation, the question was basically &lt;em&gt;‘is there anything you can do?’&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Well it turns out, having been to a summit, become inspired and kept on a path of learning and experimenting, there actually was.&lt;/p&gt;

&lt;p&gt;So I took everything I’d learned, sat down, and began to architect a solution that leveraged a combination of C# and AWS to deliver — I hoped — exactly what the project was looking for, real time sales transaction replication for each of the 500+ stores in the network to our cloud-based ERP system.&lt;/p&gt;

&lt;p&gt;What I ended up designing and subsequently building was fairly simple, looking back. I created a couple of Windows services using C#: one to track changes in the store’s database related to sales transactions, and the other responsible for picking up those changes. My next job was getting those transactions from that service into AWS, where our ERP was also sitting. I landed on using something simple and robust — an SQS queue.&lt;/p&gt;

&lt;p&gt;From that SQS queue, I set up a .NET Lambda which would trigger off events arriving in the queue. Now I’d read enough to know that SQS was not without its pitfalls, nor was Lambda, so I needed to architect for failure. To deal with any nasty surprises I set up a small DynamoDb table which would be used as a way to hold failures, but also as a way to track the state of the messages as they transitioned from the queue to the Lambda and beyond. Within my Lambda function I wrote some code to transform the sales data, then finally post it to a REST API which would then see the message safely arrive at its final destination — our back end ERP system.&lt;/p&gt;

&lt;p&gt;It also turned out that on occasion some of our sales transactions exceeded the 256KB limit of an SQS message, so thinking through the problem, I devised a workaround. As each message was tested for size within the store server, if it was found to be &amp;gt; 256KB, I would instead upload the transaction to an S3 bucket, and then send a bit of a ‘proxy’ message via SQS. This proxy message had some header information and a flag which told the Lambda as it pulled the message off the queue, to use the message as a pointer to an object in S3, and process the payload from there instead.&lt;/p&gt;

&lt;p&gt;The whole thing looked a little like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CMimZvnP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3oagjkb1fmwf5il0hogf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CMimZvnP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3oagjkb1fmwf5il0hogf.png" alt="Image description" width="680" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It was simple, effective and, from the moment the message hit the SQS queue, entirely serverless. It was built, tested and deployed end-to-end within a month or so.&lt;/p&gt;

&lt;p&gt;We dubbed it ‘Store2Cloud’ or S2C for short, and it has been delivering 4 to 10 transactions per second from stores around the country to the ERP system for going on 6 years now without missing a beat. Of course it’s evolved slightly, and we’ve made improvements here or there around cost or performance or stability, but the core solution today is the same as was dreamt up in the heat of a project deadline all those years ago.&lt;/p&gt;

&lt;p&gt;It was a solution that simply would not have been possible without leveraging AWS technology, because we needed a solution that could be stood up quickly while not compromising on security, scalability and stability, and I am certain that without that initial trip to the AWS Summit Sydney and the inspiration that followed it would never have existed, and the project would have lurched towards the finish line as a ball of fiery nonsense.&lt;/p&gt;

&lt;p&gt;I wouldn’t dare to say that because you’re willing to send your tech team (or indeed others from your organisation) to the next AWS summit or cloud expo you’re going to bring forth project saving solutions or a mass spread of tech inspiration. That’s not the point I’m trying to get across. Let’s call my experience — my story — an extreme case. But even if you sent someone to an event and they came back with 5% or 10% of these feelings? If they came back and passed on their knowledge to 2 other people instead of 10? If they came back and found ways using AWS services — existing or new — to save you money or deliver new value? Isn’t that worth it? Is it really so easy to simply not consider the value that might get added long term to your technology function from keeping your engineers engaged through attendance to vendor and community events — even if they happen to be interstate or even — gods forbid — international?&lt;/p&gt;

&lt;p&gt;What price is too high for fostering and nurturing a team of inspired, dedicated and continually improving builders who have at their disposal a vast network of peers, colleagues and industry experts they’ve met and collaborated with in and outside of these events?&lt;/p&gt;

&lt;p&gt;Surely, whatever that number, whatever price &lt;strong&gt;&lt;em&gt;is&lt;/em&gt;&lt;/strong&gt; too high, it’s far and away more than the cost of a plane ticket and a hotel?&lt;/p&gt;

&lt;p&gt;The next time you get asked, say yes. Who knows, you might just be witnessing the creation of the next Store2Cloud?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AWKcgICJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a8t6v9dfumciicc64cgm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AWKcgICJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a8t6v9dfumciicc64cgm.jpg" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>career</category>
      <category>aws</category>
      <category>learning</category>
      <category>motivation</category>
    </item>
    <item>
      <title>Dear developer, are you one AI model away from being replaced?</title>
      <dc:creator>James Matson</dc:creator>
      <pubDate>Wed, 08 Nov 2023 10:43:39 +0000</pubDate>
      <link>https://forem.com/aws-builders/dear-developer-are-you-one-ai-model-away-from-being-replaced-55cg</link>
      <guid>https://forem.com/aws-builders/dear-developer-are-you-one-ai-model-away-from-being-replaced-55cg</guid>
      <description>&lt;p&gt;From nerdy excitement to existential dread, LLMs evoke so many emotions in developers. Is one of them fear? Should you be worried about your job?&lt;/p&gt;

&lt;h2&gt;
  
  
  Once a nerd
&lt;/h2&gt;

&lt;p&gt;My day job is as a software engineer.&lt;/p&gt;

&lt;p&gt;I design and build software and lead a team of people who do just the same. I don’t work at a FAANG or one of “the big” banks, so I certainly don’t think of myself as any kind of subject matter expert on anything in particular. I’m just a guy who really, &lt;em&gt;really&lt;/em&gt; loves software.&lt;/p&gt;

&lt;p&gt;As a child of the 80s, I grew up around what could be thought of as the first wave of home computers. My childhood best friend was — for better or worse — a Commodore 64 and then an Amiga 500 (followed by the awesomely powerful Amiga 2000HD — 52MB of hard drive storage! Pwhoar.)&lt;/p&gt;

&lt;p&gt;I started experimenting with code when I was around 10 years old. At first it was C64 BASIC, following arcane instruction sets laid out in cassette-tape-adorned magazines like Zzap!64, and later on the Amiga using long since dead languages like &lt;a href="https://en.wikipedia.org/wiki/AMOS_(programming_language)"&gt;AMOS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MEVD0RS0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/39hmvu35rprxzhww5dxu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MEVD0RS0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/39hmvu35rprxzhww5dxu.png" alt="Image description" width="800" height="667"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Yep. I really was that young once upon a time, never far from a computer.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;My early teenage nights were spent leaving my Amiga on overnight to render frame after slow frame of seemingly impossible landscapes using Vista fractal generators. Better than any alarm clock was the lure of waking first thing in the morning to find the meagre 30 frames of animation had finished calculating, my reward a 10 second ‘flythrough’ of a mountain landscape.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O6wvGjvz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2im63gt0qy4bv5nsa9d2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O6wvGjvz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2im63gt0qy4bv5nsa9d2.png" alt="Image description" width="800" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Today? A blurry low resolution mess. In 1992? Magic. Source: &lt;a href="http://www.complang.tuwien.ac.at/"&gt;http://www.complang.tuwien.ac.at/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I think I was fated to go into a career in technology. From the moment a Commodore 64 appeared — with all the mystery of childhood Christmas — under the tree, I was hooked.&lt;/p&gt;

&lt;p&gt;I enjoy my job, a lot. I’m one of those people who feels blessed because whatever it is I’m doing in my day job I’d get a great deal of enjoyment out of doing “just because”. That’s a rare gift, and one I’m thankful for. It’s part of the reason that I write here, on Medium. I don’t just love software and tech, I love talking about it, writing about it, just — being immersed in it. Naturally, as soon as the background hum started up about large language models and ChatGPT, I couldn’t wait to get my hands dirty checking out this new technology.&lt;/p&gt;

&lt;h2&gt;
  
  
  A conversation with the future
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;“You never change things by fighting the existing reality.&lt;br&gt;
To change something, build a new model that makes the existing model obsolete.” — Buckminster Fuller&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The first time I loaded up ChatGPT (circa 3.5) and asked it to create a method in C#, I was astounded. Within seconds it spat out a perfectly coherent, well-structured method to do something (I can’t remember what) and I sat there giddy.&lt;/p&gt;

&lt;p&gt;I’d tinkered with all kinds of AI/ML services and models in the past but this — this was sorcery.&lt;/p&gt;

&lt;p&gt;Over the next few minutes I engaged in an escalating game of ‘can you?’ with ChatGPT and, apart from the hiccups and hallucinations (libraries or properties that don’t exist, for the most part), it pretty much delivered every bit of C#, Python, CloudFormation, or anything else I threw at it.&lt;/p&gt;

&lt;p&gt;I didn’t need to be a futurist to know a paradigm shift was being delivered to me straight through the unassuming chat interface of &lt;a href="https://chat.openai.com/"&gt;https://chat.openai.com/&lt;/a&gt; (a website — for what it’s worth — that still ranks in the top 30 most visited websites in the world, ahead of stalwarts like eBay and Twitch).&lt;/p&gt;

&lt;p&gt;After I got over the electric jolt of excitement and possibility, another feeling began to settle over me like a heavy blanket, slightly — but not completely — snuffing out those other feelings.&lt;/p&gt;

&lt;p&gt;It was a sort of anxiety, maybe? Was that it? Anxiety? Or fear, maybe.&lt;/p&gt;

&lt;p&gt;Yes. That’s it. &lt;em&gt;Fear&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;It wasn’t particularly strong. It wasn’t as though I ran from the room screaming obscenities and setting stuff on fire in a mad panic, but the feeling was definitely real and it co-exists — even now — with those other feelings of excitement and possibility.&lt;/p&gt;

&lt;p&gt;It dawned on me in those moments, and many since, that what I was looking at — LLMs trained on a corpus of knowledge I could never hope to hold with abilities of recall beyond any human — was a seismic shift in my chosen career. A shift the magnitude of which I may not fully grasp until I’m already in the middle of it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--n8ito2MP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sqevomu0qv80yxtlegnn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n8ito2MP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sqevomu0qv80yxtlegnn.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A future of code generating code generating code generating code. Where is the human in all this? Source: &lt;a href="https://creator.nightcafe.studio/"&gt;https://creator.nightcafe.studio/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This isn’t IntelliSense or a glorified spellcheck. It’s not helping me find the right property or method from a list. It seems — at least on the surface — to be reasoning. To be creating. To be ‘doing’ of its own accord. It’s able to take simple non-technical instruction and produce completely feasible code, the natural extension of which would be what? Codebases? Services? Frameworks? Entire solutions?&lt;/p&gt;

&lt;p&gt;I’d read enough to know what’s at the heart of a large language model, but the illusion was still strong enough to unsettle me.&lt;/p&gt;

&lt;p&gt;So there I was — and still am — a guy who loves to write software, who’s built his entire professional identity around an affinity for and a talent with technology, watching this large language model do with effortless precision a chunk of what I do.&lt;/p&gt;

&lt;p&gt;Not everything I do — not by a long shot — and we’ll talk more about that in this piece, but to simply ignore the coming impact of large language models on the world of software engineering feels like a disservice to myself, to my future, and indeed the future of everyone that chose a career path like mine.&lt;/p&gt;

&lt;p&gt;Even as I write this, I haven’t found a way to reconcile how I feel about what the future holds, or even what credence I give to that future being an entirely positive one. But what I have done over the past few months is listen to and read a lot of people far smarter than myself — on both sides of the argument — and I’m here to give you what little wisdom I have on the topic, in the hopes that it helps you clarify where you fall on the spectrum of a software engineering future that will have AI embedded in the very day-to-day fiber of how we work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Attention is all you need
&lt;/h2&gt;

&lt;p&gt;Before we settle into a discussion of just how fearful of current day large language models (LLMs) the average software engineer should be (if at all?) it’s worth taking a quick trip down memory lane.&lt;/p&gt;

&lt;p&gt;It might &lt;em&gt;feel&lt;/em&gt; to the casual observer as though LLMs (à la GPT, LaMDA, Falcon, BERT etc.) just barreled into our lives like a token-predicting avalanche from nowhere, but the truth is what we’re seeing today is simply the tipping point into the mainstream of a long history of improvements in machine learning, none quite so impactful to the LLM of today as the transformer.&lt;/p&gt;

&lt;p&gt;In 2017, a group of researchers from Google Research and Google Brain (the organisation that parent company Alphabet has since merged with DeepMind to form Google DeepMind) published a technical paper titled ‘Attention Is All You Need’.&lt;/p&gt;

&lt;p&gt;The paper (available here) has been talked about ad nauseum online so I’m not going to spend a lot of time on it, but it’s worth pointing out that it was this paper and the revolution in deep learning that followed which placed us exactly where we are today.&lt;/p&gt;

&lt;p&gt;Until the advent of the modern transformer mechanism, deep learning had coalesced around two types of neural networks: recurrent neural networks (RNNs) and convolutional neural networks (CNNs). (Side note: you can check out another article of mine where I use an RNN to create music, with terrible results!)&lt;/p&gt;

&lt;p&gt;With the development of the transformer model for natural language processing (NLP), the use of RNN or CNN mechanisms was dropped in favor of the concept of ‘self-attention’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ljFFMQA5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0cz499a8henly042r6qw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ljFFMQA5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0cz499a8henly042r6qw.png" alt="Image description" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Recent timeline of deep-learning (source: Perspectives and Prospects on Transformer Architecture for Cross-Modal Tasks with Language and Vision)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This ‘self’ attention mechanism in the transformer model doesn’t look at the relationship between words/tokens one part at a time, or care whether the words sit in the encoder or the decoder; instead it looks at all the tokens together, with the ‘self’ part focusing on the relationship between each token/word and every other token/word.&lt;/p&gt;
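&lt;p&gt;To make that concrete, here’s my own minimal NumPy sketch of scaled dot-product self-attention (a toy illustration, not the paper’s actual implementation — the matrix names and sizes are arbitrary): every token’s output is a weighted blend of every token’s value vector, with the weights computed from all pairwise token comparisons at once.&lt;/p&gt;

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project every token embedding into query, key and value spaces.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # scores[i, j]: how relevant token j is to token i -- computed for
    # ALL token pairs at once, with no notion of "one part at a time".
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    # Each output is a weighted mix of every token's value vector.
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))   # 4 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # one attended 8-dim vector per token
```

&lt;p&gt;Real transformers add multiple attention heads, masking and learned per-layer projections on top of this, but the “everything attends to everything” core is just these few matrix multiplications.&lt;/p&gt;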

&lt;blockquote&gt;
&lt;p&gt;“The simple act of paying attention can take you a long way” — Keanu Reeves&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The transformer concept has a lot more to it than that, but here are the important takeaways: it’s powerful, scalable and supremely responsible for the large-language-model-driven world we’re all living through today. Grasping the enormity and complexity of some of these models is simply not a task of which any single person is capable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reuters.com/technology/chinas-tencent-says-large-language-ai-model-hunyuan-available-enterprise-use-2023-09-07/"&gt;Hunyuan&lt;/a&gt;, the latest LLM developed by Chinese company Tencent has as many parameters as there are stars in the milky way galaxy; 100 billion.&lt;/p&gt;

&lt;h2&gt;
  
  
  Friend or foe?
&lt;/h2&gt;

&lt;p&gt;So here we are. It’s 2023 and the battle lines are drawn on either side of the AI debate, with experts in both camps and millions of ordinary people like you and me sitting in the middle wondering what’s going to happen.&lt;/p&gt;

&lt;p&gt;As I mentioned earlier, the only way I’ve found to make some sense of the arguments amidst the daily deluge of AI/ML products and services and the absurd speed of advancements is to take careful stock of some of the arguments for — and against — the proposition that AI (generative AI to be precise) is a boon to humanity.&lt;/p&gt;

&lt;p&gt;And there is no shortage of opinions being put forward and podcast conversations bubbling in every conceivable corner of the internet. Philosophers, technologists, politicians and engineers are all taking up the mantle, either for the innumerable benefits of AI, or warning us of the borderline apocalyptic nature of its avalanche across the social fabric of our world.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8l5nqP3J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r1rz0b8rd5ywacv98sml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8l5nqP3J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r1rz0b8rd5ywacv98sml.png" alt="Image description" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Google search trends for ‘generative ai’ over the past 5 years. Source: &lt;a href="https://trends.google.com/"&gt;https://trends.google.com/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It’s with that in mind that I put down some of my thoughts here, as one lone developer trying to make sense of the wild west of generative AI and its impact on — selfishly — him.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sunshine and Tokenisation
&lt;/h2&gt;

&lt;p&gt;The argument that large language models are a boon to the developer is an easy one to make. At its core, a popular mainstay of the supporter is to pitch AI as giving developers superpowers. Hey, as a developer myself I don’t think there’s even a bit of hyperbole in that statement. As an avid user of GitHub Copilot, CodeWhisperer and other AI code ‘assistants’, I am here to tell you that having a widget bolted into your IDE that, with a little bit of prompting or the start of a method name, will spit out entire, reasonable functions is indeed powerful.&lt;/p&gt;

&lt;p&gt;I treat ChatGPT and Copilot like a super-intelligent search engine tailored to my specific needs. Rather than having to scour Stack Overflow for answers to problems that are maybe 5% — 70% the same as mine, I get back a crafted response, code block or architecture suggestion that’s 99% exactly what I was looking for. Has it sped up my work? It depends entirely on the language or area I’m working in. For example, if I’m looking for some Python code or PostgreSQL script to get me out of a jam, I can rely almost entirely on ChatGPT to give me what I’m looking for, with my role relegated to spotting any obvious errors, and running it to check the outcome. It can be a bit more miss than hit when I’m working with Terraform, however.&lt;/p&gt;

&lt;p&gt;Overall though, it’s easy to feel right now (and I stress, right now) as though being a developer has never been better. As I write code in Visual Studio Code, my AI Copilot is right there in the sidebar, ready to help me find bugs, suggest improvements, and help me look for ways to optimise functions I’ve written and think about technical problems in ways I’d never imagined.&lt;/p&gt;

&lt;p&gt;But even when AI works this well, I can feel — mistakenly or not — that what I’m seeing is just a delightful seed that could bear terrible fruit in the future. What about when AI does much more than just play the part of my humble sidebar assistant? What about when it is the entire IDE? When it’s the idea, the development and the execution all in one? Is the role I want to play in this relationship one of just pasting stuff in and running it? Or doing validation of work already completed? Considering this possible future leaves me with all kinds of questions about what it really means to ‘create’ software. How much do I value a super power if it takes away the joy of making? Of building? Am I maybe the only developer who — like the painter at a canvas — is driven in part by the sheer joy of putting brush to canvas?&lt;/p&gt;

&lt;p&gt;Supporters of code assistants like GitHub CEO Thomas Dohmke tend to see only the sunny side of this equation. I’ve been lucky enough to be in the room with Mr Dohmke as he’s espoused the coming AI-led revolution in ‘developer happiness’ and his positivity is infectious.&lt;/p&gt;

&lt;p&gt;Just read his words in a recent &lt;a href="https://www.madrona.com/github-ceo-thomas-dohmke-copilot-generative-ai/"&gt;interview &lt;/a&gt;from Madrona.com. &lt;em&gt;“The generative AI-powered developer experiences gives developers a way to be more creative. And, I mentioned DevOps earlier. I think DevOps is great because it has created a lot of safeguards, and it has made a lot of managers happy because they can monitor the flow of the idea all the way to the cloud and they can track the cycle time….but it hasn’t actually made developers more happy. It hasn’t given them the space to be creative. And so, by bringing AI into the developer workflow by letting developers stay in the flow, we are bringing something back that got lost in the last 20 years, which is creativity, which is happiness, which is not bogging down developers with debugging and solving problems all day but letting them actually write what they want to write. I think that is the true power of AI for software developers.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Well, that sounds OK, right? I like being happy. Usually it’s donuts that make me happy, but if AI can do it too then I’m here for it.&lt;/p&gt;

&lt;p&gt;The statistics offered by Thomas and the team behind one of the most popular AI code assistants ‘Copilot’ support the broad appeal of superpowered AI assistants for code generation. He has on many occasions talked about an up to 30% productivity increase for developers that use Copilot, and believes that sooner rather than later, 80% of code will be written by the AI rather than by the developer.&lt;/p&gt;

&lt;p&gt;And that’s just considering things from the ‘developer’ viewpoint. What about the barrier to entry for software creation itself? If the barrier was weakened by the advent of tech like ‘low’ or ‘no’ code, then generative AI is likely to see the remaining barrier simply evaporate. It’s hard to argue against that being a ‘societal good’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ys41OYa---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n0lvvwwhlkij4c8cnp11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ys41OYa---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n0lvvwwhlkij4c8cnp11.png" alt="Image description" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Acceptance rate of Copilot code recommendations over time. Source: &lt;a href="https://github.blog/"&gt;https://github.blog/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you remember, at the beginning of this piece I talked about how the developer’s role isn’t just writing code, and this is where the productivity argument comes to rest. The idea is that the writing of code is just a portion of what the developer does, and not even the best/most fun part. Outside of writing code a couple of hours a day, the developer is involved in ideation meetings, design sessions, problem solving, collaborating with business users, wrestling with requirements and generally immersing themselves in all the stuff that comes “before” and “after” the code. To proponents of the “AI will complement” argument, these things are seen as the real job of the software engineer.&lt;/p&gt;

&lt;p&gt;So let’s imagine that LLMs take away 50% of the coding task from engineers. On the surface — according to optimists like Thomas Dohmke — this resolves to simply being 50% more productive.&lt;/p&gt;

&lt;p&gt;The ‘&lt;em&gt;superpower&lt;/em&gt;’ effect.&lt;/p&gt;

&lt;p&gt;More time for the collaboration, the human element — the ideation and connection. But is that really how things pan out? The reality of a 50% increase in automation could have two detrimental effects in terms of job stability and security. In one case, you get 50% more output, which means there are fewer new jobs for incoming software developers eager to get a break in the field. Alternatively, an organisation might realise it needs 50% fewer developers, so you’ll see those people shed and put out into a market where the organisations around them are also shedding engineers in favor of automation. At the moment, the cost to integrate AI assistants into your software development lifecycle isn’t cheap, but it’s also not exorbitantly expensive — and like with all technology — it’ll just get cheaper and cheaper.&lt;/p&gt;

&lt;p&gt;OpenAI CEO Sam Altman, in a recent &lt;a href="https://www.youtube.com/watch?v=qof80Sy3__8"&gt;podcast &lt;/a&gt;with Lex Fridman responded to this potential problem of ‘10x productivity means 10x less developers’ by asserting that &lt;em&gt;“I think we’ll find out that if the world can have 10 times the code at the same price, we’ll just find ways to use even more code”.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I like the sentiment, but he didn’t go on to provide his reasoning beyond believing we still have a ‘supply’ issue in software engineering (for how long?) and — in the same conversation — discussing the very real possibility that AI will obliterate jobs in some areas like customer service. Is it so hard to believe that it could do the same to your average software engineer? Even Sam, a little further into the interview, wants to be clear that he thinks these systems will make a “lot” of jobs just “go away”.&lt;/p&gt;

&lt;p&gt;We might very well simply evolve more use cases for code, but in a world already eaten by software — what’s left? A supercharged race to the bottom on cost as automation takes over in an end-to-end engineering landscape.&lt;/p&gt;

&lt;p&gt;Right now — as of November 2023 — we’re not at a place where you could see any kind of seismic shift in the developer job market thanks to generative AI, simply because it’s still new, and the big tech companies are still in a flurry launching (and in some cases re-launching) products with AI built-in (I’m looking at you Microsoft and Google!). But remember, it’s taken from 2017 to now to get where we are. From that fateful paper on transformers, to a world where code just manifests on the screen thanks to a few choice conversational words.&lt;/p&gt;

&lt;p&gt;Where will the technology be in another 6 years? Already we’re witnessing LLMs that can &lt;a href="https://medium.com/better-programming/the-next-evolution-of-azure-openai-gpt-models-a-guide-to-executing-functions-baa93d67aeaa"&gt;execute structured methods&lt;/a&gt; surfaced from natural language conversations, and hallucinations — perhaps the biggest problem facing the average large language model — are being lessened by using the models themselves in a verification/validation &lt;a href="https://python.langchain.com/docs/use_cases/more/self_check/llm_checker"&gt;loop&lt;/a&gt; (LLMs validating their own outputs for truth).&lt;/p&gt;

&lt;p&gt;The other day I saw a post on LinkedIn that proudly proclaimed that fear about AI taking jobs is utter nonsense. “The calculator did not replace mathematicians!” the author offers us with smug assurance. True, but I feel like this slightly misses the mark of what generative AI is — or at least has the capacity to be. AI isn’t the calculator, it’s the mathematician. Or it’s on a journey towards that.&lt;/p&gt;

&lt;p&gt;If the engineering loop — from a technical standpoint — is synthesis, verification and repair, then large language models are already stitching together that loop with continually better results.&lt;/p&gt;

&lt;p&gt;So where does that leave me? I find myself giving a fairly low credence (trying — as physicist Sean Carroll might tell us — to be a good Bayesian) to a future where AI simply gives developers superpowers, we’re all stupendously productive, and that’s that. Maybe 20%? Equally, I can’t imagine that future iterations will simply do away with the millions of developers — myself included — in some sort of wholesale shift. That’s the view taken by Emad Mostaque, founder and CEO of Stability AI.&lt;/p&gt;

&lt;p&gt;His take is ruthlessly simple; &lt;em&gt;“There will be no programmers in five years.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I don’t know that that’s likely either, so I’m giving that maybe a 30% credence. Maybe there’s a part of me that just hopes that isn’t likely? So you could call that a bias, but also — despite knowing humans are notoriously bad at predicting the future — I just can’t imagine things changing that fast.&lt;/p&gt;

&lt;p&gt;In the future I give the highest credence (let’s say 70%), the role of the developer changes over the next 5 to 10 years, with code and ‘building’ things taking a backseat to caretaking the AI. In this world, those of us who are senior developers now may very well be the last of our kind. Children who are looking at the future of their education, or teenagers hoping to enter the software engineering workforce in the next decade? Those are the ones for whom I have the greatest concern. In my theoretical future, there may simply be no places for them to go in a world where today — as I write this piece — &lt;a href="https://decrypt.co/147191/no-human-programmers-five-years-ai-stability-ceo"&gt;40% or more of code on GitHub is AI generated&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  This is not the end
&lt;/h2&gt;

&lt;p&gt;So I find myself feeling somewhat pessimistic about where the long term future of a software developer is, but I also manage to pull myself back from the brink of the true AI alarmists. If you’ve delved into that side of the argument, you’ll find people — far smarter than me — who are genuinely concerned about the advent and evolution of the AI age.&lt;/p&gt;

&lt;p&gt;Thinkers of worldwide renown such as Sam Harris (neuroscientist, philosopher), Max Tegmark (physicist, AI researcher), Eliezer Yudkowsky (AI researcher) and more have sounded the alarm about what AI is already doing to our world, but more importantly what it will do when it evolves into what most people think it’s going to: artificial general intelligence, or ‘AGI’. It’s at this point that many believe the AI (or many AIs?) will achieve a level of sentience, of consciousness.&lt;/p&gt;

&lt;p&gt;What will this mean? No-one knows, but that’s possibly the scariest part. You see, right now large language models give a pretty good impression of being ‘alive’ or ‘sentient’, but they’re not.&lt;/p&gt;

&lt;p&gt;It’s a trick. An illusion. A facsimile. You can ask an LLM to write some code to take a person’s e-commerce order and calculate the discounts they should be given on certain products, and it’ll provide the code — but it has absolutely no knowledge of what a person is or what discounts are; it doesn’t understand anything. It doesn’t even understand that it is ‘something’, which fails the famous Thomas Nagel test of consciousness: that to recognise something as conscious, you must be able to imagine that it is “like something, to be that thing” (for example, it is like something to be a bat, but it’s not like anything to “be” a rock).&lt;/p&gt;

&lt;p&gt;(Sidenote: I’m fairly sure I’m conscious, but I have felt like a rock on some Monday mornings. Take that how you will….)&lt;/p&gt;

&lt;p&gt;The worry is that if (when?) these AI models gain an understanding of ontology and an awareness of themselves, they won’t exactly be aligned to what we’d consider human morals and ethics. The AI may have its own concept of ethics/morals that doesn’t align to ours, or perhaps something so abstract we wouldn’t even recognise it — and that could spell disaster for the human race.&lt;/p&gt;

&lt;p&gt;It’s referred to as ‘the alignment problem’ and, in Eliezer Yudkowsky’s words: “It’s (the AI) got something that it actually does care about, which makes no mention of you. And you are made of atoms that it can use for something else. That’s all there is to it in the end.&lt;/p&gt;

&lt;p&gt;The reason you’re not in its utility function is that the programmers did not know how to do that. The people who built the AI, or the people who built the AI that built the AI that built the AI, did not have the technical knowledge that nobody on earth has at the moment as far as I know, whereby you can do that thing and you can control in detail what that thing ends up caring about.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZinF3Blk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zvb4c44hiyaf6v9zkn6r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZinF3Blk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zvb4c44hiyaf6v9zkn6r.png" alt="Image description" width="800" height="798"&gt;&lt;/a&gt;&lt;br&gt;
A future where people master machines, or machines master people? Source: &lt;a href="https://creator.nightcafe.studio/"&gt;https://creator.nightcafe.studio/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I don’t know if or when an AI will reach this level of sophistication, that it can actually be thought of as a sentient agent, but I give the likelihood of this happening soon (say — in the next 10 years) a low credence. Maybe 10%. Why, you say? Well, I tend to think that we — humans — don’t really understand consciousness as it is. If we don’t understand it right now, how can we hope that by creating larger and more powerful ‘next token’ predictors, consciousness will somehow manifest? I wish I could remember who to attribute the quote to, but I heard an AI researcher the other day say something that resonated with me:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“I could conceivably generate an atomically complete model of the human urinary system on a computer. Complete with an accurate model of its inner workings, it’s fibres, structure — the works. But doing so won’t result in my computer taking a piss on my desk”.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It’s a colorful way of saying that just because you model something in the abstract, it doesn’t mean you’re actually manifesting “the thing”. So we can keep building better and more complex large language models that do a great job of approximating intelligence and awareness, but that doesn’t mean we’re anywhere nearer to actually creating artificial ‘life’.&lt;/p&gt;

&lt;p&gt;So all of this — this lone study of “what’s going on” in this landscape, is by no means over. The sands are constantly shifting, the growth absurd, but I think I’ve found a place to settle ‘my view’ as it relates to the dangers and fortunes of a world covered in generative AI, at least as it relates to the humble developer.&lt;/p&gt;

&lt;p&gt;I think we’re in for a future of immense creative power, of productivity gains we’ve never seen before. An upheaval of what the traditional role of the developer is. Some of it will be amazing — miraculous even, but plenty of it will suck. There will be job losses, there will be a contraction in opportunities in the future. That’s my humble prediction. I’ll probably be okay, but I think plenty of people in the industry won’t be.&lt;/p&gt;

&lt;p&gt;And perhaps most important of all — &lt;em&gt;selfishly&lt;/em&gt;, to me at least — I’ll say a long, slow goodbye to one of the things I find most enjoyable about being a software developer: writing that beautiful, beautiful code.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>llm</category>
      <category>programming</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Supercharge Your AWS Lambda Game With Lambda Powertools</title>
      <dc:creator>James Matson</dc:creator>
      <pubDate>Sat, 04 Nov 2023 08:02:14 +0000</pubDate>
      <link>https://forem.com/kknd4eva/supercharge-your-aws-lambda-game-with-lambda-powertools-43np</link>
      <guid>https://forem.com/kknd4eva/supercharge-your-aws-lambda-game-with-lambda-powertools-43np</guid>
      <description>&lt;p&gt;&lt;strong&gt;So you’ve figured out how to run serverless code in AWS. Now what? Give your Lambda function superpowers with AWS Powertools for logging, metrics, and more&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you first figure out how AWS Lambda works, it’s a complete game changer. You mean I can concentrate on the bit that matters — the business logic of my code — and forget about all that banal stuff around servers? Operating systems? Patching? Image management? Auto-scaling groups? Retiring AMIs?&lt;/p&gt;

&lt;p&gt;Well, yes! That’s the initial power of serverless with AWS Lambda.&lt;/p&gt;

&lt;p&gt;It’s a selling point so strong — that resonates so well with developers — that the team I look after at work is almost exclusively building with serverless these days.&lt;/p&gt;

&lt;p&gt;But after you’ve picked your teeth out of your underpants from the sheer awe of how awesome Lambda is, you begin to realise that there’s also a rich ecosystem of libraries, tools, and packages that help make your Lambda development experience even more powerful. Perhaps none more so than Lambda Powertools.&lt;/p&gt;

&lt;p&gt;Available for .NET, Java, TypeScript, and Python from &lt;a href="https://github.com/aws-powertools/"&gt;https://github.com/aws-powertools/&lt;/a&gt;, Powertools is designed to give AWS developers a leg up by offering a suite of utilities that put best-practice patterns at your fingertips for the following areas of Lambda development:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Logging&lt;/strong&gt;: Gives you a fantastic structured JSON logger that can capture the incoming Lambda event and invocation details (cold starts, etc.) for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Metrics&lt;/strong&gt;: Provides a means to easily collect custom metrics from your application for storage and retrieval via AWS CloudWatch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tracing&lt;/strong&gt;: One of the most powerful companion services to AWS Lambda is X-Ray. Powertools tracing lets you easily hook into X-Ray, providing a simple way to send traces from your functions.&lt;/p&gt;

&lt;p&gt;Plus two new(er) features included in the Powertools bundle are Parameters, which provide a simple way to work with parameter values stored in SSM, Secrets Manager, or DynamoDb, and Idempotency, which helps developers build functions that produce the same results when rerun to allow safe retries.&lt;/p&gt;
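
&lt;p&gt;To give a quick taste of the Parameters utility, fetching a value from SSM is close to a one-liner. (A rough sketch rather than something from the official samples, and the parameter name here is made up.)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using AWS.Lambda.Powertools.Parameters;

// Fetch a single value from SSM Parameter Store.
// "/my-app/api-key" is a hypothetical parameter name.
var ssmProvider = ParametersManager.SsmProvider;
string? apiKey = await ssmProvider.GetAsync("/my-app/api-key");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;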

&lt;p&gt;You can find out much more detail in the Powertools GitHub, but for now, I’m going to give you a personal tour — and beginner’s guide — to using two of the main features of Powertools: Logging and Metrics. So grab your serverless armour, turn on provisioned concurrency, and let’s learn how to utilise Lambda Powertools!&lt;/p&gt;

&lt;p&gt;(Sidenote: All my examples will be in .NET because it’s awesome, but the concepts will apply for the most part to other supported runtimes.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Logging&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6YoEBTpB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kfmj7wb8p08ksqiswcsq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6YoEBTpB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kfmj7wb8p08ksqiswcsq.png" alt="Image description" width="797" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s really (really) hard to find an image that represents logging. I did find some, but they were logging — like, with an axe | istockphoto&lt;br&gt;
Logging is a critical part of observability in your applications, and the quality of your logged content can make — or break — your ability to get useful information and diagnostics about how your application is performing. I spent a fair bit of time getting logging right, particularly with applications that span multiple Lambda functions, so anything that can help make that process simpler or better is immediately something I’m interested in.&lt;/p&gt;

&lt;p&gt;A quick check of the source for the Logging sample code shows me the crux of getting started. First up, there’s a new package to be added to your code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using AWS.Lambda.Powertools.Logging;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, there’s a new logging annotation that can be added to the top of your handler entry point.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yx0vqYYc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j06wvzq8i63blf60ch3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yx0vqYYc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j06wvzq8i63blf60ch3k.png" alt="Image description" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Right away, I’m loving the continued use of annotation-style functionality in .NET and Lambda. (Another great example is the annotations framework for Lambda and .NET. If you haven’t checked it out yet — make sure you do!) Adding this LogEvent annotation automatically logs your incoming event as JSON in CloudWatch (in this case, it would be the APIGatewayProxyRequest object).&lt;/p&gt;
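
&lt;p&gt;In code, the shape of it is roughly this (a minimal sketch rather than the repository’s exact sample; the class and handler names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using AWS.Lambda.Powertools.Logging;

public class Function
{
    // LogEvent = true logs the full incoming event (here, the
    // APIGatewayProxyRequest) to CloudWatch as structured JSON.
    [Logging(LogEvent = true)]
    public APIGatewayProxyResponse FunctionHandler(
        APIGatewayProxyRequest request, ILambdaContext context)
    {
        Logger.LogInformation("Handling request");
        return new APIGatewayProxyResponse { StatusCode = 200 };
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;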

&lt;p&gt;This makes me so, so very happy. Why? Because it’s almost a basic standard in our team to ensure that we record the incoming event in our logs for a given Lambda function. Understanding what data looks like when it gets into your function handler is critical. While you can get some of this information from the API Gateway execution logs if you’re using API Gateway, it can often be truncated.&lt;/p&gt;

&lt;p&gt;Now we can annotate and be done! But that’s not the best part of this nifty event logger. There’s a lot more going on. Let’s take a look at the initial logged event. Apart from the JSON representation of our APIGatewayProxyRequest object, we’ve also got a value telling us whether or not the invocation of the function was performed on a container that was cold starting (the initial ‘spinning up’ of the container).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uqr6blnc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwi35ckeiltre23dg05z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uqr6blnc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwi35ckeiltre23dg05z.png" alt="Image description" width="800" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How awesome is that? For anyone who’s spent time using Lambda and .NET, you’d remember that getting information on cold starts before now (aside from educated guessing) involved adding the CloudWatch Lambda Insights layer to your solution stack.&lt;/p&gt;

&lt;p&gt;But wait! There’s more! (&lt;em&gt;shades of 1990s late-night infomercials for Queens collections of steak knives, anyone?&lt;/em&gt;) Let’s dig into the code a little more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dmF-AFXD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qs1ofylea7jotemeokhf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dmF-AFXD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qs1ofylea7jotemeokhf.png" alt="Image description" width="480" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ALL THIS CAN BE YOURS FOR THE LOW LOW PRICE OF dotnet add&lt;br&gt;
Along with the LogEvent parameter, you can configure several other options either via parameters in the annotation or as environment variables.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;POWERTOOLS_SERVICE_NAME — Lets you define your service’s name, which will appear in the service field of each logged event.&lt;/li&gt;
&lt;li&gt;POWERTOOLS_LOG_LEVEL — This maps directly onto your standard debug, information, error, trace log levels, in line with other popular logging frameworks.&lt;/li&gt;
&lt;li&gt;POWERTOOLS_LOGGER_CASE — Lets you control the casing used in your events (as a snake_case evangelist, I don’t know why you’d even bother with the others ;))&lt;/li&gt;
&lt;li&gt;POWERTOOLS_LOGGER_SAMPLE_RATE — This is an interesting one. Using a double (from 0 to 1), you can control what percentage of your requests are automatically logged at DEBUG level, giving you a way to gather diagnostics over time without enforcing DEBUG always. Very cool!&lt;/li&gt;
&lt;/ul&gt;
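
&lt;p&gt;Most of these have equivalents you can set as parameters on the annotation itself rather than as environment variables, something like the following (a sketch; double-check the parameter names against the Powertools docs for your version, and the service name and sample rate here are just examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Service name, log level, casing, and sampling configured in code
// instead of via environment variables.
[Logging(Service = "membership-service",
         LogLevel = LogLevel.Information,
         LoggerOutputCase = LoggerOutputCase.SnakeCase,
         SamplingRate = 0.1)]
public APIGatewayProxyResponse FunctionHandler(
    APIGatewayProxyRequest request, ILambdaContext context)
{
    // handler body as usual
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;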

&lt;p&gt;Apart from the standard Logger methods like LogError and LogInformation, there’s an interesting one nestled in there called Logger.AppendKeys. This is very neat, as it lets you append a KeyValuePair to your log entries, and every entry after adding it will contain your KeyValuePair as a part of the output.&lt;/p&gt;

&lt;p&gt;Where would this be useful? Imagine you had a function that allows a customer to enquire about membership information. If I were building this function, I’d probably want to accept the customer identifier and ‘attach’ it manually to the log entries so I can search aggregated logs in CloudWatch (via the awesome CloudWatch Insights query language) by customer id to filter out all the other customers.&lt;/p&gt;

&lt;p&gt;With Logger.AppendKeys, I can create a key-value pair with my customer id:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var customerInfo = new Dictionary&amp;lt;string, string&amp;gt;()
        {
            {"CustomerId","CUST1234" }
        };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then append it to the logger:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Logger.AppendKeys(customerInfo);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And — through the magic of Powertools — we now have the customer id stamped neatly against all our subsequent log entries:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---xxQMQO6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tad5eeuk227xbhu41dmf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---xxQMQO6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tad5eeuk227xbhu41dmf.png" alt="Image description" width="800" height="305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that, at a glance, is the Logging portion of Powertools for Lambda/.NET. Neat, huh? There’s probably more packed in there, but this is just me having an initial play around.&lt;/p&gt;

&lt;p&gt;Already, it’s apparent that this package will be easy to fold into our standard tooling simply because it’s easy to use and represents a way to lessen the code footprint we’ll need when building serverless functions.&lt;/p&gt;

&lt;p&gt;A lot of the work we do today to write well-expressed logs will be taken care of via a NuGet package and a quick annotation — magic! Just a reminder, if you want to play around with the sample GitHub code locally, you can do so via the SAM CLI using sam local invoke, as the repository has sample events already provided in JSON format. For our API Gateway example, you’d use:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam local invoke HelloWorldFunction --event events/event.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;(Just don’t forget — like I did — that if you alter the source code and want to see the changes, you must do a sam build :D ). Kudos to the team behind the repository for making everything seamless to use locally or in AWS, with a provided Dockerfile, sample events, clear instructions, and a team responsive to issues raised on GitHub!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Metrics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BtBiSJgs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p7deszpxpqqqjfbq9rlo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BtBiSJgs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p7deszpxpqqqjfbq9rlo.jpg" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Throttles, duration, invocations, error rate. These are the bread and butter of your metrics whenever you bring a new Lambda function into the world. Useful though they are, the intrepid serverless developer will no doubt find a wealth of reasons to want to hook into even more information than the out-of-the-box metrics provide.&lt;/p&gt;

&lt;p&gt;A few different options are available to developers, from adding Lambda layers to provide additional observability to crafting metrics from your logs via CloudWatch Insights. But for .NET developers, there’s another awesome tool made available with the recent GA announcement of Lambda Powertools for .NET: custom metrics.&lt;/p&gt;

&lt;p&gt;So, how can we put custom metrics to use? And how easy does Lambda Powertools make the exercise? (Hint: Super easy).&lt;/p&gt;

&lt;p&gt;Let’s walk through a simple example I’ve put together in Visual Studio to demonstrate how to set up and then report on custom metrics.&lt;/p&gt;

&lt;p&gt;I’ve chosen to start with the Visual Studio template for the Lambda annotations framework (because it’s awesome — more on that another time) and have taken the liberty of creating a simple Lambda and API Gateway-powered HTTP API endpoint that we can use to get a feel for how custom metrics work.&lt;/p&gt;

&lt;p&gt;First things first, the magical metrics package we need to add to our project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet add package AWS.Lambda.Powertools.Metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, let’s take a look at the function code I’ve put together in its entirety, and then we can pull apart the specifics:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R3zRGJ0E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mflcggr2s98fnvfhmpnt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R3zRGJ0E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mflcggr2s98fnvfhmpnt.png" alt="Image description" width="800" height="822"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve set my Lambda handler up to accept a string environment and an integer metricValue. The response is a simple ApiResponse object that contains an HTTP status code and a message.&lt;/p&gt;

&lt;p&gt;Because I’m using the Lambda annotations framework, setting up an HTTP API is a breeze. I decorate my handler with the HttpApi decorator specifying the HTTP verb, path, and parameters. In this case, I’ve created a post endpoint that accepts path parameters of {environment} and {metricValue} as an integer.&lt;/p&gt;

&lt;p&gt;Additionally, I’ve decorated the handler with the configuration for my function (namely memory size and timeout). Thanks to the magic of the annotations framework, as I add or modify these configuration values, my serverless application model (SAM) template is automatically synchronised and updated with the configuration described in the code.&lt;/p&gt;

&lt;p&gt;If that’s not literal sorcery, I don’t know what is.&lt;/p&gt;
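
&lt;p&gt;Stripped back to its essentials, the annotated handler looks something like this (a hedged sketch; the route mirrors my sample, ApiResponse is my own simple type, and the exact names and values may differ from the repository):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Amazon.Lambda.Annotations;
using Amazon.Lambda.Annotations.APIGateway;
using AWS.Lambda.Powertools.Metrics;

public class Functions
{
    // Function config (memory, timeout) is synced into the SAM template
    // by the annotations framework.
    [LambdaFunction(MemorySize = 256, Timeout = 30)]
    [Metrics(Namespace = "MyMetrics", CaptureColdStart = true)]
    [HttpApi(LambdaHttpMethod.Post, "/metrics/embedded/{environment}/{metricValue}")]
    public ApiResponse PostMetric(string environment, int metricValue)
    {
        // ... push the custom metric, then return a status and message
        return new ApiResponse(200, "Metric recorded");
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;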

&lt;p&gt;The idea with this sample is that I’ll be able to call the API, pass a {metricValue} to it, and use those values to generate CloudWatch custom metrics I can view in AWS. So where does {environment} come in? Well, we’re going to use that to demonstrate dimensionality in metrics.&lt;/p&gt;

&lt;p&gt;Essentially, we’ll be able to report metrics to the same namespace but report them as either /production or /development so that we can further slice our metrics by the dimension of environment.&lt;/p&gt;

&lt;p&gt;So, let’s look at the actual pushing of our custom metric:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NNE_rnxR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pzw9e93srxq7ihf99vaw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NNE_rnxR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pzw9e93srxq7ihf99vaw.png" alt="Image description" width="639" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, we’ve pushed one single metric value to CloudWatch. We’re sourcing the value and dimension from our API endpoint; the rest has been hardcoded. The namespace will show in the CloudWatch console under custom namespaces, with the service and dimension(s) underneath the specific metric.&lt;/p&gt;

&lt;p&gt;We’ve chosen our ‘metric unit’ as CountPerSecond, but a few options are available depending on the type of data you want to capture (examples being Seconds, BytesPerSecond, or Percentage).&lt;/p&gt;
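
&lt;p&gt;In code, the push itself boils down to a couple of calls inside the handler (a sketch of what the screenshot shows; the metric name here is just an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using AWS.Lambda.Powertools.Metrics;

// Slice the metric by environment ("production" or "development").
Metrics.AddDimension("Environment", environment);

// Publish one custom metric value; Powertools flushes it to CloudWatch
// (via embedded metric format logs) when the handler completes.
Metrics.AddMetric("SampleMetric", metricValue, MetricUnit.CountPerSecond);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;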

&lt;p&gt;With that one method, we should have everything we need to post the data to CloudWatch. Let’s deploy and see how it works!&lt;/p&gt;

&lt;p&gt;(… one SAM-to-CloudFormation journey later…)&lt;/p&gt;

&lt;p&gt;With our API deployed to AWS, let’s try invoking it with a custom metric value. To be slightly more efficient than just manually calling curl a bunch of times, I’ve put together the following PowerShell script, which will invoke my API a bunch of times, passing in random metric values each time within a range:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (-not $args) {
  Write-Output "Usage: .\test_script.ps1 &amp;lt;api-id&amp;gt;"
  exit 1
}
$api_id = $args[0]
for ($i = 1; $i -le 1000; $i++) {
  $value = Get-Random -Minimum 1 -Maximum 300
  Invoke-RestMethod -Uri "https://$api_id.execute-api.ap-southeast-2.amazonaws.com/metrics/embedded/production/$value" -Method Post
  Start-Sleep -Seconds 1
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;So, let’s fire it off and see what happens!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j7f7pBAj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/595wb3gywd7kmbnni6lu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j7f7pBAj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/595wb3gywd7kmbnni6lu.png" alt="Image description" width="641" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Excellent. My metrics are off (we hope) to CloudWatch. So, let’s log into the console and see what’s available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v9M1m_lo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jv8q95p20qusekhf126p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v9M1m_lo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jv8q95p20qusekhf126p.png" alt="Image description" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Excellent! Under custom namespaces, we can see our MyMetrics namespace available, including our custom dimension of Environment available alongside defaults like Service and FunctionName.&lt;/p&gt;

&lt;p&gt;(Side note: You might have noticed the Lambda handler was also decorated with [Metrics(CaptureColdStart = true)].) This handy metrics feature lets you include Lambda cold starts in your metrics namespace, alongside anything else you want to capture. That’s why there are three metrics available under MyMetrics &amp;gt; Service: one for cold starts, one for my custom metric, and one for, well, the same custom metric but with service_undefined as the service name, because I forgot to include it the first time I created the sample :).&lt;/p&gt;

&lt;p&gt;So, back to our custom metrics. As you can see, I’ve created a simple CloudWatch graph featuring my custom metric viewed in two ways: one as a one-second average, the other as the maximum over the same period. From that, we get the line graph of our custom metric over time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--igCmdWL6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t4pd3fyfcz7560j3wri2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--igCmdWL6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t4pd3fyfcz7560j3wri2.png" alt="Image description" width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s really as easy as that, and while I’ve used a pretty abstract example here, you can imagine all the use cases just waiting to be explored. I know with my team, we’ll be able to utilise custom metrics in our serverless functions as a way to measure the performance of different dependent services without needing to do what we do today: log the data in our own custom JSON format and then ‘hand craft’ metrics via CloudWatch Insights.&lt;/p&gt;

&lt;p&gt;In case you’re looking to reproduce the sample above, I’ve put the code into a small repository here:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/kknd4eva"&gt;
        kknd4eva
      &lt;/a&gt; / &lt;a href="https://github.com/kknd4eva/AWS.Lambda.Powetools.Metrics.Sample"&gt;
        AWS.Lambda.Powetools.Metrics.Sample
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A small sample of using AWS Lambda Powertools for .NET
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/kknd4eva/AWS.Lambda.Powetools.Metrics.Sample"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;Hopefully, this has given you — the intrepid serverless guru — the motivation to add Lambda Powertools to your toolbelt. Please look for a future article where I’ll explore some of the other Powertools utilities.&lt;/p&gt;

</description>
      <category>lambda</category>
      <category>serverless</category>
      <category>dotnet</category>
    </item>
  </channel>
</rss>
