Working with AWS DSQL and Lambda: First Part - Setup of the project

Not long ago, I was exploring Amazon Aurora DSQL, AWS's newest distributed SQL database built specifically for serverless workloads. Unlike traditional databases that require constant maintenance, Aurora DSQL scales automatically and only charges for what you use - making it a perfect match for Lambda functions.

When you think serverless databases, DynamoDB probably comes to mind first — and for good reason. It's been the go-to NoSQL solution for serverless applications for years. But now we can finally consider relational databases in a truly serverless context too. Aurora DSQL brings the familiarity of SQL and relational data modeling to the serverless world, without the overhead of managing database instances or worrying about scaling.

Welcome to the first installment in a three-part series where I'll build a serverless backend using Amazon Aurora DSQL, AWS Lambda, AWS CDK, and GitHub Actions.

Diagram

What will this first article cover?

Project Setup:

  • You’ll learn how to initialize and organize your serverless backend project, including setting up your repository and preparing the necessary files and folders for a smooth development workflow.

AWS CDK for Infrastructure:

  • This article walks you through using AWS CDK to define your cloud infrastructure as code. You’ll see how to use CDK constructs (especially L1 for Aurora DSQL, where applicable) to provision resources such as Lambda functions and Aurora DSQL clusters, and how to structure your CDK stack for maintainability and scalability.

Lambda and DSQL Connectivity:

  • You’ll discover how to establish secure connectivity between AWS Lambda and Amazon Aurora DSQL. This includes configuring IAM roles and policies that allow Lambda to obtain IAM authentication tokens and connect to the DSQL cluster, and setting up environment variables for secure database access.

Code Snippets – Step-by-Step:

  • Practical, actionable code examples are provided at each stage. You’ll see how to write CDK stacks, define Lambda functions, and connect them to DSQL, making it easy to follow along and adapt the approach to your own projects.

Open Source Repository:

  • The article references an open-source repository, allowing you to review the complete project structure, clone the code, and experiment with the setup yourself. This ensures transparency and makes it easy to check or extend the approach.

Step 1:

Here’s how I like to organize the project:

Structure

  • infra/ – Contains all CDK-related infrastructure code.

  • service/ – Your Lambda source code goes here.

  • .github/ - The GitHub Actions workflows go here.

Note:

  • This approach cleanly separates infrastructure from business logic and keeps the pipeline-related workflows isolated as well.

  • I also like to split the infrastructure into stacks by domain. I’ve found this useful as the project keeps growing.
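Concretely, the layout looks something like this (the subfolders under infra/ and the workflow folder are illustrative; Handler.ts matches the Lambda entry point used later in this article):

```
.
├── infra/                  # CDK app: storage, functions, API stacks
│   ├── bin/
│   └── lib/
├── service/                # Lambda handlers and shared database code
│   └── generate-game/
│       └── Handler.ts
├── .github/
│   └── workflows/          # GitHub Actions pipelines
├── cdk.json
└── package.json
```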

Step 2:

Now let's jump into the details of working with AWS CDK and AWS DSQL.

Note: Since AWS DSQL was released only recently, the only constructs available are the automatically generated L1 constructs. I will update this article and the open-source repository as DSQL support matures in the CDK ecosystem.

import * as cdk from 'aws-cdk-lib';
import * as constructs from 'constructs';

export class StorageStack extends cdk.Stack {
    dsqlCluster: cdk.CfnResource;
    dsqlClusterArn: string;
    dsqlClusterEndpoint: string;

    constructor(scope: constructs.Construct, id: string, props: cdk.StackProps) {
        super(scope, id, props);

        // Create DSQL cluster using native CloudFormation resource
        this.dsqlCluster = new cdk.CfnResource(this, 'DSQLCluster', {
            type: 'AWS::DSQL::Cluster',
            properties: {
                DeletionProtectionEnabled: true,
                Tags: [
                    {
                        Key: 'Project',
                        Value: 'aws-dsql-demo'
                    }
                ]
            }
        });

        this.dsqlClusterArn = this.dsqlCluster.getAtt('ResourceArn').toString();
        this.dsqlClusterEndpoint = `${this.dsqlCluster.getAtt('Identifier').toString()}.dsql.${this.region}.on.aws`;
    }
}
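The endpoint string assembled in the stack above follows DSQL's naming scheme: the cluster identifier, then `.dsql.<region>.on.aws`. As a quick illustration, here is that construction pulled out into a hypothetical standalone helper (not part of the CDK stack itself):

```typescript
// Builds a DSQL endpoint hostname from a cluster identifier and region,
// mirroring the template string used in StorageStack above.
function buildDsqlEndpoint(identifier: string, region: string): string {
    return `${identifier}.dsql.${region}.on.aws`;
}

// Example: a cluster whose identifier is "abc123" in eu-west-2 resolves to
// "abc123.dsql.eu-west-2.on.aws".
const endpoint = buildDsqlEndpoint("abc123", "eu-west-2");
console.log(endpoint);
```

In the stack, the identifier comes from `getAtt('Identifier')` on the CfnResource, so the endpoint is resolved at deploy time rather than hard-coded.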

Step 3:

Moving on to the Lambda layer, I need to make sure it has the correct permissions to connect to the DSQL cluster.

import * as cdk from 'aws-cdk-lib';
import * as constructs from 'constructs';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as nodeLambda from 'aws-cdk-lib/aws-lambda-nodejs';

export interface FunctionsStackProps extends cdk.StackProps {
    label: {
        id: string;
    },
    domainName: string;
    dsqlClusterEndpoint: string;
    dsqlClusterArn: string;
}

export class FunctionsStack extends cdk.Stack {
    generateGameLambda: lambda.Function;

    constructor(scope: constructs.Construct, id: string, props: FunctionsStackProps) {
        super(scope, `${id}-functions-stack`, props);

        // Create Lambda role with DSQL permissions
        const lambdaRole = new iam.Role(this, 'DSQLLambdaRole', {
            assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
            managedPolicies: [
                iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AWSLambdaBasicExecutionRole'),
            ]
        });

        // Add DSQL permissions
        lambdaRole.addToPolicy(new iam.PolicyStatement({
            effect: iam.Effect.ALLOW,
            actions: [
                // DSQL access is granted through connect permissions; SQL
                // statements then run over the standard PostgreSQL protocol
                'dsql:DbConnect',
                'dsql:DbConnectAdmin',
            ],
            resources: [props.dsqlClusterArn]
        }));

        this.generateGameLambda = new nodeLambda.NodejsFunction(this, 'GenerateGameLambda', {
            entry: './service/generate-game/Handler.ts',
            runtime: lambda.Runtime.NODEJS_20_X,
            role: lambdaRole,
            environment: {
                DSQL_ENDPOINT: props.dsqlClusterEndpoint
            }
        });
    }
}


I also exposed DSQL_ENDPOINT as an environment variable so that the Lambda automatically picks up the right endpoint whenever new infrastructure is created, without any manual changes to the function.
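For reference, the inline policy that CDK synthesizes for the role above looks roughly like the following. The account ID and cluster ID are placeholders, and the statement's action list will mirror whichever `dsql:` actions you granted in the stack:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dsql:DbConnect", "dsql:DbConnectAdmin"],
      "Resource": "arn:aws:dsql:eu-west-2:111122223333:cluster/EXAMPLECLUSTERID"
    }
  ]
}
```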

CloudFormation

Step 4:

One important part of working with Aurora DSQL in Lambda is managing the database connection efficiently. Since Lambda containers are ephemeral and scale rapidly, you want to avoid repeatedly initializing database connections, which is expensive and slow.

Here, we create a DatabaseService class that:

  • Caches the TypeORM DataSource connection so it’s initialized only once per Lambda container lifecycle.

  • Uses the DsqlSigner from @aws-sdk/dsql-signer to generate secure, temporary authentication tokens to connect to the database without storing passwords.

  • Leverages TypeORM’s DataSource for managing your entities and database interactions with familiar ORM patterns. We will dive deeper in the second article of the series to explain more on the TypeORM integration.

import "reflect-metadata";
import { DataSource } from "typeorm";
import { DsqlSigner } from "@aws-sdk/dsql-signer";
import { Game } from "../models/Game";

export class DatabaseService {
    private static dataSource: DataSource;

    private static async getAuthToken(host: string): Promise<string> {
        const signer = new DsqlSigner({
            hostname: host,
            region: process.env.AWS_REGION || 'eu-west-2'
        });

        return await signer.getDbConnectAdminAuthToken();
    }

    static async initialize(): Promise<DataSource> {
        if (!DatabaseService.dataSource) {
            const host = process.env.DSQL_ENDPOINT || '';

            DatabaseService.dataSource = new DataSource({
                type: "postgres",
                host: host,
                port: 5432,
                username: 'admin',
                // The IAM auth token acts as the password; it is only valid for a
                // limited time, so long-lived containers may need to reconnect
                password: await DatabaseService.getAuthToken(host),
                database: "postgres",
                ssl: {
                    rejectUnauthorized: true
                },
                synchronize: true, // convenient for a demo; prefer migrations in production
                logging: true,
                entities: [Game]
            });

            // Initialize connection to postgres database
            await DatabaseService.dataSource.initialize();
        }

        return DatabaseService.dataSource;
    }

    static async saveGame(game: Partial<Game>): Promise<Game> {
        const dataSource = await DatabaseService.initialize();
        const gameRepository = dataSource.getRepository(Game);

        const newGame = gameRepository.create(game);
        return await gameRepository.save(newGame);
    }
} 
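One caveat with caching the `DataSource` this way: the IAM auth token used as the password is only valid for a limited window (on the order of minutes), so a container that lives longer than the token may need to rebuild its connection. A minimal, framework-free sketch of a time-bounded cache illustrates the idea; the class name and the threshold below are illustrative assumptions, not part of the article's repository:

```typescript
// Illustrative TTL cache: rebuilds the cached value once it is older than
// maxAgeMs. Applied to a DataSource, this would force a fresh auth token
// instead of reusing one that may have expired.
class TtlCache<T> {
    private value: T | undefined;
    private createdAt = 0;

    constructor(
        private readonly factory: () => Promise<T>,
        private readonly maxAgeMs: number,
    ) {}

    async get(): Promise<T> {
        const now = Date.now();
        if (this.value === undefined || now - this.createdAt > this.maxAgeMs) {
            // Stale or missing: rebuild via the factory and restamp the age
            this.value = await this.factory();
            this.createdAt = now;
        }
        return this.value;
    }
}

// Example: cache a "connection" for 10 minutes, comfortably shorter than
// a typical token lifetime, so reuse never outlives the credential.
const cachedConnection = new TtlCache(
    async () => "connection-" + Date.now(),
    10 * 60 * 1000,
);
```

Wiring this into `DatabaseService.initialize` would mean destroying and recreating the `DataSource` once the threshold passes, rather than the simple `if (!dataSource)` check shown above.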

Step 5:

Creating the Handler for the Lambda

import { APIGatewayProxyEvent, APIGatewayProxyResult, Context } from 'aws-lambda';
// Adjust these import paths to match your own project layout:
import { DatabaseService } from '../services/DatabaseService';
import { generateRandomGame } from './GameGenerator';

export async function handler(event: APIGatewayProxyEvent, context: Context): Promise<APIGatewayProxyResult> {
    console.log(event, context);

    try {
        // Initialize (or reuse) the cached database connection; doing this
        // inside the try block means connection failures also return a 500
        await DatabaseService.initialize();

        // Generate a random game
        const gameData = generateRandomGame();

        // Save it to the database
        const savedGame = await DatabaseService.saveGame(gameData);

        return {
            statusCode: 200,
            headers: {
                'Content-Type': 'application/json'
            },
            body: JSON.stringify({
                message: 'Game generated and saved successfully',
                game: savedGame
            })
        };
    } catch (error) {
        console.error('Error generating game:', error);
        return {
            statusCode: 500,
            headers: {
                'Content-Type': 'application/json'
            },
            body: JSON.stringify({
                message: 'Failed to generate and save game',
                error: error instanceof Error ? error.message : String(error)
            })
        };
    }
}

You can see the response at the API level here.
API Response


Manual debugging

You can even connect to the Database directly using the AWS Console.

Go to:

  • Search for Aurora DSQL in the AWS Console.
  • Select your cluster.
  • Click Connect > Open in CloudShell.
  • Connect as admin > click Run script.
  • And you're live: you can run SQL queries directly.
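Once connected, a couple of queries are enough to confirm the Lambda is writing data. Note the table name `game` below is what TypeORM derives from the `Game` entity by default; yours may differ if you customized the entity:

```sql
-- List the tables TypeORM created in the public schema
SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';

-- Inspect the rows the Lambda has saved
SELECT * FROM game LIMIT 10;
```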

SQL Query Response


Achievements in this article:

  • Set up a clean, scalable project structure separating infrastructure, service code, and CI/CD workflows.

  • Provisioned an Amazon Aurora DSQL cluster and Lambda functions using AWS CDK, leveraging L1 constructs for the new DSQL resource.

  • Configured IAM roles and permissions enabling Lambda to securely connect and execute SQL commands on Aurora DSQL.

  • Implemented an efficient Lambda database connection pattern using TypeORM with AWS DsqlSigner to generate secure temporary auth tokens.

  • Created a Lambda handler that generates and saves data to Aurora DSQL, exposing it via an API Gateway endpoint secured by an API key.

In the upcoming articles, you’ll learn how to:

  • Implement business logic with TypeORM entities and migrations.
  • Expose multiple operations on these entities via API Gateway, including an effective caching mechanism to improve performance and reduce latency.
  • Explore the benefits of AWS DSQL and best practices for efficient, performant infrastructure.

Open Source Repository:

Documentation:
