<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: gins p cyriac</title>
    <description>The latest articles on Forem by gins p cyriac (@gins_cyriac).</description>
    <link>https://forem.com/gins_cyriac</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2095862%2Fbcca1048-46b8-47c4-9362-5763110b6351.png</url>
      <title>Forem: gins p cyriac</title>
      <link>https://forem.com/gins_cyriac</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gins_cyriac"/>
    <language>en</language>
    <item>
      <title>Smart Customer Support - A Multi Agent Ticketing System Using OpenAI Agents SDK</title>
      <dc:creator>gins p cyriac</dc:creator>
      <pubDate>Sun, 01 Feb 2026 23:31:00 +0000</pubDate>
      <link>https://forem.com/gins_cyriac/smart-customer-support-a-multi-agent-ticketing-system-using-openai-agents-sdk-19an</link>
      <guid>https://forem.com/gins_cyriac/smart-customer-support-a-multi-agent-ticketing-system-using-openai-agents-sdk-19an</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;In this project, I built an intelligent customer support system for cars using AI agents. It converts customer support emails into structured support tickets with AI-generated draft responses grounded in internal document references, which are then reviewed by human agents.&lt;/p&gt;

&lt;p&gt;Handling hundreds of customer support emails at scale is difficult, especially when many requests are repetitive. Support teams often read through hundreds of emails every day, analyze each request, and send a meaningful response to each customer. This is tedious and time consuming, especially when the agent has to search through various internal documents before responding to each customer. &lt;/p&gt;

&lt;p&gt;My goal with this project is to automate these repetitive parts of the support workflow while still giving humans the final say. The project is a collaboration between AI and human agents. Full code for this project can be found on &lt;a href="https://github.com/Gins47/smart-customer-support/tree/main" rel="noopener noreferrer"&gt;Link&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Project Features
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;📩 Email ingestion (Gmail)&lt;/li&gt;
&lt;li&gt;🤖 AI agent analyzes support emails and generates:

&lt;ul&gt;
&lt;li&gt;Priority&lt;/li&gt;
&lt;li&gt;Summary&lt;/li&gt;
&lt;li&gt;Draft response&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;🧑‍💼 Human-in-the-loop approval before sending responses to users

&lt;ul&gt;
&lt;li&gt;✏️ Editable draft responses&lt;/li&gt;
&lt;li&gt;✅ Approve / ❌ Reject tickets&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;📊 Analytics dashboard

&lt;ul&gt;
&lt;li&gt;Tickets per day&lt;/li&gt;
&lt;li&gt;Tickets by priority&lt;/li&gt;
&lt;li&gt;Tickets by status&lt;/li&gt;
&lt;li&gt;Total tickets &amp;amp; pending review&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  🚀 Technologies
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt; – Data storage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FastAPI&lt;/strong&gt; – Backend API implementation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM (GPT-5 Nano)&lt;/strong&gt; – Language model powering the AI agent&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Next.js&lt;/strong&gt; – Frontend web application&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Compose&lt;/strong&gt; – Service orchestration&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Architecture Overview
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febqmjmfb70doufquq5ae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febqmjmfb70doufquq5ae.png" alt="Architecture Diagram" width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Agent Orchestration Design
&lt;/h2&gt;

&lt;p&gt;Instead of a single agent with one large prompt, this project uses separate specialized agents, each with a clearly defined role and strict boundaries. &lt;/p&gt;

&lt;h3&gt;
  
  
  1. Email Analyzer Agent
&lt;/h3&gt;

&lt;p&gt;The Email Analyzer Agent is responsible for processing incoming emails.&lt;/p&gt;

&lt;p&gt;Its job is to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inspect the email body&lt;/li&gt;
&lt;li&gt;Detect whether it contains a valid customer complaint or request&lt;/li&gt;
&lt;li&gt;Assign a priority level (LOW, MEDIUM, HIGH, CRITICAL)&lt;/li&gt;
&lt;li&gt;Generate a one-sentence summary&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the email does not contain a valid request, the agent explicitly marks priority as INVALID.&lt;/p&gt;
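&lt;p&gt;To make this contract concrete, here is a minimal, hypothetical sketch of the analyzer's output shape using only the Python standard library. The real project enforces this through a Pydantic output model on the agent; the names below (&lt;code&gt;EmailAnalysis&lt;/code&gt;, &lt;code&gt;is_actionable&lt;/code&gt;) are illustrative, not taken from the repository.&lt;/p&gt;

```python
from dataclasses import dataclass

# Hypothetical sketch of the Email Analyzer Agent's output contract.
# The real project enforces this via a Pydantic output model on the agent.
VALID_PRIORITIES = {"LOW", "MEDIUM", "HIGH", "CRITICAL", "INVALID"}

@dataclass
class EmailAnalysis:
    priority: str   # one of VALID_PRIORITIES
    summary: str    # one-sentence summary of the request

    def __post_init__(self):
        if self.priority not in VALID_PRIORITIES:
            raise ValueError(f"unknown priority: {self.priority}")

def is_actionable(analysis: EmailAnalysis) -> bool:
    """A ticket is only created for valid customer requests."""
    return analysis.priority != "INVALID"
```

&lt;p&gt;Keeping the contract this strict means the orchestrator can branch on &lt;code&gt;priority&lt;/code&gt; without re-parsing free-form model output.&lt;/p&gt;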

&lt;h3&gt;
  
  
  2. Car Expert Agent (RAG based Domain Expert)
&lt;/h3&gt;

&lt;p&gt;The Car Expert Agent acts as a car-specific knowledge agent. It does not rely on general LLM knowledge; instead, it answers by strictly referencing car documentation using a Retrieval-Augmented Generation (RAG) approach. If the requested information is not found in the documentation, the agent says so rather than inventing an answer.&lt;/p&gt;
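&lt;p&gt;The retrieve-then-answer pattern can be sketched as follows. This is a deliberately simplified, hypothetical illustration: it uses naive keyword overlap in place of the project's real document retrieval, and the sample documents are invented.&lt;/p&gt;

```python
# Hypothetical sketch of the retrieve-then-answer pattern used by the
# Car Expert Agent. The real project retrieves chunks of car documentation;
# keyword overlap stands in for that retrieval step here.
CAR_DOCS = [
    "Engine oil should be changed every 10,000 km or once a year.",
    "The tire pressure warning light indicates pressure below the recommended level.",
]

def retrieve(question: str) -> list[str]:
    """Return documents sharing at least two words with the question."""
    words = set(question.lower().split())
    scored = [(len(words.intersection(doc.lower().split())), doc) for doc in CAR_DOCS]
    return [doc for score, doc in scored if score >= 2]

def answer(question: str) -> str:
    context = retrieve(question)
    if not context:
        # Mirrors the agent's behavior: no fabricated answers.
        return "The requested information is not covered by the car documentation."
    return "Based on the documentation: " + " ".join(context)
```

&lt;p&gt;The important property is the fallback branch: when retrieval comes back empty, the agent reports that the information is unavailable instead of guessing.&lt;/p&gt;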

&lt;h3&gt;
  
  
  3. Orchestrator Agent (Coordinator &amp;amp; Decision Maker)
&lt;/h3&gt;

&lt;p&gt;The Orchestrator Agent is the brain of the system. It coordinates the entire workflow by invoking other agents as tools and enforcing business rules.&lt;/p&gt;

&lt;p&gt;Its responsibilities include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Calling the Email Analyzer Agent&lt;/li&gt;
&lt;li&gt;Deciding whether a ticket should be created&lt;/li&gt;
&lt;li&gt;Generating a human-readable ticket number&lt;/li&gt;
&lt;li&gt;Detecting whether the request is car-related&lt;/li&gt;
&lt;li&gt;Conditionally invoking the Car Expert Agent&lt;/li&gt;
&lt;li&gt;Producing a draft email response for human review
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class OrchestratorAgentOutput(BaseModel):
    priority: str
    summary: str
    draft_response: str
    subject: str
    ticket_number: str

orchestrator_agent = Agent(
    name="orchestrator agent",
    instructions="""
    You are a customer care car support orchestrator.
    Process:
    1. Use email_analyzer to classify priority and summarize.
    2. If priority == INVALID:
       - Create a polite rejection response
    3. If priority != INVALID:
       - Determine if the request is about car operation, maintenance, or faults
       - ONLY IF it is car-related, call car_expert
    4. Use car_expert output strictly; do not add assumptions
    5. Produce a draft email response and subject

    Output MUST match OrchestratorAgentOutput.
    """,
    model="gpt-5-nano",
    tools=[
        create_ticket_number,
        email_analyzer_agent.as_tool(
            tool_name="email_analyzer",
            tool_description="Perform analysis of the email and provide a brief summary and priority of the email",
        ),
        car_expert_agent.as_tool(
            tool_name="car_expert",
            tool_description="Provides correct answers regarding the car by referring to the car documentation",
        ),
    ],
    output_type=OrchestratorAgentOutput,
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the orchestrator agent uses &lt;strong&gt;email_analyzer_agent&lt;/strong&gt; and &lt;strong&gt;car_expert_agent&lt;/strong&gt; as tools. This approach keeps the prompts small and focused, prevents role overlap, and makes it easy to add new tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Human-in-the-Loop By Design
&lt;/h2&gt;

&lt;p&gt;The orchestrator agent produces a draft response, not a final answer.&lt;/p&gt;

&lt;p&gt;Every generated response is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stored as a draft.&lt;/li&gt;
&lt;li&gt;Reviewed by a human agent.&lt;/li&gt;
&lt;li&gt;Edited, then approved or rejected, by a human agent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This builds trust in AI-assisted workflows while keeping a human accountable for every response that is sent. &lt;/p&gt;
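&lt;p&gt;The review lifecycle above can be sketched as a small state machine. This is a hypothetical illustration of the allowed transitions, not code from the project; in the real system the draft lives in PostgreSQL and the actions are exposed through the FastAPI backend.&lt;/p&gt;

```python
# Hypothetical sketch of the ticket review lifecycle.
ALLOWED = {
    "DRAFT": {"APPROVED", "REJECTED", "DRAFT"},  # editing keeps it a draft
    "APPROVED": set(),   # terminal: the response is sent
    "REJECTED": set(),   # terminal: no response is sent
}

def transition(status: str, action: str) -> str:
    """Apply a human review action ('approve', 'reject', 'edit') to a ticket."""
    target = {"approve": "APPROVED", "reject": "REJECTED", "edit": "DRAFT"}[action]
    if target not in ALLOWED[status]:
        raise ValueError(f"cannot {action} a ticket in status {status}")
    return target
```

&lt;p&gt;Because the terminal states allow no further transitions, the AI can never resend or overwrite a response once a human has made the call.&lt;/p&gt;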

&lt;h1&gt;
  
  
  🖼 Screenshots
&lt;/h1&gt;

&lt;p&gt;Below are a few screenshots highlighting key parts of the system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dashboard&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2s4fd1sazj22uulcd9cr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2s4fd1sazj22uulcd9cr.png" alt="Dashboard" width="800" height="424"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Ticket Lists&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nmokk4pe9xp8ilstpej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nmokk4pe9xp8ilstpej.png" alt="TicketList" width="800" height="423"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Edit Response&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkizi31fl99kcmi9wfe8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkizi31fl99kcmi9wfe8.png" alt="Editresponse" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  🚀 Next Steps
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Pluggable LLM provider support&lt;/li&gt;
&lt;li&gt;Enhanced RAG pipeline&lt;/li&gt;
&lt;li&gt;Source citations in AI responses&lt;/li&gt;
&lt;li&gt;Support for more agents (e.g., billing, warranty, returns)&lt;/li&gt;
&lt;li&gt;UI improvements&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>openai</category>
      <category>ai</category>
      <category>agents</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Connecting AI Agents to Factory Data with MCP, Node.js &amp; TypeScript</title>
      <dc:creator>gins p cyriac</dc:creator>
      <pubDate>Mon, 22 Sep 2025 18:56:19 +0000</pubDate>
      <link>https://forem.com/gins_cyriac/connecting-ai-agents-to-factory-data-with-mcp-nodejs-typescript-44e</link>
      <guid>https://forem.com/gins_cyriac/connecting-ai-agents-to-factory-data-with-mcp-nodejs-typescript-44e</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;AI is rapidly evolving and disrupting many industries. Initially, LLM agents could only provide insights based on their training data, which was very limited; this prevented AI agents from obtaining real-time insights and from being used efficiently in an enterprise context.&lt;/p&gt;

&lt;p&gt;The introduction of &lt;strong&gt;MCP (Model Context Protocol)&lt;/strong&gt; became a game changer in this field: it enables AI agents to retrieve real-time information and access enterprise data, unlocking a whole new scope for AI agents and powerful new use cases. &lt;/p&gt;

&lt;p&gt;In this post, I share what I have learned about MCP servers and walk through a sample project. Factory machine data stored in a PostgreSQL database is exposed through an MCP server, and the AI agent invokes the tools the server exposes to obtain machine data based on user input and return a structured response.&lt;/p&gt;

&lt;p&gt;Full code for this project can be found on &lt;a href="https://github.com/Gins47/smart-factory-mcp/tree/main#" rel="noopener noreferrer"&gt;Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we get into the details of the implementation, let us first understand what MCP is, why it is beneficial, and how it unlocks new abilities for AI agents.&lt;/p&gt;

&lt;h3&gt;
  
  
  MCP Server
&lt;/h3&gt;

&lt;p&gt;MCP (Model Context Protocol) was introduced by Anthropic in November 2024. It provides a standardized way for AI applications to connect with external systems and data. With MCP, LLMs are no longer limited to static training data; they can securely access real-time information and enterprise systems.&lt;/p&gt;

&lt;p&gt;An MCP server exposes three primitives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tools:&lt;/strong&gt; Executable functions that AI applications can invoke (e.g., API calls, database queries)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resources:&lt;/strong&gt; Data sources that provide context (e.g., file contents, database records, API responses)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompts:&lt;/strong&gt; Structured templates that help guide LLMs (e.g., system prompts, few-shot examples)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Benefits of using MCP Server in Enterprise context
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Secure Access:&lt;/strong&gt; Data is exposed only to authorized programs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Restricted Access:&lt;/strong&gt; Fine-grained control over the data used by AI agents. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standard Integration:&lt;/strong&gt; Instead of building custom connectors for every AI agent, MCP provides a common protocol for connecting to databases, APIs, and internal tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auditability &amp;amp; Compliance:&lt;/strong&gt; Every tool call and database request made by an AI agent can be logged, which helps meet compliance requirements and track AI behavior. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Project Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8jyrtxetqa7g1dc74n27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8jyrtxetqa7g1dc74n27.png" alt="Project Architecture" width="800" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this project, I have set up an MCP server that exposes factory data stored in PostgreSQL as a tool. The chat-client application communicates with the MCP server using Streamable HTTP calls. I seeded the PostgreSQL database with a sample Industrial Energy Forecast Dataset obtained from Kaggle, which can be downloaded from &lt;a href="https://www.kaggle.com/datasets/zara2099/industrial-energy-forecast-dataset?resource=download" rel="noopener noreferrer"&gt;Link&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MCP Server:&lt;/strong&gt; Provides the tools the AI agent calls to get the machine data stored in the database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL:&lt;/strong&gt; Stores the seeded dataset from Kaggle.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ollama:&lt;/strong&gt; Serves the local LLM (llama3.1).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chat Client:&lt;/strong&gt; An Express application that interacts with the MCP server and the LLM to answer user requests.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Start services
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   docker compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Pull the llama3.1 model for Ollama
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   docker exec -it ollama ollama pull llama3.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Core Application logic
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;MCP Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this project, the MCP server is an Express application that exposes tools the AI agent can use to fetch the requested data on demand. &lt;/p&gt;

&lt;p&gt;I have created a function &lt;code&gt;registerTools&lt;/code&gt; that registers all the tools provided by the MCP server in &lt;code&gt;tools.ts&lt;/code&gt; (&lt;code&gt;mcp-server/src/modules/tools.ts&lt;/code&gt;). Currently, there is a single tool, &lt;code&gt;get-machine-record&lt;/code&gt;, which fetches the latest record for a given &lt;code&gt;machineId&lt;/code&gt; from the PostgreSQL database.&lt;/p&gt;

&lt;p&gt;Each tool is defined with the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Name:&lt;/strong&gt; identifier for the tool (e.g., "get-machine-record")&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Input Schema:&lt;/strong&gt; A schema describing the required arguments (e.g., { "machineId": "string" })&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Callback:&lt;/strong&gt; the function that contains the logic (e.g., SQL query).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  server.tool(
    "get-machine-record",
    "Gets latest record for a machine",
    {
      machineId: z.string(),
    },
    async ({ machineId }) =&amp;gt; {
      // Fetch data from PostgreSQL
      console.log(`Fetching record for machine ${machineId}`);
      const machineRecord = await getLatestMachineRecord(machineId);
      if (machineRecord) {
        return {
          content: [{ type: "text", text: JSON.stringify(machineRecord) }],
        };
      } else {
        return {
          content: [{ type: "text", text: "[]" }],
        };
      }
    }
  );
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Chat Client
&lt;/h2&gt;

&lt;p&gt;The chat client is also an Express application, which exposes two APIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Direct MCP query&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http://localhost:4001/machines/${machineID}&lt;/code&gt; → Fetches the raw machine data for the given &lt;code&gt;machineID&lt;/code&gt; from the MCP server by directly calling the &lt;code&gt;get-machine-record&lt;/code&gt; tool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcp3e6curpixdlaoojeu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcp3e6curpixdlaoojeu.png" alt="Get request" width="800" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. AI-assisted query&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http://localhost:4001/chat&lt;/code&gt; → This API provides AI generated response for the requested details of a machine. The core logic is implemented in &lt;code&gt;chat.service.ts&lt;/code&gt; (&lt;code&gt;chat-client/src/services/chat.services.ts&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user message is converted into a prompt and sent to Ollama (local LLM).&lt;/li&gt;
&lt;li&gt;If the LLM decides it needs external data, it calls the tool provided by the MCP server.&lt;/li&gt;
&lt;li&gt;The tool fetches the machine data from PostgreSQL and returns it to the chat client.&lt;/li&gt;
&lt;li&gt;The result is then fed back into the LLM.&lt;/li&gt;
&lt;li&gt;The LLM generates a refined, human-readable answer for the user.&lt;/li&gt;
&lt;/ol&gt;
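&lt;p&gt;The five steps above can be sketched as a small tool-calling loop. This is a hypothetical, self-contained illustration: the stub functions (&lt;code&gt;askLlm&lt;/code&gt;, &lt;code&gt;callMcpTool&lt;/code&gt;, &lt;code&gt;refine&lt;/code&gt;) stand in for the real HTTP calls to Ollama and the MCP server, and the machine data is invented.&lt;/p&gt;

```typescript
// Hypothetical sketch of the chat workflow. The real project calls Ollama
// and the MCP server over HTTP; stub functions stand in for both here.
type ToolCall = { name: string; args: { machineId: string } };
type LlmReply = { toolCall?: ToolCall; text?: string };

// Stand-in for the first Ollama call: the model asks for external data.
function askLlm(prompt: string): LlmReply {
  return { toolCall: { name: "get-machine-record", args: { machineId: "M-001" } } };
}

// Stand-in for invoking the MCP tool that queries PostgreSQL.
function callMcpTool(call: ToolCall): string {
  return JSON.stringify({ machineId: call.args.machineId, powerKw: 42.5 });
}

// Stand-in for the second Ollama call: turn raw data into a readable answer.
function refine(prompt: string, toolResult: string): string {
  return "Machine M-001 is currently drawing 42.5 kW.";
}

function chat(userMessage: string): string {
  const first = askLlm(userMessage);            // step 1
  if (first.toolCall) {
    const data = callMcpTool(first.toolCall);   // steps 2-3
    return refine(userMessage, data);           // steps 4-5
  }
  return first.text ?? "";
}
```

&lt;p&gt;The point of the sketch is the control flow: the LLM decides whether a tool call is needed, and the tool result is always routed back through the model before anything reaches the user.&lt;/p&gt;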

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz84ys7lmlmgddxzul84w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz84ys7lmlmgddxzul84w.png" alt="AI response" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps 🚀
&lt;/h2&gt;

&lt;p&gt;For the next iteration of this project, I am planning to implement the following features:&lt;br&gt;
&lt;strong&gt;1. Integrate live data feeds:&lt;/strong&gt; Ingest real-time factory telemetry into PostgreSQL, so the AI agent works with fresh data.&lt;br&gt;
&lt;strong&gt;2. Add authentication to the MCP Server:&lt;/strong&gt; Restrict access to authorized applications and users only.&lt;br&gt;
&lt;strong&gt;3. Expand the toolset:&lt;/strong&gt; Create more tools.&lt;br&gt;
&lt;strong&gt;4. Support multiple MCP Servers:&lt;/strong&gt; Allow the client to connect to and aggregate responses from different MCP servers.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>typescript</category>
      <category>mcp</category>
      <category>node</category>
    </item>
    <item>
      <title>AWS CDK + Localstack (API Gateway, Lambda, SQS, DynamoDB,TypeScript)</title>
      <dc:creator>gins p cyriac</dc:creator>
      <pubDate>Sun, 03 Nov 2024 18:05:49 +0000</pubDate>
      <link>https://forem.com/gins_cyriac/aws-cdk-localstack-api-gateway-lambda-sqs-dynamodbtypescript-man</link>
      <guid>https://forem.com/gins_cyriac/aws-cdk-localstack-api-gateway-lambda-sqs-dynamodbtypescript-man</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This project is a Fleet Emission Data Management System designed to track and manage emissions data for various vehicles within a fleet. Built using AWS infrastructure, the project leverages &lt;code&gt;AWS Lambda&lt;/code&gt; for serverless compute, &lt;code&gt;DynamoDB&lt;/code&gt; for highly scalable storage, and &lt;code&gt;Amazon SQS&lt;/code&gt; for asynchronous message queuing. The application is developed with modular code and follows a well-defined structure to enhance reusability, maintainability, and scalability. By using &lt;code&gt;TypeScript&lt;/code&gt;, we ensure strong type-checking, which helps maintain code quality and minimize runtime errors.&lt;/p&gt;

&lt;p&gt;Full code for this project can be found on &lt;a href="https://github.com/Gins47/fleet-emission-calculator" rel="noopener noreferrer"&gt;https://github.com/Gins47/fleet-emission-calculator&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For local AWS development and testing we will be making use of &lt;strong&gt;LocalStack&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Make sure that you have the following installed in your local development setup.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;NodeJS&lt;/li&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;AWS CLI&lt;/li&gt;
&lt;li&gt;Localstack&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For this project we will be using AWS CDK for the infrastructure setup and deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install AWS CDK
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; aws-cdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2kqdbjx4vr0v1n4i9rh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2kqdbjx4vr0v1n4i9rh.png" alt="High level architecture diagram" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  DynamoDB Table Design
&lt;/h2&gt;

&lt;p&gt;The DynamoDB table uses a standardized approach to defining the &lt;strong&gt;partition key (pk)&lt;/strong&gt; and &lt;strong&gt;sort key (sk)&lt;/strong&gt; to make it adaptable for multiple data entities. This structure allows us to store and retrieve different types of data (e.g., vehicle data, company data) while keeping the data organized and easily queryable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Partition Key (pk):&lt;/strong&gt; Uniquely identifies each entity type in the table.&lt;br&gt;
&lt;strong&gt;Sort Key (sk):&lt;/strong&gt; Provides the secondary key to allow range queries and ordering within each partition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Primary Key Format&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Partition Key (pk):&lt;/code&gt; Composed of the entity type and unique identifier, following the format &lt;code&gt;entityType#identifier&lt;/code&gt;.&lt;br&gt;
&lt;em&gt;Example:&lt;/em&gt; For a vehicle emission entry, the pk could be &lt;code&gt;vehicle#M-MJ-456&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Sort Key (sk):&lt;/code&gt; Used for time-series data or sub-entities within the same partition, following the format &lt;code&gt;attribute#value&lt;/code&gt;.&lt;br&gt;
&lt;em&gt;Example:&lt;/em&gt; For vehicle data, the sk could be &lt;code&gt;creationTime#1730641488&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This structure allows the table to support multiple entities (such as vehicles and companies) by defining each type with a specific prefix, while also allowing range queries on creationTime.&lt;/p&gt;
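&lt;p&gt;As an illustration of this key scheme, the helpers below build the composite keys and a range query over one vehicle's entries. This is a hedged sketch; the helper names are hypothetical and not taken from the repository.&lt;/p&gt;

```typescript
// Hypothetical helpers illustrating the pk/sk scheme described above.
function makePk(entityType: string, identifier: string): string {
  return entityType + "#" + identifier;   // e.g. "vehicle#M-MJ-456"
}

function makeSk(attribute: string, value: string): string {
  return attribute + "#" + value;         // e.g. "creationTime#1730641488"
}

// A range query over one vehicle's emission entries then becomes a
// KeyConditionExpression on pk plus a begins_with condition on sk:
const queryInput = {
  TableName: "FleetEmissionData",
  KeyConditionExpression: "pk = :pk AND begins_with(sk, :skPrefix)",
  ExpressionAttributeValues: {
    ":pk": { S: makePk("vehicle", "M-MJ-456") },
    ":skPrefix": { S: "creationTime#" },
  },
};
```

&lt;p&gt;Because every sk for a vehicle starts with the &lt;code&gt;creationTime#&lt;/code&gt; prefix, DynamoDB can answer time-ordered queries within a single partition.&lt;/p&gt;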
&lt;h2&gt;
  
  
  Project Setup
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Project Structure
&lt;/h3&gt;

&lt;p&gt;I followed clean architecture principles, with separate folders for handlers, controllers, models, repositories, services, and error handling. Below is a breakdown of each main folder and its purpose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;lib/resources/lambdas/src:&lt;/code&gt; This is the main source folder for the Lambda functions and supporting files. It includes subdirectories for controllers, handlers, models, repositories, services, and error handling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Folders&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;controllers&lt;/code&gt;: Contains controllers responsible for handling incoming requests and directing them to the appropriate services.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;services&lt;/code&gt;: Includes general application services that provide reusable business logic across various controllers and handlers.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;repository&lt;/code&gt;: This folder contains classes and methods for interacting with DynamoDB.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;models&lt;/code&gt;: Defines TypeScript models or interfaces used throughout the application.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;errors&lt;/code&gt;: This folder manages custom error classes for the application.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;handlers&lt;/code&gt;: Contains the AWS Lambda handler functions, which serve as entry points for each Lambda function.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;consumeEmissionData.ts&lt;/code&gt;: Consumes emission data from an SQS queue.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;createVehicleLambda.ts&lt;/code&gt;: Handles vehicle data creation.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;getVehicleDataLambda.ts&lt;/code&gt;: Retrieves vehicle data by &lt;code&gt;vehicleNumber&lt;/code&gt; and &lt;code&gt;creationTime&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;publishEmissionData.ts&lt;/code&gt;:  Publishes emission data to an SQS queue.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  AWS CDK Stack
&lt;/h3&gt;

&lt;p&gt;AWS CDK makes it easy to create the required resources. Below you can find how the various AWS resources used in this project are created with AWS CDK.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Gateway
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; const api &lt;span class="o"&gt;=&lt;/span&gt; new RestApi&lt;span class="o"&gt;(&lt;/span&gt;this, &lt;span class="s2"&gt;"ApiGateway"&lt;/span&gt;, &lt;span class="o"&gt;{&lt;/span&gt;
      restApiName: &lt;span class="s2"&gt;"Fleet Emission API"&lt;/span&gt;,
      deployOptions: &lt;span class="o"&gt;{&lt;/span&gt;
        stageName: &lt;span class="s2"&gt;"dev"&lt;/span&gt;,
      &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="o"&gt;})&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;SQS
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   const queue &lt;span class="o"&gt;=&lt;/span&gt; new sqs.Queue&lt;span class="o"&gt;(&lt;/span&gt;this, &lt;span class="s2"&gt;"FleetEmissionQueue"&lt;/span&gt;, &lt;span class="o"&gt;{&lt;/span&gt;
      visibilityTimeout: cdk.Duration.seconds&lt;span class="o"&gt;(&lt;/span&gt;300&lt;span class="o"&gt;)&lt;/span&gt;,
    &lt;span class="o"&gt;})&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;DynamoDB
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; const table &lt;span class="o"&gt;=&lt;/span&gt; new dynamodb.Table&lt;span class="o"&gt;(&lt;/span&gt;this, &lt;span class="s2"&gt;"FleetEmissionDataTable"&lt;/span&gt;, &lt;span class="o"&gt;{&lt;/span&gt;
      partitionKey: &lt;span class="o"&gt;{&lt;/span&gt; name: &lt;span class="s2"&gt;"pk"&lt;/span&gt;, &lt;span class="nb"&gt;type&lt;/span&gt;: dynamodb.AttributeType.STRING &lt;span class="o"&gt;}&lt;/span&gt;,
      sortKey: &lt;span class="o"&gt;{&lt;/span&gt; name: &lt;span class="s2"&gt;"sk"&lt;/span&gt;, &lt;span class="nb"&gt;type&lt;/span&gt;: dynamodb.AttributeType.STRING &lt;span class="o"&gt;}&lt;/span&gt;,
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
      tableName: &lt;span class="s2"&gt;"FleetEmissionData"&lt;/span&gt;,
    &lt;span class="o"&gt;})&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In this setup, I am adding a &lt;strong&gt;Global Secondary Index (GSI)&lt;/strong&gt; called &lt;code&gt;VehicleCompanyTypeIndex&lt;/code&gt; with the following structure:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Partition Key:&lt;/strong&gt; vehicleCompany (string) - Enables queries to be grouped by the company.&lt;br&gt;
&lt;strong&gt;Sort Key:&lt;/strong&gt; vehicleType (string) - Allows sorting and filtering by vehicle type within each company.&lt;br&gt;
&lt;strong&gt;Projection Type:&lt;/strong&gt; ALL - Includes all attributes from the original table in the index, making it possible to access the entire item data without referring back to the main table.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; table.addGlobalSecondaryIndex&lt;span class="o"&gt;({&lt;/span&gt;
      indexName: &lt;span class="s2"&gt;"VehicleCompanyTypeIndex"&lt;/span&gt;,
      partitionKey: &lt;span class="o"&gt;{&lt;/span&gt;
        name: &lt;span class="s2"&gt;"vehicleCompany"&lt;/span&gt;,
        &lt;span class="nb"&gt;type&lt;/span&gt;: dynamodb.AttributeType.STRING,
      &lt;span class="o"&gt;}&lt;/span&gt;,
      sortKey: &lt;span class="o"&gt;{&lt;/span&gt; name: &lt;span class="s2"&gt;"vehicleType"&lt;/span&gt;, &lt;span class="nb"&gt;type&lt;/span&gt;: dynamodb.AttributeType.STRING &lt;span class="o"&gt;}&lt;/span&gt;,
      projectionType: dynamodb.ProjectionType.ALL,
    &lt;span class="o"&gt;})&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
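With the GSI in place, a handler can fetch all vehicles for a given company and type without scanning the whole table. The article does not show a query handler, so the helper below is a hypothetical sketch: it only builds the `QueryCommand` input that would be sent via the DynamoDB Document Client.

```typescript
// Hypothetical helper: builds the QueryCommand input for the
// VehicleCompanyTypeIndex GSI. Table and index names match the CDK setup;
// the function itself is illustrative, not part of the original project.
function buildCompanyTypeQuery(vehicleCompany: string, vehicleType: string) {
  return {
    TableName: "FleetEmissionData",
    IndexName: "VehicleCompanyTypeIndex",
    KeyConditionExpression: "vehicleCompany = :c AND vehicleType = :t",
    ExpressionAttributeValues: { ":c": vehicleCompany, ":t": vehicleType },
  };
}

// Example: parameters for querying all BMW cars
const params = buildCompanyTypeQuery("BMW", "CAR");
console.log(params.IndexName);
```

In a Lambda handler this object would be passed to `DynamoDBDocumentClient.send(new QueryCommand(params))` from `@aws-sdk/lib-dynamodb`.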



&lt;ul&gt;
&lt;li&gt;Create Vehicle Data Lambda function
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    const createVehicleDataFunction &lt;span class="o"&gt;=&lt;/span&gt; new lambda.Function&lt;span class="o"&gt;(&lt;/span&gt;
      this,
      &lt;span class="sb"&gt;`&lt;/span&gt;createVehicleDataFunction&lt;span class="sb"&gt;`&lt;/span&gt;,
      &lt;span class="o"&gt;{&lt;/span&gt;
        runtime: lambda.Runtime.NODEJS_20_X,
        memorySize: 128,
        &lt;span class="nb"&gt;timeout&lt;/span&gt;: cdk.Duration.seconds&lt;span class="o"&gt;(&lt;/span&gt;100&lt;span class="o"&gt;)&lt;/span&gt;,
        architecture: lambda.Architecture.X86_64,
        handler: &lt;span class="s2"&gt;"createVehicleLambda.handler"&lt;/span&gt;,
        code: lambda.Code.fromAsset&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"dist/handlers"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;,
        environment: &lt;span class="o"&gt;{&lt;/span&gt;
          DYNAMODB_TABLE: table.tableName,
        &lt;span class="o"&gt;}&lt;/span&gt;,
      &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need to grant the Lambda function permission to read and write data in DynamoDB. This can be achieved with the statement below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    table.grantReadWriteData&lt;span class="o"&gt;(&lt;/span&gt;createVehicleDataFunction&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
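The handler source is not included in the article, but `createVehicleLambda.handler` could look roughly like the sketch below. The `pk`/`sk` value format is an assumption inferred from the GET route (`/vehicle/{VehicleNumber}/time/{creationTime}`), and the example uses `@aws-sdk/lib-dynamodb`.

```typescript
// Hypothetical sketch of createVehicleLambda.handler.
// The pk/sk format below is a guess based on the GET endpoint shape;
// adjust it to the real key schema of the project.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (event: any) => {
  const body = JSON.parse(event.body ?? "{}");
  const item = {
    pk: `VEHICLE#${body.vehicleNumber}`,
    sk: `TIME#${body.creationTime}`,
    ...body,
  };
  // DYNAMODB_TABLE is injected via the Lambda environment in the CDK stack
  await ddb.send(
    new PutCommand({ TableName: process.env.DYNAMODB_TABLE, Item: item })
  );
  return { statusCode: 201, body: JSON.stringify(item) };
};
```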



&lt;p&gt;To integrate API Gateway with our Lambda function, we create a new resource in API Gateway and attach the Lambda function as the integration for the POST method, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    const vehicle &lt;span class="o"&gt;=&lt;/span&gt; api.root.addResource&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"vehicle"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    vehicle.addMethod&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"POST"&lt;/span&gt;, new LambdaIntegration&lt;span class="o"&gt;(&lt;/span&gt;createVehicleDataFunction&lt;span class="o"&gt;))&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Publish data to SQS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To publish data to SQS, I used another Lambda function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   const publishVehicleDataFunction &lt;span class="o"&gt;=&lt;/span&gt; new lambda.Function&lt;span class="o"&gt;(&lt;/span&gt;
      this,
      &lt;span class="sb"&gt;`&lt;/span&gt;publishVehicleDataFunction&lt;span class="sb"&gt;`&lt;/span&gt;,
      &lt;span class="o"&gt;{&lt;/span&gt;
        runtime: lambda.Runtime.NODEJS_20_X,
        memorySize: 128,
        &lt;span class="nb"&gt;timeout&lt;/span&gt;: cdk.Duration.seconds&lt;span class="o"&gt;(&lt;/span&gt;100&lt;span class="o"&gt;)&lt;/span&gt;,
        architecture: lambda.Architecture.X86_64,
        handler: &lt;span class="s2"&gt;"publishEmissionData.handler"&lt;/span&gt;,
        code: lambda.Code.fromAsset&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"dist/handlers"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;,
        environment: &lt;span class="o"&gt;{&lt;/span&gt;
          QUEUE_URL: queue.queueUrl,
        &lt;span class="o"&gt;}&lt;/span&gt;,
      &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to grant this Lambda function permission to send messages to the SQS queue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; queue.grantSendMessages&lt;span class="o"&gt;(&lt;/span&gt;publishVehicleDataFunction&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
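The publishing handler itself is not shown in the article; a minimal sketch of `publishEmissionData.handler`, assuming `@aws-sdk/client-sqs`, might look like this:

```typescript
// Hypothetical sketch of publishEmissionData.handler: forwards the
// incoming request body to the SQS queue whose URL the CDK stack
// injects via the QUEUE_URL environment variable.
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});

export const handler = async (event: any) => {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: process.env.QUEUE_URL,
      MessageBody: event.body ?? "{}",
    })
  );
  return { statusCode: 202, body: JSON.stringify({ status: "queued" }) };
};
```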



&lt;p&gt;Now let's link this Lambda function to the API Gateway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    const publishResource &lt;span class="o"&gt;=&lt;/span&gt; vehicle.addResource&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"publish"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    publishResource.addMethod&lt;span class="o"&gt;(&lt;/span&gt;
      &lt;span class="s2"&gt;"POST"&lt;/span&gt;,
      new LambdaIntegration&lt;span class="o"&gt;(&lt;/span&gt;publishVehicleDataFunction&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deployment to LocalStack
&lt;/h3&gt;

&lt;p&gt;Let's start LocalStack using the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;localstack start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once LocalStack is ready, let's bootstrap the CDK environment using the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm run cdk:local bootstrap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After bootstrapping succeeds, let's build the project for deployment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's deploy the project to LocalStack using the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm run cdk:local deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  API Endpoints
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Endpoint&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/vehicle/{VehicleNumber}/time/{creationTime}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Retrieve vehicle data by vehicle number and creation time&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/vehicle&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Create new vehicle Emission Data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/vehicle/publish&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Publish emission data to SQS&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
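The CDK wiring for the GET endpoint is not shown above, but it would follow the same pattern as the POST routes. The snippet below is a sketch that assumes a `getVehicleDataFunction` Lambda (not shown in the article) defined like the other functions and granted read access on the table:

```typescript
// Hypothetical CDK wiring for GET /vehicle/{VehicleNumber}/time/{creationTime}.
// getVehicleDataFunction is assumed to exist and to have been granted
// access with table.grantReadData(getVehicleDataFunction).
const byNumber = vehicle.addResource("{VehicleNumber}");
const byTime = byNumber.addResource("time").addResource("{creationTime}");
byTime.addMethod("GET", new LambdaIntegration(getVehicleDataFunction));
```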

&lt;h3&gt;
  
  
  API Test
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create vehicle data
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s1"&gt;'{API Gateway Endpoint}/dev/vehicle'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data&lt;/span&gt; &lt;span class="s1"&gt;'{
    "creationTime": 1730641478,
    "creationTimeISO": "2024-11-03T13:44:38+00:00",
    "userId": "user12",
    "vehicleNumber": "M-GC-123",
    "vehicleType": "CAR",
    "vehicleCompany": "BMW",
    "fuelType": "Petrol",
    "liters": 5.7,
    "cost": 324,
    "co2EmissionFactor": 34
}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Get vehicle data by vehicle number and creation timestamp
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s1"&gt;'{API Gateway Endpoint}/dev/vehicle/M-GC-123/time/1730641478'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Publish vehicle data to SQS queue
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s1"&gt;'{API Gateway Endpoint}/dev/vehicle/publish'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data&lt;/span&gt; &lt;span class="s1"&gt;'{
    "creationTime": 1730641488,
    "creationTimeISO": "2024-11-03T14:44:38+00:00",
    "userId": "user12",
    "vehicleNumber": "M-MJ-456",
    "vehicleType": "CAR",
    "vehicleCompany": "Benz",
    "fuelType": "Petrol",
    "liters": 9.7,
    "cost": 1234,
    "co2EmissionFactor": 50
}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  AWS CLI Commands
&lt;/h2&gt;

&lt;p&gt;Below are some useful AWS CLI commands for inspecting the deployed resources in LocalStack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# CloudFormation&lt;/span&gt;

aws cloudformation list-stacks &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566  &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 

aws cloudformation delete-stack &lt;span class="nt"&gt;--stack-name&lt;/span&gt; FleetEmissionStack &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566

aws cloudformation list-stacks &lt;span class="nt"&gt;--stack-status-filter&lt;/span&gt; DELETE_COMPLETE &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566

aws cloudformation describe-stacks &lt;span class="nt"&gt;--stack-name&lt;/span&gt; FleetEmissionStack &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566  &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1

aws cloudformation describe-stack-resources &lt;span class="nt"&gt;--stack-name&lt;/span&gt; FleetEmissionStack &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566 &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1


&lt;span class="c"&gt;# API Gateway&lt;/span&gt;

aws apigateway get-rest-apis &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566 &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1

aws apigateway delete-stage &lt;span class="nt"&gt;--rest-api-id&lt;/span&gt; szt80q7wda &lt;span class="nt"&gt;--stage-name&lt;/span&gt; DevStage &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566 &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1

aws apigateway delete-rest-api  &lt;span class="nt"&gt;--rest-api-id&lt;/span&gt; szt80q7wda  &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566 &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1

aws apigateway get-api-keys &lt;span class="nt"&gt;--include-values&lt;/span&gt; &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566

aws apigateway get-resources &lt;span class="nt"&gt;--rest-api-id&lt;/span&gt; egpnkrrsgi &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566

&lt;span class="c"&gt;# Lambda&lt;/span&gt;

aws lambda list-functions &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566 &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1

aws lambda list-event-source-mappings &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566 &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1

&lt;span class="c"&gt;# dynamodb&lt;/span&gt;

aws dynamodb list-tables &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566 &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1

aws dynamodb scan &lt;span class="nt"&gt;--table-name&lt;/span&gt; FleetEmissionData &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566 &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1

aws dynamodb describe-table &lt;span class="nt"&gt;--table-name&lt;/span&gt; FleetEmissionData &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566 &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1


&lt;span class="c"&gt;# SQS&lt;/span&gt;

aws sqs list-queues &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566 &lt;span class="nt"&gt;--region&lt;/span&gt; eu-central-1

aws sqs receive-message &lt;span class="nt"&gt;--queue-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://sqs.eu-central-1.localhost.localstack.cloud:4566/000000000000/FleetEmissionStack-FleetEmissionQueue2C4D14-9b6bdecd &lt;span class="nt"&gt;--endpoint-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:4566  &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project showcases a highly scalable and maintainable serverless architecture for managing vehicle emissions data. By using AWS services like &lt;code&gt;Lambda&lt;/code&gt;, &lt;code&gt;DynamoDB&lt;/code&gt;, and &lt;code&gt;SQS&lt;/code&gt;, the system can handle high volumes of data with low operational overhead. The modular design and organized project structure facilitate easy integration of new features and enhancements. With a reliable and efficient mechanism for capturing, storing, and retrieving data, this project sets a strong foundation for building a comprehensive fleet management solution.&lt;/p&gt;

&lt;p&gt;The implementation of &lt;code&gt;Global Secondary Indexes (GSIs)&lt;/code&gt; on key fields like &lt;code&gt;vehicleCompany&lt;/code&gt; and &lt;code&gt;vehicleType&lt;/code&gt; has also enhanced the system’s querying capability, enabling more efficient data retrieval and expanding the types of insights that can be derived from the data. This architecture can seamlessly accommodate future enhancements as the data volume and complexity grow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Adding validation for APIs&lt;/li&gt;
&lt;li&gt;Authentication and Authorization&lt;/li&gt;
&lt;li&gt;Automated Testing&lt;/li&gt;
&lt;li&gt;Analytics&lt;/li&gt;
&lt;li&gt;Web UI&lt;/li&gt;
&lt;/ul&gt;
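As a first step toward the API validation mentioned above, a plain TypeScript payload check could be added in front of the create handler. This is an illustrative sketch: the field names come from the curl examples in this post, while the checks themselves are assumptions.

```typescript
// Illustrative request validator for the emission payload.
// Required field names are taken from the curl examples in this post;
// the checks are a hypothetical starting point, not the project's code.
function validateEmissionPayload(body: any): string[] {
  const errors: string[] = [];
  const required = [
    "creationTime", "userId", "vehicleNumber",
    "vehicleType", "vehicleCompany", "fuelType",
    "liters", "cost", "co2EmissionFactor",
  ];
  for (const field of required) {
    if (body[field] === undefined) errors.push(`missing field: ${field}`);
  }
  if (typeof body.liters !== "number") errors.push("liters must be a number");
  return errors;
}
```

The handler would return a 400 response whenever the returned error list is non-empty.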

</description>
      <category>aws</category>
      <category>typescript</category>
      <category>cdk</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
