<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Obayuwana Paul</title>
    <description>The latest articles on Forem by Obayuwana Paul (@obayuwanapaul).</description>
    <link>https://forem.com/obayuwanapaul</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F198883%2F8d865aff-92a9-4333-850b-9df98b4f43c0.jpg</url>
      <title>Forem: Obayuwana Paul</title>
      <link>https://forem.com/obayuwanapaul</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/obayuwanapaul"/>
    <language>en</language>
    <item>
      <title>AWS Serverless: Getting Started</title>
      <dc:creator>Obayuwana Paul</dc:creator>
      <pubDate>Tue, 27 Jan 2026 21:54:16 +0000</pubDate>
      <link>https://forem.com/obayuwanapaul/aws-serverless-getting-started-4e44</link>
      <guid>https://forem.com/obayuwanapaul/aws-serverless-getting-started-4e44</guid>
<description>&lt;h2&gt;Before Serverless: The EC2 Era&lt;/h2&gt;

&lt;p&gt;EC2 (Elastic Compute Cloud) is one of the foundational services in AWS. It gives you a virtual server where you run applications on AWS infrastructure. You control the computing resources. You scale up and down with a button. Unlike traditional on-premises servers that require physical management, EC2 abstracts the hardware. But you still manage a lot.&lt;/p&gt;

&lt;p&gt;For years, companies used EC2 for everything. It worked. But certain use cases made EC2 feel like overkill: processing an event, reacting to an image upload, responding to a database change. With EC2, you are not just writing code. You provision resources, deploy applications, patch operating systems, and handle infrastructure. For a simple function that runs for two seconds, that overhead adds up.&lt;/p&gt;

&lt;h2&gt;Lambda: Serverless Compute Arrives&lt;/h2&gt;

&lt;p&gt;In November 2014, AWS introduced Lambda, the first core serverless compute service.&lt;/p&gt;

&lt;p&gt;Lambda lets you run code in response to events. You do not manage the underlying servers. AWS handles provisioning, scaling, and patching. You write your function, deploy it, and Lambda runs it when triggered.&lt;/p&gt;
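&lt;p&gt;A minimal handler is all Lambda asks of you. Here is a sketch in Python; the event shape and return value depend on the trigger, and the names are illustrative:&lt;/p&gt;

```python
import json

# Minimal sketch of an AWS Lambda handler in Python. Lambda calls this
# function with the triggering event and a context object; the return
# value goes back to the caller on synchronous invocations.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

&lt;p&gt;You deploy the function, attach a trigger, and AWS takes care of the rest.&lt;/p&gt;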

&lt;h2&gt;How Lambda Works&lt;/h2&gt;

&lt;p&gt;Lambda runs on &lt;a href="https://firecracker-microvm.github.io/" rel="noopener noreferrer"&gt;Firecracker&lt;/a&gt;, a micro virtual machine technology that AWS developed and open-sourced. Dedicated hosts run these micro VMs. When you invoke a Lambda function, AWS creates an execution environment, allocates resources, and runs your code. After execution, the environment can be reused for subsequent invocations or cleaned up.&lt;/p&gt;

&lt;p&gt;You focus on your code. AWS handles everything else. This was the shift that made serverless real.&lt;/p&gt;

&lt;h2&gt;Three Invocation Types&lt;/h2&gt;

&lt;p&gt;Lambda supports three invocation patterns. Each serves different use cases.&lt;/p&gt;

&lt;h3&gt;Asynchronous Invocation&lt;/h3&gt;

&lt;p&gt;The caller sends an event to Lambda and moves on. It does not wait for a response. Lambda processes the event in the background. This is used for background tasks: sending emails, processing images, and updating analytics. The event source fires and forgets.&lt;/p&gt;

&lt;p&gt;For example, take an S3 bucket with event notifications enabled: when you upload a new image, S3 generates an event and sends it to Lambda.&lt;/p&gt;
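&lt;p&gt;A handler for that S3 trigger might look like the sketch below. The record structure follows S3's event notification format; the processing logic is a placeholder:&lt;/p&gt;

```python
# Sketch of an asynchronous Lambda handler for S3 event notifications.
# S3 delivers a list of records; each one identifies the bucket and the
# key of the uploaded object.
def lambda_handler(event, context):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code would resize the image, write metadata, etc.
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}
```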

&lt;h3&gt;Synchronous Invocation&lt;/h3&gt;

&lt;p&gt;The caller sends a request, waits for Lambda to execute, and receives a response. The caller is blocked until the function completes. This is used for user-facing applications. A user submits a form, API Gateway triggers Lambda, Lambda processes the request, and the response goes back to the user.&lt;/p&gt;
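&lt;p&gt;With Lambda proxy integration, that form submission reaches your function as an event whose body is a JSON string, and you return a status code and body. A sketch, with illustrative field names:&lt;/p&gt;

```python
import json

# Sketch of a synchronous Lambda handler behind API Gateway (Lambda
# proxy integration). The request body arrives as a JSON string; the
# response must carry a statusCode and a string body.
def lambda_handler(event, context):
    try:
        form = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}
    email = form.get("email")
    if not email:
        return {"statusCode": 400, "body": json.dumps({"error": "email is required"})}
    return {"statusCode": 200, "body": json.dumps({"message": f"Thanks, {email}"})}
```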

&lt;h3&gt;Poll-based Invocation&lt;/h3&gt;

&lt;p&gt;Lambda polls a source for new records and processes them in batches. This works with SQS queues, Kinesis streams, and DynamoDB Streams. Lambda pulls messages or records, processes them, and deletes them from the source. Use this for stream processing and queue-based workloads.&lt;/p&gt;
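&lt;p&gt;An SQS handler might look like the sketch below. Returning a partial batch response tells Lambda to re-deliver only the failed messages (this requires enabling ReportBatchItemFailures on the event source mapping); the business logic is a placeholder:&lt;/p&gt;

```python
import json

# Sketch of a poll-based Lambda handler for SQS. Lambda polls the queue
# and hands the function a batch of records. Failed records are reported
# back via batchItemFailures so only they get re-delivered.
def process(message):
    # Placeholder business logic: reject messages without an order_id.
    if "order_id" not in message:
        raise ValueError("missing order_id")

def lambda_handler(event, context):
    failures = []
    for record in event.get("Records", []):
        try:
            process(json.loads(record["body"]))
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```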

&lt;h2&gt;Why Serverless Matters&lt;/h2&gt;

&lt;p&gt;Serverless is not just about avoiding server management. It changes how you build.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automatic scaling:&lt;/strong&gt; Functions scale in response to demand without configuration. You do not provision capacity for peak load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost efficiency:&lt;/strong&gt; You pay for execution time, not idle infrastructure. If your function runs for 100 milliseconds, you pay for 100 milliseconds.&lt;/p&gt;
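&lt;p&gt;A back-of-the-envelope estimate makes the billing model concrete. Lambda bills compute in GB-seconds (memory times duration) plus a per-request fee; the prices below are illustrative, so check the AWS pricing page for current figures:&lt;/p&gt;

```python
# Rough Lambda cost model. Prices are illustrative approximations of
# public list prices; they vary by region and change over time.
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_MILLION_REQUESTS = 0.20

def monthly_cost(invocations, duration_ms, memory_mb):
    # GB-seconds = invocations x duration (seconds) x memory (GB)
    gb_seconds = invocations * (duration_ms / 1000) * (memory_mb / 1024)
    compute = gb_seconds * PRICE_PER_GB_SECOND
    requests = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    return compute + requests

# One million 100 ms invocations at 512 MB comes to about a dollar.
print(round(monthly_cost(1_000_000, 100, 512), 2))
```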

&lt;p&gt;&lt;strong&gt;Built-in resilience:&lt;/strong&gt; Without much configuration, you get high availability. AWS handles infrastructure-level reliability.&lt;/p&gt;

&lt;p&gt;Serverless fits well for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Startups wanting minimal ops overhead&lt;/li&gt;
&lt;li&gt;Event-driven workloads (IoT, APIs, webhooks)&lt;/li&gt;
&lt;li&gt;Burst or unpredictable traffic&lt;/li&gt;
&lt;li&gt;Applications decomposed into microservices&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Lambda Limitations&lt;/h2&gt;

&lt;p&gt;Lambda is not the right tool for every job.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cold starts introduce latency:&lt;/strong&gt; When a new execution environment spins up, your function takes longer to respond. The impact depends on runtime and package size. For latency-sensitive applications, consider &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html" rel="noopener noreferrer"&gt;Provisioned Concurrency&lt;/a&gt;, which keeps environments warm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Timeout limits:&lt;/strong&gt; Lambda functions can run for a maximum of 15 minutes. Long-running or CPU-intensive tasks may need containers or EC2.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vendor lock-in:&lt;/strong&gt; Using Lambda-specific features ties you to AWS. Moving to another cloud requires rewriting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debugging complexity:&lt;/strong&gt; Serverless architectures need new patterns and tooling for testing and debugging.&lt;/p&gt;

&lt;h2&gt;What Changed at re:Invent 2025&lt;/h2&gt;

&lt;p&gt;AWS addressed several Lambda limitations at re:Invent 2025.&lt;/p&gt;

&lt;h3&gt;Lambda Managed Instance&lt;/h3&gt;

&lt;p&gt;One long-standing argument against Lambda was that it gets expensive at scale. Lambda Managed Instances weaken that argument. By default, your Lambda functions run on shared EC2 instances across AWS accounts. You can now use dedicated instances from your own account instead. AWS still manages everything: OS patching, load balancing, and auto-scaling. You get isolation without the infrastructure burden.&lt;/p&gt;

&lt;h3&gt;Durable Functions&lt;/h3&gt;

&lt;p&gt;AWS introduced Lambda Durable Functions, similar to Step Functions but without Amazon States Language. You write workflows in familiar code with your existing dependencies. Unlike standard Lambda's 15-minute timeout, durable function execution can span multiple invocations with a maximum timeout of one year.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/durable-functions.html" rel="noopener noreferrer"&gt;Official AWS documentation for Durable Functions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The tradeoff with these new options: you now manage instance capacity alongside function code. Not harder, just different.&lt;/p&gt;

&lt;p&gt;Your existing Lambda integrations still work. EventBridge, CloudWatch, X-Ray, AWS Config. Nothing changes in how you trigger or observe functions.&lt;/p&gt;

&lt;h2&gt;API Gateway: The HTTP Trigger&lt;/h2&gt;

&lt;p&gt;In 2015, AWS introduced API Gateway, making Lambda practical for web and mobile applications.&lt;/p&gt;

&lt;p&gt;Lambda needs a trigger. For web applications, that trigger is usually an HTTP request. API Gateway provides the HTTP endpoint that receives requests and invokes your Lambda function.&lt;/p&gt;

&lt;p&gt;API Gateway is fully managed. AWS handles provisioning, scaling, and patching. You define your routes and connect them to backend logic.&lt;/p&gt;

&lt;h3&gt;Beyond Routing&lt;/h3&gt;

&lt;p&gt;API Gateway does more than route requests to Lambda.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Authentication and authorization:&lt;/strong&gt; Control access using IAM roles, Cognito user pools, or Lambda authorizers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Request validation:&lt;/strong&gt; Define a schema and reject invalid requests before they reach your function.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rate limiting:&lt;/strong&gt; Protect your backend by limiting requests per second.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Usage quotas:&lt;/strong&gt; Limit requests per day or month. Useful for SaaS applications with tiered plans.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Caching:&lt;/strong&gt; Store responses to reduce Lambda invocations and improve latency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring:&lt;/strong&gt; Integrates with CloudWatch for metrics and logging.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Three API Types&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;REST API&lt;/strong&gt; is the original API Gateway product. It includes the most features: API keys, per-client throttling, request and response transformation, caching, AWS WAF integration, and usage plans. Use REST API when you need advanced features or when building APIs for third-party consumption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HTTP API&lt;/strong&gt; is newer, simpler, and cheaper. AWS designed it for performance with minimal features. HTTP APIs cost up to 70% less than REST APIs. They support Lambda and HTTP backends, JWT authorization (OAuth 2.0 and OpenID Connect), CORS configuration, and automatic deployments. For most serverless applications, HTTP API is sufficient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WebSocket API&lt;/strong&gt; enables real-time, two-way communication. Unlike REST and HTTP APIs where the client initiates every request, WebSocket keeps a persistent connection open. The server pushes messages without waiting for a request. Use WebSocket API for chat applications, live notifications, real-time dashboards, and multiplayer games.&lt;/p&gt;

&lt;h2&gt;DynamoDB: The Serverless Database&lt;/h2&gt;

&lt;p&gt;DynamoDB is a NoSQL database fully managed by AWS. You do not provision servers or manage resources. AWS handles all of that. You focus on your data.&lt;/p&gt;

&lt;p&gt;AWS introduced DynamoDB in 2012, before Lambda existed. It was foundational for serverless architectures from the start.&lt;/p&gt;

&lt;h3&gt;Why Traditional Databases Fail with Lambda&lt;/h3&gt;

&lt;p&gt;When you build serverless applications, traditional databases like PostgreSQL or MySQL create problems.&lt;/p&gt;

&lt;p&gt;Serverless is ephemeral. Lambda execution environments spin up, handle a request, and get destroyed. Traditional databases need persistent connections. Your application opens a connection, keeps it alive, and reuses it. Lambda does not stay alive. Environments get killed. New environments spin up. Each one tries to open a new connection. With high traffic, you exhaust your connection pool. Your database chokes.&lt;/p&gt;

&lt;p&gt;DynamoDB works differently. It operates like an API. You make an HTTP request, and you get a response. No persistent connections. No connection pools to manage. Each request is independent. This is why DynamoDB works well with Lambda.&lt;/p&gt;
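&lt;p&gt;To make that concrete, every item travels in DynamoDB's typed attribute-value format inside a standalone HTTP request. A minimal sketch of that marshalling (in real code, boto3's TypeSerializer handles it for you):&lt;/p&gt;

```python
# Sketch of DynamoDB's typed attribute-value wire format: every value
# is tagged with its type ("S" = string, "N" = number, "BOOL" = boolean).
def to_dynamodb_item(item):
    def tag(value):
        if isinstance(value, bool):  # bool before int: bool subclasses int
            return {"BOOL": value}
        if isinstance(value, (int, float)):
            return {"N": str(value)}  # numbers travel as strings
        return {"S": str(value)}
    return {key: tag(value) for key, value in item.items()}

item = to_dynamodb_item({"user_id": "u-123", "age": 29, "active": True})
```

&lt;p&gt;Each put_item or get_item call sends one such request and gets one response. Nothing stays open between calls.&lt;/p&gt;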

&lt;h3&gt;Capacity Modes&lt;/h3&gt;

&lt;p&gt;DynamoDB offers two capacity modes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provisioned capacity:&lt;/strong&gt; You specify reads and writes per second. You pay for that capacity whether you use it or not. Use this when traffic is predictable and you want to optimize costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On-demand capacity:&lt;/strong&gt; Pay per request. On-demand mode arrived later, in 2018, and is a natural fit for serverless. No capacity planning. DynamoDB scales automatically. Use this when traffic is unpredictable or when you are just starting out.&lt;/p&gt;

&lt;p&gt;For serverless applications, on-demand mode usually makes sense. It matches the pay-per-use model of Lambda.&lt;/p&gt;

&lt;h3&gt;DynamoDB Resources&lt;/h3&gt;

&lt;p&gt;If you want to go deeper on DynamoDB:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.dynamodbbook.com/" rel="noopener noreferrer"&gt;The DynamoDB Book&lt;/a&gt; by Alex DeBrie: If you are serious about DynamoDB, this is the book. Covers data modelling, access patterns, and advanced techniques. Essential reading for serverless developers.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/alexdebrie/awesome-dynamodb" rel="noopener noreferrer"&gt;Awesome DynamoDB&lt;/a&gt;: A curated list of resources for modelling, operating, and using DynamoDB.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Communication Patterns: Request-Response vs Event-Driven&lt;/h2&gt;

&lt;p&gt;Serverless applications use two main communication patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Request-response&lt;/strong&gt; is synchronous. The client sends a request and waits. This works for real-time scenarios: fetching data, submitting a form, and logging in. The user expects an immediate answer. API Gateway with Lambda is request-response in action.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Event-driven&lt;/strong&gt; is asynchronous. A publisher announces something happened and does not wait. Consumers react when ready. This works for background processing: sending emails, updating analytics, processing images, syncing data. Choreography and orchestration are both patterns within event-driven architecture.&lt;/p&gt;

&lt;p&gt;Most serverless applications use both. API Gateway handles user-facing requests where you need immediate responses. EventBridge or SNS handles background processing where work happens asynchronously.&lt;/p&gt;

&lt;h2&gt;Step Functions: Orchestration&lt;/h2&gt;

&lt;p&gt;AWS introduced Step Functions in 2016 to coordinate multiple Lambda functions into workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Orchestration&lt;/strong&gt; means a central controller manages the workflow. Think of a conductor leading an orchestra. The conductor tells each musician when to play. The orchestrator tells each component what to do and when.&lt;/p&gt;

&lt;p&gt;Step Functions is the orchestrator. You define your workflow as a state machine. Step Functions executes it, calling Lambda functions in sequence, handling retries, managing state, and dealing with errors.&lt;/p&gt;

&lt;h3&gt;When to Use Orchestration&lt;/h3&gt;

&lt;p&gt;Orchestration works well when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need visibility into the entire workflow&lt;/li&gt;
&lt;li&gt;Order of execution matters&lt;/li&gt;
&lt;li&gt;You want centralized error handling&lt;/li&gt;
&lt;li&gt;The workflow has many conditional paths&lt;/li&gt;
&lt;li&gt;Processes run for hours or days&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Consider an order processing workflow: validate payment, check inventory, reserve items, send confirmation, notify shipping. You could write one Lambda that calls other Lambdas, handling retries and errors yourself. AWS calls this the "Lambda as orchestrator" anti-pattern. It works, but it gets messy.&lt;/p&gt;

&lt;p&gt;Step Functions handles this for you.&lt;/p&gt;
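&lt;p&gt;That order workflow can be sketched in Amazon States Language, here as a Python dict you would serialize and hand to Step Functions. The Lambda ARNs are placeholders, and the workflow is trimmed to three steps:&lt;/p&gt;

```python
import json

# Sketch of an order-processing state machine in Amazon States Language.
# Retry and Catch show Step Functions' built-in error handling.
state_machine = {
    "StartAt": "ValidatePayment",
    "States": {
        "ValidatePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-payment",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 3}],
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "NotifyFailure"}],
            "Next": "CheckInventory",
        },
        "CheckInventory": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:check-inventory",
            "Next": "SendConfirmation",
        },
        "SendConfirmation": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:send-confirmation",
            "End": True,
        },
        "NotifyFailure": {"Type": "Fail", "Error": "OrderFailed"},
    },
}

definition = json.dumps(state_machine)  # the definition you pass to Step Functions
```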

&lt;h3&gt;What Step Functions Provides&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Built-in error handling:&lt;/strong&gt; Define retry strategies and catch blocks. If a step fails, Step Functions retries automatically or routes to a fallback.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;State management:&lt;/strong&gt; Step Functions tracks where you are in the workflow. If something fails, you know which step failed and can resume from there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visual monitoring:&lt;/strong&gt; You see your workflow as a diagram. Each execution shows which steps succeeded, failed, or are in progress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long-running workflows:&lt;/strong&gt; Lambda has a 15-minute timeout. Step Functions Standard workflows can run for up to one year.&lt;/p&gt;

&lt;h2&gt;EventBridge: Choreography&lt;/h2&gt;

&lt;p&gt;AWS introduced EventBridge in 2019, evolving from CloudWatch Events, to enable event-driven architectures at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choreography&lt;/strong&gt; has no central controller. Each service listens for events and decides how to respond. Think of dancers who know their moves and react to the music without someone directing each step.&lt;/p&gt;

&lt;p&gt;EventBridge is the choreography tool in AWS serverless. Services publish events to an event bus. Other services subscribe to events they care about and react independently.&lt;/p&gt;
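&lt;p&gt;Publishing such an event might look like the sketch below, which builds the entry you would pass to EventBridge's put_events call via boto3. The source, detail type, and payload fields are illustrative:&lt;/p&gt;

```python
import json

# Sketch of an EventBridge event entry. Subscribers match on Source and
# DetailType through rules; the publisher does not know who is listening.
def order_placed_event(order_id, total):
    return {
        "Source": "shop.orders",
        "DetailType": "OrderPlaced",
        "Detail": json.dumps({"order_id": order_id, "total": total}),
        "EventBusName": "default",
    }

entry = order_placed_event("o-789", 49.99)
# In real code: boto3.client("events").put_events(Entries=[entry])
```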

&lt;h3&gt;When to Use Choreography&lt;/h3&gt;

&lt;p&gt;Choreography works well when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Services should be truly independent&lt;/li&gt;
&lt;li&gt;Different teams own different services&lt;/li&gt;
&lt;li&gt;You want to avoid a single point of failure&lt;/li&gt;
&lt;li&gt;The workflow is simple (event happens, services react)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;EventBridge and SNS are not competing. You often use both together. EventBridge excels at content-based routing with complex rules. SNS excels at fan-out to multiple subscribers. Choose based on the scenario.&lt;/p&gt;

&lt;h2&gt;When Lambda Is Not Enough: Fargate&lt;/h2&gt;

&lt;p&gt;Lambda handles most serverless workloads. But sometimes you hit limits.&lt;/p&gt;

&lt;p&gt;Lambda has a 15-minute timeout. Memory caps at 10 GB. Cold starts affect latency-sensitive workloads. The package size is limited. Some workloads need persistent processes or specific runtimes that Lambda does not support.&lt;/p&gt;

&lt;p&gt;When you hit these limits, Fargate is an option.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fargate is a serverless container engine.&lt;/strong&gt; You run containers without managing servers. Traditional container hosting (ECS on EC2, Kubernetes on EC2) requires you to provision and manage instances. You decide server count, instance types, and scaling. You patch operating systems.&lt;/p&gt;

&lt;p&gt;Fargate removes that. You define your container, specify CPU and memory, and AWS runs it. No servers to manage. No clusters to configure. You pay for the computing resources your containers use.&lt;/p&gt;

&lt;p&gt;Fargate works with ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service). If you already use Kubernetes, Fargate can run your pods without managing nodes.&lt;/p&gt;

&lt;h2&gt;The Serverless Mindset&lt;/h2&gt;

&lt;p&gt;Serverless is not just "no servers." The servers exist. You just do not manage them.&lt;/p&gt;

&lt;p&gt;The real shift is in how you design systems. You think about events and reactions, not servers and capacity. You think about how signals flow, not where code runs. You design around system boundaries, coupling, and error propagation.&lt;/p&gt;

&lt;p&gt;Lambda, API Gateway, DynamoDB, Step Functions, EventBridge. These are not just services. They are building blocks for a different way of thinking about applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start with &lt;strong&gt;Lambda&lt;/strong&gt; for compute&lt;/li&gt;
&lt;li&gt;Add &lt;strong&gt;API Gateway&lt;/strong&gt; for HTTP triggers&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;DynamoDB&lt;/strong&gt; for data&lt;/li&gt;
&lt;li&gt;Coordinate with &lt;strong&gt;Step Functions&lt;/strong&gt; when you need orchestration&lt;/li&gt;
&lt;li&gt;Decouple with &lt;strong&gt;EventBridge&lt;/strong&gt; when you need choreography&lt;/li&gt;
&lt;li&gt;Reach for &lt;strong&gt;Fargate&lt;/strong&gt; when you outgrow Lambda&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is the serverless toolkit. The rest is learning when to use each piece.&lt;/p&gt;

&lt;h2&gt;What Comes Next&lt;/h2&gt;

&lt;p&gt;This article covered the building blocks. Production serverless requires two more skills: &lt;strong&gt;observability&lt;/strong&gt; and &lt;strong&gt;cost management&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt; means understanding what your system is doing. CloudWatch collects logs and metrics from Lambda, API Gateway, and DynamoDB. X-Ray traces requests across services, showing you where time is spent and where errors occur. Without observability, debugging distributed systems becomes guesswork.&lt;/p&gt;

&lt;p&gt;Serverless promises you pay only for what you use. No traffic, no cost. That is mostly true. But serverless costs can surprise you. Lambda invocations are cheap, but they add up. DynamoDB writes cost more than you expect. CloudWatch logs accumulate. Without attention, you can spend more than planned.&lt;/p&gt;

&lt;p&gt;Understanding observability and cost patterns helps you build production-ready serverless applications. The services covered here give you the foundation. The next step is putting them together in a real project.&lt;/p&gt;




&lt;h2&gt;Further Resources&lt;/h2&gt;

&lt;h3&gt;Books&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://obayuwana.gumroad.com/l/eedjz" rel="noopener noreferrer"&gt;Serverless Essentials&lt;/a&gt;: A guide for leaders and beginners by Paul Obayuwana. This book covers the essentials: Lambda, API Gateway, DynamoDB, EventBridge, Step Functions, Fargate, security, cost management, and monitoring. Broad enough to give you the full picture.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Blogs and Newsletters&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://theburningmonk.com/" rel="noopener noreferrer"&gt;The Burning Monk&lt;/a&gt; by Yan Cui: Deep dives on serverless architecture, patterns, and best practices. Yan consults for companies running serverless in production. His insights come from real experience.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://offbynone.io/" rel="noopener noreferrer"&gt;Off By None&lt;/a&gt; by Jeremy Daly: Weekly serverless newsletter. Curates the best serverless content each week.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/blogs/compute/" rel="noopener noreferrer"&gt;AWS Compute Blog&lt;/a&gt;: Official AWS blog covering Lambda, Step Functions, and other compute services. Announcements, tutorials, and best practices from AWS.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Open Source Projects&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/serverless/examples" rel="noopener noreferrer"&gt;Serverless Framework Examples&lt;/a&gt;: A collection of serverless application examples built with the Serverless Framework. Covers different languages, use cases, and AWS services.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/ran-isenberg/aws-lambda-handler-cookbook" rel="noopener noreferrer"&gt;AWS Lambda Handler Cookbook&lt;/a&gt;: A production-ready serverless template by Ran Isenberg. Shows best practices for Lambda functions with CDK, including testing, CI/CD, and operational patterns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Powertools for AWS Lambda:&lt;/strong&gt; Open source toolkit for implementing serverless best practices.

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/aws-powertools/powertools-lambda-python" rel="noopener noreferrer"&gt;Python&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/aws-powertools/powertools-lambda-typescript" rel="noopener noreferrer"&gt;TypeScript&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;Courses&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://productionreadyserverless.com/?affiliateId=4031d06a-8854-4f2f-aeed-e81104e3fbda" rel="noopener noreferrer"&gt;Production-Ready Serverless&lt;/a&gt;: The most comprehensive serverless course available. Covers testing, monitoring, security, CI/CD, and operational best practices. Hands-on, practical, and based on real production experience.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://appsyncmasterclass.com/?affiliateId=4031d06a-8854-4f2f-aeed-e81104e3fbda" rel="noopener noreferrer"&gt;AppSync Masterclass&lt;/a&gt;: If you are building GraphQL APIs on AWS, this is the course. Covers AppSync from basics to production patterns, including real-time subscriptions, authentication, and integrating with Lambda and DynamoDB.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Connect with me: &lt;a href="https://obayuwanapaul.hashnode.dev/" rel="noopener noreferrer"&gt;Hashnode&lt;/a&gt; | &lt;a href="https://www.linkedin.com/in/obayuwana-paul/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
      <category>cloud</category>
    </item>
    <item>
      <title>MCP: extending your LLM without writing code</title>
      <dc:creator>Obayuwana Paul</dc:creator>
      <pubDate>Tue, 06 Jan 2026 18:19:32 +0000</pubDate>
      <link>https://forem.com/obayuwanapaul/mcp-extending-your-llm-without-writing-code-e3o</link>
      <guid>https://forem.com/obayuwanapaul/mcp-extending-your-llm-without-writing-code-e3o</guid>
      <description>&lt;p&gt;In 2025, MCP was everywhere. The "MCPification" of services became a defining trend. But MCP can be a hard concept to grasp, especially for non-technical users. This article breaks down what MCP is and what problem it solves.&lt;/p&gt;

&lt;h2&gt;The problem: LLMs have limitations&lt;/h2&gt;

&lt;p&gt;Before we understand MCP, we have to understand the problem it solves. That problem lies with LLMs (Large Language Models), the technology behind AI assistants like ChatGPT.&lt;/p&gt;

&lt;p&gt;The LLM on its own is not capable of doing much. Most of the time, you ask it questions and it answers from what it knows. Sometimes it is stuck in the past, not updated with the latest events. If you ask it to send an email or help you do some shopping, the LLM on its own cannot do that.&lt;/p&gt;

&lt;p&gt;LLMs are intelligent, but they have limitations.&lt;/p&gt;

&lt;h2&gt;Tools and frameworks: developer-centric solutions&lt;/h2&gt;

&lt;p&gt;This is where tools come in. Most providers, like OpenAI and Anthropic, have tool use built into their APIs. You define a tool, handle the logic, and execute it. But you manage everything yourself, and each provider implements tool use slightly differently. Frameworks like LangChain add a layer of standardization on top: you can import Python classes that represent capabilities like web search, mix and match LLM providers, and write custom tools. But it is still developer-centric. You have to do some coding.&lt;/p&gt;

&lt;p&gt;Building tools into your LLM requires work. You need to know coding. You need to know how to use LangChain and similar frameworks. That is why we have AI products where tools are already built in. Perplexity has web search and deep research. &lt;a href="https://every.to/?via=paul-obayuwana" rel="noopener noreferrer"&gt;Every.to&lt;/a&gt; offers a suite of AI tools: Spiral for writing, Cora for computer automation, and Sparkle for editing. These products have capabilities baked in by developers, so you just use them.&lt;/p&gt;

&lt;p&gt;But there is an issue with tools. You cannot extend the capability of your AI on your own if you are not good with coding.&lt;/p&gt;

&lt;h2&gt;MCP: module-centric, not developer-centric&lt;/h2&gt;

&lt;p&gt;This is where MCP comes in. MCP is module-centric, not developer-centric.&lt;/p&gt;

&lt;p&gt;MCP lets you extend the capability of your LLM, much like installing a browser extension extends your browser. There might be some complexity in setup, but unlike tools, you do not need to write code.&lt;/p&gt;

&lt;p&gt;Anthropic, the creator of MCP, defined an open standard. This standard allows service providers to build their own MCP servers. When service providers build MCP servers using Anthropic's standard, your LLM can connect to them. Your LLM's capabilities get extended through these connections.&lt;/p&gt;

&lt;p&gt;Anthropic left the maintenance of MCP servers to the service providers. That is why you have so many MCP servers available. Anthropic created the standard. Service providers built servers using that unified language.&lt;/p&gt;

&lt;h2&gt;The MCP ecosystem&lt;/h2&gt;

&lt;p&gt;In the MCP ecosystem, we have four parts: the MCP host, the MCP client, the MCP server, and the data sources or services.&lt;/p&gt;

&lt;p&gt;The MCP host is the AI application that wants to use external data or tools. Claude Desktop, Cursor, Windsurf, Cline. This is what you interact with. You talk to the host, and the host coordinates everything behind the scenes.&lt;/p&gt;

&lt;p&gt;The MCP client is a component that runs inside the host. It maintains the connection to MCP servers, sends requests, and receives responses. You do not interact with the client directly; the host manages it for you.&lt;/p&gt;

&lt;p&gt;The MCP server is a lightweight program that exposes data or capabilities using the MCP standard. Each server typically connects to one data source or service. Think of it as an adapter that knows how to fetch or manipulate a particular kind of data. Service providers build and maintain these servers using Anthropic's protocol.&lt;/p&gt;

&lt;p&gt;The data sources and services are the actual places where information or functionality resides. They can be local (files on your computer, a local database) or remote (web APIs, cloud services, Slack, GitHub). The server connects to these sources and exposes them to the AI.&lt;/p&gt;

&lt;p&gt;The flow: the AI host talks to a server (via its internal client), and the server talks to some data or tool. The AI might say, "Hey server, give me the file report.pdf" or "Hey server, execute this database query." The server performs that action and returns the result.&lt;/p&gt;
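&lt;p&gt;In practice, connecting a server is configuration rather than code. A host like Claude Desktop reads a JSON config listing the servers it should launch; the filesystem server and path below are illustrative:&lt;/p&gt;

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/Documents"]
    }
  }
}
```

&lt;p&gt;Restart the host, and the server's capabilities become available in your conversations.&lt;/p&gt;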

&lt;h2&gt;The building manager analogy&lt;/h2&gt;

&lt;p&gt;Let me use a scenario. Picture the MCP ecosystem as a large building.&lt;/p&gt;

&lt;p&gt;The MCP host is the building itself. When you walk in, you are entering the host. This is what you interact with.&lt;/p&gt;

&lt;p&gt;The MCP client is the building manager. The building manager works behind the scenes, coordinating requests between you and the departments. You do not talk to the building manager directly; the building handles that for you.&lt;/p&gt;

&lt;p&gt;The LLM is the intelligence manager that sits upstairs. The building manager relays your request to the intelligence manager. The intelligence manager interprets what you want and identifies which department can help.&lt;/p&gt;

&lt;p&gt;Each department is an MCP server. These departments provide specialized services. They follow a standard language protocol (the one Anthropic defined). The files, databases, and resources within each department are the data sources and services.&lt;/p&gt;

&lt;p&gt;The flow: you enter the building (host) and state what you need. The building manager (client) relays your request to the intelligence manager (LLM). The intelligence manager identifies which department (server) can help and sends a request. The department fetches the information from its files and resources (data sources). The response travels back: department to intelligence manager to building manager to you.&lt;/p&gt;

&lt;h2&gt;MCP is stateful&lt;/h2&gt;

&lt;p&gt;One more thing about MCP compared to tools: MCP is stateful. MCP servers maintain persistent connections during a session. The server keeps the connection open and can track context within that session. Traditional tool calls are one-off: call, respond, disconnect. MCP keeps the line open.&lt;/p&gt;

&lt;h2&gt;
  
  
  MCP primitives: resources, tools, prompts, and sampling
&lt;/h2&gt;

&lt;p&gt;When you build an MCP server, you expose one or more of these four capabilities:&lt;/p&gt;

&lt;p&gt;Resources are passive data. The client asks to read a URI (like a file path or database record). Think of it as a file read: informational only, no action taken.&lt;/p&gt;

&lt;p&gt;Tools are executable functions. These let the LLM take action on your behalf: execute a database query, send a Slack message, create a file. Tools do things.&lt;/p&gt;

&lt;p&gt;Prompts are reusable templates. A server can define a template (like "Analyze Error Logs") that the host loads to jumpstart a conversation. They give the AI context to work with.&lt;/p&gt;

&lt;p&gt;Sampling is when the server asks the LLM for help with reasoning. Picture the kitchen department in our building analogy asking the intelligence manager upstairs for help deciding which dish to prepare. The server needs the LLM's reasoning to complete its task.&lt;/p&gt;
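&lt;p&gt;To see resources, tools, and prompts side by side, here is a deliberately simplified sketch. The class and every name in it are invented for illustration; this is not the MCP SDK API:&lt;/p&gt;

```python
# Toy server showing three of the four primitives side by side.
class ToyServer:
    def __init__(self):
        self.resources = {}  # passive data, read by URI
        self.tools = {}      # executable functions
        self.prompts = {}    # reusable templates

    def read_resource(self, uri):          # resource: informational only
        return self.resources[uri]

    def call_tool(self, name, **kwargs):   # tool: takes an action
        return self.tools[name](**kwargs)

    def get_prompt(self, name, **kwargs):  # prompt: fills a template
        return self.prompts[name].format(**kwargs)

server = ToyServer()
server.resources["db://users/42"] = {"name": "Ada"}
server.tools["send_message"] = lambda text: f"sent: {text}"
server.prompts["analyze_logs"] = "Analyze the error logs from {service}."

# Sampling runs the other way: the server asks the host's LLM for a
# completion, so it cannot be shown as a local function call here.
```

&lt;p&gt;Notice that reading a resource changes nothing, while calling a tool performs an action; that distinction is the heart of the resources-versus-tools split.&lt;/p&gt;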

&lt;h2&gt;
  
  
  Categories of MCP servers
&lt;/h2&gt;

&lt;p&gt;MCP servers come in many categories. Here are some common ones:&lt;/p&gt;

&lt;p&gt;Browser automation: Playwright MCP is the most popular, with over 12,000 stars on GitHub. It lets AI agents interact with web pages, perform scraping, and automate browser-based workflows. With accessibility snapshots, it can help you do online shopping or navigate complex web apps.&lt;/p&gt;

&lt;p&gt;File system servers: These let your AI access files on your computer. Read, write, search, and manage files and directories.&lt;/p&gt;

&lt;p&gt;Database servers: These expose databases to your AI. Query data, run reports, and interact with your data stores.&lt;/p&gt;

&lt;p&gt;Code execution: Servers like Code Alchemist let you run code in simulated environments. Your AI can execute Python or other languages safely.&lt;/p&gt;

&lt;p&gt;Vision and media: Some servers help AI process images, videos, or other media formats.&lt;/p&gt;
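&lt;p&gt;Connecting one of these servers to a host is mostly configuration, not code. For example, Claude Desktop reads a &lt;code&gt;claude_desktop_config.json&lt;/code&gt; file with an &lt;code&gt;mcpServers&lt;/code&gt; section; the directory path below is a placeholder you would swap for your own:&lt;/p&gt;

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/Documents"
      ]
    }
  }
}
```

&lt;p&gt;On restart, the host launches the server and the file system tools appear in your conversation automatically.&lt;/p&gt;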

&lt;h2&gt;
  
  
  Finding MCP servers
&lt;/h2&gt;

&lt;p&gt;Several resources list available MCP servers:&lt;/p&gt;

&lt;p&gt;punkpeye/awesome-mcp-servers on GitHub: A curated collection of MCP servers with categories and descriptions.&lt;/p&gt;

&lt;p&gt;tolkonepiu/best-of-mcp-servers on GitHub: A ranked list of over 410 MCP servers, updated weekly.&lt;/p&gt;

&lt;p&gt;mcpserver.works: A website that catalogs MCP servers and makes them easy to discover.&lt;/p&gt;

&lt;p&gt;The ecosystem keeps growing. New servers appear regularly as more service providers adopt the standard.&lt;/p&gt;

&lt;h2&gt;
  
  
  In conclusion
&lt;/h2&gt;

&lt;p&gt;MCP lets you extend your AI without writing custom integration code. Anthropic created the standard. Service providers build the servers. You just connect.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Preparing for an AWS Summit</title>
      <dc:creator>Obayuwana Paul</dc:creator>
      <pubDate>Mon, 18 Nov 2024 22:25:37 +0000</pubDate>
      <link>https://forem.com/obayuwanapaul/preparing-for-an-aws-summit-3b67</link>
      <guid>https://forem.com/obayuwanapaul/preparing-for-an-aws-summit-3b67</guid>
      <description>&lt;p&gt;Attending an AWS Summit can be both exciting and overwhelming. With numerous sessions, networking opportunities, and cutting-edge technologies on display, it’s easy to get lost in the whirlwind of activities. Here’s a short guide to help you prepare and maximize your time at an AWS Summit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Register Early&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Seats fill up quickly, and registration often closes once the program is fully booked. Don’t wait until the last minute. Secure your spot as soon as registration opens to avoid missing out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Download the Event App and Study the Session Guide&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AWS event app is essential. It provides the event schedule, session guides, and even a venue map. Use the app to review the session guide and select the ones you want to attend. There are various sessions for different experience levels, so plan to attend those that align with your skills or interests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Plan Your Day Ahead&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The day before the event, ensure you’re ready:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Pack essentials like your ID for event entry.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Depending on the summit, you may need to bring your laptop for hands-on labs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Plan your commute and parking, as logistics can sometimes be tricky.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Make the Most of Networking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Networking is key at events like these. Although it might seem overwhelming, take a deep breath and dive in. Here's how to make it easier:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Don’t stress about sticking to a strict plan; be open to spontaneous conversations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ask others about their projects or what they do. Not everyone you meet will be a developer — some may be business owners looking for cloud solutions. Find common ground.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prepare a few conversation starters, like asking how to improve your resume or what a typical day in their role looks like.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Connect with people on LinkedIn or other social media platforms and send a message afterward to maintain the connection.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, after the summit, review what you’ve learned and identify areas to explore further in your AWS journey. Follow these tips to get the most out of your experience, and you’ll leave with new knowledge, connections, and possibly a fresh perspective on your cloud career.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Do you need a portfolio?</title>
      <dc:creator>Obayuwana Paul</dc:creator>
      <pubDate>Sat, 01 Aug 2020 22:20:20 +0000</pubDate>
      <link>https://forem.com/obayuwanapaul/do-you-need-a-portfolios-3n61</link>
      <guid>https://forem.com/obayuwanapaul/do-you-need-a-portfolios-3n61</guid>
      <description>&lt;p&gt;&lt;a href="https://awesomeopensource.com/projects/portfolio"&gt;https://awesomeopensource.com/projects/portfolio&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Anyone interested in the following? I want to write about it</title>
      <dc:creator>Obayuwana Paul</dc:creator>
      <pubDate>Tue, 24 Mar 2020 23:28:12 +0000</pubDate>
      <link>https://forem.com/obayuwanapaul/anyone-interested-in-the-following-because-i-want-to-write-on-it-205b</link>
      <guid>https://forem.com/obayuwanapaul/anyone-interested-in-the-following-because-i-want-to-write-on-it-205b</guid>
      <description>&lt;p&gt;creating a project using bootstrap sass and parcel bundler&lt;br&gt;
understanding the this Parameter in javascript&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
