<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ogooluwa Akinola</title>
    <description>The latest articles on Forem by Ogooluwa Akinola (@ogooluwa).</description>
    <link>https://forem.com/ogooluwa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2681494%2F4a4f5326-ff24-4c4a-bc32-8fbae930b457.jpg</url>
      <title>Forem: Ogooluwa Akinola</title>
      <link>https://forem.com/ogooluwa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ogooluwa"/>
    <language>en</language>
    <item>
      <title>Hey folks! Check out my latest tutorial on building a serverless sentiment analytics dashboard</title>
      <dc:creator>Ogooluwa Akinola</dc:creator>
      <pubDate>Tue, 25 Mar 2025 04:26:00 +0000</pubDate>
      <link>https://forem.com/ogooluwa/checkout-my-latest-tutorial-on-building-a-serverless-sentiment-analytics-dashboard-15l6</link>
      <guid>https://forem.com/ogooluwa/checkout-my-latest-tutorial-on-building-a-serverless-sentiment-analytics-dashboard-15l6</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/ogooluwa" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2681494%2F4a4f5326-ff24-4c4a-bc32-8fbae930b457.jpg" alt="ogooluwa"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/ogooluwa/building-a-serverless-social-media-sentiment-analytics-dashboard-on-aws-3doj" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Building a Serverless Social Media Sentiment Analytics Dashboard on AWS&lt;/h2&gt;
      &lt;h3&gt;Ogooluwa Akinola ・ Mar 25&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#serverless&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#machinelearning&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>aws</category>
      <category>serverless</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Building a Serverless Social Media Sentiment Analytics Dashboard on AWS</title>
      <dc:creator>Ogooluwa Akinola</dc:creator>
      <pubDate>Tue, 25 Mar 2025 04:13:39 +0000</pubDate>
      <link>https://forem.com/ogooluwa/building-a-serverless-social-media-sentiment-analytics-dashboard-on-aws-3doj</link>
      <guid>https://forem.com/ogooluwa/building-a-serverless-social-media-sentiment-analytics-dashboard-on-aws-3doj</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg6ez4kiyiima6tdm4eb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg6ez4kiyiima6tdm4eb.png" alt="Architecture" width="800" height="595"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hey there, fellow AWS explorers! Ever wondered how to turn the chaotic chatter of social media into actionable insights? Today, we're diving headfirst into the world of serverless architecture to build a simple analytics dashboard for social media sentiment data.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll walk through building a complete serverless backend for a social media sentiment analytics dashboard. We'll leverage AWS Lambda, API Gateway, DynamoDB, Amazon Kinesis, and a few other services to create a scalable, cost-effective system, and we'll define and manage our infrastructure as code with AWS CloudFormation templates.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why Serverless?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now, you might be asking, "Why go serverless?" Great question! Here's why:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No Servers to Manage:&lt;/strong&gt; Say goodbye to patching, scaling, and the headache of managing infrastructure.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pay-as-you-go:&lt;/strong&gt; Only pay for the compute time you consume.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability:&lt;/strong&gt; AWS scales your functions automatically with demand, so your application can absorb traffic spikes without manual intervention.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster Development:&lt;/strong&gt; Focus on your code, not infrastructure.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code:&lt;/strong&gt; Using CloudFormation, we define and manage our infrastructure in a declarative way, enabling version control, reproducibility, and easier collaboration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Let’s get our Hands Dirty&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Account:&lt;/strong&gt; If you don't have one, sign up for a &lt;a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;amp;all-free-tier.sort-order=asc&amp;amp;awsf.Free%20Tier%20Types=*all&amp;amp;awsf.Free%20Tier%20Categories=*all" rel="noopener noreferrer"&gt;free tier account&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM User:&lt;/strong&gt; Create an &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console" rel="noopener noreferrer"&gt;IAM user&lt;/a&gt; with the necessary permissions for Lambda, API Gateway, DynamoDB, etc.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CLI:&lt;/strong&gt; Install and configure the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-awscli.html" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; for command-line access if you haven’t.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mastodon API Credentials:&lt;/strong&gt; In this project, we’ll use the &lt;a href="https://docs.joinmastodon.org/methods/search/" rel="noopener noreferrer"&gt;Mastodon API&lt;/a&gt; to source our social media data. Follow this link to obtain your &lt;a href="https://www.npmjs.com/package/masto" rel="noopener noreferrer"&gt;access token&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Note: You can find the complete code &lt;a href="https://github.com/rovilay/social-sentiment-analytics-dashboard" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let's get started! We'll break this down into manageable steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Creating a secret in Secrets Manager&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We’ll create a secret in AWS Secrets Manager to store our Mastodon API credentials.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the AWS Secrets Manager console
&lt;/li&gt;
&lt;li&gt;Click the &lt;code&gt;Store a new secret&lt;/code&gt; button &lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;Other type of secret&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Add your Mastodon API credentials

&lt;ul&gt;
&lt;li&gt;MASTODON_INSTANCE_URL
&lt;/li&gt;
&lt;li&gt;We use &lt;a href="https://mastodon.social" rel="noopener noreferrer"&gt;&lt;code&gt;https://mastodon.social&lt;/code&gt;&lt;/a&gt; for this project
&lt;/li&gt;
&lt;li&gt;MASTODON_ACCESS_TOKEN
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Save and note the Secret ARN (we will use it later 😉)&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Creating an S3 bucket for code storage&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We’ll create an S3 bucket to store the compiled Lambda code and CloudFormation templates for this project. To do this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the Amazon S3 console.
&lt;/li&gt;
&lt;li&gt;Click &lt;code&gt;Create bucket&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Provide a unique name for your bucket and use all default settings.
&lt;/li&gt;
&lt;li&gt;Your bucket URL is &lt;code&gt;https://{YOUR-UNIQUE-BUCKET-NAME}.s3.amazonaws.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;We will use this URL throughout this project&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Defining the DynamoDB Table with CloudFormation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We'll use DynamoDB to store our sentiment data. Here's the CloudFormation template (&lt;code&gt;dynamodb-table.yaml&lt;/code&gt;) that defines our table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  SentimentDataTable:
    Type: 'AWS::DynamoDB::Table'
    Properties:
      TableName: SentimentDataTable
      AttributeDefinitions:
        - 
          AttributeName: DataId
          AttributeType: S
      KeySchema:
        - 
          AttributeName: DataId
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 5
Outputs:
  SentimentDataTableName:
    Value: !Ref SentimentDataTable
    Description: Name of the Sentiment data table
    Export:
      Name: SentimentDataTableName
  SentimentDataTableArn:
    Value: !GetAtt SentimentDataTable.Arn
    Description: ARN of the Sentiment data table
    Export: 
      Name: SentimentDataTableArn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Explanation:&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The Resources section defines the resources we want to create. Here, we're creating a DynamoDB table named &lt;code&gt;SentimentDataTable&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;Properties specify the table's configuration, including:
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TableName:&lt;/strong&gt; The name of the table.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AttributeDefinitions:&lt;/strong&gt; The attributes that make up the table's schema. We define DataId as a String attribute.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;KeySchema:&lt;/strong&gt; The primary key for the table. We use DataId as the hash key.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ProvisionedThroughput:&lt;/strong&gt; The read and write capacity for the table. For this tutorial, we'll use a basic configuration.
&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;Outputs&lt;/em&gt; section defines values that are returned when you create the CloudFormation stack. Here, the table name and ARN are exported, which can be used by other stacks.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt; You would deploy this template using the AWS CLI or the AWS Management Console. We will do this later (stay tuned 🙂).  &lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;After deployment, navigate to the AWS console to view the &lt;code&gt;SentimentDataTable&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnt2ke0af2oec7pwj3qj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnt2ke0af2oec7pwj3qj.png" alt="DynamoDB table" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Building the Lambda Functions&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Lambda is the heart of our serverless backend. We'll create three main Lambda functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;data-collection-function:&lt;/strong&gt; This function will fetch data from a social media source (&lt;a href="https://mastodon.social" rel="noopener noreferrer"&gt;Mastodon&lt;/a&gt;) and send it to a Kinesis stream.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;sentiment-analysis-function:&lt;/strong&gt; This function will process the data from the Kinesis stream, analyze the sentiment of the text, and store the results in DynamoDB.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;api-handlers-function:&lt;/strong&gt; This function will handle API requests from the frontend, querying the sentiment data from DynamoDB.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;4.1 Data Collection Function&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Here's the code for the &lt;code&gt;data-collection-function&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { KinesisClient, PutRecordCommand, PutRecordCommandInput } from "@aws-sdk/client-kinesis";
import { SecretsManagerClient, GetSecretValueCommand } from "@aws-sdk/client-secrets-manager";
import { Handler } from 'aws-lambda';
import { createRestAPIClient } from 'masto';

const kinesisClient = new KinesisClient({});
const secretsManagerClient = new SecretsManagerClient({});

export const handler: Handler = async (event: any): Promise&amp;lt;{ statusCode: number, body: string }&amp;gt; =&amp;gt; {
  try {
    // 1. Retrieve API credentials from Secrets Manager
    const secretResponse = await secretsManagerClient.send(
      new GetSecretValueCommand({ SecretId: "social-media-analytics-secrets-manager" })
    );
    const secrets = JSON.parse(secretResponse.SecretString || "{}");

    const masto = createRestAPIClient({
      url: secrets.MASTODON_INSTANCE_URL || '',
      accessToken: secrets.MASTODON_ACCESS_TOKEN || '',
    });

    // 2. Fetch toots based on your criteria (e.g., keywords, hashtags)
    const toots = await masto.v2.search.list({
      q: "crypto",
      type: "statuses",
      limit: 40
    })

    // 3. Iterate through toots and send them to Kinesis
    const kinesisStreamName = "SocialMediaDataStream";
    const encoder = new TextEncoder()

    await Promise.allSettled((toots.statuses ?? []).map((toot) =&amp;gt; {
      const text = toot.content.replace(/&amp;lt;[^&amp;gt;]+&amp;gt;/g, '');
      const postId = toot.id;
      const createdAt = toot.createdAt;
      const authorUsername = toot.account.username;

      const data = {
        PostId: postId,
        Text: text,
        CreatedAt: createdAt,
        AuthorUsername: authorUsername,
      }

      // Prepare data for Kinesis
      const recordParams: PutRecordCommandInput = {
        Data: encoder.encode(JSON.stringify(data)),
        PartitionKey: postId,
        StreamName: kinesisStreamName
      };

      return kinesisClient.send(new PutRecordCommand(recordParams));
    }));

    return {
      statusCode: 200,
      body: "Successfully sent toots to Kinesis",
    };
  } catch (error) {
    console.error("Error processing toots:", error);
    return {
      statusCode: 500,
      body: "Error processing toots",
    };
  }
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Explanation:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The function uses the &lt;code&gt;SecretsManagerClient&lt;/code&gt; to retrieve the &lt;a href="https://docs.joinmastodon.org/methods/search/" rel="noopener noreferrer"&gt;Mastodon API&lt;/a&gt; credentials. This is a best practice for security, as it avoids hardcoding sensitive information in your code.
&lt;/li&gt;
&lt;li&gt;It then uses the &lt;a href="https://www.npmjs.com/package/masto" rel="noopener noreferrer"&gt;Masto library&lt;/a&gt; to fetch "toots" (posts) from Mastodon. In this case, we are searching for “crypto” related posts.
&lt;/li&gt;
&lt;li&gt;For each toot, it extracts the relevant data (post ID, text, creation date, author) and sends it to a Kinesis stream using the KinesisClient. The PartitionKey is set to the &lt;code&gt;postId&lt;/code&gt; for even data distribution across Kinesis shards.
&lt;/li&gt;
&lt;li&gt;For error handling, we log the error to the console and return a &lt;code&gt;500&lt;/code&gt; status.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
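&lt;p&gt;One detail worth pausing on: Mastodon returns toot content as HTML, so the function strips tags with a regex before sending the text downstream. Here is that step in isolation (the &lt;code&gt;stripHtmlTags&lt;/code&gt; helper name is ours, for illustration, not from the repo):&lt;/p&gt;

```typescript
// Hypothetical helper mirroring the tag-stripping regex in data-collection-function.
// Note: this naive regex drops tags but keeps HTML entities, which is fine for
// sentiment analysis but is not a general-purpose HTML sanitizer.
const stripHtmlTags = (html: string): string => html.replace(/<[^>]+>/g, '');

// A typical toot body, as returned by the Mastodon API
const raw = '<p>Bitcoin is up today! <a href="https://example.com">#crypto</a></p>';
console.log(stripHtmlTags(raw)); // Bitcoin is up today! #crypto
```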

&lt;h4&gt;
  
  
  &lt;strong&gt;4.2 Sentiment Analysis Function&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Here's the code for the &lt;code&gt;sentiment-analysis-function&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { ComprehendClient, DetectSentimentCommand } from "@aws-sdk/client-comprehend";
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";
import { Handler, KinesisStreamEvent } from "aws-lambda";

const comprehendClient = new ComprehendClient({});
const dynamoDBClient = new DynamoDBClient({});

export const handler: Handler = async (event: KinesisStreamEvent) =&amp;gt; {
    try {
      // 1. Process each record (toot)
      for (const record of event.Records || []) {
        if (!record?.kinesis?.data) continue

        // 2. Decode the base64-encoded record data into the toot object
        const toot: {
          PostId: string,
          Text: string,
          AuthorUsername: string,
          CreatedAt: string
        } = JSON.parse(Buffer.from(record.kinesis.data, 'base64').toString())

        // 3. Detect sentiment using Comprehend
        const sentimentResponse = await comprehendClient.send(
          new DetectSentimentCommand({
            LanguageCode: "en",
            Text: toot.Text,
          })
        );

        // 4. Store toot and sentiment in DynamoDB
        const putItemParams = {
          TableName: "SentimentDataTable",
          Item: {
            DataId: { S: toot.PostId },
            Text: { S: toot.Text },
            AuthorUsername: { S: toot.AuthorUsername },
            CreatedAt: { S: toot.CreatedAt },
            Sentiment: { S: sentimentResponse.Sentiment || "UNKNOWN" },
            SentimentScore: {
              M: {
                Positive: { N: sentimentResponse.SentimentScore?.Positive?.toString() || "0" },
                Negative: { N: sentimentResponse.SentimentScore?.Negative?.toString() || "0" },
                Neutral: { N: sentimentResponse.SentimentScore?.Neutral?.toString() || "0" },
                Mixed: { N: sentimentResponse.SentimentScore?.Mixed?.toString() || "0" },
              },
            },
          },
        };
        await dynamoDBClient.send(new PutItemCommand(putItemParams));
      }
    } catch (error) {
      console.error("Error processing records:", error);
    }
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Explanation:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;This function is triggered by new data arriving in the Kinesis stream. Lambda delivers the batch of records in the event payload, so the function reads them directly from &lt;code&gt;event.Records&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;For each record, it parses the data and extracts the &lt;code&gt;toot&lt;/code&gt; information.
&lt;/li&gt;
&lt;li&gt;It then uses the &lt;code&gt;ComprehendClient&lt;/code&gt; to detect the sentiment of the toot's text.
&lt;/li&gt;
&lt;li&gt;Finally, it stores the &lt;code&gt;toot&lt;/code&gt; data and the sentiment analysis results in the &lt;code&gt;DynamoDB&lt;/code&gt; table using the &lt;code&gt;DynamoDBClient&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
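&lt;p&gt;A common stumbling block here is that Kinesis hands the Lambda each record's &lt;code&gt;Data&lt;/code&gt; payload base64-encoded. A standalone sketch of the decode step (the sample toot below is made up):&lt;/p&gt;

```typescript
// Simulate what data-collection-function sends to Kinesis...
const sent = {
  PostId: "114001",
  Text: "crypto is wild",
  AuthorUsername: "alice",
  CreatedAt: "2025-03-25T04:00:00Z",
};
const encoded = Buffer.from(JSON.stringify(sent)).toString("base64"); // what arrives in record.kinesis.data

// ...and the decode step sentiment-analysis-function performs for each record
const toot = JSON.parse(Buffer.from(encoded, "base64").toString());
console.log(toot.Text); // crypto is wild
```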

&lt;h4&gt;
  
  
  &lt;strong&gt;4.3 API Handlers Function&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Here's the code for the &lt;code&gt;api-handlers-function&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { APIGatewayProxyEvent, APIGatewayProxyResult, Context } from 'aws-lambda';
import { DynamoDBClient, ExecuteStatementCommand } from '@aws-sdk/client-dynamodb';

const TABLE_NAME = 'SentimentDataTable';
const VALID_SENTIMENTS = ['POSITIVE', 'NEGATIVE', 'NEUTRAL', 'MIXED'];
const dynamoDBClient = new DynamoDBClient({});

export const handler = async (event: APIGatewayProxyEvent): Promise&amp;lt;APIGatewayProxyResult&amp;gt; =&amp;gt; {
  const headers = {
    "Access-Control-Allow-Headers" : "Content-Type",
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "OPTIONS,POST,GET"
}

  try {
    // 1. Extract relevant data from the API Gateway event
    const { httpMethod, requestContext, queryStringParameters } = event;

    // 2. Determine the requested action based on the path or method
    if (httpMethod === 'GET' &amp;amp;&amp;amp; requestContext.resourcePath === '/sentiment/{keyword}') {
        // 3. Validate keyword parameter
        const keyword = event.pathParameters?.keyword;
        if (!keyword) {
            return { statusCode: 400, body: JSON.stringify({ error: 'Missing keyword' }) };
        }

        // 4. Validate sentiment parameter
        const sentiment = (queryStringParameters?.sentiment || '').toUpperCase(); 
        if (sentiment &amp;amp;&amp;amp; !VALID_SENTIMENTS.includes(sentiment)) {
          return { statusCode: 400, body: JSON.stringify({ error: `Invalid sentiment value: allowed values are ${VALID_SENTIMENTS.join(", ")}` }) };
        }

        // 5. Fetch data from DynamoDB based on the request
        const sentimentData = await getSentimentData(keyword, sentiment);

        // 6. Format the response
        return { statusCode: 200, headers, body: JSON.stringify(sentimentData ?? []) };
    }


    return { statusCode: 404, headers, body: JSON.stringify({ error: 'Not found' }) };
  } catch (error) {
    console.error('Error processing request:', error);
    return { statusCode: 500, headers, body: JSON.stringify({ error: 'Internal server error' }) };
  }
};

const getSentimentData = async (keyword: string, sentiment?: string) =&amp;gt; {
    // Use parameterized PartiQL statements so user-supplied values are never
    // interpolated directly into the query string
    let statement = `SELECT * FROM "${TABLE_NAME}" WHERE contains(Text, ?)`;
    let parameters = [{ S: keyword }];

    if (sentiment) {
        statement = `SELECT * FROM "${TABLE_NAME}" WHERE Sentiment = ? AND contains(Text, ?)`;
        parameters = [{ S: sentiment }, { S: keyword }];
    }

    // Execute the PartiQL statement
    const command = new ExecuteStatementCommand({
        Statement: statement,
        Parameters: parameters,
    });

    const response = await dynamoDBClient.send(command);

    return response.Items;
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Explanation:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;This function handles requests from external users and is triggered by an API Gateway request.
&lt;/li&gt;
&lt;li&gt;It extracts the keyword and sentiment parameters from the request.
&lt;/li&gt;
&lt;li&gt;It uses the &lt;code&gt;DynamoDBClient&lt;/code&gt; to query the &lt;code&gt;SentimentDataTable&lt;/code&gt; using a &lt;code&gt;PartiQL&lt;/code&gt; SELECT statement. The query filters the data based on the provided keyword and sentiment (optional).
&lt;/li&gt;
&lt;li&gt;It then returns the data in a JSON format.
&lt;/li&gt;
&lt;li&gt;Note: since we are using the &lt;code&gt;AWS_PROXY&lt;/code&gt; &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-method-integration.html" rel="noopener noreferrer"&gt;Method Integration type&lt;/a&gt; (refer to &lt;code&gt;api-gateway.yaml&lt;/code&gt;), it is important to send the &lt;code&gt;headers&lt;/code&gt; in the response object to prevent CORS errors.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
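&lt;p&gt;The sentiment filter is easy to get wrong from the client side, so here is the validation rule in isolation: the parameter is optional, case-insensitive, and must match one of Comprehend's four labels. A sketch (the &lt;code&gt;validateSentiment&lt;/code&gt; helper is ours, for illustration, not from the repo):&lt;/p&gt;

```typescript
// Mirrors the validation in api-handlers-function: an empty/absent value is
// allowed; anything else must be one of Comprehend's sentiment labels.
const VALID_SENTIMENTS = ["POSITIVE", "NEGATIVE", "NEUTRAL", "MIXED"];

const validateSentiment = (raw?: string): { ok: boolean; value: string } => {
  const value = (raw || "").toUpperCase();
  return { ok: value === "" || VALID_SENTIMENTS.includes(value), value };
};

console.log(validateSentiment("positive").ok); // true
console.log(validateSentiment("angry").ok);    // false
console.log(validateSentiment(undefined).ok);  // true
```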

&lt;h3&gt;
  
  
  &lt;strong&gt;5. Setting up API Gateway with CloudFormation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We'll use CloudFormation to define our API Gateway. Here's the &lt;code&gt;api-gateway.yaml&lt;/code&gt; template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Parameters:
  apiGatewayName:
    Type: String
    Default: SentimentAPI
  apiGatewayStageName:
    Type: String
    AllowedPattern: '[a-z0-9]+'
    Default: dev
  apiGatewayHTTPMethod:
    Type: String
    Default: GET

Resources:
  SentimentAPI:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: SentimentAPI
      EndpointConfiguration:
        Types:
          - REGIONAL

  SentimentAPIResource:
    Type: AWS::ApiGateway::Resource
    DependsOn:
      - SentimentAPI
    Properties:
      RestApiId: !Ref SentimentAPI
      ParentId: !GetAtt SentimentAPI.RootResourceId
      PathPart: 'sentiment'

  SentimentKeywordResource:
    Type: AWS::ApiGateway::Resource
    DependsOn:
      - SentimentAPI
    Properties:
      RestApiId: !Ref SentimentAPI
      ParentId: !Ref SentimentAPIResource
      PathPart: '{keyword}'

  SentimentAPIMethod:
    Type: AWS::ApiGateway::Method
    DependsOn:
      - SentimentAPI
      - SentimentAPIResource
    Properties:
      AuthorizationType: NONE
      ApiKeyRequired: false
      HttpMethod: !Ref apiGatewayHTTPMethod
      RequestParameters:
        method.request.path.keyword: true
      MethodResponses:
        - StatusCode: 200
          ResponseModels:
            "application/json": "Empty"
      Integration:
        IntegrationHttpMethod: POST
        Type: AWS_PROXY
        Uri: !Sub 
          - arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${ApiHandlersFunctionArn}/invocations
          - ApiHandlersFunctionArn: !ImportValue ApiHandlersFunctionArn
      RestApiId: !Ref SentimentAPI
      ResourceId: !Ref SentimentKeywordResource

  SentimentAPIDeployment:
    Type: AWS::ApiGateway::Deployment
    DependsOn: SentimentAPIMethod
    Properties:
      RestApiId: !Ref SentimentAPI
      StageName: !Ref apiGatewayStageName

  SentimentAPIPermission:
    Type: AWS::Lambda::Permission
    DependsOn: SentimentAPI
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !ImportValue ApiHandlersFunctionName
      Principal: apigateway.amazonaws.com
      SourceArn: !Sub arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${SentimentAPI.RestApiId}/*/*

Outputs:
  apiGatewayInvokeURL:
    Value: !Sub https://${SentimentAPI}.execute-api.${AWS::Region}.amazonaws.com/${apiGatewayStageName}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Explanation:&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;This template defines the API Gateway and its resources.
&lt;/li&gt;
&lt;li&gt;It creates a REST API named SentimentAPI.
&lt;/li&gt;
&lt;li&gt;It defines a resource &lt;code&gt;/sentiment/{keyword}&lt;/code&gt;, where &lt;code&gt;{keyword}&lt;/code&gt; is a path parameter.
&lt;/li&gt;
&lt;li&gt;It creates a &lt;code&gt;GET&lt;/code&gt; method for this resource.
&lt;/li&gt;
&lt;li&gt;The Integration property is crucial:
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Type: AWS_PROXY&lt;/code&gt;: This tells API Gateway to forward the entire request to the ApiHandlersFunction Lambda function.
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Uri&lt;/code&gt;: This specifies the ARN of the Lambda function to invoke. The &lt;code&gt;!Sub&lt;/code&gt; syntax is used to substitute the actual function ARN, which is obtained from the output value of the &lt;code&gt;api-handlers-function&lt;/code&gt; CloudFormation stack.
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AWS::Lambda::Permission&lt;/code&gt;: This resource grants API Gateway permission to invoke the Lambda function.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78c6dnx1h8z7anya87a3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78c6dnx1h8z7anya87a3.png" alt="API Gateway" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;
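&lt;p&gt;Once the stack is up, a frontend can call the endpoint using the &lt;code&gt;apiGatewayInvokeURL&lt;/code&gt; stack output. A sketch of building the request URL (the invoke URL below is a placeholder, and &lt;code&gt;buildSentimentUrl&lt;/code&gt; is our helper, not from the repo):&lt;/p&gt;

```typescript
// keyword is a path parameter; sentiment is an optional query string parameter.
const buildSentimentUrl = (invokeUrl: string, keyword: string, sentiment?: string): string => {
  const base = `${invokeUrl}/sentiment/${encodeURIComponent(keyword)}`;
  return sentiment ? `${base}?sentiment=${encodeURIComponent(sentiment)}` : base;
};

const invokeUrl = "https://abc123.execute-api.us-east-1.amazonaws.com/dev"; // placeholder stage URL
console.log(buildSentimentUrl(invokeUrl, "crypto", "POSITIVE"));
// https://abc123.execute-api.us-east-1.amazonaws.com/dev/sentiment/crypto?sentiment=POSITIVE
```

&lt;p&gt;The dashboard would then &lt;code&gt;fetch&lt;/code&gt; this URL and render the returned JSON array of sentiment items.&lt;/p&gt;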

&lt;h3&gt;
  
  
  &lt;strong&gt;6. Tying it All Together with a Main Stack&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To simplify deployment, we'll create a "main" CloudFormation stack (&lt;code&gt;main-stack.yaml&lt;/code&gt;) that references the other stacks. This helps manage dependencies and ensures resources are created in the correct order.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Parameters:
  BucketName:
    Type: String
    Description: Unique name for the S3 bucket
    Default: {YOUR-UNIQUE-BUCKET-NAME}
  BucketURL:
    Type: String
    Description: S3 bucket URL
    Default: https://{YOUR-UNIQUE-BUCKET-NAME}.s3.amazonaws.com
  EnvVariablesAndCredentials:
    Type: String
    Description: Credentials
    Default: {YOUR-SECRETS-MANAGER-ARN}

Resources:
  KinesisDataStreamStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Sub
        - ${BucketURL}/infrastructure/kinesis-data-stream.yaml
        - BucketURL: !Ref BucketURL

  DataCollectionFunctionStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Sub
        - ${BucketURL}/infrastructure/data-collection-function.yaml
        - BucketURL: !Ref BucketURL
      Parameters:
        BucketName: !Ref BucketName
        EnvVariablesAndCredentials: !Ref EnvVariablesAndCredentials
    DependsOn:
      - KinesisDataStreamStack

  DynamoDBTableStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Sub
        - ${BucketURL}/infrastructure/dynamodb-table.yaml
        - BucketURL: !Ref BucketURL

  SentimentAnalysisFunctionStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Sub
        - ${BucketURL}/infrastructure/sentiment-analysis-function.yaml
        - BucketURL: !Ref BucketURL
      Parameters:
        BucketName: !Ref BucketName
    DependsOn:
      - DynamoDBTableStack
      - KinesisDataStreamStack

  ApiHandlersFunctionStack:
    Type: AWS::CloudFormation::Stack
    DependsOn:
      - DynamoDBTableStack
    Properties:
      TemplateURL: !Sub
        - ${BucketURL}/infrastructure/api-handlers-function.yaml
        - BucketURL: !Ref BucketURL
      Parameters:
        BucketName: !Ref BucketName

  ApiGatewayStack:
    Type: AWS::CloudFormation::Stack
    DependsOn:
      - ApiHandlersFunctionStack
    Properties:
      TemplateURL: !Sub
        - ${BucketURL}/infrastructure/api-gateway.yaml
        - BucketURL: !Ref BucketURL

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Explanation:&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;This template defines the overall application stack.
&lt;/li&gt;
&lt;li&gt;It uses &lt;code&gt;AWS::CloudFormation::Stack&lt;/code&gt; resources to reference the other CloudFormation templates (for Kinesis, Data Collection, DynamoDB, Sentiment Analysis, API Handlers, and API Gateway).
&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;DependsOn&lt;/code&gt; property is used to specify dependencies between the stacks, ensuring they are created in the correct order. For example, the &lt;code&gt;SentimentAnalysisFunctionStack&lt;/code&gt; depends on the &lt;code&gt;DynamoDBTableStack&lt;/code&gt; and &lt;code&gt;KinesisDataStreamStack&lt;/code&gt; because the Sentiment Analysis function needs the DynamoDB table and Kinesis stream to be created first.
&lt;/li&gt;
&lt;li&gt;Parameters like &lt;code&gt;BucketName&lt;/code&gt;, &lt;code&gt;BucketURL&lt;/code&gt;, and &lt;code&gt;EnvVariablesAndCredentials&lt;/code&gt; are used to pass configuration values to the nested stacks.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;7. Deploying to AWS&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We’ll deploy our code to AWS using the &lt;code&gt;aws-cli&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Since our Lambda code is written in TypeScript and has some external dependencies, we will bundle and compile it to JavaScript. We will use &lt;a href="https://www.npmjs.com/package/esbuild" rel="noopener noreferrer"&gt;esbuild&lt;/a&gt; for this; make sure it is installed globally or in your project dependencies.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;esbuild ./src/index.ts \--bundle \--minify \--sourcemap \--platform=node \--target=es2020 \--outfile=dist/index.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;After building the Lambda functions, we’ll zip the bundled code.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd dist &amp;amp;&amp;amp; zip \-r {function-name}.zip index.js\*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now, we’ll upload our zip files and CloudFormation templates to S3.
Upload the zipped files:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  aws s3 cp ./dist/{function-name}.zip s3://{YOUR-UNIQUE-BUCKET-NAME}.}/{function-name}.zip  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upload the CloudFormation templates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 cp backend/infrastructure/lib/ s3://{YOUR-UNIQUE-BUCKET-NAME}/infrastructure/ \--recursive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Finally, we will deploy our stack using AWS CloudFormation.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   aws cloudformation create-stack \  
      --stack-name social-sentiment-backend-stack \ 
      --template-url https://{YOUR-UNIQUE-BUCKET-NAME}.}.s3.amazonaws.com/infrastructure/main-stack.yaml \ 
      --capabilities CAPABILITY\_NAMED\_IAM CAPABILITY\_AUTO\_EXPAND

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3n7reuk1oy37skwwo8e2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3n7reuk1oy37skwwo8e2.png" alt="CloudFormation stack" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;8. Connecting the Frontend&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We will create a simple analytics dashboard that retrieves the sentiment data via the sentiment API (API Gateway) URL. To connect the frontend to this API, use the URL provided in the CloudFormation stack's output (&lt;code&gt;apiGatewayInvokeURL&lt;/code&gt;), or navigate to the API Gateway console and copy the invoke URL. For example, if the URL is &lt;a href="https://your-api-gateway-id.execute-api.us-east-1.amazonaws.com/dev" rel="noopener noreferrer"&gt;https://your-api-gateway-id.execute-api.us-east-1.amazonaws.com/dev&lt;/a&gt;, you would make a GET request to &lt;a href="https://your-api-gateway-id.execute-api.us-east-1.amazonaws.com/dev/sentiment/keyword?sentiment=POSITIVE" rel="noopener noreferrer"&gt;https://your-api-gateway-id.execute-api.us-east-1.amazonaws.com/dev/sentiment/keyword?sentiment=POSITIVE&lt;/a&gt; to get all positive sentiments for "keyword".&lt;br&gt;&lt;br&gt;
Note: this tutorial only supports the &lt;code&gt;crypto&lt;/code&gt; keyword.&lt;/p&gt;
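&lt;p&gt;As a small sketch of how the dashboard might assemble that request URL (the base URL below is the same placeholder as above, and &lt;code&gt;buildSentimentUrl&lt;/code&gt; is a hypothetical helper, not part of the tutorial's code):&lt;/p&gt;

```typescript
// Hypothetical helper: build the sentiment query URL from the
// API Gateway invoke URL, a keyword, and a sentiment filter.
function buildSentimentUrl(
  baseUrl: string,
  keyword: string,
  sentiment: "POSITIVE" | "NEGATIVE" | "NEUTRAL" | "MIXED"
): string {
  // Trim a trailing slash so the path doesn't end up with "//".
  const base = baseUrl.replace(/\/$/, "");
  return `${base}/sentiment/${encodeURIComponent(keyword)}?sentiment=${sentiment}`;
}

const url = buildSentimentUrl(
  "https://your-api-gateway-id.execute-api.us-east-1.amazonaws.com/dev",
  "crypto",
  "POSITIVE"
);
console.log(url);
// → https://your-api-gateway-id.execute-api.us-east-1.amazonaws.com/dev/sentiment/crypto?sentiment=POSITIVE
```

&lt;p&gt;The dashboard would then issue a GET request (e.g. with &lt;code&gt;fetch&lt;/code&gt;) to this URL and render the returned records.&lt;/p&gt;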

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5cjosxvdelwea6jh58o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5cjosxvdelwea6jh58o.png" alt="API Gateway Invoke URL" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fr5f5dfg9gxx161q108.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fr5f5dfg9gxx161q108.png" alt="Sentiment Analysis Dashboard" width="800" height="626"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;9. Clean up:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To avoid unnecessary and unforeseen costs, it is good practice to clean up your AWS resources. We just have to delete our stack by running this 👇🏿 CLI command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudformation delete-stack --stack-name social-sentiment-backend-stack  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, navigate to your AWS console to delete your project bucket and the Secrets Manager secrets for the project.&lt;/p&gt;

&lt;p&gt;And there you have it! We've built a serverless backend for a social media sentiment analytics dashboard using AWS Lambda, API Gateway, DynamoDB, Amazon Kinesis, EventBridge, and CloudFormation. This is just the beginning. You can further enhance it by adding more features, visualizations, and social media integrations.&lt;/p&gt;

&lt;p&gt;Share your thoughts and questions in the comments below.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Host a Static Website on Amazon Elastic Kubernetes Service (EKS)</title>
      <dc:creator>Ogooluwa Akinola</dc:creator>
      <pubDate>Thu, 06 Feb 2025 11:08:15 +0000</pubDate>
      <link>https://forem.com/ogooluwa/host-resume-on-amazon-elastic-kubernetes-service-eks-1i49</link>
      <guid>https://forem.com/ogooluwa/host-resume-on-amazon-elastic-kubernetes-service-eks-1i49</guid>
      <description>&lt;p&gt;In this article, we will explore how to deploy and run a static website (your resume in this case) on  Amazon EKS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technologies &amp;amp; Tools
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;amp;all-free-tier.sort-order=asc&amp;amp;awsf.Free%20Tier%20Types=*all&amp;amp;awsf.Free%20Tier%20Categories=*all" rel="noopener noreferrer"&gt;AWS Account (Free Tier)&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-awscli.html" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository" rel="noopener noreferrer"&gt;Docker Account&lt;/a&gt; - a version control for your code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;Kubectl&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://eksctl.io/installation/#" rel="noopener noreferrer"&gt;Eksctl&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository" rel="noopener noreferrer"&gt;Github Repo&lt;/a&gt; - a version control for your code&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
Prerequisites
&lt;/h2&gt;

&lt;h3&gt;
  
  
[Optional] GitHub Account &amp;amp; Repo
&lt;/h3&gt;

&lt;p&gt;Skip this if you don't intend to save your project on GitHub. Otherwise, follow these links to set up a &lt;a href="https://github.com/signup" rel="noopener noreferrer"&gt;GitHub Account&lt;/a&gt; &amp;amp; &lt;a href="https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository" rel="noopener noreferrer"&gt;Repo&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup AWS account and IAM User
&lt;/h3&gt;

&lt;p&gt;Create your &lt;a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;amp;all-free-tier.sort-order=asc&amp;amp;awsf.Free%20Tier%20Types=*all&amp;amp;awsf.Free%20Tier%20Categories=*all" rel="noopener noreferrer"&gt;AWS Free Tier account&lt;/a&gt; if you haven't. Then, follow these steps to create your &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console" rel="noopener noreferrer"&gt;IAM user&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup AWS CLI
&lt;/h3&gt;

&lt;p&gt;Skip this if you already have AWS CLI setup on your machine.&lt;/p&gt;

&lt;p&gt;Follow this &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-awscli.html" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; installation guide.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup Docker
&lt;/h3&gt;

&lt;p&gt;We need Docker to create our images and run containers locally; we also need a Docker account to store our images. &lt;a href="https://app.docker.com/signup" rel="noopener noreferrer"&gt;Sign up&lt;/a&gt; to Docker and follow the &lt;a href="https://docs.docker.com/desktop/setup/install/mac-install/" rel="noopener noreferrer"&gt;Docker setup guide&lt;/a&gt; to get Docker installed on your machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup Kubernetes CLI
&lt;/h3&gt;

&lt;p&gt;Kubernetes provides a command line tool for communicating with a Kubernetes cluster's control plane, using the Kubernetes API. This tool is named kubectl. Follow this &lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;installation guide&lt;/a&gt; to get started.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup EKS CLI
&lt;/h3&gt;

&lt;p&gt;Eksctl is a simple CLI tool for creating and managing clusters on EKS - Amazon's managed Kubernetes service for EC2. Follow this &lt;a href="https://eksctl.io/installation/#" rel="noopener noreferrer"&gt;installation guide&lt;/a&gt; to get started.&lt;/p&gt;

&lt;p&gt;⚠️ PLEASE NOTE: &lt;a href="https://aws.amazon.com/eks/pricing/" rel="noopener noreferrer"&gt;Amazon EKS costs&lt;/a&gt; &lt;code&gt;10 cents&lt;/code&gt; per cluster per hour. Although that's relatively cheap, I wanted to let you know before we get started. We will make sure to clean up once we are done with this tutorial so as not to incur any hidden costs.&lt;/p&gt;

&lt;p&gt;Now let's get started! 😀&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1
&lt;/h3&gt;

&lt;p&gt;Create a folder for your project &lt;code&gt;EKS-Static-Resume-Website&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir EKS-Static-Resume-Website
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2
&lt;/h3&gt;

&lt;p&gt;Change directory to &lt;code&gt;EKS-Static-Resume-Website&lt;/code&gt; and create the necessary files&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd EKS-Static-Resume-Website
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch index.html Dockerfile loadbalancerservice.yaml style.css
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3
&lt;/h3&gt;

&lt;p&gt;Convert your resume to HTML (you can use AI tools such as ChatGPT or Gemini for that, or copy my resume from my &lt;a href="https://github.com/rovilay/resumestaticwebsite/blob/main/index.html" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;), then copy it into the &lt;code&gt;index.html&lt;/code&gt; file, and copy the generated CSS into the &lt;code&gt;style.css&lt;/code&gt; file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4
&lt;/h3&gt;

&lt;p&gt;Copy this &lt;a href="https://github.com/rovilay/resumestaticwebsite/blob/main/Dockerfile" rel="noopener noreferrer"&gt;Dockerfile&lt;/a&gt; 👇🏿 into the &lt;code&gt;Dockerfile&lt;/code&gt;. This is how we create the Docker image for our resume website.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM httpd
COPY index.html /usr/local/apache2/htdocs/index.html
COPY style.css /usr/local/apache2/htdocs/style.css
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM httpd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are building our image 👆🏿 from the &lt;a href="https://hub.docker.com/_/httpd" rel="noopener noreferrer"&gt;Apache HTTP server image&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY index.html /usr/local/apache2/htdocs/index.html
COPY style.css /usr/local/apache2/htdocs/style.css
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We copy the &lt;code&gt;index.html&lt;/code&gt; and &lt;code&gt;style.css&lt;/code&gt; files into the &lt;code&gt;/usr/local/apache2/htdocs&lt;/code&gt; folder. The &lt;code&gt;htdocs&lt;/code&gt; folder is the server root, where Apache serves files from.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5
&lt;/h3&gt;

&lt;p&gt;We are going to build our Docker image. In your project root, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build --platform linux/amd64 -t {docker-account-username}/custom-httpd .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace {docker-account-username} with your real Docker account username; for me it's &lt;code&gt;rovilay&lt;/code&gt;. The &lt;code&gt;-t&lt;/code&gt; flag tags the built image with the name &lt;code&gt;rovilay/custom-httpd&lt;/code&gt;. We specify &lt;code&gt;--platform linux/amd64&lt;/code&gt; to build an image compatible with our EC2 instances.&lt;/p&gt;

&lt;p&gt;Your image is still on your local machine. To confirm, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvgf4ytiufnw8cjvthcqr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvgf4ytiufnw8cjvthcqr.png" alt="docker images" width="800" height="37"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 5.1
&lt;/h4&gt;

&lt;p&gt;You can run your Docker image locally just to confirm everything is fine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -p 8080:80 {docker-account-username}/custom-httpd 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7pql8870tz2rebvrslq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7pql8870tz2rebvrslq.png" alt="docker run" width="800" height="42"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To check if the container is running, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1olt6z6xsgy38p555bch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1olt6z6xsgy38p555bch.png" alt="docker ps" width="800" height="44"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to &lt;code&gt;localhost:8080&lt;/code&gt; in your browser to see your resume.&lt;/p&gt;

&lt;p&gt;Now run 👇🏿 to stop the container&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker stop {container ID}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 5.2
&lt;/h4&gt;

&lt;p&gt;We will push our image to Docker Hub. To do that, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push rovilay/custom-httpd:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can visit your &lt;a href="https://hub.docker.com/repository/docker/rovilay/custom-httpd/general" rel="noopener noreferrer"&gt;Docker account&lt;/a&gt; on Docker Hub to view the image.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6
&lt;/h3&gt;

&lt;p&gt;Now we are going to set up our Kubernetes configuration, a.k.a. the manifest file. Copy this &lt;a href="https://github.com/rovilay/resumestaticwebsite/blob/main/loadbalancerservice.yaml" rel="noopener noreferrer"&gt;loadbalancerservice.yaml&lt;/a&gt; 👇🏿 into the &lt;code&gt;loadbalancerservice.yaml&lt;/code&gt; file. Don't feel overwhelmed, I will explain the code. 😅&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: lb-service
  labels:
    app: lb-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  minReadySeconds: 30
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend-container
        image: rovilay/custom-httpd

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: lb-service
  labels:
    app: lb-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: frontend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first part 👆🏿 creates the LoadBalancer service, a Kubernetes Service that exposes the container (our resume image) running within the pods labeled &lt;code&gt;frontend&lt;/code&gt; (more on pods later). I won't bore you with the extra Kubernetes details.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  minReadySeconds: 30
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend-container
        image: rovilay/custom-httpd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;Deployment&lt;/code&gt; manages our pods and, by extension, our containers; basically, each pod runs one container (note: a pod can run multiple containers, but usually, and in this case, we run one container per pod). The Deployment ensures that our cluster is running the right number of pod replicas (&lt;code&gt;replicas: 2&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    spec:
      containers:
      - name: frontend-container
        image: rovilay/custom-httpd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This part 👆🏿 is where we reference the Docker image &lt;code&gt;rovilay/custom-httpd&lt;/code&gt; that our container runs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7
&lt;/h3&gt;

&lt;p&gt;Create your Kubernetes cluster on EKS. To do this, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create cluster --node-type t2.micro --nodes 4  --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I for one love the random names AWS comes up with 😅, but you can add the &lt;code&gt;--name {your-cluster-name}&lt;/code&gt; flag if you want to name your cluster.&lt;/p&gt;

&lt;p&gt;It takes a while for your cluster to be fully set up on EKS, so take some time, relax 🛀🏿 and sip a cup of coffee. ☕️&lt;/p&gt;

&lt;p&gt;If your cluster is created successfully, you should see this 👇🏿 in your terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97xmyk18wrkg747tizpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97xmyk18wrkg747tizpv.png" alt="eksctl create cluster" width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to your EKS dashboard on AWS to see your cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0g6tgwv7p91gw4b4uy2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0g6tgwv7p91gw4b4uy2.png" alt="eks dashboard" width="800" height="104"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To view the EC2 instances provisioned in the cluster, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpfe16053r7ts7y4gpar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpfe16053r7ts7y4gpar.png" alt="kubectl get tnode" width="800" height="65"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjk70hxed1l7l4sy42xa5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjk70hxed1l7l4sy42xa5.png" alt="kubectl get nodes 2" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 8
&lt;/h3&gt;

&lt;p&gt;Deploy your container to the Kubernetes cluster&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f loadbalancerservice.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To confirm deployment, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68jn270k8b9seek4i77p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68jn270k8b9seek4i77p.png" alt="kubectl get all" width="800" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 9
&lt;/h3&gt;

&lt;p&gt;Navigate to the load balancer URL to view your hosted resume. Run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get services
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The URL is in the &lt;code&gt;EXTERNAL-IP&lt;/code&gt; column.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 10
&lt;/h3&gt;

&lt;p&gt;Clean up your cluster to avoid any unforeseen costs. Run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete deployment frontend-deployment

kubectl delete service lb-service 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now delete the cluster on EKS. Get the cluster name by running&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl get cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and run,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl delete cluster {cluster-name}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5u2st0ygv9awut4p7h1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5u2st0ygv9awut4p7h1x.png" alt="eksctl delete cluster" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it guys!!! 🥳🎉🍾&lt;/p&gt;

&lt;p&gt;We have successfully hosted a simple static web page on EKS. &lt;/p&gt;

&lt;p&gt;Follow for more content like this.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>eks</category>
      <category>cloud</category>
    </item>
    <item>
      <title>A Quick Guide to Databases on AWS: Choosing the Right Tool for the Job🧰</title>
      <dc:creator>Ogooluwa Akinola</dc:creator>
      <pubDate>Tue, 28 Jan 2025 04:16:32 +0000</pubDate>
      <link>https://forem.com/ogooluwa/a-quick-guide-to-databases-on-aws-choosing-the-right-tool-for-the-job-3jeb</link>
      <guid>https://forem.com/ogooluwa/a-quick-guide-to-databases-on-aws-choosing-the-right-tool-for-the-job-3jeb</guid>
      <description>&lt;p&gt;Hey cloud explorers! Welcome back to another episode of Cloud in List of Threes (CiLoTs) ☁️3️⃣, where we break down complex cloud concepts into bite-sized pieces, seasoned with fun analogies! 🤩 Today, we're taking a trip into the world of databases. ☁️🗄️  We'll explore three popular AWS database solutions: Relational, NoSQL, and Purpose-built!&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Relational Databases
&lt;/h2&gt;

&lt;p&gt;Imagine a well-organized library with books arranged neatly on shelves. Each book represents a data record, and the shelves represent tables. You can easily find the book you need using the library's catalogue system (SQL queries). But there is more, this library is always open, fully managed, can scale as needed and can be customized to suit specific needs i.e. a library based on archaeology, and another based on science only.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical translation 👨🏿‍💻:&lt;/strong&gt; Amazon RDS is a fully managed relational database service that stores data in structured tables with rows and columns. Relational databases use SQL (Structured Query Language) to manage and query data, ensuring data integrity and consistency. RDS makes it easy to set up, operate, and scale popular databases, offering a choice of six database engines: Aurora, MySQL, PostgreSQL, Oracle, Microsoft SQL Server, and MariaDB. Aurora is a fully AWS-managed relational database engine that's built for the cloud and compatible with MySQL and PostgreSQL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three Key Benefits&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;- Easy to Manage:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
 Amazon RDS is a managed service, which means you don’t have to worry about administrative overhead such as infrastructure maintenance, backups, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;- Wide Compatibility and Choice:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
 Amazon RDS supports a wide range of relational database engines such as MySQL, PostgreSQL, Oracle, SQL Server, and MariaDB, so you’re not stuck with one choice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;- High Availability:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
 Amazon RDS offers Multi-AZ deployments, which replicate your database across multiple Availability Zones and increase the availability of your database infrastructure. You don’t have to worry about downtime, and your data is always available.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. NoSQL Databases
&lt;/h2&gt;

&lt;p&gt;Imagine a toolbox, but not just any toolbox. This one is special because it has compartments of all shapes and sizes. Some are perfect for holding long screwdrivers, others are ideal for small screws and nails, and some can even hold oddly shaped tools that don't fit anywhere else. And just like you can quickly grab the right tool from your toolbox when you need it, NoSQL databases allow you to easily access the specific data you're looking for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical translation 👨🏿‍💻:&lt;/strong&gt; NoSQL databases store data in flexible schemas that scale easily. AWS offers a variety of NoSQL database solutions e.g. Amazon DynamoDB and DocumentDB. Amazon DynamoDB is a fully managed key-value database that is designed to be highly partitionable and scalable horizontally. It provides very high performance in terms of latency and scalability. Amazon DocumentDB (with MongoDB compatibility) is a fully managed JSON document database that stores data as JSON objects that are flexible, semi-structured, and hierarchical in nature.&lt;/p&gt;
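&lt;p&gt;To make "flexible schema" concrete, here is a small sketch (the item shapes, key names, and attributes are illustrative, not from any real application): two items in the same DynamoDB table only need to share their key attributes and can otherwise carry completely different fields.&lt;/p&gt;

```typescript
// Two items for the same hypothetical DynamoDB table. Only the key
// attributes (pk, sk) are required on every item; everything else can
// differ per item -- that is the "flexible schema" property.
interface Item {
  pk: string; // partition key
  sk: string; // sort key
  [attr: string]: unknown; // any other attributes, per item
}

const post: Item = {
  pk: "USER#ogooluwa",
  sk: "POST#2025-03-25",
  text: "Checkout my latest tutorial",
  likes: 42,
};

const sensorReading: Item = {
  pk: "DEVICE#thermo-1",
  sk: "READING#2025-03-25T04:26:00Z",
  temperatureC: 21.5,
  battery: { level: 0.87, charging: false }, // nested, semi-structured data
};

// Both items are valid rows of the same table despite different shapes.
console.log(Object.keys(post).join(","), "|", Object.keys(sensorReading).join(","));
```

&lt;p&gt;A relational table would force both records into one fixed set of columns; here each item simply carries the attributes it needs.&lt;/p&gt;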

&lt;p&gt;&lt;strong&gt;Three Key Benefits&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;- Scalability:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
 Imagine your toolbox growing as you get more tools. NoSQL databases excel at handling massive amounts of data and user traffic. This makes them ideal for applications that experience rapid growth or unpredictable spikes in usage. They can easily scale horizontally by distributing data across multiple servers, ensuring your application remains responsive even under heavy load. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;- Flexibility:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
 Just like the toolbox with different compartments, NoSQL databases can handle various data structures. This is crucial because modern applications often deal with data that doesn't fit neatly into rows and columns e.g. social media posts, sensor data from IoT devices, or even complete documents.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;- High Availability:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
 If one of the compartments in your toolbox breaks, you can still access the tools in the other compartments. Similarly, NoSQL databases are designed for continuous operation and fault tolerance. They often replicate data across multiple availability zones or regions, so even if one server fails, your data remains accessible. This ensures your application stays up and running, providing a seamless experience for your users.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Purpose-built Databases
&lt;/h2&gt;

&lt;p&gt;Imagine specialized tools for specific tasks, like a high-powered drill for construction 👷🏿 or a precision screwdriver for electronics repair.🪛 These specialized tools offer more efficiency and precision.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical translation 👨🏿‍💻:&lt;/strong&gt; AWS offers database services built and optimized for specific use cases. Amazon Neptune is a managed graph database, well suited for graph data such as social networks and recommendation engines. Amazon Timestream is a time series database, ideal for tracking changes over time, such as stock prices, sensor readings from IoT devices, or website traffic patterns. Amazon QLDB is a ledger database, perfect for situations where an accurate, auditable history is crucial, such as financial transactions, supply chain management, or voting systems.&lt;/p&gt;
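&lt;p&gt;As a rough illustration of what a graph database like Amazon Neptune is optimized for, here is a hand-rolled "friends-of-friends" traversal in plain Python (the data is made up; in Neptune you would express this as a Gremlin or SPARQL query rather than writing the traversal yourself):&lt;/p&gt;

```python
# Illustrative only: the kind of relationship traversal a graph database
# such as Amazon Neptune answers natively. The social graph here is made up.
friends = {
    "ada": {"grace", "alan"},
    "grace": {"ada", "linus"},
    "alan": {"ada"},
    "linus": {"grace"},
}

def friends_of_friends(person: str) -> set:
    """People two hops away, excluding the person and their direct friends."""
    direct = friends.get(person, set())
    two_hops = set()
    for friend in direct:
        two_hops |= friends.get(friend, set())
    return two_hops - direct - {person}

print(friends_of_friends("ada"))  # {'linus'}
```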

&lt;p&gt;&lt;strong&gt;Three Key Benefits&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;- Performance:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
 Just like the high-powered drill is specifically designed for drilling tasks, purpose-built databases are optimized for their specific data models and workloads. This means they can handle those tasks with exceptional speed and efficiency, outperforming general-purpose databases in those specific scenarios.  &lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;- Cost-Efficiency:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
 Purpose-built databases are designed to minimize unnecessary overhead and resource consumption. By focusing on specific workloads, they can offer cost savings compared to general-purpose databases that may require more resources to achieve the same level of performance for those particular tasks.  &lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;- Ease of Use:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
 Think of how a tool designed for a specific task is often easier to use than a general-purpose tool. Purpose-built databases simplify development and management for their targeted data types. They often come with specialized features, tools, and APIs that make it easier to work with those specific data models. This reduces the complexity of development and streamlines database administration.&lt;/p&gt;

&lt;p&gt;And there you have it, folks! We've now learned how to store our data in the cloud (AWS). Whether you need to store structured, unstructured, or special-purpose data, AWS has the perfect database solution for you. Stay tuned for more cloud adventures in the next episode of Cloud in List of Threes! ☁️3️⃣&lt;/p&gt;

&lt;p&gt;Check out the last episode &lt;a href="https://dev.to/ogooluwa/storing-your-stuff-in-the-cloud-a-simple-guide-to-s3-ebs-and-efs-2880"&gt;here.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cilots</category>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>database</category>
    </item>
    <item>
      <title>Store weather data in Amazon S3</title>
      <dc:creator>Ogooluwa Akinola</dc:creator>
      <pubDate>Thu, 23 Jan 2025 02:24:52 +0000</pubDate>
      <link>https://forem.com/ogooluwa/store-weather-data-in-amazon-s3-4lm7</link>
      <guid>https://forem.com/ogooluwa/store-weather-data-in-amazon-s3-4lm7</guid>
      <description>&lt;p&gt;In this article, we will look at how to store JSON data in an Amazon S3 bucket.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technologies &amp;amp; Tools
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.python.org/3/using/mac.html" rel="noopener noreferrer"&gt;Python&lt;/a&gt; - Run time environment&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;amp;all-free-tier.sort-order=asc&amp;amp;awsf.Free%20Tier%20Types=*all&amp;amp;awsf.Free%20Tier%20Categories=*all" rel="noopener noreferrer"&gt;AWS Account (Free Tier)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/index.html" rel="noopener noreferrer"&gt;Boto3&lt;/a&gt; - AWS SDK for python&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://pypi.org/project/python-dotenv/" rel="noopener noreferrer"&gt;python-dotenv&lt;/a&gt; - to load environment variables from &lt;code&gt;.env&lt;/code&gt; files.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://pypi.org/project/requests/" rel="noopener noreferrer"&gt;requests&lt;/a&gt; - a simple HTTPS library&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://openweathermap.org" rel="noopener noreferrer"&gt;Openweather API&lt;/a&gt; - external api that provides real time weather data&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository" rel="noopener noreferrer"&gt;Github Repo (Optional)&lt;/a&gt; - a version control for your code&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Pre-requisite
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Setup Python
&lt;/h3&gt;

&lt;p&gt;If you haven't already, please set up &lt;a href="https://docs.python.org/3/using/mac.html" rel="noopener noreferrer"&gt;Python&lt;/a&gt; on your local machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  [Optional] GitHub Account &amp;amp; Repo
&lt;/h3&gt;

&lt;p&gt;Skip this if you don't intend to save your project on GitHub. Otherwise, follow these links to set up a &lt;a href="https://github.com/signup" rel="noopener noreferrer"&gt;GitHub Account&lt;/a&gt; &amp;amp; &lt;a href="https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository" rel="noopener noreferrer"&gt;Repo&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup AWS account and IAM User
&lt;/h3&gt;

&lt;p&gt;Create your &lt;a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;amp;all-free-tier.sort-order=asc&amp;amp;awsf.Free%20Tier%20Types=*all&amp;amp;awsf.Free%20Tier%20Categories=*all" rel="noopener noreferrer"&gt;AWS Free Tier account&lt;/a&gt; if you haven't already. Then, follow these steps to create your &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console" rel="noopener noreferrer"&gt;IAM user&lt;/a&gt;. Ensure the IAM user is assigned the &lt;code&gt;AmazonS3FullAccess&lt;/code&gt; policy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbb3hn6qt7zwpq52y4gd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbb3hn6qt7zwpq52y4gd.png" alt="IAM-AmazonS3FullAccess" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup AWS CLI
&lt;/h3&gt;

&lt;p&gt;We are going to use the &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/index.html" rel="noopener noreferrer"&gt;Boto3&lt;/a&gt; AWS SDK to interact with AWS. Under the hood, Boto3 uses an AWS access key ID and secret access key to authenticate with AWS, so we need to configure AWS credentials on our local machine. Please follow the &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration" rel="noopener noreferrer"&gt;Boto3 documentation&lt;/a&gt; for a configuration guide. Ensure you configure the credentials of the IAM user with the &lt;code&gt;AmazonS3FullAccess&lt;/code&gt; policy mentioned above.&lt;/p&gt;

&lt;h3&gt;
  
  
  OpenWeather API
&lt;/h3&gt;

&lt;p&gt;OpenWeather is the external API we will use in this project to provide real-time weather data. Please &lt;a href="https://home.openweathermap.org/users/sign_up" rel="noopener noreferrer"&gt;sign up&lt;/a&gt; for the OpenWeather service to get your &lt;a href="https://home.openweathermap.org/api_keys" rel="noopener noreferrer"&gt;OpenWeather API key&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37wgzsiwfmqdps6gf7c0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37wgzsiwfmqdps6gf7c0.png" alt="Openweather-api-key" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's get started! 😀&lt;/p&gt;

&lt;p&gt;P.S. The CLI command examples are for macOS/Linux users. Windows users can look up the equivalent commands online.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1
&lt;/h3&gt;

&lt;p&gt;Create a folder called &lt;code&gt;weather-dashboard-demo&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir weather-dashboard-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2
&lt;/h3&gt;

&lt;p&gt;Change directory to weather-dashboard-demo and create necessary files and folders&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd weather-dashboard-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir src
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch .env requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3
&lt;/h3&gt;

&lt;p&gt;Initialize git (optional)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4
&lt;/h3&gt;

&lt;p&gt;Add the package dependencies to requirements.txt&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "boto3==1.26" &amp;gt;&amp;gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "python-dotenv==1.0.0" &amp;gt;&amp;gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "requests==2.28" &amp;gt;&amp;gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5
&lt;/h3&gt;

&lt;p&gt;Install your dependencies&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 6
&lt;/h3&gt;

&lt;p&gt;Add env variables to your .env file. &lt;/p&gt;

&lt;p&gt;Get your API key from the OpenWeather account you created above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "OPENWEATHER_API_KEY=&amp;lt;replace-with-your-api-key&amp;gt;" &amp;gt;&amp;gt; .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add your S3 bucket name. Note that bucket names must be globally unique.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "AWS_BUCKET_NAME=weather-dashboard-&amp;lt;append-unique-value&amp;gt;" &amp;gt;&amp;gt; .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
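&lt;p&gt;Under the hood, python-dotenv reads these KEY=VALUE lines into environment variables. As a simplified sketch of what &lt;code&gt;load_dotenv()&lt;/code&gt; does (the real library also handles comments, quoting, and variable interpolation):&lt;/p&gt;

```python
import os

def load_env_lines(lines):
    """Simplified sketch of python-dotenv's load_dotenv():
    parse KEY=VALUE lines and put them into os.environ.
    The real library also handles comments, quoting, and interpolation."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ[key.strip()] = value.strip()

# Placeholder values, mirroring the .env entries created above
load_env_lines([
    "OPENWEATHER_API_KEY=demo-key",
    "AWS_BUCKET_NAME=weather-dashboard-demo-123",
])
```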



&lt;h3&gt;
  
  
  Step 7
&lt;/h3&gt;

&lt;p&gt;Create the necessary files in the src folder&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch src/__init__.py src/weather_dashboard.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 8
&lt;/h3&gt;

&lt;p&gt;Copy the full code into src/weather_dashboard.py. Don't feel overwhelmed; I will explain the code below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import json
import boto3
import requests
from datetime import datetime
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

class WeatherDashboard:
    def __init__(self):
        self.api_key = os.getenv('OPENWEATHER_API_KEY')
        self.bucket_name = os.getenv('AWS_BUCKET_NAME')
        self.s3_client = boto3.client('s3')

    def create_bucket_if_not_exists(self):
        """Create S3 bucket if it doesn't exist"""
        try:
            self.s3_client.head_bucket(Bucket=self.bucket_name)
            print(f"Bucket {self.bucket_name} exists")
        except Exception:
            # head_bucket raises when the bucket is missing or inaccessible
            print(f"Creating bucket {self.bucket_name}")
            try:
                # Simpler creation for us-east-1
                self.s3_client.create_bucket(Bucket=self.bucket_name)
                print(f"Successfully created bucket {self.bucket_name}")
            except Exception as e:
                print(f"Error creating bucket: {e}")

    def fetch_weather(self, city):
        """Fetch weather data from OpenWeather API"""
        base_url = "http://api.openweathermap.org/data/2.5/weather"
        params = {
            "q": city,
            "appid": self.api_key,
            "units": "imperial"
        }

        try:
            response = requests.get(base_url, params=params)
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            print(f"Error fetching weather data: {e}")
            return None

    def save_to_s3(self, weather_data, city):
        """Save weather data to S3 bucket"""
        if not weather_data:
            return False

        timestamp = datetime.now().strftime('%Y%m%d-%H%M%S')
        file_name = f"weather-data/{city}-{timestamp}.json"

        try:
            weather_data['timestamp'] = timestamp
            self.s3_client.put_object(
                Bucket=self.bucket_name,
                Key=file_name,
                Body=json.dumps(weather_data),
                ContentType='application/json'
            )
            print(f"Successfully saved data for {city} to S3")
            return True
        except Exception as e:
            print(f"Error saving to S3: {e}")
            return False

def main():
    dashboard = WeatherDashboard()

    # Create bucket if needed
    dashboard.create_bucket_if_not_exists()

    cities = ["Philadelphia", "Seattle", "New York"]

    for city in cities:
        print(f"\nFetching weather for {city}...")
        weather_data = dashboard.fetch_weather(city)
        if weather_data:
            temp = weather_data['main']['temp']
            feels_like = weather_data['main']['feels_like']
            humidity = weather_data['main']['humidity']
            description = weather_data['weather'][0]['description']

            print(f"Temperature: {temp}°F")
            print(f"Feels like: {feels_like}°F")
            print(f"Humidity: {humidity}%")
            print(f"Conditions: {description}")

            # Save to S3
            success = dashboard.save_to_s3(weather_data, city)
            if success:
                print(f"Weather data for {city} saved to S3!")
        else:
            print(f"Failed to fetch weather data for {city}")

if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This part of the code imports the packages used and loads the environment variables from the .env file.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;os&lt;/code&gt; package allows interaction with the operating system (e.g. file paths)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;json&lt;/code&gt; package allows easy parsing of JSON objects&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;boto3&lt;/code&gt; package allows easy interaction with AWS resources&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;requests&lt;/code&gt; package allows making HTTP requests&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;datetime&lt;/code&gt; package for working with date and time&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;dotenv&lt;/code&gt; package for loading the environment variables from the .env file.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import json
import boto3
import requests
from datetime import datetime
from dotenv import load_dotenv

# Load environment variables
load_dotenv()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This part of the code initializes the api_key, bucket_name, and s3_client properties of the &lt;code&gt;WeatherDashboard class&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class WeatherDashboard:
    def __init__(self):
        self.api_key = os.getenv('OPENWEATHER_API_KEY')
        self.bucket_name = os.getenv('AWS_BUCKET_NAME')
        self.s3_client = boto3.client('s3')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This method checks whether the bucket exists and, if not, creates it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    def create_bucket_if_not_exists(self):
        """Create S3 bucket if it doesn't exist"""
        try:
            self.s3_client.head_bucket(Bucket=self.bucket_name)
            print(f"Bucket {self.bucket_name} exists")
        except:
            print(f"Creating bucket {self.bucket_name}")
        try:
            # Simpler creation for us-east-1
            self.s3_client.create_bucket(Bucket=self.bucket_name)
            print(f"Successfully created bucket {self.bucket_name}")
        except Exception as e:
            print(f"Error creating bucket: {e}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This method fetches the weather data for the specified city and returns the response as JSON, or None in case of an error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    def fetch_weather(self, city):
        """Fetch weather data from OpenWeather API"""
        base_url = "http://api.openweathermap.org/data/2.5/weather"
        params = {
            "q": city,
            "appid": self.api_key,
            "units": "imperial"
        }

        try:
            response = requests.get(base_url, params=params)
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            print(f"Error fetching weather data: {e}")
            return None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
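&lt;p&gt;For clarity, requests encodes the params dict into the query string of the final URL. A small sketch of the URL it ends up requesting ("demo-key" is a placeholder, not a real API key):&lt;/p&gt;

```python
from urllib.parse import urlencode

# requests.get(base_url, params=params) URL-encodes the params dict into
# a query string. This rebuilds the final URL by hand to show the result.
# "demo-key" is a placeholder, not a real OpenWeather API key.
base_url = "http://api.openweathermap.org/data/2.5/weather"
params = {"q": "Seattle", "appid": "demo-key", "units": "imperial"}
full_url = f"{base_url}?{urlencode(params)}"
print(full_url)
```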



&lt;p&gt;This method saves the weather data to S3. It creates a uniquely named JSON file ({city}-{timestamp}.json) and saves it under the weather-data prefix in the S3 bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    def save_to_s3(self, weather_data, city):
        """Save weather data to S3 bucket"""
        if not weather_data:
            return False

        timestamp = datetime.now().strftime('%Y%m%d-%H%M%S')
        file_name = f"weather-data/{city}-{timestamp}.json"

        try:
            weather_data['timestamp'] = timestamp
            self.s3_client.put_object(
                Bucket=self.bucket_name,
                Key=file_name,
                Body=json.dumps(weather_data),
                ContentType='application/json'
            )
            print(f"Successfully saved data for {city} to S3")
            return True
        except Exception as e:
            print(f"Error saving to S3: {e}")
            return False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
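&lt;p&gt;To see the object key format concretely, here is the same key-building logic with a fixed datetime so the result is reproducible:&lt;/p&gt;

```python
from datetime import datetime

# Rebuild the S3 object key exactly as save_to_s3 does, but with a fixed
# datetime instead of datetime.now() so the result is reproducible.
city = "Seattle"
timestamp = datetime(2025, 1, 23, 2, 24, 52).strftime('%Y%m%d-%H%M%S')
file_name = f"weather-data/{city}-{timestamp}.json"
print(file_name)  # weather-data/Seattle-20250123-022452.json
```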



&lt;p&gt;The &lt;code&gt;main&lt;/code&gt; function is where we do all the magic. We initialize the dashboard instance using the WeatherDashboard class and create the S3 bucket if it doesn't exist. We then iterate through our cities: for each city we fetch the weather data, extract the temperature, feels_like, humidity, and description fields, and print them to the console. Finally, we save the weather data to S3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def main():
    dashboard = WeatherDashboard()

    # Create bucket if needed
    dashboard.create_bucket_if_not_exists()

    cities = ["Philadelphia", "Seattle", "New York"]

    for city in cities:
        print(f"\nFetching weather for {city}...")
        weather_data = dashboard.fetch_weather(city)
        if weather_data:
            temp = weather_data['main']['temp']
            feels_like = weather_data['main']['feels_like']
            humidity = weather_data['main']['humidity']
            description = weather_data['weather'][0]['description']

            print(f"Temperature: {temp}°F")
            print(f"Feels like: {feels_like}°F")
            print(f"Humidity: {humidity}%")
            print(f"Conditions: {description}")

            # Save to S3
            success = dashboard.save_to_s3(weather_data, city)
            if success:
                print(f"Weather data for {city} saved to S3!")
        else:
            print(f"Failed to fetch weather data for {city}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, this code ensures the main function runs when the file is executed directly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 9
&lt;/h3&gt;

&lt;p&gt;Run the code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 src/weather_dashboard.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything goes right, you should see this on your terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xxt4br1axyoa1q0qnwa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xxt4br1axyoa1q0qnwa.png" alt="script-result" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to your &lt;a href="https://us-east-1.console.aws.amazon.com/s3/buckets?region=us-east-1&amp;amp;bucketType=general" rel="noopener noreferrer"&gt;S3 console&lt;/a&gt; and check out the bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7eubaeb8kz5f06upwi9l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7eubaeb8kz5f06upwi9l.png" alt="weather bucket" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the &lt;code&gt;weather-data&lt;/code&gt; folder to see the weather data&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Froygti2zzwdz3na8xnek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Froygti2zzwdz3na8xnek.png" alt="weather data" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To view the JSON data, click on any of the city files and then click Open&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0zzhdwqxbipk2dc6bbl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0zzhdwqxbipk2dc6bbl.png" alt="open json data" width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The JSON data looks like this &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4df9n3cdgtwx20gprgrn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4df9n3cdgtwx20gprgrn.png" alt="city json data" width="800" height="856"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Voila! We're done.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 10
&lt;/h3&gt;

&lt;p&gt;It's good practice to clean up after you're done. Navigate to your S3 console, empty the bucket, then delete it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyrp7gk4783qiwa0wflx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyrp7gk4783qiwa0wflx.png" alt="Cleanup data" width="800" height="138"&gt;&lt;/a&gt;&lt;br&gt;
Conclusion&lt;/p&gt;

&lt;p&gt;In this project, we created an S3 bucket using the Boto3 AWS SDK and loaded weather data into it. Follow me for more cloud-based projects. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>cloud</category>
      <category>cloudskills</category>
    </item>
    <item>
      <title>Storing Your Stuff in the Cloud: A Simple Guide to S3, EBS, and EFS</title>
      <dc:creator>Ogooluwa Akinola</dc:creator>
      <pubDate>Tue, 21 Jan 2025 11:38:18 +0000</pubDate>
      <link>https://forem.com/ogooluwa/storing-your-stuff-in-the-cloud-a-simple-guide-to-s3-ebs-and-efs-2880</link>
      <guid>https://forem.com/ogooluwa/storing-your-stuff-in-the-cloud-a-simple-guide-to-s3-ebs-and-efs-2880</guid>
      <description>&lt;p&gt;Hey cloud explorers! Welcome back to another episode of Cloud in List of Threes (CiLoTs) ☁️3️⃣, where we break down complex cloud concepts into bite-sized pieces, seasoned with fun analogies! 🤩 Today, we're taking a trip into the world of cloud storage. ☁️🗄️  We'll explore three popular AWS storage solutions: S3, EBS, and EFS!&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon S3 (Simple Storage Service) ☁️🗄️
&lt;/h2&gt;

&lt;p&gt;Imagine a massive, secure storage warehouse. 🏢 You can rent a unit (bucket) to store anything you want: photos, videos, documents, even old furniture! 📦 You can access your stuff from anywhere in the world, and you only pay for the space you use. 🌍💰&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical translation 👨🏿‍💻:&lt;/strong&gt; Amazon S3 is an object storage service, meaning it stores data as individual objects in buckets. Each object has its own unique key and metadata and is organized into containers called buckets. Buckets help manage and group objects, providing a kind of hierarchical structure for your data. Basically, S3 allows you to store any type of data (text, blobs, video, audio, etc.) in a highly scalable, durable, and cost-effective manner.&lt;/p&gt;
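&lt;p&gt;One detail worth noting: S3 buckets are flat. The "/" in an object key is just a character, and listing objects by key prefix (as the list_objects_v2 API does with its Prefix parameter) is what makes keys look like folders. A pure-Python sketch with made-up keys:&lt;/p&gt;

```python
# S3 buckets are flat: a "/" in an object key is just part of the name.
# Filtering keys by prefix (what list_objects_v2's Prefix parameter does
# server-side) is what produces the folder-like view. Keys are made up.
keys = [
    "photos/2025/beach.jpg",
    "photos/2025/sunset.jpg",
    "docs/resume.pdf",
]

def list_by_prefix(keys, prefix):
    """Return the keys that start with the given prefix."""
    return [k for k in keys if k.startswith(prefix)]

print(list_by_prefix(keys, "photos/"))
```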

&lt;p&gt;&lt;strong&gt;Three Key Benefits:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Durability:&lt;/em&gt;&lt;/strong&gt; Your data is safe and secure, with multiple copies stored across different locations. 💪&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Scalability:&lt;/em&gt;&lt;/strong&gt; Store as much or as little as you need, and easily adjust your storage capacity. 📈&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Accessibility:&lt;/em&gt;&lt;/strong&gt; Access your data from anywhere with an internet connection. 🌐&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon EBS (Elastic Block Storage)  💾💻
&lt;/h2&gt;

&lt;p&gt;Think of EBS as an extra external hard drive you can attach to your computer (EC2 instance). It's like having more space to store your operating system, applications, and files. 💽 You can even choose different types of hard drives depending on your needs (speed, performance).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical translation 👨🏿‍💻:&lt;/strong&gt; Amazon EBS offers block-level storage volumes, which function similarly to individual hard drives.  These volumes can be attached to Amazon EC2 instances, providing the operating system and applications with persistent storage.  EBS offers various volume types (SSD-based and HDD-based), each optimized for specific performance needs such as throughput and IOPS (input/output operations per second).  This allows you to customize your storage based on the requirements of your applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three Key Benefits:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Persistence:&lt;/em&gt;&lt;/strong&gt; Your data remains on the volume even if you unplug the hard drive or restart your computer/EC2. 🔄&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Flexibility:&lt;/em&gt;&lt;/strong&gt; Choose from different volume types — SSD-based and HDD-based to optimize performance and cost. ⚙️&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Integration:&lt;/em&gt;&lt;/strong&gt; Seamlessly integrates with EC2 for easy management. 🤝&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon EFS (Elastic File System) 📁🔗
&lt;/h2&gt;

&lt;p&gt;Imagine a toy box that you and your friends can all share. 🪀🗃️You can put your toys in it, and your friends can take them out to play with them. You can all play with the same toys at the same time without fighting over them! That's kind of like what Amazon EFS does for computers in the cloud.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical translation&lt;/strong&gt; 👨🏿‍💻: EFS offers a network file system that can be accessed concurrently by multiple Amazon EC2 instances. This shared access facilitates collaboration and data sharing among different parts of your application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three Key Benefits:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Scalability:&lt;/em&gt;&lt;/strong&gt; Automatically grows and shrinks as your storage needs change. 📈&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Shared Access:&lt;/em&gt;&lt;/strong&gt; Multiple services and instances can access the file system simultaneously. 🧑‍🤝‍🧑&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Elasticity&lt;/em&gt;&lt;/strong&gt;: Pay only for the storage you use, and scale your capacity as needed. 💰&lt;/p&gt;

&lt;p&gt;And there you have it, folks! We've now learned how to store your stuff in the cloud (AWS). Whether you need a massive warehouse, an extra hard drive, or a shared file system, AWS has the perfect storage solution for you. Stay tuned for more cloud adventures in the next episode of Cloud in List of Threes! ☁️3️⃣&lt;/p&gt;

&lt;p&gt;Check out the previous episode &lt;a href="https://dev.to/ogooluwa/from-regions-to-edge-locations-a-cilots-guide-to-cloud-infrastructure-4ajl"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;#CloudComputing #AWS #S3 #EBS #EFS #CiLoTs #CiLoTsEp04&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>ebs</category>
      <category>efs</category>
    </item>
    <item>
<title>Cloud computing can be confusing, but it doesn't have to be! ☁️🤔 In the latest episode of Cloud in List of Threes (CiLoTs), I’m serving up easy-to-digest (pun intended 🤭) analogies to explain Regions, Availability Zones, and Edge Locations</title>
      <dc:creator>Ogooluwa Akinola</dc:creator>
      <pubDate>Tue, 14 Jan 2025 10:38:23 +0000</pubDate>
      <link>https://forem.com/ogooluwa/cloud-computing-can-be-confusing-but-it-doesnt-have-to-be-in-the-latest-episode-of-cloud-in-l06</link>
      <guid>https://forem.com/ogooluwa/cloud-computing-can-be-confusing-but-it-doesnt-have-to-be-in-the-latest-episode-of-cloud-in-l06</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/ogooluwa" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2681494%2F4a4f5326-ff24-4c4a-bc32-8fbae930b457.jpg" alt="ogooluwa"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/ogooluwa/from-regions-to-edge-locations-a-cilots-guide-to-cloud-infrastructure-4ajl" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;From Regions to Edge Locations: A CiLoTs Guide to Cloud Infrastructure&lt;/h2&gt;
      &lt;h3&gt;Ogooluwa Akinola ・ Jan 14&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#cloudcomputing&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#cilots&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#cloud&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>cloud</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>From Regions to Edge Locations: A CiLoTs Guide to Cloud Infrastructure</title>
      <dc:creator>Ogooluwa Akinola</dc:creator>
      <pubDate>Tue, 14 Jan 2025 10:21:48 +0000</pubDate>
      <link>https://forem.com/ogooluwa/from-regions-to-edge-locations-a-cilots-guide-to-cloud-infrastructure-4ajl</link>
      <guid>https://forem.com/ogooluwa/from-regions-to-edge-locations-a-cilots-guide-to-cloud-infrastructure-4ajl</guid>
      <description>&lt;p&gt;Hey cloud explorers! Welcome back to another episode of Cloud in List of Threes (CiLoTs) ☁️3️⃣, where we break down complex cloud concepts into bite-sized pieces, seasoned with fun analogies! 🤩 Today, we're taking a trip to the global restaurant of AWS, learning about Regions, Availability Zones, and Edge Locations. Get ready to order up some knowledge!&lt;/p&gt;

&lt;h2&gt;Region

&lt;p&gt;Imagine AWS as a global chain of restaurants. 🌍🏘️ Each region is like a different country or state where the chain has at least one restaurant. You would probably choose to eat at the restaurant closest to you for the best experience. Similarly, you'd typically choose an AWS region closest to you or your users for optimal performance and legal compliance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical translation 👨🏿‍💻:&lt;/strong&gt; AWS Regions are separate geographic areas where Amazon's cloud infrastructure (data centres) is located. Each region is isolated and independent, and consists of multiple Availability Zones. This isolation helps achieve high fault tolerance and stability. Choosing the right region matters for factors like latency, cost, and regulatory requirements. You can view Amazon’s Global Regions here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three Key Benefits:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;- Data Sovereignty:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
 Keep your data in specific regions to comply with local regulations. (Like a restaurant sourcing ingredients locally 🧂🌶️)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;- Disaster Recovery:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
 If one region goes down, your data is safe in another. (If one restaurant location is closed, you can go to another in a different country.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;- Low Latency:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
 Keep your data close to your users for faster access. (Customers get served faster at their local restaurant. 🚀)&lt;/p&gt;
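&lt;p&gt;The "pick the closest restaurant" idea can be sketched in a few lines of Python. The latency numbers below are made up purely for illustration; in practice you would measure real round-trip times to each region's endpoints.&lt;/p&gt;

```python
# Hypothetical round-trip latencies (in ms) from one client to a few AWS
# Regions -- illustrative numbers, not real measurements.
latencies_ms = {
    "us-east-1": 180,   # N. Virginia
    "eu-west-2": 110,   # London
    "af-south-1": 45,   # Cape Town
}

def closest_region(latencies):
    # Pick the region with the lowest measured latency -- the nearest
    # "restaurant" in the analogy.
    return min(latencies, key=latencies.get)

print(closest_region(latencies_ms))
```

&lt;p&gt;For this imaginary client, the Cape Town region wins: it is the local restaurant.&lt;/p&gt;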

&lt;h2&gt;Availability Zones (AZs)

&lt;p&gt;Now imagine this global chain of restaurants, let’s call it Pizzalicious, and you guessed it, they sell delicious pizzas. 🍕 As a pizza lover, you buy your pizzas from a branch near you. Sometimes, though, that branch is closed for cleaning or repairs. 🧹🛠️ That means you can't have pizza that day! 😥&lt;br&gt;
But what if that pizza place had another restaurant, just like it, in a different part of the city? 🌆 Then, if one restaurant was closed, you could still go to the other one and get your pizza! 🥳 That's kind of like what Availability Zones are. They're like extra copies of things in the cloud, so if one "place" breaks, there's another one ready to go. ☁️🚀&lt;br&gt;
So you can always get your pizza, or whatever you need from the cloud, no matter what! 😊&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical translation 👨🏿‍💻:&lt;/strong&gt; Availability Zones are physically separate data centres within a Region, each with its own power, networking, and connectivity. AZs are connected to one another by low-latency, high-throughput, highly redundant networking. Each AZ is physically separated from the others by a meaningful distance (many kilometres), yet all AZs in a Region sit within 100 km (60 miles) of each other, keeping latency low enough for synchronous replication between them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three Key Benefits:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;- Fault Tolerance:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
 If one AZ fails, your applications can still run in another. (If one Pizzalicious branch has a power outage, you can go to another branch within the same city.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;- High Availability:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
 Ensures your applications are always accessible, even during disruptions. (There's always a Pizzalicious open somewhere, so you will always get your delicious pizza 🍕.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;- Scalability:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
 Easily add more resources as your needs grow. (Pizzalicious can open more branches to serve more customers.)&lt;/p&gt;
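&lt;p&gt;The fault-tolerance benefit can be sketched as a tiny failover loop: try each Availability Zone in turn and serve from the first healthy one. The AZ names below follow the real naming scheme (region name plus a letter), but the health states are invented for illustration.&lt;/p&gt;

```python
# Toy failover sketch: which AZs are "healthy" is made up for illustration.
az_healthy = {"eu-west-2a": False, "eu-west-2b": True, "eu-west-2c": True}

def serve_request(zones):
    # Walk the AZs in order and serve from the first healthy one --
    # like walking to the next Pizzalicious branch when yours is closed.
    for az, healthy in zones.items():
        if healthy:
            return "served from " + az
    raise RuntimeError("all Availability Zones are down")

print(serve_request(az_healthy))
```

&lt;p&gt;With eu-west-2a down, the request is quietly served from eu-west-2b, and the customer still gets their pizza.&lt;/p&gt;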

&lt;h2&gt;Edge Location

&lt;p&gt;Imagine you're craving a Pizzalicious pizza, but it's all the way across town! 🍕🚗 It would take a long time to drive there, and you're super hungry! 😫&lt;br&gt;
But guess what? They have a special pizza truck that comes to your neighbourhood! 🍕🚚 Now you can get your pizza much faster, without having to go all the way to the restaurant. 😄 That’s basically what an Edge Location is. It’s like a mini version of the cloud that's closer to you, so you can get things from the internet faster. 💻💨&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical Translation 👨🏿‍💻:&lt;/strong&gt; An Edge Location is a smaller data centre positioned strategically around the globe within the AWS network. It delivers content to users with minimal latency by caching copies of data closer to their physical location. Edge Locations are used primarily by services like Amazon CloudFront (AWS's CDN): they act as local storage points that serve content quickly, instead of fetching it from a distant origin server every time a request is made.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three Key Benefits:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;- Reduced Latency:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
 Faster content delivery for users far from Regions. (Pizzalicious customers get their pizza faster from a nearby food truck.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;- Bandwidth Savings:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Less data needs to travel long distances. (The pizza doesn't have to be transported from the main Pizzalicious branch.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;- Improved Performance:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Enhanced user experience with faster loading times. (Pizzalicious customers enjoy a quicker and more convenient service.)&lt;/p&gt;
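&lt;p&gt;Here is a toy sketch of what an edge cache does: serve from the nearby cache when possible (a "hit"), and only go back to the faraway origin on a "miss". This mimics the idea behind CloudFront, not its actual API, and the path and content are invented for the example.&lt;/p&gt;

```python
# The "origin" is the faraway main server; the edge cache is the nearby
# pizza truck. All paths and contents here are made up for illustration.
origin = {"/pizza-menu.html": "Pizzalicious Menu"}
edge_cache = {}

def get(path):
    # Serve from the nearby cache on a hit; fetch from the origin on a miss.
    if path in edge_cache:
        return edge_cache[path], "edge"    # fast: served nearby
    content = origin[path]                 # slow: long trip to the origin
    edge_cache[path] = content             # cache it for the next visitor
    return content, "origin"

first = get("/pizza-menu.html")    # first request: cache miss, from origin
second = get("/pizza-menu.html")   # second request: cache hit, from edge
print(first[1], second[1])
```

&lt;p&gt;The first visitor pays the long trip to the origin; everyone after them gets the cached copy from the edge, which is exactly where the latency and bandwidth savings come from.&lt;/p&gt;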

&lt;p&gt;So there you have it, folks! The next time you're grabbing a bite, remember those Regions, AZs, and Edge Locations working hard behind the scenes of your favourite cloud services. It's like a well-coordinated kitchen, ensuring your data is always hot and ready to serve! Stay tuned for more tasty cloud insights on Cloud in List of Threes! ☁️3️⃣&lt;/p&gt;

&lt;p&gt;Check out the previous episode &lt;a href="https://www.linkedin.com/posts/ogooluwa-akinola_cloudcomputing-aws-ec2-activity-7282250596617760768--Tqb?utm_source=share&amp;amp;utm_medium=member_desktop" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;#CloudComputing #AWS #AWSRegion #AvailabilityZone #EdgeLocation #CiLoTs #CiLoTsEp03&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>cilots</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
