<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Harshavardhan S U </title>
    <description>The latest articles on Forem by Harshavardhan S U  (@harsha_infinity).</description>
    <link>https://forem.com/harsha_infinity</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2319091%2F9c67f97b-7fb7-4063-8454-8db951cd0a30.jpg</url>
      <title>Forem: Harshavardhan S U </title>
      <link>https://forem.com/harsha_infinity</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/harsha_infinity"/>
    <language>en</language>
    <item>
      <title>Internal Details About S3</title>
      <dc:creator>Harshavardhan S U </dc:creator>
      <pubDate>Thu, 14 Aug 2025 08:47:58 +0000</pubDate>
      <link>https://forem.com/aws-builders/internal-details-about-s3-26nn</link>
      <guid>https://forem.com/aws-builders/internal-details-about-s3-26nn</guid>
      <description>&lt;p&gt;Let's look at how Amazon S3 works internally. &lt;br&gt;
S3 is built on an &lt;em&gt;object-based&lt;/em&gt; storage model: everything you put into S3 is stored as an object.&lt;/p&gt;

&lt;p&gt;An entity is considered an object if it has a key (ID), data (a blob), metadata, and some attributes. &lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;key&lt;/strong&gt; is unique within a bucket. Metadata describes the contents of the object, while attributes describe the object itself rather than its contents. &lt;/p&gt;
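&lt;p&gt;For instance, assuming the AWS CLI is configured, you can attach custom metadata while uploading an object and read it back later (the bucket, key, and metadata values here are placeholders): &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Upload an object with custom metadata
aws s3api put-object --bucket my-bucket --key report.pdf --body ./report.pdf --metadata author=harsha

# Read back the object's metadata and attributes (size, ETag, last modified)
aws s3api head-object --bucket my-bucket --key report.pdf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;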

&lt;p&gt;S3 runs as a distributed system: every uploaded object is replicated across nodes in multiple Availability Zones (multi-AZ replication). This improves fault tolerance, since the data remains available even if one node fails. &lt;/p&gt;

&lt;p&gt;S3 follows a flat storage model: there is no hierarchy like in file storage, where a folder can exist inside a folder, and you cannot create a bucket inside a bucket. You may notice, though, that you can upload a folder to S3. For example, when you upload a folder named images containing picture1.png, S3 takes the file's path, images/picture1.png, as the object's key, where the "/" is just part of the key. This creates the illusion of folders inside S3. &lt;br&gt;
Every object inside a bucket is indexed by its key; when you upload a file, the file's name (including its path) becomes the object's key.&lt;/p&gt;
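&lt;p&gt;To see the folder illusion in action, upload a file under a prefix and then list by that prefix (the bucket name is a placeholder): &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Upload a file under an images/ "folder"
aws s3 cp picture1.png s3://my-bucket/images/picture1.png

# List the objects whose keys start with the images/ prefix
aws s3 ls s3://my-bucket/images/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;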

&lt;p&gt;Every operation you perform in the Management Console or on the AWS command line fires an API call to S3's servers; the API is simply the access point through which clients invoke server-side functions. The simplest way for developers to manage buckets is the AWS CLI. Let's run a few commands for some hands-on experience. Please ensure that you have installed the AWS CLI and configured it with your AWS credentials. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bucket Management&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a bucket 
aws s3 mb s3://&amp;lt;bucket-name&amp;gt; 

# Delete a bucket (--force also deletes a non-empty bucket): 
aws s3 rb s3://&amp;lt;bucket-name&amp;gt; --force

# List all buckets: 
aws s3 ls 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Object Management&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Upload a file: 
aws s3 cp &amp;lt;local-file-path&amp;gt; s3://&amp;lt;bucket-name&amp;gt;/&amp;lt;key&amp;gt;

# Download a file: 
aws s3 cp s3://&amp;lt;bucket-name&amp;gt;/&amp;lt;key&amp;gt; &amp;lt;local-file-path&amp;gt; 

# Delete an object: 
aws s3 rm s3://&amp;lt;bucket-name&amp;gt;/&amp;lt;key&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Synchronization&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sync local directory to bucket 
aws s3 sync &amp;lt;local-directory&amp;gt; s3://&amp;lt;bucket-name&amp;gt; 

# Sync bucket to local directory: 
aws s3 sync s3://&amp;lt;bucket-name&amp;gt; &amp;lt;local-directory&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Listing objects&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List objects in a bucket: 
aws s3 ls s3://&amp;lt;bucket-name&amp;gt; 

# For generating a presigned URL for an object:
aws s3 presign s3://&amp;lt;bucket-name&amp;gt;/&amp;lt;key&amp;gt; --expires-in &amp;lt;seconds&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is a link to the full reference of commands and arguments for managing buckets and objects: &lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/cli/latest/reference/s3/#single-local-file-and-s3-object-operations" rel="noopener noreferrer"&gt;s3 -- AWS CLI 2.28.6 Command Reference&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>storage</category>
    </item>
    <item>
      <title>Hosting My Portfolio on AWS: A Serverless Journey with S3, CloudFront, and Lambda</title>
      <dc:creator>Harshavardhan S U </dc:creator>
      <pubDate>Fri, 04 Apr 2025 13:10:56 +0000</pubDate>
      <link>https://forem.com/aws-builders/hosting-my-portfolio-on-aws-a-serverless-journey-with-s3-cloudfront-and-lambda-29nd</link>
      <guid>https://forem.com/aws-builders/hosting-my-portfolio-on-aws-a-serverless-journey-with-s3-cloudfront-and-lambda-29nd</guid>
      <description>&lt;p&gt;This article describes how I hosted my static portfolio website in the cloud using Amazon Web Services, leveraging &lt;em&gt;serverless technology&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;I got a free domain through Namecheap's student offer, designed the site in Canva, and built it with Bootstrap 5, whose 12-column layout makes mobile responsiveness easy. &lt;/p&gt;

&lt;p&gt;Here's how I did it, the challenges I faced, and why AWS Serverless was the perfect fit. &lt;/p&gt;

&lt;p&gt;These are the services I used to make it possible: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;: To store my website files, including HTML files, static folder (CSS, JS files), assets folder, and a resume.&lt;br&gt;
&lt;strong&gt;Amazon CloudFront&lt;/strong&gt;: To make the site globally available at greater speed using caching optimization. &lt;br&gt;
&lt;strong&gt;AWS Certificate Manager (ACM)&lt;/strong&gt;: To obtain the SSL certificate for enabling HTTPS on my site. &lt;br&gt;
&lt;strong&gt;Amazon API Gateway&lt;/strong&gt;: To expose the Lambda function as an endpoint to the client side. &lt;br&gt;
&lt;strong&gt;AWS Lambda&lt;/strong&gt;: To process the request and send back a pre-signed URL of the resume stored in S3. &lt;br&gt;
&lt;strong&gt;Amazon DynamoDB&lt;/strong&gt;: To store the email IDs with timestamps for future use.&lt;/p&gt;

&lt;p&gt;The architecture I used: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2heemvaohdl5nenjtlox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2heemvaohdl5nenjtlox.png" alt="Reference Architecture" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, create a bucket in your preferred region and upload all the required files. Block all public access, because we are going to let only CloudFront access the bucket through OAC (Origin Access Control). &lt;/p&gt;

&lt;p&gt;Bucket Items&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbtt25nwfb1ybnbw6vsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbtt25nwfb1ybnbw6vsw.png" alt="Bucket Items" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Block Public Access &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcezgjsfoeng4l3tv2yrq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcezgjsfoeng4l3tv2yrq.png" alt="Block Public Access" width="800" height="155"&gt;&lt;/a&gt;&lt;/p&gt;
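&lt;p&gt;The same setup can be sketched with the AWS CLI (the bucket name, local directory, and region are placeholders): &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the bucket in your preferred region
aws s3 mb s3://my-portfolio-bucket --region us-east-1

# Upload the website files
aws s3 sync ./site s3://my-portfolio-bucket

# Block all public access on the bucket
aws s3api put-public-access-block --bucket my-portfolio-bucket --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;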

&lt;p&gt;Bucket Policy&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::harsha-resume-doc/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::533267145529:distribution/E2NCAYZ67PRE3K"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create your free SSL certificate using ACM and validate it through your DNS provider, in my case by adding the validation records in Namecheap's control panel. Once validated, the certificate is issued, and we are ready to use it in the next step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbz0cjf1buorxsxbe6w1r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbz0cjf1buorxsxbe6w1r.png" alt="SSL Certificate Issued" width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: A certificate used with CloudFront must be requested in the N. Virginia region, i.e. us-east-1.&lt;/em&gt; &lt;/p&gt;
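&lt;p&gt;If you prefer the CLI, the certificate request looks roughly like this (the domain name is a placeholder): &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Request a public certificate with DNS validation (us-east-1 for CloudFront)
aws acm request-certificate --domain-name example.com --validation-method DNS --region us-east-1

# Show the DNS validation records to add at your DNS provider
aws acm describe-certificate --certificate-arn &amp;lt;certificate-arn&amp;gt; --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;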

&lt;p&gt;Create a CloudFront distribution with the bucket as its origin and attach the SSL certificate from the previous step. As for the price class, I went with all edge locations for the best performance. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg48m8w0ah9efzpbmuers.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg48m8w0ah9efzpbmuers.png" alt="Origin as your bucket" width="800" height="99"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Point your domain at the CloudFront distribution by adding a CNAME record in Namecheap's DNS settings. Now you can view your hosted website by entering your domain name in the browser. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhqr2dfrnjjul0jc1hjk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhqr2dfrnjjul0jc1hjk.png" alt="DNS Server Configuration" width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's add functionality to our resume download button.&lt;br&gt;
Create a DynamoDB table with the email ID as the partition key for storing the email addresses of visitors. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ftrtbg47wehh78ee0tf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ftrtbg47wehh78ee0tf.png" alt="NoSQL Table is created" width="800" height="113"&gt;&lt;/a&gt;&lt;/p&gt;
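&lt;p&gt;Assuming a placeholder table name, creating such a table from the CLI looks like this: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a table with email as the partition key, using on-demand billing
aws dynamodb create-table --table-name resume-requests --attribute-definitions AttributeName=email,AttributeType=S --key-schema AttributeName=email,KeyType=HASH --billing-mode PAY_PER_REQUEST
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;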

&lt;p&gt;Create a proper execution role for the Lambda function with access to S3 and DynamoDB using AWS IAM service. &lt;/p&gt;

&lt;p&gt;Create a Lambda function to receive the request, process it, and store the email in the DynamoDB table. It then returns a pre-signed URL of the resume, valid for 5 minutes (300 seconds), so the user can download it.&lt;br&gt;
Lambda code in Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
import datetime
import os

# Initialize AWS clients
dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")

# Environment variables (set these in Lambda)
TABLE_NAME = os.environ["DYNAMODB_TABLE"]  # DynamoDB Table Name
S3_BUCKET = os.environ["BUCKET_NAME"]     # S3 Bucket Name
RESUME_KEY = os.environ["RESUME_KEY"]               # File key in S3

def lambda_handler(event, context):
    try:
        # Parse request body
        body = json.loads(event["body"])
        email = body.get("email")

        if not email:
            return {"statusCode": 400, "body": json.dumps({"message": "Email is required"})}

        # Store email &amp;amp; date in DynamoDB
        table = dynamodb.Table(TABLE_NAME)
        table.put_item(
            Item={
                "email": email,
                "date": str(datetime.date.today())
            }
        )

        # Generate a pre-signed URL for the resume
        presigned_url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": S3_BUCKET, "Key": RESUME_KEY},
            ExpiresIn=300  # URL valid for 5 minutes
        )

        return {
            "statusCode": 200,
            "body": json.dumps({"message": "Success", "url": presigned_url})
        }

    except Exception as e:
        print(f"Error: {str(e)}")
        return {"statusCode": 500, "body": json.dumps({"message": "Internal server error"})}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create an HTTP API using API Gateway and add a POST route, request-resume. Integrate the API with the Lambda function, and set this API as the form action of the download button's submission form. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3v9je0cglf3fvv618y5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3v9je0cglf3fvv618y5.png" alt="HTTP API is created" width="800" height="125"&gt;&lt;/a&gt;&lt;/p&gt;
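&lt;p&gt;As a sketch (the Lambda ARN, API ID, and region are placeholders), the HTTP API can be created from the CLI and tested with curl: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create an HTTP API with a Lambda proxy integration on the POST route
aws apigatewayv2 create-api --name resume-api --protocol-type HTTP --route-key "POST /request-resume" --target arn:aws:lambda:&amp;lt;region&amp;gt;:&amp;lt;account-id&amp;gt;:function:&amp;lt;function-name&amp;gt;

# Test the route
curl -X POST https://&amp;lt;api-id&amp;gt;.execute-api.&amp;lt;region&amp;gt;.amazonaws.com/request-resume -H "Content-Type: application/json" -d '{"email":"user@example.com"}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;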

&lt;p&gt;Ta-da! Your portfolio is now hosted entirely on AWS serverless technology, and you can put this project on your resume. &lt;/p&gt;

&lt;p&gt;Errors I had to face: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Wrong access control on bucket items in S3: 
solved by attaching a bucket policy allowing CloudFront access via OAC.&lt;/li&gt;
&lt;li&gt;Unable to create a CloudFront distribution: 
opened an AWS support ticket for account verification (resolved in 24 hours). &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The cost this project incurs depends on the number of visitors, since it is the data moving out of the cloud that incurs charges; with fewer than about 10 visitors it is very low. And yes, the site is still cached at the edge locations by the CDN. &lt;/p&gt;

&lt;p&gt;You can access the site via this &lt;a href="https://harshavardhansu.me" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Future considerations for enhancing the project: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using Amazon SES to send the resume to users by email and to add a follow-up mail. &lt;/li&gt;
&lt;li&gt;Adding AWS WAF protection, with its CAPTCHA in forms, to secure the site against common attacks. &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>challenge</category>
      <category>portfolio</category>
      <category>dns</category>
    </item>
  </channel>
</rss>
