<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Shubham Kumar</title>
    <description>The latest articles on Forem by Shubham Kumar (@shubhamkcloud).</description>
    <link>https://forem.com/shubhamkcloud</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F671425%2F9a687a07-3558-4ba4-86b1-68e46faa7da1.jpeg</url>
      <title>Forem: Shubham Kumar</title>
      <link>https://forem.com/shubhamkcloud</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/shubhamkcloud"/>
    <language>en</language>
    <item>
      <title>Exploring AWS Generative AI &amp; AI Services: A Practical Guide for Builders</title>
      <dc:creator>Shubham Kumar</dc:creator>
      <pubDate>Sat, 20 Dec 2025 18:38:05 +0000</pubDate>
      <link>https://forem.com/shubhamkcloud/exploring-aws-generative-ai-ai-services-a-practical-guide-for-builders-1mfm</link>
      <guid>https://forem.com/shubhamkcloud/exploring-aws-generative-ai-ai-services-a-practical-guide-for-builders-1mfm</guid>
      <description>&lt;p&gt;Having recently cleared the AWS Certified AI Practitioner exam, I wanted to share my learning and perspective through this practical tour of AWS Generative AI and AI services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generative AI&lt;/strong&gt; is everywhere right now—but building real, production-ready GenAI systems requires much more than just large language models.&lt;/p&gt;

&lt;p&gt;AWS has quietly built a powerful ecosystem of AI services that handle language, speech, vision, search, conversations and human feedback. These services often work behind the scenes of modern GenAI applications, making them scalable, reliable and enterprise-ready.&lt;/p&gt;

&lt;p&gt;In this article, we’ll walk through the most important AWS AI services and understand where they fit in a real-world GenAI architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. AWS Bedrock&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt; - Access leading foundation models through a single, fully managed service.&lt;/p&gt;

&lt;p&gt;AWS Bedrock is the cornerstone of Generative AI on AWS. It provides a fully managed way to build GenAI applications using leading foundation models from Amazon and third-party providers—without managing infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What makes Bedrock important:&lt;/strong&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Access to multiple foundation models via a single API&lt;/li&gt;
&lt;li&gt;No need to manage or train large models&lt;/li&gt;
&lt;li&gt;Enterprise-grade security and privacy&lt;/li&gt;
&lt;li&gt;Seamless integration with AWS services&lt;/li&gt;
&lt;li&gt;Support for Retrieval-Augmented Generation (RAG)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Bedrock is where LLMs live, but their real power is unlocked only when combined with services like Comprehend, Kendra, Transcribe, and Rekognition. It acts as the brain of your GenAI application.&lt;/p&gt;
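
&lt;p&gt;As a concrete illustration, here is a minimal boto3 sketch of calling a Bedrock-hosted model through the Converse API. The model ID and prompt are assumptions for illustration; valid AWS credentials and Bedrock model access are required:&lt;/p&gt;

```python
# Hypothetical sketch: invoking a Bedrock foundation model via the Converse API.
# The model ID is illustrative; AWS credentials and model access are assumed.

def build_converse_request(model_id, prompt, max_tokens=512):
    # The same request shape works across Bedrock-hosted models.
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

def ask_bedrock(client, model_id, prompt):
    response = client.converse(**build_converse_request(model_id, prompt))
    # The assistant reply is nested under output.message.content.
    return response["output"]["message"]["content"][0]["text"]

def main():
    # Run manually: requires the boto3 package and Bedrock model access.
    import boto3
    runtime = boto3.client("bedrock-runtime")
    print(ask_bedrock(runtime, "amazon.nova-lite-v1:0", "Summarize RAG in one line."))
```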

&lt;p&gt;&lt;strong&gt;2. AWS Comprehend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt; - It helps applications understand the context of text.&lt;/p&gt;

&lt;p&gt;AWS Comprehend helps applications understand text rather than just store it.&lt;/p&gt;

&lt;p&gt;It can detect:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Sentiment (positive, negative, neutral)&lt;/li&gt;
&lt;li&gt;Named entities like people, locations and organizations&lt;/li&gt;
&lt;li&gt;Key phrases and topics&lt;/li&gt;
&lt;li&gt;Personally Identifiable Information (PII)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before sending data to a GenAI model, you often need to clean, classify or filter it. Comprehend does this efficiently and at scale.&lt;/p&gt;
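
&lt;p&gt;A hedged boto3 sketch of that pre-screening step might look like this (the helper names are my own; AWS credentials are assumed):&lt;/p&gt;

```python
# Hypothetical sketch: screening text with Amazon Comprehend before it
# reaches an LLM. Helper names are illustrative; AWS credentials assumed.

def build_sentiment_request(text, language="en"):
    # Synchronous Comprehend calls accept roughly 5 KB of UTF-8 text,
    # so apply a rough guard here.
    return {"Text": text[:5000], "LanguageCode": language}

def screen_text(client, text):
    sentiment = client.detect_sentiment(**build_sentiment_request(text))
    pii = client.detect_pii_entities(Text=text[:5000], LanguageCode="en")
    return {
        "sentiment": sentiment["Sentiment"],  # POSITIVE, NEGATIVE, NEUTRAL or MIXED
        "has_pii": bool(pii["Entities"]),     # worth masking before prompting an LLM
    }

def main():
    # Run manually: requires the boto3 package and AWS credentials.
    import boto3
    print(screen_text(boto3.client("comprehend"), "I love this product!"))
```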

&lt;p&gt;&lt;strong&gt;3. AWS Translate&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Translate&lt;/strong&gt; makes applications multilingual with minimal effort.&lt;/p&gt;

&lt;p&gt;It supports real-time and batch translations, custom terminology and dozens of languages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where it shines:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Multilingual chatbots&lt;/li&gt;
&lt;li&gt;International customer support&lt;/li&gt;
&lt;li&gt;Localized content platforms&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;GenAI angle:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Translate allows GenAI applications to reach users across geographies—without training separate models per language.&lt;/p&gt;
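
&lt;p&gt;In code, localizing a model response is a single call. A minimal boto3 sketch (language codes illustrative; AWS credentials assumed):&lt;/p&gt;

```python
# Hypothetical sketch: localizing an LLM response with Amazon Translate.
# Language codes here are illustrative; AWS credentials are assumed.

def build_translate_request(text, source="en", target="es"):
    return {
        "Text": text,
        "SourceLanguageCode": source,   # "auto" lets Translate detect the source
        "TargetLanguageCode": target,
    }

def localize(client, text, target):
    response = client.translate_text(**build_translate_request(text, target=target))
    return response["TranslatedText"]

def main():
    # Run manually: requires the boto3 package and AWS credentials.
    import boto3
    print(localize(boto3.client("translate"), "Hello, world", "fr"))
```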

&lt;p&gt;&lt;strong&gt;4. AWS Transcribe&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt; - It turns speech into text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Transcribe&lt;/strong&gt; converts spoken language into accurate text using deep learning.&lt;/p&gt;

&lt;p&gt;It supports:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Live and batch transcription&lt;/li&gt;
&lt;li&gt;Speaker identification&lt;/li&gt;
&lt;li&gt;Custom vocabularies&lt;/li&gt;
&lt;li&gt;Specialized models for medical and call analytics&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Common use cases:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Call center analytics&lt;/li&gt;
&lt;li&gt;Meeting summaries&lt;/li&gt;
&lt;li&gt;Voice-driven applications&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once transcribed, the text becomes a perfect input for summarization, sentiment analysis or LLM prompts.&lt;/p&gt;
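
&lt;p&gt;A minimal sketch of kicking off a batch transcription job with boto3 (bucket, key and job names are hypothetical; AWS credentials assumed):&lt;/p&gt;

```python
# Hypothetical sketch: starting a batch transcription job whose output can
# later feed an LLM prompt. Bucket, key, and job names are illustrative.

def build_transcription_job(job_name, media_uri, language="en-US"):
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "MediaFormat": "mp3",
        "LanguageCode": language,
        # Speaker labels support call-center style "who said what" analysis.
        "Settings": {"ShowSpeakerLabels": True, "MaxSpeakerLabels": 2},
    }

def main():
    # Run manually: requires the boto3 package and AWS credentials.
    import boto3
    transcribe = boto3.client("transcribe")
    transcribe.start_transcription_job(
        **build_transcription_job("demo-call", "s3://my-bucket/call.mp3")
    )
```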

&lt;p&gt;&lt;strong&gt;5. Amazon Polly&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Polly&lt;/strong&gt; does the opposite of Transcribe — it turns text into lifelike speech.&lt;/p&gt;

&lt;p&gt;With neural voices and SSML support, Polly is widely used for:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Voice assistants&lt;/li&gt;
&lt;li&gt;Audiobooks&lt;/li&gt;
&lt;li&gt;Accessibility tools&lt;/li&gt;
&lt;li&gt;IVR systems&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;GenAI in action:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pair an LLM’s response with Polly and suddenly your GenAI application can speak.&lt;/p&gt;
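
&lt;p&gt;That pairing is a few lines of boto3. A minimal sketch (voice ID and file name are illustrative; AWS credentials assumed):&lt;/p&gt;

```python
# Hypothetical sketch: speaking an LLM response aloud with Amazon Polly.
# The voice ID and output file name are illustrative.

def build_speech_request(text, voice="Joanna"):
    return {
        "Text": text,
        "VoiceId": voice,
        "OutputFormat": "mp3",
        "Engine": "neural",   # neural voices sound more natural than standard
    }

def main():
    # Run manually: requires the boto3 package and AWS credentials.
    import boto3
    polly = boto3.client("polly")
    audio = polly.synthesize_speech(**build_speech_request("Hello from your GenAI app"))
    with open("reply.mp3", "wb") as f:
        f.write(audio["AudioStream"].read())
```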

&lt;p&gt;&lt;strong&gt;6. Amazon Rekognition&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Rekognition&lt;/strong&gt; allows applications to understand images and videos.&lt;/p&gt;

&lt;p&gt;It can:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Detect objects and scenes&lt;/li&gt;
&lt;li&gt;Analyze faces&lt;/li&gt;
&lt;li&gt;Extract text from images&lt;/li&gt;
&lt;li&gt;Moderate inappropriate content&lt;/li&gt;
&lt;li&gt;Analyze video streams&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Why it’s powerful:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GenAI isn’t limited to text anymore. Rekognition enables multimodal AI, where visual data enhances decision-making.&lt;/p&gt;
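
&lt;p&gt;For instance, a hedged boto3 sketch that labels an S3-hosted image so the labels can enrich an LLM prompt (bucket and key names are hypothetical):&lt;/p&gt;

```python
# Hypothetical sketch: labeling an S3-hosted image with Amazon Rekognition.
# Bucket and key names are illustrative; AWS credentials are assumed.

def build_label_request(bucket, key, min_confidence=80):
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MaxLabels": 10,
        "MinConfidence": min_confidence,  # drop low-confidence labels
    }

def describe_image(client, bucket, key):
    response = client.detect_labels(**build_label_request(bucket, key))
    # The label names can be fed into a prompt for multimodal reasoning.
    return [label["Name"] for label in response["Labels"]]

def main():
    # Run manually: requires the boto3 package and AWS credentials.
    import boto3
    print(describe_image(boto3.client("rekognition"), "my-bucket", "photo.jpg"))
```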

&lt;p&gt;&lt;strong&gt;7. Amazon Lex&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt; - It helps in building chatbots.&lt;/p&gt;

&lt;p&gt;Amazon Lex is AWS’s service for building conversational interfaces—the same technology behind Alexa.&lt;/p&gt;

&lt;p&gt;It handles:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Natural language understanding&lt;/li&gt;
&lt;li&gt;Speech recognition&lt;/li&gt;
&lt;li&gt;Context-aware conversations&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;How it fits with GenAI:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Lex manages the conversation flow, while LLMs generate intelligent, dynamic responses behind the scenes.&lt;/p&gt;
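
&lt;p&gt;Sending a user utterance to a Lex V2 bot is one call. A minimal sketch (bot ID, alias ID and session ID are placeholder assumptions):&lt;/p&gt;

```python
# Hypothetical sketch: sending a user utterance to a Lex V2 bot.
# The bot ID, alias ID, and session ID are illustrative placeholders.

def build_recognize_request(bot_id, alias_id, session_id, text):
    return {
        "botId": bot_id,
        "botAliasId": alias_id,
        "localeId": "en_US",
        "sessionId": session_id,  # ties multi-turn context to one user
        "text": text,
    }

def chat(client, bot_id, alias_id, session_id, text):
    response = client.recognize_text(
        **build_recognize_request(bot_id, alias_id, session_id, text)
    )
    # Lex returns the bot's reply as a list of messages.
    return [m["content"] for m in response.get("messages", [])]

def main():
    # Run manually: requires the boto3 package and AWS credentials.
    import boto3
    lex = boto3.client("lexv2-runtime")
    print(chat(lex, "BOTID12345", "ALIASID123", "user-1", "I want to book a room"))
```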

&lt;p&gt;&lt;strong&gt;8. Amazon Kendra&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Kendra&lt;/strong&gt; brings Google-like search to enterprise data.&lt;/p&gt;

&lt;p&gt;It connects to:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;S3&lt;/li&gt;
&lt;li&gt;SharePoint&lt;/li&gt;
&lt;li&gt;Confluence&lt;/li&gt;
&lt;li&gt;Databases&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;And understands natural language questions, not just keywords.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GenAI superpower:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kendra is often used in Retrieval-Augmented Generation (RAG) to ensure LLM answers are grounded in enterprise knowledge.&lt;/p&gt;
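
&lt;p&gt;A simple RAG loop with Kendra can be sketched like this: retrieve passages, then build a grounded prompt for the LLM (the index ID and prompt wording are my own assumptions):&lt;/p&gt;

```python
# Hypothetical sketch: retrieving passages from Kendra to ground an LLM prompt
# (a simple RAG pattern). The index ID is an illustrative placeholder.

def build_grounded_prompt(question, passages):
    context = "\n\n".join(passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

def retrieve_passages(client, index_id, question, top_k=3):
    response = client.retrieve(IndexId=index_id, QueryText=question)
    return [item["Content"] for item in response["ResultItems"][:top_k]]

def main():
    # Run manually: requires the boto3 package and AWS credentials.
    import boto3
    kendra = boto3.client("kendra")
    question = "What is our leave policy?"
    passages = retrieve_passages(kendra, "kendra-index-id", question)
    print(build_grounded_prompt(question, passages))
```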

&lt;p&gt;&lt;strong&gt;9. Amazon Mechanical Turk&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Mechanical Turk&lt;/strong&gt; provides access to a global workforce for human intelligence tasks.&lt;/p&gt;

&lt;p&gt;It’s commonly used for:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Data labeling&lt;/li&gt;
&lt;li&gt;Content moderation&lt;/li&gt;
&lt;li&gt;Model evaluation&lt;/li&gt;
&lt;li&gt;Edge-case validation&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GenAI models improve dramatically when humans help review and refine their outputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. AWS Augmented AI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Augmented AI (A2I)&lt;/strong&gt; integrates human review directly into ML workflows.&lt;/p&gt;

&lt;p&gt;It’s especially useful when:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Model confidence is low&lt;/li&gt;
&lt;li&gt;Decisions are compliance-critical&lt;/li&gt;
&lt;li&gt;Accuracy is non-negotiable&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;A2I is widely used with services like Rekognition and Textract but also works with custom models.&lt;/p&gt;
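
&lt;p&gt;A hedged sketch of wiring A2I into a workflow, assuming the sagemaker-a2i-runtime client (the flow definition ARN, loop name and prediction payload are all illustrative):&lt;/p&gt;

```python
# Hypothetical sketch: routing a model prediction into an A2I human review
# loop. The flow definition ARN and payload shape are illustrative.
import json

def build_human_loop(name, flow_definition_arn, prediction):
    return {
        "HumanLoopName": name,
        "FlowDefinitionArn": flow_definition_arn,
        # A2I expects the review payload as a JSON string.
        "HumanLoopInput": {"InputContent": json.dumps(prediction)},
    }

def main():
    # Run manually: requires the boto3 package and AWS credentials.
    import boto3
    a2i = boto3.client("sagemaker-a2i-runtime")
    prediction = {"label": "approved", "confidence": 0.42}
    # In practice you would only start a loop when confidence falls below
    # your threshold; here we escalate unconditionally for illustration.
    a2i.start_human_loop(**build_human_loop(
        "loop-1",
        "arn:aws:sagemaker:us-east-1:111122223333:flow-definition/review-flow",
        prediction,
    ))
```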

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Generative AI may grab the headlines, but it is these foundational AWS AI services that make GenAI usable, scalable and trustworthy in real applications.&lt;/p&gt;

&lt;p&gt;If you’re building AI systems on AWS, understanding how these services work together will give you a serious architectural advantage.&lt;/p&gt;

&lt;p&gt;If you have questions or want to share your experience, feel free to drop a comment or reach out to me at &lt;a href="https://www.linkedin.com/in/shubham-kumar1807/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/shubham-kumar1807/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>architecture</category>
      <category>tutorial</category>
      <category>ai</category>
    </item>
    <item>
      <title>Building a Responsible GenAI Agent with AWS Bedrock</title>
      <dc:creator>Shubham Kumar</dc:creator>
      <pubDate>Sun, 26 Oct 2025 08:57:55 +0000</pubDate>
      <link>https://forem.com/shubhamkcloud/building-a-responsible-genai-agent-with-aws-bedrock-io4</link>
      <guid>https://forem.com/shubhamkcloud/building-a-responsible-genai-agent-with-aws-bedrock-io4</guid>
      <description>&lt;p&gt;Generative AI (GenAI) is redefining how organizations interact with data, automate workflows and enhance decision-making. AWS Bedrock provides a fully managed environment that simplifies the process of building, scaling and deploying GenAI-powered applications—all without worrying about model hosting or infrastructure management.&lt;/p&gt;

&lt;p&gt;In this blog, I’ll share my hands-on experience using AWS Bedrock to create a custom AI agent with Guardrails, grounding and relevance, prompt engineering, and a knowledge base backed by Amazon S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting Started with AWS Bedrock&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS Bedrock enables access to foundation models (FMs) from leading providers like Anthropic, AI21 Labs and Stability AI through a single API. This makes it easy for developers to experiment and integrate powerful LLMs into enterprise systems securely.&lt;/p&gt;

&lt;p&gt;For this project, I used the Novalite Agent—a flexible, pre-built agent in Bedrock that allows experimentation in both chat and text-based interactions through the Chat/Text Playground. This interactive playground helped me tune responses, test prompts, and explore the model’s reasoning capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementing Guardrails for Responsible AI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Responsible AI is a key principle when deploying GenAI applications. AWS Bedrock provides Guardrails that ensure your agent behaves safely and ethically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here’s how I configured my agent’s guardrails:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PII Detection and Masking:&lt;/strong&gt; The agent automatically detects and blocks Personally Identifiable Information (PII) to prevent data leakage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content Filtering:&lt;/strong&gt; I set up context restrictions to block certain domains, such as medical advice or other sensitive topics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Policy Enforcement:&lt;/strong&gt; Specific rules ensure that responses stay compliant with organizational policies and data protection standards.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These guardrails make your agent enterprise-ready—safe, reliable, and aligned with compliance requirements.&lt;/p&gt;
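
&lt;p&gt;As a sketch, a standalone guardrail check can run before a prompt ever reaches the model, assuming the ApplyGuardrail API in the bedrock-runtime client (the guardrail ID and version are placeholder assumptions):&lt;/p&gt;

```python
# Hypothetical sketch: checking user input against a Bedrock guardrail before
# it reaches the model. Guardrail ID and version are illustrative placeholders.

def build_guardrail_request(guardrail_id, version, text, source="INPUT"):
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": version,
        "source": source,   # "INPUT" checks prompts, "OUTPUT" checks responses
        "content": [{"text": {"text": text}}],
    }

def is_blocked(client, guardrail_id, version, text):
    response = client.apply_guardrail(
        **build_guardrail_request(guardrail_id, version, text)
    )
    # "GUARDRAIL_INTERVENED" means a policy (PII, topic, word filter) fired.
    return response["action"] == "GUARDRAIL_INTERVENED"

def main():
    # Run manually: requires the boto3 package and AWS credentials.
    import boto3
    runtime = boto3.client("bedrock-runtime")
    print(is_blocked(runtime, "gr-abc123", "1", "My SSN is 123-45-6789"))
```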

&lt;p&gt;&lt;strong&gt;Grounding and Relevance: Ensuring Trusted Responses&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the major challenges with LLMs is hallucination—when models generate plausible but incorrect responses. To address this, I incorporated grounding and relevance techniques in my Bedrock agent:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grounding&lt;/strong&gt;: Ensures the model bases its responses on factual and verifiable data sources rather than just its internal training.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Relevance Filtering:&lt;/strong&gt; Keeps responses contextually focused, discarding unrelated or speculative information.&lt;/p&gt;

&lt;p&gt;This results in outputs that are more accurate, transparent, and trustworthy—critical for production-grade AI systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Importance of Prompt Engineering&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While models like those in AWS Bedrock are incredibly powerful, the quality of your prompts directly influences the quality of the model’s output. This is where Prompt Engineering becomes essential.&lt;/p&gt;

&lt;p&gt;Prompt Engineering is the practice of designing and refining prompts that guide the model to produce the most relevant, coherent, and accurate responses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A well-crafted prompt can:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clarify intent:&lt;/strong&gt; Clearly define what the model should do, avoiding ambiguity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provide context:&lt;/strong&gt; Include background information or constraints to make outputs more precise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Control tone and style:&lt;/strong&gt; Guide the model to respond in a professional, conversational, or technical tone as needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improve reliability:&lt;/strong&gt; Reduce hallucinations and ensure the model stays on-topic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In my experiments, I noticed how small prompt tweaks—like specifying format (“respond in bullet points”) or adding context (“use the S3 knowledge base as reference”)—dramatically improved the accuracy and structure of responses.&lt;/p&gt;

&lt;p&gt;When combined with grounding and a knowledge base, prompt engineering acts as the bridge between human intent and model intelligence, helping you get consistently meaningful results from Bedrock.&lt;/p&gt;
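
&lt;p&gt;The tweaks above can be captured in a small prompt template. This is a purely illustrative sketch of my own, not a Bedrock API:&lt;/p&gt;

```python
# Hypothetical sketch: a tiny prompt template capturing the tweaks mentioned
# above (explicit format, context, and tone instructions).

def build_prompt(task, context=None, output_format=None, tone="professional"):
    parts = [f"Task: {task}", f"Tone: {tone}"]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Respond in {output_format}.")
    return "\n".join(parts)

print(build_prompt(
    "Summarize our Q3 onboarding feedback",
    context="use the S3 knowledge base as reference",
    output_format="bullet points",
))
```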

&lt;p&gt;&lt;strong&gt;Building a Knowledge Base in AWS Bedrock&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A key part of my project was creating a Knowledge Base to provide my agent with contextual awareness and organization-specific data. Bedrock supports multiple types of knowledge bases, allowing flexible integration based on your data source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vector Store&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build a fully customizable knowledge base with maximum flexibility.&lt;/li&gt;
&lt;li&gt;Specify the location of your data, select an embedding model, and configure a vector store.&lt;/li&gt;
&lt;li&gt;Bedrock automatically stores and updates embeddings, enabling efficient semantic retrieval for relevant responses.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Structured Data Store&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ideal for structured data such as databases or tables.&lt;/li&gt;
&lt;li&gt;Enables semantic search and retrieval within your existing systems.&lt;/li&gt;
&lt;li&gt;Allows your agent to deliver accurate, data-driven answers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Kendra GenAI Index&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Powered by Amazon Kendra, this option enables deep document
understanding and search.&lt;/li&gt;
&lt;li&gt;Useful for retrieving context from unstructured documents such as PDFs,
reports, and policies with high precision.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In my use case, I connected the agent to an S3-based knowledge base to provide access to domain-specific data. This allowed the agent to generate grounded, context-aware responses directly based on enterprise content.&lt;/p&gt;
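
&lt;p&gt;Querying such a knowledge base can be done in one call with the RetrieveAndGenerate API. A minimal sketch (the knowledge base ID and model ARN are placeholder assumptions; AWS credentials are required):&lt;/p&gt;

```python
# Hypothetical sketch: querying a Bedrock knowledge base with the
# RetrieveAndGenerate API. Knowledge base ID and model ARN are illustrative.

def build_rag_request(question, kb_id, model_arn):
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

def ask_knowledge_base(client, question, kb_id, model_arn):
    response = client.retrieve_and_generate(
        **build_rag_request(question, kb_id, model_arn)
    )
    # The generated answer is grounded in documents retrieved from the KB.
    return response["output"]["text"]

def main():
    # Run manually: requires the boto3 package and AWS credentials.
    import boto3
    agent_runtime = boto3.client("bedrock-agent-runtime")
    print(ask_knowledge_base(
        agent_runtime,
        "What does our travel policy cover?",
        "KB123EXAMPLE",
        "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-lite-v1:0",
    ))
```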

&lt;p&gt;&lt;strong&gt;Understanding Action Groups&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While the knowledge base provides information, Action Groups define what the agent can do.&lt;/p&gt;

&lt;p&gt;An Action Group in AWS Bedrock specifies a set of operations or functions your agent can perform, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Querying data from a knowledge base&lt;/li&gt;
&lt;li&gt;Calling APIs or triggering workflows&lt;/li&gt;
&lt;li&gt;Executing specific business logic&lt;/li&gt;
&lt;li&gt;Applying guardrails and filters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By modularizing these capabilities, Action Groups ensure that your AI agent operates within defined boundaries, remains auditable, and aligns with enterprise goals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhancing Development with Amazon Q&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To further enhance the GenAI development experience, AWS offers Amazon Q—an AI-powered assistant designed to help developers, data scientists and architects work more efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With Amazon Q, you can:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use it as an extension in your code editor, such as VS Code, to get real-time code suggestions and explanations.&lt;/p&gt;

&lt;p&gt;Integrate it with AWS Console or Bedrock workflows to assist with prompt optimization, model selection or debugging.&lt;/p&gt;

&lt;p&gt;Quickly generate infrastructure-as-code templates, deployment scripts or Bedrock configurations using natural language.&lt;/p&gt;

&lt;p&gt;In short, Amazon Q acts as a personal co-developer—accelerating your Bedrock projects and making the entire GenAI development lifecycle smoother and more intelligent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Responsible AI&lt;/strong&gt; Is Actionable: Guardrails make it easy to enforce safety and compliance in GenAI applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt Engineering&lt;/strong&gt; Drives Quality: The better your prompt, the better your model’s understanding, precision, and reliability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grounding Enhances Trust:&lt;/strong&gt; Connecting your agent to verified data sources reduces hallucinations and improves user confidence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Knowledge Base&lt;/strong&gt; Adds Intelligence: Using S3, Vector Stores, or Kendra Indexes allows your AI to leverage both structured and unstructured enterprise data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Action Groups Enable Control:&lt;/strong&gt; They define what your AI agent can do, ensuring flexibility without compromising governance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Q&lt;/strong&gt; Accelerates Development: With Q integrated into your editor, you can experiment, refine, and deploy GenAI applications faster and smarter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS Bedrock makes it remarkably easy to build intelligent, safe, and scalable GenAI applications. By combining the Novalite Agent, Guardrails, Grounding, Prompt Engineering, and an S3 Knowledge Base, I was able to create an AI assistant that not only understands context but also respects privacy and organizational constraints.&lt;/p&gt;

&lt;p&gt;And with Amazon Q as a coding companion, the process becomes even more seamless—bridging the gap between creativity, compliance, and efficiency.&lt;/p&gt;

&lt;p&gt;For teams exploring GenAI adoption, AWS Bedrock offers the perfect balance of innovation, control, and responsibility—a true enterprise-ready foundation for the next generation of AI-driven solutions.&lt;/p&gt;




&lt;p&gt;If you have questions or want to share your experience, feel free to drop a comment or reach out to me at &lt;a href="https://www.linkedin.com/in/shubham-kumar1807/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/shubham-kumar1807/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>genai</category>
      <category>ai</category>
      <category>aws</category>
      <category>bedrock</category>
    </item>
    <item>
      <title>Getting Started with Coder.com</title>
      <dc:creator>Shubham Kumar</dc:creator>
      <pubDate>Wed, 17 Sep 2025 17:20:25 +0000</pubDate>
      <link>https://forem.com/shubhamkcloud/getting-started-with-codercom-nm8</link>
      <guid>https://forem.com/shubhamkcloud/getting-started-with-codercom-nm8</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In today’s cloud-native world, developer productivity is often limited by the complexity of local development environments. Setting up tools, dependencies, and resources can be time-consuming and inconsistent across teams. That’s where Coder.com comes in.&lt;/p&gt;

&lt;p&gt;Coder provides a way to run developer environments remotely, allowing engineers to code from anywhere while maintaining security, scalability, and reproducibility. Recently, I performed a Proof of Concept (POC) using Coder.com, and in this blog, I’ll share my experience, learnings, and key takeaways.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Coder.com?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Traditional development setups often face challenges like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inconsistent environments across machines&lt;/li&gt;
&lt;li&gt;Long onboarding times for new developers&lt;/li&gt;
&lt;li&gt;Security concerns when code resides on laptops&lt;/li&gt;
&lt;li&gt;Scalability issues when projects demand more compute power&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To address these issues, I explored Coder.com, a platform that enables cloud-hosted, reproducible developer workspaces. This blog walks you through my installation, configuration and learnings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install Dependencies&lt;/strong&gt;&lt;br&gt;
I began with a clean Ubuntu server. First, I updated the system and installed required utilities:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt upgrade -y
sudo apt install -y curl unzip

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Install Coder&lt;/strong&gt;&lt;br&gt;
I fetched the latest release of Coder from GitHub and installed it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CODER_VERSION=$(curl -s https://api.github.com/repos/coder/coder/releases/latest | grep tag_name | cut -d '"' -f 4 | sed 's/v//')
curl -fSL "https://github.com/coder/coder/releases/download/v${CODER_VERSION}/coder_${CODER_VERSION}_linux_amd64.deb" -o coder.deb

sudo apt install ./coder.deb
coder --version

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Install Docker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Coder requires Docker for container-based workspaces, so I installed and enabled it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install -y docker.io
sudo systemctl enable --now docker
docker ps

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I also added my user (ubuntu) to the Docker group for permissions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -aG docker ubuntu

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Run Coder Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once Docker was ready, I launched the Coder server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;coder server

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make Coder run as a service, I created a systemd service unit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tee /etc/systemd/system/coder.service &amp;gt; /dev/null &amp;lt;&amp;lt;EOF
[Unit]
Description=Coder service
After=network.target

[Service]
ExecStart=/usr/bin/coder server
Restart=always
User=ubuntu
Environment=CODER_ADDRESS=0.0.0.0:7080

[Install]
WantedBy=multi-user.target
EOF

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then reloaded and enabled it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl daemon-reload
sudo systemctl enable coder
sudo systemctl start coder

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, I opened port 7080 on the server and accessed the Coder dashboard at:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://&amp;lt;server-ip&amp;gt;:7080

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you access the UI, you need to create an admin user first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Play with Coder UI&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From the Coder dashboard, I used the built-in Docker template to spin up a workspace.&lt;/li&gt;
&lt;li&gt;This template allowed me to quickly create a development container running on my server.&lt;/li&gt;
&lt;li&gt;I created a new user, assigned them the member role, and confirmed they could easily access my templates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Create Custom template&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Coder integrates natively with Terraform. I SSH'd into my Ubuntu machine and created a main.tf file to define a simple workspace template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    coder = {
      source  = "coder/coder"
      version = "0.15.0"
    }
  }
}

provider "coder" {}

resource "coder_agent" "dev" {
  arch = "amd64"
  os   = "linux"
  startup_script = &amp;lt;&amp;lt;EOT
    #!/bin/bash
    set -e

    sudo apt-get update
    sudo apt-get install -y python3 python3-pip git curl vim
  EOT
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I initialized Terraform and pushed the template to Coder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
coder templates push my-ubuntu-template

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, in the Coder dashboard, I can see my custom template.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;✅ Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This POC with Coder.com gave me a solid understanding of how remote developer environments can simplify onboarding, improve consistency, and enhance security. By testing both the Docker template and Terraform-based custom templates, I was able to validate that Coder is flexible enough to support simple as well as advanced use cases.&lt;/p&gt;

&lt;p&gt;If you have questions or want to share your experience, feel free to drop a comment or reach out to me at &lt;a href="https://www.linkedin.com/in/shubham-kumar1807/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/shubham-kumar1807/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>coder</category>
      <category>clouddevelopment</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>🛡️ Kubernetes Pod Security Best Practices: 13 Key Strategies for Hardening Your Workloads</title>
      <dc:creator>Shubham Kumar</dc:creator>
      <pubDate>Tue, 05 Aug 2025 07:28:49 +0000</pubDate>
      <link>https://forem.com/shubhamkcloud/kubernetes-pod-security-best-practices-13-key-strategies-for-hardening-your-workloads-f0a</link>
      <guid>https://forem.com/shubhamkcloud/kubernetes-pod-security-best-practices-13-key-strategies-for-hardening-your-workloads-f0a</guid>
      <description>&lt;p&gt;Kubernetes has become the de facto standard platform for deploying containerized applications at scale, but with this power comes a broad and evolving attack surface. Pods, as the smallest deployable units, must be secured thoughtfully to prevent breaches, privilege escalation or lateral movement within the cluster.&lt;br&gt;
In this guide, we’ll walk through 13 practical and essential strategies to secure Kubernetes pods, covering everything from IAM and network policy to runtime protection and advanced detection tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvs401v5oscv6f8wsh62.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvs401v5oscv6f8wsh62.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. 🔐 Use IAM Roles for Service Accounts (IRSA)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; Instead of assigning broad IAM roles to entire worker nodes, Kubernetes in AWS (EKS) supports IAM Roles for Service Accounts (IRSA)—a method that grants AWS permissions to individual pods via service accounts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; This approach aligns with the principle of least privilege, reducing the blast radius in case a pod is compromised.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; A pod that needs access to an S3 bucket gets a scoped-down IAM role via its associated service account, without granting that access to every pod on the node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. 🌐 Implement Network Policies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; Kubernetes &lt;strong&gt;Network Policies&lt;/strong&gt; control how pods communicate with each other and external endpoints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; By default, all pods can talk to each other, which is risky in production. Network policies let you isolate traffic by namespace, label, port or IP block.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best practice:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start with a default deny-all policy.&lt;/li&gt;
&lt;li&gt;Explicitly allow required connections (e.g., frontend ↔ backend).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of network policies as a virtual firewall for your pods.&lt;/p&gt;
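&lt;p&gt;A minimal sketch of this pattern (namespace, labels, and port are illustrative):&lt;/p&gt;

```yaml
# Deny all ingress and egress for every pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}          # empty selector matches every pod
  policyTypes:
  - Ingress
  - Egress
---
# Then explicitly allow frontend pods to reach backend pods on port 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```

&lt;p&gt;Note that network policies only take effect if your CNI plugin (e.g., Calico, Cilium) supports them.&lt;/p&gt;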

&lt;p&gt;&lt;strong&gt;3. 📦 Apply Pod Security Standards (PSS)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; Kubernetes defines three levels of pod security:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Privileged:&lt;/strong&gt; No restrictions—use only in trusted, internal environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Baseline:&lt;/strong&gt; Disallows dangerous features but permits standard workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Restricted:&lt;/strong&gt; Enforces strong security defaults (e.g., no privilege escalation, non-root containers).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; These standards help establish a cluster-wide baseline for pod hardening, especially useful in multi-team or shared environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. 🛂 Enforce Pod Security Admission (PSA)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; Pod Security Admission (PSA) evaluates pods against the above standards during admission (Kubernetes 1.25+).&lt;/p&gt;

&lt;p&gt;Modes of enforcement:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enforce:&lt;/strong&gt; Blocks non-compliant pods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Warn:&lt;/strong&gt; Allows the pod but shows a warning message.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit:&lt;/strong&gt; Records policy violations in logs for visibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt; In development environments, start with audit and warn, then gradually move to enforce as your policies mature.&lt;/p&gt;
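&lt;p&gt;PSA modes are configured as namespace labels. A sketch of a development namespace that enforces baseline while warning and auditing against restricted (namespace name is illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev-team-a
  labels:
    # Block pods that violate the baseline profile
    pod-security.kubernetes.io/enforce: baseline
    # Warn and log when pods would violate the stricter restricted profile
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```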

&lt;p&gt;&lt;strong&gt;5. 🔧 Use Security Context for Containers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; The securityContext in pod specs allows you to define OS-level security settings for containers.&lt;/p&gt;

&lt;p&gt;Key settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;runAsNonRoot: true&lt;/code&gt;– Avoid running containers as root.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;readOnlyRootFilesystem: true&lt;/code&gt; – Prevent writing to the container filesystem.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;allowPrivilegeEscalation: false&lt;/code&gt; – Blocks processes from gaining additional privileges.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;capabilities.drop: ["ALL"]&lt;/code&gt; – Removes unneeded Linux capabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; These settings significantly reduce the risk of container breakout and lateral attacks.&lt;/p&gt;
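&lt;p&gt;A hardened pod spec combining the settings above might look like this (names, UID, and image are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
  - name: app
    image: nginx:alpine
    securityContext:
      runAsNonRoot: true
      runAsUser: 10001                 # arbitrary non-root UID
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
```

&lt;p&gt;If the application needs writable scratch space, mount an emptyDir volume rather than relaxing readOnlyRootFilesystem.&lt;/p&gt;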

&lt;p&gt;&lt;strong&gt;6. 🔐 Enable TLS Between Pods&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; Use mutual TLS (mTLS) to encrypt traffic between services within the cluster and verify their identities.&lt;/p&gt;

&lt;p&gt;How to implement:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use service meshes like Istio, Linkerd, or Consul.&lt;/li&gt;
&lt;li&gt;Configure automatic certificate rotation and policy-based encryption.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; TLS prevents man-in-the-middle attacks and eavesdropping in internal service-to-service communication.&lt;/p&gt;
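&lt;p&gt;If you use Istio, for example, namespace-wide mTLS can be required with a single PeerAuthentication resource (namespace name is illustrative):&lt;/p&gt;

```yaml
# Require mutual TLS for all workload-to-workload traffic in the namespace
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT
```

&lt;p&gt;The mesh's sidecars then handle certificate issuance and rotation automatically, with no application changes.&lt;/p&gt;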

&lt;p&gt;&lt;strong&gt;7. 🐧 Prefer Linux Nodes for Security Features&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Most advanced container security features like AppArmor, Seccomp, and SELinux are only supported on Linux nodes.&lt;br&gt;
&lt;strong&gt;Best practice:&lt;/strong&gt; For workloads requiring advanced security hardening, run them on Linux-based nodes rather than Windows nodes, which currently lack equivalent support.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. 📉 Set Resource Limits and Requests Wisely&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; Kubernetes allows defining CPU and memory requests (guaranteed) and limits (maximum allowed) per container.&lt;/p&gt;

&lt;p&gt;Why it matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prevent resource starvation across the cluster.&lt;/li&gt;
&lt;li&gt;Avoid over-committing memory, which can cause pods to be evicted or terminated.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pro tip:&lt;/strong&gt; Set the memory limit equal to the memory request (Kubernetes requires limits to be at least the request); matching them avoids overcommitting memory and surprise OOM kills or evictions.&lt;/p&gt;
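&lt;p&gt;A sketch of a container with explicit requests and a matching memory limit (values are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-app
spec:
  containers:
  - name: app
    image: nginx:alpine
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        memory: "256Mi"   # equal to the request, so memory is never overcommitted
```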

&lt;p&gt;&lt;strong&gt;9. 🚫 Enable Seccomp Profiles&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; Seccomp (Secure Computing Mode) restricts the syscalls a container can make to the kernel.&lt;/p&gt;

&lt;p&gt;How to implement:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use RuntimeDefault for base protection.&lt;/li&gt;
&lt;li&gt;Create custom profiles for even tighter control.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; It minimizes the risk of kernel-level attacks and narrows the container’s system call surface.&lt;/p&gt;
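&lt;p&gt;Applying the runtime's default seccomp profile is a small addition to the pod's security context, for example:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-app
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault   # use the container runtime's default syscall filter
  containers:
  - name: app
    image: nginx:alpine
```

&lt;p&gt;For custom profiles, use type: Localhost with a localhostProfile path to a JSON profile on the node.&lt;/p&gt;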

&lt;p&gt;&lt;strong&gt;10. 🧱 Use AppArmor or SELinux for Mandatory Access Control&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AppArmor:&lt;/strong&gt; Available in Debian/Ubuntu-based distributions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SELinux:&lt;/strong&gt; Used in RedHat/CentOS environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both restrict what a containerized application can do, e.g., file access, network usage, system resource access.&lt;br&gt;
&lt;strong&gt;Why it matters:&lt;/strong&gt; Enforcing security profiles with AppArmor or SELinux provides host-level protection from malicious container activity.&lt;/p&gt;
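&lt;p&gt;As a sketch, an AppArmor profile can be applied per container via an annotation (newer Kubernetes versions also support securityContext.appArmorProfile); names are illustrative:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-app
  annotations:
    # key suffix ("app") must match the container name
    container.apparmor.security.beta.kubernetes.io/app: runtime/default
spec:
  containers:
  - name: app
    image: nginx:alpine
```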

&lt;p&gt;&lt;strong&gt;11. 📊 Enable Logging and Monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; Centralized logging and monitoring systems capture logs and metrics from pods and the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tools:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fluent Bit / Fluentd&lt;/li&gt;
&lt;li&gt;Prometheus + Grafana&lt;/li&gt;
&lt;li&gt;ELK Stack (Elasticsearch + Logstash + Kibana)&lt;/li&gt;
&lt;li&gt;Splunk, Datadog, or Cloud-native options (CloudWatch, GCP Ops)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Early detection of anomalies (e.g., unauthorized access, unexpected pod restarts) is only possible if you have proper visibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;12. 📍 Define Pod Placement Rules&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; Use Node Affinity, Taints/Tolerations, and Node Selectors to control where pods are scheduled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Helps isolate critical workloads, enforce compliance, and reduce the attack surface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Place workloads with elevated privileges on isolated nodes.&lt;/li&gt;
&lt;li&gt;Prevent multi-tenancy risks by using taints and tolerations.&lt;/li&gt;
&lt;/ul&gt;
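&lt;p&gt;A sketch of both techniques together: taint a dedicated node, then schedule the sensitive workload with a matching toleration and node selector (node, label, and taint names are illustrative):&lt;/p&gt;

```yaml
# First, taint and label the dedicated node:
#   kubectl taint nodes secure-node-1 dedicated=secure:NoSchedule
#   kubectl label nodes secure-node-1 workload-class=secure
apiVersion: v1
kind: Pod
metadata:
  name: sensitive-app
spec:
  nodeSelector:
    workload-class: secure     # only schedule onto labelled nodes
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "secure"
    effect: "NoSchedule"       # tolerate the taint that keeps other pods away
  containers:
  - name: app
    image: nginx:alpine
```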

&lt;p&gt;&lt;strong&gt;13. 🛡️ Use GuardDuty for Runtime Threat Detection (EKS Only)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; Amazon GuardDuty offers runtime threat detection for Amazon EKS clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detect suspicious Kubernetes API calls (e.g., unauthorized role binding)&lt;/li&gt;
&lt;li&gt;Identify known malware in containers (via GuardDuty Malware Protection)&lt;/li&gt;
&lt;li&gt;Surface signs of crypto mining, lateral movement, or C2 activity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Provides deep, continuous security insights with no additional agent installation required.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;✅ Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Securing Kubernetes pods is not a one-time task—it’s a layered strategy that involves identity controls, traffic restrictions, runtime hardening, visibility, and threat detection. By implementing these 13 best practices, you strengthen your defenses against both internal misconfigurations and external threats.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Security is not a feature. It’s a discipline." — Treat it as such in every phase of your Kubernetes adoption.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you have questions or want to share your experience, feel free to drop a comment or reach out to me at &lt;a href="https://www.linkedin.com/in/shubham-kumar1807/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/shubham-kumar1807/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>eks</category>
      <category>security</category>
      <category>aws</category>
    </item>
    <item>
      <title>🔐 Setting Up Headscale and Tailscale for Secure Private Networking: A Step-by-Step Guide</title>
      <dc:creator>Shubham Kumar</dc:creator>
      <pubDate>Thu, 24 Jul 2025 06:24:28 +0000</pubDate>
      <link>https://forem.com/shubhamkcloud/setting-up-headscale-and-tailscale-for-secure-private-networking-a-step-by-step-guide-2mo6</link>
      <guid>https://forem.com/shubhamkcloud/setting-up-headscale-and-tailscale-for-secure-private-networking-a-step-by-step-guide-2mo6</guid>
<description>&lt;p&gt;In this blog, we'll walk through how to set up Headscale, a self-hosted alternative to the Tailscale coordination server, and connect multiple systems using Tailscale — all within your own private network. This setup enables secure and seamless access between devices, governed by customisable ACLs (Access Control Lists).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧩 What Are Tailscale and Headscale?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tailscale is a modern VPN built on WireGuard. It creates secure peer-to-peer connections between devices, even behind NATs and firewalls.&lt;br&gt;
Headscale is an open-source self-hosted control server that manages Tailscale nodes without relying on Tailscale's cloud infrastructure.&lt;br&gt;
By setting up Headscale and connecting clients via Tailscale, you gain full control over your network topology, access rules, and data flow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🛠️ Step-by-Step Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Provision an EC2 Server for Headscale&lt;/strong&gt;&lt;br&gt;
First, we spin up an EC2 instance (Ubuntu is preferred) to host our Headscale server. Make sure to:&lt;/p&gt;

&lt;p&gt;Open required ports in your EC2 security group (e.g., 443 for HTTPS, 80 for HTTP if using a web UI later).&lt;br&gt;
Assign a static public IP or use Elastic IP for persistent access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Install and Configure Headscale&lt;/strong&gt;&lt;br&gt;
SSH into the EC2 instance and follow these steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Update packages
sudo apt update &amp;amp;&amp;amp; sudo apt upgrade -y

# Install dependencies
sudo apt install -y curl git sqlite3

# Install Headscale from its GitHub releases page
# (check https://github.com/juanfont/headscale/releases for the latest version)
HEADSCALE_VERSION="0.23.0"
wget -O headscale.deb "https://github.com/juanfont/headscale/releases/download/v${HEADSCALE_VERSION}/headscale_${HEADSCALE_VERSION}_linux_amd64.deb"
sudo dpkg -i headscale.deb

# The package installs a default configuration at /etc/headscale/config.yaml;
# edit server_url and listen_addr there before starting the service

# Start Headscale service
sudo systemctl enable --now headscale

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Create a User in Headscale&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo headscale users create employee

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a user under which multiple devices can be registered.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Register a Node (Application Server)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, create another EC2 server that we’ll use to host a sample application:&lt;/p&gt;

&lt;p&gt;For testing purposes, I am simply running a sample application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start a simple web server on port 8888
python3 -m http.server 8888

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure to open port 8888 on the EC2 security group.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To add this server as a node:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install Tailscale on the node.&lt;/li&gt;
&lt;li&gt;Generate a pre-auth key by running the following command on the Headscale server:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo headscale preauthkeys create --reusable --expiration 24h --user employee

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;On the node, run the following command to connect it to the network:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tailscale up --login-server "https://&amp;lt;your-headscale-ec2-ip&amp;gt;" --authkey &amp;lt;generated-key&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For learning or proof-of-concept purposes, you can also use plain http as the login-server URL.&lt;/p&gt;

&lt;p&gt;The server is now securely connected to the Headscale-managed private network.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the Headscale server, confirm the node has registered successfully:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo headscale list nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Add a Client System (Intern's Laptop)&lt;/strong&gt;&lt;br&gt;
Repeat the above Tailscale installation steps on another system (e.g., your laptop or intern’s machine), using a different auth key or the same reusable one. Once connected, this device becomes a node in your mesh network.&lt;/p&gt;
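&lt;p&gt;On the client machine, the steps look roughly like this (the server URL and auth key are placeholders you must substitute):&lt;/p&gt;

```shell
# Install the Tailscale client (official convenience script)
curl -fsSL https://tailscale.com/install.sh | sh

# Join the Headscale-managed network
HEADSCALE_URL="https://headscale.example.com"   # your Headscale server URL
AUTH_KEY="paste-preauth-key-here"               # key generated in step 4
sudo tailscale up --login-server "$HEADSCALE_URL" --authkey "$AUTH_KEY"

# Verify the connection and list reachable peers
tailscale status
```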

&lt;p&gt;&lt;strong&gt;6. Define ACLs to Control Access&lt;/strong&gt;&lt;br&gt;
To restrict access based on roles, define ACLs in the Headscale configuration file.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a file named acl.hujson and save it at /etc/headscale.
You can refer to the following ACL as an example:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "groups": {
    "group:employees": [
    "username1",
    "username2"

],
    "group:interns": [
        "shubham.kumar"      
    ]
  },
  "acls": [    
{
     "action": "accept",
     "src": ["group:employees"],
     "dst": ["*:*"]
},
{
     "action": "accept",
     "src": ["group:interns"],
     "dst": ["&amp;lt;app server ip&amp;gt;:8888"]
}
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is a simple test policy: it grants the employees group full access, while the interns group can reach only a specific IP on a specific port.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Headscale config file at /etc/headscale/config.yaml, add or uncomment the following:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;policy:
  path: "/etc/headscale/acl.hujson"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Restart Headscale to apply the updated ACL:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl stop headscale
sudo systemctl start headscale
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Check the Headscale status
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status headscale
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;✅ Final Test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test 1&lt;/strong&gt; - Connectivity Check &lt;br&gt;
From the intern's laptop, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;telnet &amp;lt;app server ip&amp;gt; 88888
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The connection should succeed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test 2&lt;/strong&gt; - Web Access Test&lt;br&gt;
Open the browser and visit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://&amp;lt;tailscale-ip-of-app-server&amp;gt;:8888
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm that access works only as defined in the ACL rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test 3&lt;/strong&gt; - ACL Enforcement&lt;br&gt;
Change the ACL policy to remove access or modify the port (for example, remove the interns' access to port 8888), then restart Headscale.&lt;/p&gt;

&lt;p&gt;Now retry access — it should be blocked as per the updated ACL.&lt;/p&gt;

&lt;p&gt;I hope this guide helps you get started with setting up your own Headscale-Tailscale network. Setting up your own secure, self-hosted VPN network doesn't have to be complex — and with Headscale and Tailscale, it's easier than ever. Try it out and take full control of your network access. &lt;/p&gt;

&lt;p&gt;If you have questions or want to share your experience, feel free to drop a comment or reach out to me at &lt;a href="https://www.linkedin.com/in/shubham-kumar1807/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/shubham-kumar1807/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Good Luck!!&lt;/p&gt;

</description>
      <category>selfhostedvpn</category>
      <category>tailscale</category>
      <category>headscale</category>
      <category>cloudnetworking</category>
    </item>
    <item>
      <title>A Beginner’s Guide to Building a Career in DevOps and Cloud</title>
      <dc:creator>Shubham Kumar</dc:creator>
      <pubDate>Thu, 21 Nov 2024 18:30:31 +0000</pubDate>
      <link>https://forem.com/shubhamkcloud/a-beginners-guide-to-building-a-career-in-devops-and-cloud-2665</link>
      <guid>https://forem.com/shubhamkcloud/a-beginners-guide-to-building-a-career-in-devops-and-cloud-2665</guid>
      <description>&lt;p&gt;With over 7 years of experience in the fields of Cloud and DevOps, I often get asked a common question:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“How can I start a career in DevOps and Cloud?”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The answer isn’t simple, but it’s definitely achievable with the right mindset and plan. DevOps and Cloud are vast domains and diving into them without a roadmap can feel overwhelming. In this blog, I’ll share some key steps to help you begin your journey in these exciting fields.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Understand the Connection Between Cloud and DevOps&lt;/strong&gt;&lt;br&gt;
In today’s tech-driven world, DevOps and Cloud are inseparable. Cloud platforms (like AWS, Azure, and GCP) provide the infrastructure while DevOps tools and practices enable efficient development, deployment, and management of applications on these platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where to Start&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Begin by picking one cloud platform to focus on. Your choice can depend on your interests or the platform’s popularity in your region or industry.&lt;/p&gt;

&lt;p&gt;A key point to note: All major cloud providers offer similar services with different names. For example:&lt;/p&gt;

&lt;p&gt;For a server:&lt;br&gt;
AWS provides EC2&lt;br&gt;
Azure offers Virtual Machines&lt;br&gt;
GCP uses Compute Engine&lt;br&gt;
Though the terminology varies, the functionality remains largely the same.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Approach Learning Cloud Computing to Start Your Career&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cloud computing is a vast domain, so it’s essential to go step by step:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start with the Basics: Identify and focus on foundational services. Just search online for "top 10 basic services of [cloud provider]" to create a list and start learning them one by one.&lt;/li&gt;
&lt;li&gt;Use Free or Affordable Learning Resources:
&lt;ul&gt;
&lt;li&gt;YouTube tutorials for practical demonstrations.&lt;/li&gt;
&lt;li&gt;Udemy courses for structured learning.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Focus on Hands-on Practice: Practice is key to understanding concepts. Create and manage resources on your chosen cloud platform to solidify your learning.&lt;/li&gt;
&lt;li&gt;Remember, you don’t need to learn everything at once. Start small and build your expertise gradually.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Learn the Basics of DevOps&lt;/strong&gt;&lt;br&gt;
DevOps fosters collaboration between development and operations teams through tools and automation. Here are the areas you should focus on first:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CI/CD Pipelines&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CI/CD (Continuous Integration/Continuous Deployment) is the backbone of DevOps. It automates the process of building, testing, and deploying applications.&lt;/p&gt;

&lt;p&gt;Start with Jenkins, a popular tool for creating CI/CD pipelines.&lt;br&gt;
Install Jenkins on your system, explore tutorials, and practice building pipelines on your local machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Containerization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Docker is a critical tool in DevOps that enables containerization of applications. Learn the basics of Docker, understand its concepts, and experiment with containerizing simple applications on your local system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Master Infrastructure as Code (IaC)&lt;/strong&gt;&lt;br&gt;
When learning cloud platforms, you’ll likely create resources manually using the console. However, DevOps emphasizes automation. Tools like Terraform or CloudFormation allow you to automate resource creation and management.&lt;/p&gt;

&lt;p&gt;Start with Terraform: Practice creating resources like the ones you learned in your initial cloud exploration. Automating tasks builds confidence and expertise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Understand Source Code Management&lt;/strong&gt;&lt;br&gt;
Version control is vital for modern software development. Start with Git, one of the most widely used version control tools.&lt;/p&gt;

&lt;p&gt;Learn how to use Git to manage code repositories.&lt;br&gt;
Practice on your local system to gain hands-on experience.&lt;/p&gt;
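&lt;p&gt;A minimal local workflow to practice with might look like this (the repository name and identity values are placeholders):&lt;/p&gt;

```shell
# Create a repository, make a first commit, and inspect history
git init demo-repo
git -C demo-repo config user.email "you@example.com"
git -C demo-repo config user.name "Your Name"
printf '# Demo\n' | tee demo-repo/README.md
git -C demo-repo add README.md
git -C demo-repo commit -m "Initial commit"
git -C demo-repo log --oneline
```

&lt;p&gt;From there, practice branching (git branch, git checkout) and pushing to a remote such as GitHub.&lt;/p&gt;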

&lt;p&gt;&lt;strong&gt;5. Develop Basic Scripting Skills&lt;/strong&gt;&lt;br&gt;
While DevOps doesn’t require you to be a programming expert, understanding the basics of scripting is essential.&lt;/p&gt;

&lt;p&gt;Learn the fundamentals of Python or Bash scripting.&lt;br&gt;
Focus on practical tasks, like automating repetitive processes or managing files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Explore Orchestration Tools&lt;/strong&gt;&lt;br&gt;
Kubernetes is a powerful tool for container orchestration. While it’s a vast topic, even a basic understanding will set you apart.&lt;/p&gt;

&lt;p&gt;Start learning Kubernetes concepts using tutorials or courses.&lt;br&gt;
Install Minikube on your laptop to practice orchestrating the Dockerized applications you’ve already built.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plan Your Learning Path&lt;/strong&gt;&lt;br&gt;
I understand this might seem overwhelming at first, but remember, everyone starts somewhere. I began my journey the same way—step by step. It might seem like a lot to take in, but trust me, starting with the process I’ve outlined will make it manageable.&lt;/p&gt;

&lt;p&gt;Don’t aim to master everything right away. Focus on learning gradually, one step at a time. Practical experience is the key to mastering any skill. Once you have a basic understanding of DevOps and Cloud, start building simple projects to solidify your knowledge.&lt;/p&gt;

&lt;p&gt;You can connect with me on LinkedIn or via email for guidance. I’d be happy to suggest projects that will help you gain hands-on experience.&lt;/p&gt;

&lt;p&gt;Good Luck!&lt;/p&gt;

&lt;p&gt;Welcome to the world of DevOps and Cloud! If you found this guide helpful or have questions, feel free to connect with me on LinkedIn: "shubham-kumar1807", or leave a comment here.&lt;/p&gt;

&lt;p&gt;Thank you for reading!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>aws</category>
      <category>learning</category>
    </item>
    <item>
      <title>Enhancing AWS S3 Security with GuardDuty.</title>
      <dc:creator>Shubham Kumar</dc:creator>
      <pubDate>Fri, 30 Aug 2024 06:24:56 +0000</pubDate>
      <link>https://forem.com/shubhamkcloud/enhancing-aws-s3-security-with-guardduty-ldb</link>
      <guid>https://forem.com/shubhamkcloud/enhancing-aws-s3-security-with-guardduty-ldb</guid>
      <description>&lt;p&gt;In today's digital era, data is the lifeblood of businesses and individuals alike. As the volume of data continues to grow exponentially, more organizations are turning to cloud solutions for scalable and reliable storage. Among these, Amazon S3 has emerged as one of the most popular and trusted data storage services.&lt;/p&gt;

&lt;p&gt;However, while data is undeniably valuable, it also becomes a prime target for malicious activities. Ensuring the security of your data is not just an option—it's a necessity. Protecting your data from potential threats is crucial to maintaining the integrity and trustworthiness of your operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS GuardDuty&lt;/strong&gt; offers robust features for safeguarding your S3 data through two distinct protection plans:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;S3 Protection&lt;/li&gt;
&lt;li&gt;Malware Protection for S3&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this blog, we'll provide a brief overview of the S3 Protection plan and take a closer look at the Malware Protection for S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3 Protection Plan&lt;/strong&gt;&lt;br&gt;
The S3 Protection plan within Amazon GuardDuty is designed to monitor and detect suspicious activities and potential security threats involving your S3 buckets. It emphasizes the identification of unauthorized access, data exfiltration attempts, and misconfigurations that could jeopardize your data. By analyzing access logs, API call patterns, and bucket configurations, the S3 Protection plan generates actionable alerts, enabling you to swiftly address unauthorized actions and maintain secure bucket settings. This plan plays a crucial role in safeguarding your S3 environment by continuously monitoring how your data is accessed and managed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Malware Protection for S3&lt;/strong&gt;&lt;br&gt;
AWS recently introduced the Malware Protection for S3 feature as part of Amazon GuardDuty. This powerful tool helps detect potential malware by scanning newly uploaded objects in your selected Amazon Simple Storage Service (Amazon S3) buckets. Whenever a new object or a new version of an existing object is uploaded to the designated bucket, GuardDuty automatically initiates a malware scan, providing an additional layer of security for your data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why should we use it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features of Malware Protection for S3&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Seamless Integration:&lt;/strong&gt; Integrating Malware Protection for S3 is straightforward, requiring no additional infrastructure. Simply enable the feature and select the desired S3 bucket for scanning. You can activate it through the AWS Management Console, API, CLI, CloudFormation templates, or Terraform.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customizable Scanning:&lt;/strong&gt; You have the flexibility to configure scans at the folder level by defining prefixes, allowing you to target specific areas of your S3 buckets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Scans on Upload:&lt;/strong&gt; The system automatically scans new objects as they are uploaded to your bucket, generating detailed reports within seconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Support for All File Formats:&lt;/strong&gt; Malware Protection for S3 is versatile, supporting scans across all file formats.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Highly Scalable:&lt;/strong&gt; The service is designed to scale effortlessly with your needs, ensuring consistent performance regardless of your data volume.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tagging Mechanism:&lt;/strong&gt; When enabled, the tagging feature labels each scanned object with one of the following statuses:

&lt;ul&gt;
&lt;li&gt;NO_THREATS_FOUND: The object is clean.&lt;/li&gt;
&lt;li&gt;THREATS_FOUND: Malware has been detected.&lt;/li&gt;
&lt;li&gt;UNSUPPORTED: The file format is not supported for scanning.&lt;/li&gt;
&lt;li&gt;ACCESS_DENIED: The scan couldn't access the object.&lt;/li&gt;
&lt;li&gt;FAILED: The scan was unsuccessful.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Quarantine Infected Files:&lt;/strong&gt; Infected files are automatically quarantined in a separate S3 bucket, effectively preventing further distribution and mitigating potential threats.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Rapid Findings:&lt;/strong&gt; Scan results are generated within seconds, providing swift feedback on the status of your files.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Contextualized Findings:&lt;/strong&gt; The system provides detailed insights, including metadata about the S3 data, specific scan results for the object, and the category of detected malware.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How does it work?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fer2islvexe4qog53njvd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fer2islvexe4qog53njvd.png" alt="Architecture" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you enable Malware Protection, an EventBridge rule is automatically added to your S3 bucket. This rule triggers the scanning mechanism whenever a file is uploaded. GuardDuty then initiates a process within a dedicated, secure VPC that has no internet access. Through AWS PrivateLink, the files are securely transferred from the S3 bucket to this isolated environment. GuardDuty’s Malware Protection then scans the files using heuristic analysis and machine learning models. Once the scan is complete, the file is deleted and the scan status, along with scan metadata, is processed. The results are published via an EventBridge rule, and scan metrics are sent to CloudWatch for monitoring.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;What It Scans&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The system detects threats using YARA rule definitions, which are specialized patterns for identifying malicious files.&lt;/li&gt;
&lt;li&gt;It has comprehensive visibility into various types of malware that may target AWS environments.&lt;/li&gt;
&lt;li&gt;The detection capabilities extend to multiple types of malware, including crypto miners, ransomware and web shells.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;GuardDuty Malware Protection for S3 quotas and limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maximum S3 Object Size: Up to 5 GB per object.&lt;/li&gt;
&lt;li&gt;Extracted Archive Bytes: The maximum size for extracted archive content is 5 GB.&lt;/li&gt;
&lt;li&gt;Extracted Archive Files: Up to 1,000 files can be extracted from an archive for scanning.&lt;/li&gt;
&lt;li&gt;Maximum Archive Depth Levels: Archives can be scanned up to a depth of 5 nested levels.&lt;/li&gt;
&lt;li&gt;Maximum Protected Buckets: You can protect up to 25 S3 buckets per account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you for taking the time to read my blog. I hope this guide has provided you with a clear understanding of how AWS GuardDuty's Malware Protection for S3 can effectively safeguard your data against potential threats. Implementing these security measures will enhance the protection of your S3 buckets, ensuring that your data remains secure and your cloud environment resilient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FAQs:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What will happen if we upload a file of size greater than 5 GB?&lt;/strong&gt;&lt;br&gt;
The 5 GB limit is a strict threshold as of now, and it cannot be increased. If a file larger than this limit is uploaded, GuardDuty will not scan it. Additionally, if tagging is enabled, the file will be tagged as "UNSUPPORTED."&lt;/p&gt;
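&lt;p&gt;If you enable the optional object-tagging feature, downstream code can read the verdict straight from the object's tags. A minimal sketch (the tag key &lt;code&gt;GuardDutyMalwareScanStatus&lt;/code&gt; and the &lt;code&gt;UNSUPPORTED&lt;/code&gt; value follow the behavior described above; verify against the current AWS docs):&lt;/p&gt;

```python
# Hypothetical helper: interpret the tag GuardDuty's "tag objects" option
# writes on a scanned S3 object. tag_set is the 'TagSet' list as returned
# by s3.get_object_tagging()['TagSet'].
def scan_status(tag_set):
    for tag in tag_set:
        if tag["Key"] == "GuardDutyMalwareScanStatus":
            return tag["Value"]
    return None  # no verdict tag: not yet scanned, or tagging disabled

# A file over the 5 GB limit is not scanned and ends up tagged UNSUPPORTED:
tags = [{"Key": "GuardDutyMalwareScanStatus", "Value": "UNSUPPORTED"}]
print(scan_status(tags))  # UNSUPPORTED
```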

&lt;p&gt;If you have any questions or suggestions, feel free to reach out at &lt;a href="mailto:kumarshubham1807@gmail.com"&gt;kumarshubham1807@gmail.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Stay secure, and happy cloud computing!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>guardduty</category>
      <category>s3malwareprotection</category>
      <category>s3</category>
    </item>
    <item>
      <title>Understanding Reverse Proxy with Nginx - Step By Step Guide</title>
      <dc:creator>Shubham Kumar</dc:creator>
      <pubDate>Mon, 18 Dec 2023 07:24:02 +0000</pubDate>
      <link>https://forem.com/shubhamkcloud/understanding-reverse-proxy-with-nginx-step-by-step-guide-18a0</link>
      <guid>https://forem.com/shubhamkcloud/understanding-reverse-proxy-with-nginx-step-by-step-guide-18a0</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the realm of web servers and networking, the concept of a reverse proxy is a powerful and versatile tool. Among the myriad of choices, Nginx stands out as a robust solution for implementing reverse proxy functionality. In this blog post, we'll explore what a reverse proxy is, why it's beneficial, and how to set up and configure one using Nginx.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a Reverse Proxy?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A reverse proxy is a server that sits between client devices (such as web browsers) and a web server, forwarding client requests to the server and returning the server's responses back to the clients. Unlike a forward proxy, which handles requests from clients to the internet, a reverse proxy manages requests from clients to one or more backend servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use a Reverse Proxy?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Port Forwarding:&lt;/strong&gt;&lt;br&gt;
A reverse proxy is an excellent tool for simplifying and securing port forwarding scenarios. Instead of exposing internal services directly to the internet, you can use a reverse proxy to manage access to these services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Load Balancing:&lt;/strong&gt;&lt;br&gt;
One of the primary benefits of using a reverse proxy is load balancing. When multiple backend servers are available, a reverse proxy distributes incoming requests among them, ensuring optimal resource utilization and preventing any single server from being overwhelmed.&lt;/p&gt;
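&lt;p&gt;As an illustration (the pool name and backend hosts are placeholders), a simple round-robin pool in Nginx looks like this:&lt;/p&gt;

```nginx
# Requests to / are distributed across the pool, round-robin by default.
upstream app_pool {
    server backend1.internal:8080;
    server backend2.internal:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_pool;
    }
}
```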

&lt;p&gt;&lt;strong&gt;Increased Security:&lt;/strong&gt;&lt;br&gt;
A reverse proxy acts as an additional layer of security for your web applications. It can hide the identity and characteristics of your backend servers from external clients. This adds a level of obscurity, making it more challenging for potential attackers to gather information about your infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SSL Termination:&lt;/strong&gt;&lt;br&gt;
Handling SSL/TLS encryption can be resource-intensive for backend servers. A reverse proxy can offload the SSL termination process, freeing up resources on the backend and simplifying certificate management.&lt;/p&gt;
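&lt;p&gt;A minimal sketch of SSL termination at the proxy (the certificate paths are placeholders): TLS ends at Nginx, and traffic to the backend continues over plain HTTP.&lt;/p&gt;

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Placeholder paths; substitute your own certificate and key.
    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        proxy_pass http://backend-server:8080;
        # Let the backend know the original request was HTTPS.
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```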

&lt;p&gt;&lt;strong&gt;Web Acceleration:&lt;/strong&gt;&lt;br&gt;
Reverse proxies can cache static assets, such as images and CSS files, and serve them directly to clients. This reduces the load on backend servers, speeds up content delivery, and enhances overall web performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up Reverse Proxy with Nginx:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Install Nginx. I am using Ubuntu OS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get install nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Configure Nginx as a Reverse Proxy&lt;br&gt;
Edit the Nginx configuration file, usually located at /etc/nginx/nginx.conf or /etc/nginx/sites-available/default. Add a location block for the backend server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;

    server_name example.com;

    location /serviceA/ {
        proxy_pass http://backend-server:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /serviceB/ {
        proxy_pass http://backend-server:8081/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Add more location blocks for additional services as needed
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's say you have multiple services running on your backend servers, each on a different port (e.g., service A on port 8080, service B on port 8081, and so on). Instead of exposing each service individually, you can leverage Nginx as a reverse proxy for port forwarding.&lt;/p&gt;

&lt;p&gt;Replace example.com with your domain and backend-server with the address of your backend server. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;proxy_pass&lt;/strong&gt; is the directive that tells Nginx where to forward the request. With the configuration above, a request to example.com/serviceA/ is proxied to the backend named backend-server on port 8080, and example.com/serviceB/ likewise goes to port 8081.&lt;/p&gt;
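&lt;p&gt;The prefix handling deserves a note: because the &lt;code&gt;proxy_pass&lt;/code&gt; URL ends with a URI ("/"), Nginx replaces the matched location prefix with that URI before forwarding. An illustrative sketch of that rule (not Nginx itself, just the mapping logic):&lt;/p&gt;

```python
# Simplified model of the location-prefix rewrite performed by proxy_pass
# when the target URL includes a trailing URI.
ROUTES = {
    "/serviceA/": "http://backend-server:8080/",
    "/serviceB/": "http://backend-server:8081/",
}

def route(path):
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            # The matched prefix is replaced by the proxy_pass URI.
            return upstream + path[len(prefix):]
    return None  # no matching location block

print(route("/serviceA/users"))  # http://backend-server:8080/users
```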

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Test and Reload Nginx&lt;br&gt;
Test the configuration for syntax errors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nginx -t
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If no errors are reported, reload Nginx to apply the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo service nginx reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Test that the reverse proxy is working&lt;/p&gt;

&lt;p&gt;In case you don't have any backend services currently running on the server, you can test it with the following approach.&lt;/p&gt;

&lt;p&gt;Create a file named index.html with some basic content. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;
    &amp;lt;title&amp;gt;Port Forwarding Test&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
    &amp;lt;h1&amp;gt;Hello, this is a test page!&amp;lt;/h1&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use Python to Serve the HTML File on Port 8080:&lt;/p&gt;

&lt;p&gt;Open a terminal and navigate to the directory containing index.html. Run the following Python command to start a simple HTTP server on port 8080:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /path/to/test/
python3 -m http.server 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
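&lt;p&gt;Before putting Nginx in front, you can sanity-check the backend directly. A small sketch using only the standard library (it assumes port 8080 is free and mimics the &lt;code&gt;python3 -m http.server 8080&lt;/code&gt; command above):&lt;/p&gt;

```python
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the current directory on port 8080 in a background thread.
server = HTTPServer(("127.0.0.1", 8080), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the page straight from the backend; no proxy is involved yet.
status = urllib.request.urlopen("http://127.0.0.1:8080/").status
print(status)  # 200 means the backend is serving
server.shutdown()
```

&lt;p&gt;Once this returns 200, any failure you see through port 80 points at the Nginx configuration rather than the backend.&lt;/p&gt;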



&lt;p&gt;Update the Nginx configuration as follows, applying the same edit, test, and reload steps described above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access Your Application:&lt;br&gt;
Now, you should be able to access your application on port 80, and Nginx will forward the traffic to port 8080.&lt;/p&gt;

&lt;p&gt;Visit &lt;a href="http://localhost"&gt;http://localhost&lt;/a&gt; (or Server IP) in your web browser, and it should display the content served by the application running on port 8080.&lt;/p&gt;

&lt;p&gt;Adjust the Nginx configuration as needed based on your specific requirements and the structure of your application.&lt;/p&gt;

&lt;p&gt;Thank you for reading our guide! We hope it proves valuable for your web server setup. If you have any questions or suggestions, feel free to reach out on &lt;a href="mailto:kumarshubham1807@gmail.com"&gt;kumarshubham1807@gmail.com&lt;/a&gt;. Happy coding!&lt;/p&gt;

</description>
      <category>nginx</category>
      <category>reverseproxy</category>
      <category>portforwarding</category>
      <category>webserver</category>
    </item>
    <item>
      <title>My CKAD Exam Journey: Tips for Success</title>
      <dc:creator>Shubham Kumar</dc:creator>
      <pubDate>Tue, 17 Oct 2023 06:10:51 +0000</pubDate>
      <link>https://forem.com/shubhamkcloud/my-ckad-exam-journey-tips-for-success-2n4f</link>
      <guid>https://forem.com/shubhamkcloud/my-ckad-exam-journey-tips-for-success-2n4f</guid>
      <description>&lt;p&gt;If you've found your way to this page, congratulations because it means you're already one step ahead on your Kubernetes journey and you are planning for certification. While many are merely considering joining the Kubernetes community, your presence here reflects a genuine interest and the ambition to excel in this dynamic field. Boost your confidence!!!!!&lt;/p&gt;

&lt;p&gt;Today, I'm thrilled to share that I've successfully cleared my CKAD (Certified Kubernetes Application Developer) exam. This certification marks a significant milestone in my Kubernetes adventure, and I'm excited to pass on the knowledge and insights I've gained along the way. This blog is designed to serve as a guiding light, helping you prepare for the CKAD exam and increase your confidence in achieving this remarkable certification. So, let's dive in and explore the path to CKAD success together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 1:&lt;/strong&gt; &lt;strong&gt;Understanding the CKAD Exam&lt;/strong&gt;&lt;br&gt;
The CKAD (Certified Kubernetes Application Developer) exam stands apart from the typical certification tests you may have encountered, such as those from AWS, Azure, or other cloud providers. Unlike the multiple-choice questions or a small handful of subjective queries that populate many exams, the CKAD is an entirely subjective exam. In essence, it challenges your practical Kubernetes skills and problem-solving abilities rather than mere rote memorization.&lt;/p&gt;

&lt;p&gt;With the CKAD, you won't find any multiple-choice questions. Instead, you'll be tasked with real-world scenarios where you must craft Kubernetes manifest files, troubleshoot issues, and perform hands-on tasks. What sets this exam apart is that you have access to the Kubernetes official documentation during the test. This open-book approach makes the CKAD not just challenging but also engaging. The questions are designed to be clear and to the point. If you possess the skill to navigate and search the Kubernetes documentation swiftly, you're well on your way to mastering this exam.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here are some essential details to note about the CKAD:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Duration&lt;/strong&gt;: You're given a total of 2 hours to complete the exam.&lt;br&gt;
&lt;strong&gt;Number of Questions&lt;/strong&gt;: There will be 15 to 20 questions.&lt;br&gt;
&lt;strong&gt;Passing&lt;/strong&gt; &lt;strong&gt;Score&lt;/strong&gt;: To successfully clear the CKAD, you generally need to score around 67% to 68%.&lt;br&gt;
&lt;strong&gt;Weightage&lt;/strong&gt;: Each question in the exam carries a different weightage, and this information is clearly indicated for you. Understanding the weightage of each question allows you to allocate your time efficiently and prioritize tasks accordingly.&lt;br&gt;
The CKAD is a unique certification experience, and with the right approach and preparation, you can confidently conquer it. Now, let's dive deeper into how you can prepare effectively for this exam.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 2:&lt;/strong&gt; &lt;strong&gt;Preparing for the Exam&lt;/strong&gt;&lt;br&gt;
To excel in the CKAD (Certified Kubernetes Application Developer) exam, it's essential to build a strong foundation in the fundamental concepts of Kubernetes. The beauty of this exam lies in its simplicity – the questions are designed to test your understanding of core concepts rather than trick you with complexity. Before you venture into the exam room, ensure that you have a crystal-clear understanding of various critical topics, including:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pod Designing&lt;/li&gt;
&lt;li&gt;Resource Allocation&lt;/li&gt;
&lt;li&gt;Deployment&lt;/li&gt;
&lt;li&gt;Service&lt;/li&gt;
&lt;li&gt;ConfigMap and Secrets&lt;/li&gt;
&lt;li&gt;Network Policy&lt;/li&gt;
&lt;li&gt;Ingress&lt;/li&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;Volume Mounting&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It's worth noting that you can expect at least one question related to each of these topics during the exam. Mastering these fundamental concepts will put you in a strong position to tackle the CKAD exam questions with confidence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here's a recommended strategy for preparing:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Set Up Minikube:&lt;/strong&gt; Begin by setting up Minikube on your local system. Minikube is a lightweight Kubernetes cluster that allows you to create a local, isolated environment for Kubernetes testing and experimentation. It's an excellent tool for hands-on learning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Hands-On Practice:&lt;/strong&gt; Once Minikube is up and running, delve into hands-on practice. Take each of the core topics mentioned earlier, such as pod designing, deployment, services, config maps, secrets, network policies, ingress, Docker, and volume mounting, and work through practical exercises.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Simulate Exam Conditions:&lt;/strong&gt; To truly prepare for the CKAD exam, it's essential to simulate the exam conditions. Select one topic at a time, look for sample questions related to that topic, and set a specific time limit for yourself, mirroring the real exam conditions. This will help you practice time management and the efficient execution of tasks.&lt;/p&gt;

&lt;p&gt;This hands-on, focused approach not only solidifies your understanding but also sharpens your ability to work under the constraints of time and pressure. It's a highly effective way to prepare for the CKAD exam and aligns with the exam's practical nature. Remember, practice is the key to success in this exam, and this approach ensures that you're not just well-versed in theory but can confidently apply your knowledge in real-world scenarios.&lt;/p&gt;

&lt;p&gt;With this practical strategy, you'll be well-prepared to tackle the CKAD exam's challenges and emerge with the confidence to excel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 3:&lt;/strong&gt; &lt;strong&gt;Tips for Exam Day&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Preparing for the CKAD (Certified Kubernetes Application Developer) exam is not just about mastering Kubernetes concepts; it's also about honing specific skills and techniques that can help you perform your best on exam day. Here are some valuable tips to ensure success:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Efficient VIM Editor Usage:&lt;/strong&gt; Familiarity with the VIM text editor is a tremendous asset for the CKAD. Since you'll be writing manifest files, practicing writing them in VIM can significantly enhance your speed and accuracy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Linux Basics:&lt;/strong&gt; A solid grasp of Linux fundamentals is crucial. Understanding common Linux commands and concepts will make your interactions with the Kubernetes cluster smoother.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Utilize Environment Variables:&lt;/strong&gt; Time is of the essence during the CKAD exam. To save precious seconds, create environment variables in your terminal for frequently used commands. This way, you can avoid typing the same commands repeatedly, which can be a real time-saver. Here's a practical example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;alias k=kubectl
export do="--dry-run=client -o yaml
export now="--force --grace-period 0"

#How to use it:
k creat deploy ckad --image=nginx $do &amp;gt; deploy.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Leverage Imperative Commands:&lt;/strong&gt; Imperative commands are your friends. They allow you to generate manifest files quickly, edit them, and deploy them in a few simple steps. Practice using kubectl imperative commands extensively. Remember, efficiency is key.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Don't Get Stuck:&lt;/strong&gt; If you encounter a challenging question, don't let it stall your progress. Flag the question, move on to others, and come back to it later. The CKAD exam is not about achieving a perfect score but rather demonstrating your overall competence. Time management is critical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Effective Use of kubectl --help&lt;/strong&gt;: Make sure you understand how to use the kubectl --help command. It can provide valuable insights and options for various kubectl commands, helping you work more efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Last and Final Step Before Taking the Exam&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once you've taken the initiative to book your CKAD (Certified Kubernetes Application Developer) exam, you're well on your way to achieving this valuable certification. However, before you rush into the exam, it's crucial to make the most of your preparation time. Here's what you should consider in the lead-up to your exam:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Timing and Scheduling:&lt;/strong&gt; The CKAD exam booking grants you a one-year window to take the test. This generous timeframe provides flexibility and allows you to choose a date that best suits your preparation. Avoid rushing into the exam; take the time you need to build confidence and proficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Simulator Exams:&lt;/strong&gt; Along with your exam booking, you'll receive two simulator exams. These simulator exams are slightly more challenging than the real CKAD exam, serving as an excellent preparation tool. They offer a taste of real exam conditions and a chance to refine your skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Practice and Repetition:&lt;/strong&gt; Dedicate significant time to these simulator exams. Solve them repeatedly until you feel confident in your ability to tackle the questions. Remember, practice is the key to success in the CKAD exam. The simulator exams provide a testing ground for you to refine your knowledge and skills.&lt;/p&gt;

&lt;p&gt;By the time you feel comfortable completing the simulator exams, you'll be in an excellent position to clear the actual CKAD exam. With your understanding of core Kubernetes concepts, hands-on experience, and a taste of real exam conditions, you can confidently book your CKAD exam.&lt;/p&gt;

&lt;p&gt;Wishing you the best of luck on your CKAD journey! You're on the path to becoming a Certified Kubernetes Application Developer, and with your dedication and preparation, success is within reach.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>ckad</category>
      <category>cloud</category>
      <category>certification</category>
    </item>
    <item>
      <title>AWS DMS - Database Migration to AWS</title>
      <dc:creator>Shubham Kumar</dc:creator>
      <pubDate>Thu, 24 Aug 2023 17:46:59 +0000</pubDate>
      <link>https://forem.com/shubhamkcloud/aws-dms-database-migration-to-aws-4k5o</link>
      <guid>https://forem.com/shubhamkcloud/aws-dms-database-migration-to-aws-4k5o</guid>
      <description>&lt;p&gt;Moving databases to the cloud can be confusing, but AWS Data Migration Service (DMS) is here to help. It's like a guide that makes moving your data to Amazon Web Services (AWS) simple. Whether you're new to this or already know a bit, AWS DMS is here to make things easier. In this guide, we'll show you what AWS DMS is, what it does, and why it's important for modern IT. Come with us to learn about AWS DMS and see how it can help you smoothly shift to the cloud. Let's start this exciting journey into AWS DMS together!We are going to explore AWS DMS. We will se how AWS Data migration service work in real project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsx8fqz98zofaxfdq1k0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsx8fqz98zofaxfdq1k0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown in the diagram, we are going to explore AWS DMS by migrating a database hosted on EC2 to Amazon RDS.&lt;/p&gt;

&lt;h2&gt;
  
  
  But what is AWS DMS?
&lt;/h2&gt;

&lt;p&gt;AWS Database Migration Service (DMS) is a specialized tool by Amazon Web Services that simplifies moving your databases from one place to another, like from your own computers to the powerful cloud servers of AWS. It's like a skilled mover for your data, ensuring it gets to its new home safely and quickly. Whether you're shifting lots of data or just a bit, DMS takes care of the tricky parts, making database migration a breeze.&lt;/p&gt;

&lt;p&gt;Now let's see the implementation step by step&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Configure one MySQL database on EC2 which will act as a source database in our case.&lt;/p&gt;

&lt;p&gt;1.1. Create one EC2 instance and configure a MySQL database on it. Note the DB endpoint, username, and password; we will need them in a later step to migrate data from this server to AWS RDS, which is the agenda of this session.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Create a managed database using AWS Relational Database.&lt;/p&gt;

&lt;p&gt;2.1. Go to the AWS RDS console.&lt;br&gt;
2.2. Choose Create Database.&lt;br&gt;
2.3. For database creation method: Select &lt;code&gt;Standard Create&lt;/code&gt;.&lt;br&gt;
2.4. For Engine Type: &lt;code&gt;Select MySQL&lt;/code&gt;&lt;br&gt;
2.5. For Edition: &lt;code&gt;Select MySQL Community&lt;/code&gt;&lt;br&gt;
2.6. For Version: &lt;code&gt;Select Latest Version&lt;/code&gt;&lt;br&gt;
2.7. For Templates: &lt;code&gt;Select Free Tier&lt;/code&gt;&lt;br&gt;
2.8. Under setting give DB instance identifier name-&lt;code&gt;TargetDB&lt;/code&gt;&lt;br&gt;
2.9. Give Master Username - &lt;code&gt;admin&lt;/code&gt;&lt;br&gt;
2.10. Give Master Password - &lt;code&gt;shubhamkcloud&lt;/code&gt;&lt;br&gt;
2.11. Confirm Password&lt;br&gt;
2.12. Under Instance Configuration - For DB instance class select &lt;code&gt;Burstable classes&lt;/code&gt; and then select &lt;code&gt;db.t3.micro&lt;/code&gt;&lt;br&gt;
2.13. Leave the storage parameter as it is.&lt;br&gt;
2.14. Under Connectivity: Select VPC and Subnet Group.&lt;br&gt;
2.15. For Public Access: &lt;code&gt;Select No&lt;/code&gt;&lt;br&gt;
2.16. Select Security Group. Make sure to allow port 3306.&lt;br&gt;
2.17. For Database Authentication: Select &lt;code&gt;Password authentication&lt;/code&gt;.&lt;br&gt;
2.18. Finally click on &lt;code&gt;Create Database&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It will take some time, but the DB will eventually be created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Create Replication Instance&lt;/p&gt;

&lt;p&gt;The replication instance establishes the connection between the source and target databases and transfers the data.&lt;/p&gt;

&lt;p&gt;3.1.  Go to the &lt;code&gt;Database Migration Service&lt;/code&gt; console.&lt;br&gt;
3.2.  Click on the &lt;code&gt;Replication instance&lt;/code&gt; on the left side panel.&lt;br&gt;
3.3.  Click on &lt;code&gt;Create Replication instance&lt;/code&gt;&lt;br&gt;
3.4.  Enter instance name - &lt;code&gt;replication-instance&lt;/code&gt;&lt;br&gt;
3.5.  For instance class: select &lt;code&gt;dms.t3.medium&lt;/code&gt;&lt;br&gt;
3.6.  For Engine version: select latest version. Currently it is 3.5.1&lt;br&gt;
3.7.  For High Availability: Select &lt;code&gt;Dev or test workload&lt;/code&gt;&lt;br&gt;
3.8.  For Allocation storage: enter &lt;code&gt;50&lt;/code&gt;&lt;br&gt;
3.9.  For network type - select &lt;code&gt;IPV4&lt;/code&gt;&lt;br&gt;
3.10. Select VPC and Subnet&lt;br&gt;
3.11. Select the checkbox for &lt;code&gt;public accessible&lt;/code&gt;&lt;br&gt;
3.12. Expand Advance setting and select Availability zone and Security group.&lt;br&gt;
3.13. Choose &lt;code&gt;Create replication instance&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This will again take some time, but after 5-7 minutes the replication instance will be created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Configure Source Database&lt;/p&gt;

&lt;p&gt;Use continuous replication of changes (also known as Change Data Capture (CDC)) to ensure minimum downtime.&lt;/p&gt;

&lt;p&gt;4.1. Get the Source DB endpoint, username and password.&lt;br&gt;
4.2. You need to login to the DB server and grant the following permission to the DB USER. You can use the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -u root -p &amp;lt;password&amp;gt;
GRANT REPLICATION CLIENT ON *.* to '&amp;lt;DB_USER&amp;gt;;
GRANT REPLICATION SLAVE ON *.* to '&amp;lt;DB_USER&amp;gt;;
GRANT SUPER ON *.* to '&amp;lt;DB_USER&amp;gt;;
exit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
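&lt;p&gt;In addition to the user grants, change data capture from a self-managed MySQL source generally requires binary logging in ROW format. A minimal sketch of the relevant my.cnf settings (option names can vary by MySQL version, so verify against the DMS source prerequisites):&lt;/p&gt;

```ini
# /etc/mysql/my.cnf (or a file under conf.d/); restart MySQL after editing
[mysqld]
server-id        = 1
log_bin          = mysql-bin
binlog_format    = ROW
binlog_row_image = FULL
expire_logs_days = 1
```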



&lt;p&gt;4.3. Restart the MySQL service with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo service mysql restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;STEP 5:&lt;/strong&gt; Create Source and Target end points&lt;/p&gt;

&lt;p&gt;5.1. Go to the AWS DMS console.&lt;br&gt;
5.2. Click on &lt;code&gt;Create endpoint&lt;/code&gt; from the left panel.&lt;br&gt;
5.3. Give Endpoint identifier name as &lt;code&gt;source-endpoint&lt;/code&gt;&lt;br&gt;
5.4. Select Source engine as &lt;code&gt;MySQL&lt;/code&gt;&lt;br&gt;
5.5. For Access to endpoint database - Select &lt;code&gt;Provide access information manually&lt;/code&gt;&lt;br&gt;
5.6. For Server  name - Enter Source DB server name(from step 1).&lt;br&gt;
5.7. For port enter: &lt;code&gt;3306&lt;/code&gt;&lt;br&gt;
5.8. Enter Username set on the first step.&lt;br&gt;
5.9. Enter password set on the first step.&lt;br&gt;
5.10. For Secure Socket Layer mode - select &lt;code&gt;None&lt;/code&gt;&lt;br&gt;
5.11. Click on &lt;code&gt;Test endpoint connection&lt;/code&gt; for testing the endpoint.&lt;br&gt;
5.12. Select VPC&lt;br&gt;
5.13. For Replication instance - Select &lt;code&gt;replication-instance&lt;/code&gt; from the dropdown list.&lt;br&gt;
5.14. Choose &lt;code&gt;Create endpoint&lt;/code&gt;&lt;br&gt;
5.15 Follow the same step from 5.1 to 5.14 and create Target endpoint in the same way as you created for source endpoint. Remember we have to give here RDS endpoint, username and password.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Create and Run Replication Task&lt;/p&gt;

&lt;p&gt;As mentioned above, we will perform this migration using the continuous data replication approach, although it can be performed in several other ways.&lt;/p&gt;

&lt;p&gt;6.1. Go to the AWS DMS console.&lt;br&gt;
6.2. Navigate to &lt;code&gt;Database Migration Tasks&lt;/code&gt; on the left side.&lt;br&gt;
6.3. Click &lt;code&gt;Create Task&lt;/code&gt;&lt;br&gt;
6.4. Provide a task identifier name, e.g., &lt;code&gt;replication-task&lt;/code&gt;&lt;br&gt;
6.5. For "Replication instance," select &lt;code&gt;replication-instance&lt;/code&gt; from the dropdown list.&lt;br&gt;
6.6. For Source database endpoint - Select &lt;code&gt;source-endpoint&lt;/code&gt; from the dropdown list.&lt;br&gt;
6.7. For Target database endpoint - Select &lt;code&gt;target-endpoint&lt;/code&gt; from the dropdown list.&lt;br&gt;
6.8. For Migration Type - &lt;code&gt;Select Migrate existing data and replicate ongoing changes&lt;/code&gt;&lt;br&gt;
6.9. On the Task setting section, For Editing mode - Select &lt;code&gt;Wizard&lt;/code&gt;&lt;br&gt;
6.10. For Target table preparation mode - Select &lt;code&gt;Do nothing&lt;/code&gt;&lt;br&gt;
6.11. For Stop task after full load complete: Select &lt;code&gt;Don't stop&lt;/code&gt;&lt;br&gt;
6.12. For Include LOB columns in replication: &lt;code&gt;Select Limited LOB mode&lt;/code&gt;&lt;br&gt;
6.13. For Maximum LOB size(KB) - Enter &lt;code&gt;32&lt;/code&gt;&lt;br&gt;
6.14. Select the checkbox for - &lt;code&gt;Turn on validation&lt;/code&gt;&lt;br&gt;
6.15. Select the checkbox for - &lt;code&gt;Turn on Cloudwatch logs&lt;/code&gt;&lt;br&gt;
6.16. Leave the other values as default&lt;br&gt;
6.17. For the Table mapping section, select Editing mode as &lt;code&gt;Wizard&lt;/code&gt;&lt;br&gt;
6.18. choose &lt;code&gt;Selection rules&lt;/code&gt; and then choose &lt;code&gt;Add new selection rule&lt;/code&gt;&lt;br&gt;
6.19. For Schema - Select&lt;code&gt;Enter a schema&lt;/code&gt; from the dropdown list&lt;br&gt;
6.20. For Source name - enter DB name.&lt;br&gt;
6.21. Under the Migration task startup configuration, for Start migration task - Select &lt;code&gt;Automatically on create&lt;/code&gt;&lt;br&gt;
6.22. Choose &lt;code&gt;Create task&lt;/code&gt;. This will take few minutes to complete. Wait for the status to change to &lt;code&gt;Load complete, replication ongoing&lt;/code&gt;&lt;br&gt;
6.23. We can check the table statistics on the &lt;code&gt;Table statistics&lt;/code&gt; tab inside your replication task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt; Validate&lt;/p&gt;

&lt;p&gt;With all these steps completed, you can now verify the data in your RDS instance.&lt;/p&gt;
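&lt;p&gt;For example (the endpoint, database, and table names below are placeholders), you can connect to the RDS instance and compare row counts with the source:&lt;/p&gt;

```sql
-- Run from a machine that can reach the RDS endpoint, e.g.:
--   mysql -h <rds-endpoint> -u admin -p
SHOW DATABASES;
SELECT COUNT(*) FROM <your_db>.<your_table>;
```

&lt;p&gt;If you enabled validation on the task (step 6.14), the &lt;code&gt;Table statistics&lt;/code&gt; tab will also report validation state per table.&lt;/p&gt;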

&lt;p&gt;Conclusion: Congratulations on successfully migrating your database using AWS Database Migration Service (DMS). You've harnessed the power of seamless data movement to AWS RDS.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dms</category>
      <category>datamigrationtords</category>
      <category>rdscreation</category>
    </item>
    <item>
      <title>Mastering Prompt Engineering: A Journey into AI-Powered Conversations</title>
      <dc:creator>Shubham Kumar</dc:creator>
      <pubDate>Tue, 22 Aug 2023 09:21:29 +0000</pubDate>
      <link>https://forem.com/shubhamkcloud/mastering-prompt-engineering-a-journey-into-ai-powered-conversations-3897</link>
      <guid>https://forem.com/shubhamkcloud/mastering-prompt-engineering-a-journey-into-ai-powered-conversations-3897</guid>
      <description>&lt;h2&gt;
  
  
  Why Prompt Engineering?
&lt;/h2&gt;

&lt;p&gt;In a world where artificial intelligence (AI) is becoming an integral part of our lives, understanding the art of prompt engineering can be your key to unlocking AI's true potential. Let's embark on a journey to explore how crafting the right prompts can empower you to communicate effectively with AI, enhancing the way you interact with technology.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Prompt Engineering?
&lt;/h2&gt;

&lt;p&gt;Imagine explaining prompt engineering using a cooking analogy. Just like how we need the right ingredients to cook a specific dish, we need to provide precise information to artificial intelligence (AI) for it to work effectively.&lt;/p&gt;

&lt;p&gt;Think about making rice. If you want to cook rice but you only have wheat, you won't get the desired result. Similarly, as we step into the era of Artificial Intelligence and machine learning, we need to prepare our input carefully.&lt;/p&gt;

&lt;p&gt;When you want AI to do something, you're like the chef in the kitchen. If you give clear and exact instructions, AI will understand what you want and provide the right output, just like a well-cooked meal. This process is called prompt engineering.&lt;/p&gt;

&lt;p&gt;In essence, prompt engineering is like giving smart directions to a computer friend. It involves using the right words and instructions to help AI comprehend our needs. By being precise in our prompts, we can ensure that AI gives us the outcomes we're looking for – much like a delicious dish prepared from the right ingredients. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Palette of Effective Prompt Formulation
&lt;/h2&gt;

&lt;p&gt;Just as an artist chooses colors with care, crafting effective prompts requires attention to detail. Let's dive into the palette of prompt formulation and examine its core components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clear and Specific Language:&lt;/strong&gt; Guiding AI's Understanding&lt;br&gt;
Using simple language and being specific in your requests ensures that AI grasps your intent accurately. Think of it as speaking directly to AI, clarifying your expectations like you would with a friend.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Contextual Details:&lt;/strong&gt; Providing Depth to Your Prompts&lt;br&gt;
Similar to adding layers to a painting for depth, including relevant context in your prompts helps AI to understand your request more comprehensively. This contextual information empowers AI to generate contextually fitting responses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Avoiding Ambiguity:&lt;/strong&gt; The Path to Precise Responses&lt;br&gt;
Ambiguity in prompts is like blurred strokes in a painting – it leads to confusion. By steering clear of vague instructions, you enable AI to provide clear and accurate responses, just as an artist creates well-defined lines.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Prompt Example:
Suppose your prompt is - "Tell me about the best hiking spots."
You can make your prompt like - "I'm planning a weekend trip. Can you recommend the top hiking destinations in the Rocky Mountains?"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Complete Instructions:&lt;/strong&gt; Crafting a Comprehensive Prompt&lt;br&gt;
A complete prompt equates to a complete picture. Furnishing AI with all necessary details ensures it has the elements it needs to generate accurate and holistic responses, akin to a complete artwork.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structured Queries:&lt;/strong&gt; Organizing for Optimal Outcomes&lt;br&gt;
Just as an architect plans a building with structured blueprints, framing your prompt in a structured manner enhances AI's understanding. Breaking down complex queries into clear steps enables AI to deliver more precise answers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Prompt Example:
Suppose your prompt is "Explain the theory of relativity."
You can make your prompt like - "Break down the theory of relativity into its key components: general relativity and special relativity."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Testing and Iteration:&lt;/strong&gt; Refining for Desired Results&lt;br&gt;
Iterating prompts is like revising a draft until it's perfect. Testing your prompts, observing the results, and refining them based on feedback improves the accuracy of AI responses over time.&lt;/p&gt;
&lt;h2&gt;
  
  
  Prompt Frameworks Unleashed
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Short Prompts:&lt;/strong&gt; Direct and Impactful Communication&lt;br&gt;
Ask your question directly, keeping it short and precise. You can also give a reference and ask AI to answer your question with respect to the example provided. You're telling AI exactly what you need. It's almost like having a chat with a very helpful, super-smart friend who knows all the answers and gives you just the information you're looking for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Two-Way Communication:&lt;/strong&gt; Ensuring Clarity through AI Interrogation&lt;br&gt;
Think of prompt communication like having a friendly Q&amp;amp;A session with AI. Imagine you're the student, and AI is your knowledgeable teacher. When you ask a question, your teacher has to fully understand it to provide a helpful answer.&lt;br&gt;
Here's the twist: what if your teacher didn't get your question? They might give you a puzzling answer, right? Similarly, if your prompt to AI isn't crystal clear, it might give you unexpected or odd responses.&lt;br&gt;
So, let's teach AI to double-check! Just like you'd want your teacher to ask, "Did you mean this?" if they're unsure, prompt the AI to question your query. For example, "AI, did you understand my request clearly?"&lt;br&gt;
This two-way interaction is like teaching AI to be a careful student, making sure it understood before answering. It's a bit like avoiding misunderstandings in a classroom – with clear communication, you'll get the right answers. So, next time you prompt AI, have it quiz your query to ensure it's on the same page before providing the info you're looking for.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Prompt Example: 
- ChatGPT, did you understand my request for an overview of World War II?
- If you're uncertain about my request, please ask for clarification before providing an answer.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Nurturing Expertise with ChatGPT&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Transforming ChatGPT: Your Personal Expert&lt;/strong&gt;&lt;br&gt;
Imagine ChatGPT as a super-smart friend who can become an expert in any topic you ask. It's like having a friend who can instantly learn about whatever you're curious about.&lt;br&gt;
Here's the cool part: just tell ChatGPT the topic, and it will dive into it like a quick learner. It gathers information and becomes an expert. Then, it gives you the best answers, as if it's been studying that topic for ages.&lt;br&gt;
Think of it like having a friend who becomes a champ in different subjects just to help you out. From science to history to art – ChatGPT has your back. So, the next time you're curious, ask ChatGPT to become an expert, and it will guide you like a pro!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Prompt Example:
- ChatGPT, become an expert in renewable energy sources.
- Explore various types of renewable energy, their benefits, and their impact on the environment.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Perfecting Your Prompts with ChatGPT&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Refinement Through Critique:&lt;/strong&gt; ChatGPT as Your Prompt Coach&lt;br&gt;
Just as an artist seeks feedback to enhance their work, ChatGPT can critique your prompts for improvement. It offers suggestions to make your prompts clearer, ensuring optimal results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Empowering Your Queries: ChatGPT's Guided Refinement&lt;/strong&gt;&lt;br&gt;
Ever had a friend who helps you refine your ideas? Think of ChatGPT as that helpful friend. It can critique your prompts and help you make them even better!&lt;br&gt;
Imagine ChatGPT as your prompt coach. It can give you tips to make your prompts clearer and better. Like a friend suggesting improvements, ChatGPT helps you create prompts that get the best results. So, when in doubt, let ChatGPT guide you to ask in the right way!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Prompt Example:
- ChatGPT, critique my prompt about climate change.
- Offer suggestions to make my climate change prompt more concise and impactful.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion: Embrace the Power of Prompt Engineering&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In a world where communication with AI is increasingly prevalent, prompt engineering is your gateway to seamless interactions. Just as a skilled chef produces masterpieces with the right ingredients, you too can yield outstanding AI responses by mastering the art of prompt engineering. By understanding how to frame effective prompts, leveraging ChatGPT's expertise, and refining your queries, you're poised to navigate the AI landscape with finesse, transforming everyday interactions into extraordinary outcomes.&lt;/p&gt;

</description>
      <category>promptengineering</category>
      <category>chatgpt</category>
      <category>ai</category>
      <category>openai</category>
    </item>
    <item>
      <title>How to use AWSCLI to download S3 object</title>
      <dc:creator>Shubham Kumar</dc:creator>
      <pubDate>Thu, 18 May 2023 13:54:32 +0000</pubDate>
      <link>https://forem.com/shubhamkcloud/how-to-use-awscli-to-download-s3-object-30fj</link>
      <guid>https://forem.com/shubhamkcloud/how-to-use-awscli-to-download-s3-object-30fj</guid>
      <description>&lt;p&gt;AWS console doesn't allows to download multiple files or folder together. This is a limitation for the AWS console. But we can use awscli to get it. This is a very small doc but good to start with.&lt;/p&gt;

&lt;p&gt;Prerequisite:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Active Account&lt;/li&gt;
&lt;li&gt;AWS CLI&lt;/li&gt;
&lt;li&gt;IAM user with S3 list and get permissions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an IAM user&lt;/li&gt;
&lt;li&gt;Configure the AWS CLI on your local machine&lt;/li&gt;
&lt;li&gt;Use the AWS CLI to download the objects&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Implementation:&lt;br&gt;
&lt;strong&gt;Step 1: IAM user creation&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to the AWS console, go to the IAM service and create one user.&lt;/li&gt;
&lt;li&gt;Make sure to add S3-specific permissions to this user. For testing purposes you can attach the AWS-managed AmazonS3FullAccess policy.&lt;/li&gt;
&lt;li&gt;Once the user is created &lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;       --&amp;gt; click on Users on the IAM page &lt;/li&gt;
&lt;li&gt;       --&amp;gt; click on the user that you just created&lt;/li&gt;
&lt;li&gt;       --&amp;gt; click on Security credentials
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gSUu463x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e9myyj48f9kuv5g004h0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gSUu463x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e9myyj48f9kuv5g004h0.png" alt="Image description" width="800" height="71"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;       --&amp;gt; scroll down and, under the Access keys section, click on
      Create access key.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Plxtvojf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9mo95y3sfdd1u5nytjp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Plxtvojf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9mo95y3sfdd1u5nytjp.png" alt="Image description" width="800" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;       --&amp;gt; Save your access key and secret key in a safe place. You will not be able to retrieve the secret key later if you miss saving it now.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Configure AWSCLI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Please refer to steps 1 and 2 of the doc linked below if you need the configuration steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/shubhamkcloud/automate-application-deployment-into-lightsail-instance-28co"&gt;https://dev.to/shubhamkcloud/automate-application-deployment-into-lightsail-instance-28co&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Use awscli to download it&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We may have different use cases of downloading a single file or a complete folder.&lt;/p&gt;

&lt;p&gt;If you need to download a single file, use the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 cp &amp;lt;s3://bucket-name&amp;gt;/&amp;lt;file_name&amp;gt; &amp;lt;local/path&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to download a complete folder, use the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 sync &amp;lt;s3://bucket-name&amp;gt;/&amp;lt;folder_name&amp;gt; &amp;lt;local/path&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
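&lt;p&gt;If you only need a subset of a folder, aws s3 sync also supports --exclude and --include filters. As a sketch, to download only the .csv files from a folder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Exclude everything, then include only files matching *.csv
aws s3 sync s3://&amp;lt;bucket-name&amp;gt;/&amp;lt;folder_name&amp;gt; &amp;lt;local/path&amp;gt; --exclude "*" --include "*.csv"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;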



&lt;p&gt;Using the commands above, you can download S3 objects from your S3 bucket to your local system.&lt;/p&gt;

&lt;p&gt;Thanks for reading. This is a very small blog, but an important one that can save you time.&lt;/p&gt;

&lt;p&gt;Cheers!&lt;/p&gt;

</description>
      <category>s3folderdownload</category>
      <category>awscli</category>
      <category>aws</category>
      <category>startwithawscli</category>
    </item>
  </channel>
</rss>
