<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Monica Escobar</title>
    <description>The latest articles on Forem by Monica Escobar (@monica_escobar).</description>
    <link>https://forem.com/monica_escobar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1528280%2Fd0247d0e-d70d-41d9-a2e5-5e5ce8801e79.jpg</url>
      <title>Forem: Monica Escobar</title>
      <link>https://forem.com/monica_escobar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/monica_escobar"/>
    <language>en</language>
    <item>
      <title>Bridging the Gap: Solving Common Business Connectivity Challenges with Hybrid DNS Architecture</title>
      <dc:creator>Monica Escobar</dc:creator>
      <pubDate>Sun, 07 Jul 2024 19:32:42 +0000</pubDate>
      <link>https://forem.com/monica_escobar/bridging-the-gap-solving-common-business-connectivity-challenges-with-hybrid-dns-architecture-50kb</link>
      <guid>https://forem.com/monica_escobar/bridging-the-gap-solving-common-business-connectivity-challenges-with-hybrid-dns-architecture-50kb</guid>
      <description>&lt;p&gt;Before we dive in, I’d like to extend my special thanks to Adrian Cantrill for his exceptional training materials. More details at the end of the article.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;em&gt;A4L's Mission&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;Animals 4 Life (A4L) is a fictional but passionate animal charity dedicated to protecting animals around the world. Founded in Australia, A4L has expanded its reach globally, leveraging technology to enhance its conservation efforts. With operations spanning multiple regions, A4L’s commitment to wildlife protection drives its need for a robust and interconnected IT infrastructure. This hybrid DNS architecture plays a crucial role in supporting their mission, ensuring that both their on-premises and cloud environments can work together seamlessly to safeguard wildlife everywhere.&lt;br&gt;
In the dynamic metropolis of A4L, two distinct domains existed: the boundless cloud of AWS and the steadfast on-premises data centers. These domains, though powerful, struggled with communication. AWS-hosted applications needed access to on-premises data and vice versa, and they had to keep some data on premises for regulatory compliance. The inability to resolve domain names across these environments created silos (and if you follow Scrum, this ain’t good), hindering the seamless integration vital for A4L's operations.&lt;/p&gt;

&lt;p&gt;Have a peek at the initial architecture: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff41g96d37636u1fjm1nh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff41g96d37636u1fjm1nh.png" alt="Image description" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Challenges presented by this architecture:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Isolation&lt;/strong&gt;: AWS resources couldn’t locate on-premises resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complexity&lt;/strong&gt;: Manual DNS configurations were error-prone and inefficient.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Inconsistent connectivity led to delays and reliability issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here we can see that when trying to connect from the AWS side to the on-premises side, no connection is established:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsgu37hlo3ymuci87n7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsgu37hlo3ymuci87n7y.png" alt="Image description" width="800" height="133"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the same when trying the other way around:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9w0fitjr4u6iqezxhnw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9w0fitjr4u6iqezxhnw.png" alt="Image description" width="800" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bridge That Came to the Rescue
&lt;/h2&gt;

&lt;p&gt;Enter the hybrid DNS architecture, a powerful solution designed to unite these disparate worlds. Central to this architecture were AWS Route 53 Resolver and Direct Connect (I used VPC peering, as Direct Connect is hard to set up and not worthwhile for a fictional company), forming the bridge for seamless communication.&lt;/p&gt;
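&lt;p&gt;To make the forwarding logic concrete, here is a tiny stdlib-only Python sketch of the decision a hybrid DNS setup encodes: which DNS servers should receive a given query. The zone names and IPs below are placeholders for illustration, not values from the lab.&lt;/p&gt;

```python
# A stdlib-only sketch of the conditional-forwarding decision described
# above. Zone names and server IPs are placeholders, not values from the lab.
FORWARDING_RULES = {
    # queries for the on-premises zone go to the on-prem DNS servers
    "corp.animals4life.org": ["192.168.10.2"],
    # queries for the AWS private zone go to the Route 53 inbound endpoint
    "aws.animals4life.org": ["10.16.0.10"],
}

def resolve_target(hostname: str) -> list[str]:
    """Return the DNS servers that should receive the query for hostname."""
    for zone, servers in FORWARDING_RULES.items():
        if hostname == zone or hostname.endswith("." + zone):
            return servers
    return ["public-dns"]  # anything else falls through to public resolution
```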

&lt;h2&gt;
  
  
  The Architecture Unfolds:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Establish Direct Connect&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahi01v3ssg5h3q4c0mu5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahi01v3ssg5h3q4c0mu5.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Direct Connect&lt;/strong&gt; provides a dedicated, high-speed network link between AWS and the on-premises data center, ensuring reliable and secure communication. Although expensive and operationally demanding, it seemed the perfect way to join these two worlds, as it provides the constant, consistent connection whose absence was one of our key issues. Going back to our initial key challenges, Direct Connect resolved two of them, leaving us to focus only on the DNS configuration: &lt;br&gt;
  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Isolation&lt;/strong&gt;: AWS resources couldn’t locate on-premises resources. - Fixed &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complexity&lt;/strong&gt;: Manual DNS configurations were error-prone and inefficient.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Inconsistent connectivity led to delays and reliability issues. - Fixed&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;However, after doing this, we can still see there is no DNS resolution: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhy40im8ljwamfjdj36bf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhy40im8ljwamfjdj36bf.png" alt="Image description" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Configure AWS Side&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Route 53 Private Hosted Zone&lt;/strong&gt;: Managed internal DNS records within AWS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Route 53 Resolver Endpoints&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Inbound Endpoints&lt;/strong&gt;: Inbound endpoints are crucial because they allow on-premises DNS servers to send DNS queries to AWS. This means that resources within the on-premises environment can resolve domain names for AWS-hosted services. For example, our on-premises application will need to access an AWS-hosted database or web service. Without inbound endpoints, these queries would fail, isolating the on-premises environment from the cloud.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9aegwvm8i6fsqc6xoea7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9aegwvm8i6fsqc6xoea7.png" alt="Image description" width="800" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Outbound Endpoints&lt;/strong&gt;: These serve the opposite purpose. They enable AWS resources to query the on-premises DNS servers. This is essential for cloud-hosted applications that need to interact with on-premises services. For instance, our web application running on an EC2 instance will require data from an on-premises database. Outbound endpoints ensure these DNS queries can be resolved, allowing seamless communication between AWS and on-premises resources.&lt;/li&gt;
&lt;/ul&gt;
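&lt;p&gt;For illustration, this is roughly the request you would hand to Route 53 Resolver when creating the FORWARD rule that sits behind an outbound endpoint (for example via &lt;code&gt;create_resolver_rule&lt;/code&gt; in boto3). This sketch only builds the parameters and makes no API call; the IPs and endpoint ID are placeholders.&lt;/p&gt;

```python
# Sketch: build the parameters for a Route 53 Resolver FORWARD rule, the
# piece that tells the outbound endpoint where to send on-premises queries.
# Mirrors the CreateResolverRule API shape; all IDs and IPs are placeholders
# and no API call is made.
def build_forward_rule(domain, target_ips, outbound_endpoint_id):
    return {
        "CreatorRequestId": "a4l-hybrid-dns-1",  # idempotency token
        "RuleType": "FORWARD",                   # forward matching queries
        "DomainName": domain,                    # zone to forward
        "TargetIps": [{"Ip": ip, "Port": 53} for ip in target_ips],
        "ResolverEndpointId": outbound_endpoint_id,
    }

rule = build_forward_rule(
    "corp.animals4life.org",
    ["192.168.10.2", "192.168.10.3"],  # on-premises DNS servers
    "rslvr-out-example",
)
```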

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Integrate On-Premises Side&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As a result of the previous step, our &lt;strong&gt;application servers&lt;/strong&gt; can now directly query AWS services, bridging the gap. In this architecture, both inbound and outbound endpoints work in tandem to create a bidirectional flow of DNS queries. This unified approach ensures that applications and services, regardless of their location, can resolve each other’s domain names, facilitating seamless integration and operation. This hybrid DNS setup is a cornerstone in achieving a truly interconnected and efficient IT ecosystem. Going back to our initial key challenges, the last remaining one was removed by the resolver endpoints.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Isolation&lt;/strong&gt;: AWS resources couldn’t locate on-premises resources. - Fixed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complexity&lt;/strong&gt;: Manual DNS configurations were error-prone and inefficient. - Fixed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Inconsistent connectivity led to delays and reliability issues. - Fixed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Leaving us with this final architecture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhq1seymnpsztsrkhl4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhq1seymnpsztsrkhl4c.png" alt="Image description" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Unified Operations: Two Kingdoms Working as One
&lt;/h2&gt;

&lt;p&gt;With the hybrid DNS architecture in place, A4L saw transformative changes, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Seamless Integration&lt;/strong&gt;: AWS and on-premises resources communicated effortlessly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Centralised Management&lt;/strong&gt;: Streamlined DNS management reduced errors and administrative burdens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Performance&lt;/strong&gt;: Direct Connect ensured low-latency, high-reliability connections.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability and Flexibility&lt;/strong&gt;: Supported dynamic and scalable DNS query handling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security and Compliance&lt;/strong&gt;: Ensured sensitive data remained protected through private connections.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Happy Ending: Who Benefits?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Wildlife Protectors&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A4L’s web servers in AWS could now seamlessly access on-premises databases, ensuring real-time updates on wildlife conservation efforts and improving the chances of saving all animals, yay!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Disaster Recovery Champions&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensured consistent DNS resolution across environments, enabling robust failover strategies.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data Residency Stewards&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintained compliance by keeping sensitive data on-premises while leveraging cloud resources.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Innovators with Complex Networks&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Facilitated seamless communication across microservices and hybrid deployments.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The story of A4L's hybrid DNS architecture is one of bridging gaps and fostering unity between cloud and on-premises environments. With AWS Route 53 Resolver and Direct Connect as pivotal characters, the tale of seamless integration and operational excellence unfolds, promising a future where cloud and local resources work harmoniously together. This architecture not only solves the problem of fragmented networks but also paves the way for a new era of interconnected possibilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Special thanks to Adrian Cantrill&lt;/strong&gt; for his exceptional training materials. Adrian is a top-notch learning provider who excels in teaching not just theory but also hands-on application. His courses are packed with practical projects that allow learners to apply what they've learned and navigate real-world business scenarios.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>data</category>
      <category>linux</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Deploy an Amazon Lex Chatbot to your own website.</title>
      <dc:creator>Monica Escobar</dc:creator>
      <pubDate>Fri, 14 Jun 2024 15:33:57 +0000</pubDate>
      <link>https://forem.com/monica_escobar/deploy-an-amazon-lex-chatbot-to-your-own-website-e18</link>
      <guid>https://forem.com/monica_escobar/deploy-an-amazon-lex-chatbot-to-your-own-website-e18</guid>
      <description>&lt;h2&gt;
  
  
  Why a chatbot?
&lt;/h2&gt;

&lt;p&gt;For professionals, integrating a chatbot into your website is more than just a cool tech feature; it’s a way to showcase your commitment to innovation and user experience. It reflects a forward-thinking approach and shows that you value your visitors’ time and needs. By offering instant, personalised interactions, a chatbot makes your website more engaging and user-friendly.&lt;/p&gt;

&lt;p&gt;Incorporating a chatbot into your website is a smart move that demonstrates a proactive approach to communication and technology. It’s about making connections easier, information more accessible, and experiences more enjoyable for your visitors. In essence, it’s a reflection of who you are as a professional – someone who values innovation, accessibility, and excellence in every interaction.&lt;/p&gt;

&lt;p&gt;These are just some of the reasons I chose to build and deploy my own chatbot, and I ended up liking it so much that I wanted to share the steps with everyone else in case you found it useful or beneficial for any of your projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stack/resources used:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;-Amazon Lex &lt;br&gt;
-CloudFront &lt;br&gt;
-My own website &lt;/p&gt;

&lt;p&gt;Below are the steps I followed to create the chatbot using AWS Lex: &lt;/p&gt;

&lt;p&gt;When creating a chatbot in AWS Lex, &lt;strong&gt;&lt;em&gt;you have several options:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
        - Descriptive Bot Builder: Automatically generates intents, utterances, and slots based on your use case, but you need to use Bedrock for this, so make sure you check the fees first. Also, you will need to request access if you have never used Bedrock before. &lt;br&gt;
        - Create a Blank Bot: Start from scratch and define your own intents, utterances, and slots. This is the one I chose. &lt;br&gt;
        - Start with a Transcript: Upload a JSON file with the conversation flow. Bear in mind that if you decide to upload a JSON file, you will need to provide at least 1,000 lines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Using the Visual Editor:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
    - Create a blank bot and proceed to the Visual Editor.&lt;br&gt;
    - Define intents, slots, and conversation flows using a visual interface.&lt;br&gt;
    - Add intents, slots, prompts, and responses to script the conversation flow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt; Testing and Building the Chatbot:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- After defining the conversation flow, save the bot.
- Build the bot and test it within the AWS Lex console to ensure it functions correctly.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt; Integrating the Chatbot:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Once the chatbot is built successfully, create an alias for the bot.
- Integrate the chatbot with your website by deploying a stack (you can get it here: https://aws.amazon.com/blogs/machine-learning/deploy-a-web-ui-for-your-chatbot/) that includes CloudFront, web UI artifacts, and authentication using Amazon Cognito (if required; I personally did not include authentication).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Deployment and Configuration:&lt;/strong&gt;&lt;br&gt;
    - Launch the stack with the necessary parameters like Bot ID, Alias ID, and other configurations.&lt;br&gt;
    - Copy the snippet URL provided after the stack creation to integrate the chatbot into your website.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finalising Integration:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Update your website's HTML code with the provided snippet URL to embed the chatbot.
- Upload the updated HTML file to your hosting platform (e.g., S3 bucket).
- Invalidate the CloudFront cache to reflect the changes on your website.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
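&lt;p&gt;The cache invalidation in the last step can be sketched as follows. This builds the invalidation batch in the shape CloudFront's CreateInvalidation API expects (e.g. &lt;code&gt;create_invalidation&lt;/code&gt; in boto3); the path is a placeholder and no API call is made.&lt;/p&gt;

```python
import time

# Sketch: build the invalidation batch for the "invalidate the CloudFront
# cache" step. The shape mirrors CloudFront's CreateInvalidation API; the
# paths are placeholders and no API call is made.
def build_invalidation(paths):
    return {
        "Paths": {"Quantity": len(paths), "Items": paths},
        "CallerReference": str(time.time()),  # unique token per request
    }

batch = build_invalidation(["/index.html"])  # the page updated with the snippet
```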

&lt;p&gt;&lt;strong&gt;Testing the Integrated Chatbot:&lt;/strong&gt;&lt;br&gt;
    - Access your website and interact with the chatbot using text or voice commands.&lt;br&gt;
    - Validate that the chatbot functions correctly and responds to user inputs as expected.&lt;/p&gt;
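&lt;p&gt;If you want to exercise the bot programmatically rather than through the web UI, the request shape mirrors the Lex V2 runtime RecognizeText operation. A stdlib-only sketch that just assembles the parameters (the bot/alias IDs and the locale are placeholders):&lt;/p&gt;

```python
import uuid

# Sketch: assemble a text request for the bot, mirroring the Lex V2 runtime
# RecognizeText operation. The bot/alias IDs and the locale are placeholders;
# nothing is sent anywhere.
def build_recognize_text(bot_id, alias_id, text):
    return {
        "botId": bot_id,
        "botAliasId": alias_id,
        "localeId": "en_GB",             # whichever locale the bot was built in
        "sessionId": str(uuid.uuid4()),  # one session per visitor
        "text": text,
    }

req = build_recognize_text("BOTIDXXXX", "ALIASXXXX", "Hi there!")
```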

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Future enhancements:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Looking ahead, my journey with AI and automation is far from over. I have exciting plans to further enhance the capabilities of the chatbot and, in turn, the overall user experience of my portfolio.&lt;br&gt;
One of the key areas I’m focusing on is harnessing user behaviour insights. The chatbot has the potential to track common questions and interactions, revealing what visitors find most interesting or important; this constant feedback can help me enhance the user experience. &lt;/p&gt;

&lt;p&gt;If you got this far, thank you so much and happy building! &lt;/p&gt;

</description>
      <category>aws</category>
      <category>automation</category>
      <category>ai</category>
      <category>lex</category>
    </item>
    <item>
      <title>Using AI to improve security and learning in your AWS environment.</title>
      <dc:creator>Monica Escobar</dc:creator>
      <pubDate>Mon, 10 Jun 2024 13:57:00 +0000</pubDate>
      <link>https://forem.com/monica_escobar/using-ai-to-improve-security-and-learning-in-your-aws-environment-p90</link>
      <guid>https://forem.com/monica_escobar/using-ai-to-improve-security-and-learning-in-your-aws-environment-p90</guid>
      <description>&lt;h2&gt;
  
  
  Why We're Building a Chatbot: Empowering Our Platform Team
&lt;/h2&gt;

&lt;p&gt;Our Platform engineers are the backbone of our secure development process. However, they often face hurdles that slow them down and hinder their ability to deliver top-notch work. On top of this, there are different levels of expertise within the team, and consulting all the available documentation can be a daunting and time-consuming task. This chatbot could potentially reduce the coaching workload that senior members carry for the junior members of the team. &lt;/p&gt;

&lt;p&gt;To address these challenges and empower the teams, I chose to build a chatbot specifically designed to assist them.&lt;/p&gt;

&lt;p&gt;Our team’s stack is composed of Airflow and Spark mainly, as well as the infrastructure hosted in AWS. &lt;/p&gt;

&lt;p&gt;Here's a closer look at the pain points we're aiming to solve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Streamlining Security Testing: Finding potential security vulnerabilities in code can be a time-consuming task. The chatbot will help engineers identify issues quickly and efficiently: it can trigger automated security scans through Spark or Airflow jobs, analyze scan results from S3 buckets, and present findings to engineers in a user-friendly way. This has become even more prominent and needed with GDPR regulations. It can also become a good tool for detecting PII data, specifically tailored to our environment, if for whatever reason Amazon Macie is not on the cards. &lt;/li&gt;
&lt;li&gt;Enhancing Security Visibility: The team needs a clear picture of potential software vulnerabilities across all applications, such as out-of-support versions. The chatbot can leverage data from various sources, like security reports stored in S3 buckets, and use this information to identify trends and potential vulnerabilities across the applications running on our EC2 instances within VPCs.&lt;/li&gt;
&lt;li&gt;Heavy teaching workload: Having an easily accessible but secure chatbot for consulting all the platform’s documentation, fully tailored to our environment, will positively impact everyone in the team. Junior members will become more independent and confident in what they do, and senior members will have more time to spend on their own tickets. &lt;/li&gt;
&lt;li&gt;Simplifying Security Practices: Developing strong threat models and security policies is crucial, but it can be complex. The chatbot will offer guidance and best practices, making it easier for engineers to implement these essential safeguards.&lt;/li&gt;
&lt;li&gt;Boosting Infrastructure Security: Infrastructure as Code (IaC) plays a vital role in our development process. The chatbot can leverage tools like CloudFormation or Terraform to identify potential security risks within IaC templates before deployment to EC2 instances.&lt;/li&gt;
&lt;li&gt;Enforcing Security Pipelines: Integrating security checks seamlessly into our development pipelines is essential. The chatbot will help enforce these checks by triggering security scans within Airflow pipelines, guaranteeing that security is never an afterthought and ensuring vulnerabilities are identified and addressed before code is deployed to the production instances behind our ELBs.&lt;/li&gt;
&lt;li&gt;Ensuring Quality Control: We are committed to delivering high-quality solutions. The chatbot will provide us with valuable insights and data, enabling us to maintain a high level of control over the development process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By building this chatbot, we're investing in secure and reliable software, as well as promoting a culture of continuous learning and self-development.&lt;/p&gt;

&lt;p&gt;Here's a detailed breakdown of the steps I followed to build our chatbot:&lt;/p&gt;

&lt;p&gt;Step 0: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Use the following CloudFormation template to deploy all the resources. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/Thetechyteacher"&gt;
        Thetechyteacher
      &lt;/a&gt; / &lt;a href="https://github.com/Thetechyteacher/ai-chatbot"&gt;
        ai-chatbot
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;ai-chatbot&lt;/h1&gt;

&lt;/div&gt;

&lt;/div&gt;
&lt;br&gt;
&lt;br&gt;
  &lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/Thetechyteacher/ai-chatbot"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;p&gt;Step 1: Adding Documents to Amazon Simple Storage Service (S3)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What it is: Amazon S3 is a secure, scalable object storage service that acts as the foundation for our chatbot's knowledge base.&lt;/li&gt;
&lt;li&gt;The Why: You'll store essential documents like security best practices, threat model templates, and IaC security guidelines in S3. These documents should be specific to your own production environment, so the answers can be fully tailored to your requirements. I personally chose to feed it Spark and Airflow documentation, as well as general security docs.&lt;/li&gt;
&lt;li&gt;How to do it:

&lt;ol&gt;
&lt;li&gt;Create an S3 bucket: This acts as a virtual folder where you'll store your documents.&lt;/li&gt;
&lt;li&gt;Upload relevant documents: Upload security policies, best practice guides, and any other resources you might need.&lt;/li&gt;
&lt;li&gt;Configure access permissions: Ensure the chatbot has the necessary permissions to access and retrieve information from the S3 bucket.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;
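&lt;p&gt;Step 3 above (access permissions) can be sketched as a bucket policy that lets the IAM role used by the Kendra S3 connector read the documents. The bucket name and role ARN below are hypothetical:&lt;/p&gt;

```python
import json

# Sketch of the access-permissions step: a bucket policy that lets the IAM
# role used by the Kendra S3 connector read the documents. The bucket name
# and role ARN are hypothetical.
def kendra_read_policy(bucket, role_arn):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": role_arn},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",    # ListBucket applies to the bucket
                f"arn:aws:s3:::{bucket}/*",  # GetObject applies to the objects
            ],
        }],
    }

policy = kendra_read_policy("chatbot-docs", "arn:aws:iam::123456789012:role/kendra-s3")
policy_json = json.dumps(policy, indent=2)
```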

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgih1s80n98xsmlmna33v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgih1s80n98xsmlmna33v.png" alt="Image description" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Searching with Amazon Kendra&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What it is: Amazon Kendra is an intelligent search service that allows users to easily find relevant information across various sources like S3 buckets.&lt;/li&gt;
&lt;li&gt;The Why: Kendra will be crucial for your chatbot to efficiently search the vast amount of security information stored in S3.&lt;/li&gt;
&lt;li&gt;How to do it:

&lt;ol&gt;
&lt;li&gt;Create a Kendra index: This index tells Kendra where to look for information, in this case, your S3 bucket containing security documents.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08efojeuc3vbvc86ld3i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08efojeuc3vbvc86ld3i.png" alt="Image description" width="800" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2. Add an s3 connector and link it to the s3 bucket which contains the data.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq9jwqaubhmb2lz3etj2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq9jwqaubhmb2lz3etj2.png" alt="Image description" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;3. Sync now.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3usd7wsbat1b1xgs0m61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3usd7wsbat1b1xgs0m61.png" alt="Image description" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Setting Up Access to Amazon Bedrock&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What it is: Amazon Bedrock is a fully managed service that provides access to foundation models from providers such as Anthropic and Amazon through a single API.&lt;/li&gt;
&lt;li&gt;The Why: The chatbot can leverage Bedrock’s foundation models to guide its users (our engineers) in developing secure IaC configurations and to identify potential security risks in their code.&lt;/li&gt;
&lt;li&gt;How to do it:

&lt;ol&gt;
&lt;li&gt;You will need access to Anthropic Claude and Amazon Titan Express. If you don’t have it granted yet, you’ll need to request it now. &lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;
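&lt;p&gt;Once access is granted, requests to the Titan model are JSON bodies passed to the Bedrock runtime's &lt;code&gt;invoke_model&lt;/code&gt; operation. A sketch of building such a body (parameter values are illustrative, and no call is made):&lt;/p&gt;

```python
import json

# Sketch: build the JSON body the chatbot would pass to the Bedrock runtime's
# invoke_model operation for Amazon Titan Text Express. Parameter values are
# illustrative and no call is made here.
def titan_request(prompt, max_tokens=512):
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,  # cap the response length
            "temperature": 0.2,           # keep answers close to the docs
            "topP": 0.9,
        },
    })

body = titan_request("Summarise our Airflow deployment guidelines.")
```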

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02kg40uybrujomxa0bb2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02kg40uybrujomxa0bb2.png" alt="Image description" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 4: Using SageMaker Studio IDE to Build Your Chatbot&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What it is: SageMaker Studio is a cloud-based integrated development environment (IDE) designed for building and deploying machine learning models.&lt;/li&gt;
&lt;li&gt;The Why: SageMaker Studio provides the tools and resources you need to develop your chatbot's core functionality, including natural language processing (NLP) and dialogue management capabilities.&lt;/li&gt;
&lt;li&gt;How to do it:

&lt;ol&gt;
&lt;li&gt;Choose an appropriate Large Language Model (LLM): LLMs are AI models trained on massive amounts of text data, forming the foundation for your chatbot's ability to understand and respond to user queries. We will be using Claude and Titan Express.&lt;/li&gt;
&lt;li&gt;Train the LLM on your security knowledge base: This involves feeding the LLM with the documents stored in S3, allowing it to learn the specific language and concepts related to DevSecOps security.&lt;/li&gt;
&lt;li&gt;From the terminal, git clone the following repo: git clone &lt;a href="https://github.com/aws-samples/generative-ai-to-build-a-devsecops-chatbot/"&gt;https://github.com/aws-samples/generative-ai-to-build-a-devsecops-chatbot/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;From the terminal, you will have to export Kendra’s Index ID, like this:  export KENDRA_INDEX_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&lt;/li&gt;
&lt;li&gt;Install the requirements: pip install -r requirements.txt and pip install -U langchain-community&lt;/li&gt;
&lt;li&gt;Then use Streamlit to serve your script’s interface by running the following command: streamlit run app.py titan&lt;/li&gt;
&lt;li&gt;Get the URL from this page and remove the /lab path. Instead, add this: /proxy/8501/&lt;/li&gt;
&lt;li&gt;Navigate to that URL and you will see your own chatbot live and running. How exciting!&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;
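&lt;p&gt;The URL rewrite in step 7 can be expressed as a tiny helper; the function name and the assumption that Streamlit stays on its default port 8501 are mine:&lt;/p&gt;

```python
def streamlit_proxy_url(studio_url, port=8501):
    """Rewrite a SageMaker Studio '/lab' URL to the Jupyter proxy path for Streamlit."""
    base = studio_url.rsplit("/lab", 1)[0]  # drop the trailing /lab segment
    return f"{base}/proxy/{port}/"
```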

&lt;p&gt;NOTE: If you want to use Claude, run this command instead: streamlit run app.py claudeInstant&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bpy6hd4tpp795yv4f6r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bpy6hd4tpp795yv4f6r.png" alt="Image description" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35l27v2840kij3nss0pg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35l27v2840kij3nss0pg.png" alt="Image description" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxaj3w74rx15yy1t3flc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxaj3w74rx15yy1t3flc.png" alt="Image description" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why would you want to test different LLMs?
Different LLMs have varying strengths and weaknesses. Testing with multiple options helps identify the LLM that best understands the terminology and delivers the most accurate and helpful responses.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Step 5: Remember to clean up your resources to avoid any potential charges. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Set up an automated incident management response using AWS</title>
      <dc:creator>Monica Escobar</dc:creator>
      <pubDate>Sun, 09 Jun 2024 19:05:27 +0000</pubDate>
      <link>https://forem.com/monica_escobar/set-up-an-automated-incident-management-response-using-aws-mp6</link>
      <guid>https://forem.com/monica_escobar/set-up-an-automated-incident-management-response-using-aws-mp6</guid>
      <description>&lt;h2&gt;
  
  
  Incident management
&lt;/h2&gt;

&lt;p&gt;Excited to share my latest AWS project, which focuses on automating incident response and follows a really engaging workshop made by AWS.&lt;/p&gt;

&lt;p&gt;The project involves setting up a core configuration of three EC2 instances with their corresponding security groups and a VPC in us-east-1 through a CloudFormation template.&lt;/p&gt;

&lt;p&gt;To enhance security and ensure prompt incident response, a pipeline has been implemented. This pipeline starts with GuardDuty constantly monitoring the environment. When an anomaly is detected, an Amazon EventBridge Rule triggers a Lambda function. &lt;/p&gt;

&lt;p&gt;The Lambda function plays a crucial role in securing the EC2 instances by restricting access to only ports 3389 and 22. Additionally, it takes snapshots of the instance's EBS volumes (which AWS stores in S3) to prevent any data loss. The architecture is as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojfrjwowtu1vtb0u00zs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojfrjwowtu1vtb0u00zs.png" alt="Image description" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To initiate the scenario and create the infrastructure for the automated incident response, I followed these steps to deploy the CloudFormation template:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploy the CloudFormation template&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Review the CloudFormation Template

&lt;ul&gt;
&lt;li&gt;Before deploying, you can review the template to understand its components and configurations.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Security Automations Workshop template.  Sets up VPC, EC2 instances, turns on GuardDuty and GuardDuty-Tester",
    "Metadata": {
        "AWS::CloudFormation::Interface": {
            "ParameterGroups": [
                {
                    "Label" : {"default": "Workshop Service Configuration"},
                    "Parameters": ["EnableGuardDuty"]
                },
                {
                    "Label" : {"default": "Workshop Parameters"},
                    "Parameters": ["LatestAMZNLinuxAMI", "LatestAMZNLinux2AMI", "LatestWindows2016AMI"]
                }
            ],
            "ParameterLabels": {
                "EnableGuardDuty": {"default" : "Automatically enable GuardDuty?"}            
            }
        }
    },
    "Parameters": {
        "LatestAMZNLinuxAMI": {
            "Description": "DO NOT CHANGE: The latest AMI ID for Amazon Linux",
            "Type": "AWS::SSM::Parameter::Value&amp;lt;AWS::EC2::Image::Id&amp;gt;",
            "Default": "/aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-gp2"
        },
        "LatestAMZNLinux2AMI": {
            "Description": "DO NOT CHANGE: The latest AMI ID for Amazon Linux2",
            "Type": "AWS::SSM::Parameter::Value&amp;lt;AWS::EC2::Image::Id&amp;gt;",
            "Default": "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
        },
        "LatestWindows2016AMI": {
            "Description": "DO NOT CHANGE: The latest AMI ID for Windows 2016",
            "Type": "AWS::SSM::Parameter::Value&amp;lt;AWS::EC2::Image::Id&amp;gt;",
            "Default": "/aws/service/ami-windows-latest/EC2LaunchV2-Windows_Server-2016-English-Full-Base"
        },
        "EnableGuardDuty": {
            "Description": "Choose Yes if GuardDuty is not yet enabled in the account and region this template will be deployed to, otherwise choose No.",
            "Type": "String",
            "AllowedValues": ["Yes-Enable GuardDuty", "No-GuardDuty is already enabled"],
            "Default": "Yes-Enable GuardDuty"
        }
    },
    "Mappings": {
        "AWSRegionAMIMap": {
            "ap-south-1": {"HVM64": "ami-b46f48db"},
            "eu-west-3": {"HVM64": "ami-cae150b7"},
            "eu-west-2": {"HVM64": "ami-c12dcda6"},
            "eu-west-1": {"HVM64": "ami-9cbe9be5"},
            "ap-northeast-3": {"HVM64": "ami-68c1cf15"},
            "ap-northeast-2": {"HVM64": "ami-efaf0181"},
            "ap-northeast-1": {"HVM64": "ami-28ddc154"},
            "sa-east-1": {"HVM64": "ami-f09dcc9c"},
            "ca-central-1": {"HVM64": "ami-2f39bf4b"},
            "ap-southeast-1": {"HVM64": "ami-64260718"},
            "ap-southeast-2": {"HVM64": "ami-60a26a02"},
            "eu-central-1": {"HVM64": "ami-1b316af0"},
            "us-east-1": {"HVM64": "ami-467ca739"},
            "us-east-2": {"HVM64": "ami-976152f2"},
            "us-west-1": {"HVM64": "ami-46e1f226"},
            "us-west-2": {"HVM64": "ami-6b8cef13"}
            }
    },

    "Conditions": {
        "EnableGuardDuty": {"Fn::Equals": [{"Ref": "EnableGuardDuty"}, "Yes-Enable GuardDuty"]}
    },

    "Resources": {
        "SSMInstanceRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [
                        {
                            "Effect": "Allow",
                            "Principal": {
                                "Service": [
                                    "ssm.amazonaws.com",
                                    "ec2.amazonaws.com"
                                ]
                            },
                            "Action": "sts:AssumeRole"
                        }
                    ]
                },
                "Policies": [
                    {
                        "PolicyName": "S3andSSMAccess",
                        "PolicyDocument": {
                            "Statement": [
                                {
                                    "Effect": "Allow",
                                    "Action": [
                                        "ssm:DescribeAssociation",
                                        "ssm:GetDeployablePatchSnapshotForInstance",
                                        "ssm:GetDocument",
                                        "ssm:DescribeDocument",
                                        "ssm:GetManifest",
                                        "ssm:GetParameters",
                                        "ssm:GetParameter",
                                        "ssm:ListAssociations",
                                        "ssm:ListInstanceAssociations",
                                        "ssm:PutInventory",
                                        "ssm:PutComplianceItems",
                                        "ssm:PutConfigurePackageResult",
                                        "ssm:UpdateAssociationStatus",
                                        "ssm:UpdateInstanceAssociationStatus",
                                        "ssm:UpdateInstanceInformation"
                                    ],
                                    "Resource": "*"
                                },
                                {
                                    "Effect": "Allow",
                                    "Action": [
                                        "ssmmessages:CreateControlChannel",
                                        "ssmmessages:CreateDataChannel",
                                        "ssmmessages:OpenControlChannel",
                                        "ssmmessages:OpenDataChannel"
                                    ],
                                    "Resource": "*"
                                },
                                {
                                    "Effect": "Allow",
                                    "Action": [
                                        "ec2messages:AcknowledgeMessage",
                                        "ec2messages:DeleteMessage",
                                        "ec2messages:FailMessage",
                                        "ec2messages:GetEndpoint",
                                        "ec2messages:GetMessages",
                                        "ec2messages:SendReply"
                                    ],
                                    "Resource": "*"
                                },
                                {
                                    "Effect": "Allow",
                                    "Action": [
                                        "cloudwatch:PutMetricData"
                                    ],
                                    "Resource": "*"
                                },
                                {
                                    "Effect": "Allow",
                                    "Action": [
                                        "ec2:DescribeInstanceStatus"
                                    ],
                                    "Resource": "*"
                                },
                                {
                                    "Effect": "Allow",
                                    "Action": [
                                        "ds:CreateComputer",
                                        "ds:DescribeDirectories"
                                    ],
                                    "Resource": "*"
                                },
                                {
                                    "Effect": "Allow",
                                    "Action": [
                                        "logs:CreateLogGroup",
                                        "logs:CreateLogStream",
                                        "logs:DescribeLogGroups",
                                        "logs:DescribeLogStreams",
                                        "logs:PutLogEvents"
                                    ],
                                    "Resource": "*"
                                },
                                {
                                    "Effect": "Allow",
                                    "Action": [
                                        "s3:GetBucketLocation",
                                        "s3:PutObject",
                                        "s3:GetObject",
                                        "s3:GetEncryptionConfiguration",
                                        "s3:AbortMultipartUpload",
                                        "s3:ListMultipartUploadParts",
                                        "s3:ListBucket",
                                        "s3:ListBucketMultipartUploads"
                                    ],
                                    "Resource": "*"
                                },
                                {
                                    "Sid": "S3ListBuckets",
                                    "Effect": "Allow",
                                    "Action": [
                                        "s3:ListAllMyBuckets"
                                    ],
                                    "Resource": "arn:aws:s3:::*"
                                },
                                {
                                    "Sid": "S3GetObjects",
                                    "Effect": "Allow",
                                    "Action": [
                                        "s3:ListBucket",
                                        "s3:GetBucketLocation",
                                        "s3:GetObject"
                                    ],
                                    "Resource": {
                                        "Fn::Join": [
                                            "",
                                            [
                                                "arn:aws:s3:::",
                                                "agentbucket-",
                                                {
                                                    "Ref": "AWS::AccountId"
                                                },
                                                "/*"
                                            ]
                                        ]
                                    }
                                }
                            ]
                        }
                    }
                ],
                "Path": "/"
            }
        },
        "VPC": {
            "Type": "AWS::EC2::VPC",
            "Properties": {
                "CidrBlock": "10.0.0.0/16",
                "EnableDnsSupport": "true",
                "EnableDnsHostnames": "true",
                "Tags": [
                    {
                        "Key": "Application",
                        "Value": {
                            "Ref": "AWS::StackName"
                        }
                    },
                    {
                        "Key": "Name",
                        "Value": {
                            "Fn::Join": [
                                "-",
                                [
                                    "VPC",
                                    {
                                        "Ref": "AWS::StackName"
                                    }
                                ]
                            ]
                        }
                    }
                ]
            }
        },
        "PublicSubnet": {
            "Type": "AWS::EC2::Subnet",
            "Properties": {
                "VpcId": {
                    "Ref": "VPC"
                },
                "CidrBlock": "10.0.1.0/24",
                "MapPublicIpOnLaunch": "true",
                "AvailabilityZone": {
                    "Fn::Select": [
                        "0",
                        {
                            "Fn::GetAZs": {
                                "Ref": "AWS::Region"
                            }
                        }
                    ]
                },
                "Tags": [
                    {
                        "Key": "Name",
                        "Value": {
                            "Fn::Join": [
                                "-",
                                [
                                    "Pub1",
                                    {
                                        "Ref": "AWS::StackName"
                                    }
                                ]
                            ]
                        }
                    }
                ]
            }
        },
        "IGW": {
            "Type": "AWS::EC2::InternetGateway",
            "Properties": {
                "Tags": [
                    {
                        "Key": "Application",
                        "Value": {
                            "Ref": "AWS::StackName"
                        }
                    }
                ]
            }
        },
        "AttachGateway": {
            "Type": "AWS::EC2::VPCGatewayAttachment",
            "Properties": {
                "VpcId": {
                    "Ref": "VPC"
                },
                "InternetGatewayId": {
                    "Ref": "IGW"
                }
            }
        },
        "PublicRouteTable": {
            "Type": "AWS::EC2::RouteTable",
            "Properties": {
                "VpcId": {
                    "Ref": "VPC"
                },
                "Tags": [
                    {
                        "Key": "Application",
                        "Value": {
                            "Ref": "AWS::StackName"
                        }
                    },
                    {
                        "Key": "Network",
                        "Value": "Public"
                    }
                ]
            }
        },
        "PublicRoute": {
            "Type": "AWS::EC2::Route",
            "DependsOn": [
                "AttachGateway"
            ],
            "Properties": {
                "RouteTableId": {
                    "Ref": "PublicRouteTable"
                },
                "DestinationCidrBlock": "0.0.0.0/0",
                "GatewayId": {
                    "Ref": "IGW"
                }
            }
        },
        "PublicSubnetRouteAssociation": {
            "Type": "AWS::EC2::SubnetRouteTableAssociation",
            "Properties": {
                "SubnetId": {
                    "Ref": "PublicSubnet"
                },
                "RouteTableId": {
                    "Ref": "PublicRouteTable"
                }
            }
        },
        "GDdetector": {
            "Type": "AWS::GuardDuty::Detector",
            "Condition": "EnableGuardDuty",
            "Properties": {
                "Enable": true,
                "FindingPublishingFrequency": "FIFTEEN_MINUTES"
            }
        },
        "GuardDutyTesterTemplate": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL":{
                    "Fn::Join": [
                            "",
                            [
                                "https://sa-security-specialist-workshops-",
                                {
                                    "Ref": "AWS::Region"
                                },
                                ".s3.",
                                {
                                    "Ref": "AWS::Region"
                                },
                                ".amazonaws.com/security-hub-workshop/templates/guardduty-tester-template.json"
                            ]
                        ]
                    },
                "Parameters": {
                    "InstanceSubnetId": {
                        "Ref": "PublicSubnet"
                    },
                    "DeployVPC": {
                        "Ref": "VPC"
                    },
                    "DeployVPCCidr": {
                        "Fn::GetAtt": [
                            "VPC",
                            "CidrBlock"
                        ]
                    },
                    "LatestWindows2012R2AMI":  "/aws/service/ami-windows-latest/EC2LaunchV2-Windows_Server-2016-English-Full-Base"
                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Deploy the Template&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select a region; for instance, I will be using us-east-1 (N. Virginia).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Specify Stack Details&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter the following parameters:

&lt;ul&gt;
&lt;li&gt;Stack name: AutomatedIncidentResponseWorkshop&lt;/li&gt;
&lt;li&gt;Enable GuardDuty: Yes&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;After filling in the parameters, click Next.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure Stack Options&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click Next again on the following page, leaving all options at their default values.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Acknowledge and Create Stack&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scroll down to the bottom of the page, check the box acknowledging that the template will create IAM roles, and click Create stack.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
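&lt;p&gt;The same console steps can be scripted with boto3 if you prefer; a sketch under the assumption that the template has been saved locally as template.json (the helper name is mine):&lt;/p&gt;

```python
def stack_parameters(enable_guardduty=True):
    """Build the CloudFormation Parameters list matching the console choices above."""
    value = "Yes-Enable GuardDuty" if enable_guardduty else "No-GuardDuty is already enabled"
    return [{"ParameterKey": "EnableGuardDuty", "ParameterValue": value}]

# Equivalent deployment from code (needs boto3 and credentials):
#   cfn = boto3.client("cloudformation", region_name="us-east-1")
#   cfn.create_stack(
#       StackName="AutomatedIncidentResponseWorkshop",
#       TemplateBody=open("template.json").read(),
#       Parameters=stack_parameters(),
#       Capabilities=["CAPABILITY_IAM"],  # acknowledges the IAM roles the template creates
#   )
```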

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrvdmhqa600l926htp85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrvdmhqa600l926htp85.png" alt="Image description" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up a security group&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create Security Group

&lt;ul&gt;
&lt;li&gt;Create a new security group named ForensicsSG.&lt;/li&gt;
&lt;li&gt;Remove all outbound rules and set the following inbound rules:

&lt;ul&gt;
&lt;li&gt;RDP: Protocol TCP, Port 3389, Source (your IP), Description: RDP for IR team&lt;/li&gt;
&lt;li&gt;SSH: Protocol TCP, Port 22, Source (your IP), Description: SSH for IR team&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
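&lt;p&gt;The inbound rules above map directly to the IpPermissions structure that EC2's authorize_security_group_ingress call expects; a sketch (the helper name and the example /32 CIDR are mine):&lt;/p&gt;

```python
def ir_ingress_rules(my_ip_cidr):
    """ForensicsSG inbound rules: RDP (3389) and SSH (22) from the IR team's IP only."""
    return [
        {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
         "IpRanges": [{"CidrIp": my_ip_cidr, "Description": desc}]}
        for port, desc in [(3389, "RDP for IR team"), (22, "SSH for IR team")]
    ]

# Applied to a live security group (needs boto3 and credentials):
#   ec2 = boto3.client("ec2")
#   ec2.authorize_security_group_ingress(GroupId="sg-...",
#                                        IpPermissions=ir_ingress_rules("203.0.113.7/32"))
```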

&lt;p&gt;&lt;strong&gt;Creating and Attaching Policies&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a New IAM Policy&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a policy called ec2instance-containment-with-forensics-policy with the following JSON to deny termination of isolated instances:

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Deny",
            "Action": [
                "ec2:TerminateInstances",
                "ec2:DeleteTags",
                "ec2:CreateTags"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/status": "isolated"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the Execution role for the Lambda function:&lt;br&gt;
 Create a role called ec2instance-containment-with-forensics-role with Lambda as a trusted entity in Trust Relationships&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a User Group&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a group named ec2-users.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2f2phox2wmjdv4y2axu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2f2phox2wmjdv4y2axu.png" alt="Image description" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Attach Policies to the Group&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Attach the following policies to the ec2-users group:

&lt;ul&gt;
&lt;li&gt;AmazonEC2FullAccess (AWS Managed Policy)&lt;/li&gt;
&lt;li&gt;ec2instance-containment-with-forensics-policy (the custom policy created above, which denies termination of isolated instances).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a New User&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an IAM user named testuser&lt;/li&gt;
&lt;li&gt;Add this user to the ec2-users group.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fth6lpg0myb0flezsklvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fth6lpg0myb0flezsklvg.png" alt="Image description" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring the Lambda Function&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create IAM Policy for Lambda&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an IAM policy and attach it to the IAM role that the Lambda function will assume for automated responses.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create the Lambda Function&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Develop and deploy the Lambda function that will handle automated incident responses. Change the timeout to 15 minutes and select ec2instance-containment-with-forensics-role as the execution role. Select Python as runtime.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
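&lt;p&gt;The settings in step 2 translate to the following Lambda configuration; a sketch in which the function name and the exact Python runtime version are my assumptions (use whatever Python runtime the console offers):&lt;/p&gt;

```python
def lambda_config(role_arn):
    """Configuration matching step 2: Python runtime, 15-minute timeout, containment role."""
    return {
        "FunctionName": "ec2instance-containment-with-forensics",  # name is an assumption
        "Runtime": "python3.12",
        "Role": role_arn,
        "Handler": "lambda_function.lambda_handler",
        "Timeout": 15 * 60,  # Lambda's maximum timeout, expressed in seconds
    }

# Used with boto3:
#   boto3.client("lambda").create_function(Code={"ZipFile": zipped_code},
#                                          **lambda_config(role_arn))
```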

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxe72w0f5szxpgfeedzs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxe72w0f5szxpgfeedzs.png" alt="Image description" width="800" height="554"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add the following environment variable:

&lt;ul&gt;
&lt;li&gt;Key: ForensicsSG&lt;/li&gt;
&lt;li&gt;Value: sg-... (the ID of your ForensicsSG)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y6ojmpbrndv1ktzu1ke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y6ojmpbrndv1ktzu1ke.png" alt="Image description" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Include the following code: 
import boto3
import time
from datetime import date
from botocore.exceptions import ClientError
import os&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;def lambda_handler(event, context):&lt;br&gt;
    # Copyright 2022 - Amazon Web Services&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify,
# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

# print('## ENVIRONMENT VARIABLES')
# print(os.environ)
# print('## EVENT')
# print(event)
response = 'Error remediating the security finding.'
try:
    # Gather Instance ID from CloudWatch event
    instanceID = event['detail']['resource']['instanceDetails']['instanceId']
    print('## INSTANCE ID: %s' % (instanceID))

    # Get instance details
    client = boto3.client('ec2')
    ec2 = boto3.resource('ec2')
    instance = ec2.Instance(instanceID)
    instance_description = client.describe_instances(InstanceIds=[instanceID])
    print('## INSTANCE DESCRIPTION: %s' % (instance_description))

    #-------------------------------------------------------------------
    # Protect instance from termination
    #-------------------------------------------------------------------
    ec2.Instance(instanceID).modify_attribute(
    DisableApiTermination={
        'Value': True
    })
    ec2.Instance(instanceID).modify_attribute(
    InstanceInitiatedShutdownBehavior={
        'Value': 'stop'
    })

    #-------------------------------------------------------------------
    # Create tags to avoid accidental deletion of forensics evidence
    #-------------------------------------------------------------------
    ec2.create_tags(Resources=[instanceID], Tags=[{'Key':'status', 'Value':'isolated'}])
    print('## INSTANCE TAGS: %s' % (instance.tags))

    #------------------------------------
    ## Isolate Instance
    #------------------------------------
    print('quarantining instance -- %s, %s' % (instance.id, instance.instance_type))

    # Change instance Security Group attribute to terminate connections and allow Forensics Team's access
    instance.modify_attribute(Groups=[os.environ['ForensicsSG']])
    print('Instance ready for root cause analysis -- %s, %s' % (instance.id,  instance.security_groups))

    #------------------------------------
    ## Create snapshots of EBS volumes 
    #------------------------------------
    # date.today() carries no time component, so only the date is recorded
    description = 'Isolated Instance:' + instance.id + ' on account: ' + event['detail']['accountId'] + ' on ' + date.today().strftime("%Y-%m-%d")
    SnapShotDetails = client.create_snapshots(
        Description=description,
        InstanceSpecification = {
            'InstanceId': instanceID,
            'ExcludeBootVolume': False
        }
    )
    print('Snapshot Created -- %s' % (SnapShotDetails))

    response = 'Instance ' + instance.id + ' auto-remediated'        

except ClientError as e:
    print(e)

return response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Test the Lambda Function

&lt;ul&gt;
&lt;li&gt;Perform tests to ensure the Lambda function works as expected. You can use the following code (remember to change some of the variables):&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3, json
import time
from datetime import date
from botocore.exceptions import ClientError
import os

def lambda_handler(event, context):
    # Copyright 2022 - Amazon Web Services

    # Permission is hereby granted, free of charge, to any person obtaining a copy of this
    # software and associated documentation files (the "Software"), to deal in the Software
    # without restriction, including without limitation the rights to use, copy, modify,
    # merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
    # permit persons to whom the Software is furnished to do so.

    # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
    # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
    # PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
    # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
    # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
    # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

    # print('## ENVIRONMENT VARIABLES')
    # print(os.environ)
    # print('## EVENT')
    # print(event)
    response = 'Error remediating the security finding.'
    try:
        # Gather Instance ID from the CloudWatch (EventBridge) event
        instanceID = event['detail']['resource']['instanceDetails']['instanceId']
        print('## INSTANCE ID: %s' % (instanceID))

        # Get instance details
        client = boto3.client('ec2')
        ec2 = boto3.resource('ec2')
        instance = ec2.Instance(instanceID)
        instance_description = client.describe_instances(InstanceIds=[instanceID])
        print('## INSTANCE DESCRIPTION: %s' % (instance_description))

        #-------------------------------------------------------------------
        # Protect instance from termination
        #-------------------------------------------------------------------
        instance.modify_attribute(
            DisableApiTermination={'Value': True}
        )
        instance.modify_attribute(
            InstanceInitiatedShutdownBehavior={'Value': 'stop'}
        )

        #-------------------------------------------------------------------
        # Create tags to avoid accidental deletion of forensics evidence
        #-------------------------------------------------------------------
        ec2.create_tags(Resources=[instanceID], Tags=[{'Key': 'status', 'Value': 'isolated'}])
        print('## INSTANCE TAGS: %s' % (instance.tags))

        #------------------------------------
        # Isolate Instance
        #------------------------------------
        print('quarantining instance -- %s, %s' % (instance.id, instance.instance_type))

        # Change the instance's Security Group to terminate existing connections
        # and allow only the Forensics Team's access
        instance.modify_attribute(Groups=[os.environ['ForensicsSG']])
        print('Instance ready for root cause analysis -- %s, %s' % (instance.id, instance.security_groups))

        #------------------------------------
        # Create snapshots of EBS volumes
        #------------------------------------
        # date.today() carries no time component, so only the date is recorded
        description = 'Isolated Instance:' + instance.id + ' on account: ' + event['detail']['accountId'] + ' on ' + date.today().strftime("%Y-%m-%d")
        SnapShotDetails = client.create_snapshots(
            Description=description,
            InstanceSpecification={
                'InstanceId': instanceID,
                'ExcludeBootVolume': False
            }
        )
        print('Snapshot Created -- %s' % (SnapShotDetails))

        response = 'Instance ' + instance.id + ' auto-remediated'

    except ClientError as e:
        print(e)

    return response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
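&lt;p&gt;Before triggering a real GuardDuty finding, you can invoke the handler with a stubbed event. This sketch only mirrors the field paths the handler reads (&lt;code&gt;detail.resource.instanceDetails.instanceId&lt;/code&gt; and &lt;code&gt;detail.accountId&lt;/code&gt;); the instance and account IDs are placeholders:&lt;/p&gt;

```python
import json

def make_test_event(instance_id="i-0123456789abcdef0", account_id="111122223333"):
    """Minimal stand-in for a GuardDuty finding event.

    Only the fields the handler actually reads are included;
    both IDs are placeholders you should replace with your own.
    """
    return {
        "detail": {
            "accountId": account_id,
            "resource": {
                "instanceDetails": {
                    "instanceId": instance_id
                }
            }
        }
    }

if __name__ == "__main__":
    # Paste this JSON as the test payload in the Lambda console's Test tab.
    print(json.dumps(make_test_event(), indent=2))
```

&lt;p&gt;Using your real instance ID here lets you watch the quarantine steps run end to end without waiting for a finding.&lt;/p&gt;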

&lt;p&gt;&lt;strong&gt;IMPORTANT:&lt;/strong&gt; Verify the status after execution: check in the EC2 console the current state of the instance "BasicLinuxTarget". You will see that the security group has changed to the one we configured for the IR team only, and that new snapshots have been created. You are now seeing our automated response in live action!&lt;/p&gt;

&lt;p&gt;Before testing, the security group in the instance was:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakp3pos5yumv1qi4wb6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakp3pos5yumv1qi4wb6x.png" alt="Image description" width="726" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After testing, note the change in security group. Our IR security group took over. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3j7vtnfm23b7sedk4j7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3j7vtnfm23b7sedk4j7o.png" alt="Image description" width="699" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can also check the GuardDuty dashboard to see the threats it detected, the Lambda logs, and the snapshots created.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp8sv1tjnzduxzhhk6cji.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp8sv1tjnzduxzhhk6cji.png" alt="Image description" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh2abbdtz9ux2gluruzu7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh2abbdtz9ux2gluruzu7.png" alt="Image description" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5n6rq4e5b77pbnq1ureg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5n6rq4e5b77pbnq1ureg.png" alt="Image description" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create EventBridge Rule

&lt;ul&gt;
&lt;li&gt;Create a rule in EventBridge that triggers the Lambda function based on findings from GuardDuty. For the creation method, select "Custom pattern" and use the following event pattern:
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "source": ["aws.guardduty"],
  "detail": {
    "type": ["UnauthorizedAccess:EC2/TorClient", "Backdoor:EC2/C&amp;amp;CActivity.B!DNS", "Trojan:EC2/DNSDataExfiltration", "CryptoCurrency:EC2/BitcoinTool.B", "CryptoCurrency:EC2/BitcoinTool.B!DNS"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
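&lt;p&gt;If you prefer scripting over the console, the same rule can be sketched with boto3 (the rule name and Lambda ARN below are placeholders, and the finding-type list abbreviates the JSON pattern above):&lt;/p&gt;

```python
import json

# Event pattern matching the JSON rule above (the C2-activity DNS finding
# type from that list is elided here; add it back when creating the real rule).
GUARDDUTY_PATTERN = {
    "source": ["aws.guardduty"],
    "detail": {
        "type": [
            "UnauthorizedAccess:EC2/TorClient",
            "Trojan:EC2/DNSDataExfiltration",
            "CryptoCurrency:EC2/BitcoinTool.B",
            "CryptoCurrency:EC2/BitcoinTool.B!DNS",
        ]
    },
}

def create_guardduty_rule(rule_name, lambda_arn):
    """Create the EventBridge rule and point it at the Lambda.

    rule_name and lambda_arn are placeholders for your own resources.
    """
    import boto3  # imported lazily so this module loads without AWS credentials
    events = boto3.client("events")
    rule = events.put_rule(Name=rule_name, EventPattern=json.dumps(GUARDDUTY_PATTERN))
    events.put_targets(Rule=rule_name, Targets=[{"Id": "1", "Arn": lambda_arn}])
    # EventBridge also needs permission to invoke the function:
    boto3.client("lambda").add_permission(
        FunctionName=lambda_arn,
        StatementId=rule_name + "-invoke",
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn=rule["RuleArn"],
    )
```

&lt;p&gt;The console does the &lt;code&gt;add_permission&lt;/code&gt; step for you when you pick the Lambda as a target; when scripting, it is easy to forget.&lt;/p&gt;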



&lt;p&gt;As the target, select the Lambda function you previously created, then click Create.&lt;/p&gt;

&lt;p&gt;This is one way to manage incident responses automatically in the cloud. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Personal learnings from this project:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to perform automated, basic incident response tasks for containment and for gathering data to analyse cyber threats.&lt;/li&gt;
&lt;li&gt;What actions can be taken, and how to prevent such threats from affecting a production environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How could this be improved?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As a reflective professional, I like to spend some time after finishing a project thinking about how it can be further improved.&lt;/p&gt;

&lt;p&gt;For future enhancements, I plan to integrate an SNS topic to notify the incident response team. This will enable manual checks in case of any damage and facilitate reverting to the original state when the situation is under control. &lt;/p&gt;
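&lt;p&gt;A minimal sketch of that enhancement, assuming a pre-created SNS topic (the topic ARN and the message wording are illustrative, not part of the current project):&lt;/p&gt;

```python
def format_alert(instance_id, finding_type):
    # Pure helper so the message can be unit-tested without AWS access.
    subject = "GuardDuty auto-remediation: " + instance_id
    message = ("Instance " + instance_id + " was isolated after a "
               + finding_type + " finding. Review the snapshots and revert "
               "the security group once the situation is under control.")
    return subject, message

def notify_ir_team(topic_arn, instance_id, finding_type):
    """Publish an alert to a pre-created SNS topic (topic_arn is a placeholder)."""
    import boto3  # imported lazily so this module loads without AWS credentials
    subject, message = format_alert(instance_id, finding_type)
    boto3.client("sns").publish(TopicArn=topic_arn, Subject=subject, Message=message)
```

&lt;p&gt;The Lambda's execution role would also need &lt;code&gt;sns:Publish&lt;/code&gt; on the topic.&lt;/p&gt;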

&lt;p&gt;There is also the downside of Lambda's 15-minute maximum execution time. If needed, some environments would probably benefit from a different architecture, one that relies on Step Functions to avoid that time restriction.&lt;/p&gt;

&lt;p&gt;I'm excited to continue optimising this project for enhanced incident response capabilities. Thanks for reading, and happy deploying if you want to give this a go!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>incident</category>
      <category>automation</category>
    </item>
    <item>
      <title>Terraform pipeline (IaC for AWS)</title>
      <dc:creator>Monica Escobar</dc:creator>
      <pubDate>Mon, 27 May 2024 17:42:57 +0000</pubDate>
      <link>https://forem.com/monica_escobar/terraform-pipeline-iac-for-aws-438e</link>
      <guid>https://forem.com/monica_escobar/terraform-pipeline-iac-for-aws-438e</guid>
      <description>&lt;h2&gt;
  
  
  Automating Infrastructure Deployment for Static Websites with Terraform in AWS
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;We'll leverage Terraform, an infrastructure as code (IaC) tool, to orchestrate the process. Here's a breakdown of the key steps involved:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Terraform Setup:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure Terraform is installed and configured locally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Configuring the AWS Provider:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Terraform requires an AWS provider configuration file specifying the region and credentials for provisioning resources. We will create a &lt;strong&gt;provider.tf&lt;/strong&gt; file for this. &lt;/p&gt;
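&lt;p&gt;A minimal &lt;strong&gt;provider.tf&lt;/strong&gt; sketch (the region and profile are placeholders; credentials can equally come from environment variables):&lt;/p&gt;

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1" # placeholder region
  profile = "default"   # or rely on AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
}
```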

&lt;p&gt;&lt;strong&gt;3. Defining the VPC Network:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We'll define the Virtual Private Cloud (VPC) in the &lt;strong&gt;vpc.tf&lt;/strong&gt; file; this is where the EC2 instance running the server will reside. This includes creating a public subnet for internet access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating Internet Connectivity and Security:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An internet gateway is established (in &lt;strong&gt;network.tf&lt;/strong&gt;) to enable internet access for the VPC. We'll also define a custom route table and associate it with the public subnet for proper routing. Finally, a security group is created to control inbound and outbound traffic for the Jenkins EC2 instance.&lt;/p&gt;
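&lt;p&gt;The pieces above might look roughly like this in &lt;strong&gt;network.tf&lt;/strong&gt;, assuming a VPC and public subnet defined in &lt;strong&gt;vpc.tf&lt;/strong&gt; (the names, CIDR ranges, and Jenkins port are illustrative):&lt;/p&gt;

```hcl
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id # assumes aws_vpc.main is defined in vpc.tf
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id # assumes aws_subnet.public from vpc.tf
  route_table_id = aws_route_table.public.id
}

resource "aws_security_group" "jenkins" {
  vpc_id = aws_vpc.main.id
  ingress {
    from_port   = 8080 # Jenkins' default port
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # tighten to your own IP in practice
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```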

&lt;p&gt;&lt;strong&gt;4. Building the Terraform Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The core Terraform configuration file (&lt;strong&gt;main.tf&lt;/strong&gt;) defines the provisioning of our EC2 instance, bootstrapped through a user data script. Additionally, it creates an S3 bucket for hosting the static website and configures it for website hosting.&lt;/p&gt;
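&lt;p&gt;A hedged sketch of &lt;strong&gt;main.tf&lt;/strong&gt; (the AMI ID, bucket name, and user data script are placeholders):&lt;/p&gt;

```hcl
resource "aws_instance" "jenkins" {
  ami                    = "ami-xxxxxxxx" # placeholder AMI for your region
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public.id
  vpc_security_group_ids = [aws_security_group.jenkins.id]
  user_data              = file("install_jenkins.sh") # placeholder bootstrap script
}

resource "aws_s3_bucket" "site" {
  bucket = "my-static-site-example" # placeholder; bucket names are globally unique
}

resource "aws_s3_bucket_website_configuration" "site" {
  bucket = aws_s3_bucket.site.id
  index_document {
    suffix = "index.html"
  }
}
```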

&lt;p&gt;&lt;strong&gt;5. Terraform Workflow:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the configuration is complete, we'll use Terraform commands to initialize, plan, and apply the changes. This will provision the resources in our AWS environment.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform init&lt;br&gt;
terraform plan&lt;br&gt;
terraform apply&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t3xe647pl3s42ly8oeh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t3xe647pl3s42ly8oeh.png" alt="terraform init" width="800" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9m3848o6b0ovjgekoygx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9m3848o6b0ovjgekoygx.png" alt="terraform plan" width="800" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyka30u61tcxtdm82s0g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyka30u61tcxtdm82s0g.png" alt="terraform apply" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut4hmzv7c8xlvufjg1mh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut4hmzv7c8xlvufjg1mh.png" alt="terraform destroy" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Validating Resources (Optional):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can now validate the created AWS resources like the EC2 instance, security group, VPC, and S3 bucket to ensure everything is set up correctly.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>automation</category>
      <category>iac</category>
    </item>
    <item>
      <title>AWS Code Pipeline - CloudFront - S3 CI/CD Pipeline</title>
      <dc:creator>Monica Escobar</dc:creator>
      <pubDate>Sat, 25 May 2024 17:30:46 +0000</pubDate>
      <link>https://forem.com/monica_escobar/aws-code-pipeline-cloudfront-s3-cicd-pipeline-55gf</link>
      <guid>https://forem.com/monica_escobar/aws-code-pipeline-cloudfront-s3-cicd-pipeline-55gf</guid>
      <description>&lt;h2&gt;
  
  
  Steps to create an automated pipeline in AWS without losing the plot
&lt;/h2&gt;




&lt;p&gt;&lt;strong&gt;Resources used:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 bucket&lt;/li&gt;
&lt;li&gt;CloudFront Distribution&lt;/li&gt;
&lt;li&gt;GitHub&lt;/li&gt;
&lt;li&gt;Route 53&lt;/li&gt;
&lt;li&gt;Domain name&lt;/li&gt;
&lt;li&gt;SSL Certificate&lt;/li&gt;
&lt;li&gt;Code Pipeline&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;The Big Picture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0d1hlnjkily6cdua7pi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0d1hlnjkily6cdua7pi.png" alt="Image description" width="668" height="370"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Steps followed:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Setting Up Your AWS Environment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1.1 Create an AWS Account&lt;/p&gt;

&lt;p&gt;If you don’t already have an AWS account, sign up for AWS.&lt;/p&gt;

&lt;p&gt;1.2 Set Up IAM User&lt;/p&gt;

&lt;p&gt;Create an IAM user with the necessary permissions to access the services you'll be using:&lt;/p&gt;

&lt;p&gt;Navigate to the IAM Console.&lt;br&gt;
Create a new user and attach the policies for CodePipeline, S3, CloudFront, Route 53, and Certificate Manager.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Purchase a Domain and obtain an SSL Certificate&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;2.1 Purchase a Domain via Route 53&lt;/p&gt;

&lt;p&gt;Go to the Route 53 Console.&lt;br&gt;
Click on Domains -&amp;gt; Register Domain.&lt;br&gt;
Search for your desired domain name and follow the prompts to purchase it.&lt;/p&gt;

&lt;p&gt;2.2 Request an SSL Certificate via AWS Certificate Manager&lt;/p&gt;

&lt;p&gt;Navigate to the AWS Certificate Manager (ACM) Console.&lt;br&gt;
Click on Request a certificate.&lt;br&gt;
Select Request a public certificate and enter your domain name.&lt;br&gt;
Follow the steps to validate your domain ownership (via DNS is much faster than via email, remember to create a CNAME record for verification purposes).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Set Up S3 Bucket for Static Hosting&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to the S3 Console.&lt;br&gt;
Create a new bucket (e.g. my-portfolio).&lt;br&gt;
Enable static website hosting in the Properties tab (optional tip: set a Cache-Control header of max-age=0 on your objects for faster updates).&lt;br&gt;
Set up the bucket policy to allow public read access (or configure CloudFront for secure access).&lt;/p&gt;
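&lt;p&gt;For reference, a typical public-read bucket policy looks like this (replace my-portfolio with your bucket name; skip it if you serve the bucket privately through CloudFront):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-portfolio/*"
    }
  ]
}
```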

&lt;p&gt;&lt;strong&gt;Step 4: Configure CloudFront for CDN&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the CloudFront Console.&lt;br&gt;
Create a new distribution.&lt;br&gt;
Set the origin to the S3 bucket you created.&lt;br&gt;
Configure the distribution settings, ensuring to set up the SSL certificate you requested.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Configure Route 53 to Point to CloudFront&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the Route 53 Console, navigate to Hosted Zones.&lt;br&gt;
Select your domain and create a new A record.&lt;br&gt;
Set the alias target to your CloudFront distribution.&lt;/p&gt;
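&lt;p&gt;For reference, the equivalent change batch for the Route 53 API (the domain and CloudFront domain name are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID used for CloudFront alias targets):&lt;/p&gt;

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d1234abcd.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```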

&lt;p&gt;&lt;strong&gt;Step 6: Set Up GitHub Repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a new repository on GitHub and push your application code to it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7: Create the CI/CD Pipeline with AWS CodePipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;7.1 Set Up CodePipeline&lt;/p&gt;

&lt;p&gt;Go to the CodePipeline Console.&lt;br&gt;
Click Create pipeline and give it a name.&lt;br&gt;
Select New service role to create a new IAM role for CodePipeline.&lt;/p&gt;

&lt;p&gt;7.2 Add Source Stage&lt;/p&gt;

&lt;p&gt;In the Source stage, select GitHub (Version 2) as the source provider.&lt;br&gt;
Connect your GitHub account and select the repository and branch you want to use.&lt;/p&gt;

&lt;p&gt;7.3 Add Build Stage (OPTIONAL)&lt;/p&gt;

&lt;p&gt;In the Build stage, either skip it or choose CodeBuild (I personally skipped it).&lt;/p&gt;

&lt;p&gt;7.4 Add Deploy Stage&lt;/p&gt;

&lt;p&gt;In the Deploy stage, choose Amazon S3 as the deploy provider.&lt;br&gt;
Select the S3 bucket you created for static hosting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8: Test the Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Review the pipeline configuration and click Create pipeline.&lt;br&gt;
Commit a change to your GitHub repository to trigger the pipeline.&lt;br&gt;
Verify the build and deployment process.&lt;br&gt;
Access your application using the domain name configured in Route 53. If the CloudFront cache is at its default setting, changes will take longer to appear on your domain, but you can check the S3 website URL directly.&lt;/p&gt;




&lt;p&gt;And this is all you need to create your own automated pipeline from your main branch using AWS and GitHub. &lt;/p&gt;

&lt;p&gt;Happy deploying!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cicd</category>
      <category>automation</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
