<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Brandon Damue</title>
    <description>The latest articles on Forem by Brandon Damue (@brandondamue).</description>
    <link>https://forem.com/brandondamue</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F417961%2F7dbc7d24-387b-40b6-a204-28793edb7beb.jpg</url>
      <title>Forem: Brandon Damue</title>
      <link>https://forem.com/brandondamue</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/brandondamue"/>
    <language>en</language>
    <item>
      <title>Microsoft Entra ID: What you need to know</title>
      <dc:creator>Brandon Damue</dc:creator>
      <pubDate>Mon, 09 Jun 2025 15:34:20 +0000</pubDate>
      <link>https://forem.com/brandondamue/microsoft-entra-id-what-you-need-to-know-8l3</link>
      <guid>https://forem.com/brandondamue/microsoft-entra-id-what-you-need-to-know-8l3</guid>
      <description>&lt;p&gt;Microsoft has been an industry leader in organizational user and object directory services with their Active Directory suite of services. Prior to the prevalence of cloud services, organizations have hosted their directory solutions primarily on-prem.&lt;/p&gt;

&lt;p&gt;Many organizations are now moving these solutions to the cloud, and many more run them in a hybrid configuration (i.e. both on-prem and in the cloud). The purpose of this article is to go over Entra ID, the service organizations now use for identity and access management. Rather than starting this discussion (if I can call it that, as I am the only one talking) with what it is, we are going to go in the opposite direction and begin with what it is not. Walk with me!&lt;/p&gt;

&lt;h2&gt;
  
  
  What is it not?
&lt;/h2&gt;

&lt;p&gt;A lot of people think Entra ID is the cloud version of Active Directory Domain Services, but it is not. Well, Microsoft is partly to blame for this confusion, as Entra ID used to be known as Azure Active Directory.&lt;/p&gt;

&lt;p&gt;If Entra ID is not the cloud version of Active Directory, what is it then? As stated in this &lt;a href="https://learn.microsoft.com/en-us/entra/fundamentals/whatis" rel="noopener noreferrer"&gt;Microsoft Learn documentation&lt;/a&gt; for Entra ID, it is a cloud-based identity and access management service that an organization can use to give its employees access to external resources such as the Azure portal, Microsoft 365 apps (an aside: I recently learned that Office 365 and Microsoft 365 do not refer to the exact same suite of applications. I had always assumed they did) and a wide range of SaaS (Software as a Service) applications. Entra ID also helps organizations manage access to internal line-of-business applications and other applications and services within the organization’s intranet.&lt;/p&gt;

&lt;p&gt;Check out &lt;a href="https://learn.microsoft.com/en-us/entra/fundamentals/compare" rel="noopener noreferrer"&gt;this article&lt;/a&gt; that compares Active Directory Domain Services to Microsoft Entra ID to see how they differ.&lt;/p&gt;

&lt;h2&gt;
  
  
  Entra ID Licensing
&lt;/h2&gt;

&lt;p&gt;When you subscribe to any Microsoft Online Business service such as Microsoft 365, you automatically gain access to the free tier (license) of Entra ID. If you need additional features other than those offered in the free tier, you can upgrade to Entra Premium P1 or Premium P2 licenses. Here is what you get depending on the Entra ID license you decide to subscribe to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The free tier provides you with user and group management, the ability to sync your cloud environment with your on-prem directory, self-service password change for cloud users, and Single Sign-On (SSO) across Azure and popular SaaS applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;With the Entra ID P1 license, you get everything offered in the free tier plus the ability for hybrid users to access resources both in the cloud and on-prem. With it also comes support for advanced administration, which includes dynamic membership groups, Microsoft Identity Manager, self-service group management, and cloud write-back capabilities that allow on-prem users to use self-service password reset.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When you upgrade to the Premium P2 license, you get all the features that come with the free and Premium P1 licenses. In addition, you get Microsoft Entra ID Protection, which allows you to use risk-based Conditional Access policies, as well as Privileged Identity Management, an offering that makes it possible for you to discover, restrict, and monitor administrators and their access to resources, and to provide just-in-time access when needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To see a complete list of all the free and premium features that Entra ID offers, you can go &lt;a href="https://learn.microsoft.com/en-us/entra/fundamentals/whatis" rel="noopener noreferrer"&gt;here&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Entra Authentication
&lt;/h2&gt;

&lt;p&gt;I believe it goes without saying that authenticating user credentials when they try to sign into a device or application is the core feature of any identity and access management solution. In the case of Entra ID, it does more than just authenticate user credentials during sign-in processes. Entra ID also offers security features such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Self-service Password Reset, which allows users to reset their password if their account is locked or they forget it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multi-Factor Authentication (MFA)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Write-back of password changes to on-prem environments&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enforcing password policies, and&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Passwordless Authentication, made possible by tools such as Windows Hello for Business, which enables users to sign into a device or application without a password, using biometrics (facial recognition or fingerprints) instead.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With these components of authentication, Entra ID makes the sign-in process more convenient for end users.&lt;/p&gt;

&lt;h2&gt;
  
  
  Role Based Access Control (RBAC) in Entra ID
&lt;/h2&gt;

&lt;p&gt;When using Entra ID, you can grant granular access to your admins following the famous Principle of Least Privilege, which calls for granting only the access required to complete a task, no more and no less.&lt;/p&gt;

&lt;p&gt;When it comes to RBAC in Entra ID, the key concepts you need to have a solid understanding of are:&lt;/p&gt;

&lt;h3&gt;
  
  
  Role Types
&lt;/h3&gt;

&lt;p&gt;In Entra ID you can use either a built-in role (a role created by Microsoft that you can’t modify) or a custom role that you create to handle a specific use case. Built-in roles come with a fixed set of permissions that you can’t change. Microsoft keeps adding to its list of built-in roles to account for more use cases as they come up.&lt;/p&gt;

&lt;p&gt;If a role needs to be assigned for a task and the permissions required for that task are not fully covered by a built-in role, you can create a custom role with permissions that you specify. This allows for more granular control over what a particular role can do. To create a custom role, you first need to create a role definition, which is essentially a collection of permissions listing the create, update, or delete operations that can be performed on an Entra resource.&lt;/p&gt;

&lt;h3&gt;
  
  
  Role Assignment
&lt;/h3&gt;

&lt;p&gt;As you might have already guessed, role assignment is basically attaching a role definition (sometimes just called a role) to an entity usually called a security principal (which is nothing but a fancy name for a user, a group, or an application) at a particular scope (where the permissions apply) so that it can gain access to something. This makes access control more convenient for administrators: all you have to do to grant access is add a role assignment, and remove it when the access needs to be revoked.&lt;/p&gt;

&lt;p&gt;Before you can perform a role assignment, a scope has to be defined. The three levels of scope that can be specified are tenant, administrative unit, and Entra resource. When you specify the scope as a tenant or an administrative unit, you are essentially applying the permissions to everything within that container except the container itself. When you scope a role to a resource, you are applying permissions to everything about that resource. It is also worth noting that scopes are structured in a parent-child relationship, which allows a child scope to automatically inherit the permissions granted to its parent.&lt;/p&gt;

&lt;p&gt;Accept my apologies if I lost you when I started talking about tenants and administrative units. The tenant in this case refers to your entire Entra ID instance, while an administrative unit refers to just a part (a subsection) of your Entra ID tenant.&lt;/p&gt;

&lt;p&gt;It is worth noting that using built-in roles in Entra is free; if you want to create custom roles, you need the Entra Premium P1 license or higher. With the free tier, you can only assign built-in roles directly to users. When you upgrade one level to the Premium P1 license, you gain the ability to create role-assignable groups: you add users to a group and assign roles to the group rather than to users directly, which makes role assignment a more manageable process. If this is still not enough for your needs, the Premium P2 license gives you Privileged Identity Management, which makes it possible to use just-in-time role assignment to grant access only when it is required and makes roles time-limited rather than permanent. It also gives you more auditing power alongside detailed reporting capabilities.&lt;/p&gt;

&lt;p&gt;To summarize this section, a role assignment has three components: the role definition, the security principal, and the scope. You can think of these components as the Who (security principal), the What (role definition), and the Where (scope) of role assignments.&lt;/p&gt;
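
&lt;p&gt;To make the Who/What/Where split concrete, here is a minimal sketch of creating a role assignment with a single call to the Microsoft Graph API (the GUIDs below are placeholders, not real identifiers):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
Content-Type: application/json

{
  "principalId": "00000000-0000-0000-0000-000000000001",
  "roleDefinitionId": "00000000-0000-0000-0000-000000000002",
  "directoryScopeId": "/"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here &lt;code&gt;principalId&lt;/code&gt; is the Who, &lt;code&gt;roleDefinitionId&lt;/code&gt; is the What, and &lt;code&gt;directoryScopeId&lt;/code&gt; is the Where (&lt;code&gt;"/"&lt;/code&gt; scopes the assignment to the entire tenant).&lt;/p&gt;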

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There is a lot to learn about Entra ID, but the truth is that you don’t have to know everything to start using the features it offers to improve identity and access management across your organization. As with almost everything in life and business, start small and build up from there as needs arise.&lt;/p&gt;

&lt;p&gt;If you want to learn more about Entra ID, Microsoft Learn is all you need or, at the very least, a really great place to start. Thank you for taking the time to read this, and I hope you learned as much as I did while writing it. If you enjoyed it or learned a thing or two, follow me on here, connect with me on &lt;a href="https://www.linkedin.com/in/brandon-damue/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; or check out my &lt;a href="https://brandondamue.com/" rel="noopener noreferrer"&gt;website&lt;/a&gt; for more.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>entraid</category>
      <category>accesscontrol</category>
    </item>
    <item>
      <title>What the H*ck is Terraform: An Introduction</title>
      <dc:creator>Brandon Damue</dc:creator>
      <pubDate>Fri, 17 Jan 2025 17:44:36 +0000</pubDate>
      <link>https://forem.com/brandondamue/what-the-hck-is-terraform-an-introduction-2058</link>
      <guid>https://forem.com/brandondamue/what-the-hck-is-terraform-an-introduction-2058</guid>
      <description>&lt;p&gt;I have been absent from sharing my learnings for quite some time. My first article of 2025 is also my first after a four month writing hiatus. Isn’t it a nice coincidence that my first article of the new year is on a new technology (new in the sense that I haven’t done much with it before) that I am currently diving deep into? If you are new to Terraform as well, don’t worry because I got you. The aim of this article is to give you a soft-landing introduction to Terraform. Let’s glide into it!&lt;/p&gt;

&lt;p&gt;Terraform is an open-source, cloud-agnostic (meaning you can use Terraform to provision and manage infrastructure on different cloud platforms) infrastructure as code tool that enables you to safely and predictably provision and manage infrastructure in the cloud.&lt;/p&gt;

&lt;p&gt;If you’ve ever had to do a task such as creating a couple of Amazon EC2 instances with security groups and other configurations by having to click around in the AWS management console, you can attest to how tedious that might have been. Terraform makes carrying out such tasks a breeze. Before we go any further, here are some of the benefits of using Terraform to keep you interested and inspire you to consider using it the next time you are building out infrastructure in the cloud. These benefits include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Terraform templates can be source-controlled; that is, you can use GitHub, Bitbucket, and related technologies to keep track of the different versions of your Terraform templates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It offers multi-cloud support (it is cloud-agnostic) with over 100 providers, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Has separate plan and apply stages so you can verify changes to your infrastructure before they happen (more on this later).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Offers a public registry of modules that make provisioning common groups of resources easy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is stateful and keeps track of all infrastructure it provisions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now it’s time to dive deeper. In this section of the article, we will explore Terraform’s CLI tool, Terraform Configuration, the HashiCorp Configuration Language (HCL) and Terraform providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform CLI Tool
&lt;/h2&gt;

&lt;p&gt;Terraform is written in Go and packaged as a single binary, which makes installing it a breeze. You can download the binaries for the open-source version of Terraform by clicking &lt;a href="https://developer.hashicorp.com/terraform/install?product_intent=terraform" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Make sure you are downloading the binary that matches the OS of the computer you want to install Terraform on.&lt;/p&gt;

&lt;p&gt;Once Terraform is installed, it comes with a list of commands and subcommands you can use to provision and make changes to your infrastructure. The CLI has three main commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform init&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform plan&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform apply&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Time to look at each one in detail.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform init&lt;/code&gt;: This command initializes a working directory containing Terraform configuration files. It is the first command you should run after writing a new Terraform configuration or cloning an existing one from version control. Running it multiple times does not negatively affect your project, as Terraform skips re-initialization unless your configuration has changed.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform plan&lt;/code&gt;: This command lets you preview the changes Terraform is going to make to your infrastructure when you run &lt;code&gt;terraform apply&lt;/code&gt;. It essentially shows you a detailed execution plan of the resources that will be created, modified, or destroyed. This helps you verify that the proposed changes match what you expect to happen when you apply them. Strictly speaking, you can skip this command when you are confident your configuration will run exactly as expected, since &lt;code&gt;terraform apply&lt;/code&gt; shows the same plan before prompting for confirmation.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform apply&lt;/code&gt;: As the name suggests, this is the command you run when you want to apply the changes made in your configuration files. It is the command that actually creates or modifies your infrastructure. When you run it without the optional &lt;code&gt;-auto-approve&lt;/code&gt; flag, it requires that you manually confirm the proposed changes by typing &lt;code&gt;yes&lt;/code&gt; in the interactive terminal.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Terraform’s official documentation recommends that you only use &lt;code&gt;-auto-approve&lt;/code&gt; when you are sure that no one is going to make changes to your infrastructure outside of your Terraform workflow. This minimizes the risk of configuration drift and unpredictable changes.&lt;/p&gt;
&lt;/blockquote&gt;
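
&lt;p&gt;Putting the three commands together, a typical workflow from a fresh working directory looks something like this (an illustrative sketch, not a full tutorial):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init     # install the declared providers and initialize the backend
terraform plan     # preview the execution plan without changing anything
terraform apply    # apply the plan; type "yes" at the prompt to confirm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;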

&lt;p&gt;In addition to these three primary commands, there are also subcommands that the Terraform CLI offers. You can read about them in the &lt;a href="https://developer.hashicorp.com/terraform" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Providers
&lt;/h2&gt;

&lt;p&gt;Terraform uses a plugin architecture to manage all of the resource providers that it supports so no providers are included when you first install Terraform. You declare which providers you need to use in a configuration file and they are installed from the &lt;a href="https://registry.terraform.io/browse/providers?product_intent=terraform" rel="noopener noreferrer"&gt;Terraform registry&lt;/a&gt; when you run the &lt;code&gt;terraform init&lt;/code&gt; command. This registry is the main directory of publicly available Terraform providers, and hosts providers for most major infrastructure platforms such as AWS, Microsoft Azure, GCP, Alibaba Cloud, and others.&lt;/p&gt;

&lt;p&gt;Each provider has its own documentation, describing its resource types and their arguments. This documentation is included in the registry.&lt;/p&gt;
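
&lt;p&gt;For example, a minimal configuration that declares the AWS provider might look like this (the version constraint and region are illustrative choices, not requirements):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running &lt;code&gt;terraform init&lt;/code&gt; in a directory containing this configuration downloads the AWS provider plugin from the registry.&lt;/p&gt;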

&lt;h2&gt;
  
  
  Terraform Configuration and HashiCorp Configuration Language
&lt;/h2&gt;

&lt;p&gt;A Terraform configuration is a complete document in the Terraform language that tells Terraform how to manage a given collection of infrastructure. The Terraform files you work with to manage your infrastructure use the &lt;code&gt;.tf&lt;/code&gt; file extension. The Terraform language (aka HCL) is designed to be both human-readable and machine-friendly.&lt;/p&gt;

&lt;p&gt;HCL allows you to treat your infrastructure as code and serves as your system's "living documentation". The main goal of the Terraform language according to the official documentation is declaring resources, which represent infrastructure objects. All other language features exist only to make the definition of resources more flexible and convenient.&lt;/p&gt;

&lt;p&gt;The language is declarative, describing an intended goal rather than the steps to reach that goal. Here is the syntax of the Terraform language as seen in the official documentation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "main" {
  cidr_block = var.base_cidr_block
}

&amp;lt;BLOCK TYPE&amp;gt; "&amp;lt;BLOCK LABEL&amp;gt;" "&amp;lt;BLOCK LABEL&amp;gt;" {
  # Block body
  &amp;lt;IDENTIFIER&amp;gt; = &amp;lt;EXPRESSION&amp;gt; # Argument
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As seen in the syntax, the language consists only of a few basic elements: Blocks, Arguments, and Expressions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Blocks&lt;/strong&gt;&lt;/em&gt; are containers for other content and usually represent the configuration of some kind of object, like a resource. Blocks have a block type, can have zero or more labels, and have a body that contains any number of arguments and nested blocks. Most of Terraform's features are controlled by top-level blocks in a configuration file.&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Arguments&lt;/em&gt;&lt;/strong&gt; assign a value to a name. They appear within blocks.&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Expressions&lt;/em&gt;&lt;/strong&gt; represent a value, either literally or by referencing and combining other values. They appear as values for arguments, or within other expressions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If you are a business that is looking to do more with Terraform, HashiCorp also provides enterprise versions of Terraform that provide additional features for collaboration, infrastructure policy, and governance.&lt;/p&gt;

&lt;p&gt;If it is the case that I succeeded in scratching your curiosity's itch about Terraform with this article, I implore you to follow me, connect with me on &lt;a href="https://www.linkedin.com/in/brandon-damue/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and keep an eye out as I will be putting out more articles on Terraform and related technologies. Until then, you can check out my other articles &lt;a href="https://dev.to/brandondamue"&gt;here&lt;/a&gt;, and don’t stop going after your goals and ideals.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.hashicorp.com/terraform" rel="noopener noreferrer"&gt;Terraform Official Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/?product_intent=terraform" rel="noopener noreferrer"&gt;Terraform Registry&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>terraform</category>
      <category>infrastructureascode</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>How I cleared the AWS Certified DevOps Engineer — Professional Certification</title>
      <dc:creator>Brandon Damue</dc:creator>
      <pubDate>Thu, 05 Sep 2024 01:50:48 +0000</pubDate>
      <link>https://forem.com/aws-builders/how-i-cleared-the-aws-certified-devops-engineer-professional-certification-1e99</link>
      <guid>https://forem.com/aws-builders/how-i-cleared-the-aws-certified-devops-engineer-professional-certification-1e99</guid>
      <description>&lt;p&gt;Contrary to “popular” belief that acquiring certifications play just a minute part in being a testament of the expertise an individual has gained in a particular field, I am one of those who can testify to the amount of knowledge and skills gained in the process of preparing for and sitting for certification exams.&lt;/p&gt;

&lt;p&gt;After two months of taking an 87-hour advanced AWS DevOps course made up of videos, tutorials, labs, lab challenges, and practice questions on QA (formerly Cloud Academy), I took the certification exam, and a few hours later, I got a congratulatory email with the AWS Certified DevOps Engineer — Professional badge.&lt;/p&gt;

&lt;p&gt;As I think back on my preparation for this certification exam, it would be unfair to understate the amount of insight I got from reading articles and watching YouTube videos by other people writing or talking about their experiences. All of this inspired me to follow in their footsteps and share my own experience, as it has the potential to help someone on their certification journey. Stay with me as I share the resources and study tips I used.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Certified DevOps Engineer — Professional Official Exam Guide&lt;/strong&gt;: This is the first resource on the list not necessarily because it was the most important while I prepared for the exam, but because a lot of people underestimate its importance. The exam guide is the blueprint that will help you understand what is required of you to become a certified AWS DevOps Engineer Professional. While preparing for the exam, I found myself going back to this guide to make sure I was learning and paying attention to the right things.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;QA (formerly Cloud Academy)&lt;/strong&gt;: For the other two AWS certification exams (Solutions Architect Associate and Developer Associate) I have taken, I used Stephane Maarek’s Udemy courses, which are some of the best resources available online for those certification exams. As one of the benefits of being an AWS Community Builder, I have a one-year subscription to the QA platform, paid in full by AWS in partnership with the Community Builders program. QA’s advanced AWS DevOps course was the main resource I used in preparing for this exam. I can tell you in all confidence that this was by far the most detailed online course I have ever taken. The instructors took time to explain every concept and service. In addition, the course includes detailed lab tutorials and challenges.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Generative AI Tools (ChatGPT, Gemini, and Meta AI)&lt;/strong&gt;: Generative AI tools are some of the most helpful tools as you prepare for certification exams, or any other type of exam altogether. Depending on whichever one I felt like using on a given day, I used these tools to widen and deepen my understanding of AWS services, tools, and DevOps principles.&lt;br&gt;
Disclaimer: No matter how good these GenAI tools are, they still have a tendency to give you wrong information, so make sure you use official documentation and sources to verify information you are not sure of.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS White Papers and articles&lt;/strong&gt;: When you go through practice test questions for the AWS DevOps Engineer exam, you realize that most of the answers to these questions are solutions that have been architected and well documented in AWS whitepapers and articles. Resources such as these help you understand how different AWS services come together to create robust, maintainable, and operationally efficient DevOps solutions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Practice Tests&lt;/strong&gt;: Even though this is the last resource on this list, it is definitely not the least important. Experience has taught me that when preparing for an exam, practice tests are indispensable. With this in mind, I made sure I went through every practice test I could find several times over.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Study Tips
&lt;/h2&gt;

&lt;p&gt;You will agree with me that no matter the quantity and quality of the resources you have access to, they will not amount to much if you fail to set up a system for making adequate use of them. Here are some of my study tips:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Setting a goal&lt;/strong&gt;: The first step towards achieving anything worthwhile is setting an attainable goal. When you set your goal, you have to turn your back on it and fall to the level of your system. If I lost you there, please give me a second to explain. When you have a goal, the next thing is to create a clear outline and timeline that will aid you in achieving that goal. In my case, I set a two month goal to study for this exam and then I went on to formulate a plan on how I was going to work on achieving the goal from a daily to a weekly and finally a monthly time frame. To create an added sense of urgency, I booked my exam in advance even before I was confidently ready to sit for the exam.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Studying consistently&lt;/strong&gt;: I have witnessed the power of consistency in different areas of my life. It is also a popular belief that consistency is better than intensity. With this in mind, I made sure I studied consistently until my goal was achieved. On days when I had more time and energy, I combined consistency and intensity. They have proven to be a formidable duo.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Hopefully, something I have mentioned in this article will help you as you prepare for your exam. Good luck, go kill it with success! You can connect with me on LinkedIn. Please share this article with anyone you think it might be helpful to. Ciao!!&lt;/p&gt;

</description>
      <category>certification</category>
      <category>aws</category>
      <category>studytips</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to implement Memoization in your React projects</title>
      <dc:creator>Brandon Damue</dc:creator>
      <pubDate>Sun, 07 Jul 2024 19:12:26 +0000</pubDate>
      <link>https://forem.com/brandondamue/how-to-implement-memoization-in-your-react-projects-1fei</link>
      <guid>https://forem.com/brandondamue/how-to-implement-memoization-in-your-react-projects-1fei</guid>
      <description>&lt;p&gt;React is a very powerful frontend library and you begin to witness more of its power when you learn the intricate details of the features that it offers. If you are like me who is always interested in understanding the inner workings of the tools they are using, lean back and keep reading because this article has a lot of insights for you. Even if you are not very fond of intricate details, read on as I will try my best to make the write up enjoyable and insightful still.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Memoization?
&lt;/h2&gt;

&lt;p&gt;Memoization is a programming technique used to cache the results of expensive computations so that you don't run the same computation over and over unnecessarily. With memoization, you are basically asking a function to remember (cache) its result so that it doesn't have to run again if the same input is supplied to it. For example, imagine you have a function that multiplies two numbers, say 100 and 458. If you later ask for the exact same computation and you have implemented memoization, you'll simply get the previously computed value back, since the inputs are the same.&lt;/p&gt;
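
&lt;p&gt;To make this concrete, here is a minimal, framework-free sketch of memoization in plain JavaScript (the &lt;code&gt;memoize&lt;/code&gt; helper and its cache strategy are my own illustration, not a library API):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// A memoize() helper: caches results keyed by the stringified arguments.
function memoize(fn) {
  const cache = new Map();
  return function (...args) {
    const key = JSON.stringify(args); // simple key; fine for primitive args
    if (cache.has(key)) {
      return cache.get(key); // same inputs: return the remembered result
    }
    const result = fn(...args);
    cache.set(key, result);
    return result;
  };
}

const multiply = memoize((a, b) =&amp;gt; a * b);
multiply(100, 458); // computed: 45800
multiply(100, 458); // served from the cache, without multiplying again
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;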

&lt;p&gt;In React, memoization is often used to ensure that components only re-render when it is absolutely necessary, rather than being re-rendered all the time. It is important to note that when you use memoization, you are essentially trading memory for speed.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to implement Memoization in React
&lt;/h2&gt;

&lt;p&gt;It is common knowledge to most React developers that components are typically re-rendered when their state or props change. That said, the way memoization is implemented depends on the type of component you are using; it is not implemented in the same manner for class and functional components. Let's go over how it works and how to implement it in each case.&lt;/p&gt;

&lt;p&gt;A functional component gains memoization when you wrap it in &lt;code&gt;React.memo&lt;/code&gt;. When the memoized component is rendered for the first time, React keeps a reference to it and its props for the next time the component would be re-rendered. When a re-render is about to happen, React compares the newer props against that cached reference to verify whether any prop values have changed. For props that are arrays or objects, React performs a shallow comparison. If React confirms that there has been a change in the component's props, it goes ahead and re-renders it; if not, the component stays as it was without being re-rendered.&lt;/p&gt;

&lt;p&gt;That is how memoization works in functional components. However, you might witness some unexpected behavior in specific scenarios, like when you have directly mutated your state object, when the component has props that are functions, or when dealing with HOCs (Higher Order Components).&lt;/p&gt;

&lt;p&gt;In the case of HOCs, when the props change on a parent component, child components within it will be re-rendered even when the change didn't affect the data within them. To avoid such unnecessary re-renders you can wrap the child component within the &lt;code&gt;useMemo&lt;/code&gt; hook. Check out the &lt;a href="https://react.dev/reference/react/useMemo" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; to learn more about this hook. The &lt;code&gt;useMemo&lt;/code&gt; hook won't help in components with props that are functions, because a new function reference is created on every render. When dealing with function props, you can make use of the &lt;code&gt;useCallback&lt;/code&gt; hook instead. &lt;code&gt;useCallback&lt;/code&gt; is used to cache a function definition between renders. Now let's look at memoization in class components.&lt;/p&gt;
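&lt;p&gt;The reason function props are a problem is that a new function value is created on every render, and two functions with identical bodies are still different references. Here is a plain JavaScript sketch of the problem, plus a deliberately simplified single-slot model of the idea behind &lt;code&gt;useCallback&lt;/code&gt; (React's real hook also tracks a dependency array):&lt;/p&gt;

```javascript
// Each render re-creates inline callbacks like this one:
function render() {
  return () => "clicked";
}

const first = render();
const second = render();
// Identical bodies, but different references, so a shallow prop
// comparison sees a "changed" prop on every render.
first === second; // false

// Sketch of the idea behind useCallback: hand back the SAME cached
// reference across renders (hypothetical single-slot cache).
let cached;
function useCallbackSketch(fn) {
  if (cached === undefined) cached = fn;
  return cached;
}
const a = useCallbackSketch(() => "clicked");
const b = useCallbackSketch(() => "clicked");
a === b; // true: the stable reference survives shallow comparison
```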

&lt;p&gt;In functional programming, there is the concept of a pure function: one that returns the same output for the same set of inputs, with its output determined strictly by its input values. A React class component can either be a regular class component or a pure component (a class component that extends the &lt;code&gt;React.PureComponent&lt;/code&gt; class). Pure components render the same output for the same state and props. They have a &lt;code&gt;shouldComponentUpdate()&lt;/code&gt; method that performs a shallow comparison of state and props and uses the result of that comparison to decide whether the component should be re-rendered or not. Building components that extend the &lt;code&gt;React.PureComponent&lt;/code&gt; class will add memoization to their behavior. It's really that simple.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Even though memoization is a great tool to have in your toolbox as a React developer, it isn't a feature you should jump to every time. While it works great for caching the results of intensive or heavy computations, it comes with trade-offs, like the hit your memory takes in exchange for speed optimization in your application. To drive my point home: you shouldn't start with memoization in all scenarios, as there are other optimization techniques you can make use of that will add less overhead to your application when compared to memoization.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>memoization</category>
      <category>react</category>
    </item>
    <item>
      <title>A Look at NAT Gateways and VPC Endpoints in AWS</title>
      <dc:creator>Brandon Damue</dc:creator>
      <pubDate>Fri, 28 Jun 2024 13:00:40 +0000</pubDate>
      <link>https://forem.com/aws-builders/a-look-at-nat-gateways-and-vpc-endpoints-in-aws-28pn</link>
      <guid>https://forem.com/aws-builders/a-look-at-nat-gateways-and-vpc-endpoints-in-aws-28pn</guid>
      <description>&lt;p&gt;Every time I get the chance, I like to write articles that are geared towards enabling you make your cloud infrastructure on AWS and other cloud platforms more secure. In today’s edition of writing about AWS services, we will be learning about NAT Gateways, what they are, how they work and how they enhance our cloud infrastructure. From NAT gateways we will finish it off by talking about VPC endpoints. Allons-y (FYI: that’s “let’s go” in French 😉)&lt;/p&gt;

&lt;h2&gt;
  
  
  NAT Gateways
&lt;/h2&gt;

&lt;p&gt;First and foremost, NAT stands for Network Address Translation. Let’s look at what NAT really is before moving on to NAT gateways proper. Network Address Translation is a process in which private IP addresses used on a network (usually a local area network) are translated into public IP addresses that can be used to access the internet.&lt;/p&gt;

&lt;p&gt;To understand how NAT gateways work, we are going to use the example of a two-tier architecture with a web tier deployed on EC2 instances in a public subnet (a public subnet is a subnet that has a route to an Internet gateway in the route table associated with it) and an application tier deployed on EC2 instances in a private subnet (a private subnet has no route to an internet gateway in its route table). With this architecture, the EC2 instances that make up the application tier are unable to access the internet because the subnet in which they reside has no route to an IGW in its route table. How will the instances go about performing tasks like downloading update patches from the internet? The answer lies in using NAT gateways. For the application tier to have access to the internet, we need to provision a NAT gateway in the public subnet housing our web tier.&lt;/p&gt;

&lt;p&gt;When an instance in the application tier wants to connect to the internet, it sends a request, which carries information such as the IP address of the instance and the destination of the request, to the NAT gateway in the public subnet. The NAT gateway then translates the private IP address of the instance to a public Elastic IP address in its address pool and uses it to forward the request to the internet via the internet gateway. One important thing to note about NAT gateways is that they won’t accept or allow any inbound communication initiated from the internet; they only allow outbound traffic originating from your VPC. This can significantly improve the security posture of your infrastructure.&lt;/p&gt;

&lt;p&gt;NAT gateways are managed by AWS. To create a NAT gateway, all you have to do is specify the subnet it will reside in and then associate an Elastic IP address (EIP). AWS handles every other configuration for you.&lt;/p&gt;
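&lt;p&gt;As a sketch, the whole setup can be done with three AWS CLI calls (the subnet, allocation and route table IDs below are placeholders for your own values):&lt;/p&gt;

```shell
# 1. Allocate an Elastic IP for the NAT gateway
aws ec2 allocate-address --domain vpc

# 2. Create the NAT gateway in the PUBLIC subnet, using the allocation ID from step 1
aws ec2 create-nat-gateway \
    --subnet-id subnet-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0

# 3. Send internet-bound traffic from the PRIVATE subnet's route table to the NAT gateway
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0123456789abcdef0
```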

&lt;h2&gt;
  
  
  VPC Endpoints
&lt;/h2&gt;

&lt;p&gt;VPC endpoints allow private access to an array of AWS services using the internal AWS network instead of having to go through the internet using public DNS endpoints. These endpoints enable you to connect to supported services without having to configure an IGW (Internet Gateway), NAT Gateway, a VPN, or a Direct Connect (DX) connection.&lt;/p&gt;

&lt;p&gt;There are two types of VPC endpoints available on AWS: Interface Endpoints and Gateway Endpoints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interface Endpoints&lt;/strong&gt; — They are fundamentally Elastic Network Interfaces (ENI) placed in a subnet where they act as a target for any traffic that is being sent to a supported service. To be able to connect to an interface endpoint to access a supported service, you use PrivateLink. PrivateLink provides a secure and private connection between VPCs, AWS services and on-premises applications through the internal AWS network.&lt;/p&gt;

&lt;p&gt;To see the suite of services that can be accessed via interface endpoints, check out this &lt;a href="https://docs.aws.amazon.com/vpc/latest/privatelink/aws-services-privatelink-support.html"&gt;AWS documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gateway Endpoints&lt;/strong&gt; — They are targets within your route table that enable you to access supported services while keeping traffic within the AWS network. At the time of writing, the only services supported by gateway endpoints are S3 and DynamoDB. Be sure to check the appropriate AWS documentation for any additions to the list of supported services. One last thing to keep in mind about gateway endpoints is that they only work with IPv4.&lt;/p&gt;
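&lt;p&gt;As a hedged sketch, creating a gateway endpoint for S3 with the AWS CLI might look like this (the region, VPC ID and route table ID are placeholders):&lt;/p&gt;

```shell
# Create a gateway endpoint for S3 and attach it to a route table;
# AWS adds a route for the service's prefix list automatically.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0
```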

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Some say the mark of a good dancer is knowing when to bow out of the stage. With that, we have officially reached the end of this article about VPC endpoints and NAT gateways. I would like to implore you to keep learning and getting better at using tools such as these, for you don’t know when they will come in handy. That could be sooner rather than later. Thank you for riding with me to the very end. Best of luck in all your endeavors.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>networking</category>
    </item>
    <item>
      <title>Understanding Network Access Control Lists and Security Groups in AWS</title>
      <dc:creator>Brandon Damue</dc:creator>
      <pubDate>Thu, 27 Jun 2024 01:02:09 +0000</pubDate>
      <link>https://forem.com/aws-builders/understanding-network-access-control-lists-and-security-groups-in-aws-3bk4</link>
      <guid>https://forem.com/aws-builders/understanding-network-access-control-lists-and-security-groups-in-aws-3bk4</guid>
      <description>&lt;p&gt;In an article I published exactly a year ago, I wrote about VPCs and subnets in the AWS cloud and all one needs to know about these foundational AWS networking concepts. However, I did not go into the details of Network Access Control Lists (NACLs) and Security Groups (SGs). This doesn't mean the significance of these core aspects of AWS networking is lost on me. The purpose of this write up is to provide you with an in depth examination of Security Groups and NACLs. I recommend reading &lt;a href="https://aws.plainenglish.io/understanding-vpcs-and-subnets-foundations-for-aws-networking-316eae93167f"&gt;the article&lt;/a&gt; I wrote on VPCs and subnets before coming back to this one. If you went to read that, welcome back and without further ado let's get to business!&lt;/p&gt;

&lt;h2&gt;
  
  
  Network Access Control Lists
&lt;/h2&gt;

&lt;p&gt;As we all know, security is a very important component of your AWS infrastructure and it is something that should always be top of mind when you are implementing solutions in the cloud.&lt;/p&gt;

&lt;p&gt;NACLs are security filters that control the flow of traffic in and out of a subnet. When you create a subnet in the AWS cloud, a default NACL is associated with it if you didn't explicitly configure one while creating the subnet. These default NACLs allow all inbound and outbound traffic to and from the subnet. Because of this, they pose a security threat. To eliminate this security threat you can configure your NACL by adding rules to it. These rules can be either inbound or outbound.&lt;/p&gt;

&lt;p&gt;Each inbound rule added to your NACL is made up of the following fields:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A &lt;strong&gt;Rule number&lt;/strong&gt; (Rule #) which determines the order in which the rules are evaluated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A &lt;strong&gt;Type&lt;/strong&gt; field which determines the type of inbound traffic you want to allow or deny into the subnets the NACL is associated with.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A &lt;strong&gt;Protocol&lt;/strong&gt; field which determines the protocols used by the inbound traffic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Port range&lt;/strong&gt; field which determines the range of ports to be used by the inbound traffic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Source&lt;/strong&gt; which determines the source IP address range of the inbound traffic and&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An &lt;strong&gt;Allow / Deny&lt;/strong&gt; field which determines whether the rule is allowing or denying the inbound traffic.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The image below shows a visual example of NACL inbound rules:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qwq0fqzztsozilzftfx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qwq0fqzztsozilzftfx.png" alt="Image description" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For outbound rules, all the fields are the same except for the Source field which is replaced with a Destination field determining the destination of outbound traffic from the subnets associated with the NACL.&lt;/p&gt;

&lt;p&gt;NACLs are &lt;strong&gt;stateless&lt;/strong&gt;. This means any response traffic generated from a request needs to be explicitly allowed, or it is denied implicitly. To put it simply, when traffic is allowed from a particular source with a particular port range, type and protocol, the return traffic to that source is not allowed by default and you have to explicitly allow it.&lt;/p&gt;
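&lt;p&gt;The ordered, first-match evaluation driven by the rule numbers described above can be modeled in a few lines of JavaScript (an illustrative simulation, not AWS code; protocol and port matching are simplified):&lt;/p&gt;

```javascript
// NACL evaluation model: rules are checked in ascending rule-number order,
// the first matching rule wins, and the implicit "*" rule denies the rest.
function evaluateNacl(rules, packet) {
  const ordered = [...rules].sort((a, b) => a.ruleNumber - b.ruleNumber);
  for (const rule of ordered) {
    if (rule.protocol === packet.protocol) {
      if (rule.port === packet.port) {
        return rule.action; // first match decides; later rules are never reached
      }
    }
  }
  return "DENY"; // the implicit deny-all "*" rule
}

const inboundRules = [
  { ruleNumber: 100, protocol: "tcp", port: 443, action: "ALLOW" },
  { ruleNumber: 200, protocol: "tcp", port: 443, action: "DENY" }, // shadowed by rule 100
];

evaluateNacl(inboundRules, { protocol: "tcp", port: 443 }); // "ALLOW" (rule 100 wins)
evaluateNacl(inboundRules, { protocol: "tcp", port: 22 });  // "DENY" (no rule matched)
```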

&lt;blockquote&gt;
&lt;p&gt;Noteworthy: A subnet can only have one NACL associated with it at any point in time but a NACL can be associated with multiple subnets at a time.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now let's move on to security groups.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Groups
&lt;/h2&gt;

&lt;p&gt;Security Groups are much like NACLs, with a few differences: SGs control the flow of traffic in and out of an EC2 instance, and they are stateful, unlike NACLs which are stateless. Let's unpack each of these aspects in more detail.&lt;/p&gt;

&lt;p&gt;Security Groups also act as traffic filters, but rather than working at the subnet level like NACLs do, they work at the instance level. Their rules have fields similar to NACL rules, except that there are no Rule # or Allow / Deny fields. Since SG rules do not have rule numbers to determine the order in which they are evaluated, all the rules in a security group are evaluated before a decision is made on the flow of traffic.&lt;/p&gt;

&lt;p&gt;SGs have only allow rules, meaning that any traffic not allowed by a security group rule is denied. Because security groups are stateful, when traffic is allowed into an instance, the return traffic is allowed by default. The image below shows some examples of security group rules.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3afb60rspnl45ztswa01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3afb60rspnl45ztswa01.png" alt="Image description" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a final recap, NACLs filter traffic at the subnet level and they are stateless while SGs filter traffic at the instance level and they are stateful.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We have seen how security groups and NACLs work together to control the flow of traffic into and out of your AWS environment. Configuring NACLs and SGs is your responsibility as stipulated by the AWS Shared Responsibility Model so learning how to use them properly will greatly improve the security posture of your AWS infrastructure. This is where this article ends but it shouldn't be where you end your journey of learning about Security Groups and NACLs. Good luck in all your endeavors.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>securitygroup</category>
      <category>accesscontrol</category>
    </item>
    <item>
      <title>Orchestrating Serverless Workflows with Ease using AWS Step Functions</title>
      <dc:creator>Brandon Damue</dc:creator>
      <pubDate>Wed, 26 Jun 2024 18:10:42 +0000</pubDate>
      <link>https://forem.com/aws-builders/orchestrating-serverless-workflows-with-ease-using-aws-step-functions-3mok</link>
      <guid>https://forem.com/aws-builders/orchestrating-serverless-workflows-with-ease-using-aws-step-functions-3mok</guid>
      <description>&lt;p&gt;When we talk about running serverless workloads on AWS (disclaimer: serverless doesn’t mean there are no servers, it just means you don’t have to worry about provisioning and managing them), the service that immediately comes to mind is definitely AWS Lambda. This serverless compute service allows developers to run their code in the cloud, all without managing the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;Although Lambda offers developers the ability to run their code in the cloud, it does have some constraints that limit its usability in specific scenarios and use cases. One of these constraints is Lambda’s maximum execution time of 15 minutes. Unfortunately, this means developers cannot use Lambda to carry out complex operations that take more than 15 minutes to complete.&lt;/p&gt;

&lt;p&gt;However, don’t let this limitation dissuade you from using AWS Lambda! Because this is where AWS Step Functions step in (see what I did there? 😉) to the rescue to make the execution of complex operations possible.&lt;/p&gt;

&lt;p&gt;The main objective of this article is to bring the good news of AWS Step Functions to you my dear friend. So grab your digging equipment and without further ado, let’s start digging into it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS Step Functions?
&lt;/h2&gt;

&lt;p&gt;Simply put, AWS Step Functions is a state machine service. But what exactly is a state machine? Let’s use an analogy to explain. Imagine your office coffee maker. It sits idle in the kitchen, waiting for instructions to make coffee. When someone uses it, they select the type of coffee, quantity, and other options — these are the states the machine goes through to make a cup of coffee. Once it completes the necessary states, the coffee maker returns to its idle state, ready for the next user. AWS Step Functions allows you to create workflows just like the coffee maker’s, where you can have your system wait for inputs, make decisions, and process information based on the input variables. With this kind of orchestration, we are able to leverage Lambda functions in ways that are not inherently supported by the service itself. For instance, you can run processes in parallel when you have multiple tasks you want to process at one time, or in sequence when order is important. In a similar fashion, you can implement retry logic if you want your code to keep executing until it succeeds or times out. This way, we are able to conquer Lambda’s 15-minute code execution limit.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;Now on to how Step Functions works. It operates by reading your workflow from an Amazon States Language file, a JSON-based file that is used to define your state machine and its components. This file defines the order and flow of your serverless tasks in AWS Step Functions. It’s like the recipe for your code workflow. Here is an example of what the Amazon States Language looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"StartAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SayHello"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"States"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"SayHello"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT_ID:function:SayHelloFunction"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Goodbye"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Goodbye"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Task"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:REGION:ACCOUNT_ID:function:GoodbyeFunction"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"End"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see from the code above, Amazon States Language files are written in JSON, a format familiar to most developers. However, if you’re new to JSON, there’s no need to worry! AWS Step Functions lets you build your state machine by dragging and dropping components (states) to link them in the AWS Step Functions Workflow Studio. Here’s an example of what a state machine looks like in the Workflow Studio.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F281rmlep4loccnx8q1xl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F281rmlep4loccnx8q1xl.png" alt="Image description" width="592" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  State Machine State Types
&lt;/h2&gt;

&lt;p&gt;There are eight commonly used core state types that you can define in your workflow to achieve a particular result. These state types are: the Pass State, Task State, Choice State, Wait State, Success State, Fail State, Parallel State, and Map State. We’ll take a closer look at each of these.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Pass State&lt;/em&gt;&lt;/strong&gt; — The Pass state doesn’t actually perform a specific action. Instead, it acts as a placeholder state, facilitating transitions between other states without executing any code. While it can be helpful for debugging purposes, such as testing transitions between states, it’s not exclusively a debugging state.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Task State&lt;/em&gt;&lt;/strong&gt; — This is where the action happens. As the most common state type, it represents a unit of work, typically executed by an AWS Lambda function or another integrated service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Choice State&lt;/em&gt;&lt;/strong&gt; — This state allows you to evaluate an input and then choose the next state for the workflow based on the evaluation outcome. Essentially, it’s an “if-then” operation that enables further application logic execution based on the chosen path.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Wait State&lt;/strong&gt;&lt;/em&gt; — In this state, you can pause the state machine for a specified duration or until a specific time is reached. This comes in handy if you want to schedule a pause within the workflow. For example, you can use it to send out emails at 10:00 AM every day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Success State&lt;/em&gt;&lt;/strong&gt; — This state is used to indicate the successful completion of a workflow. It can terminate a branch selected by a Choice state or end the state machine in general.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Fail State&lt;/em&gt;&lt;/strong&gt; — It is a termination state similar to the Success state, but it indicates that a workflow failed to complete successfully. Fail states should include an error and a cause for better workflow understanding and troubleshooting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Parallel State&lt;/em&gt;&lt;/strong&gt; — This state executes a group of states as concurrently as possible and waits for each of them to complete before moving on. Imagine you have a large dataset stored in S3 that needs to be processed. You can use a Parallel state to concurrently trigger multiple Lambda functions, each processing a portion of the data set. Doing this will significantly speed up the overall processing time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Map State&lt;/em&gt;&lt;/strong&gt; — The Map state allows you to loop through a list of items and perform tasks on them. In the map state, you can define the number of concurrent items to be worked on at one time.&lt;/p&gt;

&lt;p&gt;By making use of a combination of these states, you can build dynamic and highly scalable workflows. To find the list of supported AWS service integrations for Step Functions, check out this &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/connect-supported-services.html"&gt;AWS documentation&lt;/a&gt;.&lt;/p&gt;
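&lt;p&gt;As a hedged sketch of how a few of these state types combine, here is a small Amazon States Language fragment using a Choice state, a Wait state and a Task state (the state names and Lambda ARN are hypothetical placeholders):&lt;/p&gt;

```json
{
  "StartAt": "IsBusinessHours",
  "States": {
    "IsBusinessHours": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.hour",
          "NumericGreaterThanEquals": 9,
          "Next": "SendEmail"
        }
      ],
      "Default": "WaitUntilMorning"
    },
    "WaitUntilMorning": {
      "Type": "Wait",
      "Timestamp": "2024-06-27T10:00:00Z",
      "Next": "SendEmail"
    },
    "SendEmail": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:SendEmailFunction",
      "End": true
    }
  }
}
```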

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article has introduced you to AWS Step Functions and its potential to streamline your application development. Step Functions manages your application’s components and logic, allowing you to write less code and focus on building and updating your application faster. It offers a wide range of use cases, including submitting and monitoring AWS Batch jobs, running AWS Fargate tasks, publishing messages to SNS topics or SQS queues, starting Glue job runs, and much more. If your workflow involves tasks like these, AWS Step Functions can be a valuable asset.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>orchestration</category>
      <category>stepfunctions</category>
    </item>
    <item>
      <title>Monitoring Underutilized Storage Resources on AWS</title>
      <dc:creator>Brandon Damue</dc:creator>
      <pubDate>Thu, 20 Jun 2024 21:08:56 +0000</pubDate>
      <link>https://forem.com/aws-builders/monitoring-underutilized-storage-resources-on-aws-1gnf</link>
      <guid>https://forem.com/aws-builders/monitoring-underutilized-storage-resources-on-aws-1gnf</guid>
      <description>&lt;p&gt;When cloud professionals embark on a journey to fish out underutilized resources that may be driving costs up, they rarely pay attention to doing some cost optimization in the direction of storage resources and often focus solely on optimizing their compute resources. In this article, we will go through some tools and strategies you can leverage in monitoring your AWS storage resources. Before moving on to that main event, let’s start by talking briefly about the different storage types available on AWS. If you are ready to roll, let’s go!!&lt;/p&gt;

&lt;h2&gt;
  
  
  Storage Types
&lt;/h2&gt;

&lt;p&gt;When it comes to storage, AWS has a wide array of services you can choose from. You will agree with me that having this many options can add some level of confusion to your decision-making process, especially when you don’t have an understanding of what the options are and which ones are suitable for which use case. To provide you with guidance for when you have to pick a storage service on AWS, let’s talk about some of the storage types available.&lt;/p&gt;

&lt;p&gt;On AWS, storage is primarily divided into three categories depending on the type of data you intend to store. These categories are: Block storage, Object storage and File storage. We will go over them one after the other, exploring examples of each as we go.&lt;/p&gt;

&lt;h2&gt;
  
  
  Block Storage
&lt;/h2&gt;

&lt;p&gt;To put it simply, a block storage device is a type of storage device that stores data in fixed-size chunks called blocks. The size of each block corresponds to the amount of data the device can read or write in a single input/output (I/O) request. So when you want to store data on a block storage device and the size of the data surpasses the size of a single block, the data is broken down into equal-sized chunks before it is stored on the underlying storage device. As it is always important to understand the why behind actions, let me tell you the performance benefit of block storage devices handling data in this manner.&lt;/p&gt;

&lt;p&gt;When data is broken down into blocks, it allows for fast access and retrieval of the data. In addition to fast access, when data is on a block storage device and changes are made to the data, only the blocks affected by the change are re-written. All other blocks remain unchanged which helps to further enhance performance and speed. In AWS, the block storage options include Elastic Block Storage (EBS) volumes and Instance Store volumes. Check out &lt;a href="https://medium.com/aws-in-plain-english/exploring-ec2-instance-storage-understand-your-options-425186bf0974"&gt;this article&lt;/a&gt; I wrote to learn more about EBS and Instance Store Volumes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Object Storage
&lt;/h2&gt;

&lt;p&gt;With object storage, data is not broken down into fixed-size chunks as is the case with block storage. In object storage, data (files) are stored as single objects no matter their size. This kind of storage is suitable for huge amounts of unstructured data. The object storage service of AWS is S3. With all data being stored as single objects, when some part of an object is updated, the entire object has to be rewritten. You can access data stored in S3 via HTTP, HTTPS or APIs through the AWS CLI or SDKs. Some pros that come with using S3: it is highly available, tremendously durable, low cost, and can scale virtually infinitely, not forgetting the fact that you can replicate your data within or across regions for disaster recovery purposes. Check out &lt;a href="https://medium.com/@dbrandonbawe/exploring-the-basics-of-amazon-simple-storage-service-s3-f8ad2af0a6f9"&gt;this article&lt;/a&gt; I wrote on S3 to learn more about object storage.&lt;/p&gt;

&lt;h3&gt;
  
  
  File Storage
&lt;/h3&gt;

&lt;p&gt;File Storage is fundamentally an abstraction of block storage using a file system such as NFS (Network File System) and SMB (Server Message Block). With File storage, the hierarchy of files is maintained with the use of folders and subfolders. The main file storage services of AWS are Amazon EFS and Amazon FSx. File storage is the most commonly used storage type for network shared file systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring Underutilized Storage Resources
&lt;/h2&gt;

&lt;p&gt;The opening sentence of this article was a lamentation, so to speak, on how storage resources are seldom considered when organizations and individuals take cost optimization actions. It is just as important to pick the right storage option for your use case and to provision it appropriately. You can right-size your storage resources by monitoring, modifying and even deleting those that are underutilized. Let’s examine some of the ways in which you can monitor your storage resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon CloudWatch
&lt;/h3&gt;

&lt;p&gt;CloudWatch provides out-of-the-box metrics for monitoring storage services such as S3, DynamoDB, EBS and more. For EBS volumes you can use a metric such as VolumeIdleTime, which gives the number of seconds during which there were no read or write requests to the volume within a given time period. With the information that CloudWatch provides through this metric, you can decide on the action you want to take to manage the underutilized volume. In addition to the metrics that CloudWatch ships with for EBS volumes, you can create custom metrics to do things like find under-provisioned or over-provisioned volumes.&lt;/p&gt;

&lt;p&gt;For S3 buckets, you can use the BucketSizeBytes CloudWatch metric, which gives you the size of your bucket in bytes. This comes in handy if you have stray S3 buckets that aren’t holding much data. Using this metric, you can quickly find and clean up those buckets.&lt;/p&gt;
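&lt;p&gt;As a rough illustration of the VolumeIdleTime check described above, here is a minimal sketch in Python (assuming boto3; the volume ID is hypothetical) that builds the parameters for a CloudWatch GetMetricStatistics call:&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

def volume_idle_time_query(volume_id, days=14):
    """Parameters for a CloudWatch GetMetricStatistics call that sums
    VolumeIdleTime (seconds with no I/O) per day for one EBS volume."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/EBS",
        "MetricName": "VolumeIdleTime",
        "Dimensions": [{"Name": "VolumeId", "Value": volume_id}],
        "StartTime": now - timedelta(days=days),
        "EndTime": now,
        "Period": 86400,  # one datapoint per day
        "Statistics": ["Sum"],
    }
```

&lt;p&gt;With credentials configured, you would pass this straight to the client, for example &lt;code&gt;boto3.client("cloudwatch").get_metric_statistics(**volume_idle_time_query("vol-0123456789abcdef0"))&lt;/code&gt; (hypothetical volume ID).&lt;/p&gt;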

&lt;h3&gt;
  
  
  S3 Access Logs &amp;amp; S3 Analytics
&lt;/h3&gt;

&lt;p&gt;With S3 you can use server access logs as well. These help you track requests that are made to your bucket. Using them, you can find buckets that aren’t accessed frequently, and then determine whether you still need the data in a bucket, or whether you can move it to a lower-cost storage tier or delete it. This is a manual process of determining access patterns. You can make use of S3 Analytics if you are interested in a service that provides an automated alternative.&lt;/p&gt;

&lt;p&gt;S3 Analytics can help you determine when to transition data to a different storage class. Using the analytics provided by this service, you can then leverage S3 lifecycle configurations to move data to lower-cost storage tiers or delete it, ultimately reducing your spend over time. You can also optionally use the S3 Intelligent-Tiering class, which analyzes when to move your data and automates the movement for you. This is best for data that has unpredictable access patterns.&lt;/p&gt;
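&lt;p&gt;To make the lifecycle idea concrete, here is a hedged sketch in Python of a single lifecycle rule in the shape the S3 API expects (the prefix and the day counts are illustrative assumptions, not recommendations):&lt;/p&gt;

```python
def archive_lifecycle_rule(prefix, archive_after_days=90, expire_after_days=365):
    """One S3 lifecycle rule: move objects under `prefix` to Glacier after
    `archive_after_days`, then delete them after `expire_after_days`."""
    return {
        "ID": "archive-" + (prefix.strip("/") or "all"),
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": archive_after_days, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": expire_after_days},
    }
```

&lt;p&gt;With boto3 you would wrap the rule as &lt;code&gt;{"Rules": [archive_lifecycle_rule("logs/")]}&lt;/code&gt; and pass it to &lt;code&gt;put_bucket_lifecycle_configuration&lt;/code&gt; on the S3 client.&lt;/p&gt;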

&lt;h3&gt;
  
  
  Compute Optimizer and Trusted Advisor
&lt;/h3&gt;

&lt;p&gt;To monitor for situations such as under-provisioned or over-provisioned EBS volumes, you can also make use of Compute Optimizer and Trusted Advisor for an easier and more automated experience. Compute Optimizer will make throughput and IOPS recommendations for General Purpose SSD volumes and IOPS recommendations for Provisioned IOPS volumes. It will also identify a list of optimal EBS volume configurations that provide cost savings and potentially better performance. With Trusted Advisor, you can identify a list of underutilized EBS volumes. Trusted Advisor also ingests data from Compute Optimizer to identify volumes that may be over-provisioned as well.&lt;/p&gt;
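&lt;p&gt;If you want a feel for the kind of test involved, here is a hypothetical helper (my own simplification, not Trusted Advisor's actual algorithm) that flags a volume as idle when the summed daily VolumeIdleTime datapoints account for essentially the whole of each period:&lt;/p&gt;

```python
def is_idle(datapoints, period=86400, tolerance=60):
    """True when every daily datapoint shows the volume idle for (almost)
    the entire period; an empty result is treated as 'not enough data'."""
    if not datapoints:
        return False
    return all(dp["Sum"] >= period - tolerance for dp in datapoints)
```

&lt;p&gt;You would feed it the &lt;code&gt;Datapoints&lt;/code&gt; list returned by the GetMetricStatistics call discussed earlier and then decide whether to snapshot and delete the volume.&lt;/p&gt;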

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As a self-appointed disciple preaching the gospel of optimizing AWS resources for better cost savings and performance, I hope you have taken a lesson or two from this article to implement in your resource monitoring and optimization strategies. There are services such as CloudWatch, Trusted Advisor, Compute Optimizer, S3 Analytics and much more for you to add to your bag of tools. To make sure you don’t overwhelm yourself, learn more about each service you intend to make use of, start small and then move up from there. Good luck in your cloud endeavors.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>storage</category>
      <category>cloudcomputing</category>
      <category>awscommunity</category>
    </item>
    <item>
      <title>Unlocking the Power of EC2 Auto Scaling using Lifecycle Hooks</title>
      <dc:creator>Brandon Damue</dc:creator>
      <pubDate>Thu, 13 Jun 2024 14:01:02 +0000</pubDate>
      <link>https://forem.com/aws-builders/unlocking-the-power-of-ec2-auto-scaling-using-lifecycle-hooks-12fk</link>
      <guid>https://forem.com/aws-builders/unlocking-the-power-of-ec2-auto-scaling-using-lifecycle-hooks-12fk</guid>
      <description>&lt;p&gt;In a previous article in which I wrote about EC2 auto scaling, I failed to talked about instance lifecycle hooks and how AWS practitioners can utilize them to optimize their infrastructure. This article is my way of showing you that I have learned from that mistake.&lt;/p&gt;

&lt;p&gt;A little recap of what auto scaling is: It's a procedure or mechanism that helps you automatically (as the "auto" in auto scaling suggests) increase or decrease the size of your IT resources based on predefined thresholds and metrics. In the context of AWS, there is EC2 auto scaling and a service called AWS Auto Scaling, which is used for scaling ECS, DynamoDB, and Aurora resources. However, the focus of this article is on EC2 auto scaling and how to effectively leverage lifecycle hooks during scaling.&lt;/p&gt;

&lt;p&gt;Before I move on, let me give you a real-world example of why auto scaling is important, to get you to continue reading this article with an increased level of attention.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4a14v2hbalw43r8q9d4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4a14v2hbalw43r8q9d4.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Imagine a popular social media app. Every Sunday evening, after a weekend filled with adventures, users rush to the app to upload and share their photos. Without auto scaling, the app's servers would be overwhelmed during this rush, causing slow loading times or even crashes. However, with auto scaling in place, the app can automatically scale up by launching additional EC2 instances to handle the increased traffic. This ensures a smooth user experience even during peak times, leading to greater customer satisfaction and retention. But auto scaling doesn't stop there. Once the Sunday rush subsides, auto scaling can intelligently scale back in, terminating unused instances. This frees up valuable resources and reduces costs. This automatic provisioning and de-provisioning not only saves money, but also frees up the IT professionals who would otherwise be manually managing server capacity (a very tedious task).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now that you are sold on the importance of auto scaling, let's move on to the other parts of this article.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lifecycle Hooks
&lt;/h2&gt;

&lt;p&gt;Any frontend developer who has used a library like React.js already has an understanding of what a lifecycle hook is. The concept is similar in the context of EC2 instances on AWS. Lifecycle hooks give you the ability to perform custom actions on instances in an Auto Scaling group from the time they are launched through to their termination. They provide a specified amount of time (one hour by default) to wait for the action to complete before the instance transitions to the next state. Let's talk about the different stages in the lifecycle of an EC2 instance during scaling.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsglrzllkcz4u6xparp2g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsglrzllkcz4u6xparp2g.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When an EC2 instance is launched during a scale-out event, it enters a &lt;strong&gt;pending&lt;/strong&gt; state, allowing time for the instance to run any bootstrapping scripts specified in the user data section of the launch configuration or template of the Auto Scaling group. Once all this is complete, the instance immediately goes into service, that is, the &lt;strong&gt;running&lt;/strong&gt; state. On the flip side of things, when an instance is being removed from an Auto Scaling group during a scale-in event, or because it has failed health checks, it moves to the &lt;strong&gt;terminating&lt;/strong&gt; or &lt;strong&gt;shutting-down&lt;/strong&gt; state until it finally enters the &lt;strong&gt;terminated&lt;/strong&gt; state. Even though this looks like a pretty robust setup, it can pose some problems. For example, when an instance is launched, its user data script has finished running and it enters the in-service (running) state, that doesn't necessarily mean the application served by the instance is ready to start receiving and processing requests; it might still need more time to perform tasks such as processing configuration files, loading custom resources or connecting to backend databases, among others. While all this is still completing, the instance might already be receiving health check requests from a load balancer. What do you think the result of the health check will be when this happens? You are right if your answer is that the health checks will likely fail because the application is still loading. How, then, do we inform an Auto Scaling group that an instance that has been launched is not yet ready to receive requests and needs more time? We will come back to this question in a minute.&lt;/p&gt;

&lt;p&gt;There is another pertinent problem. During a scale-in event, an instance scheduled for termination may still be in the middle of processing requests and may even contain some important logs needed for troubleshooting issues in the future. If the instance is suddenly terminated, both the in-progress requests and logs will be lost. How do you tell your auto scaling group to delay the termination of the instance until it has finished processing pending requests and important log files have been collected into a permanent storage service like Amazon S3? The answer to this question, and the one asked a couple of sentences ago, is, as you might have guessed, lifecycle hooks.&lt;/p&gt;

&lt;p&gt;Using an instance launching lifecycle hook, you can prevent an instance from moving from the pending state straight into service by first moving it into the &lt;strong&gt;pending:wait&lt;/strong&gt; state to ensure the application on the instance can finish loading and is ready to start processing requests. When the wait ends, the instance moves to the &lt;strong&gt;pending:proceed&lt;/strong&gt; state, where the Auto Scaling group can then attempt to put it in service (the running state).&lt;/p&gt;

&lt;p&gt;In a similar manner, you can also make use of lifecycle hooks on the flip side of things, that is, when an instance is targeted for termination. An instance-terminating lifecycle hook will put your instance in a &lt;strong&gt;terminating:wait&lt;/strong&gt; state, during which you can do your final cleanup tasks, such as preserving copies of logs by moving them to S3, for example. Once you're done, or a preset timer (one hour by default) expires, the instance will move to the &lt;strong&gt;terminating:proceed&lt;/strong&gt; state, and then the Auto Scaling group will take over and proceed to terminate the instance.&lt;/p&gt;
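&lt;p&gt;The two halves of that workflow map onto two Auto Scaling API calls. Here is a minimal sketch in Python (assuming boto3; the group name, hook name and instance ID are hypothetical) that builds the request parameters:&lt;/p&gt;

```python
def terminating_hook(asg_name, timeout_seconds=300):
    """Parameters for a PutLifecycleHook call that holds instances in the
    terminating:wait state while logs are copied off the instance."""
    return {
        "AutoScalingGroupName": asg_name,
        "LifecycleHookName": "drain-and-ship-logs",  # hypothetical hook name
        "LifecycleTransition": "autoscaling:EC2_INSTANCE_TERMINATING",
        "HeartbeatTimeout": timeout_seconds,
        "DefaultResult": "CONTINUE",  # if nobody completes the action, proceed anyway
    }

def finish_hook(asg_name, instance_id):
    """Parameters for CompleteLifecycleAction once cleanup has finished."""
    return {
        "AutoScalingGroupName": asg_name,
        "LifecycleHookName": "drain-and-ship-logs",
        "InstanceId": instance_id,
        "LifecycleActionResult": "CONTINUE",
    }
```

&lt;p&gt;With boto3 you would pass these to &lt;code&gt;put_lifecycle_hook&lt;/code&gt; and &lt;code&gt;complete_lifecycle_action&lt;/code&gt; on the Auto Scaling client, and call &lt;code&gt;record_lifecycle_action_heartbeat&lt;/code&gt; if cleanup needs more time than the timeout allows.&lt;/p&gt;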

&lt;p&gt;There are many other use cases for lifecycle hooks, such as managing configurations with tools like Chef or Puppet, among others. We won't go into the details of these to avoid making this article too long. Before I conclude this article, let's look at some implementation considerations for lifecycle hooks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation Considerations for Lifecycle Hooks
&lt;/h2&gt;

&lt;p&gt;Before making use of lifecycle hooks you should always consider factors such as:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Timeout&lt;/em&gt;&lt;/strong&gt; — The default timeout for a lifecycle hook as I have already mentioned is one hour (3600 seconds). This may be sufficient for most initialization or cleanup tasks. You can set a custom timeout duration based on your specific needs. The timeout should be long enough to complete necessary actions but not so long that it delays scaling operations unnecessarily.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Action Success/Failure&lt;/em&gt;&lt;/strong&gt; — You have to clearly define what constitutes a successful completion of the lifecycle hook action. This might include successful software installation, configuration setup, or data backup. You will also need to identify conditions that would result in a failure, such as timeout expiration, script errors, or failed installations. In a similar fashion, you should configure your system to send notifications (e.g., via SNS or CloudWatch) upon completion of lifecycle hook actions. This helps in tracking and auditing.&lt;/p&gt;

&lt;p&gt;Always keep in mind that lifecycle hooks can add latency to scaling events so you should optimize all actions for efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;In this article, we explored the concept of EC2 auto scaling and then looked at lifecycle hooks, illustrating how they enhance the efficiency of Auto Scaling groups. We also discussed key implementation considerations to ensure the effective use of lifecycle hooks in your scaling strategy. By combining auto scaling with lifecycle hooks, you gain a powerful and automated approach to managing your cloud infrastructure. Auto scaling ensures your application has the resources it needs to handle fluctuating demands, while lifecycle hooks provide the control to tailor instance behavior during launch and termination. This gives you the ability to optimize resource utilization, streamline deployments, and ultimately deliver a highly available and scalable application experience. Thank you for taking the time to read this and learn more about EC2 Auto Scaling with me.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>autoscaling</category>
      <category>ec2</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Securing Your Cloud: Proactive Strategies for AWS Security</title>
      <dc:creator>Brandon Damue</dc:creator>
      <pubDate>Thu, 02 May 2024 15:42:17 +0000</pubDate>
      <link>https://forem.com/aws-builders/securing-your-cloud-proactive-strategies-for-aws-security-4040</link>
      <guid>https://forem.com/aws-builders/securing-your-cloud-proactive-strategies-for-aws-security-4040</guid>
      <description>&lt;p&gt;As a professional working in any information and technology niche you learn quickly that securing every part of your IT infrastructure is arguably the most critical task. There is no room for debate — it is ESSENTIAL. Since the ability to tell stories and tell them well is an important skill in this modern economy, a little backstory if I may. Recently the IT infrastructure of the city of Hamilton, Ontario [where I live] was breached. I bet this set the city back in terms of finances (they needed to upgrade security measures and fortify their defenses) and potentially compromising resident safety (this is not an exaggeration). All this got me thinking: is a purely defensive security posture enough for businesses and IT professionals when push comes to shove? That question led me to write this article in which I explore how we can be more offensive in our security strategies within the AWS cloud environment. We are going to look at the various ways we can leverage various services to anticipate and mitigate potential security threats before they materialize. So Let’s get to it!!&lt;/p&gt;

&lt;h2&gt;
  
  
  Continuous Observation, Monitoring and Analysis
&lt;/h2&gt;

&lt;p&gt;As a self-proclaimed storyteller, here’s another backstory for you. While at my first AWS summit last year in Toronto, I noticed that most of the companies with showcase booths at the summit were offering observability services. This is indicative of how important monitoring and observing your cloud infrastructure is. Monitoring and observability tools can help you detect suspicious activity, potential vulnerabilities, and even signs of an ongoing attack. For these, AWS services such as CloudWatch and CloudTrail are there to assist you. But how exactly do you make use of them, you might wonder. Without going into too much detail, here’s how: by analyzing user activity in CloudTrail and centralizing logs in CloudWatch, you can proactively hunt for threats. You can set log alarms for anomalies and use CloudWatch Logs Insights to investigate suspicious activity. CloudTrail data can even help you understand attacker behavior and prioritize security measures. Combining these tools with the other strategies we are going to look at will help you shift from reactive defense to proactive threat hunting in your AWS cloud environment. To learn more about CloudWatch and CloudTrail, check out &lt;a href="https://medium.com/@dbrandonbawe/from-logs-to-compliance-a-guide-to-aws-monitoring-and-auditing-with-cloudwatch-cloudtrail-and-4322b14b1c02"&gt;this article&lt;/a&gt;.&lt;/p&gt;
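&lt;p&gt;To make the threat-hunting idea a little more concrete, here is a hedged sketch in Python (the log group name and the exact CloudTrail field paths are assumptions that depend on how your trail is delivered to CloudWatch Logs) of a Logs Insights query that counts failed console logins per source IP:&lt;/p&gt;

```python
import time

# Hypothetical Logs Insights query over CloudTrail events delivered to
# CloudWatch Logs: count failed console logins per source IP address.
FAILED_LOGINS = """
fields @timestamp, sourceIPAddress
| filter eventName = "ConsoleLogin" and responseElements.ConsoleLogin = "Failure"
| stats count(*) as attempts by sourceIPAddress
| sort attempts desc
""".strip()

def hunt_query(log_group, hours=24):
    """Parameters for a CloudWatch Logs Insights StartQuery call."""
    now = int(time.time())
    return {
        "logGroupName": log_group,  # hypothetical group receiving CloudTrail
        "startTime": now - hours * 3600,
        "endTime": now,
        "queryString": FAILED_LOGINS,
        "limit": 50,
    }
```

&lt;p&gt;You would submit this with &lt;code&gt;start_query&lt;/code&gt; on the CloudWatch Logs client and poll &lt;code&gt;get_query_results&lt;/code&gt; for the aggregated rows.&lt;/p&gt;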

&lt;h2&gt;
  
  
  Automating Security Tasks and Configurations
&lt;/h2&gt;

&lt;p&gt;Automating security tasks and configurations not only saves time spent correcting mistakes and ensures consistency, but also prevents costly errors that could disrupt operations. Here is how you can do this using services such as IAM, AWS Config and AWS Lambda. You can use IAM to control user access (always remember to follow the &lt;a href="https://www.sentinelone.com/cybersecurity-101/what-is-the-principle-of-least-privilege-polp/?utm_source=gdn-paid&amp;amp;utm_medium=paid-display&amp;amp;utm_campaign=nam-pmax-brand-ppc&amp;amp;utm_term=&amp;amp;campaign_id=19502097988&amp;amp;ad_id=&amp;amp;gad_source=1&amp;amp;gclid=CjwKCAjw57exBhAsEiwAaIxaZiHmlUiKSynmK-0w_3QaZxctE54vZyyDi23LvZe1VXCA89RbI1VsARoCT9QQAvD_BwE"&gt;Principle of Least Privilege&lt;/a&gt;), while using Config to continuously monitor your resources against pre-defined security rules. If Config detects a violation, it triggers an AWS Lambda function — a serverless compute service. You can write custom code in Lambda to automate remediation actions. For example, a Lambda function could automatically revert a non-compliant configuration change or send an alert to security personnel. By doing this, you automate security tasks and enforce compliance, freeing you to turn your attention to optimizing other aspects of your security posture.&lt;/p&gt;
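&lt;p&gt;Here is a deliberately minimal sketch of such a remediation Lambda in Python (the event shape shown is the configuration-change form, and the actual remediation call is left as a comment because it depends entirely on the rule being enforced):&lt;/p&gt;

```python
import json

def handler(event, context=None):
    """Skeleton remediation handler for an AWS Config rule trigger.
    It unpacks the configuration item and reports the action it would take;
    a real function would call boto3 here to revert the change or alert."""
    invoking = json.loads(event["invokingEvent"])
    item = invoking["configurationItem"]
    return {
        "resourceId": item["resourceId"],
        "resourceType": item["resourceType"],
        "action": "remediate",  # placeholder: revert change / notify security
    }
```

&lt;p&gt;In practice you would attach this function to the Config rule, give its execution role only the permissions the remediation needs (least privilege again), and publish the result to SNS for auditing.&lt;/p&gt;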

&lt;h2&gt;
  
  
  Incident Response Planning
&lt;/h2&gt;

&lt;p&gt;I will not assume that everyone reading this knows what Incident Response Planning is from the jump. Before moving forward let me explain what it is in the first place. Incident Response planning is the process of developing a documented strategy on how your organization will detect, respond to and recover from security incidents. As important as Incident Response (IR) plans are, most companies have attested to the fact that their IR plans are informal or even nonexistent. Understanding that a threat to an organization’s security is not only a technical issue but a threat to the organization’s business continuity can go a long way to change how organizations take on IR planning. You don’t necessarily have to build an IR plan from the ground up by yourself as there are many companies offering incident response services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Secure DevOps Practices
&lt;/h2&gt;

&lt;p&gt;The rise of DevOps practices in software development is attributed to a growing need for faster development cycles, improved collaboration, and better software quality. When and where does security come into play in this DevOps conversation? It is when the conversation changes from talking about DevOps to DevSecOps. To achieve DevSecOps on AWS, integrate security into every step of your development process. To integrate early security checks, leverage AWS security services like Inspector and CodeBuild for automated testing within your CodePipeline, enforce security best practices in your Infrastructure as Code (IaC) with &lt;a href="https://medium.com/@dbrandonbawe/navigating-aws-cloudformation-with-confidence-a-sysops-admins-playbook-f118eaf21648"&gt;CloudFormation&lt;/a&gt; and Config, automate patching with &lt;a href="https://medium.com/aws-in-plain-english/automating-aws-operations-a-deep-dive-into-systems-manager-for-sysops-a975412355ea"&gt;Patch Manager&lt;/a&gt;, and cultivate a security-aware DevOps team through training and incident response planning. This continuous approach embeds security within your AWS DevOps workflow for a more secure and efficient development process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vulnerability Management
&lt;/h2&gt;

&lt;p&gt;This is the last offensive strategy we are going to look at in this article, but it is by no means the last security strategy you can leverage, as there are many other robust strategies not included here. The whole point of vulnerability management is that regularly scanning your AWS environment for vulnerabilities is an essential security practice. By identifying potential weaknesses before attackers exploit them, you significantly reduce the risk of data breaches and downtime. This not only protects your sensitive data but also helps maintain compliance with industry regulations. Regular scans provide a clear picture of your overall security posture, allowing you to prioritize patching vulnerabilities and continuously strengthen your defenses. It’s a proactive investment that pays off in a more secure and resilient AWS environment. You can use Amazon Inspector and even third-party vulnerability scanners to achieve this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Last words
&lt;/h2&gt;

&lt;p&gt;I hope after reading this article, you were able to take away at least one strategy that you are going to implement to strengthen the security posture of your AWS cloud environment. Remember, security is an ongoing journey, not a destination. As the threat landscape evolves, so should your security practices. By embracing a proactive and offensive approach, utilizing the powerful tools offered by AWS as well as other service providers, and continuously refining your strategies, you can build a robust and resilient cloud environment that is well-equipped to withstand even the most sophisticated attacks.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>security</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Navigating AWS CloudFormation with Confidence: A SysOps Admin's Playbook</title>
      <dc:creator>Brandon Damue</dc:creator>
      <pubDate>Thu, 12 Oct 2023 16:33:48 +0000</pubDate>
      <link>https://forem.com/brandondamue/navigating-aws-cloudformation-with-confidence-a-sysops-admins-playbook-1chi</link>
      <guid>https://forem.com/brandondamue/navigating-aws-cloudformation-with-confidence-a-sysops-admins-playbook-1chi</guid>
      <description>&lt;p&gt;Anyone who has explored building cloud solutions on any cloud platform, be it AWS, GCP, or Azure, has probably come in contact with an IaC (Infrastructure as Code) tool and witnessed how it made life in the cloud better. It's like discovering a versatile magic wand in the world of tech—a wand that SysOps administrators and other professionals wave to conjure entire cloud infrastructures, automate deployments, and orchestrate resources seamlessly. Among these enchanting tools, CloudFormatiom stands tall as one of the maestros of cloud orchestration.&lt;/p&gt;

&lt;p&gt;In this symphony of IaC mastery on AWS, CloudFormation takes centre stage, offering cloud professionals the power to compose complex cloud symphonies with elegant ease. Picture it as your conductor's baton, allowing you to harmonize infrastructure components, ensuring they play in perfect unison, all while saving you time, enhancing security, and optimizing costs. This is an article in which we journey into the heart of CloudFormation, uncovering its secrets, exploring its nuances, and discovering how it empowers SysOps administrators to craft cloud infrastructures like genius composers, transforming the way we build and manage in the cloud. So, grab your baton, and let's begin this orchestration of AWS CloudFormation's wonderful capabilities.&lt;/p&gt;

&lt;p&gt;As a tradition in all articles I put out, I always like to start with an overview of the topic of interest. With my little tradition at the top of my mind, here is an overview of CloudFormation.&lt;/p&gt;

&lt;p&gt;CloudFormation is a vital tool for SysOps admins and other cloud professionals as it simplifies and streamlines the management of AWS infrastructure through code. It allows you to define and provision AWS resources and their configurations using templates, which are blueprints for the cloud. This brings consistency to infrastructure management, reduces manual errors, and enhances operational efficiency.&lt;/p&gt;

&lt;p&gt;With CloudFormation, you can easily create, configure, and control AWS resources, ensuring they match the desired specifications. This wonderful service also supports the coordination of complex deployments, ensuring resources are provisioned in the correct order. CloudFormation treats infrastructure as code, making it easy to version-control and integrate into DevOps processes (check out &lt;a href="https://medium.com/@mistazidane/what-is-devops-4f8253e58933"&gt;this article&lt;/a&gt; by a good friend of mine to learn more about what DevOps entails).&lt;/p&gt;

&lt;p&gt;I hope that overview was good enough for you. It is time!! Time for what you might wonder. It is time for us to jump into the "CloudFormation pool" where we will explore the intricacies of the subject starting with CloudFormation Templates and Stacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Templates and Stacks
&lt;/h2&gt;

&lt;p&gt;CloudFormation templates make up its backbone, serving as the building blocks for defining and provisioning resources. These templates are written in either JSON or YAML format and follow a structured, declarative approach. In a template, you specify the AWS resources you need, their properties, and their relationships within a stack, all in a human-readable code format.&lt;/p&gt;

&lt;p&gt;One of the key advantages of CloudFormation templates is their reusability. You can create modular templates for commonly used AWS resource patterns, making it easy to maintain consistency across your infrastructure. Templates can also incorporate parameters, allowing users to customize resource configurations when creating stacks (more on stacks soon just keep going :) ) based on the template. Additionally, templates can define outputs, facilitating communication between resources or even between stacks.&lt;/p&gt;

&lt;p&gt;Overall, CloudFormation templates enable SysOps admins and developers to codify their AWS infrastructure requirements, automate resource provisioning, and maintain version-controlled blueprints of their cloud environments. This approach enhances efficiency, reduces manual errors, and promotes best practices in managing AWS resources at scale.&lt;/p&gt;

&lt;p&gt;In CloudFormation, stacks serve as containers for resources defined in templates. Templates as we have seen specify the AWS resources and their configurations. When you create a stack based on a template, CloudFormation reads the template and provisions the specified resources according to the defined settings. Stacks are designed to simplify resource management, allowing for easy orchestration of complex infrastructures.&lt;/p&gt;

&lt;p&gt;Stacks also handle resource dependencies, ensuring that resources are created or updated in the correct order. They can be scoped with their own permissions and IAM roles, enabling fine-grained access control. Moreover, stacks facilitate resource cleanup – when you delete a stack, CloudFormation automatically removes all associated resources. This organizational approach streamlines resource provisioning, management, and cleanup, making it a fundamental concept for SysOps admins in AWS. Let us now take a deeper look into stacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stack Creation and Updates
&lt;/h2&gt;

&lt;p&gt;Creating and managing stacks in CloudFormation involves two key phases. In the initial creation phase, you start by crafting a CloudFormation template, which acts as a blueprint for resource provisioning. This template defines the resources to be created and their configurations, all articulated in JSON or YAML format. With the template ready, you can go to the AWS Management Console, access the CloudFormation service, and initiate stack creation. During this process, you upload the template, provide essential stack details such as a name and parameters, and meticulously review the configuration to ensure accuracy. Upon confirmation, CloudFormation takes over, promptly provisioning the AWS resources specified in the template.&lt;/p&gt;

&lt;p&gt;In the subsequent management phase, you can update existing stacks to modify your infrastructure. This begins with template adjustments to reflect the desired changes. Before implementing these changes, it's wise to create a change set — a preview of the changes to be made. The stack update is then executed, and CloudFormation systematically applies the updates to the resources. To maintain control during updates, you can employ resource-specific update policies. Continuous monitoring of the update progress, along with the safety net of rollback mechanisms in case of issues, ensures a secure and controlled evolution of the AWS infrastructure. Whether creating new stacks or managing existing ones, this process-driven approach streamlines resource provisioning and maintenance, providing you with the tools needed for efficient infrastructure management.&lt;/p&gt;
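&lt;p&gt;The change-set step above can be sketched as a single API request. A minimal Python example (assuming boto3; the stack name and template are placeholders) of the parameters for CreateChangeSet:&lt;/p&gt;

```python
def change_set_request(stack_name, template_body, description="Preview before update"):
    """Parameters for a CloudFormation CreateChangeSet call that previews
    an update to an existing stack without executing it."""
    return {
        "StackName": stack_name,
        "ChangeSetName": stack_name + "-preview",
        "ChangeSetType": "UPDATE",  # the stack already exists
        "TemplateBody": template_body,
        "Description": description,
    }
```

&lt;p&gt;You would submit this with &lt;code&gt;create_change_set&lt;/code&gt; on the CloudFormation client, inspect the proposed changes with &lt;code&gt;describe_change_set&lt;/code&gt;, and only then apply them with &lt;code&gt;execute_change_set&lt;/code&gt;.&lt;/p&gt;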

&lt;h2&gt;
  
  
  Stack Policies and Rollbacks
&lt;/h2&gt;

&lt;p&gt;Stack policies in CloudFormation offer a robust mechanism for finely controlling updates to resources within a stack. They are defined in JSON format and allow you to specify which actions are permitted or denied for individual resources. With the ability to set permissions as "Allow" or "Deny," you can create precise policies governing who can modify resources and under what conditions.&lt;/p&gt;

&lt;p&gt;Stack policies are especially valuable for maintaining security and stability in your infrastructure. By using these policies, you can prevent unauthorized or accidental changes to critical resources, reducing the risk of disruptions. Additionally, stack policies give you the flexibility to set conditions, ensuring that updates occur only when specific circumstances are met. While powerful, stack policies should be thoughtfully crafted to avoid overly restrictive controls, striking the right balance between security and operational flexibility.&lt;/p&gt;
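&lt;p&gt;A small example of what such a policy can look like in practice (the logical resource ID &lt;code&gt;ProductionDatabase&lt;/code&gt; is hypothetical), expressed here as a Python dict so it can be serialized and attached to a stack:&lt;/p&gt;

```python
import json

# Allow routine updates everywhere, but deny the destructive update actions
# on one critical resource (hypothetical logical ID: ProductionDatabase).
STACK_POLICY = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Action": ["Update:Replace", "Update:Delete"],
            "Principal": "*",
            "Resource": "LogicalResourceId/ProductionDatabase",
        },
    ]
}

STACK_POLICY_BODY = json.dumps(STACK_POLICY)
```

&lt;p&gt;With boto3 you would attach it via &lt;code&gt;set_stack_policy(StackName=..., StackPolicyBody=STACK_POLICY_BODY)&lt;/code&gt;; note that the Deny statement wins whenever both match, which is exactly the safety property you want for a production database.&lt;/p&gt;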

&lt;p&gt;In case of failures during stack updates, AWS CloudFormation provides a robust rollback mechanism to automatically revert the stack to its previous state, ensuring that your infrastructure remains in a consistent and stable condition. This feature safeguards your resources from any unintended or disruptive changes. Rollbacks are a crucial aspect of maintaining the reliability and integrity of your infrastructure. When a failure occurs during a stack update, CloudFormation carefully tracks the changes made and the state of resources. If any part of the update fails, CloudFormation will initiate a rollback, undoing the changes made during the update process and restoring the resources to their prior configurations.&lt;/p&gt;

&lt;p&gt;Rollback behaviours can be configured to suit your specific needs. For example, you can specify whether the entire stack should be rolled back or only the resources affected by the update failure. You can also define rollback triggers, which are custom actions to take during a rollback, allowing you to address specific situations effectively. These robust rollback capabilities help maintain the stability of your resources and minimize the potential impact of failed updates, contributing to the reliability of your infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Nested Stacks and Cross-Stack References
&lt;/h2&gt;

&lt;p&gt;Nested stacks offer a structured approach to managing complex infrastructures. By creating parent-child relationships within templates, they allow for a hierarchical structure. Each nested stack acts as a modular unit, simplifying resource organization and code reuse. This separation of concerns not only eases the management of intricate infrastructures but also promotes better dependency management. In addition, updates and rollbacks are isolated to the specific nested stack, minimizing the scope of changes and reducing potential risks. This approach results in more streamlined and maintainable templates for SysOps admins, making it easier to handle intricate infrastructure configurations.&lt;/p&gt;

&lt;p&gt;Referencing resources from one stack in another stack in AWS CloudFormation can be achieved using cross-stack references. This enables you to create dependencies between resources in different stacks, ensuring they are properly linked. Here's how you can reference resources across stacks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Export Values&lt;/strong&gt;: In the source stack (the stack that contains the resource you want to reference), you first export the value you want to share. To do this, add an Export field to an entry in the template's Outputs section. The export assigns a name to the shared value; export names must be unique within a region.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"Resources"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"MyResource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS::SomeResourceType"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"Properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"Property1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Value1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"Property2"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Value2"&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"Outputs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"ExportedValueName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"Description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Description of the exported value"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"Value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"Fn::GetAtt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"MyResource"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AttributeToExport"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"Export"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ExportedValueName"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Import Values&lt;/strong&gt;: In the target stack (the stack where you want to reference the resource), you import the exported value using the Fn::ImportValue intrinsic function in the CloudFormation template. This function lets you access the exported value by its name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"Resources"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"MyOtherResource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS::OtherResourceType"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"Properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"Property1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"Fn::Import"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ExportedValueName"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Use the Referenced Value&lt;/strong&gt;: In the target stack, you can now use the imported value from the source stack within the properties of other resources. In this example, "MyOtherResource" has "Property1" set to the exported value from "MyResource" in the source stack. Referencing stacks like this provides a powerful way to modularize and organize your infrastructure, promoting a structured approach to handling complex architectures.&lt;/p&gt;

&lt;p&gt;Now on to the last piece of our CloudFormation puzzle. By this, I don't mean that we have covered all there is to know about CloudFormation and this is the last item on the list; it simply means that what we are about to look at is the last CloudFormation intricacy covered in this article. Capisce?&lt;/p&gt;

&lt;h2&gt;
  
  
  Stack Sets and CloudFormation Drift Detection
&lt;/h2&gt;

&lt;p&gt;Stack Sets in CloudFormation provide a centralized solution for orchestrating the deployment of CloudFormation stacks across numerous AWS accounts and regions. Operating from a single management account, organizations can efficiently manage, create, and maintain consistent stacks in diverse environments. This multi-account and multi-region capability is particularly advantageous for organizations with a distributed infrastructure or global presence, ensuring that resources are provisioned uniformly and effectively.&lt;/p&gt;

&lt;p&gt;With Stack Sets, you define the CloudFormation template and its parameters in the management account, maintaining template consistency while allowing customized parameter values in member accounts. Access control is finely tuned using IAM, allowing for secure and permission-based stack management. These Stack Sets also offer automated rollback mechanisms, reducing the need for manual intervention in case of deployment issues. This powerful feature simplifies the deployment of CloudFormation stacks at scale, streamlining infrastructure management and promoting a structured and consistent approach to resource provisioning.&lt;/p&gt;
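
&lt;p&gt;In CLI terms, working with Stack Sets is a two-step sketch (the stack set name, template path, account IDs, and regions below are placeholders): first create the stack set in the management account, then fan it out to member accounts and regions as stack instances.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# define the stack set once, in the management account
aws cloudformation create-stack-set \
  --stack-set-name baseline-iam-roles \
  --template-body file://baseline.json

# deploy it to member accounts across regions
aws cloudformation create-stack-instances \
  --stack-set-name baseline-iam-roles \
  --accounts 111111111111 222222222222 \
  --regions us-east-1 eu-west-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;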

&lt;p&gt;Let's drift to CloudFormation drift detection 😉. Drift Detection is a feature that plays a key role in keeping resources in line with their intended configurations. It allows you to identify discrepancies between the desired state articulated in CloudFormation templates and the actual state of deployed resources. This is particularly beneficial for SysOps admins and cloud engineers responsible for maintaining infrastructure consistency, ensuring that the actual resource configurations conform to what is specified in the templates.&lt;/p&gt;

&lt;p&gt;Drift detection can be applied to a wide range of resource types, including AWS-managed resources like EC2 instances and RDS databases, as well as custom resources defined in CloudFormation templates. When you initiate a drift detection, CloudFormation generates a comprehensive report that carefully outlines the differences or inconsistencies between the intended and actual configurations. This report serves as a valuable reference for understanding the scope of configuration drift and provides insights into the specific resources that require attention.&lt;/p&gt;

&lt;p&gt;Once a drift is identified, you have the flexibility to choose how to remediate these differences. This might involve updating the CloudFormation stack to match the desired configurations, manually adjusting individual resources, or utilizing automation mechanisms such as AWS Systems Manager Automation documents or Lambda functions to orchestrate custom remediation workflows. Drift detection, when used proactively, serves as an essential tool for compliance monitoring, particularly in regulated industries where adherence to specific configurations is of utmost importance. By setting up periodic drift detection, you can ensure that your infrastructure remains compliant and consistent, facilitating ongoing infrastructure management and alignment with organizational and regulatory standards.&lt;/p&gt;
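
&lt;p&gt;A minimal drift-detection workflow with the AWS CLI looks something like the sketch below (the stack name is a placeholder): kick off detection, poll its status, then list the resources that have drifted.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# start a drift detection run; this returns a StackDriftDetectionId
aws cloudformation detect-stack-drift --stack-name my-stack

# poll until the detection run completes
aws cloudformation describe-stack-drift-detection-status \
  --stack-drift-detection-id detection-id-from-previous-call

# list the per-resource differences
aws cloudformation describe-stack-resource-drifts \
  --stack-name my-stack \
  --stack-resource-drift-status-filters MODIFIED DELETED
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;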

&lt;h2&gt;
  
  
  Last Words?
&lt;/h2&gt;

&lt;p&gt;To end on a positive note, CloudFormation is more than just a tool for creating and managing resources in the cloud; it's a beacon of control and orchestration in an ever-expanding digital universe. As a SysOps administrator, you hold the conductor's baton, shaping the symphony of infrastructure with precision and flair. With CloudFormation, you wield a power that transforms complexity into simplicity, chaos into order, and potential into reality. It's your backstage pass to AWS, your trusted companion in the journey of digital orchestration, and your key to orchestrating the future. As you continue to navigate the cloud's dynamic pathways, remember that CloudFormation is your guide, your partner, and your creative canvas. So, keep orchestrating, keep innovating, and keep building the future of the cloud, one stack at a time. Until next time, Goodbye!&lt;/p&gt;

</description>
      <category>cloudinfrastructure</category>
      <category>infrastructureascode</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Taking Control of your EC2 instances: From Instance Lifecycle to Termination Protection</title>
      <dc:creator>Brandon Damue</dc:creator>
      <pubDate>Fri, 15 Sep 2023 13:28:02 +0000</pubDate>
      <link>https://forem.com/brandondamue/taking-control-of-your-ec2-instances-from-instance-lifecycle-to-termination-protection-59bp</link>
      <guid>https://forem.com/brandondamue/taking-control-of-your-ec2-instances-from-instance-lifecycle-to-termination-protection-59bp</guid>
      <description>&lt;p&gt;Even though EC2 has been around since the early days of AWS, many people fail to leverage its full potential. In the ever-expanding world of Amazon Web Services, where exciting new services pop up in the twinkle of an eye, it's quite tempting for even experienced sysops administrators to be lured away by the shiny and the new. But here's the thing: while everyone's chasing after the latest AWS innovations, our trusty old friend, Elastic Compute Cloud, or EC2, is quietly holding down the fort.&lt;/p&gt;

&lt;p&gt;As a sysops administrator, you're essentially the wizard behind the scenes in the cloud, conjuring up instances, taming data, and ensuring the digital heartbeat of your organization keeps thumping. But in this ever-shifting realm, it's not just about summoning EC2 instances; it's about conducting them like a maestro leading a symphony.&lt;/p&gt;

&lt;p&gt;This article isn't a dry technical manual; think of it as the spotlight illuminating a hidden orchestra. These are the unsung EC2 features, the unsung musicians waiting for the discerning conductor to let them shine. We're going to dive headfirst into the intricate art of EC2 management, where termination protection and shutdown behaviour are just the tip of the iceberg. Accept my humble invitation beckoning you to embark on this journey with me.&lt;/p&gt;

&lt;p&gt;If you have read any of my articles before, you will have noticed that I begin with an introduction or overview of the principal topic of the article. With that in mind, here is an overview of EC2 instances.&lt;/p&gt;

&lt;p&gt;Amazon EC2 is your powerhouse in the world of AWS cloud computing. It's like having a toolbox filled with virtual servers (EC2 instances) that you can summon whenever the need arises. What sets EC2 apart in the AWS universe is its role as the cornerstone for building adaptable applications and services.&lt;/p&gt;

&lt;p&gt;With EC2, you've got the superpower to effortlessly adjust your computing resources to match your workload. Plus, there's a menu of instance types tailored for specific tasks, so you're always using the right tool for the job. The best part? You only pay for what you actually use, ensuring you get the most bang for your buck. EC2 also comes with cool features like keeping your applications available and secure, and it seamlessly integrates with other AWS services. It's the go-to choice for organizations seeking efficiency and cost-effectiveness in the cloud. Simply put, EC2 is your trusty ally for cloud computing, ready to help you create, expand, and manage your digital realm with ease. Hope that overview is good enough. Let's move on to bigger things.&lt;/p&gt;

&lt;h2&gt;
  
  
  Instance Types
&lt;/h2&gt;

&lt;p&gt;Amazon EC2 offers a diverse array of instance types, each finely tuned to cater to specific workload needs. For tasks demanding a balanced mix of CPU, memory, and network resources, the General Purpose (M, T, and A series) instances are the go-to choice. They are versatile and ideal for applications with varying workloads, such as web servers and development environments.&lt;/p&gt;

&lt;p&gt;On the other hand, if your workload leans heavily toward computationally intensive tasks like data analytics or scientific simulations, the Compute-Optimized (C series) instances provide high CPU performance. Memory-optimised (R, X, and z series) instances are designed for applications requiring substantial memory resources, like databases and big data analytics. For specialized tasks like machine learning and graphics rendering, Accelerated Computing (P, G, F, Inf, and Trn series) instances come with GPUs or purpose-built accelerators. Storage-optimised (I, D, and H series) instances are tailored for data-intensive applications such as big data processing and data warehousing, offering high-speed local storage with ample capacity. The burstable T-series instances are cost-effective options for workloads with occasional CPU spikes, while the Hpc series caters to scientific simulations and other high-performance computing tasks. By selecting the right instance type based on your workload's specific requirements, you can maximize performance while optimizing costs in the AWS cloud. I have spoken about instance types in more detail in a previous article I wrote. You can check it out &lt;a href="https://damue.hashnode.dev/getting-started-with-ec2-instances-a-beginners-guide"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Instance Lifecycle
&lt;/h2&gt;

&lt;p&gt;The lifecycle of an Amazon EC2 instance encompasses several key phases, each with its distinct role. Put on your deep-sea diving suit because we are about to dive deeper into this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Launching Instances&lt;/em&gt;&lt;/strong&gt; — Launching an EC2 instance marks the beginning of its lifecycle. You start by selecting an Amazon Machine Image (AMI), choosing the appropriate instance type based on your workload requirements, configuring security groups to control inbound and outbound traffic, and setting up key pairs for secure access. It's essential to pick the right AMI that aligns with your application's needs, ensuring a solid foundation for your instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Starting Instances&lt;/em&gt;&lt;/strong&gt; — Once you've launched your EC2 instance, you can kick it into action by starting it. This means getting it online and ready to do its job. Starting an instance should be smooth sailing if you set it up right when you launch it. Just make sure your apps and services are all set and good to go when you hit that start button to keep any downtime to a minimum.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Stopping Instances&lt;/em&gt;&lt;/strong&gt; — Think of stopping an EC2 instance as a power nap for your virtual server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OEPPg6gL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rvco9dkotjp5xbmy45sg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OEPPg6gL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rvco9dkotjp5xbmy45sg.png" alt="Image description" width="490" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's like hitting the pause button – the instance takes a break but remembers everything it was doing. This is incredibly handy, particularly for instances that don't have to be up and running 24/7, such as your development or testing environments. When you hit the pause button by stopping them, you're not only cutting down on expenses but also ensuring your setup is all set and eagerly waiting for your next project or task. Just a heads-up, though – only EBS-backed instances can take a power nap; instance store-backed instances can't be stopped, only rebooted or terminated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Terminating Instances&lt;/em&gt;&lt;/strong&gt; — Terminating an instance is the final act in its lifecycle. This action shuts the instance down and deletes it, along with any attached EBS volumes whose DeleteOnTermination flag is set (the root volume, by default). It's irreversible and should be exercised with caution. Termination is typically used when you no longer need an instance or when you want to release the associated resources, thereby preventing ongoing costs. However, ensure that you back up any critical data before proceeding with termination to avoid data loss.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Rebooting Instances&lt;/em&gt;&lt;/strong&gt; — Rebooting an instance is a way to refresh it without making changes to its data or configurations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QP4wMQKg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mzn4swy2gogvm82ybo0o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QP4wMQKg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mzn4swy2gogvm82ybo0o.png" alt="Image description" width="600" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's similar to restarting a physical server to address issues or apply updates. Rebooting is less disruptive than stopping and starting an instance, making it a valuable troubleshooting tool.&lt;/p&gt;
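
&lt;p&gt;The lifecycle phases above map onto a handful of AWS CLI calls; as a quick sketch (the instance ID is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# wake it up
aws ec2 start-instances --instance-ids i-1234567890abcdef0

# the power nap (EBS-backed instances only)
aws ec2 stop-instances --instance-ids i-1234567890abcdef0

# the gentle refresh
aws ec2 reboot-instances --instance-ids i-1234567890abcdef0

# the final, irreversible goodbye
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;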

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Best Practices for Managing Instances Throughout Their Lifecycle&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Continuously monitor instances to detect performance issues, resource constraints, or security vulnerabilities. Utilize AWS CloudWatch for comprehensive instance metrics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implement auto-scaling to adjust the number of instances based on traffic and demand, ensuring optimal resource utilization and application availability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Assign meaningful tags to instances for easy organization, resource tracking, and cost allocation purposes. (more on tagging later)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Establish a robust backup strategy to safeguard critical data and configurations, utilizing AWS services like Amazon RDS or Amazon S3 for secure data storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apply security best practices, including timely patching, IAM roles for secure access, and adherence to AWS Security Hub recommendations to fortify your instances against threats.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Periodically review and optimize your instance types and configurations to ensure they align with your workload requirements. AWS Trusted Advisor can help identify cost-saving opportunities.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In essence, effective management of EC2 instances involves careful consideration of their lifecycle stages, coupled with best practices that encompass monitoring, automation, security, resource optimization, and data protection. This way of doing things ensures that your instances run smoothly, save you money where they can, and stay safe and sound from start to finish.&lt;/p&gt;

&lt;h2&gt;
  
  
  Termination Protection
&lt;/h2&gt;

&lt;p&gt;EC2 instance termination protection is like a safety lock for your virtual servers in AWS. Enabling this feature is a bit like hanging a "do not disturb" sign on your instances' virtual doors. It's a straightforward yet incredibly vital function: it prevents anyone, be it you or automated scripts, from mistakenly deleting your instances. In simpler terms, it acts as a protective barrier against unintentional deletions, ensuring your instances stay safe and sound. To actually terminate a protected instance, you'd have to intentionally disable this protection, which adds an extra step to the process.&lt;/p&gt;

&lt;p&gt;So, when should you enable EC2 instance termination protection? Well, if you've got instances running the show for critical applications or important services, it's a no-brainer. Imagine it as your shield against those moments when you accidentally do something you didn't intend to. It's like having an insurance policy for your instances, especially in critical production environments where every second of uptime counts. Additionally, if you're working with instances that store important data or have custom setups you'd rather not risk losing, this feature becomes your invaluable ally.&lt;/p&gt;

&lt;p&gt;In a nutshell, it's a handy tool for sysops admins to avoid accidental disasters and keep things running smoothly in the AWS cloud. It adds an extra layer of security and ensures that terminating an instance requires a deliberate, thought-out action, reducing the chances of disruptive errors.&lt;/p&gt;
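
&lt;p&gt;Hanging that "do not disturb" sign (and deliberately taking it down again) is a one-liner each way with the AWS CLI; the instance ID below is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# enable termination protection
aws ec2 modify-instance-attribute \
  --instance-id i-1234567890abcdef0 \
  --disable-api-termination

# later, deliberately disable it before terminating
aws ec2 modify-instance-attribute \
  --instance-id i-1234567890abcdef0 \
  --no-disable-api-termination
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;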

&lt;h2&gt;
  
  
  Shutdown Behaviour
&lt;/h2&gt;

&lt;p&gt;When it comes to shutting down your EC2 instances, you've got two main choices: "Stop" and "Terminate." Think of "Stop" as a polite way of asking your instance to take a nap. It's a gentle shutdown that keeps everything intact, including your data. So, if you've got a dev environment you're not using all the time, this option is like hitting the pause button to save costs while keeping your setup ready for action. On the flip side, "Terminate" is like saying goodbye for good. It's a swift shutdown that not only turns off the instance but wipes it clean – data and all. This is the choice when you're absolutely sure you won't be needing that instance again and want to release the resources it was using.&lt;/p&gt;

&lt;p&gt;So, which one to pick? Well, "Stop" is for those instances you might want to wake up later, like a hibernating bear, while "Terminate" is for when you're saying farewell, like closing a chapter. Just remember that with "Terminate," everything associated with that instance is gone, so always have a backup plan if your data is precious.&lt;/p&gt;
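
&lt;p&gt;What happens when a shutdown is initiated from inside the instance (for example, by running shutdown in the OS) is itself an instance attribute, which defaults to "stop". A sketch of flipping it to "terminate" with the AWS CLI, with a placeholder instance ID:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# make an OS-level shutdown terminate the instance instead of stopping it
aws ec2 modify-instance-attribute \
  --instance-id i-1234567890abcdef0 \
  --attribute instanceInitiatedShutdownBehavior \
  --value terminate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;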

&lt;h2&gt;
  
  
  Tagging Instances
&lt;/h2&gt;

&lt;p&gt;Tagging your EC2 instances is a bit like giving them individual name tags at a bustling conference – it's not just for show; it's essential.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ExB2sScN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pr4atvlfeted1llmzpus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ExB2sScN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pr4atvlfeted1llmzpus.png" alt="Image description" width="500" height="627"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pay no mind to this meme. In the world of EC2 instances, always endeavour to put a label (tag) on your greatness (EC2 instance). These tags bring a sense of order to your AWS world, allowing you to group instances logically by department, project, or application. This organizational clarity is a game-changer, making instance management a breeze. But that's not all – tags are also your financial compass, helping you track and allocate costs accurately. By tagging instances with relevant labels, you can see at a glance how much each department or project is spending, a crucial piece of the puzzle for budgeting and cost optimization. Moreover, tags are your resource management superpower, enabling you to swiftly locate and manage instances, apply policies, automate tasks, and set up alerts based on tags. In a nutshell, tagging isn't just a nice-to-have; it's your secret weapon for maintaining order, optimizing spending, and managing resources effectively in your AWS world, like having a personal assistant for your cloud infrastructure.&lt;/p&gt;
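
&lt;p&gt;Attaching those name tags from the AWS CLI is straightforward; the instance ID and tag values below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# label your greatness: project, environment, and owner tags in one call
aws ec2 create-tags \
  --resources i-1234567890abcdef0 \
  --tags Key=Project,Value=phoenix Key=Environment,Value=staging Key=Owner,Value=platform-team
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;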

&lt;h3&gt;
  
  
  Taking Control of Your Instances with Systems Manager and Instance Connect
&lt;/h3&gt;

&lt;p&gt;AWS Systems Manager and EC2 Instance Connect are essential tools for cloud professionals looking to manage and access EC2 instances at scale. With Systems Manager, you can automate patch management, run commands across multiple instances, and streamline complex tasks through automation. It also provides centralized parameter storage and valuable insights into your instance fleet, enhancing operational efficiency and decision-making. Systems Manager is undeniably a formidable tool. If you want to dive deeper into its capabilities, I recommend checking out &lt;a href="https://medium.com/aws-in-plain-english/automating-aws-operations-a-deep-dive-into-systems-manager-for-sysops-a975412355ea"&gt;this article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;On the other hand, EC2 Instance Connect revolutionizes secure instance access by eliminating the need for manual SSH key management. It offers auditable access, fine-grained control, and IAM integration, making SSH sessions more secure and manageable. Having these tools at your disposal can significantly simplify the lives of cloud professionals. You can effortlessly manage your instances, boost security, and make your AWS world run like a well-oiled machine – a real game-changer, especially when you're juggling a bunch of instances.&lt;/p&gt;
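
&lt;p&gt;As a sketch of Instance Connect in action (the instance ID, availability zone, key path, and hostname below are placeholders), you push a short-lived public key, valid for roughly 60 seconds, and then SSH in as usual within that window:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# push a one-time public key to the instance
aws ec2-instance-connect send-ssh-public-key \
  --instance-id i-1234567890abcdef0 \
  --availability-zone us-east-1a \
  --instance-os-user ec2-user \
  --ssh-public-key file://my_key.pub

# connect with the matching private key before the window expires
ssh -i my_key ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;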

&lt;h2&gt;
  
  
  Last words
&lt;/h2&gt;

&lt;p&gt;As we conclude this exploration of EC2's intricate features, it's clear that mastering these tools is like wielding a finely crafted instrument. SysOps administrators, armed with the knowledge of termination protection, shutdown behaviours, and other EC2 capabilities, possess the keys to orchestrate cloud environments with precision and finesse. Like a maestro conducting a symphony, they can harmonize efficiency, security, and cost-effectiveness to create a cloud infrastructure that not only functions flawlessly but also elevates their organization to new heights. In the ever-evolving landscape of AWS, these features remain the foundational notes of reliability and control. So, as you embark on your journey to harness the full potential of EC2, remember that your expertise is the baton that can transform your cloud orchestration from ordinary to extraordinary, and your AWS environment into a symphony of success.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudarchitecture</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
