<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: 🇦🇺 ☁️ Rumesh Silva  ☁️ 🇦🇺</title>
    <description>The latest articles on Forem by 🇦🇺 ☁️ Rumesh Silva  ☁️ 🇦🇺 (@rumeshsil).</description>
    <link>https://forem.com/rumeshsil</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1107826%2Ffc150f8c-3600-485d-be2c-d0f0cfae3bf7.jpeg</url>
      <title>Forem: 🇦🇺 ☁️ Rumesh Silva  ☁️ 🇦🇺</title>
      <link>https://forem.com/rumeshsil</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rumeshsil"/>
    <language>en</language>
    <item>
      <title>Optimizing AWS ECS: Deregistering and Deleting Unused Task Definition Revisions</title>
      <dc:creator>🇦🇺 ☁️ Rumesh Silva  ☁️ 🇦🇺</dc:creator>
      <pubDate>Mon, 11 Dec 2023 11:40:15 +0000</pubDate>
      <link>https://forem.com/rumeshsil/optimizing-aws-ecs-deregistering-and-deleting-unused-task-definition-revisions-llg</link>
      <guid>https://forem.com/rumeshsil/optimizing-aws-ecs-deregistering-and-deleting-unused-task-definition-revisions-llg</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B0CWaRuo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AB1_c4gCsFcMIq01yuu22Aw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B0CWaRuo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AB1_c4gCsFcMIq01yuu22Aw.png" alt="Deregistering and Deleting Unused Task Definition Revisions via AWS CLI" width="753" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Web Services (AWS) Elastic Container Service (ECS) offers a robust platform for containerized applications, allowing seamless deployment and scaling. While ECS maintains multiple revisions of task definitions to facilitate rollbacks, managing an excessive amount of unused revisions can become cumbersome over time. In this article, we’ll explore the process of deregistering and deleting unused task definition revisions efficiently using the AWS CLI, especially when dealing with a large number of revisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding Task Definition States
&lt;/h3&gt;

&lt;p&gt;Before diving into the deregistration and deletion process, let’s briefly review the different states a task definition revision can be in:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;ACTIVE: The task definition revision is currently in use by one or more ECS tasks or services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;INACTIVE: The task definition revision has been deregistered and is no longer in use. It is marked for potential deletion.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DELETE_IN_PROGRESS: Once you’ve initiated the deletion of a task definition, it moves from the INACTIVE state to DELETE_IN_PROGRESS. In this state, Amazon ECS regularly checks if any active tasks or deployments still reference the target task definition. Once it confirms there are none, the task definition is permanently deleted. During the DELETE_IN_PROGRESS state, you’re unable to run new tasks or create new services using that task definition. Importantly, you can initiate the deletion of a task definition at any time without affecting existing tasks and services.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
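&lt;p&gt;You can check which state a particular revision is in with the describe-task-definition command (the family and revision &lt;strong&gt;&lt;em&gt;my-app:3&lt;/em&gt;&lt;/strong&gt; below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs --profile &amp;lt;your_aws_profile&amp;gt; describe-task-definition --task-definition my-app:3 --query 'taskDefinition.status' --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;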

&lt;h3&gt;
  
  
  Why Deregister and Delete?
&lt;/h3&gt;

&lt;p&gt;Task definition revisions play a crucial role in versioning and maintaining the history of changes. However, as your application evolves, you may accumulate numerous revisions that are no longer in use. Clearing out these unused revisions not only declutters your ECS environment but also optimizes resource utilization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deregistering Task Definition Revisions
&lt;/h3&gt;

&lt;p&gt;Before deleting unused revisions, it’s essential to deregister them. Deregistering makes a task definition revision inactive, marking it for potential deletion.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs --profile &amp;lt;your_aws_profile&amp;gt; deregister-task-definition --task-definition &amp;lt;your-task-definition-arn&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
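&lt;p&gt;Before deregistering, you may want to list the revisions that are still ACTIVE for a given family (the family prefix &lt;strong&gt;&lt;em&gt;my-app&lt;/em&gt;&lt;/strong&gt; is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs --profile &amp;lt;your_aws_profile&amp;gt; list-task-definitions --family-prefix my-app --status ACTIVE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;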
&lt;h3&gt;
  
  
  Deleting Task Definition Revisions
&lt;/h3&gt;

&lt;p&gt;Once you’ve deregistered the revisions, you can proceed with deletion. However, keep in mind that only INACTIVE (deregistered) task definitions can be deleted.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs --profile &amp;lt;your_aws_profile&amp;gt; delete-task-definitions --task-definition &amp;lt;your-task-definition-arn&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
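&lt;p&gt;Note that the delete-task-definitions command accepts up to 10 task definitions per call, so deletions can be batched (the revisions below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs --profile &amp;lt;your_aws_profile&amp;gt; delete-task-definitions --task-definitions my-app:1 my-app:2 my-app:3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;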
&lt;h3&gt;
  
  
  Managing Large-Scale Cleanup
&lt;/h3&gt;

&lt;p&gt;While AWS Console provides a graphical interface for managing ECS resources, manually deregistering and deleting task definition revisions becomes impractical when dealing with a high volume of revisions. The AWS CLI proves to be a more efficient solution, especially in scenarios where automation and scripting are crucial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Scripting&lt;/em&gt;&lt;/strong&gt; : Utilize scripting languages like Bash, Python, or PowerShell to automate the process of deregistering and deleting task definitions. Iterate through a list of inactive revisions and execute the necessary CLI commands or API calls.&lt;/p&gt;

&lt;p&gt;Let’s write a simple Bash script named &lt;strong&gt;&lt;em&gt;deleteandderegistertaskdefs.sh&lt;/em&gt;&lt;/strong&gt; that deregisters and deletes ECS task definitions whose family name contains a specified substring:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch deleteandderegistertaskdefs.sh
chmod u+x deleteandderegistertaskdefs.sh
vi deleteandderegistertaskdefs.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Check if the profile argument is provided
if [ -z "$1" ] || [ -z "$2" ]; then
  echo "Usage: $0 &amp;lt;aws_profile&amp;gt; &amp;lt;substring&amp;gt;"
  exit 1
fi

# AWS profile
AWS_PROFILE="$1"

# Replace with the substring you want to match in the family name
SUBSTRING="$2"

# Function to extract family and revision from a task definition ARN
extract_family_revision() {
  local task_definition_arn="$1"
  local family=$(echo "$task_definition_arn" | awk -F'/' '{print $NF}')
  local revision=$(echo "$family" | awk -F':' '{print $NF}')
  family=$(echo "$family" | awk -F':' '{$NF=""; print $0}' | sed 's/ $//')
  echo "$family:$revision"
}

# Deregister task definitions
deregister_task_definitions() {
  local status="$1"
  local query="taskDefinitionArns[?contains(@, '$SUBSTRING')]"
  local task_definition_arns=$(aws --profile "$AWS_PROFILE" ecs list-task-definitions --status "$status" --query "$query" --output json)

  # Loop through each task definition ARN and initiate their deregistration
  echo "Deregistration of $status Taskdefs has started"
  for task_definition_arn in $(echo "$task_definition_arns" | jq -r '.[]'); do
    family_revision=$(extract_family_revision "$task_definition_arn")

    # Deregister the specific revision of the task definition
    aws --profile "$AWS_PROFILE" ecs deregister-task-definition --task-definition "$family_revision"
  done
  echo "$status Deregistration has finished"
}

# Deregister active task definitions
deregister_task_definitions "ACTIVE"


# Get a list of inactive task definition ARNs matching the family name substring
task_definition_arns=$(aws --profile "$AWS_PROFILE" ecs list-task-definitions --status 'INACTIVE' --query "taskDefinitionArns[?contains(@, '$SUBSTRING')]" --output json)

# Loop through each task definition ARN and initiate their deletion
echo "Deletion has started"
for task_definition_arn in $(echo "$task_definition_arns" | jq -r '.[]'); do
  family_revision=$(extract_family_revision "$task_definition_arn")

  # Deleting the specific revision of the task definition
  aws --profile "$AWS_PROFILE" ecs delete-task-definitions --task-definitions "$family_revision"
done
echo "Deletion has finished"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
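&lt;p&gt;The ARN-parsing helper can be exercised locally, without any AWS access, to confirm that it splits a task definition ARN into its family and revision (the ARN below is a made-up example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Same helper as in the script above
extract_family_revision() {
  local task_definition_arn="$1"
  local family=$(echo "$task_definition_arn" | awk -F'/' '{print $NF}')
  local revision=$(echo "$family" | awk -F':' '{print $NF}')
  family=$(echo "$family" | awk -F':' '{$NF=""; print $0}' | sed 's/ $//')
  echo "$family:$revision"
}

extract_family_revision "arn:aws:ecs:ap-southeast-2:123456789012:task-definition/my-app:7"
# prints my-app:7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;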

&lt;p&gt;In summary, the script performs the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Checks for command-line arguments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sets AWS profile and substring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Defines a function to extract family and revision.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Defines a function to deregister task definitions based on status.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deregisters active task definitions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Gets a list of inactive task definition ARNs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deletes each inactive task definition revision.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, you can run the script by providing the AWS profile and substring as command line arguments:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./deleteandderegistertaskdefs.sh profile1 testtaskdef
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;profile1&lt;/em&gt;&lt;/strong&gt; is the AWS profile argument.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;testtaskdef&lt;/em&gt;&lt;/strong&gt; is the substring or part of the string used to filter ECS task definitions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are executing this script within an EC2 instance or application that utilizes an IAM role, it is necessary to modify the code by excluding the profile argument entirely. Run the script as follows:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./deregistertaskdefs.sh testtaskdef&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Efficient management of ECS resources, including task definition revisions, is essential for maintaining a streamlined and cost-effective containerized environment. Leveraging the AWS CLI for deregistering and deleting task definition revisions ensures a systematic and automated approach, particularly when dealing with a large number of unused revisions. By incorporating these practices into your ECS maintenance routine, you can optimize resource utilization and keep your containerized applications running smoothly.&lt;/p&gt;

</description>
      <category>ecs</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Cross-Account Resource Access Using EC2 Instance Metadata.</title>
      <dc:creator>🇦🇺 ☁️ Rumesh Silva  ☁️ 🇦🇺</dc:creator>
      <pubDate>Tue, 29 Aug 2023 13:35:36 +0000</pubDate>
      <link>https://forem.com/rumeshsil/cross-account-resource-access-using-ec2-instance-metadata-nem</link>
      <guid>https://forem.com/rumeshsil/cross-account-resource-access-using-ec2-instance-metadata-nem</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2688%2F1%2AUlh0JIgtuwOGhKGDvfui1Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2688%2F1%2AUlh0JIgtuwOGhKGDvfui1Q.png" alt="Cross-Account Resource Access Using EC2 Instance Metadata"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In today’s cloud landscape, it’s common for organizations to utilize multiple AWS accounts to manage different aspects of their infrastructure. While maintaining isolation between accounts enhances security, there are scenarios where collaboration across accounts is necessary. Amazon Elastic Compute Cloud (EC2) instances provide a secure way to facilitate cross-account interactions through the use of instance metadata and profiles. This article delves into the process of enabling an EC2 instance in one AWS account to access resources in another AWS account using instance metadata and profiles. We’ll also explore alternative methods for cross-account resource access.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding Cross-Account Access with EC2 Instance Metadata and Profiles
&lt;/h3&gt;

&lt;p&gt;Cross-account resource access involves allowing an entity in one AWS account (the source account) to access resources in another AWS account (the target account). EC2 instance metadata and profiles provide a mechanism to achieve this securely:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Instance Metadata&lt;/strong&gt; : EC2 instance metadata is a service provided by AWS that offers a way to retrieve information about an instance without the need to authenticate. This metadata is stored in a well-known URL, &lt;a href="http://169.254.169.254/latest/meta-data/" rel="noopener noreferrer"&gt;http://169.254.169.254/latest/meta-data/&lt;/a&gt;, within the instance itself. By accessing this URL, instances can retrieve details such as instance ID, instance type, security groups, IAM role name, and much more. In addition to providing instance information, EC2 instance metadata is also used to retrieve temporary security credentials. These credentials grant the instance permissions to access resources in the target account. This capability provides a way to grant instances specific permissions without having to embed long-lived credentials directly into the instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Profiles :&lt;/strong&gt; A profile is a collection of settings and credentials that you can use to specify the AWS resources you want to interact with. Further, profiles allow instances to assume cross-account roles using instance metadata without explicitly specifying role ARNs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Cross-account access using EC2 instance metadata involves the following components:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-Account IAM Role&lt;/strong&gt; : In the target AWS account, you create an IAM role with the necessary permissions. This role defines which resources the EC2 instance in the source account is allowed to access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Trust Relationship&lt;/strong&gt; : The cross-account IAM role must have a trust relationship policy that specifies the AWS account of the source EC2 instance. This trust policy establishes a connection between the two accounts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Instance Metadata&lt;/strong&gt; : The EC2 instance in the source AWS account accesses instance metadata to retrieve temporary credentials associated with the cross-account IAM role. These credentials grant the instance access to resources in the target account.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Real World Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Aggregation&lt;/strong&gt; : An EC2 instance in one account could collect and aggregate data from multiple AWS accounts without exposing long-lived credentials.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Centralized Logging&lt;/strong&gt; : Instances in various accounts can push logs to a central logging bucket in a different account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-Account Data Processing&lt;/strong&gt; : Instances in one account can process data stored in buckets owned by other accounts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Steps to Enable Cross-Account Access
&lt;/h3&gt;

&lt;p&gt;Let’s illustrate the process with a practical example:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Scenario: An EC2 instance in Account A (source account) requires access to a particular Route 53 hosted zone residing in Account B (target account) in order to create Route 53 records.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1.) Launch an EC2 Instance in Account A&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3072%2F1%2AlgvIz6e58s6M_6Gb1Rh1Kw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3072%2F1%2AlgvIz6e58s6M_6Gb1Rh1Kw.png" alt="ec2 Instance residing in Account A"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An EC2 instance initiated within Account A and associated with an IAM Role named “AppServerRole”&lt;/p&gt;

&lt;p&gt;AppServerRole Trust Policy&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;AppServerRole Permission Policy&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "assumerole",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": "arn:aws:iam::&amp;lt;&amp;lt;Account Number of Account B&amp;gt;&amp;gt;:role/&amp;lt;&amp;lt;Role Name&amp;gt;&amp;gt;",
            "Effect": "Allow"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;The JSON policy document provided above grants permissions for the AppServerRole to assume a role (sts:AssumeRole) in Account B.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Please substitute &amp;lt;&amp;lt;Role Name&amp;gt;&amp;gt; with the name of the cross-account role that will be created in Step 2.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Retrieving EC2 Instance Metadata&lt;/p&gt;

&lt;p&gt;Retrieving EC2 instance metadata involves accessing information about an Amazon EC2 instance’s configuration and identity. This information is available via a web service running on a special IP address, 169.254.169.254, within the EC2 instance's network.&lt;/p&gt;

&lt;p&gt;IMDSv1&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://169.254.169.254/latest/meta-data/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above curl command accesses the EC2 instance metadata service directly, without IMDSv2 authentication. This approach uses IMDSv1, which can be susceptible to certain security vulnerabilities, especially Server-Side Request Forgery (SSRF) attacks.&lt;/p&gt;

&lt;p&gt;IMDSv2&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3758%2F1%2Ac68-lZ2Uj4OWvZ8NVR0fPg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3758%2F1%2Ac68-lZ2Uj4OWvZ8NVR0fPg.png" alt="IMDSv2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above curl command is used to securely access the instance metadata service using IMDSv2 authentication. The session token adds an additional layer of security and helps mitigate potential vulnerabilities associated with directly accessing instance metadata using IMDSv1. This approach is recommended for security-sensitive environments and scenarios.&lt;/p&gt;
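&lt;p&gt;For reference, the IMDSv2 flow shown above boils down to two steps: request a session token with a PUT call, then pass that token in the metadata request:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Request a session token valid for up to six hours
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# Pass the token with every metadata request
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;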

&lt;p&gt;&lt;strong&gt;&lt;em&gt;The key differences between Instance Metadata Service Version 1 (IMDSv1) and Instance Metadata Service Version 2 (IMDSv2) in Amazon Web Services (AWS) lie in their security mechanisms and protections. IMDSv2 was introduced to address security concerns and enhance the overall security posture when accessing instance metadata.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;2.) Create an IAM Role in Account B (Cross-Account IAM Role)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3016%2F1%2AxPtHB9jeLlLVUd1NOjSsaA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3016%2F1%2AxPtHB9jeLlLVUd1NOjSsaA.png" alt="R53 Role in Account B"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.) Attach Policies to Cross-Account IAM Role in Account B&lt;/p&gt;

&lt;p&gt;R53Role Trust Policy&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::&amp;lt;&amp;lt;Account Number of Account A&amp;gt;&amp;gt;:role/AppServerRole"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above trust policy grants permissions for ‘AppServerRole’ residing in Account A to assume ‘R53Role’ residing in Account B.&lt;/p&gt;

&lt;p&gt;R53Role Permission Policy&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "r53",
            "Effect": "Allow",
            "Action": [
                "route53:GetHostedZone",
                "route53:ChangeResourceRecordSets",
                "route53:GetChange"                
            ],
            "Resource": "arn:aws:route53:::hostedzone/&amp;lt;&amp;lt;hosted-zone-id&amp;gt;&amp;gt;"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above policy grants specific Route 53 actions (querying hosted zones, modifying resource record sets, and obtaining change details) on a designated hosted zone in Account B.&lt;/p&gt;

&lt;p&gt;4.) Configuring a profile for Amazon EC2 metadata in Account A&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Prerequisite : Ensure that the AWS CLI is installed and properly configured on the EC2 instance&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Access the EC2 instance through SSH or AWS Systems Manager (SSM). Navigate to the AWS CLI configuration file located at ~/.aws/config and append a new profile, such as "account-B" or a name you prefer. In the profile definition, include the ARN of the role you intend to assume (the ARN of the R53Role residing in Account B). Additionally, set the credential_source as Ec2InstanceMetadata.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[profile account-B]
role_arn = arn:aws:iam::&amp;lt;&amp;lt;Account Number of Account B&amp;gt;&amp;gt;:role/R53Role
credential_source = Ec2InstanceMetadata
region = ap-southeast-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;The above configuration snippet defines an AWS CLI profile named “account-B” that assumes an IAM role from Account B using EC2 instance metadata. This profile allows an EC2 instance to assume the specified role in Account B and use the obtained credentials to interact with AWS resources (the Route 53 hosted zone in Account B).&lt;/p&gt;
&lt;/blockquote&gt;
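&lt;p&gt;A quick way to confirm the profile is wired up correctly is to ask STS for the caller identity; the returned ARN should reference an assumed-role session for R53Role in Account B:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sts get-caller-identity --profile account-B
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;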

&lt;p&gt;5.) Verifying Cross Account Access&lt;/p&gt;

&lt;p&gt;Connect to the EC2 instance via SSH or AWS Systems Manager (SSM), then execute the following AWS CLI command :&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; aws route53 get-hosted-zone --id &amp;lt;&amp;lt;hosted zone id&amp;gt;&amp;gt; --profile account-B
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AEBnXj4ciDny29HjA6KnwyQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AEBnXj4ciDny29HjA6KnwyQ.png" alt="Query HostedZone Details"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The output above verifies that the EC2 instance located in Account A successfully retrieved details about a hosted zone situated in Account B.&lt;/p&gt;
&lt;/blockquote&gt;
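&lt;p&gt;With the same profile, the instance can also create a record in the Account B hosted zone by submitting a change batch. The record name, value, and file name below are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [
                    { "Value": "203.0.113.10" }
                ]
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Save the above as change-batch.json, then run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws route53 change-resource-record-sets --hosted-zone-id &amp;lt;&amp;lt;hosted zone id&amp;gt;&amp;gt; --change-batch file://change-batch.json --profile account-B
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;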

&lt;p&gt;Now, let’s try to perform an action in Account B that lacks the necessary authorization, such as attempting to list hosted zones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3720%2F1%2AYLxLJLzdURNgl5pJFFPoYg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3720%2F1%2AYLxLJLzdURNgl5pJFFPoYg.png" alt="List Hosted Zones"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This error message indicates that the AWS identity associated with the role R53Role assumed by the EC2 instance in Account A does not possess the necessary permissions to perform the route53:ListHostedZones action.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other Cross-Account Access Methods
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IAM Role Switching&lt;/strong&gt; : Instances assume roles directly using temporary credentials from instance metadata. This method requires explicit role switching and doesn’t rely on profiles.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS STS&lt;/strong&gt; : Account A requests temporary credentials from STS to assume a cross-account role. This method can be used programmatically and extends cross-account access beyond EC2 instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource-Based Access&lt;/strong&gt; : Resources like Amazon S3 buckets can be configured to allow access from specific AWS accounts, enabling cross-account resource sharing without temporary credentials.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Leveraging EC2 instance metadata and profiles enables seamless cross-account interactions while maintaining a high level of security. This approach provides the necessary permissions to access resources across accounts without exposing long-lived credentials. In scenarios where security and collaboration are paramount, this mechanism shines.&lt;/p&gt;

&lt;p&gt;Other methods like IAM role switching, AWS STS, and resource-based access offer different avenues for cross-account resource sharing, each with its own use cases and considerations. Understanding these methods empowers you to choose the most suitable approach for your specific requirements.&lt;/p&gt;

&lt;p&gt;As you embrace cross-account interactions, remember to follow best practices, manage permissions judiciously, and keep security at the forefront of your architecture.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>iam</category>
      <category>crossaccount</category>
    </item>
    <item>
      <title>Interactively Accessing Amazon ECS Fargate Containers using AWS Systems Manager Session Manager &amp; ECS Exec</title>
      <dc:creator>🇦🇺 ☁️ Rumesh Silva  ☁️ 🇦🇺</dc:creator>
      <pubDate>Thu, 17 Aug 2023 17:36:26 +0000</pubDate>
      <link>https://forem.com/rumeshsil/interactively-accessing-amazon-ecs-fargate-containers-using-aws-systems-manager-session-manager-and-ecs-exec-34bm</link>
      <guid>https://forem.com/rumeshsil/interactively-accessing-amazon-ecs-fargate-containers-using-aws-systems-manager-session-manager-and-ecs-exec-34bm</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fanina9apmfmp9k9pt5r0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fanina9apmfmp9k9pt5r0.png" alt="Accessing Fargate Container via SSM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon ECS (Elastic Container Service) Fargate is a powerful service that allows you to run containers without managing the underlying infrastructure. While Fargate offers numerous benefits in terms of scalability and ease of use, it can sometimes be challenging to interact with containers running within Fargate. AWS Systems Manager Session Manager, combined with ECS Exec, offers a secure and efficient solution for interactively accessing and managing Fargate containers without compromising security or requiring direct SSH access. This guide will walk you through the steps to enable and use ECS Exec with AWS Systems Manager Session Manager to access ECS Fargate containers interactively.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How does ECS Exec function?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;ECS Exec operates by utilizing AWS Systems Manager Session Manager to create and manage secure communication channels between your local machine and the containers running within Amazon ECS Fargate tasks. This architecture ensures a secure, isolated, and interactive experience for debugging and troubleshooting containerized applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For Amazon ECS Exec to work properly, you need to ensure that you meet several prerequisites to set up the necessary environment and permissions. Here’s a list of prerequisites to ensure ECS Exec works as intended:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;ECS Cluster and Fargate Tasks&lt;br&gt;
Have an active Amazon ECS cluster running Fargate tasks on a platform version that supports ECS Exec (platform version &lt;strong&gt;&lt;em&gt;1.4.0&lt;/em&gt;&lt;/strong&gt; or later).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IAM Roles and Policies&lt;br&gt;
The ECS task role used by your Fargate tasks needs to have appropriate permissions to interact with AWS Systems Manager Session Manager.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Session manager plugin for AWS CLI&lt;br&gt;
The session manager plugin is an extension for your AWS CLI that facilitates connecting to EC2 instances or AWS Fargate tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Network Configuration&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS tasks must be deployed within a Virtual Private Cloud (VPC).&lt;/li&gt;
&lt;li&gt;Ensure that the necessary networking configurations, such as subnets, security groups, and routes, are properly set up to enable communication between ECS Fargate tasks and Systems Manager.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
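&lt;p&gt;The task role permissions mentioned in point 2 come down to the four SSM messages actions used by the Session Manager channel; a minimal policy sketch looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;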

&lt;h3&gt;
  
  
  &lt;strong&gt;Enabling network communication between ECS Fargate Task and System Manager&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Establishing network connectivity between an ECS Fargate task and AWS Systems Manager requires configuring essential networking elements to guarantee seamless communication between these services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fargate Task Networking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Amazon ECS on AWS Fargate, tasks must use the “awsvpc” network mode, which grants each task its own elastic network interface. When you launch a task or create a service with this network mode, you need to specify which subnets to attach the network interface to and which security groups to apply to it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.) Fargate tasks placed in public subnets:&lt;/strong&gt; The task’s elastic network interface should have a public IP address, and the subnet needs a route to the internet through an internet gateway. &lt;strong&gt;&lt;em&gt;In this scenario, the Fargate task can readily interact with the AWS Systems Manager service over the public internet.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.) Fargate tasks placed in private subnets:&lt;/strong&gt; A Fargate task in a private subnet that needs to connect to the AWS Systems Manager (SSM) service &lt;strong&gt;&lt;em&gt;requires either a NAT gateway to route its requests to the internet, or Interface VPC Endpoints configured for the ssm, ec2messages, and ssmmessages services.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Choosing a VPC endpoint offers an array of benefits, including heightened security, privacy, compliance adherence, improved network performance, and enhanced control over data transmission. These factors make it a preferable option over relying on public internet connectivity for interacting with AWS services like AWS Systems Manager.&lt;/p&gt;
&lt;/blockquote&gt;
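&lt;p&gt;For the private-subnet option, the three interface endpoints can be created with a short loop. In the sketch below, the VPC, subnet, and security-group IDs are placeholders, and each command is printed for review rather than executed:&lt;/p&gt;

```shell
# Prints one create-vpc-endpoint command per service that ECS Exec needs.
# All IDs are placeholders -- substitute your own, then run the printed
# commands (or drop the leading "echo" to execute them directly).
VPC_ID="vpc-0123456789abcdef0"        # placeholder
SUBNET_ID="subnet-0123456789abcdef0"  # placeholder
SG_ID="sg-0123456789abcdef0"          # placeholder
REGION="ap-southeast-2"

for svc in ssm ssmmessages ec2messages; do
  echo aws ec2 create-vpc-endpoint \
    --vpc-id "$VPC_ID" \
    --vpc-endpoint-type Interface \
    --service-name "com.amazonaws.${REGION}.${svc}" \
    --subnet-ids "$SUBNET_ID" \
    --security-group-ids "$SG_ID" \
    --private-dns-enabled
done
```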

&lt;h3&gt;
  
  
  Setting up ECS Exec for Fargate Tasks
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Installing the Session Manager Plugin for the AWS CLI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To install the Session Manager plugin, refer to the &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; and follow the instructions tailored to your client’s operating system.&lt;/p&gt;

&lt;p&gt;The following example shows the installation process of the Session Manager plugin on macOS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54oaaywfal8lt6t8oyvc.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54oaaywfal8lt6t8oyvc.jpeg" alt="Installing SSM Plugin on Mac OS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Include SSM Permissions in the ECS Task IAM Role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Attach a policy to the ECS task IAM role that grants the permissions ECS Exec needs, such as the following example policy:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Effect": "Allow",
  "Action": [
    "ssmmessages:CreateControlChannel",
    "ssmmessages:CreateDataChannel",
    "ssmmessages:OpenControlChannel",
    "ssmmessages:OpenDataChannel"
  ],
  "Resource": "*"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljozj9ctrzkwbmkrubt7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljozj9ctrzkwbmkrubt7.png" alt="SSM Permissions as an inline policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Incorporate ECS ExecuteCommand permissions into your IAM Role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Make sure that you’ve added the necessary ECS ExecuteCommand permission to your IAM role. Add a policy that grants ECS ExecuteCommand permission. Here’s an example policy:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:ExecuteCommand"
      ],
      "Resource": "*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
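&lt;p&gt;Note that the snippet in Step 2 is a single policy statement; IAM expects it inside a complete policy document with the &lt;strong&gt;&lt;em&gt;Version&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;Statement&lt;/em&gt;&lt;/strong&gt; wrapper. The sketch below assembles the full document and runs a local JSON syntax check (it assumes python3 is available on your machine):&lt;/p&gt;

```shell
# The Step 2 snippet is a single statement; IAM expects a complete
# policy document with the Version/Statement wrapper around it.
SSM_POLICY='{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    }
  ]
}'

# Validate locally; a stray comma here only surfaces later as an IAM error.
printf '%s' "$SSM_POLICY" | python3 -m json.tool 1>/dev/null
echo "policy document is valid JSON"
```

&lt;p&gt;Once validated, the document could be attached with, for example, &lt;strong&gt;&lt;em&gt;aws iam put-role-policy --role-name MyTaskRole --policy-name ecs-exec-ssm --policy-document "$SSM_POLICY"&lt;/em&gt;&lt;/strong&gt; (the role and policy names here are hypothetical).&lt;/p&gt;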

&lt;p&gt;&lt;strong&gt;Step 4: Activate ECS Exec for your services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;i.) List the clusters&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs list-clusters --profile profile1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lt0xqk940zlbc16xego.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lt0xqk940zlbc16xego.png" alt="List the clusters"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This command will return a list of ECS cluster ARNs (Amazon Resource Names) associated with the specified AWS CLI profile, “profile1”. Make sure you have the necessary credentials and permissions configured in “profile1” to access the ECS service.&lt;/p&gt;

&lt;p&gt;ii.) List the Task Definitions&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs list-task-definitions --profile profil
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppoc373egyyhj3yf0tgk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppoc373egyyhj3yf0tgk.png" alt="List of Task defs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By executing this command, you’ll receive a list of ARNs (Amazon Resource Names) representing the available ECS task definitions in the “profile1” AWS CLI profile.&lt;/p&gt;

&lt;p&gt;iii.) List the services within TestCluster&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs list-services --cluster TestCluster  --profile profile1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foigs6mpb5jb5j33q4jf9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foigs6mpb5jb5j33q4jf9.png" alt="Services within TestCluster"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By executing this command, you’ll receive a list of ARNs (Amazon Resource Names) representing the services associated with the “TestCluster” ECS cluster in the “profile1” AWS CLI profile.&lt;/p&gt;

&lt;p&gt;iv) Enable ECS Exec&lt;/p&gt;

&lt;p&gt;Enable ECS Exec for an existing ECS service:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --profile profile1 ecs update-service \
--cluster TestCluster \
--service TestService2 \
--enable-execute-command \
--force-new-deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjz56eez9qcoojrkeixqk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjz56eez9qcoojrkeixqk.png" alt="Enable Exec on existing ECS Service"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By running this command, you’re enabling the ECS Exec feature for the specified service and forcing a new deployment of its tasks, so that the updated tasks pick up the ECS Exec configuration.&lt;/p&gt;
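&lt;p&gt;If a cluster has many services, the same update can be scripted. The loop below prints the update-service command for each service rather than running it, so the commands can be reviewed first; the service names are hard-coded here for illustration, following the article’s examples:&lt;/p&gt;

```shell
CLUSTER="TestCluster"
PROFILE="profile1"

# In a real run, this list would come from:
#   aws ecs list-services --cluster "$CLUSTER" --profile "$PROFILE"
# It is hard-coded here so the loop can be shown end to end.
SERVICES="TestService2 TestService3"

for svc in $SERVICES; do
  echo aws --profile "$PROFILE" ecs update-service \
    --cluster "$CLUSTER" \
    --service "$svc" \
    --enable-execute-command \
    --force-new-deployment
done
```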

&lt;p&gt;Enable ECS Exec for a new ECS service:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs create-service --cluster TestCluster \
--service-name TestService3 \
--task-definition TestTaskDef:3 \
--desired-count 1 \
--network-configuration "{\"awsvpcConfiguration\":{\"subnets\":[\"subnet-0b642d3591fe3cf87\"],\"assignPublicIp\":\"ENABLED\"}}" \
--launch-type FARGATE \
--enable-execute-command \
--profile profile1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgx2c1wigo1pff1lcl1h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgx2c1wigo1pff1lcl1h.png" alt="Enabling exec for a new service"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By executing this command, you’re creating a new ECS service with the specified configuration, including enabling the ECS Exec feature and setting up the networking for the Fargate tasks (deploying them in a public subnet with automatic assignment of public IP addresses to their network interfaces).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Accessing the Container using ECS exec&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Upon completing all the aforementioned steps, you should now see the following two services running within the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ecz9zhrtyonhjajwmhz.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ecz9zhrtyonhjajwmhz.jpeg" alt="Services running within the cluster"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s proceed to attempt accessing &lt;strong&gt;&lt;em&gt;TestService3&lt;/em&gt;&lt;/strong&gt; using AWS Systems Manager (SSM).&lt;/p&gt;

&lt;p&gt;Choose TestService3, navigate to the Tasks tab, and note the Task ID (&lt;strong&gt;&lt;em&gt;cf0be9da96e54446984217c9921435ec&lt;/em&gt;&lt;/strong&gt;) and container name (&lt;strong&gt;&lt;em&gt;TestContainer&lt;/em&gt;&lt;/strong&gt;). Then execute the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --profile profile1 ecs execute-command --cluster TestCluster \
--task cf0be9da96e54446984217c9921435ec \
--container TestContainer \
--command "/bin/sh" \
--interactive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;By running this command, you’re utilizing the “profile1” profile to trigger the execution of the specified command within the ECS task, providing an interactive shell interface for interaction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4y7chrbwun5g4jcoj7e9.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4y7chrbwun5g4jcoj7e9.jpeg" alt="Running ECS exec"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Congratulations! You’ve successfully remotely accessed the running container, powered by Fargate. This remote access allows you to troubleshoot any errors with remarkable ease.&lt;/em&gt;&lt;/p&gt;
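&lt;p&gt;If the execute-command call ever fails, a useful first check is whether the SSM managed agent inside the task is actually running. The sketch below prints the describe-tasks query for the task from the example above; in a healthy setup, the &lt;strong&gt;&lt;em&gt;ExecuteCommandAgent&lt;/em&gt;&lt;/strong&gt; reports a lastStatus of RUNNING:&lt;/p&gt;

```shell
CLUSTER="TestCluster"
TASK_ID="cf0be9da96e54446984217c9921435ec"

# JMESPath query that narrows describe-tasks output down to the
# ECS Exec managed agent and its status.
QUERY='tasks[].containers[].managedAgents[?name==`ExecuteCommandAgent`]'

# Printed for review; run the printed command against your own account.
echo aws --profile profile1 ecs describe-tasks \
  --cluster "$CLUSTER" \
  --tasks "$TASK_ID" \
  --query "$QUERY"
```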

&lt;p&gt;&lt;a href="https://repost.aws/knowledge-center/fargate-ecs-exec-errors" rel="noopener noreferrer"&gt;How do I troubleshoot errors I receive when performing Amazon ECS Exec on my Fargate tasks?&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Amazon ECS Fargate, coupled with AWS Systems Manager Session Manager and ECS Exec, empowers developers and operators to dynamically troubleshoot and manage containers securely. By following this guide, you can efficiently access and interact with Fargate containers, streamline debugging, and ensure the operational success of your containerized applications.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ecs</category>
    </item>
    <item>
      <title>Understanding AWS Scaling: Achieving Efficiency and Resilience in the Cloud</title>
      <dc:creator>🇦🇺 ☁️ Rumesh Silva  ☁️ 🇦🇺</dc:creator>
      <pubDate>Thu, 10 Aug 2023 14:45:25 +0000</pubDate>
      <link>https://forem.com/rumeshsil/understanding-aws-scaling-achieving-efficiency-and-resilience-in-the-cloud-2n85</link>
      <guid>https://forem.com/rumeshsil/understanding-aws-scaling-achieving-efficiency-and-resilience-in-the-cloud-2n85</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tuQiPoe---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AfPg_JTORMkG2_DMdiv6d9Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tuQiPoe---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AfPg_JTORMkG2_DMdiv6d9Q.png" alt="AWS Scaling" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the ever-changing world of cloud computing, it’s crucial for modern apps to grow when needed. Amazon Web Services (AWS) is a leader in making this happen. They offer ways for businesses to adjust to changing needs without a hitch. This article explains AWS scaling: why it matters, the good things it brings, and the ways it helps apps perform better, cost less, and stay strong.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Essence of Scaling in AWS:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;At its core, scaling in AWS refers to the practice of adjusting the capacity of computing resources — such as virtual machines, containers, databases, or serverless functions — to match the current workload. The goal is &lt;strong&gt;&lt;em&gt;to ensure optimal performance and cost efficiency by having the right amount of resources available at the right time&lt;/em&gt;&lt;/strong&gt;. AWS scaling addresses the challenges of both under-provisioning, which can lead to performance issues, and over-provisioning, which results in unnecessary costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why Scaling Matters:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Scaling is crucial for several reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance:&lt;/strong&gt; Scaling ensures that applications can handle varying levels of traffic without degradation in performance. It keeps response times fast and maintains a positive user experience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Efficiency:&lt;/strong&gt; By dynamically adjusting resources to match the demand, scaling helps prevent over-paying for unused resources. It optimizes resource allocation and cost management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resilience:&lt;/strong&gt; Scaling enhances application availability and resilience. When one instance fails, the load can be distributed across other healthy instances, minimizing downtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Agility:&lt;/strong&gt; With the cloud’s elasticity, businesses can quickly respond to changes in demand, whether due to unexpected traffic spikes or seasonal variations.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Scaling Strategies in AWS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AWS Scaling can be categorized broadly into two types: Vertical Scaling and Horizontal Scaling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vertical Scaling&lt;/strong&gt; : Also known as “&lt;strong&gt;&lt;em&gt;scaling up&lt;/em&gt;&lt;/strong&gt;” or “&lt;strong&gt;&lt;em&gt;resizing&lt;/em&gt;&lt;/strong&gt;,” this involves adjusting the size or capacity of an individual instance within a system. It typically entails increasing or decreasing the resources allocated to a single instance, such as its CPU, memory, storage, or network capacity. Vertical Scaling is like upgrading your computer’s hardware to a more powerful model.&lt;/p&gt;

&lt;p&gt;When you’re using AWS, vertical scaling means changing the “size” of your virtual machine. It’s like getting a bigger or smaller instance type. For example, you could move your instance from a small type (like a t2.micro) to a bigger one (like a t2.large), giving it more capacity to work with. Vertical Scaling is a good fit when your application needs more resources but you don’t need to run many instances at once.&lt;/p&gt;

&lt;p&gt;But vertical scaling has its boundaries. At some point, making things bigger could become too expensive or the kind of instance type you want might not be an option. That’s when horizontal scaling comes in handy — it means adding more computers to help out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Horizontal Scaling&lt;/strong&gt; : Also referred to as “&lt;strong&gt;&lt;em&gt;scaling out&lt;/em&gt;&lt;/strong&gt;,” this involves increasing the number of instances or resources to handle a larger load or to improve system redundancy. Instead of making individual instances more powerful (as in vertical scaling), horizontal scaling adds more instances to distribute the workload across multiple resources.&lt;/p&gt;

&lt;p&gt;In AWS, when it comes to horizontal scaling, it usually means putting more instances into a group that can automatically adjust the number of instances as needed. This helps the system handle more users or tasks without relying only on one big instance.&lt;/p&gt;

&lt;p&gt;Horizontal Scaling works best for applications that can be divided and spread out over many instances. This helps if one instance stops working, as the others can still handle the work. But keep in mind that handling lots of instances might need extra setup and automation to use them effectively.&lt;/p&gt;

&lt;h3&gt;
  
  
  Further Breakdown of AWS Scaling
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x3MLg04d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2524/1%2AQi3sqmcb9C4FLZqdaNxphQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x3MLg04d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2524/1%2AQi3sqmcb9C4FLZqdaNxphQ.png" alt="Further Breakdown of AWS Scaling" width="800" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Auto-Scaling ?
&lt;/h3&gt;

&lt;p&gt;AWS Auto Scaling is a service provided by Amazon Web Services that automatically adjusts the number of resources, such as Amazon EC2 instances or ECS tasks, in a group to match the desired performance, availability, and cost requirements. It helps ensure that your application can handle varying levels of traffic without manual intervention.&lt;/p&gt;

&lt;p&gt;With AWS Auto Scaling, you define the desired number of instances or other resources that you want to maintain, and the service automatically scales the group up or down based on factors such as CPU utilization, network traffic, or custom metrics you define. This dynamic scaling ensures that your application remains responsive and cost-efficient, as it can automatically add resources during peak demand and remove them during periods of lower activity.&lt;/p&gt;

&lt;p&gt;In addition to maintaining consistent performance, AWS Auto Scaling also enhances application availability and reduces the risk of overprovisioning or underprovisioning resources. This service is particularly useful in scenarios where the workload is unpredictable or experiences fluctuations over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Horizontal Scaling Strategies
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Predictive Scaling&lt;/em&gt;&lt;/strong&gt; : Predictive Scaling in AWS is a scaling approach that uses historical data and machine learning algorithms to forecast future traffic patterns. This enables the system to proactively adjust resources before the expected surge or drop in demand occurs. By analyzing past usage trends, predictive scaling ensures that the right number of resources is available precisely when needed, optimizing performance and cost-efficiency.&lt;br&gt;
&lt;strong&gt;Example&lt;/strong&gt; : &lt;em&gt;Imagine you run an online store, and you know that during holidays like Black Friday, your website gets a lot more visitors. With Predictive Scaling, AWS can analyze past holiday seasons’ data and predict when your website will have the most visitors. It will automatically add more servers before the rush starts, ensuring your website doesn’t slow down or crash during the high traffic times.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Scheduled Scaling&lt;/em&gt;&lt;/strong&gt; : Resources are adjusted based on a predefined schedule, useful for predictable demand changes, such as peak business hours.&lt;br&gt;
&lt;strong&gt;Example :&lt;/strong&gt; &lt;em&gt;Let’s say you have a web application that experiences predictable changes in traffic throughout the day. During business hours, the number of users accessing your application increases, but at night, the usage drops significantly. Instead of keeping the same number of servers running all the time, which could be wasteful and expensive, you can use Scheduled Scaling.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Manual Scaling&lt;/strong&gt; : This involves adjusting resources based on manual intervention. It offers more control but may not be as responsive to rapidly changing workloads.&lt;br&gt;
&lt;strong&gt;Example&lt;/strong&gt; : &lt;em&gt;Suppose you run an online store, and you’re running a special promotion that you expect will bring a surge of visitors to your website. To make sure your website doesn’t slow down or crash during this busy period, you can use Manual Scaling.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dynamic Scaling&lt;/strong&gt; : This approach automatically adjusts resources based on real-time workload changes. AWS Auto Scaling is a service that embodies this strategy, allowing you to define scaling policies based on metrics such as CPU utilization or request count.&lt;br&gt;
&lt;strong&gt;Example&lt;/strong&gt; : &lt;em&gt;Imagine you have a mobile app that offers real-time updates during a live sports event. Normally, your app has a steady number of users, but during the match, the traffic can spike significantly. This is where Dynamic Scaling comes in.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  AWS Dynamic Scaling Policies
&lt;/h3&gt;

&lt;p&gt;AWS Dynamic Scaling Policies are rules or instructions that tell AWS how to automatically adjust the number of resources, such as instances or containers, based on real-time conditions. These policies help ensure that your applications are responsive, efficient, and cost-effective.&lt;/p&gt;

&lt;p&gt;The following are the different types of dynamic scaling policies in AWS, each addressing specific scenarios:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Simple Scaling :&lt;/em&gt;&lt;/strong&gt; With this policy, you define specific thresholds for a metric, such as CPU usage. When the metric goes beyond these thresholds, AWS adds or removes resources as needed.&lt;br&gt;
&lt;em&gt;Simple Scaling Policies are exclusively available within EC2 Auto Scaling.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Target Tracking Scaling :&lt;/em&gt;&lt;/strong&gt; This policy keeps a specific metric, like CPU utilization or request rate, at a target value. AWS automatically adds or removes resources to maintain this target, adapting to changing demand.&lt;br&gt;
&lt;em&gt;Both EC2 Auto Scaling and Application Auto Scaling provide support for Target Tracking Policies.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step Scaling :&lt;/em&gt;&lt;/strong&gt; Step Scaling allows you to set up scaling adjustments at specific intervals or steps. For instance, you might add more resources if a metric reaches a certain level and then add even more if it exceeds another threshold.&lt;br&gt;
&lt;em&gt;Both EC2 Auto Scaling and Application Auto Scaling provide support for Step Scaling Policies.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Scheduled Scaling&lt;/em&gt;&lt;/strong&gt; : This policy lets you plan scaling actions based on a schedule. You can increase resources before expected traffic spikes and decrease them during quieter times.&lt;br&gt;
&lt;em&gt;Scheduled Scaling Policies are exclusively available within Application Auto Scaling.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
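&lt;p&gt;To make the target-tracking case concrete, the sketch below builds a configuration that holds an ECS service’s average CPU at 50%, validates the JSON locally, and prints the CLI call that would attach it. The cluster, service, and policy names are placeholders, and python3 is assumed to be available for the syntax check:&lt;/p&gt;

```shell
# Target-tracking configuration that holds average ECS service CPU at 50%.
# All values and names below are illustrative.
TT_CONFIG='{
  "TargetValue": 50.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleInCooldown": 300,
  "ScaleOutCooldown": 60
}'

# Validate the JSON locally before handing it to the CLI.
printf '%s' "$TT_CONFIG" | python3 -m json.tool 1>/dev/null

# The call that would attach the policy (printed for review; IDs are placeholders):
echo aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/TestCluster/TestService \
  --policy-name cpu50-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration "$TT_CONFIG"
```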

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AWS scaling serves as the backbone of a resilient, high-performance, and cost-effective cloud infrastructure. It empowers businesses to dynamically adapt to varying workloads while optimizing resource allocation and minimizing operational complexities. By implementing appropriate scaling strategies and leveraging AWS’s scaling services, organizations can future-proof their applications and ensure exceptional user experiences in the ever-evolving landscape of cloud computing.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>autoscaling</category>
    </item>
    <item>
      <title>Associating a VPC in a Different AWS Account with a Hosted Zone</title>
      <dc:creator>🇦🇺 ☁️ Rumesh Silva  ☁️ 🇦🇺</dc:creator>
      <pubDate>Fri, 04 Aug 2023 04:12:25 +0000</pubDate>
      <link>https://forem.com/rumeshsil/associating-a-vpc-in-a-different-aws-account-with-a-hosted-zone-329l</link>
      <guid>https://forem.com/rumeshsil/associating-a-vpc-in-a-different-aws-account-with-a-hosted-zone-329l</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JhGbGfB6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2A4QoSY_6aGanegHZjp-6yxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JhGbGfB6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2A4QoSY_6aGanegHZjp-6yxg.png" alt="Associating Hosted Zone with a VPC" width="773" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Amazon Web Services (AWS) provides a robust environment for building and managing cloud infrastructure. However, certain scenarios require the association of resources across different AWS accounts. One such scenario involves associating a Virtual Private Cloud (VPC) located in one AWS account with a Route 53 hosted zone located in another AWS account. While the AWS Management Console facilitates VPC-to-hosted zone association within the same account, cross-account association requires a different approach. In this article, we will explore how to achieve this association using AWS Command Line Interface (CLI) commands.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use-case
&lt;/h3&gt;

&lt;p&gt;The use case of associating a VPC in a different AWS account with a hosted zone involves enabling resources in one AWS account’s Virtual Private Cloud (VPC) to interact with a hosted zone in another AWS account’s Route 53 service. This association is useful for DNS resolution and communication purposes.&lt;/p&gt;

&lt;h3&gt;
  
  
  For the purpose of illustration, consider the following scenario where Account_B requires DNS resolution for the private hosted zone in Account_A.
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Two AWS accounts, referred to as &lt;strong&gt;&lt;em&gt;Account_A&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;Account_B&lt;/em&gt;&lt;/strong&gt;, with corresponding account numbers &lt;strong&gt;&lt;em&gt;11111111&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;22222222&lt;/em&gt;&lt;/strong&gt;, respectively.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Private Hosted Zone has been established in &lt;strong&gt;&lt;em&gt;Account_A&lt;/em&gt;&lt;/strong&gt;, with the Hosted Zone ID being &lt;strong&gt;&lt;em&gt;Z458514111102&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In &lt;strong&gt;&lt;em&gt;Account_B&lt;/em&gt;&lt;/strong&gt;, there exists a VPC identified by the VPC ID &lt;strong&gt;&lt;em&gt;vpc-1458522bhuf&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You’ve configured two AWS profiles on your local computer, each assuming the corresponding AWS role with Route 53 permissions in the respective target accounts. The profiles are as follows: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Account_A&lt;/em&gt;&lt;/strong&gt; is represented by &lt;strong&gt;&lt;em&gt;profile-A&lt;/em&gt;&lt;/strong&gt;, and &lt;strong&gt;&lt;em&gt;Account_B&lt;/em&gt;&lt;/strong&gt; is represented by &lt;strong&gt;&lt;em&gt;profile-B&lt;/em&gt;&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Profile_A&lt;/strong&gt; Access Permissions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;List and get hosted zones in Route 53: &lt;strong&gt;&lt;em&gt;route53:Get*&lt;/em&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;em&gt;route53:List*&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create and manage hosted zones in Route 53: &lt;strong&gt;&lt;em&gt;route53:*HostedZone&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create and manage VPC association authorizations: &lt;strong&gt;&lt;em&gt;route53:*VPCAssociationAuthorization&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Profile_B&lt;/strong&gt; Access Permissions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Permissions to associate a VPC with a hosted zone : &lt;strong&gt;&lt;em&gt;route53:AssociateVPCWithHostedZone&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;List and describe VPCs in Account_B : &lt;strong&gt;&lt;em&gt;ec2:DescribeVpcs&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
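&lt;p&gt;The permission lists above can be expressed as IAM policy documents. The sketch below shows one possible policy for the role behind profile-A, using the wildcarded action names from the bullet list (in practice, scope &lt;strong&gt;&lt;em&gt;Resource&lt;/em&gt;&lt;/strong&gt; down to the specific hosted zone ARN), together with a local JSON syntax check that assumes python3 is available:&lt;/p&gt;

```shell
# One possible policy for the role assumed by profile-A. The wildcarded
# actions mirror the bullet list above; in practice, scope "Resource"
# down to the specific hosted zone ARN.
PROFILE_A_POLICY='{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:Get*",
        "route53:List*",
        "route53:CreateVPCAssociationAuthorization",
        "route53:DeleteVPCAssociationAuthorization"
      ],
      "Resource": "*"
    }
  ]
}'

# Quick local syntax check before creating the policy in IAM.
printf '%s' "$PROFILE_A_POLICY" | python3 -m json.tool 1>/dev/null
echo "profile-A policy is valid JSON"
```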

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Let’s examine the steps required to associate the VPC in Account_B with the hosted zone in Account_A.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create an association-authorization request in Account_A, the account where the hosted zone resides.
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;The following command should be executed in the account where the zone is intended to be shared; in our scenario, that is Account_A.&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws route53 create-vpc-association-authorization --hosted-zone-id Z458514111102 --vpc VPCRegion=ap-southeast-2,VPCId=vpc-1458522bhuf --profile profile-A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;This AWS CLI command initiates the process of creating an association-authorization request in &lt;strong&gt;&lt;em&gt;Account_A&lt;/em&gt;&lt;/strong&gt;. This request allows the VPC (&lt;strong&gt;&lt;em&gt;vpc-1458522bhuf&lt;/em&gt;&lt;/strong&gt;) from &lt;strong&gt;&lt;em&gt;Account_B&lt;/em&gt;&lt;/strong&gt; to be associated with the hosted zone specified by its ID (&lt;strong&gt;&lt;em&gt;Z458514111102&lt;/em&gt;&lt;/strong&gt;) in &lt;strong&gt;&lt;em&gt;Account_A&lt;/em&gt;&lt;/strong&gt;. The action is performed using the &lt;strong&gt;&lt;em&gt;profile-A&lt;/em&gt;&lt;/strong&gt; credentials for authentication.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Step 2: Associate the VPC in Account_B with the hosted zone in Account_A
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;The following command should be executed in the account that requires access to the private hosted zone; in our scenario, that is Account_B.&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; aws route53 associate-vpc-with-hosted-zone --hosted-zone-id Z458514111102 --vpc VPCRegion=ap-southeast-2,VPCId=vpc-1458522bhuf --profile profile-B
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;This command performs the association of the specified VPC (&lt;strong&gt;&lt;em&gt;vpc-1458522bhuf&lt;/em&gt;&lt;/strong&gt;) from &lt;strong&gt;&lt;em&gt;Account_B&lt;/em&gt;&lt;/strong&gt; with the hosted zone identified by its ID (&lt;strong&gt;&lt;em&gt;Z458514111102&lt;/em&gt;&lt;/strong&gt;) in &lt;strong&gt;&lt;em&gt;Account_A&lt;/em&gt;&lt;/strong&gt;. The process takes place using the &lt;strong&gt;&lt;em&gt;profile-B&lt;/em&gt;&lt;/strong&gt; credentials for authentication.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Upon completing the aforementioned two steps, the VPC located in Account_B has been effectively associated with the private hosted zone in Account_A.&lt;/p&gt;

&lt;p&gt;Let’s confirm the above by executing the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws route53 list-hosted-zones-by-vpc --vpc-id vpc-1458522bhuf --vpc-region ap-southeast-2 --profile profile-B
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;This command will provide you with information about all the private hosted zones that the VPC in Account_B (vpc-1458522bhuf) is associated with.&lt;/p&gt;
&lt;/blockquote&gt;
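&lt;p&gt;For reference, the response should have roughly the following shape (the values below are illustrative, taken from our scenario):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "HostedZoneSummaries": [
        {
            "HostedZoneId": "Z458514111102",
            "Name": "internal.example.com.",
            "Owner": {
                "OwningAccount": "111111111111"
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;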

&lt;p&gt;This can also be confirmed using the Route 53 console in Account_A.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A6dzckQ5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3236/1%2AStCkaug6MgKP-8Wd4EyDnQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A6dzckQ5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3236/1%2AStCkaug6MgKP-8Wd4EyDnQ.png" alt="R53 Console" width="800" height="666"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;What is the outcome of the above ?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The outcome of the above process is the successful establishment of an association between the VPC in &lt;strong&gt;&lt;em&gt;Account_B&lt;/em&gt;&lt;/strong&gt; and the &lt;strong&gt;&lt;em&gt;private hosted zone&lt;/em&gt;&lt;/strong&gt; in &lt;strong&gt;&lt;em&gt;Account_A&lt;/em&gt;&lt;/strong&gt;. This means that any &lt;strong&gt;&lt;em&gt;DNS queries&lt;/em&gt;&lt;/strong&gt; originating from resources within the VPC in &lt;strong&gt;&lt;em&gt;Account_B&lt;/em&gt;&lt;/strong&gt; will be able to resolve records from the associated private hosted zone in &lt;strong&gt;&lt;em&gt;Account_A&lt;/em&gt;&lt;/strong&gt;. This enables seamless communication and resource access between the VPCs in different AWS accounts using the DNS names defined in the &lt;strong&gt;&lt;em&gt;private hosted zone&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Delete association-authorization request initiated in Step 1 (recommended).
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;The following commands should be executed in the account where the association-authorization request was created; in our scenario, that is Account_A.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;List the authorizations created in Account_A:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws route53 list-vpc-association-authorizations --hosted-zone-id Z458514111102 --profile profile-A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Delete the VPC association-authorization request:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws route53 delete-vpc-association-authorization --hosted-zone-id Z458514111102 --vpc VPCRegion=ap-southeast-2,VPCId=vpc-1458522bhuf --profile profile-A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;This command removes the association-authorization request in &lt;strong&gt;&lt;em&gt;Account_A&lt;/em&gt;&lt;/strong&gt; that allowed the VPC (&lt;strong&gt;&lt;em&gt;vpc-1458522bhuf&lt;/em&gt;&lt;/strong&gt;) from &lt;strong&gt;&lt;em&gt;Account_B&lt;/em&gt;&lt;/strong&gt; to be associated with the hosted zone specified by its ID (&lt;strong&gt;&lt;em&gt;Z458514111102&lt;/em&gt;&lt;/strong&gt;) in &lt;strong&gt;&lt;em&gt;Account_A&lt;/em&gt;&lt;/strong&gt;. The action is performed using the &lt;strong&gt;&lt;em&gt;profile-A&lt;/em&gt;&lt;/strong&gt; credentials for authentication.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Deleting the associations is part of proper resource management. It helps you keep your AWS environment organized and efficient by removing unnecessary permissions.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Remember that deleting the association-authorization request won’t impact the existing associations between the VPC and the hosted zone. It simply prevents new associations from being made.&lt;/em&gt;&lt;/p&gt;
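&lt;p&gt;If several authorizations have accumulated on the zone, they can be removed in one pass. The following is a minimal sketch (assuming the &lt;em&gt;jq&lt;/em&gt; utility is installed; the zone ID and profile are from our scenario):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env bash
# Remove every pending VPC association authorization for the hosted zone.
ZONE_ID="Z458514111102"
PROFILE="profile-A"

aws route53 list-vpc-association-authorizations \
    --hosted-zone-id "$ZONE_ID" --profile "$PROFILE" \
    --query 'VPCs' --output json |
jq -c '.[]' | while read -r vpc; do
    region=$(jq -r '.VPCRegion' &lt;&lt;&lt; "$vpc")
    vpc_id=$(jq -r '.VPCId' &lt;&lt;&lt; "$vpc")
    aws route53 delete-vpc-association-authorization \
        --hosted-zone-id "$ZONE_ID" \
        --vpc "VPCRegion=$region,VPCId=$vpc_id" \
        --profile "$PROFILE"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;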

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While the AWS Management Console provides an intuitive interface for many AWS tasks, certain scenarios, such as associating a VPC from one account with a Route 53 hosted zone in another account, require the power and flexibility of the AWS CLI. By following this guide, you can successfully accomplish cross-account VPC associations, ensuring efficient resource management and improved security across your AWS infrastructure.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dns</category>
      <category>route53</category>
      <category>vpc</category>
    </item>
    <item>
      <title>A Comprehensive Guide to Various Sceptre Commands</title>
      <dc:creator>🇦🇺 ☁️ Rumesh Silva  ☁️ 🇦🇺</dc:creator>
      <pubDate>Thu, 03 Aug 2023 00:43:02 +0000</pubDate>
      <link>https://forem.com/rumeshsil/a-comprehensive-guide-to-various-sceptre-commands-3d5d</link>
      <guid>https://forem.com/rumeshsil/a-comprehensive-guide-to-various-sceptre-commands-3d5d</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2Aldv_9D6CCGEkuBR81RE8zw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2Aldv_9D6CCGEkuBR81RE8zw.png" alt="Sceptre Commands"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Sceptre provides various commands that cater to different aspects of managing CloudFormation stacks and interacting with AWS resources. In this article, we’ll explore the various Sceptre commands and how they can streamline your cloud infrastructure management.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sceptre CLI (Version 4.2.2)
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Please consult the official Sceptre documentation or the documentation specific to version 4.2.2 for the most accurate and up-to-date information on commands and their usage.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Usage
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Usage: sceptre [OPTIONS] COMMAND [ARGS]...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Sceptre serves as a command-line tool, and if you run it without a sub-command, it will display helpful information by showing a list of the available commands.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre 
sceptre --help
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3500%2F1%2A8aI3eUvb0nblASDPf8GKkA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3500%2F1%2A8aI3eUvb0nblASDPf8GKkA.png" alt="output of sceptre or sceptre — help commands"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Various Sceptre Commands
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;1.) create&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

    sceptre create --help
    Usage: sceptre create [OPTIONS] PATH [CHANGE_SET_NAME]

      Creates a stack for a given config PATH. Or if CHANGE_SET_NAME is specified
      creates a change set for stack in PATH.

    Options:
      -y, --yes                       Assume yes to all questions.
      --disable-rollback / --enable-rollback
                                      Disable or enable the cloudformation
                                      automatic rollback
      --help                          Show this message and exit.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create a stack&lt;/p&gt;

&lt;p&gt;&lt;em&gt;usage: sceptre create [options] PATH&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre create -y s3-bucket-config.yaml 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create a changeset&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Using changesets is a best practice when managing CloudFormation stacks, especially in production and other controlled environments. It promotes a well-defined and cautious approach to making changes, reducing the risk of disruptions and ensuring the stability of your cloud infrastructure.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For additional details on changesets, please check this &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html" rel="noopener noreferrer"&gt;page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;usage: sceptre create [options] PATH [CHANGE_SET_NAME]&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre create -y --enable-rollback s3-bucket-config.yaml updatename
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;The command creates a change set called “updatename” for the stack defined in the configuration file “s3-bucket-config.yaml”. The “-y” option skips confirmation prompts, and “--enable-rollback” enables automatic rollback in case of failures.&lt;/p&gt;
&lt;/blockquote&gt;
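&lt;p&gt;Putting it together, a typical change-set workflow is: create the change set, review it, then apply it. The describe and execute sub-commands used below are each covered in their own sections:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre create -y s3-bucket-config.yaml updatename            # create the change set
sceptre describe change-set s3-bucket-config.yaml updatename  # review the proposed changes
sceptre execute -y s3-bucket-config.yaml updatename           # apply the change set
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;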

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3500%2F1%2ABA-uiAl8-nyLL2unHL439w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3500%2F1%2ABA-uiAl8-nyLL2unHL439w.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;2.) delete&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre delete --help
Usage: sceptre delete [OPTIONS] PATH [CHANGE_SET_NAME]

  Deletes a stack for a given config PATH. Or if CHANGE_SET_NAME is specified
  deletes a change set for stack in PATH.

Options:
  -y, --yes  Assume yes to all questions.
  --help     Show this message and exit.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Delete a stack&lt;/p&gt;

&lt;p&gt;&lt;em&gt;usage: sceptre delete [options] PATH&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre delete -y s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Delete a changeset&lt;/p&gt;

&lt;p&gt;&lt;em&gt;usage: sceptre delete [options] PATH [CHANGE_SET_NAME]&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre delete -y s3-bucket-config.yaml updatename
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3500%2F1%2A9qCNQdJtA7Z78xgldRqWaA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3500%2F1%2A9qCNQdJtA7Z78xgldRqWaA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;3.) describe&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre describe --help
Usage: sceptre describe [OPTIONS] COMMAND [ARGS]...

  Commands for describing attributes of stacks.

Options:
  --help  Show this message and exit.

Commands:
  change-set  Describes the change set.
  policy      Displays the stack policy used.

 sceptre describe change-set  s3-bucket-config.yaml updatename
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5780%2F1%2A214cGLz4f9R23U00nvY6Bg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5780%2F1%2A214cGLz4f9R23U00nvY6Bg.png" alt="Describe Changeset"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The above output suggests that the change set “updatename” includes a modification to the existing AWS S3 bucket resource named “MyS3Bucket” within the “my-s3-bucket-stack” CloudFormation stack. The change involves updating the properties of the bucket. Additionally, the resource is marked for &lt;strong&gt;replacement&lt;/strong&gt;, indicating that a new physical resource will be created to apply the update.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Change sets can also be examined through the AWS CloudFormation console, providing a user-friendly graphical interface to visualize the alterations made to stack resources.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; sceptre describe policy  s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3552%2F1%2ATep3_cJrFxMWyQLPOnJJLw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3552%2F1%2ATep3_cJrFxMWyQLPOnJJLw.png" alt="Describe the policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;4.) diff&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre diff --help               
Usage: sceptre diff [OPTIONS] PATH

  Indicates the difference between the currently DEPLOYED stacks in the
  command path and the stacks configured in Sceptre right now. This command
  will compare both the templates as well as the subset of stack
  configurations that can be compared. By default, only stacks that would be
  launched via the launch command will be diffed, but you can diff ALL stacks
  relevant to the passed command path if you pass the --all flag.

  Some settings (such as sceptre_user_data) are not available in a
  CloudFormation stack description, so the diff will not be indicated.
  Currently compared stack configurations are:

    * parameters
    * notifications
    * cloudformation_service_role
    * stack_tags

  Important: There are resolvers (notably !stack_output) that rely on other
  stacks to be already deployed when they are resolved. When producing a diff
  on Stack Configs that have such resolvers that point to non-deployed stacks,
  this presents a challenge, since this means those resolvers cannot be
  resolved. This particularly applies to stack parameters and when a stack's
  template uses sceptre_user_data with resolvers in it. In order to continue
  to be useful when producing a diff in these conditions, this command will do
  the following:

  1. If the resolver CAN be resolved, it will be resolved and the resolved
  value will be in the diff results. 2. If the resolver CANNOT be resolved, it
  will be replaced with a string that represents the resolver and its
  arguments. For example: !stack_output my_stack.yaml::MyOutput will resolve
  in the parameters to "{ !StackOutput(my_stack.yaml::MyOutput) }".

  Particularly in cases where the replaced value doesn't work in the template
  as the template logic requires and causes an error, there is nothing further
  Sceptre can do and diffing will fail.

Options:
  -t, --type [deepdiff|difflib]  The type of differ to use. Use "deepdiff" for
                                 recursive key/value comparison. "difflib"
                                 produces a more traditional "diff" result.
                                 Defaults to deepdiff.
  -s, --show-no-echo             If set, will display the unmasked values of
                                 NoEcho parameters generated LOCALLY (NoEcho
                                 parameters for deployed stacks will always be
                                 masked when retrieved from CloudFormation.).
                                 If not set (the default), parameters
                                 identified as NoEcho on the local template
                                 will be masked when presented in the diff.
  -n, --no-placeholders          If set, no placeholder values will be
                                 supplied for resolvers that cannot be
                                 resolved.
  -a, --all                      If set, will perform diffing on ALL stacks,
                                 including ignored and obsolete ones;
                                 Otherwise, it will diff only stacks that
                                 would be created or updated when running the
                                 launch command.
  --help                         Show this message and exit.

sceptre diff s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3552%2F1%2AgiwZp7jwjok1g8Dk-u8jMg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3552%2F1%2AgiwZp7jwjok1g8Dk-u8jMg.png" alt="sceptre diff"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The detected difference reveals that the “bucketname” parameter within the CloudFormation stack has been modified, changing from “first-secptre-bucket-20230728” to “first-secptre-bucket-20230729.” Notably, the CloudFormation template itself remains unchanged during this update.&lt;/p&gt;
&lt;/blockquote&gt;
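&lt;p&gt;For context, the diff above was produced by a one-line edit to the Stack Config. A sketch of the relevant fragment (assuming the parameter is defined under Sceptre’s standard &lt;em&gt;parameters&lt;/em&gt; key):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# s3-bucket-config.yaml (only the changed fragment shown)
parameters:
  bucketname: first-secptre-bucket-20230729   # was: first-secptre-bucket-20230728
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;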

&lt;p&gt;&lt;strong&gt;&lt;em&gt;5.) drift&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A “drift” refers to a situation where the actual state of a stack’s resources deviates from the expected state defined in the CloudFormation template. In other words, a drift occurs when there are resource changes made directly in the AWS environment, outside of CloudFormation’s control(via Sceptre).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre drift --help
Usage: sceptre drift [OPTIONS] COMMAND [ARGS]...

  Commands for calling drift detection.

Options:
  --help  Show this message and exit.

Commands:
  detect  Run detect stack drift on running stacks.
  show    Shows stack drift on running stacks.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Detect drift&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; sceptre drift detect s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3552%2F1%2A4ZgO3JXsr0Xln4duMfUZ8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3552%2F1%2A4ZgO3JXsr0Xln4duMfUZ8w.png" alt="Drift detection results"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The above output indicates that the “my-s3-bucket-stack” CloudFormation stack is in good condition and does not have any drifted resources. The drift detection process has been completed, and all the resources in the stack are in sync with the defined CloudFormation template.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Show drift&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; sceptre drift show s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3552%2F1%2AjsVuKm0Ia_uwFX1Z_7oB0Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3552%2F1%2AjsVuKm0Ia_uwFX1Z_7oB0Q.png" alt="Show drift results"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The output indicates that the specific AWS S3 Bucket resource with the logical ID “MyS3Bucket” within the “my-s3-bucket-stack” CloudFormation stack is in sync with the expected state defined in the template. There are no property differences, and the resource’s properties match those defined in the CloudFormation template, ensuring that it is in the desired state.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;6.) dump&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; sceptre dump --help
Usage: sceptre dump [OPTIONS] COMMAND [ARGS]...

  Commands for dumping attributes of stacks.

Options:
  --help  Show this message and exit.

Commands:
  all       Dumps both the rendered (post-Jinja) Stack Configs and the...
  config    Dump the rendered (post-Jinja) Stack Configs.
  template  Prints the template used for stack in PATH.

sceptre dump template s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3552%2F1%2AnkbyPKnucW4b_iQA3yCDLw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3552%2F1%2AnkbyPKnucW4b_iQA3yCDLw.png" alt="Prints theTemplate"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre dump config  s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3504%2F1%2AunEbcroOxQft8MFEHZ4Ccg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3504%2F1%2AunEbcroOxQft8MFEHZ4Ccg.png" alt="Prints the Config"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre dump all  s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3504%2F1%2AEZID0Uw2sE8fdwWkwM8-7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3504%2F1%2AEZID0Uw2sE8fdwWkwM8-7g.png" alt="Prints both template and config"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;7.) estimate-cost&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre estimate-cost --help
Usage: sceptre estimate-cost [OPTIONS] PATH

  Prints a URI to STOUT that provides an estimated cost based on the resources
  in the stack. This command will also attempt to open a web browser with the
  returned URI.

Options:
  --help  Show this message and exit.

sceptre estimate-cost  s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5624%2F1%2AQzoA31APMziFmEASiy3RUw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5624%2F1%2AQzoA31APMziFmEASiy3RUw.png" alt="Estimated cost from S3 Pricing Calculator"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;8.) execute&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre execute --help
Usage: sceptre execute [OPTIONS] PATH CHANGE_SET_NAME

  Executes a Change Set.

Options:
  -y, --yes                       Assume yes to all questions.
  --disable-rollback / --enable-rollback
                                  Disable or enable the cloudformation
                                  automatic rollback
  --help                          Show this message and exit.

sceptre execute -y --enable-rollback s3-bucket-config.yaml updatename
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Prior to executing the changeset, it’s advisable to thoroughly review its contents. As demonstrated in the ‘describe changeset’ command above, taking this precaution is essential because the changes applied by the changeset can be irreversible and may not have a straightforward rollback mechanism.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4892%2F1%2Ans0wZSf1dwST00oa3ksTnQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4892%2F1%2Ans0wZSf1dwST00oa3ksTnQ.png" alt="Executing changeset via Sceptre"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As per the above output:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The AWS S3 bucket “MyS3Bucket” within the “s3-bucket-config” stack is being updated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Since the bucket update requires the creation of a new physical resource (as the bucket name has been modified), CloudFormation is replacing the existing bucket with a new one. Because the “Delete” deletion policy applies to the bucket, the new bucket is created and the old one is deleted.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
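&lt;p&gt;If the old bucket must survive such a replacement, a &lt;em&gt;DeletionPolicy&lt;/em&gt; can be set on the resource in the template. A minimal sketch (the logical ID matches our example; the bucket name is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  MyS3Bucket:
    Type: AWS::S3::Bucket
    # Retain keeps the orphaned bucket when CloudFormation replaces or deletes it.
    DeletionPolicy: Retain
    Properties:
      BucketName: first-secptre-bucket-20230729
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;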

&lt;p&gt;&lt;strong&gt;&lt;em&gt;9.) fetch-remote-template&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre fetch-remote-template --help
Usage: sceptre fetch-remote-template [OPTIONS] PATH

  Prints the remote template used for stack in PATH.

Options:
  --help  Show this message and exit.

 sceptre fetch-remote-template s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3580%2F1%2AOpG9EH9fAm6xbdMQey2XxA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3580%2F1%2AOpG9EH9fAm6xbdMQey2XxA.png" alt="Fetch-Remote-Template"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;10.) generate&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre generate --help
Usage: sceptre generate [OPTIONS] PATH

  Prints the template used for stack in PATH.

  This command is aliased to the dump template command for legacy support
  reasons. It's the same as running `sceptre dump template`.

Options:
  -n, --no-placeholders  If True, no placeholder values will be supplied for
                         resolvers that cannot be resolved.
  --help                 Show this message and exit.

 sceptre generate s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3580%2F1%2AaxDn80EFjbaHfPb4KSgptw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3580%2F1%2AaxDn80EFjbaHfPb4KSgptw.png" alt="sceptre generate (obsolete)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;11.) launch&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre launch --help                 
Usage: sceptre launch [OPTIONS] PATH

  Launch a Stack or StackGroup for a given config PATH. This command is
  intended as a catch-all command that will apply any changes from Stack
  Configs indicated via the path.

  * Any Stacks that do not exist will be created
  * Any stacks that already exist will be updated (if there are any changes)
  * If any stacks are marked with "ignore: True", those stacks will neither be created nor updated
  * If any stacks are marked with "obsolete: True", those stacks will neither be created nor updated.
  * Furthermore, if the "-p"/"--prune" flag is used, these stacks will be deleted prior to any
    other launch commands

Options:
  -y, --yes                       Assume yes to all questions.
  -p, --prune                     If set, will delete all stacks in the
                                  command path marked as obsolete.
  --disable-rollback / --enable-rollback
                                  Disable or enable the cloudformation
                                  automatic rollback
  --help                          Show this message and exit.

sceptre launch -y s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4888%2F1%2ARcvt-iqBAhxwyBh-yEs8xA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4888%2F1%2ARcvt-iqBAhxwyBh-yEs8xA.png" alt="sceptre launch stack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As the output shows, since the stack already exists, the “launch” operation applied changes to the resources within the stack (specifically, the S3 bucket). These changes reflect the adjustments made to the CloudFormation template or the stack configuration (in our case, the stack configuration was modified to update the bucket name).&lt;/p&gt;
&lt;/blockquote&gt;
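
&lt;p&gt;For illustration, the kind of stack config change that triggers such an update is a simple edit to the parameters, for example a new bucket name (the exact value below is hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# s3-bucket-config.yaml: only the bucketname parameter has changed
template:
  path: s3-bucket-template.yaml
  type: file

stack_name: my-s3-bucket-stack

parameters:
  bucketname: first-sceptre-bucket-20230731
  deletionpolicy: Delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;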

&lt;p&gt;&lt;strong&gt;&lt;em&gt;12.) list&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre list --help
Usage: sceptre list [OPTIONS] COMMAND [ARGS]...

  Commands for listing attributes of stacks.

Options:
  --help  Show this message and exit.

Commands:
  change-sets  List change sets for stack.
  outputs      List outputs for stack.
  resources    List resources for stack or stack_group.
  stacks       List sceptre stack config attributes.

sceptre list change-sets s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5812%2F1%2Aeo2EtIZEjNwBHBeh0otmVw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5812%2F1%2Aeo2EtIZEjNwBHBeh0otmVw.png" alt="a list of change sets associated with a stack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre list outputs  s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3628%2F1%2A2gZzOjaxKBmg5e8HRpiMWQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3628%2F1%2A2gZzOjaxKBmg5e8HRpiMWQ.png" alt="a list of outputs associated with a stack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre list resources  s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3628%2F1%2AlFQJPPtlACFUGoxG-6GUXg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3628%2F1%2AlFQJPPtlACFUGoxG-6GUXg.png" alt="a list of resources associated with a stack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre list stacks  s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3628%2F1%2AWFkJFsu0ZjHqOMiTBMvzCw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3628%2F1%2AWFkJFsu0ZjHqOMiTBMvzCw.png" alt="a list of sceptre stack config attributes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;13.) new&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre new --help
Usage: sceptre new [OPTIONS] COMMAND [ARGS]...

  Commands for initialising Sceptre projects.

Options:
  --help  Show this message and exit.

Commands:
  group    Creates a new Stack Group directory in a project.
  project  Creates a new project.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;strong&gt;&lt;em&gt;sceptre new&lt;/em&gt;&lt;/strong&gt; command has been comprehensively explained, with an example, in &lt;a href="https://medium.com/@rumeshsil/a-guide-to-installing-and-configuring-sceptre-in-multiple-ways-ea298e5788ad" rel="noopener noreferrer"&gt;a previous article&lt;/a&gt;, within the section titled “Setting Up the Directory Structure for a New Sceptre Project.”&lt;/p&gt;
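
&lt;p&gt;As a quick refresher, the two subcommands are used as follows (the project and group names below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a new Sceptre project directory structure
sceptre new project my-sceptre-project

# From inside the project, create a new Stack Group directory
cd my-sceptre-project
sceptre new group dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;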

&lt;p&gt;&lt;strong&gt;&lt;em&gt;14.) prune&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre prune --help
Usage: sceptre prune [OPTIONS] [PATH]

  This command deletes all obsolete stacks in the project. Only obsolete
  stacks can be deleted via prune; If any non-obsolete stacks depend on
  obsolete stacks, an error will be raised and this command will fail.

Options:
  -y, --yes  Assume yes to all questions.
  --help     Show this message and exit.

sceptre prune s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3628%2F1%2AQ-TejpYR-5VtsMiMtnoyWQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3628%2F1%2AQ-TejpYR-5VtsMiMtnoyWQ.png" alt="prune stacks"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;How do you make a stack “obsolete”?&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;By setting the obsolete parameter to True, you indicate that the stack is no longer actively managed and is considered obsolete. This communicates the stack’s status to the team, making it clear that it is not intended for further updates or maintenance.&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;template:
  path: s3-bucket-template.yaml
  type: file

stack_name: my-s3-bucket-stack
obsolete: True

parameters:
  bucketname: first-sceptre-bucket-20230730
  deletionpolicy: Delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;What will be the outcome if you execute the prune command at this moment?&lt;/em&gt;&lt;/strong&gt; The stack will be deleted, as it is marked as “obsolete”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3948%2F1%2ApwIQZPxsBlKa480rG28YHQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3948%2F1%2ApwIQZPxsBlKa480rG28YHQ.png" alt="Deleting obsolete stack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;15.) set-policy&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre set-policy --help
Usage: sceptre set-policy [OPTIONS] PATH [POLICY_FILE]

  Sets a specific Stack policy for either a file or using a built-in policy.

Options:
  -b, --built-in [deny-all|allow-all]
                                  Specify a built in stack policy.
  --help                          Show this message and exit.

sceptre set-policy -b allow-all s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A stack policy specifies the resources you wish to safeguard against accidental modifications during a stack update.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The purpose of using this command is to establish a standardized policy that governs what types of changes can be made to the stack resources. The “allow-all” policy, as implied by its name, allows all possible updates to the stack. This can be useful in scenarios where you want to enable unrestricted updates to the stack resources.&lt;/p&gt;
&lt;/blockquote&gt;
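
&lt;p&gt;For reference, the “allow-all” built-in option is equivalent to applying the following stack policy document:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Statement" : [
      {
        "Effect" : "Allow",
        "Action" : "Update:*",
        "Principal": "*",
        "Resource" : "*"
      }
    ]
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;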

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3948%2F1%2AX-BfsfuI2QxvicSkSUOWAw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3948%2F1%2AX-BfsfuI2QxvicSkSUOWAw.png" alt="set-policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s run the describe policy command and see the output now;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3948%2F1%2ALk3UNIhmmuBerZUoaoEZ-g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3948%2F1%2ALk3UNIhmmuBerZUoaoEZ-g.png" alt="describe policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s define a custom stack policy in JSON format that you can use to deny updates to all resources within a CloudFormation stack:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;config/policies/deny-policy.json&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Statement" : [
      {
        "Effect" : "Deny",
        "Action" : "Update:*",
        "Principal": "*",
        "Resource" : "*"
      }
    ]
  }

sceptre set-policy s3-bucket-config.yaml config/policies/deny-policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3948%2F1%2ABV75vGlodN1uZoQEsit8Wg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3948%2F1%2ABV75vGlodN1uZoQEsit8Wg.png" alt="Setting a custom policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s evaluate the impact of the “deny” stack policy on the stack by trying to perform a stack update.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4864%2F1%2AZgDXcp4ixPkOSDBGMdx9cA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4864%2F1%2AZgDXcp4ixPkOSDBGMdx9cA.png" alt="Updating the stack which is associated with a deny policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As anticipated, the update operation failed due to the stack policy that prohibits any updates on all resources.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Remember that creating and applying policies should be done carefully, as they significantly impact the actions that can be performed on your stack resources. Always test policies in a controlled environment before applying them to production stacks&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
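
&lt;p&gt;In practice, rather than denying every update, you will often want to protect only specific resources. The following sketch (using the MyS3Bucket logical ID from this series’ template) allows all updates except those that would replace or delete the bucket:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Statement" : [
      {
        "Effect" : "Allow",
        "Action" : "Update:*",
        "Principal": "*",
        "Resource" : "*"
      },
      {
        "Effect" : "Deny",
        "Action" : ["Update:Replace", "Update:Delete"],
        "Principal": "*",
        "Resource" : "LogicalResourceId/MyS3Bucket"
      }
    ]
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;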

&lt;p&gt;&lt;strong&gt;&lt;em&gt;16.) update&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre update --help 
Usage: sceptre update [OPTIONS] PATH

  Updates a stack for a given config PATH. Or perform an update via change-set
  when the change-set flag is set.

Options:
  -c, --change-set                Create a change set before updating.
  -v, --verbose                   Display verbose output.
  -y, --yes                       Assume yes to all questions.
  --disable-rollback / --enable-rollback
                                  Disable or enable the cloudformation
                                  automatic rollback
  --help                          Show this message and exit.

sceptre update s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4864%2F1%2AemQWbKBP2_LYrJSyxYQS0w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4864%2F1%2AemQWbKBP2_LYrJSyxYQS0w.png" alt="sceptre update"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Updating the stack with a change set:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre update --change-set s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5824%2F1%2ASiBjXswx8IZrpqFSZVD9lg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5824%2F1%2ASiBjXswx8IZrpqFSZVD9lg.png" alt="update a stack using changeset"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s worth noting that while updating via a change set and creating a change set might appear similar, they serve distinct purposes. Running &lt;strong&gt;&lt;em&gt;sceptre update --change-set&lt;/em&gt;&lt;/strong&gt; creates a change set, displays the proposed changes, and executes it once you confirm, whereas creating a change set with &lt;strong&gt;&lt;em&gt;sceptre create&lt;/em&gt;&lt;/strong&gt; (by passing a change set name) only creates it, so that it can be inspected and executed later. Both approaches provide a safety net by allowing you to review changes before they are applied, reducing the risk of unintended consequences.&lt;/p&gt;
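
&lt;p&gt;To make the distinction concrete, here is a sketch of the explicit change set workflow (the change set name my-change-set is arbitrary):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a named change set without executing it
sceptre create s3-bucket-config.yaml my-change-set

# Review the proposed changes
sceptre describe change-set s3-bucket-config.yaml my-change-set

# Apply the change set once you are happy with it
sceptre execute s3-bucket-config.yaml my-change-set
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;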

&lt;p&gt;&lt;strong&gt;&lt;em&gt;17.) validate&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceptre validate --help
Usage: sceptre validate [OPTIONS] PATH

  Validates the template used for stack in PATH.

Options:
  -n, --no-placeholders  If True, no placeholder values will be supplied for
                         resolvers that cannot be resolved.
  --help                 Show this message and exit.

sceptre validate s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5604%2F1%2AjxIyoZk-XjWKryqtT7OTMA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5604%2F1%2AjxIyoZk-XjWKryqtT7OTMA.png" alt="Validating a stack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As we explored various Sceptre commands, we learned how they help create, update, and manage cloud infrastructure effortlessly. With its command-line toolkit, Sceptre becomes a dependable companion for modern cloud enthusiasts, making cloud deployment efficient and automated.&lt;/p&gt;

&lt;p&gt;Whether you’re experienced in cloud engineering or new to Infrastructure as Code, learning Sceptre commands will surely boost your cloud management skills and speed up your path to cloud expertise. Embrace Sceptre’s capabilities and enter a new phase of managing cloud infrastructure.&lt;/p&gt;

&lt;p&gt;By understanding and utilizing these Sceptre commands, you’ll be well-equipped to optimize your cloud infrastructure and ensure the scalability, reliability, and security of your applications.&lt;/p&gt;

&lt;p&gt;(Note: This article offers a broad look at Sceptre commands and what they can do. For more detailed and up-to-date information, readers are advised to consult the official Sceptre documentation and additional resources.)&lt;/p&gt;

</description>
      <category>aws</category>
      <category>sceptre</category>
      <category>infrastructureascode</category>
      <category>devops</category>
    </item>
    <item>
      <title>Simplifying Infrastructure Deployment with Sceptre: Your First Stack — An S3 Bucket</title>
      <dc:creator>🇦🇺 ☁️ Rumesh Silva  ☁️ 🇦🇺</dc:creator>
      <pubDate>Tue, 01 Aug 2023 00:28:57 +0000</pubDate>
      <link>https://forem.com/rumeshsil/simplifying-infrastructure-deployment-with-sceptre-your-first-stack-an-s3-bucket-4cib</link>
      <guid>https://forem.com/rumeshsil/simplifying-infrastructure-deployment-with-sceptre-your-first-stack-an-s3-bucket-4cib</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2DFCPghc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2A8nOfTrrXBAzui3V6HacsaA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2DFCPghc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2A8nOfTrrXBAzui3V6HacsaA.png" alt="Creating an S3 bucket using Sceptre" width="800" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this article, we’ll walk you through creating and deploying your first stack using Sceptre, with a simple example of creating an S3 bucket.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we begin, make sure you have gone through the article on &lt;a href="https://medium.com/p/ea298e5788ad"&gt;A Guide to Installing and Configuring Sceptre&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Building Infrastructure — The S3 Bucket
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Directory Structure
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TC76J6Jg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AE25_7razGVChtJ8vLNjQRQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TC76J6Jg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AE25_7razGVChtJ8vLNjQRQ.jpeg" alt="" width="674" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Defining the S3 Bucket Config — s3-bucket-config.yaml
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;template:
  path: s3-bucket-template.yaml
  type: file

stack_name: my-s3-bucket-stack

parameters:
  bucketname: first-sceptre-bucket-20230727
  deletionpolicy: Delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;This Sceptre config file creates an S3 bucket using the specified CloudFormation template file (s3-bucket-template.yaml) with the given parameters. The S3 bucket's name will be "first-sceptre-bucket-20230727," and it will have the "Delete" deletion policy.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Below is the explanation of each section in the config file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;path&lt;/em&gt;&lt;/strong&gt;: s3-bucket-template.yaml: This line specifies the path to the CloudFormation template file (s3-bucket-template.yaml) relative to the Sceptre configuration file’s location. The template file contains the CloudFormation code that defines the S3 bucket’s properties. &lt;em&gt;The path property may consist of either an absolute or relative path to the template file. When using a relative path, this handler assumes it is relative to the ‘sceptre_project_dir/templates’ directory&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;type&lt;/em&gt;&lt;/strong&gt;: file: This indicates the type of the template, which in this case is a file. This is one of the possible values for the type property, and it is used to indicate the external source location of the template (e.g., file, s3, http).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;In the following example, the CloudFormation template is stored in an S3 bucket, and its path is provided using the path attribute. Sceptre will retrieve the template from the specified S3 bucket and use it to create the stack.&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;template:
  path: s3://my-bucket/s3-bucket-template.yaml
  type: s3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;stack_name&lt;/em&gt;&lt;/strong&gt;: my-s3-bucket-stack: This line defines the name of the CloudFormation stack that will be created. In this case, the stack will be named “my-s3-bucket-stack.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;parameters&lt;/em&gt;&lt;/strong&gt;: This section defines the input parameters that will be passed to the CloudFormation stack during its creation. These parameters customize the properties of the S3 bucket being created. The parameters listed are:&lt;br&gt;
&lt;em&gt;– bucketname&lt;/em&gt;: This parameter is set to “first-sceptre-bucket-20230727,” which specifies the desired name for the S3 bucket.&lt;br&gt;
&lt;em&gt;– deletionpolicy&lt;/em&gt;: This parameter is set to “Delete,” which indicates the desired deletion policy for the S3 bucket. In this case, the bucket will be deleted when the CloudFormation stack is deleted.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Defining the S3 Bucket Template— s3-bucket-template.yaml
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Description: Template creates an empty S3 Bucket

Parameters:
  bucketname:
    Type: String

  deletionpolicy:
    Type: String
    Default: Retain
    AllowedValues:
      - Delete
      - Retain

Resources:
  MyS3Bucket:
    DeletionPolicy: !Ref deletionpolicy
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: !Ref bucketname

Outputs:
  S3BucketName:
    Value: !Ref MyS3Bucket
    Description: Name of the S3 Bucket
  S3BucketARN:
    Value: !GetAtt MyS3Bucket.Arn
    Description: ARN of the Bucket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above template allows users to create an S3 bucket with a custom name and specify the desired deletion policy. By using the outputs, users can easily access and utilize the bucket name and ARN for further operations and integration within their AWS environment.&lt;/p&gt;

&lt;p&gt;Let’s break down each section of the template:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Parameters: This section defines the input parameters that can be provided when launching the CloudFormation stack. In this template, there are two parameters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resources: This section defines the AWS resources that the CloudFormation stack will create. In this template, there is one resource. &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket.html"&gt;How to specify an S3 bucket in CloudFormation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Outputs: This section defines the information that will be displayed once the CloudFormation stack is created. In this template, there are two outputs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  3. Different Sceptre commands for deploying and managing the stacks
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Before executing the Sceptre commands, ensure that you change the current working directory to the project directory. In our scenario, the project directory is named “first-sceptre-directory.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Validating the stack&lt;/em&gt;&lt;/strong&gt; : You can verify that your CloudFormation templates are well-formed, follow the correct YAML or JSON syntax, and reference valid AWS resources and properties.&lt;/p&gt;

&lt;p&gt;sceptre validate s3-bucket-config.yaml&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Using Docker&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -v "$(pwd):/project"  -v "$HOME/.aws:/root/.aws" -w /project sceptre-image validate s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;This Docker command creates a container based on the “sceptre-image” Docker image, maps the current working directory and AWS configuration from the host to appropriate directories inside the container, sets the working directory for the Sceptre command to “/project,” and then runs the Sceptre “validate” command on the “s3-bucket-config.yaml” configuration file.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WVlYXRnx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4224/1%2AxsaOj2iwLaRBgCczIRCu4w.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WVlYXRnx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4224/1%2AxsaOj2iwLaRBgCczIRCu4w.jpeg" alt="" width="800" height="64"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Creating the stack&lt;/em&gt;&lt;/strong&gt;: This command will initiate the stack creation process using the associated CloudFormation template.&lt;/p&gt;

&lt;p&gt;sceptre create s3-bucket-config.yaml&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Using Docker&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -v "$(pwd):/project"  -v "$HOME/.aws:/root/.aws" -w /project sceptre-image create s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3kpWPgNi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3504/1%2AT0Mc-smTtih0qf6xgHKntA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3kpWPgNi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3504/1%2AT0Mc-smTtih0qf6xgHKntA.jpeg" alt="" width="800" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Updating the stack&lt;/em&gt;&lt;/strong&gt;: This command will trigger the update process, and CloudFormation will apply the changes specified in the template to the existing stack.&lt;/p&gt;

&lt;p&gt;sceptre update s3-bucket-config.yaml&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Using Docker&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -v "$(pwd):/project"  -v "$HOME/.aws:/root/.aws" -w /project sceptre-image update s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EV_FnzQZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2964/1%2AS0tsw0BOnC-Vub3S9iMy1w.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EV_FnzQZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2964/1%2AS0tsw0BOnC-Vub3S9iMy1w.jpeg" alt="" width="800" height="76"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the deployment is successful, go to the AWS CloudFormation console and verify the creation of your stack and S3 bucket with the specified name (first-sceptre-bucket-20230727 in our example).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Go8FGmBe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4488/1%2Adm_hICYC5NbOMwukRohDvw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Go8FGmBe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4488/1%2Adm_hICYC5NbOMwukRohDvw.jpeg" alt="Stack is in create-complete state" width="800" height="51"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CQWNHVoX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3140/1%2AT_IP61CWqXumLvNpotM2Bw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CQWNHVoX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3140/1%2AT_IP61CWqXumLvNpotM2Bw.jpeg" alt="S3 Bucket resource in the stack" width="800" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2bENaF2y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3132/1%2Avme2YuaDZELx9pkRuBCDLQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2bENaF2y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3132/1%2Avme2YuaDZELx9pkRuBCDLQ.jpeg" alt="Parameters passed to the stack" width="800" height="110"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yU39LTv_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3172/1%2AZE8ZDfMGGJdLY7YsbZgN6Q.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yU39LTv_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3172/1%2AZE8ZDfMGGJdLY7YsbZgN6Q.jpeg" alt="Outputs defined in the stack" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Deleting the stack&lt;/em&gt;&lt;/strong&gt;: This command will delete the CloudFormation stack and the associated resources.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sceptre delete s3-bucket-config.yaml&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Using Docker&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -v "$(pwd):/project"  -v "$HOME/.aws:/root/.aws" -w /project sceptre-image delete s3-bucket-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3UXnRDcZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3508/1%2AZu9ulv5z53hj--FjkNivHw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3UXnRDcZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3508/1%2AZu9ulv5z53hj--FjkNivHw.png" alt="" width="800" height="114"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! You have successfully created and deployed your first stack using Sceptre.&lt;/p&gt;

&lt;p&gt;This simple example of creating an S3 bucket serves as a foundation for more complex deployments and scenarios. As you explore additional features and customization options, refer to the official Sceptre documentation for a deeper understanding.&lt;/p&gt;

&lt;p&gt;Leveraging Sceptre, you can streamline infrastructure management and elevate your AWS cloud deployment experience. Happy deploying!&lt;/p&gt;

</description>
      <category>infrastructureascode</category>
      <category>devops</category>
      <category>sceptre</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>A Guide to Installing and Configuring Sceptre</title>
      <dc:creator>🇦🇺 ☁️ Rumesh Silva  ☁️ 🇦🇺</dc:creator>
      <pubDate>Thu, 27 Jul 2023 00:32:47 +0000</pubDate>
      <link>https://forem.com/rumeshsil/a-guide-to-installing-and-configuring-sceptre-32ag</link>
      <guid>https://forem.com/rumeshsil/a-guide-to-installing-and-configuring-sceptre-32ag</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VFr5hKyV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zw721xxfjrokmphlw9l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VFr5hKyV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zw721xxfjrokmphlw9l.png" alt="Installing Sceptre" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sceptre, an open-source tool developed by Cloudreach, offers a robust solution for managing cloud infrastructure as code. In this article, we will explore various methods of installing and configuring Sceptre, including using Docker, based on the official &lt;a href="https://docs.sceptre-project.org/latest/docs/install.html"&gt;Sceptre documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Installing Sceptre via pip (Python Package Manager)
&lt;/h3&gt;

&lt;p&gt;The simplest way to install Sceptre is using pip, the Python package manager. First, ensure you have Python installed on your system.&lt;/p&gt;

&lt;p&gt;Prerequisites&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;AWS Account: You will need an AWS account to work with Sceptre since it interacts with CloudFormation and other AWS services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Python: Sceptre requires Python, so ensure you have Python (version 3.6+) installed on your system.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now that we have the prerequisites covered, let’s proceed with installing Sceptre:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open your terminal or command prompt.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use pip to install Sceptre by running the following command:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;pip install sceptre&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OarqjUgL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2268/1%2AvSxsd1V7BwTaOr3O3UaLww.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OarqjUgL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2268/1%2AvSxsd1V7BwTaOr3O3UaLww.jpeg" alt="" width="800" height="66"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;p&gt;Wait for the installation to complete, and you’re now ready to use Sceptre.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once the installation is complete, you can verify it by running&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sceptre --version&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MaiF5Lp1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2268/1%2AfXKJzEIkLNf4pV81x_zkIg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MaiF5Lp1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2268/1%2AfXKJzEIkLNf4pV81x_zkIg.jpeg" alt="" width="800" height="72"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring Sceptre
&lt;/h3&gt;

&lt;p&gt;With Sceptre installed, the next step is to set up the necessary configuration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configuring AWS Credentials for Sceptre&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Sceptre uses the standard AWS SDK for Python (Boto3) to interact with AWS services, and it follows the same credential resolution order as Boto3. The AWS credentials can be specified in multiple ways, and Sceptre automatically picks up the appropriate credentials based on this order:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;i.) Environment Variables&lt;/strong&gt;: Sceptre checks for the presence of the following environment variables:&lt;/p&gt;

&lt;p&gt;AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY: Specify the AWS access key ID and secret access key.&lt;br&gt;
AWS_SESSION_TOKEN: Specifies the session token for temporary security credentials when using AWS Identity and Access Management (IAM) roles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ii.) AWS Config File&lt;/strong&gt;: Sceptre looks for the AWS configuration file (~/.aws/config) that can define multiple profiles, each with its set of credentials. The [default] profile is used if no specific profile is specified.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;iii.) AWS Credentials File&lt;/strong&gt;: Sceptre checks for the AWS credentials file (~/.aws/credentials), which can also define multiple profiles with their respective credentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;iv.) IAM Role for Amazon EC2 Instance&lt;/strong&gt;: If Sceptre is running on an Amazon EC2 instance, it automatically retrieves the credentials associated with the IAM role attached to that EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v.) IAM Role via Instance Metadata Service&lt;/strong&gt;: If Sceptre is running on an Amazon EC2 instance and there is no IAM role directly attached to the instance, it will check the instance metadata service to determine if there is an IAM role associated with the EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vi.) AWS CLI Configuration&lt;/strong&gt;: If you have configured the AWS CLI with aws configure, Sceptre will also pick up the credentials from the CLI configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vii.) Explicitly Defined Credentials&lt;/strong&gt;: You can explicitly define credentials in your Sceptre configuration files (config/*.yaml) by specifying a profile name or providing an access key ID and secret access key.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It’s important to note that credentials specified explicitly in the Sceptre configuration files or directly in AWS CLI commands take precedence over the other methods of credential resolution.&lt;/p&gt;
&lt;p&gt;By following this order, Sceptre ensures that it automatically accesses the correct AWS credentials based on the environment or explicit settings, making it easier to manage and deploy AWS resources with different sets of credentials in various scenarios.&lt;/p&gt;
&lt;/blockquote&gt;
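&lt;p&gt;As a quick sketch of method (i), the snippet below exports the environment variables Sceptre will pick up. The key and token values are placeholders for illustration only, not real credentials:&lt;/p&gt;

```shell
# Method (i): environment variables are checked first in the resolution order.
# All values below are placeholders, not real credentials.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="exampleSecretAccessKey"
export AWS_SESSION_TOKEN="exampleSessionToken"   # only needed for temporary credentials

# Alternatively, select a named profile from ~/.aws/config or ~/.aws/credentials
# (methods ii/iii) by exporting AWS_PROFILE instead:
# export AWS_PROFILE=dev

# Any Sceptre command run in this shell now uses these credentials, e.g.:
# sceptre launch s3-bucket-config.yaml
```

&lt;p&gt;Unset the variables (or open a new shell) to fall back to the profile- and role-based methods further down the resolution order.&lt;/p&gt;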

&lt;ol start="2"&gt;
&lt;li&gt;Creating the directory structure for a new Sceptre project&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Sceptre provides a convenient command, &lt;code&gt;sceptre new&lt;/code&gt;, which can be used to create the directory structure and initial configuration files for a new Sceptre project. This command streamlines the setup process and ensures that you start with a well-organized project layout. To create a new Sceptre project, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open your terminal or command prompt.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to the location where you want to create the new Sceptre project directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sceptre new project first-sceptre-project&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ALR7tYbZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3664/1%2A_fvJ8gCCucqpgOj0WR8ovA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ALR7tYbZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3664/1%2A_fvJ8gCCucqpgOj0WR8ovA.jpeg" alt="" width="800" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above command will result in the following directory structure:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my_sceptre_projects
└── first-sceptre-project
    ├── config
    │   └── config.yaml
    └── templates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;In the Sceptre project, you will find the config directory where you can store the configurations for your Stacks, while the templates directory is designated for holding your CloudFormation templates.&lt;/p&gt;
&lt;/blockquote&gt;
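&lt;p&gt;To make the layout concrete, here is a hedged sketch of what those two directories might eventually hold. The region, bucket name, and template file name below are illustrative, not generated by the command above:&lt;/p&gt;

```yaml
# config/config.yaml — project-wide settings (illustrative values)
project_code: first-sceptre-project
region: ap-southeast-2

# config/s3-bucket-config.yaml — one stack config, pointing at a
# CloudFormation template placed under templates/ (names illustrative):
#
#   template_path: s3-bucket.yaml
#   parameters:
#     BucketName: my-example-bucket
```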
&lt;h3&gt;
  
  
  Using Docker for Sceptre
&lt;/h3&gt;

&lt;p&gt;For those who prefer containerization, running Sceptre in a Docker container is an excellent option. First, ensure you have Docker installed on your system.&lt;/p&gt;

&lt;p&gt;To pull the official Sceptre Docker image from Docker Hub, use the following command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull cloudreach/sceptre
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rFEiJUAd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3664/1%2AqyYufrQNiHzWGKqaf8yyqg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rFEiJUAd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3664/1%2AqyYufrQNiHzWGKqaf8yyqg.jpeg" alt="" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After downloading the image, either create a new directory for your Sceptre project or navigate to the existing project directory in your terminal. Let’s navigate to the Sceptre project directory named “first-sceptre-project” that was created in a previous step.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd first-sceptre-project&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, you can run Sceptre commands inside the Docker container using the following syntax:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -v "$(pwd)":/project -w /project cloudreach/sceptre [sceptre-command]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;For example, to check the Sceptre version, run:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -v "$(pwd)":/project -w /project cloudreach/sceptre --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JA6mac1F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4376/1%2A_JTFX9_kkB0uWlutf3cbyg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JA6mac1F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4376/1%2A_JTFX9_kkB0uWlutf3cbyg.jpeg" alt="" width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This command mounts your current project directory inside the container and sets it as the working directory for Sceptre commands.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If a custom ENTRYPOINT is desired, you can modify the Docker command accordingly:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run --entrypoint='' -it --rm -v "$(pwd)":/project -w /project cloudreach/sceptre sh&lt;/code&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By running the command above, you will enter the Docker container’s shell, allowing you to execute Sceptre commands, which is particularly useful for development purposes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Instead of pulling the Sceptre Docker image from Docker Hub, an alternative approach is to build a custom Docker image.&lt;/p&gt;

&lt;p&gt;Create a Dockerfile with the following content:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.9

RUN pip install sceptre

ENTRYPOINT ["sceptre"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, build the Docker image by running:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build -t sceptre-image .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;With our custom-built Docker image ready, we can now utilize it to execute Sceptre commands in our project directory, as previously explained in the steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Sceptre provides a robust and flexible solution for managing cloud infrastructure as code. In this article, we explored various methods of installing and configuring Sceptre, including running it inside a Docker container. The pip installation method is straightforward and is ideal for those who prefer working directly on their system. On the other hand, Docker offers a containerized approach, providing isolation and consistency for Sceptre commands.&lt;/p&gt;

&lt;p&gt;Regardless of the method you choose, Sceptre simplifies cloud infrastructure management, enabling you to define, deploy, and manage AWS resources efficiently. Incorporate Sceptre into your workflow to unleash the true potential of cloud infrastructure as code and take your cloud management practices to new heights.&lt;/p&gt;

&lt;p&gt;Remember to consult the official Sceptre documentation for more advanced features and customization options. Happy coding!&lt;/p&gt;

</description>
      <category>infrastructureascode</category>
      <category>devops</category>
      <category>sceptre</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>Sceptre: The Powerful Infrastructure as Code Tool for AWS</title>
      <dc:creator>🇦🇺 ☁️ Rumesh Silva  ☁️ 🇦🇺</dc:creator>
      <pubDate>Wed, 26 Jul 2023 03:43:52 +0000</pubDate>
      <link>https://forem.com/rumeshsil/sceptre-the-powerful-infrastructure-as-code-tool-for-aws-51il</link>
      <guid>https://forem.com/rumeshsil/sceptre-the-powerful-infrastructure-as-code-tool-for-aws-51il</guid>
      <description>&lt;h2&gt;
  
  
  Sceptre: The Powerful Infrastructure as Code Tool for AWS
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AiI3DGlD9AVgW_ae8DnVf-Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AiI3DGlD9AVgW_ae8DnVf-Q.png" alt="Functioning mechanism of Sceptre"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In today’s fast-paced cloud computing environment, Infrastructure as Code (IAC) has emerged as a fundamental practice for efficiently managing and provisioning cloud resources. By defining infrastructure in a code-like format, IAC tools enable developers and DevOps teams to automate the process of creating, modifying, and managing cloud resources. Among the numerous IAC tools available, Sceptre stands out as a powerful and flexible choice specifically designed for AWS (Amazon Web Services) environments. In this article, we will explore the features and benefits of Sceptre as an IAC tool for AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;em&gt;What is Sceptre?&lt;/em&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Sceptre is an open-source IAC tool developed by Cloudreach that provides a simple yet robust way to define cloud infrastructure as code for AWS. Built on top of AWS CloudFormation, Sceptre leverages the power and versatility of CloudFormation templates while offering a higher level of abstraction and additional functionality to make cloud resource management more intuitive and efficient.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Reasons for choosing Sceptre over CloudFormation, AWS CLI, and Boto3 include its ability to streamline stack deployment, simplify the management of CloudFormation templates, support chaining stack outputs to parameters, offer seamless handling of role assumption and multi-account scenarios, and provide a comprehensive, user-friendly tool to manage AWS infrastructure deployments.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Key Features of Sceptre
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;YAML Configuration: Sceptre uses YAML configuration files to define cloud resources, allowing for human-readable and version-controllable code. This ensures that infrastructure changes are transparent and easy to track.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stacks and Templates: In Sceptre, you define infrastructure in reusable templates and organize them into stacks. Stacks are logical groupings of resources, making it convenient to manage complex infrastructures with ease.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cross-Stack References: Sceptre enables easy referencing of resources across different stacks. This allows for better modularization and reduces duplication of code, promoting best practices in IAC development.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Powerful Hooks: Sceptre allows you to execute pre and post-stack creation/update hooks, giving you the ability to customize deployments further. This feature is particularly useful for integration with third-party tools and scripts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Parameter Handling: The tool offers advanced parameter handling, making it effortless to pass dynamic values to CloudFormation templates and modify stack configurations as needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multiple Environments: With Sceptre, you can easily manage multiple environments (e.g., development, staging, production) by defining separate environment-specific configurations, providing better isolation and control.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
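&lt;p&gt;The multiple-environments feature typically maps to nested directories under config, each carrying its own overrides. A hedged sketch of one possible layout (the environment and stack names are invented):&lt;/p&gt;

```
config
├── config.yaml          # shared project settings
├── dev
│   ├── config.yaml      # dev-specific overrides
│   └── app.yaml
└── prod
    ├── config.yaml      # prod-specific overrides
    └── app.yaml
```

&lt;p&gt;A stack can then be launched per environment, e.g. &lt;code&gt;sceptre launch dev/app.yaml&lt;/code&gt;.&lt;/p&gt;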

&lt;h3&gt;
  
  
  Core Elements of Sceptre
&lt;/h3&gt;

&lt;p&gt;The core elements of Sceptre can be outlined as follows, with &lt;em&gt;Templates&lt;/em&gt; and &lt;em&gt;Config&lt;/em&gt; being the two essential components that must be defined.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stacks&lt;/strong&gt;: Stacks are the core building blocks in Sceptre. A stack represents a collection of AWS resources that are provisioned and managed together. Stacks are defined using templates written in JSON or YAML format.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Templates&lt;/strong&gt;: Templates define the desired state of your infrastructure. They specify the AWS resources you want to create, configure, and manage. Sceptre supports multiple template formats such as JSON, YAML, Jinja2, and Python DSLs like Troposphere.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Config:&lt;/strong&gt; Config files provide a way to configure and parameterize your stacks and templates. Config files allow you to define reusable variables, manage input parameters, and specify stack dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hooks:&lt;/strong&gt; Hooks are scripts or commands that can be executed at specific stages during the stack lifecycle. They allow you to perform custom actions, such as pre or post-stack creation/update tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resolvers:&lt;/strong&gt; Resolvers are used to resolve variables or references in the templates and config files. They provide flexibility and dynamic behavior to your infrastructure definitions. Sceptre supports various resolvers, including environment variables, S3 bucket contents, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Overrides:&lt;/strong&gt; Overrides allow you to modify specific parameters or properties of your stack or template during the creation or update process. They provide a way to customize the behavior of your infrastructure without modifying the original templates.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
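&lt;p&gt;As a hedged illustration of how resolvers and hooks look together in practice, a stack config might chain another stack’s output into a parameter and run a command after creation. The stack, template, output, and parameter names below are invented:&lt;/p&gt;

```yaml
# Illustrative stack config (e.g. config/app.yaml); all names are invented.
template_path: app.yaml
parameters:
  # Resolver: pull the VpcId output from another stack in the same project
  VpcId: !stack_output network.yaml::VpcId
hooks:
  after_create:
    # Hook: run an arbitrary shell command once the stack is created
    - !cmd "echo stack created"
```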

&lt;h3&gt;
  
  
  Benefits of Sceptre for AWS IAC
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Simplified Cloud Resource Management: Sceptre abstracts away some of the complexities of AWS CloudFormation, making it easier for developers and DevOps teams to create and manage resources in AWS without diving deep into CloudFormation syntax.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reusability and Modularity: The ability to define reusable templates and cross-stack references promotes code reusability, leading to more maintainable and scalable IAC codebases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flexibility and Customization: Hooks and parameter handling offer a high degree of flexibility and customization, enabling seamless integration with existing tools and the ability to tailor stacks to specific use cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Version Control and Collaboration: Storing infrastructure definitions as code in YAML format facilitates version control and fosters smoother collaboration among team members, enhancing overall development practices.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Best Practices: Sceptre encourages adherence to AWS best practices by providing a structured and organized way to manage cloud resources, promoting consistency and reducing potential misconfigurations.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;As AWS continues to be a leading cloud service provider, the need for efficient IAC tools becomes increasingly vital for seamless infrastructure management. Sceptre fills this role by offering a user-friendly and powerful solution for defining AWS infrastructure as code. Its simplicity, flexibility, and compatibility with AWS CloudFormation make it a standout choice for teams seeking to harness the benefits of IAC in their AWS environments.&lt;/p&gt;

&lt;p&gt;If you haven’t explored Sceptre yet, now is the time to do so. Embrace the power of Infrastructure as Code with Sceptre and unlock new possibilities for automating, scaling, and optimizing your AWS infrastructure. Happy coding and best of luck on your cloud journey!&lt;/p&gt;

</description>
      <category>infrastructureascode</category>
      <category>cloudformation</category>
      <category>devops</category>
      <category>sceptre</category>
    </item>
  </channel>
</rss>
