<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ioannis Moustakis</title>
    <description>The latest articles on Forem by Ioannis Moustakis (@imoustak).</description>
    <link>https://forem.com/imoustak</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F861865%2Fa1abf6d8-b262-4797-a66c-cd02ddd32a00.jpeg</url>
      <title>Forem: Ioannis Moustakis</title>
      <link>https://forem.com/imoustak</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/imoustak"/>
    <language>en</language>
    <item>
      <title>Atlantis vs. Terraform Cloud / Terraform Enterprise – Comparison</title>
      <dc:creator>Ioannis Moustakis</dc:creator>
      <pubDate>Wed, 14 Sep 2022 19:00:41 +0000</pubDate>
      <link>https://forem.com/spacelift/atlantis-vs-terraform-cloud-terraform-enterprise-comparison-58pn</link>
      <guid>https://forem.com/spacelift/atlantis-vs-terraform-cloud-terraform-enterprise-comparison-58pn</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--127NVWYN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w2nxecabmey6fyhqd5u0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--127NVWYN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w2nxecabmey6fyhqd5u0.png" alt="Image description" width="834" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This blog post will look into two Infrastructure as Code (IaC) automation tools, Atlantis and Terraform Cloud/Enterprise, and analyze their similarities and differences. &lt;/p&gt;

&lt;p&gt;Atlantis allows users to orchestrate &lt;a href="https://spacelift.io/blog/terraform-automation"&gt;Terraform automation&lt;/a&gt; through pull requests by using comments, and it’s a great tool suited for small-scale projects and casual usage. Terraform Cloud provides a specialized CI/CD platform for Terraform automation and a great remote backend solution. Terraform Cloud is more scalable than Atlantis but offers fewer extensibility options.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Atlantis
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.runatlantis.io/"&gt;Atlantis&lt;/a&gt; is an open-source and self-hosted Terraform “pull request-based” automation tool. It offers an easy way to automate the Terraform workflow using pull request comments. On every new pull request, Atlantis automatically runs the &lt;code&gt;terraform plan&lt;/code&gt; command and comments the output back on the pull request. After the suggested changes have been reviewed, a team member can leave a pull request comment with a special meaning to apply the changes. &lt;/p&gt;

&lt;p&gt;A great benefit of the Atlantis workflow is that it doesn’t add a new user interface (UI) for operators and developers but integrates nicely with the version control system (VCS) provider of your choice. It provides the option to perform code reviews and Terraform operations via the same graphical user interface. Users don’t need access credentials for the infrastructure provider, and errors can be caught during the code review step. With the Atlantis model, each pull request contains a detailed audit log of changes made via Terraform.&lt;/p&gt;

&lt;p&gt;Atlantis self-hosted runners can be given an identity native to your cloud (e.g., AWS instance profile) for access without credentials to the state and managed resources. They can also be configured to run inside the &lt;a href="https://aws.amazon.com/vpc/"&gt;Virtual Private Cloud (VPC)&lt;/a&gt; to access local resources (e.g., VPC-internal database) but need inbound connectivity from the VCS provider to receive webhooks. Its configuration is primarily done using environment variables passed to the statically linked binary and the &lt;a href="https://www.runatlantis.io/docs/configuring-atlantis.html"&gt;YAML file&lt;/a&gt;.&lt;/p&gt;
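
&lt;p&gt;For illustration, a minimal repo-level &lt;code&gt;atlantis.yaml&lt;/code&gt; might look like the following sketch (the project names and directories are hypothetical):&lt;/p&gt;

```yaml
# Repo-level Atlantis configuration (atlantis.yaml at the repository root).
# Project names and directory layout below are illustrative.
version: 3
projects:
  - name: staging
    dir: environments/staging
    autoplan:
      when_modified: ["*.tf", "../modules/**/*.tf"]
  - name: production
    dir: environments/production
    # Require an approved, mergeable PR before `atlantis apply` is accepted.
    apply_requirements: [approved, mergeable]
```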

&lt;p&gt;Atlantis is stateless, but one of its main drawbacks is that it doesn’t offer a high-availability setup or any scaling and queueing support. To accommodate scaling and highly available setups, substantial engineering effort and creativity are required to build a custom in-house solution. &lt;/p&gt;

&lt;p&gt;Flexibility is one of the core advantages of Atlantis, as it allows easy integration with other Terraform-helper tools (e.g., &lt;a href="https://github.com/aquasecurity/tfsec"&gt;tfsec&lt;/a&gt;, &lt;a href="https://www.checkov.io/"&gt;checkov&lt;/a&gt;, &lt;a href="https://github.com/infracost/infracost-atlantis"&gt;Infracost&lt;/a&gt;, or &lt;a href="https://www.terratag.io/"&gt;Terratag&lt;/a&gt;). It can work with Terraform wrappers, such as &lt;a href="https://terragrunt.gruntwork.io/"&gt;Terragrunt&lt;/a&gt;, out of the box and even add some of Terragrunt’s features to vanilla Terraform – like &lt;a href="https://www.runatlantis.io/docs/custom-workflows.html"&gt;before and after hooks&lt;/a&gt; for every execution stage (init, plan, apply, etc.). &lt;/p&gt;
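
&lt;p&gt;As a sketch of such an integration, a custom workflow can run one of these helper tools before every plan; the example below assumes the &lt;code&gt;tfsec&lt;/code&gt; binary is available on the Atlantis server:&lt;/p&gt;

```yaml
# atlantis.yaml: a custom workflow that scans with tfsec before planning.
# Assumes tfsec is installed on the Atlantis server.
version: 3
projects:
  - dir: .
    workflow: secure
workflows:
  secure:
    plan:
      steps:
        - run: tfsec .   # static security scan before the plan
        - init
        - plan
```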

&lt;p&gt;Atlantis has a vibrant and active community, with new versions being released often. Something to note here is that although development is active and there are regular contributions, the efforts aren’t focused on major new features since the lead contributor &lt;a href="https://medium.com/runatlantis/joining-hashicorp-200ee9572dc5"&gt;moved to HashiCorp&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Overall, it’s a great tool suited for small-scale operations and infrastructure topologies. It is much appreciated by its user community and offers a flexible automation solution for occasional use. That said, it is significantly limited by its architecture, and scaling it isn’t straightforward. If your company has large-scale infrastructure needs, other more robust and mature solutions exist.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Terraform Cloud
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://cloud.hashicorp.com/products/terraform"&gt;Terraform Cloud&lt;/a&gt; is a more comprehensive infrastructure provisioning tool that works exclusively for Terraform, developed by Hashicorp. It provides a scalable solution to automate infrastructure delivery, handle compliance, and manage resources in a cloud-agnostic way, utilizing Terraform. It is Hashicorp’s SaaS managed service offering targeting the Terraform workflow.&lt;/p&gt;

&lt;p&gt;One of its main offerings is a specialized CI/CD platform that standardizes Terraform deployments and reduces deployment times. It provides an excellent remote state backend and an API for remote Terraform operations and integration with existing workflows. It integrates with VCS providers and supports fully automated runs or manual approval checks for infrastructure provisioning flows. &lt;/p&gt;

&lt;p&gt;Interaction with Terraform Cloud can be achieved with the command-line interface (CLI), UI, API, or CI jobs. The &lt;a href="https://www.terraform.io/language/settings/backends/remote"&gt;remote or enhanced backend&lt;/a&gt; allows teams to run the Terraform binary from their laptops or a third-party CI job, but the operation is executed on a remote machine. This is especially useful for one-off administrative tasks like tainting or migrating resources – things that are not trivial with Atlantis and may require dedicated solutions like &lt;a href="https://github.com/minamijoyo/tfmigrate"&gt;tfmigrate&lt;/a&gt;.&lt;/p&gt;
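
&lt;p&gt;For illustration, configuring the remote backend is a small block in the Terraform configuration; the organization and workspace names below are placeholders:&lt;/p&gt;

```hcl
# Remote backend configuration pointing at Terraform Cloud.
# Organization and workspace names are illustrative.
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "acme-corp"

    workspaces {
      name = "networking-prod"
    }
  }
}
```

&lt;p&gt;With this in place, &lt;code&gt;terraform plan&lt;/code&gt; run locally streams output from a remote execution environment.&lt;/p&gt;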

&lt;p&gt;Terraform Cloud offers basic security essentials such as RBAC with custom workspace permissions and different access levels for different types of users. Its integration with single sign-on (SSO) allows administrators easy user onboarding and management.&lt;/p&gt;

&lt;p&gt;Unlike Atlantis, Terraform Cloud’s architecture is highly scalable, so it will take a while to outgrow it. It offers a shared state, distributed execution, concurrent runs, notifications for workspace events, and VCS integrations to support its scalability. &lt;/p&gt;

&lt;p&gt;Teams can leverage Terraform Cloud’s rich API imperatively from external scripts or declaratively from Terraform itself, using their &lt;a href="https://registry.terraform.io/providers/hashicorp/tfe/latest/docs"&gt;tfe provider&lt;/a&gt;. Managing Terraform with Terraform is often the secret to managing IaC at scale in dynamic organizations. Terraform Cloud also supports exporting audit logs to external systems via its API. &lt;/p&gt;
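
&lt;p&gt;A sketch of this declarative approach with the tfe provider might look like the following (the organization, workspace, and repository names are made up):&lt;/p&gt;

```hcl
# Managing a Terraform Cloud workspace with the hashicorp/tfe provider.
# Organization, workspace, and repository identifiers are illustrative.
variable "oauth_token_id" {
  type        = string
  description = "VCS OAuth token ID configured in Terraform Cloud"
}

provider "tfe" {
  # The API token is usually supplied via the TFE_TOKEN environment variable.
}

resource "tfe_workspace" "networking" {
  name         = "networking-prod"
  organization = "acme-corp"

  vcs_repo {
    identifier     = "acme-corp/networking"
    oauth_token_id = var.oauth_token_id
  }
}
```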

&lt;p&gt;One drawback of Terraform Cloud compared to Atlantis is that it is less extensible. While Atlantis lets you execute arbitrary shell commands as part of your Terraform job, Terraform Cloud depends on clever hacks like the &lt;a href="https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource"&gt;null resource&lt;/a&gt; or an external wrapper. For example, if you are a Terragrunt user, you may need a CI job (e.g., Jenkins or GitHub Actions) to trigger Terragrunt, which then shells out to Terraform, which in turn executes the job in your remote Terraform Cloud environment. This extra layer complicates the architecture and workflow and introduces another party to a sensitive flow.&lt;/p&gt;
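
&lt;p&gt;The null-resource workaround mentioned above usually pairs &lt;code&gt;null_resource&lt;/code&gt; with a &lt;code&gt;local-exec&lt;/code&gt; provisioner; a minimal sketch (the script path is hypothetical):&lt;/p&gt;

```hcl
# Running an arbitrary command during apply via null_resource.
# The script path below is illustrative.
resource "null_resource" "post_apply_hook" {
  # Force the provisioner to run on every apply.
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = "./scripts/notify-deployment.sh"
  }
}
```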

&lt;p&gt;Terraform Cloud offers some native integrations and third-party tools to incorporate into the Terraform workflow, like their proprietary policy-as-code framework, Sentinel. Leveraging Sentinel, you can create security and compliance guardrails. The disadvantage of Sentinel is that, unlike &lt;a href="https://spacelift.io/blog/what-is-open-policy-agent-and-how-it-works"&gt;Open Policy Agent (OPA)&lt;/a&gt;, it is neither an industry standard nor open-source. HashiCorp recently announced publishing &lt;a href="https://www.hashicorp.com/blog/introducing-sentinel-policies-to-the-terraform-registry"&gt;reusable Sentinel policies in their public Terraform Registry&lt;/a&gt;, which may give Sentinel a new lease of life in the Terraform ecosystem.&lt;/p&gt;

&lt;p&gt;Last but not least, Terraform Cloud recently announced a &lt;a href="https://www.hashicorp.com/blog/terraform-cloud-adds-drift-detection-for-infrastructure-management"&gt;drift detection feature&lt;/a&gt;, which allows you to monitor the synchronization between your resources and their respective Terraform definitions. This feature can, to some extent, be replicated using /plan and /apply HTTP endpoints from Atlantis, but it’s a far cry from the native, built-in solution that Terraform Cloud offers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Cloud vs. Terraform Enterprise
&lt;/h2&gt;

&lt;p&gt;If your organization needs an on-premises version of Terraform Cloud for any reason (compliance, regulatory requirements, etc.), you can use Terraform Enterprise. &lt;a href="https://www.terraform.io/enterprise"&gt;Terraform Enterprise&lt;/a&gt; is a self-hosted distribution of Terraform Cloud.&lt;/p&gt;

&lt;p&gt;It provides organizations with a private environment installation of the Terraform Cloud instance and enterprise-grade features like single sign-on, compliance enforcement with policies, and audit logging. &lt;/p&gt;

&lt;p&gt;If you plan on hosting your own Terraform Enterprise distribution, have a look at the requirements, &lt;a href="https://www.terraform.io/enterprise/reference-architecture"&gt;reference architectures&lt;/a&gt; for common cloud providers, and the &lt;a href="https://www.terraform.io/enterprise/install/pre-install-checklist"&gt;installation and configuration guide&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Atlantis and Terraform Cloud Similarities
&lt;/h2&gt;

&lt;h3&gt;
  
  
  On-premise Support
&lt;/h3&gt;

&lt;p&gt;Both Atlantis and Terraform Enterprise can be self-hosted, letting you run your own installation of the tools. &lt;/p&gt;

&lt;h3&gt;
  
  
  Integration with VCS providers
&lt;/h3&gt;

&lt;p&gt;Most standard VCS providers are supported and integrate seamlessly with Atlantis and Terraform Cloud/Enterprise.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integration with CI/CD
&lt;/h3&gt;

&lt;p&gt;Both tools can be incorporated into your organization’s existing CI/CD flows and work in parallel with existing continuous integration jobs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CBHHRkn7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4zxmiptmlm38oawvqana.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CBHHRkn7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4zxmiptmlm38oawvqana.png" alt="Image description" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Atlantis and Terraform Cloud Differences
&lt;/h2&gt;

&lt;h3&gt;
  
  
  SaaS Offering
&lt;/h3&gt;

&lt;p&gt;Terraform Cloud/Enterprise is a managed service SaaS offering, while Atlantis doesn’t have a similar offering.&lt;/p&gt;

&lt;h3&gt;
  
  
  User Interface
&lt;/h3&gt;

&lt;p&gt;Atlantis uses the same UI as the VCS provider that you are using and allows operators to trigger automation jobs from pull requests. Terraform Cloud/Enterprise comes with its own UI and portal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Open Source Availability
&lt;/h3&gt;

&lt;p&gt;Atlantis is entirely open-source and free, while Terraform Cloud/Enterprise is a proprietary solution, although it offers a free tier.&lt;/p&gt;

&lt;h3&gt;
  
  
  Remote State Backend
&lt;/h3&gt;

&lt;p&gt;As part of its offering, Terraform Cloud provides an excellent backend for the Terraform state, while Atlantis doesn’t.&lt;/p&gt;

&lt;h3&gt;
  
  
  High Availability
&lt;/h3&gt;

&lt;p&gt;Terraform Cloud/Enterprise is designed for scale and offers a highly available setup. Scaling and building highly available setups with Atlantis requires additional effort.&lt;/p&gt;

&lt;h3&gt;
  
  
  Flexibility/Extensibility
&lt;/h3&gt;

&lt;p&gt;Atlantis is very flexible and can integrate with other helper tools easily. Extending Terraform Cloud functionality is a bit more cumbersome.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Features
&lt;/h3&gt;

&lt;p&gt;Atlantis ties every infrastructure change to a pull request, so the pull request history can serve as an audit log. Terraform Cloud offers more elaborate security essentials like RBAC, single sign-on with SAML, and a dedicated audit log.&lt;/p&gt;

&lt;h3&gt;
  
  
  Drift Detection
&lt;/h3&gt;

&lt;p&gt;Terraform Cloud offers drift detection, whereas Atlantis doesn’t by default, although similar functionality can be replicated with additional effort.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost Calculation
&lt;/h3&gt;

&lt;p&gt;Terraform Cloud offers cost estimation, whereas Atlantis doesn’t by default, although similar functionality can be replicated by adding external tooling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Atlantis and Terraform Cloud Synergies
&lt;/h2&gt;

&lt;p&gt;In fact, Atlantis and Terraform Cloud can be used together, since &lt;a href="https://www.runatlantis.io/docs/terraform-cloud.html"&gt;Atlantis integrates seamlessly with Terraform Cloud/Enterprise&lt;/a&gt;. It doesn’t matter which flavor of Terraform Cloud/Enterprise your team uses, since Atlantis works with all of them.&lt;/p&gt;

&lt;p&gt;If that’s up your alley, you can combine the “pull request-based” flow with some of the benefits of a managed solution, like run history, Sentinel policies, the ability to stop runs, secret storage, etc. At that point, though, any generic CI tool would likely do the trick, so there may be little point in maintaining a self-hosted Atlantis installation just to hand the work over to Terraform Cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alternative to Terraform Cloud and Atlantis - Try Spacelift
&lt;/h2&gt;

&lt;p&gt;Terraform Cloud was one of the first players in the space, but it’s no longer the most feature-rich platform. Atlantis is great for small projects, but its missing features and scaling limitations can cause headaches.&lt;/p&gt;

&lt;p&gt;If you’re choosing between Atlantis and Terraform Cloud, why not give &lt;a href="https://spacelift.io/?utm_source=blog&amp;amp;utm_medium=text&amp;amp;utm_id=blogpost&amp;amp;utm_content=%7Batlantis_terraform%7D"&gt;Spacelift&lt;/a&gt; a chance? It is a modern collaborative infrastructure delivery tool with a strong focus on user experience. It works with Terraform, Terragrunt, and many other IaC frameworks, and supports self-hosted on-prem workers, workflow customization, drift detection, and much more.&lt;/p&gt;

&lt;p&gt;For more differences between the tools, I encourage you to check the articles &lt;a href="https://spacelift.io/blog/alternative-to-atlantis"&gt;Spacelift vs. Atlantis&lt;/a&gt; and &lt;a href="https://spacelift.io/terraform-cloud-alternative"&gt;Spacelift vs. Terraform Cloud&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Spacelift provides a more mature way of automating the whole infrastructure provisioning lifecycle. Its &lt;a href="https://docs.spacelift.io/concepts/run/"&gt;flexible and robust workflow&lt;/a&gt; allows teams to get up to speed quickly and collaborate efficiently. Spacelift is highly extensible and will enable teams to enhance the Terraform workflow with custom providers, linters, security tools, and any other custom tooling they see fit.&lt;/p&gt;

&lt;p&gt;Spacelift connects directly to the version control system of your choice and provides a truly GitOps native approach. It can support setups with multiple repositories or massive monorepos and leverages the APIs of the VCS provider to give you visibility. &lt;/p&gt;

&lt;p&gt;Spacelift has built-in CI/CD functionality for developing custom modules, allowing teams to incorporate testing, checks, and linting early into the development phase of modules. Another benefit of using Spacelift is its flexible workflow management. It provides a policy-based process to handle dependencies between projects and deployments with &lt;a href="https://docs.spacelift.io/concepts/policy/trigger-policy"&gt;Trigger Policies&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Spacelift provides a plethora of &lt;a href="https://docs.spacelift.io/concepts/policy/"&gt;Policies&lt;/a&gt; to allow teams to define and automate rules governing the infrastructure as code. By utilizing &lt;a href="https://www.openpolicyagent.org/"&gt;Open Policy Agent&lt;/a&gt;, users can create their own custom policies and ensure the compliance of Terraform resources.&lt;/p&gt;
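
&lt;p&gt;To give a flavor of OPA-based guardrails, here is a small Rego policy sketch evaluated against the JSON representation of a Terraform plan (&lt;code&gt;terraform show -json&lt;/code&gt;); the package name and rule are illustrative and not tied to any specific platform’s input schema:&lt;/p&gt;

```rego
# Deny any plan that deletes a resource. Evaluated against the JSON
# output of `terraform show -json tfplan`.
package terraform.guardrails

deny[msg] {
  rc := input.resource_changes[_]
  rc.change.actions[_] == "delete"
  msg := sprintf("deletion of %s is not allowed", [rc.address])
}
```

&lt;p&gt;A rule like this could, for example, block plans that destroy resources unless explicitly approved.&lt;/p&gt;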

&lt;p&gt;Check out the &lt;a href="https://docs.spacelift.io/getting-started"&gt;Getting Started Guide&lt;/a&gt; and start automating your infrastructure delivery seamlessly!&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Points
&lt;/h2&gt;

&lt;p&gt;We have looked into two infrastructure automation and delivery tools, Atlantis and Terraform Cloud. We analyzed each of them and discussed their strengths and weaknesses, along with a feature comparison. Finally, we saw how a modern collaborative infrastructure delivery tool like Spacelift could be used as an alternative.&lt;/p&gt;

&lt;p&gt;Thank you for reading, and I hope you enjoyed this article as much as I did.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>iac</category>
    </item>
    <item>
      <title>Terraform Deployments Automation and Infrastructure Provisioning</title>
      <dc:creator>Ioannis Moustakis</dc:creator>
      <pubDate>Sat, 03 Sep 2022 09:45:41 +0000</pubDate>
      <link>https://forem.com/spacelift/terraform-deployments-automation-and-infrastructure-provisioning-25bf</link>
      <guid>https://forem.com/spacelift/terraform-deployments-automation-and-infrastructure-provisioning-25bf</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BN59qTxL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tf83486ptu8n2foiggae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BN59qTxL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tf83486ptu8n2foiggae.png" alt="Image description" width="834" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Terraform provides a well-defined and concise way to deploy infrastructure resources and changes. The typical workflow involves manual steps and checks that aren’t easily scalable and depend on human intervention to complete successfully. &lt;/p&gt;

&lt;p&gt;This article will look into different approaches to automating infrastructure provisioning with Terraform and the pros and cons of each one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Typical Terraform Workflow
&lt;/h2&gt;

&lt;p&gt;Terraform is one of the most prominent tools in the Infrastructure as Code space. Part of its success lies in the straightforward and easy-to-operate workflow it provides. If you aren’t familiar with Terraform, check the various &lt;a href="https://spacelift.io/blog/terraform"&gt;Terraform articles on Spacelift’s blog&lt;/a&gt; to get an idea. &lt;/p&gt;

&lt;p&gt;The core Terraform workflow consists of three distinct stages. First, we write the infrastructure as code configuration files representing our environment’s desired state. Next, we review the plan generated from our manifests. After carefully reviewing the changes, we apply the plan to provision infrastructure resources. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TlKHUYBl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7pjbssf32i0y285kxt2e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TlKHUYBl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7pjbssf32i0y285kxt2e.png" alt="Image description" width="880" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Typically, this workflow involves some manual steps that are easily automatable. For example, using the Terraform CLI, we have to run the command &lt;code&gt;terraform plan&lt;/code&gt; to check the effect of our newly prepared configuration files. Similarly, we have to execute the command &lt;code&gt;terraform apply&lt;/code&gt; to propagate the changes to the live environment. &lt;/p&gt;

&lt;p&gt;If you want to have all of the important Terraform commands in one place, take a look at our &lt;a href="https://spacelift.io/blog/terraform-commands-cheat-sheet"&gt;Terraform Cheat Sheet&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For individual contributors and small teams, operating Terraform with the typical workflow and with manual steps to plan and apply the code might work perfectly fine. When teams get bigger, though, and we want to scale Terraform’s usage across organizations, we quickly reach bottlenecks and issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating Infrastructure Provisioning with Terraform
&lt;/h2&gt;

&lt;p&gt;As Terraform’s usage across teams matures, adding some kind of deployment automation is beneficial. Let’s look into some of the approaches to running Terraform in automation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enhance the Terraform Workflow with Custom Tooling
&lt;/h2&gt;

&lt;p&gt;Some teams continue running Terraform locally, but they add custom tooling, pre-commit hooks, and wrappers (e.g., &lt;a href="https://terragrunt.gruntwork.io/"&gt;Terragrunt&lt;/a&gt;) to enhance the core Terraform workflow. There are different wrapper tools to choose from that provide extra functionality, such as keeping your configuration DRY, managing remote state, and managing different environments. Other teams prefer writing their own custom wrapper scripts to prepare Terraform working directories according to some standards. &lt;/p&gt;
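
&lt;p&gt;As one example of what such a wrapper adds, a root &lt;code&gt;terragrunt.hcl&lt;/code&gt; can generate the backend configuration for every child module, keeping it DRY (the bucket, table, and region below are placeholders):&lt;/p&gt;

```hcl
# terragrunt.hcl: generate the S3 backend block for every child module.
# Bucket, lock-table, and region names are illustrative.
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config = {
    bucket         = "acme-terraform-state"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
```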

&lt;p&gt;This semi-manual approach stays close to the core Terraform workflow and allows direct access to output and to running operational commands (e.g., &lt;code&gt;terraform import&lt;/code&gt;). On the other hand, since it involves manual steps, it is error-prone and requires human intervention. Note also that this approach usually requires privileged access to the underlying infrastructure provider as well as to the Terraform state file, which can be a security risk.&lt;/p&gt;

&lt;p&gt;Going one step further, some teams develop their own custom platform on top of Terraform manifests, allowing end users to provision infrastructure resources by tweaking configuration through a UI. &lt;/p&gt;

&lt;p&gt;This approach abstracts any unnecessary level of detail and Terraform-specific knowledge from end users and allows them to manage infrastructure on demand without another team blocking them. On the flip side, this path requires substantial engineering effort to develop a useful platform and adds a maintenance overhead to the platform team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build Infrastructure Provisioning Pipelines
&lt;/h2&gt;

&lt;p&gt;The most common approach for running Terraform in automation is to build infrastructure provisioning pipelines with a CI/CD tool. With this method, teams can enforce best practices, add tests and checks, and integrate the Terraform workflow to any CI/CD tool they are familiar with. &lt;/p&gt;

&lt;p&gt;Building infrastructure delivery pipelines in CI/CD tools brings several challenges, as the core Terraform workflow needs to be adjusted for non-interactive environments. &lt;/p&gt;

&lt;p&gt;The first step for automating Terraform deployments is to embrace Infrastructure as Code and GitOps and store your manifests in the version control system of your choice. Having versioned repositories as the source of truth for automating infrastructure delivery is a core requirement. &lt;/p&gt;

&lt;p&gt;Next, the typical Terraform workflow is adjusted for running in remote environments. Since a run might be triggered in an ephemeral environment, we have to initialize the Terraform working directory, run any custom checks or tests, and produce a plan output for the changing resources. &lt;/p&gt;

&lt;p&gt;A common tactic is integrating these steps into every proposed code change (e.g., pull requests). Once other team members review the proposed changes and find the produced plan acceptable, they approve and merge the pull request. Merging new code to the branch that is considered the source of truth triggers a &lt;code&gt;terraform apply&lt;/code&gt; to provision the latest changes.&lt;/p&gt;
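
&lt;p&gt;As a sketch of this flow, a minimal CI configuration (here GitHub Actions; the action versions and file paths are assumptions) could plan on pull requests and apply on merges to the main branch:&lt;/p&gt;

```yaml
# .github/workflows/terraform.yml -- plan on pull requests, apply on merge.
# Workflow layout and branch name are illustrative.
name: terraform
on:
  pull_request:
  push:
    branches: [main]

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v2
      - run: terraform init -input=false
      - run: terraform plan -input=false -out=tfplan
      - name: Apply on merge to main
        if: github.ref == 'refs/heads/main'
        run: terraform apply -input=false tfplan
```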

&lt;p&gt;Here are some things to consider for building your own automated Terraform delivery pipelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your code should be stored in a version control system.&lt;/li&gt;
&lt;li&gt;Leverage the &lt;code&gt;-input=false&lt;/code&gt; flag to disable interactive prompts. Any necessary input should come from the command line, environment variables, or configuration files.&lt;/li&gt;
&lt;li&gt;Use a backend that supports remote Terraform state, to allow runs on different machines, and state locking, for safety against race conditions. &lt;/li&gt;
&lt;li&gt;Prepare an environment to run Terraform with any dependencies pre-installed. To avoid downloading the provider plugins on every &lt;code&gt;init&lt;/code&gt;, use the &lt;code&gt;-plugin-dir&lt;/code&gt; flag to provide the path to preconfigured plugins on the automation system.&lt;/li&gt;
&lt;li&gt;To change the default backend configuration, e.g., to deploy with different permissions or to different environments, utilize the &lt;code&gt;-backend-config=path&lt;/code&gt; flag when initializing. If you only need to run checks on the Terraform files that don’t require initializing the backend (e.g., &lt;code&gt;terraform validate&lt;/code&gt;), consider using the &lt;code&gt;-backend=false&lt;/code&gt; flag.&lt;/li&gt;
&lt;li&gt;Integrate Terraform formatting, validation, linting, policy checks, and custom testing into the CI/CD pipelines to ensure your code conforms to your organization’s standards. &lt;/li&gt;
&lt;li&gt;CI/CD pipelines usually run on distributed systems. To ensure that we apply the correct plan, we can output the plan to a file and package the whole Terraform working directory after each plan. These artifacts are stored somewhere to be fetched by the apply step, to avoid accidentally applying changes different from the ones reviewed. &lt;/li&gt;
&lt;li&gt;Optionally, use the &lt;code&gt;-auto-approve&lt;/code&gt; flag to apply the changes without human intervention. &lt;/li&gt;
&lt;li&gt;Use environment variables prefixed with &lt;code&gt;TF_VAR_&lt;/code&gt; to pass any necessary values using the CI/CD tool’s mechanisms. &lt;/li&gt;
&lt;li&gt;Set the environment variable &lt;code&gt;TF_IN_AUTOMATION&lt;/code&gt; to indicate that Terraform is running in automation mode. This adjusts the output of some commands to avoid messages that are misleading in an automation environment.&lt;/li&gt;
&lt;/ul&gt;
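
&lt;p&gt;Putting these guidelines together, a non-interactive pipeline script might look like the following sketch (the paths, backend config file, and variable values are placeholders):&lt;/p&gt;

```shell
#!/bin/sh
set -e  # abort the job on the first failing command

# Tell Terraform it is running unattended.
export TF_IN_AUTOMATION=1
# Pass input values through TF_VAR_-prefixed environment variables.
export TF_VAR_environment="staging"

# Initialize non-interactively, with pre-downloaded plugins and a
# per-environment backend configuration.
terraform init -input=false \
  -plugin-dir=/opt/terraform-plugins \
  -backend-config=backends/staging.hcl

# Formatting, validation, and any custom checks.
terraform fmt -check
terraform validate

# Save the plan to a file so the apply step acts on exactly what was reviewed.
terraform plan -input=false -out=tfplan

# In a later, gated pipeline step (a saved plan applies without prompting):
terraform apply -input=false tfplan
```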

&lt;p&gt;Integrating the Terraform workflow in &lt;a href="https://spacelift.io/blog/terraform-in-ci-cd"&gt;CI/CD infrastructure&lt;/a&gt; provisioning pipelines is a great way to automate infrastructure delivery. Running Terraform in CI/CD automation eliminates the need for people’s privileged access, enforces a consistent workflow and way of working, and removes any human intervention.&lt;/p&gt;

&lt;p&gt;On the other hand, we have to account for the realities of running on distributed systems and spend substantial engineering time and effort building a custom pipeline that satisfies our team’s needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Spacelift Can Help You Automate Terraform Deployments
&lt;/h2&gt;

&lt;p&gt;A more robust approach to automating your Terraform workflows end-to-end would be to use &lt;a href="https://spacelift.io/?utm_source=blog&amp;amp;utm_medium=text&amp;amp;utm_id=blogpost&amp;amp;utm_content=%7Bterraform_automation%7D"&gt;Spacelift&lt;/a&gt;, a collaborative infrastructure delivery tool. Spacelift provides a more mature way of automating the whole infrastructure provisioning lifecycle. Its &lt;a href="https://docs.spacelift.io/concepts/run/"&gt;flexible and robust workflow&lt;/a&gt; allows teams to get up to speed quickly and collaborate efficiently.&lt;/p&gt;

&lt;p&gt;Spacelift connects directly to the version control system of your choice and provides a truly GitOps native approach. It can support setups with multiple repositories or massive monorepos and leverages the APIs of the VCS provider to give you visibility. &lt;/p&gt;

&lt;p&gt;The Spacelift runners are fully customizable Docker containers. This allows teams to enhance the Terraform workflow with custom providers, linters, security tools, and any other custom tooling you might see fit. &lt;/p&gt;

&lt;p&gt;Spacelift has built-in CI/CD functionality for developing custom modules, allowing teams to incorporate testing, checks, and linting early into the development phase of modules. Another benefit of using Spacelift is its flexible workflow management. It provides a policy-based process to handle dependencies between projects and deployments with &lt;a href="https://docs.spacelift.io/concepts/policy/trigger-policy"&gt;Trigger Policies&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Sometimes, the actual state of live environments drifts from the desired state, a concept known as configuration drift. Spacelift can assist you in &lt;a href="https://docs.spacelift.io/concepts/stack/drift-detection"&gt;automatically detecting and, if desired, reconciling configuration drift&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Spacelift provides a plethora of &lt;a href="https://docs.spacelift.io/concepts/policy/"&gt;Policies&lt;/a&gt; to allow teams to define and automate rules governing the infrastructure as code. By utilizing &lt;a href="https://www.openpolicyagent.org/"&gt;Open Policy Agent&lt;/a&gt;, users can create their own custom policies and ensure the compliance of Terraform resources.&lt;/p&gt;

&lt;p&gt;Check out the &lt;a href="https://docs.spacelift.io/getting-started"&gt;Getting Started Guide&lt;/a&gt; and start automating your infrastructure delivery easily!&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Points
&lt;/h2&gt;

&lt;p&gt;In this blog post, we discussed different approaches and strategies for automating Terraform deployments and provisioning infrastructure. We looked into the typical Terraform workflow and saw how we can enhance it with orchestration tools. Finally, we saw how Spacelift can help bring our Terraform automation to the next level.&lt;/p&gt;

&lt;p&gt;Thank you for reading, and I hope you enjoyed this article as much as I did.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>infrastructureascode</category>
      <category>automation</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Terraform vs. Kubernetes: Key Differences and Comparison</title>
      <dc:creator>Ioannis Moustakis</dc:creator>
      <pubDate>Sat, 03 Sep 2022 09:28:27 +0000</pubDate>
      <link>https://forem.com/spacelift/terraform-vs-kubernetes-key-differences-and-comparison-2m7</link>
      <guid>https://forem.com/spacelift/terraform-vs-kubernetes-key-differences-and-comparison-2m7</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1fHLv9RJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cdx5ur5o3blg178onbe3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1fHLv9RJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cdx5ur5o3blg178onbe3.png" alt="Image description" width="834" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This article will compare two of the most dominant tools in the cloud infrastructure space, Terraform and Kubernetes. The two tools share some similarities but are built to serve different purposes. Terraform focuses on infrastructure provisioning and operates in the Infrastructure as Code space. Kubernetes focuses on running container workloads and operates in the container orchestration space.&lt;/p&gt;

&lt;p&gt;We will briefly take a look at each one of them and discuss their similarities and differences.&lt;/p&gt;

&lt;p&gt;To learn more about these two foundational cloud infrastructure technologies, check the multiple tutorials on Spacelift’s blog around &lt;a href="https://spacelift.io/blog/kubernetes"&gt;Kubernetes&lt;/a&gt; and &lt;a href="https://spacelift.io/blog/terraform"&gt;Terraform&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform in a Nutshell
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://spacelift.io/blog/what-is-terraform"&gt;Terraform&lt;/a&gt; is an open-source software tool that allows us to safely and predictably manage infrastructure at scale using cloud-agnostic and infrastructure as code principles. It is a powerful tool developed by Hashicorp that enables infrastructure provisioning both on the cloud and on-premises. &lt;/p&gt;

&lt;p&gt;Terraform configurations are written in a declarative language, the HashiCorp Configuration Language (HCL), which facilitates the automation of infrastructure management in any environment. Terraform allows IT professionals to collaborate and perform changes safely in cloud environments, scaling them on demand according to business needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://spacelift.io/blog/what-are-terraform-modules-and-how-do-they-work"&gt;Modules&lt;/a&gt; provide excellent reusability and code-sharing opportunities to boost the collaboration and productivity of teams operating on the cloud. &lt;a href="https://spacelift.io/blog/terraform-providers"&gt;Providers&lt;/a&gt; are plugins that offer integration and interaction with different APIs and are one of the main ways to extend Terraform’s functionality. &lt;/p&gt;

&lt;p&gt;Terraform keeps an internal &lt;a href="https://www.terraform.io/language/state"&gt;state&lt;/a&gt; of the managed infrastructure, which represents resources, configuration, metadata, and their relationships. The state is actively maintained by Terraform and used to create plans, track changes, and enable modifications of infrastructure environments. As a best practice, the state should be stored remotely to enable teamwork and collaboration.&lt;/p&gt;
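&lt;p&gt;As an illustration, a remote backend can be configured in a few lines of HCL; the S3 bucket and key below are hypothetical placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    # Hypothetical bucket and key for the shared state file
    bucket = "example-terraform-state"
    key    = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;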

&lt;p&gt;The core Terraform workflow consists of three stages. First, we write the infrastructure as code configuration files representing our environment’s desired state. Next, we review the plan Terraform generates from our manifests. Finally, after carefully reviewing the changes, we apply the plan to provision the infrastructure resources.&lt;/p&gt;
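&lt;p&gt;The three stages map directly to Terraform’s CLI commands. A minimal sketch of one cycle, after the configuration files are written, looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initialize the working directory (providers, backend)
terraform init

# Generate and review a plan of the pending changes
terraform plan -out=tfplan

# Apply the reviewed plan to provision the resources
terraform apply tfplan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;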

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X3P6-IBs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sus12w1gxz5u31cuq6c4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X3P6-IBs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sus12w1gxz5u31cuq6c4.png" alt="Image description" width="880" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes in a Nutshell
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; (K8s) is an open-source system for container orchestration, automating deployments, and managing containerized apps. Its powerful orchestration system enables applications to scale seamlessly and achieve high availability. It has been designed and developed by Google, leveraging its vast experience in running and maintaining critical workloads in production.&lt;/p&gt;

&lt;p&gt;Kubernetes strives to be cloud agnostic at its core, providing great flexibility in running workloads across cloud and on-premises environments. Additionally, it is designed with extensibility in mind, providing the option to add features and custom tooling to clusters easily.  &lt;/p&gt;

&lt;p&gt;One of its main benefits is the self-healing capabilities it provides. Containers that fail are automatically restarted and rescheduled, nodes can be configured to be automatically replaced, and traffic is served only by healthy components based on health checks.  &lt;/p&gt;

&lt;p&gt;Rollouts are handled progressively, and Kubernetes provides smart mechanisms to monitor application health during deployments. If a new version fails its health checks, Kubernetes halts the rollout, and reverting to the previous version takes a single command. Keeping the application running while rolling out new software versions has been a hot topic in the Kubernetes ecosystem in recent years, with many possible deployment strategies.&lt;/p&gt;

&lt;p&gt;Kubernetes handles service discovery and load-balances traffic between a service’s pods natively, without the need for complex external solutions. It has extensible built-in mechanisms to manage configuration and secrets for your applications. Scaling your applications has never been easier: Kubernetes provides autoscaling options as well as scaling through commands or a UI.&lt;/p&gt;
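&lt;p&gt;For example, scaling through commands is a one-liner with kubectl; the deployment name below is a hypothetical placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Manually scale a deployment to five replicas
kubectl scale deployment/my-app --replicas=5

# Or let Kubernetes autoscale between 2 and 10 replicas based on CPU usage
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;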

&lt;p&gt;Kubernetes provides a cluster of nodes: a group of worker machines that run containerized applications. Each node hosts pods that hold the application workload containers. The brain of the whole system is the control plane, which consists of several components that manage the worker nodes and pods and guarantee operational continuity.&lt;/p&gt;

&lt;p&gt;The API server is the component that exposes the Kubernetes API and operates as the front end of the control plane, handling all the communication between the other parts. The etcd component stores all cluster data and state. The scheduler assigns pods to nodes and makes all workload scheduling decisions. The controller manager runs the different controller processes that ensure the cluster’s actual state matches its desired state. The cloud controller manager integrates Kubernetes clusters with external cloud providers, embedding their logic and linking the Kubernetes API with the cloud provider’s API. On each node, the kubelet is the agent responsible for running containers in pods, and kube-proxy is the component that adds the networking capabilities needed for communication between pods and nodes.&lt;/p&gt;

&lt;p&gt;Check out this article to learn more about the &lt;a href="https://spacelift.io/blog/kubernetes-cluster"&gt;Key Kubernetes Cluster Components&lt;/a&gt;.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bk-DCKQL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r9xf72z87db4r1weznle.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bk-DCKQL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r9xf72z87db4r1weznle.png" alt="Image description" width="880" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes and Terraform Differences
&lt;/h2&gt;

&lt;p&gt;These two modern technologies have many similarities but also fundamental differences. Let’s look into some of them in more detail.&lt;/p&gt;

&lt;h3&gt;
  
  
  1) Area of Focus
&lt;/h3&gt;

&lt;p&gt;First and foremost, Terraform and Kubernetes have different purposes and try to solve different problems. Terraform focuses on provisioning infrastructure components and targets the Infrastructure as Code space. Kubernetes aims to enable us to run container workloads and targets the container orchestration space. &lt;/p&gt;

&lt;h3&gt;
  
  
  2) Configuration Language and CLI
&lt;/h3&gt;

&lt;p&gt;Terraform manifests are written in HCL, while Kubernetes manifests are written in YAML or JSON. Each tool has its own command-line utility and tool-specific internals to understand before becoming productive.&lt;/p&gt;

&lt;h3&gt;
  
  
  3) Tool workflow
&lt;/h3&gt;

&lt;p&gt;The Terraform workflow is generally considered easy to understand and provides a welcoming experience for new users. On the other hand, to run applications effectively on Kubernetes, one has to understand many cluster components and internal mechanics, so it usually takes users more time to get up to speed with Kubernetes.&lt;/p&gt;

&lt;h3&gt;
  
  
  4) Configuration Drift &amp;amp; Planning Phase
&lt;/h3&gt;

&lt;p&gt;Terraform provides a native way to detect and inform you about configuration drift and unwanted changes by leveraging the planning phase of the typical workflow. In contrast, Kubernetes doesn’t support this functionality out of the box.&lt;/p&gt;
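&lt;p&gt;From the command line, drift can be surfaced with the plan command’s detailed exit code option, which is handy in scripts and CI jobs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Compare the refreshed state against the configuration.
# Exit code 0 = no changes, 1 = error, 2 = drift or pending changes.
terraform plan -detailed-exitcode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;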

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QWAauphJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9p58g0jgrozifzyae3bl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QWAauphJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9p58g0jgrozifzyae3bl.png" alt="Image description" width="880" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes and Terraform Similarities
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1) DevOps Tools
&lt;/h3&gt;

&lt;p&gt;Both tools operate in the DevOps space and are typically set up and configured by the same type of IT practitioners: Site Reliability, DevOps, and Cloud engineers.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) Open Source &amp;amp; Cloud Agnostic
&lt;/h3&gt;

&lt;p&gt;Both tools are open source, with various contributions from their online communities. They also take a similar approach in striving to be as cloud-, platform-, and API-agnostic as possible to accommodate workloads across different environments. Even though they keep the core of their projects agnostic to external providers, both tools have mature, actively maintained integrations with the most common cloud providers.&lt;/p&gt;

&lt;h3&gt;
  
  
  3) Declarative Configuration
&lt;/h3&gt;

&lt;p&gt;Although they use different languages, Terraform and Kubernetes take a conceptually similar approach to defining configuration: manifests in both tools are written declaratively.&lt;/p&gt;

&lt;h3&gt;
  
  
  4) State Management
&lt;/h3&gt;

&lt;p&gt;The notion of state exists in both tools, although it is implemented differently. Both Terraform and Kubernetes apply logic to reconcile the desired state, captured in declarative configuration files, with the actual running state.&lt;/p&gt;

&lt;h3&gt;
  
  
  5) Extensibility
&lt;/h3&gt;

&lt;p&gt;Both tools are highly extensible by leveraging external plugins, connecting to external APIs, or defining custom resources if necessary. &lt;/p&gt;

&lt;h3&gt;
  
  
  6) Well-Suited for Scale
&lt;/h3&gt;

&lt;p&gt;Terraform and Kubernetes are battle-tested technologies that can support massive scale, since both are designed and architected with the scaling considerations of modern cloud-native environments in mind.&lt;/p&gt;

&lt;h3&gt;
  
  
  7) CI/CD Compatibility
&lt;/h3&gt;

&lt;p&gt;Since both tools offer easily automatable workflows, they can be integrated into CI/CD pipelines to automate their lifecycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes and Terraform Synergies
&lt;/h2&gt;

&lt;p&gt;Putting together everything we have discussed, it becomes clear that Kubernetes and Terraform complement each other: they operate at two different levels and can be utilized in parallel.&lt;/p&gt;

&lt;p&gt;A typical model that cloud practitioners adopt is to use Terraform to provision infrastructure resources (e.g. Kubernetes clusters) and use Kubernetes to manage the containerized apps that run on top of the clusters. &lt;/p&gt;

&lt;p&gt;Terraform simplifies and standardizes the complex task of provisioning Kubernetes clusters, enabling a unified, declarative flow for creating clusters across providers that many prefer over provider-specific command-line utilities. This approach works great, but users must maintain separate flows to manage infrastructure and application resources.&lt;/p&gt;

&lt;p&gt;Another approach is to use Terraform to manage Kubernetes-specific application components as well. This model has the advantage of bringing the Terraform workflow to Kubernetes components: IT operators can detect configuration drift on Kubernetes and manage infrastructure and application resources with the same workflow and configuration language.&lt;/p&gt;
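&lt;p&gt;As a minimal sketch of this second model, the Terraform Kubernetes provider lets you declare cluster objects in HCL; the kubeconfig path and namespace name below are illustrative assumptions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "kubernetes" {
  # Assumes a local kubeconfig for an existing cluster
  config_path = "~/.kube/config"
}

# A Kubernetes namespace managed through the Terraform workflow
resource "kubernetes_namespace" "example" {
  metadata {
    name = "example-app"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;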

&lt;p&gt;This approach has a significant disadvantage: Terraform requires a well-defined schema for each managed resource, so each Kubernetes resource needs to be translated into a Terraform schema before it can be managed. This dependency can make maintaining Kubernetes resources through Terraform cumbersome.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes and Terraform with Spacelift
&lt;/h2&gt;

&lt;p&gt;Spacelift supports both &lt;a href="https://docs.spacelift.io/vendors/terraform"&gt;Terraform&lt;/a&gt; and &lt;a href="https://docs.spacelift.io/vendors/kubernetes"&gt;Kubernetes&lt;/a&gt; and enables users to create &lt;a href="https://docs.spacelift.io/concepts/stack"&gt;stacks&lt;/a&gt; based on them. Leveraging Spacelift, you can build CI/CD pipelines to combine them and get the best of each tool. This way, you will use a single tool to manage your Terraform and Kubernetes resources lifecycle, allow your teams to collaborate easily, and add some necessary security controls to your workflows.&lt;/p&gt;

&lt;p&gt;You could, for example, deploy Kubernetes clusters with Terraform stacks and then, on separate Kubernetes stacks, deploy your containerized applications to your clusters. With this approach, you can easily integrate drift detection into your Kubernetes stacks and enable your teams to &lt;a href="https://docs.spacelift.io/concepts/stack/organizing-stacks"&gt;manage all your stacks from a single place&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;To take this one step further, you could add &lt;a href="https://docs.spacelift.io/concepts/policy"&gt;custom policies&lt;/a&gt; to harden the security and reliability of your configurations and deployments. Spacelift provides different types of policies and workflows easily customizable to fit every use case. You could, for instance, add &lt;a href="https://docs.spacelift.io/concepts/policy/terraform-plan-policy"&gt;plan policies&lt;/a&gt; to restrict or warn about security or compliance violations or &lt;a href="https://docs.spacelift.io/concepts/policy/approval-policy"&gt;approval policies&lt;/a&gt; to add an approval step during deployments. The possibilities are endless with Spacelift since it provides a great way to blend Terraform and Kubernetes and enhance their capabilities with extra functionality. &lt;/p&gt;

&lt;p&gt;Take a look at the &lt;a href="https://docs.spacelift.io/getting-started"&gt;Getting Started Guide&lt;/a&gt; to lift off with Spacelift!&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Points
&lt;/h2&gt;

&lt;p&gt;We delved into two of the most used modern DevOps tools, Kubernetes and Terraform. We discovered what makes each of them appealing and what functionalities they provide to IT operators and developers. We discussed their similarities, differences, and synergies and explored ways to combine them with Spacelift.&lt;/p&gt;

&lt;p&gt;Thank you all for reading, and I hope you enjoyed this as much as I did.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>containers</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>Ansible Modules – How To Use Them Efficiently (Examples)</title>
      <dc:creator>Ioannis Moustakis</dc:creator>
      <pubDate>Fri, 29 Jul 2022 17:16:00 +0000</pubDate>
      <link>https://forem.com/spacelift/ansible-modules-how-to-use-them-efficiently-examples-5gl6</link>
      <guid>https://forem.com/spacelift/ansible-modules-how-to-use-them-efficiently-examples-5gl6</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhytygovm6whkqwt1c8q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhytygovm6whkqwt1c8q.png" alt=" " width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This article delves into Ansible modules, one of the core building blocks of Ansible. In this blog post, we will examine the purpose and usage of modules, along with information on how to build them and best practices. &lt;/p&gt;

&lt;p&gt;If you are interested in other Ansible concepts, &lt;a href="https://spacelift.io/blog/ansible" rel="noopener noreferrer"&gt;these Ansible tutorials&lt;/a&gt; posted on Spacelift’s blog might be helpful for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Ansible Modules?
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Modules&lt;/em&gt; represent distinct units of code, each one with specific functionality. Basically, they are standalone scripts written for a particular job and are used in tasks as their main functional layer.&lt;/p&gt;

&lt;p&gt;We build Ansible modules to abstract complexity and provide end-users with an easier way to execute their automation tasks without needing all the details. This way, some of the cognitive load of more complex tasks is abstracted away from Ansible users by leveraging the appropriate modules. &lt;/p&gt;

&lt;p&gt;Here’s an example of a task using the apt package manager module to install a specific version of &lt;em&gt;Nginx&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: "Install Nginx to version {{ nginx_version }} with apt module"
  ansible.builtin.apt:
    name: "nginx={{ nginx_version }}"
    state: present
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Modules can also be executed directly from the command line. Here’s an example of running the &lt;em&gt;ping&lt;/em&gt; module against all hosts in the &lt;em&gt;databases&lt;/em&gt; group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible databases -m ping
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Working With Ansible Modules
&lt;/h2&gt;

&lt;p&gt;A well-designed module provides a predictable and well-defined interface that accepts arguments that make sense and are consistent with other modules. Modules take some arguments as input and return values in JSON format after execution. &lt;/p&gt;

&lt;p&gt;Ansible modules should follow &lt;a href="https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html#term-Idempotency" rel="noopener noreferrer"&gt;idempotency principles&lt;/a&gt;, which means that consecutive runs of the same module should have the same effect if nothing else changes. Well-designed modules detect if the current and desired state match and avoid making changes if that’s the case. &lt;/p&gt;

&lt;p&gt;We can utilize handlers to control the execution flow of modules and tasks in a &lt;a href="https://spacelift.io/blog/ansible-playbooks" rel="noopener noreferrer"&gt;playbook&lt;/a&gt;. Modules can trigger additional downstream modules and tasks by notifying specific handlers.&lt;/p&gt;

&lt;p&gt;As mentioned, modules return data structures in JSON format. We can store these return values in variables and use them in other tasks or display them on the console. Look at the &lt;a href="https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html" rel="noopener noreferrer"&gt;common return values&lt;/a&gt; for all modules to get an idea.&lt;/p&gt;
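&lt;p&gt;For instance, the &lt;em&gt;register&lt;/em&gt; keyword stores a module’s JSON return values in a variable that later tasks can use; the file paths below are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Copy a file and capture the module's return values
  ansible.builtin.copy:
    src: /example_directory/test
    dest: /target_directory/test
  register: copy_result

- name: Display the returned JSON structure on the console
  ansible.builtin.debug:
    var: copy_result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;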

&lt;p&gt;For custom modules, the return values should be documented along with other useful information for the module. The command-line tool &lt;code&gt;ansible-doc&lt;/code&gt; displays this information. &lt;/p&gt;

&lt;p&gt;Here’s an example output of running the &lt;em&gt;ansible-doc&lt;/em&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-doc apt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxt56hdle3gykvc2lrrsa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxt56hdle3gykvc2lrrsa.png" alt=" " width="635" height="1117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the latest versions of Ansible, most modules are part of &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/collections_using.html" rel="noopener noreferrer"&gt;collections&lt;/a&gt;, a distribution format that includes roles, modules, plugins, and playbooks. Many of the core modules we use extensively are part of the &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/index.html#plugins-in-ansible-builtin" rel="noopener noreferrer"&gt;Ansible.Builtin&lt;/a&gt; collection. To find other available modules, have a look at the &lt;a href="https://docs.ansible.com/ansible/latest/collections/index.html#list-of-collections" rel="noopener noreferrer"&gt;Collection docs&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  12 Useful &amp;amp; Common Ansible Modules
&lt;/h2&gt;

&lt;p&gt;In this part, we explore some of the most used and helpful modules, and for each, we provide a working example. The modules in this list are picked based on their popularity within the Ansible community and functionality to perform everyday automation tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Package Manager Modules yum &amp;amp; apt
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_module.html" rel="noopener noreferrer"&gt;apt module&lt;/a&gt; is part of &lt;em&gt;ansible-core&lt;/em&gt; and manages apt packages for Debian/Ubuntu Linux distributions. Here’s an example that updates the repository cache and updates the &lt;em&gt;Nginx&lt;/em&gt; package to the latest version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Update the repository cache and update package "nginx" to latest version
  ansible.builtin.apt:
    name: nginx
    state: latest
    update_cache: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/yum_module.html" rel="noopener noreferrer"&gt;yum module&lt;/a&gt; is also part of ansible-core and manages packages with yum for RHEL/Centos/Fedora Linux distributions. Here’s the same example as above with the &lt;em&gt;yum&lt;/em&gt; module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Update the repository cache and update package "nginx" to latest version
  ansible.builtin.yum:
    name: nginx
    state: latest
    update_cache: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Service Module
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/service_module.html" rel="noopener noreferrer"&gt;service module&lt;/a&gt; controls services on remote hosts and can leverage different init systems depending on their availability in a system. This module provides a nice abstraction layer for underlying service manager modules. Here’s an example of restarting the &lt;em&gt;docker&lt;/em&gt; service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Restart docker service
  ansible.builtin.service:
    name: docker
    state: restarted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  File Module
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/file_module.html" rel="noopener noreferrer"&gt;file module&lt;/a&gt; handles operations to files, symlinks, and directories. Here’s an example of using this module to create a directory with specific permissions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Create the directory "/etc/test" if it doesnt exist and set permissions
  ansible.builtin.file:
    path: /etc/test
    state: directory
    mode: '0750'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Copy Module
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/copy_module.html" rel="noopener noreferrer"&gt;copy module&lt;/a&gt; copies files to the remote machine and handles file transfers or moves within a remote system. Here’s an example of copying a file to the remote machine with permissions set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Copy file with owner and permissions
  ansible.builtin.copy:
    src: /example_directory/test
    dest: /target_directory/test
    owner: joe
    group: admins
    mode: '0755'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Template Module
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html" rel="noopener noreferrer"&gt;template module&lt;/a&gt; assists us to template files out to target hosts by leveraging the &lt;a href="https://jinja.palletsprojects.com/en/3.1.x/" rel="noopener noreferrer"&gt;Jinja2 templating language&lt;/a&gt;. Here’s an example of using a template file and some set Ansible variables to generate an Nginx configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Copy and template the Nginx configuration file to the host
  ansible.builtin.template:
    src: templates/nginx.conf.j2
    dest: /etc/nginx/sites-available/default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Lineinfile &amp;amp; Blockinfile Modules
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/lineinfile_module.html" rel="noopener noreferrer"&gt;lineinfile module&lt;/a&gt; adds, replaces, or ensures that a particular line exists in a file. It’s pretty common to use this module when we need to update a single line in configuration files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Add a line to a file if it doesnt exist
  ansible.builtin.lineinfile:
    path: /tmp/example_file
    line: "This line must exist in the file"
    state: present
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/blockinfile_module.html" rel="noopener noreferrer"&gt;blockinfile module &lt;/a&gt;inserts, updates, or removes a block of lines from a file. It has the same functionality as the previous module but is used when you want to manipulate multi-line text blocks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Add a block of config options at the end of the file if it doesn’t exist
  ansible.builtin.blockinfile:
    path: /etc/example_directory/example.conf
    block: |
      feature1_enabled: true
      feature2_enabled: false
      feature3_enabled: true
    insertafter: EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Cron Module
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/cron_module.html" rel="noopener noreferrer"&gt;cron module&lt;/a&gt; manages crontab entries and environment variables entries on remote hosts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Run daily DB backup script at 00:00
  ansible.builtin.cron:
    name: "Run daily DB backup script at 00:00"
    minute: "0"
    hour: "0"
    job: "/usr/local/bin/db_backup_script.sh &amp;gt; /var/log/db_backup_script.sh.log 2&amp;gt;&amp;amp;1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Wait_for Module
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/wait_for_module.html" rel="noopener noreferrer"&gt;wait_for module&lt;/a&gt; provides a way to stop the execution of plays and wait for conditions, amount of time to pass, ports to become open, processes to finish, files to be available, strings to exist in files, etc.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Wait until a string is in the file before continuing
  ansible.builtin.wait_for:
    path: /tmp/example_file
    search_regex: "String exists, continue"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Command &amp;amp; Shell Modules
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/command_module.html" rel="noopener noreferrer"&gt;command&lt;/a&gt; and &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/shell_module.html" rel="noopener noreferrer"&gt;shell&lt;/a&gt; modules execute commands on remote nodes. Their main difference is that the &lt;em&gt;command&lt;/em&gt; module bypasses the local shell, and consequently, variables like $HOSTNAME or $HOME aren’t available, and operations like “&amp;lt;”, “&amp;amp;” don’t work. If you need these features, you have to use the &lt;em&gt;shell&lt;/em&gt; module. &lt;/p&gt;

&lt;p&gt;On the other hand, the remote environment won’t affect the &lt;em&gt;command&lt;/em&gt; module, so its outcome is considered more predictable and secure. &lt;/p&gt;

&lt;p&gt;It’s generally preferable to use specialized Ansible modules to perform tasks instead of &lt;em&gt;command&lt;/em&gt; and &lt;em&gt;shell&lt;/em&gt;. There are cases, though, where no specialized module provides the functionality you need, and you will have to fall back on one of these two. Use them with care, and always check whether a dedicated module can serve you better before relying on them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Execute a script in remote shell and capture the output to file
  ansible.builtin.shell: script.sh &amp;gt;&amp;gt; script_output.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
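&lt;p&gt;When no shell features are required, the &lt;em&gt;command&lt;/em&gt; module combined with its &lt;em&gt;creates&lt;/em&gt; argument keeps the task idempotent, skipping the run once the file exists. A sketch, assuming a hypothetical setup script and marker file:&lt;br&gt;
&lt;/p&gt;

```yaml
- name: Run setup script only if its marker file doesn't exist yet
  ansible.builtin.command: /usr/local/bin/setup.sh
  args:
    creates: /etc/myapp/.setup_done
```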



&lt;h2&gt;
  
  
  Building Ansible Modules
&lt;/h2&gt;

&lt;p&gt;For advanced users, there is always the option to develop custom modules when existing ones can’t satisfy their needs. Since modules only need to return JSON data, they can be written in any programming language. &lt;/p&gt;

&lt;p&gt;Before jumping into module development, ensure that a similar module doesn’t already exist to avoid unnecessary work. Additionally, you might be able to combine existing modules to achieve the functionality you need, for example by creating a role that leverages them. Another option is to use &lt;a href="https://docs.ansible.com/ansible/latest/plugins/plugins.html" rel="noopener noreferrer"&gt;plugins&lt;/a&gt; to enhance Ansible’s basic functionality with logic and new features accessible to all modules.&lt;/p&gt;

&lt;p&gt;Next, we will go through an example of creating a custom module that takes as input a string that represents an epoch timestamp and converts it to its human-readable equivalent of type &lt;a href="https://docs.python.org/3/library/datetime.html" rel="noopener noreferrer"&gt;datetime&lt;/a&gt; in Python. You can find the code for this tutorial &lt;a href="https://github.com/spacelift-io-blog-posts/Blog-Technical-Content/tree/master/ansible-modules" rel="noopener noreferrer"&gt;on this repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, let’s create a &lt;code&gt;library&lt;/code&gt; directory at the top of our repository to hold our custom module. When a playbook has a &lt;code&gt;./library&lt;/code&gt; directory relative to its YAML file, Ansible automatically adds the custom modules inside it to the module path. This way, we can group custom modules with their related playbooks. &lt;/p&gt;
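&lt;p&gt;The resulting layout looks like this:&lt;br&gt;
&lt;/p&gt;

```
.
├── library/
│   └── epoch_converter.py
└── test_custom_module.yml
```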

&lt;p&gt;We create our custom Python module &lt;code&gt;epoch_converter.py&lt;/code&gt; inside the library directory. This simple module takes as input the argument &lt;code&gt;epoch_timestamp&lt;/code&gt; and converts it to datetime type. We use another argument, &lt;code&gt;state_changed&lt;/code&gt;, to simulate a change in the target system by this module.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;library/epoch_converter.py&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;#!/usr/bin/python
&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;__future__&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;absolute_import&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;division&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;print_function&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;__metaclass__&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;

&lt;span class="n"&gt;DOCUMENTATION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'''&lt;/span&gt;&lt;span class="s"&gt;
---
module: epoch_converter

short_description: This module converts an epoch timestamp to human-readable date.

# If this is part of a collection, you need to use semantic versioning,
# i.e. the version is of the form &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2.5.0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; and not &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2.4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.
version_added: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.0.0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;

description: This module takes a string that represents a Unix epoch timestamp and displays its human-readable date equivalent.

options:
   epoch_timestamp:
       description: This is the string that represents a Unix epoch timestamp.
       required: true
       type: str
   state_changed:
       description: This string simulates a modification of the target&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s state.
       required: false
       type: bool

author:
   - Ioannis Moustakis (@Imoustak)
&lt;/span&gt;&lt;span class="sh"&gt;'''&lt;/span&gt;

&lt;span class="n"&gt;EXAMPLES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'''&lt;/span&gt;&lt;span class="s"&gt;
# Convert an epoch timestamp
- name: Convert an epoch timestamp
  epoch_converter:
    epoch_timestamp: 1657382362
&lt;/span&gt;&lt;span class="sh"&gt;'''&lt;/span&gt;

&lt;span class="n"&gt;RETURN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'''&lt;/span&gt;&lt;span class="s"&gt;
# These are examples of possible return values, and in general should use other names for return values.
human_readable_date:
   description: The human-readable equivalent of the epoch timestamp input.
   type: str
   returned: always
   sample: &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2022-07-09T17:59:22&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;
original_timestamp:
   description: The original epoch timestamp input.
   type: str
   returned: always
   sample: &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;16573823622&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;

&lt;/span&gt;&lt;span class="sh"&gt;'''&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;ansible.module_utils.basic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AnsibleModule&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run_module&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
   &lt;span class="c1"&gt;# define available arguments/parameters a user can pass to the module
&lt;/span&gt;   &lt;span class="n"&gt;module_args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
       &lt;span class="n"&gt;epoch_timestamp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;str&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;required&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
       &lt;span class="n"&gt;state_changed&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;bool&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;required&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="p"&gt;)&lt;/span&gt;

   &lt;span class="c1"&gt;# seed the result dict in the object
&lt;/span&gt;   &lt;span class="c1"&gt;# we primarily care about changed and state
&lt;/span&gt;   &lt;span class="c1"&gt;# changed is if this module effectively modified the target
&lt;/span&gt;   &lt;span class="c1"&gt;# state will include any data that you want your module to pass back
&lt;/span&gt;   &lt;span class="c1"&gt;# for consumption, for example, in a subsequent task
&lt;/span&gt;   &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
       &lt;span class="n"&gt;changed&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;human_readable_date&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;original_timestamp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;''&lt;/span&gt;
   &lt;span class="p"&gt;)&lt;/span&gt;

   &lt;span class="c1"&gt;# the AnsibleModule object will be our abstraction working with Ansible
&lt;/span&gt;   &lt;span class="c1"&gt;# this includes instantiation, a couple of common attr would be the
&lt;/span&gt;   &lt;span class="c1"&gt;# args/params passed to the execution, as well as if the module
&lt;/span&gt;   &lt;span class="c1"&gt;# supports check mode
&lt;/span&gt;   &lt;span class="n"&gt;module&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AnsibleModule&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
       &lt;span class="n"&gt;argument_spec&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;module_args&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;supports_check_mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
   &lt;span class="p"&gt;)&lt;/span&gt;

   &lt;span class="c1"&gt;# if the user is working with this module in only check mode we do not
&lt;/span&gt;   &lt;span class="c1"&gt;# want to make any changes to the environment, just return the current
&lt;/span&gt;   &lt;span class="c1"&gt;# state with no modifications
&lt;/span&gt;   &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;check_mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
       &lt;span class="n"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exit_json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

   &lt;span class="c1"&gt;# manipulate or modify the state as needed (this is going to be the
&lt;/span&gt;   &lt;span class="c1"&gt;# part where your module will do what it needs to do)
&lt;/span&gt;   &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;original_timestamp&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;epoch_timestamp&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
   &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;human_readable_date&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromtimestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;epoch_timestamp&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;

   &lt;span class="c1"&gt;# use whatever logic you need to determine whether or not this module
&lt;/span&gt;   &lt;span class="c1"&gt;# made any modifications to your target
&lt;/span&gt;   &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;state_changed&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
       &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;changed&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;

   &lt;span class="c1"&gt;# during the execution of the module, if there is an exception or a
&lt;/span&gt;   &lt;span class="c1"&gt;# conditional state that effectively causes a failure, run
&lt;/span&gt;   &lt;span class="c1"&gt;# AnsibleModule.fail_json() to pass in the message and the result
&lt;/span&gt;   &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;epoch_timestamp&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;fail&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
       &lt;span class="n"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fail_json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;You requested this to fail&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

   &lt;span class="c1"&gt;# in the event of a successful module execution, you will want to
&lt;/span&gt;   &lt;span class="c1"&gt;# simple AnsibleModule.exit_json(), passing the key/value results
&lt;/span&gt;   &lt;span class="n"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exit_json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
   &lt;span class="nf"&gt;run_module&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;


&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
   &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
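&lt;p&gt;The core conversion the module performs can be sanity-checked in isolation. Note that &lt;code&gt;datetime.fromtimestamp&lt;/code&gt; uses the host’s local timezone by default; the sketch below pins the result to UTC to make it deterministic (which is why it differs from the sample in the DOCUMENTATION block, rendered in the author’s local timezone):&lt;br&gt;
&lt;/p&gt;

```python
from datetime import datetime, timezone

def epoch_to_datetime(epoch_timestamp: str) -> datetime:
    # Mirrors the module's core logic: parse the string argument and
    # convert it to a datetime. UTC is pinned here for reproducibility;
    # the module itself uses the managed host's local timezone.
    return datetime.fromtimestamp(int(epoch_timestamp), tz=timezone.utc)

print(epoch_to_datetime("1657382362").isoformat())
# 2022-07-09T15:59:22+00:00
```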



&lt;p&gt;To test our module, let’s create a &lt;code&gt;test_custom_module.yml&lt;/code&gt; playbook in the same directory as our &lt;code&gt;library&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;test_custom_module.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Test my new module
  hosts: localhost
  tasks:
  - name: Run the new module
    epoch_converter:
      epoch_timestamp: '1657382362'
      state_changed: yes
    register: show_output
  - name: Show Output
    debug:
      msg: '{{ show_output }}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a last step, let’s execute the playbook to test our custom module. Since we opted to set the &lt;code&gt;state_changed&lt;/code&gt; argument, we expect the task state to appear as &lt;code&gt;changed&lt;/code&gt; and to be displayed in yellow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6l4lvnz9onpqymhe6gp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6l4lvnz9onpqymhe6gp.png" alt=" " width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you wish to contribute to an existing Ansible collection or create and publish a new one with your custom modules, look at &lt;a href="https://docs.ansible.com/ansible/latest/dev_guide/developing_collections_distributing.html#publishing-your-collection" rel="noopener noreferrer"&gt;Distributing collections&lt;/a&gt; and &lt;a href="https://docs.ansible.com/ansible/latest/community/index.html#ansible-community-guide" rel="noopener noreferrer"&gt;Ansible Community Guide&lt;/a&gt;, where you can find information on how to configure and distribute Ansible content.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ansible Modules Best Practices
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use specialized modules over shell or command:&lt;/strong&gt; Although it might be tempting to use the shell or command module often, it’s considered a best practice to leverage more specific modules for each job. Specialized modules are typically recommended because they implement the concepts of desired state and idempotency, have been tested, and fulfill basic standards, like error handling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Specify arguments when it makes sense:&lt;/strong&gt; Some module arguments have default values that can be omitted. To be more transparent and explicit, we can opt to specify some of these arguments like the state in our playbook definitions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prefer a module’s list support over loops:&lt;/strong&gt; The most efficient way to define a batch of similar operations, like installing packages, is to pass a list to a single task, which many modules accept natively.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Install Docker dependencies
  ansible.builtin.apt:
     name:
       - curl
       - ca-certificates
       - gnupg2
       - lsb-release
     state: latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above method should be preferred over using a loop or defining multiple separate tasks with the same module.&lt;/p&gt;
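&lt;p&gt;For comparison, the loop-based version that the single list-accepting task replaces would look like this:&lt;br&gt;
&lt;/p&gt;

```yaml
# Less efficient: invokes the apt module once per package
- name: Install Docker dependencies one by one
  ansible.builtin.apt:
    name: "{{ item }}"
    state: latest
  loop:
    - curl
    - ca-certificates
    - gnupg2
    - lsb-release
```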

&lt;p&gt;&lt;strong&gt;Custom modules should be simple and tackle a specific job:&lt;/strong&gt; If you decide to build your own module, focus on solving a particular problem. Each module should have a concise functionality, be as simple as possible, and perform one thing well. If what you try to achieve goes beyond the scope of a single module, consider developing a new collection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Custom modules should have predictable parameters:&lt;/strong&gt; Try to enable others to use your module by defining a transparent and predictable user interface. The arguments should be well-scoped and understandable, and their structures should be as simple as possible. Follow the typical convention of parameter names in lowercase and use underscores as the word separator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Document and test your custom modules:&lt;/strong&gt; Every custom module should include examples, explicitly document dependencies, and describe return values. New modules should be tested thoroughly before release. You can create roles and playbooks to exercise your custom modules against different test cases. &lt;/p&gt;

&lt;h2&gt;
  
  
  Key Points
&lt;/h2&gt;

&lt;p&gt;We took a deep dive into Ansible modules and examined their use and functionality in detail. We discussed best practices and showed practical examples of the most commonly used modules. Lastly, we went through a complete example of developing a custom module. &lt;/p&gt;

&lt;p&gt;Thank you for reading, and I hope you enjoyed this article as much as I did.&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>automation</category>
      <category>cloud</category>
    </item>
    <item>
      <title>9 DevOps Best Practices – What You Should Do and NOT Do</title>
      <dc:creator>Ioannis Moustakis</dc:creator>
      <pubDate>Sun, 26 Jun 2022 14:58:07 +0000</pubDate>
      <link>https://forem.com/spacelift/9-devops-best-practices-what-you-should-do-and-not-do-15md</link>
      <guid>https://forem.com/spacelift/9-devops-best-practices-what-you-should-do-and-not-do-15md</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F472y7lzlnly2tuehs2xv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F472y7lzlnly2tuehs2xv.png" alt=" " width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Utilizing DevOps practices to maximize speed and value creation has been a hot topic in the software industry for the past decade. We have embraced these practices and changed how we work and think about development, operations, project management, code quality, observability, and continuous feedback. &lt;/p&gt;

&lt;p&gt;As organizations started applying these practices, we noticed many anti-patterns emerging. In this article, we will see some DevOps best practices and ways to improve our workflows, but also we will explore some of the typical DevOps anti-patterns and how to avoid them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is DevOps, and Why Is It Important?
&lt;/h2&gt;

&lt;p&gt;You have probably seen hundreds of different definitions of DevOps by now. For me, DevOps is a set of best practices around the software development lifecycle and the effort to continually improve and deliver value more efficiently.  &lt;/p&gt;

&lt;p&gt;At its core, DevOps is a culture shared equally by developers and operations rather than a specific role. In reality, the term has been used as an umbrella term for the engineering roles of cloud-savvy people who share the pains and responsibilities of devs and ops while striving to enable and promote DevOps practices within organizations. &lt;/p&gt;

&lt;p&gt;So why all this fuss? Implementing these practices has proven to improve software quality. Different software and operations teams collaborate more efficiently, reduce friction and lead time, integrate and test their code continuously, and deploy more often.&lt;/p&gt;

&lt;p&gt;It’s all about finding the inefficiencies in our workflows and building a culture of continuous communication and trust. Other aspects of it are handling failures and unplanned work, leveraging automation, and focusing significantly on observability to get meaningful feedback. &lt;/p&gt;

&lt;h2&gt;
  
  
  DevOps Best Practices to Follow
&lt;/h2&gt;

&lt;p&gt;Now that we’ve set the foundation, let’s explore some DevOps best practices without further ado. The list isn’t meant to be exhaustive but rather a guide with hints and pointers to ease your journey towards adopting a healthy DevOps culture. &lt;/p&gt;

&lt;h3&gt;
  
  
  1. Foster a Culture of Collaboration and Blameless Communication
&lt;/h3&gt;

&lt;p&gt;First and foremost, for this journey to be successful, we must focus intensely on growing a culture that allows people to collaborate freely and removes the fear of failure. Organizations and teams that promote values like trust and empathy tend to have a head start in adopting DevOps practices. Break down the silos between teams and make them work together towards a common goal: bringing value to the company. &lt;/p&gt;

&lt;p&gt;One of the tools that deliver an &lt;a href="https://spacelift.io/" rel="noopener noreferrer"&gt;enhanced collaboration layer for IaC&lt;/a&gt; is Spacelift. At Spacelift, you can invite security and compliance teams to collaborate on and approve workflows and policies.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Adopt Continuous Integration and Delivery (CI/CD)
&lt;/h3&gt;

&lt;p&gt;Integrating small batches of code frequently into a central code repository is a practice that allows developers to collaborate efficiently. With this approach, the repository is always kept in a good state since we introduce small changes that are easier to handle. Continuous Integration (CI) enables early error detection and improves code quality since these small batches of changes are validated each time with automated builds and tests. &lt;/p&gt;

&lt;p&gt;The next step after integrating our code is deploying it to our environment. Continuous Delivery (CD) is the practice of getting the code into a deployable state continuously for every small batch of change. This simplifies our deployments and provides our developers with an easy and automated method to push code to production.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Set up Automated Testing
&lt;/h3&gt;

&lt;p&gt;A continuation of the previous point and an integral part of DevOps success is setting up and curating meaningful automated tests as part of our CI/CD pipelines. This way, we don’t rely on humans to run manual tests on our code; instead, we set up automated tests that run on every minor change introduced. &lt;/p&gt;

&lt;p&gt;By increasing the testing frequency and the number of tests, we reduce our chances of introducing bugs to production systems. The tests vary depending on the use case but typically could include unit testing, integration testing, end-to-end testing, load testing, smoke testing, etc. &lt;/p&gt;

&lt;h3&gt;
  
  
  4. Focus on Observability and Find the Right Metrics
&lt;/h3&gt;

&lt;p&gt;DevOps practices are based on getting feedback and continuously improving our processes. We need to find and track the right metrics to achieve that and measure our results. Figuring out the right metrics is an arduous journey that each organization has to go through. &lt;/p&gt;

&lt;p&gt;These metrics will differ from organization to organization and from team to team, depending on the goals and key results targeted. Still, it’s a crucial exercise for achieving success. Some typical examples of DevOps metrics are deployment time, frequency of deploys, deployment failure rate, availability of critical services, mean time to detect, mean time to restore, cost per unit, code coverage, and change lead time. &lt;/p&gt;

&lt;p&gt;Moving one step forward, we also have to focus on the observability of our apps and software running in production. We have to define a strategy for effectively storing, managing, and distributing logs, traces, and metrics of our applications to quickly solve issues, improve the understandability of our systems and enable our teams to operate efficiently. &lt;/p&gt;

&lt;h3&gt;
  
  
  5. Avoid Manual Work with Automation
&lt;/h3&gt;

&lt;p&gt;By reducing manual work and automating recurring tasks, we accelerate our processes and provide increased consistency to our results. Automation can allow us to focus on what is essential and avoid human intervention and time spent on chores. It also provides more confidence in our systems and processes, removes human errors and miscommunication, and speeds up the performance of teams. If you need an automation layer for your cloud resources, take a look at &lt;a href="https://spacelift.io/" rel="noopener noreferrer"&gt;Spacelift’s self-service infrastructure&lt;/a&gt; (automated workflow management feature especially).&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Incorporate Security Early in the Development Lifecycle
&lt;/h3&gt;

&lt;p&gt;Security shouldn’t be one of the last things to integrate into software development. The birth of DevSecOps emphasizes thinking about application and infrastructure security early in the development lifecycle, incorporating security into the initial design and integrating it into the CI/CD pipelines. &lt;/p&gt;

&lt;p&gt;Security should be a responsibility shared among different teams and through the entire application lifecycle and be considered an integral part of the process, not an optional add-on. Recently, a strong focus has been given to securing the software supply chain due to increased malicious attacks over the last years.&lt;/p&gt;

&lt;p&gt;In the world of infrastructure, even the tiniest of mistakes can cause major outages. That’s why Spacelift adds &lt;a href="https://docs.spacelift.io/concepts/policy" rel="noopener noreferrer"&gt;an extra layer of policy that allows you to control&lt;/a&gt; – separately from your infrastructure project – what code can be executed, what changes can be made, when and by whom. This isn’t only useful to protect yourself from the baddies but allows you to implement an automated code review pipeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Learn from Incidents and Build Processes around Them
&lt;/h3&gt;

&lt;p&gt;Incidents are inevitable in the IT world. It doesn’t matter how well prepared your team is; eventually, there will be an incident that you will have to address. In these cases, it’s essential to focus on blameless communication, understand the issue, communicate effectively with the affected parties, and collaborate to find a solution. &lt;/p&gt;

&lt;p&gt;Equally crucial to fixing the issue is to have a process to log incidents and learn from them. After the incident has been tackled, spend some time with your team to craft a post-incident review, and discuss how the incident was handled. Try to find any possible improvements in the incident handling process that can help you next time. &lt;/p&gt;

&lt;h3&gt;
  
  
  8. Focus first on Concepts, then Find the Right Tools
&lt;/h3&gt;

&lt;p&gt;The DevOps landscape is moving extremely fast, with new tools and services emerging daily. Instead of continuously integrating new shiny tools and services, concentrate on understanding the core concepts that allow companies to accelerate their business with DevOps practices. &lt;/p&gt;

&lt;p&gt;Only once you understand the concepts and prioritize the missing pieces accordingly will you be able to select the right tools for the job. Remember, you won’t be able to build everything within your team. It’s OK to rely on &lt;a href="https://spacelift.io/blog/infrastructure-as-code#infrastructure-as-code-tooling" rel="noopener noreferrer"&gt;DevOps tooling&lt;/a&gt; and managed services when it makes sense. Be smart about how you use your team’s time: understand your in-house expertise and needs, put effort into custom tooling where it pays off, and rely on external tooling and services for the rest. &lt;/p&gt;

&lt;h3&gt;
  
  
  9. Embrace Infrastructure as Code (IaC) and Push for a Self-Service Infra Model
&lt;/h3&gt;

&lt;p&gt;Cloud infrastructure should be considered an integral part of software development and treated equally to application code. By leveraging &lt;a href="https://spacelift.io/blog/infrastructure-as-code" rel="noopener noreferrer"&gt;infrastructure as code&lt;/a&gt;, we can incorporate the best practices we use for software development, such as version control and CI/CD, into infrastructure creation. This model removes the need to manually set up and configure resources via UIs and further strengthens our automation efforts across the IT landscape. Changes are always auditable and transparent, and we can quickly roll back infrastructure systems to a previous state when there are issues. &lt;/p&gt;

&lt;p&gt;Thinking one step ahead, instead of adding another bottleneck of waiting for cloud infrastructure engineers to create the necessary resources, push for a self-service infrastructure model. In this model, the developers and anyone who needs infra resources can leverage some tooling to generate the required pieces. This way, we increase productivity and speed while giving autonomy to our developers, all via a single workflow. &lt;/p&gt;

&lt;p&gt;Check out &lt;a href="https://spacelift.io/blog/how-specialized-solution-can-improve-your-iac" rel="noopener noreferrer"&gt;how Spacelift can improve your IaC&lt;/a&gt;, assist your team in adopting a collaborative infrastructure model and achieving operational excellence. &lt;/p&gt;

&lt;h2&gt;
  
  
  DevOps Anti-Patterns
&lt;/h2&gt;

&lt;p&gt;Along with the rise of DevOps, we have also seen many anti-patterns emerge. During the quest to adopt DevOps practices, people have misinterpreted their scope and made mistakes that lead to common anti-patterns. Let’s look at some common challenges, traps, and misconceptions companies face while implementing DevOps principles.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Don’t Create a Separate DevOps Team
&lt;/h3&gt;

&lt;p&gt;A common mistake companies make when adopting DevOps practices is creating a separate team to handle the DevOps transformation. Unfortunately, this adds one more silo to the process and breaks the central promise of DevOps: increased collaboration and shared ownership between the existing teams.&lt;/p&gt;

&lt;p&gt;Similarly, we see operations teams rebranding into DevOps teams without actual changes in organizations’ culture, communication, and collaboration. DevOps is about bringing the different groups closer, not creating a new one. &lt;/p&gt;

&lt;h3&gt;
  
  
  2. Avoid Having the DevOps Hero
&lt;/h3&gt;

&lt;p&gt;At times, specific team members are more involved in the DevOps practices than others. That could be due to accumulated knowledge, a higher level of experience, or increased effort by a person. When this pattern emerges, it could lead quickly to the DevOps hero anti-pattern where a specific team member becomes indispensable to the team. &lt;/p&gt;

&lt;p&gt;This situation is highly problematic since the team’s performance and velocity depend on a single person. At the same time, this person may face an excessive workload that eventually leads to burnout and potentially to leaving the company. To avoid this anti-pattern, ensure knowledge is spread across teams and team members. Divide the work equally, and rely not on heroes but on teamwork and solid processes to achieve results.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Don’t Attempt to Automate and Change Everything at Once
&lt;/h3&gt;

&lt;p&gt;Starting from scratch to apply DevOps practices in an organization could be daunting at first. As with most things, attempting to tackle everything at once is not the way to go. First, analyze the current situation and processes within your company. People usually don’t happily accept many changes, so you need to think strategically. Prioritize the tasks accordingly, find quick wins, automate the stuff that will have a higher impact, and focus on one thing at a time.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Avoid Chasing New Tools
&lt;/h3&gt;

&lt;p&gt;As new services and tools pop up almost every day, adopting and using these new shiny toys is always tempting. It’s common for engineers to fall into this trap of introducing a new tool just because it’s trending without proper analysis of whether it’s needed or the best choice. &lt;/p&gt;

&lt;p&gt;Picking the right tools for the job is critical but is a process that should be reviewed meticulously. For every new service or tool we add, we should also consider its maintainability and the operational overhead, dependencies, complexity, and new cognitive load that we introduce in the process.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Don’t Sacrifice Quality for Speed
&lt;/h3&gt;

&lt;p&gt;Since one of the main factors of DevOps success is velocity, many teams try to speed up their processes at the cost of quality and, usually, security. Many typical DevOps metrics are based on how fast we deliver, deploy, and provide value, but they are not enough by themselves, as they only tell half the story. Due to this disproportionate focus on speed, it’s easy to lose sight of what is important: delivering quality software. Treat speed and quality equally, add meaningful automated tests, and avoid cutting corners just to ship faster.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Don’t Give up on Continuous Improvement
&lt;/h3&gt;

&lt;p&gt;Applying effective DevOps practices is a dynamic process that should be curated continuously. It might be tempting to rest and relax after implementing all the DevOps best practices in the roadmap, but unfortunately, this process never stops. &lt;/p&gt;

&lt;p&gt;Every step of the way, we should focus on reviewing our workflows and continuously improving our systems, processes, and products. We have to set up flows of constant feedback that allow us to review and reflect on our choices and ultimately improve. New paradigms, best practices, and improved models always appear, and we should be restless if we want our teams to survive, perform, and succeed. &lt;/p&gt;

&lt;h3&gt;
  
  
  7. Don’t Neglect Documentation and Information Sharing
&lt;/h3&gt;

&lt;p&gt;By definition, successful adoption of DevOps practices relies on sharing information efficiently within an organization and creating a workplace where collaboration thrives organically. Unfortunately, neglecting documentation and efficient information sharing is an anti-pattern that occurs too often in software teams. Documentation, when done right, could be a handy tool for developers. &lt;/p&gt;

&lt;p&gt;Try to integrate documentation tasks into team backlogs and treat docs as first-class citizens within your organization. Docs aren’t static: they should be kept up to date, created consistently, and accessible to anyone who needs them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Points
&lt;/h2&gt;

&lt;p&gt;We have explored different DevOps best practices and paradigms and analyzed how we can incorporate them to accelerate team performance and value creation. We also saw some hidden traps and anti-patterns to be aware of and avoid while pursuing DevOps excellence.&lt;/p&gt;

&lt;p&gt;Thank you for reading, and I hope you enjoyed this article as much as I did.&lt;/p&gt;

&lt;p&gt;Originally published at &lt;a href="//spacelift.io"&gt;spacelift.io&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS IAM Policies : Best Practices &amp; How to Create an IAM Policy</title>
      <dc:creator>Ioannis Moustakis</dc:creator>
      <pubDate>Sat, 18 Jun 2022 13:43:50 +0000</pubDate>
      <link>https://forem.com/spacelift/aws-iam-policies-best-practices-how-to-create-an-iam-policy-3job</link>
      <guid>https://forem.com/spacelift/aws-iam-policies-best-practices-how-to-create-an-iam-policy-3job</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zl6sj1l8c8adbm32dd2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zl6sj1l8c8adbm32dd2.png" alt=" " width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Securely accessing cloud resources is one of the most critical requirements for IT teams. To achieve this, cloud administrators leverage web services that assist with managing access to cloud environments. &lt;/p&gt;

&lt;p&gt;In this article, we will explore the AWS IAM service, and more specifically, we will look into IAM policies, learn how they are structured, how to create them, and some best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS IAM?
&lt;/h2&gt;

&lt;p&gt;AWS Identity and Access Management (IAM) is a web service that assists with managing and controlling access and permissions to resources and other AWS services. Leveraging this service, we can set up the building blocks to control authentication and authorization for our AWS accounts. &lt;/p&gt;

&lt;p&gt;Before we continue talking about IAM, let’s define some terminology for its essential components that we will reference throughout the article. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Authentication&lt;/strong&gt;: The process of validating a user’s identity. Basically, verifying who someone claims to be. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authorization&lt;/strong&gt;: Defining the level of permissions and access rights for a given user. Authorization happens after authentication and establishes what someone is allowed to do.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Principals&lt;/strong&gt;: Persons or applications that try to perform actions on AWS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Users&lt;/strong&gt;: Identities in IAM that correspond to users in an organization. They represent actual persons or applications and service accounts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Groups&lt;/strong&gt;: IAM users are grouped logically with IAM groups. This way, we can assign permissions to multiple users in a bulk fashion. A common pattern is to create groups according to different roles in an organization.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Roles&lt;/strong&gt;: Roles provide temporary access to resources and aren’t associated with specific users. Instead, roles enable principals to temporarily assume a set of permissions to complete an operation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Policies&lt;/strong&gt;: To manage access on AWS, we create IAM policies that define levels of permissions and attach them to IAM identities (users, groups, roles) or AWS resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Requests&lt;/strong&gt;: Principals attempt to perform actions on AWS by instantiating requests to AWS via the AWS API, Management Console, or CLI. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Actions&lt;/strong&gt;: Every request has an action definition that declares the specific operation requested. If the authentication and authorization checks pass, the action is approved.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resources&lt;/strong&gt;: A resource in this context can be considered any AWS object within a service.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Policy Types
&lt;/h2&gt;

&lt;p&gt;IAM policies are one of the most basic building blocks of access management in AWS, since they define the permissions of an identity or a resource. These policies are evaluated for every request, and based on their definitions, the request is allowed or denied. Let’s look at the different types of policies that exist in AWS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Identity-based policies&lt;/strong&gt; are policies attached to identities (users, groups, roles) that provide them with the required permissions. Identity-based policies can be either managed policies or inline policies. Managed policies are either prepared and maintained by AWS for common use cases (&lt;strong&gt;AWS managed policies&lt;/strong&gt;) or custom policies created by users (&lt;strong&gt;Customer managed policies&lt;/strong&gt;), which are suitable for achieving fine-grained control. &lt;strong&gt;Inline policies&lt;/strong&gt; are used when we need to make a policy part of a principal’s entity and maintain a strict one-to-one relationship between them. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource-based policies&lt;/strong&gt; are attached directly to resources and specify permissions for specific actions on the resource by some principals. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM permissions boundaries&lt;/strong&gt; define the maximum permissions for an IAM entity and are used as safeguards. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access control lists (ACLs)&lt;/strong&gt; are attached to resources and control cross-account permissions for principals from other accounts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Organizations Service Control Policies (SCPs)&lt;/strong&gt; specify the maximum level of permissions for an organization’s accounts. These policies are used to limit the permissions that can be assigned within member accounts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session policies&lt;/strong&gt; are advanced policies used during temporary sessions for roles or federated users.&lt;/li&gt;
&lt;/ul&gt;
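To make the evaluation order mentioned above concrete, here is a minimal, illustrative Python sketch of the core rule AWS applies when combining statements: an explicit deny always wins, an explicit allow is required to grant access, and everything else is implicitly denied. The function name and the simplified statement handling are our own; real IAM evaluation takes many more inputs into account (conditions, permissions boundaries, SCPs, session policies).

```python
from fnmatch import fnmatch


def evaluate(statements, action, resource):
    """Simplified IAM evaluation: explicit Deny wins, then explicit
    Allow; anything unmatched is implicitly denied."""
    decision = "ImplicitDeny"
    for stmt in statements:
        actions = stmt["Action"]
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt["Resource"]
        resources = [resources] if isinstance(resources, str) else resources
        # IAM actions and ARNs support '*' wildcards, which fnmatch mimics.
        if any(fnmatch(action, a) for a in actions) and \
           any(fnmatch(resource, r) for r in resources):
            if stmt["Effect"] == "Deny":
                return "Deny"  # an explicit deny overrides any allow
            decision = "Allow"
    return decision


statements = [
    {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    {"Effect": "Deny", "Action": "s3:DeleteBucket", "Resource": "*"},
]
print(evaluate(statements, "s3:ListBucket", "arn:aws:s3:::demo"))    # Allow
print(evaluate(statements, "s3:DeleteBucket", "arn:aws:s3:::demo"))  # Deny
print(evaluate(statements, "ec2:RunInstances", "*"))                 # ImplicitDeny
```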

&lt;h2&gt;
  
  
  Policy Document Structure
&lt;/h2&gt;

&lt;p&gt;Now that we have seen the different types of IAM policies that exist in the context of AWS, let’s see how they are structured. Most of them are stored in JSON format and are attached to resources or identities. The only policy type that uses a different format is the ACL, but we won’t focus on ACLs in this article. &lt;/p&gt;

&lt;p&gt;Each JSON policy document might have optional informational elements at the top of the document and must have one or more statements. A statement contains all the necessary information about a permission that we would like to declare. &lt;/p&gt;

&lt;p&gt;Here’s a simple identity-based policy that allows the principal who has it attached to list the objects of a single S3 bucket named &lt;code&gt;this-is-an-example-s3-bucket-name&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ListObjectsInBucket"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s3:ListBucket"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:s3:::this-is-an-example-s3-bucket-name"&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;IAM policies are built using a combination of the elements below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Version&lt;/strong&gt;: Defines the version of the policy language. Always use the latest version.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Statement&lt;/strong&gt;: This element serves as the parent for the different statements in the policy. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sid&lt;/strong&gt;: This is an optional element that allows us to define a statement ID.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Effect&lt;/strong&gt;: This element can have the values &lt;code&gt;Allow&lt;/code&gt; or &lt;code&gt;Deny&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Action&lt;/strong&gt;: The list of actions related to the policy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource&lt;/strong&gt;: Defines the list of resources to which the policy is applied. For resource-based policies, this is optional since the policy applies to the resource that has it attached. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Principal&lt;/strong&gt;: Defines the identities that are allowed or denied access; used in resource-based policies. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Condition&lt;/strong&gt;: Defines some conditions under which the policy applies. This element is practical when we need to achieve custom rules for fine-grained access. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Resources, principals, and actions also have negated counterparts: &lt;em&gt;NotResource&lt;/em&gt;, &lt;em&gt;NotPrincipal&lt;/em&gt;, and &lt;em&gt;NotAction&lt;/em&gt; match everything except the elements listed. Note that these are mutually exclusive with their contrasting elements. For example, you can’t use both the Resource and NotResource elements in the same statement.&lt;/p&gt;
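As a quick illustration of these structural rules, the sketch below checks a statement against the mutually exclusive element pairs and the allowed Effect values. The `validate_statement` helper is hypothetical, not part of any AWS SDK; AWS performs equivalent validation server-side when a policy is created.

```python
def validate_statement(stmt):
    """Reject statements that mix an element with its negated
    counterpart, or that carry an invalid Effect."""
    exclusive_pairs = [
        ("Action", "NotAction"),
        ("Resource", "NotResource"),
        ("Principal", "NotPrincipal"),
    ]
    for elem, negated in exclusive_pairs:
        if elem in stmt and negated in stmt:
            raise ValueError(
                f"'{elem}' and '{negated}' cannot appear in the same statement"
            )
    if stmt.get("Effect") not in ("Allow", "Deny"):
        raise ValueError("Effect must be 'Allow' or 'Deny'")
    return True


# A well-formed statement passes:
validate_statement({"Effect": "Allow", "Action": "s3:ListBucket", "Resource": "*"})

# Mixing Action with NotAction raises ValueError:
try:
    validate_statement(
        {"Effect": "Deny", "Action": "s3:*", "NotAction": "iam:*", "Resource": "*"}
    )
except ValueError as err:
    print(err)
```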

&lt;p&gt;If you want to learn more about these elements and the JSON policy components, check out the &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html" rel="noopener noreferrer"&gt;IAM JSON policy reference&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Next, we will see an example of a more complex IAM policy with more than one statement using conditions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DenyAllOutsideRequestedRegions"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Deny"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="nl"&gt;"NotAction"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
               &lt;/span&gt;&lt;span class="s2"&gt;"cloudfront:*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
               &lt;/span&gt;&lt;span class="s2"&gt;"iam:*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
               &lt;/span&gt;&lt;span class="s2"&gt;"route53:*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
               &lt;/span&gt;&lt;span class="s2"&gt;"support:*"&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="nl"&gt;"Condition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
               &lt;/span&gt;&lt;span class="nl"&gt;"StringNotEquals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                   &lt;/span&gt;&lt;span class="nl"&gt;"aws:RequestedRegion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                       &lt;/span&gt;&lt;span class="s2"&gt;"eu-west-1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                       &lt;/span&gt;&lt;span class="s2"&gt;"eu-central-1"&lt;/span&gt;&lt;span class="w"&gt;
                   &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
               &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DenyAccessFromNonCorporateIPs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Deny"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="nl"&gt;"Condition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
               &lt;/span&gt;&lt;span class="nl"&gt;"NotIpAddress"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                   &lt;/span&gt;&lt;span class="nl"&gt;"aws:SourceIp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                       &lt;/span&gt;&lt;span class="s2"&gt;"192.0.1.0/24"&lt;/span&gt;&lt;span class="w"&gt;
                   &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
               &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
               &lt;/span&gt;&lt;span class="nl"&gt;"Bool"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"aws:ViaAWSService"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"false"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the policy above, we define two separate statements. The first defines some rules to deny access based on the requested AWS region. More specifically, it denies all actions for regions not defined in the condition, except for the actions mentioned in the &lt;em&gt;NotAction&lt;/em&gt; element.&lt;/p&gt;

&lt;p&gt;The second statement defines a policy to deny access to AWS based on the source IP of the request. Since there are two conditions here, they are combined with a logical &lt;code&gt;AND&lt;/code&gt;. More specifically, the policy denies all AWS actions when a request originates from an IP outside the corporate IP range (192.0.1.0/24 in this example) and the call isn’t being made by an AWS service. &lt;/p&gt;

&lt;p&gt;Note that neither statement actually allows any actions; both are used purely to restrict access to AWS resources. &lt;/p&gt;
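The logical AND between the two condition operators can be sketched with Python's standard ipaddress module. The `deny_matches` helper is illustrative only: it mirrors the NotIpAddress and Bool operators of the second statement, under the assumption that the deny applies only when both operators match.

```python
import ipaddress

CORPORATE_RANGE = ipaddress.ip_network("192.0.1.0/24")  # range from the example policy


def deny_matches(source_ip, via_aws_service):
    """Return True when the Deny statement's conditions both match:
    the source IP is outside the corporate range AND the request is
    not being made by an AWS service on the caller's behalf."""
    not_ip = ipaddress.ip_address(source_ip) not in CORPORATE_RANGE
    not_via_service = via_aws_service is False
    return not_ip and not_via_service


print(deny_matches("203.0.113.10", False))  # True  -> request denied
print(deny_matches("192.0.1.25", False))    # False -> corporate IP, not denied
print(deny_matches("203.0.113.10", True))   # False -> AWS service call, not denied
```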

&lt;p&gt;Finally, here’s a resource-based policy for an S3 bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DenyS3AccessWithNoMFA"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Deny"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s3:*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:s3:::this-is-an-example-s3-bucket-name/example-directory/*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"Condition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"Null"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"aws:MultiFactorAuthAge"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By associating this policy with the bucket &lt;code&gt;this-is-an-example-s3-bucket-name&lt;/code&gt;, every action on objects under &lt;code&gt;example-directory/&lt;/code&gt; is denied if the request isn’t authenticated with a multi-factor authentication mechanism. Note the use of the &lt;em&gt;Principal&lt;/em&gt; element here, since this is a resource-based policy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating an IAM Policy
&lt;/h2&gt;

&lt;p&gt;There are three main ways to create an IAM policy: the AWS Console, the AWS CLI, or the AWS API. For this demo, we will go through the exercise of creating an IAM customer-managed policy via the AWS Console. &lt;/p&gt;
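For reference, the CLI route looks roughly like the sketch below, which builds the first example policy as JSON and assembles an `aws iam create-policy` command. The policy name is arbitrary, and actually running the command requires configured AWS credentials; here we only construct and print it.

```python
import json
import shlex

# The ListObjectsInBucket policy from the first example.
policy = {
    "Version": "2012-10-17",
    "Statement": {
        "Sid": "ListObjectsInBucket",
        "Effect": "Allow",
        "Action": "s3:ListBucket",
        "Resource": "arn:aws:s3:::this-is-an-example-s3-bucket-name",
    },
}

# Illustrative CLI equivalent (policy name is our own choice):
cmd = (
    "aws iam create-policy --policy-name ListObjectsInBucket "
    "--policy-document " + shlex.quote(json.dumps(policy))
)
print(cmd)
```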

&lt;p&gt;The easiest way to create a policy is the visual editor the AWS Console provides, which spares you from writing a JSON file and worrying about proper syntax. After signing in to the AWS Management Console, head to &lt;a href="https://console.aws.amazon.com/iam/" rel="noopener noreferrer"&gt;IAM&lt;/a&gt; and select &lt;em&gt;Policies&lt;/em&gt;, then &lt;em&gt;Create Policy&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;From this screen, you can choose to either use the Visual editor or JSON.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4afdv6l194unahvml6wb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4afdv6l194unahvml6wb.png" alt=" " width="800" height="595"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s replicate our first example policy from above that allows listing the objects in an S3 bucket. For &lt;em&gt;Service&lt;/em&gt; select &lt;em&gt;S3&lt;/em&gt;, for &lt;em&gt;Actions&lt;/em&gt; choose &lt;em&gt;ListBucket&lt;/em&gt;, and for &lt;em&gt;Resources&lt;/em&gt; use the ARN of the S3 bucket, &lt;em&gt;arn:aws:s3:::this-is-an-example-s3-bucket-name&lt;/em&gt;. Then, select &lt;em&gt;Next: Tags&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhudgh2zb9vnq5ecgvm0y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhudgh2zb9vnq5ecgvm0y.png" alt=" " width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that by default, the policy allows the actions chosen. To deny actions, you have to select &lt;em&gt;Switch to deny permissions&lt;/em&gt; before selecting the actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7eycx6kutwskci55iiu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7eycx6kutwskci55iiu.png" alt=" " width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can optionally add some tags to your customer-managed identity-based policy on the next page. For example, we could add tags for name, environment, department, etc. Then, select &lt;em&gt;Next: Review&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0k0rjl2o4wxfsjknj0c6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0k0rjl2o4wxfsjknj0c6.png" alt=" " width="800" height="594"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, add a Name to your policy and optionally a description for its purpose and select &lt;em&gt;Create policy&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfgjbfp27zsmukl0wsff.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfgjbfp27zsmukl0wsff.png" alt=" " width="800" height="586"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nice, we have replicated the IAM policy we introduced in the first example.&lt;/p&gt;

&lt;p&gt;If you feel comfortable with JSON syntax, you can instead use the &lt;em&gt;JSON&lt;/em&gt; tab to define an IAM Policy like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrldowscbmtceu4mmiqm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrldowscbmtceu4mmiqm.png" alt=" " width="800" height="596"&gt;&lt;/a&gt;&lt;/p&gt;
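For reference, the JSON policy shown in the screenshot above is the same document we assembled with the visual editor:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::this-is-an-example-s3-bucket-name"
    }
  ]
}
```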

&lt;p&gt;Another option, which uses the AWS API under the hood, is to create your IAM resources with the Infrastructure as Code tool of your choice. Here’s an example that uses Terraform to define the same policy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_policy"&lt;/span&gt; &lt;span class="s2"&gt;"list_bucket_policy_example"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

 &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"list_bucket_policy_example"&lt;/span&gt;
 &lt;span class="nx"&gt;path&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/"&lt;/span&gt;
 &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS IAM Policy example for listing the objects of a bucket"&lt;/span&gt;
 &lt;span class="nx"&gt;policy&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
{
 "Version": "2012-10-17",
 "Statement": [
   {
     "Action": "s3:ListBucket",
     "Resource": "arn:aws:s3:::this-is-an-example-s3-bucket-name",
     "Effect": "Allow",
     "Sid": "ListObjectsInBucket"
   }
 ]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
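The same policy document can also be assembled programmatically before handing it to the AWS API. Below is a minimal Python sketch; the boto3 call is left commented out because it requires valid AWS credentials, and the policy name and bucket name are the placeholders from the example:

```python
import json


def list_bucket_policy(bucket_name):
    """Build the same ListBucket policy document as a Python dict."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListObjectsInBucket",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket_name}",
            }
        ],
    }


policy_json = json.dumps(list_bucket_policy("this-is-an-example-s3-bucket-name"), indent=2)
print(policy_json)

# Creating the policy through the AWS API would then be a single call
# (requires the boto3 package and valid credentials, so it is commented out):
# import boto3
# boto3.client("iam").create_policy(
#     PolicyName="list_bucket_policy_example",
#     PolicyDocument=policy_json,
# )
```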



&lt;h2&gt;
  
  
  Validating IAM Policies
&lt;/h2&gt;

&lt;p&gt;When creating or editing IAM policies using the AWS Management Console, their syntax and grammar are inspected to verify they comply with the &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_grammar.html" rel="noopener noreferrer"&gt;IAM policy grammar&lt;/a&gt;. If there is any issue or error, you get notified accordingly and have to fix the policy. &lt;/p&gt;

&lt;p&gt;AWS provides &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html" rel="noopener noreferrer"&gt;IAM Access Analyzer&lt;/a&gt; with additional policy checks and recommendations to improve policies. When creating or editing a policy using the JSON tab, a policy validation pane below the policy provides different findings for the policy. There you get information for issues categorized as &lt;em&gt;Security&lt;/em&gt;, &lt;em&gt;Errors&lt;/em&gt;, &lt;em&gt;Warnings&lt;/em&gt;, and &lt;em&gt;Suggestions&lt;/em&gt;. Update your policy accordingly to resolve the findings.&lt;/p&gt;

&lt;p&gt;Here’s an example of a malformed policy that triggers three separate findings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa76j4forkj6ygjnpx5wo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa76j4forkj6ygjnpx5wo.png" alt=" " width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We get an &lt;em&gt;Error&lt;/em&gt; finding because we don’t specify a correct ARN field for the &lt;em&gt;Resource&lt;/em&gt; element.&lt;/p&gt;

&lt;p&gt;Next, on the &lt;em&gt;Warnings&lt;/em&gt; tab, we get notified that we omitted the top-level &lt;em&gt;Version&lt;/em&gt; element.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9lnnibl7anudqe3rrm00.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9lnnibl7anudqe3rrm00.png" alt=" " width="800" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, on the &lt;em&gt;Suggestions&lt;/em&gt; tab, we get notified that our &lt;em&gt;Action&lt;/em&gt; element includes no actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F334e54tesfr8fjwlko9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F334e54tesfr8fjwlko9w.png" alt=" " width="800" height="87"&gt;&lt;/a&gt;&lt;/p&gt;
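The three findings above can also be caught before a policy ever reaches the console. The following is only a toy pre-check for illustration; it is not a substitute for the validation IAM Access Analyzer performs:

```python
def precheck_policy(policy):
    """Toy pre-check mirroring the three console findings; not Access Analyzer."""
    findings = []
    # Warning: policies should declare a top-level Version element.
    if "Version" not in policy:
        findings.append("Warning: missing top-level Version element")
    for statement in policy.get("Statement", []):
        # Suggestion: an empty Action element grants nothing.
        if not statement.get("Action"):
            findings.append("Suggestion: the Action element includes no actions")
        # Error: Resource entries must be valid ARNs (or the wildcard "*").
        resources = statement.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        for resource in resources:
            if resource != "*" and not resource.startswith("arn:"):
                findings.append(f"Error: '{resource}' is not a valid ARN")
    return findings


# A malformed policy similar to the screenshot: no Version, empty Action, bad ARN.
malformed = {"Statement": [{"Effect": "Allow", "Action": [], "Resource": "example-bucket"}]}
for finding in precheck_policy(malformed):
    print(finding)
```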

&lt;h2&gt;
  
  
  Testing IAM Policies
&lt;/h2&gt;

&lt;p&gt;Another handy tool for AWS administrators and users who manage IAM policies is the &lt;a href="https://policysim.aws.amazon.com/home/index.jsp?#" rel="noopener noreferrer"&gt;IAM Policy Simulator&lt;/a&gt;. Using this simulator, you can troubleshoot issues with different policy types and try to understand why some requests are allowed or denied. &lt;/p&gt;

&lt;p&gt;The IAM Policy Simulator console provides a testing playground for IAM policies and an easy way to test which actions are allowed or denied to specific principals for specific resources. The simulator doesn’t actually make the requests, so it’s a safe space to experiment and gather information. You can even test new policies that don’t yet exist in your account and simulate real-world scenarios by defining complex conditions. Finally, since the returned output is a message with the result and relevant information, you can identify which statement in a policy denies or allows access. &lt;/p&gt;

&lt;p&gt;For example, we have created an IAM user named &lt;em&gt;test-user&lt;/em&gt; and attached the policy &lt;em&gt;ListBucketContents&lt;/em&gt; that we created earlier. We can use the AWS Policy Simulator to validate that this user can indeed list the objects of the example bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobjhpjx69uynbx6qrofa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobjhpjx69uynbx6qrofa.png" alt=" " width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After defining all the parameters for the simulation, we choose &lt;em&gt;Run Simulation&lt;/em&gt; and check that the Permission result is &lt;em&gt;allowed&lt;/em&gt;. Check out the detailed &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_policies.html" rel="noopener noreferrer"&gt;Troubleshooting IAM Policies guide&lt;/a&gt; for issues with IAM policies.&lt;/p&gt;
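The decision the simulator reports follows IAM’s standard evaluation logic: requests are implicitly denied by default, an explicit Allow grants access, and an explicit Deny overrides any Allow. Here is a deliberately simplified Python illustration of that flow; matching is reduced to exact strings, while the real evaluator also handles wildcards, conditions, and multiple policy types:

```python
def evaluate(policies, action, resource):
    """Simplified IAM decision logic: default deny, explicit deny always wins."""
    decision = "implicitDeny"  # nothing is allowed until a statement says so
    for policy in policies:
        for statement in policy.get("Statement", []):
            if statement.get("Action") == action and statement.get("Resource") == resource:
                if statement.get("Effect") == "Deny":
                    return "explicitDeny"  # an explicit Deny overrides any Allow
                if statement.get("Effect") == "Allow":
                    decision = "allowed"
    return decision


list_bucket = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ListObjectsInBucket",
        "Effect": "Allow",
        "Action": "s3:ListBucket",
        "Resource": "arn:aws:s3:::this-is-an-example-s3-bucket-name",
    }],
}

print(evaluate([list_bucket], "s3:ListBucket",
               "arn:aws:s3:::this-is-an-example-s3-bucket-name"))  # allowed
print(evaluate([list_bucket], "s3:GetObject",
               "arn:aws:s3:::this-is-an-example-s3-bucket-name"))  # implicitDeny
```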

&lt;h2&gt;
  
  
  Versioning IAM Policies
&lt;/h2&gt;

&lt;p&gt;A valuable feature of IAM policies that you should be aware of is that AWS automatically keeps up to five versions of each managed policy. Whenever you change a policy and save it, a new version is created instead of overwriting the existing one. &lt;/p&gt;

&lt;p&gt;When the limit of five stored versions is reached, you can select which older version to remove. This feature is convenient because it allows you to quickly revert to a previous policy version in case of issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  IAM Policies Best Practices
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Follow Least Privilege Principles&lt;/strong&gt;: When creating IAM policies, grant only the permissions necessary to perform the job. Scope policies to specific actions, resources, and principals, and add custom conditions to achieve the required controls. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate policies based on access activity&lt;/strong&gt;: Although it’s considered a best practice, implementing fine-grained policies that grant the least privilege can be challenging when you’re starting with IAM. For this reason, you can initially use policies that allow more permissions than they should and then refine them by &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_generate-policy.html" rel="noopener noreferrer"&gt;generating a new policy based on the access activity&lt;/a&gt; of an IAM entity. For newcomers, this is a more pragmatic path to least privilege that makes the journey toward better security and access management smoother. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Get started with managed policies&lt;/strong&gt;: It’s totally OK to start with AWS-managed policies while you experiment and learn. These policies cover common use cases and help your team set up the necessary permissions quickly. When you feel comfortable enough with IAM, consider tightening your policies by creating customer-managed policies that follow the least-privilege approach.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Assign permissions with roles and groups &amp;amp; avoid inline policies&lt;/strong&gt;: It’s considered best to avoid assigning permissions directly to users or using inline policies. For easier access management and better security controls, attach policies to groups or roles. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validate policies&lt;/strong&gt;: Every time you create or edit policies, validate them using the AWS helper tools, as we have seen in the section Validating IAM Policies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use IAM Roles for EC2 instances (EC2 Instance Profiles)&lt;/strong&gt;: For apps that run on EC2 instances, attach the necessary permissions to IAM roles and specify the role as a launch parameter for the instance. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use policy conditions&lt;/strong&gt;: Leverage policy conditions to achieve complex policies with custom rules. By combining different conditions in policies, we can comply with specific requirements according to our organization’s standards. Check out the &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html" rel="noopener noreferrer"&gt;Policy Elements: Condition&lt;/a&gt; reference page for more information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test IAM policies with IAM Policy Simulator&lt;/strong&gt;: Test existing and new policies quickly and without affecting any environment with the IAM Policy Simulator. Use it to understand your policies better and troubleshoot different issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review IAM policies on a regular basis&lt;/strong&gt;: Cloud environments aren’t static; they evolve over time. For this reason, IAM policies should be reviewed and updated to reflect changes. Following best practices and least privilege isn’t something you configure once and forget; it requires continuous effort. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tag IAM policies&lt;/strong&gt;: Tag your IAM policies to add metadata to them the same way you tag other AWS resources. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Control access to resources with tags&lt;/strong&gt;: If you have a proper tagging strategy for your AWS resources, you can create elaborate IAM policies that control access to them based on tags. This method becomes handy when you want to separate access to resources based on owners or teams.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use IaC to create your IAM policies&lt;/strong&gt;: In this tutorial, we created IAM policies from the AWS Console, which is fine as a starting point. Eventually, though, you should treat your IAM resources as infrastructure components and apply the same Infrastructure as Code principles to them as to any other AWS resource.&lt;/li&gt;
&lt;/ul&gt;
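To make the tag-based access control point above concrete, a policy condition of the following shape allows an action only on resources whose team tag matches the calling principal’s team tag. The action and tag key here are hypothetical placeholders; see the Condition reference linked above for the full syntax:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:StartInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/team": "${aws:PrincipalTag/team}"
        }
      }
    }
  ]
}
```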

&lt;h2&gt;
  
  
  Key Points
&lt;/h2&gt;

&lt;p&gt;We have explored IAM concepts in the context of AWS, and more specifically, we focused on IAM policies. We saw how to create and test policies, assign necessary permissions, and combine different IAM mechanisms to achieve efficient access management controls on AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.spacelift.io/integrations/cloud-providers/aws" rel="noopener noreferrer"&gt;Here&lt;/a&gt; you can learn more about Spacelift integration with AWS, and &lt;a href="https://docs.spacelift.io/concepts/policy" rel="noopener noreferrer"&gt;here&lt;/a&gt; about the usage of policies with Spacelift.&lt;/p&gt;

&lt;p&gt;Thank you for reading, and I hope you enjoyed this article as much as I did.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>iam</category>
      <category>iac</category>
    </item>
    <item>
      <title>Ansible Roles: Basics &amp; How to Combine Them With Playbooks</title>
      <dc:creator>Ioannis Moustakis</dc:creator>
      <pubDate>Thu, 09 Jun 2022 19:05:54 +0000</pubDate>
      <link>https://forem.com/spacelift/ansible-roles-basics-how-to-combine-them-with-playbooks-3ag9</link>
      <guid>https://forem.com/spacelift/ansible-roles-basics-how-to-combine-them-with-playbooks-3ag9</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pmtzwiy3ppt3g7v1q8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pmtzwiy3ppt3g7v1q8a.png" alt=" " width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This blog post explores the concept of Ansible roles, their structure, and how we can combine them with our playbooks. We will analyze their functionality and usage along with ways to create new roles and retrieve public shared roles from Ansible Galaxy, a public repository for Ansible resources.&lt;/p&gt;

&lt;p&gt;If you are new to Ansible, you might also find these tutorials helpful &lt;a href="https://spacelift.io/blog/ansible-tutorial" rel="noopener noreferrer"&gt;Ansible Tutorial for Beginners&lt;/a&gt;, &lt;a href="https://spacelift.io/blog/ansible-playbooks" rel="noopener noreferrer"&gt;Working with Ansible Playbooks&lt;/a&gt;, and &lt;a href="https://spacelift.io/blog/ansible-variables" rel="noopener noreferrer"&gt;How to Use Different Types of Ansible Variables&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Roles Are Useful in Ansible
&lt;/h2&gt;

&lt;p&gt;When starting with Ansible, it’s pretty common to focus on writing playbooks to automate repeating tasks quickly. As new users automate more and more tasks with playbooks and their Ansible skills mature, they reach a point where using just Ansible playbooks is limiting. Ansible Roles to the rescue!&lt;/p&gt;

&lt;p&gt;Roles enable us to reuse and share our Ansible code efficiently. They provide a well-defined framework and structure for organizing your tasks, variables, handlers, metadata, templates, and other files. This way, we can reference and call them in our playbooks with just a few lines of code and reuse the same roles across many projects without duplicating code.&lt;/p&gt;

&lt;p&gt;Since we have our code grouped and structured according to the Ansible standards, it is quite straightforward to share it with others. We will see an example of how we can accomplish that later with Ansible Galaxy. &lt;/p&gt;

&lt;p&gt;Organizing our Ansible content into roles provides us with a structure that is more manageable than just using playbooks. This might not be evident in minimal projects but as the number of playbooks grows, so does the complexity of our projects. &lt;/p&gt;

&lt;p&gt;Lastly, placing our Ansible code into roles lets us organize our automation projects into logical groupings and follow the separation of concerns design principle. Collaboration and velocity are also improved since different users can work on separate roles in parallel without modifying the same playbooks simultaneously.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ansible Role Structure
&lt;/h2&gt;

&lt;p&gt;Let’s have a look at the standard role directory structure. For each role, we define a directory with the same name. Inside, files are grouped into subdirectories according to their function. A role must include &lt;em&gt;at least one of these standard directories and can omit any&lt;/em&gt; that isn’t actively used.&lt;/p&gt;

&lt;p&gt;To assist us with quickly creating a well-defined role directory structure skeleton, we can leverage the command &lt;strong&gt;ansible-galaxy init&lt;/strong&gt;. The &lt;em&gt;ansible-galaxy&lt;/em&gt; command comes bundled with Ansible, so there is no need to install extra packages.&lt;/p&gt;

&lt;p&gt;Create a skeleton structure for a role named &lt;em&gt;test_role&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrg47735adjj7ep62ydb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrg47735adjj7ep62ydb.png" alt=" " width="800" height="63"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This command generates this directory structure:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpol0c3phi89tyqtgl5hi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpol0c3phi89tyqtgl5hi.png" alt=" " width="605" height="510"&gt;&lt;/a&gt;&lt;/p&gt;
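In plain text, the generated skeleton looks like this (output may differ slightly between Ansible versions):

```text
test_role/
├── README.md
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml
```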

&lt;p&gt;In each of these subdirectories, Ansible automatically looks for and loads a &lt;em&gt;main.yml&lt;/em&gt; file (the variations &lt;em&gt;main.yaml&lt;/em&gt; and &lt;em&gt;main&lt;/em&gt; are also accepted). It’s possible to include additional YAML files in some directories. For instance, you can group your tasks in separate YAML files according to some characteristic.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;defaults&lt;/strong&gt; –  Includes default values for variables of the role. Here we define some sane default variables, but they have the lowest priority and are usually overridden by other methods to customize the role.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;files&lt;/strong&gt;  – Contains static and custom files that the role uses to perform various tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;handlers&lt;/strong&gt; – A set of &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_handlers.html" rel="noopener noreferrer"&gt;handlers&lt;/a&gt; that are triggered by tasks of the role.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;meta&lt;/strong&gt; – Includes metadata information for the role, its dependencies, the author, license, supported platforms, etc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tasks&lt;/strong&gt; – A list of tasks to be executed by the role. This part could be considered similar to the task section of a playbook.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;templates&lt;/strong&gt; – Contains &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_templating.html" rel="noopener noreferrer"&gt;Jinja2&lt;/a&gt; template files used by tasks of the role.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tests&lt;/strong&gt; – Includes configuration files related to role testing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;vars&lt;/strong&gt; – Contains variables defined for the role. These have quite a high &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_templating.html" rel="noopener noreferrer"&gt;precedence&lt;/a&gt; in Ansible.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another directory that wasn’t automatically generated by the &lt;em&gt;ansible-galaxy init&lt;/em&gt; command but is mentioned in the official Ansible docs, and you might find helpful in some cases, is the &lt;strong&gt;library&lt;/strong&gt; directory. Inside it, we define any custom modules and plugins that we have written and used by the role. Finally, we also have a preconfigured &lt;strong&gt;README.md&lt;/strong&gt; file that we can fill with details and useful information about our role.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Roles
&lt;/h2&gt;

&lt;p&gt;A common tactic is to refactor an Ansible playbook into a role. To achieve that, we have to decompose the different parts of a playbook and stitch them together into an Ansible role using the directories we’ve just seen in the previous section. &lt;/p&gt;

&lt;p&gt;This section will go through an example of creating a new role for installing and configuring a minimal Nginx web server from scratch. If you wish to follow along, you will need &lt;a href="https://www.virtualbox.org/" rel="noopener noreferrer"&gt;VirtualBox&lt;/a&gt;, &lt;a href="https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html" rel="noopener noreferrer"&gt;Ansible&lt;/a&gt;, and &lt;a href="https://www.vagrantup.com/" rel="noopener noreferrer"&gt;Vagrant&lt;/a&gt; installed locally.&lt;/p&gt;

&lt;p&gt;Ansible searches for referenced roles in common paths like the orchestrating playbook’s directory, the roles/ directory, or the path set by the &lt;a href="https://docs.ansible.com/ansible/latest/reference_appendices/config.html#default-roles-path" rel="noopener noreferrer"&gt;roles_path&lt;/a&gt; configuration option. It’s also possible to set a custom path when referencing a role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- hosts: all
  roles:
    - role: "/custom_path/to/the/role"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the &lt;em&gt;ansible-galaxy init&lt;/em&gt; command, we generate the initial directory structure for a role named &lt;em&gt;webserver&lt;/em&gt; inside a parent directory named &lt;em&gt;roles&lt;/em&gt;. Let’s go ahead and delete the &lt;em&gt;tests&lt;/em&gt; directory since we won’t be using it. We will see how to utilize all the other directories during our demo. &lt;/p&gt;

&lt;p&gt;The final structure of our role looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6slevcwpicl2j9fdm157.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6slevcwpicl2j9fdm157.png" alt=" " width="484" height="952"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, let’s define the most fundamental part of our role, its tasks. Head to the &lt;em&gt;tasks&lt;/em&gt; directory and edit the &lt;em&gt;main.yml&lt;/em&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;roles/webserver/tasks/main.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
# tasks file for webserver
- name: Update and upgrade apt
  ansible.builtin.apt:
    update_cache: yes
    cache_valid_time: 3600
    upgrade: yes

- name: "Install Nginx to version {{ nginx_version }}"
  ansible.builtin.apt:
    name: "nginx={{ nginx_version }}"
    state: present

- name: Copy the Nginx configuration file to the host
  ansible.builtin.template:
    src: templates/nginx.conf.j2
    dest: /etc/nginx/sites-available/default

- name: Create link to the new config to enable it
  ansible.builtin.file:
    dest: /etc/nginx/sites-enabled/default
    src: /etc/nginx/sites-available/default
    state: link

- name: Create Nginx directory
  ansible.builtin.file:
    path: "{{ nginx_custom_directory }}"
    state: directory

- name: Copy index.html to the Nginx directory
  ansible.builtin.copy:
    src: files/index.html
    dest: "{{ nginx_custom_directory }}/index.html"
  notify: Restart the Nginx service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we define a handful of tasks that update the operating system, install an Nginx web server, and set up a minimal custom configuration for demo purposes.&lt;/p&gt;

&lt;p&gt;Next, we move to the &lt;em&gt;defaults&lt;/em&gt; directory, where we will set default values for the variables used in the tasks. If there is no other definition for these variables, they will be picked up and used by the role, but usually, they are meant to be easily overwritten.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;roles/webserver/defaults/main.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
# defaults file for webserver
nginx_version: 1.18.0-0ubuntu1.3
nginx_custom_directory: /var/www/example_domain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
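Because defaults sit at the bottom of the variable precedence order, a play can override them when it calls the role, for example to pin a different Nginx version (the version string here is purely illustrative):

```yaml
- hosts: all
  become: true
  roles:
    - role: webserver
      vars:
        nginx_version: 1.18.0-0ubuntu1.2
```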



&lt;p&gt;Moving on to our &lt;em&gt;vars&lt;/em&gt; directory, we define values with higher precedence that aren’t meant to be overridden at the play level. Here, we override the default variable that defines the Nginx custom directory.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;roles/webserver/vars/main.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
# vars file for webserver
nginx_custom_directory: /home/ubuntu/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the &lt;em&gt;handlers&lt;/em&gt; directory, we define any handler that is triggered by our tasks. One of our tasks includes a &lt;em&gt;notify&lt;/em&gt; keyword since it needs to trigger our &lt;em&gt;Restart the Nginx service&lt;/em&gt; handler.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;roles/webserver/handlers/main.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
# handlers file for webserver
- name: Restart the Nginx service
  service:
    name: nginx
    state: restarted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the &lt;em&gt;templates&lt;/em&gt; directory, we leverage a Jinja2 template file for the Nginx configuration that gets the Nginx custom directory value from one of our previously defined variables.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;roles/webserver/templates/nginx.conf.j2&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
        listen 80;
        listen [::]:80;
        root {{ nginx_custom_directory }};
        index index.html;
        location / {
                try_files $uri $uri/ =404;
        }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the &lt;em&gt;files&lt;/em&gt; directory, we define a static file &lt;em&gt;index.html&lt;/em&gt; that will serve as our static demo webpage.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;roles/webserver/files/index.html&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;html&amp;gt;
 &amp;lt;head&amp;gt;
   &amp;lt;title&amp;gt;Hello from Nginx&amp;lt;/title&amp;gt;
 &amp;lt;/head&amp;gt;
 &amp;lt;body&amp;gt;
 &amp;lt;h1&amp;gt;This is our test webserver&amp;lt;/h1&amp;gt;
 &amp;lt;p&amp;gt;This Nginx web server was deployed by Ansible.&amp;lt;/p&amp;gt;
 &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We use the &lt;em&gt;meta&lt;/em&gt; directory to add metadata and information about the role. Any dependencies on other roles go here as well.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;roles/webserver/meta/main.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;galaxy_info:
  author: Ioannis Moustakis
  description: Installs Nginx and configures a minimal test webserver
  company: ACME Corp
  license: Apache-2.0
  role_name: webserver

  min_ansible_version: "2.1"

 # If this is a Container Enabled role, provide the minimum Ansible Container version.
 # min_ansible_container_version:

 #
 # Provide a list of supported platforms, and for each platform a list of versions.
 # If you don't wish to enumerate all versions for a particular platform, use 'all'.
 # To view available platforms and versions (or releases), visit:
 # https://galaxy.ansible.com/api/v1/platforms/
 #
  platforms:
  - name: Ubuntu
    versions:
      - bionic
      - focal

  galaxy_tags:
    - nginx
    - webserver
    - development
    - test

dependencies: []
 # List your role dependencies here, one per line. Be sure to remove the '[]' above,
 # if you add dependencies to this list.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, we update the &lt;em&gt;README.md&lt;/em&gt; file accordingly. The autogenerated file provided by the &lt;em&gt;ansible-galaxy init&lt;/em&gt; command includes many pointers and guidance on filling it in nicely. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;roles/webserver/README.md&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Role Name
=========

This is a role created for demonstration purposes that configures a basic nginx webserver with a minimal configuration.

Requirements
------------

Any prerequisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.

* Ansible
* Jinja2

Role Variables
--------------

A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.

### defaults/main.yml
Default nginx installation variables.

* nginx_version: Specific version of nginx to install
* nginx_custom_directory: Custom directory for nginx installation

### vars/main.yml
Here we define variables that have high precedence and aren't intended to be changed by the play.

* nginx_custom_directory: Custom directory for nginx installation

Dependencies
------------

A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.

Example Playbook
----------------

Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:

    - hosts: all
      become: true
      roles:
        - webserver

License
-------

Apache-2.0

Author Information
------------------

Ioannis Moustakis

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Using Roles
&lt;/h2&gt;

&lt;p&gt;Once we have defined all the necessary parts of our role, it’s time to use it in plays. The classic and most obvious way is to reference a role at the play level with the &lt;strong&gt;roles&lt;/strong&gt; option:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- hosts: all
  become: true
  roles:
    - webserver
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this option, each &lt;em&gt;role&lt;/em&gt; defined in our playbook is executed before any other tasks defined in the play.&lt;/p&gt;

&lt;p&gt;This is an example play to try out our new webserver role. Let’s go ahead and execute this play. To follow along, you should first run the &lt;code&gt;vagrant up&lt;/code&gt; command from the top directory of &lt;a href="https://github.com/spacelift-io-blog-posts/Blog-Technical-Content/tree/master/ansible-roles" rel="noopener noreferrer"&gt;this repository&lt;/a&gt; to create our target remote host.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ubzktr2mwuzprxsb2zs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ubzktr2mwuzprxsb2zs.png" alt=" " width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sweet! All the tasks have been completed successfully. Let’s also validate that we have configured our Nginx web server correctly.&lt;/p&gt;

&lt;p&gt;Use the command &lt;code&gt;vagrant ssh host1&lt;/code&gt; to connect to our demo Vagrant host. Then execute &lt;code&gt;systemctl status nginx&lt;/code&gt; to verify that the Nginx service is up and running. Finally, run the command &lt;code&gt;curl localhost&lt;/code&gt; to check if the web server responds with the custom page that we configured.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqo29mvu44k28zr8ektps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqo29mvu44k28zr8ektps.png" alt=" " width="800" height="727"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When using the &lt;em&gt;roles&lt;/em&gt; option at the play level, we can override any of the default role’s variables or pass other keywords, like &lt;em&gt;tags&lt;/em&gt;. Tags are added to all tasks within the role.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- hosts: all
 become: true
 roles:
   - role: webserver
     vars:
       nginx_version: 1.17.10-0ubuntu1
     tags: example_tag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we override the default variable &lt;em&gt;nginx_version&lt;/em&gt; with another version.&lt;/p&gt;

&lt;p&gt;Apart from defining roles at the play level with the &lt;em&gt;roles&lt;/em&gt; option, we can also use them at the task level: &lt;strong&gt;include_role&lt;/strong&gt; runs a role dynamically, while &lt;strong&gt;import_role&lt;/strong&gt; imports it statically. These options are useful when we want our role tasks to run at a specific point in the play rather than before all other tasks. This way, roles run in the order in which they are defined among the tasks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- hosts: all
  tasks:
    - name: Print a message
      ansible.builtin.debug:
        msg: "This task runs first and before the example role"

    - name: Include the example role and run its tasks
      include_role:
        name: example

    - name: Print a message
      ansible.builtin.debug:
        msg: "This task runs after the example role"

    - name: Include the example_2 role and run its tasks in the end
      include_role:
        name: example_2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#using-roles" rel="noopener noreferrer"&gt;official docs&lt;/a&gt; for achieving more fine-grained control of your role’s task execution order. &lt;/p&gt;

&lt;p&gt;Even if we define a role multiple times in a play, Ansible executes it only once. Occasionally, though, we might want to run a role several times with different parameters; passing a different set of parameters on each invocation allows the role to execute more than once. &lt;/p&gt;

&lt;p&gt;Example of executing the role test three times:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- hosts: all
  roles:
    - role: test
      message: "First time"
    - role: test
      message: "Second time"
    - role: test
      message: "Third time"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
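&lt;p&gt;Relatedly, if a role should run repeatedly even when called with identical parameters, it can opt in through its metadata. A minimal sketch, assuming a role named &lt;em&gt;test&lt;/em&gt; as above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# roles/test/meta/main.yml (excerpt)
# Allow Ansible to execute this role multiple times in the same play,
# even when it is invoked with identical parameters.
allow_duplicates: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;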



&lt;h2&gt;
  
  
  Sharing Roles with Ansible Galaxy
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://galaxy.ansible.com/" rel="noopener noreferrer"&gt;Ansible Galaxy&lt;/a&gt; is an online open-source, public repository of Ansible content. There, we can search, download and use any shared roles and leverage the power of its community. We have already used its client, &lt;em&gt;ansible-galaxy&lt;/em&gt;, which comes bundled with Ansible and provides a framework for creating well-structured roles.&lt;/p&gt;

&lt;p&gt;You can use Ansible Galaxy to browse for roles that fit your use case and save time by using them instead of writing everything from scratch. For each role, you can see its code repository, documentation, and even a rating from other users. Before running any role, check its code repository to ensure it’s safe and does what you expect. Here’s a blog post on &lt;a href="https://www.jeffgeerling.com/blog/2019/how-evaluate-community-ansible-roles-your-playbooks" rel="noopener noreferrer"&gt;How to evaluate community Ansible roles&lt;/a&gt;. If you are curious about Galaxy, check out its &lt;a href="https://galaxy.ansible.com/docs/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; page for more details.&lt;/p&gt;

&lt;p&gt;To download and install a role from Galaxy, use the &lt;code&gt;ansible-galaxy install&lt;/code&gt; command. You can usually find the exact installation command on the role’s Galaxy page. For example, look at &lt;a href="https://galaxy.ansible.com/geerlingguy/postgresql" rel="noopener noreferrer"&gt;this role that installs a PostgreSQL server&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Install the role with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-galaxy install geerlingguy.postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
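&lt;p&gt;For projects that depend on several Galaxy roles, a common pattern is to list them in a &lt;em&gt;requirements.yml&lt;/em&gt; file and install them in one step. A sketch (the version pin here is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# requirements.yml
roles:
  - name: geerlingguy.postgresql
    version: 3.0.0  # hypothetical version pin

# Install every listed role with:
#   ansible-galaxy install -r requirements.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;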



&lt;p&gt;Then use it in a playbook while overriding the default role variable &lt;em&gt;postgresql_users&lt;/em&gt; to create an example user for us.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- hosts: all
  become: true
  roles:
    - role: geerlingguy.postgresql
      vars:
        postgresql_users:
          - name: christina

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Ansible Roles Tips &amp;amp; Tricks
&lt;/h2&gt;

&lt;p&gt;This section gathers some tips and tricks that might help you along your journey with Ansible roles.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always use &lt;strong&gt;descriptive names for your roles, tasks, and variables. Document the intent&lt;/strong&gt; and the purpose of your roles thoroughly and point out any variables that the user has to set. Set &lt;strong&gt;sane defaults&lt;/strong&gt; and simplify your roles as much as possible to allow users to get onboarded quickly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Never place secrets and sensitive data in your roles’ YAML files&lt;/strong&gt;. Secret values should be passed to the role at execution time by the play as variables and should never be stored in any code repository.&lt;/li&gt;
&lt;li&gt;At first, it might be tempting to define a role that handles many responsibilities. For instance, we could create a role that installs multiple components, a common anti-pattern. Try to follow the separation of concerns design principle as much as possible and &lt;strong&gt;separate your roles based on different functionalities or technical components&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Try to keep your &lt;strong&gt;roles as loosely coupled as possible&lt;/strong&gt; and avoid adding too many dependencies. &lt;/li&gt;
&lt;li&gt;To control the execution order of roles and tasks, use the &lt;em&gt;import_role&lt;/em&gt; or &lt;em&gt;include_role&lt;/em&gt; tasks instead of the classic &lt;em&gt;roles&lt;/em&gt; keyword.&lt;/li&gt;
&lt;li&gt;When it makes sense, group your tasks in separate task files for improved clarity and organization.&lt;/li&gt;
&lt;/ul&gt;
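&lt;p&gt;As a sketch of the secrets guidance above, a role can reference a secret variable without ever storing its value; the play then supplies it at execution time (the &lt;em&gt;app_db_password&lt;/em&gt; variable and file names here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# roles/app/tasks/main.yml (excerpt): the role only references the variable
- name: Render application config containing the database password
  ansible.builtin.template:
    src: app.conf.j2
    dest: /etc/app/app.conf
    mode: "0600"
  no_log: true  # avoid leaking the secret value in task output

# Supply the secret when running the play, for example from the environment:
#   ansible-playbook site.yml --extra-vars "app_db_password=$APP_DB_PASSWORD"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;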

&lt;h2&gt;
  
  
  Key Points
&lt;/h2&gt;

&lt;p&gt;We took a deep dive into Ansible roles and their utility, saw how to refactor our playbooks into roles or generate them from scratch, went through a complete example of creating and using a role, and explored how we can benefit from the Ansible Galaxy community.&lt;/p&gt;

&lt;p&gt;Thank you for reading, and I hope you enjoyed this “Ansible Roles” blog post as much as I did.&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>automation</category>
      <category>iac</category>
    </item>
    <item>
      <title>Terraform Output Values : Complete Guide &amp; Examples</title>
      <dc:creator>Ioannis Moustakis</dc:creator>
      <pubDate>Sun, 15 May 2022 19:08:02 +0000</pubDate>
      <link>https://forem.com/spacelift/terraform-output-values-complete-guide-examples-36gm</link>
      <guid>https://forem.com/spacelift/terraform-output-values-complete-guide-examples-36gm</guid>
      <description>&lt;p&gt;This blog post will deep dive into how Terraform handles output and how we can leverage and use output values efficiently across our Terraform projects. Output values allow us to share data between modules and workspaces while also providing us the flexibility to pass values to external systems for automation purposes.&lt;/p&gt;

&lt;p&gt;You have come to the right place if you are new to Terraform! Spacelift has curated a ton of valuable material, tutorials, and &lt;a href="https://spacelift.io/blog/terraform" rel="noopener noreferrer"&gt;blog posts around Terraform&lt;/a&gt; and how industry experts use it on its Spacelift blog. &lt;/p&gt;

&lt;h2&gt;
  
  
  Output vs Input Values
&lt;/h2&gt;

&lt;p&gt;Input variables permit us to customize Terraform configurations without hardcoding any values. This way, we can reuse &lt;a href="https://spacelift.io/blog/what-are-terraform-modules-and-how-do-they-work" rel="noopener noreferrer"&gt;Terraform modules&lt;/a&gt; while assigning custom values based on our needs. Usually, we refer to them as just &lt;strong&gt;variables&lt;/strong&gt; in the context of Terraform.&lt;/p&gt;

&lt;p&gt;To define input variables, we must declare them using a variable block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "aws_region" {
  description = "AWS region"
  type        = string
}

variable "ec2_instance_type" {
  description = "Instance type for EC2 instances"
  type        = string
  default     = "t2.small"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The variable’s name is the label we set following the variable keyword. For every variable, we have the option to set some arguments such as &lt;em&gt;default&lt;/em&gt;, &lt;em&gt;type&lt;/em&gt;, &lt;em&gt;description&lt;/em&gt;, &lt;em&gt;validation&lt;/em&gt;, &lt;em&gt;sensitive&lt;/em&gt;, and &lt;em&gt;nullable&lt;/em&gt;. Check the official documentation about these arguments and how to set them in detail &lt;a href="https://www.terraform.io/language/values/variables#arguments" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;
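&lt;p&gt;As an illustration of these arguments, a hypothetical variable might combine a type, a default, and a validation rule like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "environment" {
  description = "Deployment environment for the infrastructure"
  type        = string
  default     = "development"

  # Reject any value outside the allowed set at plan time.
  validation {
    condition     = contains(["development", "staging", "production"], var.environment)
    error_message = "The environment must be development, staging, or production."
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;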

&lt;p&gt;After declaring our input variables, we can utilize them in modules by referencing them as &lt;strong&gt;var.&amp;lt;variable_name&amp;gt;&lt;/strong&gt;, where &lt;em&gt;variable_name&lt;/em&gt; matches the label following the &lt;em&gt;variable&lt;/em&gt; keyword. For example, to reference the variable &lt;em&gt;ec2_instance_type&lt;/em&gt; that we defined above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "web_server" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = var.ec2_instance_type
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On the other hand, output values empower us to export helpful information from the infrastructure we have defined and provisioned with Terraform. In the context of Terraform, we refer to output values as just &lt;strong&gt;outputs&lt;/strong&gt; for simplicity. &lt;/p&gt;

&lt;p&gt;Combining input and output variables gives us the flexibility to customize, automate, reuse, and share our Terraform code easily. Input variables are similar to function arguments in traditional programming, while output values work like a function’s return values. Both are equally important for making our Terraform projects functional and for facilitating the flow of data into and out of them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Outputs Use Cases
&lt;/h2&gt;

&lt;p&gt;More specifically, output values are quite helpful in certain use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can expose information from child modules to a parent module using outputs.&lt;/li&gt;
&lt;li&gt;From a root module, we can print outputs in the command line or pass these output values to external systems for automation purposes.&lt;/li&gt;
&lt;li&gt;When we use a remote state, other configurations can access the root module’s outputs through the &lt;a href="https://www.terraform.io/language/state/remote-state-data" rel="noopener noreferrer"&gt;terraform_remote_state&lt;/a&gt; data source. Output values from child modules aren’t accessible this way.&lt;/li&gt;
&lt;/ul&gt;
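&lt;p&gt;As a sketch of the remote state use case above, another configuration could read our root module’s outputs through the &lt;em&gt;terraform_remote_state&lt;/em&gt; data source (the backend and bucket details here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "example-terraform-state"  # hypothetical state bucket
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# Only root module outputs are reachable, e.g.:
#   data.terraform_remote_state.network.outputs.vpc_id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;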

&lt;h2&gt;
  
  
  Declaring and Using Output Values
&lt;/h2&gt;

&lt;p&gt;In order to define an output value, we have to use the &lt;strong&gt;output&lt;/strong&gt; block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "instance_public_ip" {
  description = "Public IP of EC2 instance"
  value       = aws_instance.web_server.public_ip
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, we define an output value with the name &lt;em&gt;instance_public_ip&lt;/em&gt;. This way, we can pass the value to the parent module or display it to the end-user if it’s an output of the root module.&lt;/p&gt;

&lt;p&gt;The value argument, which is the returned output value, takes an expression referencing other resource or module attributes. Terraform renders and displays output values when executing &lt;em&gt;terraform apply&lt;/em&gt;; &lt;em&gt;terraform plan&lt;/em&gt; only shows the expected changes to outputs. &lt;/p&gt;

&lt;p&gt;To use outputs of nested modules from parent modules, we have to reference them as: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;module.&amp;lt;module_name&amp;gt;.&amp;lt;output_value_name&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For example, to reference the output value &lt;em&gt;instance_public_ip&lt;/em&gt; that we have declared above in a module named &lt;em&gt;aws_web_server_instance&lt;/em&gt; from its parent module, we have to use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.aws_web_server_instance.instance_public_ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s examine how we can use all this in a real-world example. In &lt;a href="https://github.com/spacelift-io-blog-posts/Blog-Technical-Content/tree/master/terraform-output" rel="noopener noreferrer"&gt;this GitHub repository&lt;/a&gt;, we define the Terraform configuration for this example’s infrastructure. To follow along, you will need to &lt;a href="https://spacelift.io/blog/how-to-install-terraform" rel="noopener noreferrer"&gt;install Terraform&lt;/a&gt;, have an &lt;a href="https://aws.amazon.com/console/" rel="noopener noreferrer"&gt;AWS account&lt;/a&gt; ready, and &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs" rel="noopener noreferrer"&gt;authenticate with your AWS keys&lt;/a&gt; via the command line. Note that you might be charged a few dollars in your AWS account if you follow along.&lt;/p&gt;

&lt;p&gt;In this example, we create the necessary infrastructure for a webserver. For the needs of this demo, we split our Terraform configuration into three modules, the root one and two child modules responsible for handling VPC-related resources and EC2 instance-related resources.&lt;/p&gt;

&lt;p&gt;The project structure looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F60759qvjsecacoo29v57.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F60759qvjsecacoo29v57.png" alt=" " width="567" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For each module, we define a &lt;em&gt;main.tf&lt;/em&gt; file that handles the main functionality of the module. &lt;/p&gt;

&lt;p&gt;Variable’s declarations and default values are populated in &lt;em&gt;variables.tf&lt;/em&gt; files, while for the root module, we also use a &lt;em&gt;terraform.tfvars&lt;/em&gt; file to set some variable values.&lt;/p&gt;

&lt;p&gt;A good practice is to define our outputs in separate &lt;em&gt;outputs.tf&lt;/em&gt; files, as you can see in the above example project structure. By declaring output values in an &lt;strong&gt;outputs.tf&lt;/strong&gt; file per module, we improve the clarity of our modules, since users can quickly see which outputs to expect from each of them. &lt;/p&gt;

&lt;p&gt;The root module utilizes and configures the aws provider and then just simply calls two child modules &lt;em&gt;aws_web_server_vpc&lt;/em&gt; and &lt;em&gt;aws_web_server_instance&lt;/em&gt; in main.tf of the top directory. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;root module main.tf&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.16.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

module "aws_web_server_vpc" {
  source = "./modules/aws-web-server-vpc"
}

module "aws_web_server_instance" {
  source            = "./modules/aws-web-server-instance"
  ec2_instance_type = var.ec2_instance_type
  vpc_id            = module.aws_web_server_vpc.vpc_id
  subnet_id         = module.aws_web_server_vpc.subnet_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We notice that when calling the module &lt;em&gt;aws_web_server_instance&lt;/em&gt;, we are passing two expressions using output values from the &lt;em&gt;aws_web_server_vpc&lt;/em&gt; module with the &lt;em&gt;module.&amp;lt;module_name&amp;gt;.&amp;lt;output_value_name&amp;gt;&lt;/em&gt; notation we have seen earlier. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;root module outputs.tf&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "vpc_id" {
  description = "ID of the vpc"
  value       = module.aws_web_server_vpc.vpc_id
}

output "instance_id" {
  description = "ID of EC2 instance"
  value       = module.aws_web_server_instance.instance_id
}

output "instance_public_ip" {
   description = "Public IP of EC2 instance"
   value       = module.aws_web_server_instance.instance_public_ip
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We define three output values for our root module, and we expect to see them at the command line after our infrastructure is provisioned. Checking the value parameter of each block, we notice that all of them are coming from output values of the two child modules, and by declaring them as output values of the root module, we are able to pass them through to the command line.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;root module variables.tf&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "aws_region" {
  description = "AWS region"
  type        = string
}

variable "ec2_instance_type" {
  description = "Instance type for EC2 instances"
  type        = string
  default     = "t2.small"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;root module terraform.tfvars&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws_region        = "us-east-1"
ec2_instance_type = "t2.nano"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s examine next our two child modules and how we use output values to pass parameters between them.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;aws-web-server-vpc module main.tf&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "web_server" {
  cidr_block       = var.vpc_cidr_block
  instance_tenancy = "default"

  tags = {
    Name = var.vpc_name
  }
}

resource "aws_subnet" "web_server" {
  vpc_id                  = aws_vpc.web_server.id
  cidr_block              = var.subnet_cidr_block
  map_public_ip_on_launch = true
  availability_zone       = var.aws_az

  tags = {
    Name = var.subnet_name
  }
}

resource "aws_internet_gateway" "web_server" {
  vpc_id = aws_vpc.web_server.id

  tags = {
    Name = var.igw_name
  }
}

resource "aws_route_table" "web_server" {
  vpc_id = aws_vpc.web_server.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.web_server.id
  }

  tags = {
    Name = var.rt_name
  }
}

resource "aws_route_table_association" "web_server" {
  subnet_id      = aws_subnet.web_server.id
  route_table_id = aws_route_table.web_server.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above module, we define some resources necessary for the networking layer of our infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;aws-web-server-vpc module variables.tf&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "vpc_cidr_block" {
  description = "CIDR block for webserver VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "vpc_name" {
  description = "Name of the vpc"
  type        = string
  default     = "web_server"
}

variable "subnet_cidr_block" {
  description = "CIDR block for the webserver subnet"
  type        = string
  default     = "10.0.0.0/24"
}

variable "subnet_name" {
  description = "Name for the webserver subnet"
  type        = string
  default     = "web_server"
}

variable "aws_az" {
  description = "Availability Zone for the webserver subnet"
  type        = string
  default     = "us-east-1a"
}

variable "igw_name" {
  description = "Name for the Internet Gateway of the webserver vpc"
  type        = string
  default     = "web_server"
}

variable "rt_name" {
  description = "Name for the route table of the webserver vpc"
  type        = string
  default     = "web_server"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;aws-web-server-vpc module outputs.tf&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "vpc_id" {
  description = "ID of the VPC"
  value       = aws_vpc.web_server.id
}

output "subnet_id" {
  description = "ID of the VPC subnet"
  value       = aws_subnet.web_server.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The two outputs we export here from this module are passed to the &lt;em&gt;aws-web-server-instance&lt;/em&gt; module as parameters in order to create the EC2 instance inside the vpc and subnet that we have just created. We saw how this was handled in the main.tf file of the root module. The output value &lt;em&gt;vpc_id&lt;/em&gt; is passed along as an output of the root module and should be printed in the command line after we apply the plan.&lt;/p&gt;

&lt;p&gt;Finally, the Terraform configuration for the &lt;em&gt;aws-web-server-instance&lt;/em&gt; module uses the passed info from the &lt;em&gt;aws-web-server-vpc&lt;/em&gt; module. It creates and configures the web server instance accordingly.  &lt;/p&gt;

&lt;p&gt;aws-web-server-instance module main.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_security_group" "web_server" {
  name        = var.ec2_security_group_name
  description = var.ec2_security_group_description
  vpc_id      = var.vpc_id

  ingress {
    description      = "Allow traffic on port 80 from everywhere"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = var.ec2_security_group_name
  }
}

resource "aws_instance" "web_server" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = var.ec2_instance_type

  subnet_id              = var.subnet_id
  vpc_security_group_ids = [aws_security_group.web_server.id]

  tags = {
    Name = var.ec2_instance_name
  }

  user_data = &amp;lt;&amp;lt;-EOF
   #!/bin/bash
   sudo yum update -y
   sudo yum install httpd -y
   sudo systemctl enable httpd
   sudo systemctl start httpd
   echo "&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;div&amp;gt;This is a test webserver!&amp;lt;/div&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;" &amp;gt; /var/www/html/index.html
   EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;aws-web-server-instance module variables.tf&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "ec2_instance_name" {
  description = "Name for web server EC2 instance"
  type        = string
  default     = "web_server"
}

variable "ec2_instance_type" {
  description = "Instance type for web server EC2 instance"
  type        = string
  default     = "t2.micro"
}

variable "ec2_security_group_name" {
  description = "Security group name for web server EC2 instance"
  type        = string
  default     = "web_server"

}

variable "ec2_security_group_description" {
  description = "Security group description for web server EC2 instance"
  type        = string
  default     = "Allow traffic for webserver"
}

variable "vpc_id" {
  description = "VPC id for web server EC2 instance"
  type        = string
}

variable "subnet_id" {
  description = "Subnet id for web server EC2 instance"
  type        = string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The two output values that we pass through the root module are also defined in this module’s outputs.tf file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "instance_id" {
  description = "ID of EC2 instance"
  value       = aws_instance.web_server.id
}

output "instance_public_ip" {
  description = "Public IP of EC2 instance"
  value       = aws_instance.web_server.public_ip
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Time to wrap up everything and execute the plan to provision our demo infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nm0gvf71i4elaocrydc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nm0gvf71i4elaocrydc.png" alt=" " width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our &lt;em&gt;terraform plan&lt;/em&gt; shows 7 new resources to be added and displays the changes to our three output values declared in the root module. Let’s go ahead and apply the plan.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8p6m5nhyvf5k9f4031b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8p6m5nhyvf5k9f4031b.png" alt=" " width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As expected, the three outputs declared in the root module are displayed at the command line, sweet! &lt;/p&gt;

&lt;p&gt;We could use these values to automate other parts of our systems and processes, but for now, we can get the value from &lt;em&gt;instance_public_ip&lt;/em&gt;, head to &lt;em&gt;http://&amp;lt;instance_public_ip&amp;gt;&lt;/em&gt;, and we should see our demo web server up and running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb74xb6ka3gloxxo3ku4u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb74xb6ka3gloxxo3ku4u.png" alt=" " width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great! Everything works as expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Output Command
&lt;/h2&gt;

&lt;p&gt;Output values are stored in the state Terraform file. Since we have successfully applied our plan, we can now access these output values at will. We can leverage the &lt;em&gt;terraform output&lt;/em&gt; command for this purpose.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdc8ii8bh5hzmc51izhg4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdc8ii8bh5hzmc51izhg4.png" alt=" " width="758" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, we can query an individual output value by name:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;terraform output &amp;lt;output_name&amp;gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajidpev73ess0tl4cms8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajidpev73ess0tl4cms8.png" alt=" " width="800" height="59"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To get the raw value without quotes, use the &lt;strong&gt;-raw&lt;/strong&gt; flag.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;terraform output -raw &amp;lt;output_name&amp;gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To get the JSON-formatted output, we can use the &lt;strong&gt;-json&lt;/strong&gt; flag.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpa8stld5kqed0gcmzfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpa8stld5kqed0gcmzfj.png" alt=" " width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is quite useful when we want to pass the outputs to other tools for automation since JSON is way easier to handle programmatically. Note that Terraform does not protect sensitive output values when using the &lt;em&gt;-json&lt;/em&gt; flag.&lt;/p&gt;
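&lt;p&gt;For example, assuming an output named &lt;em&gt;vpc_id&lt;/em&gt; and the &lt;em&gt;jq&lt;/em&gt; tool installed, we could extract a single value from the JSON document like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform output -json | jq -r '.vpc_id.value'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;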

&lt;p&gt;When we are done, let’s go ahead and delete all these resources to avoid paying for them. &lt;/p&gt;

&lt;p&gt;From the top of our repository, execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Output Values Options &amp;amp; Arguments
&lt;/h2&gt;

&lt;p&gt;When defining output values, we have a couple of options that might help us better define and organize them. &lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;description&lt;/strong&gt; argument is optional, but including it in our output declarations to document their purpose is considered good practice. It should briefly explain each output’s intent and serve as a helper description for the users of the module. We have already seen examples of this, since we defined the description argument in all the output blocks of our previous demo.&lt;/p&gt;

&lt;p&gt;In cases where we want to handle sensitive values and suppress them in command line output, we can declare an output value as &lt;strong&gt;sensitive&lt;/strong&gt;. Terraform will redact the values of sensitive outputs when planning, applying, destroying, or querying outputs to avoid printing them to the console. In practice, this is useful when we would like to pass values to other Terraform modules or automation tools without exposing them to intermediate users.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "example_password" {
  description = "An example DB password"
  value       = aws_db_instance.database.password
  sensitive   = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that Terraform won’t redact sensitive output values when you query a specific output by name. After we apply a plan with an output declared as sensitive, the console displays a message with the value redacted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxiwijwobdwqu3mite6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxiwijwobdwqu3mite6i.png" alt=" " width="768" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These values are still recorded in the state files, so anyone who can access them can also access any sensitive values of our Terraform configuration. &lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;depends_on&lt;/strong&gt; argument on output declarations is used to define dependencies explicitly when necessary. Most of the time, Terraform handles this automatically, but there are some rare use cases where you might find this option handy. Consider including a comment when you use this option to explain why it is necessary.&lt;/p&gt;
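&lt;p&gt;As a sketch, assuming a hypothetical &lt;em&gt;aws_security_group_rule&lt;/em&gt; resource that consumers of the output implicitly rely on, an explicit dependency could look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "instance_private_ip" {
  description = "Private IP of the web server"
  value       = aws_instance.web_server.private_ip

  # Explicit dependency: the IP is only useful once this rule exists
  depends_on = [aws_security_group_rule.local_access]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;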

&lt;h2&gt;
  
  
  Terraform Remote State Data Source
&lt;/h2&gt;

&lt;p&gt;Occasionally, we might need to share data between different Terraform configurations with separate states. This is where the &lt;strong&gt;terraform_remote_state&lt;/strong&gt; data source comes into play. We can retrieve the root module outputs from another Terraform configuration using this data source. It is built in and available without any extra configuration. &lt;/p&gt;

&lt;p&gt;Following up on our previous example, let’s say that we would like to create a new subnet in the VPC of our &lt;em&gt;aws-web-server-vpc&lt;/em&gt; module. This time, the new subnet needs to be defined in a completely separate Terraform configuration that has its own state. We can leverage &lt;em&gt;terraform_remote_state&lt;/em&gt; to get the value of the &lt;em&gt;vpc_id&lt;/em&gt; output defined in our previous example’s root module. Here, we use the local backend to reach the state of another configuration on the local machine. In a real-world scenario, the backend could be any remote backend that points to a Terraform state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.16.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

data "terraform_remote_state" "terraform_output" {
  backend = "local"

  config = {
    path = "../terraform-output/terraform.tfstate"
  }
}

resource "aws_subnet" "test_terraform_remote_state_subnet" {
  vpc_id            = data.terraform_remote_state.terraform_output.outputs.vpc_id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1b"

  tags = {
    Name = "test_terraform_remote_state_subnet"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that only the output values of the root module are accessible from the remote state. If we want to pass values from nested modules, we have to configure a passthrough output value declaration as we defined earlier in the root module of our previous example. &lt;/p&gt;
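&lt;p&gt;Such a passthrough declaration in the root module simply re-exports a child module’s output; the names below assume the modules from our earlier example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Root module: re-export an output defined in a child module
output "vpc_id" {
  description = "The id of the vpc created by the child module"
  value       = module.aws-web-server-vpc.vpc_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;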

&lt;p&gt;Although this option is handy for some use cases, it also has some caveats. To use this data source, the user must have access to the entire state snapshot, which could potentially expose sensitive data. Check out the official docs to find &lt;a href="https://www.terraform.io/language/state/remote-state-data#alternative-ways-to-share-data-between-configurations" rel="noopener noreferrer"&gt;alternative ways to share data between configurations&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Points
&lt;/h2&gt;

&lt;p&gt;We have seen how Terraform handles and exports output values between modules, and the different options for configuring outputs. We also compared input and output variables and examined multiple use cases where outputs are helpful. Finally, we went through a complete example of using output values between different modules and printing them to the console. &lt;/p&gt;

&lt;p&gt;Thank you for reading, and I hope you enjoyed this “Terraform Outputs” blog post as much as I did.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How to Use Different Types of Ansible Variables(Examples)</title>
      <dc:creator>Ioannis Moustakis</dc:creator>
      <pubDate>Sun, 15 May 2022 18:21:47 +0000</pubDate>
      <link>https://forem.com/spacelift/how-to-use-different-types-of-ansible-variablesexamples-5c8i</link>
      <guid>https://forem.com/spacelift/how-to-use-different-types-of-ansible-variablesexamples-5c8i</guid>
      <description>&lt;p&gt;This blog post deep dives into Ansible Variables, which allow us to parametrize different Ansible components. Variables store values for reuse inside an Ansible project. &lt;/p&gt;

&lt;p&gt;If you are still learning how to use Ansible, you might also find helpful the introductory &lt;a href="https://spacelift.io/blog/ansible-tutorial" rel="noopener noreferrer"&gt;Ansible Tutorial&lt;/a&gt; or &lt;a href="https://spacelift.io/blog/ansible-playbooks" rel="noopener noreferrer"&gt;Working with Ansible Playbooks&lt;/a&gt; blog posts. You can find this article’s code on this &lt;a href="https://github.com/spacelift-io-blog-posts/Blog-Technical-Content/tree/master/ansible-variables" rel="noopener noreferrer"&gt;repository&lt;/a&gt; if you wish to follow along.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Variables Are Useful in Ansible
&lt;/h2&gt;

&lt;p&gt;The use of variables simplifies the management of dynamic values throughout an Ansible project and can potentially reduce the number of human errors. We have a convenient way to handle variations and differences between different environments and systems with variables. &lt;/p&gt;

&lt;p&gt;Another advantage of variables in Ansible is that we have the flexibility to define them in multiple places with different precedence according to our use case. We can also register new variables in our playbooks by using the returned value of a task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_vars_facts.html#vars-and-facts" rel="noopener noreferrer"&gt;Ansible facts&lt;/a&gt; are a special type of variables that Ansible retrieves from any remote host for us to leverage them in Ansible projects. For example, we can get information regarding the operating system distribution with &lt;em&gt;ansible_distribution&lt;/em&gt;, information about devices on the host, the python version that Ansible is using with &lt;em&gt;ansible_python_version&lt;/em&gt;, and the system architecture, among others. To access this data, we have to reference the &lt;em&gt;ansible_facts&lt;/em&gt; variable. &lt;/p&gt;

&lt;h2&gt;
  
  
  Variable Name Rules
&lt;/h2&gt;

&lt;p&gt;Ansible has a strict set of rules to create valid variable names. Variable names can contain only letters, numbers, and underscores and must start with a letter or underscore. Some strings are reserved for other purposes and aren’t valid variable names, such as &lt;a href="https://docs.python.org/3/reference/lexical_analysis.html#keywords" rel="noopener noreferrer"&gt;Python Keywords&lt;/a&gt; or &lt;a href="https://docs.ansible.com/ansible/latest/reference_appendices/playbooks_keywords.html#playbook-keywords" rel="noopener noreferrer"&gt;Playbook Keywords&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Defining and Referencing Simple Variables
&lt;/h2&gt;

&lt;p&gt;The simplest use case of variables is to define a variable name with a single value using standard YAML syntax. Although this pattern can be used in many places, we will show an example in a playbook for simplicity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Example Simple Variable
  hosts: all
  become: yes
  vars:
    username: bob

  tasks:
  - name: Add the user {{ username }}
    ansible.builtin.user:
      name: "{{ username }}"
      state: present
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, after the &lt;strong&gt;vars&lt;/strong&gt; block, we define the variable &lt;strong&gt;username&lt;/strong&gt; and assign the value &lt;em&gt;bob&lt;/em&gt;. Later, to reference the value in the task, we use &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_templating.html" rel="noopener noreferrer"&gt;Jinja2 syntax&lt;/a&gt; like this: &lt;code&gt;"{{ username }}"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If a variable’s value starts with curly braces, &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#when-to-quote-variables-a-yaml-gotcha" rel="noopener noreferrer"&gt;we must quote the whole expression&lt;/a&gt; to allow YAML to interpret the syntax correctly.&lt;/p&gt;
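&lt;p&gt;For example, assuming a previously defined variable &lt;em&gt;base_path&lt;/em&gt;, the unquoted form would fail to parse:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vars:
  # Invalid: YAML reads the opening braces as the start of a dictionary
  # app_path: {{ base_path }}/app

  # Valid: the whole expression is quoted
  app_path: "{{ base_path }}/app"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;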

&lt;h2&gt;
  
  
  List, Dictionary &amp;amp; Nested Variables
&lt;/h2&gt;

&lt;p&gt;There are many other options to define more complex variables like lists, dictionaries, and nested structures. To create a variable with multiple values, we can use YAML list syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vars:
  version:
    - v1
    - v2
    - v3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To reference a specific value from the list, we must select the correct index. For example, to access the third value, &lt;em&gt;v3&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "{{ version[2] }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another useful option is to store key-value pairs in variables as dictionaries. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vars:
  users:
    user_1: maria
    user_2: peter
    user_3: sophie
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, to reference the value of &lt;em&gt;user_3&lt;/em&gt; from the dictionary, use bracket or dot notation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;users['user_3']
users.user_3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the bracket notation is preferred as you might encounter problems using the dot notation in special cases.&lt;/p&gt;
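&lt;p&gt;For example, a key whose name collides with a Python dictionary method, such as &lt;em&gt;items&lt;/em&gt;, can only be accessed reliably with brackets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vars:
  config:
    items: three

tasks:
- name: Bracket notation returns the value; dot notation may resolve the method
  ansible.builtin.debug:
    var: config['items']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;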

&lt;p&gt;Sometimes, we have to create or use nested variable structures. For example, facts are nested data structures. We have to use a bracket or dot notation to reference nested variables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vars:
  cidr_blocks:
      production:
        vpc_cidr: "172.31.0.0/16"
      staging:
        vpc_cidr: "10.0.0.0/24"

tasks:
- name: Print production vpc_cidr
  ansible.builtin.debug:
    var: cidr_blocks['production']['vpc_cidr']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Special Variables
&lt;/h2&gt;

&lt;p&gt;There are certain types of variables that we consider special in the context of Ansible. These include magic variables, connection variables, and facts. The names of these variables are reserved. &lt;/p&gt;

&lt;p&gt;Ansible allows us to access information about itself, hosts, groups, inventory, roles, and other Ansible manifests with the so-called &lt;strong&gt;magic variables&lt;/strong&gt;. For a complete list of different options, have a look &lt;a href="https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html#magic-variables" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
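&lt;p&gt;For instance, the magic variables &lt;em&gt;inventory_hostname&lt;/em&gt; and &lt;em&gt;group_names&lt;/em&gt; let a task report which host it runs on and which groups that host belongs to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Print magic variables
  hosts: all

  tasks:
  - name: Show the current host and its groups
    ansible.builtin.debug:
      msg: "{{ inventory_hostname }} is in groups {{ group_names }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;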

&lt;p&gt;We already talked about facts. These variables contain all the information that Ansible can get from the current host. To use them, Ansible has to gather them first. To see all the facts that you can gather on a host, execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible &amp;lt;hostname&amp;gt; -m ansible.builtin.setup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, we have &lt;strong&gt;connection variables&lt;/strong&gt;. They are used to configure how Ansible executes actions on hosts. The most common ones configure the user Ansible logs in as, privilege escalation, the IP of the target host, etc.&lt;/p&gt;
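&lt;p&gt;As an illustration with hypothetical values, a &lt;em&gt;host_vars&lt;/em&gt; file could set several connection variables at once:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible_host: 192.168.56.10
ansible_user: deploy
ansible_port: 22
ansible_become: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;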

&lt;h2&gt;
  
  
  Registering Variables
&lt;/h2&gt;

&lt;p&gt;During our plays, we might find it handy to utilize the output of a task as a variable that we can use in the following tasks. We can use the keyword &lt;strong&gt;register&lt;/strong&gt; to create our own custom variables from task output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Example Register Variable Playbook
  hosts: all

  tasks:
  - name: Run a script and register the output as a variable
    shell: "find hosts"
    args:
      chdir: "/etc"
    register: find_hosts_output
  - name: Use the output variable of the previous task
    debug:
      var: find_hosts_output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, we register the output of the command &lt;em&gt;find hosts&lt;/em&gt; (executed in the &lt;em&gt;/etc&lt;/em&gt; directory), and we showcase how we can use the variable in the next task by printing its value.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppt1v9uyeaobymeeuvhd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppt1v9uyeaobymeeuvhd.png" alt=" " width="800" height="833"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A powerful pattern is to combine registered variables with conditionals to create tasks that will only be executed when certain custom conditions are true.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Example Registered Variables Conditionals
  hosts: all

  tasks:
  - name: Register an example variable
    shell: cat /etc/hosts
    register: hosts_contents

  - name: Check if hosts file contains the word "localhost"
    debug:
      msg: "/etc/hosts file contains the word localhost"
    when: hosts_contents.stdout.find("localhost") != -1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we register the contents of the /etc/hosts file in the variable &lt;em&gt;hosts_contents&lt;/em&gt;, and we execute the second task only if the file contains the word &lt;em&gt;localhost&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7ifmanti0vlvqfd1uhq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7ifmanti0vlvqfd1uhq.png" alt=" " width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Registered variables are stored in memory, so they are only available during the current playbook run and cannot be reused in future runs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Share Variables with YAML Anchors and Aliases
&lt;/h2&gt;

&lt;p&gt;When we want to reuse and share variables, we can leverage &lt;em&gt;YAML anchors and aliases&lt;/em&gt;. They provide us with great flexibility in handling shared variables and help us reduce the repetition of data.&lt;/p&gt;

&lt;p&gt;Anchors are defined with &lt;strong&gt;&amp;amp;&lt;/strong&gt;, and then referenced with an alias denoted with &lt;strong&gt;*&lt;/strong&gt;. Let’s go and check a hands-on example in a playbook.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Example Anchors and Aliases
  hosts: all
  become: yes
  vars:
    user_groups: &amp;amp;user_groups
     - devs
     - support
    user_1:
        user_info: &amp;amp;user_info
            name: bob
            groups: *user_groups
            state: present
            create_home: yes
    user_2:
        user_info:
            &amp;lt;&amp;lt;: *user_info
            name: christina
    user_3:
        user_info:
            &amp;lt;&amp;lt;: *user_info
            name: jessica
            groups: support

  tasks:
  - name: Add several groups
    ansible.builtin.group:
      name: "{{ item }}"
      state: present
    loop: "{{ user_groups }}"

  - name: Add several users
    ansible.builtin.user:
      &amp;lt;&amp;lt;: *user_info
      name: "{{ item.user_info.name }}"
      groups: "{{ item.user_info.groups }}"
    loop:
      - "{{ user_1 }}"
      - "{{ user_2 }}"
      - "{{ user_3 }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, since some options are shared between users, instead of rewriting the same values, we share the common ones with the anchor &lt;em&gt;&amp;amp;user_info&lt;/em&gt;. For every subsequent user declaration, we use the alias &lt;em&gt;*user_info&lt;/em&gt; to avoid repeating ourselves as much as possible.&lt;/p&gt;

&lt;p&gt;The values for state and create_home are the same for all the users, while &lt;em&gt;name&lt;/em&gt; and &lt;em&gt;groups&lt;/em&gt; are replaced using the merge operator &amp;lt;&amp;lt;. &lt;/p&gt;

&lt;p&gt;Similarly, we reuse the &lt;em&gt;user_groups&lt;/em&gt; declaration in the definition of the &lt;em&gt;user_info&lt;/em&gt; anchor. This way, we don’t have to type the same groups again for &lt;em&gt;user_2&lt;/em&gt; while we still have the flexibility to override the groups, as we do for &lt;em&gt;user_3&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;The result is that &lt;em&gt;user_1&lt;/em&gt; and &lt;em&gt;user_2&lt;/em&gt; are added to groups &lt;em&gt;devs&lt;/em&gt; and &lt;em&gt;support&lt;/em&gt;, while &lt;em&gt;user_3&lt;/em&gt; is added only to the &lt;em&gt;support&lt;/em&gt; group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4o3dc7swbdemj82l5eks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4o3dc7swbdemj82l5eks.png" alt=" " width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Variable Scope
&lt;/h2&gt;

&lt;p&gt;Ansible provides many options for setting variables, and where to set them ultimately depends on the scope we would like them to have. Conceptually, there are three main options available for scoping variables. &lt;/p&gt;

&lt;p&gt;First, we have the &lt;strong&gt;global&lt;/strong&gt; scope where the values are set for all hosts. This can be defined by the Ansible configuration, environment variables, and command line. &lt;/p&gt;

&lt;p&gt;We set values for a particular host or group of hosts using the &lt;strong&gt;host&lt;/strong&gt; scope. For example, there is an option to define some variables per host in the &lt;em&gt;inventory&lt;/em&gt; file.&lt;/p&gt;

&lt;p&gt;Lastly, we have the &lt;strong&gt;play&lt;/strong&gt; scope, where values are set for all hosts in the context of a play. An example would be the vars section we have seen in previous examples in each playbook.&lt;/p&gt;

&lt;h2&gt;
  
  
  Variable Setting Options &amp;amp; Precedence
&lt;/h2&gt;

&lt;p&gt;Variables can be defined with Ansible in many different places. There are options to set variables in playbooks, roles, inventory, var files, and command line. Let’s go and explore some of these options. &lt;/p&gt;

&lt;p&gt;As we have previously seen, the most straightforward way is to define variables in a play with the &lt;strong&gt;vars&lt;/strong&gt; section.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Set variables in a play
  hosts: all
  vars:
    version: 12.7.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another option is to define variables in the &lt;strong&gt;inventory&lt;/strong&gt; file. We can set variables per host or set shared variables for groups. This example defines a different &lt;em&gt;ansible_user&lt;/em&gt; to connect with for each host as a &lt;strong&gt;host variable&lt;/strong&gt; and the same &lt;em&gt;http_port&lt;/em&gt; for all web servers as a &lt;strong&gt;group variable&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[webservers]
webserver1 ansible_host=10.0.0.1 ansible_user=user1
webserver2 ansible_host=10.0.0.2 ansible_user=user2

[webservers:vars]
http_port=80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To better organize our variables, we could gather them in separate host and group variables files. In the same directory where we keep our inventory or playbook files, we can create two folders named &lt;strong&gt;group_vars&lt;/strong&gt; and &lt;strong&gt;host_vars&lt;/strong&gt; that would contain our variable files. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;group_vars/databases 
group_vars/webservers
host_vars/host1
host_vars/host2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Variables can also be set in custom var files. Let’s check an example that uses variables from an external file and the group_vars and host_vars directories.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Example External Variables file
  hosts: all
  vars_files:
    - ./vars/variables.yml

  tasks:
  - name: Print the value of variable docker_version
    debug:
      msg: "{{ docker_version }}"

  - name: Print the value of group variable http_port
    debug:
      msg: "{{ http_port }}"

  - name: Print the value of host variable app_version
    debug:
      msg: "{{ app_version }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;vars/variables.yml&lt;/em&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker_version: 20.10.12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;group_vars/webservers&lt;/em&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http_port: 80
ansible_host: 127.0.0.1
ansible_user: vagrant
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;host_vars/host1&lt;/em&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app_version: 1.0.1
ansible_port: 2222
ansible_ssh_private_key_file: ./.vagrant/machines/host1/virtualbox/private_key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;host_vars/host2&lt;/em&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app_version: 1.0.2
ansible_port: 2200
ansible_ssh_private_key_file: ./.vagrant/machines/host2/virtualbox/private_key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The inventory file contains a group named webservers that includes our two hosts, &lt;em&gt;host1&lt;/em&gt; and &lt;em&gt;host2&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[webservers]
host1 
host2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we run this playbook, we notice the same value is used in both hosts for the group variable &lt;em&gt;http_port&lt;/em&gt; but a different one for the host variable &lt;em&gt;app_version&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9miqoef7uj5zegwi5v8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9miqoef7uj5zegwi5v8k.png" alt=" " width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A good use case for separate variable files is that you can keep sensitive values in them without storing those values in your playbooks or source control systems.&lt;/p&gt;

&lt;p&gt;Occasionally, we might find it helpful to define or override variables at runtime by passing them at the command line with the &lt;strong&gt;--extra-vars&lt;/strong&gt; or &lt;strong&gt;-e&lt;/strong&gt; argument. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook example-external-vars.yml --extra-vars "app_version=1.0.3"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since variables can be set in multiple places, Ansible applies variable precedence to select the variable value according to some hierarchy. The general rule is that variables defined with a more explicit scope have higher priority.&lt;/p&gt;

&lt;p&gt;For example, role defaults are overridden by almost every other option. Variables are also flattened to each host before each play, so all group and host variables are merged. Host variables have higher priority than group variables. &lt;/p&gt;

&lt;p&gt;Explicit variable definitions, like the vars directory or an include_vars task, override variables from the inventory. Finally, extra vars defined at runtime always take precedence. For a complete list of options and their hierarchy, look at the official documentation &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#understanding-variable-precedence" rel="noopener noreferrer"&gt;Understanding variable precedence&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where to Set Variables &amp;amp; Best Practices
&lt;/h2&gt;

&lt;p&gt;Since Ansible provides a plethora of options to define variables, it might be a bit confusing to figure out the best way and place to set them. Let’s go and check some common &amp;amp; best practices around setting variables that might help us better organize our Ansible projects.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always give descriptive and clear names to your variables. Taking a moment to properly think about how to name variables always pays off long-term.&lt;/li&gt;
&lt;li&gt;If there are default values for common variables, set them in &lt;strong&gt;group_vars/all&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Prefer setting group and host vars in group_vars and host_vars directories instead of in the inventory file.&lt;/li&gt;
&lt;li&gt;If variables related to geography or behavior are tied to a specific group, prefer to set them as group variables.&lt;/li&gt;
&lt;li&gt;If you are using roles, always set default role variables in &lt;strong&gt;roles/your_role/defaults/main.yml&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;When you call roles, pass variables that you wish to override as parameters to make your plays easier to read.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;roles:
       - role: example_role
         vars:
            example_var: 'example_string'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;You can always use &lt;strong&gt;--extra-vars&lt;/strong&gt; or &lt;strong&gt;-e&lt;/strong&gt; to override every other option.&lt;/li&gt;
&lt;li&gt;Don’t store sensitive variables in your source code repository in plain text. You can leverage &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/vault.html#creating-encrypted-variables" rel="noopener noreferrer"&gt;Ansible Vault&lt;/a&gt; in these cases.&lt;/li&gt;
&lt;/ul&gt;
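
&lt;p&gt;As a sketch of the layout suggested above (the file path follows the group_vars convention, while the variable names and values are purely illustrative), shared defaults could live in group_vars/all:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# group_vars/all -- default values shared by every host
ntp_server: ntp.example.com
app_port: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;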

&lt;p&gt;In general, try to keep variable usage as simple as possible. You don’t have to use every available option or spread variable definitions all over the place, as that makes debugging your Ansible projects difficult. Try to find a structure that suits your needs best and stick to it! &lt;/p&gt;

&lt;h2&gt;
  
  
  Key Points
&lt;/h2&gt;

&lt;p&gt;In this article, we deep-dived into Ansible Variables and saw how we can define and use them in playbooks. Moreover, we explored different options for sharing, setting, and referencing them, along with some guidelines and best practices to make our Ansible journey easier.&lt;/p&gt;

&lt;p&gt;Thank you for reading, and I hope you enjoyed this “Ansible Variables” article as much as I did.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Working with Ansible Playbooks – Tips &amp; Tricks with Examples</title>
      <dc:creator>Ioannis Moustakis</dc:creator>
      <pubDate>Sun, 15 May 2022 17:14:57 +0000</pubDate>
      <link>https://forem.com/spacelift/working-with-ansible-playbooks-tips-tricks-with-examples-b0i</link>
      <guid>https://forem.com/spacelift/working-with-ansible-playbooks-tips-tricks-with-examples-b0i</guid>
      <description>&lt;p&gt;In this article, we are exploring Ansible Playbooks, which are basically blueprints for automation actions. Playbooks allow us to define a recipe with all the steps we would like to automate in a repeatable, simple, and consistent manner. &lt;/p&gt;

&lt;p&gt;If you are entirely new to Ansible, check out this introductory &lt;a href="https://spacelift.io/blog/ansible-tutorial" rel="noopener noreferrer"&gt;Ansible Tutorial&lt;/a&gt; first.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an Ansible Playbook?
&lt;/h2&gt;

&lt;p&gt;Playbooks are one of the basic components of Ansible as they record and execute Ansible’s configuration. Generally, a playbook is the primary way to automate a set of tasks that we would like to perform on a remote machine. &lt;/p&gt;

&lt;p&gt;They help our automation efforts by gathering all the resources necessary to orchestrate ordered processes and avoid repeating manual actions. Playbooks can be reused and shared between people, and they are designed to be human-friendly and easy to write in YAML.&lt;/p&gt;

&lt;h2&gt;
  
  
  Playbook Structure
&lt;/h2&gt;

&lt;p&gt;A playbook is composed of one or more plays to run in a specific order. A play is an ordered list of tasks to run against the desired group of hosts. &lt;/p&gt;

&lt;p&gt;Every task is associated with a module responsible for an action and its configuration parameters. Since most modules are idempotent, we can safely rerun a playbook without any issues.&lt;/p&gt;

&lt;p&gt;As discussed, playbooks are written in &lt;a href="https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html#yaml-syntax" rel="noopener noreferrer"&gt;YAML&lt;/a&gt; using the standard extension .yml with minimal syntax. &lt;/p&gt;

&lt;p&gt;For indentation, we must use spaces to align data elements that share the same hierarchy; items that are children of other items must be indented more than their parents. There is no strict rule for the number of spaces used for indentation, but two spaces is common, while tab characters are not allowed. &lt;/p&gt;

&lt;p&gt;Below is a simple example playbook with two plays, each containing two tasks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Example Simple Playbook
  hosts: all
  become: yes

  tasks:
  - name: Copy file example_file to /tmp with permissions
    ansible.builtin.copy:
      src: ./example_file
      dest: /tmp/example_file
      mode: '0644'

  - name: Add the user 'bob' with a specific uid 
    ansible.builtin.user:
      name: bob
      state: present
      uid: 1040

- name: Update postgres servers
  hosts: databases
  become: yes

  tasks:
  - name: Ensure postgres DB is at the latest version
    ansible.builtin.yum:
      name: postgresql
      state: latest

  - name: Ensure that postgresql is started
    ansible.builtin.service:
      name: postgresql
      state: started
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We define a &lt;strong&gt;descriptive name&lt;/strong&gt; for each play according to its purpose on the top level. Then we represent the group of &lt;strong&gt;hosts&lt;/strong&gt; on which the play will be executed, taken from the inventory. Finally, we define that these plays should be executed as the root user with the &lt;strong&gt;become&lt;/strong&gt; option set to yes.&lt;/p&gt;

&lt;p&gt;You can also define many other &lt;a href="https://docs.ansible.com/ansible/latest/reference_appendices/playbooks_keywords.html" rel="noopener noreferrer"&gt;Playbook Keywords&lt;/a&gt; at different levels, such as playbook, play, or task, to configure Ansible’s behavior. Moreover, most of these can also be set at runtime as command-line flags, in the Ansible configuration file (ansible.cfg), or in the inventory. Check out the &lt;a href="https://docs.ansible.com/ansible/latest/reference_appendices/general_precedence.html#general-precedence-rules" rel="noopener noreferrer"&gt;precedence rules&lt;/a&gt; to understand how Ansible behaves in these cases.&lt;/p&gt;

&lt;p&gt;Next, we use the &lt;strong&gt;tasks&lt;/strong&gt; parameter to define the list of tasks for each play. For each task, we define a clear and descriptive name. Every task leverages a module to perform a specific operation. &lt;/p&gt;

&lt;p&gt;For example, the first task of the first play uses the &lt;strong&gt;ansible.builtin.copy&lt;/strong&gt; module. Along with the module, we usually have to define some &lt;strong&gt;module arguments&lt;/strong&gt;. For the second task of the first play, we use the module ansible.builtin.user that helps us manage user accounts. In this specific case, we configure the name of the user, the state of the user account, and its uid accordingly. &lt;/p&gt;

&lt;h2&gt;
  
  
  Running a Playbook
&lt;/h2&gt;

&lt;p&gt;When we run a playbook, Ansible executes each task in order, one at a time, for all the hosts we selected. This default behavior can be adjusted for different use cases using strategies. &lt;/p&gt;
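
&lt;p&gt;For instance, the &lt;strong&gt;strategy&lt;/strong&gt; play keyword can switch from the default linear strategy to free, which lets each host run through its tasks as fast as it can without waiting for the other hosts. A minimal sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Example play using the free strategy
  hosts: all
  strategy: free

  tasks:
  - name: Ping each host as soon as it is ready
    ansible.builtin.ping:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;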

&lt;p&gt;If a task fails, Ansible stops the execution of the playbook for that specific host but continues on the hosts that succeeded. During execution, Ansible displays information about the connection status, task names, execution status, and whether any changes were made. &lt;/p&gt;

&lt;p&gt;At the end, Ansible provides a summary of the playbook’s execution along with failures and successes. Let’s see these in action by running the example playbook we saw earlier with the ansible-playbook command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6stg0o5qw7lu942o1711.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6stg0o5qw7lu942o1711.png" alt=" " width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the output, we notice the Play names, the Gathering Facts task, the Play tasks, and the Play Recap in the end. Since we didn’t define a databases hosts group, the second play of the playbook was skipped. &lt;/p&gt;

&lt;p&gt;We can use the &lt;strong&gt;--limit&lt;/strong&gt; flag to limit the Playbook’s execution to specific hosts. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook example-simple-playbook.yml --limit host1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Using Variables in Playbooks
&lt;/h2&gt;

&lt;p&gt;Variables are placeholders for values that you can reuse throughout a playbook or other Ansible objects. They can only contain letters, numbers, and underscores, and must start with a letter. &lt;/p&gt;

&lt;p&gt;Variables can be defined in Ansible at multiple levels, so look at &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable" rel="noopener noreferrer"&gt;variable precedence&lt;/a&gt; to understand how they are applied. For example, we can set variables at the global scope for all hosts, at the host scope for a particular host, or at the play scope for a specific play. &lt;/p&gt;

&lt;p&gt;To set host and group variables, create the directories &lt;strong&gt;group_vars&lt;/strong&gt; and &lt;strong&gt;host_vars&lt;/strong&gt;. For example, to define group variables for the &lt;strong&gt;databases&lt;/strong&gt; group, create the file group_vars/databases. Set common default variables in a &lt;strong&gt;group_vars/all&lt;/strong&gt; file.&lt;/p&gt;

&lt;p&gt;Even more, to define host variables for a specific host, create a file with the same name as the host under the host_vars directory.&lt;/p&gt;
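
&lt;p&gt;For instance, for an inventory host named host1 (hypothetical, as are the variable values), we could create host_vars/host1.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# host_vars/host1.yml -- variables applied only to host1
ansible_port: 2222
app_env: staging
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;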

&lt;p&gt;To substitute any variables during runtime, use the &lt;strong&gt;-e&lt;/strong&gt; flag. &lt;/p&gt;
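
&lt;p&gt;For example, assuming a playbook named example-playbook.yml (a hypothetical file name), we could override the username and version variables at runtime:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook example-playbook.yml -e "username=alice version=2.0.0"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;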

&lt;p&gt;The most straightforward method to define variables is to use a &lt;strong&gt;vars&lt;/strong&gt; block at the beginning of a play. They are defined using standard YAML syntax.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Example Variables Playbook
  hosts: all
  vars:
    username: bob
    version: 1.2.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another way is to define variables in external YAML files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Example Variables Playbook
  hosts: all
  vars_files:
    - vars/example_variables.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To use variables in tasks, we reference them by placing their name inside double curly braces, following the &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_templating.html" rel="noopener noreferrer"&gt;Jinja2 syntax&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Example Variables Playbook
  hosts: all
  vars:
    username: bob

  tasks:
  - name: Add the user {{ username }}
    ansible.builtin.user:
      name: "{{ username }}"
      state: present
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a variable’s value starts with curly braces, &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#when-to-quote-variables-a-yaml-gotcha" rel="noopener noreferrer"&gt;we must quote the whole expression&lt;/a&gt; to allow YAML to interpret the syntax correctly. &lt;/p&gt;
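
&lt;p&gt;For example (base_dir and username here are hypothetical variables), quoting is needed only when the value begins with the braces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app_path: "{{ base_dir }}/app"    # quoted: the value starts with {{
greeting: Hello {{ username }}    # no quotes needed: the value starts with plain text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;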

&lt;p&gt;We can also define variables with multiple values as lists.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package:
  - foo1
  - foo2
  - foo3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s also possible to reference individual values from a list. For example, to select the first value foo1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package: "{{ package[0] }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another possible option is to define variables using YAML dictionaries. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dictionary_example: 
  foo1: one
  foo2: two
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, to get the foo1 value from the dictionary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dictionary_example['foo1']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To reference nested variables, we use bracket or dot notation. For example, to get the example_name_2 value from this structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vars:
  var1:
    foo1:
      field1: example_name_1
      field2: example_name_2

tasks:
- name: Create user for field2 value
  user: 
    name: "{{ var1['foo1']['field2'] }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can also create variables with the &lt;strong&gt;register&lt;/strong&gt; statement, which captures the output of a command or task so we can use it in other tasks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Example-2 Variables Playbook
  hosts: all

  tasks:
  - name: Run a script and register the output as a variable
    shell: "find example_file"
    args:
      chdir: "/tmp"
    register: example_script_output

  - name: Use the output variable of the previous task
    debug:
      var: example_script_output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Handling Sensitive Data
&lt;/h2&gt;

&lt;p&gt;At times, we need to access sensitive data (API keys, passwords, etc.) in our playbooks. Storing these values as plaintext variables is considered a security risk, so Ansible provides Ansible Vault: with the &lt;strong&gt;ansible-vault&lt;/strong&gt; command, we can encrypt and decrypt these secrets.&lt;/p&gt;

&lt;p&gt;After the secrets have been encrypted with a password of your choice, you can safely put them under source control in your code repositories. Ansible Vault protects only data at rest. After the secrets are decrypted, it’s our responsibility to handle them with care and not accidentally leak them. &lt;/p&gt;

&lt;p&gt;We have the option to encrypt variables or files. Encrypted variables are decrypted on-demand only when needed, while encrypted files are always decrypted as Ansible doesn’t know in advance if it needs content from them. &lt;/p&gt;

&lt;p&gt;In any case, we need to think about how we are going to &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/vault.html#managing-vault-passwords" rel="noopener noreferrer"&gt;manage our vault passwords&lt;/a&gt;. To define encrypted content, we add the &lt;strong&gt;!vault&lt;/strong&gt; tag, which tells Ansible that the content needs to be decrypted, and the | character before our multi-line encrypted string.&lt;/p&gt;
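
&lt;p&gt;An encrypted variable then looks roughly like this (the ciphertext below is truncated and purely illustrative, not a real vault payload):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          62313365396662343061393464336163383764373764613633653634306231386433626436623361
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;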

&lt;p&gt;To create a new encrypted file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-vault create new_file.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, an editor is launched to add our content to be encrypted. It’s also possible to encrypt existing files with the &lt;strong&gt;encrypt&lt;/strong&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-vault encrypt existing_file.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To view an encrypted file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-vault view existing_file.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To edit an encrypted file in place, use the &lt;strong&gt;edit&lt;/strong&gt; command to decrypt the file temporarily:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-vault edit existing_file.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To change the password on an encrypted file, use the &lt;strong&gt;rekey&lt;/strong&gt; command, providing the original password:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-vault rekey existing_file.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In case you need to decrypt a file, you can do so with the &lt;strong&gt;decrypt&lt;/strong&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-vault decrypt existing_file.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, we use the &lt;strong&gt;encrypt_string&lt;/strong&gt; command to encrypt individual strings that we can later use as variables in playbooks or variables files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-vault encrypt_string &amp;lt;password_source&amp;gt; '&amp;lt;string_to_encrypt&amp;gt;' –'&amp;lt;variable_name&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, to encrypt the db_password string ‘12345678’ using Ansible Vault:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8ifeerby4w72wp1cljf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8ifeerby4w72wp1cljf.png" alt=" " width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since we omitted the password source, we were prompted to enter the Vault password manually. This could also be achieved by passing a password file with &lt;strong&gt;--vault-password-file&lt;/strong&gt;.&lt;/p&gt;
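
&lt;p&gt;For example, assuming a password file at ~/.vault_pass.txt (a hypothetical path), the same string could be encrypted non-interactively:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-vault encrypt_string --vault-password-file ~/.vault_pass.txt '12345678' --name 'db_password'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;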

&lt;p&gt;To view the contents of the above example encrypted variable that we saved in the vars.yml file, use the same password as before with the &lt;strong&gt;--ask-vault-pass&lt;/strong&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible localhost -m ansible.builtin.debug -a var="db_password" -e "@vars.yml" --ask-vault-pass

Vault password:

localhost | SUCCESS =&amp;gt; {
    "changed": false,
    "db_password": "12345678"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For managing multiple passwords, use the option &lt;strong&gt;--vault-id&lt;/strong&gt; to set a label. For example, to set the label dev on a file and prompt for a password to use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-vault encrypt existing_file.yml --vault-id dev@prompt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To suppress output from a task that might log a sensitive value to the console, we use the &lt;strong&gt;no_log: true&lt;/strong&gt; attribute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tasks:
- name: Hide sensitive value example
  debug:
    msg: "This is sensitive information"
  no_log: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we run this task, we will notice that the message isn’t printed on the console:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TASK [Hide sensitive value example] ***********************************
ok: [host1]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, let’s use the example encrypted variable we created above in a playbook and execute it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3dbgz792118jpba1h4mv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3dbgz792118jpba1h4mv.png" alt=" " width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fub5wve9kstkcu2zd7eeq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fub5wve9kstkcu2zd7eeq.png" alt=" " width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nice, we verified that we could decrypt the value successfully and use it in tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Triggering tasks on change with Handlers
&lt;/h2&gt;

&lt;p&gt;In general, Ansible modules are idempotent and can be executed safely multiple times, but there are cases where we would like to run a task only when a change is made on the host. For example, we would like to restart a service only when updating its configuration files. &lt;/p&gt;

&lt;p&gt;To solve this use case, Ansible uses handlers: tasks that are triggered only when notified by other tasks. Tasks notify their handlers with the &lt;strong&gt;notify&lt;/strong&gt; parameter only when they actually change something. &lt;/p&gt;

&lt;p&gt;Handlers should have globally unique names, and it’s common to author them at the bottom of the playbooks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Example with handler - Update apache config
  hosts: webservers

  tasks:
  - name: Update the apache config file
    ansible.builtin.template:
      src: ./httpd.conf
      dest: /etc/httpd.conf
    notify:
    - Restart apache

  handlers:
    - name: Restart apache
      ansible.builtin.service:
        name: httpd
        state: restarted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, the Restart apache task will only be triggered when we change something in the configuration. In reality, handlers can be considered inactive tasks waiting to be triggered with a notify statement.&lt;/p&gt;

&lt;p&gt;An important thing to note about handlers is that they run by default after all the other tasks have been completed. This way, the handlers only run once, even if triggered many times.&lt;/p&gt;

&lt;p&gt;To control this behavior, we can leverage the &lt;strong&gt;meta: flush_handlers&lt;/strong&gt; task, which triggers any handlers that have already been notified at that point in the play.&lt;/p&gt;
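
&lt;p&gt;A minimal sketch of forcing notified handlers to run mid-play, reusing the apache example from above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  tasks:
  - name: Update the apache config file
    ansible.builtin.template:
      src: ./httpd.conf
      dest: /etc/httpd.conf
    notify:
    - Restart apache

  - name: Run any handlers notified so far, before continuing
    meta: flush_handlers

  - name: This task runs after the notified handlers have executed
    ansible.builtin.ping:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;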

&lt;p&gt;It’s also possible for a task to notify more than one handler in its notify statement.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conditional Tasks
&lt;/h2&gt;

&lt;p&gt;To further control execution flow in Ansible, we can leverage conditionals. Conditionals allow us to run or skip tasks based on whether certain conditions are met. Variables, facts, or the results of previous tasks, along with operators, can be used to build such conditions. &lt;/p&gt;

&lt;p&gt;Some example use cases: updating a variable based on the value of another variable, skipping a task if a variable has a specific value, or executing a task only if a fact from the host returns a value higher than a threshold.&lt;/p&gt;

&lt;p&gt;To apply a simple conditional statement, we use the &lt;strong&gt;when&lt;/strong&gt; parameter on a task. If the condition is met, the task is executed. Otherwise, it is skipped.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Example Simple Conditional
  hosts: all
  vars:
    trigger_task: true

  tasks:
  - name: Install nginx
    apt:
      name: "nginx"
      state: present
    when: trigger_task
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, the task is executed since the condition is met. &lt;/p&gt;

&lt;p&gt;Another common pattern is to control task execution based on attributes of the remote host that we can obtain from &lt;strong&gt;facts&lt;/strong&gt;. Check out this list with &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html#commonly-used-facts" rel="noopener noreferrer"&gt;commonly-used facts&lt;/a&gt; to get an idea of all the facts we can utilize in conditions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Example Facts Conditionals 
  hosts: all
  vars:
    supported_os:
      - RedHat
      - Fedora

  tasks:
  - name: Install nginx
    yum:
      name: "nginx"
      state: present
    when: ansible_facts['distribution'] in supported_os
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s possible to combine multiple conditions with logical operators and group them with parentheses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;when: (colour=="green" or colour=="red") and (size="small" or size="medium")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The when statement supports using a list in cases where we have multiple conditions that all need to be true:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;when:
  - ansible_facts['distribution'] == "Ubuntu"
  - ansible_facts['distribution_version'] == "20.04"
  - ansible_facts['distribution_release'] == "focal"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another option is to use conditions based on registered variables that we have defined in previous tasks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Example Registered Variables Conditionals
  hosts: all

  tasks:
  - name: Register an example variable
    ansible.builtin.shell: cat /etc/hosts
    register: hosts_contents

  - name: Check if hosts file contains "localhost"
    ansible.builtin.shell: echo "/etc/hosts contains localhost"
    when: hosts_contents.stdout.find("localhost") != -1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Loops
&lt;/h2&gt;

&lt;p&gt;Ansible allows us to iterate over a set of items in a task to execute it multiple times with different parameters without rewriting it. For example, to create several files, we could use a single task that iterates over a list of file names instead of writing a separate task for each file.&lt;/p&gt;

&lt;p&gt;To iterate over a simple list of items, use the &lt;strong&gt;loop&lt;/strong&gt; keyword. We can reference the current value with the loop variable &lt;strong&gt;item&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: "Create some files"
  ansible.builtin.file:
    state: touch
    path: /tmp/{{ item }}
  loop:
    - example_file1
    - example_file2
    - example_file3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output of the above task that uses &lt;em&gt;loop&lt;/em&gt; and &lt;em&gt;item&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TASK [Create some files] *********************************
changed: [host1] =&amp;gt; (item=example_file1)
changed: [host1] =&amp;gt; (item=example_file2)
changed: [host1] =&amp;gt; (item=example_file3)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s also possible to iterate over dictionaries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: "Create some files with dictionaries"
  ansible.builtin.file:
    state: touch
    path: "/tmp/{{ item.filename }}"
    mode: "{{ item.mode }}"
  loop:
    - { filename: 'example_file1', mode: '755'}
    - { filename: 'example_file2', mode: '775'}
    - { filename: 'example_file3', mode: '777'}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another useful pattern is to iterate over a group of hosts of the inventory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Show all the hosts in the inventory
  ansible.builtin.debug:
    msg: "{{ item }}"
  loop: "{{ groups['databases'] }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By combining conditionals and loops, we can select to execute the task only on some items in the list and skip it for others:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Execute when values in list are lower than 10
  ansible.builtin.command: echo {{ item }}
  loop: [ 100, 200, 3, 600, 7, 11 ]
  when: item &amp;lt; 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, another option is to use the &lt;strong&gt;until&lt;/strong&gt; keyword to retry a task until a condition is true.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Retry a task **until** we find the word "success" in the logs
  shell: cat /var/log/example_log
  register: logoutput
  until: logoutput.stdout.find("success") != -1
  retries: 10
  delay: 15
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, we check the file &lt;em&gt;example_log&lt;/em&gt; up to 10 times, with a delay of 15 seconds between each check, until we find the word success. If we let the task run and add the word success to the example_log file after a while, we notice that the task completes successfully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TASK [Retry a task until we find the word “success” in the logs] *********
FAILED - RETRYING: Retry a task until we find the word "success" in the logs (10 retries left).
FAILED - RETRYING: Retry a task until we find the word "success" in the logs (9 retries left).
changed: [host1]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check out the &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html" rel="noopener noreferrer"&gt;official Ansible guide&lt;/a&gt; on Loops for more advanced use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ansible Playbooks Tips and Tricks
&lt;/h2&gt;

&lt;p&gt;Keeping these tips and tricks in mind when building your playbooks will help you be more productive and improve your efficiency.&lt;/p&gt;

&lt;h3&gt;
  
  
  1) Keep it as simple as possible
&lt;/h3&gt;

&lt;p&gt;Try to keep your tasks simple. There are many options and nested structures in Ansible, and by combining lots of features, you can end up with fairly complex setups. Spending some time simplifying your Ansible artifacts pays off in the long term.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) Place your Ansible artifacts under version control
&lt;/h3&gt;

&lt;p&gt;It’s considered best practice to store playbooks in git or any other version control system and take advantage of its benefits. &lt;/p&gt;

&lt;h3&gt;
  
  
  3) Always give descriptive names to your tasks, plays, and playbooks
&lt;/h3&gt;

&lt;p&gt;Choose names that help you and others quickly understand the artifact’s functionality and purpose.&lt;/p&gt;

&lt;h3&gt;
  
  
  4) Strive for readability
&lt;/h3&gt;

&lt;p&gt;Use consistent indentation and add blank lines between tasks to increase readability.&lt;/p&gt;

&lt;h3&gt;
  
  
  5) Always mention the state of tasks explicitly
&lt;/h3&gt;

&lt;p&gt;Many modules have a default state that allows us to skip the state parameter. It’s always better to be explicit in these cases to avoid confusion.&lt;/p&gt;
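
&lt;p&gt;For example, ansible.builtin.yum treats state: present as the default, but declaring it explicitly leaves no room for confusion:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Ensure git is installed
  ansible.builtin.yum:
    name: git
    state: present   # explicit, even though present is the default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;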

&lt;h3&gt;
  
  
  6) Use comments when necessary
&lt;/h3&gt;

&lt;p&gt;There will be times when the task definition won’t be enough to explain the whole situation, so feel free to use comments for more complex parts of playbooks.&lt;/p&gt;
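&lt;p&gt;For instance, a comment can capture the “why” behind a non-obvious parameter (a sketch; the file paths are illustrative):&lt;/p&gt;

```yaml
tasks:
  # The config file contains credentials, so it must not be world-readable.
  - name: Copy the application configuration
    ansible.builtin.copy:
      src: ./myapp.conf
      dest: /etc/myapp/myapp.conf
      mode: '0600'
    become: yes
```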

&lt;h2&gt;
  
  
  Key Points
&lt;/h2&gt;

&lt;p&gt;In this article, we had a look into Ansible’s core automation component, playbooks. We saw how to create, structure, and trigger playbook runs.&lt;/p&gt;

&lt;p&gt;Moreover, we explored leveraging variables, handling sensitive data, controlling task execution with handlers and conditions, and iterating over tasks with loops.&lt;/p&gt;

&lt;p&gt;Thank you for reading, and I hope you enjoyed this “Ansible: Working with Playbooks” article as much as I did.&lt;/p&gt;

&lt;p&gt;By the way, if you are looking to manage infrastructure as code, &lt;a href="https://spacelift.io/product?utm_source=blog&amp;amp;utm_medium=text&amp;amp;utm_id=blogpost&amp;amp;utm_content=%7Bansible_vs_terraform%7D" rel="noopener noreferrer"&gt;Spacelift&lt;/a&gt; is the way to go. It supports Git workflows, policy as code, programmatic configuration, context sharing, and many more great features. It currently works with Terraform, Pulumi, and CloudFormation, with support for Ansible on the way! You can test drive it for free by going &lt;a href="https://spacelift.io/free-trial?utm_source=blog&amp;amp;utm_medium=text&amp;amp;utm_id=blogpost&amp;amp;utm_content=%7Bansible_vs_terraform%7D" rel="noopener noreferrer"&gt;here&lt;/a&gt; and creating a trial account.&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>devops</category>
      <category>cloud</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Ansible Tutorial for Beginners: Playbook &amp; Examples</title>
      <dc:creator>Ioannis Moustakis</dc:creator>
      <pubDate>Sun, 15 May 2022 16:47:38 +0000</pubDate>
      <link>https://forem.com/spacelift/ansible-tutorial-for-beginners-playbook-examples-188p</link>
      <guid>https://forem.com/spacelift/ansible-tutorial-for-beginners-playbook-examples-188p</guid>
<description>&lt;p&gt;Ansible is one of the most widely used tools for managing cloud and on-premises infrastructure. If you are looking for a flexible and powerful tool to automate your &lt;a href="https://spacelift.io/blog/infrastructure-as-code" rel="noopener noreferrer"&gt;infrastructure&lt;/a&gt; management and configuration tasks, Ansible is the way to go.&lt;/p&gt;

&lt;p&gt;In this introductory guide, you will learn everything you need to get started with Ansible and start building robust automation solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Ansible?
&lt;/h2&gt;

&lt;p&gt;Ansible is a software tool that enables cross-platform automation and orchestration at scale and has, over the years, become a standard choice among enterprise automation solutions. &lt;/p&gt;

&lt;p&gt;It is aimed mostly at IT operators, administrators, and decision-makers, helping them achieve operational excellence across their entire infrastructure ecosystem.&lt;/p&gt;

&lt;p&gt;Backed by RedHat and a loyal open source community, it is considered an excellent option for configuration management, infrastructure provisioning, and application deployment use cases. &lt;/p&gt;

&lt;p&gt;Its automation opportunities span hybrid clouds, on-prem infrastructure, and IoT, and it can greatly improve the efficiency and consistency of your IT environments.&lt;/p&gt;

&lt;p&gt;Ready to automate everything? Let’s go!&lt;/p&gt;

&lt;h3&gt;
  
  
  How does Ansible work?
&lt;/h3&gt;

&lt;p&gt;Ansible uses the concepts of control and managed nodes. It connects from the &lt;strong&gt;control node&lt;/strong&gt;, any machine with Ansible installed, to the &lt;strong&gt;managed nodes&lt;/strong&gt; sending commands and instructions to them.&lt;/p&gt;

&lt;p&gt;The units of code that Ansible executes on the managed nodes are called &lt;strong&gt;modules&lt;/strong&gt;. Each module is invoked by a &lt;strong&gt;task&lt;/strong&gt;, and an ordered list of tasks together forms a &lt;strong&gt;playbook&lt;/strong&gt;. Users write playbooks with tasks and modules to define the desired state of the system.&lt;/p&gt;
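&lt;p&gt;These concepts fit together like this (a minimal sketch; the &lt;code&gt;webservers&lt;/code&gt; group is hypothetical):&lt;/p&gt;

```yaml
---
- name: Example play            # a playbook holds one or more plays
  hosts: webservers             # the managed nodes to target
  tasks:                        # an ordered list of tasks
    - name: Ensure nginx is running
      ansible.builtin.service:  # the module invoked by this task
        name: nginx
        state: started
      become: yes
```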

&lt;p&gt;The managed machines are listed in a simple &lt;strong&gt;inventory&lt;/strong&gt; file that groups the nodes into different categories.&lt;/p&gt;

&lt;p&gt;Ansible leverages a very simple language, &lt;a href="https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html" rel="noopener noreferrer"&gt;YAML&lt;/a&gt;, to define playbooks in a human-readable data format that is really easy to understand from day one.&lt;/p&gt;

&lt;p&gt;What’s more, Ansible is agentless: it doesn’t require installing any extra software on the managed nodes, so it’s simple to start using.&lt;/p&gt;

&lt;p&gt;Typically, the only thing a user needs is a terminal to execute Ansible commands and a text editor to define the configuration files.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of using Ansible
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A free and open-source community project with a huge audience.&lt;/li&gt;
&lt;li&gt;Battle-tested over many years as the preferred tool of IT wizards.&lt;/li&gt;
&lt;li&gt;Easy to start and use from day one, without the need for any special coding skills.&lt;/li&gt;
&lt;li&gt;Simple deployment workflow without any extra agents.&lt;/li&gt;
&lt;li&gt;Includes some sophisticated features around modularity and reusability that come in handy as users become more proficient.&lt;/li&gt;
&lt;li&gt;Extensive and comprehensive official documentation that is complemented by a plethora of online material produced by its community.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To sum up, Ansible is simple yet powerful, agentless, community-powered, predictable, and secure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Basic Concepts &amp;amp; Terms
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Host&lt;/strong&gt;: A remote machine managed by Ansible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Group&lt;/strong&gt;: Several hosts grouped together that share a common attribute.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inventory&lt;/strong&gt;: A collection of all the hosts and groups that Ansible manages. It can be a static file in simple cases, or the inventory can be pulled dynamically from remote sources, such as cloud providers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Modules&lt;/strong&gt;: Units of code that Ansible sends to the remote nodes for execution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tasks&lt;/strong&gt;: Units of action that combine a module and its arguments along with some other parameters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Playbooks&lt;/strong&gt;: An ordered list of tasks, along with their necessary parameters, that defines a recipe for configuring a system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Roles&lt;/strong&gt;: Redistributable units of organization that allow users to share automation code more easily.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YAML&lt;/strong&gt;: A popular and simple data format that is very clean and understandable by humans.&lt;/p&gt;
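&lt;p&gt;If you have never seen YAML before, this is roughly all the syntax a basic playbook needs (the keys and values here are purely illustrative):&lt;/p&gt;

```yaml
key: value            # a mapping of key/value pairs
packages:             # a list, one item per dash
  - nginx
  - git
settings:             # mappings and lists can be nested
  enabled: true
  retries: 3
```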

&lt;h2&gt;
  
  
  How to install Ansible
&lt;/h2&gt;

&lt;p&gt;To start using Ansible, you will need to install it on a control node; this could be your laptop, for example. From this control node, Ansible connects to the other machines, manages them, and orchestrates different tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation Requirements
&lt;/h3&gt;

&lt;p&gt;Your control node can be any machine with Python 3.8 or newer, but Windows is not supported. &lt;/p&gt;

&lt;p&gt;For the managed nodes, Ansible communicates over SSH and SFTP (this can be switched to SCP via the ansible.cfg file), or over WinRM for Windows hosts. The managed nodes also need Python 2 (version 2.6 or later) or Python 3 (version 3.5 or later) installed; Windows nodes additionally require PowerShell 3.0 or later and at least .NET 4.0.&lt;/p&gt;
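&lt;p&gt;As an illustration, switching the file transfer mechanism from SFTP to SCP is a small change in ansible.cfg (a sketch; this option applies to older Ansible versions):&lt;/p&gt;

```ini
[ssh_connection]
; Use SCP instead of SFTP for file transfers
scp_if_ssh = True
```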

&lt;p&gt;The exact installation procedure depends on your machine and operating system, but the most common way is to use pip.&lt;/p&gt;

&lt;p&gt;To install &lt;strong&gt;pip&lt;/strong&gt;, in case it’s not already available on your system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl https://bootstrap.pypa.io/get-pip.py &lt;span class="nt"&gt;-o&lt;/span&gt; get-pip.py
&lt;span class="nv"&gt;$ &lt;/span&gt;python get-pip.py &lt;span class="nt"&gt;--user&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After pip is installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--user&lt;/span&gt; ansible
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since the recommended installation method differs between operating systems, you can also have a look &lt;a href="https://docs.ansible.com/ansible-core/devel/installation_guide/intro_installation.html#installing-ansible-on-specific-operating-systems" rel="noopener noreferrer"&gt;here&lt;/a&gt; to find the officially suggested way for your environment. For example, check &lt;a href="https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-on-ubuntu" rel="noopener noreferrer"&gt;this guide for Ubuntu&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can verify from your terminal that it’s successfully installed by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ansible &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Demo requirements
&lt;/h3&gt;

&lt;p&gt;Now that we have Ansible installed, we are going to create our first demo setup. I am going to use my personal laptop as the control node and &lt;a href="https://www.vagrantup.com/docs/installation" rel="noopener noreferrer"&gt;Vagrant&lt;/a&gt; along with VirtualBox to spin up two local Ubuntu machines that we will manage with Ansible.&lt;/p&gt;

&lt;p&gt;If you wish to follow along, install Vagrant and &lt;a href="https://www.virtualbox.org/" rel="noopener noreferrer"&gt;VirtualBox&lt;/a&gt;. You can also find all the files that we are going to be using in this demo in this &lt;a href="https://github.com/Imoustak/ansible_intro" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;From the top-level directory of this GitHub repository, execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;vagrant up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will spin up two Ubuntu hosts in VirtualBox so that we can use them as our managed hosts in this demo exercise. You can also open VirtualBox to verify that the two virtual machines exist.&lt;/p&gt;

&lt;p&gt;Finally, to get the information necessary to build our hosts file, run the &lt;code&gt;vagrant ssh-config&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;vagrant ssh-config
Host host1
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/ioannis/Desktop/blog/ansible_intro/.vagrant/machines/host1/virtualbox/private_key
  IdentitiesOnly &lt;span class="nb"&gt;yes
  &lt;/span&gt;LogLevel FATAL

Host host2
  HostName 127.0.0.1
  User vagrant
  Port 2200
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/ioannis/Desktop/blog/ansible_intro/.vagrant/machines/host2/virtualbox/private_key
  IdentitiesOnly &lt;span class="nb"&gt;yes
  &lt;/span&gt;LogLevel FATAL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Ansible Inventory
&lt;/h2&gt;

&lt;p&gt;As previously mentioned, the inventory is the collection of machines that we would like to manage. The default location for the inventory is &lt;code&gt;/etc/ansible/hosts&lt;/code&gt;, but we can also define a custom one in any directory. &lt;/p&gt;

&lt;p&gt;In the GitHub repository, you will see a file named &lt;code&gt;hosts&lt;/code&gt; that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;host1 ansible_host=127.0.0.1 ansible_user=vagrant ansible_port=2222 ansible_ssh_private_key_file=./.vagrant/machines/host1/virtualbox/private_key

host2 ansible_host=127.0.0.1 ansible_user=vagrant ansible_port=2200 ansible_ssh_private_key_file=./.vagrant/machines/host2/virtualbox/private_key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We used the information from &lt;code&gt;vagrant ssh-config&lt;/code&gt; to populate our hosts file. Currently, it contains only two entries, one for each host that we want to manage.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;host1&lt;/strong&gt; and &lt;strong&gt;host2&lt;/strong&gt; are the &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#inventory-aliases" rel="noopener noreferrer"&gt;aliases&lt;/a&gt; we used to name them.&lt;/p&gt;

&lt;p&gt;We specified some variables, such as the &lt;strong&gt;host&lt;/strong&gt;, &lt;strong&gt;user&lt;/strong&gt;, and &lt;strong&gt;SSH connection parameters&lt;/strong&gt; necessary to connect to our managed nodes. Here’s a full list of &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#connecting-to-hosts-behavioral-inventory-parameters" rel="noopener noreferrer"&gt;inventory parameters&lt;/a&gt; that we can configure per host.&lt;/p&gt;
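&lt;p&gt;As an illustration, a single inventory line can carry several behavioral parameters (the host name and values below are hypothetical):&lt;/p&gt;

```plaintext
host3 ansible_host=192.0.2.10 ansible_user=deploy ansible_port=22 ansible_python_interpreter=/usr/bin/python3
```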

&lt;p&gt;We will keep this inventory simple for the time being, but check out &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html" rel="noopener noreferrer"&gt;this guide&lt;/a&gt; to explore other inventory options such as creating host groups, adding ranges of hosts, and grouping variables.&lt;/p&gt;

&lt;p&gt;For example, we can define groups of hosts like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[webservers]
webserver1.example.com
webserver2.example.com
webserver3.example.com
192.0.6.45

[databases]
database1.example.com
database2.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, we defined two groups of hosts, &lt;strong&gt;webservers&lt;/strong&gt; and &lt;strong&gt;databases&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Two special groups always exist by default: &lt;strong&gt;all&lt;/strong&gt;, which includes every host, and &lt;strong&gt;ungrouped&lt;/strong&gt;, which includes all the hosts that aren’t in any other group.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ansible ad hoc commands
&lt;/h2&gt;

&lt;p&gt;Using ad hoc commands is a quick way to run a single task on one or more managed nodes. &lt;/p&gt;

&lt;p&gt;Some examples of valid use cases are rebooting servers, copying files, checking connection status, managing packages, gathering facts, etc.&lt;/p&gt;

&lt;p&gt;The pattern for ad hoc commands looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ansible &lt;span class="o"&gt;[&lt;/span&gt;host-pattern] &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;module] &lt;span class="nt"&gt;-a&lt;/span&gt; “[module options]”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;host-pattern&lt;/strong&gt;: the managed hosts to run against&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-m&lt;/strong&gt;: the module to run&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-a&lt;/strong&gt;: the list of arguments required by the module&lt;/p&gt;

&lt;p&gt;This is a good opportunity to use our first Ansible ad hoc command and at the same time validate that our inventory is configured as expected. Let’s go ahead and execute a ping command against all our hosts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ansible &lt;span class="nt"&gt;-i&lt;/span&gt; hosts all &lt;span class="nt"&gt;-m&lt;/span&gt; ping

host1 | SUCCESS &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"ansible_facts"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"discovered_interpreter_python"&lt;/span&gt;: &lt;span class="s2"&gt;"/usr/bin/python"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="s2"&gt;"changed"&lt;/span&gt;: &lt;span class="nb"&gt;false&lt;/span&gt;,
    &lt;span class="s2"&gt;"ping"&lt;/span&gt;: &lt;span class="s2"&gt;"pong"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

host2 | SUCCESS &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"ansible_facts"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"discovered_interpreter_python"&lt;/span&gt;: &lt;span class="s2"&gt;"/usr/bin/python"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="s2"&gt;"changed"&lt;/span&gt;: &lt;span class="nb"&gt;false&lt;/span&gt;,
    &lt;span class="s2"&gt;"ping"&lt;/span&gt;: &lt;span class="s2"&gt;"pong"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nice! It seems we can successfully ping the two hosts that we have defined in our hosts file.&lt;/p&gt;

&lt;p&gt;Next, run a live command only against the host2 node by using the &lt;em&gt;--limit&lt;/em&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ansible all &lt;span class="nt"&gt;-i&lt;/span&gt; hosts &lt;span class="nt"&gt;--limit&lt;/span&gt; host2 &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="s2"&gt;"/bin/echo hello"&lt;/span&gt;

host2 | CHANGED | &lt;span class="nv"&gt;rc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;
hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another example would be to copy a file to our remote nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ansible all &lt;span class="nt"&gt;-i&lt;/span&gt; hosts &lt;span class="nt"&gt;-m&lt;/span&gt; ansible.builtin.copy &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="s2"&gt;"src=./hosts dest=/tmp/hosts"&lt;/span&gt;
host1 | CHANGED &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"ansible_facts"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"discovered_interpreter_python"&lt;/span&gt;: &lt;span class="s2"&gt;"/usr/bin/python"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="s2"&gt;"changed"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;,
    &lt;span class="s2"&gt;"checksum"&lt;/span&gt;: &lt;span class="s2"&gt;"0bd8efb12ac716fdddf6dd8feedb750a7fc8c370"&lt;/span&gt;,
    &lt;span class="s2"&gt;"dest"&lt;/span&gt;: &lt;span class="s2"&gt;"/tmp/hosts"&lt;/span&gt;,
    &lt;span class="s2"&gt;"gid"&lt;/span&gt;: 1000,
    &lt;span class="s2"&gt;"group"&lt;/span&gt;: &lt;span class="s2"&gt;"vagrant"&lt;/span&gt;,
    &lt;span class="s2"&gt;"md5sum"&lt;/span&gt;: &lt;span class="s2"&gt;"f425732ff83fe576b00f37dd63d94544"&lt;/span&gt;,
    &lt;span class="s2"&gt;"mode"&lt;/span&gt;: &lt;span class="s2"&gt;"0664"&lt;/span&gt;,
    &lt;span class="s2"&gt;"owner"&lt;/span&gt;: &lt;span class="s2"&gt;"vagrant"&lt;/span&gt;,
    &lt;span class="s2"&gt;"size"&lt;/span&gt;: 291,
    &lt;span class="s2"&gt;"src"&lt;/span&gt;: &lt;span class="s2"&gt;"/home/vagrant/.ansible/tmp/ansible-tmp-1645033325.338188-14454-35489936437202/source"&lt;/span&gt;,
    &lt;span class="s2"&gt;"state"&lt;/span&gt;: &lt;span class="s2"&gt;"file"&lt;/span&gt;,
    &lt;span class="s2"&gt;"uid"&lt;/span&gt;: 1000
&lt;span class="o"&gt;}&lt;/span&gt;
host2 | CHANGED &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"ansible_facts"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"discovered_interpreter_python"&lt;/span&gt;: &lt;span class="s2"&gt;"/usr/bin/python"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="s2"&gt;"changed"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;,
    &lt;span class="s2"&gt;"checksum"&lt;/span&gt;: &lt;span class="s2"&gt;"0bd8efb12ac716fdddf6dd8feedb750a7fc8c370"&lt;/span&gt;,
    &lt;span class="s2"&gt;"dest"&lt;/span&gt;: &lt;span class="s2"&gt;"/tmp/hosts"&lt;/span&gt;,
    &lt;span class="s2"&gt;"gid"&lt;/span&gt;: 1000,
    &lt;span class="s2"&gt;"group"&lt;/span&gt;: &lt;span class="s2"&gt;"vagrant"&lt;/span&gt;,
    &lt;span class="s2"&gt;"md5sum"&lt;/span&gt;: &lt;span class="s2"&gt;"f425732ff83fe576b00f37dd63d94544"&lt;/span&gt;,
    &lt;span class="s2"&gt;"mode"&lt;/span&gt;: &lt;span class="s2"&gt;"0664"&lt;/span&gt;,
    &lt;span class="s2"&gt;"owner"&lt;/span&gt;: &lt;span class="s2"&gt;"vagrant"&lt;/span&gt;,
    &lt;span class="s2"&gt;"size"&lt;/span&gt;: 291,
    &lt;span class="s2"&gt;"src"&lt;/span&gt;: &lt;span class="s2"&gt;"/home/vagrant/.ansible/tmp/ansible-tmp-1645033325.356349-14456-242443746329447/source"&lt;/span&gt;,
    &lt;span class="s2"&gt;"state"&lt;/span&gt;: &lt;span class="s2"&gt;"file"&lt;/span&gt;,
    &lt;span class="s2"&gt;"uid"&lt;/span&gt;: 1000
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Perfect, the files have been copied! We can verify this ourselves by SSHing into one of the managed nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;vagrant ssh host1
vagrant@vagrant:~&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /tmp/hosts

host1 &lt;span class="nv"&gt;ansible_host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;127.0.0.1 &lt;span class="nv"&gt;ansible_user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;vagrant &lt;span class="nv"&gt;ansible_port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2222 &lt;span class="nv"&gt;ansible_ssh_private_key_file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;./.vagrant/machines/host1/virtualbox/private_key
host2 &lt;span class="nv"&gt;ansible_host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;127.0.0.1 &lt;span class="nv"&gt;ansible_user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;vagrant &lt;span class="nv"&gt;ansible_port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2200 &lt;span class="nv"&gt;ansible_ssh_private_key_file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;./.vagrant/machines/host2/virtualbox/private_key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most Ansible modules are &lt;a href="https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html#term-Idempotency" rel="noopener noreferrer"&gt;idempotent&lt;/a&gt;, which means that changes are applied only when needed.  &lt;/p&gt;

&lt;p&gt;If we run a command with an idempotent module, such as copy, a second time, the task succeeds without performing any action, since the file already exists.&lt;/p&gt;

&lt;p&gt;Notice the different colors (green indicates no actions) of the command output after we execute the same command a second time:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyntpwssldhu7198kmuw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyntpwssldhu7198kmuw.png" alt=" " width="800" height="773"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you would like to get more information about Ansible ad hoc commands, check out the &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html#introduction-to-ad-hoc-commands" rel="noopener noreferrer"&gt;Intro to ad hoc commands&lt;/a&gt; official user guide.&lt;/p&gt;

&lt;h2&gt;
  
  
  Intro to Ansible Playbooks
&lt;/h2&gt;

&lt;p&gt;Playbooks are the simplest way in Ansible to automate repeating tasks in the form of reusable and consistent configuration files. Playbooks are defined in YAML files and contain any ordered set of steps to be executed on our managed nodes.&lt;/p&gt;

&lt;p&gt;As mentioned, tasks in a playbook are executed from top to bottom. At a minimum, a playbook should define the managed nodes to target and some tasks to run against them.&lt;/p&gt;

&lt;p&gt;In playbooks, data elements at the same level must share the same indentation, while items that are children of other items must be indented more than their parents. &lt;/p&gt;

&lt;p&gt;Let’s look at a simple playbook to get an idea of how that looks in practice. &lt;/p&gt;

&lt;p&gt;For this demo, we will use a simple playbook that runs against all hosts, copies a file, creates a user, and upgrades all apt packages on the remote machines.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Intro to Ansible Playbooks
  hosts: all

  tasks:
  - name: Copy file hosts with permissions
    ansible.builtin.copy:
      src: ./hosts
      dest: /tmp/hosts_backup
      mode: '0644'
  - name: Add the user 'bob'
    ansible.builtin.user:
      name: bob
    become: yes
    become_method: sudo
  - name: Upgrade all apt packages
    apt:
      force_apt_get: yes
      upgrade: dist
    become: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the top section, we define the playbook’s name and the group of hosts to run it against. After that, we define a list of tasks. Each task contains a descriptive name and the module to be executed, along with the necessary arguments.&lt;/p&gt;

&lt;p&gt;To avoid specifying the location of our inventory file every time, we can define it in a configuration file (&lt;strong&gt;ansible.cfg&lt;/strong&gt;). To find out more about Ansible configuration options, check &lt;a href="https://docs.ansible.com/ansible/latest/installation_guide/intro_configuration.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[defaults]
inventory=./hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can validate that this works as expected by running the &lt;a href="https://docs.ansible.com/ansible/latest/cli/ansible-inventory.html" rel="noopener noreferrer"&gt;ansible-inventory&lt;/a&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-inventory --list
{
    "_meta": {
        "hostvars": {
            "host1": {
                "ansible_host": "127.0.0.1",
                "ansible_port": 2222,
                "ansible_ssh_private_key_file": "./.vagrant/machines/host1/virtualbox/private_key",
                "ansible_user": "vagrant"
            },
            "host2": {
                "ansible_host": "127.0.0.1",
                "ansible_port": 2200,
                "ansible_ssh_private_key_file": "./.vagrant/machines/host2/virtualbox/private_key",
                "ansible_user": "vagrant"
            }
        }
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    },
    "ungrouped": {
        "hosts": [
            "host1",
            "host2"
        ]
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we are ready to run our first playbook using the &lt;a href="https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html#ansible-playbook" rel="noopener noreferrer"&gt;ansible-playbook&lt;/a&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-playbook intro_playbook.yml
 ___________________________________
&amp;lt; PLAY [Intro to Ansible Playbooks] &amp;gt;
 -----------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

 ________________________
&amp;lt; TASK [Gathering Facts] &amp;gt;
 ------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

ok: [host2]
ok: [host1]
 _____________________________________________
&amp;lt; TASK [Copy file with owner and permissions] &amp;gt;
 ---------------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

changed: [host1]
changed: [host2]
 ___________________________
&amp;lt; TASK [Add the user 'bob'] &amp;gt;
 ---------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

changed: [host1]
changed: [host2]
 _________________________________
&amp;lt; TASK [Upgrade all apt packages] &amp;gt;
 ---------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

[WARNING]: Updating cache and auto-installing missing dependency: python-apt
changed: [host2]
changed: [host1]
 ____________
&amp;lt; PLAY RECAP &amp;gt;
 ------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

host1                      : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
host2                      : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our playbook has been executed successfully, and we can follow the ordered execution of the tasks per host in the command output. At the bottom, Ansible provides a summary of the playbook execution. &lt;/p&gt;

&lt;p&gt;Something that you might find useful at times is validating the syntax of a playbook with the &lt;strong&gt;--syntax-check&lt;/strong&gt; flag.&lt;/p&gt;

&lt;p&gt;Another handy option is the &lt;strong&gt;-C&lt;/strong&gt; flag, which performs a dry run of the playbook: it doesn’t actually make any changes but reports the changes that would happen during a real run.&lt;/p&gt;
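&lt;p&gt;Both options can be used like this, assuming the playbook file from earlier:&lt;/p&gt;

```shell
# Validate the playbook's syntax without contacting any hosts
ansible-playbook intro_playbook.yml --syntax-check

# Dry run: report what would change, without changing anything
ansible-playbook intro_playbook.yml -C
```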

&lt;p&gt;The official &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks.html" rel="noopener noreferrer"&gt;Working with playbooks&lt;/a&gt; user guide includes many more details and options about playbooks so make sure to read it when you start moving into more advanced use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Variables in Playbooks
&lt;/h2&gt;

&lt;p&gt;Variables can be defined in Ansible at more than one level and Ansible chooses the variable to use based on &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable" rel="noopener noreferrer"&gt;variable precedence&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Let’s see how we can use variables at the playbook level. &lt;/p&gt;

&lt;p&gt;The most common method is to use a &lt;strong&gt;vars&lt;/strong&gt; block at the beginning of each playbook. After declaring variables there, we can use them in tasks. Use &lt;code&gt;{{ variable_name }}&lt;/code&gt; to reference a variable in a task.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Variables playbook
  hosts: all
  vars:
      state: latest
      user: bob
  tasks:
  - name: Add the user {{ user }}
    ansible.builtin.user:
      name: "{{ user }}"
  - name: Upgrade all apt packages
    ansible.builtin.apt:
      force_apt_get: yes
      upgrade: dist
  - name: Install the {{ state }} of package "nginx"
    ansible.builtin.apt:
      name: "nginx"
      state: "{{ state }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, we used the variables &lt;strong&gt;user&lt;/strong&gt; and &lt;strong&gt;state&lt;/strong&gt;. When a value begins with a variable reference, we must wrap the whole value in quotes, as shown in our example; otherwise, YAML would misinterpret the braces.&lt;/p&gt;

&lt;p&gt;During the playbook run below, we see that variable substitution happens successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjzrncuex2p8zqt7rxq8v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjzrncuex2p8zqt7rxq8v.png" alt=" " width="800" height="932"&gt;&lt;/a&gt;&lt;/p&gt;
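&lt;p&gt;Since extra variables passed on the command line have the highest precedence, we could also override the playbook’s defaults at runtime without editing the file (the values here are illustrative):&lt;/p&gt;

```shell
# -e / --extra-vars overrides the vars block defined in the playbook
ansible-playbook playbook.yml -e "user=alice state=present"
```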

&lt;p&gt;Take a look at the &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#" rel="noopener noreferrer"&gt;Using Variables&lt;/a&gt; official user guide to learn more about advanced use cases of their usage in Ansible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Points
&lt;/h2&gt;

&lt;p&gt;In this article, we explored Ansible’s basic concepts, features, and functionality, and explained why it is such a great tool for automation purposes. &lt;/p&gt;

&lt;p&gt;We also set up an Ansible demo environment with examples of how to create an inventory, execute ad hoc commands, and write and run simple playbooks with variables. &lt;/p&gt;

&lt;p&gt;Thank you for reading and I hope you enjoyed this "Intro to Ansible" article as much as I did.&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>tutorial</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Terraform Best Practices for Better Infrastructure Management</title>
      <dc:creator>Ioannis Moustakis</dc:creator>
      <pubDate>Sun, 15 May 2022 14:39:53 +0000</pubDate>
      <link>https://forem.com/spacelift/terraform-best-practices-for-better-infrastructure-management-3ib3</link>
      <guid>https://forem.com/spacelift/terraform-best-practices-for-better-infrastructure-management-3ib3</guid>
      <description>&lt;p&gt;In this article, we explore best practices for managing Infrastructure as Code (IaC) with Terraform. Terraform is one of the most used tools in the &lt;a href="https://spacelift.io/blog/infrastructure-as-code" rel="noopener noreferrer"&gt;IaC&lt;/a&gt; space that enables us to safely and predictably apply changes to our infrastructure. &lt;/p&gt;

&lt;p&gt;Starting with Terraform can feel intimidating at first, but a beginner can quickly reach a basic understanding of the tool. After the initial learning period, a new user can start running commands and creating and refactoring Terraform code. During this process, many new users run into questions about how to structure their code correctly, use advanced features, and apply software development best practices to their IaC process. &lt;/p&gt;

&lt;p&gt;Let’s go through some best practices that will help you push your Terraform skills to the next level. If you are entirely new to Terraform, look at the &lt;a href="https://spacelift.io/blog/terraform" rel="noopener noreferrer"&gt;Terraform Spacelift Blog&lt;/a&gt;, where you can find a plethora of material, tutorials, and examples to boost your skills.&lt;/p&gt;

&lt;h1&gt;
  
  
  Terraform Key Concepts
&lt;/h1&gt;

&lt;p&gt;In this section, we will describe some key Terraform concepts briefly. If you are already familiar with these, you can skip this section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Configuration Language
&lt;/h2&gt;

&lt;p&gt;Terraform uses its own &lt;a href="https://www.terraform.io/language" rel="noopener noreferrer"&gt;configuration language&lt;/a&gt; to declare infrastructure objects and their associations. The goal of this language is to be declarative and describe the system’s state that we want to reach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;Resources represent infrastructure objects and are one of the basic blocks of the Terraform language. &lt;/p&gt;
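&lt;p&gt;For illustration, a minimal resource block declaring a single hypothetical object looks like this:&lt;/p&gt;

```hcl
# A resource block: resource type, local name, and arguments
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-artifacts-bucket"  # hypothetical bucket name
}
```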

&lt;h2&gt;
  
  
  Data Sources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://spacelift.io/blog/terraform-data-sources-how-they-are-utilised" rel="noopener noreferrer"&gt;Data sources&lt;/a&gt; feed our Terraform configurations with external data or data defined by separate Terraform projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modules
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://spacelift.io/blog/what-are-terraform-modules-and-how-do-they-work" rel="noopener noreferrer"&gt;Modules&lt;/a&gt; help us group several resources and are the primary way to package resources in Terraform for reusability purposes. &lt;/p&gt;

&lt;h2&gt;
  
  
  State
&lt;/h2&gt;

&lt;p&gt;Terraform keeps information about the state of our infrastructure so it can track mappings to our live resources, store metadata, create plans, and apply new changes. &lt;/p&gt;

&lt;h2&gt;
  
  
  Providers
&lt;/h2&gt;

&lt;p&gt;To interact with resources on cloud providers and other platforms, Terraform uses plugins called providers.&lt;/p&gt;

&lt;h1&gt;
  
  
  IaC Best Practices
&lt;/h1&gt;

&lt;p&gt;Before moving to Terraform-specific advice, let’s first check some fundamental best practices that apply to all Infrastructure as Code projects. These should be part of your processes regardless of the tool you use to manage your cloud infrastructure. &lt;/p&gt;

&lt;h2&gt;
  
  
  1. Use version control and prevent manual changes
&lt;/h2&gt;

&lt;p&gt;This seems like an obvious statement in 2022, but it’s the basis of everything else. We should treat our infrastructure configurations as application code and apply the same best practices for managing, testing, reviewing, and bringing it to production. We should embrace a &lt;a href="https://www.weave.works/technologies/gitops/" rel="noopener noreferrer"&gt;GitOps&lt;/a&gt; approach that fits our use case and implement an automated CI/CD workflow for applying changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Shift your culture to Collaborative IaC
&lt;/h2&gt;

&lt;p&gt;Keeping your infrastructure in version-controlled repositories is the first step to improving your manual infrastructure management. Next, we should strive to enable usage across teams with self-service infrastructure, apply policies and compliance according to our organization’s standards, and access relevant insights and information. Thankfully, &lt;a href="http://spacelift.io/?utm_source=blog&amp;amp;utm_medium=text&amp;amp;utm_id=blogpost&amp;amp;utm_content=%7Bterraform_best_practices%7D" rel="noopener noreferrer"&gt;Spacelift&lt;/a&gt; can assist you along the way to achieving these.&lt;/p&gt;

&lt;h1&gt;
  
  
  How to Structure Your Terraform Projects
&lt;/h1&gt;

&lt;p&gt;This section will explore some strategies for structuring our Terraform projects. In the world of Terraform, there is no right or wrong way to structure our configurations, and most of the suggested structures that you will find online are heavily opinionated.&lt;/p&gt;

&lt;p&gt;When deciding how to set up your Terraform configuration, the most important thing is to &lt;strong&gt;understand your infrastructure needs&lt;/strong&gt; and your use case and &lt;strong&gt;craft a solution that fits your team and the project&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If we are dealing with a small project with limited infrastructure components, it’s not a bad idea to keep our Terraform configuration as simple as possible. In these cases, we can create only the necessary files for our root module, i.e. the configuration files that live in the root directory. A small project can contain just the files &lt;strong&gt;main.tf&lt;/strong&gt;, &lt;strong&gt;variables.tf&lt;/strong&gt;, and &lt;strong&gt;README.md&lt;/strong&gt;. Other files that you might find handy are &lt;strong&gt;outputs.tf&lt;/strong&gt; to define the output values of your project, &lt;strong&gt;versions.tf&lt;/strong&gt; to pin Terraform and provider versions, and &lt;strong&gt;providers.tf&lt;/strong&gt; to configure options related to the providers you use, especially if there are multiple. &lt;/p&gt;

&lt;p&gt;Our primary entry point is main.tf, and in simple use cases, we can add all our resources there. We define our &lt;a href="https://spacelift.io/blog/how-to-use-terraform-variables" rel="noopener noreferrer"&gt;variables&lt;/a&gt; in variables.tf and assign values to them in terraform.tfvars. We use the file outputs.tf to declare &lt;a href="https://spacelift.io/blog/terraform-output" rel="noopener noreferrer"&gt;output values&lt;/a&gt;. You can find a similar example project structure &lt;a href="https://github.com/spacelift-io-blog-posts/Blog-Technical-Content/tree/master/terraform-best-practices/small-terraform-project-structure" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdsnc96blh86if73e6f55.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdsnc96blh86if73e6f55.png" alt=" " width="389" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When dealing with larger projects, things get a bit more complicated, and we have to take a step back to figure out the best structure for our projects. &lt;/p&gt;

&lt;p&gt;We first have to &lt;strong&gt;break down our Terraform code into reusable components&lt;/strong&gt; that abstract details from consumers and different teams can use and customize accordingly. We can achieve this by creating separate modules for pieces of infrastructure that should be reused in different environments, projects, and teams. &lt;/p&gt;

&lt;p&gt;A common practice is to separate our modules according to ownership and responsibility, rate of change, and ease of management. For every module, we need to define its inputs and outputs and document them thoroughly to enable consumers to use them effectively. We can then leverage outputs and terraform_remote_state to reference values across modules or even different Terraform states. Beware that using the terraform_remote_state data source implies access to the entire state snapshot, which might pose a security issue. Another option for sharing parameters between different states is to &lt;a href="https://www.terraform.io/language/state/remote-state-data#alternative-ways-to-share-data-between-configurations" rel="noopener noreferrer"&gt;leverage an external tool&lt;/a&gt; for publishing and consuming the data, such as Amazon SSM Parameter Store or HashiCorp Consul.&lt;/p&gt;
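&lt;p&gt;As a sketch, referencing an output from another configuration’s state via terraform_remote_state could look like this (the bucket, key, and output names are hypothetical):&lt;/p&gt;

```hcl
# Read the state produced by a separate "network" configuration
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "example-terraform-state"    # hypothetical bucket
    key    = "network/terraform.tfstate"  # hypothetical state key
    region = "eu-west-1"
  }
}

# Consume an output exposed by that configuration
resource "aws_instance" "app" {
  ami           = "ami-12345678"          # hypothetical AMI
  instance_type = "t3.micro"
  subnet_id     = data.terraform_remote_state.network.outputs.subnet_id
}
```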

&lt;p&gt;The next decision we have to make is whether to keep all our Terraform code in a single repository (monorepo) or split our Terraform configurations across multiple repositories. This is the subject of a &lt;a href="https://www.hashicorp.com/blog/terraform-mono-repo-vs-multi-repo-the-great-debate" rel="noopener noreferrer"&gt;great debate&lt;/a&gt;, since both approaches have drawbacks and benefits. There is a tendency in the industry to avoid gigantic monorepos and use separate configurations to enable faster module development and flexibility. Personally, this is the approach I prefer too.&lt;/p&gt;

&lt;p&gt;Usually, we have to deal with a plethora of different Infrastructure environments, and there are multiple ways to handle this in Terraform. A good and easy practice to follow is to have &lt;strong&gt;separate Terraform configurations for different environments&lt;/strong&gt;. This way, different environments have their own state and can be tested and managed separately, while shared behavior is achieved with shared or remote modules. &lt;/p&gt;

&lt;p&gt;One option is using a separate directory per environment and keeping a separate state for each directory. Another option would be to keep all the Terraform configurations in the same directory and pass different environment variables per environment to parametrize the configuration accordingly. Check out the &lt;a href="https://www.youtube.com/watch?v=wgzgVm7Sqlk" rel="noopener noreferrer"&gt;Evolving Your Infrastructure with Terraform&lt;/a&gt; and &lt;a href="https://www.youtube.com/watch?v=Nr5Km_xGLVs" rel="noopener noreferrer"&gt;How I Manage More Environments with Less Code in Terraform&lt;/a&gt; talks for some inspiration around structuring your projects. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/spacelift-io-blog-posts/Blog-Technical-Content/tree/master/terraform-best-practices/separate-environments-project-structure" rel="noopener noreferrer"&gt;Here&lt;/a&gt;, you can find an example structure for three different environments per directory: production, staging, and test. Each environment has its own state and is managed separately from the others while leveraging common or shared modules.  Although this approach comes with some code duplication, we gain improved clarity, environment isolation, and scalability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faogb8tg5ph2l7nah2ty8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faogb8tg5ph2l7nah2ty8.png" alt=" " width="648" height="875"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a general rule, we want to define Terraform configurations with &lt;strong&gt;limited scope&lt;/strong&gt; and blast radius, each with specific owners. To minimize risk, we should try to decompose our projects into small workspaces/stacks and segment access to them using role-based access control (RBAC).&lt;/p&gt;

&lt;h1&gt;
  
  
  Terraform Specific Best Practices
&lt;/h1&gt;

&lt;p&gt;Alright, in the previous sections, we talked about some generic IaC best practices. We explored some options for optimizing our Terraform code according to our organizational structure and needs. &lt;/p&gt;

&lt;p&gt;In this part, we are deep-diving into specific points that will take our Terraform code to the next level. This list isn’t exhaustive, and some parts are opinionated based on my personal preferences and experiences. &lt;/p&gt;

&lt;p&gt;The goal here is to give you hints and guidance on experimenting, researching, and implementing the practices that make sense to your use case.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Remote state
&lt;/h2&gt;

&lt;p&gt;It’s OK to use local state when experimenting, but for anything beyond that, use a remote shared state location. Having a single remote backend for your state is considered one of the first best practices you should adopt when working in a team. Pick one that supports &lt;strong&gt;state locking&lt;/strong&gt; to avoid multiple people changing the state simultaneously. Treat your state as immutable and avoid manual state changes at all costs. Make sure you have backups of your state that you can use in case of a disaster. For some backends, like AWS S3, you can enable versioning to allow for quick and easy state recovery.&lt;/p&gt;
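&lt;p&gt;For example, a remote backend on AWS S3 with state locking via DynamoDB might be configured like this (the bucket and table names are hypothetical):&lt;/p&gt;

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"  # hypothetical bucket, with versioning enabled
    key            = "prod/terraform.tfstate"
    region         = "eu-west-1"
    encrypt        = true                       # encrypt state at rest
    dynamodb_table = "terraform-state-lock"     # hypothetical table used for state locking
  }
}
```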

&lt;h2&gt;
  
  
  2. Use existing shared and community modules
&lt;/h2&gt;

&lt;p&gt;Instead of writing your own modules for everything and reinventing the wheel, check if there is already a module for your use case. This way, you can save time and harness the power of the Terraform community. If you feel like it, you can also help the community by improving them or reporting issues. You can check the &lt;a href="https://registry.terraform.io/browse/modules" rel="noopener noreferrer"&gt;Terraform Registry&lt;/a&gt; for available modules. &lt;/p&gt;
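&lt;p&gt;As an illustration, consuming a community module from the registry is a single block; this sketch uses the popular terraform-aws-modules/vpc module (the pinned version and input values are examples only):&lt;/p&gt;

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.2"   # pin an explicit version (hypothetical)

  name = "demo-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["eu-west-1a", "eu-west-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
}
```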

&lt;h2&gt;
  
  
  3. Import existing infrastructure
&lt;/h2&gt;

&lt;p&gt;If you inherited a project that is a couple of years old, chances are that some parts of its infrastructure were created manually. Fear not, you can &lt;a href="https://spacelift.io/blog/importing-exisiting-infrastructure-into-terraform" rel="noopener noreferrer"&gt;import existing infrastructure into Terraform&lt;/a&gt; and avoid managing infrastructure from multiple endpoints.  &lt;/p&gt;
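&lt;p&gt;For example, importing a manually created S3 bucket is a two-step process: first write a matching resource block, then import the real object into the state (the resource address and bucket name here are hypothetical):&lt;/p&gt;

```shell
# main.tf must already contain: resource "aws_s3_bucket" "legacy" { ... }
terraform import aws_s3_bucket.legacy my-manually-created-bucket
```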

&lt;h2&gt;
  
  
  4. Avoid hard-coding values
&lt;/h2&gt;

&lt;p&gt;It might be tempting to hardcode some values here and there, but try to avoid this as much as possible. Take a moment to think whether the value you are assigning directly would make more sense as a variable to facilitate future changes. Even better, check if you can get the value of an attribute via a data source instead of setting it explicitly. For example, instead of finding our AWS account ID in the console and setting it in terraform.tfvars as&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws_account_id = "999999999999"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;we can get it from a data source:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_caller_identity" "current" {}

locals {
  account_id = data.aws_caller_identity.current.account_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Always format and validate
&lt;/h2&gt;

&lt;p&gt;In IaC, consistency is essential long-term, and Terraform provides us with some tools to help us in this quest. Remember to run &lt;strong&gt;terraform fmt&lt;/strong&gt; and &lt;strong&gt;terraform validate&lt;/strong&gt; to properly format your code and catch any issues that you missed. Ideally, this should be done auto-magically via a CI/CD pipeline or pre-commit hooks.&lt;/p&gt;
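&lt;p&gt;Both commands are quick enough to run on every commit. For example:&lt;/p&gt;

```shell
# Rewrite all files in the tree to the canonical format
terraform fmt -recursive

# In CI, fail instead of rewriting when files are not formatted
terraform fmt -check -recursive

# Check that the configuration is syntactically valid and internally consistent
terraform validate
```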

&lt;h2&gt;
  
  
  6. Use a consistent naming convention
&lt;/h2&gt;

&lt;p&gt;You can find online many suggestions for naming conventions for your Terraform code. The most important thing isn’t the rules themselves but &lt;strong&gt;finding a convention that your team is comfortable with&lt;/strong&gt; and trying collectively to be consistent with it. If you need some guidance, here’s a list of rules that are easy to follow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use underscores (_) as separators and lowercase letters in names. &lt;/li&gt;
&lt;li&gt;Try not to repeat the resource type in the resource name.&lt;/li&gt;
&lt;li&gt;For single-value variables and attributes, use singular nouns. For lists or maps, use plural nouns to show that they represent multiple values.&lt;/li&gt;
&lt;li&gt;Always use descriptive names for variables and outputs, and remember to include a description.&lt;/li&gt;
&lt;/ul&gt;
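&lt;p&gt;Applied to a resource and a variable, the rules above could look like this (all names and values are illustrative):&lt;/p&gt;

```hcl
# Underscores, lowercase, and no resource type repeated in the name
resource "aws_instance" "web" {
  ami           = "ami-12345678"   # hypothetical AMI
  instance_type = "t3.micro"
}

# Plural noun for a list, with a description
variable "allowed_cidr_blocks" {
  type        = list(string)
  description = "CIDR blocks allowed to reach the web instances"
  default     = []
}
```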

&lt;h2&gt;
  
  
  7. Tag your Resources
&lt;/h2&gt;

&lt;p&gt;A robust and consistent tagging strategy will help you tremendously when issues arise or when you are trying to figure out which part of your infrastructure blew up your cloud vendor’s bill. You can also craft some nifty access control policies based on tags when needed. As when defining naming conventions, try to be consistent and always tag your resources accordingly.&lt;/p&gt;

&lt;p&gt;The Terraform argument &lt;strong&gt;tags&lt;/strong&gt; should be declared as the last argument (only depends_on or lifecycle arguments should be defined after tags, if relevant). A handy option to help you with tagging is defining some default_tags that apply to all resources managed by a provider. Check out &lt;a href="https://learn.hashicorp.com/tutorials/terraform/aws-default-tags" rel="noopener noreferrer"&gt;this example&lt;/a&gt; to see how to set and override default tags for the AWS provider. If the provider you use doesn’t support default tags, you must manually pass these tags through to your modules and apply them to your resources.&lt;/p&gt;
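&lt;p&gt;A sketch of default_tags for the AWS provider, which supports this feature natively (the tag keys and values are illustrative):&lt;/p&gt;

```hcl
provider "aws" {
  region = "eu-west-1"

  # Applied to every taggable resource managed by this provider
  default_tags {
    tags = {
      Environment = "staging"
      Owner       = "platform-team"
      ManagedBy   = "terraform"
    }
  }
}
```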

&lt;h2&gt;
  
  
  8. Introduce Policy as Code
&lt;/h2&gt;

&lt;p&gt;As our teams and infrastructure scale, our trust in individual users is generally reduced. We should set up some policies to ensure our systems remain operational and secure. Having a Policy as Code process in place allows us to define, at scale, the rules of what is considered secure and acceptable, and to verify these rules automatically. Spacelift leverages an open-source engine, &lt;a href="https://spacelift.io/blog/what-is-open-policy-agent-and-how-it-works" rel="noopener noreferrer"&gt;Open Policy Agent (OPA)&lt;/a&gt;, to achieve this. &lt;/p&gt;

&lt;h2&gt;
  
  
  9. Implement a Secrets Management Strategy
&lt;/h2&gt;

&lt;p&gt;Usually, users won’t admit that they have secrets in their Terraform code, but we have all been there. When you are starting with Terraform, it’s normal that secret management isn’t your top priority, but eventually, you will have to define a strategy for handling secrets. &lt;/p&gt;

&lt;p&gt;As you probably heard already, &lt;strong&gt;never store secrets in plaintext and commit them in your version control system&lt;/strong&gt;. One technique that you can use is to pass secrets by setting environment variables with &lt;strong&gt;TF_VAR&lt;/strong&gt; and marking your sensitive variables with &lt;strong&gt;sensitive = true&lt;/strong&gt;.&lt;/p&gt;
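&lt;p&gt;A minimal sketch of this technique, assuming a hypothetical db_password variable:&lt;/p&gt;

```hcl
variable "db_password" {
  type        = string
  sensitive   = true   # redacts the value in plan/apply output
  description = "Password for the application database"
}
```

&lt;p&gt;The value can then be supplied via the environment, e.g. &lt;code&gt;export TF_VAR_db_password='example-only-secret'&lt;/code&gt;, instead of being committed to version control.&lt;/p&gt;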

&lt;p&gt;A more mature solution would be to set up a secret store like Hashicorp Vault or AWS Secrets Manager to handle access to secrets for you. This way, you can protect your secrets at rest and enforce encryption without too much trouble. You can also opt for more advanced features like secret rotation and audit logs. Beware that this approach usually comes with the cost of using this managed service. &lt;/p&gt;

&lt;h2&gt;
  
  
  10. Test your Terraform code
&lt;/h2&gt;

&lt;p&gt;As with all other code, IaC should be tested properly. There are different approaches here, and again, you should find one that makes sense for you. Running &lt;strong&gt;terraform plan&lt;/strong&gt; is the quickest way to verify that your changes will work as expected. Next, you can perform static analysis of your Terraform code without applying it. Unit testing is also an option to verify the normal operation of distinct parts of your system.&lt;/p&gt;

&lt;p&gt;Another step is to integrate a Terraform linter into your CI/CD pipelines to catch possible errors related to cloud providers and deprecated syntax, and to enforce best practices. One step further, you can set up integration tests by spinning up a replica sandbox environment, applying your plan there, verifying that everything works as expected, collecting the results, destroying the sandbox, and only then applying the changes to production. &lt;/p&gt;

&lt;p&gt;There are a lot of tools out there that can help you with testing your Terraform code. I will list some of them in the “Helper Tools” section below.&lt;/p&gt;

&lt;h2&gt;
  
  
  11. Enable debug/troubleshooting
&lt;/h2&gt;

&lt;p&gt;When issues arise, we have to be quick and effective in gathering all the necessary information to solve them. You might find it helpful to set the Terraform log level to debug in these cases.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TF_LOG=DEBUG &amp;lt;terraform command&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another thing that you might find helpful is to persist logs in a file by setting the TF_LOG_PATH environment variable. Check out this &lt;a href="https://spacelift.io/blog/terraform-debug" rel="noopener noreferrer"&gt;Terraform Debug &amp;amp; Troubleshoot&lt;/a&gt; tutorial for some hands-on examples.&lt;/p&gt;
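&lt;p&gt;Combining the two environment variables, for example:&lt;/p&gt;

```shell
# Persist debug logs to a file instead of cluttering the terminal
TF_LOG=DEBUG TF_LOG_PATH=./terraform-debug.log terraform plan
```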

&lt;h2&gt;
  
  
  12. Leverage Helper tools to make your life easier
&lt;/h2&gt;

&lt;p&gt;Terraform is one of the most used and loved IaC tools out there, and naturally, there is a big community around it. Along with this community, many helper tools are being constantly created to help us through our Terraform journey. Picking up and adopting the right tools for our workflows isn’t always straightforward and usually involves an experimentation phase. Here you can find a list of tools that I find helpful based on my experience, but it is definitely not an exhaustive list. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/terraform-linters/tflint" rel="noopener noreferrer"&gt;tflint&lt;/a&gt; – Terraform linter for errors that the plan can’t catch.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/tfutils/tfenv" rel="noopener noreferrer"&gt;tfenv&lt;/a&gt; – Terraform version manager&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/bridgecrewio/checkov/" rel="noopener noreferrer"&gt;checkov&lt;/a&gt; –  Terraform static analysis tool &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/gruntwork-io/terratest/" rel="noopener noreferrer"&gt;terratest&lt;/a&gt; – Go library that helps you with automated tests for Terraform&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/antonbabenko/pre-commit-terraform" rel="noopener noreferrer"&gt;pre-commit-terraform&lt;/a&gt; – Pre-commit git hooks for automation &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/terraform-docs/terraform-docs" rel="noopener noreferrer"&gt;terraform-docs&lt;/a&gt; – Quickly generate docs from modules&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://spacelift.io/?utm_source=blog&amp;amp;utm_medium=text&amp;amp;utm_id=blogpost&amp;amp;utm_content=%7Bterraform_best_practices%7D" rel="noopener noreferrer"&gt;spacelift&lt;/a&gt; – Collaborative Infrastructure Delivery Platform&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/runatlantis/atlantis" rel="noopener noreferrer"&gt;atlantis&lt;/a&gt; – Workflow for collaborating on Terraform projects&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/antonbabenko/terraform-cost-estimation" rel="noopener noreferrer"&gt;terraform-cost-estimation&lt;/a&gt; – Free cost estimation service for your plans.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also check out this list, which includes many more &lt;a href="https://github.com/shuaibiyy/awesome-terraform#tools" rel="noopener noreferrer"&gt;awesome terraform tools&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Key Points
&lt;/h1&gt;

&lt;p&gt;We have explored many different best practices for Terraform and Infrastructure as Code, analyzed various options for handling and structuring our Terraform projects, and seen how adopting helper tools can make our lives easier. &lt;/p&gt;

&lt;p&gt;Remember, this isn’t a recipe that you have to follow blindly but a guide that aims to provide pointers and cues and trigger you to build your own optimal Terraform workflows and projects.&lt;/p&gt;

&lt;p&gt;Thank you for reading, and I hope you enjoyed this “Terraform Best Practices” article as much as I did.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>iac</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
