<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: env zero Team</title>
    <description>The latest articles on Forem by env zero Team (@envzeroteam).</description>
    <link>https://forem.com/envzeroteam</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1330916%2F45e3de80-7d5c-4fab-a417-b4a8e0a1191a.png</url>
      <title>Forem: env zero Team</title>
      <link>https://forem.com/envzeroteam</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/envzeroteam"/>
    <language>en</language>
    <item>
      <title>The Ultimate Guide to Terraform Drift Detection: How to Detect, Prevent, and Remediate Infrastructure Drift</title>
      <dc:creator>env zero Team</dc:creator>
      <pubDate>Thu, 08 Jan 2026 00:32:23 +0000</pubDate>
      <link>https://forem.com/envzero/the-ultimate-guide-to-terraform-drift-detection-how-to-detect-prevent-and-remediate-5hji</link>
      <guid>https://forem.com/envzero/the-ultimate-guide-to-terraform-drift-detection-how-to-detect-prevent-and-remediate-5hji</guid>
      <description>&lt;p&gt;Terraform drift detection identifies when the actual cloud infrastructure diverges from the configuration declared in Terraform code, and this guide shows why that gap matters for security, compliance, reliability, and cost. You will learn concrete detection techniques, prevention best practices, and step-by-step remediation workflows that scale across teams and clouds. The article covers native Terraform commands and their limits, automated continuous monitoring patterns, governance controls such as policy-as-code and RBAC, remediation decision frameworks, and a practical tool-evaluation checklist for selecting drift detection solutions. Real-world tradeoffs—including operational overhead, auditability, and cost implications—are emphasized so you can choose between manual processes and automated reconciliation. Throughout, the guide integrates platform-level capabilities that accelerate detection and remediation while keeping the analysis vendor-neutral except where env zero is introduced as a concrete example of automation and governance.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Is Terraform Drift and Why Does It Matter?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Infrastructure drift in Terraform describes a mismatch between the "blueprint" (your .tf files and modules) and the actual resources running in the cloud; the Terraform state file is the artifact that records Terraform's last-known view of those resources. The state file contains resource IDs, attributes, and metadata and can be stored locally or remotely with locking support to avoid concurrent writes. A simple example: an instance type changed manually in the cloud console that no longer matches the instance_type declared in Terraform creates attribute drift, which terraform plan will surface once the state is refreshed against the provider. Understanding the role of the state file and backends is essential because misconfigured backends, stale credentials, or local-only state increase the chance that drift goes undetected.&lt;/p&gt;
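
&lt;p&gt;As an illustrative sketch of that scenario (the resource name is hypothetical and the output is abbreviated), a refreshed &lt;code&gt;terraform plan&lt;/code&gt; would propose reverting the out-of-band change:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform plan
# aws_instance.web was resized to t3.large in the console, but the
# configuration still declares t3.micro, so plan proposes a revert:
  ~ instance_type = "t3.large" -&amp;gt; "t3.micro"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;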

&lt;h3&gt;
  
  
  &lt;strong&gt;What Causes Terraform Drift and How Does It Impact Your Cloud Environment?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Common causes of drift include manual edits in the cloud console, external automation such as ad-hoc scripts or autoscaling, inconsistent state backends or locking, emergency hotfixes applied outside Git workflows, and misaligned CI/CD processes. Each cause creates different operational impacts: manual console changes often introduce security gaps, external automation can produce configuration entropy that breaks reproducibility, and stale or split state files lead to conflicting updates during deployments. The business consequences include compliance violations from undocumented changes, unexpected cloud spend when resources are misconfigured, and increased mean time to repair when teams lack a single source of truth. Recognizing these root causes allows teams to prioritize controls that reduce the frequency and severity of drift.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How Can You Detect Terraform Drift Effectively?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Detecting drift effectively combines native Terraform commands with continuous, platform-driven monitoring that integrates with VCS and CI/CD to provide traceability and alerting. A reliable detection strategy uses on-demand checks for immediate troubleshooting and scheduled or continuous scans for ongoing guardrails, while correlating findings back to VCS commits and approvals. Choosing the right mix depends on team size, compliance needs, and the scale of infrastructure changes. Below we examine native tools and then discuss how automated platforms can extend detection into continuous governance.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What Are the Native Terraform Commands for Drift Detection?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Terraform provides several native commands for surfacing drift: terraform plan previews differences between the configuration and the state, terraform plan -refresh-only and terraform apply -refresh-only (which supersede the deprecated terraform refresh) update the state from provider APIs, and terraform state list and terraform state show let operators query recorded attributes directly. Running terraform plan is the most common way to detect attribute diffs before an apply, but it requires correct provider credentials and current state; a refresh can reveal drift by syncing the state with reality, but doing so blindly rewrites the state and can mask changes you intended to revert. The limitations are practical: native commands are manual or tied to CI jobs, lack centralized dashboards, and don't provide scheduled scanning, investigation workflows, or unified alerting across many workspaces, so teams often complement them with automation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Native workflow for immediate checks&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Run terraform apply -refresh-only (the successor to terraform refresh) to sync the state from providers.&lt;/li&gt;
&lt;li&gt;  Run terraform plan to visualize diffs before apply.&lt;/li&gt;
&lt;li&gt;  Inspect state with terraform state commands for resource details.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
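
&lt;p&gt;The workflow above can be sketched as a short command sequence (illustrative; assumes configured provider credentials and a hypothetical &lt;code&gt;aws_instance.web&lt;/code&gt; resource):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Preview drift without proposing configuration changes
$ terraform plan -refresh-only

# Detect diffs in scripts: exit 0 = clean, 1 = error, 2 = changes present
$ terraform plan -detailed-exitcode

# Inspect what the state currently records
$ terraform state list
$ terraform state show aws_instance.web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;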

&lt;p&gt;This native capability is essential for debugging, and it leads to the question of how automation can scale these checks across large fleets and teams.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How Do Automated Solutions Like env zero Enhance Detection?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Automated drift detection platforms extend native commands by scheduling checks, integrating with VCS and CI/CD to link detection events to commits and pull requests, and centralizing alerting and reporting across environments. Continuous monitoring detects drift as it appears, not only when an operator manually runs plan, and scheduled scans provide historical context for recurring deviations. Platforms with VCS integration create traceability from change request to deployed state, and dashboards allow teams to triage drift by severity or compliance posture. For teams evaluating platforms, &lt;a href="https://www.env0.com/solutions/cloud-governance-and-risk-management" rel="noopener noreferrer"&gt;env zero&lt;/a&gt; offers &lt;a href="https://docs.envzero.com/guides/admin-guide/environments/drift-detection" rel="noopener noreferrer"&gt;automated drift detection&lt;/a&gt; across environments with VCS-integrated continuous monitoring, scheduling, and centralized governance that pairs detection with notifications and reporting to accelerate investigation and remediation.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Are the Best Practices for Preventing Terraform Drift?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Preventing drift focuses on reducing the opportunities for out-of-band changes and increasing automated enforcement through policy and process. Key strategies include implementing policy-as-code that blocks non-compliant changes, enforcing strict RBAC and least-privilege for consoles and APIs, adopting immutable infrastructure patterns that favor replace-over-mutate, and ensuring consistent remote state backends with locking. These practices remove common human and tool-based error modes and make configuration the single source of truth. Next we explore how policy-as-code and access controls operate in practice to prevent drift.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How Does Policy-as-Code Help Prevent Drift?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Policy-as-code translates governance rules into executable checks that run before or during deployment, preventing non-compliant changes from being applied and creating an auditable decision trail. Tools such as &lt;a href="https://docs.envzero.com/guides/policies-governance/policies" rel="noopener noreferrer"&gt;OPA-style frameworks or platform-enforced policies&lt;/a&gt; validate Terraform plans against constraints like approved instance sizes, required tags, or encryption settings, ensuring policy checks occur in CI/CD or the deployment platform. Automated enforcement stops risky or unintended modifications early, and policy results become evidence for compliance audits. Integrating policy-as-code with drift detection ensures that drift-remediation actions also respect organizational guardrails.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Policy-as-code benefits&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Enforces constraints automatically before apply.&lt;/li&gt;
&lt;li&gt;  Produces auditable results for compliance.&lt;/li&gt;
&lt;li&gt;  Prevents common misconfigurations that lead to drift.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
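
&lt;p&gt;As a hedged sketch of how such a gate can sit in a pipeline (assuming an OPA/Conftest setup with Rego rules in a hypothetical &lt;code&gt;policy/&lt;/code&gt; directory):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Render the plan as JSON and evaluate it against policy before apply
$ terraform plan -out=tfplan
$ terraform show -json tfplan &amp;gt; tfplan.json
$ conftest test tfplan.json   # non-zero exit blocks the pipeline on violations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;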

&lt;p&gt;These policy controls naturally lead into access control patterns that further minimize manual drift vectors.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What Role Do Access Controls and Immutable Infrastructure Play?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Access controls such as &lt;a href="https://docs.envzero.com/guides/admin-guide/user-role-and-team-management/user-management" rel="noopener noreferrer"&gt;role-based access control (RBAC)&lt;/a&gt;, least-privilege IAM policies, and approval workflows limit who can change infrastructure outside of Terraform, reducing the risk of manual console edits. Immutable infrastructure patterns—where changes roll out by replacing resources rather than mutating them—reduce surface area for drift because the last-known-good configuration is in version control and redeploys create consistent, repeatable builds. Approval workflows and change audits capture context for exceptions and help teams reconcile emergency fixes back into code. Together, these controls reduce drift frequency and simplify remediation when deviations occur.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Practice&lt;/th&gt;&lt;th&gt;Mechanism&lt;/th&gt;&lt;th&gt;Benefit&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Policy as code&lt;/td&gt;&lt;td&gt;Automated plan checks in CI/CD&lt;/td&gt;&lt;td&gt;Prevents non-compliant changes and creates audit trails&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;RBAC &amp;amp; approvals&lt;/td&gt;&lt;td&gt;Role scoping and approval gates&lt;/td&gt;&lt;td&gt;Limits manual edits and reduces unauthorized changes&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Immutable infrastructure&lt;/td&gt;&lt;td&gt;Replace-not-mutate deployments&lt;/td&gt;&lt;td&gt;Improves reproducibility and reduces configuration entropy&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;These best practices map directly to fewer drift incidents and faster recovery when drift is detected, and they set the stage for remediation choices.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How Do You Remediate Terraform Drift Efficiently?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Remediation begins with assessing drift severity and impact, deciding whether to update code, revert a manual change, or reconcile by guided or automated apply, and then executing with appropriate governance and audit logs. An efficient workflow balances speed and safety: critical security-related drifts require fast, controlled remediation with approvals, whereas low-risk attribute mismatches may be queued for the next standard deployment. Below we contrast manual and automated strategies and then show how platform automation supports consistent remediation.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What Are Manual vs. Automated Remediation Strategies?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Manual remediation workflows typically involve inspecting the terraform plan diff, updating Terraform code or manually changing cloud attributes, and then applying the approved change; this approach is transparent but slow and prone to human error for large fleets. Automated remediation uses reconcilers or guided automated apply workflows that can re-sync resources to declared state, often with approval gates or policy checks, which speeds recovery and reduces toil but requires robust testing and rollback controls. Tradeoffs include speed versus risk: automated reconciliation accelerates recovery at scale but needs strong policy enforcement and auditing to avoid unintended mass changes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Decision factors for remediation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Severity&lt;/strong&gt;: security/regulatory drifts demand fast, controlled fixes.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Scale&lt;/strong&gt;: widespread drift favors automated reconcilers.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Auditability&lt;/strong&gt;: compliance needs favor tracked, approved remediation.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
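
&lt;p&gt;At the CLI level, the two basic remediation outcomes can be sketched as follows (illustrative; in practice both should run behind the approval gates described above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Revert drift: re-apply the declared configuration over the manual change
$ terraform apply

# Accept drift: sync the state to the real infrastructure, then codify
# the change in the .tf files so code remains the source of truth
$ terraform apply -refresh-only
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;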

&lt;p&gt;These tradeoffs point to the value of platforms that combine automation, governance, and audit trails to streamline remediation.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How Can env zero Streamline Drift Remediation and Governance?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;env zero provides &lt;a href="https://docs.envzero.com/guides/admin-guide/environments/drift-detection/automatic-drift-remediation" rel="noopener noreferrer"&gt;automated remediation workflows&lt;/a&gt; linked to continuous detection and policy enforcement, enabling teams to reconcile drift with guided or automated applies while maintaining approval workflows and audit logs. By combining scheduled monitoring, VCS integration for traceability, and policy checks during remediation, env zero helps teams balance speed and governance: automated reconciliation can run where safe, while higher-risk changes pass through approval gates. Notifications and centralized reporting keep stakeholders informed and create an auditable sequence from detection to resolution, reducing mean time to remediation while preserving compliance.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Remediation Approach&lt;/th&gt;&lt;th&gt;Required Inputs&lt;/th&gt;&lt;th&gt;Time / Scale Implication&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Manual inspect-and-apply&lt;/td&gt;&lt;td&gt;Diff, operator expertise, change plan&lt;/td&gt;&lt;td&gt;Slower, low-scale, high oversight&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Guided remediation workflow&lt;/td&gt;&lt;td&gt;Drift report, suggested code changes, approvals&lt;/td&gt;&lt;td&gt;Moderate speed, suitable for medium scale&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Automated reconciliation&lt;/td&gt;&lt;td&gt;Policy rules, automated apply agents, rollback plan&lt;/td&gt;&lt;td&gt;Fast at scale, requires strong governance&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;Which Terraform Drift Detection Tools Should You Consider?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Choosing a drift detection tool requires evaluating detection cadence, remediation support, VCS/CI integrations, policy enforcement, multi-cloud coverage, and cost-management features. Prioritize continuous monitoring and VCS integration if traceability and scale are essential; prioritize lightweight CLI or on-demand tools for focused troubleshooting tasks. Tools like HCP Terraform (formerly Terraform Cloud) and env zero provide comprehensive drift detection as part of broader infrastructure automation and Terraform workflow management. Below is a compact feature-focused comparison to help you match organizational needs to capabilities.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Tool&lt;/th&gt;&lt;th&gt;Detection Method&lt;/th&gt;&lt;th&gt;Value&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;On-demand CLI/workflow&lt;/td&gt;&lt;td&gt;Manual plan/refresh runs&lt;/td&gt;&lt;td&gt;Best for troubleshooting single workspaces; limited scaling&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Scheduled scanning platform&lt;/td&gt;&lt;td&gt;Periodic scans with alerts&lt;/td&gt;&lt;td&gt;Good for historical trend analysis and recurring drift detection&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Continuous monitoring platform (VCS-integrated)&lt;/td&gt;&lt;td&gt;Real-time or near-real-time checks linked to VCS&lt;/td&gt;&lt;td&gt;Best for traceability, large teams, and automated workflows&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How Does Terraform Cloud Drift Detection Work?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In HCP Terraform (formerly Terraform Cloud), drift detection is a core component of Health Assessments that provides continuous visibility into whether your real-world infrastructure matches your versioned configuration. It works by performing automatic, background evaluations of your workspaces at scheduled intervals—typically starting 24 hours after the last successful run. During these assessments, HCP Terraform executes a background refresh to query the cloud provider’s APIs and sync the current state of managed resources. It then compares this live state against the expected configuration defined in your code to identify "configuration drift"—discrepancies caused by manual changes, service failures, or external automation. When a mismatch is identified, the platform updates the workspace status to a "Drift" designation, populates a dedicated Drift tab with a visualization of the specific attribute changes, and triggers customizable notifications via email, Slack, or webhooks. This automated loop allows operators to proactively remediate drift by either overwriting the external changes with a standard plan or accepting them by updating the configuration through a refresh-only plan. See our &lt;a href="https://www.env0.com/blog/terraform-cloud-tfc-alternatives-comprehensive-buyers-guide" rel="noopener noreferrer"&gt;Terraform Cloud Alternatives: 2026 In-Depth Guide&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How Does env zero Compare to Other Drift Detection Solutions?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuy33iiqo5lzagl65ytph.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuy33iiqo5lzagl65ytph.jpeg" alt=" " width="800" height="404"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Two env zero environments detecting Terraform drift, and a drift error&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;env zero emphasizes continuous monitoring integrated with VCS, &lt;a href="https://docs.envzero.com/guides/admin-guide/environments/drift-detection#detect-drift" rel="noopener noreferrer"&gt;scheduling&lt;/a&gt;, &lt;a href="https://docs.envzero.com/guides/policies-governance/policies" rel="noopener noreferrer"&gt;policy enforcement&lt;/a&gt;, &lt;a href="https://docs.envzero.com/guides/admin-guide/environments/drift-detection/automatic-drift-remediation" rel="noopener noreferrer"&gt;automated remediation workflows&lt;/a&gt;, &lt;a href="https://docs.envzero.com/guides/policies-governance/cost-estimation" rel="noopener noreferrer"&gt;cost management&lt;/a&gt;, and centralized governance—features organizations typically prioritize when moving from manual checks to platform-scale reconciliation. Where alternatives may focus on single aspects (for example, CLI-based detection or policy-only enforcement), env zero bundles detection, governance, and remediation workflows to reduce integration effort and operational overhead. Selecting a solution depends on tradeoffs between UI-driven centralized control, CLI/automation-first approaches, and pricing or deployment model preferences.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;env zero strengths&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Continuous monitoring with VCS traceability.&lt;/li&gt;
&lt;li&gt;  Integrated remediation workflows and policy enforcement.&lt;/li&gt;
&lt;li&gt;  Centralized automation and governance across environments.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctqe7n203q9kqdprk0xe.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctqe7n203q9kqdprk0xe.jpeg" alt=" " width="800" height="404"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;env zero confirming a Terraform drift remediation action&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What Are the Key Features to Look for in Drift Detection Solutions?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When evaluating tools, use a prioritized checklist that separates critical features from recommended and optional capabilities so procurement focuses on operational needs first. Critical features ensure detection reliability and governance, recommended features improve efficiency, and optional features add value based on specific organizational priorities like multi-cloud complexity or cost optimization.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Critical features&lt;/strong&gt;: Continuous monitoring and scheduling to detect drift proactively. VCS and CI/CD integration for traceability from commit to state. Policy-as-code enforcement and RBAC to maintain governance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Recommended&lt;/strong&gt; &lt;strong&gt;features&lt;/strong&gt;: Automated or guided remediation workflows to reduce manual toil. Audit logs and approval workflows for compliance coverage. Multi-cloud support to handle heterogeneous environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optional features&lt;/strong&gt;: Built-in cost management tied to detection events. Anomaly detection or prioritization helpers. Deep UI-driven orchestration for non-CLI teams.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Are the Future Trends and Advanced Techniques for Drift Detection in Terraform?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Emerging trends point toward smarter, prioritized detection and tighter reconciliation loops that reduce noise and surface only high-risk drift to operators. AI and ML techniques can help by identifying anomalous changes and prioritizing events based on historical impact and context, while human-in-the-loop models preserve control over automated remediation. Additionally, cross-cloud abstractions and standardized state management approaches are evolving to address fragmentation in multi-cloud environments. These directions reflect a broader shift from reactive detection to proactive, risk-based automation.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How Will AI and Machine Learning Impact Drift Detection?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;AI and ML are poised to enhance drift detection by learning normal configuration drift patterns, identifying anomalies that indicate security incidents or misconfigurations, and prioritizing remediation events by estimated business impact. Predictive models could forecast which drifts are likely to cause outages or cost spikes, enabling preemptive action and smarter alert routing. However, ML outputs must be interpretable and auditable to satisfy compliance needs, which means human-in-the-loop approaches and clear provenance will remain essential.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What Are the Challenges of Drift Detection in Multi-Cloud Environments?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Multi-cloud drift detection faces inconsistent resource models, varying provider APIs, and fragmented state backends that complicate unified observability and reconciliation. Teams must contend with naming conventions, different lifecycle semantics, and cross-cloud orchestration that make single-pane-of-glass detection difficult without abstraction layers. Recommended mitigations include standardized state handling, adopting cross-cloud abstractions where practical, and using tools that normalize resource models to present consistent alerts and remediation options across providers. These strategies reduce semantic gaps and make drift management more predictable across complex environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion: Managing Drift in Terraform
&lt;/h3&gt;

&lt;p&gt;Effectively managing Terraform drift is crucial for maintaining security, compliance, and operational efficiency in cloud environments. By implementing robust detection and remediation strategies, teams can minimize risks associated with configuration discrepancies and ensure their infrastructure remains aligned with desired states. Embracing tools like env zero can streamline these processes, providing automated workflows and governance that enhance overall performance. Start optimizing your drift detection and remediation today by &lt;a href="https://www.env0.com/demo-request" rel="noopener noreferrer"&gt;requesting a demo of env zero&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>devchallenge</category>
      <category>infrastructureascode</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Protect Secrets and Passwords with Ansible Vault: A Practical Guide with Examples</title>
      <dc:creator>env zero Team</dc:creator>
      <pubDate>Wed, 19 Mar 2025 12:17:47 +0000</pubDate>
      <link>https://forem.com/envzero/protect-secrets-and-passwords-with-ansible-vault-a-practical-guide-with-examples-5ael</link>
      <guid>https://forem.com/envzero/protect-secrets-and-passwords-with-ansible-vault-a-practical-guide-with-examples-5ael</guid>
      <description>&lt;p&gt;Configuration as Code helps teams manage infrastructure efficiently by automating repetitive tasks and improving reliability. However, it also brings new challenges—managing secrets securely is one of the most critical. Without proper handling, sensitive data like API keys, passwords, and certificates can be exposed, creating security risks.&lt;/p&gt;

&lt;p&gt;Protecting secrets while maintaining automation benefits is essential. For instance, expecting operators to manually input secrets during an automated process is both impractical and error-prone. In this post, we explore Ansible Vault, a powerful tool that secures sensitive data without disrupting DevOps workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Ansible Vault
&lt;/h2&gt;

&lt;p&gt;Ansible Vault is an encryption tool included with &lt;a href="https://www.env0.com/blog/the-ultimate-ansible-tutorial-a-step-by-step-guide" rel="noopener noreferrer"&gt;Ansible&lt;/a&gt; that protects secrets while enabling DevOps workflows. In this article, we will take a comprehensive look at Ansible Vault to understand its use cases and features.&lt;/p&gt;

&lt;p&gt;Ansible Vault is a utility included in Ansible that can encrypt and decrypt arbitrary data using a password. It can encrypt a variety of data using AES256 encryption, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Structured YAML files, such as variable files or even entire playbooks&lt;/li&gt;
&lt;li&gt;  Configuration files with sensitive information&lt;/li&gt;
&lt;li&gt;  Individual variables in &lt;a href="https://www.env0.com/blog/mastering-ansible-playbooks-step-by-step-guide" rel="noopener noreferrer"&gt;Ansible playbooks&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
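
&lt;p&gt;For example, a single variable can be encrypted in place with the &lt;code&gt;encrypt_string&lt;/code&gt; subcommand (the variable name and value here are illustrative, and the ciphertext is elided):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-vault encrypt_string 'SuperS3cretP@ssword!' --name 'api_key'
New Vault password:
Confirm New Vault password:
api_key: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;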

&lt;p&gt;Most importantly, Ansible Vault integrates transparently with other Ansible commands, such as &lt;code&gt;ansible&lt;/code&gt; and &lt;code&gt;ansible-playbook&lt;/code&gt;. These commands can automatically detect and decrypt encrypted data to use in standard Ansible workflows. &lt;/p&gt;

&lt;p&gt;For example, an Ansible playbook can reference &lt;a href="https://www.env0.com/blog/mastering-ansible-variables-practical-guide-with-examples" rel="noopener noreferrer"&gt;variables&lt;/a&gt; stored in an encrypted variable file. These files will automatically be decrypted at runtime using the appropriate password.&lt;/p&gt;
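
&lt;p&gt;For instance, a playbook that loads an encrypted variable file can be run by supplying the vault password interactively or from a protected file (the file names here are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Prompt for the vault password at runtime
$ ansible-playbook site.yml -e @vars.yaml --ask-vault-pass

# Or read it from a file kept outside version control
$ ansible-playbook site.yml -e @vars.yaml --vault-password-file ~/.vault_pass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;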

&lt;h2&gt;
  
  
  Password Protection and Encryption
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Creating an Encrypted File
&lt;/h3&gt;

&lt;p&gt;At its most basic level, Ansible Vault can encrypt entire files with a password. For example, consider a situation where you want to store an API key as a variable in a YAML file. You can create the initial vault encrypted file using &lt;code&gt;ansible-vault create&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-vault create vars.yaml
New Vault password:
Confirm New Vault password:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The file will open in your default editor, as controlled by the &lt;code&gt;EDITOR&lt;/code&gt; environment variable. If no editor is set, it defaults to &lt;code&gt;vi&lt;/code&gt;. Create the file like any other text file and save it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;api_key: SuperS3cretP@ssword!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, when you try to view the file, notice that it is an encrypted blob:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat vars.yaml
$ANSIBLE_VAULT;1.1;AES256
623032643130663663653534363538663564363130386635343238346163613764626537356530376235623834353237333531336530623863356530376563650a356638666338633036396331323061356232323833333034313031346562353139663637663261646430353535663232373365373937656338346665313932370a39303537643361663662383965393065386165653266316464386330626464666130346136666239663632643636356133633134323766613062643266333231
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Viewing and Editing Encrypted Files
&lt;/h3&gt;

&lt;p&gt;You will frequently need to view the contents of vault-encrypted files or edit them directly. You can do this with the &lt;code&gt;view&lt;/code&gt; and &lt;code&gt;edit&lt;/code&gt; commands.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;view&lt;/code&gt; command displays the contents of the encrypted file:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-vault view vars.yaml
Vault password:
api_key: SuperS3cretP@ssword!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;edit&lt;/code&gt; command launches an editor to modify the encrypted file using the shell’s default editor:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-vault edit vars.yaml
Vault password:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Decrypting an Encrypted File
&lt;/h3&gt;

&lt;p&gt;Sometimes, you may want to fully decrypt a file. For example, you may determine that the data is no longer sensitive and doesn’t need to be protected as a secret.&lt;/p&gt;

&lt;p&gt;You can fully decrypt a file using the &lt;code&gt;decrypt&lt;/code&gt; command, which replaces the encrypted file with its plaintext contents:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-vault decrypt vars.yaml
Vault password:
Decryption successful

$ cat vars.yaml
api_key: SuperS3cretP@ssword!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Encrypting an Existing File
&lt;/h3&gt;

&lt;p&gt;Configuration as Code is frequently used to transfer sensitive configuration files to remote hosts. Ansible Vault can fully encrypt an existing file. For example, consider a configuration file with sensitive information in it:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat config.yaml
auth_host: auth.example.com
auth_username: admin
auth_password: Sens1tiveP@ssw0rd123!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Ansible Vault can encrypt this entire file using a password with the &lt;code&gt;encrypt&lt;/code&gt; command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-vault encrypt config.yaml
New Vault password:
Confirm New Vault password:
Encryption successful
$ cat config.yaml
$ANSIBLE_VAULT;1.1;AES256
613232306362363036643062303239613361333661613037663061666335326161323436656465303565336264326434636662366565326564353134316630660a373763393839376338333434633836346438313666633663636265376237386330336364396331636365633138393537376538613032333635386364636334630a643161333365386636363963316666336132383630643232383231313737353834346266393032313231386432633433333739633966363033643364633761343866316261333430633630383862396238303961366133613738613834356535396231383637616664356162393135623563336566623135656434656336616265306465323632393962633664373636363433313132316339653431376563353362383032616165306561393833376331646139373634373865343064353635
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Changing the Encryption Key
&lt;/h3&gt;

&lt;p&gt;It’s a security best practice to regularly rotate encryption material. This principle also applies to the passwords used to protect files encrypted with Ansible Vault.&lt;/p&gt;

&lt;p&gt;Ansible Vault makes it easy to rotate the encryption key for a file using the &lt;code&gt;rekey&lt;/code&gt; command. Simply provide it with the existing password and a new password:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-vault rekey config.yaml
Vault password:
New Vault password:
Confirm New Vault password:
Rekey successful
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Encrypting Variables in a Playbook
&lt;/h3&gt;

&lt;p&gt;The examples we have looked at so far encrypt entire files. This is the most common way to use Ansible Vault. However, there are also situations where you want to encrypt data within a playbook while leaving the rest of the playbook unencrypted.&lt;/p&gt;

&lt;p&gt;Ansible Vault enables this pattern with the &lt;code&gt;encrypt_string&lt;/code&gt; command. You can use &lt;code&gt;encrypt_string&lt;/code&gt; to encrypt the contents of an arbitrary string and then place these contents in a playbook.&lt;/p&gt;

&lt;p&gt;For example, consider a playbook that makes an HTTP request to a password-protected API endpoint:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat playbook.yaml
---
- hosts: all
  tasks:
    - name: Make HTTP request
      ansible.builtin.uri:
        url: https://api.example.com
        user: apiUser
        password: S3cretK3y123
        method: POST
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We want to protect this password using Ansible Vault without encrypting the entire file. You can use &lt;code&gt;encrypt_string&lt;/code&gt; to encrypt just the password:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-vault encrypt_string
New Vault password:
Confirm New Vault password:
Reading plaintext input from stdin. (ctrl-d to end input, twice if your content does not already have a newline)
S3cretK3y123
Encryption successful
!vault |
        $ANSIBLE_VAULT;1.1;AES256
        323831396237393231666163633337663564613866393234636136663064343766646634383530376135303536636265613331383131343562343165643364380a386235636639393838363235633562656464396632373835666536343465613064316236303433356138343036303531323763366362316561653331306162330a3964396430373662623365346434393335356430336333346262623964633638
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notice that &lt;code&gt;encrypt_string&lt;/code&gt; lets you type the string you want to encrypt directly into the terminal. End the input with &lt;code&gt;CTRL+D&lt;/code&gt; (pressed twice if your input does not already end with a newline), not with the Enter key: any newlines you enter in the terminal become part of the encrypted string.&lt;/p&gt;
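&lt;p&gt;One way to sidestep the newline pitfall, assuming your shell has &lt;code&gt;printf&lt;/code&gt; and you keep the vault password in a file (the filename here is hypothetical), is to pipe the secret into &lt;code&gt;encrypt_string&lt;/code&gt; and name the variable with &lt;code&gt;--stdin-name&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ printf '%s' 'S3cretK3y123' | ansible-vault encrypt_string --vault-password-file password.txt --stdin-name api_password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because &lt;code&gt;printf '%s'&lt;/code&gt; emits no trailing newline, the encrypted value contains exactly the secret, and the output is a ready-to-paste &lt;code&gt;api_password: !vault |&lt;/code&gt; block.&lt;/p&gt;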

&lt;p&gt;Finally, you can insert this encrypted string directly into the playbook:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat playbook.yaml
---
- hosts: all
  tasks:
    - name: Make HTTP request
      ansible.builtin.uri:
        url: https://api.example.com
        user: apiUser
        password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          323831396237393231666163633337663564613866393234636136663064343766646634383530376135303536636265613331383131343562343165643364380a386235636639393838363235633562656464396632373835666536343465613064316236303433356138343036303531323763366362316561653331306162330a3964396430373662623365346434393335356430336333346262623964633638
        method: POST
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This approach is useful for very basic playbooks, but it has limitations. You must encrypt each string individually, which can be tedious and time-consuming. Additionally, there is no way to easily rekey all of the encrypted strings in a file. Instead, you must re-encrypt each string.&lt;/p&gt;

&lt;p&gt;A better approach in most scenarios is to use a fully encrypted variable file and limit the use of &lt;code&gt;encrypt_string&lt;/code&gt;. However, using &lt;code&gt;encrypt_string&lt;/code&gt; can be helpful in very simple playbooks that don’t require the overhead of fully encrypted variable files.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running Ansible Plays with Encrypted Files
&lt;/h3&gt;

&lt;p&gt;By now, you should have a good understanding of how to use Ansible Vault to create and manipulate encrypted files and data. Ansible integrates transparently with Ansible Vault and allows you to use encrypted files and variables within your plays. Ansible automatically decrypts the encrypted data using the password that you provide.&lt;/p&gt;

&lt;p&gt;To illustrate these principles, you can use a playbook that contains both encrypted variables and fully encrypted files:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat playbook.yaml
---
- hosts: localhost
  vars:
    api_user: serviceAccountUser
    api_key: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      613838633038643139333530633066333034316239366263626539306130646431313664643132396639353465663932626432636239383331623631643833370a396231343231336132363832353761333061323637383962323363336166323537643738323739323766653831313063626336656535613135663364386336360a62313966323932373162646330376436353761633737316637376436663437363339656535653234376566393536383231373432656462383461376162623035
  tasks:
    - name: Make HTTP request
      ansible.builtin.uri:
        url: http://localhost:3000
        user: "{{ api_user }}"
        password: "{{ api_key }}"
        method: POST

    - name: Transfer encrypted configuration file
      ansible.builtin.copy:
        src: config.yaml
        dest: /etc/application/config.yaml
        owner: root
        group: root
        mode: "0600"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This playbook performs two tasks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Makes an HTTP request using the encrypted &lt;code&gt;api_key&lt;/code&gt; variable&lt;/li&gt;
&lt;li&gt; Copies an encrypted configuration file to the remote host. This configuration file is created using &lt;code&gt;ansible-vault create&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Next, it’s time to run the playbook. Ansible will automatically recognize data encrypted by Ansible Vault and decrypt it at the appropriate time. Ansible relies on a password to make this work. There are three main methods for providing Ansible with a password for decrypting protected data:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Provide the password manually via the command line using &lt;code&gt;--ask-vault-pass&lt;/code&gt;. This is inappropriate for automated scenarios but works well when testing.&lt;/li&gt;
&lt;li&gt; Reference a password file that contains the password with &lt;code&gt;--vault-password-file&lt;/code&gt;. This is the most common approach. This file should be carefully locked down to prevent exposing the password.&lt;/li&gt;
&lt;li&gt; Look up the password using a script specified by &lt;code&gt;--vault-password-file&lt;/code&gt;. This is an advanced approach that is very useful when you want Ansible to interact with an external secret storage system, such as AWS Secrets Manager.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For this example, we will use a password file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat password.txt
demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With everything in place, it’s time to run Ansible:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-playbook playbook.yaml --vault-password-file password.txt
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'

PLAY [localhost] ***********************************************************************************

TASK [Gathering Facts] *****************************************************************************
ok: [localhost]

TASK [Make HTTP request] ***************************************************************************
ok: [localhost]

TASK [Transfer encrypted configuration file] *******************************************************
changed: [localhost]

PLAY RECAP *****************************************************************************************
localhost               : ok=3  changed=1   unreachable=0   failed=0    skipped=0   rescued=0   ignored=0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notice that the &lt;code&gt;ansible-playbook&lt;/code&gt; command transparently decrypts the necessary files and variables. This makes it easy to begin using encrypted secrets without disrupting existing automation workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced Use Cases
&lt;/h2&gt;

&lt;p&gt;The basic features we have covered so far are enough for most scenarios. However, Ansible Vault also features several advanced usage patterns that are helpful for more complex environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Managing Multiple Vaults
&lt;/h3&gt;

&lt;p&gt;Large environments frequently use multiple sets of secrets with different permission levels. For example, different secrets may be used for staging and production infrastructure. Operators may also choose to encrypt different files and variables using separate passwords.&lt;/p&gt;

&lt;p&gt;Ansible Vault allows operators to work with multiple vaults, each uniquely identified by a vault ID. Vault IDs provide a “hint” to indicate the correct password to use when decrypting a vault file.&lt;/p&gt;

&lt;p&gt;For example, consider a situation where you want to encrypt two different configuration files with different passwords. Ansible Vault enables this with the &lt;code&gt;--vault-id&lt;/code&gt; flag. This flag takes its argument in the format of ID@SOURCE, where “ID” is the vault ID to use, and “SOURCE” is the location to find the vault password.&lt;/p&gt;

&lt;p&gt;For example, you can encrypt two different files with different passwords provided via the command line prompt:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-vault create --vault-id config1@prompt config_file_1.yaml
New vault password (config1):
Confirm new vault password (config1):

$ ansible-vault create --vault-id config2@prompt config_file_2.yaml
New vault password (config2):
Confirm new vault password (config2):
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, you can tell &lt;code&gt;ansible-playbook&lt;/code&gt; about the appropriate place to obtain the password for each vault using the &lt;code&gt;--vault-id&lt;/code&gt; flag. Notice that the password for the vault with ID “config1” is given at the prompt, while the password for the vault with ID “config2” is provided through a password file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-playbook playbook.yaml --vault-id config1@prompt --vault-id config2@vault-2-password.txt
Vault password (config1):
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'

PLAY [localhost] ***********************************************************************************

TASK [Gathering Facts] *****************************************************************************
ok: [localhost]

TASK [Copy config file 1] **************************************************************************
changed: [localhost]

TASK [Copy config file 2] **************************************************************************
changed: [localhost]

PLAY RECAP *****************************************************************************************
localhost               : ok=3  changed=2   unreachable=0   failed=0    skipped=0   rescued=0   ignored=0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This approach provides great flexibility when using multiple passwords. However, it can quickly become complicated. You should still try to maintain simplicity when designing your password strategy.&lt;/p&gt;

&lt;p&gt;It’s important to understand that vault IDs are just hints. Vault ID matching is not strictly enforced by Ansible unless you set the &lt;code&gt;DEFAULT_VAULT_ID_MATCH&lt;/code&gt; &lt;a href="https://docs.ansible.com/ansible/latest/reference_appendices/config.html#default-vault-id-match" rel="noopener noreferrer"&gt;configuration option&lt;/a&gt;. Ansible will try all provided passwords against all provided vaults until decryption succeeds or every password fails.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integrating with a Secrets Manager
&lt;/h3&gt;

&lt;p&gt;Storing an Ansible Vault password in a text file or entering it on the command line is appropriate for basic use cases, but advanced environments typically use an external secrets storage solution, such as HashiCorp Vault, AWS Secrets Manager, or an in-house system.&lt;/p&gt;

&lt;p&gt;Ansible makes it very easy to look up the decryption password for a vault with a &lt;a href="https://docs.ansible.com/ansible/latest/vault_guide/vault_managing_passwords.html#storing-passwords-in-third-party-tools-with-vault-password-client-scripts" rel="noopener noreferrer"&gt;client script&lt;/a&gt;. Client scripts can perform whatever logic is necessary to look up a vault’s password, including interacting with external secret managers. The script then prints the password to standard output, and Ansible uses this password to decrypt the Vault.&lt;/p&gt;
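&lt;p&gt;As a sketch of what such a client script might contain, the following (hypothetical) script retrieves the vault password from AWS Secrets Manager and prints it to standard output; the secret name and AWS CLI configuration are assumptions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh
# Hypothetical client script: print the vault password to standard output.
# Assumes the AWS CLI is configured and a secret named "ansible/vault-password" exists.
aws secretsmanager get-secret-value \
    --secret-id ansible/vault-password \
    --query SecretString \
    --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Remember to make the script executable (&lt;code&gt;chmod +x&lt;/code&gt;); Ansible runs executable files passed to &lt;code&gt;--vault-password-file&lt;/code&gt; and reads their standard output.&lt;/p&gt;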

&lt;p&gt;The example below uses the &lt;code&gt;aws-secrets-manager-client.sh&lt;/code&gt; script to look up a vault password. The actual contents and logic of this script aren’t important; all that matters is that the script prints the password to standard output for Ansible to use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-playbook playbook.yaml --extra-vars @vars.yaml --vault-password-file aws-secrets-manager-client.sh
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'

PLAY [localhost] ***********************************************************************************

TASK [Gathering Facts] *****************************************************************************
ok: [localhost]

TASK [Print encrypted variable] ********************************************************************
ok: [localhost] =&amp;gt; {
    "msg": "Test123"
}

PLAY RECAP *****************************************************************************************
localhost               : ok=2  changed=0   unreachable=0   failed=0    skipped=0   rescued=0   ignored=0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Using a client script provides the best of both worlds: You can encrypt files and variables in your Ansible playbooks and Configuration as Code repositories, and also integrate with external secrets managers to keep your Ansible Vault passwords safe.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Tips
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Version Control
&lt;/h3&gt;

&lt;p&gt;A significant benefit of the Ansible Vault approach is the ability to store encrypted files and variables directly in your version control system. This avoids the need to store data in multiple places, providing you with a single source of truth for your Configuration as Code.&lt;/p&gt;

&lt;p&gt;However, you must still ensure that any sensitive information, such as the password file for Ansible Vault, is stored outside of the version control system.&lt;/p&gt;
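&lt;p&gt;For example, if the vault password lives in a file named &lt;code&gt;password.txt&lt;/code&gt; inside the repository, as in the examples above, a &lt;code&gt;.gitignore&lt;/code&gt; entry keeps it from being committed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# never commit the Ansible Vault password file
password.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;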

&lt;h3&gt;
  
  
  Environment Variables and Ansible Configuration
&lt;/h3&gt;

&lt;p&gt;There are several Ansible configuration directives related to Ansible Vault. While you don’t need to know all of them, it is helpful to mention the most common directives that you will encounter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;DEFAULT_VAULT_ID_MATCH&lt;/code&gt;: Controls vault ID matching. By default, Ansible does not enforce strict ID matching and will try every password against every vault. Set this directive to &lt;code&gt;True&lt;/code&gt; to change this behavior.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;DEFAULT_VAULT_PASSWORD_FILE&lt;/code&gt;: Specifies the default vault password file.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;DEFAULT_VAULT_IDENTITY_LIST&lt;/code&gt;: Equivalent to specifying multiple &lt;code&gt;--vault-id&lt;/code&gt; flags; useful for shortening your &lt;code&gt;ansible-playbook&lt;/code&gt; commands.&lt;/li&gt;
&lt;/ul&gt;
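&lt;p&gt;Each of these directives can also be set via its corresponding environment variable. A sketch, with hypothetical paths:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# hypothetical shell configuration; adjust the paths to your environment
export ANSIBLE_VAULT_PASSWORD_FILE=~/.ansible/vault-password.txt
export ANSIBLE_VAULT_IDENTITY_LIST="config1@prompt,config2@~/.ansible/vault-2-password.txt"
export ANSIBLE_VAULT_ID_MATCH=True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;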

&lt;h3&gt;
  
  
  Understanding When Files Are Decrypted
&lt;/h3&gt;

&lt;p&gt;Ansible decrypts files as needed when running plays, and the files remain encrypted at rest once the play has completed. However, a file will be placed on the target host in decrypted form when it is used as the &lt;code&gt;src&lt;/code&gt; argument to the copy, template, unarchive, script, or assemble modules.&lt;/p&gt;

&lt;p&gt;This is intended and desirable behavior. It allows you to decrypt a file and place it on a remote host. For example, you can reference an encrypted configuration file in Ansible’s copy module and it will be decrypted and placed onto a remote host.&lt;/p&gt;

&lt;p&gt;While this behavior may seem obvious, it's important to understand the scenarios when Ansible will leave a file decrypted at rest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Ansible with env0
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://env0.com" rel="noopener noreferrer"&gt;env0&lt;/a&gt; includes native support for Ansible, enabling you to use your existing playbooks alongside its infrastructure lifecycle management capabilities. With Ansible templates, you can consistently deploy environments while leveraging env0's features like controlled access, cost estimation, and automated deployment flows. Learn more &lt;a href="https://docs.env0.com/docs/ansible" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Protecting sensitive data while still enabling Configuration as Code best practices is a challenge for DevOps teams of any size. It is one of the earliest challenges faced by organizations when they automate their configuration management practices, and it continues to challenge mature teams. A robust approach must be flexible enough to preserve velocity without compromising on security.&lt;/p&gt;

&lt;p&gt;Ansible Vault is an ideal solution for teams that already leverage Ansible in their automation workflows. It has a very low entry barrier, and its basic and advanced features make it appropriate for a variety of scenarios. Simple environments can benefit from encrypted vaults with basic password authentication. Advanced environments with complex needs can tier their vaults using vault IDs and externally stored vault passwords with frequent rotation.&lt;/p&gt;

&lt;p&gt;Ansible Vault is a simple utility that offers robust secrets protection in a variety of scenarios. It is an important component of any Ansible environment’s security posture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Q. What is Ansible Vault?
&lt;/h4&gt;

&lt;p&gt;Ansible Vault is a utility for encrypting secrets. Secrets can include variables inside Ansible playbooks, external variable files, or even arbitrary data. Ansible Vault integrates transparently with other Ansible tools, such as the &lt;code&gt;ansible-playbook&lt;/code&gt; command. These Ansible tools can automatically decrypt and use secrets in playbooks and Ansible commands.&lt;/p&gt;

&lt;h4&gt;
  
  
  Q. Is Ansible Vault just for encrypting Ansible playbooks?
&lt;/h4&gt;

&lt;p&gt;No. Ansible Vault can also encrypt arbitrary files, such as sensitive configuration files. Ansible can automatically decrypt these files as necessary, such as when they are transferred to a remote host.&lt;/p&gt;

&lt;h4&gt;
  
  
  Q. How is Ansible Vault different from an external secrets manager, such as AWS Secrets Manager?
&lt;/h4&gt;

&lt;p&gt;Ansible Vault is built directly into Ansible and doesn’t require any additional modules or external infrastructure to work. It directly encrypts files using a shared key. External secrets managers exist outside of Ansible and require their own configuration, tooling, and modules to work with Ansible.&lt;/p&gt;

&lt;h4&gt;
  
  
  Q. Can you integrate Ansible Vault with an external secrets manager?
&lt;/h4&gt;

&lt;p&gt;Yes. Ansible Vault uses a shared secret password to encrypt and decrypt secrets. This password can be stored in an external secrets manager. Ansible can look up this password in an external secrets manager at runtime using a &lt;a href="https://docs.ansible.com/ansible/latest/vault_guide/vault_managing_passwords.html#storing-passwords-in-third-party-tools-with-vault-password-client-scripts" rel="noopener noreferrer"&gt;client script.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>infrastructureascode</category>
      <category>ansible</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>.gitignore Command Guide: Practical Examples and Terraform Tips</title>
      <dc:creator>env zero Team</dc:creator>
      <pubDate>Wed, 26 Feb 2025 14:21:08 +0000</pubDate>
      <link>https://forem.com/envzero/gitignore-command-guide-practical-examples-and-terraform-tips-4cj4</link>
      <guid>https://forem.com/envzero/gitignore-command-guide-practical-examples-and-terraform-tips-4cj4</guid>
      <description>&lt;p&gt;Git is practically a universal standard when it comes to source code version control. Git allows you to track changes to all files in your software development project.&lt;/p&gt;

&lt;p&gt;However, Git is not suitable for tracking all file types. Generally speaking, you should not track large binary files, files containing secrets and sensitive information, or any file that is generated as part of a build process and can be easily reproduced.&lt;/p&gt;

&lt;p&gt;To avoid tracking certain types of files and directories, Git employs &lt;strong&gt;.gitignore&lt;/strong&gt; files. In this blog post we will learn about this file type and how it works.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is a .gitignore file?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Most developers today are familiar with Git and use it for source code version control. In general, you want to include most of your files in your Git repository. Files that you want to keep a historical record of are called tracked files. However, not all files in a directory should be tracked.&lt;/p&gt;

&lt;p&gt;Typical examples of files you do not want to track with Git are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Binary files&lt;/strong&gt; - for instance, Terraform provider binaries&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Secrets and sensitive files&lt;/strong&gt; - files containing passwords or personal identifying information&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Dependencies&lt;/strong&gt; - such as the &lt;strong&gt;.terraform&lt;/strong&gt; directory for Terraform projects&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Temporary files&lt;/strong&gt; - files that briefly exist during execution of a specific tool&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These files can end up in your repository, so we need a way to make sure Git ignores them. This is what the &lt;strong&gt;.gitignore&lt;/strong&gt; file is for; it contains one or more patterns that tell Git which files and directories to ignore.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Syntax of .gitignore&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;.gitignore&lt;/strong&gt; file can contain blank lines, comments and patterns. Blank lines have no meaning but can be used to separate patterns from one another.&lt;/p&gt;

&lt;p&gt;A comment starts with a &lt;code&gt;#&lt;/code&gt; and must be placed at the beginning of a row:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# this is a comment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Any non-blank row that is not a comment is considered a pattern, and any files or directories that match a pattern are ignored by Git.&lt;/p&gt;

&lt;p&gt;A pattern with a &lt;code&gt;/&lt;/code&gt; at the beginning or in the middle is anchored: it matches only relative to the directory containing the &lt;strong&gt;.gitignore&lt;/strong&gt; file. A pattern with no &lt;code&gt;/&lt;/code&gt;, or only a trailing &lt;code&gt;/&lt;/code&gt;, matches files and directories at any depth below the &lt;strong&gt;.gitignore&lt;/strong&gt; file. A trailing &lt;code&gt;/&lt;/code&gt; additionally restricts the pattern to directories.&lt;/p&gt;
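&lt;p&gt;A minimal sketch of the difference, using hypothetical directory names:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# anchored: matches only a build directory next to this .gitignore file
/build/

# unanchored: matches directories named logs at any depth below this file
logs/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;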

&lt;p&gt;In the following examples, we assume that the file is in the root folder of the repository. &lt;/p&gt;

&lt;p&gt;The simplest patterns are exact matches:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ignore files and directories named file.txt or directory1 anywhere in the repository
file.txt
directory1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To specifically match a directory, add a &lt;code&gt;/&lt;/code&gt; at the end of the pattern:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ignore directories named directory1 anywhere in the repository
directory1/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can use wildcards &lt;code&gt;*&lt;/code&gt; to match parts of a file or directory name:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ignore all .txt files in a directory named docs in the root of the repository
docs/*.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can negate a pattern by adding a &lt;code&gt;!&lt;/code&gt; at the beginning of the row. This allows you to negate patterns defined earlier in the file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ignore .txt files in docs/, except for a file named important.txt
docs/*.txt
!docs/important.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Two consecutive asterisks &lt;code&gt;**&lt;/code&gt; can be used in several ways:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ignore anything inside a directory named docs in the root of the repository
docs/**

# ignore files and directories named docs anywhere in the repository
**/docs

# ignore .txt files in any subdirectory of a directory named docs in the root of the repository
docs/**/*.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can use a &lt;code&gt;?&lt;/code&gt; to match a single character except for &lt;code&gt;/&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ignore e.g. test1.txt, test2.txt, etc but not test.txt or testing.txt
test?.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, you can match a single character from a specific set or range using square brackets:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ignore test1.txt or test2.txt but not test3.txt or test4.txt etc
test[1-2].txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;How to create a .gitignore file?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;.gitignore&lt;/strong&gt; file is a simple text file. It is automatically recognized by Git, so once you have created the file it will be used by your Git client.&lt;/p&gt;

&lt;p&gt;Create the file and add the patterns you want to ignore.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Where to place .gitignore?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You can place &lt;strong&gt;.gitignore&lt;/strong&gt; files anywhere in your repository. Each file applies to the directory where it is located and all of its subdirectories, but does not affect parent directories. Your repository can contain any number of &lt;strong&gt;.gitignore&lt;/strong&gt; files.&lt;/p&gt;

&lt;p&gt;A best practice is to use a single file placed in the root of your repository. This file will apply to all files in all subdirectories. Keeping a single file simplifies matters for everyone using the repository and makes troubleshooting issues easier.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Use cases for .gitignore&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;What you choose to avoid tracking in Git will vary depending on the content of your repository and any framework or tools that you use. In the following subsections, we cover a few common use cases for this file.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Ignoring credential files&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you add temporary credentials to any file in your repository, you should immediately update &lt;strong&gt;.gitignore&lt;/strong&gt; to include these files. It is a common occurrence to add credentials to a file and then accidentally commit and push the credentials to your version control system.&lt;/p&gt;

&lt;p&gt;If you have a file named &lt;strong&gt;credentials.txt&lt;/strong&gt; you can add the following pattern to ignore it:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ignore credentials files
credentials.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you commonly use a specific name for local credentials you can add them to your personal git ignore rules, see &lt;a href="https://www.env0.com/blog/gitignore-command-guide-practical-examples-and-terraform-tips#advanced-uses-of-gitignore" rel="noopener noreferrer"&gt;Global .gitignore files&lt;/a&gt; later in this blog post.&lt;/p&gt;

&lt;p&gt;If you work with SSH certificates you might also want to include patterns for the types of certificates you work with, an example is for PEM files:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ignore .pem and .pfx files anywhere in the repository
*.pem
*.pfx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Ignoring Terraform provider binaries&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Terraform downloads provider binaries and external modules to a directory named &lt;strong&gt;.terraform&lt;/strong&gt;. This directory should not be tracked by Git: the provider binaries are large, and both binaries and modules can be downloaded again at any time with &lt;code&gt;terraform init&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Add the following pattern to ignore all &lt;strong&gt;.terraform&lt;/strong&gt; directories:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**/.terraform/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;See &lt;a href="https://www.env0.com/blog/gitignore-command-guide-practical-examples-and-terraform-tips#example-of-using-gitignore-with-terraform" rel="noopener noreferrer"&gt;Example of using .gitignore with Terraform&lt;/a&gt; later in this blog post for a full example for Terraform.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Ignoring IDE specific files&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you use VS Code as your IDE you will most likely want to ignore the &lt;strong&gt;.vscode&lt;/strong&gt; directory where user-specific settings for VS Code are stored. There are similar directories for other IDEs. Sample patterns for VS Code and JetBrains IDEs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# VS Code settings directory
.vscode/
# JetBrains settings directory
.idea/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;There will be other similar directories or files if you use different IDEs or editors.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Advanced uses of .gitignore&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Using &lt;strong&gt;.gitignore&lt;/strong&gt; is generally simple, but there are a few advanced uses you should be aware of.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Global .gitignore files&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You can create a global &lt;strong&gt;.gitignore&lt;/strong&gt; file on your local system. The patterns defined in this file will be active for all Git repositories on your system.&lt;/p&gt;

&lt;p&gt;A good use case for a global file is to ignore specific personal files that you commonly add to your repositories but do not want to track with Git. Examples are credential files or log files.&lt;/p&gt;

&lt;p&gt;Create the file somewhere on your local system and add the patterns for the files and directories you wish to ignore. A good location to store the file is in your home directory. &lt;/p&gt;
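&lt;p&gt;For example, assuming the file lives at &lt;strong&gt;~/.gitignore&lt;/strong&gt; (the patterns below are placeholders), you could create it like this:&lt;/p&gt;

```shell
# add example patterns to a global ignore file in your home directory
printf '%s\n' '*.log' 'credentials.txt' >> ~/.gitignore
```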

&lt;p&gt;Next, configure your Git client to use the global file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git config --global core.excludesFile ~/.gitignore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Override .gitignore patterns&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Even if a file or directory is ignored, you can still force Git to track it by using the &lt;code&gt;--force&lt;/code&gt; or &lt;code&gt;-f&lt;/code&gt; flag in the &lt;code&gt;git add&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;Given the following ignored patterns:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ignore all .log files
*.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you still want to track a file named &lt;strong&gt;output.log&lt;/strong&gt; you can use the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git add --force output.log
$ git commit -m "Add output log file"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Ignoring a file you are currently tracking with git&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you want to stop tracking a file you are currently tracking, you need to first delete the file from Git but make sure to keep it in your working directory:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git rm --cached my-file.txt
rm 'my-file.txt'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, add a pattern for the file to &lt;strong&gt;.gitignore&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# the rest of the .gitignore file is omitted
my-file.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;What to do when .gitignore is not working?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you are experiencing issues where Git seems to ignore files that you did not expect, there are a few things you can do.&lt;/p&gt;

&lt;p&gt;First, if possible, make sure you are only using a single &lt;strong&gt;.gitignore&lt;/strong&gt; placed in the root of your repository. This simplifies troubleshooting and makes it much easier for everyone working in the repository to understand which files are ignored. You might discover that there are additional files in use, and their patterns are causing your issue.&lt;/p&gt;
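&lt;p&gt;To verify this, you can list every ignore file from the repository root. This is a quick sketch using standard tooling; it skips Git's internal metadata directory:&lt;/p&gt;

```shell
# list all .gitignore files in the repository, excluding .git itself
find . -name .gitignore -not -path './.git/*'
```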

&lt;p&gt;Next, check if you are using a global &lt;strong&gt;.gitignore&lt;/strong&gt; file. You can list your current Git config and check if the &lt;code&gt;core.excludesFile&lt;/code&gt; configuration is set:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git config --list | grep -i core.excludesFile
core.excludesfile=/path/to/global/.gitignore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can also check your global Git config file located in your home directory. If you see the following section configured in the file, you have a global &lt;strong&gt;.gitignore&lt;/strong&gt; file in use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[core]
    excludesFile = /path/to/global/.gitignore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you are still experiencing issues with files being ignored by Git that you did not expect, you can use the command &lt;code&gt;git check-ignore&lt;/code&gt; to see why this is.&lt;/p&gt;

&lt;p&gt;To see why a file named &lt;strong&gt;test1.txt&lt;/strong&gt; is ignored by Git, run the following in your repository:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git check-ignore -v test1.txt
.gitignore:1:test[1-2].txt test1.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output shows that &lt;strong&gt;test1.txt&lt;/strong&gt; is ignored due to the pattern defined on line 1 of the file, and the specific pattern is &lt;code&gt;test[1-2].txt&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Example of using .gitignore with Terraform&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Terraform creates a number of files and directories that you generally do not want to track with Git.&lt;/p&gt;

&lt;p&gt;When you run &lt;code&gt;terraform init&lt;/code&gt;, Terraform creates a directory named &lt;strong&gt;.terraform&lt;/strong&gt; where it downloads provider binaries and external modules. These are easily downloaded each time you need them and should not be tracked by Git. To ignore the &lt;strong&gt;.terraform&lt;/strong&gt; directory, add the following pattern:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**/.terraform/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This pattern ignores all &lt;strong&gt;.terraform&lt;/strong&gt; directories no matter where in your repository they are located.&lt;/p&gt;

&lt;p&gt;The Terraform state file might contain secret values and resource attributes that you do not want to include in Git. To ignore the state file and any state backup file, include the following pattern:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.tfstate
.tfstate.*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you are using Terraform variable files you will likely want to add these to &lt;strong&gt;.gitignore&lt;/strong&gt; as well, since they might also contain sensitive values. Add the following pattern to ignore variable files:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.tfvars
.tfvars.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you only use HCL variable files you can remove the &lt;strong&gt;*.tfvars.json&lt;/strong&gt; pattern.&lt;/p&gt;

&lt;p&gt;Other things you might want to ignore include override files, Terraform plan files (plan files have no standard naming convention, so you need to define your own), crash log files, and Terraform CLI configuration files.&lt;/p&gt;
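&lt;p&gt;Since plan files have no standard naming convention, one hypothetical team convention is to always write plans with a &lt;strong&gt;.tfplan&lt;/strong&gt; suffix and ignore that suffix:&lt;/p&gt;

```
# hypothetical convention: plans are written with terraform plan -out=example.tfplan
*.tfplan
```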

&lt;p&gt;A full example for Terraform looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .terraform (provider binaries, external module source code, …)
**/.terraform/*
# state files
*.tfstate
*.tfstate.*

# Crash log files (if you run Terraform locally)
crash.log
crash.*.log

# Variable files
*.tfvars
*.tfvars.json

# Override files (HCL or JSON, depending on your configuration)
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Terraform CLI configuration files
.terraformrc
terraform.rc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Final thoughts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Most software development projects will require the use of &lt;strong&gt;.gitignore&lt;/strong&gt;. You want to avoid tracking large binary files, secrets or sensitive information, as well as any build output that can easily be reproduced. Keeping these files outside of Git makes your repository faster and more secure.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Frequently Asked Questions&lt;/strong&gt;
&lt;/h2&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Q. How can I troubleshoot a .gitignore file?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The easiest way to troubleshoot issues with ignored patterns is to use the &lt;code&gt;git check-ignore&lt;/code&gt; command. This command takes a path name (file name or directory), and if there is a match in a &lt;strong&gt;.gitignore&lt;/strong&gt; file in your repository it will show where the pattern is defined.&lt;/p&gt;

&lt;p&gt;Add the &lt;code&gt;-v&lt;/code&gt; or &lt;code&gt;--verbose&lt;/code&gt; flag to output file names and row numbers where the matching pattern is found.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git check-ignore -v my-pattern.txt
.gitignore:1:*.log docs/debug.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Q. Where can I find common example .gitignore files?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;GitHub keeps a repository with a large number of &lt;strong&gt;.gitignore&lt;/strong&gt; template files specifically for different languages, tools and frameworks. You can use these files as a starting point for your own projects.&lt;/p&gt;

&lt;p&gt;The repository is found &lt;a href="https://github.com/github/gitignore" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Q. How do I ignore a folder using .gitignore?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;To match a specific directory anywhere in your repository, add a trailing &lt;code&gt;/&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ignore directories named .terraform anywhere in your repository
.terraform/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can also specifically ignore a single directory by using the full path to the directory relative to the &lt;strong&gt;.gitignore&lt;/strong&gt; file:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ignore the directory infrastructure/.terraform&lt;br&gt;
infrastructure/.terraform&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Q. What should be included in .gitignore for Terraform?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In general, a &lt;strong&gt;.gitignore&lt;/strong&gt; file for Terraform should ignore a number of things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Your state file &lt;strong&gt;terraform.tfstate&lt;/strong&gt; and any state backup files&lt;/li&gt;
&lt;li&gt;  The &lt;strong&gt;.terraform&lt;/strong&gt; directory where provider binaries and external modules are downloaded&lt;/li&gt;
&lt;li&gt;  Terraform variable files &lt;strong&gt;*.tfvars&lt;/strong&gt; and &lt;strong&gt;*.tfvars.json&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;  Terraform CLI configuration files &lt;strong&gt;.terraformrc&lt;/strong&gt; and &lt;strong&gt;terraform.rc&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You might also want to include crash log files (&lt;strong&gt;crash.log&lt;/strong&gt; and &lt;strong&gt;crash.*.log&lt;/strong&gt;) and Terraform override files if you use these (&lt;strong&gt;override.tf&lt;/strong&gt; files).&lt;/p&gt;

&lt;p&gt;Finally, if you output the result of &lt;code&gt;terraform plan&lt;/code&gt; to a file, you should exclude it from Git. Plan output has no default naming convention, so this depends on what you name your plan files.&lt;/p&gt;


</description>
      <category>devops</category>
      <category>infrastructureascode</category>
      <category>terraform</category>
      <category>git</category>
    </item>
    <item>
      <title>Mastering Ansible Variables: Practical Guide with Examples</title>
      <dc:creator>env zero Team</dc:creator>
      <pubDate>Tue, 18 Feb 2025 15:35:14 +0000</pubDate>
      <link>https://forem.com/envzero/mastering-ansible-variables-practical-guide-with-examples-1nfj</link>
      <guid>https://forem.com/envzero/mastering-ansible-variables-practical-guide-with-examples-1nfj</guid>
      <description>&lt;p&gt;Ansible variables enable you to manage differences between systems, environments, and configurations, making your automation tasks more streamlined and adaptable. In this blog post, we dive into how these variables can be best utilized, through a series of step-by-step guides and practical examples.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Are Ansible Variables?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Ansible variables are dynamic components that allow for reusability in playbooks and roles, enhancing the efficiency of configurations. &lt;/p&gt;

&lt;p&gt;By using variables, you can make your Ansible projects adaptable to different environments and scenarios, allowing for more robust automation and efficient configurations.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why Use Variables?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Dynamic configurations:&lt;/strong&gt; Variables allow you to define values that can change based on the environment or context, such as different server IPs, usernames, or package versions&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Simplified management:&lt;/strong&gt; By using variables, you can manage complex configurations more easily, as changes need to be made in only one place&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Enhanced readability:&lt;/strong&gt; Meaningful variable names make playbooks more understandable and maintainable&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Variable Naming Rules&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.env0.com/blog/the-ultimate-ansible-tutorial-a-step-by-step-guide?__hstc=17958374.b2ee9bbfe3d51038fb8197daca529d3d.1739458725561.1739804446594.1739885469329.6&amp;amp;__hssc=17958374.3.1739885469329&amp;amp;__hsfp=3435076530" rel="noopener noreferrer"&gt;Ansible&lt;/a&gt; enforces &lt;a href="https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#creating-valid-variable-names" rel="noopener noreferrer"&gt;specific rules for variable names&lt;/a&gt; to ensure consistency and prevent conflicts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Start with a letter or underscore:&lt;/strong&gt; Variable names must begin with a letter (a-z, A-Z) or an underscore (_)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Allowed characters:&lt;/strong&gt; Subsequent characters can include letters, numbers (0-9), and underscores&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Avoid reserved words:&lt;/strong&gt; Do not use reserved words from Python or Ansible's playbook keywords as variable names&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Examples of valid variable names:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server_port
_user_name
backup_interval_7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Examples of invalid variable names:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1st_user   # Starts with a number
user-name  # Contains a hyphen
backup@time # Contains an invalid character '@'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Adhering to these naming conventions is a good start, but you also need to give your variables meaningful names so anyone reading your &lt;a href="https://www.env0.com/blog/mastering-ansible-playbooks-step-by-step-guide?__hstc=17958374.b2ee9bbfe3d51038fb8197daca529d3d.1739458725561.1739804446594.1739885469329.6&amp;amp;__hssc=17958374.3.1739885469329&amp;amp;__hsfp=3435076530" rel="noopener noreferrer"&gt;Ansible playbooks&lt;/a&gt; will understand what they are for.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Where Can You Use Ansible Variables?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Ansible provides multiple ways to define and use variables, each tailored to specific use cases. Below is a detailed explanation of the various methods, with examples to illustrate their practical applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Defining Variables in Playbooks&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Defining variables directly in a playbook is the simplest and most accessible method. This approach is useful when you want to tightly couple variables with a specific playbook or perform quick tests. Variables declared in this manner are scoped to the playbook and cannot be reused elsewhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Playbook example:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Example Playbook
  hosts: all
  vars:
    app_name: "MyApp"
    app_port: 8080
  tasks:
    - debug:
        msg: "Deploying {{ app_name }} on port {{ app_port }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  The &lt;code&gt;vars&lt;/code&gt; section defines &lt;code&gt;app_name&lt;/code&gt; and &lt;code&gt;app_port&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;  The variables are accessed in tasks using the Jinja2 syntax &lt;code&gt;{{ variable_name }}&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This method keeps the playbook self-contained but can become unwieldy if the number of variables increases.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Defining Variables in Inventory Files&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Inventory files allow you to associate variables with specific hosts or groups of hosts. This method is ideal for defining system-specific or environment-specific values without altering playbooks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example inventory file (hosts):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[webservers]
web1 ansible_host=192.168.1.10 app_port=8080
web2 ansible_host=192.168.1.11 app_port=8081

[databases]
db1 ansible_host=192.168.1.20 db_name=prod_db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Host-specific variables like &lt;code&gt;app_port&lt;/code&gt; and &lt;code&gt;db_name&lt;/code&gt; are defined alongside each host&lt;/li&gt;
&lt;li&gt;  Groupings such as [webservers] and [databases] help categorize hosts by role&lt;/li&gt;
&lt;li&gt;  These variables can be directly accessed in playbooks targeting these hosts or groups &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This method ensures configuration flexibility across different environments or roles.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Defining Variables in Separate Variable Files&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Storing variables in separate files is a best practice for larger projects. This keeps the playbooks clean and allows variables to be reused across multiple playbooks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Variable file (vars/main.yml):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app_name: "MyApp"
app_port: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Playbook example:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Include Variables from File
  hosts: all
  vars_files:
    - vars/main.yml
  tasks:
    - debug:
        msg: "The app name is {{ app_name }} running on port {{ app_port }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  The &lt;strong&gt;vars_files&lt;/strong&gt; directive includes external variable files, making the playbook easier to read and maintain&lt;/li&gt;
&lt;li&gt;  Variable files can follow specific naming conventions for better organization, such as &lt;strong&gt;vars/dev.yml&lt;/strong&gt; for development or &lt;strong&gt;vars/prod.yml&lt;/strong&gt; for production&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach supports modularity and scalability in managing configurations.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Group and Host Variables&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Group and host variables are stored in designated directories (&lt;strong&gt;group_vars&lt;/strong&gt; and &lt;strong&gt;host_vars&lt;/strong&gt;) and automatically applied to their respective groups or hosts. This structure is highly scalable and ensures consistent configurations across environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Group variables (group_vars/webservers.yml):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app_name: "WebApp"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Host variables (host_vars/web1.yml):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app_port: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Variables in &lt;strong&gt;group_vars&lt;/strong&gt; apply to all hosts in the group, such as webservers&lt;/li&gt;
&lt;li&gt;  Variables in &lt;strong&gt;host_vars&lt;/strong&gt; override group variables for a specific host, such as web1 &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach makes managing variables for complex inventories simpler and enforces a clear separation of concerns.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;5. Defining Variables at Runtime&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;-e&lt;/code&gt; or &lt;code&gt;--extra-vars&lt;/code&gt; option lets you define variables dynamically during playbook execution. This is especially useful for ad-hoc configurations or testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example command:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook playbook.yml -e "app_port=9090 app_name='DynamicApp'"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Variables passed at runtime override other variable definitions in the hierarchy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This method is convenient for temporarily changing settings without editing files. It is, however, less suitable for configurations that need to be reused consistently.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;6. JSON and YAML Files for Runtime Variables&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When working with a large number of variables, JSON or YAML files provide a structured way to pass variables at runtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JSON example (vars.json):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "app_name": "DynamicApp",
  "app_port": 9090
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;YAML example (vars.yml):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app_name: "DynamicApp"
app_port: 9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Passing the file:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook playbook.yml --extra-vars @vars.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook playbook.yml --extra-vars @vars.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The structured format improves readability and minimizes errors when passing multiple variables. This method is excellent for managing complex variable sets with special characters.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Types of Ansible Variables&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Ansible supports a wide range of variable types to handle diverse data structures and use cases. Here's an in-depth look at the variable types, along with detailed examples to clarify their applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Simple Variables&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Simple variables store a single value, such as a string, number, or boolean. These are ideal for straightforward configurations where each variable represents a single property or setting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Examples:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app_name: "SimpleApp"   # A string value
max_retries: 5          # An integer value
debug_mode: true        # A boolean value
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Simple variables work well for settings like application names (&lt;code&gt;app_name&lt;/code&gt;), numeric parameters (&lt;code&gt;max_retries&lt;/code&gt;), or flags (&lt;code&gt;debug_mode&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;  These variables are easy to define and reference, making them suitable for straightforward configurations&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Complex Variables&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;These variables allow for more sophisticated configurations by supporting lists, dictionaries, and nested data structures. These are essential for managing interconnected or hierarchical data.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;a. Lists&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Lists are used to define an ordered collection of items. Each item in the list can represent a related piece of information, such as versions, IP addresses, or tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;supported_versions:
  - 1.0
  - 1.1
  - 2.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Lists are perfect for scenarios where you need to iterate over multiple values, such as supported application versions or a list of servers&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;b. Dictionaries&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Dictionaries, also called maps or hashes, define key-value pairs. They are ideal for grouping related configurations into a single structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;database_config:
  host: "db.example.com"
  port: 5432
  username: "db_user"
  password: "secure_password"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Dictionaries work well for organizing structured data like database configurations (&lt;strong&gt;database_config&lt;/strong&gt;), where each key-value pair represents a specific setting&lt;/li&gt;
&lt;li&gt;  You can easily reference individual elements, such as &lt;strong&gt;database_config.host&lt;/strong&gt; which uses the dot notation&lt;/li&gt;
&lt;/ul&gt;
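&lt;p&gt;As a brief sketch, a task can reference individual keys of the &lt;strong&gt;database_config&lt;/strong&gt; dictionary above using either dot or bracket notation (the message text is illustrative):&lt;/p&gt;

```yaml
- debug:
    msg: "Connecting to {{ database_config.host }} on port {{ database_config['port'] }}"
```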

&lt;h4&gt;
  
  
  &lt;strong&gt;c. Nested variables&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;These variables combine lists and dictionaries to model more complex relationships, such as defining multiple servers with unique attributes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;servers:
  - name: web1
    ip: 192.168.1.10
  - name: web2
    ip: 192.168.1.11
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Nested variables are invaluable for scenarios involving groups of objects, such as multiple servers, where each object has its own properties (name, ip)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Referencing nested variables:&lt;/strong&gt; Nested variables can be accessed using dot notation or bracket notation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example in a task:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- debug:
    msg: "{{ servers[0].name }} has IP {{ servers[0].ip }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;servers[0].name&lt;/code&gt; retrieves the name of the first server in the list (web1)&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;servers[0].ip&lt;/code&gt; retrieves the ip of the first server (192.168.1.10)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allows you to dynamically access and use specific elements of nested data structures.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Special Variables in Ansible&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Ansible's special variables are predefined and offer insights into the system data, inventory, or execution context of a playbook or role. These variables are categorized into magic variables, connection variables, and facts, each serving a specific purpose. &lt;/p&gt;

&lt;p&gt;It’s crucial to note that these variable names are reserved by Ansible and cannot be redefined. Below, we explore these categories in detail.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Magic Variables&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Ansible automatically creates magic variables to reflect its internal state. These variables cannot be altered by users but can be accessed directly to retrieve useful information about the playbook’s execution and environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Example: Using inventory_hostname&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;This magic variable represents the name of the current host in the inventory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Playbook example:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Echo playbook
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Echo inventory_hostname
      ansible.builtin.debug:
        msg:
          - "Hello from Ansible playbook!"
          - "This is running on {{ inventory_hostname }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PLAY [Echo playbook] *********************************************************
TASK [Echo inventory_hostname] ***********************************************
ok: [localhost] =&amp;gt; {
    "msg": [
        "Hello from Ansible playbook!",
        "This is running on localhost"
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Other essential magic variables&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;hostvars:&lt;/strong&gt; Provides information about other hosts in the inventory, including their associated variables

&lt;ul&gt;
&lt;li&gt;  Example: &lt;code&gt;hostvars['web1'].ansible_host&lt;/code&gt; retrieves the '&lt;code&gt;ansible_host&lt;/code&gt;' variable for the &lt;strong&gt;web1&lt;/strong&gt; host&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;group_names:&lt;/strong&gt; Contains a list of group names to which the current host belongs

&lt;ul&gt;
&lt;li&gt;  Example: &lt;code&gt;group_names&lt;/code&gt; helps identify the roles or purposes assigned to the current host&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;groups:&lt;/strong&gt; Maps each group name to the list of hosts in that group

&lt;ul&gt;
&lt;li&gt;  Example: &lt;code&gt;groups['webservers']&lt;/code&gt; returns all hosts in the webservers group&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These variables are indispensable for dynamic inventory management and playbook flexibility. You can reference the list in &lt;a href="https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html#magic-variables" rel="noopener noreferrer"&gt;the documentation here.&lt;/a&gt;&lt;/p&gt;
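&lt;p&gt;Combining these magic variables is a common pattern. As a hedged sketch (it assumes your inventory defines a &lt;code&gt;webservers&lt;/code&gt; group and, optionally, an &lt;code&gt;ansible_host&lt;/code&gt; per host), the task below lists every webserver with its address:&lt;/p&gt;

```yaml
# Sketch: assumes the inventory defines a 'webservers' group
- name: List every webserver and its address
  ansible.builtin.debug:
    msg: "{{ item }} -> {{ hostvars[item]['ansible_host'] | default('no ansible_host set') }}"
  loop: "{{ groups['webservers'] }}"
```

&lt;p&gt;Here &lt;code&gt;groups&lt;/code&gt; supplies the host names and &lt;code&gt;hostvars&lt;/code&gt; looks up each host's variables, so the task works for any inventory size.&lt;/p&gt;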
&lt;h3&gt;
  
  
  &lt;strong&gt;Connection Variables&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Connection variables control how Ansible connects to remote hosts during playbook execution. They define the connection type, user, and other related settings.&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Example: Using ansible_connection&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;ansible_connection&lt;/code&gt; variable indicates the connection type (e.g., SSH, local, or winrm).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Playbook example:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Echo message on localhost
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    message: "Hello from Ansible playbook on localhost!"
  tasks:
    - name: Echo message and connection type
      ansible.builtin.shell: "echo '{{ message }}' ; echo 'Connection type: {{ ansible_connection }}'"
      register: echo_output
    - name: Display output
      ansible.builtin.debug:
        msg: "{{ echo_output.stdout_lines }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  The connection: local directive specifies that the playbook should run locally rather than connecting to a remote host&lt;/li&gt;
&lt;li&gt;  The &lt;code&gt;ansible_connection&lt;/code&gt; variable dynamically identifies the connection type (local in this case)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Sample output:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TASK [Echo message and connection type] ***************************************
ok: [localhost] =&amp;gt; {
    "msg": [
        "Hello from Ansible playbook on localhost!",
        "Connection type: local"
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Ansible Facts&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Ansible facts are a collection of data automatically gathered about remote systems during playbook execution. &lt;/p&gt;

&lt;p&gt;They provide detailed information about the system, such as operating system details, network interfaces, disk configurations, and more. Facts are stored in the &lt;code&gt;ansible_facts&lt;/code&gt; variable, which can be used in tasks, conditionals, and templates.&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Key features of Ansible facts:&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Automatic collection:&lt;/strong&gt; Facts are gathered at the start of each play by default&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Access and scope:&lt;/strong&gt; Facts are available in the &lt;code&gt;ansible_facts&lt;/code&gt; dictionary and can also be accessed as top-level variables with the &lt;code&gt;ansible_&lt;/code&gt; prefix
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Customization:&lt;/strong&gt; Users can add custom facts or disable fact gathering if not needed&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Example: Viewing facts&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;To see all available facts for a host, add this task to a playbook:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Print all available facts
  ansible.builtin.debug:
    var: ansible_facts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Alternatively, you can gather raw fact data from the command line:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible &amp;lt;hostname&amp;gt; -m ansible.builtin.setup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Using facts in playbooks&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Facts allow you to dynamically configure tasks based on system attributes. For example, you can retrieve the hostname and default IPv4 address of a system:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Playbook example:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Display system facts
  hosts: all
  tasks:
    - name: Show hostname and default IPv4
      ansible.builtin.debug:
        msg: &amp;gt;
          The system {{ ansible_facts['nodename'] }} has IP {{ ansible_facts['default_ipv4']['address'] }}.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;ansible_facts['nodename']&lt;/code&gt;: Retrieves the system hostname&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;ansible_facts['default_ipv4']['address']&lt;/code&gt;: Retrieves the default IPv4 address&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Sample output:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TASK [Show hostname and default IPv4] *****************************************
ok: [host1] =&amp;gt; {
    "msg": "The system host1 has IP 192.168.1.10."
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Common use cases for facts&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Dynamic Configurations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Use facts to configure tasks dynamically, such as installing packages based on the OS family:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Install package based on OS
  ansible.builtin.package:
    name: "{{ 'httpd' if ansible_facts['os_family'] == 'RedHat' else 'apache2' }}"
    state: present
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Conditionals and filters:&lt;/strong&gt; Use facts to apply conditional logic, such as running tasks only on systems with specific attributes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom facts:&lt;/strong&gt; Custom facts allow you to extend Ansible's default capabilities by defining your own facts specific to your needs. These can be either static facts (defined in files) or dynamic facts (generated by scripts). Custom facts are stored in the ansible_local namespace to avoid conflicts with system facts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example: Adding a static fact&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a file for the fact:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  On the remote host, create a directory &lt;strong&gt;/etc/ansible/facts.d&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add a file named &lt;strong&gt;custom.fact&lt;/strong&gt; with content in INI format:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[general]
app_version = 1.2.3
environment = production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Access the fact in a playbook:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Show custom facts
  ansible.builtin.debug:
    msg: &amp;gt;
      App Version: {{ ansible_local['custom']['general']['app_version'] }},
      Environment: {{ ansible_local['custom']['general']['environment'] }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Example: Adding a dynamic fact&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a script for the fact:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Place an executable script (e.g., &lt;code&gt;generate_fact.sh&lt;/code&gt;) in &lt;strong&gt;/etc/ansible/facts.d&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;  The script should output JSON:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    #!/bin/bash
    echo '{"dynamic_fact": {"user_count": 10, "status": "active"}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access the fact in a playbook:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Display dynamic fact
  ansible.builtin.debug:
    msg: "User Count: {{ ansible_local['dynamic_fact']['user_count'] }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Best Practices with Facts&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Caching Facts:&lt;/strong&gt; Use fact caching to improve performance in large environments or repetitive tasks.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Disabling Facts:&lt;/strong&gt; Turn off fact gathering for better scalability if you don’t need system details as in the playbook below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- hosts: all
  gather_facts: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;

&lt;/ul&gt;
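&lt;p&gt;When fact caching is enabled, &lt;code&gt;set_fact&lt;/code&gt; can also feed the cache. A minimal sketch (the path is illustrative, and it assumes a fact-cache backend is configured in &lt;code&gt;ansible.cfg&lt;/code&gt;):&lt;/p&gt;

```yaml
# Sketch: with fact caching enabled, a cacheable set_fact persists
# across playbook runs like a gathered fact
- name: Record a computed value in the fact cache
  ansible.builtin.set_fact:
    app_root: /opt/myapp   # illustrative value
    cacheable: true
```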

&lt;h2&gt;
  
  
  &lt;strong&gt;Variable Precedence&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Because variables can be defined in various locations, Ansible uses a precedence hierarchy to decide which value takes effect. Below is the entire list as &lt;a href="https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#understanding-variable-precedence" rel="noopener noreferrer"&gt;defined in the documentation&lt;/a&gt;, with the least precedence at the top (the last listed variables override all other variables):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; command line values (for example, -u my_user, these are not variables)&lt;/li&gt;
&lt;li&gt; role defaults&lt;/li&gt;
&lt;li&gt; inventory file or script group vars&lt;/li&gt;
&lt;li&gt; inventory group_vars/all&lt;/li&gt;
&lt;li&gt; playbook group_vars/all&lt;/li&gt;
&lt;li&gt; inventory group_vars/*&lt;/li&gt;
&lt;li&gt; playbook group_vars/*&lt;/li&gt;
&lt;li&gt; inventory file or script host vars&lt;/li&gt;
&lt;li&gt; inventory host_vars/*&lt;/li&gt;
&lt;li&gt; playbook host_vars/*&lt;/li&gt;
&lt;li&gt; host facts / cached set_facts&lt;/li&gt;
&lt;li&gt; play vars&lt;/li&gt;
&lt;li&gt; play vars_prompt&lt;/li&gt;
&lt;li&gt; play vars_files&lt;/li&gt;
&lt;li&gt; role vars&lt;/li&gt;
&lt;li&gt; block vars (only for tasks in block)&lt;/li&gt;
&lt;li&gt; task vars (only for the task)&lt;/li&gt;
&lt;li&gt; include_vars&lt;/li&gt;
&lt;li&gt; set_facts / registered vars&lt;/li&gt;
&lt;li&gt; role (and include_role) params&lt;/li&gt;
&lt;li&gt; include params&lt;/li&gt;
&lt;li&gt; extra vars (for example, -e "user=my_user")(always win precedence)&lt;/li&gt;
&lt;/ol&gt;
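&lt;p&gt;A quick way to see the hierarchy in action is a play variable overridden by an extra variable, a minimal sketch:&lt;/p&gt;

```yaml
# precedence.yml -- play vars (item 12) lose to extra vars (item 22)
- hosts: localhost
  gather_facts: no
  vars:
    greeting: "from play vars"
  tasks:
    - ansible.builtin.debug:
        msg: "greeting is {{ greeting }}"
```

&lt;p&gt;Running &lt;code&gt;ansible-playbook precedence.yml&lt;/code&gt; prints the play value, while &lt;code&gt;ansible-playbook precedence.yml -e greeting=cli&lt;/code&gt; prints &lt;code&gt;cli&lt;/code&gt;, because extra vars always win.&lt;/p&gt;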

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion and Key Takeaways&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Ansible variables enable scalable, flexible, and reusable automation. By mastering their usage and following best practices, you can enhance your Ansible projects' efficiency and maintainability.&lt;/p&gt;

&lt;p&gt;Key takeaways include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Flexibility:&lt;/strong&gt; Variables adapt your playbooks to different contexts with minimal changes.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Organization:&lt;/strong&gt; Properly organizing and centralizing variables reduces redundancy and simplifies management.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Efficiency:&lt;/strong&gt; Leveraging advanced techniques like combining variables ensures scalability in larger projects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By applying the best practices outlined here, you can make your Ansible projects more robust, maintainable, and easier to collaborate on.&lt;/p&gt;

&lt;p&gt;If you're interested in learning more about Ansible, I recommend these two blog posts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://www.env0.com/blog/the-ultimate-ansible-tutorial-a-step-by-step-guide" rel="noopener noreferrer"&gt;The Ultimate Ansible Tutorial: A Step by Step Guide&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://www.env0.com/blog/mastering-ansible-playbooks-step-by-step-guide" rel="noopener noreferrer"&gt;Mastering Ansible Playbooks: Examples and Step by Step Guide&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why Ansible is Better with env0&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Integrating Ansible with &lt;a href="https://env0.com" rel="noopener noreferrer"&gt;env0&lt;/a&gt; revolutionizes infrastructure management by combining Ansible’s powerful automation capabilities with env0’s advanced orchestration and collaboration features. This integration simplifies workflows, reduces manual effort, and enhances governance.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Key Advantages of env0 Integration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Effortless automation:&lt;/strong&gt; Instead of manually running Ansible commands through the CLI, env0 allows you to directly define and manage environments. This streamlines the deployment process, reducing errors and improving consistency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Seamless template management:&lt;/strong&gt; Use env0 to create and manage environments based on Ansible templates. These templates can specify the Ansible version, SSH keys, and other configurations, ensuring that your deployments adhere to organizational standards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Enhanced GitHub integration:&lt;/strong&gt; Link your env0 environments to your GitHub repository. By specifying the folder where your Ansible playbooks reside, env0 ensures that your scripts and configurations are always accessible and up-to-date.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Simplified variable handling:&lt;/strong&gt; With env0, defining and managing environment variables like ANSIBLE_CLI_inventory becomes straightforward. This enables Ansible to dynamically locate and utilize the correct inventory files for deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Automated execution:&lt;/strong&gt; Once an environment is initiated, env0 handles the entire process: cloning the repository, setting up the working directory, loading variables, and executing the playbooks. This eliminates repetitive tasks and accelerates deployment cycles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Governance and collaboration:&lt;/strong&gt; env0’s built-in RBAC and OPA policy support ensures that deployments remain secure and compliant. Team members can collaborate efficiently with clear access controls and activity tracking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Comprehensive logs and insights:&lt;/strong&gt; Review deployment logs in env0 to verify configurations and monitor playbook execution. This transparency aids troubleshooting and ensures accountability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Multi-framework support:&lt;/strong&gt; Combine Ansible with other IaC tools like Terraform, OpenTofu, Pulumi, or CloudFormation within env0. This flexibility allows teams to use the best tools for specific tasks while maintaining a cohesive workflow.&lt;/p&gt;

&lt;p&gt;By integrating Ansible with env0, teams can achieve greater efficiency, improve collaboration, and maintain strict governance over their infrastructure. &lt;/p&gt;

&lt;p&gt;Whether managing simple configurations or complex deployments, env0 ensures that Ansible users can focus on innovation rather than operational overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Frequently Asked Questions&lt;/strong&gt; 
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Q. How do I specify a variable in Ansible?
&lt;/h4&gt;

&lt;p&gt;Variables can be specified using the vars keyword in a playbook, inventory files, or through external variable files. They can also be passed at runtime using the -e flag.&lt;/p&gt;

&lt;h4&gt;
  
  
  Q. What is {{ item }} in Ansible?
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;{{ item }}&lt;/code&gt; is a placeholder used in loops to reference the current item being iterated over.&lt;/p&gt;
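&lt;p&gt;For example, one task can install several packages by looping, a short sketch:&lt;/p&gt;

```yaml
# {{ item }} takes each list value in turn
- name: Install several packages with one task
  ansible.builtin.package:
    name: "{{ item }}"
    state: present
  loop:
    - git
    - curl
    - vim
```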

&lt;h4&gt;
  
  
  Q. What is the difference between vars_files and include_vars?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;vars_files:&lt;/strong&gt; Used to include external variable files in a playbook&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;include_vars:&lt;/strong&gt; A task that dynamically includes variable files during playbook execution&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Q. How do I use environment variables in Ansible?
&lt;/h4&gt;

&lt;p&gt;Environment variables can be accessed using the ansible_env dictionary. Learn &lt;a href="https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_environment.html" rel="noopener noreferrer"&gt;more in the documentation.&lt;/a&gt;&lt;/p&gt;
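&lt;p&gt;As a small sketch (it requires fact gathering, which populates &lt;code&gt;ansible_env&lt;/code&gt;):&lt;/p&gt;

```yaml
# ansible_env holds the remote user's environment variables
- name: Show the remote user's HOME directory
  ansible.builtin.debug:
    msg: "HOME is {{ ansible_env.HOME }}"
```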

&lt;h4&gt;
  
  
  Q. How do I pass variables to an Ansible playbook?
&lt;/h4&gt;

&lt;p&gt;Variables can be passed using the -e flag, through inventory files, or by including them in variable files referenced in the playbook.&lt;/p&gt;

&lt;h4&gt;
  
  
  Q. What is the order of precedence for Ansible variables?
&lt;/h4&gt;

&lt;p&gt;The order of precedence determines which variable value is used when multiple variables with the same name exist. Extra variables (&lt;code&gt;-e&lt;/code&gt;) have the highest precedence.&lt;/p&gt;

&lt;h4&gt;
  
  
  Q. What is the order of execution in Ansible?
&lt;/h4&gt;

&lt;p&gt;Ansible executes tasks in the order they are listed in the playbook, applying variables and configurations as it progresses.&lt;/p&gt;

&lt;h4&gt;
  
  
  Q. What is the Ansible naming convention for variables?
&lt;/h4&gt;

&lt;p&gt;Variables should start with a letter or underscore, and subsequent characters can include letters, numbers, and underscores. Avoid using reserved words or special characters.&lt;/p&gt;

&lt;h4&gt;
  
  
  Q. How do I set the env variable using Ansible?
&lt;/h4&gt;

&lt;p&gt;Use the environment keyword in tasks to set environment variables for that task.&lt;/p&gt;
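&lt;p&gt;For instance, a task can be given a proxy setting for its duration only, a sketch in which the URLs are illustrative:&lt;/p&gt;

```yaml
# Sketch: the proxy and artifact URLs are illustrative
- name: Download a file behind a proxy
  ansible.builtin.get_url:
    url: https://example.com/artifact.tar.gz
    dest: /tmp/artifact.tar.gz
  environment:
    https_proxy: http://proxy.internal:3128
```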

&lt;h4&gt;
  
  
  Q. How do I assign variables in Ansible?
&lt;/h4&gt;

&lt;p&gt;Assign variables using the vars keyword, in inventory files, or through &lt;code&gt;set_fact&lt;/code&gt; during playbook execution.&lt;/p&gt;
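&lt;p&gt;A minimal &lt;code&gt;set_fact&lt;/code&gt; sketch, assigning a value in one task and reading it in the next:&lt;/p&gt;

```yaml
- name: Assign a variable mid-play
  ansible.builtin.set_fact:
    app_port: 8080

- name: Use it later in the same play
  ansible.builtin.debug:
    msg: "App listens on {{ app_port }}"
```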

&lt;h4&gt;
  
  
  Q. How do I use environment variables in an Ansible playbook?
&lt;/h4&gt;

&lt;p&gt;Access environment variables using the &lt;code&gt;ansible_env&lt;/code&gt; dictionary or pass them explicitly when running the playbook.&lt;/p&gt;

&lt;h4&gt;
  
  
  Q. How do I pass variables to Ansible playbook?
&lt;/h4&gt;

&lt;p&gt;Variables can be passed using the &lt;code&gt;-e&lt;/code&gt; option, included in inventory files, or defined in external files and referenced in the playbook.&lt;/p&gt;

</description>
      <category>infrastructureascode</category>
      <category>devops</category>
      <category>ansible</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Terraform Split and Join Functions: Examples and Best Practices</title>
      <dc:creator>env zero Team</dc:creator>
      <pubDate>Thu, 13 Feb 2025 17:01:11 +0000</pubDate>
      <link>https://forem.com/envzero/terraform-split-and-join-functions-examples-and-best-practices-1p3a</link>
      <guid>https://forem.com/envzero/terraform-split-and-join-functions-examples-and-best-practices-1p3a</guid>
      <description>&lt;p&gt;The HashiCorp Configuration Language (HCL) comes with many built-in &lt;a href="https://www.env0.com/blog/terraform-functions-guide-complete-list-with-examples" rel="noopener noreferrer"&gt;functions&lt;/a&gt;, the major ones including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Numeric functions for working with the number type&lt;/li&gt;
&lt;li&gt;  Collection functions for managing complex data structures&lt;/li&gt;
&lt;li&gt;  Encoding functions for encoding and decoding data&lt;/li&gt;
&lt;li&gt;  String functions for string manipulation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The string functions category contains many functions to manipulate strings in different ways. Two such functions are &lt;code&gt;split&lt;/code&gt; and &lt;code&gt;join&lt;/code&gt;. In this blog post, we will explore these two functions, how they work, and what they are used for.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; All use cases of Terraform &lt;code&gt;join&lt;/code&gt; and &lt;code&gt;split&lt;/code&gt; functions discussed here work similarly in &lt;a href="https://www.env0.com/blog/opentofu-the-open-source-terraform-alternative" rel="noopener noreferrer"&gt;OpenTofu&lt;/a&gt;, the open-source Terraform alternative. However, to keep it simple and familiar for DevOps engineers, we will refer to these as “Terraform split” and “Terraform join” throughout this blog post.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Terraform split Function&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;split&lt;/code&gt; function is a built-in function that takes a string input and a separator that determines where the input string will be divided. The output is a list of strings.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Syntax of the Terraform split function&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The syntax of the Terraform &lt;code&gt;split&lt;/code&gt; function is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;split(separator, string)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;split&lt;/code&gt; function has two arguments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;separator&lt;/code&gt; is what the input string should be split on&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;string&lt;/code&gt; is the value that should be split into a list of strings &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To show how the split function works, here is a basic example using the &lt;code&gt;terraform console&lt;/code&gt; command for splitting a static string:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform console
&amp;gt; split(",", "value1,value2,value3,value4")
tolist([
  "value1",
  "value2",
  "value3",
  "value4",
])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The separator argument does not have to be a single character; it could be an arbitrary string:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform console
&amp;gt; split("---", "value1---value2---value3")
tolist([
  "value1",
  "value2",
  "value3",
])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Use cases for the Terraform split function&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;split&lt;/code&gt; function is used for a variety of use cases involving splitting strings.&lt;/p&gt;

&lt;p&gt;A few of the most common use cases are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Parsing the value of an input variable from a string to an array, e.g., a comma-separated list of subnet names&lt;/li&gt;
&lt;li&gt;  Extracting parts of a string identifier, e.g., an Amazon Resource Name (ARN) string&lt;/li&gt;
&lt;li&gt;  Removing a part of a string value, e.g., a leading "https://"&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Alternatives to the Terraform split function&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;There are a number of alternative built-in functions to the &lt;code&gt;split&lt;/code&gt; function, depending on what your goal is.&lt;/p&gt;

&lt;p&gt;If you want to remove parts of a string you can instead use the &lt;code&gt;replace&lt;/code&gt; function:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform console
&amp;gt; replace(
  "https://docs.env0.com/docs/getting-started",
  "https://",
  "")
"docs.env0.com/docs/getting-started"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;replace&lt;/code&gt; function is more intuitive for this specific use case compared to splitting strings at the substring you want to remove.&lt;/p&gt;

&lt;p&gt;If you want to extract all subnet names from a comma-separated string you could use the &lt;code&gt;regexall&lt;/code&gt; function together with the &lt;code&gt;flatten&lt;/code&gt; function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform console
&amp;gt; flatten(regexall("([a-z0-9-]+)", "subnet-1,subnet-2,subnet-3"))
[
  "subnet-1",
  "subnet-2",
  "subnet-3",
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Terraform split Examples&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let's see some examples to understand how to use this function.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Parsing values of input variables&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let's assume you have a variable in your Terraform configuration that expects a string of subnet names separated by commas. You want to use the subnet names to create AWS subnet resources for each name.&lt;/p&gt;

&lt;p&gt;The variable is defined in your &lt;strong&gt;variables.tf&lt;/strong&gt; file:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "subnet_names" {
  type    = string
  default = "subnet-1,subnet-2,subnet-3"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Use a local value in your &lt;strong&gt;network.tf&lt;/strong&gt; file to get the individual names of the subnets from the input variable with the &lt;code&gt;split&lt;/code&gt; function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  subnet_names = split(",", var.subnet_names)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the same file, define your AWS VPC resource and use the &lt;code&gt;local.subnet_names&lt;/code&gt; value to create a number of subnet resources in the VPC:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "this" {
  cidr_block = "10.100.10.0/24"
}

resource "aws_subnet" "all" {
  count  = length(local.subnet_names)
  vpc_id = aws_vpc.this.id
  # the rest of the attributes are omitted
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Extracting parts of a string identifier&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Resources that you create in Terraform using a given cloud provider have some identifier string in their respective cloud. For AWS, resources have an Amazon Resource Name (ARN).&lt;/p&gt;

&lt;p&gt;An example of an ARN for an AWS EC2 virtual machine is:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;arn:aws:ec2:eu-west-1:123456789012:instance/i-0fb379ac92f436969
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;An ARN is the concatenation of a few different strings to form a unique identifier for the resource. The ARN includes the service name (ec2), the AWS region (eu-west-1), the account ID (123456789012), and more. Each part is separated by a colon.&lt;/p&gt;

&lt;p&gt;Sometimes you need to access one or more string values from an ARN to use elsewhere in your Terraform configuration. This is a great use case for the &lt;code&gt;split&lt;/code&gt; function.&lt;/p&gt;

&lt;p&gt;You can use local values to extract the parts that you are interested in. An example of an AWS EC2 instance is this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "web" {
  # arguments omitted for brevity
}

locals {
  arn = split(":", aws_instance.web.arn)
  region = local.arn[3] # eu-west-1
  account_id = local.arn[4] # 123456789012
  instance_id = local.arn[5] # instance/i-0fb379ac92f436969
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Removing a part of a string value&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You may have an input variable value, or an attribute of a resource or data source, that represents a web address. This string value often starts with "http://" or "https://". However, the place in your Terraform configuration where you want to use this value might expect a value without "http://" or "https://".&lt;/p&gt;

&lt;p&gt;You can use the &lt;code&gt;split&lt;/code&gt; function to remove unwanted parts of a string. Note that if you remove the leading part of a string, the return value of the &lt;code&gt;split&lt;/code&gt; function will include an empty string as the first value, for example:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform console
&amp;gt; split("https://", "https://google.com")
tolist([
  "",
  "google.com",
])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You’ll need to take this into account in your Terraform configuration.&lt;/p&gt;

&lt;p&gt;Here’s an example of taking a full URL and extracting parts of it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  full_url  = "https://docs.env0.com/docs/getting-started"

  # docs.env0.com/docs/getting-started
  first     = split("https://", local.full_url)[1] 

  # docs.env0.com
  subdomain = split("/", local.first)[0]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Terraform join Function&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The Terraform &lt;code&gt;join&lt;/code&gt; function is a built-in function that performs the opposite job of the &lt;code&gt;split&lt;/code&gt; function. The join function is used to combine a list of string values into a single string.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Syntax of the Terraform &lt;code&gt;join&lt;/code&gt; function&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The syntax of the join function in &lt;a href="https://www.env0.com/blog/what-is-terraform-cli" rel="noopener noreferrer"&gt;Terraform CLI&lt;/a&gt; is:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;join(separator, list)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;There are two arguments for the &lt;code&gt;join&lt;/code&gt; function:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;separator&lt;/code&gt; is a string that will be inserted between each element in the list of string values&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;list&lt;/code&gt; is the list of string values that should be joined together&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The output will be a single string made up of the list of input strings, with the specified delimiter in between. You can combine values from different source objects: static strings, resource attributes, input variables, and more.&lt;/p&gt;

&lt;p&gt;Let's use the &lt;code&gt;terraform console&lt;/code&gt; command to show how the &lt;code&gt;join&lt;/code&gt; function works. This is a basic example of joining a static list of strings, i.e., string concatenation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform console
&amp;gt; join(",", ["value1", "value2", "value3", "value4"])
"value1,value2,value3,value4"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The separator argument does not have to be a single character; it could be any string value:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform console
&amp;gt; join("---", ["value1", "value2", "value3"])
"value1---value2---value3"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In fact, the separator could also be an empty string:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform console
&amp;gt; join("", ["value1", "value2", "value3"])
"value1value2value3"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Use cases of the Terraform join function&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A few of the most common use cases for the &lt;code&gt;join&lt;/code&gt; function:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Building URL and file path strings&lt;/li&gt;
&lt;li&gt;  Building resource identifiers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The general use case is to combine data from various sources into a single string.&lt;/p&gt;
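&lt;p&gt;As a sketch of that general use case, the following hypothetical example joins a static prefix, an input variable, and a resource attribute into a single name (the variable and resource here are illustrative, not taken from the examples above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "environment" {
  type    = string
  default = "dev"
}

resource "aws_vpc" "main" {
  # arguments omitted for brevity
}

locals {
  # combines a static string, a variable, and a resource attribute,
  # e.g. "app-dev-vpc-0a1b2c3d" once the VPC ID is known
  name = join("-", ["app", var.environment, aws_vpc.main.id])
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;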
&lt;h3&gt;
  
  
  &lt;strong&gt;Alternatives to the Terraform join function&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The most common alternative to the &lt;code&gt;join&lt;/code&gt; function is string interpolation, where you include other values in a string using the &lt;code&gt;${...}&lt;/code&gt; syntax.&lt;/p&gt;

&lt;p&gt;A simple example of string interpolation in a local value is shown below:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "domain" {
  type = string
  default = "docs.env0.com"
}

variable "path" {
  type = string
  default = "docs/getting-started"
}

locals {
  url = "https://${var.domain}/${var.path}" # https://docs.env0.com/docs/getting-started
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Another alternative to the &lt;code&gt;join&lt;/code&gt; function is the built-in function named &lt;code&gt;format&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform console
&amp;gt; format("%s,%s,%s", "subnet-1", "subnet-2", "subnet-3")
"subnet-1,subnet-2,subnet-3"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;format&lt;/code&gt; function can do much of what the Terraform &lt;code&gt;join&lt;/code&gt; function can do, but it takes a fixed set of arguments rather than a list of any length, so the two functions suit different use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Terraform join Examples&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let's see some examples that illustrate how to use the Terraform &lt;code&gt;join&lt;/code&gt; function.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Build URL and file path strings&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you need to combine multiple separate strings into a file path or URL, the &lt;code&gt;join&lt;/code&gt; function is useful. These strings could be input variables, local values, resource attributes, or static strings.&lt;/p&gt;

&lt;p&gt;Let's assume you have a number of variables representing parts of a website URL that you need to combine to form a full URL:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "domain" {
  type = string
  default = "docs.env0.com"
}

variable "path" {
  type = string
  default = "docs/getting-started"
}

locals {
  # "https:/" uses a single trailing "/" because the separator supplies the second one
  url = join("/", ["https:/", var.domain, var.path])
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A similar use case is to build file paths. Here is an example of using the local provider to read an existing configuration file for an environment based on the value of an input variable:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "environment" {
  type = string
  validation {
    condition     = contains(["dev", "prod"], var.environment)
    error_message = "Use a valid environment name."
  }
}

data "local_file" "config" {
  filename = join("/", [path.module, "config", "${var.environment}.conf"])
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Building resource identifiers&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In a similar manner to how you used the &lt;code&gt;split&lt;/code&gt; function to split an ARN identifier into parts, it is common to also have to build an identifier from its parts. This is a common use case for the &lt;code&gt;join&lt;/code&gt; function.&lt;/p&gt;

&lt;p&gt;In AWS, resources have ARNs; in Azure, each resource has a resource ID, which is a string presented in the following format:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/subscriptions/&amp;lt;guid&amp;gt;/resourceGroups/&amp;lt;name&amp;gt;/providers/&amp;lt;provider&amp;gt;/&amp;lt;type&amp;gt;/&amp;lt;name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;There are a few static strings and a few varying strings, depending on the type of resource that the ID is for.&lt;/p&gt;

&lt;p&gt;It is common for you to have to build these resource IDs from parts, as shown in the following simple example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "storage_account_name" {
  type = string
}

data "azurerm_client_config" "current" {}

resource "azurerm_resource_group" "this" {
  # arguments omitted for brevity
}

locals {
  resource_id = join("/", [
    "/subscriptions",
    data.azurerm_client_config.current.subscription_id,
    "resourceGroups",
    azurerm_resource_group.this.name,
    "providers",
    "Microsoft.Storage",
    "storageAccounts",
    var.storage_account_name
  ])
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Final Thoughts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It is easy to manipulate data in HCL using the many built-in functions.&lt;/p&gt;

&lt;p&gt;String manipulation functions are among the most used functions. The &lt;code&gt;split&lt;/code&gt; and &lt;code&gt;join&lt;/code&gt; functions are heavily used for tasks that require splitting strings into separate parts or combining different strings into a single string.&lt;/p&gt;

&lt;p&gt;A common use case for the &lt;code&gt;join&lt;/code&gt; function is building URLs, file paths, and resource identifiers from static strings, input variables, local values, and resource and data source attributes.&lt;/p&gt;

&lt;p&gt;A common use case for the &lt;code&gt;split&lt;/code&gt; function is to divide URLs, file paths, or resource identifiers to get specific parts from these strings.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;split&lt;/code&gt; and &lt;code&gt;join&lt;/code&gt; functions perform opposite tasks.&lt;/p&gt;
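&lt;p&gt;You can see this inverse relationship in the &lt;code&gt;terraform console&lt;/code&gt;: joining the result of a &lt;code&gt;split&lt;/code&gt; with the same separator returns the original string:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform console
&amp;gt; join(",", split(",", "subnet-1,subnet-2,subnet-3"))
"subnet-1,subnet-2,subnet-3"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;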
&lt;h2&gt;
  
  
  &lt;strong&gt;Frequently Asked Questions&lt;/strong&gt;
&lt;/h2&gt;
&lt;h4&gt;
  
  
  Q. How do you join a variable and a string?
&lt;/h4&gt;

&lt;p&gt;Create a list of all the different objects you want to &lt;code&gt;join&lt;/code&gt; and pass it to the join function:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform console
&amp;gt; join(",", [var.subnet_eu_west_1a, var.subnet_eu_west_1b, "subnet-1"])
"subnet-eu-west-1a,subnet-eu-west-1b,subnet-1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Q. How do you join two lists?
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;join&lt;/code&gt; function accepts only a single list argument, so first merge the lists with the built-in &lt;code&gt;concat&lt;/code&gt; function, then join the result:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform console
&amp;gt; join(",", concat(["subnet-1", "subnet-2"], ["subnet-3", "subnet-4"]))
"subnet-1,subnet-2,subnet-3,subnet-4"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Q. How do you split a string on a substring?
&lt;/h4&gt;

&lt;p&gt;Use the &lt;code&gt;split&lt;/code&gt; function to split a string on a specified separator substring:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform console
&amp;gt; split(",", "subnet-1,subnet-2,subnet-3")
tolist([
  "subnet-1",
  "subnet-2",
  "subnet-3",
])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>infrastructureascode</category>
      <category>terraform</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>DORA Metrics: An Infrastructure as Code Perspective</title>
      <dc:creator>env zero Team</dc:creator>
      <pubDate>Mon, 10 Feb 2025 13:30:00 +0000</pubDate>
      <link>https://forem.com/envzero/dora-metrics-an-infrastructure-as-code-perspective-1fop</link>
      <guid>https://forem.com/envzero/dora-metrics-an-infrastructure-as-code-perspective-1fop</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;What are DORA Metrics&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;DORA metrics are a set of measures that has emerged as a cornerstone for evaluating developer productivity and software delivery performance. By focusing on four key metrics—deployment frequency, lead time for changes, time to restore service, and production change failure rate—the DORA framework provides actionable insights into how engineering teams deliver value and where improvements can drive efficiency and reliability.&lt;/p&gt;

&lt;p&gt;Widely accepted as an industry benchmark, these metrics help improve organizational performance and align engineering efforts with business outcomes, ensuring that teams not only deliver faster but also maintain stability.&lt;/p&gt;

&lt;p&gt;Critically, the effectiveness of DORA metrics hinges on an often-overlooked factor: the infrastructure supporting the application code. After all, without reliable and resilient infrastructure to run on, even the best application code cannot deliver value to the business.&lt;/p&gt;

&lt;p&gt;To accelerate developer productivity and increase application stability, the underlying infrastructure must be self-service, consistent, automated, and resilient. Achieving these qualities requires adopting &lt;a href="https://www.env0.com/blog/infrastructure-as-code-101" rel="noopener noreferrer"&gt;Infrastructure as Code&lt;/a&gt; (&lt;a href="https://www.env0.com/blog/infrastructure-as-code-101" rel="noopener noreferrer"&gt;IaC&lt;/a&gt;) and leveraging a robust IaC management platform.&lt;/p&gt;

&lt;p&gt;This post examines how IaC impacts DORA metrics, highlighting the potential to enhance both throughput and stability with the right tools and practices, such as those offered by env0.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;DORA Metrics Defined&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;DORA metrics, developed by the DevOps Research and Assessment group, are widely recognized for measuring software delivery performance. The framework evaluates four key areas grouped under the categories ‘Throughput’ and ‘Stability’:&lt;/p&gt;

&lt;h4&gt;
  
  
  Throughput Metrics
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Change Lead Time&lt;/strong&gt;:  This metric measures the time between committing a code change and merging it into production. Success in this area depends on seamless CI/CD pipelines, which require reliable infrastructure throughout the testing and deployment lifecycle.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Deployment Frequency&lt;/strong&gt;: Deployment frequency tracks how often new changes are pushed to production. To increase deployment frequency without sacrificing quality, organizations must automate and standardize infrastructure provisioning, ensuring consistent environments.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Stability Metrics
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Change Fail Percentage&lt;/strong&gt;: This represents the proportion of deployments that fail in production. High failure rates often stem from inconsistent or misconfigured infrastructure, emphasizing the need for standardized IaC practices.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Mean Time to Recovery (MTTR)&lt;/strong&gt;: MTTR measures how quickly teams recover from deployment failures. Effective monitoring, coupled with automated rollback capabilities for both application and infrastructure changes, is critical to reducing recovery time.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qyfy25dsbnfkux3vggx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qyfy25dsbnfkux3vggx.png" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These metrics serve as a performance benchmark for organizations aiming to improve engineering teams' practices and deliver value efficiently.&lt;/p&gt;

&lt;p&gt;Although DORA metrics typically focus on application code, they make an implicit assumption that infrastructure is always available, stable, and ready when needed. In reality, infrastructure plays a significant role in determining the success of application deployments, and its limitations can undermine DORA’s utility as a comprehensive measure of productivity.&lt;/p&gt;

&lt;p&gt;For that reason, Infrastructure as Code and its management need to be included in any heuristic assessment of developer productivity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Common Pitfalls
&lt;/h3&gt;

&lt;p&gt;While DORA metrics provide valuable insights, they are not without limitations, nor are they the only way to measure performance. Other attempts to quantify productivity include lines of code committed, commit frequency, and pull request lead time. Each approach attempts to measure a larger phenomenon through a limited set of data points and is prone to skew and manipulation.&lt;/p&gt;

&lt;p&gt;The ultimate goal of any productivity measure is to deliver business value, which DORA does not explicitly measure. The goal of DORA metrics is to correlate developer productivity with positive business outcomes. In much the same way, measuring the stability and resiliency of IaC is an attempt to correlate developer productivity with operational maturity. Despite the flawed nature of DORA metrics (and all other metrics), organizations continue to rely on them for actionable benchmarks, recognizing their potential to prioritize improvements in engineering teams' performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure and Application Code Interaction
&lt;/h3&gt;

&lt;p&gt;Infrastructure can be widely defined as the layer of technology that supports applications but is not directly part of the application code.&lt;/p&gt;

&lt;p&gt;Infrastructure provides the foundation upon which applications can be deployed. Put simply, infrastructure is the layer that developers don't have to manage. However, it still has a large impact on throughput and stability once a developer's code leaves their local development environment.&lt;/p&gt;

&lt;p&gt;Infrastructure and application code are intrinsically linked, with the former providing the foundation for successful deployments and improved deployment frequency. Developers require infrastructure that is scalable, self-service, stable, and secure to meet the demands of the modern software delivery process. Infrastructure as Code (IaC) addresses these requirements by enabling consistency, automation, and scalability across environments.&lt;/p&gt;

&lt;p&gt;By codifying infrastructure definitions, IaC allows teams to provision environments that meet specific development and testing needs. This alignment ensures that infrastructure can support both the speed and stability demanded by DORA metrics, ultimately enhancing developer productivity and reducing risks.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;IaC Impact on DORA Metrics&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;How does Infrastructure as Code specifically impact the four key DORA metrics? We can dig into each metric to see how it's defined and how IaC could positively or negatively impact it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Throughput
&lt;/h3&gt;

&lt;p&gt;Throughput as a category encompasses the speed with which a developer can write, commit, and deploy code successfully to production. It's broken up into the measures of &lt;strong&gt;Change Lead Time&lt;/strong&gt; and &lt;strong&gt;Deployment Frequency&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Change Lead Time&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Change lead time is expressed as the time it takes for a code commit to make its way into a production deployment. In a typical software development lifecycle, committing code to a repository kicks off a series of actions that are required before code can be promoted and packaged for production. This can include the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Code commit: when code is pushed to the repository&lt;/li&gt;
&lt;li&gt; Code review and approval: the time spent reviewing and approving changes&lt;/li&gt;
&lt;li&gt; Continuous integration (CI): the process of building and testing the change&lt;/li&gt;
&lt;li&gt; Deployment: the process of packaging and deploying changes to production&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Infrastructure plays a significant part in the CI and deployment phases. In a typical battery of CI tests, the code is deployed and validated in a testing environment.&lt;/p&gt;

&lt;p&gt;Automated provisioning ensures that testing environments are ready when needed and configured correctly, eliminating delays and mistakes caused by manual setup. IaC also allows for multiple testing environments to be provisioned, so code changes can be tested in parallel.&lt;/p&gt;

&lt;p&gt;During the deployment process, code artifacts typically undergo additional testing in lower environments before being promoted to production. The infrastructure for these lower environments (QA, staging, etc.) should be consistently configured to match each other and production.&lt;/p&gt;

&lt;p&gt;Standardized IaC configurations prevent discrepancies between environments, reducing debugging time and improving deployment confidence. Applications deployed to consistent environments are less likely to fail in unexpected ways due to infrastructure configuration issues.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Deployment Frequency&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Deployment frequency is expressed by how often an organization successfully releases changes to production. A key philosophy in DORA metrics is the ability of teams to deliver small, incremental changes often, rather than large and infrequent deployments.&lt;/p&gt;

&lt;p&gt;To successfully deploy changes frequently, organizations need to have pipelines in place that review code quality, run tests, and promote code through environments with the goal of continuous deployment. Once again, infrastructure heavily influences the ability of development teams to build a robust pipeline.&lt;/p&gt;

&lt;p&gt;Leveraging IaC eliminates manual provisioning steps, enabling faster and more frequent code releases. It also delivers support for automated testing through consistent, IaC-defined environments that can be deployed on demand.&lt;/p&gt;

&lt;p&gt;When it comes to enhancing &lt;em&gt;Throughput&lt;/em&gt;, IaC can assist by being automated, self-service, consistent, and scalable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stability
&lt;/h3&gt;

&lt;p&gt;The second grouping of DORA metrics examines the stability of applications that have been deployed into production. One might think that accelerating throughput would lead to reduced stability, but studies conducted by DORA have found the opposite to be true. Increased throughput tends to lead to greater stability by requiring the introduction of automated pipelines, thorough testing, and streamlined deployment processes.&lt;/p&gt;

&lt;h4&gt;
  
  
  Change Failure Rate
&lt;/h4&gt;

&lt;p&gt;Change failure rate is a measure of how often production deployments fail. A key to reducing change failures is to catch issues earlier on in the development lifecycle.&lt;/p&gt;

&lt;p&gt;IaC ensures that all environments share the same configuration, catching potential issues before they reach production and reducing deployment failures caused by mismatched settings. Best practices like blue/green deployments and phased rollouts, facilitated by IaC, improve system reliability and can catch deployment issues before an outage is caused, thus directly impacting the change failure rate in production.&lt;/p&gt;

&lt;h4&gt;
  
  
  Mean Time to Recovery (MTTR)
&lt;/h4&gt;

&lt;p&gt;Mean Time to Recovery is a measure of how long it takes on average to recover from a deployment failure. Robust and resilient infrastructure cannot help with bad application code, but it can assist with lowering the time it takes to recover from a failed rollout.&lt;/p&gt;

&lt;p&gt;Monitoring tools integrated with IaC platforms can quickly identify failures and trigger automated recovery workflows. If the issue was caused by an infrastructure change, or the deployment required updating the infrastructure configuration, IaC is a vehicle to undo or remediate those issues.&lt;/p&gt;

&lt;p&gt;IaC-driven infrastructure management can also automatically detect and resolve potential issues before they impact production. This could be as simple as automatically scaling components to deal with unforeseen application load, or as complex as detecting failures in the deployment region and automating a failover to another region.&lt;/p&gt;

&lt;p&gt;These examples highlight how IaC directly influences the throughput and stability factors outlined by DORA metrics, underscoring the need for a comprehensive management platform to fully realize its benefits.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;IaC Management with env0&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Infrastructure as Code can have a tremendous impact on the stability and throughput of your developers, but only if it's managed properly. An IaC management platform simplifies and orchestrates the complexities of managing infrastructure at scale. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://env0.com" rel="noopener noreferrer"&gt;Env0&lt;/a&gt; embodies this concept through five key pillars: self-service, governance, automation &amp;amp; orchestration, analytics &amp;amp; monitoring, and cloud asset management. These pillars collectively address the challenges of IaC adoption, ensuring infrastructure meets the needs of modern development teams.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Self-Service&lt;/strong&gt;: Empowering developers to provision and manage infrastructure on-demand reduces lead times and improves productivity.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Governance&lt;/strong&gt;: Policy enforcement ensures consistency and compliance, lowering the risk of failures and enabling cost control.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Automation &amp;amp; Orchestration&lt;/strong&gt;: Streamlined workflows eliminate manual interventions, supporting faster deployments and greater scalability.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Analytics &amp;amp; Monitoring&lt;/strong&gt;: Comprehensive metrics provide visibility into infrastructure performance, enabling proactive issue resolution and optimization.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Cloud Asset Management&lt;/strong&gt;: Ensures that all cloud assets are managed through IaC and assists with assessing risk and detecting potential issues like configuration drift.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By addressing the throughput and stability challenges highlighted in DORA metrics, env0 not only improves development workflows but also delivers measurable business value, such as cost reduction and enhanced customer satisfaction. In the next section, we take a close look at each pillar and how it impacts and improves DORA metrics.&lt;/p&gt;

&lt;p&gt;To provide some visual context on how these are related to DORA metrics, here is a rough idea of how they impact stability and throughput, and how they come into play depending on the organization's IaC 'maturity'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fduhbkwkm0o6d4gpi6d27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fduhbkwkm0o6d4gpi6d27.png" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For instance, the topic of Governance, which represents the balance point of stability and throughput, impacting both, is something that organizations start to consider mid-way into their IaC journey. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Enhancing DORA Metrics with env0&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  IaC Automation &amp;amp; Managed Self-Service
&lt;/h3&gt;

&lt;p&gt;IaC Automation encompasses the necessary tooling and platform to automate and orchestrate the deployment and ongoing management of infrastructure through code. env0 supports automation workflows across multi-cloud environments using OpenTofu, Terraform, Pulumi, CloudFormation, Terragrunt, and Kubernetes. Pipelines built with env0 are flexible and customizable to support complex and unique workflows.&lt;/p&gt;

&lt;p&gt;The customized deployment pipelines along with &lt;a href="https://docs.env0.com/docs/templates" rel="noopener noreferrer"&gt;reusable templates&lt;/a&gt; and shared variables create a golden path developers can leverage when they need to create infrastructure. Developers can control the workflow &lt;a href="https://docs.env0.com/docs/plan-and-apply-from-pr-comments" rel="noopener noreferrer"&gt;through pull requests&lt;/a&gt; and branch merges, so they don't have to leave their native tools to make use of env0's self-service capabilities.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Throughput&lt;/strong&gt;: Automated infrastructure updates and self-service capabilities shorten change lead times and enable faster testing cycles. &lt;a href="https://docs.env0.com/docs/policy-ttl" rel="noopener noreferrer"&gt;Time-to-live (TTL)&lt;/a&gt; features ensure testing environments are available and automatically decommissioned, optimizing resource usage.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Workflows and Parallel Deployments&lt;/strong&gt;: Orchestrated &lt;a href="https://docs.env0.com/docs/workflows" rel="noopener noreferrer"&gt;workflows&lt;/a&gt; and support for &lt;a href="https://docs.env0.com/docs/bulk-operations" rel="noopener noreferrer"&gt;parallel deployments&lt;/a&gt; reduce bottlenecks, enabling faster code promotion to production.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1czywdkczsxeoon1xo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1czywdkczsxeoon1xo9.png" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example: Role-based access managed at the team level&lt;/p&gt;


&lt;p&gt;To learn more about env0's self-service capabilities and how they empower teams with managed IaC workflows, check out this guide: &lt;a href="https://www.env0.com/blog/mastering-managed-iac-self-service-the-complete-guide" rel="noopener noreferrer"&gt;Mastering Managed IaC Self-Service with env0&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Governance
&lt;/h3&gt;

&lt;p&gt;On an IaC Management platform, governance encompasses several aspects including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://docs.env0.com/docs/access" rel="noopener noreferrer"&gt;Access controls&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.env0.com/docs/policies" rel="noopener noreferrer"&gt;Policy enforcement&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.env0.com/docs/drift-detection" rel="noopener noreferrer"&gt;Drift detection and remediation&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.env0.com/docs/cost-monitoring" rel="noopener noreferrer"&gt;Cost controls&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.env0.com/docs/audit-logs" rel="noopener noreferrer"&gt;Auditing and logging&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The use of delegated access and roles empowers developers to manage their own infrastructure, reducing change lead time and increasing deployment frequency. At the same time, properly applied RBAC prevents unauthorized changes in critical environments, reducing outages caused by misapplied updates.&lt;/p&gt;

&lt;p&gt;Enforcing policy helps to ensure compliance with accepted best practices and well-architected designs. This has the impact of reducing production change failure rate, as compliant infrastructure is less likely to fail in well-known ways.&lt;/p&gt;

&lt;p&gt;Watch this video to learn how env0 leverages runtime policies to enhance governance, ensure compliance, and enable secure Terraform deployments:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/LL5GHYG-fp0"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Infrastructure drift is a situation where the actual resources differ from the declared configuration. Drift is often a source of inconsistency between environments, and can lead to failed deployments where production has drifted from what was tested in the lower environments. env0's drift detection helps to surface drift when it occurs and assists in remediating that drift before it causes outages or failed deployments, enhancing the stability metrics.&lt;/p&gt;

&lt;p&gt;Watch this tutorial to see how env0 enables smart drift detection and auto-remediation:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/a2s63tPic28"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;When an outage occurs, logging is critical to determine the cause and apply a remediation. env0 offers extensive logging and auditing capabilities to reduce mean time to resolution and identify root causes to prevent future outages from occurring.&lt;/p&gt;

&lt;p&gt;Although cost control doesn't necessarily improve DORA metrics, it does provide real business value by reducing operational expenditures. This is an area where the DORA metrics fail to paint a complete picture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Analytics &amp;amp; Monitoring
&lt;/h3&gt;

&lt;p&gt;While auditing and logging do play a critical role in governance, they are also part of the larger analytics and monitoring features in env0. &lt;/p&gt;

&lt;p&gt;A key part of the DevOps teams' cycle is to provide feedback to the developers to help them enhance their code. env0 integrates with popular observability tools like Splunk and Datadog to build a holistic view of your software development lifecycle from first deployment to steady-state operation.&lt;/p&gt;

&lt;p&gt;The metrics gathered from testing and production environments can help identify bottlenecks and inform optimization efforts. Solid monitoring and proper analysis can both increase throughput and enhance stability, as long as the right information is available to the development teams. env0 assists in surfacing that information through tools your development team is already utilizing.&lt;/p&gt;

&lt;p&gt;Just as important as gathering &lt;a href="https://docs.env0.com/docs/dashboards" rel="noopener noreferrer"&gt;metrics&lt;/a&gt; and &lt;a href="https://docs.env0.com/docs/audit-logs" rel="noopener noreferrer"&gt;logs&lt;/a&gt; is monitoring the environment for possible issues and outages. While env0 does not replace a traditional monitoring solution, it does have insight into the status of infrastructure deployment and can alert when deployments fail or drift is detected. Both of these factors can enhance stability by reducing MTTR and change failure rate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5hhgafwnuicx75gr13o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5hhgafwnuicx75gr13o.png" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The env0 dashboard offers insights into users, deployments, environments, drifts, and more.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud Asset Management
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.env0.com/docs/cloud-compass" rel="noopener noreferrer"&gt;Cloud Compass&lt;/a&gt;, env0’s Cloud Asset Management capability, aims to bridge the gap between manual and automated cloud operations. It scans your current cloud accounts to identify infrastructure that is not currently being managed by IaC. Not only does it identify unmanaged resources, but it also allows you to take action by automatically generating IaC and importing those resources into env0.&lt;/p&gt;

&lt;p&gt;If you've inherited existing infrastructure from other teams or through a merger, there's a good chance that it hasn't been deployed with infrastructure as code and likely isn't in compliance with your current policies. Cloud Compass can drastically shorten the time it takes to onboard new environments and apply consistent controls, providing all the DORA metric benefits that come from proper IaC Management.&lt;/p&gt;

&lt;p&gt;Additionally, IaC tools can only detect drift in the resources under management. If new, out-of-band resources are introduced through Click-Ops or CLI tools, traditional IaC management is unaware of them. These unknown resources can be the cause of outages and failed deployments, slowing down troubleshooting efforts and presenting an incomplete picture of reality. Cloud Compass discovers these unmanaged resources and can assign risk levels to help operations teams prioritize action.&lt;/p&gt;

&lt;p&gt;To learn more about Cloud Compass and how it identifies unmanaged resources, check out this video:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/wtrYlBwD9P8"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Infrastructure is the backbone of modern application delivery, playing a pivotal role in achieving the continuous improvement goals outlined by DORA metrics. Driving improvement across these four metrics is not only essential for enhancing engineering performance but also a key component of effective DevOps stream management, enabling the development of high-performing teams and the empowerment of elite performers.&lt;/p&gt;

&lt;p&gt;By automating and managing infrastructure with IaC, organizations can address the challenges of throughput and stability, enabling more frequent, reliable, and efficient deployments.&lt;/p&gt;

&lt;p&gt;Platforms like env0 elevate the benefits of IaC by offering tools for self-service, governance, and analytics that optimize infrastructure workflows. For teams aiming to improve developer productivity and deliver measurable business value, env0 serves as an indispensable partner in achieving these objectives.&lt;/p&gt;

&lt;p&gt;To learn how env0 can help your team improve DORA metrics, &lt;a href="https://www.env0.com/demo-request" rel="noopener noreferrer"&gt;schedule a personalized demo&lt;/a&gt; today.&lt;/p&gt;

</description>
      <category>infrastructureascode</category>
      <category>devops</category>
      <category>cicd</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Terraform Refresh Command: Guides, Examples and Best Practices</title>
      <dc:creator>env zero Team</dc:creator>
      <pubDate>Thu, 06 Feb 2025 13:54:44 +0000</pubDate>
      <link>https://forem.com/envzero/terraform-refresh-command-guides-examples-and-best-practices-25ji</link>
      <guid>https://forem.com/envzero/terraform-refresh-command-guides-examples-and-best-practices-25ji</guid>
      <description>&lt;p&gt;Terraform manages the infrastructure resources and deployment using the state file. By running the &lt;code&gt;refresh&lt;/code&gt; command, you can update the state file with the actual infrastructure configuration.&lt;/p&gt;

&lt;p&gt;As of Terraform v0.15.4, the &lt;code&gt;terraform refresh&lt;/code&gt; command is deprecated because its default behavior can be unsafe if you have misconfigured credentials for any of your providers.&lt;/p&gt;

&lt;p&gt;In this blog, we will explore the &lt;code&gt;terraform refresh&lt;/code&gt; command and how it works, and also discuss its limitations and alternatives through the use of practical hands-on examples.    &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; All use cases of the Terraform &lt;code&gt;refresh&lt;/code&gt; command discussed here work similarly in &lt;a href="https://www.env0.com/blog/opentofu-the-open-source-terraform-alternative" rel="noopener noreferrer"&gt;OpenTofu&lt;/a&gt;, the open-source Terraform alternative. However, to keep it simple and familiar for DevOps engineers, we will use “Terraform refresh” as a catch-all term throughout this blog post.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Terraform Refresh&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;terraform refresh&lt;/code&gt; command ensures that your state file reflects the current state of your infrastructure resources. &lt;/p&gt;

&lt;p&gt;Resources managed by Terraform code are sometimes modified using the cloud console, CLI, third-party software APIs, or scripts. When this happens, the current infrastructure configuration no longer matches your Terraform code, creating drift, because the changes were made outside of the regular code-to-cloud CI/CD pipeline.&lt;/p&gt;

&lt;p&gt;One way to resolve these drifts is by running &lt;code&gt;refresh&lt;/code&gt;, which updates the state file with the actual infrastructure configuration. However, this approach comes with several downsides and is generally not considered a best practice due to some of the reasons we’ll describe below. &lt;/p&gt;

&lt;p&gt;Before getting to that, here are some scenarios in which you might need to run the &lt;code&gt;terraform refresh&lt;/code&gt; command.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Syncing the state:&lt;/strong&gt; As mentioned, when the state file is out of sync with the current infrastructure, you can run the &lt;code&gt;refresh&lt;/code&gt; command to reconcile the differences between the state file and its actual state (drifts).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Before running Terraform plan:&lt;/strong&gt;  Running &lt;code&gt;refresh&lt;/code&gt; before &lt;a href="https://www.env0.com/blog/terraform-plan" rel="noopener noreferrer"&gt;&lt;code&gt;terraform plan&lt;/code&gt;&lt;/a&gt; ensures that the plan is based on the most recent resource configuration changes made in your cloud environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The syntax for the &lt;code&gt;terraform refresh&lt;/code&gt; command is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform refresh [options]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And the options could be any of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;-state=&amp;lt;path&amp;gt;&lt;/code&gt;: Specifies the path to the state file. Defaults to &lt;strong&gt;terraform.tfstate&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-state-out=&amp;lt;path&amp;gt;&lt;/code&gt;: Defines where the refreshed state file is written. If omitted, the existing state file is overwritten.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-lock=&amp;lt;true|false&amp;gt;&lt;/code&gt;: Controls whether a state lock is acquired while the state is refreshed. The default value is ‘true’.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-lock-timeout=&amp;lt;duration&amp;gt;&lt;/code&gt;: Determines how long to wait for a state lock to be acquired. The default is ‘0s’ (fail immediately if the lock is unavailable).&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-backup=&amp;lt;path&amp;gt;&lt;/code&gt;: Sets the path where a backup of the state file is written before it is overwritten. By default, Terraform appends a ‘.backup’ suffix to the state file path (e.g., &lt;strong&gt;terraform.tfstate.backup&lt;/strong&gt;).&lt;/li&gt;
&lt;/ul&gt;
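&lt;p&gt;For example, several of these options can be combined in a single invocation (the file names below are illustrative, not defaults):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Refresh a specific local state file, write the result to a new file,
# keep an explicit backup, and wait up to 30s for the state lock:
terraform refresh \
  -state=prod.tfstate \
  -state-out=prod.refreshed.tfstate \
  -backup=prod.tfstate.backup \
  -lock-timeout=30s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;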

&lt;h2&gt;
  
  
  &lt;strong&gt;How does Terraform Refresh State Work&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://www.env0.com/blog/what-is-terraform-cli" rel="noopener noreferrer"&gt;Terraform CLI&lt;/a&gt;, when the actual configuration of your resources on cloud providers (such as AWS, GCP, or Azure) no longer matches what is defined in your Terraform code, you have drift.&lt;/p&gt;

&lt;p&gt;In such a scenario, running the &lt;code&gt;terraform refresh&lt;/code&gt; command reconciles the difference between the recorded state (your state file) and the current infrastructure (the actual cloud configuration), a.k.a. drift, by doing the following: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  First, Terraform inspects the existing state file (&lt;strong&gt;terraform.tfstate&lt;/strong&gt;), which contains Terraform’s last-known record of your infrastructure. While reading the state file, Terraform identifies the resources that need to be refreshed, as defined in your &lt;strong&gt;.tf&lt;/strong&gt; files.&lt;/li&gt;
&lt;li&gt;  Next, Terraform makes API calls to the providers to retrieve the current state of the infrastructure resources defined in the Terraform code. &lt;/li&gt;
&lt;li&gt;  Once the configuration is fetched, Terraform compares the configuration in the &lt;strong&gt;terraform.tfstate&lt;/strong&gt; file with the current state of your infrastructure. If any resource arguments have changed, the state file is updated with the current values for those resources. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Two important things to note:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; After running &lt;code&gt;refresh&lt;/code&gt; it’s always a good idea to verify that the state file is updated using the &lt;code&gt;terraform show&lt;/code&gt; command.&lt;/li&gt;
&lt;li&gt; It’s important to keep in mind that the &lt;code&gt;refresh&lt;/code&gt; command doesn’t actually make any changes to the current infrastructure, and only updates the state file.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Example Scenario&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To better demonstrate how &lt;code&gt;terraform refresh&lt;/code&gt; works, let’s go into a quick hands-on example of how it can be used to update your state file.&lt;/p&gt;

&lt;p&gt;First, let’s define an AWS S3 bucket using the Terraform &lt;strong&gt;main.tf&lt;/strong&gt; file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region     = "us-east-1"
}
resource "aws_s3_bucket" "terraform_state" {
  bucket = "env0-terraform-state-bucket"
  lifecycle {
    prevent_destroy = false
  }
  tags = {
    Name        = "Terraform State Bucket"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, let’s run the &lt;a href="https://www.env0.com/blog/terraform-init" rel="noopener noreferrer"&gt;&lt;code&gt;terraform init&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://www.env0.com/blog/terraform-apply-guide-command-options-and-examples" rel="noopener noreferrer"&gt;&lt;code&gt;terraform apply&lt;/code&gt;&lt;/a&gt; commands to deploy the S3 bucket on AWS.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  env0 git:(main) ✗ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v5.77.0...
- Installed hashicorp/aws v5.77.0 (signed by HashiCorp)
…
Terraform has been successfully initialized!
…
➜  env0 git:(main) ✗ terraform apply --auto-approve
  + create
Terraform will perform the following actions:
  # aws_s3_bucket.terraform_state will be created
  + resource "aws_s3_bucket" "terraform_state" {
      + bucket                      = "env0-terraform-state-bucket"
      + force_destroy               = false
      + tags                        = {
          + "Name" = "Terraform State Bucket"
        }
      + tags_all                    = {
          + "Name" = "Terraform State Bucket"
        }
…
    }
Plan: 1 to add, 0 to change, 0 to destroy.
aws_s3_bucket.terraform_state: Creating...
aws_s3_bucket.terraform_state: Creation complete after 5s [id=env0-terraform-state-bucket]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To close the loop, you can verify that the bucket was created by visiting the AWS console. Navigate to ‘S3’ and search for the bucket name; you’ll see ‘env0-terraform-state-bucket’. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrajnzhov32acrhbejc6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrajnzhov32acrhbejc6.png" width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the bucket created, run the &lt;code&gt;terraform show&lt;/code&gt; command to view the current local state file with the resources managed by Terraform.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  env0 git:(main) ✗ terraform show      
# aws_s3_bucket.terraform_state:
resource "aws_s3_bucket" "terraform_state" {
    arn                         = "arn:aws:s3:::env0-terraform-state-bucket"
    bucket                      = "env0-terraform-state-bucket"
    force_destroy               = false
    tags                        = {
        "Name" = "Terraform State Bucket"
    }
    tags_all                    = {
        "Name" = "Terraform State Bucket"
    }
…
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, in the AWS console, edit the bucket tags under ‘Properties.’ You can add an owner tag, which helps you identify the owner of this bucket. Here, it is ‘Saksham’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbgfb4zuu25ogfqj9kge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbgfb4zuu25ogfqj9kge.png" width="800" height="102"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This manual change creates a drift in the infrastructure, which means that the current Terraform state file is out of sync with the current AWS infrastructure. &lt;/p&gt;

&lt;p&gt;Now it’s time to run the &lt;code&gt;terraform plan&lt;/code&gt; command to detect the drift changes in your infrastructure.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  env git:(main) ✗ expoterraform plan                                 
aws_s3_bucket.terraform_state: Refreshing state... [id=env0-terraform-state-bucket]
  ~ update in-place
  # aws_s3_bucket.terraform_state will be updated in-place
  ~ resource "aws_s3_bucket" "terraform_state" {
      ~ tags                        = {
            "Name"  = "Terraform State Bucket"
          - "Owner" = "Saksham" -&amp;gt; null
        }
      ~ tags_all                    = {
          - "Owner" = "Saksham" -&amp;gt; null
…
Plan: 0 to add, 1 to change, 0 to destroy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above &lt;code&gt;plan&lt;/code&gt; output shows that a new tag – ‘Owner’ – was added to the S3 bucket outside of your Terraform configuration. If you &lt;code&gt;apply&lt;/code&gt; your Terraform code, the tag will be removed from the bucket. &lt;/p&gt;

&lt;p&gt;This means that there is a drift and the Terraform configuration in your state file does not contain any ‘Owner’ tag.&lt;/p&gt;

&lt;p&gt;You can update your state file by running the &lt;code&gt;terraform refresh&lt;/code&gt; command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  env0 git:(main) ✗ terraform refresh
aws_s3_bucket.terraform_state: Refreshing state... [id=env0-terraform-state-bucket]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once the Terraform state file is refreshed, you can review the updated state file output using &lt;code&gt;terraform show&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  env0 git:(main) ✗ expoterraform show                                 
# aws_s3_bucket.terraform_state:
resource "aws_s3_bucket" "terraform_state" {
    arn                         = "arn:aws:s3:::env0-terraform-state-bucket"
    bucket                      = "env0-terraform-state-bucket"
    id                          = "env0-terraform-state-bucket"
    tags                        = {
        "Name"  = "Terraform State Bucket"
        "Owner" = "Saksham"
    }
    tags_all                    = {
        "Name"  = "Terraform State Bucket"
        "Owner" = "Saksham"
    }
…
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above output, you can see that the S3 bucket configuration in the state file has been updated with the latest tags from AWS. &lt;/p&gt;

&lt;p&gt;If you want to revert this change to the configuration defined in your Terraform code, run the &lt;code&gt;apply&lt;/code&gt; command. Otherwise, if you want to accept the change, add the tag to your Terraform code and run &lt;code&gt;apply&lt;/code&gt; so that the code, state, and cloud all agree. &lt;/p&gt;
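&lt;p&gt;For instance, to accept the manually added tag, you would codify it in &lt;strong&gt;main.tf&lt;/strong&gt; so that future plans no longer report a difference:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "terraform_state" {
  bucket = "env0-terraform-state-bucket"
  lifecycle {
    prevent_destroy = false
  }
  tags = {
    Name  = "Terraform State Bucket"
    Owner = "Saksham" # codify the out-of-band tag to accept the drift
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;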

&lt;h2&gt;
  
  
  &lt;strong&gt;Concerns with Terraform Refresh&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As mentioned above, the &lt;code&gt;terraform refresh&lt;/code&gt; command was deprecated because of unsafe behavior in the case of misconfigured providers. &lt;/p&gt;

&lt;p&gt;For instance, if you have configured the provider credentials for one AWS account (A) with those of another AWS account (B), the command could trick Terraform into updating the state file with changes from account (B) rather than account (A), which could lead to various issues, such as the deletion of all the resources in the state file without any confirmation.&lt;/p&gt;

&lt;p&gt;In addition to the above concerns, the &lt;code&gt;refresh&lt;/code&gt; command has other inherent limitations, which include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Running &lt;code&gt;refresh&lt;/code&gt; does not modify the configuration in the &lt;strong&gt;.tf&lt;/strong&gt; files. In case there was a configuration change, you will need to manually update your Terraform management code. &lt;/li&gt;
&lt;li&gt;  The &lt;code&gt;refresh&lt;/code&gt; command helps you detect drift by comparing the current and desired states. However, you still need to fix this drift to avoid resource misconfigurations or security issues. And fixing the drift doesn’t always mean reverting to the configuration in the state file, as some drift could be a result of an intended change (e.g., manual error fix or the result of other software behavior).&lt;/li&gt;
&lt;li&gt;  When multiple team members work on a large infrastructure, continuously running &lt;code&gt;terraform refresh&lt;/code&gt; just to fetch any manual changes made to the infrastructure is not really feasible, running the risk of creating merge conflicts and plenty of overhead.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Apply refresh-only: A (Slightly) Better Alternative&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Terraform version 0.15.4 introduced the &lt;code&gt;-refresh-only&lt;/code&gt; flag to provide more control over the functionality of the &lt;code&gt;refresh&lt;/code&gt; command. When you run it with the &lt;code&gt;apply&lt;/code&gt; command, you are greeted with an interactive prompt where you can review and confirm the detected changes; with &lt;code&gt;plan&lt;/code&gt;, you get a read-only preview of them.&lt;/p&gt;

&lt;p&gt;For example, run the following command: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply -refresh-only
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will allow you to review the changes in your current infrastructure before updating your state file with them. Once you have reviewed them, simply approve or reject these changes. &lt;/p&gt;
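&lt;p&gt;Against the drifted S3 bucket from the earlier scenario, the interactive session looks roughly like this (abridged; exact wording varies by Terraform version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  env0 git:(main) ✗ terraform apply -refresh-only
aws_s3_bucket.terraform_state: Refreshing state... [id=env0-terraform-state-bucket]

Note: Objects have changed outside of Terraform
  # aws_s3_bucket.terraform_state has changed
  ~ resource "aws_s3_bucket" "terraform_state" {
      ~ tags = {
          + "Owner" = "Saksham"
            # (unchanged attributes hidden)
        }
…
This is a refresh-only plan, so Terraform will not take any actions
to undo these changes.

Would you like to update the Terraform state to reflect these detected changes?

  Enter a value: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Answering ‘yes’ records the detected changes in the state file without touching the real infrastructure.&lt;/p&gt;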

&lt;p&gt;Compared to the &lt;code&gt;refresh&lt;/code&gt; command, &lt;code&gt;apply -refresh-only&lt;/code&gt; offers a safer approach to handling drift. However, it is still far from a comprehensive solution: it doesn’t solve the challenge of continuously detecting drift, nor does it provide context for the drift. &lt;/p&gt;

&lt;p&gt;As a result, some drifts go unnoticed, and their reconciliation could end up causing more infrastructure chaos than remediation. Also, rolling back some of the drifts could cause a domino effect of issues, effectively removing necessary – albeit manual or third-party – changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Drift Detection with env0&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;With &lt;a href="https://env0.com" rel="noopener noreferrer"&gt;env0&lt;/a&gt;’s drift detection and cause analysis features, you do not need to worry about scheduling runs for &lt;code&gt;plan&lt;/code&gt; or &lt;code&gt;refresh&lt;/code&gt; to continuously monitor your infrastructure or identify potential drifts. Moreover, you will also have additional context to ensure that the drifts are reconciled without causing any unwanted cascading issues across your cloud infrastructure.&lt;/p&gt;

&lt;p&gt;Let’s look at how you can identify and investigate drifts using env0:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Automated Drift Detection&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Using env0, you can automate drift detection for your infrastructure code by enabling ‘&lt;a href="https://docs.env0.com/docs/drift-detection" rel="noopener noreferrer"&gt;Drift Detection&lt;/a&gt;’ in ‘Settings’ with a cron schedule. &lt;/p&gt;

&lt;p&gt;For example, you can schedule a drift detection run once every hour in your environment. This runs the &lt;code&gt;plan&lt;/code&gt; command on schedule and analyzes its output for any changes made outside your Terraform configuration (i.e., drift). When drift is detected, the result appears as ‘drifted’ in the &lt;code&gt;plan&lt;/code&gt; stage of your deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hd33ua0sycaqs6q80is.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hd33ua0sycaqs6q80is.png" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This capability saves plenty of time and prevents incidents caused by neglected drift.&lt;/p&gt;

&lt;p&gt;Using some of the platform's more advanced capabilities, you can even be preemptively aware of future drifts that might occur after applying the new &lt;code&gt;plan&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;These, for example, could be the result of using dynamic variables, and they will appear as line items in the env0 dashboard, enabling you to investigate further by going back and reviewing the plan.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfyhi8fvabps2ciz6xby.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfyhi8fvabps2ciz6xby.png" width="800" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Drift Cause&lt;/strong&gt; 
&lt;/h3&gt;

&lt;p&gt;In addition to the above, the env0 platform also offers a unique &lt;a href="https://www.env0.com/blog/drift-cause-closing-the-loop-on-infrastructure-drift-management" rel="noopener noreferrer"&gt;Drift Cause&lt;/a&gt; feature, which connects the dots between the state of the codified infrastructure and out-of-code audit logs to offer additional context about drift, enabling teams to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Identify who made the change, when, and how&lt;/li&gt;
&lt;li&gt;  Understand the specific event or action responsible for the drift (e.g., automated or scripted procedure via CLI or API, or a human being via cloud provider interface)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check out the video below to see it in action.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/d2vS6c63de0"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Drift Monitoring and Alerting&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;env0 also provides a centralized &lt;a href="https://docs.env0.com/docs/dashboards" rel="noopener noreferrer"&gt;dashboard&lt;/a&gt; that displays the ‘Drift Status’ for each infrastructure environment deployed using Terraform. You can also see what percentage of environments have drifted, and whether those environments have active, inactive, or failed deployments, like so:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5hhgafwnuicx75gr13o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5hhgafwnuicx75gr13o.png" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using this dashboard, teams can easily monitor multiple environments for drift simultaneously under one unified dashboard, improving team efficiency. Additionally, team members can quickly identify which environments are drifting from their desired Terraform configuration. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Approval Policies&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Related to the topic of drifts, env0 allows you to enable approval policies (e.g., using Open Policy Agent). These can be tailored to preemptively address common causes of drifts, which could be identified through the use of a dashboard and insights provided by Drift Cause. &lt;/p&gt;

&lt;p&gt;Leveraging these insights into what typically goes wrong, platform and infrastructure teams can combine scheduled deployments with relevant approval policies to create a framework for smart auto-remediation.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Final Thoughts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;By now, you should understand what Terraform &lt;code&gt;refresh&lt;/code&gt; does and when you can use it for your use case. We did a deep dive into what happens when you run the &lt;code&gt;refresh&lt;/code&gt; command and shared some practical examples. &lt;/p&gt;

&lt;p&gt;Even though &lt;code&gt;terraform refresh&lt;/code&gt; can be helpful in certain scenarios, we learned about its limitations and how you can overcome them with env0’s drift detection and remediation capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Frequently Asked Questions&lt;/strong&gt;
&lt;/h2&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Q1. Does Terraform plan refresh the state?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Yes, by default &lt;code&gt;terraform plan&lt;/code&gt; refreshes the state in memory before computing the plan, so the plan reflects the current infrastructure rather than stale data. Note, however, that &lt;code&gt;plan&lt;/code&gt; does not write the refreshed data back to the state file; only an &lt;code&gt;apply&lt;/code&gt; (including &lt;code&gt;apply -refresh-only&lt;/code&gt;) persists it.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Q2. How do I apply Terraform without refresh state?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;You can skip the refresh by running the &lt;code&gt;terraform apply -refresh=false&lt;/code&gt; command. It applies the plan based on the existing state without querying the providers for the current infrastructure configuration.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Q3. How does Terraform refresh state work?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Terraform refreshes its state by querying the infrastructure to detect any changes made outside of Terraform (manual changes, updates by other tools, etc.). This process ensures that the Terraform state file reflects the current state of the infrastructure.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Q4. What is the difference between Terraform refresh and Terraform import?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Terraform’s &lt;code&gt;refresh&lt;/code&gt; command updates the state file by checking the actual infrastructure to reflect any changes made outside Terraform (e.g., manually via the AWS Console). It doesn’t create or modify resources, only syncs the state with the real world.&lt;/p&gt;

&lt;p&gt;Terraform’s &lt;a href="https://www.env0.com/blog/terraform-import-commands-example-tips-and-best-practices" rel="noopener noreferrer"&gt;&lt;code&gt;import&lt;/code&gt;&lt;/a&gt; command is used to bring existing resources into Terraform’s management. It associates an existing infrastructure resource with a Terraform resource block, allowing Terraform to manage it going forward. It updates the state file but doesn’t modify the resource itself.&lt;/p&gt;
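&lt;p&gt;As a quick illustration (the resource address and instance ID below are hypothetical), importing requires a matching resource block in your configuration, followed by a single command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# main.tf -- an empty resource block to receive the imported resource
resource "aws_instance" "example" {
  # arguments are filled in after import, based on the imported state
}

# Then associate the existing instance with that block:
terraform import aws_instance.example i-0abcd1234example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;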


</description>
      <category>devops</category>
      <category>terraform</category>
      <category>productivity</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>New: Detect Drift Within Minutes—Even Before Full Onboarding</title>
      <dc:creator>env zero Team</dc:creator>
      <pubDate>Tue, 04 Feb 2025 14:06:11 +0000</pubDate>
      <link>https://forem.com/envzero/new-detect-drift-within-minutes-even-before-full-onboarding-12f3</link>
      <guid>https://forem.com/envzero/new-detect-drift-within-minutes-even-before-full-onboarding-12f3</guid>
      <description>&lt;p&gt;Onboarding large organizations to an IaC management platform is no small task. Signing up and deploying a few initial environments is straightforward, but full onboarding can take days or even weeks, depending on infrastructure complexity.&lt;/p&gt;

&lt;p&gt;This is a challenge we understand well, which is why we've developed features like &lt;a href="https://docs.env0.com/changelog/onboarding-import-external-environments#/" rel="noopener noreferrer"&gt;environment discovery&lt;/a&gt; and the &lt;a href="https://www.env0.com/blog/switch-from-terraform-cloud-in-minutes-with-our-new-migration-tool" rel="noopener noreferrer"&gt;TFC migration Tool&lt;/a&gt; to help customers achieve value faster. For larger enterprises, we’ve also introduced a dedicated 1-day PoC process to accelerate the first steps of onboarding.&lt;/p&gt;

&lt;p&gt;Today, we’re excited to introduce a new capability that delivers instant value, allowing you to detect and analyze drift within minutes, even before fully onboarding to env0.&lt;/p&gt;

&lt;p&gt;With just a few clicks, you can surface configuration drift across your cloud account, pinpoint its cause, and gain the context needed for quick, informed reconciliation—addressing deep-rooted and overlooked issues that could lead to security risks, compliance gaps, and hidden cost sprawl.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;The ability to instantly detect drift is powered by Cloud Compass, which analyzes your infrastructure posture using our proprietary algorithm and AI-assisted logic. With this upgrade, Cloud Compass now assesses drift likelihood, identifying risk based on past events.&lt;/p&gt;

&lt;p&gt;Drift information appears directly in the Cloud Compass dashboard under a new 'Drift Risk' column, complete with filtering options to help you quickly identify and prioritize potential issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49ok1jnrsydkzc4wf3kh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49ok1jnrsydkzc4wf3kh.png" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Importantly, this analysis can produce insights from up to 12 months back, providing a historical view of drift. As such, it supplements our existing &lt;a href="https://docs.env0.com/docs/drift-detection#/" rel="noopener noreferrer"&gt;drift detection&lt;/a&gt; solution, which tracks changes from the moment it’s activated, by uncovering past drift events for a more comprehensive picture.&lt;/p&gt;

&lt;p&gt;Together, these capabilities give you full visibility into drift—past, present, and future.&lt;/p&gt;

&lt;p&gt;Detecting drift is only the first step. Understanding who made the change, how it happened, and when it occurred is just as important, giving you the clarity to take confident action.&lt;/p&gt;

&lt;p&gt;That’s where our recently announced &lt;a href="https://docs.env0.com/docs/drift-cause#/" rel="noopener noreferrer"&gt;Drift Cause&lt;/a&gt; feature comes in. Clicking ‘Details’ on a flagged resource opens a panel with drift status, severity classification, and a full history of changes, including both API-driven and manual actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fee6fbyfzao0btvhq0rnu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fee6fbyfzao0btvhq0rnu.png" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Cloud Compass Improvements
&lt;/h2&gt;

&lt;p&gt;With resources fully managed in &lt;a href="https://www.env0.com" rel="noopener noreferrer"&gt;env0&lt;/a&gt;, Cloud Compass now provides direct links between discovered resources and their corresponding env0 environments. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftjpib29eli2t97h5p53p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftjpib29eli2t97h5p53p.png" width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These usability enhancements make it easier to access relevant data, track ownership, understand resource purpose, and connect infrastructure to its IaC management.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next
&lt;/h2&gt;

&lt;p&gt;As part of our broader vision for cloud infrastructure management, Cloud Compass will continue to evolve, expanding beyond visibility to provide deeper insights and greater operational control. &lt;/p&gt;

&lt;p&gt;Upcoming enhancements will include smarter detection for unmanaged resources, tighter integrations with managed environments, cost impact analysis, and automatic code generation to seamlessly codify discovered resources into IaC.&lt;/p&gt;

&lt;p&gt;By bridging discovery with full lifecycle management, these updates will help teams improve efficiency, strengthen governance, and better align cloud operations with business goals.&lt;/p&gt;

&lt;p&gt;To see env0 and Cloud Compass in action and get an instant view of your cloud infrastructure, &lt;a href="https://www.env0.com/demo-request" rel="noopener noreferrer"&gt;schedule a demo&lt;/a&gt; today.&lt;/p&gt;

</description>
      <category>news</category>
      <category>infrastructureascode</category>
      <category>devops</category>
      <category>driftdetection</category>
    </item>
    <item>
      <title>Top Infrastructure as Code Security Tools in 2025</title>
      <dc:creator>env zero Team</dc:creator>
      <pubDate>Tue, 28 Jan 2025 14:22:34 +0000</pubDate>
      <link>https://forem.com/envzero/top-infrastructure-as-code-security-tools-in-2025-g56</link>
      <guid>https://forem.com/envzero/top-infrastructure-as-code-security-tools-in-2025-g56</guid>
      <description>&lt;p&gt;&lt;a href="https://www.env0.com/blog/infrastructure-as-code-101" rel="noopener noreferrer"&gt;Infrastructure as Code&lt;/a&gt; (IaC) is the standard method to deploy and manage cloud deployments. The benefits are obvious: faster deployment times, improved repeatability, and increased standardization across cloud infrastructure.&lt;/p&gt;

&lt;p&gt;However, the flexibility and power of IaC also introduce unique risks. IaC configurations define your infrastructure, including sensitive settings, access permissions, and operational behaviors. &lt;/p&gt;

&lt;p&gt;A single misconfiguration or exposed secret can compromise entire environments. Additionally, IaC integrates with a variety of external plugins and providers, creating a dependency chain vulnerable to supply chain attacks.&lt;/p&gt;

&lt;p&gt;As the practice of IaC evolves, additional emphasis is placed on building security into the IaC development lifecycle to protect infrastructure resources and avoid security and compliance violations. This article discusses approaches and tools for securing Infrastructure as Code workloads, from development to running environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  IaC Security Tools Defined
&lt;/h2&gt;

&lt;p&gt;Before we discuss specific tools, it is important to understand the background of IaC security approaches. This prerequisite knowledge will help you make informed decisions about the IaC security features that your organization needs when choosing which tools to use.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Necessity of IaC Security
&lt;/h3&gt;

&lt;p&gt;First, let’s discuss why you need to build security into your IaC workflows. Infrastructure as Code enables more members of an organization – from software developers to platform engineers – to build infrastructure in a repeatable way. &lt;/p&gt;

&lt;p&gt;Organizations benefit from this approach in many ways. Infrastructure provisioning lead times decrease, consistency improves, and the risk of human error causing misconfigurations or security issues is greatly reduced. Additionally, it's easier to make and roll back infrastructure changes.&lt;/p&gt;

&lt;p&gt;However, this approach also comes with increased responsibility. IaC allows for large-scale deployments but introduces the challenge of managing two layers of complexity: the code that defines resources and the resources themselves.&lt;/p&gt;

&lt;p&gt;This introduces new opportunities for security mistakes and misconfigurations. Engineers need help securing their infrastructure resources, and a rich ecosystem of Infrastructure as Code security tools has developed to meet this need.&lt;/p&gt;

&lt;h3&gt;
  
  
  Approaches to Security Scanning
&lt;/h3&gt;

&lt;p&gt;Just as software engineering has adopted a shift-left approach, integrating testing earlier in the software development lifecycle, IaC practitioners have begun embedding security measures earlier in their workflows. This proactive approach ensures that security is a foundational element of infrastructure design rather than an afterthought. The tools discussed in this article take two broad approaches to IaC security:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Proactive:&lt;/strong&gt; The tool scans IaC code and configuration files before they are deployed and identifies misconfigurations before they reach an environment. This approach is similar to the static code analysis tools used by software engineers.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Reactive:&lt;/strong&gt; The tool scans existing infrastructure and identifies misconfigurations in the running environment. For example, a reactive tool might scan your cloud services for existing misconfigurations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The shift-left approach makes proactive scanning a crucial element of IaC security strategy. However, reactive scanning is equally important to identify configuration drift or unauthorized changes. Even the best environments can experience situations where a production environment has drifted from the codified intent of IaC. Leveraging both proactive and reactive IaC tools ensures protection across the IaC development lifecycle.&lt;/p&gt;

&lt;h3&gt;
  
  
  IaC Security Beyond Terraform
&lt;/h3&gt;

&lt;p&gt;Terraform has become synonymous with the Infrastructure as Code movement, but many other popular tools meet this need. OpenTofu is being widely adopted as a Terraform alternative. Cloud providers expose their own specific tooling. Examples include AWS CloudFormation (CFN), Azure Resource Manager (ARM), Microsoft Azure Bicep, and Google Cloud Platform Deployment Manager.&lt;/p&gt;

&lt;p&gt;Even Kubernetes can be considered an Infrastructure as Code tool. Projects such as Crossplane extend Kubernetes to provide a control plane across infrastructure and application resources. The Kubernetes Cluster API special interest group extends Kubernetes to deploy additional Kubernetes clusters. The OperatorHub is also full of infrastructure-oriented operators that build upon the power of the Kubernetes control plane to deploy infrastructure resources.&lt;/p&gt;

&lt;p&gt;A comprehensive security approach in a multi-cloud environment must consider the need to support the entirety of an organization’s IaC tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ten Tools for Better IaC Security
&lt;/h2&gt;

&lt;p&gt;This article discusses ten different tools for improving the security posture of your Infrastructure as Code:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Open Policy Agent&lt;/li&gt;
&lt;li&gt; Tflint&lt;/li&gt;
&lt;li&gt; Trivy&lt;/li&gt;
&lt;li&gt; CloudSploit&lt;/li&gt;
&lt;li&gt; Checkov&lt;/li&gt;
&lt;li&gt; Snyk&lt;/li&gt;
&lt;li&gt; Kubescape&lt;/li&gt;
&lt;li&gt; Terrascan&lt;/li&gt;
&lt;li&gt; Prowler&lt;/li&gt;
&lt;li&gt; KICS&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Open Policy Agent
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.env0.com/blog/open-policy-agent" rel="noopener noreferrer"&gt;Open Policy Agent&lt;/a&gt; (OPA) is more than a simple tool. It’s a policy engine that expresses policy as code using the declarative Rego language. OPA evaluates policies against JSON inputs, so it supports proactive policy enforcement for any configuration that can be converted to JSON.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.conftest.dev/" rel="noopener noreferrer"&gt;Conftest&lt;/a&gt; project from OPA can run Rego queries directly against many configuration languages. It supports Terraform, Docker, Kubernetes, and many other languages. Conftest also includes direct support for YAML-based configuration files.&lt;/p&gt;

&lt;p&gt;Unlike many of the tools in this list, OPA and Conftest do not include any default security policies for IaC scanning. Instead, OPA provides a framework to build custom policies. As you'll see, the OPA framework is integrated into many Infrastructure as Code tools discussed in this list.&lt;/p&gt;
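&lt;p&gt;As a minimal sketch (the rule and file names are illustrative, and exact Rego syntax varies slightly between OPA versions), a Conftest policy that denies public S3 buckets in a Terraform file could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# policy/s3.rego
package main

deny[msg] {
  bucket := input.resource.aws_s3_bucket[name]
  bucket.acl == "public-read"
  msg := sprintf("S3 bucket '%s' must not use a public ACL", [name])
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running &lt;code&gt;conftest test main.tf&lt;/code&gt; then evaluates every &lt;code&gt;deny&lt;/code&gt; rule against the parsed configuration.&lt;/p&gt;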

&lt;h3&gt;
  
  
  Tflint
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.env0.com/blog/tflint-with-custom-flows" rel="noopener noreferrer"&gt;Tflint&lt;/a&gt; is a linter for Terraform configuration files. While linting isn’t strictly a security activity, following a language's best practices and avoiding deprecated language features are important elements of robust code. Tflint’s comprehensive rule set ensures that your infrastructure code follows language best practices.&lt;/p&gt;

&lt;p&gt;Tflint offers a plugin-based architecture with support for writing custom plugins. It also supports adding custom rules using OPA.&lt;/p&gt;
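&lt;p&gt;A typical setup declares a ruleset plugin in &lt;code&gt;.tflint.hcl&lt;/code&gt; and then runs the linter (the plugin version shown is only an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .tflint.hcl
plugin "aws" {
  enabled = true
  version = "0.31.0"  # example version
  source  = "github.com/terraform-linters/tflint-ruleset-aws"
}

# Install declared plugins, then lint the current directory:
#   tflint --init
#   tflint
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;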

&lt;h3&gt;
  
  
  Trivy
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://trivy.dev/latest/" rel="noopener noreferrer"&gt;Trivy&lt;/a&gt; is a comprehensive vulnerability scanner from Aqua. The tfsec tool familiar to many infrastructure engineers is now integrated into Trivy. The Trivy team maintains vulnerability databases to address hundreds of vulnerabilities and misconfigurations.&lt;/p&gt;

&lt;p&gt;Trivy is primarily a proactive tool that can scan a wide variety of configuration files, container images, and package formats. It features support for a variety of common IaC formats, including Terraform, Azure ARM, Azure Bicep, and AWS CloudFormation.&lt;/p&gt;

&lt;p&gt;Trivy also has some support for reactive scanning of existing environments. Specifically, Trivy can connect directly to a Kubernetes cluster and scan it for misconfigurations.&lt;/p&gt;

&lt;p&gt;Trivy can be extended via custom checks written in the OPA Rego language.&lt;/p&gt;
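&lt;p&gt;In practice, a single binary covers both modes (flags may vary slightly between Trivy versions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Proactively scan IaC configuration files in a directory
trivy config ./infrastructure

# Reactively scan a running Kubernetes cluster
trivy k8s --report summary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;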

&lt;h3&gt;
  
  
  CloudSploit
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/aquasecurity/cloudsploit" rel="noopener noreferrer"&gt;CloudSploit&lt;/a&gt; is a reactive security project from Aqua that perfectly complements Trivy. It scans a cloud environment and detects security risks in running cloud infrastructure across multiple cloud providers. CloudSploit can even run remediative actions to automatically resolve an identified issue.&lt;/p&gt;

&lt;p&gt;Reactive environment scanning is an important part of an overall IaC security strategy. While CloudSploit doesn’t directly scan IaC configuration files, regularly scanning the cloud environment itself can detect vulnerabilities introduced by configuration drift or malicious actors.&lt;/p&gt;

&lt;p&gt;CloudSploit is extensible via custom plugins that directly make API calls to a cloud provider to interrogate the cloud configuration and identify misconfiguration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Checkov
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.env0.com/blog/best-iac-scan-tool-what-is-checkov" rel="noopener noreferrer"&gt;Checkov&lt;/a&gt; is a proactive scanning tool that emphasizes policy as code. It features support for Terraform, AWS CloudFormation, Azure ARM, Kubernetes (including Helm charts), Docker, and even Ansible.&lt;/p&gt;

&lt;p&gt;Checkov includes hundreds of policies to meet compliance with the Center for Internet Security and AWS Foundations Benchmark standards. Using IaC is an excellent way to ensure compliance with security benchmarks in a repeatable way. Checkov can ensure that your IaC is actually meeting these benchmarks.&lt;/p&gt;

&lt;p&gt;Checkov is also highly extensible, supporting custom policies in Python or YAML.&lt;/p&gt;
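&lt;p&gt;A minimal invocation scans a directory of IaC files; individual policies can also be selected or skipped by ID (the check ID below is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Scan all supported IaC files in the current directory
checkov -d .

# Run only specific checks
checkov -d . --check CKV_AWS_20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;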

&lt;h3&gt;
  
  
  Snyk
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://snyk.io/product/infrastructure-as-code-security/" rel="noopener noreferrer"&gt;Snyk&lt;/a&gt; is a cloud platform that provides proactive scanning across several common IaC formats. It supports scanning Terraform, AWS CloudFormation, Azure ARM, and Kubernetes file formats. Snyk offers a comprehensive database of vulnerabilities and common misconfigurations. It can also be extended through custom policies, including those written using OPA.&lt;/p&gt;

&lt;p&gt;Snyk prides itself on a positive developer experience. Snyk’s integrations include IDE plugins and the ability to directly suggest fixes for misconfigurations while authoring IaC. The cloud platform also includes helpful dashboards to provide visibility into an organization’s IaC security posture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubescape
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://kubescape.io/" rel="noopener noreferrer"&gt;Kubescape&lt;/a&gt; is a CNCF sandbox project that aims to embed security across the lifecycle of Kubernetes environments, and it includes elements of both proactive and reactive security. As Kubernetes is increasingly used as an IaC tool to deploy infrastructure workloads directly, scanning Kubernetes manifests is essential for enforcing best practices.&lt;/p&gt;

&lt;p&gt;Kubescape can proactively scan Kubernetes manifests and Helm charts for misconfigurations. It can also reactively scan existing Kubernetes clusters to find vulnerabilities, or it can be deployed as a Kubernetes Operator to provide regular scanning of cluster resources.&lt;/p&gt;

&lt;p&gt;Kubescape uses Open Policy Agent under the hood, and it is extensible via custom policies written in Rego.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terrascan
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.env0.com/blog/best-iac-scan-tools-what-is-terrascan" rel="noopener noreferrer"&gt;Terrascan&lt;/a&gt; is a tool used to scan IaC for security best practices. It features over 500 policies across a variety of clouds. Despite its name, Terrascan is not limited to Terraform files; it also supports AWS CloudFormation, Azure ARM, Kubernetes, Helm, and Docker.&lt;/p&gt;

&lt;p&gt;Terrascan can run as a Kubernetes admission webhook to proactively deny misconfigurations from reaching a Kubernetes cluster. It supports custom policies through OPA.&lt;/p&gt;
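&lt;p&gt;For example, scanning a directory of Terraform files is a single command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Scan the current directory as Terraform code
terrascan scan -i terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;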

&lt;h3&gt;
  
  
  Prowler
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://prowler.com/" rel="noopener noreferrer"&gt;Prowler&lt;/a&gt; is a reactive security scanner that directly scans cloud infrastructure to identify misconfigurations and vulnerabilities. It features hundreds of checks across AWS, Azure, GCP, and Kubernetes. Pairing Prowler’s reactive approach with proactive scanning of IaC configuration files provides an enhanced security posture across the infrastructure lifecycle.&lt;/p&gt;

&lt;p&gt;Prowler can be extended with custom checks through Python scripts. It includes Prowler Check Kreator, a utility to quickly build the scaffolding needed for custom checks.&lt;/p&gt;

&lt;p&gt;Prowler shines with advanced reporting capabilities for security best practices. A comprehensive web UI provides a dashboard of a cloud environment’s security posture. Prowler can also forward its findings and logs to external services, such as AWS Security Hub, for further triage and analysis.&lt;/p&gt;

&lt;h3&gt;
  
  
  KICS
&lt;/h3&gt;

&lt;p&gt;Keeping Infrastructure as Code Secure (&lt;a href="https://kics.io/" rel="noopener noreferrer"&gt;KICS&lt;/a&gt;) is a comprehensive scanner that supports proactive scanning against over 20 configuration formats. The platform includes support for Terraform, Pulumi, Ansible, AWS CloudFormation, Azure ARM, Kubernetes, and many others.&lt;/p&gt;

&lt;p&gt;The KICS query database includes hundreds of common misconfigurations and security vulnerabilities. The project also makes it very easy to write additional queries using OPA.&lt;/p&gt;

&lt;p&gt;KICS’s unique architecture makes it highly extensible through plugins to parse additional IaC languages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Four Questions for Choosing the Right Tools
&lt;/h2&gt;

&lt;p&gt;We’ve covered ten Infrastructure as Code security tools, each with its own approach to securing IaC workflows and mitigating security risks. How do you decide on the right IaC tools for your organization?&lt;/p&gt;

&lt;p&gt;Ask yourself these four questions when evaluating your IaC security toolchain:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Does the tool support the Infrastructure as Code languages and configuration management tools that your organization currently uses and is planning to use in the future? This is particularly important to consider as many organizations adopt a multi-cloud approach.&lt;/li&gt;
&lt;li&gt; Does the tool's deployment model work for you, and can it be easily integrated into your existing developer workflows? The majority of the tools discussed in this article are easy to install and integrate into common workflows, such as CI/CD pipelines. However, some tools provide a better developer experience with robust IDE integrations.&lt;/li&gt;
&lt;li&gt; Does the tool support a proactive approach, a reactive approach, or both? One approach is not better than the other. Rather, a mature security posture includes tools that support the entire IaC development lifecycle.&lt;/li&gt;
&lt;li&gt; Finally, can the tool be extended using approaches that you are comfortable with? Many of the tools discussed in this article are extensible via Open Policy Agent, which is seeing broad industry adoption as the chosen framework for policy as code. However, some tools may require you to write custom plugins that must be designed and maintained.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  IaC Security Tools and env0
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://env0.com" rel="noopener noreferrer"&gt;env0&lt;/a&gt; integrates security seamlessly across the Infrastructure as Code (IaC) lifecycle, empowering users to incorporate tools like Open Policy Agent (OPA), Checkov, TFLint, TFSec, and Trivy directly into their workflows. These integrations allow organizations to identify risks proactively and enforce compliance standards using tools tailored to their needs.&lt;/p&gt;

&lt;p&gt;In addition, env0 ensures robust protection through granular access controls, including Role-Based Access Control (RBAC), for precise permission management. Features such as encrypted state files and secure secret management further safeguard sensitive data. Together, these capabilities provide a comprehensive security framework, enabling organizations to confidently manage and protect their IaC environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Securing the IaC development lifecycle involves shifting left and integrating security tools earlier in the development process. This proactive approach, exemplified by many of the tools in this article, embeds security best practices in the Infrastructure as Code development lifecycle. This strategy mitigates security risks and compliance violations by preventing them from ever reaching production environments.&lt;/p&gt;

&lt;p&gt;On the other hand, a reactive approach to scanning existing cloud environments is equally important. Regular scanning of cloud resources for security vulnerabilities can catch configuration drift, identify unauthorized changes, and notice deprecated configurations that may have been created in the past.&lt;/p&gt;

&lt;p&gt;A combination of proactive and reactive scanning is important when designing an IaC security strategy.&lt;/p&gt;

</description>
      <category>infrastructureascode</category>
      <category>devops</category>
      <category>security</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Terraform Backend Configuration: Local and Remote Options</title>
      <dc:creator>env zero Team</dc:creator>
      <pubDate>Thu, 23 Jan 2025 13:30:00 +0000</pubDate>
      <link>https://forem.com/envzero/terraform-backend-configuration-local-and-remote-options-1e0e</link>
      <guid>https://forem.com/envzero/terraform-backend-configuration-local-and-remote-options-1e0e</guid>
      <description>&lt;p&gt;Terraform manages the infrastructure changes using a state file, which tracks the changes made to the resources deployed to the cloud using Terraform. In other words, Terraform backend helps you store state files in a specified location.&lt;/p&gt;

&lt;p&gt;This blog will discuss Terraform backends, their types, and how to configure them for various cloud providers, such as AWS, Azure, and GCP. You will also learn how to migrate from one backend to another, and more.  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;: All use cases of the Terraform backend discussed here work similarly in &lt;a href="https://www.env0.com/blog/opentofu-the-open-source-terraform-alternative" rel="noopener noreferrer"&gt;OpenTofu&lt;/a&gt;, the open-source Terraform alternative. However, to keep it simple and familiar for DevOps engineers, we will use “Terraform backend” as a catch-all term throughout this blog post.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Terraform Backend&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Terraform provides a backend configuration block to store and manage the state file of your Terraform code. &lt;/p&gt;

&lt;p&gt;Using the backend, state files can be stored either locally or in a centralized remote location, depending on the size and requirements of the engineering team responsible for the infrastructure. &lt;/p&gt;

&lt;p&gt;You can configure this remote backend on your own in your Terraform code to store your state file in cloud provider storage, such as AWS S3 bucket, Azure Blob Storage, or Google Cloud Storage. Once the configuration is done, it is visible in the storage and can be easily accessed via the console. &lt;/p&gt;

&lt;p&gt;Thus, whenever &lt;a href="https://www.env0.com/blog/what-is-terraform-cli" rel="noopener noreferrer"&gt;Terraform CLI commands&lt;/a&gt; such as &lt;a href="https://www.env0.com/blog/terraform-plan" rel="noopener noreferrer"&gt;&lt;code&gt;plan&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://www.env0.com/blog/terraform-apply-guide-command-options-and-examples" rel="noopener noreferrer"&gt;&lt;code&gt;apply&lt;/code&gt;&lt;/a&gt; are run to provision or manage the cloud infrastructure, the state file in the backend is updated.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Common Use Cases of Terraform Backend&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Using a Terraform backend is a de facto practice among teams due to benefits such as versioning and state locking. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Teams can easily collaborate on the same Terraform configuration with the centralized state file storage, which helps prevent any misconfiguration or merge conflicts.&lt;/li&gt;
&lt;li&gt;  If the infrastructure is down due to some configuration changes, it can be easily recovered using the backend state file.&lt;/li&gt;
&lt;li&gt;  Infrastructure changes can be easily audited with the state file’s change history, which helps with quick troubleshooting and tracking changes.&lt;/li&gt;
&lt;li&gt;  Teams can use state locking to avoid merge conflicts when multiple engineers work simultaneously, and employ encryption to secure the state file.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Types of Terraform Backends&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There are two types of Terraform backends: local and remote. Let’s learn more about them in this section. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Local Backend&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;By default, the local backend stores the state file, &lt;strong&gt;terraform.tfstate&lt;/strong&gt;, in the same directory as the Terraform code, where you can easily find it. When you are the only user working on the configuration, the local backend is a sensible place to store your state file. &lt;/p&gt;
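&lt;p&gt;Although the local backend is the implicit default, it can also be declared explicitly, for example to change where the state file is written:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "local" {
    path = "state/terraform.tfstate"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;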

&lt;h3&gt;
  
  
  &lt;strong&gt;Remote Backend&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once the infrastructure grows and more developers contribute to it, you should start using a remote backend. The remote backend configuration stores the state file in a centralized and secure location, such as a cloud-based storage service (e.g., AWS S3) or Terraform Cloud. &lt;/p&gt;

&lt;p&gt;In case multiple team members need to access and update the same state file, using a state backend such as S3 with DynamoDB comes in handy for state locking and preventing merge conflicts. &lt;/p&gt;

&lt;p&gt;The remote backend offers collaboration features like versioning, state locking, state file encryption, etc. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Versioning:&lt;/strong&gt; The versioning of the state files helps with easy rollbacks in case of a failure when the new changes are deployed.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;State locking:&lt;/strong&gt; Prevents conflicts resulting from multiple team members simultaneously applying changes to the same state file.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Encryption:&lt;/strong&gt; State files store not only the resource configuration but also sensitive information, such as credentials or URLs. State file encryption ensures that the information is secure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To summarize, here is a quick look at the difference between the two types of Terraform backends.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqoyx7wjw56qe0toqrf12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqoyx7wjw56qe0toqrf12.png" alt="Terraform Backend types: Local vs. Remote" width="800" height="756"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Terraform Backend Configuration for Different Cloud Providers&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When you work with different cloud providers such as AWS, Azure, or GCP, each offers its own cloud storage service for storing the state file. &lt;/p&gt;

&lt;p&gt;The configuration is passed using the Terraform code, and when initialized, the backend for your Terraform state file management is set to remote. Let’s see how you can define Terraform backend configuration for various public cloud providers.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;AWS S3 Backend&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When you define a Terraform backend with S3 storage and pass the key arguments, it looks like so:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket = "env0-terraform-state-bucket"
    key    = "env0/terraform.tfstate"
    region = "us-east-1"
    encrypt = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s break down the above Terraform config arguments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;bucket&lt;/code&gt;: The name of the S3 bucket where the state file is stored. Ensure that the bucket already exists, grants the proper access permissions, and has versioning enabled before configuring your backend.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;key&lt;/code&gt;: This is the path to your state file inside your S3 bucket.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;region&lt;/code&gt;: The AWS region of the S3 bucket that stores your state file; the backend needs it to locate the bucket.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;encrypt&lt;/code&gt;: If you want to enable server-side encryption for the state file, pass the value as ‘true.’ Otherwise, the  default value is set to ‘false’.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Azure Blob Storage Backend&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Create your Terraform backend configuration with Azure Blob Storage by passing these key arguments:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "azurerm" {
    resource_group_name   = "env0-terraform-rg"
    storage_account_name  = "env0terraformstate"
    container_name        = "state"
    key                   = "env0/terraform.tfstate"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s break down the above config to understand it better:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;resource_group_name&lt;/code&gt;: This is the name of the resource group in which the storage account exists.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;storage_account_name&lt;/code&gt;: This is the name of your Azure Storage account. Ensure that you have already created one.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;container_name&lt;/code&gt;: The name of the container within your storage account that stores your state file.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;key&lt;/code&gt;: This is the path to your state file within your container.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;GCS (Google Cloud Storage) Backend&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Terraform backend configuration with GCS has 3 key arguments:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "gcs" {
    bucket = "env0-terraform-state-bucket"
    prefix = "env0/terraform.tfstate"
    credentials = "path/to/service-account.json"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Explaining the above backend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;bucket&lt;/code&gt;: This is the name of the Google Cloud Storage bucket that stores your state file. Ensure that the bucket already exists, and enable versioning for quick disaster recovery.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;prefix&lt;/code&gt;: This is the path where you want to store your state file.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;credentials&lt;/code&gt;: Pass the path to the service account JSON file or environment variable, such as ‘GOOGLE_APPLICATION_CREDENTIALS’, for authentication. Ensure the service account has the ‘storage.objects.create’ and ‘storage.objects.get’ permissions.&lt;/li&gt;
&lt;/ul&gt;
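
&lt;p&gt;As an alternative to hardcoding the &lt;code&gt;credentials&lt;/code&gt; argument, you can omit it from the backend block and export the environment variable before initializing (the file path below is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account.json"
terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;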

&lt;p&gt;By now, you know how to write a Terraform backend configuration file for various cloud providers. Next, let’s see how you can define your S3 remote backend.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Example: Configuring Terraform Backend Block in AWS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now, let’s walk through an example of using an AWS S3 bucket with a Terraform backend configuration to store and manage your state file. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Define AWS S3 Bucket&lt;/strong&gt; 
&lt;/h3&gt;

&lt;p&gt;First, define your AWS provider and S3 bucket in your &lt;strong&gt;main.tf&lt;/strong&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region     = "us-east-1"
}
resource "aws_s3_bucket" "terraform_state" {
  bucket = "env0-terraform-state-bucket"
  lifecycle {
    prevent_destroy = true
  }
  tags = {
    Name        = "Terraform State Bucket"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above code, the &lt;code&gt;aws_s3_bucket&lt;/code&gt; resource named ‘env0-terraform-state-bucket’ sets &lt;code&gt;prevent_destroy&lt;/code&gt; to true. &lt;/p&gt;

&lt;p&gt;This prevents any accidental deletion of the S3 bucket, since it will store the Terraform state file. In order to delete this bucket, you would need to disable the &lt;code&gt;prevent_destroy&lt;/code&gt; by passing its value as ‘false’. &lt;/p&gt;
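
&lt;p&gt;For example, to later allow a one-off deletion, you would temporarily flip the flag before running &lt;code&gt;terraform destroy&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "terraform_state" {
  bucket = "env0-terraform-state-bucket"
  lifecycle {
    prevent_destroy = false # temporarily disabled to allow deletion
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;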

&lt;h3&gt;
  
  
  &lt;strong&gt;Enable S3 Versioning and Encryption&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To make the most of the &lt;a href="https://www.env0.com/blog/terraform-remote-state-using-a-remote-backend" rel="noopener noreferrer"&gt;remote state&lt;/a&gt;, you can enable S3 bucket versioning for easy rollbacks and disaster recovery. By encrypting your bucket, you can also secure your state file from any unauthorized access risk and align it with compliance.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket_versioning" "state_bucket_versioning" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}
resource "aws_kms_key" "state_bucket_key" {
  description             = "This key is used to encrypt bucket objects"
  deletion_window_in_days = 10
}
resource "aws_s3_bucket_server_side_encryption_configuration" "state_bucket_encryption" {
  bucket = aws_s3_bucket.terraform_state.id
  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.state_bucket_key.arn
      sse_algorithm     = "aws:kms"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here, a KMS key is used to encrypt the S3 bucket. If the key is scheduled for deletion, it persists for 10 days (the &lt;code&gt;deletion_window_in_days&lt;/code&gt;), during which you can still restore it. &lt;/p&gt;

&lt;p&gt;The key point to remember is that since the bucket ID is referenced by both the &lt;code&gt;aws_s3_bucket_versioning&lt;/code&gt; and &lt;code&gt;aws_s3_bucket_server_side_encryption_configuration&lt;/code&gt; resources, there is an implicit dependency on the S3 bucket. &lt;/p&gt;

&lt;p&gt;This means that versioning and encryption resources are deployed only after the ‘env0-terraform-state-bucket’ bucket is created.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enable State Locking
&lt;/h3&gt;

&lt;p&gt;Next, define a DynamoDB in your &lt;strong&gt;main.tf&lt;/strong&gt; file, which allows the state-locking mechanism on your Terraform state file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_dynamodb_table" "terraform_locks" {
  name           = "terraform-locks"
  billing_mode   = "PAY_PER_REQUEST"
  hash_key       = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
  tags = {
    Name        = "Terraform Lock Table"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With the above DynamoDB table in place, only one team member at a time can run state-modifying Terraform operations such as &lt;code&gt;plan&lt;/code&gt; and &lt;code&gt;apply&lt;/code&gt; against your state file. This prevents the configuration conflicts and state corruption that simultaneous Terraform runs might cause.&lt;/p&gt;
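
&lt;p&gt;If a crashed run ever leaves a stale lock behind, Terraform can release it manually with &lt;code&gt;force-unlock&lt;/code&gt;. The lock ID below is a placeholder; use the ID printed in the lock error message:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform force-unlock &lt;LOCK_ID&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;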

&lt;h3&gt;
  
  
  &lt;strong&gt;Deploy Terraform Configuration&lt;/strong&gt; 
&lt;/h3&gt;

&lt;p&gt;Once your Terraform configuration is done, run the &lt;code&gt;terraform init&lt;/code&gt; command to initialize your provider plugins and modules:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  env0 git:(main) ✗ terraform init
Initializing the backend...
Initializing provider plugins...
- Installing hashicorp/aws v5.76.0...
- Installed hashicorp/aws v5.76.0 (signed by HashiCorp)

Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intend to make.
Terraform has been successfully initialized!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can review the deployment resources and their changes by running the &lt;code&gt;terraform plan&lt;/code&gt; command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  env0 git:(main) ✗ terraform plan                  
…
  + resource "aws_dynamodb_table" "terraform_locks" {
      + billing_mode     = "PAY_PER_REQUEST"
      + hash_key         = "LockID"
…
  + resource "aws_kms_key" "state_bucket_key" {
      + is_enabled                         = true
      + bypass_policy_lockout_safety_check = false
…
  + resource "aws_s3_bucket" "terraform_state" {
      + bucket                      = "env0-terraform-state-bucket"
      + force_destroy               = true
…
  + resource "aws_s3_bucket_server_side_encryption_configuration" "state_bucket_encryption" {
      + rule {
          + apply_server_side_encryption_by_default {
              + sse_algorithm     = "aws:kms"
…
  + resource "aws_s3_bucket_versioning" "state_bucket_versioning" {
          + status     = "Enabled"
…
Plan: 5 to add, 0 to change, 0 to destroy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After reviewing the planned changes, run the &lt;code&gt;terraform apply --auto-approve&lt;/code&gt; command. It prints the plan and then deploys your infrastructure without prompting for a manual ‘yes’ or ‘no’ confirmation.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  env0 git:(main) ✗ terraform apply --auto-approve  
  + resource "aws_dynamodb_table" "terraform_locks" {
…
  + resource "aws_kms_key" "state_bucket_key" {
…
  + resource "aws_s3_bucket" "terraform_state" {
…
  + resource "aws_s3_bucket_server_side_encryption_configuration" 
…
  + resource "aws_s3_bucket_versioning" "state_bucket_versioning" {
…
Plan: 5 to add, 0 to change, 0 to destroy.
aws_kms_key.state_bucket_key: Creating...
aws_dynamodb_table.terraform_locks: Creating...
aws_s3_bucket.terraform_state: Creating...
aws_kms_key.state_bucket_key: Creation complete after 1s [id=6d1b2022-7ae4-4175-acb4-0c010e0fe11f]
aws_s3_bucket.terraform_state: Creation complete after 4s [id=env0-terraform-state-bucket]
aws_s3_bucket_versioning.state_bucket_versioning: Creating...
aws_s3_bucket_server_side_encryption_configuration.state_bucket_encryption: Creating...
aws_s3_bucket_server_side_encryption_configuration.state_bucket_encryption: Creation complete after 1s [id=env0-terraform-state-bucket]
aws_s3_bucket_versioning.state_bucket_versioning: Creation complete after 2s [id=env0-terraform-state-bucket]
aws_dynamodb_table.terraform_locks: Creation complete after 8s [id=terraform-locks]
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can verify the creation of the S3 bucket and its configuration using the AWS console.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxwgomdmfyfmfsswy4b4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxwgomdmfyfmfsswy4b4.png" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Terraform Backend Configuration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Next, let’s configure the Terraform backend configuration in the &lt;strong&gt;main.tf&lt;/strong&gt; file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket         = "env0-terraform-state-bucket"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Make sure to hardcode the bucket name in the backend configuration, since backend blocks do not allow variables or resource references. &lt;/p&gt;

&lt;p&gt;Next, run the &lt;a href="https://www.env0.com/blog/terraform-init" rel="noopener noreferrer"&gt;&lt;code&gt;terraform init&lt;/code&gt;&lt;/a&gt; command to initialize the “s3” backend. If you instead pass the bucket name as an &lt;code&gt;aws_s3_bucket&lt;/code&gt; reference in your backend config, Terraform throws an error, like so:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  env0 git:(main) ✗ terraform init
Initializing the backend...
╷
│ Error: Variables not allowed
│ 
│   on iam_aws_1.tf line 67, in terraform:
│   67:     bucket         = aws_s3_bucket.terraform_state.id
│ 
│ Variables may not be used here.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
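
&lt;p&gt;If you do need the bucket name to vary between environments, Terraform supports partial backend configuration: leave the argument out of the backend block and supply it at init time instead. A minimal sketch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then initialize with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init -backend-config="bucket=env0-terraform-state-bucket"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;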

&lt;p&gt;When you run the &lt;code&gt;terraform init&lt;/code&gt; command with the hardcoded string values in the arguments for the Terraform backend configuration block, your output is like so:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  env0 git:(main) ✗ terraform init                
Initializing the backend...
Acquiring state lock. This may take a few moments...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the
  newly configured "s3" backend. No existing state was found in the newly
  configured "s3" backend. Do you want to copy this state to the new "s3"
  backend? Enter "yes" to copy and "no" to start with an empty state.
  Enter a value: no
Releasing state lock. This may take a few moments...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v5.76.0
Terraform has been successfully initialized!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here, when asked if you want to copy your local backend or start fresh, choose ‘no’. &lt;/p&gt;

&lt;p&gt;Remember, you deployed your S3 bucket and DynamoDB configurations using Terraform code, so a &lt;strong&gt;terraform.tfstate&lt;/strong&gt; file was created in your project directory by the default local backend. &lt;/p&gt;

&lt;p&gt;From now on, you want only the ‘s3’ backend to track your configuration changes, so you can dispose of the local backend. &lt;/p&gt;

&lt;p&gt;Since you chose to start with an empty state, your S3 bucket contains no state file yet and looks like so:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffp491p1qq1vwgawjaquy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffp491p1qq1vwgawjaquy.png" width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you run the &lt;code&gt;terraform apply&lt;/code&gt; command now, Terraform would try to recreate already existing resources, such as the S3 bucket and DynamoDB table, because the new empty ‘s3’ backend state does not track them. &lt;/p&gt;

&lt;p&gt;To avoid such resource duplication problems, you can either delete the Terraform code for your DynamoDB and S3 bucket or migrate your local backend to the remote ‘s3’ backend during the Terraform initialization.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Migrating State From Local to Remote Backend&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As a rule, it is recommended that teams use the &lt;a href="https://www.env0.com/blog/terraform-remote-state-using-a-remote-backend" rel="noopener noreferrer"&gt;remote backend&lt;/a&gt; to store their state file from the initial infrastructure provisioning onward. &lt;/p&gt;

&lt;p&gt;However, sometimes a local state file needs to be migrated, for example after a successful run or when the code is ready to be deployed to a dev environment. Let’s see how it can be done.&lt;/p&gt;

&lt;p&gt;Continuing the previous example, Terraform detects the existing state file, &lt;strong&gt;terraform.tfstate&lt;/strong&gt;, in your current working directory. Now, run the &lt;code&gt;terraform init&lt;/code&gt; command to initialize the newly configured backend. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  env0 git:(main) ✗ terraform init
Initializing the backend...
Acquiring state lock. This may take a few moments...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the
  newly configured "s3" backend. No existing state was found in the newly
  configured "s3" backend. Do you want to copy this state to the new "s3"
  backend? Enter "yes" to copy and "no" to start with an empty state.
  Enter a value: yes
Releasing state lock. This may take a few moments...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v5.76.0
Terraform has been successfully initialized!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above output, running the &lt;code&gt;terraform init&lt;/code&gt; command acquired a lock on the new backend and prompted you to choose whether to migrate your state file to the newly configured ‘s3’ remote backend. &lt;/p&gt;

&lt;p&gt;Choosing ‘yes’ copies your state file to the ‘s3’ backend and then releases the lock. &lt;/p&gt;

&lt;p&gt;Moving forward, the ‘s3’ backend will track your infrastructure changes. &lt;/p&gt;

&lt;p&gt;With that, your backend migration is complete, and your Terraform configuration has been successfully initialized. You can verify this using the AWS console that displays the new state file. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjye3nlypioy4e28g0w5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjye3nlypioy4e28g0w5.png" width="800" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also take a local backup of your state file by running the &lt;code&gt;terraform state pull &amp;gt; local-state.json&lt;/code&gt; command. It stores your state file in a &lt;strong&gt;local-state.json&lt;/strong&gt; file on your local machine.&lt;/p&gt;

&lt;p&gt;Remember, you should not modify your state file manually. This can cause state corruption or infrastructure inconsistencies.&lt;/p&gt;
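
&lt;p&gt;Instead of hand-editing, use Terraform’s state subcommands, which modify the state safely through the configured backend. The resource addresses below reuse the ones from this example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List all resources tracked in the state
terraform state list

# Inspect a single resource
terraform state show aws_s3_bucket.terraform_state

# Stop tracking a resource without deleting it in the cloud
terraform state rm aws_dynamodb_table.terraform_locks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;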

&lt;p&gt;For a DevOps engineer, manually configuring the Terraform backend, access control, versioning, and locking to ensure security, compliance, and backup recovery is toil that hinders the team’s productivity.&lt;/p&gt;

&lt;p&gt;After configuring your backend, every change must also be applied to your remote state file. This consumes bandwidth, leaves margin for error, and requires frequent checks. You may also need to occasionally switch between backends to run POCs against your current configuration. Tools such as env0 automate these processes more easily and efficiently, as we'll see below.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Terraform Backend with env0&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://env0.com" rel="noopener noreferrer"&gt;env0&lt;/a&gt; provides a remote backend  to facilitate secure and streamlined team collaboration, which creates a foundation for a unified deployment process across the organization and enables many other governance, automation, and visibility features. . &lt;/p&gt;

&lt;p&gt;If needed, the platform also offers a bring-your-own-bucket (BYOB) option for teams that prefer to manage their state files in their own environments.&lt;/p&gt;

&lt;p&gt;To use the env0 remote state, you can easily integrate your Terraform code with the env0 platform.&lt;/p&gt;

&lt;p&gt;All you need to do is enable ‘Use env0 remote backend’ while creating a new environment for integration. This allows env0 to take care of your Terraform remote backend configuration automatically with versioning, access controls, and a state locking mechanism in place. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filgx98ympqx3669xxhog.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filgx98ympqx3669xxhog.png" width="661" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the integration is done, you can find your remote state under ‘STATES’ in your env0 console, along with its creation time and ID. You can even download your state file as a &lt;strong&gt;.tfstate&lt;/strong&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3nlvq7loa2gj33eio73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3nlvq7loa2gj33eio73.png" width="800" height="184"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, team members can use their local setup to run the remote &lt;code&gt;plan&lt;/code&gt; and &lt;code&gt;apply&lt;/code&gt;. To enable this, navigate to ‘SETTINGS’ in your environment and check the ‘Allow Remote Apply’ box.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw3s43wpg3kes2emjgg2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw3s43wpg3kes2emjgg2.png" width="697" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s see how you can run the Terraform &lt;code&gt;apply&lt;/code&gt; from your local to remote. &lt;/p&gt;

&lt;p&gt;First, add the Terraform cloud configuration block in your Terraform code and create a new S3 resource: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  cloud {
    hostname    = "backend.api.env0.com"
    organization = "81b8f9f3-6542-417b-a2b8-e8120df3a2a2.4fab912f-8565-459b-b367-ed8a5a5fd933"
    workspaces {
    name = "env0-test-null-34440991"
    }
  }
}
resource "aws_s3_bucket" "env0_bucket_04" {
  bucket = "s03-tf-bucket"
  acl   = "private"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, log in to the remote backend with the remote state ID, and then run the &lt;code&gt;terraform init&lt;/code&gt; and &lt;code&gt;terraform apply --auto-approve&lt;/code&gt; commands.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  env0 git:(main) ✗  terraform apply --auto-approve
Running apply in HCP Terraform. Output will stream here. Pressing Ctrl-C
will cancel the remote apply if it's still pending. If the apply started it
will stop streaming the logs, but will not stop the apply running remotely.
Preparing the remote apply...
…
Remote Apply initialized successfully!
downloading and extracting local code
…
reading Terraform Variables...
Terraform Variables:
        vpc_config={
…
reading Environment Variables...
Environment Variables:
        ENV0_CLI_ARGS_APPLY=-auto-approve="true"
…
&amp;gt; /opt/tfenv/bin/terraform --version
Terraform v1.5.7
…
&amp;gt; /opt/tfenv/bin/terraform init
…
Terraform has been successfully initialized!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Go to the ‘DEPLOYMENT’ tab in the env0 console to check the status of your deployment. For more information on your deployment, check the ‘DEPLOYMENT-LOGS’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnncrduogipznx4m3zyfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnncrduogipznx4m3zyfk.png" width="800" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we’ve seen above, env0’s managed remote backend is enabled simply by checking ‘Use env0 Remote Backend’ in the UI, and everything just works. &lt;/p&gt;

&lt;p&gt;There’s no need to spend your time writing the infrastructure code or going through the trouble of creating and managing the remote backend, including backup, replication, high availability, encryption, and locking. &lt;/p&gt;

&lt;p&gt;Teams can expedite development cycles by triggering &lt;code&gt;plan&lt;/code&gt; (and even &lt;code&gt;apply&lt;/code&gt;) locally while executing them remotely, which lets local changes run against the shared backend and can be useful for quick debugging. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By now, you should have a clear understanding of Terraform backends and whether to use a local or a remote one. We looked at how to write Terraform backend configurations for AWS S3, Azure Blob Storage, and GCS. Additionally, you can now easily migrate your state file from a local backend to a remote one.&lt;/p&gt;

&lt;p&gt;Remote Terraform backends are the preferred option for organizations and large teams. They also enable features such as versioning, state locking, and encryption, allowing for a secure, compliant, and recoverable state file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Q. What is the backend of Terraform?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;A Terraform backend defines where your state file is stored and how Terraform operations run against it. It can be a public cloud storage service, your local machine, or Terraform Cloud.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Q. What is the default local backend in Terraform?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The default backend in Terraform is the local backend on your machine. When you initialize and apply your Terraform code without any backend configuration, the state is stored in your current working directory as a &lt;strong&gt;terraform.tfstate&lt;/strong&gt; file.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Q. What is the difference between Terraform backend remote and cloud?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The remote backend stores the state file on remote cloud storage services such as S3, GCS, or Azure Blob Storage, with a state locking mechanism. The cloud backend, by contrast, uses Terraform Cloud to store state and provides additional features such as remote execution, state versioning, and team collaboration capabilities.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Q. How to move the Terraform state file from one backend to another?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;To move your state file from one backend to another, update the backend configuration in your Terraform code and run the &lt;code&gt;terraform init -migrate-state&lt;/code&gt; command. When prompted, confirm the migration. Finally, verify in your storage provider’s console that the migration completed.&lt;/p&gt;
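
&lt;p&gt;For example, assuming a hypothetical move to a new GCS bucket (the bucket name and prefix below are placeholders), the workflow looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Update the backend block in your Terraform code
terraform {
  backend "gcs" {
    bucket = "new-state-bucket" # hypothetical target bucket
    prefix = "prod/terraform.tfstate"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 2. Re-initialize and migrate the existing state
terraform init -migrate-state
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;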

</description>
      <category>infrastructureascode</category>
      <category>devops</category>
      <category>terraform</category>
      <category>backend</category>
    </item>
    <item>
      <title>2024 Product Release Highlights</title>
      <dc:creator>env zero Team</dc:creator>
      <pubDate>Tue, 14 Jan 2025 14:04:00 +0000</pubDate>
      <link>https://forem.com/envzero/2024-product-release-highlights-2p9e</link>
      <guid>https://forem.com/envzero/2024-product-release-highlights-2p9e</guid>
      <description>&lt;p&gt;2024 has been a transformative year filled with impactful product announcements and new features—marking a major milestone in our journey to redefine cloud infrastructure automation and management.&lt;/p&gt;

&lt;p&gt;Importantly, this was the year we expanded env0’s capabilities past the “traditional trio” of IaC automation, collaboration, and governance—adding features to improve the monitoring and control of non-codified cloud resources.&lt;/p&gt;

&lt;p&gt;As we prepare for an even more exciting 2025, here’s a quick look at some of 2024’s key product releases:&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Cloud Compass: Cloud Asset Management&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This year, we introduced &lt;a href="https://docs.env0.com/docs/cloud-compass" rel="noopener noreferrer"&gt;Cloud Compass&lt;/a&gt;, a groundbreaking release that takes infrastructure management beyond IaC workflows. By combining Cloud Asset Management (CAM) with actionable insights, Cloud Compass provides enhanced visibility into all cloud resources—codified and non-codified alike. &lt;/p&gt;

&lt;p&gt;By monitoring all cloud assets, Cloud Compass detects and prioritizes risks, including inconsistencies from manual operations. It empowers organizations to codify unmanaged resources, aligning them with best practices and governance policies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3bzxwwuwdtjq71vksi2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3bzxwwuwdtjq71vksi2.png" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Drift Cause Analysis: Who, What, When, and Why&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.env0.com/blog/drift-cause-closing-the-loop-on-infrastructure-drift-management" rel="noopener noreferrer"&gt;Drift Cause&lt;/a&gt; is a breakthrough capability that delivers unprecedented insight into the origins of infrastructure drift. &lt;/p&gt;

&lt;p&gt;Built on the powerful foundation of Cloud Compass, Drift Cause enables env0 users to track configuration drifts back to their source, revealing the ‘who’, ‘when’, and ‘why’ behind the change that caused the discrepancy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvp4m7jw63o0lq757zhp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvp4m7jw63o0lq757zhp.png" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Environment Discovery: A Better Way to GitOps&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;By bridging Git workflows and env0, &lt;a href="https://docs.env0.com/docs/environment-discovery" rel="noopener noreferrer"&gt;Environment Discovery&lt;/a&gt; simplifies and improves how teams manage IaC resources. &lt;/p&gt;

&lt;p&gt;The feature automatically scans Git repositories, identifies IaC configurations, and enables the creation of environments in env0 directly from your codebase. This streamlines onboarding, reduces manual effort, and ensures that all environments align with organizational governance policies and workflows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4je336jxcn53pfv5qff.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4je336jxcn53pfv5qff.png" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Environment Outputs: Easy Dependency Management&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.env0.com/blog/environment-output-variables-easy-and-secure-output-piping%5C" rel="noopener noreferrer"&gt;Environment Output Variables&lt;/a&gt; enhance workflows by simplifying the management of complex dependencies between environments. &lt;/p&gt;

&lt;p&gt;This enables teams to securely share sensitive values as outputs, pipe data to dependent environments, and avoid the need for complex scripting or external data sources. By streamlining data sharing, Environment Output Variables improve efficiency and ensure consistency across infrastructure management.&lt;/p&gt;
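
&lt;p&gt;To illustrate the kind of manual glue this replaces, here is a minimal, hypothetical Python sketch of piping one environment’s Terraform outputs into another’s input variables by hand. The variable names are illustrative only; the JSON shape matches what &lt;code&gt;terraform output -json&lt;/code&gt; produces:&lt;/p&gt;

```python
import json

def outputs_to_tfvars(output_json):
    """Convert 'terraform output -json' text into a flat name/value map
    that could be fed to a dependent environment as input variables."""
    outputs = json.loads(output_json)
    return {name: entry["value"] for name, entry in outputs.items()}

# Payload in the shape produced by 'terraform output -json' for a
# hypothetical network stack
network_outputs = '{"vpc_id": {"value": "vpc-123", "sensitive": false}}'

print(outputs_to_tfvars(network_outputs))  # {'vpc_id': 'vpc-123'}
```

&lt;p&gt;With Environment Output Variables, this wiring happens inside env0 itself, with no custom scripts to maintain.&lt;/p&gt;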

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2z3cqlvmlum2o1hmw08u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2z3cqlvmlum2o1hmw08u.png" width="800" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Backstage Integration: Productivity AND Governance&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://docs.env0.com/docs/backstage" rel="noopener noreferrer"&gt;env0 Backstage plugin&lt;/a&gt; seamlessly integrates IaC management into Backstage, enabling developers to create, deploy, and manage cloud environments directly from their internal developer portal. &lt;/p&gt;

&lt;p&gt;This self-service capability enhances productivity by letting developers handle infrastructure tasks within their familiar workflows. Meanwhile, admins can maintain governance by configuring templates and policies that ensure deployments adhere to organizational standards.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vddz84aarg62c1m7f2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vddz84aarg62c1m7f2w.png" width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Ansible Support: Expanding IaC Tooling&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;With the addition of &lt;a href="https://docs.env0.com/docs/ansible" rel="noopener noreferrer"&gt;Ansible Support&lt;/a&gt;, env0 now enables teams to integrate configuration management workflows into their IaC strategies. This update provides greater flexibility for managing post-deployment configurations and provisioning resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59g2x3f86ntfctxldxcn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59g2x3f86ntfctxldxcn.png" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Self-Hosted Remote Backend: Enterprise Compliance&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://docs.env0.com/docs/self-hosted-remote-state" rel="noopener noreferrer"&gt;Self-Hosted Remote State&lt;/a&gt; feature provides organizations with full control over their Terraform state files by hosting them within their own infrastructure. &lt;/p&gt;

&lt;p&gt;This solution is ideal for enterprises with strict compliance or security requirements, as it ensures that sensitive state data remains within their control while leveraging env0’s orchestration capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Modules CI Testing: Safer Deployments&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Managing IaC modules at scale requires confidence that changes won’t introduce unexpected issues. With &lt;a href="https://docs.env0.com/docs/modules-continuous-integration-testing" rel="noopener noreferrer"&gt;Modules Continuous Integration (CI) Testing&lt;/a&gt;, teams can automate the validation of module changes before deployment. &lt;/p&gt;

&lt;p&gt;This feature ensures that updates are thoroughly tested within a controlled environment, helping to identify potential issues early and reduce risk in production environments.&lt;/p&gt;
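
&lt;p&gt;As a rough illustration of the idea (not env0’s actual implementation), a module CI gate first has to decide which modules a change touches. The sketch below maps changed file paths from a pull request to affected top-level module directories; the repository layout is hypothetical:&lt;/p&gt;

```python
from pathlib import PurePosixPath

def modules_to_test(changed_files, modules_root="modules"):
    """Map changed file paths to the top-level module directories that
    need re-validation before merge. Layout assumed: modules/NAME/*.tf."""
    affected = set()
    for path in changed_files:
        parts = PurePosixPath(path).parts
        # Only paths under the modules root that name a module directory
        if parts and parts[0] == modules_root and len(parts) != 1:
            affected.add(parts[1])
    return sorted(affected)

changed = [
    "modules/vpc/main.tf",
    "modules/vpc/variables.tf",
    "modules/s3/main.tf",
    "README.md",
]
print(modules_to_test(changed))  # ['s3', 'vpc']
```

&lt;p&gt;A CI job would then run validation (e.g., plan or test steps) only for the affected modules, keeping feedback fast at scale.&lt;/p&gt;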

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok8nj91n2hpa5fcmra6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok8nj91n2hpa5fcmra6q.png" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Variable Sets: Ensuring Consistency&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Managing environment variables is a critical part of IaC workflows, and &lt;a href="https://www.env0.com/blog/new-variable-sets-boost-efficiency-reduce-clutter" rel="noopener noreferrer"&gt;Variable Sets&lt;/a&gt; simplifies this by centralizing reusable variable groups.&lt;/p&gt;

&lt;p&gt;Teams can now define variables in one place, ensuring consistency across environments and reducing repetitive work. With Variable Sets, updates are easily applied across multiple environments, saving time and improving efficiency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8g4p3gahfak60ye9ava5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8g4p3gahfak60ye9ava5.png" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Other Honorable Mentions&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In 2024 we also introduced multiple features designed to enhance user experience, productivity, governance, and other aspects of our product.&lt;/p&gt;

&lt;p&gt;Here are some of those additional highlights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://docs.env0.com/docs/plan-and-apply-from-pr-comments" rel="noopener noreferrer"&gt;&lt;strong&gt;GitOps/Atlantis Workflow Enhancements&lt;/strong&gt;&lt;/a&gt;: Streamlines apply processes directly from pull requests, improving the speed and reliability of Git-based IaC workflows.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.env0.com/changelog/mix-agents-and-variables" rel="noopener noreferrer"&gt;&lt;strong&gt;Support for SaaS and Self-hosted Agents in One Organization&lt;/strong&gt;&lt;/a&gt;: Offers the flexibility to mix SaaS and self-hosted agents within a single organization to meet diverse operational and compliance needs.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.env0.com/docs/private-registry" rel="noopener noreferrer"&gt;&lt;strong&gt;Private Registry for Mono-repo Support&lt;/strong&gt;&lt;/a&gt;: Simplifies module sharing across mono-repos, making it easier to manage and deploy resources consistently.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.env0.com/docs/remote-backend" rel="noopener noreferrer"&gt;&lt;strong&gt;Remote Backend Support for OpenTofu&lt;/strong&gt;&lt;/a&gt;: Extends compatibility to the open-source IaC tool, ensuring smooth integration for diverse workflows.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://www.env0.com/blog/switch-from-terraform-cloud-in-minutes-with-our-new-migration-tool" rel="noopener noreferrer"&gt;&lt;strong&gt;TFC Migration Tool&lt;/strong&gt;&lt;/a&gt;: Speeds up transitions from Terraform Cloud to env0, minimizing downtime and effort.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.env0.com/docs/project-level-custom-flow" rel="noopener noreferrer"&gt;&lt;strong&gt;Layered Custom Flows and Approval Policies&lt;/strong&gt;&lt;/a&gt;: Adds advanced governance options for defining workflows and approvals at multiple levels.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://www.env0.com/blog/use-our-new-navigation-to-simplify-iac-project-management" rel="noopener noreferrer"&gt;&lt;strong&gt;Navigation Upgrades&lt;/strong&gt;&lt;/a&gt;: Improves user experience with a more intuitive project management interface.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.env0.com/changelog/move-environments-between-projects" rel="noopener noreferrer"&gt;&lt;strong&gt;Move Environments Between Projects&lt;/strong&gt;&lt;/a&gt;: Provides flexibility to reorganize environments as needed, aligning with evolving project structures.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.env0.com/changelog/custom-flows-for-tasks" rel="noopener noreferrer"&gt;&lt;strong&gt;Custom Flows in Ad-hoc Tasks&lt;/strong&gt;&lt;/a&gt;: Enables workflow customization for one-off operations, ensuring greater control and efficiency.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Road Ahead&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Building on the advancements made last year, our 2025 roadmap focuses on helping enterprises navigate the growing complexity of managing modern cloud environments.&lt;/p&gt;

&lt;p&gt;As infrastructure becomes more intricate and distributed, we have plans in place to further expand visibility, enable even more precise controls, and deliver intelligent solutions that improve how teams manage and scale their operations. &lt;/p&gt;

&lt;p&gt;Stay tuned!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>infrastructureascode</category>
      <category>news</category>
      <category>devresolutions2024</category>
    </item>
    <item>
      <title>New Backstage Plugin: Manage and Deploy IaC from Your Internal Developer Portal</title>
      <dc:creator>env zero Team</dc:creator>
      <pubDate>Tue, 07 Jan 2025 14:46:27 +0000</pubDate>
      <link>https://forem.com/envzero/new-backstage-plugin-manage-and-deploy-iac-from-your-internal-developer-portal-20n5</link>
      <guid>https://forem.com/envzero/new-backstage-plugin-manage-and-deploy-iac-from-your-internal-developer-portal-20n5</guid>
      <description>&lt;p&gt;Today we’re excited to announce our latest plugin which enables Backstage users to manage Infrastructure as Code (IaC) workflows directly within their developer platform. With this new official integration option, developers can deploy and manage IaC assets just like they manage application software and other resources, right from the familiar Backstage interface.&lt;/p&gt;

&lt;p&gt;The key promise of this integration is to enhance env0’s &lt;a href="https://www.env0.com/blog/mastering-managed-iac-self-service-the-complete-guide" rel="noopener noreferrer"&gt;self-service capabilities&lt;/a&gt;, a cornerstone of our platform. The new plugin makes all of env0’s self-service features easily accessible to teams using Backstage, empowering engineers to deploy infrastructure independently and at speed.&lt;/p&gt;

&lt;p&gt;From the point of view of infrastructure and platform engineering teams, this also enables full use of env0’s governance features such as &lt;a href="https://www.youtube.com/watch?v=LL5GHYG-fp0" rel="noopener noreferrer"&gt;runtime policies&lt;/a&gt; and &lt;a href="https://www.env0.com/blog/take-control-of-iac-costs-with-env0-free-whitepaper-inside" rel="noopener noreferrer"&gt;cost controls&lt;/a&gt;. Together, these help pave the golden path for IaC usage while upholding best practices, as well as security and compliance standards.&lt;/p&gt;

&lt;p&gt;Let’s take a closer look at some key use cases for this new integration:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For engineers deploying in a self-service model&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Streamlined IaC deployment flows with Backstage templates, in the context of the specific service or feature in development&lt;/li&gt;
&lt;li&gt;  A simple way to spin up ephemeral environments for debugging purposes&lt;/li&gt;
&lt;li&gt;  Easy discovery of IaC environments in the Backstage catalog&lt;/li&gt;
&lt;li&gt;  Quick access to detailed environment views, including deployment history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;For teams responsible for infrastructure and cloud operations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Empower teams to create and manage IaC environments directly from Backstage&lt;/li&gt;
&lt;li&gt;  Easy creation of custom Backstage templates tailored to organizational needs&lt;/li&gt;
&lt;li&gt;  Use env0 governance features to optimize cloud cost and uphold security, reliability, and compliance&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;The plugin is available in this &lt;a href="http://github.com/env0/env0-backstage-plugin" rel="noopener noreferrer"&gt;public repository&lt;/a&gt; and enables managing environments in env0 directly within Backstage. Once installed, the plugin provides several capabilities, as described below. &lt;/p&gt;

&lt;h3&gt;
  
  
  Create New Environments Using Backstage Templates
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://env0.com" rel="noopener noreferrer"&gt;env0&lt;/a&gt; plugin enables users to create environments using templates preconfigured by Backstage admin. To streamline the deployment flow, admins can customize a series of form questions to define the required inputs, such as environment names and variable values. This helps users deploy environments with ease, through a structured and easy-to-follow process.&lt;/p&gt;

&lt;p&gt;Once submitted, env0 handles the deployment behind the scenes, and the environment is automatically registered in Backstage for tracking and management.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.prod.website-files.com%2F63eb9bf7fa9e2724829607c1%2F677d2c4cded55635d035261c_677d2bf3b28df8627d477720_image2%252520%281%29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.prod.website-files.com%2F63eb9bf7fa9e2724829607c1%2F677d2c4cded55635d035261c_677d2bf3b28df8627d477720_image2%252520%281%29.png" width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Creating an env0 environment using a Backstage form&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Importantly, within this process, sensitive variables will be flagged with a warning icon, ensuring that they can be clearly identified and handled in accordance with their respective privacy policies. &lt;/p&gt;

&lt;h3&gt;
  
  
  Search, Filter, and Browse Environments
&lt;/h3&gt;

&lt;p&gt;All environments deployed through env0 are listed in the Backstage catalog. This enables users to browse through environments using filters (e.g., tags or owners) and search queries. Doing so makes it easy to locate specific environments and manage them alongside other resources.&lt;/p&gt;
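
&lt;p&gt;The filtering described above can be sketched generically. The snippet below is an illustrative Python stand-in, not the plugin’s actual schema or API; the &lt;code&gt;owner&lt;/code&gt; and &lt;code&gt;tags&lt;/code&gt; fields are hypothetical:&lt;/p&gt;

```python
def filter_environments(environments, owner=None, tag=None):
    """Return the environments matching an optional owner and/or tag,
    catalog-style. Field names here are illustrative only."""
    matches = []
    for env in environments:
        if owner is not None and env.get("owner") != owner:
            continue
        if tag is not None and tag not in env.get("tags", []):
            continue
        matches.append(env)
    return matches

catalog = [
    {"name": "staging-vpc", "owner": "platform", "tags": ["aws", "network"]},
    {"name": "dev-db", "owner": "backend", "tags": ["aws", "database"]},
]
print(filter_environments(catalog, tag="network"))  # only "staging-vpc"
```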

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff11sme8ht7wwtt5am8ly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff11sme8ht7wwtt5am8ly.png" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Viewing and managing env0 environments within the Backstage catalog&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Access Detailed Environment Views
&lt;/h3&gt;

&lt;p&gt;To access detailed environment information, users can start with the Overview tab, where they see the environment’s status and general information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3f6t76qwqsoiqr453gcm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3f6t76qwqsoiqr453gcm.png" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Environment details with status, drift information, and VCS data&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;They can then navigate to the env0 tab to review deployment history, including statuses, timestamps, and resource changes for each deployment, and check for any errors.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjodaa6gl8wk0j2xsctdl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjodaa6gl8wk0j2xsctdl.png" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Overview of deployment history, including statuses and timestamps&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If infrastructure adjustments or fixes are needed, users can update their previous form answers in the env0 tab and redeploy to apply the changes to the environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6v2b2of752n9fs59m78b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6v2b2of752n9fs59m78b.png" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Redeployment with updated variables in the env0 tab&lt;/em&gt;‍&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Today's release addresses popular demand from our customers and improves on the custom integrations some of them built using the env0 API. Now, with an official plugin, all customers have access to a fully maintained solution, eliminating the effort and complexities of custom development.&lt;/p&gt;

&lt;p&gt;The new plugin allows admins to ensure compliance and uphold best practices, while developers gain autonomy to manage IaC workflows in a self-service model—creating environments, viewing deployment history, and monitoring infrastructure—all within the familiar Backstage interface.&lt;/p&gt;

&lt;p&gt;Want to learn more? &lt;a href="https://www.env0.com/demo-request" rel="noopener noreferrer"&gt;Schedule a technical demo&lt;/a&gt; to see env0 in action.&lt;/p&gt;

</description>
      <category>developer</category>
      <category>devops</category>
      <category>infrastructureascode</category>
      <category>backstage</category>
    </item>
  </channel>
</rss>
