<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: CloudSkills.io</title>
    <description>The latest articles on Forem by CloudSkills.io (@cloudskills).</description>
    <link>https://forem.com/cloudskills</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F1724%2F61fd8b35-8374-4a5a-9845-b7c01eeee56f.png</url>
      <title>Forem: CloudSkills.io</title>
      <link>https://forem.com/cloudskills</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/cloudskills"/>
    <language>en</language>
    <item>
      <title>The Beginners Guide to Running Docker Containers on AWS</title>
      <dc:creator>TeamCloudSkills</dc:creator>
      <pubDate>Sat, 06 Mar 2021 18:50:05 +0000</pubDate>
      <link>https://forem.com/cloudskills/the-beginners-guide-to-running-docker-containers-on-aws-2m7n</link>
      <guid>https://forem.com/cloudskills/the-beginners-guide-to-running-docker-containers-on-aws-2m7n</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/lO2wU2rcGUw"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Learning the basics of AWS and Docker containers?&lt;/p&gt;

&lt;p&gt;In this episode, we explain the basics of Docker and how to run containers on AWS.&lt;/p&gt;

&lt;p&gt;This is a one-hour, project-focused tutorial from our AWS Certified Solutions Architect training.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>techtalks</category>
    </item>
    <item>
      <title>AWS Quick Tip: Working with EC2 and IAM Roles</title>
      <dc:creator>TeamCloudSkills</dc:creator>
      <pubDate>Sat, 06 Mar 2021 18:40:12 +0000</pubDate>
      <link>https://forem.com/cloudskills/aws-quick-tip-working-with-ec2-and-iam-roles-4gdc</link>
      <guid>https://forem.com/cloudskills/aws-quick-tip-working-with-ec2-and-iam-roles-4gdc</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/lNzfPp2UYio"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Allowing your applications to access cloud services on your behalf can sometimes be tricky. In AWS, you can take advantage of IAM roles to make this a lot easier.&lt;/p&gt;

&lt;p&gt;Working with IAM roles and delegating service access to applications is a core concept when it comes to dealing with AWS.&lt;/p&gt;

&lt;p&gt;In this AWS Quick Tip, CloudSkills Author Michael Levan explains how this works. &lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>AKS Virtual Nodes with Chad Crowell</title>
      <dc:creator>TeamCloudSkills</dc:creator>
      <pubDate>Sat, 06 Mar 2021 18:37:41 +0000</pubDate>
      <link>https://forem.com/cloudskills/aks-virtual-nodes-with-chad-crowell-2jkf</link>
      <guid>https://forem.com/cloudskills/aks-virtual-nodes-with-chad-crowell-2jkf</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Mn56W-D-A-0"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Working with Azure Kubernetes Service (AKS)?&lt;/p&gt;

&lt;p&gt;If you're in Azure and planning to work with Kubernetes, definitely check out how to scale with virtual nodes and get your billing under control.&lt;/p&gt;

&lt;p&gt;This 13-minute YouTube video is by CloudSkills Author Chad Crowell.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Packer and Terraform with Immutable Infrastructure</title>
      <dc:creator>TeamCloudSkills</dc:creator>
      <pubDate>Tue, 02 Feb 2021 15:00:32 +0000</pubDate>
      <link>https://forem.com/cloudskills/packer-and-terraform-with-immutable-infrastructure-47ja</link>
      <guid>https://forem.com/cloudskills/packer-and-terraform-with-immutable-infrastructure-47ja</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/w0gv-Tw6698"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Terraform is red hot right now, and you probably know that it's an open-source Infrastructure as Code (IaC) solution.&lt;/p&gt;

&lt;p&gt;OK, but what is Packer?&lt;/p&gt;

&lt;p&gt;Well, Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration. It's lightweight, runs on every major operating system, and is highly performant, creating machine images for multiple platforms in parallel.&lt;/p&gt;

&lt;p&gt;In this CloudSkills Community Call replay, Luke Orellana shows you how to get started with Packer, Terraform, and Immutable Infrastructure.&lt;/p&gt;

&lt;p&gt;Resources from this episode:&lt;/p&gt;

&lt;p&gt;HashiCorp Learn:&lt;br&gt;
&lt;a href="https://learn.hashicorp.com/terraform"&gt;https://learn.hashicorp.com/terraform&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Introduction to Packer&lt;br&gt;
&lt;a href="https://www.packer.io/intro"&gt;https://www.packer.io/intro&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Learning Application Security with Tanya Janca</title>
      <dc:creator>TeamCloudSkills</dc:creator>
      <pubDate>Tue, 02 Feb 2021 12:10:54 +0000</pubDate>
      <link>https://forem.com/cloudskills/learning-application-security-with-tanya-janca-4ki7</link>
      <guid>https://forem.com/cloudskills/learning-application-security-with-tanya-janca-4ki7</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/3t4tXDBVQM4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Tanya Janca, who was on the podcast last year, is back! Since the last time we talked, she’s been very busy founding her own company, We Hack Purple. Tanya’s an expert on App Security, and through her company, she’s sharing those skills with online on-demand courses.&lt;/p&gt;

&lt;p&gt;In this episode, we talk about…&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tanya’s new community centered around her online academy, which has been a positive and kind space for discussing related topics.&lt;/li&gt;
&lt;li&gt;Why, if you’re a software developer, you should learn about app security and have a strategy to talk to management to get what you want and legitimize your job.&lt;/li&gt;
&lt;li&gt;Tanya’s book and how she teaches each lesson within it in several different ways to make it accessible for various learning styles.&lt;/li&gt;
&lt;li&gt;Tanya’s current free mini-course offering about scaling your security program and security team.&lt;/li&gt;
&lt;li&gt;The best career advice Tanya has ever gotten.&lt;/li&gt;
&lt;li&gt;The importance of mentors in everyone’s careers and why it’s very important to build a community.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Resources from this episode:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follow Tanya on &lt;a href="https://twitter.com/shehackspurple"&gt;Twitter&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Check out the &lt;a href="https://store.wehackpurple.com/"&gt;We Hack Purple Academy&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Read Tanya's new book: &lt;a href="https://www.amazon.com/Alice-Bob-Learn-Application-Security/dp/1119687357"&gt;Alice and Bob Learn Application Security&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Join the &lt;a href="https://newsletter.wehackpurple.com/"&gt;We Hack Purple Newsletter&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
    </item>
    <item>
      <title>Azure Automation: Managing Runbook Authentication and Modules</title>
      <dc:creator>TeamCloudSkills</dc:creator>
      <pubDate>Tue, 02 Feb 2021 11:16:28 +0000</pubDate>
      <link>https://forem.com/cloudskills/azure-automation-managing-runbook-authentication-and-modules-4bd2</link>
      <guid>https://forem.com/cloudskills/azure-automation-managing-runbook-authentication-and-modules-4bd2</guid>
      <description>&lt;p&gt;In my previous post, we took a look at &lt;a href="https://cloudskills.io/blog/azure-automation-runbook"&gt;creating your first Azure Automation PowerShell runbook&lt;/a&gt;. We set up the Azure Automation account, authored a PowerShell runbook, and incorporated parameters and variable assets. The next step is to understand how we can access and manage Azure resources from our runbooks.&lt;/p&gt;

&lt;p&gt;In this guide, you will learn how Azure Automation can authenticate and access Azure resources. We will also take a look at importing PowerShell modules to add cmdlets to our runbooks. When you're finished, you'll have the skills to elevate your runbooks to the next level.&lt;/p&gt;

&lt;h2&gt;
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin this guide, you'll need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure tenant and subscription&lt;/li&gt;
&lt;li&gt;Administrator account with sufficient permissions on a subscription, such as Owner, or a role containing Microsoft.Automation resource authorization&lt;/li&gt;
&lt;li&gt;PowerShell knowledge&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  Azure Automation Run As Account
&lt;/h2&gt;

&lt;p&gt;When I created the Azure Automation account in the first article, I enabled the option &lt;a href="https://cloudskills.io/blog/azure-automation-runbook#create-an-azure-automation-account"&gt;to create an Azure Run As account&lt;/a&gt;. By enabling this option, Azure will automatically create an Azure AD application. You can use this application identity to authenticate to an Azure subscription to access and manage resources. During the Run As account creation, Azure performs several other functions such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adding a self-signed certificate to the application account.&lt;/li&gt;
&lt;li&gt;Creating a service principal identity for the application.&lt;/li&gt;
&lt;li&gt;Assigning the &lt;strong&gt;Contributor&lt;/strong&gt; role for the account in the current subscription.&lt;/li&gt;
&lt;li&gt;Creating Automation connection assets named &lt;strong&gt;AzureRunAsCertificate&lt;/strong&gt; and &lt;strong&gt;AzureRunAsConnection&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to view more information about the Run As account, from the Automation Account resource page, navigate to &lt;strong&gt;Account Settings &amp;gt; Run as accounts&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--opYOhQmv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yohirx566xvpc15c3iwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--opYOhQmv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yohirx566xvpc15c3iwq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can view the account's properties in the portal, including the certificate thumbprint, the Azure AD application information, service principal ID, role assignments, and runbooks utilizing the account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--isRydeN9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nf2l67s2r4dikyc5xoa3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--isRydeN9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nf2l67s2r4dikyc5xoa3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the top of the properties page, note an action named &lt;strong&gt;Renew Certificate&lt;/strong&gt;. The account's certificate is only valid for one year, so be sure to renew before expiration to keep your runbooks from failing.&lt;/p&gt;

&lt;p&gt;Along with the Run As account, Azure will create two connection assets: &lt;strong&gt;AzureRunAsCertificate&lt;/strong&gt; and &lt;strong&gt;AzureRunAsConnection&lt;/strong&gt;. The certificate asset authenticates to Azure so the runbook can manage Azure Resource Manager resources. The connection asset contains the application ID, tenant ID, subscription ID, and certificate thumbprint. Basically, everything you need to connect to Azure to start managing resources! In an upcoming example, I'll use this connection to connect to Azure and to retrieve some resources.&lt;/p&gt;

&lt;h2&gt;
  Automation Account Modules
&lt;/h2&gt;

&lt;p&gt;Before I create a new runbook to manage my Azure resources, I need to make sure I have the right PowerShell commands available. When you create a new Automation Account, Azure will automatically import some PowerShell modules for you, such as AzureRM.Automation, AzureRM.Compute, and AzureRM.Resources. However, these modules are part of the older AzureRM PowerShell module, which Microsoft is no longer developing. Microsoft has replaced this module with the newer Az PowerShell module.&lt;/p&gt;

&lt;p&gt;Luckily, I'm not stuck using the older modules in my PowerShell runbooks. I can import the new Az modules into the Automation Account for use with my PowerShell code. From the Automation Account, navigate to &lt;strong&gt;Shared Resources &amp;gt; Modules gallery&lt;/strong&gt;. The first module I need to import is the &lt;em&gt;Az.Accounts&lt;/em&gt; module so I can use the &lt;em&gt;Connect-AzAccount&lt;/em&gt; cmdlet in my runbook. Search for "Az.Accounts", then select the resulting module.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2EL8o6i2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tnn6cv9tlsmeyyl4ws8c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2EL8o6i2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tnn6cv9tlsmeyyl4ws8c.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the module page, I can search the module's cmdlets and functions to verify it has what I need. From here, I can select the &lt;strong&gt;Import&lt;/strong&gt; action at the top. Once the import is successful, I need to navigate back to the &lt;strong&gt;Modules gallery&lt;/strong&gt; and perform the same steps for the &lt;em&gt;Az.Resources&lt;/em&gt; module so I can use the &lt;em&gt;Get-AzResourceGroup&lt;/em&gt; cmdlet in my runbook.&lt;/p&gt;

&lt;p&gt;The imports can take a few minutes, but you can verify the status by navigating back to &lt;strong&gt;Shared Resources &amp;gt; Modules&lt;/strong&gt;. In this list of modules, verify the module import progress in the &lt;strong&gt;Status&lt;/strong&gt; column. From this &lt;strong&gt;Modules&lt;/strong&gt; page, also note there is an option to import a custom module. Select &lt;strong&gt;+ Add a module&lt;/strong&gt; and choose a .zip file that contains the module code. Note that the module code's file name must match the file name of the zip file. Importing custom-written modules is a fantastic feature of Azure Automation that lets you write a runbook to fit any scenario.&lt;/p&gt;
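&lt;p&gt;If you prefer scripting the import instead of clicking through the portal, the &lt;em&gt;Az.Automation&lt;/em&gt; module provides the &lt;em&gt;New-AzAutomationModule&lt;/em&gt; cmdlet. Here is a minimal sketch, assuming the example account and resource group names used elsewhere in this series and a PowerShell Gallery download link for the module:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;# Sketch only: the account, resource group, and link below are example values
New-AzAutomationModule -ResourceGroupName 'azacct-rg' `
    -AutomationAccountName 'aa-cloudskills-prod-westus-001' `
    -Name 'Az.Accounts' `
    -ContentLinkUri 'https://www.powershellgallery.com/api/v2/package/Az.Accounts'
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;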

&lt;h2&gt;
  Connect to Azure from PowerShell Runbook
&lt;/h2&gt;

&lt;p&gt;Back in the Automation Account, I will create a new runbook that will connect to Azure and retrieve my resource groups. In the Automation Account, navigate to &lt;strong&gt;Process Automation &amp;gt; Runbooks&lt;/strong&gt;, then select &lt;strong&gt;+ Create a runbook&lt;/strong&gt;. From here, input a name for the runbook, select the PowerShell runbook type, then select &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;Edit PowerShell Runbook&lt;/strong&gt; window, I need to write code that will retrieve the information stored in &lt;strong&gt;AzureRunAsConnection&lt;/strong&gt;. For this, I use the &lt;em&gt;Get-AutomationConnection&lt;/em&gt; cmdlet and specify the name of the Run As connection. If I store the connection information to a variable, I can reference the tenant ID, application ID, and certificate thumbprint in the &lt;em&gt;Connect-AzAccount&lt;/em&gt; cmdlet to authenticate to my Azure tenant.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$connection&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Get-AutomationConnection&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;AzureRunAsConnection&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;Connect-AzAccount&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-ServicePrincipal&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;`
&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;-Tenant&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$connection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;TenantID&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;`
&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;-ApplicationId&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$connection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ApplicationID&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;`
&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;-CertificateThumbprint&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$connection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CertificateThumbprint&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you do not remember the cmdlet to retrieve the Run As connection asset, remember you can expand &lt;em&gt;Assets&lt;/em&gt; on the left to view saved connections and other assets. Selecting &lt;em&gt;Add to canvas&lt;/em&gt; will insert the necessary PowerShell code.&lt;/p&gt;

&lt;p&gt;Once the runbook connects, I can now use other Az cmdlets to work with Azure resources. Since I imported the &lt;em&gt;Az.Resources&lt;/em&gt; module, I can retrieve all my resource groups using the &lt;em&gt;Get-AzResourceGroup&lt;/em&gt; cmdlet. &lt;/p&gt;
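&lt;p&gt;As a sketch, the retrieval step appended after the connection code might look like this (assuming the connection code above has already run and the &lt;em&gt;Az.Resources&lt;/em&gt; module is imported):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;# Write each resource group's name and location to the job output
Get-AzResourceGroup | ForEach-Object {
    Write-Output "$($_.ResourceGroupName) - $($_.Location)"
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;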

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eB8kqUB3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nrbem9o06kz1bujhjixc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eB8kqUB3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nrbem9o06kz1bujhjixc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here, I will save the runbook code, publish, and then execute it. When I view the runbook job status, the &lt;strong&gt;Output&lt;/strong&gt; tab will show the runbook successfully authenticating to Azure. It will then start displaying my resource groups.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V4o6lLpt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a3fvfg6bwho7n0k75s7m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V4o6lLpt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a3fvfg6bwho7n0k75s7m.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Retrieving resource groups is only the beginning of what is possible. Imagine what other tasks you could automate within a runbook, such as taking snapshots of virtual machine disks, turning off virtual machines, or automatically resizing resources based on a schedule.&lt;/p&gt;
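&lt;p&gt;As one hedged example, a nightly shutdown runbook could reuse the same connection pattern and then stop machines. This sketch assumes the &lt;em&gt;Az.Compute&lt;/em&gt; module has also been imported into the Automation Account; the resource group name is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;# Assumes Connect-AzAccount has already run earlier in this runbook
Get-AzVM -ResourceGroupName 'azacct-rg' | ForEach-Object {
    # -Force skips the confirmation prompt so the runbook does not hang
    Stop-AzVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Force
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;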

&lt;h2&gt;
  Managing Runbook Authentication without Run As Accounts
&lt;/h2&gt;

&lt;p&gt;You might be asking, "If I don't create a Run As account with the Automation Account, how can I authenticate to Azure?". There are a few options for managing authentication in this scenario, which I outline in the following sections. Remember, if you choose not to use the built-in Run As account, you will need to ensure that whichever account you use has permissions to the Azure resources the script is trying to access. The built-in Run As account accomplishes this by being a Contributor at the subscription level, but you may want to apply more granular permissions.&lt;/p&gt;

&lt;h3&gt;
  Create the Run As Account
&lt;/h3&gt;

&lt;p&gt;I can still enable the Run As account after the Automation Account has been created. In the account, navigate to &lt;strong&gt;Account Settings &amp;gt; Run as accounts&lt;/strong&gt;. Previously in this screen, we viewed the existing account, but I can also create one if it doesn't exist already. No additional inputs or settings are needed to set up the account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oulDmORF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d7bq3ett2o7czkghxdia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oulDmORF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d7bq3ett2o7czkghxdia.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  Create a Service Principal
&lt;/h3&gt;

&lt;p&gt;Applications use a service principal to access resources secured by Azure AD. The service principal represents the application inside the tenant, and you can assign it permissions to resources (just like a user account).&lt;/p&gt;

&lt;p&gt;To create a service principal, navigate to Azure Active Directory in the Azure portal. In the &lt;strong&gt;Manage&lt;/strong&gt; section, select &lt;strong&gt;App registrations&lt;/strong&gt;. Select &lt;strong&gt;+ New registration&lt;/strong&gt;, then input the application's name, supported account type (for now, leave the default), and redirect URI (this can be blank).&lt;/p&gt;

&lt;p&gt;After you create the app registration, on the &lt;strong&gt;Overview&lt;/strong&gt; page, take note of the application ID and tenant ID. You will need this information later to create a new connection or credentials asset (see the following sections). In the &lt;strong&gt;Manage&lt;/strong&gt; section, you can navigate to &lt;strong&gt;Certificates &amp;amp; secrets&lt;/strong&gt; to upload a certificate or generate a secret (essentially a password for the account). You can then use either of these to authenticate to Azure.&lt;/p&gt;

&lt;h3&gt;
  Create a New Connection Asset
&lt;/h3&gt;

&lt;p&gt;You can create your own connection asset that includes an ApplicationId, TenantId, Certificate Thumbprint, and Subscription Id. If you created a service principal in the previous section, you could use it for this purpose.&lt;/p&gt;

&lt;p&gt;Create a connection asset in the Automation Account by navigating to &lt;strong&gt;Shared Resources &amp;gt; Connections&lt;/strong&gt;, then select &lt;strong&gt;+ Add a connection&lt;/strong&gt;. Give a name to the connection asset, select &lt;em&gt;AzureServicePrincipal&lt;/em&gt; as the type, and enter the required information from the service principal. Once the connection is created, I can use it just like the &lt;em&gt;AzureRunAsConnection&lt;/em&gt; asset in my PowerShell code from earlier.&lt;/p&gt;

&lt;h3&gt;
  Create a Credentials Asset
&lt;/h3&gt;

&lt;p&gt;Finally, you can store security credentials as a shared resource. The credentials asset includes a username and password, and you can use these on cmdlets that accept a PSCredential object. In the Automation Account, navigate to &lt;strong&gt;Shared Resources &amp;gt; Credentials&lt;/strong&gt; and select &lt;strong&gt;+ Add a credential&lt;/strong&gt;. Name the credential and enter the username and password.&lt;/p&gt;

&lt;p&gt;Once you create the asset, you can retrieve the credentials using &lt;em&gt;Get-AutomationPSCredential&lt;/em&gt; and store them in a variable. You then pass the credentials to the &lt;em&gt;Connect-AzAccount&lt;/em&gt; cmdlet, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$creds&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Get-AutomationPSCredential&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'{CredentialAssetName}'&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;Connect-AzAccount&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Credentials&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$creds&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don't forget, you can use the Asset explorer on the left to input the correct code and asset name into your script.&lt;/p&gt;

&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, you learned how to configure authentication so your runbooks can access Azure resources. Being able to authenticate to Azure without storing usernames and passwords in the runbook code is a security best practice. Now that you can connect to Azure, start thinking about what tasks you can automate using runbooks. You will need to be sure to import any modules required by your code.&lt;/p&gt;

&lt;p&gt;Check back soon for my next post on Azure Automation where I will show how to configure a hybrid worker so you can execute a runbook anywhere.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Jeff Brown&lt;/strong&gt; is a Systems Engineer and Cloud Administrator with over a decade of experience in server and application administration. In his career, he has managed a wide range of technologies including Windows Server, Exchange Server, Skype for Business, Azure, and Microsoft 365. Jeff enjoys writing about technology topics and creating content for the community. You can find more of his content over at &lt;a href="https://jeffbrown.tech/"&gt;jeffbrown.tech&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
    </item>
    <item>
      <title>Azure Automation: Creating a PowerShell Runbook</title>
      <dc:creator>TeamCloudSkills</dc:creator>
      <pubDate>Tue, 02 Feb 2021 11:08:50 +0000</pubDate>
      <link>https://forem.com/cloudskills/azure-automation-creating-a-powershell-runbook-fn8</link>
      <guid>https://forem.com/cloudskills/azure-automation-creating-a-powershell-runbook-fn8</guid>
      <description>&lt;p&gt;Azure Automation is a cloud-based automation and configuration service that you can use for process automation through runbooks. You can author runbooks using a graphical interface or in PowerShell or Python programming languages. Think of these runbooks as replacing scripts you have scheduled to run on a server. &lt;/p&gt;

&lt;p&gt;In this guide, you will set up an Azure Automation Account and deploy your first PowerShell runbook. When you're finished, you will have the necessary skills to get started deploying runbooks in your Azure tenant.&lt;/p&gt;

&lt;h2&gt;
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin this guide, you'll need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure tenant and subscription&lt;/li&gt;
&lt;li&gt;Administrator account with sufficient permissions on a subscription, such as Owner or Contributor&lt;/li&gt;
&lt;li&gt;PowerShell knowledge&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  Create an Azure Automation Account
&lt;/h2&gt;

&lt;p&gt;Before creating your first runbook, you need to create an Azure Automation Account. This account is responsible for executing runbooks and authenticating to Azure resources required by the runbook. The account groups together Automation resources, runbooks, and configuration settings. You can create multiple accounts to separate their functionality, such as accounts for development and production.&lt;/p&gt;

&lt;p&gt;To get started creating your first Azure Automation Account, log into the Azure portal at &lt;a href="https://portal.azure.com"&gt;https://portal.azure.com&lt;/a&gt;. In the Search bar, enter &lt;strong&gt;Azure Automation&lt;/strong&gt;, and select &lt;strong&gt;Automation Accounts&lt;/strong&gt; from the results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ikmj3oWk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/srostou7qzd7yaz7utkj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ikmj3oWk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/srostou7qzd7yaz7utkj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here on the Automation Accounts resource page, you can view and manage any existing Automation Accounts. Click on &lt;strong&gt;+ Add&lt;/strong&gt; to create your Automation Account. In the &lt;strong&gt;Add Automation Account&lt;/strong&gt; page, you need to define some information for your account:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt;: Enter a descriptive name for the account. Following Microsoft recommendations, I will name mine based on the resource type, its purpose, environment, Azure region, and instance. For example, aa-cloudskills-prod-westus-001.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subscription&lt;/strong&gt;: Select a valid Azure subscription.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Group&lt;/strong&gt;: Select an existing resource group or create a new one. For this demo, I am creating a new resource group named &lt;em&gt;azacct-rg&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location&lt;/strong&gt;: Select a location to host the Automation Account.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Azure Run As account&lt;/strong&gt;: Enabling this option will automatically create an Azure Run As account for authenticating to other Azure resources. For now, configure this to &lt;strong&gt;Yes&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6KOvB1rk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/scoop4m2s46le7m0ttjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6KOvB1rk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/scoop4m2s46le7m0ttjc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have entered the account information, click on &lt;strong&gt;Create&lt;/strong&gt;. When Azure creates the account successfully, select it in the Automation Account list (this may require a refresh before it appears).&lt;/p&gt;
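&lt;p&gt;If you would rather script this step, the &lt;em&gt;Az.Automation&lt;/em&gt; module offers &lt;em&gt;New-AzAutomationAccount&lt;/em&gt;. Here is a minimal sketch using the example names from this guide; note that, unlike the portal option, this cmdlet does not create a Run As account for you:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;# Example names from this guide; substitute your own
New-AzResourceGroup -Name 'azacct-rg' -Location 'westus'

New-AzAutomationAccount -ResourceGroupName 'azacct-rg' `
    -Name 'aa-cloudskills-prod-westus-001' `
    -Location 'westus'
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;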

&lt;h2&gt;
  Create a PowerShell Runbook
&lt;/h2&gt;

&lt;p&gt;Now that the Automation Account has been created, you can author the runbook that hosts your PowerShell code. Using the left menu in the Automation Account resource, scroll down to &lt;strong&gt;Process Automation&lt;/strong&gt; and select &lt;strong&gt;Runbooks&lt;/strong&gt;. Here you will see some examples of each type of runbook you can create: Graphical, Python, and PowerShell. You can view each of these runbooks to learn how to perform different actions in runbooks, such as using variables or connecting to Azure resources. You can also import a runbook or browse the PowerShell Gallery and Azure Automation GitHub organization for resources created by Microsoft and the community.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tsrp-ZiQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jiwnw9donvjrfppseuk5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tsrp-ZiQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jiwnw9donvjrfppseuk5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's get started creating a runbook by selecting &lt;strong&gt;Create a runbook&lt;/strong&gt;. In the form, enter a runbook name, select the runbook type, and enter a description. This demo is using a PowerShell runbook type. Once you have entered all the runbook information, select the &lt;strong&gt;Create&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dVhDiiMO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/v4nskiczopsj7b5hy8l1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dVhDiiMO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/v4nskiczopsj7b5hy8l1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After Azure creates the runbook, it should redirect you to the &lt;strong&gt;Edit PowerShell Runbook&lt;/strong&gt; page. This page is where you can enter the PowerShell code that the Automation Account executes. The menu actions include the ability to save the runbook, publish a new version of the runbook, revert to a previously published version, or run a test of the runbook. On the left, you can view the modules and cmdlets available to use in the runbook, import references to other runbooks, or view assets that you can use in the script, such as variables or certificates for authentication.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mLOGQC8C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sx1fdyb9sgsh6mvfd2ng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mLOGQC8C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sx1fdyb9sgsh6mvfd2ng.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this runbook, I am keeping the code simple and displaying the phrase "Hello, Azure Runbooks!" to the console.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="s2"&gt;"Hello, Azure Runbooks!"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you have entered the code, select the &lt;strong&gt;Save&lt;/strong&gt; button, then the &lt;strong&gt;Publish&lt;/strong&gt; button. You will receive a warning that publishing the runbook will overwrite the existing published version. Select &lt;strong&gt;Yes&lt;/strong&gt; to this prompt. Once the runbook is successfully published, the Azure portal will redirect to the &lt;strong&gt;Overview&lt;/strong&gt; page.&lt;/p&gt;

&lt;h2&gt;
  
  
  Execute the Runbook
&lt;/h2&gt;

&lt;p&gt;With the runbook created and published, you can now execute the runbook and view the output. From the &lt;strong&gt;Overview&lt;/strong&gt; page, select the &lt;strong&gt;Start&lt;/strong&gt; icon. It will prompt you to confirm that you want to start the runbook; select &lt;strong&gt;Yes&lt;/strong&gt;. Once Azure begins executing the runbook, the portal will redirect to the overview page for this runbook job instance. Here, you can view the instance ID, the status, and the input and output streams of the runbook. From here, select the &lt;strong&gt;Output&lt;/strong&gt; tab to view the "Hello, Azure Runbooks!" message written to the console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b7RHMTio--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cq9pny8xpjdoz3l9nhjl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b7RHMTio--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cq9pny8xpjdoz3l9nhjl.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Add Warning and Error Output
&lt;/h2&gt;

&lt;p&gt;In the above example, the PowerShell runbook output the string "Hello, Azure Runbooks!". You can also use the cmdlets &lt;em&gt;Write-Warning&lt;/em&gt; and &lt;em&gt;Write-Error&lt;/em&gt; to output warning and error messages to the console logs.&lt;/p&gt;

&lt;p&gt;Back on the runbook &lt;strong&gt;Overview&lt;/strong&gt; page, select the Edit icon at the top to go back to the &lt;strong&gt;Edit PowerShell Runbook&lt;/strong&gt; page with the existing code. In the code editor, add the following lines of code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;Write-Warning&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Message&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"This is the warning message."&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;Write-Error&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Message&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"This is the error message."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sEcrTDPl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e1l4qxch3hj2wemog68j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sEcrTDPl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e1l4qxch3hj2wemog68j.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save and publish this version of the runbook. Back on the &lt;strong&gt;Overview&lt;/strong&gt; page for the runbook, select the &lt;strong&gt;Start&lt;/strong&gt; icon to execute the runbook again, just as you did previously. In the job results window, select the &lt;strong&gt;Errors&lt;/strong&gt; and &lt;strong&gt;Warnings&lt;/strong&gt; tabs to view the custom messages output from the script. You can also select &lt;strong&gt;All Logs&lt;/strong&gt; to view all output from the script in one place.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--71ovBWJF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vyu1n2bi8vkwrqsshybc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--71ovBWJF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vyu1n2bi8vkwrqsshybc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Op-SKgRq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n3ikulrqc5snbdywzn1z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Op-SKgRq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n3ikulrqc5snbdywzn1z.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using the appropriate PowerShell cmdlets, you can create a runbook that shows regular, warning, and error messages. Customizing the output this way helps you quickly ascertain whether the script has any issues, since warnings and errors are displayed in their correct streams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enhance Runbook Execution with Parameters
&lt;/h2&gt;

&lt;p&gt;PowerShell parameters enable passing information to a script to use during execution. Parameters make PowerShell scripts more dynamic than relying on hard-coded static values. Azure Automation PowerShell runbooks can also use parameters when you define them in the script code. Let's examine this functionality now.&lt;/p&gt;

&lt;p&gt;Follow the instructions from earlier in this post to edit the runbook code. In the code editor, add a parameter that will accept a name to display in the greeting message. Since this parameter is not mandatory, I'm defining it with a default value of "CloudSkills." Here is the new runbook code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="kr"&gt;param&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Parameter&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nv"&gt;$Name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CloudSkills"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="s2"&gt;"Hello, &lt;/span&gt;&lt;span class="nv"&gt;$Name&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After saving and publishing the runbook, execute the runbook just as you have done previously. However, this time Azure will display a &lt;strong&gt;Start Runbook&lt;/strong&gt; window to allow you to input a value for the &lt;em&gt;Name&lt;/em&gt; parameter. It also indicates that "CloudSkills" will be used as the default if you don't enter another value. Enter your name and select &lt;strong&gt;OK&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cCyycm6e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3oldf6n4c94ubw4vw7pf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cCyycm6e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3oldf6n4c94ubw4vw7pf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the job results window, the &lt;strong&gt;Input&lt;/strong&gt; tab will show the value of the &lt;em&gt;Name&lt;/em&gt; parameter passed to the runbook, and the &lt;strong&gt;Output&lt;/strong&gt; tab will show the output message using the value of the parameter.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Variable Asset
&lt;/h2&gt;

&lt;p&gt;While you can define variables within the runbook code, you can also define variables within the Automation Account to be used by multiple runbooks. Back in the Automation Account, navigate to &lt;strong&gt;Shared Resources&lt;/strong&gt; &amp;gt; &lt;strong&gt;Variables&lt;/strong&gt;. From here, select &lt;strong&gt;Add a variable&lt;/strong&gt;. In the &lt;strong&gt;New Variable&lt;/strong&gt; window, enter a variable name, description, data type, value, and encryption type. Once completed, select &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jqu1ZP2i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sr6vwtsz2fami3sh52ka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jqu1ZP2i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sr6vwtsz2fami3sh52ka.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the variable created, you can now reference it in a runbook. Navigate back to the runbook and edit the code. Inside the script, you can retrieve the variable's value by using the &lt;strong&gt;Get-AutomationVariable&lt;/strong&gt; cmdlet with the variable name and storing the result in a script-level variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$congratsMessage&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Get-AutomationVariable&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'congratsMessage'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now use the defined variable as needed in the script. If you don't remember the name of a variable asset you defined in the Automation Account, you can view variable assets under &lt;strong&gt;Assets&lt;/strong&gt; and use the context menu to auto-generate the PowerShell command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XqSmugMm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2dfcfes0hnzghv4ay2k1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XqSmugMm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2dfcfes0hnzghv4ay2k1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save, publish, and then execute the runbook to verify the variable output in the job results window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GCvPrPs3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nxoi8y0iqwt6f2c5yksu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GCvPrPs3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nxoi8y0iqwt6f2c5yksu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, you learned how to create an Azure Automation Account to host your first PowerShell runbook. You saw how to view the runbook output and use &lt;em&gt;Write-Warning&lt;/em&gt; and &lt;em&gt;Write-Error&lt;/em&gt; to customize the output. Finally, you extended the functionality of your runbook with parameters and variable assets.&lt;/p&gt;

&lt;p&gt;Check back soon for my next post on Azure Automation where I will show how to configure authentication to access and manage Azure resources.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Jeff Brown&lt;/strong&gt; is a Systems Engineer and Cloud Administrator with over a decade of experience in server and application administration. In his career, he has managed a wide range of technologies including Windows Server, Exchange Server, Skype for Business, Azure, and Microsoft 365. Jeff enjoys writing about technology topics and creating content for the community. You can find more of his content over at &lt;a href="https://jeffbrown.tech/"&gt;jeffbrown.tech&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
    </item>
    <item>
      <title>Windows Virtual Desktop with Travis Roberts</title>
      <dc:creator>TeamCloudSkills</dc:creator>
      <pubDate>Tue, 02 Feb 2021 10:55:17 +0000</pubDate>
      <link>https://forem.com/cloudskills/windows-virtual-desktop-with-travis-roberts-2pc1</link>
      <guid>https://forem.com/cloudskills/windows-virtual-desktop-with-travis-roberts-2pc1</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/zIeEZEN9rHU"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;In this episode we're catching up with Travis Roberts to chat about all things Windows Virtual Desktop, Azure, and a lot more.&lt;/p&gt;

&lt;p&gt;In part due to the COVID-19 Pandemic, 2020 saw the use of Windows Virtual Desktop soar. Many companies are in the process of switching their networks over to the Cloud. Mike’s guest today is Travis Roberts, the Senior Networking Systems Administrator at RBA Consulting. Travis has a background of working in corporate IT for over 20 years, but he made a switch when he realized he wanted to focus less on management and more on consulting. He has lots of experience with Windows Virtual Desktop (and more) and creates content online to help others learn how to better utilize this technology.&lt;/p&gt;

&lt;p&gt;In this episode, we talk about…&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Travis’s background, how he got into consulting, and what the transition was like coming to that from corporate jobs.&lt;/li&gt;
&lt;li&gt;How Azure has been ramping up and is currently neck and neck with AWS.&lt;/li&gt;
&lt;li&gt;Why, at this point, most Systems Administrators are essentially Cloud Developers, as well.&lt;/li&gt;
&lt;li&gt;How you have to think a bit differently when using ARM templates and PowerShell.&lt;/li&gt;
&lt;li&gt;Why moving to the Cloud can save a business a lot of overhead expenses.&lt;/li&gt;
&lt;li&gt;The mindset shift involved in maintaining a server rack with manual switches and buttons, versus working within Azure.&lt;/li&gt;
&lt;li&gt;Why setting up a virtual network within Windows Virtual Desktop is so much faster than the traditional means.&lt;/li&gt;
&lt;li&gt;Why you need to continuously learn given the way technology is shifting now, and how learning it sooner means you can help others as they shift to the Cloud during the next decade.&lt;/li&gt;
&lt;li&gt;How building a brand can help you, and if you’re interested, you should start writing a blog or making YouTube videos.&lt;/li&gt;
&lt;li&gt;The importance of Source Control and how Travis has been using GitHub.&lt;/li&gt;
&lt;li&gt;The switch from using JSON to using HCL.&lt;/li&gt;
&lt;li&gt;How Travis has been seeing that he’s been needing to learn more about Continuous Integration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Resources from this episode:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Travis on &lt;a href="https://twitter.com/Ciraltos"&gt;Twitter&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Travis on &lt;a href="https://www.youtube.com/channel/UCuB24cID6NnypDWSLe4gfqA"&gt;YouTube&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Travis on &lt;a href="https://www.linkedin.com/in/robertst/"&gt;LinkedIn&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Udemy Course Discount: &lt;a href="https://www.udemy.com/course/zero-to-hero-with-windows-virtual-desktop/?couponCode=C1A569972876FF39D337"&gt;Zero to Hero with Windows Virtual Desktop&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>devops</category>
    </item>
    <item>
      <title>AWS for Azure Pros: The Ultimate AWS to Azure Service Comparison</title>
      <dc:creator>TeamCloudSkills</dc:creator>
      <pubDate>Tue, 02 Feb 2021 10:52:28 +0000</pubDate>
      <link>https://forem.com/cloudskills/aws-for-azure-pros-the-ultimate-aws-to-azure-service-comparison-3o69</link>
      <guid>https://forem.com/cloudskills/aws-for-azure-pros-the-ultimate-aws-to-azure-service-comparison-3o69</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/m94tlNJCjSI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Already know Azure but need to learn AWS? In this CloudSkills Community Call replay, MSFT MVP Mike Pfeiffer delivers a one-hour training that maps what you already know in Azure to comparable services in AWS.&lt;/p&gt;

&lt;p&gt;Note: Throughout the video Mike mentions an upcoming AWS class at CloudSkills.io; it ended up becoming the Cloud Native DevOps Bootcamp:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloudskills.io/cloudnative"&gt;https://cloudskills.io/cloudnative&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>azure</category>
    </item>
    <item>
      <title>Cloud Career Tips &amp; Strategy for 2021</title>
      <dc:creator>Mike Pfeiffer</dc:creator>
      <pubDate>Sat, 23 Jan 2021 11:09:26 +0000</pubDate>
      <link>https://forem.com/cloudskills/cloud-career-tips-strategy-for-2021-2oge</link>
      <guid>https://forem.com/cloudskills/cloud-career-tips-strategy-for-2021-2oge</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/bJA-CZCJths"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;In this episode I’m sharing my best career tips &amp;amp; strategy for 2021. What should you focus on? What do you need to think about to start building a successful career in the cloud? Those are some of the questions we answer in this episode, along with Q&amp;amp;A with a live audience.&lt;/p&gt;

&lt;p&gt;In this episode, we talk about…&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2021 industry predictions from Forrester &amp;amp; Gartner&lt;/li&gt;
&lt;li&gt;How to become multi-dimensional as an engineer&lt;/li&gt;
&lt;li&gt;Soft skills you need to be building, in addition to hard skills&lt;/li&gt;
&lt;li&gt;How to run experiments to gather data&lt;/li&gt;
&lt;li&gt;How to get more hands-on experience&lt;/li&gt;
&lt;li&gt;Q&amp;amp;A with a live audience&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>career</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Getting Started with Terraform on Azure: Tips and Tricks</title>
      <dc:creator>Luke Orellana</dc:creator>
      <pubDate>Sun, 16 Aug 2020 19:46:27 +0000</pubDate>
      <link>https://forem.com/cloudskills/getting-started-with-terraform-on-azure-tips-and-tricks-4afo</link>
      <guid>https://forem.com/cloudskills/getting-started-with-terraform-on-azure-tips-and-tricks-4afo</guid>
      <description>&lt;h1&gt;
  
  
  Table Of Contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Modules&lt;/li&gt;
&lt;li&gt;Remote State&lt;/li&gt;
&lt;li&gt;Source Control&lt;/li&gt;
&lt;li&gt;Keeping Designs Simple and Reusable&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Infrastructure development is complex, and there can be many hoops to jump through. Making changes to live infrastructure code always involves some risk and can feel like a game of Jenga. While Terraform is relatively new (initial release in 2014), several proven practices are known in the Terraform community that help deal with these hurdles and complexities. Understanding the trial and error of those who used Terraform early on allows us to learn from them and be more efficient when we are just starting. This knowledge increases the chance of success in implementing and using Terraform.&lt;/p&gt;

&lt;p&gt;In this guide, we will review some practical tips and tricks to be mindful of when developing with Terraform. Although these are community-proven practices, keep in mind that there is more than one way to do something, and what works for the community may not be the best and most efficient way for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modules &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In the software development world, we break up reusable segments of our code into parameterized functions and reuse them. This practice allows us to write tests for these functions and maintain them. In Terraform, we use modules in the same manner. We make templates of infrastructure and convert them into modules, which allows the code in each module to be reusable, maintainable, and testable. &lt;/p&gt;

&lt;p&gt;Do not create Terraform configurations that are thousands of lines of code. It reduces code quality and clarity when debugging or making changes. Splitting up your infrastructure code into modules will also prevent you from copying and pasting code between environments, which can introduce many errors. &lt;/p&gt;
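&lt;p&gt;For example, a module call looks much like a parameterized function call. The module path and input names below are illustrative; the pattern is what matters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Call a reusable local module with environment-specific inputs
module "network" {
  source = "./modules/network"

  vnet_name = "vnet-dev"
  rgname    = azurerm_resource_group.rg.name
  location  = azurerm_resource_group.rg.location
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;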

&lt;h3&gt;
  
  
  Version Providers and Modules
&lt;/h3&gt;

&lt;p&gt;The Azure Terraform provider is changing extremely fast. Check out the &lt;a href="https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/CHANGELOG.md"&gt;change log&lt;/a&gt; for the Azure provider. The number of changes made every month is significant, and breaking changes appear in many updates. To guard against this, pin your provider version and save yourself the headache:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "azurerm" {
  version = "1.38.0"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Additionally, version your modules, especially ones from the Terraform Registry. If you're developing private modules, version those as well. Versioning modules allows for introducing module changes without affecting the infrastructure that is currently using them. For example, let's say my current environment uses version 1.1 of my server module, which is stored in a repo and tagged with v1.1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Create Server
module "server" {
  source    = "git::https://allanore@dev.azure.com/allanore/terraform-azure-server/_git/terraform-azure-server?ref=v1.1"

  servername    = "myserver123"
  rgname    = azurerm_resource_group.rg.name
  location  = azurerm_resource_group.rg.location
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Suppose I need to add a new feature to the module that includes Private Link functionality. In this case, I can tag the new module version as v1.2 and source that specific version, safely deploying infrastructure that uses the new feature without affecting anything still pinned to version 1.1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Create Server
module "server" {
  source    = "git::https://allanore@dev.azure.com/allanore/terraform-azure-server/_git/terraform-azure-server?ref=v1.2"

  servername    = "myserver211"
  rgname    = azurerm_resource_group.rg.name
  location  = azurerm_resource_group.rg.location
  private_link = true
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Using versioning for both providers and modules is a must in Terraform, and you will quickly find out why if you're not using them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Dependency Injections
&lt;/h3&gt;

&lt;p&gt;When reusing modules throughout different environments, some environments may contain required components that already exist. For example, if I write a module that requires a storage account for the service that it's deploying, there may be some environments where this storage account already exists. This scenario may cause some people to attempt to write logic into their code to check if a resource exists or not and perform X action if it does. Introducing complex logic like this is not in line with the declarative methodology that Terraform uses. The resource either exists or not. &lt;/p&gt;

&lt;p&gt;Instead, use dependency injection. Create the module to accept input from resources that either already exist or are created elsewhere in the configuration.&lt;br&gt;
Take a look at the code below, for example. We have a Network Security Group module that requires a subnet ID to associate the NSG with a subnet. In this example, we are creating the subnet within the same configuration and passing it along. The subnet does not already exist, so we create one and assign it to the NSG:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Subnet is directly managed in the Terraform configuration

resource "azurerm_subnet" "snet" {
  name                 = "snet-cloudapp-1"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefix       = "10.0.2.0/24"
}

module "nsg" {
  source                = "./modules/nsg"
  resource_group_name   = azurerm_resource_group.rg.name
  nsg_name              = "nsg-${var.system}-httpsallow"
  source_address_prefix = ["VirtualNetwork"]
  predefined_rules = [
    {
      name     = "HTTPS"
      priority = "500"
    },
    {
      name     = "RDP"
      priority = "501"
    }
  ]

  subnet_id = azurerm_subnet.snet.id
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Alternatively, we have another environment where a subnet already exists. We would use the &lt;code&gt;azurerm_subnet&lt;/code&gt; data source to collect the subnet id information and pass it through to our module using &lt;code&gt;data.azurerm_subnet.snet.id&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Subnet already exists and is called with a data source block

data "azurerm_subnet" "snet" {
  name                 = "snet-cloudapp-1"
  virtual_network_name = "production"
  resource_group_name  = "rg-networking"
}

module "nsg" {
  source                = "../../modules/nsg"
  resource_group_name   = azurerm_resource_group.rg.name
  nsg_name              = "nsg-${var.system}-httpsallow"
  source_address_prefix = ["VirtualNetwork"]
  predefined_rules = [
    {
      name     = "HTTPS"
      priority = "500"
    },
    {
      name     = "RDP"
      priority = "501"
    }
  ]

  subnet_id = data.azurerm_subnet.snet.id
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We are not hard coding logic into the module to check for an existing subnet in these two examples. Instead, we take the declarative approach that Terraform is designed for and state in our configuration if it already exists or if it doesn't. Our module can now be reusable in different situations, and we are not complicating the module. We also have better visibility in the module code. Another co-worker on the team can look at the module and get a clear distinction between the two environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Remote State &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Try to use &lt;a href="https://cloudskills.io/blog/terraform-azure-04"&gt;remote state&lt;/a&gt; as soon as possible in your Terraform development. It can save many headaches later on, especially when multiple people become involved with deploying and managing the same Terraform code. &lt;/p&gt;

&lt;p&gt;Using remote state allows us to secure sensitive variables in our configurations. The Terraform state file is not encrypted, so keeping it on a local workstation may quickly become a security issue. Also, don't make a habit of storing Terraform state files in source control. It increases the chance of exposing sensitive variables, especially if the repository is public. Instead, use a &lt;a href="https://git-scm.com/docs/gitignore"&gt;gitignore&lt;/a&gt; file to prevent any .tfstate files from accidentally being committed.&lt;/p&gt;
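&lt;p&gt;For Azure, the &lt;em&gt;azurerm&lt;/em&gt; backend stores state in a storage account blob container. Below is a sketch of the backend configuration; the resource group, storage account, and container names are placeholders for your own:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Placeholder names; point these at your own storage account
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"
    storage_account_name = "terraformstate123"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;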

&lt;h3&gt;
  
  
  Split Up Terraform States
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.slideshare.net/opencredo/hashidays-london-2017-evolving-your-infrastructure-with-terraform-by-nicki-watt"&gt;Terraservices&lt;/a&gt; is a popular term coined a few years ago which involves splitting up Terraform state into different environments to reduce the blast radius on changes made. You don't want to keep all your eggs in one basket. Let's say a team member makes a change to resize a VM. They end up fat fingering the resource group name, and their pipeline workflow auto applies the incorrect change. Terraform rebuilds the resource group and deletes all items causing catastrophic failures to the environment. This situation is not uncommon. &lt;/p&gt;

&lt;p&gt;Also, be aware that your Terraform plan becomes longer and longer if you don't split up a reasonably large environment into separate states. This introduces a new type of risk: the plan takes longer to run and becomes harder to read as more resources are affected by each change, which means unwanted changes can easily be missed. It's easier to catch a mistake in a few lines of code versus 10,000 lines.&lt;/p&gt;

&lt;p&gt;Ideally, you want to separate high-risk components from components that are changed and modified routinely. Below is a Terraform project folder structure inspired by &lt;a href="https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa"&gt;Gruntwork's recommended setup&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;prod
  └ rg
    └ main.tf
    └ variables.tf
    └ output.tf
  └ networking
  └ services
      └ frontend-app
          └ main.tf
          └ variables.tf
          └ output.tf
      └ backend-app
  └ data-storage
      └ sql
      └ redis
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In the folder structure above, each folder separates out the Terraform states. The resource group has its own state, limiting the risk of daily changes made to the resource group. Services like SQL and Redis are also separated to reduce the risk of accidentally modifying the databases on any change. Splitting up environment states like this reduces a lot of risk. However, it adds complexity to the infrastructure code: we now have to design ways to feed information between states and deal with dependencies, and Terraform doesn't currently offer an easy way to manage this. Tools like &lt;a href="https://terragrunt.gruntwork.io/"&gt;Terragrunt&lt;/a&gt;, developed by Gruntwork, address the complexities of splitting up Terraform state.&lt;/p&gt;
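
&lt;p&gt;For example, one state can read another state's outputs through a &lt;code&gt;terraform_remote_state&lt;/code&gt; data source. The sketch below assumes an &lt;code&gt;azurerm&lt;/code&gt; backend and an output named &lt;code&gt;rg_name&lt;/code&gt; exported by the resource group's state; all names are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Read the resource group state stored in an Azure storage account (names are hypothetical)
data "terraform_remote_state" "rg" {
  backend = "azurerm"
  config = {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstate"
    container_name       = "tfstate"
    key                  = "prod.rg.tfstate"
  }
}

# Consume an output from that state instead of duplicating the value here
resource "azurerm_virtual_network" "vnet" {
  name                = "vnet-prod"
  address_space       = ["10.0.0.0/16"]
  location            = "northcentralus"
  resource_group_name = data.terraform_remote_state.rg.outputs.rg_name
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;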

&lt;h2&gt;
  
  
  Source Control &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Terraform and source control go together hand in hand. If you're not storing your Terraform code in source control, you're missing out on the following benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Change Tracking&lt;/strong&gt;: The historical data of all infrastructure changes is extremely powerful and a great bonus for auditors or compliance requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rollback&lt;/strong&gt;: One of the major benefits of infrastructure as code is the ability to "rollback" to the previous configuration or state. However, depending on the environment, that rollback may mean a rebuild. Storing the Terraform configuration in source control makes it easier to re-deploy a pre-existing working state of the environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaboration Among Teams&lt;/strong&gt;: Most source control tools like Azure DevOps, GitHub, or Bitbucket provide a form of access control. This role-based access allows separate teams to manage their own infrastructure code or grant read-only access to other teams for increased visibility into how the environment works. No more guessing whether a firewall port is open; look at the code and see.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation&lt;/strong&gt;: There are many CI/CD tools available that hook into source control. These tools amplify the development and deployment of Terraform.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There is also the concept of &lt;em&gt;GitOps&lt;/em&gt;, where processes are automated through Git workflows like submitting a pull request. There are community tools out there like &lt;a href="https://www.runatlantis.io/"&gt;Atlantis&lt;/a&gt; that are amazing for GitOps with Terraform and can increase efficiency among teams.&lt;/p&gt;

&lt;h3&gt;
  
  
  Structure Repo to Business Needs
&lt;/h3&gt;

&lt;p&gt;Designing the source control repo structure for infrastructure can be an intimidating task, especially for those making the jump from a traditional systems engineer role to an infrastructure developer role. There are various strategies for storing Terraform code. Some companies put all their Terraform configurations into a single repository, while others store configurations alongside each project's application source code. So which one should you pick?&lt;/p&gt;

&lt;p&gt;The short answer is: it depends on your environment. Large environments will have a completely different setup than start-ups. Team structure also comes largely into play. Do you have one team that manages all the infrastructure, or do the developers and DevOps engineers manage the infrastructure for their own applications? You will see many DevOps experts and thought leaders in the community talk about &lt;a href="https://www.thoughtworks.com/insights/blog/applying-conways-law-improve-your-software-development"&gt;Conway's Law&lt;/a&gt;, which states that the communication structure of an organization limits the way it develops and designs software. This concept becomes pretty evident when implementing Terraform in your organization. Analyze how your teams are structured, and structure your Terraform configuration repos in a way that complements that structure.  &lt;/p&gt;

&lt;p&gt;Here are several common repo strategies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Single Repo:&lt;/strong&gt; All live infrastructure code is in one single repository managed by a governing team.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;One Repo Per Project:&lt;/strong&gt; Every application has its own Terraform folder, and code is stored in a folder of the application source code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;One Repo Per Environment:&lt;/strong&gt; Environments are split into their own repositories and managed by separate teams. For example, the code managing the company firewalls is in a separate repo owned by the security or networking team. This strategy allows each team to own and manage its infrastructure responsibilities while delegating limited permissions for other teams to request changes or view the environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Don't stress over getting your Terraform repo structure right when you're first starting out. It will most likely change several times due to business needs, scaling up, or finding a better solution for your environment. Starbucks changed its repo structure three times over several years and ended up settling &lt;a href="https://www.hashicorp.com/resources/terraform-at-starbucks-infrastructure-as-code-for-software-engineers/"&gt;on a repo-per-component&lt;/a&gt; strategy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Keep All Live Infrastructure in the Master Branch
&lt;/h3&gt;

&lt;p&gt;All live infrastructure changes should always stay in the master branch. Storing the same infrastructure code in multiple branches can cause conflicts and create headaches. For example, let's say a team member branches off of master and adjusts the Terraform configuration to change a VM's size. They make their change and deploy it, but don't merge their branch back into master because they are still making changes. A few minutes later, someone else modifies the same VM's tags in a different branch off of master that hasn't been updated with the new VM size. The change to the tags is deployed, and the VM reverts to its original size because that branch didn't contain the resize code. This is why it's important to make sure the master branch is always a live representation of the environment. &lt;/p&gt;

&lt;h3&gt;
  
  
  Execute Terraform Code Through a Pipeline
&lt;/h3&gt;

&lt;p&gt;When first starting with Terraform, it is typical for each infrastructure developer to manage the infrastructure by authenticating locally on their machine with the Azure provider (either with the Azure CLI or environment variables) and executing the code with their local install of Terraform. Long term, this causes headaches like inconsistent Terraform versions among developers. It's best to shift to deploying code with a pipeline: store Terraform configurations in source control and run a continuous integration process that executes the Terraform code on pull requests. A pipeline significantly increases automation capabilities and has a few advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform code is run on the same platform every time, reducing errors due to inconsistent dependencies like Terraform versions.&lt;/li&gt;
&lt;li&gt;Pipelines can introduce configuration error checking and Terraform policy enforcement, preventing insecure or destructive configuration changes from being made.&lt;/li&gt;
&lt;li&gt;Automated testing can run to perform regression tests against modules when a new change is made to the modules.&lt;/li&gt;
&lt;li&gt;Many pipeline tools provide some sort of secret store functionality that makes it easy to securely pass variables through to Terraform configurations.&lt;/li&gt;
&lt;/ul&gt;
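
&lt;p&gt;As a rough sketch, a CI pipeline for Terraform might run the following steps on each pull request (shown here in Azure DevOps YAML; triggers, step names, and authentication will vary by tool and environment):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical Azure DevOps pipeline: validate and plan on pull requests
trigger: none

pr:
  branches:
    include:
      - master

steps:
  - script: terraform init -input=false
    displayName: 'Terraform Init'
  - script: terraform validate
    displayName: 'Terraform Validate'
  - script: terraform plan -input=false -out=tfplan
    displayName: 'Terraform Plan'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Every run uses the same Terraform version installed on the build agent, and the plan output can be reviewed on the pull request before anything is applied.&lt;/p&gt;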

&lt;h2&gt;
  
  
  Keeping Designs Simple and Reusable &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;It's essential to keep the right balance between creating conditional logic and introducing too many complexities. For example, it may be useful to add logic into a networking module that automatically chooses the next available subnet space on a virtual network and creates a subnet. While this logic saves the user from specifying a subnet address when they use the module, it also adds complexity and can make the module more brittle. It may be better to design the module with an argument that takes the subnet address as input, requiring the user to calculate a subnet address beforehand. Both designs are trade-offs, each with its own pros and cons.&lt;/p&gt;
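
&lt;p&gt;As an illustration of the simpler, more explicit design, the module could just require the caller to pass the subnet address in as a variable (all names here are hypothetical, and the sketch assumes the module also declares &lt;code&gt;resource_group_name&lt;/code&gt; and &lt;code&gt;vnet_name&lt;/code&gt; variables):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The caller calculates the subnet range; the module stays simple and predictable
variable "subnet_address_prefix" {
  description = "Address prefix for the subnet, e.g. 10.0.1.0/24"
  type        = string
}

resource "azurerm_subnet" "subnet" {
  name                 = "snet-app"
  resource_group_name  = var.resource_group_name
  virtual_network_name = var.vnet_name
  address_prefixes     = [var.subnet_address_prefix]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;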

&lt;p&gt;A &lt;em&gt;code review&lt;/em&gt; is a software development practice where multiple developers check each other's code for mistakes. With infrastructure development, this is starting to become a more common practice.  Complex Terraform code will take away from the benefits of code reviews.  When peers cannot easily understand the code to review, errors can be easily missed.  &lt;/p&gt;

&lt;p&gt;Complex Terraform code will also make it harder to troubleshoot issues and onboard new people to the team. One of the benefits of IaC is the living documentation that it provides. Don't put in logic that makes infrastructure code too complex to use for documentation. &lt;/p&gt;

&lt;p&gt;Connecting inputs and outputs between modules and states can introduce many complexities and can grow to become a dependency nightmare. When passing data between modules or state files, be mindful of the purpose and limit the dependencies involved in your design. &lt;/p&gt;

&lt;h3&gt;
  
  
  Use Provisioners Sparingly
&lt;/h3&gt;

&lt;p&gt;Most provisioners introduce platform or network constraints into our Terraform code. For example, using a provisioner to SSH into a server once it's provisioned and run a script will now require the node executing the Terraform code to have network access to the VM during deployment.&lt;/p&gt;

&lt;p&gt;Instead, take advantage of Azure's custom script extension for VMs to &lt;a href="https://github.com/CloudSkills/Terraform-Projects/blob/master/10-Advanced-HCL/6.%20Dynamic_blocks/main.tf"&gt;pass a script through to the VM&lt;/a&gt; without any network constraints. &lt;/p&gt;
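
&lt;p&gt;As a sketch, the custom script extension can be attached to a VM with the &lt;code&gt;azurerm_virtual_machine_extension&lt;/code&gt; resource; the VM reference and the script name below are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Azure runs the script on the VM itself, so the machine executing
# Terraform needs no network path to the VM
resource "azurerm_virtual_machine_extension" "setup" {
  name                 = "setup-script"
  virtual_machine_id   = azurerm_windows_virtual_machine.vm.id
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.10"

  settings = jsonencode({
    commandToExecute = "powershell -ExecutionPolicy Unrestricted -File configure.ps1"
  })
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;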

&lt;p&gt;The goal is to create infrastructure code that you can execute from anywhere. Aim to achieve this as much as possible to give your design even more reusability. &lt;/p&gt;

&lt;h3&gt;
  
  
  Use Terraform Graph to Troubleshoot Dependency Issues
&lt;/h3&gt;

&lt;p&gt;During Terraform development, you may run into resource timing errors where a resource is deployed but relies on another resource that hasn't completed provisioning yet; perhaps a disk or storage account provisions too quickly half of the time, or a subnet isn't deployed before its network interface. Typically this is due to a dependency issue in the configuration and is usually solved using interpolation between the proper resources or a &lt;code&gt;depends_on&lt;/code&gt; argument. However, these issues can be difficult to track down. Running &lt;code&gt;terraform graph&lt;/code&gt; against a configuration directory produces output in DOT format, which you can copy and paste into a website like &lt;a href="http://www.webgraphviz.com/"&gt;WebGraphViz&lt;/a&gt; to generate a visual representation of the configuration's dependencies and help troubleshoot. &lt;/p&gt;
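
&lt;p&gt;For example, run the command from the configuration directory and redirect the DOT output to a file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Produce a DOT-format dependency graph for the current configuration
terraform graph &gt; graph.dot
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;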

&lt;h3&gt;
  
  
  Use the Terraform Registry
&lt;/h3&gt;

&lt;p&gt;Especially when first starting out, don't try to reinvent the wheel. The &lt;a href="https://cloud.google.com/devops/state-of-devops"&gt;State of DevOps report&lt;/a&gt; shows that highly efficient teams reuse other people's code. There are many Azure modules already created on the &lt;a href="https://registry.terraform.io/"&gt;Terraform Registry&lt;/a&gt;. If you need to deploy a specific Azure service, take the time to search the registry and see if a module has already been created for it. If the modules in the registry don't meet your needs, you can fork them and customize them to your own requirements.&lt;/p&gt;
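
&lt;p&gt;For example, a registry module is consumed directly by its registry path. The sketch below uses the community Azure network module; the arguments and version shown are illustrative, so check the module's registry page for its actual inputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Pin a version so upstream module changes don't surprise you
module "network" {
  source              = "Azure/network/azurerm"
  version             = "~&gt; 3.0"
  resource_group_name = "rg-myrg"
  address_space       = "10.0.0.0/16"
  subnet_prefixes     = ["10.0.1.0/24"]
  subnet_names        = ["subnet1"]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;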

&lt;h2&gt;
  
  
  Conclusion &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;When getting started with Terraform, don't try to do everything all at once. Start small and try to make minor improvements to your infrastructure little by little. Only focus on making one quality change at a time, instead of building one big massive project from the start with pipelines, modules, tests, and remote state storage.  In the end, you will achieve faster results and create a higher quality design overall.&lt;/p&gt;

&lt;p&gt;Also, keep in mind that every environment is different. Not all of these tips will fit every Terraform use case. For example, if your environment is very simple and extremely small, it may not be worth it to split up the Terraform state files. Having good judgment and design for your infrastructure code comes into play. Learn the different concepts in the community and explore how other people are using Terraform, and then do what works best for your environment. &lt;/p&gt;

&lt;p&gt;Infrastructure as code has not yet reached maturity and has yet to become the standard way of operating for most companies. Over the years, research has shown that companies adopting infrastructure as code operate at significantly higher speeds than those still running on traditional methods, which makes skillsets in tools like Terraform highly in demand. Taking the time to learn it is well worth it. Terraform is still young, and the ecosystem will continue to evolve and improve each year. Enjoy the creativity, and embrace the complexity and learning that come with infrastructure development.  &lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Getting Started with Terraform on Azure: Testing</title>
      <dc:creator>Luke Orellana</dc:creator>
      <pubDate>Sun, 16 Aug 2020 19:41:30 +0000</pubDate>
      <link>https://forem.com/cloudskills/getting-started-with-terraform-on-azure-testing-1a16</link>
      <guid>https://forem.com/cloudskills/getting-started-with-terraform-on-azure-testing-1a16</guid>
      <description>&lt;h1&gt;
  
  
  Table Of Contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Step 1 — Module Repo Folder Structure&lt;/li&gt;
&lt;li&gt;Step 2 — Types of Testing&lt;/li&gt;
&lt;li&gt;Step 3 — Static Analysis&lt;/li&gt;
&lt;li&gt;Step 4 — Unit Testing&lt;/li&gt;
&lt;li&gt;Step 5 — Integration Testing&lt;/li&gt;
&lt;li&gt;Step 6 — End-to-End Testing&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In software development, testing code is a common practice. Software developers need to be able to validate parts of their code to ensure it is working in an automated fashion. The realm of infrastructure development takes software best practices to create and manage infrastructure code, and with that comes testing. You should be writing tests for your Terraform code.&lt;/p&gt;

&lt;p&gt;Writing tests for Terraform provides many benefits that make life easier as an infrastructure developer. Automated tests provide a faster feedback loop. No more making a change to a Terraform configuration, manually running &lt;code&gt;terraform apply&lt;/code&gt;, and then checking the Azure portal to ensure that the change is there. Instead, we can use tools like Terratest to perform these steps for us, testing our modules much faster than we could manually. Not only do we get faster feedback, but also fewer bugs. Automating tests for every possible scenario in a Terraform configuration provides better code coverage and catches bugs much more quickly. Tests give us increased confidence in our changes and greater predictability: we can accurately verify that our code deploys what we designed it to deploy without destroying other resources.&lt;/p&gt;

&lt;p&gt;One common misconception is that because Terraform is declarative, we don't need to write tests for it. We are already declaring the resource that needs to exist. If there is an issue with provisioning that resource, Terraform automatically provides the error. This is a valid point, which is why when we talk about testing our Terraform code, we want to write tests for sanity checks, conditional logic, and variable outcomes in our code. For example, writing a test for the following code to create a resource group would add minimal benefit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_resource_group" "rg" {
    name     = "rg-myrg"
    location = var.location
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If we wrote a check to ensure that the resource group deploys to the proper location, it would be redundant.  The resource group is automatically built in the location we specify, and if provisioning isn't successful, Terraform natively raises an error. Writing hundreds of tests in this way could become a maintenance burden, since we would have to continually modify the tests to keep up with changes to the module.&lt;/p&gt;

&lt;p&gt;On the other hand, writing a test for the example below would be more beneficial since it contains more variance in the outcome and could potentially become altered when changes are made to the module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_resource_group" "rg" {
    name     = "rg-myrg"
    location = var.environment != "Production" ? "southcentralus" : "northcentralus"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In this guide, we are going to clone a module repository and walk through writing and running tests for it. &lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Before you begin, you'll need to set up the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://azure.microsoft.com/en-us/"&gt;Azure subscription&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since we will be writing tests in Go, we will need an environment set up to write them. We are going to use Visual Studio Codespaces to walk through this guide. Visual Studio Codespaces is an online version of Visual Studio Code. It allows us to automatically deploy an environment with a code repository so we can dive into creating tests with minimal setup.  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; We will be using the basic sized environment for our Visual Studio Codespace. The compute for this service will be hosted in your Azure tenant, so there will be some charges associated. The average cost for a basic environment is around $0.08 per hour. Also, environments can be in a "suspended" and "active" state, which reduces pricing even further. For more information on VSC billing, check out the &lt;a href="https://azure.microsoft.com/en-us/pricing/details/visual-studio-online/"&gt;pricing page&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To create the Codespaces environment, navigate to the &lt;a href="https://online.visualstudio.com/login"&gt;Visual Studio Codespaces&lt;/a&gt; login page and sign in with your Azure credentials. Make sure to use Google Chrome, as Firefox and Edge are not yet supported. Once logged in, we will be presented with the page below. Select &lt;strong&gt;Create Codespace&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s8XoFekf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e8iflkiicmiu4zzt7jav.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s8XoFekf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e8iflkiicmiu4zzt7jav.PNG" alt="NoCodespace"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we need to create a billing plan. The billing plan connects the Visual Studio Codespace environment to our Azure subscription. Select a location that makes sense for you. Also, you can input a plan name according to your Azure naming standards. The plan name is the name of the Azure resource that deploys to your subscription. Next, we need to specify the resource group to host the Visual Studio Codespace resource. When done configuring these settings select &lt;strong&gt;Create&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uV2L78z_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5yxpl427jsoikmzh6z85.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uV2L78z_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5yxpl427jsoikmzh6z85.PNG" alt="CreateBillingPlan"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can set up our Codespace; this is the Visual Studio environment. We could have multiple Codespaces in our plan if we wanted to. Input a &lt;strong&gt;Codespace Name&lt;/strong&gt; for the Codespace. Under &lt;strong&gt;Git Repository&lt;/strong&gt; paste in the following GitHub repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://github.com/allanore/terraform-azure-testing
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The GitHub repo contains the code for the Terraform module that we will create tests for in this guide. There is also a &lt;a href="https://github.com/allanore/terraform-azure-testing/blob/master/.devcontainer/post-create.sh"&gt;post-create.sh&lt;/a&gt; script in this repo that automatically installs the tools we want in our environment, like Go and Terraform. Under &lt;strong&gt;Instance Type&lt;/strong&gt;, select the Basic type. Finally, select &lt;strong&gt;Create&lt;/strong&gt; to build out our Codespaces environment:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GPZ4UYOV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ottbywwyl6uqcxwb5tal.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GPZ4UYOV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ottbywwyl6uqcxwb5tal.PNG" alt="CreateCodespace"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We should see our environment starting to build while the post-create.sh script automatically executes and installs our tools and extensions. We are installing Go, Terraform, and a few other tools:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You may need to refresh your browser at some point to get this screen to show up.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b8bgYzsj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/76av2taorsppxkbtklgl.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b8bgYzsj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/76av2taorsppxkbtklgl.PNG" alt="BuildingCodespace"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the &lt;strong&gt;Configure Codespace&lt;/strong&gt; section shows complete, we are ready to move on to  reviewing the folder structure of this repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0bY_2JO4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x3kifirlyxj95cyndv2k.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0bY_2JO4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x3kifirlyxj95cyndv2k.PNG" alt="ConfigureCodespace"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1 — Module Repo Folder Structure &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Before we can write our tests, we need to know a little more about what this module does and the structure of the repo. The function of this Terraform module is to deploy a virtual network. There are also submodules for creating a network security group and virtual network peer. In Visual Studio Codespaces, you should see the following folder structure on the left-hand side:&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--medDuIOH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o19m7874eb4m8sgy4g1i.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--medDuIOH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o19m7874eb4m8sgy4g1i.PNG" alt="RepoStructure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the root of this repo are several files. We have our &lt;code&gt;.gitignore&lt;/code&gt; file, which contains a list of file types, like .tfstate, that we don't want saved into source control. Next, we have our &lt;code&gt;.pre-commit-config.yaml&lt;/code&gt; file, which contains a list of tools to execute on each git commit; we will go deeper into this in a later step. Last, we have our Terraform configuration files &lt;code&gt;main.tf&lt;/code&gt;, &lt;code&gt;output.tf&lt;/code&gt;, and &lt;code&gt;variables.tf&lt;/code&gt;. These are the root Terraform files and perform the base function of our module, which is to create a virtual network and subnet.  &lt;/p&gt;

&lt;p&gt;Now that we've gone over the files in the root folder of this module, let's go over each folder so we have a better understanding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;.devcontainer&lt;/strong&gt; - This folder is only used to automate the Visual Studio Codespace environment. It does not affect our Terraform module.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;.azuredevops&lt;/strong&gt; - Contains our CI pipeline for Azure DevOps. We won't be going over Azure DevOps in this guide, but if you wanted to take this guide a step further and add this repository into an Azure DevOps pipeline, this is the yml file with all the steps needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;examples&lt;/strong&gt; - The examples folder serves two purposes. First, it serves as documentation on how to use the module. Second, it serves as Terraform code that we can use for testing. We can execute these examples in our test files to stand up resources that we can test against. There are two subfolders in the examples folder. Each one has code for performing different tasks of the module. The &lt;em&gt;network&lt;/em&gt; folder contains code for standing up a Virtual Network and Network Security Group. The &lt;em&gt;vnet-peering&lt;/em&gt; folder contains code for standing up two virtual networks and then peering them together. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;modules&lt;/strong&gt; - The modules folder contains our sub-modules or &lt;em&gt;helper modules&lt;/em&gt; that provide additional features like creating a Network Security Group or Virtual Network Peer. These are optional modules that provide more flexibility and options for those that are using the module. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;test&lt;/strong&gt; - This is where our test files will live.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is a best practice in Terraform to split up tasks or services into modules. We want to use our modules as building blocks to assemble infrastructure one piece at a time. From a developer's perspective, one might think of modules like functions, where each module performs the heavy lifting of a specific task:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X21WUKAW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/y00h5vmnjupdnlfl6cze.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X21WUKAW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/y00h5vmnjupdnlfl6cze.PNG" alt="UsingModules"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Having our infrastructure built in smaller units like modules allows us to write tests for them. You can't write a test for a Terraform configuration that is thousands of lines of code. This also provides us with a smaller blast radius when making changes to code. If I make a change to my web app module, I don't put the entire application at risk.&lt;/p&gt;

&lt;p&gt;Even though there is a smaller blast radius, there is still potential to destroy infrastructure when making changes to modules, which is why there needs to be a thorough automated way to test our modules and test them often. Next, we will go over the different types of tests we can write.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2 — Types of Testing &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;There are four basic types of testing. These terms are loosely categorized in the infrastructure as code world. We are still in the infant stages, and there hasn't been a standardized definition for each category. We will be going over the way that &lt;a href="https://terratest.gruntwork.io/docs/testing-best-practices/unit-integration-end-to-end-test/"&gt;Gruntwork defines testing in Terraform&lt;/a&gt;. Gruntwork is a company that provides Terraform modules to companies that desire to have IaC in their environment but don't have the skillset or time to create their own. They have years of experience developing infrastructure in Terraform and are also the creators of Terratest, which is the tool we will be using for some of our tests.  &lt;/p&gt;

&lt;p&gt;These are the four basic types of tests:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Static Analysis&lt;/strong&gt; - Testing code without running it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unit Testing&lt;/strong&gt; - Testing a single unit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration Testing&lt;/strong&gt; - Testing the functionality between two or more units.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;End-to-End Testing&lt;/strong&gt; - Testing an entire application infrastructure from the ground up. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is a diagram demonstrating the cost of each test. We want to run most of our tests at the bottom because that is the quickest and least costly to run. Each test will also catch different types of bugs. If we aim to catch most bugs with static analysis and unit tests, we will gain much more speed in development than if we are testing for the same bug in an end-to-end test. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AaktyE2F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6pnrfxvadr8rk634e5bs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AaktyE2F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6pnrfxvadr8rk634e5bs.png" alt="Testing"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3 — Static Analysis &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Static analysis tests involve testing our code without running it. We want a tool that can look at our code and flag bugs without executing anything. A great way to perform static analysis testing is with a tool called pre-commit. Pre-commit packages git hooks together so that tools run automatically before each commit. If we look at the contents of the &lt;code&gt;.pre-commit-config.yaml&lt;/code&gt; file in our repo, we can see it's configured to run several hooks. We can also see the repository URL, which contains the git hook scripts for these hooks. This lets us manage our git hooks and distribute them among team members in a much more manageable fashion. This pre-commit repo is from Gruntwork; they've done all the work of creating the hooks and bundling them into a pre-commit repository for community use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;repos:
  - repo: https://github.com/gruntwork-io/pre-commit
    rev: v0.1.4
    hooks:
      - id: terraform-fmt
      - id: terraform-validate
      - id: gofmt
      - id: golint
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In the &lt;code&gt;.pre-commit-config.yaml&lt;/code&gt;, there are two hooks for Terraform and two for Go. &lt;strong&gt;Terraform-fmt&lt;/strong&gt; is a hook that runs the &lt;code&gt;terraform fmt&lt;/code&gt; command, which automatically formats code with proper spacing. &lt;strong&gt;Terraform-validate&lt;/strong&gt; runs the &lt;code&gt;terraform validate&lt;/code&gt; command against our code to ensure it is syntactically correct, checking for mistakes like misplaced &lt;code&gt;{ }&lt;/code&gt; or invalid Terraform syntax.  &lt;/p&gt;

&lt;p&gt;Let's give these two hooks a test by modifying some Terraform code. Open the &lt;code&gt;variables.tf&lt;/code&gt; file in the root of the repo and add a description to the system variable. Let's purposely use the argument &lt;code&gt;descriptions&lt;/code&gt; instead of &lt;code&gt;description&lt;/code&gt; in our variable to catch the error in the pre-commit hook. Copy the snippet below and overwrite the content currently there for the &lt;code&gt;system&lt;/code&gt; variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "system" {
  type         = string
  descriptions = "Name of the system or environment"
  default      = "terratest"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Save the changes by entering &lt;strong&gt;CTRL + S&lt;/strong&gt; on the keyboard. Before pre-commit can run, we need to configure the hooks for this repository. Open up the terminal by entering &lt;strong&gt;CTRL + ~&lt;/strong&gt; on the keyboard and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pre-commit install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now let's try to commit our change we made to the system variable in the &lt;code&gt;variables.tf&lt;/code&gt; file. When we run our commit, the pre-commit hooks for &lt;code&gt;terraform fmt&lt;/code&gt; and &lt;code&gt;terraform validate&lt;/code&gt; are triggered:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add .
git commit -m "this is a test commit to trigger our pre-commit hooks"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We should see in the output that &lt;code&gt;terraform validate&lt;/code&gt; has run and failed. We should also see the error message describing why it failed, which is because we used the argument &lt;code&gt;descriptions&lt;/code&gt; instead of &lt;code&gt;description&lt;/code&gt; which is the correct argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Terraform has installed the required providers to support the configuration
upgrade process. To begin upgrading your configuration, run the following:
    terraform 0.12upgrade

To see the full set of errors that led to this message, run:
    terraform validate

Error: Unsupported argument

  on variables.tf line 3, in variable "system":
   3:   descriptions = "Name of the system or environment"

An argument named "descriptions" is not expected here. Did you mean
"description"?

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To fix the change, rename the &lt;code&gt;descriptions&lt;/code&gt; argument to &lt;code&gt;description&lt;/code&gt;, which clears our invalid syntax error. But this time, add a large number of spaces before the &lt;code&gt;default&lt;/code&gt; argument's value. This unsightly formatting will trigger our &lt;code&gt;terraform fmt&lt;/code&gt; pre-commit hook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "system" {
  type         = string
  description = "Name of the system or environment"
  default      =                        "terratest"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, let's add and commit the changes again in git:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add .
git commit -m "this is a test commit to trigger our pre-commit hooks"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This time we see that &lt;code&gt;terraform fmt&lt;/code&gt; failed. The additional spaces we added goofed up the formatting in our code. &lt;code&gt;Terraform fmt&lt;/code&gt; automatically runs and corrects these issues when the pre-commit hook fires; if it detects any formatting changes to make, it reports a failure like so:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The &lt;code&gt;gofmt&lt;/code&gt; and &lt;code&gt;golint&lt;/code&gt; hooks display as skipped. This is because we did not modify any GO files. The pre-commit hooks only run against the files that have been modified and will only trigger against each one's respective file type.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Terraform fmt............................................................Failed
- hook id: terraform-fmt
- files were modified by this hook

variables.tf

Terraform validate.......................................................Passed
gofmt................................................(no files to check)Skipped
golint...............................................(no files to check)Skipped
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If we look at our &lt;code&gt;variables.tf&lt;/code&gt; file, we can see that &lt;code&gt;terraform fmt&lt;/code&gt; has corrected our spaces. Now we can run our add and commit again to save all our changes finally. In the output we can see that both checks pass this time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vsonline:~/workspace/terraform-azure-testing$ git add .
vsonline:~/workspace/terraform-azure-testing$ git commit -m "this is a test commit to trigger our pre-commit hooks"
Terraform fmt............................................................Passed
Terraform validate.......................................................Passed
gofmt................................................(no files to check)Skipped
golint...............................................(no files to check)Skipped
[master 9b67894] this is a test commit to trigger our pre-commit hooks
 1 file changed, 3 insertions(+), 2 deletions(-)

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now our commit is complete. With pre-commit hooks, we can catch bugs before they are committed to source control. This process makes our commits much cleaner and prevents silly mistakes from reaching the repository. It also helps with code reviews by weeding out syntactical mistakes. We catch errors earlier in development: if we just made a change and ran &lt;code&gt;terraform apply&lt;/code&gt; to test it, we would have to wait for that process to kick off before realizing our configuration wasn't syntactically correct. Instead, we can iterate through our code more quickly, which speeds up development. We could also perform static analysis testing within a CI/CD pipeline to analyze our Terraform plan.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Terraform validate&lt;/code&gt; is a basic static analysis test that we can perform. There are many other static analysis tools in the community that provide great benefits. Below are a few to note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/terraform-linters/tflint"&gt;TFLint&lt;/a&gt; - Catches provider-specific bugs like incorrect Azure VM sizes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/instrumenta/conftest"&gt;Conftest&lt;/a&gt; - Analyzes Terraform plan files and enforces defined policies and governance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/bridgecrewio/checkov"&gt;Checkov&lt;/a&gt; - Checks security and compliance at the Terraform configuration level. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 4 — Unit Testing &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;A unit test involves testing a single unit or individual component, the smallest testable piece of code. In Terraform, we could write tests that check the resource blocks in our Terraform configuration; however, many in the Terraform community agree that this offers very little value. To write a valuable test, we need to communicate with the provider and stand up infrastructure, which by software development testing standards is technically not a unit test but an integration test. In infrastructure development with Terraform, Gruntwork defines a unit test as testing a single module by itself. &lt;/p&gt;

&lt;p&gt;We are going to write a unit test for our networking module using Terratest, a Go library built by Gruntwork for testing Terraform. Our unit test will automatically deploy the Terraform code from the &lt;code&gt;examples/network&lt;/code&gt; folder, verify through the Azure API that the desired outcome is achieved in Azure, then run a &lt;code&gt;terraform destroy&lt;/code&gt; to tear down our test infrastructure at the end. The process is the same as if we were manually testing the module, except we are automating it.&lt;/p&gt;

&lt;p&gt;First, we need to create a Go test file called &lt;code&gt;terraform_azure_network_test.go&lt;/code&gt;. To make this file, right-click on the &lt;code&gt;test&lt;/code&gt; folder in Visual Studio Codespace and select &lt;strong&gt;new file&lt;/strong&gt;. Then name the file &lt;code&gt;terraform_azure_network_test.go&lt;/code&gt;. Notice that the file name ends in &lt;code&gt;_test.go&lt;/code&gt;. This suffix marks it as a Go test file; Go will automatically recognize it as one and execute it when we run our test command later. Now we will create the foundational code for our unit test. &lt;/p&gt;

&lt;h5&gt;
  
  
  Base Setup
&lt;/h5&gt;

&lt;p&gt;Below is the code for a base test setup. I typically use this as a starting point for creating a test and then expand from there. Copy this code to the &lt;code&gt;terraform_azure_network_test.go&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package test

import (
  "github.com/gruntwork-io/terratest/modules/terraform"
  "testing"
)


func TestTerraformAzureNetworkingExample(t *testing.T) {
    t.Parallel()

    terraformOptions := &amp;amp;terraform.Options{
    }

    // At the end of the test, run `terraform destroy` to clean up any resources that were created
    defer terraform.Destroy(t, terraformOptions)

    // This will run `terraform init` and `terraform apply` and fail the test if there are any errors
    terraform.InitAndApply(t, terraformOptions)

}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Let's review all the parts and pieces of the code. Starting from the top, we have &lt;code&gt;package test&lt;/code&gt;, declaring that the name of our package is &lt;code&gt;test&lt;/code&gt;, which describes its purpose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package test
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next, we have our &lt;code&gt;import&lt;/code&gt; declaration, which lists the libraries or packages used in the Go code. In Go, libraries and packages are similar to modules in PowerShell. We can reference packages native to Go, or we can reference ones from source control repos like GitHub. We will be adding more libraries as we build out our unit test; for now, we are using the &lt;code&gt;testing&lt;/code&gt; Go package and the &lt;code&gt;github.com/gruntwork-io/terratest/modules/terraform&lt;/code&gt; library created by Gruntwork, which allows us to automate deploying code with Terraform during the test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import (
  "github.com/gruntwork-io/terratest/modules/terraform"
  "testing"
)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After our &lt;code&gt;import&lt;/code&gt; declaration, we have our testing function, which we called &lt;code&gt;TestTerraformAzureNetworkingExample&lt;/code&gt;. Testing functions in Go follow this format: the name must start with the word &lt;code&gt;Test&lt;/code&gt; followed by a capitalized letter, and the function takes a single &lt;code&gt;*testing.T&lt;/code&gt; parameter. We are also using &lt;code&gt;t.Parallel()&lt;/code&gt;, which indicates that we want to run this test in parallel with other tests. This parallel statement is a big deal because we can run multiple tests at once to exercise our module in several different ways:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func TestTerraformAzureNetworkingExample(t *testing.T) {
    t.Parallel()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Inside our function is a variable called &lt;code&gt;terraformOptions&lt;/code&gt;. In Go, we can declare variables using &lt;code&gt;:=&lt;/code&gt;, which differs from PowerShell, where we would use &lt;em&gt;$variable = somevalue&lt;/em&gt;. In our variable declaration, we are using the &lt;code&gt;github.com/gruntwork-io/terratest/modules/terraform&lt;/code&gt; library by referencing its package name, &lt;code&gt;terraform&lt;/code&gt;, and the type inside it called &lt;code&gt;Options&lt;/code&gt;. This type is called a &lt;em&gt;struct&lt;/em&gt; in Go, and a struct is similar to an object in PowerShell. We are essentially making an options object that we can fill with a collection of settings and pass through to Terraform. For now, we have no options defined in our struct, but in the next step, we will add them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    terraformOptions := &amp;amp;terraform.Options{
    }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To finish out our testing function, we have two more lines of code. We are using the Terratest library again by referencing &lt;code&gt;terraform&lt;/code&gt;; this time we are using the &lt;code&gt;Destroy&lt;/code&gt; and &lt;code&gt;InitAndApply&lt;/code&gt; functions from the &lt;a href="https://github.com/gruntwork-io/terratest/tree/master/modules/terraform"&gt;Terratest library&lt;/a&gt; to perform the standard &lt;code&gt;terraform init&lt;/code&gt;, &lt;code&gt;apply&lt;/code&gt;, and &lt;code&gt;destroy&lt;/code&gt; commands using the settings defined in the &lt;code&gt;terraformOptions&lt;/code&gt; variable. Note that &lt;code&gt;defer&lt;/code&gt; in Go is similar to a &lt;em&gt;finally&lt;/em&gt; in PowerShell: Go runs the deferred function at the end, and it always runs even if the test errors out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;defer terraform.Destroy(t, terraformOptions)

terraform.InitAndApply(t, terraformOptions)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
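If defer is new to you, here is a minimal standalone Go sketch (independent of Terratest; all names below are our own illustration) showing why the deferred destroy step always runs last:

```go
package main

import "fmt"

// steps records the order in which each phase of the test runs.
var steps []string

func apply()    { steps = append(steps, "apply") }
func validate() { steps = append(steps, "validate") }
func destroy()  { steps = append(steps, "destroy") }

// runTest mirrors the shape of our testing function: destroy is
// deferred up front, so it always executes last, even if a later
// step fails and the function returns early.
func runTest() {
	defer destroy()
	apply()
	validate()
}

func main() {
	runTest()
	fmt.Println(steps) // destroy appears last
}
```

Because the deferred call is registered before any real work happens, the cleanup is guaranteed even when an assertion in the middle of the test fails.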



&lt;h5&gt;
  
  
  Configure Terraform Options
&lt;/h5&gt;

&lt;p&gt;Let's add some options to our &lt;code&gt;terraformOptions&lt;/code&gt; variable. Our Terraform code in the &lt;code&gt;examples/network&lt;/code&gt; folder has a &lt;code&gt;variables.tf&lt;/code&gt; file that allows us to input several parameters and customize our network infrastructure when deployed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "system" {
  type        = string
  description = "Name of the system or environment"
  default     = "terratest"
}
variable "location" {
  type        = string
  description = "Azure location of terraform server environment"
  default     = "westus2"
}
variable "vnet_address_space" {
  description = "Address space for Virtual Network"
}

variable "subnet_prefix" {
  type        = string
  description = "Prefix of subnet address"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We need to define the values for these variables in our unit test and then pass them through to Terraform using the &lt;code&gt;terraformOptions&lt;/code&gt; variable. &lt;/p&gt;

&lt;p&gt;Notice that our &lt;code&gt;variables.tf&lt;/code&gt; file has a variable for &lt;code&gt;location&lt;/code&gt;. For the &lt;code&gt;location&lt;/code&gt; value, we want each test run to deploy into a randomly chosen region from the set of regions we might actually use. This randomization gives us a more thorough test, since each run can deploy infrastructure into any of the listed regions. We will do this by providing a list of candidate regions in our test and randomly picking one when the test runs. Doing this can also help sniff out any region-specific incidents occurring in Azure when we run our tests.  &lt;/p&gt;

&lt;p&gt;To create the list of regions, we declare a variable called &lt;code&gt;regions&lt;/code&gt; and set its type to &lt;code&gt;[]string&lt;/code&gt;, a slice of strings. Then we fill out that list with the various Azure regions we want to test in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    var regions = []string{
        "centralus",
        "eastus",
        "eastus2",
        "northcentralus",
        "southcentralus",
        "westcentralus",
        "westus",
        "westus2",
    }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next, we create a variable that uses the &lt;code&gt;RandomString&lt;/code&gt; function to randomly pick a region from our list:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;azureRegion := random.RandomString(regions)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
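Assuming only the Go standard library, the behavior of &lt;code&gt;random.RandomString&lt;/code&gt; amounts to picking a uniformly random element from the slice. A hypothetical equivalent (pickRandom is our own name, not part of Terratest):

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickRandom returns one element of options chosen at random,
// roughly what Terratest's random.RandomString does. This is an
// illustrative helper, not the Terratest implementation.
func pickRandom(options []string) string {
	return options[rand.Intn(len(options))]
}

func main() {
	regions := []string{"centralus", "eastus", "westus2"}
	fmt.Println(pickRandom(regions))
}
```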



&lt;p&gt;Now that we have a variable created for the Terraform &lt;code&gt;location&lt;/code&gt; input variable, we can configure values for the rest of the input variables. For the &lt;code&gt;system&lt;/code&gt; variable, we are going to use a concept similar to the one we used for the Azure region: we will generate a random name for the system.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemName := strings.ToLower(random.UniqueId())
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This random name generation allows us to run this test several times concurrently without resource name collisions. It is useful when multiple people are adding features to a module on their own branches and running the tests. Next, we have hardcoded values for our virtual network address space and subnet prefix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vnetAddress := "10.0.0.0/16"
subnetPrefix := "10.0.0.0/24"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that we have values for our input variables, we need to pass them into &lt;code&gt;terraformOptions&lt;/code&gt;. We add the &lt;code&gt;TerraformDir&lt;/code&gt; argument to specify that the code we want to execute lives in the &lt;code&gt;examples/network&lt;/code&gt; folder of our repo. Next, we define a map of variables, which lets us pass values through to Terraform as if we were using the &lt;code&gt;-var&lt;/code&gt; option on the command line. We are passing through all four variables that we just set up in Go:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraformOptions := &amp;amp;terraform.Options{

        TerraformDir: "../examples/network",

        Vars: map[string]interface{}{
            "system":             systemName,
            "location":           azureRegion,
            "vnet_address_space": vnetAddress,
            "subnet_prefix":      subnetPrefix,
        },
    }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
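Conceptually, Terratest turns the entries of &lt;code&gt;Vars&lt;/code&gt; into &lt;code&gt;-var&lt;/code&gt; arguments on the Terraform command line. A hypothetical sketch of that translation (buildVarArgs is our own illustration, not Terratest's actual code):

```go
package main

import (
	"fmt"
	"sort"
)

// buildVarArgs converts a Vars-style map into the -var flags that
// Terraform accepts on the command line. This is an illustrative
// sketch of the idea, not Terratest's actual implementation.
func buildVarArgs(vars map[string]interface{}) []string {
	args := []string{}
	for k, v := range vars {
		args = append(args, fmt.Sprintf("-var=%s=%v", k, v))
	}
	sort.Strings(args) // map iteration order is random; sort for stable output
	return args
}

func main() {
	vars := map[string]interface{}{
		"system":   "terratest",
		"location": "westus2",
	}
	fmt.Println(buildVarArgs(vars))
	// Prints: [-var=location=westus2 -var=system=terratest]
}
```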



&lt;p&gt;We need to add two more packages and libraries to our &lt;code&gt;import&lt;/code&gt; declaration since we are now using them to create a random string for the system name and to choose our Azure region randomly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import (
    "strings"
    "testing"


    "github.com/gruntwork-io/terratest/modules/random"
    "github.com/gruntwork-io/terratest/modules/terraform"
)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we have our test configured to run our Terraform code in the &lt;code&gt;examples/network&lt;/code&gt; folder with values for the &lt;code&gt;variables.tf&lt;/code&gt; file. At this point, our unit test in &lt;code&gt;terraform_azure_network_test.go&lt;/code&gt; should look like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package test

import (
    "strings"
    "testing"

    "github.com/gruntwork-io/terratest/modules/random"
    "github.com/gruntwork-io/terratest/modules/terraform"
)

// An example of how to test the Terraform module in examples/terraform-azure-example using Terratest.
func TestTerraformAzureNetworkingExample(t *testing.T) {
    t.Parallel()

    var regions = []string{
        "centralus",
        "eastus",
        "eastus2",
        "northcentralus",
        "southcentralus",
        "westcentralus",
        "westus",
        "westus2",
    }

    // Pick a random Azure region to test in.
    azureRegion := random.RandomString(regions)

    // Network Settings for Vnet and Subnet
    systemName := strings.ToLower(random.UniqueId())
    vnetAddress := "10.0.0.0/16"
    subnetPrefix := "10.0.0.0/24"

    terraformOptions := &amp;amp;terraform.Options{

        // The path to where our Terraform code is located
        TerraformDir: "../examples/network",

        // Variables to pass to our Terraform code using -var options
        Vars: map[string]interface{}{
            "system":             systemName,
            "location":           azureRegion,
            "vnet_address_space": vnetAddress,
            "subnet_prefix":      subnetPrefix,
        },
    }

    // At the end of the test, run `terraform destroy` to clean up any resources that were created
    defer terraform.Destroy(t, terraformOptions)

    // This will run `terraform init` and `terraform apply` and fail the test if there are any errors
    terraform.InitAndApply(t, terraformOptions)
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h5&gt;
  
  
  Add In The Tests
&lt;/h5&gt;

&lt;p&gt;Now we are ready to add in some tests. We want to write tests that are meaningful and don't want the process to become a burden to maintain. Determining if a test is meaningful depends on the situation and environment. A meaningful test for someone else doesn't mean it is meaningful for your case. For this example, we are going to keep things simple. We want to test that our virtual network subnet is truly associated with an NSG after our Terraform example is deployed. This test ensures that any change to our module in the future won't interrupt this desired outcome, which could result in a potential security risk if the NSG was not assigned to the subnet.&lt;/p&gt;

&lt;p&gt;Our Terraform code in the &lt;code&gt;examples/network&lt;/code&gt; folder contains the following in the &lt;code&gt;output.tf&lt;/code&gt; file. These are values we want to use to perform our tests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "vnet_rg" {
  description = "Resource group of vnet"
  value       = module.vnet.vnet_rg
}

output "subnet_id" {
  description = "Subnet ID"
  value       = module.vnet.subnet_id
}

output "nsg_name" {
  description = "Name of vnet Security Group"
  value       = module.nsg.nsg_name
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; We want to make sure our modules are not too large to be tested. We don't want a 10,000-line module, because it's impossible to unit test. The more you write tests for your modules, the more structured your modules become for testing. This practice is a great benefit, as it leads to more stable and better structured Terraform code.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The &lt;code&gt;terraform.Output&lt;/code&gt; function allows us to collect output values from our Terraform deployment after the &lt;code&gt;init&lt;/code&gt; and &lt;code&gt;apply&lt;/code&gt; have completed. We reference the &lt;code&gt;terraformOptions&lt;/code&gt; variable, which contains our Terraform environment settings, along with the name of each output value from &lt;code&gt;output.tf&lt;/code&gt;. Then we save each output value into a variable in Go:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vnetRG := terraform.Output(t, terraformOptions, "vnet_rg")
subnetID := terraform.Output(t, terraformOptions, "subnet_id")
nsgName := terraform.Output(t, terraformOptions, "nsg_name")
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we will use the output variables we created to check our Azure environment directly through the Azure API and confirm that the NSG we created is assigned to the subnet. We look up all subnet IDs assigned to the NSG using the &lt;code&gt;azure.GetAssociationsforNSG&lt;/code&gt; function, which requires the virtual network resource group and NSG name. Then we run a test using the &lt;code&gt;assert&lt;/code&gt; Go package, which lets us compare the resource IDs in our &lt;code&gt;nsgAssociations&lt;/code&gt; variable with the &lt;code&gt;subnetID&lt;/code&gt; variable, which contains the output from our Terraform code. This comparison takes the subnet ID output from Terraform, looks into Azure via the API, and verifies that the subnet is assigned to the NSG we created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    // Look up Subnet and NIC ID associations of NSG
    nsgAssociations := azure.GetAssociationsforNSG(t, vnetRG, nsgName, "")

    //Check if subnet is associated with NSG
    assert.Contains(t, nsgAssociations, subnetID)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If the subnet ID is in the list of NSG associations, our test will pass; if it is missing, the test will fail. &lt;/p&gt;
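With only the standard library, the check that &lt;code&gt;assert.Contains&lt;/code&gt; performs on a slice of strings amounts to a simple membership test. A minimal sketch (contains is our own helper; testify's version also handles strings, maps, and other types):

```go
package main

import "fmt"

// contains reports whether target appears in list: the membership
// check assert.Contains performs for a slice of strings. This is an
// illustrative sketch, not testify's implementation.
func contains(list []string, target string) bool {
	for _, item := range list {
		if item == target {
			return true
		}
	}
	return false
}

func main() {
	// Hypothetical association list for demonstration only.
	nsgAssociations := []string{"subnet-id-a", "subnet-id-b"}
	fmt.Println(contains(nsgAssociations, "subnet-id-a")) // true
	fmt.Println(contains(nsgAssociations, "subnet-id-c")) // false
}
```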

&lt;p&gt;Lastly, we need to add two more libraries to our &lt;code&gt;import&lt;/code&gt; declaration. We are using &lt;code&gt;assert&lt;/code&gt; to perform the comparison validating that our subnet ID is associated with the NSG, and we are using the &lt;code&gt;azure.GetAssociationsforNSG&lt;/code&gt; function from the Go library AzTest. The &lt;code&gt;azure.GetAssociationsforNSG&lt;/code&gt; function allows us to quickly authenticate with our Azure environment and look up the resources assigned to an NSG. It is just a helper function for working with the &lt;a href="https://github.com/Azure/azure-sdk-for-go"&gt;Azure Go SDK&lt;/a&gt;, which communicates with the Azure API:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The &lt;a href="https://godoc.org/github.com/gruntwork-io/terratest/modules/azure"&gt;Azure test library&lt;/a&gt; in Terratest is currently fairly limited compared to &lt;a href="https://godoc.org/github.com/gruntwork-io/terratest/modules/aws"&gt;AWS&lt;/a&gt;; Gruntwork has been heavily focused on building out the AWS and GCP modules. In the current state, most companies are making their own private libraries for testing their Azure resources. To help people get started writing tests for Terraform code in Azure, I have expanded upon the Terratest Azure testing functions and created a public Go library called &lt;a href="https://github.com/allanore/aztest"&gt;AzTest&lt;/a&gt;; feel free to use it.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import (
    "strings"
    "testing"

    "github.com/allanore/aztest/modules/azure"
    "github.com/gruntwork-io/terratest/modules/random"
    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that we have our test written, our entire unit test is as follows. Copy the following to &lt;code&gt;terraform_azure_network_test.go&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package test

import (
    "strings"
    "testing"

    "github.com/allanore/aztest/modules/azure"
    "github.com/gruntwork-io/terratest/modules/random"
    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

// An example of how to test the Terraform module in examples/terraform-azure-example using Terratest.
func TestTerraformAzureNetworkingExample(t *testing.T) {
    t.Parallel()

    var regions = []string{
        "centralus",
        "eastus",
        "eastus2",
        "northcentralus",
        "southcentralus",
        "westcentralus",
        "westus",
        "westus2",
    }

    // Pick a random Azure region to test in.
    azureRegion := random.RandomString(regions)

    // Network Settings for Vnet and Subnet
    systemName := strings.ToLower(random.UniqueId())
    vnetAddress := "10.0.0.0/16"
    subnetPrefix := "10.0.0.0/24"

    terraformOptions := &amp;amp;terraform.Options{

        // The path to where our Terraform code is located
        TerraformDir: "../examples/network",

        // Variables to pass to our Terraform code using -var options
        Vars: map[string]interface{}{
            "system":             systemName,
            "location":           azureRegion,
            "vnet_address_space": vnetAddress,
            "subnet_prefix":      subnetPrefix,
        },
    }

    // At the end of the test, run `terraform destroy` to clean up any resources that were created
    defer terraform.Destroy(t, terraformOptions)

    // This will run `terraform init` and `terraform apply` and fail the test if there are any errors
    terraform.InitAndApply(t, terraformOptions)

    // Run `terraform output` to get the value of an output variable
    vnetRG := terraform.Output(t, terraformOptions, "vnet_rg")
    subnetID := terraform.Output(t, terraformOptions, "subnet_id")
    nsgName := terraform.Output(t, terraformOptions, "nsg_name")

    // Look up Subnet and NIC ID associations of NSG
    nsgAssociations := azure.GetAssociationsforNSG(t, vnetRG, nsgName, "")

    // Check if subnet is associated with NSG
    assert.Contains(t, nsgAssociations, subnetID)

}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Whew! We just built our first unit test in Go. Now, let's run it. First, we need to authenticate to Azure; the easiest way is with the Azure CLI. While in the &lt;code&gt;~/workspace/terraform-azure-testing/test&lt;/code&gt; directory, type the following command in the terminal and log in with your Azure account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az login
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, we need to set the &lt;code&gt;ARM_SUBSCRIPTION_ID&lt;/code&gt; environment variable, which is used by several Terratest functions when deploying to Azure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export ARM_SUBSCRIPTION_ID=$(az account show | jq '.id' -r)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once logged in with the Azure CLI, we need to download the dependencies for our tests: the packages we specified in the import declaration. This is similar to running &lt;em&gt;Install-Module&lt;/em&gt; in PowerShell to install the external modules used in a script. Type the following &lt;code&gt;go get&lt;/code&gt; command while in the &lt;code&gt;test&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go get -t -v
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We will see the libraries and packages download; this process may take a minute. Once complete, we are ready to run our test. To kick it off, run &lt;code&gt;go test&lt;/code&gt; with the &lt;code&gt;-v&lt;/code&gt; flag for verbose output, specifying the Go test file we just created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go test -v terraform_azure_network_test.go
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We will start to see Terraform executing our example code in the output display:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=== RUN   TestTerraformAzureNetworkingExample
=== PAUSE TestTerraformAzureNetworkingExample
=== CONT  TestTerraformAzureNetworkingExample
TestTerraformAzureNetworkingExample 2020-05-09T14:30:17Z retry.go:72: terraform [init -upgrade=false]
TestTerraformAzureNetworkingExample 2020-05-09T14:30:17Z command.go:86: Running command terraform with args [init -upgrade=false]
TestTerraformAzureNetworkingExample 2020-05-09T14:30:17Z command.go:168: Initializing modules...
TestTerraformAzureNetworkingExample 2020-05-09T14:30:17Z command.go:168: 
TestTerraformAzureNetworkingExample 2020-05-09T14:30:17Z command.go:168: Initializing the backend...
TestTerraformAzureNetworkingExample 2020-05-09T14:30:17Z command.go:168: 
TestTerraformAzureNetworkingExample 2020-05-09T14:30:17Z command.go:168: Initializing provider plugins...
TestTerraformAzureNetworkingExample 2020-05-09T14:30:19Z command.go:168: 
TestTerraformAzureNetworkingExample 2020-05-09T14:30:19Z command.go:168: Terraform has been successfully initialized!
TestTerraformAzureNetworkingExample 2020-05-09T14:30:19Z command.go:168: 

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Don't hit CTRL + C during the test; interrupting it prevents resources from being cleaned up in Azure. If the test errors out, the deferred step still runs and executes &lt;code&gt;terraform destroy&lt;/code&gt; to remove anything that was provisioned.&lt;/p&gt;
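&lt;p&gt;As a side note, the cleanup guarantee comes from Go's &lt;code&gt;defer&lt;/code&gt; semantics. The following minimal, self-contained sketch (the step names are illustrative, not actual Terratest calls) shows that a deferred call runs when the surrounding function returns, even if something fails partway through:&lt;br&gt;
&lt;/p&gt;

```go
package main

import "fmt"

// runTestLifecycle mimics the apply/assert/destroy flow of our test.
// The named return value lets the deferred cleanup record itself last.
func runTestLifecycle() (steps []string) {
	// Deferred calls run when the function returns, even after an early
	// return, which is why `defer terraform.Destroy(...)` is a reliable
	// cleanup hook.
	defer func() { steps = append(steps, "terraform destroy") }()

	steps = append(steps, "terraform apply")
	steps = append(steps, "run assertions")
	return
}

func main() {
	// Prints: [terraform apply run assertions terraform destroy]
	fmt.Println(runTestLifecycle())
}
```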

&lt;p&gt;The test executes the Terraform code in the &lt;code&gt;examples/network&lt;/code&gt; folder, verifies that the NSG is associated with the subnet, and runs &lt;code&gt;terraform destroy&lt;/code&gt; at the end. Once the test run completes, we see a result like the one below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TestTerraformAzureNetworkingExample 2020-05-09T14:32:01Z command.go:168: azurerm_resource_group.rg: Destruction complete after 47s
TestTerraformAzureNetworkingExample 2020-05-09T14:32:01Z command.go:168: 
TestTerraformAzureNetworkingExample 2020-05-09T14:32:01Z command.go:168: Destroy complete! Resources: 7 destroyed.
--- PASS: TestTerraformAzureNetworkingExample (103.78s)
PASS
ok      command-line-arguments  103.783s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now our unit test is complete! In the next step, we will add an integration test. &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5 — Integration Testing &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;An integration test for infrastructure code validates the functionality of two components interacting together; this could be a web app with a SQL database, or two microservices talking to each other. For demonstration purposes, we will keep it simple and stand up two virtual networks, then test that we can peer them together. The example code for each vnet is in the &lt;code&gt;examples/vnet-peering&lt;/code&gt; directory. Our integration test will execute the Terraform code in &lt;code&gt;vnet1&lt;/code&gt; to deploy the first virtual network. Once complete, the test will execute the code in &lt;code&gt;vnet2&lt;/code&gt; to deploy a second virtual network and peer the two together. We will then write a test that validates that the peering was successful:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;examples
├── networking
└── vnet-peering
    ├── vnet1
    │   ├── main.tf
    │   ├── output.tf
    │   └── variables.tf
    └── vnet2
        ├── main.tf
        ├── output.tf
        └── variables.tf
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In our &lt;code&gt;test&lt;/code&gt; directory, let's create another Go test file called &lt;code&gt;terraform_azure_network_peering_test.go&lt;/code&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  Base Test
&lt;/h5&gt;

&lt;p&gt;Copy the code below and paste it in as our base test configuration. Notice we now have two sets of &lt;code&gt;terraform.Options&lt;/code&gt; as well as two sets of the destroy and apply steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package test

import (
    "testing"

    "github.com/gruntwork-io/terratest/modules/terraform"
)

// An example of how to test the Terraform code in examples/vnet-peering using Terratest.
func TestTerraformAzureNetworkingPeeringExample(t *testing.T) {
    t.Parallel()

    vnet1Opts := &amp;amp;terraform.Options{

    }

    // Deploy VNet1
    defer terraform.Destroy(t, vnet1Opts)
    terraform.InitAndApply(t, vnet1Opts)

    vnet2Opts := &amp;amp;terraform.Options{

    }

    // Deploy VNet2
    defer terraform.Destroy(t, vnet2Opts)
    terraform.InitAndApply(t, vnet2Opts)
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h5&gt;
  
  
  Add in Terraform Options
&lt;/h5&gt;

&lt;p&gt;Next, we will add in the Terraform options. We include the steps for randomized Azure regions as well as random system names for our virtual networks. The big difference from our unit test is that, after deploying VNet1, we pass its Terraform output values into the &lt;code&gt;terraform.Options&lt;/code&gt; of VNet2, which allows us to configure the peering between them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package test

import (
    "strings"
    "testing"

    "github.com/gruntwork-io/terratest/modules/random"
    "github.com/gruntwork-io/terratest/modules/terraform"
)

// An example of how to test the Terraform code in examples/vnet-peering using Terratest.
func TestTerraformAzureNetworkingPeeringExample(t *testing.T) {
    t.Parallel()

    var regions = []string{
        "centralus",
        "eastus",
        "eastus2",
        "northcentralus",
        "southcentralus",
        "westcentralus",
        "westus",
        "westus2",
    }

    // Pick a random Azure region to test in.
    azureRegion := random.RandomString(regions)

    // Network Settings for Vnet and Subnet
    vnet1Sysname := strings.ToLower(random.UniqueId())
    vnet1Address := "10.0.0.0/16"
    vnet1SubnetPrefix := "10.0.0.0/24"
    vnet2Sysname := strings.ToLower(random.UniqueId())
    vnet2Address := "10.1.0.0/16"
    vnet2SubnetPrefix := "10.1.0.0/24"

    vnet1Opts := &amp;amp;terraform.Options{

        // The path to where our Terraform code is located
        TerraformDir: "../examples/vnet-peering/vnet1",

        // Variables to pass to our Terraform code using -var options
        Vars: map[string]interface{}{
            "system":             vnet1Sysname,
            "location":           azureRegion,
            "vnet_address_space": vnet1Address,
            "subnet_prefix":      vnet1SubnetPrefix,
        },
    }

    // Deploy VNet1
    defer terraform.Destroy(t, vnet1Opts)
    terraform.InitAndApply(t, vnet1Opts)
    vnetOutRG := terraform.Output(t, vnet1Opts, "vnet_rg")
    vnetOutName := terraform.Output(t, vnet1Opts, "vnet_name")

    vnet2Opts := &amp;amp;terraform.Options{

        // The path to where our Terraform code is located
        TerraformDir: "../examples/vnet-peering/vnet2",

        // Variables to pass to our Terraform code using -var options
        Vars: map[string]interface{}{
            "system":             vnet2Sysname,
            "location":           azureRegion,
            "vnet_address_space": vnet2Address,
            "subnet_prefix":      vnet2SubnetPrefix,
            "peer_vnet_rg":       vnetOutRG,
            "peer_vnet_name":     vnetOutName,
        },
    }

    // Deploy VNet2
    defer terraform.Destroy(t, vnet2Opts)
    terraform.InitAndApply(t, vnet2Opts)
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h5&gt;
  
  
  Add in the Tests
&lt;/h5&gt;

&lt;p&gt;Next, we will add in the tests. We collect the resource group and virtual network name from the output of deploying VNet2, then use those values to look up the virtual network with the &lt;code&gt;azure.GetVnetbyName&lt;/code&gt; function, storing the result in the &lt;code&gt;vnet2Properties&lt;/code&gt; variable. From there, we use a Go &lt;code&gt;for&lt;/code&gt; loop to iterate over the &lt;code&gt;VirtualNetworkPeerings&lt;/code&gt; property, which contains a list of all peerings made on the virtual network, and validate that every peering is in a &lt;code&gt;Succeeded&lt;/code&gt; state, ensuring that the networks peered properly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    // Collect RG and Virtual Network name from VNet2 Output
    vnet2RG := terraform.Output(t, vnet2Opts, "vnet_rg")
    vnet2Name := terraform.Output(t, vnet2Opts, "vnet_name")

    // Look up Virtual Network 2 by Name
    vnet2Properties := azure.GetVnetbyName(t, vnet2RG, vnet2Name, "")

    // Check if VNet Peering in VNet2 Provisioned Successfully
    for _, vnet := range *vnet2Properties.VirtualNetworkPeerings {
        assert.Equal(t, "Succeeded", string(vnet.VirtualNetworkPeeringPropertiesFormat.ProvisioningState), "Check if Peerings provisioned successfully")
    }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Our final integration test file should look like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package test

import (
    "strings"
    "testing"

    "github.com/allanore/aztest/modules/azure"
    "github.com/gruntwork-io/terratest/modules/random"
    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

// An example of how to test the Terraform code in examples/vnet-peering using Terratest.
func TestTerraformAzureNetworkingPeeringExample(t *testing.T) {
    t.Parallel()

    var regions = []string{
        "centralus",
        "eastus",
        "eastus2",
        "northcentralus",
        "southcentralus",
        "westcentralus",
        "westus",
        "westus2",
    }

    // Pick a random Azure region to test in.
    azureRegion := random.RandomString(regions)

    // Network Settings for Vnet and Subnet
    vnet1Sysname := strings.ToLower(random.UniqueId())
    vnet1Address := "10.0.0.0/16"
    vnet1SubnetPrefix := "10.0.0.0/24"
    vnet2Sysname := strings.ToLower(random.UniqueId())
    vnet2Address := "10.1.0.0/16"
    vnet2SubnetPrefix := "10.1.0.0/24"

    vnet1Opts := &amp;amp;terraform.Options{

        // The path to where our Terraform code is located
        TerraformDir: "../examples/vnet-peering/vnet1",

        // Variables to pass to our Terraform code using -var options
        Vars: map[string]interface{}{
            "system":             vnet1Sysname,
            "location":           azureRegion,
            "vnet_address_space": vnet1Address,
            "subnet_prefix":      vnet1SubnetPrefix,
        },
    }

    // Deploy VNet1
    defer terraform.Destroy(t, vnet1Opts)
    terraform.InitAndApply(t, vnet1Opts)
    vnetOutRG := terraform.Output(t, vnet1Opts, "vnet_rg")
    vnetOutName := terraform.Output(t, vnet1Opts, "vnet_name")

    vnet2Opts := &amp;amp;terraform.Options{

        // The path to where our Terraform code is located
        TerraformDir: "../examples/vnet-peering/vnet2",

        // Variables to pass to our Terraform code using -var options
        Vars: map[string]interface{}{
            "system":             vnet2Sysname,
            "location":           azureRegion,
            "vnet_address_space": vnet2Address,
            "subnet_prefix":      vnet2SubnetPrefix,
            "peer_vnet_rg":       vnetOutRG,
            "peer_vnet_name":     vnetOutName,
        },
    }

    // Deploy VNet2
    defer terraform.Destroy(t, vnet2Opts)
    terraform.InitAndApply(t, vnet2Opts)

    // Collect RG and Virtual Network name from VNet2 Output
    vnet2RG := terraform.Output(t, vnet2Opts, "vnet_rg")
    vnet2Name := terraform.Output(t, vnet2Opts, "vnet_name")

    // Look up Virtual Network 2 by Name
    vnet2Properties := azure.GetVnetbyName(t, vnet2RG, vnet2Name, "")

    // Check if VNet Peering in VNet2 Provisioned Successfully
    for _, vnet := range *vnet2Properties.VirtualNetworkPeerings {
        assert.Equal(t, "Succeeded", string(vnet.VirtualNetworkPeeringPropertiesFormat.ProvisioningState), "Check if Peerings provisioned successfully")
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, let's run our integration test. There is no need to download dependencies this time; we already downloaded them when we ran the unit test. Run the following command in the terminal, specifying the &lt;code&gt;terraform_azure_network_peering_test.go&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go test -v terraform_azure_network_peering_test.go
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We should see both virtual networks get deployed, followed by our test verifying the peering state between them. Then a &lt;code&gt;terraform destroy&lt;/code&gt; is run to tear down our test infrastructure. At the end, we should see that our test has passed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TestTerraformAzureNetworkingPeeringExample 2020-05-10T13:22:40Z command.go:168: azurerm_resource_group.rg: Destruction complete after 48s
TestTerraformAzureNetworkingPeeringExample 2020-05-10T13:22:41Z command.go:168: 
TestTerraformAzureNetworkingPeeringExample 2020-05-10T13:22:41Z command.go:168: Destroy complete! Resources: 7 destroyed.
--- PASS: TestTerraformAzureNetworkingPeeringExample (347.89s)
PASS
ok      command-line-arguments  347.890s

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;For an extra challenge, we could run both tests at the same time using the following command. Because we have &lt;code&gt;t.Parallel()&lt;/code&gt; in our test functions, each test can be run in parallel. This is incredibly powerful because we can test our modules in several different ways at the same time:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; In a typical continuous integration pipeline for our module, we would ideally run all the tests in our test folder whenever we commit changes to the module repo. This gives us full test coverage of the module.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go test -v
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In the next step, we will go over end-to-end testing. &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6 — End-to-End Testing &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;End-to-end testing involves standing up the entire application, composed of multiple modules. Depending on the size of the application, this can take hours, which makes end-to-end tests incredibly time-consuming to run. If the application takes a significant amount of time to deploy, it is recommended to deploy a copy of the entire application to a separate test environment and keep it deployed for a set period. Then, when we change any of the modules in the application, we can redeploy just that module and test the application's functionality as a whole with automated tests.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;As you can see, writing tests takes a bit of work. Because IaC is such a newly adopted concept, the tooling and concepts for writing tests can be awkward and complex. However, the rewards are worth it: a module with thorough tests is much more stable and gives teams greater confidence when executing that code. &lt;/p&gt;

&lt;p&gt;Infrastructure code typically "rots" quickly. There are constantly new updates to Azure, Terraform providers, and Terraform itself, which can make keeping modules working as they should a full-time task. To help with this, consider scheduling tests to run nightly so these kinds of bugs are caught as early as possible. &lt;/p&gt;
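&lt;p&gt;As a sketch, a nightly run can be as simple as a cron entry on a build host (the schedule, path, and log file below are hypothetical; a CI scheduler's cron trigger works just as well):&lt;br&gt;
&lt;/p&gt;

```shell
# Hypothetical crontab entry: run the module's full test suite every night
# at 02:00 UTC. Adjust the path, timeout, and log location to your setup.
0 2 * * * cd /home/ci/terraform-azure-testing/test; go test -v -timeout 90m >> /var/log/terratest-nightly.log
```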

&lt;p&gt;It is strongly encouraged to run your tests in an Azure subscription separate from production and even development. When developing and testing modules, there is always a risk of destroying other resources. A separate subscription also allows for safely configuring automated cleanup, ensuring that any tests that bombed out aren't leaving remnants of infrastructure behind.&lt;/p&gt;
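&lt;p&gt;A cleanup sweep on the testing subscription could look something like the following hedged sketch: it deletes any resource group whose name carries a known test prefix. The &lt;code&gt;terratest-&lt;/code&gt; prefix is an example only; match it to whatever naming your test resource groups actually use:&lt;br&gt;
&lt;/p&gt;

```shell
# Hypothetical cleanup sweep for a dedicated testing subscription: find every
# resource group whose name starts with the test prefix and delete it
# asynchronously. Requires an authenticated Azure CLI session.
for rg in $(az group list --query "[?starts_with(name, 'terratest-')].name" -o tsv 2>/dev/null); do
  az group delete --name "$rg" --yes --no-wait
done
```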

&lt;p&gt;There is also test-driven development (TDD), a software development practice where tests are written before the code itself. TDD is not yet common in the infrastructure world because of the discipline's infancy. However, IaC veterans like &lt;a href="https://www.thoughtworks.com/profiles/kief-morris"&gt;Kief Morris&lt;/a&gt; are huge advocates for it, arguing that TDD for infrastructure code drives better design because it requires one to think through the outcome and function of the infrastructure while developing it.&lt;/p&gt;

&lt;p&gt;Tests for infrastructure code are critical to the stability of the infrastructure. Jacob Kaplan-Moss, software developer and co-creator of Django, said it best: "Code without tests is broken as designed." Writing tests for IaC can be time-consuming and difficult because of the immaturity of the tooling; however, it is an essential piece of infrastructure development that shouldn't be overlooked.&lt;/p&gt;

&lt;p&gt;In the next article, we will wrap up this series by reviewing best practices for using Terraform. &lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>terraform</category>
      <category>testing</category>
    </item>
  </channel>
</rss>
