<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Nam Phuong Tran</title>
    <description>The latest articles on Forem by Nam Phuong Tran (@agsouthernt).</description>
    <link>https://forem.com/agsouthernt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F358768%2Fd6cd9998-4d8e-4a0e-9f5d-d33b1060b04f.jpeg</url>
      <title>Forem: Nam Phuong Tran</title>
      <link>https://forem.com/agsouthernt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/agsouthernt"/>
    <language>en</language>
    <item>
      <title>Getting Started with AWS Infrastructure as Code: A Terraform Guide</title>
      <dc:creator>Nam Phuong Tran</dc:creator>
      <pubDate>Tue, 26 Sep 2023 14:27:10 +0000</pubDate>
      <link>https://forem.com/agsouthernt/getting-started-with-aws-infrastructure-as-code-a-terraform-guide-4l6j</link>
      <guid>https://forem.com/agsouthernt/getting-started-with-aws-infrastructure-as-code-a-terraform-guide-4l6j</guid>
      <description>&lt;p&gt;If you're looking to start working with AWS and dive into the world of Terraform, this article is your comprehensive guide. We'll walk you through setting up your Terraform project to work seamlessly with AWS and share essential best practices for a smooth transition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 1: Getting Started with AWS&lt;/strong&gt;&lt;br&gt;
1.1 &lt;a href="https://aws.amazon.com/getting-started/guides/setup-environment/"&gt;Create Your AWS Account&lt;/a&gt;&lt;br&gt;
We begin by creating your AWS account, which is your gateway to AWS services. Follow our step-by-step guide to set up your AWS account quickly and efficiently.&lt;/p&gt;

&lt;p&gt;1.2 &lt;a href="https://docs.aws.amazon.com/SetUp/latest/UserGuide/setup-acctrequirements.html"&gt;Configure Users&lt;/a&gt;&lt;br&gt;
Learn how to configure user accounts in AWS, ensuring proper access control for your team members.&lt;/p&gt;

&lt;p&gt;1.3 &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html"&gt;Set Up the AWS CLI&lt;/a&gt;&lt;br&gt;
Discover how to install and configure the AWS CLI (Command Line Interface) for managing AWS resources from your local environment.&lt;/p&gt;

&lt;p&gt;We also have to configure AWS CLI credentials. In this article I will use the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html"&gt;long-term credentials&lt;/a&gt; option, but it would be better to use AWS IAM Identity Center (SSO), which is more secure. Note, however, that to set this up you need your root user credentials or an IAM user with sufficient permissions to enable IAM Identity Center.&lt;/p&gt;

&lt;p&gt;Let’s start with the long-term credentials option.&lt;br&gt;
Go to the AWS portal, choose your user account, then select Security Credentials.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0DCXf139--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w9g9483gt7vodokkg6ae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0DCXf139--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w9g9483gt7vodokkg6ae.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down to the Access keys section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N_ISHTDy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uvxyacyffmhf1kusoir7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N_ISHTDy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uvxyacyffmhf1kusoir7.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;br&gt;
Click the Create access key button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XeAubt2C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yqutpte7b10utrekgsd7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XeAubt2C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yqutpte7b10utrekgsd7.png" alt="Image description" width="800" height="404"&gt;&lt;/a&gt;&lt;br&gt;
Choose the first option (Command Line Interface – CLI).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nQBJTqXZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/19kkgtzw4yqq9b4m3gpq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nQBJTqXZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/19kkgtzw4yqq9b4m3gpq.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;br&gt;
Tick the confirmation checkbox and click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ayOKo332--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uzh0647kvjjrw2y11r91.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ayOKo332--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uzh0647kvjjrw2y11r91.png" alt="Image description" width="800" height="404"&gt;&lt;/a&gt;&lt;br&gt;
Because the secret access key is only shown this one time, copy its value somewhere safe or download the CSV file before closing the tab.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 2: AWS CLI and Terraform&lt;/strong&gt;&lt;br&gt;
2.1 Installing AWS CLI&lt;br&gt;
Get hands-on with the AWS CLI installation process. Check your installation with aws --version.&lt;/p&gt;

&lt;p&gt;2.2 Configure AWS CLI Credentials&lt;br&gt;
Explore different options for configuring AWS CLI credentials, including long-term credentials and more secure AWS IAM Identity Center (SSO) configurations.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X4uRMBMq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f7g5zebwfdmi6jg25u2u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X4uRMBMq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f7g5zebwfdmi6jg25u2u.png" alt="Image description" width="800" height="161"&gt;&lt;/a&gt;&lt;br&gt;
Fill in the values prompted by the aws configure command.&lt;br&gt;
Now we are going to create a resource group, which helps us manage resources better via tag management.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ic-OimuV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tflyhilzag3c5sh3qq5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ic-OimuV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tflyhilzag3c5sh3qq5c.png" alt="Image description" width="800" height="215"&gt;&lt;/a&gt;&lt;br&gt;
It will show you the result:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yLmJJ4pa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sa6p8pqpbghdyayaijuy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yLmJJ4pa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sa6p8pqpbghdyayaijuy.png" alt="Image description" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;
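After running aws configure, the CLI stores these values in two plain-text files under ~/.aws/. A sketch of what they end up looking like (the values here are placeholders, not real credentials):

```ini
; ~/.aws/credentials
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = your-secret-access-key

; ~/.aws/config
[default]
region = us-east-1
output = json
```

Keeping credentials in these files is exactly what makes the long-term-credentials option less secure than SSO: anything that can read your home directory can read them.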

&lt;p&gt;&lt;strong&gt;Section 3: Setting Up Remote State Storage&lt;/strong&gt;&lt;br&gt;
3.1 Using AWS S3 for Remote State&lt;br&gt;
Secure Remote Backends: Store the Terraform state in secure remote backends such as AWS S3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MSwdwLk8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8zavyeoy9vrv5d6ajvy8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MSwdwLk8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8zavyeoy9vrv5d6ajvy8.png" alt="Image description" width="800" height="102"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In AWS, tagging should be enabled on services: tags help us manage services better and also place each tagged service into the resource group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LJGTLNQA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jyaru1gei8sc67r7uqyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LJGTLNQA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jyaru1gei8sc67r7uqyd.png" alt="Image description" width="800" height="34"&gt;&lt;/a&gt;&lt;br&gt;
After creating the S3 bucket, we can see it on the AWS portal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iq0QViXD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rujywzsbcoc9b9o01uw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iq0QViXD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rujywzsbcoc9b9o01uw6.png" alt="Image description" width="800" height="208"&gt;&lt;/a&gt;&lt;br&gt;
Go to the Resource group and check. Awesome, they look as expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SBL30ZDb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/enprprye2ifebobczdvv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SBL30ZDb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/enprprye2ifebobczdvv.png" alt="Image description" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 4: DynamoDB for State Locking&lt;/strong&gt;&lt;br&gt;
4.1 Understanding State Locking&lt;br&gt;
Using DynamoDB for state locking enhances the reliability and scalability of Terraform workflows, especially in environments where multiple users or automation processes work concurrently. It helps ensure that infrastructure changes are applied safely and consistently while preventing conflicts and data corruption.&lt;br&gt;
Terraform uses state files to keep track of resource information and dependencies. When multiple users or automation processes work with Terraform concurrently, there's a risk of conflicts and data corruption. DynamoDB provides built-in concurrency control mechanisms, ensuring that only one process can acquire a lock for a particular state file at a given time. This prevents conflicts and ensures consistent state management.&lt;br&gt;
Terraform provides built-in support for using DynamoDB as a backend for state locking. Configuring Terraform to use DynamoDB is straightforward, and it seamlessly integrates with the Terraform CLI.&lt;br&gt;
So, we are going to create a DynamoDB table.&lt;/p&gt;

&lt;p&gt;4.2 Creating a DynamoDB Table&lt;br&gt;
Step-by-step instructions on creating a DynamoDB table to serve as the state locking mechanism.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ifV9pN3k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hh5erhodx0fwt08lc20s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ifV9pN3k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hh5erhodx0fwt08lc20s.png" alt="Image description" width="800" height="269"&gt;&lt;/a&gt;&lt;br&gt;
It should also be tagged so that it shows up in the resource group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m_ThfIwK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ee78edqag2zel0eq9jqf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m_ThfIwK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ee78edqag2zel0eq9jqf.png" alt="Image description" width="800" height="36"&gt;&lt;/a&gt;&lt;/p&gt;
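As an alternative to clicking through the console, the lock table can also be created with Terraform itself. A minimal sketch, reusing the table name from the backend configuration used later in this article; the only hard requirement from Terraform's S3 backend is a string hash key named exactly LockID:

```hcl
resource "aws_dynamodb_table" "statelock" {
  name         = "ddb-statelock-table" # must match dynamodb_table in the S3 backend block
  billing_mode = "PAY_PER_REQUEST"     # on-demand; state-locking traffic is tiny
  hash_key     = "LockID"              # the S3 backend requires exactly this attribute

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

One caveat: the lock table (like the state bucket) has to exist before any configuration that uses the backend can run, so these two resources are typically created by hand or in a separate bootstrap root module.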

&lt;p&gt;Awesome, now let’s start writing the Terraform code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 5: Writing Terraform Modules&lt;/strong&gt;&lt;br&gt;
5.1 Terraform Configuration&lt;br&gt;
Setting up your Terraform project with the required providers, including AWS and optional Kubernetes configurations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fFbBLtqt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3da2jsfjz0zeineb9oqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFbBLtqt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3da2jsfjz0zeineb9oqu.png" alt="Image description" width="800" height="420"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.16.2" # or a version constraint such as "~&amp;gt; 5.16"
    }

    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "&amp;gt;= 2.16.1"
    }
  }

  required_version = "&amp;gt;= 1.2.0"
}
provider "aws" {
  region = "us-east-1"
}
terraform {
  backend "s3" {
    bucket         = "s3-terraform-state-use1"
    key            = "terraform.tfstate"   # State file name
    region         = "us-east-1"           # Use your desired AWS region
    encrypt        = true                  # Optionally, enable server-side encryption
    dynamodb_table = "ddb-statelock-table" # Optional, use a DynamoDB table for state locking
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5.2 Creating a Resource Group&lt;br&gt;
Learn how to define a Terraform resource group to organize AWS resources based on tags.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_resourcegroups_group" "rg" {
  name = join("-", [var.resource_type, var.application, var.application_environment, var.region_short])

  resource_query {
    query = &amp;lt;&amp;lt;JSON
    {
      "ResourceTypeFilters": [
        "AWS::AllSupported"
      ],
      "TagFilters": [
        {
          "Key": "Environment",
          "Values": ["${var.workload_environments}"]
        },
        {
          "Key": "ApplicationEnvironment",
          "Values": ["${var.application_environment}"]          
        },
        {
          "Key": "OpsTeam",
          "Values": ["${var.ops_team}"]
        },
        {
          "Key": "Owner",
          "Values": ["${var.owner}"]
        },{
          "Key": "Criticality",
          "Values": ["${var.business_criticality}"]          
        },{
          "Key": "OpsCommitment",
          "Values": ["${var.ops_commitment}"]          
        },{
          "Key": "ApplicationName",
          "Values": ["${upper(var.application)}"]
        }
      ]
    }
  JSON
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5.3 Configuring S3 Buckets&lt;br&gt;
Create S3 buckets for your infrastructure components, making them public and configuring website settings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "s3" {
  bucket = join("-", [var.resource_type, var.application, var.application_environment, var.region_short]) # e.g. "s3-static-website-bucket"

  tags = {
    Environment            = var.workload_environments
    ApplicationEnvironment = var.application_environment
    OpsTeam                = var.ops_team
    Owner                  = var.owner
    Criticality            = var.business_criticality
    OpsCommitment          = var.ops_commitment
    ApplicationName        = upper(var.application)
  }
  depends_on = [ var.resource_group]
}

resource "aws_s3_bucket_website_configuration" "s3_website" {
  bucket = aws_s3_bucket.s3.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }

  routing_rule {
    condition {
      key_prefix_equals = "docs/"
    }
    redirect {
      replace_key_prefix_with = "documents/"
    }
  }

  depends_on = [ var.resource_group]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Section 6: Module Integration in Main Configuration&lt;/strong&gt;&lt;br&gt;
6.1 Integrating the Resource Group Module&lt;br&gt;
See how to call and configure the resource group module in your main configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "rg" {
  for_each                = var.application_environments
  source                  = "./modules/resource-group"
  resource_type           = "rg"
  application             = var.organization
  workload_environments   = var.workload_environments
  application_environment = each.value
  region                  = var.region
  region_short            = var.region_short
  ops_team                = var.ops_team
  owner                   = var.owner
  business_criticality    = var.business_criticality
  ops_commitment          = var.ops_commitment
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;6.2 Integrating the S3 Bucket Module&lt;br&gt;
Integrate the S3 bucket module into your main configuration and link it to your resource group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "s3" {
  for_each                = var.application_environments
  source                  = "./modules/bucket"
  resource_type           = "s3"
  application             = var.organization
  workload_environments   = var.workload_environments
  application_environment = each.value
  region                  = var.region
  region_short            = var.region_short
  ops_team                = var.ops_team
  owner                   = var.owner
  business_criticality    = var.business_criticality
  ops_commitment          = var.ops_commitment
  resource_group          = module.rg[each.value]
  resource_group_id       = module.rg[each.value].resource_group_id
  resource_group_arn      = module.rg[each.value].resource_group_arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
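The module calls above assume a set of root-level input variables. A minimal, illustrative variables.tf follows; the names match the module arguments, but the types and defaults are assumptions, not taken from the original project:

```hcl
variable "organization" {
  type = string
}

variable "application_environments" {
  type    = set(string) # for_each requires a set or map, not a list
  default = ["dev", "test", "prod"]
}

variable "workload_environments" {
  type = string
}

variable "region" {
  type    = string
  default = "us-east-1"
}

variable "region_short" {
  type    = string
  default = "use1"
}

# The tagging inputs (ops_team, owner, business_criticality,
# ops_commitment) follow the same single-string pattern.
```

Declaring application_environments as a set is what lets for_each fan out one resource group and one bucket per environment.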



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oKupMMxG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hea2tq5e0vj8r22ce49m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oKupMMxG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hea2tq5e0vj8r22ce49m.png" alt="Image description" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v7ypLqCl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n1hwusuzdtd7xbceo691.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v7ypLqCl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n1hwusuzdtd7xbceo691.png" alt="Image description" width="800" height="755"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rbFsDRzE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pt5a1am877b1t6an2yqo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rbFsDRzE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pt5a1am877b1t6an2yqo.png" alt="Image description" width="800" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JM-I1lDZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r0pe4i38mwupcsjlk29v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JM-I1lDZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r0pe4i38mwupcsjlk29v.png" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A80aJdOL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cesar1wzcwe554wte39v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A80aJdOL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cesar1wzcwe554wte39v.png" alt="Image description" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source code: &lt;a href="https://github.com/namphuongtran/aws-infrastructure"&gt;https://github.com/namphuongtran/aws-infrastructure&lt;/a&gt;&lt;br&gt;
By the end of this article, you'll have a firm grasp of setting up AWS and Terraform for your infrastructure needs. The included best practices will ensure you're working efficiently and effectively in your new AWS environment.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>infrastructureascode</category>
      <category>devops</category>
    </item>
    <item>
      <title>Provisioning Azure Databricks with Terraform</title>
      <dc:creator>Nam Phuong Tran</dc:creator>
      <pubDate>Tue, 12 Sep 2023 14:32:32 +0000</pubDate>
      <link>https://forem.com/agsouthernt/provisioning-azure-databricks-with-terraform-3bj9</link>
      <guid>https://forem.com/agsouthernt/provisioning-azure-databricks-with-terraform-3bj9</guid>
      <description>&lt;p&gt;Azure Databricks is a powerful analytics platform built on Apache Spark, tailor-made for Azure. It fosters collaboration between data engineers, data scientists, and machine learning experts, facilitating their work on large-scale data and advanced analytics projects. On the other hand, Terraform, an open-source Infrastructure as Code (IaC) tool developed by HashiCorp, empowers users to define and provision infrastructure resources using a declarative configuration language.&lt;br&gt;
In this guide, we'll delve into the seamless integration of these two technologies using the Databricks Terraform provider. This combination offers several compelling advantages and is the recommended approach for efficiently managing Databricks workspaces and their associated resources in Azure.&lt;/p&gt;

&lt;p&gt;Setting Up Your Terraform Environment&lt;br&gt;
Before we dive into the specifics, there are some prerequisites for successfully using Terraform and the Databricks Terraform provider:&lt;/p&gt;

&lt;p&gt;Azure Account: Ensure you have an Azure account.&lt;br&gt;
Azure Admin User: You need to be an account-level admin user in your Azure account.&lt;br&gt;
Development Machine Setup: On your local development machine, you should have the Terraform CLI and Azure CLI installed and configured. Make sure you are signed in via the az login command with a user that has Contributor or Owner rights to your subscription.&lt;/p&gt;

&lt;p&gt;Project Structure&lt;br&gt;
Organize your project into a folder for your Terraform scripts, let's call it "Terraform-Databricks." We will create several configuration files to handle authentication and resource provisioning.&lt;br&gt;
Version and Provider Configuration&lt;br&gt;
In your Terraform project, create a versions.tf file to specify the Terraform version and required providers:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

terraform {
  required_version = "&amp;gt;= 1.2, &amp;lt; 1.5"
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
    databricks = {
      source  = "databricks/databricks"
      version = "1.24.1"
    }
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let's define the providers in a providers.tf file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

provider "azurerm" {
  features {}
}

# Use Azure CLI authentication.
provider "databricks" {
  # We'll revisit this section later
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We'll return to the Databricks provider configuration shortly.&lt;br&gt;
Backend Configuration&lt;br&gt;
To store the Terraform state, create a backend.tf file. In this example, we're using an Azure Storage account to store the state:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

terraform {
  backend "azurerm" {
    resource_group_name  = "rg-terraform-non-prod-weu"
    storage_account_name = "stteranonprodweu"
    container_name       = "terraform-databricks"
    key                  = "terraform.tfstate"

    # rather than defining this inline, the Access Key can also be sourced
    # from an Environment Variable - more information is available below.
    access_key = "your_key_storage"
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With these initial configurations in place, we can proceed to set up Terraform for Azure.&lt;/p&gt;

&lt;p&gt;Let’s get started writing the Terraform script to build the Azure Databricks infrastructure.&lt;br&gt;
Step 1: Retrieve the Current Client Configuration and User in Azure&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

data "azurerm_client_config" "current" {
}

data "databricks_current_user" "me" {  
   depends_on = [azurerm_databricks_workspace.dbw]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Step 2: Create a resource group&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "azurerm_resource_group" "rg" {
  name     = "rg-analytics-test-weu"
  location = "westeurope"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Step 3: Create a Virtual Network (Vnet) - Optional, but Important&lt;br&gt;
Whether to create a Vnet depends on your specific use case. If you're exclusively using Azure Databricks and don't require outbound access or a high level of security, you can skip this step. However, if you need to interact with services outside of Azure, it's advisable to create a Vnet.&lt;br&gt;
Consider the scenario where you want Azure Databricks to access MongoDB Atlas, which resides outside of Azure. MongoDB Atlas secures its infrastructure by allowing specific IPs in a whitelist. However, exposing Azure Databricks to the internet isn't an ideal solution. Instead, you can create a Vnet and set up peering or a private endpoint.&lt;br&gt;
It's essential to note that you can't add a Vnet to an existing workspace. Once a workspace is created, its configurations are registered in the Control Plane and can't be modified.&lt;/p&gt;

&lt;p&gt;Here, we'll create a Vnet:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "azurerm_virtual_network" "vnet" {
  name                = "vnet-analytics-test-weu"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = [var.cidr]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Within this Vnet, we'll create two subnets: a public subnet and a private subnet. We'll also implement a network security group (NSG) to manage security for the Vnet.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "azurerm_network_security_group" "nsg" {
  name                = "nsg-analytics-test-weu"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_subnet" "public" {
  name                 = "subnet-public"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = [cidrsubnet(var.cidr, 3, 0)]

  delegation {
    name = "databricks"
    service_delegation {
      name = "Microsoft.Databricks/workspaces"
      actions = [
        "Microsoft.Network/virtualNetworks/subnets/join/action",
        "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action",
      "Microsoft.Network/virtualNetworks/subnets/unprepareNetworkPolicies/action"]
    }
  }
}

resource "azurerm_subnet_network_security_group_association" "public" {
  subnet_id                 = azurerm_subnet.public.id
  network_security_group_id = azurerm_network_security_group.nsg.id
}

resource "azurerm_subnet" "private" {
  name                 = "subnet-private"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = [cidrsubnet(var.cidr, 3, 1)]

  delegation {
    name = "databricks"
    service_delegation {
      name = "Microsoft.Databricks/workspaces"
      actions = [
        "Microsoft.Network/virtualNetworks/subnets/join/action",
        "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action",
      "Microsoft.Network/virtualNetworks/subnets/unprepareNetworkPolicies/action"]
    }
  }
}

resource "azurerm_subnet_network_security_group_association" "private" {
  subnet_id                 = azurerm_subnet.private.id
  network_security_group_id = azurerm_network_security_group.nsg.id
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
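The address_prefixes above rely on Terraform's cidrsubnet() function, which carves a larger CIDR block into smaller, equal-sized ones. Its arithmetic can be reproduced with Python's ipaddress module; here we assume var.cidr is 10.0.0.0/16 (the actual value is defined elsewhere in the project):

```python
import ipaddress

def cidrsubnet(prefix: str, newbits: int, netnum: int) -> str:
    """Mimic Terraform's cidrsubnet(): split `prefix` into 2**newbits
    equal subnets and return the one at index `netnum`."""
    network = ipaddress.ip_network(prefix)
    return str(list(network.subnets(prefixlen_diff=newbits))[netnum])

print(cidrsubnet("10.0.0.0/16", 3, 0))  # public subnet:  10.0.0.0/19
print(cidrsubnet("10.0.0.0/16", 3, 1))  # private subnet: 10.0.32.0/19
```

With newbits = 3 the /16 is split into eight /19 blocks, so the public and private subnets get non-overlapping ranges from the same Vnet address space.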

&lt;p&gt;Azure Databricks Workspace Setup&lt;br&gt;
Now, let's focus on creating an Azure Databricks workspace. This workspace will be our central hub for running analytics tasks:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "azurerm_databricks_workspace" "dbw" {
  name                          = "dbw-analytics-test-weu"
  resource_group_name           = azurerm_resource_group.rg.name
  location                      = azurerm_resource_group.rg.location
  sku                           = "premium"
  managed_resource_group_name   = "rg-databricks-managed-weu"
  public_network_access_enabled = var.public_network_access_enabled
  custom_parameters {
    no_public_ip                                         = var.no_public_ip
    virtual_network_id                                   = azurerm_virtual_network.vnet.id
    private_subnet_name                                  = azurerm_subnet.private.name
    public_subnet_name                                   = azurerm_subnet.public.name
    public_subnet_network_security_group_association_id  = azurerm_subnet_network_security_group_association.public.id
    private_subnet_network_security_group_association_id = azurerm_subnet_network_security_group_association.private.id
  }
  depends_on = [
    azurerm_subnet_network_security_group_association.public,
    azurerm_subnet_network_security_group_association.private
  ]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
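
&lt;p&gt;To make the workspace easy to reach after provisioning, you can optionally expose its URL as a Terraform output. A small sketch (the output name here is my own choice):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

output "databricks_workspace_url" {
  description = "Per-workspace URL of the Azure Databricks workspace"
  value       = azurerm_databricks_workspace.dbw.workspace_url
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;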

&lt;p&gt;Notice the two subnet NSG associations listed in depends_on. We need them; otherwise, terraform destroy does not tear things down in the correct order.&lt;br&gt;
The workspace definition includes crucial parameters such as the SKU, location, and network configuration, ensuring the workspace is integrated into your Azure environment.&lt;br&gt;
Next, we are going to create a Databricks cluster. Before creating the cluster, we have to define the node type, Spark version, and instance pool.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

data "databricks_node_type" "dbr_node_type" {
  local_disk = true
  depends_on = [azurerm_databricks_workspace.dbw]
}

data "databricks_spark_version" "dbr_spark" {
  long_term_support = true
  depends_on        = [azurerm_databricks_workspace.dbw]
}

resource "databricks_instance_pool" "dbr_instance_pool" {
  instance_pool_name = "pool-analytics-test-weu"
  min_idle_instances = 0
  max_capacity       = 10
  node_type_id       = data.databricks_node_type.dbr_node_type.id

  idle_instance_autotermination_minutes = 10

  azure_attributes {
    availability       = "ON_DEMAND_AZURE"
    spot_bid_max_price = -1
  }

  disk_spec {
    disk_type {
      azure_disk_volume_type = "PREMIUM_LRS"
    }
    disk_size  = 80
    disk_count = 1
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Azure Databricks Cluster Creation&lt;br&gt;
Next, we'll create an Azure Databricks cluster, which will serve as the computational engine for our analytics tasks:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "databricks_cluster" "cluster" {

  cluster_name            = "dbc-analytics-test-weu"
  spark_version           = data.databricks_spark_version.dbr_spark.id
  node_type_id            = data.databricks_node_type.dbr_node_type.id
  autotermination_minutes = 20
  autoscale {
    min_workers = 1
    max_workers = 50
  }
  spark_conf = {
    # Enable the Databricks disk (IO) cache; note the key ends in "enabled".
    "spark.databricks.io.cache.enabled" : true
  }
  depends_on       = [azurerm_databricks_workspace.dbw]
  # instance_pool_id = databricks_instance_pool.dbr_instance_pool.id
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We have also declared that the cluster depends on the Databricks workspace. We can set either the node type ID or an instance pool ID.&lt;br&gt;
To reduce cluster start time, you can attach a cluster to a predefined pool of idle instances. When attached to a pool, a cluster allocates its driver and worker nodes from the pool. If the pool does not have sufficient idle resources to accommodate the cluster’s request, it expands by allocating new instances from the instance provider. When an attached cluster changes its state to TERMINATED, the instances it used are returned to the pool and can be reused by a different cluster.&lt;br&gt;
In this article, I use the node type ID instead of an instance pool ID.&lt;br&gt;
Beyond the cluster itself, we can also define additional resources such as jobs and notebooks:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Declare the current-user data source referenced by the notebook and job below.
data "databricks_current_user" "me" {}

resource "databricks_notebook" "nb" {
  path     = "${data.databricks_current_user.me.home}/${var.notebook_subdirectory}/${var.notebook_filename}"
  language = var.notebook_language
  source   = "./notebooks/${var.notebook_filename}"
}

resource "databricks_job" "job" {
  name = var.job_name
  existing_cluster_id = databricks_cluster.cluster.cluster_id
  notebook_task {
    notebook_path = databricks_notebook.nb.path
  }
  email_notifications {
    on_success = [ data.databricks_current_user.me.user_name ]
    on_failure = [ data.databricks_current_user.me.user_name ]
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With that, the setup is nearly complete.&lt;br&gt;
The cluster configuration includes details such as its name, Spark version, and autoscaling properties.&lt;/p&gt;
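
&lt;p&gt;If you prefer the instance pool approach described earlier, the cluster references the pool instead of a fixed node type. A sketch (the resource name is illustrative; set either node_type_id or instance_pool_id, not both):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "databricks_cluster" "pooled_cluster" {
  cluster_name            = "dbc-analytics-pooled-test-weu"
  spark_version           = data.databricks_spark_version.dbr_spark.id
  # Driver and worker nodes are allocated from the pre-warmed pool.
  instance_pool_id        = databricks_instance_pool.dbr_instance_pool.id
  autotermination_minutes = 20
  autoscale {
    min_workers = 1
    max_workers = 10
  }
  depends_on = [azurerm_databricks_workspace.dbw]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;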

&lt;p&gt;Databricks Provider Update&lt;br&gt;
We need to revisit the Databricks provider configuration and update it with the necessary authentication information:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

provider "databricks" {
  azure_workspace_resource_id = azurerm_databricks_workspace.dbw.id
  host                        = azurerm_databricks_workspace.dbw.workspace_url
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
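
&lt;p&gt;For terraform init to download both plugins, the configuration must declare its required providers. A typical declaration (the version constraints here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~&amp;gt; 3.0"
    }
    databricks = {
      source  = "databricks/databricks"
      version = "~&amp;gt; 1.0"
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;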

&lt;p&gt;This lets the provider authenticate against the Azure Databricks workspace.&lt;br&gt;
Initialize the working directory containing the *.tf files by running the terraform init command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

terraform init


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Terraform downloads the specified providers and installs them in a hidden subdirectory of your current working directory, named .terraform. The terraform init command prints out which version of the providers were installed. Terraform also creates a lock file named .terraform.lock.hcl which specifies the exact provider versions used, so that you can control when you want to update the providers used for your project.&lt;br&gt;
Check whether your project was configured correctly by running the terraform plan command. If there are any errors, fix them, and run the command again.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

terraform plan


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If everything looks good, apply the changes to your Azure environment. Apply the changes required to reach the desired state of the configuration by running the terraform apply command.  &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

terraform apply -auto-approve


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxujte8492kwqf98inb2v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxujte8492kwqf98inb2v.png" alt="Terraform apply running"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyg1fvfpr0rjw6hnwnkxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyg1fvfpr0rjw6hnwnkxw.png" alt="Terraform apply results"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F425w7x2wr1wx6ddibpwe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F425w7x2wr1wx6ddibpwe.png" alt="Azure portal"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpnmjnsd5qjvs6ijtvb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpnmjnsd5qjvs6ijtvb7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7oufx4ccbiocq1hvje0x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7oufx4ccbiocq1hvje0x.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbukw83d3hpwx6gcfiksd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbukw83d3hpwx6gcfiksd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0layok738eu5ii1ogg07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0layok738eu5ii1ogg07.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9t2rw13zmwi2a449cct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9t2rw13zmwi2a449cct.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hbtj563j7mq7djtywws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hbtj563j7mq7djtywws.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This guide covers the fundamental setup and provisioning steps for Azure Databricks using Terraform. In upcoming articles, we'll explore more advanced configurations and automation options to help you harness the full potential of this powerful analytics platform.&lt;br&gt;
Stay tuned for more in-depth insights into managing Databricks workspaces and resources with Terraform!&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>azure</category>
      <category>databrick</category>
    </item>
    <item>
      <title>How to Create a Beautiful PowerShell Prompt with Oh My Posh and Windows Terminal</title>
      <dc:creator>Nam Phuong Tran</dc:creator>
      <pubDate>Mon, 04 Sep 2023 13:25:55 +0000</pubDate>
      <link>https://forem.com/agsouthernt/how-to-create-a-beautiful-powershell-prompt-with-oh-my-posh-and-windows-terminal-aga</link>
      <guid>https://forem.com/agsouthernt/how-to-create-a-beautiful-powershell-prompt-with-oh-my-posh-and-windows-terminal-aga</guid>
      <description>&lt;p&gt;As a developer working with tools like PowerShell, Kubernetes, Terraform, and others, having a well-customized and visually appealing terminal can significantly enhance your workflow. In this guide, we'll walk through the steps to create a stunning PowerShell prompt using Oh My Posh and Windows Terminal.&lt;/p&gt;

&lt;p&gt;Prerequisites&lt;br&gt;
Before we start, make sure you have the following prerequisites in place:&lt;/p&gt;

&lt;p&gt;PowerShell Core: If you haven't already installed PowerShell Core, you can find installation instructions here.&lt;/p&gt;

&lt;p&gt;Windows Terminal: You can install Windows Terminal by following the guide here.&lt;/p&gt;
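
&lt;p&gt;If you prefer the command line, both prerequisites can also be installed with winget (package IDs as published in the winget repository):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;winget install --id Microsoft.PowerShell -s winget
winget install --id Microsoft.WindowsTerminal -s winget

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;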

&lt;p&gt;After installing you can see the default UI as below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F67gx-6o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iyyse92nitqanxrnea8u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F67gx-6o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iyyse92nitqanxrnea8u.png" alt="Image description" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Oh My Posh: Install Oh My Posh by running the following command in PowerShell or Terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;winget install JanDeDobbeleer.OhMyPosh -s winget

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a PowerShell Folder: Create a folder for PowerShell scripts on your C drive, for example, C:\Users\&amp;lt;your-username&amp;gt;\Documents\PowerShell. Inside this folder, create a file named profile.ps1.&lt;br&gt;
Terminal Icons: Install the Terminal-Icons module by running the following command in Windows Terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Install-Module -Name Terminal-Icons -Repository PSGallery

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fonts: Download and install the Caskaydia Cove Nerd Font Complete from here. Install this font in C:\Windows\Fonts.&lt;br&gt;
Configuration Steps&lt;br&gt;
Open the profile.ps1 file you created in the PowerShell folder (C:\Users\&amp;lt;your-username&amp;gt;\Documents\PowerShell\profile.ps1).&lt;/p&gt;

&lt;p&gt;Add the following lines to load the modules and configure your prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$pwd = pwd
write-host "Using profile in '$pwd\Documents\PowerShell\profile.ps1':"
Import-Module oh-my-posh
Import-Module -Name Terminal-Icons

Set-PoshPrompt -Theme 'C:\Users\&amp;lt;your-username&amp;gt;\Documents\PowerShell\ohmyposhv3-v2.json'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
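
&lt;p&gt;After saving profile.ps1, reload it in the current session (or simply open a new terminal tab) so the changes take effect:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;. $PROFILE

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;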



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3hkd_Wcw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/83ytz927ldvask091ohi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3hkd_Wcw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/83ytz927ldvask091ohi.png" alt="Image description" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that the Terminal UI will look like this&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zQJnntMn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ij8ttbooz42rceecwmkd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zQJnntMn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ij8ttbooz42rceecwmkd.png" alt="Image description" width="705" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Download the ohmyposhv3-v2.json theme file from Scott Hanselman's GitHub. Save it in your PowerShell folder (C:\Users\&amp;lt;your-username&amp;gt;\Documents\PowerShell).&lt;br&gt;
Open Windows Terminal and go to Settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nv9Shsei--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5whiz1iz473uk6wungvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nv9Shsei--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5whiz1iz473uk6wungvq.png" alt="Image description" width="707" height="402"&gt;&lt;/a&gt;&lt;br&gt;
Set up PowerShell as the default shell for Windows Terminal.&lt;br&gt;
In the Windows Terminal settings, navigate to Profiles &amp;gt; Defaults &amp;gt; Appearance and change the font face to CaskaydiaCove NFM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z2JsYHMO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mxqcgag20vpvog1xw4ik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z2JsYHMO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mxqcgag20vpvog1xw4ik.png" alt="Image description" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ssGq6pu4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ymk3x808axx7hzom756v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ssGq6pu4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ymk3x808axx7hzom756v.png" alt="Image description" width="800" height="434"&gt;&lt;/a&gt;&lt;br&gt;
Save your settings.&lt;br&gt;
Now, you have a beautifully customized PowerShell prompt with Oh My Posh and Windows Terminal. Enjoy your new, stylish, and efficient development environment!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Yv1T2c-g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iat9batibo4dr7ti6fw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Yv1T2c-g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iat9batibo4dr7ti6fw6.png" alt="Image description" width="659" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>showdev</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Creating a Private Endpoint for Azure Storage Account using Terraform</title>
      <dc:creator>Nam Phuong Tran</dc:creator>
      <pubDate>Fri, 01 Sep 2023 13:33:17 +0000</pubDate>
      <link>https://forem.com/agsouthernt/creating-a-private-endpoint-for-azure-storage-account-using-terraform-1f3g</link>
      <guid>https://forem.com/agsouthernt/creating-a-private-endpoint-for-azure-storage-account-using-terraform-1f3g</guid>
      <description>&lt;p&gt;Enhancing the security of our infrastructure is paramount. Today, I'll guide you through the process of setting up a private endpoint for Azure Storage Account using Terraform, step by step, leveraging distinct services.&lt;br&gt;
One of the best architectures is highly recommended from MS official as below&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fua1dlkhymotzrgovsryt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fua1dlkhymotzrgovsryt.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Diving deeper into the details, we can see how it works behind the scenes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqna4z2lb9glyo5hd9uua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqna4z2lb9glyo5hd9uua.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown, almost all connections to Azure services such as Storage Accounts, Container Registries, and Key Vaults should go through a private endpoint for better security.&lt;br&gt;
Today we will focus on the Azure Storage Account. After we create a private endpoint for the Storage Account, it looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6qtsuvrpuybmacuq4dc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6qtsuvrpuybmacuq4dc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s start&lt;br&gt;
Step 1: Create a Resource Group&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "azurerm_resource_group" "rg" {
  name     = "rg-sd2488-non-prod-weu-infra"
  location = "westeurope"
  tags = {
    Owner = "sd2488"
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Step 2: Create a Storage Account&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "azurerm_storage_account" "st" {
  name                     = "stsd2488nonprodweu"
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  min_tls_version          = "TLS1_2"
  network_rules {
    default_action = "Deny"
    ip_rules       = []
  }

  tags = {
    Owner="sd2488"
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Step 3: Create a Virtual Network with Subnets&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "azurerm_virtual_network" "vnet" {
  name                = "vnet-sd2488-non-prod-weu"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = ["10.1.0.0/16"]  

  tags = {
    Owner="sd2488"
  }
}


resource "azurerm_subnet" "snet_endpoint" {
  name                 = "PrivateSubnet"
  virtual_network_name = azurerm_virtual_network.vnet.name
  resource_group_name  = azurerm_resource_group.rg.name
  address_prefixes     = ["10.1.0.0/24"]
}

resource "azurerm_subnet" "snet_bas" {
  name                 = "AzureBastionSubnet"
  virtual_network_name = azurerm_virtual_network.vnet.name
  resource_group_name  = azurerm_resource_group.rg.name
  address_prefixes     = ["10.1.1.0/24"]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Virtual Network comprises two subnets:&lt;br&gt;
The PrivateSubnet: This is intended for creating the private endpoint. It also includes a Virtual Machine for testing purposes.&lt;br&gt;
The AzureBastionSubnet: This subnet is designated for the Azure Bastion service, which enables secure connections to Virtual Machines.&lt;br&gt;
Step 4: Set Up Public IP Address and Azure Bastion&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "azurerm_public_ip" "pip" {
  name                = "pip-sd2488-non-prod-weu"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_bastion_host" "bas" {
  name                = "bas-sd2488-non-prod-weu"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                 = "bas-configuration"
    subnet_id            = azurerm_subnet.snet_bas.id
    public_ip_address_id = azurerm_public_ip.pip.id
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create a Public IP address and configure Azure Bastion.&lt;br&gt;
Step 5: Establish a Private Endpoint&lt;br&gt;
Before creating the private endpoint, generate a Private DNS Zone. Link the DNS Zone with the Virtual Network (VNet) and define the A record within the DNS Zone.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "azurerm_private_dns_zone" "pdns_st" {
  name                = "privatelink.blob.core.windows.net"
  resource_group_name = azurerm_resource_group.rg.name
}
# Then, create the private endpoint:

resource "azurerm_private_endpoint" "pep_st" {
  name                = "pep-sd2488-st-non-prod-weu"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  subnet_id           = azurerm_subnet.snet_endpoint.id

  private_service_connection {
    name                           = "sc-sta"
    private_connection_resource_id = azurerm_storage_account.st.id
    subresource_names              = ["blob"]
    is_manual_connection           = false
  }

  private_dns_zone_group {
    name                 = "dns-group-sta"
    private_dns_zone_ids = [azurerm_private_dns_zone.pdns_st.id]
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this step, we link the DNS Zone to the VNet and define an A record in the DNS Zone:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "azurerm_private_dns_zone_virtual_network_link" "dns_vnet_lnk_sta" {
  name                  = "lnk-dns-vnet-sta"
  resource_group_name   = azurerm_resource_group.rg.name
  private_dns_zone_name = azurerm_private_dns_zone.pdns_st.name
  virtual_network_id    = azurerm_virtual_network.vnet.id
}

resource "azurerm_private_dns_a_record" "dns_a_sta" {
  # The record name must match the storage account name so that
  # stsd2488nonprodweu.privatelink.blob.core.windows.net resolves
  # to the endpoint's private IP.
  name                = azurerm_storage_account.st.name
  zone_name           = azurerm_private_dns_zone.pdns_st.name
  resource_group_name = azurerm_resource_group.rg.name
  ttl                 = 300
  records             = [azurerm_private_endpoint.pep_st.private_service_connection.0.private_ip_address]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Almost everything is done. Now we will create a Virtual Machine and verify the setup.&lt;br&gt;
Step 6: Create a Virtual Machine&lt;br&gt;
Before creating the Virtual Machine, we need to create a Network Interface and a Network Security Group, and associate the two with each other.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "azurerm_network_security_group" "nsg" {
  name                = "nsg-sd2488-non-prod-weu"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  tags = {
    Owner="sd2488"
  }
}

resource "azurerm_network_interface" "nic" {
  name                = "nic-sd2488-non-prod-weu"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "nic-configuration"
    subnet_id                     = azurerm_subnet.snet_endpoint.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_network_interface_security_group_association" "nsgnic" {
  network_interface_id      = azurerm_network_interface.nic.id
  network_security_group_id = azurerm_network_security_group.nsg.id
}

resource "azurerm_windows_virtual_machine" "vm" {
  name                = "vm-sd2488-non"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  size                = "Standard_F2"
  admin_username      = "adminuser"
  # Demo value only; in practice, supply the password via a sensitive
  # variable or pull it from Azure Key Vault.
  admin_password      = "P@$$w0rd1234!"
  network_interface_ids = [
    azurerm_network_interface.nic.id,
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2019-Datacenter"
    version   = "latest"
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Creating the Virtual Machine involves setting up a Network Interface and a Network Security Group and associating the two.&lt;br&gt;
Execute the Terraform configuration with the terraform apply command.&lt;br&gt;
After the command completes successfully, the resources appear in the Azure portal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xk9ak4io9so7nmujvdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xk9ak4io9so7nmujvdu.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftea0d51l6k49b573m7mp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftea0d51l6k49b573m7mp.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 7: Now, we will verify the result.&lt;br&gt;
Step 7.1: Accessing through Storage Explorer&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08nfmgwqmuyh3vuuaaaf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08nfmgwqmuyh3vuuaaaf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the result shows, attempting to access the Storage Account directly through Storage Explorer fails: direct public access is prohibited.&lt;br&gt;
Step 7.2: Next, we will use a virtual machine in the same virtual network, connecting to it remotely with Azure Bastion&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ri66x6shmrwdkyahlfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ri66x6shmrwdkyahlfw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to remotely access the Azure Virtual Machine using Azure Bastion: open the VM in the Azure Portal and select the Bastion connection option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh25agizj9yhtbek5vrom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh25agizj9yhtbek5vrom.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2to2c766fttd1cd9ymi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2to2c766fttd1cd9ymi.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the VM, install Azure Storage Explorer, retrieve the Access Key from the Storage Account, and establish a connection through Storage Explorer. As an additional validation, create a blob container.&lt;/p&gt;
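
&lt;p&gt;As an alternative to copying the key from the portal, the Access Key can also be retrieved with the Azure CLI from inside the VM (the account and resource group names below are placeholders for your own):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Retrieve the first access key of the Storage Account;
# "mystorageacct" and "my-rg" are placeholder names.
az storage account keys list \
  --account-name mystorageacct \
  --resource-group my-rg \
  --query "[0].value" \
  --output tsv
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;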

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvft0accu03lc9hjm4ams.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvft0accu03lc9hjm4ams.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 7.3: Using nslookup for FQDN Verification&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlin1k2xujlgycczof70.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlin1k2xujlgycczof70.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To confirm the setup, we can run the nslookup command against the FQDN (fully qualified domain name) generated after creating the DNS A record. This step ensures that the DNS record resolves to the expected private IP address.&lt;br&gt;
By following these steps, we create a private endpoint for an Azure Storage Account, significantly bolstering the security of our infrastructure.&lt;/p&gt;
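
&lt;p&gt;As an illustrative check (the Storage Account name "mystorageacct" is a placeholder), the lookup from inside the VM could look like:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run inside the VM; "mystorageacct" is a placeholder name.
nslookup mystorageacct.blob.core.windows.net

# With the private endpoint and the A record in place, the name
# should resolve through the privatelink zone to the endpoint's
# private IP address rather than to a public address.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;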

</description>
      <category>terraform</category>
      <category>azure</category>
      <category>devops</category>
      <category>security</category>
    </item>
  </channel>
</rss>
