<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Zakaria EL BAZI</title>
    <description>The latest articles on Forem by Zakaria EL BAZI (@z4ck404).</description>
    <link>https://forem.com/z4ck404</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F200343%2F0cc3d5d3-6101-4670-ba8f-d29792dfce43.png</url>
      <title>Forem: Zakaria EL BAZI</title>
      <link>https://forem.com/z4ck404</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/z4ck404"/>
    <language>en</language>
    <item>
      <title>AWS multi-region VPC peering using Terraform</title>
      <dc:creator>Zakaria EL BAZI</dc:creator>
      <pubDate>Sat, 12 Nov 2022 14:23:06 +0000</pubDate>
      <link>https://forem.com/z4ck404/aws-multi-region-vpc-peering-using-terraform-47jl</link>
      <guid>https://forem.com/z4ck404/aws-multi-region-vpc-peering-using-terraform-47jl</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;AWS multi-region VPC peering using Terraform&lt;br&gt;
How to securely connect two VPCs from different regions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;VPC peering is a networking connection between two VPCs that enables traffic routing between them using private IPv4 and/or IPv6 addresses.&lt;br&gt;
As the AWS documentation puts it, it is a way to privately connect two VPCs without exposing them to the internet, so that resources in either VPC can communicate with each other as if they were within the same network.&lt;br&gt;
Check this very detailed article from Ashish Patel for more information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftal8457rbd3ajfdmhhqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftal8457rbd3ajfdmhhqx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;NB : The two VPCs should not have matching or overlapping CIDR blocks.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Steps&lt;/h2&gt;

&lt;p&gt;1/ Create a peering connection using an aws_vpc_peering_connection resource in one of the VPCs. This VPC will be the 'requester' of the peering connection: the one that requests access to the other VPC's resources.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_vpc_peering_connection" "this" {
  vpc_id      = var.requester_vpc_id # VPC initiating the request
  peer_vpc_id = var.accepter_vpc_id  # VPC receiving the request
  peer_region = var.accepter_region
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;2/ Accept the peering connection in the other VPC using an aws_vpc_peering_connection_accepter resource. (When peering cross-account or cross-region, the other VPC is the 'accepter' side and needs to accept the incoming peering request to allow access to its resources.)&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_vpc_peering_connection_accepter" "this" {
  provider                  = aws.accepter
  vpc_peering_connection_id = aws_vpc_peering_connection.this.id
  auto_accept               = true
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
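&lt;p&gt;The accepter block above references a provider alias (aws.accepter) that has to be declared somewhere in the configuration. A minimal sketch, where the two region values are placeholders and not from the original setup:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Default provider: the requester's region (placeholder value).
provider "aws" {
  region = "us-east-1"
}

# Aliased provider: the accepter's region (placeholder value).
provider "aws" {
  alias  = "accepter"
  region = "eu-west-1"
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;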
&lt;p&gt;3/ Create the necessary aws_route entries in the route tables of both VPCs so each side knows where to forward traffic destined for the other VPC. This is also why peering requires distinct, non-overlapping CIDR blocks: the routes are keyed on the peer's CIDR.&lt;/p&gt;
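&lt;p&gt;The routes from step 3 can be sketched like this (the variable names for the route table IDs and CIDR blocks are illustrative placeholders, not from the original gist):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Send traffic destined for the accepter's CIDR through the peering connection.
resource "aws_route" "requester_to_accepter" {
  route_table_id            = var.requester_route_table_id
  destination_cidr_block    = var.accepter_vpc_cidr
  vpc_peering_connection_id = aws_vpc_peering_connection.this.id
}

# And the reverse route on the accepter's side.
resource "aws_route" "accepter_to_requester" {
  provider                  = aws.accepter
  route_table_id            = var.accepter_route_table_id
  destination_cidr_block    = var.requester_vpc_cidr
  vpc_peering_connection_id = aws_vpc_peering_connection.this.id
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;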

&lt;p&gt;A complete example with all the necessary resources is available &lt;a href="https://gist.github.com/Z4ck404/b08f72fb7bdcbc47d7beaa1a70ffd229#file-main-tf" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;And that's it 👋&lt;/p&gt;

</description>
      <category>aws</category>
      <category>vpc</category>
      <category>terraform</category>
      <category>cloud</category>
    </item>
    <item>
      <title>All you need to know about Terraform provisioners and why you should avoid them.</title>
      <dc:creator>Zakaria EL BAZI</dc:creator>
      <pubDate>Sat, 05 Mar 2022 04:26:18 +0000</pubDate>
      <link>https://forem.com/z4ck404/all-you-need-to-know-about-terraform-provisioners-and-why-you-should-avoid-them-236a</link>
      <guid>https://forem.com/z4ck404/all-you-need-to-know-about-terraform-provisioners-and-why-you-should-avoid-them-236a</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn52itxilv63msr20txkd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn52itxilv63msr20txkd.png" alt="Image description"&gt;&lt;/a&gt;As defined in the Terraform documentation, provisioners can be used to model specific actions on the local machine running the Terraform Core or on a remote machine to prepare servers or other infrastructure objects. But HashiCorp clearly states in its documentation that they should be used as the last solution ! which I will explain in this article.&lt;/p&gt;

&lt;p&gt;Provisioners are the feature to reach for when what you need is not directly addressed by Terraform. You can copy data to newly created resources, run scripts, or perform specific tasks like installing or upgrading packages.&lt;/p&gt;

&lt;p&gt;There are three types of provisioners:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;File Provisioner:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Used to copy files or directories to newly created resources; under the hood it uses SSH or WinRM, doing the job of scp.&lt;/p&gt;

&lt;p&gt;In this &lt;a href="https://awstip.com/i-deployed-my-static-website-with-kubernetes-on-azure-using-terraform-because-why-not-2cdfe8807ca4" rel="noopener noreferrer"&gt;article&lt;/a&gt; I used this provisioner inside a null_resource to copy my Kubernetes configuration files to a newly created VM where I installed minikube. You can define it inside the VM resource as well, but I prefer to keep provisioners in a separate module since they shouldn't be mixed with the resource objects. This is how I did it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "null_resource" "configure-vm" {

  connection {
      type = "ssh"
      user = var.username
      host = var.ip_address
      private_key = var.tls_private_key
    }

  ## Copy files to VM :
  provisioner "file" {
    source = "/Users/zakariaelbazi/Documents/GitHub/zackk8s/kubernetes"
    destination = "/home/${var.username}"
  }

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note that you need to provide the SSH connection details, since the provisioner uses SSH behind the scenes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;remote-exec Provisioner:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This provisioner invokes a script on the newly created resource. It's similar to connecting to the resource and running a shell script or a command in the terminal.&lt;/p&gt;

&lt;p&gt;It can be used inside the Terraform resource object, in which case it is invoked once the resource is created, or inside a null_resource, which is my preferred approach as it separates this non-Terraform behavior from the real Terraform behavior.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "null_resource" "configure-vm" {

  connection {
      type = "ssh"
      user = var.username
      host = var.ip_address
      private_key = var.tls_private_key
    }

  ## Copy files to VM :
  provisioner "file" {
    source = "/Users/zakariaelbazi/Documents/GitHub/zackk8s/kubernetes" #TODO move to variables.
    destination = "/home/${var.username}"
  }

  ## install &amp;amp; start minikube
  provisioner "remote-exec" {
    inline = [
      "sudo chmod +x /home/${var.username}/kubernetes/install_minikube.sh",
      "sh /home/${var.username}/kubernetes/install_minikube.sh",
      "./minikube start --driver=docker"
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note that you cannot pass arguments to the script or command, so the best approach is to use the file provisioner to copy the scripts to the resource and then invoke them with the remote-exec provisioner, as I did above for the script that installs minikube on the Azure VM.&lt;/p&gt;

&lt;p&gt;Another thing to pay attention to: by default, a provisioner that fails also causes the Terraform apply to fail. To avoid that, the on_failure argument can be used.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "null_resource" "configure-vm" {
    .........
    ..........

  provisioner "remote-exec" {
    inline = [
      "sudo chmod +x /home/${var.username}/kubernetes/install_minikube.sh",
      "sh /home/${var.username}/kubernetes/install_minikube.sh",
      "./minikube start --driver=docker"
    ]
    on_failure = continue #or fail

  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, inline is a series of commands joined into a single script, so in practice the failure that on_failure reacts to is the exit status of the final command in the list.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;local-exec Provisioner:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Technically this one is very similar to the previous one in behavior and use, but it runs on the local machine running Terraform. It invokes a script or a command locally once the resource it's declared in is created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## from HashiCorp docs
resource "null_resource" "example1" {  
  provisioner "local-exec" {    
    command = "open WFH, '&amp;gt;completed.txt' and print WFH scalar localtime"    
    interpreter = ["perl", "-e"]  
    }
 }

 resource "null_resource" "example2" {
  provisioner "local-exec" {
    command = "Get-Date &amp;gt; completed.txt"
    interpreter = ["PowerShell", "-Command"]
  }
}

resource "aws_instance" "web" {
  # ...

  provisioner "local-exec" {
    command = "echo $FOO $BAR $BAZ &amp;gt;&amp;gt; env_vars.txt"

    environment = {
      FOO = "bar"
      BAR = 1
      BAZ = "true"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s the only provisioner that doesn’t need any SSH or WinRM connection details, as it runs locally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why should you avoid provisioners or use them only as a last resort?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, Terraform cannot model the actions of provisioners as part of a plan, since they can in principle take any action (the set of possible commands is limitless). This means nothing about what the provisioners do is captured in your tfstate, even when you put them in null resources like I did.&lt;/p&gt;

&lt;p&gt;Second, as I mentioned above, the file and remote-exec provisioners require connection credentials to function, which adds unnecessary complexity to the Terraform configuration (mixing day 1 and day 2 tasks).&lt;/p&gt;

&lt;p&gt;So, as HashiCorp recommends in the docs, try other techniques first and use provisioners only when there is no other option. You should still know how they work, though, especially if you are planning to take the Terraform certification exam.&lt;/p&gt;
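&lt;p&gt;As one example of such a technique: a bootstrap like the minikube install above can often be handled by cloud-init user data instead of remote-exec, keeping SSH credentials out of the configuration entirely. A minimal sketch, where the AMI ID and the script contents are placeholders, not from the original article:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "web" {
  ami           = "ami-12345678" # placeholder AMI ID
  instance_type = "t3.micro"

  # Runs at first boot via cloud-init: no connection block needed,
  # and no provisioner that can fail the apply.
  user_data = &lt;&lt;-EOF
    #!/bin/bash
    curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
    install minikube-linux-amd64 /usr/local/bin/minikube
  EOF
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;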

</description>
      <category>terraform</category>
      <category>cloud</category>
      <category>aws</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
