<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: ZRP</title>
    <description>The latest articles on Forem by ZRP (@zrp).</description>
    <link>https://forem.com/zrp</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F1913%2Fd026cd77-565a-49de-a305-8c036f3d8c44.png</url>
      <title>Forem: ZRP</title>
      <link>https://forem.com/zrp</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/zrp"/>
    <language>en</language>
    <item>
      <title>Using Terraform to deploy a CS: GO server in a single command</title>
      <dc:creator>Pedro Gryzinsky</dc:creator>
      <pubDate>Tue, 18 Apr 2023 15:50:36 +0000</pubDate>
      <link>https://forem.com/zrp/using-terraform-to-deploy-a-cs-go-server-in-a-single-command-j01</link>
      <guid>https://forem.com/zrp/using-terraform-to-deploy-a-cs-go-server-in-a-single-command-j01</guid>
      <description>&lt;p&gt;Well, welcome to a very unusual post.&lt;/p&gt;

&lt;p&gt;If you’ve read the title correctly (and yes, you have), this is the story about how I’ve developed a Terraform module to deploy an entire CS: GO server in the Cloud using nothing but a single command.&lt;/p&gt;

&lt;p&gt;That’s right, a single command. 🤯&lt;/p&gt;

&lt;p&gt;But how in the world could we pull that off? If you’ve read the title correctly, you’ve also noticed that we used Terraform.&lt;/p&gt;

&lt;p&gt;But what is Terraform, and why should you care?&lt;/p&gt;

&lt;p&gt;Well, first of all, it can be used to deploy CS: GO servers, which is very important, case closed.&lt;/p&gt;

&lt;p&gt;But did you know that it can also be used to deploy anything you want?&lt;/p&gt;

&lt;p&gt;There are many benefits to this approach, which is called Infrastructure as Code, or shortly, IaC.&lt;/p&gt;

&lt;p&gt;Treating infrastructure as code simply means:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Write once, deploy many times.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Deploying infrastructure many times may seem like an odd idea to you. So let me clarify why it’s good to do IaC instead of manually provisioning infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why should I write code to manage my infrastructure?
&lt;/h2&gt;

&lt;p&gt;I don't know how many times I’ve deployed a CS: GO server in my life.&lt;/p&gt;

&lt;p&gt;In ZRP, sometimes we host Game Nights for everybody to chill out and play some games. A lot of people like to play free games, like Gartic and Among Us, but sometimes people like to go wild and play fancier games.&lt;/p&gt;

&lt;p&gt;On more than one occasion, we decided to play CS. But deploying a server was so time-consuming that we ended up giving up and not playing at all.&lt;/p&gt;

&lt;p&gt;Also, once I accidentally crashed the server 👀 because my connection was so slow that the server couldn’t keep up, but that’s a story for another time.&lt;/p&gt;

&lt;p&gt;So here we are, software developers, contemplating the opportunity right in front of us to automate building a server and deploying it without any human intervention.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fO6EuhI2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/0%2A6FHdw0ru1lsBqJzC.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fO6EuhI2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/0%2A6FHdw0ru1lsBqJzC.jpg" alt="Originally found at [https://conversableeconomist.blogspot.com/2020/01/worries-about-automation-and.html](https://conversableeconomist.blogspot.com/2020/01/worries-about-automation-and.html) on February 28, 2023." width="768" height="686"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;“What kind of tools exist that could easily solve this task for us?” The developers asked. If you guessed IaC, you’re goddamn right.&lt;/p&gt;

&lt;p&gt;So the first thing IaC is great at is automation. Automation per se is not an advantage; it must improve upon something. In our case, that was speed (the time it took us to have a running CS: GO server).&lt;/p&gt;

&lt;p&gt;Nowadays it lets us deploy the server whenever we want: if someone proposes a Game Night and people want to play CS, we can simply run Terraform one hour before the event and, boom, the server is up and running.&lt;/p&gt;
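&lt;p&gt;For reference, that whole ceremony is just the standard Terraform workflow (the exact variables you pass in depend on how the module is configured later in this article):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Download providers and initialize the working directory (first run only)
terraform init

# Preview and apply the changes; -auto-approve skips the confirmation prompt,
# which is what makes this a true single-command deployment
terraform apply -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;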

&lt;p&gt;IaC is also great for another thing: cutting costs. In our case, as for most businesses, time is money, and letting people dedicate more of their energy to important stuff is great.&lt;/p&gt;

&lt;p&gt;The last thing in our use case is that IaC reduces human error, and oh boy, my first attempt at running a CS server manually was, to put it lightly, like being stabbed in the back in a friendly-fire match (of course, I didn’t know a fraction of what I currently know, but there are a lot of mistakes one can make before understanding what went wrong).&lt;/p&gt;

&lt;p&gt;So we’ve achieved a deployment that is almost entirely independent of developers, free of the errors that mattered to us, and fast to replicate. For me, these 3 aspects (speed, reduced costs, and consistency) are the main benefits of IaC.&lt;/p&gt;

&lt;p&gt;To another extent, IaC is also a great tool for:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Better security, because it allows systems to be better designed, with security thought out ahead of development. Hardening also becomes easier (meaning that you can encapsulate security measures in a module and replicate that module elsewhere).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Better communication, because code is a tool of communication. It uses language to let computers and developers alike understand intent and behavior, and change it accordingly. Readable infrastructure is very important for a strategy of improved understanding. A diagram alone can tell only as much as its author intended, but IaC must convey all the information required to create the infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Better reviews, because — and this is one of my favorites — since most of our codebases nowadays are in Git anyway, you could just create a PR, review it, roll back to previous versions, and destroy code without fear of missing something important.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Further automation, because automation has this unique snowball effect, especially if you automate right. It becomes almost invisible, to the point that you could automate the deployment of your infrastructure, and automate the testing required to deploy your infrastructure, and (…) you get the point.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Eliminating configuration drift. I know, this one sounds fancier than the others, but it’s a simple effect of running dynamic resources. Those resources (servers, machines, IPs, etc.) can drift (change over time), either because the service works that way or because someone changed the configuration manually. Since in an IaC setup our configuration is tightly integrated with our code, reapplying it simply restores all the settings we want and preserves the correct configuration as fast as possible.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can tell you when something happened. Indirectly, this could be understood as simply a continuation of the communication and review process, and it is, but I’ve decided to highlight it because it is an invaluable point. Accountability is often critical in many organizations, especially at scale, and can help teams communicate better while also protecting companies from malicious intent, so knowing why / how / when things changed is of the utmost importance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;And last: replication, which is a very important point. We usually want to run copies of our infrastructure at smaller and bigger scales, both to develop and test changes to our environment and to understand how far a system can go beyond its current capabilities. People usually call this scalability; I like to call it future-proofing :)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So this is all great, and I think I could end the story here. We’ve used our knowledge from something we do all the time in projects to do something unusual because we understood that if you need to repeat yourself, don’t.&lt;/p&gt;

&lt;p&gt;Automate it.&lt;/p&gt;

&lt;p&gt;Is this the end? No, this is the beginning. This article is useless without some code, so I want to dive deep into the nuts and bolts of it. If you’ve liked the article so far, give us some claps, it helps a lot 👏&lt;/p&gt;

&lt;h2&gt;
  
  
  Before we dive into the code
&lt;/h2&gt;

&lt;p&gt;I will try to keep this section as introductory as possible, because you may not be familiar with some concepts (like using a CLI, opening a terminal, etc.), which are not the point of this story anyway.&lt;/p&gt;

&lt;p&gt;Instead, we will focus on writing Terraform code, which uses HCL, a language created by HashiCorp that aims to be a structured configuration language that is both human- and machine-friendly.&lt;/p&gt;
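&lt;p&gt;As a quick taste of the syntax before we start, here is a minimal, hypothetical HCL snippet (the values are made up): everything is a block with a type and labels, containing argument = expression pairs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;# A "resource" block of type "aws_instance", locally named "example"
resource "aws_instance" "example" {
  ami           = "ami-12345678" # argument = expression
  instance_type = "t3.medium"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;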

&lt;p&gt;I want to explain to you the important parts, how this project works, and how you could make sense of it yourself on GitHub.&lt;/p&gt;

&lt;p&gt;First of all, we must understand how a CS: GO server works, or, for that matter, how any game server works.&lt;/p&gt;

&lt;p&gt;We will start from there and build up to Terraform.&lt;/p&gt;

&lt;p&gt;Ready!?&lt;/p&gt;

&lt;h3&gt;
  
  
  How a CS: GO server (or any game server) works
&lt;/h3&gt;

&lt;p&gt;A game server is a special type of server for multiplayer games that is solely responsible for tracking what is going on.&lt;/p&gt;

&lt;p&gt;But what do I mean by “what is going on”?&lt;/p&gt;

&lt;p&gt;Suppose you shoot a bullet using your gorgeous custom-painted signature AK-47, hitting another player in the process.&lt;/p&gt;

&lt;p&gt;How the hell do you know that you hit that particular player?&lt;/p&gt;

&lt;p&gt;Well, the server knows that you hit that particular player because the server knows it all (in computer science, the source of information that is considered the primary one is called an authoritative data source).&lt;/p&gt;

&lt;p&gt;When a player joins a match, they must send events to the server, and the server must send all events to all players. Those events are the source of truth, the state our game is currently in.&lt;/p&gt;

&lt;p&gt;This state should be enough to reconstruct the entire game world.&lt;/p&gt;

&lt;p&gt;This allows players to keep an up-to-date version of the match on their computers, so they “see” the same thing. When you shoot the player in our example, the server knows where the player is, that you fired a bullet, that the bullet collides with the player's hitbox at that instant, and that the shot does X damage, based on where in the hitbox you hit.&lt;/p&gt;

&lt;p&gt;After that, the server updates the player's position, health, and the state of the world, so the player on the other side knows they were shot.&lt;/p&gt;

&lt;p&gt;The frequency at which the server updates the state of the world is fixed. On Source engine servers, each update is called a tick. The default tick rate in CS: GO is 64 ticks per second.&lt;/p&gt;

&lt;p&gt;This is all very complex, but the process used to transmit those events is fairly direct. We establish, just like on the web, a session between us (the game client) and the server. The session is managed at the application level, but the events themselves are carried at the transport layer using a protocol called UDP.&lt;/p&gt;

&lt;p&gt;There are a lot of complexities to deal with that we do not have the time to explore in this article, like latency, predicting stuff, and so on.&lt;/p&gt;

&lt;p&gt;At the end of the article, I’ve left some useful materials for those who want to know more, particularly how Valve implemented the high-level concepts of the Source Multiplayer Networking Architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Into the code
&lt;/h2&gt;

&lt;p&gt;So now that we understand how a CS: GO server works, we must decide how and where we will deploy our server, so let’s do that first.&lt;/p&gt;

&lt;p&gt;We usually use AWS, so that’s a no-brainer, it’s cheap for our use case, and well-documented.&lt;/p&gt;

&lt;p&gt;In AWS, Linux instances are way cheaper than Windows machines, so we decided to use Linux and EC2.&lt;/p&gt;

&lt;p&gt;Linux has this neat suite called LinuxGSM that manages game servers for us. They have instructions for installing and setting up a lot of different game servers, so we decided to use it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An important note is that, in our particular case, deciding how the server is actually started and managed is less important than it seems, as the underlying infrastructure does not change.&lt;br&gt;
 We just use some scripts to install the required dependencies and services, plus the game server itself. This means it’s fairly easy to replace the scripts and use CSGOSL, CSGO Server Launcher, Docker, etc. I find the installation easier on Ubuntu, and since the underlying distro is not important, that is what we will use.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We also need a basic network setup on AWS: at least a VPC with a public subnet and auto-assign public IPv4 enabled, to host our instance.&lt;/p&gt;

&lt;p&gt;We also want to be able to connect to the instance remotely, so we will install a VNC server, although we will not cover how to use it in this article.&lt;/p&gt;

&lt;h3&gt;
  
  
  Coding the module
&lt;/h3&gt;

&lt;p&gt;Finally, we’ve arrived at the coding section. Given that we know exactly what we want, let’s draw a diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J3ti4e-g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2706/1%2AM2Gm8tdHQvK-8sdbGh3RYw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J3ti4e-g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2706/1%2AM2Gm8tdHQvK-8sdbGh3RYw.png" alt="An overview of the infrastructure architecture we will implement" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This diagram provides an overview of what is required.&lt;/p&gt;

&lt;p&gt;Let’s start by setting up some files. Create a folder and create the following files inside that folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
├── main.tf &lt;span class="c"&gt;# where we develop our modules logic / resources&lt;/span&gt;
├── versions.tf &lt;span class="c"&gt;# where we declare the providers versions for our resources&lt;/span&gt;
├── variables.tf &lt;span class="c"&gt;# where we declare inputs&lt;/span&gt;
├── outputs.tf &lt;span class="c"&gt;# where we declare outputs&lt;/span&gt;
├── terraform.tfvars &lt;span class="c"&gt;# where we declare values for our inputs&lt;/span&gt;
├── scripts &lt;span class="o"&gt;(&lt;/span&gt;d&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="c"&gt;# a directory for running scripts in the server&lt;/span&gt;
└── templates &lt;span class="o"&gt;(&lt;/span&gt;d&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="c"&gt;# a directory for files that are required by the server&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within this setup, let’s first declare the providers our module requires, and their versions, in the versions.tf file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;gt;= 1.0"&lt;/span&gt;

  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;gt;= 4.56"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;local&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
    &lt;span class="nx"&gt;tls&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
    &lt;span class="nx"&gt;random&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The utility of some of those providers will be clear soon.&lt;/p&gt;

&lt;p&gt;We will also add some common variables to the variables.tf file. These variables are useful for naming resources and indicating which environment we’re currently in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"app"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The app name"&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"csgo"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"env"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The environment for the current application"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;From now on, local and var will appear throughout the code, and the variables they come from will not be explicitly referenced, except in some special cases, to keep the focus on what matters most.&lt;/p&gt;
&lt;/blockquote&gt;
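&lt;p&gt;For clarity, the locals referenced below (local.app, local.env, local.vpc_id) can be assumed to be declared roughly like this; the real module may organize them differently:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;# A sketch of the locals the rest of the code relies on
locals {
  app    = var.app
  env    = var.env
  vpc_id = data.aws_vpc.this.id # the VPC data source we retrieve below
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;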

&lt;p&gt;We can now open the main.tf and add the first things we need to create our system.&lt;/p&gt;

&lt;p&gt;We first need a network, but we want the network to already exist. A public subnet usually already exists within AWS, and there are plenty of established modules that implement networking, so to make this setup more flexible, we just take a subnet_id as input.&lt;/p&gt;
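&lt;p&gt;That input is just an ordinary variable in variables.tf (a sketch; the description wording is mine):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;variable "subnet_id" {
  description = "The id of an existing public subnet to host the server in"
  type        = string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;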

&lt;p&gt;We will use the subnet_id to retrieve the subnet, the VPC, and the default security group of the VPC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_subnet"&lt;/span&gt; &lt;span class="s2"&gt;"public"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;subnet_id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Retrieve the provided subnet vpc and default security group&lt;/span&gt;
&lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_vpc"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_default_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we will create an SSH key pair that we will use to connect to the machine, and a random password (an rcon_password) to connect to our CS: GO server and manage it in-game.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Creates the RCON password&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"random_password"&lt;/span&gt; &lt;span class="s2"&gt;"rcon_password"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;length&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;
  &lt;span class="nx"&gt;special&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;override_special&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"_%@"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Creates the SSH key pair&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"tls_private_key"&lt;/span&gt; &lt;span class="s2"&gt;"ssh"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;algorithm&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"RSA"&lt;/span&gt;
  &lt;span class="nx"&gt;rsa_bits&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;4096&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Saves the private pem locally&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"local_file"&lt;/span&gt; &lt;span class="s2"&gt;"id_rsa"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;content&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;tls_private_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ssh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;private_key_pem&lt;/span&gt;
  &lt;span class="nx"&gt;filename&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/id_rsa.pem"&lt;/span&gt;
  &lt;span class="nx"&gt;file_permission&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Saves the public pem locally&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"local_file"&lt;/span&gt; &lt;span class="s2"&gt;"id_rsa_pub"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;content&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;tls_private_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ssh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_key_pem&lt;/span&gt;
  &lt;span class="nx"&gt;filename&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/id_rsa.pub"&lt;/span&gt;
  &lt;span class="nx"&gt;file_permission&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;755&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Saves the private pem in the cloud&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ssm_parameter"&lt;/span&gt; &lt;span class="s2"&gt;"pk"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/SSHPrivateKey"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"SecureString"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;tls_private_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ssh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;private_key_pem&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Creates the key pair in EC2&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_key_pair"&lt;/span&gt; &lt;span class="s2"&gt;"ssh"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;key_name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"-"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"ssh"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="nx"&gt;public_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;tls_private_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ssh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_key_openssh&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet creates a random password using the &lt;strong&gt;random&lt;/strong&gt; provider, while also creating a key pair using the &lt;strong&gt;tls&lt;/strong&gt; provider. The key pair is saved both locally, using the &lt;strong&gt;local&lt;/strong&gt; provider, and remotely, using AWS SSM.&lt;/p&gt;
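&lt;p&gt;This is also a good moment to put our outputs.tf to use and expose the generated password, so we can grab it after the apply. A sketch (the output name is my own choice):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;# Expose the RCON password so we can paste it in the game console;
# marked sensitive so Terraform redacts it from plan/apply logs
output "rcon_password" {
  value     = random_password.rcon_password.result
  sensitive = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;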

&lt;p&gt;As we’ve decided on Ubuntu 20.04, we will query its AMI directly from the public AMI repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Find the latest release of Ubuntu 20&lt;/span&gt;
&lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_ami"&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;most_recent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"name"&lt;/span&gt;
    &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"ubuntu/images/hvm-ssd/ubuntu*20.04*amd64-server*"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"virtualization-type"&lt;/span&gt;
    &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"hvm"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;# Published by Canonical&lt;/span&gt;
  &lt;span class="nx"&gt;owners&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"099720109477"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we’re almost done. The following bits are the basis for our server and are important to understand.&lt;/p&gt;

&lt;p&gt;We will create an EC2 instance within our public subnet and attach an EIP (a public IPv4 address) to it.&lt;/p&gt;

&lt;p&gt;We will also allow incoming traffic on the ports required by the CS: GO server, as well as for VNC and SSH communication.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Creates the security group for incoming / outgoing traffic&lt;/span&gt;
&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"security_group"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-aws-modules/security-group/aws"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;gt;= 4.17"&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"-"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"security"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"CSGO Server Default Security Group"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_id&lt;/span&gt;
  &lt;span class="nx"&gt;ingress_with_cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;rule&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ssh-tcp"&lt;/span&gt;
      &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;27000&lt;/span&gt;
      &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;27020&lt;/span&gt;
      &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
      &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"CSGO TCP"&lt;/span&gt;
      &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;27000&lt;/span&gt;
      &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;27020&lt;/span&gt;
      &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"udp"&lt;/span&gt;
      &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"CSGO UDP"&lt;/span&gt;
      &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5901&lt;/span&gt;
      &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5901&lt;/span&gt;
      &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
      &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"VNC"&lt;/span&gt;
      &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;egress_rules&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"all-all"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;egress_cidr_blocks&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;egress_ipv6_cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"::/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Create the server&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"server"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;                         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_ami&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ubuntu&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;instance_type&lt;/span&gt;
  &lt;span class="nx"&gt;key_name&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_key_pair&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ssh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key_name&lt;/span&gt;
  &lt;span class="nx"&gt;associate_public_ip_address&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

  &lt;span class="nx"&gt;root_block_device&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;volume_size&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;vpc_security_group_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nx"&gt;aws_default_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;security_group_id&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Name"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"-"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"instance"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Associate an EIP with the created EC2 instance&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_eip"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;instance&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;vpc&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet ties everything together: it launches an EC2 instance of the given type using the key pair we generated, the AMI ID we looked up, and the subnet we provided.&lt;/p&gt;

&lt;p&gt;But wait, where is the CS: GO server? 🤔&lt;/p&gt;

&lt;h3&gt;
  
  
  Remote Execution
&lt;/h3&gt;

&lt;p&gt;Terraform allows us to execute scripts, locally or on the server, using what are called provisioners.&lt;/p&gt;

&lt;p&gt;Provisioners model specific actions on the local machine or on the remote machine to prepare servers or other infrastructure objects for service.&lt;/p&gt;

&lt;p&gt;These actions are usually not easily representable as a resource or any other abstraction provided by Terraform, so they’re considered a last resort.&lt;/p&gt;

&lt;p&gt;If you remember, we’ve created 2 folders at the beginning of this section (&lt;strong&gt;scripts&lt;/strong&gt; and &lt;strong&gt;templates&lt;/strong&gt;).&lt;/p&gt;

&lt;p&gt;We will use the &lt;strong&gt;scripts&lt;/strong&gt; folder for a custom setup.sh script that installs LinuxGSM on the server, installs the CS: GO server alongside it, and copies some files into the CS: GO server folder for customization.&lt;/p&gt;

&lt;p&gt;But first, we must connect Terraform to the server. For that, we can use a &lt;code&gt;connection&lt;/code&gt; block, which provides the configuration required for the connection.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"server"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;# ...&lt;/span&gt;
  &lt;span class="nx"&gt;connection&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;host&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_ip&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ssh"&lt;/span&gt;
    &lt;span class="nx"&gt;user&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"csgoserver"&lt;/span&gt;
    &lt;span class="nx"&gt;private_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;tls_private_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ssh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;private_key_pem&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, we’re using a user called &lt;strong&gt;csgoserver&lt;/strong&gt;. This user does not exist by default on the Ubuntu 20.04 AMI, so we must create it.&lt;/p&gt;

&lt;p&gt;Of course, this will not be done manually. Instead, we’re going to use a script called create-user.sh that runs before everything else. Since this script connects to the server as a different user, we must give this provisioner its own connection block.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"server"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;# ...&lt;/span&gt;

  &lt;span class="c1"&gt;# Create user for server&lt;/span&gt;
  &lt;span class="k"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"remote-exec"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;connection&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;host&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_ip&lt;/span&gt;
      &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ssh"&lt;/span&gt;
      &lt;span class="nx"&gt;user&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu"&lt;/span&gt;
      &lt;span class="nx"&gt;private_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;tls_private_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ssh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;private_key_pem&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;script&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;module}&lt;/span&gt;&lt;span class="s2"&gt;/scripts/create-user.sh"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, remember to add the &lt;code&gt;scripts/create-user.sh&lt;/code&gt; script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;

&lt;span class="c"&gt;# Create steam user&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;adduser csgoserver &lt;span class="nt"&gt;--disabled-password&lt;/span&gt; &lt;span class="nt"&gt;-gecos&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;

&lt;span class="c"&gt;# Add csgoserver to sudo users&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;csgoserver
&lt;span class="nb"&gt;sudo &lt;/span&gt;su &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"echo 'csgoserver     ALL=(ALL) NOPASSWD:ALL' &amp;gt;&amp;gt; /etc/sudoers"&lt;/span&gt;

&lt;span class="c"&gt;# Give .ssh access&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; csgoserver bash &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
# Give ssh access
mkdir -p .ssh
chmod 700 .ssh
touch .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
TOKEN=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; PUT http://169.254.169.254/latest/api/token &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"X-aws-ec2-metadata-token-ttl-seconds: 21600"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;
curl -H "X-aws-ec2-metadata-token: &lt;/span&gt;&lt;span class="nv"&gt;$TOKEN&lt;/span&gt;&lt;span class="sh"&gt;" http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key &amp;gt;&amp;gt; .ssh/authorized_keys
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="nb"&gt;exit &lt;/span&gt;0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we’ve created the user and granted it &lt;strong&gt;sudo&lt;/strong&gt; access, we can configure our server: execute the setup.sh script and import the configuration files (which you can read more about in the LinuxGSM documentation linked at the end of the article).&lt;/p&gt;

&lt;p&gt;The code is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"template_file"&lt;/span&gt; &lt;span class="s2"&gt;"lgsm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;template&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;module}&lt;/span&gt;&lt;span class="s2"&gt;/templates/lgsm.tpl"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;vars&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;default_map&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"de_dust2"&lt;/span&gt;
    &lt;span class="nx"&gt;max_players&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"32"&lt;/span&gt;
    &lt;span class="nx"&gt;slack_alert&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;slack_webhook_url&lt;/span&gt; &lt;span class="err"&gt;!&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="err"&gt;?&lt;/span&gt; &lt;span class="s2"&gt;"on"&lt;/span&gt; &lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"off"&lt;/span&gt;
    &lt;span class="nx"&gt;tickrate&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tickrate&lt;/span&gt;
    &lt;span class="nx"&gt;slack_webhook_url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;slack_webhook_url&lt;/span&gt;
    &lt;span class="nx"&gt;gslt&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;gslt&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"template_file"&lt;/span&gt; &lt;span class="s2"&gt;"server"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;template&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;module}&lt;/span&gt;&lt;span class="s2"&gt;/templates/server.tpl"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;vars&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;hostname&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ZRP"&lt;/span&gt;
    &lt;span class="nx"&gt;rcon_password&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rcon_password&lt;/span&gt;
    &lt;span class="nx"&gt;sv_password&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sv_password&lt;/span&gt;
    &lt;span class="nx"&gt;sv_contact&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sv_contact&lt;/span&gt;
    &lt;span class="nx"&gt;sv_tags&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sv_tags&lt;/span&gt;
    &lt;span class="nx"&gt;sv_region&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sv_region&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"template_file"&lt;/span&gt; &lt;span class="s2"&gt;"autoexec"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;template&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;module}&lt;/span&gt;&lt;span class="s2"&gt;/templates/autoexec.tpl"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"server"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;# ...&lt;/span&gt;

  &lt;span class="c1"&gt;# Runs the setup&lt;/span&gt;
  &lt;span class="k"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"remote-exec"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;script&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;module}&lt;/span&gt;&lt;span class="s2"&gt;/scripts/setup.sh"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;# Download and config CS:GO server&lt;/span&gt;
  &lt;span class="k"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"remote-exec"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;inline&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="s2"&gt;"./csgoserver auto-install"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;# Upload server config&lt;/span&gt;
  &lt;span class="k"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"file"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;content&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;template_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lgsm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rendered&lt;/span&gt;
    &lt;span class="nx"&gt;destination&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/home/csgoserver/lgsm/config-lgsm/csgoserver/common.cfg"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"file"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;content&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;template_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rendered&lt;/span&gt;
    &lt;span class="nx"&gt;destination&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/home/csgoserver/serverfiles/csgo/cfg/csgoserver.cfg"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"file"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;content&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;template_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;autoexec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rendered&lt;/span&gt;
    &lt;span class="nx"&gt;destination&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/home/csgoserver/serverfiles/csgo/cfg/autoexec.cfg"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;# Start&lt;/span&gt;
  &lt;span class="k"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"remote-exec"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;inline&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="s2"&gt;"chmod 775 /home/csgoserver/lgsm/config-lgsm/csgoserver/common.cfg"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"chmod 775 /home/csgoserver/serverfiles/csgo/cfg/csgoserver.cfg"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"chmod 775 /home/csgoserver/serverfiles/csgo/cfg/autoexec.cfg"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s2"&gt;"./csgoserver start"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that we are using &lt;strong&gt;template_file&lt;/strong&gt; data, which means that we’re reading a template file (any text file) from the &lt;strong&gt;templates&lt;/strong&gt; folder and replacing its variables with the provided values.&lt;/p&gt;

&lt;p&gt;That’s why the &lt;strong&gt;file&lt;/strong&gt; provisioners reference the &lt;code&gt;rendered&lt;/code&gt; attribute, which yields the final file.&lt;/p&gt;
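Conceptually, rendering works like plain placeholder substitution. As a hypothetical illustration (the real lgsm.tpl lives in the repository; the file paths and values below are made up for the example), this shell sketch mimics what Terraform does when it renders the template:

```shell
# Hypothetical template, mimicking a fragment of templates/lgsm.tpl
cat > /tmp/lgsm.tpl <<'EOF'
defaultmap="${default_map}"
maxplayers="${max_players}"
EOF

# Terraform replaces each ${var} placeholder with the value from `vars`;
# here we emulate that substitution with sed
sed -e 's/${default_map}/de_dust2/' \
    -e 's/${max_players}/32/' /tmp/lgsm.tpl > /tmp/common.cfg

cat /tmp/common.cfg
```

The `rendered` attribute exposes exactly this substituted text, which the file provisioner then uploads to the server.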

&lt;p&gt;In this snippet we execute the setup, copy some files, and finally, start the server.&lt;/p&gt;

&lt;p&gt;From now on we need to manage the server manually, as we will not be able to update these files automatically. This is by design: provisioners run only once, on create or on destroy.&lt;/p&gt;

&lt;p&gt;Once you edit your configs on the server, keep editing them there. A backup can easily be added, as we’ve done in the repository, so &lt;a href="https://github.com/zrp/terraform-csgo-server"&gt;&lt;strong&gt;go to the repository and check it out&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finally, deploying
&lt;/h2&gt;

&lt;p&gt;After coding our infrastructure, it is showtime ✨&lt;/p&gt;

&lt;p&gt;As promised at the beginning of the article, the deployment is a single line.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;First, copy the example in &lt;code&gt;examples/complete&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generate a GSLT token from Steam (access &lt;a href="https://steamcommunity.com/dev/managegameservers"&gt;https://steamcommunity.com/dev/managegameservers&lt;/a&gt; to generate one, using app_id 730).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update &lt;code&gt;main.tf&lt;/code&gt; to match your configuration and replace the variables with your values.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Export your &lt;code&gt;AWS_PROFILE&lt;/code&gt; or credentials.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;terraform init&lt;/code&gt; to initialize the working directory and download the providers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;terraform plan -out plan.out&lt;/code&gt; and review what will be created (don’t trust a random person on the internet; check what will be created).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Finally, deploy using &lt;code&gt;terraform apply "plan.out"&lt;/code&gt;.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
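Taken together, the steps above boil down to a handful of commands. This is a sketch, assuming you have the Terraform CLI installed and AWS credentials already configured (the profile name is hypothetical):

```shell
cd examples/complete

# Point Terraform at your AWS credentials
export AWS_PROFILE=my-profile   # hypothetical profile name

terraform init                  # download providers and initialize
terraform plan -out plan.out    # review exactly what will be created
terraform apply "plan.out"      # the promised single deploy command
```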

&lt;p&gt;You’ve succeeded in deploying your own CS: GO server to AWS.&lt;/p&gt;

&lt;p&gt;In the repository, there is further information on how to connect to the server and use VNC. Go check it out if you haven’t already.&lt;/p&gt;

&lt;p&gt;Thank you for reading, I hope you’ve enjoyed this article as much as I did.&lt;/p&gt;

&lt;p&gt;Feel free to add me on Steam. You can find more about me on my &lt;a href="https://github.com/gryzinsky"&gt;GitHub Profile Page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Until next time! 👋&lt;/p&gt;

&lt;h2&gt;
  
  
  Materials you might be interested in
&lt;/h2&gt;

&lt;p&gt;So the article is over, but if you’re interested in the topic of game servers, or want to know more about things we weren’t able to cover in this article, check out the links below 🤓&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/zrp/terraform-csgo-server"&gt;&lt;strong&gt;zrp/terraform-csgo-server&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.valvesoftware.com/wiki/Counter-Strike:_Global_Offensive_Dedicated_Servers#CSGO_Server_Launcher"&gt;&lt;strong&gt;Counter-Strike: Global Offensive Dedicated Servers&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking"&gt;&lt;strong&gt;Source Multiplayer Networking&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/crazy-max/csgo-server-launcher/blob/master/doc/installation.md"&gt;&lt;strong&gt;csgo-server-launcher&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://tigervnc.org/"&gt;&lt;strong&gt;TigerVNC&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://linuxgsm.com/servers/csgoserver/"&gt;&lt;strong&gt;LinuxGSM&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>terraform</category>
      <category>tutorial</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Using a Reverse Proxy Server for Application Deployment</title>
      <dc:creator>Pedro Gryzinsky</dc:creator>
      <pubDate>Wed, 18 Mar 2020 12:44:20 +0000</pubDate>
      <link>https://forem.com/zrp/using-a-reverse-proxy-server-for-application-deployment-1lci</link>
      <guid>https://forem.com/zrp/using-a-reverse-proxy-server-for-application-deployment-1lci</guid>
<description>&lt;p&gt;Deploying an application is never easy! If you’ve ever tried it for yourself, you know things are never as simple and easy as they sound on paper.&lt;/p&gt;

&lt;p&gt;It’s also quite common that only one person, or a small team, is responsible for deployment, making this knowledge something teams fail to share among their peers. This leads to unexpected application behavior, whether because the dev team didn’t prepare the code for deployment or because the ops team didn’t know a feature would impose deployment restrictions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;h3&gt;
  
  
  TL;DR 📄
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;A reverse proxy helps offload responsibilities from the main server while using a simple abstraction.&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Simple Abstraction. The reverse proxy abstraction is conceptually easy to understand and execute, requiring little effort to adopt on existing web-based systems.&lt;/li&gt;
&lt;li&gt;Battle-tested. You probably already use a reverse proxy for common operations, such as globally distributed content delivery and communication encryption.&lt;/li&gt;
&lt;li&gt;Deal with differences. At the architecture level, different applications may behave like a single unit for the end user, providing flexibility for teams to test different solutions.&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;One neat trick we’ve been using here at &lt;a href="https://zrp.com.br"&gt;ZRP&lt;/a&gt; is to put most of the strangeness of application behavior behind a predictable, configurable layer. This is what we call a reverse proxy server. There are different kinds of servers that behave like proxies, so this article explains what a reverse proxy is, why it exists, how it is useful, and how to deploy a single-page application written in Angular using the concepts we will establish.&lt;/p&gt;

&lt;p&gt;So what is a reverse proxy, and how might we use this concept in our infrastructure?&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a reverse proxy?
&lt;/h2&gt;

&lt;p&gt;A reverse proxy is a computer-networking technique that masks your resource server (a single-page application, an API, a traditional web app) behind an intermediary known as the proxy server. When a user requests a specific resource, e.g. an image located at &lt;code&gt;/assets/images/logo.png&lt;/code&gt;, the proxy server calls the resource server and serves the content as if it originated from the proxy server itself.&lt;/p&gt;
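As a minimal sketch of that idea (server names, addresses, and ports here are assumptions for illustration, not taken from the article), an Nginx reverse proxy that serves content from a hidden upstream could look like:

```nginx
# Hypothetical reverse-proxy sketch; the upstream address is made up
server {
    listen 80;
    server_name example.com;

    location / {
        # Forward every request to the hidden resource server
        proxy_pass http://10.0.0.10:3000;

        # Preserve the original host and client address for the upstream
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

From the client's perspective, `example.com` appears to serve everything itself; the resource server at `10.0.0.10` is never exposed.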

&lt;p&gt;The main difference here is that the proxy is not configured on the client, hence the “reverse”. The principles are the same as with a forward proxy: it helps the proxied party, client or server, conceal its location and other critical information that we may want to hide from attackers or untrusted traffic, while applying different rules to that traffic.&lt;/p&gt;

&lt;p&gt;This technique also lets your infrastructure decouple your application and static assets from the proxied server responsible for distributing the content or implementing your business logic. It allows application servers to focus on a single task, delegating important activities such as authentication, compression, load balancing, and security to the proxy server when the proxied server cannot handle them, shielding it from the outside world.&lt;/p&gt;

&lt;p&gt;Although application servers nowadays usually handle all of the activities above, either out of the box or through simple extensions, this doesn’t mean we can’t use a proxy server. Another benefit of a proxy server is the reduced computational cost of common server-side operations on the application server. Take compression, for example, which may take a while on your application because compression algorithms are usually CPU-bound. By delegating the operation to the proxy server you free your application’s resources faster, reducing the memory footprint and the allocated CPU, thus improving the end-user experience with a faster response and reducing your computational cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use cases and benefits of reverse proxy
&lt;/h3&gt;

&lt;p&gt;Reverse proxies may be used in a variety of contexts, but they are mainly used to hide the existence of an origin server or servers, concealing characteristics that would be undesirable to make publicly available. Some use cases are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Conceal the Existence of a Server&lt;/strong&gt;: Using a reverse proxy you can hide an application server on a private network.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Decoupling&lt;/strong&gt;: Using a reverse proxy you can decouple your application into multiple systems, following a service-oriented architecture (SOA); the reconciliation happens on the proxy server, which can forward requests to the correct application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Traffic Control&lt;/strong&gt;: A reverse proxy allows you to build a Web Application Firewall (WAF) between the proxy server and the application server, letting us control which traffic can go in and out of the application server, which can mitigate common attacks like CSRF and XSS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSL/TLS Encryption&lt;/strong&gt;: Using a reverse proxy you can delegate the encryption to a single server, offloading the task to the proxy server. This is particularly useful on container environments, where the application services receive incoming traffic from the proxy server without any encryption, but clients send data encrypted over the wire to the proxy server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load Balancing&lt;/strong&gt;: Using a reverse proxy enables you to distribute and manage traffic to multiple application servers, which is good both for availability and scalability, while also enabling blue / green deployments with ease.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compression&lt;/strong&gt;: Using a reverse proxy enables your application server to return plain-text results, delegating compression to the proxy server. Compression greatly reduces the payload size, giving end-users better load times and responsiveness. Also, by delegating the task to the proxy server you effectively reduce the load on your application server, because compression algorithms are usually CPU-bound.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reducing Application Server Load&lt;/strong&gt;: Using a reverse proxy we can effectively reduce the load on the application server for dynamically generated content: the server processes the request quickly and delegates the transmission of the data over the network to the proxy server, releasing application-server threads for new incoming requests. This technique, also known as spoon feeding, helps popular websites process all incoming traffic while reducing server overload.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A/B Testing&lt;/strong&gt;: Using a reverse proxy we can distribute content from different sources without the client even noticing. This allows us to serve different versions of the same page, for example, and measure how well they perform over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IP Conciliation&lt;/strong&gt;: Using a reverse proxy we can consolidate applications that form one system but live at different addresses under a single address. For example, your new company institutional page could be a static website and your blog could be powered by WordPress, and you want users to navigate between the two as if they were in the same ecosystem. Using a reverse proxy, you can achieve this without the user ever noticing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication&lt;/strong&gt;: Using a reverse proxy we can add some basic HTTP authentication to an application server that has none, protecting resources from unwanted users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Caching&lt;/strong&gt;: Using a reverse proxy you can cache resources from the application server, thus offloading the server. The proxy is responsible for serving the content to end-users, releasing resources to process important requests that the proxy server could not handle by itself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Geographically Reduced Latency&lt;/strong&gt;: Using a reverse proxy you can delegate incoming requests to the nearest server, reducing the latency to the end-user.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Geographically Dynamic Content&lt;/strong&gt;: Using a reverse proxy you can distribute content based on the user’s current location (location accuracy may be limited), which allows websites to be automatically translated and to display different content. This also matters because regulations may differ depending on the user’s location, a very hot subject given the recent GDPR movements and, in Brazil, our own regulatory policy, the LGPD.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Deployment and Configuration
&lt;/h2&gt;

&lt;p&gt;Now that we’ve listed the main use cases for reverse proxy, let’s deploy a very simple Angular application using Amazon S3 and Amazon CloudFront.&lt;/p&gt;

&lt;p&gt;First of all, we must create our Angular app. In these initial steps we will install the &lt;a href="https://cli.angular.io/"&gt;@angular/cli&lt;/a&gt; package using NPM, create our project, change into its directory and run it to check that everything is fine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TrOEokxg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dghxpltb7rin4f7wu3h6.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TrOEokxg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dghxpltb7rin4f7wu3h6.gif" alt="Installing Angular CLI, creating and serving a new project"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--global&lt;/span&gt; @angular/cli
ng new reverse-proxy-demo &lt;span class="nt"&gt;--defaults&lt;/span&gt;

&lt;span class="nb"&gt;cd&lt;/span&gt; ./reverse-proxy-demo
npx ng serve
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After a few moments our app should have compiled successfully and be available at &lt;a href="http://localhost:4200"&gt;http://localhost:4200&lt;/a&gt;, and we’re ready to deploy our app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eozVPZxc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1917/1%2ATZ7iaKO8VN6qJVrWs00GuQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eozVPZxc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1917/1%2ATZ7iaKO8VN6qJVrWs00GuQ.jpeg" alt="Brand new Angular app ready for deployment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First we need a place in the cloud to store our static assets, like images, fonts, style sheets and JavaScript code, so we can start serving our SPA to our users. For that we will use Amazon S3 through the AWS CLI. For instructions on how to install the AWS CLI, &lt;a href="https://aws.amazon.com/cli/"&gt;click here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;S3 is an object storage service provided by AWS that has a decent &lt;a href="https://en.wikipedia.org/wiki/Service-level_agreement"&gt;service-level agreement (SLA)&lt;/a&gt; and costs very little per GB of data. S3 also charges for requests and transfers, which we should take into account when deploying a static website, though for the average website this cost is negligible. For a more detailed overview of the pricing model you can &lt;a href="https://aws.amazon.com/s3/pricing/"&gt;click here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Before we upload our assets, let’s create a bucket. Bucket names are globally unique, so pick a name for your bucket and decide where you want it created. My bucket name is zrp-tech-reverse-proxy-demo and I created it in N. Virginia (us-east-1). We also set our access control list (ACL) to private; ACLs are not recommended anymore, but this will be enough for our use case. A private ACL on bucket creation basically makes all objects private, so we will be unable to download them directly from S3. In the terminal, type the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;aws s3api create-bucket &lt;span class="se"&gt;\&lt;/span&gt;
          &lt;span class="nt"&gt;--bucket&lt;/span&gt; &amp;lt;YOUR_BUCKET_NAME&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
          &lt;span class="nt"&gt;--acl&lt;/span&gt; private &lt;span class="se"&gt;\&lt;/span&gt;
          &lt;span class="nt"&gt;--region&lt;/span&gt; &amp;lt;YOUR_AWS_REGION&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that our bucket is ready, let’s compile our app and upload it to our freshly created bucket. To do so we run the build procedure from the Angular CLI, which will output our third-party licenses, our index.html file, our application code and our application styles, alongside the Angular runtime and polyfills for older browsers. Let’s compile our application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build the application&lt;/span&gt;
npx ng build &lt;span class="nt"&gt;--prod&lt;/span&gt;

&lt;span class="c"&gt;# You can check the results listing the dist/&amp;lt;APP_NAME&amp;gt; contents&lt;/span&gt;
&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-als&lt;/span&gt; ./dist/reverse-proxy-demo

&lt;span class="c"&gt;# We can also test locally using http-server&lt;/span&gt;
&lt;span class="c"&gt;# and opening localhost:8080&lt;/span&gt;
npx http-server dist/reverse-proxy-demo
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IbmVJiqi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://miro.medium.com/max/900/1%2A_Ok1iQ3AlGE9J6h2Ip26Eg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IbmVJiqi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://miro.medium.com/max/900/1%2A_Ok1iQ3AlGE9J6h2Ip26Eg.gif" alt="Building and Testing the Application Locally"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After we’ve built our application, we can sync it with our S3 bucket. We can already leverage HTTP caching by setting, alongside every file, a Cache-Control metadata key with a max-age value. We will use 86400 seconds as our max-age value, which translates to 24 hours.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Upload the dist/reverse-proxy-demo folder to&lt;/span&gt;
&lt;span class="c"&gt;# an app folder inside the bucket&lt;/span&gt;
aws s3 &lt;span class="nb"&gt;sync&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
       ./dist/reverse-proxy-demo s3://&amp;lt;YOUR_BUCKET_NAME&amp;gt;/ &lt;span class="se"&gt;\&lt;/span&gt;
       &lt;span class="nt"&gt;--cache-control&lt;/span&gt; max-age&lt;span class="o"&gt;=&lt;/span&gt;86400

&lt;span class="c"&gt;# We can then list the uploaded files&lt;/span&gt;
aws s3 &lt;span class="nb"&gt;ls &lt;/span&gt;s3://&amp;lt;YOUR_BUCKET_NAME&amp;gt;/

&lt;span class="c"&gt;# We can try to download our file, but it will return 403&lt;/span&gt;
&lt;span class="c"&gt;# Because our ACL was set to private by default&lt;/span&gt;
curl &lt;span class="nt"&gt;-v&lt;/span&gt; https://&amp;lt;YOUR_BUCKET_NAME&amp;gt;.s3.amazonaws.com/index.html

&lt;span class="c"&gt;# We could actually presign the file for&lt;/span&gt;
&lt;span class="c"&gt;# a minute and enable access to it.&lt;/span&gt;
&lt;span class="c"&gt;# This will return a 200 status code&lt;/span&gt;
curl &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;aws s3 presign s3://&amp;lt;YOUR_BUCKET_NAME&amp;gt;/index.html &lt;span class="nt"&gt;--expires-in&lt;/span&gt; 60&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you pay attention to the last request, made to the presigned URL with the verbose flag, you should notice that our Cache-Control header is returned correctly, as expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lbb3z_V6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1125/1%2AUBF0et7euN2hcYvzMD0Z4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lbb3z_V6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1125/1%2AUBF0et7euN2hcYvzMD0Z4w.png" alt="Response headers for presigned request to index.html on S3. Notice the cache-control header returning the expected max-age value."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that our files are uploaded, we will create our reverse proxy using AWS CloudFront. To do this, we will use the AWS web interface. On the &lt;a href="https://console.aws.amazon.com/cloudfront/home?#"&gt;CloudFront Console&lt;/a&gt; click “Create Distribution” and, under Web, click “Get Started”. This will redirect us to a form where we can configure our reverse proxy.&lt;/p&gt;

&lt;p&gt;From there, let’s first set up our origin. Our origin will be our proxied server, in this particular case Amazon S3, which follows the format &lt;code&gt;&amp;lt;YOUR_BUCKET_NAME&amp;gt;.s3.amazonaws.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We also need to specify the path on the origin from which the resources are loaded, the Origin Path field; in our case it is /, so we leave it blank. Our Origin ID is an arbitrary string that identifies the proxied server. A reverse proxy can hide many servers, so we could have an arbitrary number of origins configured. In this case we will call it Angular App.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--67PIlju_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1125/1%2AnmUf3F7QqClmVIaYlqvUag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--67PIlju_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1125/1%2AnmUf3F7QqClmVIaYlqvUag.png" alt="Configuring our Proxied Server and Authentication Policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To ensure that the bucket contents will only be served through CloudFront, we can restrict the bucket access. This will automatically create an AWS policy for our bucket, allowing our principal, the CloudFront distribution, to read data from the bucket while denying third parties the ability to read the bucket contents directly &lt;strong&gt;(our use case for Authentication)&lt;/strong&gt;. To do so, set “Restrict Bucket Access” to “Yes”, Origin Access Identity to “Create a New Identity”, Comment to “CloudFrontAccessIdentity” and “Grant Read Permissions on Bucket” to “Yes, Update Bucket Policy”, which will automatically update the bucket policy to enforce our security policy. When using an origin other than S3, for example an API that requires an X-Api-Key header, we could provide Origin Custom Headers, but we will not use this option for now.&lt;/p&gt;
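&lt;p&gt;For reference, the bucket policy that CloudFront generates has roughly the following shape (a sketch; the identity ID and bucket name are placeholders, so check the S3 console for the exact policy generated for your bucket):&lt;/p&gt;

```json
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <OAI_ID>"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<YOUR_BUCKET_NAME>/*"
    }
  ]
}
```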

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cVOLWr7m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/721/1%2Afe82EDGnFBptbPx8SwELVQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cVOLWr7m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/721/1%2Afe82EDGnFBptbPx8SwELVQ.jpeg" alt="Our caching and SSL/TLS Encryption Policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we must configure our default cache behavior. The default cache behavior is applied to all objects served by our reverse proxy. After we create the distribution, we can provide different behaviors for different objects, e.g. caching images for extended periods of time, but we will not do that here because we want to apply the same policy to all static assets generated by our application &lt;strong&gt;(enforcing our Caching use case)&lt;/strong&gt;. We can also compress objects &lt;strong&gt;(enforcing our Compression use case)&lt;/strong&gt;, so content is fetched from S3 uncompressed but served compressed to clients.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hz3StG32--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/674/1%2A-4tf--ZzGt1bBGJIfgS9kQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hz3StG32--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/674/1%2A-4tf--ZzGt1bBGJIfgS9kQ.jpeg" alt="Our compression use case"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can configure our CloudFront distribution to serve assets only through HTTPS &lt;strong&gt;(enforcing our SSL/TLS Encryption policy)&lt;/strong&gt;, redirecting HTTP traffic to HTTPS. We can also allow only methods like GET and HEAD, since we just want to serve content, not perform any kind of server-side operation on the proxied server. Another interesting option is defining how object caching within the proxy server is performed; we will use the incoming Cache-Control header from the proxied server. We will forward neither cookies nor the query string.&lt;/p&gt;

&lt;p&gt;Now we can finally launch our CloudFront distribution. The distribution settings are not important in the scope of this article, but you can set different regional placements for your distribution &lt;strong&gt;(our use case of Geographically Reduced Latency)&lt;/strong&gt;, SSL/TLS version restrictions, and HTTP/2 and IPv6 support. You should definitely check them out.&lt;br&gt;
For now, the only parameter we should set in this particular case is the Default Root Object: the object returned by the CloudFront distribution when no object is specified in the request path. In our case, our application must serve the &lt;code&gt;index.html&lt;/code&gt; file, so our default root object is &lt;code&gt;index.html&lt;/code&gt;. Now just click Create Distribution and wait a few minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0b3RL-H9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/808/1%2AzPHspNqXBk9xxFj5dg0Reg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0b3RL-H9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/808/1%2AzPHspNqXBk9xxFj5dg0Reg.jpeg" alt="Our newly created distribution"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;p&gt;Now that our application is deployed, we can access the distribution using the URL shown in the Domain Name column, and voilà.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sHxXAtdK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1918/1%2AhGFzasruaEWlZocOc5Euxw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sHxXAtdK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1918/1%2AhGFzasruaEWlZocOc5Euxw.jpeg" alt="Our Angular App is now being served by Cloudfront, concealing the existence of S3."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we pay close attention to the styles.css file, we can notice the effect of our caching policy, alongside some information regarding the proxy server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hnpaBPsx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/622/1%2AgmXYKjqngQI-cF0yWk7a_Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hnpaBPsx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/622/1%2AgmXYKjqngQI-cF0yWk7a_Q.png" alt="Response Headers for styles.css before caching"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First of all, our first request had an &lt;code&gt;x-cache&lt;/code&gt; header with &lt;code&gt;Miss from cloudfront&lt;/code&gt;, which indicates that the requested object wasn’t cached yet on the distribution. Second, we can check that our &lt;code&gt;cache-control&lt;/code&gt; header was correctly processed, setting our &lt;code&gt;max-age&lt;/code&gt; to a day, and that the content returned a 200 status code, as expected. Now, if we make a second request, things start to get a little more interesting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_k6p4j4P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/622/1%2ANyUnagR6IX80qOOA-MduAA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_k6p4j4P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/622/1%2ANyUnagR6IX80qOOA-MduAA.png" alt="Response Headers for styles.css after browser and cdn caching"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our second request was a hit (no pun intended). Our distribution cached the content, and so did the browser. The styles.css file is loaded directly from the browser cache and had been in the proxy server’s cache for ~62 seconds. The content will stay cached until it expires, when the browser will try to fetch it again from the distribution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, a reverse proxy is a powerful tool that you probably already use. It is easy to configure and can take away much of the pain from your application.&lt;/p&gt;

&lt;p&gt;Nowadays most of the reverse proxy technology is based on software and runs on commodity hardware. Also, there are a lot of cloud providers in the market offering solutions based on this concept, so you should check them out to see the benefits and costs associated with each implementation.&lt;/p&gt;

&lt;p&gt;It’s easier than ever to find a solution that fits your problem, so explore what is available before trying to tape every piece of your deployment together yourself.&lt;/p&gt;

&lt;p&gt;If you have any questions, feel free to contact me at any time.&lt;/p&gt;

&lt;p&gt;I hope you liked this introduction, until next time. 🚀&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>A practical and gentle introduction to web scraping with Puppeteer</title>
      <dc:creator>Nikolas Serafini</dc:creator>
      <pubDate>Fri, 13 Mar 2020 20:34:16 +0000</pubDate>
      <link>https://forem.com/zrp/a-practical-and-gentle-introduction-to-web-scraping-with-puppeteer-20jn</link>
      <guid>https://forem.com/zrp/a-practical-and-gentle-introduction-to-web-scraping-with-puppeteer-20jn</guid>
      <description>&lt;p&gt;If you are wondering what that is, &lt;a href="https://github.com/puppeteer/puppeteer"&gt;Puppeteer&lt;/a&gt; is a Google-maintained Node library that provides an API over the DevTools protocol, offering us the ability to take control over Chrome or Chromium and do very nice automation and scraping related things.&lt;/p&gt;

&lt;p&gt;It's very resourceful, widely used, and probably what you should take a look at today if you need to develop something of the sort. Its use even extends to performing e2e tests with front-end web frameworks such as &lt;a href="https://github.com/angular/angular"&gt;Angular&lt;/a&gt;; it's a very powerful tool.&lt;/p&gt;

&lt;p&gt;In this article we aim to show some of the essential Puppeteer operations along with a very simple example of extracting Google's first page results for a keyword, as a way of wrapping things up.&lt;br&gt;
Oh, and a full and working repository example with all the code shown in this post can be found &lt;a href="https://github.com/Emethium/starting-with-puppeteer"&gt;here if you need!&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  TL;DR
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;We'll learn how to do Puppeteer's basic configuration&lt;/li&gt;
&lt;li&gt;Also how to access Google's website and scrape the results page&lt;/li&gt;
&lt;li&gt;All of this getting into detail about a couple of commonly used API functions&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  First step, launching a Browser instance
&lt;/h2&gt;

&lt;p&gt;Before we can attempt to do anything, we need to launch a Browser instance in order to actually access a specific website. As the name suggests, we are actually going to launch a full-fledged Chromium browser (or not, we can run in &lt;a href="https://developers.google.com/web/updates/2017/04/headless-chrome"&gt;headless mode&lt;/a&gt;), capable of opening multiple tabs and as feature-rich as the browser you may be using right now.&lt;/p&gt;

&lt;p&gt;Launching a Browser can be as simple as typing &lt;code&gt;await puppeteer.launch()&lt;/code&gt;, but we should be aware that there is a &lt;a href="https://github.com/puppeteer/puppeteer/blob/master/docs/api.md#puppeteerlaunchoptions"&gt;huge amount of launch options&lt;/a&gt; available, whose use depends on your needs. Since we will be using Docker in the example, some additional tinkering is done here so we can run it inside a container without problems, but it still serves as a good example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;initializePuppeteer&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;launchArgs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="c1"&gt;// Required for Docker version of Puppeteer&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;--no-sandbox&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;--disable-setuid-sandbox&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="c1"&gt;// Disable GPU&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;--disable-gpu&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="c1"&gt;// This will write shared memory files into /tmp instead of /dev/shm,&lt;/span&gt;
  &lt;span class="c1"&gt;// because Docker’s default for /dev/shm is 64MB&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;--disable-dev-shm-usage&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;];&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;puppeteer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;launch&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;executablePath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/usr/bin/chromium-browser&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;launchArgs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;defaultViewport&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;768&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Working with tabs
&lt;/h2&gt;

&lt;p&gt;Since we have already initialized our Browser, we need to create tabs (or pages) to be able to access our very first website. Using the function we defined above, we can simply do something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;initializePuppeteer&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;newPage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;scrapSomeSite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Accessing a website
&lt;/h2&gt;

&lt;p&gt;Now that we have a proper page opened, we can manage to access a website and do something nice. By default, a newly created page always opens blank, so we must manually navigate somewhere specific. Again, a very simple operation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://www.google.com/?gl=us&amp;amp;hl=en&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;waitUntil&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;load&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are a couple of options in this operation that require extra attention and can heavily impact your implementation if misused:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;timeout&lt;/code&gt;: the default is 30s, but if we are dealing with a somewhat slow website or running behind proxies, we need to set a higher value to avoid undesired navigation timeouts.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;waitUntil&lt;/code&gt;: this one is really important, as different sites have completely different behaviors. It defines which page events must fire before the navigation is considered complete; not waiting for the right events can break your scraping code. We can use one or several of them, the default being &lt;code&gt;load&lt;/code&gt;. You can find all the available options &lt;a href="https://github.com/puppeteer/puppeteer/blob/v2.0.0/docs/api.md#pagegotourl-options"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
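&lt;p&gt;As a sketch, these two options can be centralized in a tiny helper so every navigation in your scraper stays consistent (the helper name and the 90-second threshold below are our own assumptions, not part of Puppeteer's API):&lt;/p&gt;

```javascript
// Build an options object for page.goto(); the values are illustrative assumptions.
function buildGotoOptions({ slow = false } = {}) {
  return {
    // Raise the timeout for slow sites or when running behind proxies.
    timeout: slow ? 90000 : 30000,
    // On heavy pages, also wait for the network to go quiet, not just the load event.
    waitUntil: slow ? ["load", "networkidle0"] : ["load"],
  };
}

// Usage: await page.goto("https://example.com", buildGotoOptions({ slow: true }));
```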




&lt;h2&gt;
  
  
  Page shenanigans
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Google's first page
&lt;/h3&gt;

&lt;p&gt;So, we finally opened a web page! That's nice. We have now arrived at the actually fun part.&lt;br&gt;
Let's follow the idea of scraping Google's first result page, shall we? Since we have already navigated to the main page, we need to do two different things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fill the form field with a keyword&lt;/li&gt;
&lt;li&gt;Press the search button&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Before we can interact with any element on a page, we need to find it through code first, so we can replicate all the steps necessary to accomplish our goal. This is a little detective work, and it may take some time to figure out.&lt;/p&gt;

&lt;p&gt;We are using the US Google page so we all see the same page; the link is in the code example above. If you take a look at Google's HTML code, you'll see that a lot of element properties are obfuscated with different hashes that change over time, so we have fewer options for reliably getting the element we desire.&lt;/p&gt;

&lt;p&gt;But, luckily for us, if we inspect the input field, we can find easy-to-spot properties such as &lt;code&gt;title="Search"&lt;/code&gt; on the element. If we run &lt;code&gt;document.querySelectorAll("[title=Search]")&lt;/code&gt; in the browser, we'll verify that exactly one element matches this query. One down.&lt;/p&gt;
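&lt;p&gt;You can even make the scraper fail fast when a selector stops being unique, which is handy given how often Google changes its markup. A minimal sketch (the helper below is our own, not part of Puppeteer's API):&lt;/p&gt;

```javascript
// Return the single element matching `selector`, or throw if the match is not
// unique; a loud failure beats silently scraping the wrong element.
async function uniqueHandle(page, selector) {
  const matches = await page.$$(selector); // runs document.querySelectorAll
  if (matches.length !== 1) {
    throw new Error(`expected exactly one match for "${selector}", got ${matches.length}`);
  }
  return matches[0];
}
```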

&lt;p&gt;We could apply the same logic to the submit button, but I'll take a different approach here on purpose. Since everything is inside a form, and we only have one on the page, we can &lt;strong&gt;forcefully submit it&lt;/strong&gt; to instantly navigate to the result screen, by simply calling &lt;code&gt;form.submit()&lt;/code&gt;. Two down.&lt;/p&gt;

&lt;p&gt;And how can we "find" these elements and perform these awesome operations in code? Easy-peasy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Filling the form&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;inputField&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;[title=Search]&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;inputField&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;puppeteer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;delay&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Forces form submission&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;$eval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;form&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;form&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;form&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;submit&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;waitForNavigation&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;waitUntil&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;load&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So we first grab the input field by executing &lt;code&gt;page.$(selectorGoesHere)&lt;/code&gt;, a function that actually runs &lt;code&gt;document.querySelector&lt;/code&gt; in the browser's context, returning the &lt;strong&gt;first&lt;/strong&gt; element that matches our selector. That being said, you have to make sure that you're fetching the right element with a correct and unique selector, otherwise things may not go the way they should. On a side note, to fetch &lt;strong&gt;all&lt;/strong&gt; the elements that match a specific selector, you may want to run &lt;code&gt;page.$$(selectorGoesHere)&lt;/code&gt;, which runs &lt;code&gt;document.querySelectorAll&lt;/code&gt; inside the browser's context.&lt;/p&gt;

&lt;p&gt;As for actually typing the keyword into the element, we can simply use the &lt;code&gt;page.type&lt;/code&gt; function with the content we want to search for. Keep in mind that, depending on the website, you may want to add a typing &lt;strong&gt;delay&lt;/strong&gt; (as we did in the example) to simulate human-like behavior. Not adding a delay may lead to weird things like input dropdowns not showing, or a plethora of other strange behaviors that we don't really want to face.&lt;/p&gt;

&lt;p&gt;Want to check if we filled everything correctly? Taking a screenshot and grabbing the page's full HTML for inspection is also very easy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;screenshot&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./firstpage&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;fullPage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;jpeg&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;html&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To submit the form, we are introduced to a &lt;strong&gt;very&lt;/strong&gt; useful function: &lt;code&gt;page.$eval(selector, pageFunction)&lt;/code&gt;. It actually runs &lt;code&gt;document.querySelector&lt;/code&gt; for its first argument, and passes the resulting element as the first argument of the provided page function. This is really useful if you have to run code that &lt;strong&gt;needs to be inside the browser's context to work&lt;/strong&gt;, as our &lt;code&gt;form.submit()&lt;/code&gt; does. Like the previous functions we mentioned, we also have the alternate &lt;code&gt;page.$$eval(selector, pageFunction)&lt;/code&gt; that works the same way, but differs by running &lt;code&gt;document.querySelectorAll&lt;/code&gt; for the provided selector instead.&lt;/p&gt;

&lt;p&gt;Since forcing the form submission causes a page navigation, we need to be explicit about which conditions to wait for before we continue with the scraping process. In this case, waiting until the navigated page fires a &lt;code&gt;load&lt;/code&gt; event is sufficient.&lt;/p&gt;
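&lt;p&gt;One caveat worth knowing: if the navigation finishes before we start waiting for it, &lt;code&gt;waitForNavigation&lt;/code&gt; can hang until its timeout. A common Puppeteer idiom is to start waiting &lt;strong&gt;before&lt;/strong&gt; triggering the navigation, via &lt;code&gt;Promise.all&lt;/code&gt;. A sketch (the helper name is ours):&lt;/p&gt;

```javascript
// Submit a form and wait for the resulting navigation without racing:
// the waitForNavigation promise is created before the submit fires.
async function submitAndWait(page, formSelector = "form") {
  const [response] = await Promise.all([
    page.waitForNavigation({ waitUntil: ["load"] }),
    page.$eval(formSelector, (form) => form.submit()),
  ]);
  return response;
}
```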

&lt;h3&gt;
  
  
  The result page
&lt;/h3&gt;

&lt;p&gt;With the result page loaded we can finally extract some data from it! We are looking only for the textual results, so we need to scope them down first.&lt;br&gt;
If we take a very careful look, the entire results container can be found with the &lt;code&gt;[id=search] &amp;gt; div &amp;gt; [data-async-context]&lt;/code&gt; selector. There are probably different ways to reach the same element, so that's not a definitive answer. If you find an easier path, let me know.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CF0wDw4F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kh1nvlqigh7wvbzwb38z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CF0wDw4F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kh1nvlqigh7wvbzwb38z.png" alt="The text result container" width="664" height="834"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And, luckily for us, every text entry here has the weird &lt;code&gt;.g&lt;/code&gt; class! So, if we query the container element we found for every sub-element that has this specific class (yes, this is also supported), we get direct access to all the results! And we can do all that with stuff we already mentioned:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rawResults&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;[id=search] &amp;gt; div &amp;gt; [data-async-context]&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;filteredResults&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;rawResults&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;$$eval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;.g&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
    &lt;span class="nb"&gt;Array&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;innerText&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;filteredResults&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So we use the &lt;code&gt;page.$&lt;/code&gt; function to take hold of that beautiful container we just saw, so that a &lt;code&gt;.$$eval&lt;/code&gt; function can be used on it to fetch all the sub-elements that have the &lt;code&gt;.g&lt;/code&gt; class, applying a custom function to those entries. As for the function, we just retrieve the &lt;code&gt;innerText&lt;/code&gt; of every element and remove the empty strings at the end, to tidy up our results.&lt;/p&gt;

&lt;p&gt;One thing that should not be overlooked here is that we had to use &lt;code&gt;Array.from()&lt;/code&gt; on the returned &lt;code&gt;results&lt;/code&gt; so we could actually make use of functions like &lt;code&gt;map&lt;/code&gt;, &lt;code&gt;filter&lt;/code&gt; and &lt;code&gt;reduce&lt;/code&gt;. The element passed to our &lt;code&gt;.$$eval&lt;/code&gt; callback is a &lt;strong&gt;&lt;code&gt;NodeList&lt;/code&gt;&lt;/strong&gt;, not an &lt;code&gt;Array&lt;/code&gt;, and it does not support some of the functions that we would otherwise find on the latter.&lt;/p&gt;
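&lt;p&gt;This conversion is easy to see outside the browser. Below, a plain object mimics a &lt;code&gt;NodeList&lt;/code&gt; (indexed entries plus a &lt;code&gt;length&lt;/code&gt;), and &lt;code&gt;Array.from&lt;/code&gt; unlocks &lt;code&gt;map&lt;/code&gt; and &lt;code&gt;filter&lt;/code&gt; on it, just like our page function does:&lt;/p&gt;

```javascript
// A NodeList-like object: indexed entries and a length, but no map/filter of its own.
const nodeListLike = {
  0: { innerText: "first result" },
  1: { innerText: "" }, // an empty entry we want to drop
  2: { innerText: "second result" },
  length: 3,
};

// Array.from turns any array-like into a real Array.
const texts = Array.from(nodeListLike)
  .map((node) => node.innerText)
  .filter((text) => text !== "");

console.log(texts); // [ 'first result', 'second result' ]
```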

&lt;p&gt;If we check the filtered results, we'll find something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  '\n' +
    'puppeteer/puppeteer: Headless Chrome Node.js API - GitHub\n' +
    'github.com › puppeteer › puppeteer\n' +
    'Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium. What can I do? Most things that you can do manually ...\n' +
    '‎Puppeteer API · ‎37 releases · ‎Puppeteer for Firefox · ‎How do I get puppeteer to ...',
  '\n' +
    'Puppeteer | Tools for Web Developers | Google Developers\n' +
    'developers.google.com › web › tools › puppeteer\n' +
    'Jan 28, 2020 - Puppeteer is a Node library which provides a high-level API to control headless Chrome or Chromium over the DevTools Protocol. It can also be configured to use full (non-headless) Chrome or Chromium.\n' +
    '‎Quick start · ‎Examples · ‎Headless Chrome: an answer · ‎Debugging tips',
  'People also ask\n' +
    'What is puppeteer used for?\n' +
    'How does a puppeteer work?\n' +
    'What is puppeteer JS?\n' +
    'Does puppeteer need Chrome installed?\n' +
    'Feedback',
...
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we have all the data we want right here! We could parse every entry in several different ways, and create full-fledged objects for further processing, but I'll leave that up to you. &lt;/p&gt;

&lt;p&gt;Our objective was to get our hands into the text data, and we managed just that. Congratulations to us, we finished!&lt;/p&gt;




&lt;h2&gt;
  
  
  Wrapping things up
&lt;/h2&gt;

&lt;p&gt;Our scope here was to present Puppeteer itself, along with a series of operations that could be considered basic for almost every web scraping context. This is merely a starting point for the more complex and deeper operations one may run into during a page's scraping process.&lt;/p&gt;

&lt;p&gt;We barely managed to scratch the surface of &lt;a href="https://github.com/puppeteer/puppeteer/blob/master/docs/api.md"&gt;Puppeteer's extensive API&lt;/a&gt;, one that you should really consider taking a serious look at. It's pretty well written and loaded with easy-to-understand examples for almost everything.&lt;/p&gt;

&lt;p&gt;This is just the first of a series of posts regarding Web scraping with Puppeteer that will (probably) come to fruition in the future. Stay tuned!&lt;/p&gt;

</description>
      <category>node</category>
      <category>beginners</category>
      <category>puppeteer</category>
    </item>
    <item>
      <title>Running your first Docker Image on ECS</title>
      <dc:creator>Nikolas Serafini</dc:creator>
      <pubDate>Wed, 04 Mar 2020 17:28:40 +0000</pubDate>
      <link>https://forem.com/zrp/running-your-first-docker-image-on-ecs-17c4</link>
      <guid>https://forem.com/zrp/running-your-first-docker-image-on-ecs-17c4</guid>
      <description>&lt;p&gt;Working with containers has become a major trend in Software Engineering over the past couple of years. Containers can offer several advantages for software development and application deployment, potentially taking away a lot of the problems faced by development and DevOps teams. We'll take a quick look at the basics and hows of running your first Docker image on Amazon Web Services.&lt;/p&gt;

&lt;p&gt;Of the enormous volume of different services offered by Amazon, the Elastic Container Service (ECS) is the one to go to when we need to run and deploy Dockerized applications.&lt;/p&gt;

&lt;p&gt;To fully deploy an application on said service, at least three little things are needed: a configured &lt;strong&gt;ECS Cluster&lt;/strong&gt;, at least &lt;strong&gt;one&lt;/strong&gt; image on ECR, and a &lt;strong&gt;Task Definition&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;Cluster&lt;/strong&gt; is a grouping of tasks and services, whose creation is free of charge by itself. Tasks must be placed inside a cluster to run, so creating one is mandatory. You'll probably want to create different clusters for different applications or execution contexts to keep everything tidy and organized.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;Task Definition&lt;/strong&gt; is an AWS resource used to describe the containers and volume definitions of ECS tasks. There one can define which Docker images to use, environment and network configuration, and instance requirements for task placement, among other settings. Which leads us to the definition of a &lt;strong&gt;Task&lt;/strong&gt;: an instance of a Task Definition, the running entity executing all the software in the included images, in the way we defined.&lt;/p&gt;
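&lt;p&gt;To make the concept concrete, here is a minimal sketch of what a Task Definition looks like in its JSON form (the family name, account ID, ports, and memory value are illustrative assumptions):&lt;/p&gt;

```json
{
  "family": "my-web-app",
  "networkMode": "bridge",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
      "memory": 256,
      "essential": true,
      "portMappings": [{ "containerPort": 3000, "hostPort": 80 }]
    }
  ]
}
```

&lt;p&gt;Everything we configure through the console in the next sections ends up as fields in a document like this one.&lt;/p&gt;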

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we can jump into the Task Definition configuration shenanigans, we must first have at &lt;strong&gt;least&lt;/strong&gt; one container image (since a Task Definition can make use of multiple containers at once) in an ECR repository, and an already-configured (or empty) Cluster.&lt;/p&gt;

&lt;p&gt;Docker images must be built and pushed to another AWS service, the Elastic Container Registry (ECR). The service is pretty straightforward: each different Docker image must be tagged and pushed to a separate repository so it can be referenced in our task definitions. It's a must.&lt;br&gt;
Amazon is friendly enough to give us all the commands needed for the basic operations: all you have to do is press that beautiful "&lt;strong&gt;View push commands&lt;/strong&gt;" button, and every needed step is explained in detail over there.&lt;/p&gt;
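&lt;p&gt;For reference, the push sequence those buttons produce looks roughly like the commands below. The region, account ID, and repository name are placeholders; newer AWS CLI versions replace the first line with &lt;code&gt;aws ecr get-login-password&lt;/code&gt; piped into &lt;code&gt;docker login&lt;/code&gt;.&lt;/p&gt;

```shell
# Sketch of the ECR push flow; substitute your own region, account ID and repository.
$(aws ecr get-login --no-include-email --region us-east-1)

docker build -t my-web-app .
docker tag my-web-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest
```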

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iW5z9W_8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lq2qqp5vo9v23k7iatms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iW5z9W_8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lq2qqp5vo9v23k7iatms.png" alt="ECR Repository page" width="800" height="118"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clusters can be created by clicking on the intuitive "&lt;strong&gt;Create Cluster&lt;/strong&gt;" button on the ECS main page. Make sure to select the correct template for your needs, depending on the OS you want to use; you'll probably want to stick with the "&lt;strong&gt;Linux + Networking&lt;/strong&gt;" one. Do not choose "&lt;strong&gt;Networking only&lt;/strong&gt;" unless you're really sure about what you're doing, as you will be dealing with yet another completely different service, &lt;strong&gt;AWS Fargate&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We have the option of creating an empty cluster, thus not allocating any kind of on-demand or spot instances, or of letting ECS provision instances for us. Since creating an empty cluster leaves us the hassle of providing running machines in some way so our tasks can actually run (and that's completely out of our scope here), I suggest you just pick a &lt;code&gt;t2.micro&lt;/code&gt; as the ECS Instance Type, as it's almost free. Keep in mind that if you want to meddle with Fargate, no instances need to be picked, as Amazon will take care of machine allocation for you, so you can just create an empty cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Task Definition
&lt;/h2&gt;

&lt;p&gt;With all the needed configuration at hand, we can finally safely access the "&lt;strong&gt;Create new Task Definition&lt;/strong&gt;" section found on the ECS Task Definition main page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cf43QV7Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gizi8tjy87e1jn16lu0x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cf43QV7Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gizi8tjy87e1jn16lu0x.png" alt="Create task definition location" width="633" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At first, Amazon will ask you to choose your Task Definition launch type based on where you want to run your tasks, giving you two choices: &lt;strong&gt;Fargate&lt;/strong&gt; and &lt;strong&gt;EC2&lt;/strong&gt;. Fargate is an AWS service that allows us to execute tasks without the hassle of explicitly allocating machines to run your containers, whereas EC2 is a launch type that requires you to manually configure several execution aspects, a lower-level operation. Both launch types work and are billed differently; you can find more detailed information &lt;a href="https://medium.com/r/?url=https%3A%2F%2Fdocs.aws.amazon.com%2FAmazonECS%2Flatest%2Fdeveloperguide%2Flaunch_types.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0yGHZPM6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8cbmicwrmcjb50amm765.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0yGHZPM6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8cbmicwrmcjb50amm765.png" alt="Launch Type" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the following page we will find the more meaningful and deeper configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Task Definition Name&lt;/strong&gt;: the name of your Task Definition. Obligatory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Requires Compatibilities&lt;/strong&gt;: is set following the Launch Type chosen in the first configuration page.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Mode&lt;/strong&gt;: sets the network mode ECS will use to start our containers. If Fargate is used, we have no option other than the &lt;strong&gt;awsvpc&lt;/strong&gt; mode. If you are not really sure what this means, &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#network_mode"&gt;take a look here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task Execution Role&lt;/strong&gt;: the role given to each task execution to grant it sufficient access and permissions to run correctly inside the AWS ecosystem. An Execution Role can have several different permission policies attached, giving specific access to the applications running under it. For instance, a common use case for ECS-executed tasks is to have access to ECR (so we can pull our images) and permission to use CloudWatch (Amazon's logging and monitoring service), meaning we would have to attach policies like "&lt;strong&gt;ecr:BatchGetImage&lt;/strong&gt;" and "&lt;strong&gt;logs:PutLogEvents&lt;/strong&gt;" to our Role. Do not confuse this with the Task Role: the Task Execution Role wraps the permissions conceded to the ECS agent responsible for placing the tasks, not the tasks themselves. Detailed information on Task Execution Roles can be found &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task Role&lt;/strong&gt;: the role given to the task itself. It follows the same principles listed above, so if your application needs to send messages to an SQS queue, communicate with a Redis cluster, or save data into an RDS database, you'll need to set all the specific policies in your custom Task Role configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task size&lt;/strong&gt;: a pretty straightforward section that allows you to set specific memory and CPU requirements for running your task. Keep in mind that if you try to run your task on a machine with lower specs than specified here, it won't start at all.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Container Definitions&lt;/strong&gt;: the fun part. You'll have to go through all the steps listed below for each container you want to include in the same Task Definition. You can set up as many containers as you need.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Container name&lt;/strong&gt;: self-explanatory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image&lt;/strong&gt;: the ECR repository image URL from which you'll pull the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory Limits&lt;/strong&gt;: defines the hard or soft memory limits (MiB) for the container. &lt;a href="https://stackoverflow.com/questions/44764135/aws-ecs-task-memory-hard-and-soft-limits"&gt;Link to universal knowledge if you do not know what this means&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Port mappings&lt;/strong&gt;: if your application uses specific ports for any kind of communication, you'll probably have to bind the host ports to your container's ports. Otherwise, nothing is actually going to work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Healthcheck&lt;/strong&gt;: sets up a container health check routine at specific timed intervals. Useful for continuously monitoring your container's health, and used by ECS to know when your application has actually started running. If you defined a specific route for this in your application, your command will look something like: &lt;code&gt;CMD-SHELL,curl -f http://localhost:port/healthcheck || exit 1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment&lt;/strong&gt;: lets you set the number of CPU units your container is going to use and configure all the environment variables your application needs to run correctly, among other settings. It also allows you to mark the container as essential, meaning that if it dies for some reason, the entire task is going to be killed shortly after.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Startup Dependency Ordering&lt;/strong&gt;: allows you to control the order in which the containers are going to start, and under what conditions. Not mandatory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Container Timeouts&lt;/strong&gt;: self-explanatory and not mandatory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Settings&lt;/strong&gt;: also self-explanatory. Not really mandatory, unless you have some advanced network shenanigans going on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage and Logging&lt;/strong&gt;: Couple of advanced setups. At a basic level you'll probably want to configure the Cloudwatch Log configuration, or simply let Amazon handle everything with the beautiful "&lt;strong&gt;Auto-configure CloudWatch Logs&lt;/strong&gt;" button.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security, Resource Limits, and Docker Labels&lt;/strong&gt;: advanced and context-specific configurations. Not mandatory. We'll not cover them here.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
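&lt;p&gt;To ground the role discussion above, this is roughly what a typical Task Execution Role's policy grants for the common ECR-pull plus CloudWatch-logs use case (a sketch based on the managed &lt;strong&gt;AmazonECSTaskExecutionRolePolicy&lt;/strong&gt;):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```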

&lt;p&gt;And that's it. There are a couple of other options listed on the task definition main page, such as Mesh and FireLens integration, but they are very specific and not really needed for your everyday task definitions. You can skip the rest and press the "&lt;strong&gt;Create&lt;/strong&gt;" button.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running your task
&lt;/h2&gt;

&lt;p&gt;After all this explanation, we want to see if the containerized application will actually run, right? So click on your created ECS cluster, hit the bottom "&lt;strong&gt;Tasks&lt;/strong&gt;" tab and click on the "&lt;strong&gt;Run new Task&lt;/strong&gt;" button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_YnDFMk0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dvo6i8nkts2msvc9dnoh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_YnDFMk0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dvo6i8nkts2msvc9dnoh.png" alt="Manually running a task" width="717" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the new screen, select "&lt;strong&gt;EC2&lt;/strong&gt;" as the Launch Type, fill in your task definition name and the name of the cluster where we'll run it, and click on "Run Task". If you set up the &lt;code&gt;t2.micro&lt;/code&gt; as we suggested, your application will boot in no time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rQt_3Xug--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/y27xdcmizs4volmv1yox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rQt_3Xug--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/y27xdcmizs4volmv1yox.png" alt="Manually running a task" width="556" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can check all the running information of your task in the "&lt;strong&gt;Task&lt;/strong&gt;" tab of your cluster.&lt;/p&gt;

&lt;p&gt;Since we did not enter into the merits of what kind of application you are trying to run, it's up to you to check whether your application is running as it should. You can check your application's logs by clicking on the task (inside the aforementioned "Task" tab) and looking for the "&lt;strong&gt;View logs in CloudWatch&lt;/strong&gt;" link under the desired container.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;Our main objective here was to show the simplest (and somewhat lengthy) path for deploying a containerized application on Amazon Web Services, without assuming anything about what kind of application one would actually run over there.&lt;/p&gt;

&lt;p&gt;A lot of deeper points were omitted since they serve no purpose in an introduction; a couple of complementary ones (such as configuring services and making Spot Fleet requests) were also left out for now, but they are going to be featured in future articles. That additional content will be complementary and crucial for a more consistent understanding of the multitude of ECS services and the overall environment.&lt;/p&gt;

&lt;p&gt;I invite you all to stay tuned for the next articles. Feel free to use the comments section below to post any questions or commentary; we'll take a look at them all, promise.&lt;br&gt;
Until next time, happy deploying.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
