<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: matt from bitLeaf.io</title>
    <description>The latest articles on Forem by matt from bitLeaf.io (@bitleaf_io).</description>
    <link>https://forem.com/bitleaf_io</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F373247%2Fb3ed3e70-f15d-4406-ba84-91947b1bc76a.jpg</url>
      <title>Forem: matt from bitLeaf.io</title>
      <link>https://forem.com/bitleaf_io</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/bitleaf_io"/>
    <language>en</language>
    <item>
      <title>Dotnet Core Docker Build and Run</title>
      <dc:creator>matt from bitLeaf.io</dc:creator>
      <pubDate>Wed, 20 May 2020 23:46:21 +0000</pubDate>
      <link>https://forem.com/bitleaf_io/dotnet-core-docker-build-and-run-48nj</link>
      <guid>https://forem.com/bitleaf_io/dotnet-core-docker-build-and-run-48nj</guid>
      <description>&lt;h2&gt;
  
  
  Let's Create a Dotnet Core Docker Image
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mhjRu2yH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1570224369883-efd53e222a18%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D2000%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mhjRu2yH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1570224369883-efd53e222a18%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D2000%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" alt="Dotnet Core Docker Build and Run"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a previous post we &lt;a href="https://dev.to/bitleaf_io/dotnet-core-dockerfile-2174"&gt;created our Dockerfile&lt;/a&gt;. In this next step we are going to use our Dockerfile script to create a Docker image. Then once we have the image we can run it in a container. So let the games begin.&lt;/p&gt;

&lt;p&gt;For our sample aspnet app, I'm just using the code from the asp.net samples repository...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/bitleaf-io/aspdotnetcore-docker"&gt;https://github.com/bitleaf-io/aspdotnetcore-docker&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, I'll clone that repository to my local machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Projects/bitleaf/dotnet-docker3 
❯ git clone git@github.com:bitleaf-io/aspdotnetcore-docker.git
Cloning into 'aspdotnetcore-docker'...
remote: Enumerating objects: 77, done.
remote: Counting objects: 100% (77/77), done.
remote: Compressing objects: 100% (60/60), done.
remote: Total 77 (delta 12), reused 73 (delta 12), pack-reused 0
Receiving objects: 100% (77/77), 683.33 KiB | 381.00 KiB/s, done.
Resolving deltas: 100% (12/12), done.
Projects/bitleaf/dotnet-docker3 took 10s 
❯ cd aspdotnetcore-docker/
aspdotnetcore-docker on master 
❯ ls
aspnetapp Dockerfile README.md
aspdotnetcore-docker on master 

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Great. We now have the sample repository to work with. It contains our &lt;a href="https://dev.to/bitleaf_io/dotnet-core-dockerfile-2174"&gt;Dockerfile we created in the previous post&lt;/a&gt; and some sample &lt;a href="http://asp.net"&gt;asp.net&lt;/a&gt; code from Microsoft.&lt;/p&gt;

&lt;p&gt;Let's take a look at our Dockerfile again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# &amp;lt;https://hub.docker.com/_/microsoft-dotnet-core&amp;gt;
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /source

# copy csproj and restore as distinct layers
COPY *.sln .
COPY aspnetapp/*.csproj ./aspnetapp/
RUN dotnet restore -r linux-musl-x64

# copy everything else and build app
COPY aspnetapp/. ./aspnetapp/
WORKDIR /source/aspnetapp
RUN dotnet publish -c release -o /app -r linux-musl-x64 --self-contained false --no-restore

# final stage/image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine
WORKDIR /app
COPY --from=build /app ./

ENTRYPOINT ["./aspnetapp"]

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This is the same file we created before and it's going to allow us to build and run our sample &lt;a href="http://asp.net"&gt;asp.net&lt;/a&gt; core application inside of Docker. Remember the three stages of Docker...&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Dockerfile&lt;/li&gt;
&lt;li&gt;Build a Docker image based on that Dockerfile&lt;/li&gt;
&lt;li&gt;Run the Docker container based on the Docker image&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, we have our Dockerfile. Now we need to build our Docker image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ docker build --pull -t aspnetapp .
Sending build context to Docker daemon 5.16MB
Step 1/12 : FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
3.1: Pulling from dotnet/core/sdk
376057ac6fa1: Pull complete 
5a63a0a859d8: Pull complete 
496548a8c952: Pull complete 
2adae3950d4d: Pull complete 
2e16b9e1a161: Pull complete 
00f5f595bb47: Pull complete 
218e0b1856d2: Pull complete 
Digest: sha256:134f0793a9a65a237430b5d98050cb63e7c8718b0d2e73f4f974384a98023d56
Status: Downloaded newer image for mcr.microsoft.com/dotnet/core/sdk:3.1
---&amp;gt; 8c4dd5ac064a
Step 2/12 : WORKDIR /source
 ---&amp;gt; Using cache
 ---&amp;gt; 0c604866f2f8
Step 3/12 : COPY *.sln .
 ---&amp;gt; 6eabf9f173e9
Step 4/12 : COPY aspnetapp/*.csproj ./aspnetapp/
 ---&amp;gt; 0b6d0ebff6fa
Step 5/12 : RUN dotnet restore -r linux-musl-x64
 ---&amp;gt; Running in d6593d853c89
  Determining projects to restore...
  Restored /source/aspnetapp/aspnetapp.csproj (in 3.7 sec).
Removing intermediate container d6593d853c89
 ---&amp;gt; 9ad377760bb8
Step 6/12 : COPY aspnetapp/. ./aspnetapp/
 ---&amp;gt; 997f14845ebc
Step 7/12 : WORKDIR /source/aspnetapp
 ---&amp;gt; Running in 1493794ee1a3
Removing intermediate container 1493794ee1a3
 ---&amp;gt; bff3d9c480ac
Step 8/12 : RUN dotnet publish -c release -o /app -r linux-musl-x64 --self-contained false --no-restore
 ---&amp;gt; Running in 11cb4f8d8956
Microsoft (R) Build Engine version 16.6.0+5ff7b0c9e for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

  aspnetapp -&amp;gt; /source/aspnetapp/bin/release/netcoreapp3.1/linux-musl-x64/aspnetapp.dll
  aspnetapp -&amp;gt; /source/aspnetapp/bin/release/netcoreapp3.1/linux-musl-x64/aspnetapp.Views.dll
  aspnetapp -&amp;gt; /app/
Removing intermediate container 11cb4f8d8956
 ---&amp;gt; e3f76099643a
Step 9/12 : FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine
3.1-alpine: Pulling from dotnet/core/aspnet
cbdbe7a5bc2a: Already exists 
80caa14a70db: Pull complete 
08a0d3029c8d: Pull complete 
76c18e0100a6: Pull complete 
Digest: sha256:d9275e02fa9f52a31917a5ef7c0612811c64d2a6a401eb9654939595dab7e5de
Status: Downloaded newer image for mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine
 ---&amp;gt; 57c690133803
Step 10/12 : WORKDIR /app
 ---&amp;gt; Running in e07191c61223
Removing intermediate container e07191c61223
 ---&amp;gt; 0edcf08059b6
Step 11/12 : COPY --from=build /app ./
 ---&amp;gt; 12e89067c073
Step 12/12 : ENTRYPOINT ["./aspnetapp"]
 ---&amp;gt; Running in 421522f32772
Removing intermediate container 421522f32772
 ---&amp;gt; bb6e34978e96
Successfully built bb6e34978e96
Successfully tagged aspnetapp:latest
aspdotnetcore-docker on master via •NET v3.1.300 took 18s 
❯

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;All we did was run &lt;code&gt;docker build --pull -t aspnetapp .&lt;/code&gt; in the root of our repository directory. So what did it do? First, let's look at the docker command...&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--pull&lt;/code&gt; tells Docker to pull the latest versions of the base images specified in our Dockerfile. In our case we wanted the ASP.NET Core SDK and runtime 3.1 images, so this switch copied those images down from Docker Hub to our local machine for us to use in our own Docker image.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-t aspnetapp&lt;/code&gt; names our Docker image &lt;code&gt;aspnetapp&lt;/code&gt;. We'll use this to reference our image when we are ready to run it. Also note the trailing &lt;code&gt;.&lt;/code&gt; at the end. That tells Docker to use the current directory as the build context and to look there for the Dockerfile.&lt;/p&gt;

&lt;p&gt;After we run that we see the Dockerfile in action. Everything is occurring inside of Docker. It's bringing down the SDK and Runtime images and copying source files from our machine, but the build and execution is all inside of Docker.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Aside: By default Docker looks to Docker Hub to locate and pull down images. You can point to different Docker registries and even create your own private registry to host your images.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let's take a look at what images we now have on our local machine...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ docker images
REPOSITORY                             TAG          IMAGE ID       CREATED             SIZE
aspnetapp                              latest       bb6e34978e96   About an hour ago   110MB
mcr.microsoft.com/dotnet/core/sdk      3.1          8c4dd5ac064a   About an hour ago   705MB
mcr.microsoft.com/dotnet/core/aspnet   3.1-alpine   57c690133803   About an hour ago   105MB

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;By running the &lt;code&gt;docker images&lt;/code&gt; command we can see what images we have downloaded to our local machine. You can see our newly created Docker image along with the Microsoft Dotnet SDK and runtime.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running our Dotnet Core Docker Image
&lt;/h2&gt;

&lt;p&gt;Now the time has come to actually run our aspnet core code inside of Docker.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ docker run --rm -p 8000:80 aspnetapp
warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
      Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {220ed00c-80b6-46a5-a59c-62caa78ace19} may be persisted to storage in unencrypted form.
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We tell Docker to run our image by running &lt;code&gt;docker run --rm -p 8000:80 aspnetapp&lt;/code&gt;. Let's break down these options.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--rm&lt;/code&gt; tells Docker to remove the container when it's done running. This is just so we don't leave a container on our system since this is just for testing.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-p 8000:80&lt;/code&gt; tells Docker to map a network port from &lt;code&gt;80&lt;/code&gt; on the Docker container (which is the port our aspnet core app is running on inside the container) to our host machine port &lt;code&gt;8000&lt;/code&gt;. This will let us open our browser on our machine and access the code that is running inside of the Docker container.&lt;/p&gt;
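&lt;p&gt;The &lt;code&gt;HOST:CONTAINER&lt;/code&gt; format of the &lt;code&gt;-p&lt;/code&gt; value can be sketched with a little parsing code (an illustration of the convention, not Docker's actual parser)...&lt;/p&gt;

```python
# Illustrative sketch of how a "-p HOST:CONTAINER" value breaks down.
# This is not Docker's own code, just a demonstration of the format.
def parse_port_mapping(spec):
    host, container = spec.split(":", 1)
    return {"host_port": int(host), "container_port": int(container)}

mapping = parse_port_mapping("8000:80")
# Traffic hitting host port 8000 is forwarded to container port 80.
print(mapping)
```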

&lt;p&gt;&lt;code&gt;aspnetapp&lt;/code&gt; tells Docker what image to run. This is the name we gave it earlier. By default Docker always gets the latest version of the image (just as if we had added the tag &lt;code&gt;:latest&lt;/code&gt; after &lt;code&gt;aspnetapp&lt;/code&gt;, like &lt;code&gt;aspnetapp:latest&lt;/code&gt;). If we want a particular version we would add that tag, like &lt;code&gt;aspnetapp:1.0.0&lt;/code&gt;. Tags can be anything you like; they don't have to be in a particular format.&lt;/p&gt;
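&lt;p&gt;That tag-defaulting rule can be sketched like this (a simplified illustration of the convention for plain image names, not Docker's actual code)...&lt;/p&gt;

```python
# Sketch of the image-reference convention: a missing tag means ":latest".
# Simplified: ignores registries with ports, digests, etc.
def split_image_ref(ref):
    if ":" in ref:
        name, tag = ref.rsplit(":", 1)
    else:
        name, tag = ref, "latest"
    return name, tag

print(split_image_ref("aspnetapp"))        # ('aspnetapp', 'latest')
print(split_image_ref("aspnetapp:1.0.0"))  # ('aspnetapp', '1.0.0')
```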

&lt;p&gt;If we open our machine's browser and go to...&lt;/p&gt;

&lt;p&gt;&lt;a href="http://localhost:8000/"&gt;http://localhost:8000/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;...you should see the beautiful 'Welcome to .NET Core' page. Again that page you are accessing is running inside of that Docker container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VB7uukZo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bitleaf.io/blog/content/images/2020/05/dotnetcore-welcome.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VB7uukZo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bitleaf.io/blog/content/images/2020/05/dotnetcore-welcome.png" alt="Dotnet Core Docker Build and Run"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now why not take a look inside the container that is based on our Docker image? Open up another terminal and let's see what Docker containers we currently have on the system (including our currently running one)...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ docker container ls
CONTAINER ID   IMAGE       COMMAND         CREATED          STATUS          PORTS                       NAMES
610cddb6f411   aspnetapp   "./aspnetapp"   47 seconds ago   Up 45 seconds   0.0.0.0:8000-&amp;gt;80/tcp   zen_mendel

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Cool, our container is running. The container is Linux based, so let's shell into it. Take note of the Container ID above and run the docker exec command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ docker exec -it 610cddb6f411 sh
/app # ls
appsettings.Development.json aspnetapp aspnetapp.Views.pdb aspnetapp.dll aspnetapp.runtimeconfig.json wwwroot
appsettings.json aspnetapp.Views.dll aspnetapp.deps.json aspnetapp.pdb web.config

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So there's our compiled application inside of the Docker container, which is based on our image. What does that docker command actually do?&lt;/p&gt;

&lt;p&gt;&lt;code&gt;exec&lt;/code&gt; executes a command inside of a running container. In our case it is executing &lt;code&gt;sh&lt;/code&gt;, which is the shell.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-it&lt;/code&gt; tells Docker we want to have an interactive terminal to work with, so we can type in commands against the container.&lt;/p&gt;

&lt;p&gt;Now enter &lt;code&gt;exit&lt;/code&gt; to disconnect from the container's shell. Let's go back to our original terminal with the Docker container running and hit &lt;code&gt;ctrl+c&lt;/code&gt; to kill it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Delete Docker Images
&lt;/h2&gt;

&lt;p&gt;If we run &lt;code&gt;docker images&lt;/code&gt; again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ docker images
REPOSITORY                             TAG          IMAGE ID       CREATED             SIZE
aspnetapp                              latest       bb6e34978e96   About an hour ago   110MB
mcr.microsoft.com/dotnet/core/sdk      3.1          8c4dd5ac064a   About an hour ago   705MB
mcr.microsoft.com/dotnet/core/aspnet   3.1-alpine   57c690133803   About an hour ago   105MB

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We see our three images. We can remove images from our local machine with the &lt;code&gt;docker image rm&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ docker image rm aspnetapp
Untagged: aspnetapp:latest
Deleted: sha256:bb6e34978e96ed4eea3fc64adf943a09b9e798e0926b4c2308e341e3e306e76f
Deleted: sha256:12e89067c0734b17e2a60685ad200c4f11ea63db9854ba8cf473ea4a732b7cc3
Deleted: sha256:2d6b8df9e2adf8c4e144d1dafc47dbc25a73f856f0d761d924d8f84ea810c07b
Deleted: sha256:0edcf08059b6534b1bf18c330152f413c13ba7b7923fac9b4327abf82c8cec4b
Deleted: sha256:670db31ab972c44245b0fb18262d6e4a1367f02ad30a9207e2b458fd4d7bbe8f

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This removed our custom &lt;code&gt;aspnetapp&lt;/code&gt; Docker image. We can always build it again if we need it. You could have also used the Image ID instead of the name.&lt;/p&gt;

&lt;p&gt;Now if I list our images again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ docker images
REPOSITORY                             TAG          IMAGE ID       CREATED             SIZE
mcr.microsoft.com/dotnet/core/sdk      3.1          8c4dd5ac064a   About an hour ago   705MB
mcr.microsoft.com/dotnet/core/aspnet   3.1-alpine   57c690133803   About an hour ago   105MB

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can see our custom Docker image is now gone from our local machine.&lt;/p&gt;

&lt;p&gt;Hopefully you can see the power of this. You don't have to spend deployment weekend making sure your hosting server has all the right versions of the software. You just build and deploy your Docker image, and it will work anywhere Docker is running. So much less stress. Obviously there's a little more to it when managing your company's services through container orchestration like Kubernetes, but you can see how, from a developer standpoint, this helps remove those wonderful deployment surprises.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>docker</category>
    </item>
    <item>
      <title>Dotnet Core Dockerfile</title>
      <dc:creator>matt from bitLeaf.io</dc:creator>
      <pubDate>Wed, 20 May 2020 01:52:57 +0000</pubDate>
      <link>https://forem.com/bitleaf_io/dotnet-core-dockerfile-2174</link>
      <guid>https://forem.com/bitleaf_io/dotnet-core-dockerfile-2174</guid>
      <description>&lt;h2&gt;
  
  
  Welcome to the Wonderful World of Docker
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MMm_M0H7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1520038410233-7141be7e6f97%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D2000%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MMm_M0H7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1520038410233-7141be7e6f97%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D2000%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" alt="Dotnet Core Dockerfile"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'll take 20 seconds of your time to say this... The world of spending weekends publishing and deploying to your company's Windows IIS servers is coming to an end. I've spent many a weekend doing so, and I'm thrilled to see the light at the end of the tunnel. That light is containerization. Containerization is nothing new; it was around well before Docker. Why is it so great, though? One of the many things that make it so great is that it gives you your weekend back. It does this by setting up a complete environment that builds and deploys your .NET code in a single command. Ok, that's great, but you can do that now. The great thing is it's now portable. I can write my code on Windows, deploy to a Linux Docker image, and take that image and run it anywhere Docker is installed. I don't need to worry about what's installed on the destination. As long as it has Docker, you're golden. So, let's start our journey at the Dockerfile.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Flow of Docker
&lt;/h2&gt;

&lt;p&gt;When I first started with Docker, it all seemed a bit scary. What is this voodoo? What is a Dockerfile? What is a Docker image? What is a Docker container? Like with most tech things, once you get your hands on it and start understanding it, you'll see it's not scary, and in this case it's actually pretty awesome.&lt;/p&gt;

&lt;p&gt;The flow for Docker is simple...&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Dockerfile&lt;/li&gt;
&lt;li&gt;Create Docker image from Dockerfile&lt;/li&gt;
&lt;li&gt;Run Docker container based on Docker image&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you're familiar with VMs, the workflow isn't really any different. You download some VM image, like Ubuntu, and you run it. Same thing for Docker, just without the overhead of emulating the whole OS. There are lots of pretty &lt;a href="https://www.docker.com/blog/containers-replacing-virtual-machines/"&gt;Docker vs VM&lt;/a&gt; pictures that explain the difference.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1. The Dockerfile
&lt;/h2&gt;

&lt;p&gt;When you tell Visual Studio to enable Docker on your solution, it adds a Dockerfile for you. What is a Dockerfile? It's nothing more than a scripting language like a batch file or bash, etc. You tell it to go out to Docker, grab some base image, copy some files into the image and do some stuff. It's really nothing magical at all, just a simple script of commands.&lt;/p&gt;

&lt;p&gt;Here's a sample Dotnet Dockerfile...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# &amp;lt;https://hub.docker.com/_/microsoft-dotnet-core&amp;gt;
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /source

# copy csproj and restore as distinct layers
COPY *.sln .
COPY aspnetapp/*.csproj ./aspnetapp/
RUN dotnet restore -r linux-musl-x64

# copy everything else and build app
COPY aspnetapp/. ./aspnetapp/
WORKDIR /source/aspnetapp
RUN dotnet publish -c release -o /app -r linux-musl-x64 --self-contained false --no-restore

# final stage/image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine
WORKDIR /app
COPY --from=build /app ./

ENTRYPOINT ["./aspnetapp"]

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Let's break down what's going on there. It's actually relatively simple once you see it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Images and Tags
&lt;/h3&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# &amp;lt;https://hub.docker.com/_/microsoft-dotnet-core&amp;gt;
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /source

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://hub.docker.com/_/microsoft-dotnet-core"&gt;Docker hub&lt;/a&gt; is the major hub of all public Docker images. A Docker image is the package that gets created from your Dockerfile script. In this case we are saying we want to download to our local machine the image that contains the dotnet 3.1 sdk so we can use that to build our dotnet core 3.1 solution. If you go to Docker hub and look at the various &lt;a href="https://hub.docker.com/_/microsoft-dotnet-core-sdk/"&gt;dotnet sdk docker images&lt;/a&gt; Microsoft offers, you'll see you have quite a few options. They are broken down into dotnet sdk version and host OS. For example, if you look at our one above you'll see it end's in &lt;code&gt;:3.1&lt;/code&gt;. That is called a Docker Tag. It tells Docker which particular version of the image you want. In our case we want Dotnet Core SDK v3.1 running on Debian Linux. If instead I put &lt;code&gt;:2.1-alpine&lt;/code&gt; I'm telling docker I want the Dotnet Core SDK v2.1 running on Alpine Linux. How do I know that, because it tells me that right on the Docker hub page. When you're looking for a particular image, just head out to Docker Hub and the page for the image will list the available versions.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;AS build&lt;/code&gt; at the end of the &lt;code&gt;FROM&lt;/code&gt; just lets us reference this image later on down in the script, as we'll show in a bit. It will let us say, in effect, "take the output you just built in the &lt;code&gt;build&lt;/code&gt; stage" so we can run it.&lt;/p&gt;

&lt;p&gt;You'll also notice we run &lt;code&gt;WORKDIR /source&lt;/code&gt;. That's nothing more than setting the working directory in the image to /source (creating it if it doesn't exist).&lt;/p&gt;

&lt;h3&gt;
  
  
  Building Your Dotnet Core Solution in Docker
&lt;/h3&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# copy csproj and restore as distinct layers
COPY *.sln .
COPY aspnetapp/*.csproj ./aspnetapp/
RUN dotnet restore -r linux-musl-x64

# copy everything else and build app
COPY aspnetapp/. ./aspnetapp/
WORKDIR /source/aspnetapp
RUN dotnet publish -c release -o /app -r linux-musl-x64 --self-contained false --no-restore

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Our next section does the actual build of our Dotnet core solution. Let's go through it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;COPY *.sln .&lt;/code&gt; and &lt;code&gt;COPY aspnetapp/*.csproj ./aspnetapp/&lt;/code&gt; both oddly enough copy stuff. They copy it from your local machine to inside the docker image. In the first case we are copying our solution file to the root of the current directory in the Docker image (which is /source as we set above). The second command copies our csproj files to a new directory in the docker image of &lt;code&gt;/source/aspnetapp&lt;/code&gt;. So now we have our solution and csproj files inside of the docker image (well they will be once we build this thing).&lt;/p&gt;

&lt;p&gt;Next we &lt;code&gt;RUN dotnet restore -r linux-musl-x64&lt;/code&gt;, which is just running the standard dotnet command to restore our packages. The &lt;code&gt;-r&lt;/code&gt; switch specifies the runtime identifier; &lt;code&gt;linux-musl-x64&lt;/code&gt; is the musl-based identifier used by lightweight Alpine Linux, since we'll be using that as our runtime later on.&lt;/p&gt;
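&lt;p&gt;A few common runtime identifiers (RIDs), listed here as a quick illustrative reference (see Microsoft's RID catalog for the full list)...&lt;/p&gt;

```python
# A few common .NET runtime identifiers (RIDs), for illustration only.
rids = {
    "linux-x64": "glibc-based Linux (e.g. Debian, Ubuntu)",
    "linux-musl-x64": "musl-based Linux (e.g. Alpine)",
    "win-x64": "64-bit Windows",
    "osx-x64": "macOS",
}

# Our Dockerfile targets Alpine, hence linux-musl-x64.
print(rids["linux-musl-x64"])
```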

&lt;p&gt;&lt;code&gt;COPY aspnetapp/. ./aspnetapp/&lt;/code&gt; now copies the rest of our source code over to the Docker image in preparation for the build. So we will actually be building inside of the Docker image. You don't need to pre-build on your machine. We then change the directory in the image to our code where the csproj file is located at &lt;code&gt;/source/aspnetapp&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Finally we build/publish our aspnet app...&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RUN dotnet publish -c release -o /app -r linux-musl-x64 --self-contained false --no-restore&lt;/code&gt; ... We are just running the dotnet publish command outputting our published code to the /app directory inside the Docker image.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running your Dotnet Core Project in Docker
&lt;/h3&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# final stage/image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine
WORKDIR /app
COPY --from=build /app ./

ENTRYPOINT ["./aspnetapp"]

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We now have our application built inside the /app directory in our image. Now we want to bring down the runtime to actually run our application. A similar FROM command...&lt;/p&gt;

&lt;p&gt;&lt;code&gt;FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine&lt;/code&gt; ... starts our final stage from the image that will run our application. In our case we are saying we want the dotnet core version 3.1 runtime running on Alpine Linux.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Aside: Why would we pick one host OS over the other? In most cases it honestly doesn't matter; for most of your projects they'll all work equally well. Alpine Linux has become quite popular in the Docker community for the simple reason that it is very small and lightweight. I generally try to go with Alpine first for that reason.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now that we've brought down the dotnet core runtime image, we change our working directory to /app in the image. Then we are going to copy our built code into that directory.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;COPY --from=build /app ./&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This copy is different in that it specifies that we want to copy the contents of the /app directory in the &lt;code&gt;build&lt;/code&gt; stage we set up above into the runtime image's current directory. So we're creating a Dockerfile that builds an image of our application, using both the SDK and runtime images along the way. Again, all this is happening inside Docker containers; nothing is going on with our local machine. Fantastic.&lt;/p&gt;

&lt;p&gt;Finally the magic...We run our application.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ENTRYPOINT ["./aspnetapp"]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;ENTRYPOINT is basically like CMD (which Docker also has) in that it specifies the executable to run when the container starts, which in our case is our published app.&lt;/p&gt;

&lt;p&gt;Next step for another &lt;a href="http://bitleaf.io"&gt;BitLeaf.io&lt;/a&gt; blog post is to actually run this thing and see what we can do with it.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>docker</category>
    </item>
    <item>
      <title>Creating a DigitalOcean Droplet with Terraform - Part 3 of 3 - Cloud-init</title>
      <dc:creator>matt from bitLeaf.io</dc:creator>
      <pubDate>Tue, 28 Apr 2020 00:57:06 +0000</pubDate>
      <link>https://forem.com/bitleaf_io/creating-a-digitalocean-droplet-with-terraform-part-3-of-3-cloud-init-358d</link>
      <guid>https://forem.com/bitleaf_io/creating-a-digitalocean-droplet-with-terraform-part-3-of-3-cloud-init-358d</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oh7sG17G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1558494949-ef010cbdcc31%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D2000%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oh7sG17G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1558494949-ef010cbdcc31%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D2000%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" alt="Creating a DigitalOcean Droplet with Terraform - Part 3 of 3 - Cloud-init"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In parts &lt;a href="https://dev.to/bitleaf_io/creating-a-digitalocean-droplet-with-terraform-part-1-of-3-1pko"&gt;1 and 2 of our Creating a DigitalOcean Droplet with Terraform&lt;/a&gt; series we set up our Terraform configuration and created a DigitalOcean droplet and volume. In this final part we are going to configure that droplet so that when it gets created it already has the OS set up how we want it.&lt;/p&gt;

&lt;p&gt;To set up the droplet's operating system as part of our Terraform configuration we are going to use cloud-init. There are different ways to go about this, but cloud-init is the standard for setting up your cloud-based instances in an automated fashion. There is a lot you can do with it, and there are some &lt;a href="https://cloudinit.readthedocs.io/en/latest/topics/examples.html"&gt;examples of what cloud-init can do&lt;/a&gt; on the &lt;a href="https://cloudinit.readthedocs.io/en/latest/index.html"&gt;cloud-init site&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Cloud-init is just another configuration file that we can call from our Terraform configuration. Cloud-init uses the YAML format, so when working with cloud-init files, make sure to watch your indentation.&lt;/p&gt;

&lt;p&gt;So let's take a look at the cloud-init file and then we'll go through what it's doing in our example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#cloud-config

package_update: true
package_upgrade: true
package_reboot_if_required: true

groups:
    - docker

users:
    - name: leaf
      lock_passwd: true
      shell: /bin/bash
      ssh_authorized_keys:
      - ${init_ssh_public_key}
      groups: docker
      sudo: ALL=(ALL) NOPASSWD:ALL

packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
  - unattended-upgrades

runcmd:
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - apt-get update -y
  - apt-get install -y docker-ce docker-ce-cli containerd.io
  - systemctl start docker
  - systemctl enable docker
  - curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - chmod +x /usr/local/bin/docker-compose

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We start off with the line...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#cloud-config

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This line is critical: it tells cloud-init that this is a cloud-config style file. You can also start the file with a standard bash shebang instead, in which case cloud-init runs it as a shell script.&lt;/p&gt;
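&lt;p&gt;For example, as an alternative to the cloud-config format, you could hand cloud-init a plain shell script as the user data. This is just a hypothetical sketch to show the idea, not part of our setup:&lt;br&gt;
&lt;/p&gt;

```shell
#!/bin/bash
# The shebang (instead of #cloud-config) tells cloud-init to run
# this file once as a shell script on the instance's first boot.
apt-get update -y
apt-get upgrade -y
```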

&lt;p&gt;Our next three lines...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package_update: true
package_upgrade: true
package_reboot_if_required: true

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;These simply tell the operating system to update all packages to their latest versions and reboot if required. This is great and saves you the manual effort.&lt;/p&gt;

&lt;p&gt;Now for the user/group information...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;groups:
    - docker

users:
    - name: leaf
      lock_passwd: true
      shell: /bin/bash
      ssh_authorized_keys:
      - ${init_ssh_public_key}
      groups: docker
      sudo: ALL=(ALL) NOPASSWD:ALL

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Here we tell the operating system to create a new group 'docker' and a new user 'leaf'. For the user we set:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;name: leaf ... This sets the username to 'leaf'&lt;/li&gt;
&lt;li&gt;lock_passwd: true ... This turns off password logins&lt;/li&gt;
&lt;li&gt;ssh_authorized_keys ... This is a variable that contains our DigitalOcean SSH key. This will let you login as 'leaf' using your existing SSH key. The &lt;code&gt;${init_ssh_public_key}&lt;/code&gt; is going to be set when we add the cloud-init call to our Terraform configuration.&lt;/li&gt;
&lt;li&gt;groups: docker ... Assign the user to the 'docker' group&lt;/li&gt;
&lt;li&gt;sudo: ALL=(ALL) NOPASSWD:ALL ... This adds the user to the sudoers configuration with passwordless sudo access&lt;/li&gt;
&lt;/ul&gt;
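&lt;p&gt;For reference, cloud-init typically turns that 'sudo' entry into a drop-in file (commonly '/etc/sudoers.d/90-cloud-init-users') rather than editing '/etc/sudoers' directly. The resulting entry looks something like this:&lt;br&gt;
&lt;/p&gt;

```
# /etc/sudoers.d/90-cloud-init-users (typical result; path may vary)
leaf ALL=(ALL) NOPASSWD:ALL
```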

&lt;p&gt;Now for the packages...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
  - unattended-upgrades

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Most of these packages set up basic, more secure package management and make package updates easier. The last one, 'unattended-upgrades', is fantastic for cloud servers: it automatically installs all security-related updates so you don't have to keep logging in and patching (at least for security patches). You can of course add your own packages to the list.&lt;/p&gt;
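&lt;p&gt;As a rough sketch, unattended-upgrades is driven by APT configuration; on Ubuntu the periodic settings commonly live in a file like the one below (the exact file names and defaults can vary by release):&lt;br&gt;
&lt;/p&gt;

```
# /etc/apt/apt.conf.d/20auto-upgrades (typical contents)
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```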

&lt;p&gt;The final piece does our docker and docker-compose installs...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;runcmd:
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - apt-get update -y
  - apt-get install -y docker-ce docker-ce-cli containerd.io
  - systemctl start docker
  - systemctl enable docker
  - curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - chmod +x /usr/local/bin/docker-compose

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;These commands are simply the standard way to install docker and docker-compose on Ubuntu. Nothing is different from running them manually; each step is just a separate YAML list entry.&lt;/p&gt;

&lt;p&gt;So that's it for our cloud-init example. Now let's see how to run it as part of our Terraform setup.&lt;/p&gt;




&lt;p&gt;Back in Terraform land we need to update our configuration to call our cloud-init.yaml file.&lt;/p&gt;

&lt;p&gt;Here is our updated 'droplet_volume.tf' file with our cloud-init pieces included. I'll highlight and discuss those pieces below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Specify the Terraform provider to use
provider "digitalocean" {
  token = var.do_token
}

data "template_file" "cloud-init-yaml" {
  template = file("${path.module}/files/cloud-init.yaml")
  vars = {
    init_ssh_public_key = file(var.ssh_public_key)
  }
}

# Setup a DO volume
resource "digitalocean_volume" "bitleaf_volume_1" {
  region = "nyc3"
  name = "biteaf-volume-1"
  size = 5
  initial_filesystem_type = "ext4"
  description = "bitleaf volume 1"
}

# Setup a second DO volume
resource "digitalocean_volume" "bitleaf_volume_2" {
  region = "nyc3"
  name = "biteaf-volume-2"
  size = 5
  initial_filesystem_type = "ext4"
  description = "bitleaf volume 2"
}

# Setup a DO droplet
resource "digitalocean_droplet" "bitleaf_server_1" {
  image = var.droplet_image
  name = "bitleaf-server-1"
  region = var.region
  size = var.droplet_size
  private_networking = var.private_networking
  ssh_keys = [
    var.ssh_key_fingerprint
  ]
  user_data = data.template_file.cloud-init-yaml.rendered
}

# Connect the volume to the droplet
resource "digitalocean_volume_attachment" "bitleaf_volume_1" {
  droplet_id = digitalocean_droplet.bitleaf_server_1.id
  volume_id = digitalocean_volume.bitleaf_volume_1.id
}

# Connect the second volume to the droplet
resource "digitalocean_volume_attachment" "bitleaf_volume_2" {
  droplet_id = digitalocean_droplet.bitleaf_server_1.id
  volume_id = digitalocean_volume.bitleaf_volume_2.id
}

# Output the public IP address of the new droplet
output "public_ip_server" {
  value = digitalocean_droplet.bitleaf_server_1.ipv4_address
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Most of our Terraform configuration is the same. We did add one new block.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "template_file" "cloud-init-yaml" {
  template = file("${path.module}/files/cloud-init.yaml")
  vars = {
    init_ssh_public_key = file(var.ssh_public_key)
  }
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This is a data block. In my setup the 'cloud-init.yaml' file lives in a 'files' directory, so the &lt;code&gt;template&lt;/code&gt; parameter simply reads that file in. It's called a 'template' parameter because it allows us to replace variable entries in our cloud-init file; in our case we want to inject our SSH key. In the &lt;code&gt;vars&lt;/code&gt; parameter we set the &lt;code&gt;init_ssh_public_key&lt;/code&gt; variable to our local public key. The &lt;code&gt;file()&lt;/code&gt; function reads the contents of the file at the path we specified for &lt;code&gt;ssh_public_key&lt;/code&gt; in our 'variables.tf' file.&lt;/p&gt;
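&lt;p&gt;The substitution itself works like ordinary &lt;code&gt;${...}&lt;/code&gt; string templating. As a rough illustration (this is not Terraform's implementation, just the same idea using Python's standard library):&lt;br&gt;
&lt;/p&gt;

```python
from string import Template

# A trimmed, hypothetical stand-in for the cloud-init.yaml contents.
cloud_init = Template(
    "users:\n"
    "    - name: leaf\n"
    "      ssh_authorized_keys:\n"
    "      - ${init_ssh_public_key}\n"
)

# The template_file data source does the equivalent of this substitution
# when it produces its .rendered attribute.
rendered = cloud_init.substitute(init_ssh_public_key="ssh-rsa AAAA... user@host")
print(rendered)
```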

&lt;p&gt;The other thing we added to our Terraform configuration is&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user_data = data.template_file.cloud-init-yaml.rendered

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;under the Droplet resource block. This populates the &lt;code&gt;user_data&lt;/code&gt; DigitalOcean property with the contents of our rendered 'cloud-init.yaml' file. &lt;em&gt;Rendered&lt;/em&gt; simply means that the template has been loaded and any variable substitutions have been made.&lt;/p&gt;

&lt;p&gt;That's it. Just those couple of changes to our Terraform configuration and we'll have a nicely customized Droplet ready to go.&lt;/p&gt;

&lt;p&gt;I found the best way to learn what you can do with cloud-init is to check out their &lt;a href="https://cloudinit.readthedocs.io/en/latest/topics/examples.html"&gt;cloud-init examples page&lt;/a&gt;. The great thing about cloud-init is that it's a standard supported by many cloud providers. Nothing here is specific to one provider, so your knowledge will be portable.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>digitalocean</category>
      <category>devops</category>
    </item>
    <item>
      <title>Creating a DigitalOcean Droplet with Terraform - Part 2 of 3</title>
      <dc:creator>matt from bitLeaf.io</dc:creator>
      <pubDate>Sat, 25 Apr 2020 15:04:06 +0000</pubDate>
      <link>https://forem.com/bitleaf_io/creating-a-digitalocean-droplet-with-terraform-part-2-of-3-hda</link>
      <guid>https://forem.com/bitleaf_io/creating-a-digitalocean-droplet-with-terraform-part-2-of-3-hda</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YXb-D4gW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bitleaf.io/blog/content/images/2020/04/terraform-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YXb-D4gW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bitleaf.io/blog/content/images/2020/04/terraform-1.jpg" alt="Creating a DigitalOcean Droplet with Terraform - Part 2 of 3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome to Part 2 of our 3 part series post on creating a DigitalOcean droplet with Terraform. Please check out &lt;a href="https://dev.to/bitleaf_io/creating-a-digitalocean-droplet-with-terraform-part-1-of-3-1fn9-temp-slug-4358909"&gt;Part 1 of Creating a DigitalOcean droplet with Terraform&lt;/a&gt; before continuing.&lt;/p&gt;

&lt;p&gt;In Part 1 we set up our Terraform configuration files. Now in Part 2 we are going to use Terraform to apply those changes and create the objects on DigitalOcean.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: This will create objects in DigitalOcean and you will be charged for any uptime. However, at the end of this example we will destroy everything we created, so you shouldn't be left with anything that continues to incur charges.&lt;/p&gt;

&lt;p&gt;Make sure you have &lt;a href="https://www.terraform.io/downloads.html"&gt;downloaded Terraform&lt;/a&gt; for your platform. It's a single executable, so there's nothing to install. These posts are based on Terraform v0.12.&lt;/p&gt;

&lt;p&gt;Open up your shell/command prompt and change over to the directory with your Terraform files. First we are going to initialize things by bringing down the necessary information to work with our DigitalOcean provider. This will pull down the necessary files to a local '.terraform' directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "digitalocean" (terraform-providers/digitalocean) 1.16.0...

...remaining text left out for brevity...

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Terraform init command





&lt;p&gt;Now that our provider information has been pulled down, we need to tell Terraform to stage our setup. We do that by running 'terraform plan'. Plan checks over our configuration files, making sure there are no mistakes, and then tells us what it will do when we finally apply it. It's also good to use the '-out' switch with 'terraform plan' so the plan is saved to a file; you can then re-run that exact setup later to create an exact copy of what you just built.&lt;br&gt;&lt;br&gt;
&lt;em&gt;Note&lt;/em&gt;: If you get prompted for your DigitalOcean credentials, make sure to go back to Part 1 of these posts and export/set your environment variables again in the current shell/command prompt.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -out droplet_volume.tfplan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.bitleaf_server_1 will be created
  + resource "digitalocean_droplet" "bitleaf_server_1" {
      + backups = false
      + created_at = (known after apply)
      + disk = (known after apply)
      + id = (known after apply)
      + image = "ubuntu-18-04-x64"
      + ipv4_address = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6 = false
      + ipv6_address = (known after apply)
      + ipv6_address_private = (known after apply)
      + locked = (known after apply)
      + memory = (known after apply)
      + monitoring = false
      + name = "bitleaf-server-1"
      + price_hourly = (known after apply)
      + price_monthly = (known after apply)
      + private_networking = false
      + region = "nyc3"
      + resize_disk = true
      + size = "s-1vcpu-2gb"
      + ssh_keys = [
          + "&amp;lt;redacted&amp;gt;",
        ]
      + status = (known after apply)
      + urn = (known after apply)
      + vcpus = (known after apply)
      + volume_ids = (known after apply)
      + vpc_uuid = (known after apply)
    }

  # digitalocean_volume.bitleaf_volume_1 will be created
  + resource "digitalocean_volume" "bitleaf_volume_1" {
      + description = "bitleaf volume 1"
      + droplet_ids = (known after apply)
      + filesystem_label = (known after apply)
      + filesystem_type = (known after apply)
      + id = (known after apply)
      + initial_filesystem_type = "ext4"
      + name = "biteaf-volume-1"
      + region = "nyc3"
      + size = 5
      + urn = (known after apply)
    }

  # digitalocean_volume_attachment.bitleaf_volume_1 will be created
  + resource "digitalocean_volume_attachment" "bitleaf_volume_1" {
      + droplet_id = (known after apply)
      + id = (known after apply)
      + volume_id = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: droplet_volume.tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "droplet_volume.tfplan"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Terraform plan





&lt;p&gt;So as you can see, Terraform is letting you know exactly what it plans to do. It also shows that there are some pieces of information like IP addresses that we'll have access to after the droplet has been created. We're using one of those to output our public IP address.&lt;/p&gt;

&lt;p&gt;Now that Terraform has checked our configuration and told us what it will be doing, it's time to actually apply it and create the droplet and volume on DigitalOcean. We'll use the 'terraform apply' command for that, telling it to use the plan file we just generated. If you don't pass in a plan, it will re-run the plan step first.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply "droplet_volume.tfplan"
digitalocean_volume.bitleaf_volume_1: Creating...
digitalocean_droplet.bitleaf_server_1: Creating...
digitalocean_volume.bitleaf_volume_1: Creation complete after 7s
digitalocean_droplet.bitleaf_server_1: Still creating... [10s elapsed]
digitalocean_droplet.bitleaf_server_1: Still creating... [20s elapsed]
digitalocean_droplet.bitleaf_server_1: Still creating... [30s elapsed]
digitalocean_droplet.bitleaf_server_1: Creation complete after 33s
digitalocean_volume_attachment.bitleaf_volume_1: Creating...
digitalocean_volume_attachment.bitleaf_volume_1: Still creating... [10s elapsed]
digitalocean_volume_attachment.bitleaf_volume_1: Creation complete after 12s

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Outputs:

public_ip_server = 123.456.78.9

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Terraform apply





&lt;p&gt;Terraform just went out and, from a couple of configuration files, created a droplet, created a volume, and attached the volume to the droplet. You'll notice under 'Resources' that it says '3 added'. As you might recall, attaching the volume to the droplet is its own Terraform resource type, so the three resources are the attachment, the droplet, and the volume. It also nicely provided us with the public IP address of our new droplet. We could now ssh into the droplet as root; it has our DigitalOcean SSH key, so no password is required.&lt;/p&gt;

&lt;p&gt;So that's awesome. Now, let's run Terraform plan again and see what happens.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

digitalocean_volume.bitleaf_volume_1: Refreshing state... 
digitalocean_droplet.bitleaf_server_1: Refreshing state... 
digitalocean_volume_attachment.bitleaf_volume_1: Refreshing state... 

------------------------------------------------------------------------

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Terraform plan after apply





&lt;p&gt;So what happened? The cool thing is that Terraform keeps track of the state of your environment and what has already been run through Terraform. It knows we already created that particular droplet and volume. So let's change our configuration and add a second volume to our droplet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Specify the Terraform provider to use
provider "digitalocean" {
  token = var.do_token
}

# Setup a DO volume
resource "digitalocean_volume" "bitleaf_volume_1" {
  region = "nyc3"
  name = "biteaf-volume-1"
  size = 5
  initial_filesystem_type = "ext4"
  description = "bitleaf volume 1"
}

# Setup a second DO volume
resource "digitalocean_volume" "bitleaf_volume_2" {
  region = "nyc3"
  name = "biteaf-volume-2"
  size = 5
  initial_filesystem_type = "ext4"
  description = "bitleaf volume 2"
}

# Setup a DO droplet
resource "digitalocean_droplet" "bitleaf_server_1" {
  image = var.droplet_image
  name = "bitleaf-server-1"
  region = var.region
  size = var.droplet_size
  private_networking = var.private_networking
  ssh_keys = [
    var.ssh_key_fingerprint
  ]
  # user_data = data.template_file.cloud-init-yaml.rendered
}

# Connect the volume to the droplet
resource "digitalocean_volume_attachment" "bitleaf_volume_1" {
  droplet_id = digitalocean_droplet.bitleaf_server_1.id
  volume_id = digitalocean_volume.bitleaf_volume_1.id
}

# Connect the second volume to the droplet
resource "digitalocean_volume_attachment" "bitleaf_volume_2" {
  droplet_id = digitalocean_droplet.bitleaf_server_1.id
  volume_id = digitalocean_volume.bitleaf_volume_2.id
}

# Output the public IP address of the new droplet
output "public_ip_server" {
  value = digitalocean_droplet.bitleaf_server_1.ipv4_address
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Adding a second volume to our droplet





&lt;p&gt;We've now updated our configuration to add a second volume and to attach that second volume to the droplet. Now let's run 'terraform plan' and see what it says.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -out droplet_volume2.tfplan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

digitalocean_volume.bitleaf_volume_1: Refreshing state... 
digitalocean_droplet.bitleaf_server_1: Refreshing state... 
digitalocean_volume_attachment.bitleaf_volume_1: Refreshing state... 

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_volume.bitleaf_volume_2 will be created
  + resource "digitalocean_volume" "bitleaf_volume_2" {
      + description = "bitleaf volume 2"
      + droplet_ids = (known after apply)
      + filesystem_label = (known after apply)
      + filesystem_type = (known after apply)
      + id = (known after apply)
      + initial_filesystem_type = "ext4"
      + name = "biteaf-volume-2"
      + region = "nyc3"
      + size = 5
      + urn = (known after apply)
    }

  # digitalocean_volume_attachment.bitleaf_volume_2 will be created
  + resource "digitalocean_volume_attachment" "bitleaf_volume_2" {
      + droplet_id = 
      + id = (known after apply)
      + volume_id = (known after apply)
    }

Plan: 2 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: droplet_volume2.tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "droplet_volume2.tfplan"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Terraform plan after the update





&lt;p&gt;Since Terraform keeps track of the state of what we've already run, it has calculated the changes needed to bring our DigitalOcean infrastructure in line with our updated configuration. In this case it will add two resources: a volume, and the attachment of that volume to the droplet. If we go ahead and apply this plan, it will commit those updates to DigitalOcean.&lt;/p&gt;

&lt;p&gt;Now that we've run through these examples, let's clean up after ourselves and remove the droplet and volumes we created. This is done through the ominous-sounding 'terraform destroy' command. Again, since Terraform keeps track of the state of our changes, it knows exactly what needs to be removed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy
digitalocean_volume.bitleaf_volume_1: Refreshing state...
digitalocean_volume.bitleaf_volume_2: Refreshing state... 
digitalocean_droplet.bitleaf_server_1: Refreshing state... 
digitalocean_volume_attachment.bitleaf_volume_1: Refreshing state... 
digitalocean_volume_attachment.bitleaf_volume_2: Refreshing state... 

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # digitalocean_droplet.bitleaf_server_1 will be destroyed
  - resource "digitalocean_droplet" "bitleaf_server_1" {
      - backups = false -&amp;gt; null
      - created_at = "2020-04-25T02:43:05Z" -&amp;gt; null
      - disk = 50 -&amp;gt; null
      - id = "189890644" -&amp;gt; null
      - image = "ubuntu-18-04-x64" -&amp;gt; null
      - ipv4_address = "" -&amp;gt; null
      - ipv6 = false -&amp;gt; null
      - locked = false -&amp;gt; null
      - memory = 2048 -&amp;gt; null
      - monitoring = false -&amp;gt; null
      - name = "bitleaf-server-1" -&amp;gt; null
      - price_hourly = 0.01488 -&amp;gt; null
      - price_monthly = 10 -&amp;gt; null
      - private_networking = false -&amp;gt; null
      - region = "nyc3" -&amp;gt; null
      - resize_disk = true -&amp;gt; null
      - size = "s-1vcpu-2gb" -&amp;gt; null
      - ssh_keys = [
          - "",
        ] -&amp;gt; null
      - status = "active" -&amp;gt; null
      - tags = [] -&amp;gt; null
      - urn = "do:droplet:189890644" -&amp;gt; null
      - vcpus = 1 -&amp;gt; null
      - volume_ids = [
          - "",
          - "",
        ] -&amp;gt; null
    }

  # digitalocean_volume.bitleaf_volume_1 will be destroyed
  - resource "digitalocean_volume" "bitleaf_volume_1" {
      - description = "bitleaf volume 1" -&amp;gt; null
      - droplet_ids = [
          - 189890644,
        ] -&amp;gt; null
      - filesystem_type = "ext4" -&amp;gt; null
      - id = "" -&amp;gt; null
      - initial_filesystem_type = "ext4" -&amp;gt; null
      - name = "biteaf-volume-1" -&amp;gt; null
      - region = "nyc3" -&amp;gt; null
      - size = 5 -&amp;gt; null
      - tags = [] -&amp;gt; null
      - urn = "do:volume:" -&amp;gt; null
    }

  # digitalocean_volume.bitleaf_volume_2 will be destroyed
  - resource "digitalocean_volume" "bitleaf_volume_2" {
      - description = "bitleaf volume 2" -&amp;gt; null
      - droplet_ids = [
          - 189890644,
        ] -&amp;gt; null
      - filesystem_type = "ext4" -&amp;gt; null
      - id = "" -&amp;gt; null
      - initial_filesystem_type = "ext4" -&amp;gt; null
      - name = "biteaf-volume-2" -&amp;gt; null
      - region = "nyc3" -&amp;gt; null
      - size = 5 -&amp;gt; null
      - tags = [] -&amp;gt; null
      - urn = "do:volume:" -&amp;gt; null
    }

  # digitalocean_volume_attachment.bitleaf_volume_1 will be destroyed
  - resource "digitalocean_volume_attachment" "bitleaf_volume_1" {
      - droplet_id = -&amp;gt; null
      - id = "" -&amp;gt; null
      - volume_id = "" -&amp;gt; null
    }

  # digitalocean_volume_attachment.bitleaf_volume_2 will be destroyed
  - resource "digitalocean_volume_attachment" "bitleaf_volume_2" {
      - droplet_id = -&amp;gt; null
      - id = "" -&amp;gt; null
      - volume_id = "" -&amp;gt; null
    }

Plan: 0 to add, 0 to change, 5 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Terraform destroy





&lt;p&gt;Awesome. Terraform knows we have a droplet and two volumes, and we can easily remove them all with the 'destroy' command. If we wanted to bring them back up again, we could simply run 'terraform apply' pointing at our saved plan file.&lt;/p&gt;

&lt;p&gt;So now we know how to automate the creation of things like droplets and volumes, but how do we customize the operating system on the droplet? That is where a 'cloud-init' file comes into play, and we can do it all during our Terraform setup. We'll cover that in Part 3 of Creating a DigitalOcean Droplet with Terraform.&lt;/p&gt;

</description>
      <category>terraform</category>
    </item>
    <item>
      <title>Creating a DigitalOcean Droplet with Terraform - Part 1 of 3</title>
      <dc:creator>matt from bitLeaf.io</dc:creator>
      <pubDate>Sat, 25 Apr 2020 13:14:00 +0000</pubDate>
      <link>https://forem.com/bitleaf_io/creating-a-digitalocean-droplet-with-terraform-part-1-of-3-1pko</link>
      <guid>https://forem.com/bitleaf_io/creating-a-digitalocean-droplet-with-terraform-part-1-of-3-1pko</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--n3rXflLn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bitleaf.io/blog/content/images/2020/04/terraform.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n3rXflLn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bitleaf.io/blog/content/images/2020/04/terraform.jpg" alt="Creating a DigitalOcean Droplet with Terraform - Part 1 of 3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's say you want to quickly bring up 1 or 100 DigitalOcean droplets, nicely pre-configured with your ssh keys and the latest software updates, with docker thrown on for good measure. Sure, this could all be done manually. You could also walk across the country instead of flying. For our droplet example, let me introduce you to Terraform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; is free and comes as a single executable you &lt;a href="https://www.terraform.io/downloads.html"&gt;download&lt;/a&gt; onto your laptop/desktop. You provide it some configuration files and it goes out and creates whatever you just told it to. One configuration template file to create 1 to infinite number of objects.&lt;/p&gt;

&lt;p&gt;Let's just get to the code. Here's what I want to do...&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a droplet (OK, that was an obvious one)&lt;/li&gt;
&lt;li&gt;Auto-configure that droplet as it's created...&lt;/li&gt;
&lt;li&gt;Add my ssh key so I can ssh in without username/password&lt;/li&gt;
&lt;li&gt;Update any packages&lt;/li&gt;
&lt;li&gt;Set it up for auto security updates&lt;/li&gt;
&lt;li&gt;Install docker and docker-compose&lt;/li&gt;
&lt;li&gt;Attach a volume to the droplet&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We need two things from DigitalOcean...&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Token
You can get your DigitalOcean API Token by clicking on API under the Account menu on the left side. Currently the link is &lt;a href="https://cloud.digitalocean.com/account/api/tokens"&gt;https://cloud.digitalocean.com/account/api/tokens&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Your ssh key fingerprint
You can get your DigitalOcean SSH key fingerprint by clicking on Settings under the Account menu on the left side and then clicking the Security tab. Currently the link for that is &lt;a href="https://cloud.digitalocean.com/account/security"&gt;https://cloud.digitalocean.com/account/security&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We now want to store those two secret pieces of information. We don't want to put them in our Terraform configuration files, due to the obvious security issue, especially if we check things into git. So for now let's store them in environment variables on our local machines. On Mac/Linux we use 'export'; on Windows we use 'set'. For Terraform to pick up an environment variable you need to prefix its name with 'TF_VAR_'.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export TF_VAR_do_token=&amp;lt;my token&amp;gt;
export TF_VAR_ssh_key_fingerprint=&amp;lt;fingerprint&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Setting environment variables for secrets.





&lt;p&gt;Terraform wouldn't work very well if everything were hardcoded into the configuration files, so it lets you define variables. For example, here's how we define a variable in a Terraform config file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "do_token" {
  description = "Digital Ocean Api Token"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Terraform variable defined in an environment variable.





&lt;p&gt;As you might notice, 'do_token' matches the name of the environment variable we set above, minus the 'TF_VAR_' prefix. Terraform will automatically pick it up from our environment variable, and we can now use the 'do_token' variable in our configuration. The description is optional and just for your own use.&lt;/p&gt;
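&lt;p&gt;To make the lookup rule concrete, here is a small Python sketch (not how Terraform is implemented, just an illustration) of resolving a variable from a 'TF_VAR_'-prefixed environment variable, with an optional default as a fallback:&lt;br&gt;
&lt;/p&gt;

```python
import os

def resolve_variable(name, default=None):
    """Mimic Terraform's rule: a variable 'name' can be supplied via an
    environment variable named 'TF_VAR_<name>'; otherwise fall back to
    the default declared in the configuration."""
    value = os.environ.get("TF_VAR_" + name)
    if value is not None:
        return value
    if default is not None:
        return default
    raise ValueError(f"variable {name!r} is not set and has no default")

# Simulate: export TF_VAR_do_token=<my token>
os.environ["TF_VAR_do_token"] = "example-token"

print(resolve_variable("do_token"))        # supplied via the environment
print(resolve_variable("region", "nyc3"))  # falls back to the default
```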

&lt;p&gt;You can also set defaults for variables that aren't set in your environment. For example, here's a variable that defaults our droplet region to 'nyc3'.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "region" {
  description = "Digital Ocean Region"
  default = "nyc3"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Terraform variable with a default value.





&lt;p&gt;We're going to set up our variables in a separate file called 'variables.tf'.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "do_token" {
  description = "DigitalOcean Api Token"
}
variable "ssh_key_fingerprint" {
  description = "Fingerprint of the public ssh key stored on DigitalOcean"
}

variable "region" {
  description = "DigitalOcean region"
  default = "nyc3"
}
variable "droplet_image" {
  description = "DigitalOcean droplet image name"
  default = "ubuntu-18-04-x64"
}
variable "droplet_size" {
  description = "Droplet size for server"
  default = "s-1vcpu-2gb"
}
variable "private_networking" {
  default = "false"
}
variable "ssh_public_key" {
  description = "Local public ssh key"
  default = "~/.ssh/id_rsa.pub"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
variables.tf





&lt;p&gt;You can see some of the defaults we set. Those slug names like 's-1vcpu-2gb' can be found on &lt;a href="https://slugs.do-api.dev/"&gt;https://slugs.do-api.dev/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now onto the meat (or for you vegetarians, the veggie) of it. We have our secret keys to the DigitalOcean kingdom and our variables ready to go. Now we need the config that will actually do stuff. That stuff in our case is to create a droplet and a volume.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Specify the Terraform provider to use
provider "digitalocean" {
  token = var.do_token
}

# Setup a DO volume
resource "digitalocean_volume" "bitleaf_volume_1" {
  region = "nyc3"
  name = "bitleaf-volume-1"
  size = 5
  initial_filesystem_type = "ext4"
  description = "bitleaf volume 1"
}

# Setup a DO droplet
resource "digitalocean_droplet" "bitleaf_server_1" {
  image = var.droplet_image
  name = "bitleaf-server-1"
  region = var.region
  size = var.droplet_size
  private_networking = var.private_networking
  ssh_keys = [
    var.ssh_key_fingerprint
  ]
  # user_data = data.template_file.cloud-init-yaml.rendered
}

# Connect the volume to the droplet
resource "digitalocean_volume_attachment" "bitleaf_volume_1" {
  droplet_id = digitalocean_droplet.bitleaf_server_1.id
  volume_id = digitalocean_volume.bitleaf_volume_1.id
}

# Output the public IP address of the new droplet
output "public_ip_server" {
  value = digitalocean_droplet.bitleaf_server_1.ipv4_address
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
droplet_volume.tf





&lt;p&gt;A few things going on here. I commented the various sections of the Terraform config. Let's walk through them. I won't get into the weeds of them as all the various DigitalOcean Terraform options are documented on the &lt;a href="https://www.terraform.io/docs/providers/do/index.html"&gt;DigitalOcean Terraform Provider&lt;/a&gt; page.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provider block:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "digitalocean" {
  token = var.do_token
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Provider block





&lt;p&gt;In the provider block we specify, oddly enough, the particular Terraform provider we want to use from the &lt;a href="https://www.terraform.io/docs/providers/index.html"&gt;list of Terraform providers&lt;/a&gt;. In our case that's 'digitalocean'. The DO provider takes in a token (your API token). You can see the use of 'var.' to specify that we are using a variable. In this case we are using the 'do_token' variable from our 'variables.tf' file, which in turn gets its value from the local environment variable we set earlier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Volume resource block:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "digitalocean_volume" "bitleaf_volume_1" {
  region = "nyc3"
  name = "bitleaf-volume-1"
  size = 5
  initial_filesystem_type = "ext4"
  description = "bitleaf volume 1"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Volume resource block





&lt;p&gt;In the volume resource block we are telling Terraform we want to create a DO Volume. The 'digitalocean_volume' is telling Terraform the type of resource we want to create. The 'bitleaf_volume_1' is just how we can access this resource in our configuration script. It's like the variable name for this resource in the configuration.&lt;br&gt;&lt;br&gt;
We'll go over what we set here, but you can see the &lt;a href="https://www.terraform.io/docs/providers/do/r/volume.html"&gt;full list of DigitalOcean Volume Terraform options&lt;/a&gt;. Most of what we set is fairly self-explanatory: we want a new DO Volume created in NYC3 called 'bitleaf-volume-1', 5 GB in size, using the ext4 filesystem type.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Droplet resource block:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "digitalocean_droplet" "bitleaf_server_1" {
  image = var.droplet_image
  name = "bitleaf-server-1"
  region = var.region
  size = var.droplet_size
  private_networking = var.private_networking
  ssh_keys = [
    var.ssh_key_fingerprint
  ]
  # user_data = data.template_file.cloud-init-yaml.rendered
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Droplet resource block





&lt;p&gt;In this block we're telling Terraform we want a DO droplet created. We then set some of the available &lt;a href="https://www.terraform.io/docs/providers/do/r/droplet.html"&gt;DigitalOcean Droplet Terraform options&lt;/a&gt;. Again we provide the Terraform resource name of 'digitalocean_droplet' to say we want a droplet created. You can see we are making use of some of the variables again from our variables.tf file. We are using the default values that we defined in that file for the image, region, size, etc. We are directly specifying the name of 'bitleaf-server-1'.&lt;br&gt;&lt;br&gt;
The 'user_data' piece is commented out right now. We'll get to that in Part 3 of these posts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Volume Attachment resource block:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "digitalocean_volume_attachment" "bitleaf_volume_1" {
  droplet_id = digitalocean_droplet.bitleaf_server_1.id
  volume_id = digitalocean_volume.bitleaf_volume_1.id
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Volume Attachment resource block





&lt;p&gt;The Volume Attachment resource block is a little different. With the Volume and Droplet blocks we were actually creating an object (a Volume and a Droplet). With this particular resource we are telling DO that we want to attach our new Volume to our new Droplet. We get the 'droplet_id' to attach to by telling Terraform to get the id of our 'bitleaf_server_1' droplet resource. Similarly, we tell Terraform the 'volume_id' by getting the id of the 'bitleaf_volume_1' volume resource.&lt;br&gt;&lt;br&gt;
So this is really nice: we can have a volume auto-created and auto-attached to our droplet.&lt;/p&gt;
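
&lt;p&gt;As a rough sketch of what that buys you: once the attachment exists, DigitalOcean typically exposes the volume on the droplet under '/dev/disk/by-id/' with a device name derived from the volume name, so on the server you could mount it with something like the commands below. The exact device name here is an assumption based on our volume name, so double check with 'ls /dev/disk/by-id/' on your droplet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# On the droplet, after the attachment is created
# (device name assumed from the volume name - verify first)
mkdir -p /mnt/bitleaf_volume_1
mount -o defaults /dev/disk/by-id/scsi-0DO_Volume_bitleaf-volume-1 /mnt/bitleaf_volume_1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Mounting the attached volume on the droplet.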

&lt;p&gt;&lt;strong&gt;Output block:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt; output "public_ip_server" {
  value = digitalocean_droplet.bitleaf_server_1.ipv4_address
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Output block





&lt;p&gt;Finally, we have a new type of block: an output block. You can probably guess that this will indeed output stuff. In our case we're telling Terraform to print the public IP address of our new droplet to the screen. Here's the full list of available &lt;a href="https://www.terraform.io/docs/providers/do/d/droplet.html"&gt;DigitalOcean Droplet data source attributes&lt;/a&gt;.&lt;/p&gt;
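
&lt;p&gt;As a side note, once the configuration has been applied (which we'll do in Part 2), outputs can be read back at any time with the 'terraform output' command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform output public_ip_server
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Reading an output value back after an apply.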

&lt;p&gt;Now the moment has arrived. We've created our Terraform configuration. We have our DigitalOcean secrets all set. Head on over to &lt;a href="https://bitleaf.io/blog/creating-a-digitalocean-droplet-with-terraform-part-2-of-3/"&gt;Part 2 of Creating a DigitalOcean Droplet with Terraform&lt;/a&gt; to run this thing.&lt;/p&gt;

</description>
      <category>terraform</category>
    </item>
    <item>
      <title>Modern Deployments with .NET</title>
      <dc:creator>matt from bitLeaf.io</dc:creator>
      <pubDate>Thu, 23 Apr 2020 20:07:22 +0000</pubDate>
      <link>https://forem.com/bitleaf_io/modern-deployments-with-net-14h7</link>
      <guid>https://forem.com/bitleaf_io/modern-deployments-with-net-14h7</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k1fJW1DU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1514984879728-be0aff75a6e8%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D2000%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k1fJW1DU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1514984879728-be0aff75a6e8%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D2000%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" alt="Modern Deployments with .NET"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We live in a golden age of .NET. We now have something we haven't had before - &lt;em&gt;choice&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Choice 1&lt;/em&gt;.&lt;br&gt;&lt;br&gt;
We can continue to build our ASP.NET applications and publish them to run under IIS on Windows.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Choice 2.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
We can use .NET Core (and soon .NET 5) to publish and run on Windows/Mac/Linux under Nginx/etc. or under Kestrel directly.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Choice 2a.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
We can use .NET Core (and soon .NET 5) along with Docker and CI/CD to build and deploy on commit to Windows or Linux servers.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Choice 2b.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
We can use .NET Core (and soon .NET 5) along with Docker and CI/CD to build and deploy on commit to Windows or Linux, and manage it all under Kubernetes to allow for deployments, no-downtime upgrades, and scaling vertically or horizontally at a click.&lt;/p&gt;

&lt;p&gt;It's that last choice, the golden '2b', that BitLeaf follows. After years of dealing with deployments, upgrades, and scaling, it truly is a golden age now that we can stop manually managing all of that. I have spent a good portion of my career on weekend deployments of my code and all the pain they brought. With the tools and options now available to us, like BitLeaf, I'm happy to not have that pain anymore.&lt;/p&gt;

&lt;p&gt;We have choices now. They don't even have to be scary. &lt;strong&gt;Whether in your own setup, on Azure, or on BitLeaf, please just try it&lt;/strong&gt;. You'll be amazed how far we have come and how much better it can be.&lt;/p&gt;

&lt;p&gt;I'll do my best in coming posts to help you along your journey to the new modern way of getting your beautiful .NET code out into the world.&lt;/p&gt;

</description>
      <category>dotnet</category>
    </item>
  </channel>
</rss>
