<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Martez Reed</title>
    <description>The latest articles on Forem by Martez Reed (@martezr).</description>
    <link>https://forem.com/martezr</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F437324%2Fbd4fc325-fc0c-4955-baa1-c3f3ba59011a.jpeg</url>
      <title>Forem: Martez Reed</title>
      <link>https://forem.com/martezr</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/martezr"/>
    <language>en</language>
    <item>
      <title>Installing Puppet Enterprise 2021</title>
      <dc:creator>Martez Reed</dc:creator>
      <pubDate>Mon, 20 Dec 2021 03:56:06 +0000</pubDate>
      <link>https://forem.com/martezr/installing-puppet-enterprise-2021-5dgl</link>
      <guid>https://forem.com/martezr/installing-puppet-enterprise-2021-5dgl</guid>
      <description>&lt;p&gt;Puppet is a popular open source configuration management tool for managing the configuration of systems using declarative code. Puppet Enterprise is the commercial distribution or version of Puppet that includes enterprise features like RBAC, a Web UI, and more features. Puppet Enterprise provides a free trial of the platform for use on up to 10 nodes. In this blog post we’ll walk through installing the latest release of Puppet Enterprise.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Getting Started&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Verify system requirements&lt;/li&gt;
&lt;li&gt;Download the Puppet Enterprise installer&lt;/li&gt;
&lt;li&gt;Install Puppet Enterprise 2021&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Verify System Requirements
&lt;/h3&gt;

&lt;p&gt;The first thing we need to do is verify the system requirements for installing Puppet Enterprise. The Puppet Enterprise primary server supports installation on most popular Linux distributions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hardware Requirements:&lt;/strong&gt; &lt;a href="https://puppet.com/docs/pe/2021.4/hardware_requirements.html#hardware-requirements"&gt;https://puppet.com/docs/pe/2021.4/hardware_requirements.html#hardware-requirements&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Software Requirements:&lt;/strong&gt; &lt;a href="https://puppet.com/docs/pe/2021.4/supported_operating_systems.html#supported_operating_systems"&gt;https://puppet.com/docs/pe/2021.4/supported_operating_systems.html#supported_operating_systems&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Download Puppet Enterprise
&lt;/h3&gt;

&lt;p&gt;Now that the system requirements have been verified we need to download the Puppet Enterprise installer. To download the installer, go to the Puppet website to access the free 10-node trial (&lt;a href="https://puppet.com/try-puppet/puppet-enterprise"&gt;https://puppet.com/try-puppet/puppet-enterprise&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T8kpWL_5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AinNsYZdFOeQ4d_LTK7bjJg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T8kpWL_5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AinNsYZdFOeQ4d_LTK7bjJg.png" alt="" width="880" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have provided your information you will be redirected to the installer download page. This page includes packages for the primary server along with the agents. Click the &lt;strong&gt;CURL&lt;/strong&gt; or &lt;strong&gt;WGET&lt;/strong&gt; button next to the desired operating system to copy the corresponding download command to your clipboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IdplqVjY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2APls1pD6tRsr4Dox7rEv5wA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IdplqVjY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2APls1pD6tRsr4Dox7rEv5wA.png" alt="" width="880" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we just need to run the curl or wget command on the system we’ve designated for the Puppet Enterprise installation.&lt;/p&gt;
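
&lt;p&gt;As an example, the command will look similar to the following. The exact URL is generated by the download page and varies by release and platform, so treat this one as illustrative only:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://pm.puppetlabs.com/puppet-enterprise/2021.4.0/puppet-enterprise-2021.4.0-el-7-x86_64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;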

&lt;h3&gt;
  
  
  Install Puppet Enterprise
&lt;/h3&gt;

&lt;p&gt;Installing Puppet Enterprise is a simple and straightforward process that should take only a few minutes, depending on the system hardware.&lt;/p&gt;

&lt;p&gt;Untar the Puppet Enterprise installer tarball&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar -xzf puppet-enterprise-2021.4.0-el-7-x86_64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the working directory to the unpacked installer directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd puppet-enterprise-2021.4.0-el-7-x86_64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the Puppet Enterprise installer&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./puppet-enterprise-installer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set the Puppet Enterprise console password once the installer has completed successfully&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;puppet infrastructure console_password --password=REPLACE_ME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the installation successfully completed we can now access the Puppet Enterprise web console by opening a web browser to the IP address or FQDN of the server via HTTPS. The username is &lt;strong&gt;admin&lt;/strong&gt; and the password is the password specified in the previous step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5UUH5ezR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A6_Yp7kak8soar2dXeuk0Ow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5UUH5ezR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A6_Yp7kak8soar2dXeuk0Ow.png" alt="" width="880" height="681"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>puppet</category>
      <category>devops</category>
      <category>configurationmanagem</category>
      <category>puppetenterprise</category>
    </item>
    <item>
      <title>Deploying Kuma Service Mesh with Puppet Bolt</title>
      <dc:creator>Martez Reed</dc:creator>
      <pubDate>Mon, 05 Apr 2021 15:38:27 +0000</pubDate>
      <link>https://forem.com/martezr/deploying-kuma-service-mesh-with-puppet-bolt-4p09</link>
      <guid>https://forem.com/martezr/deploying-kuma-service-mesh-with-puppet-bolt-4p09</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ELdH9FO_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AkL6VfwfmmPLgD7ikWENsHQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ELdH9FO_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AkL6VfwfmmPLgD7ikWENsHQ.jpeg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kuma is a platform agnostic open-source control plane for service mesh and microservices management, with support for Kubernetes, VM, and bare metal environments. Kubernetes has quickly become the de facto platform on which new applications are being built to take advantage of the benefits that come with containerization. The challenge that many organizations are facing is integrating containerized workloads in Kubernetes with VM based workloads in a meaningful way. In this case Kuma service mesh can be used to provide the benefits of service mesh in a k8s/vm hybrid scenario.&lt;/p&gt;

&lt;p&gt;In this blog post we’ll take a look at how to use Puppet Bolt to deploy the universal version of the Kuma service mesh control plane intended for virtual machines.&lt;/p&gt;

&lt;h3&gt;
  
  
  Initialize the Bolt project
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Ensure that the latest version of Puppet Bolt is installed before getting started.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Puppet Bolt utilizes &lt;strong&gt;Project&lt;/strong&gt; directories as launching points for running Bolt operations. Create a directory for our Puppet Bolt project named &lt;strong&gt;kumamesh&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir kumamesh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the working directory to the &lt;strong&gt;kumamesh&lt;/strong&gt; directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd kumamesh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have a directory for hosting our Bolt project, we need to initialize the project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt project init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the &lt;strong&gt;puppet-kuma&lt;/strong&gt; module from the associated GitHub repository to the bolt-project.yaml file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
name: kumamesh
modules:
  - git: https://github.com/martezr/puppet-kuma.git
    ref: 'main'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the module and its dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt module install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create the Bolt plan
&lt;/h3&gt;

&lt;p&gt;In order to utilize plans in Bolt, we need to create a directory named  &lt;strong&gt;plans&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir plans
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have our plans directory created we’ll use the Kuma service mesh module to install the Kuma service mesh control plane backed by a PostgreSQL database.&lt;/p&gt;

&lt;p&gt;Create a file named &lt;strong&gt;controlplane.pp&lt;/strong&gt; with the following content in the &lt;strong&gt;plans&lt;/strong&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;plan kumamesh::controlplane (
  TargetSpec $targets
) {
  apply_prep($targets)
  apply($targets){

    class { 'kuma':
      version =&amp;gt; '1.1.1',
    }

    class { 'kuma::controlplane':
      manage_postgres =&amp;gt; true,
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we’ve created our plan we can ensure that it’s recognized by Bolt by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt plan show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the plan registers properly the output should include a &lt;strong&gt;kumamesh::controlplane&lt;/strong&gt; entry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Plans
  aggregate::count                    
  aggregate::targets                  
  canary                              
  facts                               
  facts::external                     
  facts::info                         
  kumamesh::controlplane
  puppet_agent::run                   
  puppet_connect::test_input_data     
  puppetdb_fact                       
  reboot                              
  secure_env_vars                     
  terraform::apply
  terraform::destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the plan registered we are now ready to run it with the bolt plan run kumamesh::controlplane command. We specify the target, which is the node we want to install the control plane on. In this example we’ve used an IP address, but a resolvable hostname could have been used as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt plan run kumamesh::controlplane --target 10.0.0.111
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the plan ran successfully it should have generated output similar to that displayed below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Starting: plan kumamesh::controlplane
Starting: install puppet and gather facts on 10.0.0.111
Finished: install puppet and gather facts with 0 failures in 12.07 sec
Starting: apply catalog on 10.0.0.111
Finished: apply catalog with 0 failures in 53.15 sec
Finished: plan kumamesh::controlplane in 1 min, 6 sec
Plan completed successfully with no result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the plan has completed successfully we can now view the control plane GUI by browsing to &lt;a href="http://controlplane_ip:5681/gui"&gt;http://controlplane_ip:5681/gui&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZQbXQLfr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Aa6nJqjK-Kgiip6IjoaZYEA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZQbXQLfr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Aa6nJqjK-Kgiip6IjoaZYEA.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>servicemesh</category>
      <category>devops</category>
      <category>puppet</category>
      <category>kuma</category>
    </item>
    <item>
      <title>Deploying HashiCorp Consul Agents With Puppet Bolt</title>
      <dc:creator>Martez Reed</dc:creator>
      <pubDate>Mon, 07 Dec 2020 15:17:42 +0000</pubDate>
      <link>https://forem.com/puppet/deploying-hashicorp-consul-agents-with-puppet-bolt-2c2c</link>
      <guid>https://forem.com/puppet/deploying-hashicorp-consul-agents-with-puppet-bolt-2c2c</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyetw80apk3ydqam93owe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyetw80apk3ydqam93owe.png" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;HashiCorp Consul is an open source tool that addresses the complexities of modern, dynamic infrastructure by providing service discovery, health checks, load balancing, a service graph, mutual TLS identity enforcement, and a configuration key-value store.&lt;/p&gt;

&lt;p&gt;Service discovery is the Consul feature we’ll focus on in this blog post. Service discovery is particularly important in environments where workloads are more ephemeral than the traditional server that runs for years. Service discovery is very similar to how we think of name resolution using DNS but with a richer set of features.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Workloads register themselves and their services through agents&lt;/li&gt;
&lt;li&gt;Consul enables health checks to be used for checking the health of a service&lt;/li&gt;
&lt;/ul&gt;
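
&lt;p&gt;To make the DNS comparison concrete, Consul exposes a DNS interface (on port 8600 by default) that resolves registered services like ordinary DNS records. For example, a service named web could be looked up like this on a node running a Consul agent (illustrative; assumes dig is installed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dig @127.0.0.1 -p 8600 web.service.consul
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;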

&lt;p&gt;Consul is a distributed system composed of servers and clients. This blog post assumes that a server or group of servers has already been deployed and that clients now need to be deployed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajd5o9b2efezi7xvn8a5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajd5o9b2efezi7xvn8a5.png" width="800" height="502"&gt;&lt;/a&gt;HashiCorp Consul Architecture&lt;/p&gt;

&lt;p&gt;In this blog post we’ll take a look at how to use Puppet Bolt to deploy a Consul agent alongside an NGINX web server and register it as a service in Consul.&lt;/p&gt;

&lt;h3&gt;
  
  
  Initialize the Bolt project
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Ensure that the latest version of Puppet Bolt is installed before getting started.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Puppet Bolt utilizes &lt;strong&gt;Project&lt;/strong&gt; directories as launching points for running Bolt operations. Create a directory for our Puppet Bolt project named &lt;strong&gt;consuldeploy&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir consuldeploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the working directory to the &lt;strong&gt;consuldeploy&lt;/strong&gt; directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd consuldeploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have a directory for hosting our Bolt project, we need to initialize the project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt project init --modules kyleanderson-consul, puppet-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create the Bolt YAML plan
&lt;/h3&gt;

&lt;p&gt;In order to utilize plans in Bolt, we need to create a directory named  &lt;strong&gt;plans&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir plans
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have our plans directory created, we’ll outline the tasks we want the plan to accomplish:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy NGINX&lt;/li&gt;
&lt;li&gt;Install unzip to extract the Consul agent zip file&lt;/li&gt;
&lt;li&gt;Install the Consul agent&lt;/li&gt;
&lt;li&gt;Register the web service in Consul&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With those tasks outlined, we’re ready to create the Bolt plan.&lt;/p&gt;

&lt;p&gt;Create a file named &lt;strong&gt;consul_agent.yaml&lt;/strong&gt; with the following content in the &lt;strong&gt;plans&lt;/strong&gt; directory. The plan includes four parameters that are used to specify various configurations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
parameters:
  targets:
    type: TargetSpec
  consul_datacenter:
    type: String
    description: "The consul datacenter to join"
    default: puppet-bolt
  consul_agent_version:
    type: String
    description: "The consul agent version to install"
    default: 1.9.0
  consul_servers:
    type: Array[String]
    description: "An array of consul servers to connect to"

steps:
  - name: installnginx
    targets: $targets
    resources:
      - class: nginx
  - name: deployconsul
    targets: $targets
    resources:
      - package: unzip
        parameters:
          ensure: latest
      - class: consul
        parameters:
          version: $consul_agent_version
          config_hash:
            data_dir: '/opt/consul'
            datacenter: $consul_datacenter
            retry_join: $consul_servers
          services:
            web:
              checks:
                - http: http://localhost
                  interval: 10s
                  timeout: 5s
              port: 80
              tags:
                - web
                - nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we’ve created our plan we can ensure that it’s recognized by Bolt by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt plan show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the plan registers properly the output should include a &lt;strong&gt;consuldeploy::consul_agent&lt;/strong&gt; entry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aggregate::count
aggregate::nodes
aggregate::targets
canary
consuldeploy::consul_agent
facts
facts::external
facts::info
puppet_agent::run
puppetdb_fact
reboot  
secure_env_vars
terraform::apply
terraform::destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the plan registered we are now ready to run it with the bolt plan run consuldeploy::consul_agent command. We specify the target, which is the node we want to install the Consul agent on, as well as an array of Consul servers. In this example we’ve used IP addresses, but resolvable hostnames could have been used as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt plan run consuldeploy::consul_agent --target 10.0.0.123 consul_servers='["10.0.0.193"]'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the plan ran successfully it should have generated output similar to that displayed below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Starting: plan consuldeploy::consul_agent
Starting: install puppet and gather facts on 10.0.0.123
Finished: install puppet and gather facts with 0 failures in 24.97 sec
Starting: apply catalog on 10.0.0.123
Finished: apply catalog with 0 failures in 20.06 sec
Starting: install puppet and gather facts on 10.0.0.123
Finished: install puppet and gather facts with 0 failures in 3.37 sec
Starting: apply catalog on 10.0.0.123
Finished: apply catalog with 0 failures in 22.18 sec
Finished: plan consuldeploy::consul_agent in 1 min, 12 sec
Plan completed successfully with no result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the plan has completed successfully we can view the web service in the Consul server dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbiuewf9hb4yx3wokydq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbiuewf9hb4yx3wokydq.png" width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This blog post has shown a basic Consul agent configuration using Puppet Bolt, but the &lt;a href="https://forge.puppet.com/modules/KyleAnderson/consul" rel="noopener noreferrer"&gt;consul module&lt;/a&gt; on the Puppet Forge that was used in this post supports many additional settings and configuration options.&lt;/p&gt;

</description>
      <category>hashicorp</category>
      <category>puppet</category>
      <category>devops</category>
      <category>puppetbolt</category>
    </item>
    <item>
      <title>Deploy a Node application using Puppet Bolt</title>
      <dc:creator>Martez Reed</dc:creator>
      <pubDate>Tue, 10 Nov 2020 15:01:13 +0000</pubDate>
      <link>https://forem.com/puppet/deploy-a-node-application-using-puppet-bolt-2ngh</link>
      <guid>https://forem.com/puppet/deploy-a-node-application-using-puppet-bolt-2ngh</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8togkk1jovn1xd72pkh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8togkk1jovn1xd72pkh.png" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Application deployment is a multi-step process that can involve multiple machines or systems. In this blog post we’ll look at how we can use Puppet Bolt to deploy a node application. The application is a simple todo application created by &lt;a href="https://github.com/scotch-io/node-todo" rel="noopener noreferrer"&gt;scotch.io&lt;/a&gt; that is composed of a node application and a mongodb database. We’ll deploy the application and database on separate servers to show how this can be handled with Puppet Bolt.&lt;/p&gt;

&lt;p&gt;We’ll create three (3) Bolt plans that will each be used for a different part of the automation. At a high level we want to install the mongodb database on our database virtual machine and then install our node application on the app virtual machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plan Overview&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;todoapp&lt;/strong&gt; : The todoapp plan will be the overarching plan that actually handles the orchestration of the application deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;db&lt;/strong&gt; : The &lt;strong&gt;db&lt;/strong&gt; plan installs the mongodb database on the database server virtual machine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;app&lt;/strong&gt; : The &lt;strong&gt;app&lt;/strong&gt; plan installs node, npm, git and other prerequisites on the application server virtual machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Forge modules&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The deployment plan uses the following Puppet Forge modules to deploy the application stack.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;puppet/mongodb&lt;/li&gt;
&lt;li&gt;puppet/nodejs&lt;/li&gt;
&lt;li&gt;camptocamp/systemd&lt;/li&gt;
&lt;li&gt;puppetlabs/vcsrepo&lt;/li&gt;
&lt;li&gt;puppet/firewalld&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Initialize the Bolt project
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Ensure that the latest version of Puppet Bolt is installed before getting started.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Puppet Bolt utilizes &lt;strong&gt;Project&lt;/strong&gt; directories as launching points for running Bolt operations. Create a directory for our Puppet Bolt project named &lt;strong&gt;nodeproject&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir nodeproject
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the working directory to the &lt;strong&gt;nodeproject&lt;/strong&gt; directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd nodeproject
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have a directory for hosting our Bolt project, we need to initialize the project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt project init --modules puppet-mongodb,puppet-nodejs,camptocamp-systemd,puppetlabs-vcsrepo,puppet-firewalld
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command should generate output similar to that shown below if it ran successfully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Installing project modules

→ Resolving module dependencies, this may take a moment

→ Writing Puppetfile at

/system/path/nodeproject/Puppetfile

→ Syncing modules from

/system/path/nodeproject/Puppetfile to

/system/path/nodeproject/modules

→ Generating type references

Successfully synced modules from /system/path/nodeproject/Puppetfile to /system/path/nodeproject/modules

Successfully created Bolt project at /system/path/nodeproject
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create the Bolt YAML plan
&lt;/h3&gt;

&lt;p&gt;In order to utilize plans in Bolt, we need to create a directory named plans.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir plans
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have our plans directory created, we’ll outline what we want each plan to accomplish.&lt;/p&gt;

&lt;p&gt;The first thing we’ll do is create the todoapp plan that will call the app and db plans. The names of the nested plans are app and db, which are referenced in our plan by the name of the Bolt project and the plan (&lt;strong&gt;bolt_project::bolt_plan&lt;/strong&gt;). The plan accepts two parameters (&lt;strong&gt;app&lt;/strong&gt;, &lt;strong&gt;db&lt;/strong&gt;) that specify the IP address or FQDN of the virtual machines used for the application and database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
parameters:
  app:
    type: TargetSpec
  db:
    type: TargetSpec

steps:
  - plan: nodeproject::db
    description: "Deploy todo node application mongodb database"
    parameters:
      db_targets: $db
  - plan: nodeproject::app
    description: "Deploy todo node application"
    parameters:
      app_targets: $app
      db_targets: $db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we’ve got the todoapp plan in place we need to create the db plan to install the mongodb database. Create another plan in the &lt;strong&gt;plans&lt;/strong&gt; directory named  &lt;strong&gt;db.yaml&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure mongodb yum repository&lt;/li&gt;
&lt;li&gt;Install mongodb&lt;/li&gt;
&lt;li&gt;Create firewalld rule for mongodb (port 27017)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
parameters:
  db_targets:
    type: TargetSpec

steps:
  - name: installmongo
    targets: $db_targets
    resources:
    - class: mongodb::globals
      parameters:
        version: 4.2.0
        manage_package_repo: true
        server_package_name: mongodb-org-server
        bind_ip: [0.0.0.0]
    - class: mongodb::server
      parameters:
        port: 27017
  - name: configurefirewall
    targets: $db_targets
    resources:
    - class: firewalld
    - firewalld_zone: 'public'
      parameters:
        ensure: present
        purge_ports: true
    - firewalld_port: 'node db'
      parameters:
        ensure: present
        port: 27017
        protocol: tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we’re ready to create the app plan to deploy the node application. Deploying the node application requires a number of steps to actually get the application running. The plan accepts the IP address or DNS name of the application virtual machine and the database virtual machine as parameters.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the git package&lt;/li&gt;
&lt;li&gt;Create a firewalld rule for the node application (port 8080)&lt;/li&gt;
&lt;li&gt;Clone the node-todo git repository&lt;/li&gt;
&lt;li&gt;Install node and npm&lt;/li&gt;
&lt;li&gt;Install the node-todo application packages (npm install)&lt;/li&gt;
&lt;li&gt;Write the database configuration file&lt;/li&gt;
&lt;li&gt;Create a systemd unit file for the node application&lt;/li&gt;
&lt;li&gt;Reload the systemd daemon to recognize the unit file changes&lt;/li&gt;
&lt;li&gt;Start the todo-app service
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
parameters:
  app_targets:
    type: TargetSpec
  db_targets:
    type: String

steps:
  - name: installprerequisites
    description: "install app prerequisite software"
    targets: $app_targets
    resources:
    - package: git
      parameters:
        ensure: present
  - name: configurefirewall
    targets: $app_targets
    resources:
    - class: firewalld
    - firewalld_zone: 'public'
      parameters:
        ensure: present
        purge_ports: true
    - firewalld_port: 'node app'
      parameters:
        ensure: present
        port: 8080
        protocol: 'tcp'
  - name: installtodoapp
    targets: $app_targets
    resources:
    - vcsrepo: '/opt/node-todo'
      parameters:
        ensure: present
        provider: git
        source: 'https://github.com/scotch-io/node-todo.git'
        trust_server_cert: true
    - class: nodejs
    - nodejs::npm: 'app'
      parameters:
        ensure: present
        target: /opt/node-todo
        use_package_json: true
    - file: '/opt/node-todo/config/database.js'
      parameters:
        ensure: present
        content: &amp;gt;
          module.exports = {
            remoteUrl : "mongodb://$db_targets:27017/uwO3mypu",
            localUrl: "mongodb://$db_targets:27017/meanstacktutorials"
          };
    - file: '/etc/systemd/system/todo-app.service'
      parameters:
        ensure: present
        content: &amp;gt;
          [Unit]

          Description=Todo node application

          Documentation=https://github.com/scotch-io/node-todo

          After=network.target

          [Service]

          Type=simple

          WorkingDirectory=/opt/node-todo

          ExecStart=/usr/bin/npm start

          Restart=on-failure

          [Install]

          WantedBy=multi-user.target
        notify: Class[systemd::systemctl::daemon_reload]
    - class: systemd::systemctl::daemon_reload
    - service: 'todo-app'
      parameters:
        ensure: running
        subscribe: File[/etc/systemd/system/todo-app.service]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we’ve created our plans we can ensure that they’re recognized by Bolt by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt plan show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the plan registers properly the output should include the following entries.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;em&gt;nodeproject::todoapp&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;em&gt;nodeproject::app&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;nodeproject::db&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aggregate::count
aggregate::nodes
aggregate::targets
canary
facts
facts::external
facts::info
nodeproject::app  
nodeproject::db  
nodeproject::todoapp
puppet_agent::run
puppetdb_fact
reboot
secure_env_vars
terraform::apply
terraform::destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the plans registered we are now ready to run the todoapp plan by running the bolt plan run nodeproject::todoapp command, passing the app and db targets as parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt plan run nodeproject::todoapp app=10.0.0.41 db=10.0.0.42
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the plan ran successfully it should have generated output similar to that displayed below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Starting: plan nodeproject::todoapp
Starting: plan nodeproject::db
Starting: install puppet and gather facts on 10.0.0.42
Finished: install puppet and gather facts with 0 failures in 74.59 sec
Starting: apply catalog on 10.0.0.42
Finished: apply catalog with 0 failures in 35.91 sec
Starting: install puppet and gather facts on 10.0.0.42
Finished: install puppet and gather facts with 0 failures in 8.03 sec
Starting: apply catalog on 10.0.0.42
Finished: apply catalog with 0 failures in 19.44 sec
Finished: plan nodeproject::db in 2 min, 18 sec
Starting: plan nodeproject::app
Starting: install puppet and gather facts on 10.0.0.41
Finished: install puppet and gather facts with 0 failures in 74.78 sec
Starting: apply catalog on 10.0.0.41
Finished: apply catalog with 0 failures in 25.49 sec
Starting: install puppet and gather facts on 10.0.0.41
Finished: install puppet and gather facts with 0 failures in 7.59 sec
Starting: apply catalog on 10.0.0.41
Finished: apply catalog with 0 failures in 18.23 sec
Starting: install puppet and gather facts on 10.0.0.41
Finished: install puppet and gather facts with 0 failures in 8.9 sec
Starting: apply catalog on 10.0.0.41
Finished: apply catalog with 0 failures in 65.48 sec
Finished: plan nodeproject::app in 3 min, 21 sec
Finished: plan nodeproject::todoapp in 5 min, 40 sec
Plan completed successfully with no result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the command completes successfully we can check that everything worked by entering the IP address or FQDN of the Bolt target in a web browser. The site should show the following message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1qqofbfu5l9rcyhzys6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1qqofbfu5l9rcyhzys6.png" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have now successfully deployed a node application using Puppet Bolt. The automation can be made more elaborate such as interacting with a load balancer, performing database prep operations or more.&lt;/p&gt;

</description>
      <category>configurationmanagement</category>
      <category>puppet</category>
      <category>puppetbolt</category>
      <category>devops</category>
    </item>
    <item>
      <title>Using Puppet Bolt with REST APIs</title>
      <dc:creator>Martez Reed</dc:creator>
      <pubDate>Mon, 02 Nov 2020 14:11:49 +0000</pubDate>
      <link>https://forem.com/puppet/using-puppet-bolt-with-rest-apis-2emi</link>
      <guid>https://forem.com/puppet/using-puppet-bolt-with-rest-apis-2emi</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finf0cpfmsyizalh3eljo.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finf0cpfmsyizalh3eljo.jpeg" width="800" height="417"&gt;&lt;/a&gt;Puppet Bolt Logo&lt;/p&gt;

&lt;p&gt;Puppet Bolt is an open source automation and orchestration tool. One of the common tasks in an orchestration workflow is interacting with an external system using a REST API. This could be something like creating a user or updating a record in a CMDB. A built-in task for making &lt;a href="https://forge.puppet.com/puppetlabs/http_request" rel="noopener noreferrer"&gt;HTTP request&lt;/a&gt; calls was added in Bolt 2.30.0 and JSON output parsing was added in Bolt 2.32.0. This enables us to quickly add a step to a plan to integrate with a REST API endpoint. In this blog post we’ll walk through how to fetch data (GET request) and submit data (POST request) using the HashiCorp Vault API to show the interaction.&lt;/p&gt;

&lt;h3&gt;
  
  
  HashiCorp Vault Setup
&lt;/h3&gt;

&lt;p&gt;HashiCorp Vault is distributed as a single compiled binary that can be downloaded from the HashiCorp Vault &lt;a href="https://www.vaultproject.io/downloads" rel="noopener noreferrer"&gt;download page&lt;/a&gt;. Unzip the download bundle, set the vault binary to be executable on Linux or macOS, and finally run the following command to start Vault in development mode.&lt;/p&gt;

&lt;p&gt;In this example the Vault root token is being manually set to “vaultsecret”.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault server -dev -dev-root-token-id="vaultsecret"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Initialize the Bolt project
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Ensure that the latest version of Puppet Bolt is installed before getting started.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Puppet Bolt utilizes &lt;strong&gt;Project&lt;/strong&gt; directories as launching points for running Bolt operations. Create a directory for our Puppet Bolt project named &lt;strong&gt;restproject&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir restproject
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the working directory to the &lt;strong&gt;restproject&lt;/strong&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd restproject
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have a directory for hosting our Bolt project, we need to initialize the project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt project init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command should confirm that the Bolt project was created successfully.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the Bolt YAML plan
&lt;/h3&gt;

&lt;p&gt;In order to utilize plans in Bolt, we need to create a directory named plans.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir plans
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have our plans directory created we’ll outline what we want to accomplish as part of our plan. We’ll keep this plan as simple as possible to show how easy it is to use Puppet Bolt to make API calls. We’ll accomplish the following tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Write a static password to HashiCorp Vault (POST method)&lt;/li&gt;
&lt;li&gt;Read the static password created from HashiCorp Vault (GET method)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Interacting with the Vault API&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The API endpoint requires authentication, which is why we are passing an X-Vault-Token header with the Vault root token.&lt;/li&gt;
&lt;li&gt;The Vault API returns a JSON payload and the &lt;strong&gt;json_endpoint&lt;/strong&gt; parameter for the read step has been set to true to properly parse the returned JSON.&lt;/li&gt;
&lt;/ul&gt;
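For readers more familiar with raw HTTP clients, the write step above maps to an ordinary authenticated POST. As a minimal sketch (illustrative only; nothing is sent here), the same method, URL, and headers can be expressed with Python's standard urllib:

```python
import json
import urllib.request

# Mirror of the plan's write_password step: the same method, URL, and
# headers passed to the http_request task. We only build the request
# object to show its shape; no request is actually sent.
body = json.dumps({"data": {"password": "supersecurepassword"}}).encode()
req = urllib.request.Request(
    url="http://localhost:8200/v1/secret/data/boltsecret",
    data=body,
    method="POST",
    headers={
        "Content-Type": "application/json",
        "X-Vault-Token": "vaultsecret",  # dev-mode root token set earlier
    },
)

print(req.get_method())  # POST
```

The read step is the same request with `method: get` and no body.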

&lt;p&gt;We’ve got a plan of what we want to do and now we are ready to create the Bolt plan.&lt;/p&gt;

&lt;p&gt;Create a file named &lt;strong&gt;api.yaml&lt;/strong&gt; with the following content in the &lt;strong&gt;plans&lt;/strong&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
parameters:
  targets:
    type: TargetSpec

steps:
  - name: write_password
    task: http_request
    targets: $targets
    parameters:
      method: post
      body: '{"data": {"password": "supersecurepassword"}}'
      base_url: 'http://localhost:8200/v1/secret/data/boltsecret'
      headers:
        Content-Type: application/json
        X-Vault-Token: 'vaultsecret'
  - name: read_password
    task: http_request
    targets: $targets
    parameters:
      method: get
      base_url: 'http://localhost:8200/v1/secret/data/boltsecret'
      json_endpoint: true
      headers:
        Content-Type: application/json
        X-Vault-Token: 'vaultsecret'

return: $read_password.first.value['body']['data']['data']['password']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full JSON payload returned from HashiCorp Vault is displayed below to help explain the parsing used to return just the password value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "body": {
      "request_id": "e896d6c8-20c3-02ee-ccd7-b97778217acb",
      "lease_id": "",
      "renewable": false,
      "lease_duration": 0,
      "data": {
        "data": {
          "password": "supersecurepassword"
        },
        "metadata": {
          "created_time": "2020-10-31T17:06:54.890898Z",
          "deletion_time": "",
          "destroyed": false,
          "version": 1
        }
      },
      "wrap_info": null,
      "warnings": null,
      "auth": null
    },
    "status_code": 200
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
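The plan's return expression simply walks this nested structure: Vault's KV v2 engine nests the secret under `data.data`, which is why the lookup repeats `data`. The same traversal, sketched in Python against a trimmed copy of the payload above:

```python
# Trimmed copy of the Vault response shown above.
result = {
    "body": {
        "data": {
            "data": {"password": "supersecurepassword"},
            "metadata": {"version": 1},
        }
    },
    "status_code": 200,
}

# Equivalent of the plan's return expression:
# $read_password.first.value['body']['data']['data']['password']
password = result["body"]["data"]["data"]["password"]
print(password)  # supersecurepassword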



&lt;p&gt;Now that we’ve created our plan we can ensure that it’s recognized by Bolt by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt plan show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the plan registers properly the output should include a &lt;strong&gt;&lt;em&gt;restproject::api&lt;/em&gt;&lt;/strong&gt; entry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aggregate::count
aggregate::nodes
aggregate::targets
canary
facts
facts::external
facts::info
puppet_agent::run
puppetdb_fact
reboot
restproject::api  
secure_env_vars
terraform::apply
terraform::destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the plan registered we are now ready to run the plan by running the bolt plan run restproject::api command. The target for the plan is localhost as we want to run the API call from the machine we are running Bolt on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt plan run restproject::api --target localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the plan ran successfully it should have generated output similar to that displayed below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Starting: plan restproject::api
Starting: task http_request on localhost
Finished: task http_request with 0 failures in 0.26 sec
Starting: task http_request on localhost
Finished: task http_request with 0 failures in 0.26 sec
Finished: plan restproject::api in 0.57 sec
"supersecurepassword"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the plan completed successfully the value of the password created in the first step should be returned.&lt;/p&gt;

&lt;p&gt;This example walked through how to interact with REST API endpoints using the http_request task that is included with Puppet Bolt.&lt;/p&gt;

</description>
      <category>configurationmanagem</category>
      <category>puppetbolt</category>
      <category>devops</category>
    </item>
    <item>
      <title>Automate in YAML with Puppet Bolt</title>
      <dc:creator>Martez Reed</dc:creator>
      <pubDate>Wed, 21 Oct 2020 17:44:59 +0000</pubDate>
      <link>https://forem.com/puppet/automate-in-yaml-with-puppet-bolt-50p4</link>
      <guid>https://forem.com/puppet/automate-in-yaml-with-puppet-bolt-50p4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finf0cpfmsyizalh3eljo.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finf0cpfmsyizalh3eljo.jpeg" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Automate in YAML with Puppet Bolt&lt;/p&gt;

&lt;p&gt;The Puppet DSL or Domain Specific Language is one of the things most associated with Puppet. A number of other automation tools make use of a DSL such as HashiCorp Terraform which uses HCL or HashiCorp Configuration Language.&lt;/p&gt;

&lt;p&gt;One of the most interesting things about Puppet Bolt is its ability to support not only the Puppet DSL but also YAML. YAML plans are quicker to adopt, especially for those already familiar with the format.&lt;/p&gt;

&lt;p&gt;In this post we’ll take a look at how to quickly get started with a YAML plan by deploying a simple website using NGINX.&lt;/p&gt;

&lt;h3&gt;
  
  
  Initialize the Bolt project
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Ensure that the latest version of Puppet Bolt is installed before getting started.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Puppet Bolt utilizes &lt;strong&gt;Project&lt;/strong&gt; directories as launching points for running Bolt operations. Create a directory for our Puppet Bolt project named &lt;strong&gt;webapp&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir webapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the working directory to the &lt;strong&gt;webapp&lt;/strong&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd webapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have a directory for hosting our Bolt project, we need to initialize the project and also add the NGINX Puppet module from the Puppet Forge.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt project init --modules puppet-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command should generate output similar to that shown below if it ran successfully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Installing project modules

→ Resolving module dependencies, this may take a moment

→ Writing Puppetfile at
    /system/path/webapp/Puppetfile

→ Syncing modules from
    /system/path/webapp/Puppetfile to
    /system/path/webapp/modules

Successfully synced modules from /system/path/webapp/Puppetfile to /system/path/webapp/modules
Successfully created Bolt project at /system/path/webapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create the Bolt YAML plan
&lt;/h3&gt;

&lt;p&gt;In order to utilize plans in Bolt, we need to create a directory named plans.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir plans
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have our plans directory created we’ll plan out what we want to accomplish as part of our deployment. We’ll keep this plan as simple as possible to show how easy it is to use YAML with Puppet Bolt. We’ll accomplish the following tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install NGINX&lt;/li&gt;
&lt;li&gt;Create an HTML file using content specified&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We’ve got a plan of what we want to do and now we are ready to create the Bolt plan. We’ll dig into the basic syntax of a Bolt YAML plan to understand the following Bolt Plan.&lt;/p&gt;

&lt;p&gt;Create a file named &lt;strong&gt;deploy.yaml&lt;/strong&gt; with the following content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;parameters:
  targets:
    type: TargetSpec

steps:
  - name: installnginx
    targets: $targets
    resources:
      - class: nginx
  - name: deploycontent
    targets: $targets
    resources:
      - file: /usr/share/nginx/html/index.html
        parameters:
          ensure: present
          content: '&amp;lt;!DOCTYPE html&amp;gt;&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;My Puppet Bolt Site&amp;lt;/h1&amp;gt;&amp;lt;p&amp;gt;I used Bolt to deploy a website.&amp;lt;/p&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Plan Syntax
&lt;/h4&gt;

&lt;p&gt;The plan above includes two sections which are parameters and steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://puppet.com/docs/bolt/latest/writing_yaml_plans.html#parameters-key" rel="noopener noreferrer"&gt;Parameters&lt;/a&gt; allow us to pass values to the plan, in this case our plan accepts a parameter named &lt;strong&gt;targets&lt;/strong&gt; with a type of TargetSpec. This is used to pass the IP address or FQDN of the machine or machines that we want to run the plan against.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://puppet.com/docs/bolt/latest/writing_yaml_plans.html#steps-key" rel="noopener noreferrer"&gt;Steps&lt;/a&gt;, as the name implies, are the operations that we want to run against the machine. The plan above includes two resources steps. Bolt supports the following step types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://puppet.com/docs/bolt/latest/writing_yaml_plans.html#message-step" rel="noopener noreferrer"&gt;Message&lt;/a&gt;: The message step is used to print a message.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://puppet.com/docs/bolt/latest/writing_yaml_plans.html#command-step" rel="noopener noreferrer"&gt;Command&lt;/a&gt;: The command step is used to run a command against the target or targets.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://puppet.com/docs/bolt/latest/writing_yaml_plans.html#task-step" rel="noopener noreferrer"&gt;Task&lt;/a&gt;: The task step is used to run a Bolt task against the target or targets.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://puppet.com/docs/bolt/latest/writing_yaml_plans.html#script-step" rel="noopener noreferrer"&gt;Script&lt;/a&gt;: The script step is used to run a script against the target or targets.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://puppet.com/docs/bolt/latest/writing_yaml_plans.html#file-download-step" rel="noopener noreferrer"&gt;File Download&lt;/a&gt;: The file download step is used to download a file from the target or targets to the system that Bolt is running on.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://puppet.com/docs/bolt/latest/writing_yaml_plans.html#file-upload-step" rel="noopener noreferrer"&gt;File Upload&lt;/a&gt;: The file upload step is used to upload a file from the system that Bolt is running on to the target or targets.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://puppet.com/docs/bolt/latest/writing_yaml_plans.html#plan-step" rel="noopener noreferrer"&gt;Plan&lt;/a&gt;: The plan step is used to run other plans as part of a plan (nested plans).&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://puppet.com/docs/bolt/latest/writing_yaml_plans.html#resources-step" rel="noopener noreferrer"&gt;Resources&lt;/a&gt;: The resource step is used apply Puppet resources to the target or targets.&lt;/li&gt;
&lt;/ul&gt;
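As a rough sketch of two step types not used in the plan above (the step names here are illustrative, not part of the webapp plan), a message step and a command step look like this in YAML:

```yaml
steps:
  # Message step: prints a message to the Bolt output
  - message: "Starting the deployment"
  # Command step: runs a shell command on the targets
  - name: checknginx
    command: systemctl is-active nginx
    targets: $targets
```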

&lt;p&gt;Now that we’ve created our plan we can ensure that it’s recognized by Bolt by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt plan show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the plan registers properly the output should include a &lt;strong&gt;&lt;em&gt;webapp::deploy&lt;/em&gt;&lt;/strong&gt; entry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aggregate::count
aggregate::nodes
aggregate::targets
canary
facts
facts::external
facts::info
puppet_agent::run
puppetdb_fact
reboot
secure_env_vars
terraform::apply
terraform::destroy
webapp::deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the plan registered we are now ready to run the plan by running the bolt plan run webapp::deploy command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt plan run webapp::deploy --targets web01.grt.local --no-host-key-check --user root
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the plan ran successfully it should have generated output similar to that displayed below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Starting: plan webapp::deploy
Starting: install puppet and gather facts on 10.0.0.40
Finished: install puppet and gather facts with 0 failures in 12.3 sec
Starting: apply catalog on 10.0.0.40
Finished: apply catalog with 0 failures in 72.64 sec
Starting: install puppet and gather facts on 10.0.0.40
Finished: install puppet and gather facts with 0 failures in 7.01 sec
Starting: apply catalog on 10.0.0.40
Finished: apply catalog with 0 failures in 15.02 sec
Finished: plan webapp::deploy in 1 min, 48 sec
Plan completed successfully with no result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the command completes successfully we can check that everything worked by entering the IP address or FQDN of the Bolt target in a web browser. The site should show the following message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0lh04f0to1vfz0527d4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0lh04f0to1vfz0527d4.png" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have now successfully deployed a website using Puppet Bolt. The automation can be made more elaborate such as downloading the web files from a git repository or uploading a directory of files.&lt;/p&gt;

&lt;p&gt;Additional information about Puppet Bolt YAML plans can be found &lt;a href="https://puppet.com/docs/bolt/latest/writing_yaml_plans.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>puppet</category>
      <category>configurationmanagem</category>
      <category>devops</category>
      <category>puppetbolt</category>
    </item>
    <item>
      <title>Terraform is not Ansible or Puppet</title>
      <dc:creator>Martez Reed</dc:creator>
      <pubDate>Wed, 14 Oct 2020 13:44:53 +0000</pubDate>
      <link>https://forem.com/puppet/terraform-is-not-ansible-or-puppet-4d76</link>
      <guid>https://forem.com/puppet/terraform-is-not-ansible-or-puppet-4d76</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu9uceip9rbluk4p8im2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu9uceip9rbluk4p8im2.png" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Terraform is not Ansible or Puppet&lt;/p&gt;

&lt;p&gt;There’s a common misconception that Terraform does the exact same thing as Ansible, Puppet or other tools that fall into the configuration management category. The focus of this post is on detailing why this is a misconception even though some believe it to be fact.&lt;/p&gt;

&lt;p&gt;Before we get started HashiCorp pretty much settles the debate for us. The following is pulled directly from the HashiCorp Terraform documentation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Terraform is not a configuration management tool, and it allows existing tooling to focus on their strengths: bootstrapping and initializing resources. — &lt;a href="https://www.terraform.io/intro/vs/chef-puppet.html" rel="noopener noreferrer"&gt;https://www.terraform.io/intro/vs/chef-puppet.html&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This statement explains some of the ethos that has been followed by the Terraform team. Even with that, the delineation between the two is not readily apparent as there are things Terraform can do that Ansible or Puppet does and vice versa.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;So what is Terraform good at?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Terraform is great with APIs
&lt;/h3&gt;

&lt;p&gt;Terraform is built to interface with REST APIs: it creates resources by calling REST API endpoints to perform CRUD (create, read, update, delete) operations.&lt;/p&gt;

&lt;h4&gt;
  
  
  Cloud Infrastructure
&lt;/h4&gt;

&lt;p&gt;Terraform is an excellent tool for codifying cloud infrastructure such as AWS S3 buckets or Azure AKS clusters. Terraform also has providers for VMware vSphere and OpenStack to codify private cloud infrastructure. All of these provide a REST API for creating, reading, updating and deleting resources so Terraform is great at this.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;But can’t Ansible and Puppet create cloud resources?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Yes, Ansible and Puppet can also be used to create cloud resources. Which tool you use to provision cloud resources comes down to preference. Typically you would evaluate which tool supports all the resources you anticipate creating as well as how you like creating the resources with that tool.&lt;/p&gt;

&lt;h4&gt;
  
  
  Application Configuration
&lt;/h4&gt;

&lt;p&gt;This is where things can get really confusing. Most modern applications expose a REST API for configuring the application. Since we know that Terraform interacts with REST APIs then it can be used to configure applications.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;But can’t Ansible and Puppet also configure applications?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Yes, Ansible and Puppet are also capable of configuring applications. Where things diverge is that Ansible and Puppet, unlike Terraform, are ideally suited for installing applications on the operating system. This is also why it makes sense to use Ansible or Puppet to both install and configure the application; otherwise there could be ping-ponging between the tools in a provisioning pipeline (&lt;strong&gt;Terraform&lt;/strong&gt; [Infrastructure] -&amp;gt; &lt;strong&gt;Puppet&lt;/strong&gt; [Application Install] -&amp;gt; &lt;strong&gt;Terraform&lt;/strong&gt; [Application Configuration]). Technically Terraform can be used to install software or configure system settings, but based upon the HashiCorp statement above, that’s not the focus for Terraform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform is not great at installing software or configuring operating system level settings.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;But that’s not true; public cloud readily supports configuration at boot time by passing a script.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It is true that Terraform can pass scripts or commands that are executed on a virtual machine or cloud instance at boot time. This model works well for small scripts but can become unwieldy once the script grows substantially. Additionally, with some cloud-init or userdata integrations there is no direct feedback on the status of the script, which means the execution logs need to be shipped to a centralized logger, or you need to connect directly to the machine to review them.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;But Terraform has remote-exec provisioners, so I can just use one of those instead?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Using a remote-exec provisioner does provide that real-time execution feedback; however, HashiCorp recommends against using provisioners when possible.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Provisioners should only be used as a last resort. For most common situations there are better alternatives. — &lt;a href="https://www.terraform.io/docs/provisioners/remote-exec.html" rel="noopener noreferrer"&gt;https://www.terraform.io/docs/provisioners/remote-exec.html&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This isn’t to say that they aren’t used quite frequently, and with great success, but that they aren’t an ideal solution.&lt;/p&gt;

&lt;p&gt;This isn’t to say that Terraform can’t be used to install applications or configure operating system level settings, but that’s not what it is great at. Configuration management tools are great at those things, and I’ll walk through some of their benefits.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of configuration management
&lt;/h3&gt;

&lt;p&gt;Configuration management tools have a number of benefits in comparison to Bash or PowerShell scripts, but we’ll cover three that should be familiar to those who use Terraform.&lt;/p&gt;

&lt;h4&gt;
  
  
  Configuration As Code
&lt;/h4&gt;

&lt;p&gt;Terraform utilizes HCL (HashiCorp Configuration Language) to define resources. Defining configuration as code is a paradigm that many have adopted and love, which is why they use Terraform. Remote-exec provisioners and userdata integrations break from this model and rely on inline commands or scripts. Configuration management tools like Ansible and Puppet enable this “as code” paradigm to be used for system configuration as well.&lt;/p&gt;
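
&lt;p&gt;As a brief sketch of what that looks like (the nginx package and service are illustrative choices, not from a specific post), a few lines of Puppet describe a system’s desired configuration as code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative example: declaratively manage a package and its service
package { 'nginx':
  ensure =&amp;gt; installed,
}

service { 'nginx':
  ensure  =&amp;gt; running,
  enable  =&amp;gt; true,
  require =&amp;gt; Package['nginx'],
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;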

&lt;h4&gt;
  
  
  State management (Idempotency)
&lt;/h4&gt;

&lt;p&gt;Terraform utilizes the construct of state to provide idempotency. This means that when you run &lt;strong&gt;terraform plan&lt;/strong&gt; and the Terraform code matches the current state of the resource, you anticipate that Terraform won’t change anything.&lt;/p&gt;

&lt;p&gt;Idempotency is the ability for automation to be run multiple times while only changing what needs to be changed. This is how Terraform works: it verifies the current state of the resource and makes changes only when the state doesn’t match the configuration. Terraform remote-exec provisioners and userdata integrations sidestep idempotency by running the automation only once, during provisioning. Idempotency is a core aspect of configuration management tools and allows the automation to be run multiple times.&lt;/p&gt;

&lt;h4&gt;
  
  
  State Correction (Desired State)
&lt;/h4&gt;

&lt;p&gt;One of the reasons people love declarative tools is the ability to change a configuration by simply changing the value of a parameter. When you change a setting in Terraform and run &lt;strong&gt;terraform plan&lt;/strong&gt; you expect Terraform to show that there’s a change to be made. Once you run &lt;strong&gt;terraform apply&lt;/strong&gt; you expect the setting to be changed to align with the newly declared value. The advantage of this is that you don’t have to concern yourself with all of the details of how to actually update a resource.&lt;/p&gt;

&lt;p&gt;Configuration management tools can update resources in a similar manner and can also be used to correct configuration drift, which is typically caused by manual changes or by another tool making changes. This capability helps ensure that systems stay in alignment with the declared configuration and is especially useful for security and regulatory compliance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Ultimately, Terraform “can” do a lot of what Ansible or Puppet does, but many of the reasons for using Terraform are the same reasons to use a configuration management tool in conjunction with it.&lt;/p&gt;

</description>
      <category>puppet</category>
      <category>ansible</category>
      <category>terraform</category>
      <category>devops</category>
    </item>
    <item>
      <title>Configuration Management in a Service Mesh World</title>
      <dc:creator>Martez Reed</dc:creator>
      <pubDate>Fri, 02 Oct 2020 17:13:21 +0000</pubDate>
      <link>https://forem.com/puppet/configuration-management-in-a-service-mesh-world-1hj5</link>
      <guid>https://forem.com/puppet/configuration-management-in-a-service-mesh-world-1hj5</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0f6drslof4kxjw0mmcg7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0f6drslof4kxjw0mmcg7.png" width="800" height="442"&gt;&lt;/a&gt;CNCF Service Mesh&lt;/p&gt;


&lt;p&gt;In the IT space one could say that Kubernetes is eating the IT world, and it’s bringing an entire ecosystem to the table. One of the dinner guests is service mesh.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In a world with Kubernetes and containers and microservices, why would we need configuration management?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Many organizations are embarking on the journey to Kubernetes nirvana and taking advantage of the many benefits that a service mesh can provide. As with a public cloud journey, it’s when you get past the low-hanging fruit, or net-new applications, that the sheer complexity starts to become apparent.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We need to have our Kubernetes workloads integrate with our “legacy” applications in a meaningful way.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Applications running on bare metal and virtual machines will now need to be connected to the service mesh to take advantage of all the benefits we love about it. Integrating these workloads can be as simple as installing the service mesh agents on the VM or bare metal machine, or can involve standing up service mesh infrastructure for the non-Kubernetes workloads and federating the two environments.&lt;/p&gt;

&lt;p&gt;Many of the major service mesh projects already support non-Kubernetes workloads to varying degrees.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.consul.io/" rel="noopener noreferrer"&gt;HashiCorp Consul&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kuma.io/" rel="noopener noreferrer"&gt;Kuma&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://istio.io/" rel="noopener noreferrer"&gt;Istio&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Great, we can just install the service mesh agent on the virtual machine or bare metal machine. No reason for configuration management.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Unfortunately the process to bootstrap a VM or bare metal machine isn’t that simple for most service meshes. There are a number of operations needed to bootstrap the agent that are handled automagically for workloads running on Kubernetes but not for non-Kubernetes workloads. Configuration management tools such as Puppet, Chef and Ansible are ideally suited to automating that bootstrap process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Certificate management
&lt;/h3&gt;

&lt;p&gt;Mutual TLS (mTLS) between service endpoints is one of the major benefits of service meshes. It enables encrypted communication between services as well as the ability to restrict which services can talk based upon their identity. Many of the service meshes come with a built-in Certificate Authority (CA) to handle the certificate management process, but most are built with the assumption that the primary use case is Kubernetes workloads, leaving certificate distribution for non-Kubernetes workloads as a manual process. External certificate authorities are also supported, but they bring a completely different certificate bootstrap process that needs to be automated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuration Management’s Role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Configuration management tools can help with the automated generation and distribution of the certificates used with the service mesh. They can also make integration with external certificate authorities like HashiCorp Vault simpler, thanks to the various existing integrations between those authorities and configuration management tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  Micro segmentation
&lt;/h3&gt;

&lt;p&gt;Micro segmentation is essentially granting workloads or services the ability to talk to other services on a need-to-communicate basis. The common example is a three-tier application stack with web, app and database tiers. The web servers don’t need to send traffic to one another or to the database servers. Similarly, the database servers shouldn’t need to send traffic to the web servers. This helps mitigate unfettered lateral movement in the event of a security breach.&lt;/p&gt;

&lt;h4&gt;
  
  
  Service Mesh Policies
&lt;/h4&gt;

&lt;p&gt;Service meshes enable the ability to restrict which services are allowed to communicate with another service. In this way we can ensure that a particular service can communicate with its database. This is useful, but it only controls service-to-service communication, not which pods can communicate within the environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Kubernetes Network Policy
&lt;/h4&gt;

&lt;p&gt;In order to achieve micro segmentation for Kubernetes workloads we need to leverage Kubernetes network policies, or a similar construct, to restrict ingress and egress traffic from the pod to only what is absolutely necessary. This helps ensure that not every Kubernetes pod is able to communicate with, for example, the Oracle production database that’s running on bare metal hardware.&lt;/p&gt;
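
&lt;p&gt;As a rough sketch (the namespace, labels and port here are hypothetical), a network policy that only lets the app tier reach the database pods might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative policy: only pods labeled tier=app may reach the database pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-app
  namespace: demo
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: app
      ports:
        - protocol: TCP
          port: 5432
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;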

&lt;h4&gt;
  
  
  VM/Bare Metal Host Firewall
&lt;/h4&gt;

&lt;p&gt;The ability to do this has been available for years with host-based firewalls like the Windows Firewall and iptables for Linux, along with third-party host firewalls. The challenge has always been the willingness to manage the necessary rules. Public cloud has made this much more appealing with security groups, as have network policies in Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuration Management’s Role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Managing host firewall rules at any real scale is nearly impossible without some management tool. Configuration management tools can be used to define the appropriate firewall rules, in a manner similar to Kubernetes network policies, and lock down east-west traffic in an environment.&lt;/p&gt;
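
&lt;p&gt;As an illustrative sketch, assuming the puppetlabs/firewall module is installed (the rule name, port and source network are made up), a host firewall rule in Puppet could look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative rule: only the app subnet may reach the database port
firewall { '100 allow app tier to database':
  proto  =&amp;gt; 'tcp',
  dport  =&amp;gt; 5432,
  source =&amp;gt; '10.0.2.0/24',
  action =&amp;gt; 'accept',
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;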

&lt;p&gt;Using configuration management to help bridge Kubernetes and non-Kubernetes services makes the adoption of Kubernetes in a hybrid world much easier.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>configurationmanagem</category>
      <category>kubernetes</category>
      <category>servicemesh</category>
    </item>
    <item>
      <title>Puppet Bolt Dynamic Inventory for Azure</title>
      <dc:creator>Martez Reed</dc:creator>
      <pubDate>Fri, 04 Sep 2020 13:00:08 +0000</pubDate>
      <link>https://forem.com/puppet/puppet-bolt-dynamic-inventory-for-azure-1lfl</link>
      <guid>https://forem.com/puppet/puppet-bolt-dynamic-inventory-for-azure-1lfl</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctwlvrztewu2jd7b1dxr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctwlvrztewu2jd7b1dxr.jpeg" width="800" height="460"&gt;&lt;/a&gt;Microsoft Azure Logo&lt;/p&gt;

&lt;p&gt;Public cloud workloads are often very dynamic in nature and sometimes there isn’t a master list of all the instances that have been provisioned. There are times that you need to run a command against all the workloads or a subset of workloads based upon some node metadata such as an instance or virtual machine tag. In this blog post we’ll take a look at how Puppet Bolt integrates with Microsoft Azure.&lt;/p&gt;

&lt;p&gt;Puppet Bolt includes an &lt;a href="https://forge.puppet.com/puppetlabs/azure_inventory" rel="noopener noreferrer"&gt;Azure inventory plugin&lt;/a&gt; that enables the dynamic discovery of workloads in an Azure environment. The following virtual machine attributes can be used for targeting or classifying virtual machines.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;resource group&lt;/li&gt;
&lt;li&gt;scale set&lt;/li&gt;
&lt;li&gt;location&lt;/li&gt;
&lt;li&gt;tags&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Bolt will only target virtual machines and virtual machine scale sets that have a public IP address. The uri of the target will be set to the public IP address and the name will be set to either the fully qualified domain name if one exists or the instance name otherwise.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Generate Azure Credentials
&lt;/h3&gt;

&lt;p&gt;The first thing we need to do is to generate Azure credentials for Puppet Bolt to use when searching for virtual machines. The following command generates the necessary credentials assuming you are logged into Azure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az ad sp create-for-rbac --query "{ client_id: appId, client_secret: password, tenant_id: tenant }"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Azure credentials should be displayed on the screen similar to those displayed below.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Safeguard the generated credentials; they should not be shared.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "client_id": "b27e2468-e9ad-5ea8-c043-196fc8d2q1mw",
  "client_secret": "91f28cwg-49e3-1qr2-825a-42fne279fd01",
  "tenant_id": "tg4b7md3-630k-8664-2t45-d1w923dww21w"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Inventory File
&lt;/h3&gt;

&lt;p&gt;Now that we’ve got our Azure credentials we’re ready to create our Bolt inventory file. In this example we’re specifying the Azure location and the Azure resource group for our azure-vms Bolt inventory group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# inventory.yaml
version: 2
groups:
  - name: azure-vms
    targets:
      - _plugin: azure_inventory
        tenant_id: tg4b7md3-630k-8664-2t45-d1w923dww21w
        client_id: b27e2468-e9ad-5ea8-c043-196fc8d2q1mw
        client_secret: 91f28cwg-49e3-1qr2-825a-42fne279fd01
        subscription_id: 9a656783-3215-4627-b1e2-c8973fh5r21w
        location: eastus
        resource_group: bolt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we’ve defined the criteria for our Bolt inventory group, we can run the &lt;strong&gt;bolt inventory show&lt;/strong&gt; command to list the virtual machines that Bolt found for the group or groups specified. In this example we are listing the virtual machines from all groups.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt inventory show --targets all -i inventory.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command should return the names of the Azure virtual machines that were found based upon the attributes provided.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nixagent
1 target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This unlocks the ability to quickly run commands or scripts against a dynamic group of virtual machines in an Azure environment.&lt;/p&gt;
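
&lt;p&gt;For example, an ad hoc command can be run against just the azure-vms group defined above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt command run 'uptime' --targets azure-vms -i inventory.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;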

</description>
      <category>devops</category>
      <category>bolt</category>
      <category>puppet</category>
      <category>azure</category>
    </item>
    <item>
      <title>Puppet Azure Key Vault Integration</title>
      <dc:creator>Martez Reed</dc:creator>
      <pubDate>Mon, 24 Aug 2020 14:05:03 +0000</pubDate>
      <link>https://forem.com/puppet/puppet-azure-key-vault-integration-328j</link>
      <guid>https://forem.com/puppet/puppet-azure-key-vault-integration-328j</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9ro597xdl9uz8qjflkv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9ro597xdl9uz8qjflkv.png" width="800" height="569"&gt;&lt;/a&gt;Puppet Azure Key Vault Integration&lt;/p&gt;

&lt;p&gt;Automation is a necessity in today’s IT landscape but with great power comes great responsibility. Properly handling sensitive data such as machine credentials, API keys and passwords isn’t always easy when developing an automated solution. One common challenge is getting a secret from secure storage to an application’s configuration file for use by the application.&lt;/p&gt;

&lt;p&gt;Azure helps simplify this process for Azure virtual machines with the use of Azure Key Vault and Azure Managed Identity. Azure Key Vault is a cloud service used to manage keys, secrets, and certificates. Azure Managed Identities enable a virtual machine in this case to be granted permission to the Azure Key Vault using an assigned identity.&lt;/p&gt;

&lt;p&gt;This blog post takes a look at how Puppet can integrate with native Azure services to simplify writing a secret stored in Azure Key Vault to an application configuration file on an Azure virtual machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Azure Key Vault Puppet Module
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://forge.puppet.com/tragiccode/azure_key_vault" rel="noopener noreferrer"&gt;azure_key_vault&lt;/a&gt; forge module includes support for server side secrets retrieval using hiera and agent side retrieval using deferred functions. The module utilizes the Azure metadata service to retrieve the access credentials for interacting with Azure Key Vault to access secrets stored in Azure Key Vault.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Server side retrieval (hiera)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The module includes support for a custom Hiera backend. This enables Puppet code to retrieve sensitive data from Azure Key Vault during a Hiera lookup. The Puppet master fetches the sensitive data from Azure Key Vault and sends the unencrypted data to the agent over the secure agent communication channel. The focus of this post is on agent side retrieval; additional information about server side retrieval can be found in the module’s documentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent side retrieval (deferred function)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Puppet 6 includes what are known as &lt;a href="https://puppet.com/docs/puppet/latest/deferring_functions.html" rel="noopener noreferrer"&gt;deferred functions&lt;/a&gt;, which enable functions to run on the agent node as part of a Puppet run. Utilizing deferred functions has a number of advantages over server side retrieval with Hiera. One of the biggest is that the sensitive data doesn’t need to be decrypted on the master and then delivered to the agent node.&lt;/p&gt;

&lt;h3&gt;
  
  
  Azure Key Vault Integration
&lt;/h3&gt;

&lt;p&gt;Now that we understand how deferred functions work and what the azure_key_vault module supports, we’ll look at an example of the integration.&lt;/p&gt;

&lt;p&gt;The example is an application configuration file dynamically populated using Puppet with the password being fetched from Azure Key Vault by the agent node.&lt;/p&gt;

&lt;h4&gt;
  
  
  Azure Authentication
&lt;/h4&gt;

&lt;p&gt;The agent node virtual machine is expected to be running on Azure and to use a system-assigned managed identity or a user-assigned identity with the appropriate permissions to access the Azure Key Vault.&lt;/p&gt;
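
&lt;p&gt;As a sketch of granting that access with the Azure CLI (the resource group and VM names below are placeholders; the vault name matches the demo manifest later in this post), you could assign the VM’s system identity read access to secrets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Look up the VM's system-assigned identity (resource group/VM names are placeholders)
principalId=$(az vm show -g demo-rg -n grtapp01 --query identity.principalId -o tsv)

# Grant the identity permission to read secrets from the vault
az keyvault set-policy --name grtdevvault --object-id "$principalId" --secret-permissions get list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;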

&lt;h4&gt;
  
  
  Demo App Module
&lt;/h4&gt;

&lt;p&gt;In our demo application Puppet module we need to create an &lt;strong&gt;init.pp&lt;/strong&gt; manifest and a &lt;strong&gt;config.yaml.epp&lt;/strong&gt; template file in the module’s &lt;strong&gt;files&lt;/strong&gt; directory.&lt;/p&gt;

&lt;p&gt;The following example highlights the Azure Key Vault integration to populate the password in a configuration file. The example uses Puppet’s EPP templating to dynamically create and populate the file content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
host: &amp;lt;%= $host %&amp;gt;
password: &amp;lt;%= $admin_password_secret.unwrap %&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The template file is placed in the module’s files directory instead of the templates directory since the template file must be present on the agent node for the deferred rendering. &lt;a href="https://puppet.com/docs/puppet/6.17/template_with_deferred_values.html" rel="noopener noreferrer"&gt;https://puppet.com/docs/puppet/6.17/template_with_deferred_values.html&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The following manifest shows how the password is being set by a value retrieved from Azure Key Vault.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class demoapp (
  String $key_vault_name = 'grtdevvault',
  String $key_vault_secret_name = 'app-password',
  String $host = 'grtapp01.grt.local',
) {

$password = Deferred('azure_key_vault::secret',
                ["$key_vault_name","$key_vault_secret_name",{"metadata_api_version"=&amp;gt;"2018-04-02","vault_api_version"=&amp;gt;"2016-10-01"}])

$hash_variables = {
    'admin_password_secret' =&amp;gt; $password,
    'host' =&amp;gt; $host,
  }

file { '/opt/demoapp':
    ensure =&amp;gt; directory,
  }

file { '/opt/demoapp/config.yaml':
    ensure =&amp;gt; file,
    content =&amp;gt; Deferred('inline_epp',
                [file('demoapp/config.yaml.epp'), $hash_variables]),
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that our Puppet code has been added, the next Puppet agent run will write the file to the agent node’s filesystem, with the unencrypted version of the password in the config.yaml file for use by the application, similar to what is shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
host: grtapp01.grt.local
password: Secret12345
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This blog post covered a simple but powerful use case for ensuring that sensitive data such as passwords is stored and handled in a secure manner.&lt;/p&gt;

&lt;h4&gt;
  
  
  References
&lt;/h4&gt;

&lt;p&gt;Puppet EPP Templates with deferred functions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://puppet.com/docs/puppet/6.17/template_with_deferred_values.html" rel="noopener noreferrer"&gt;https://puppet.com/docs/puppet/6.17/template_with_deferred_values.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Azure virtual machine managed identity&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/key-vault/general/tutorial-python-virtual-machine" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/key-vault/general/tutorial-python-virtual-machine&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>puppet</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deploying Puppet Enterprise Agents with HashiCorp Terraform on Azure VMs</title>
      <dc:creator>Martez Reed</dc:creator>
      <pubDate>Tue, 18 Aug 2020 15:28:16 +0000</pubDate>
      <link>https://forem.com/puppet/deploying-puppet-enterprise-agents-with-hashicorp-terraform-on-azure-vms-jk6</link>
      <guid>https://forem.com/puppet/deploying-puppet-enterprise-agents-with-hashicorp-terraform-on-azure-vms-jk6</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctwlvrztewu2jd7b1dxr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctwlvrztewu2jd7b1dxr.jpeg" width="800" height="460"&gt;&lt;/a&gt;Microsoft Azure Logo&lt;/p&gt;

&lt;p&gt;HashiCorp Terraform is an open source Infrastructure as Code (IaC) tool that is widely used to deploy cloud infrastructure in public clouds such as AWS and Azure, along with on-premises VMware vSphere environments.&lt;/p&gt;

&lt;p&gt;One of the challenges is developing a method for bootstrapping the instances with configuration management agents such as the Puppet Enterprise agent. In this blog post we cover a simple and easy way to install the Puppet Enterprise agent on Azure virtual machines provisioned with HashiCorp Terraform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Azure Virtual Machine Extensions
&lt;/h3&gt;

&lt;p&gt;Microsoft Azure supports what are known as &lt;a href="https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/overview" rel="noopener noreferrer"&gt;virtual machine extensions&lt;/a&gt;, which are small applications that provide post-deployment configuration and automation on Azure VMs. There are a number of extensions available from companies such as Datadog, New Relic and others, created to wrap the installation and configuration of their respective agents.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Custom Script Extension&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In addition to extensions created by vendors, Microsoft Azure provides a custom script extension that allows arbitrary commands or scripts to be executed during the post-provisioning stage. The HashiCorp Terraform Azure provider includes a resource for virtual machine extensions that can be used to quickly install the Puppet Enterprise agent on a virtual machine during the provisioning process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Puppet Enterprise Agent Installation
&lt;/h3&gt;

&lt;p&gt;Puppet Enterprise provides a simple method for installing the Puppet Enterprise agent using the &lt;a href="https://puppet.com/docs/pe/2019.8/installing_agents.html#using_install_script" rel="noopener noreferrer"&gt;PE agent install script&lt;/a&gt;. Using this script enables us to easily provide additional agent configuration information, such as trusted facts that are embedded in the CSR or a pre-shared key used for automatically signing the agent SSL certificate. This method assumes that a certificate autosigning process is in place to allow the certificate to be automatically signed during the bootstrap process.&lt;/p&gt;

&lt;p&gt;If sensitive information such as the pre-shared key is passed as part of the provisioning code, it should be properly secured. There are several options for doing so.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a custom wrapper script that dynamically fetches the sensitive information from Azure Key Vault&lt;/li&gt;
&lt;li&gt;Create a custom wrapper script that dynamically fetches the sensitive information from a HashiCorp Vault deployment&lt;/li&gt;
&lt;li&gt;Embed the sensitive information in a custom wrapper script that is securely stored in an Azure Blob&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Linux&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Puppet Enterprise agent installation script for Linux uses Bash, and an example is shown below:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The hostname should be replaced with the FQDN of your Puppet Enterprise master or compiler load balancer&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -k https://puppetmaster.grt.local:8140/packages/current/install.bash] | sudo bash -s custom_attributes:challengePassword=PASSWORD123 extension_requests:pp_role=web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we’ve got our installation command we just need to add it to an azurerm_virtual_machine_extension Terraform resource.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_virtual_machine_extension" "linux_pe_install" {
  name ="PEAgentInstallLinux"
  virtual_machine_id = azurerm_linux_virtual_machine.example.id
  publisher ="Microsoft.Azure.Extensions"
  type ="CustomScript"
  type_handler_version ="2.0"

  settings =  &amp;lt;&amp;lt; SETTINGS
    {
        "commandToExecute": "curl -k https://puppetmaster.grt.local:8140/packages/current/install.bash | sudo bash -s custom_attributes:challengePassword=PASSWORD123 extension_requests:pp_role=web"
    }
SETTINGS

  tags ={
    environment ="Production"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Windows&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Puppet Enterprise agent installation script for Windows uses PowerShell, and an example is shown below:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The hostname should be replaced with the FQDN of your Puppet Enterprise master or compiler load balancer&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[System.Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; [Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}; $webClient = New-Object System.Net.WebClient; $webClient.DownloadFile('https://puppetmaster.grt.local:8140/packages/current/install.ps1', 'install.ps1'); .\install.ps1 custom_attributes:challengePassword=PASSWORD123 extension_requests:pp_role=database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we’ve got our installation command we just need to add it to an azurerm_virtual_machine_extension Terraform resource.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_virtual_machine_extension" "windows_pe_install" {
  name                 = "PEAgentInstallWindows"
  virtual_machine_id   = azurerm_windows_virtual_machine.example.id
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.10"

  settings = &amp;lt;&amp;lt;SETTINGS
    {
        "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -Command \"[System.Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; [Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}; $webClient = New-Object System.Net.WebClient; $webClient.DownloadFile('https://puppetmaster.grt.local:8140/packages/current/install.ps1', 'install.ps1'); .\\install.ps1 custom_attributes:challengePassword=PASSWORD123 extension_requests:pp_role=database\""
    }
SETTINGS

  tags = {
    environment = "Production"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
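

&lt;p&gt;Embedding a long PowerShell one-liner inside a JSON heredoc makes the quoting fragile. As a sketch of an alternative (shown here with an abbreviated version of the same install command), Terraform’s built-in &lt;strong&gt;jsonencode&lt;/strong&gt; function can build the &lt;strong&gt;settings&lt;/strong&gt; JSON and handle the escaping of quotes and backslashes for us:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  # jsonencode produces valid JSON and escapes the embedded command for us
  settings = jsonencode({
    commandToExecute = "powershell.exe -ExecutionPolicy Unrestricted -Command \"$webClient = New-Object System.Net.WebClient; $webClient.DownloadFile('https://puppetmaster.grt.local:8140/packages/current/install.ps1', 'install.ps1'); .\\install.ps1\""
  })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;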



&lt;p&gt;There are certainly more complex configurations that can be developed to install the Puppet Enterprise agent. This post focused on providing a simple method to get started with deploying Puppet Enterprise agents using HashiCorp Terraform.&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/overview" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/overview&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_extension.html" rel="noopener noreferrer"&gt;https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_extension.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>hashicorp</category>
      <category>puppet</category>
    </item>
    <item>
      <title>Creating Azure VM Images With Packer and Puppet Bolt</title>
      <dc:creator>Martez Reed</dc:creator>
      <pubDate>Tue, 11 Aug 2020 14:36:38 +0000</pubDate>
      <link>https://forem.com/puppet/creating-azure-vm-images-with-packer-and-puppet-bolt-a7p</link>
      <guid>https://forem.com/puppet/creating-azure-vm-images-with-packer-and-puppet-bolt-a7p</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctwlvrztewu2jd7b1dxr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctwlvrztewu2jd7b1dxr.jpeg" width="800" height="460"&gt;&lt;/a&gt;Microsoft Azure Logo&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.packer.io/" rel="noopener noreferrer"&gt;HashiCorp Packer&lt;/a&gt; is a free and open source tool for creating golden images for multiple platforms from a single source configuration. Packer makes it easy to codify VM images for Microsoft Azure.&lt;/p&gt;

&lt;p&gt;In this blog post we’ll look at how to use HashiCorp Packer and Puppet Bolt to define our VM templates in code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Puppet Bolt Packer Plugin
&lt;/h3&gt;

&lt;p&gt;HashiCorp Packer doesn’t natively integrate with Puppet Bolt, so a Packer plugin has been created to bridge the two. To begin using the plugin, download the latest release bundle for your operating system from the &lt;a href="https://github.com/martezr/packer-provisioner-puppet-bolt/releases/latest" rel="noopener noreferrer"&gt;GitHub releases page&lt;/a&gt; and unpack it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftywurx7l074gp0zcx7qc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftywurx7l074gp0zcx7qc.png" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the packer-provisioner-puppet-bolt binary has been unpacked, it should be moved to a path on the system where Packer can find it, as covered in the link below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.packer.io/docs/extending/plugins#installing-plugins" rel="noopener noreferrer"&gt;https://www.packer.io/docs/extending/plugins#installing-plugins&lt;/a&gt;&lt;/p&gt;
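
&lt;p&gt;For example, on Linux or macOS the unpacked binary can be placed in the user-level Packer plugin directory (a sketch; the full set of search paths is covered in the link above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Place the plugin binary where Packer searches for third-party plugins
mkdir -p ~/.packer.d/plugins
mv packer-provisioner-puppet-bolt ~/.packer.d/plugins/
chmod +x ~/.packer.d/plugins/packer-provisioner-puppet-bolt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;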

&lt;h3&gt;
  
  
  Puppet Bolt Plan
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Ensure that the latest version of Puppet Bolt is installed before getting started.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this post we’ll be using Puppet Bolt to install NGINX as a simple example of the integration between Packer and Bolt. The Bolt YAML plan below installs the epel-release repository and nginx, and enables the nginx service to start at boot.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;parameters:
  targets:
    type: TargetSpec

steps:
  - command: yum -y install epel-release
    targets: $targets
    description: "Install epel-release"
  - command: yum -y install nginx
    targets: $targets
    description: "Install nginx"
  - command: systemctl enable nginx
    targets: $targets
    description: "Start nginx on boot"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
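

&lt;p&gt;Before wiring the plan into Packer, it can be smoke-tested directly with the Bolt CLI against any reachable host. The target address below is a placeholder, and this assumes the plan is saved as the &lt;strong&gt;azure::web&lt;/strong&gt; plan in a module directory named &lt;strong&gt;Bolt&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bolt plan run azure::web --targets ssh://192.0.2.10 --user centos --run-as root --modulepath ./Bolt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;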



&lt;h3&gt;
  
  
  Packer Template
&lt;/h3&gt;

&lt;p&gt;We now need to create the Packer template that defines the settings for our VM image, such as the operating system and hardware configuration. Before creating the template, we’ll generate Azure credentials (if we don’t already have them) and create a dedicated resource group for the VM image generated by Packer.&lt;/p&gt;

&lt;p&gt;Create a new Azure resource group for the VM image, or use an existing one. We’ll reference this resource group in our Packer template later on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az group create -n packerbolt -l centralus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need to generate Azure credentials for Packer to use when building the VM image. The following command generates the necessary credentials assuming you are logged into Azure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az ad sp create-for-rbac --query "{ client_id: appId, client_secret: password, tenant_id: tenant }"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Azure credentials should be displayed on the screen similar to those displayed below.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Safeguard the generated credentials; they should not be shared.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "client_id": "b27e2468-e9ad-5ea8-c043-196fc8d2q1mw",
  "client_secret": "91f28cwg-49e3-1qr2-825a-42fne279fd01",
  "tenant_id": "tg4b7md3-630k-8664-2t45-d1w923dww21w"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can pass the credentials on the command line, include them in a variables file, or add them as environment variables as shown below. Note that the Packer template also expects &lt;strong&gt;ARM_SUBSCRIPTION_ID&lt;/strong&gt;, the ID of your Azure subscription.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export ARM_CLIENT_ID="b27e2468-e9ad-5ea8-c043-196fc8d2q1mw"
export ARM_CLIENT_SECRET="91f28cwg-49e3-1qr2-825a-42fne279fd01"
export ARM_SUBSCRIPTION_ID="2a646183-6919-4320-a1f3-c6985fc5d87e"
export ARM_TENANT_ID="tg4b7md3-630k-8664-2t45-d1w923dww21w"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the Azure credentials set we can now create our Packer template file to define our VM image. The &lt;strong&gt;managed_image_resource_group_name&lt;/strong&gt; field is set to the Azure resource group we created earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "variables": {
    "client_id": "{{env `ARM_CLIENT_ID`}}",
    "client_secret": "{{env `ARM_CLIENT_SECRET`}}",
    "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}",
    "tenant_id": "{{env `ARM_TENANT_ID`}}",
    "ssh_user": "centos",
    "ssh_pass": "{{env `ARM_SSH_PASS`}}"
  },
  "builders": [{
    "type": "azure-arm",

    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "subscription_id": "{{user `subscription_id`}}",
    "tenant_id": "{{user `tenant_id`}}",

    "managed_image_resource_group_name": "packerbolt",
    "managed_image_name": "MyCentOSImage",

    "ssh_username": "{{user `ssh_user`}}",
    "ssh_password": "{{user `ssh_pass`}}",

    "os_type": "Linux",
    "image_publisher": "OpenLogic",
    "image_offer": "CentOS",
    "image_sku": "8_2",
    "image_version": "latest",
    "ssh_pty": "true",

    "location": "Central US",
    "vm_size": "Standard_B1MS"
  }],
  "provisioners": [
    {
      "type": "puppet-bolt",
      "user": "centos",
      "run_as": "root",
      "bolt_module_path": "/Users/martez.reed/Documents/GitHub/puppet-on-azure/Bolt",
      "bolt_plan": "azure::web",
      "bolt_params": {}
    },
    {
      "execute_command": "echo '{{user `ssh_pass`}}' | {{ .Vars }} sudo -S -E sh '{{ .Path }}'",
      "inline": [
        "yum update -y",
        "/usr/sbin/waagent -force -deprovision+user &amp;amp;&amp;amp; export HISTSIZE=0 &amp;amp;&amp;amp; sync"
      ],
      "inline_shebang": "/bin/sh -x",
      "type": "shell",
      "skip_clean": true
    }
]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Puppet Bolt provisioner section from the full template above specifies a few settings: the Bolt plan to run, the path where Bolt should look for our modules, and the authentication and privilege escalation details.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "type": "puppet-bolt",
  "user": "centos",
  "run_as": "root",
  "bolt_module_path": "/Users/martez.reed/Documents/GitHub/puppet-on-azure/Bolt",
  "bolt_plan": "azure::web",
  "bolt_params": {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the Packer template created, we can now build our Azure image by running the packer build command and providing the name of the template file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;packer build centos8.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
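

&lt;p&gt;Before kicking off a full build, the template syntax can optionally be checked with Packer’s validate subcommand:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;packer validate centos8.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;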



&lt;p&gt;The build will take a few minutes and should display output similar to that shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;azure-arm: output will be in this color.

==&amp;gt; azure-arm: Running builder ...
==&amp;gt; azure-arm: Getting tokens using client secret
==&amp;gt; azure-arm: Getting tokens using client secret
    azure-arm: Creating Azure Resource Manager (ARM) client ...
==&amp;gt; azure-arm: WARNING: Zone resiliency may not be supported in Central US, checkout the docs at [https://docs.microsoft.com/en-us/azure/availability-zones/](https://docs.microsoft.com/en-us/azure/availability-zones/)
==&amp;gt; azure-arm: Creating resource group ...
==&amp;gt; azure-arm: -&amp;gt; ResourceGroupName : 'pkr-Resource-Group-rivksir0po'
==&amp;gt; azure-arm: -&amp;gt; Location : 'Central US'
==&amp;gt; azure-arm: -&amp;gt; Tags :
==&amp;gt; azure-arm: Validating deployment template ...
==&amp;gt; azure-arm: -&amp;gt; ResourceGroupName : 'pkr-Resource-Group-rivksir0po'
==&amp;gt; azure-arm: -&amp;gt; DeploymentName : 'pkrdprivksir0po'
==&amp;gt; azure-arm: Deploying deployment template ...
==&amp;gt; azure-arm: -&amp;gt; ResourceGroupName : 'pkr-Resource-Group-rivksir0po'
==&amp;gt; azure-arm: -&amp;gt; DeploymentName : 'pkrdprivksir0po'
==&amp;gt; azure-arm: Getting the VM's IP address ...
==&amp;gt; azure-arm: -&amp;gt; ResourceGroupName : 'pkr-Resource-Group-rivksir0po'
==&amp;gt; azure-arm: -&amp;gt; PublicIPAddressName : 'pkriprivksir0po'
==&amp;gt; azure-arm: -&amp;gt; NicName : 'pkrnirivksir0po'
==&amp;gt; azure-arm: -&amp;gt; Network Connection : 'PublicEndpoint'
==&amp;gt; azure-arm: -&amp;gt; IP Address : '23.101.127.134'
==&amp;gt; azure-arm: Waiting for SSH to become available...
==&amp;gt; azure-arm: Connected to SSH!
==&amp;gt; azure-arm: Provisioning with Puppet Bolt...
==&amp;gt; azure-arm: Executing Bolt: bolt plan run azure::web --params {} --modulepath /Users/martez.reed/Documents/GitHub/puppet-on-azure/Bolt --targets ssh://127.0.0.1:65059 --user centos --no-host-key-check --private-key /var/folders/ly/bwpnd5gn5tv7549rgn80x4jw0000z_/T/packer-provisioner-bolt.164237326.key --run-as root
    azure-arm: Starting: plan azure::web
    azure-arm: Starting: Install epel-release on ssh://127.0.0.1:65059
    azure-arm: Finished: Install epel-release with 0 failures in 11.75 sec
    azure-arm: Starting: Install nginx on ssh://127.0.0.1:65059
    azure-arm: Finished: Install nginx with 0 failures in 17.38 sec
    azure-arm: Finished: plan azure::web in 29.15 sec
    azure-arm: Plan completed successfully with no result
==&amp;gt; azure-arm: Provisioning with shell script: /var/folders/ly/bwpnd5gn5tv7549rgn80x4jw0000z_/T/packer-shell758099055
==&amp;gt; azure-arm: Querying the machine's properties ...
==&amp;gt; azure-arm: -&amp;gt; ResourceGroupName : 'pkr-Resource-Group-rivksir0po'
==&amp;gt; azure-arm: -&amp;gt; ComputeName : 'pkrvmrivksir0po'
==&amp;gt; azure-arm: -&amp;gt; Managed OS Disk : '/subscriptions/2a646183-6919-4320-a1f3-c6985fc5d87e/resourceGroups/PKR-RESOURCE-GROUP-RIVKSIR0PO/providers/Microsoft.Compute/disks/pkrosrivksir0po'
==&amp;gt; azure-arm: Querying the machine's additional disks properties ...
==&amp;gt; azure-arm: -&amp;gt; ResourceGroupName : 'pkr-Resource-Group-rivksir0po'
==&amp;gt; azure-arm: -&amp;gt; ComputeName : 'pkrvmrivksir0po'
==&amp;gt; azure-arm: Powering off machine ...
==&amp;gt; azure-arm: -&amp;gt; ResourceGroupName : 'pkr-Resource-Group-rivksir0po'
==&amp;gt; azure-arm: -&amp;gt; ComputeName : 'pkrvmrivksir0po'
==&amp;gt; azure-arm: Capturing image ...
==&amp;gt; azure-arm: -&amp;gt; Compute ResourceGroupName : 'pkr-Resource-Group-rivksir0po'
==&amp;gt; azure-arm: -&amp;gt; Compute Name : 'pkrvmrivksir0po'
==&amp;gt; azure-arm: -&amp;gt; Compute Location : 'Central US'
==&amp;gt; azure-arm: -&amp;gt; Image ResourceGroupName : 'packerbolt'
==&amp;gt; azure-arm: -&amp;gt; Image Name : 'MyCentOSImage'
==&amp;gt; azure-arm: -&amp;gt; Image Location : 'Central US'
==&amp;gt; azure-arm: Deleting resource group ...
==&amp;gt; azure-arm: -&amp;gt; ResourceGroupName : 'pkr-Resource-Group-rivksir0po'
==&amp;gt; azure-arm: 
==&amp;gt; azure-arm: The resource group was created by Packer, deleting ...
==&amp;gt; azure-arm: Deleting the temporary OS disk ...
==&amp;gt; azure-arm: -&amp;gt; OS Disk : skipping, managed disk was used...
==&amp;gt; azure-arm: Deleting the temporary Additional disk ...
==&amp;gt; azure-arm: -&amp;gt; Additional Disk : skipping, managed disk was used...
==&amp;gt; azure-arm: Removing the created Deployment object: 'pkrdprivksir0po'
==&amp;gt; azure-arm: ERROR: -&amp;gt; ResourceGroupNotFound : Resource group 'pkr-Resource-Group-rivksir0po' could not be found.
==&amp;gt; azure-arm:
Build 'azure-arm' finished.

==&amp;gt; Builds finished. The artifacts of successful builds are:
--&amp;gt; azure-arm: Azure.ResourceManagement.VMImage:

OSType: Linux
ManagedImageResourceGroupName: packerbolt
ManagedImageName: MyCentOSImage
ManagedImageId: /subscriptions/2a646183-6919-4320-a1f3-c6985fc5d87e/resourceGroups/packerbolt/providers/Microsoft.Compute/images/MyCentOSImage
ManagedImageLocation: Central US
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
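

&lt;p&gt;The captured managed image can then be used to launch VMs. As a sketch (the VM name here is hypothetical), the Azure CLI can create a VM directly from the image by name, since the image lives in the same &lt;strong&gt;packerbolt&lt;/strong&gt; resource group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a VM from the managed image captured by Packer
az vm create \
  --resource-group packerbolt \
  --name boltweb01 \
  --image MyCentOSImage \
  --admin-username centos \
  --generate-ssh-keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;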



&lt;p&gt;The Puppet Bolt plan can be much more complex, but the goal of this post was to showcase how easy it is to integrate the two tools.&lt;/p&gt;

</description>
      <category>puppetbolt</category>
      <category>puppet</category>
      <category>devops</category>
      <category>azure</category>
    </item>
  </channel>
</rss>
