<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kentaro Wakayama</title>
    <description>The latest articles on Forem by Kentaro Wakayama (@coder_society).</description>
    <link>https://forem.com/coder_society</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F495871%2F4519cec8-4a77-4af8-8ef2-84df9578cb9f.jpg</url>
      <title>Forem: Kentaro Wakayama</title>
      <link>https://forem.com/coder_society</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/coder_society"/>
    <language>en</language>
    <item>
      <title>Getting Started with AWS Amplify</title>
      <dc:creator>Kentaro Wakayama</dc:creator>
      <pubDate>Tue, 28 Dec 2021 15:28:35 +0000</pubDate>
      <link>https://forem.com/coder_society/getting-started-with-aws-amplify-3cdf</link>
      <guid>https://forem.com/coder_society/getting-started-with-aws-amplify-3cdf</guid>
      <description>&lt;p&gt;Serverless cloud infrastructure is the next step in building apps. But if you’ve tried to navigate the vast number of services in the AWS console, you know that using the cloud is easier said than done. Today’s developers are overwhelmed with the amount of services AWS offers.&lt;/p&gt;

&lt;p&gt;The solution? AWS Amplify, which helps developers to easily build and deploy complete mobile and web apps by providing a collection of CLI tools, libraries, frameworks, and cloud services.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does AWS Amplify Work? 
&lt;/h2&gt;

&lt;p&gt;Your main interaction point when using Amplify is the CLI tool, which comes with many commands that help you set up and maintain a serverless project. The Amplify CLI can create infrastructure by generating and deploying CloudFormation code. It also includes code generators for mobile and web apps that use the following languages: JavaScript/TypeScript, Dart, Swift, Java and GraphQL.&lt;/p&gt;

&lt;p&gt;This way, you get new AWS resources deployed to the cloud, configured with best practices, and the boilerplate code needed to connect your frontend with these resources.&lt;/p&gt;

&lt;p&gt;Amplify also has its own set of cloud services which you can use to set up and manage your apps, including web hosting based on Amazon S3 and Amazon CloudFront, the Amplify Console and Amplify Admin UI. The Amplify Console is used to get insights into your app’s workings after you deploy it, while the Amplify Admin UI is a visual alternative to the Amplify CLI, where you can create backends in the browser. &lt;/p&gt;

&lt;p&gt;There is also a large set of frontend libraries and components that help you connect with the AWS services and ease the integration of Amplify with frontend frameworks like React or Vue. This includes a library for authentication with &lt;a href="https://aws.amazon.com/cognito"&gt;Amazon Cognito&lt;/a&gt;, AWS’ own identity management service, and a GraphQL client to connect to &lt;a href="https://aws.amazon.com/appsync"&gt;AppSync&lt;/a&gt;, a hosted GraphQL API service.&lt;/p&gt;

&lt;p&gt;With Amplify DataStore, Amplify even includes a relatively high-level library that eases the pain of setting up offline functionality and real-time synchronization for your apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building an Image Gallery
&lt;/h2&gt;

&lt;p&gt;To get a better understanding of what Amplify can do, let’s build something with it! An image gallery web app is a simple task for Amplify. This tutorial shows you how to use the Amplify auth, hosting, and storage plugins, and how to generate most of the code with the Amplify CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Prerequisites
&lt;/h3&gt;

&lt;p&gt;This tutorial requires the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS account&lt;/li&gt;
&lt;li&gt;AWS CLI&lt;/li&gt;
&lt;li&gt;Node.js &amp;amp; NPM&lt;/li&gt;
&lt;li&gt;Amplify CLI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you use the Cloud9 IDE to follow along, you only have to install and configure the Amplify CLI manually; everything else is set up correctly out of the box. Make sure the Amplify CLI has been configured to use your AWS credentials via &lt;code&gt;amplify configure&lt;/code&gt;. AWS has a detailed &lt;a href="https://docs.amplify.aws/start/getting-started/installation/q/integration/js/"&gt;guide on installing and configuring the Amplify CLI&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Initializing the Sample Application
&lt;/h3&gt;

&lt;p&gt;First, create a new directory for your app with two subdirectories.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;image-gallery
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;image-gallery
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;src
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;dist
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;src&lt;/code&gt; directory is used to store the JavaScript code, while the &lt;code&gt;dist&lt;/code&gt; directory is used by Amplify to bundle, minify and upload the JavaScript and HTML to the cloud.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;package.json&lt;/code&gt; file in the project’s root directory with this content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"image-gallery"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0.1.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;    &lt;/span&gt;&lt;span class="nl"&gt;"build"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"esbuild src/app.js --bundle --minify --define:global=window --outfile=dist/app.js"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="nl"&gt;"dependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;    &lt;/span&gt;&lt;span class="nl"&gt;"aws-amplify"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"latest"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="nl"&gt;"devDependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;    &lt;/span&gt;&lt;span class="nl"&gt;"esbuild"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^0.13.15"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The build script uses esbuild to bundle and minify the Amplify SDK together with your application code for deployment. The &lt;code&gt;amplify publish&lt;/code&gt; CLI command will automatically call the build script and search the &lt;code&gt;dist&lt;/code&gt; directory for files to upload to Amplify’s hosting service.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Initializing Amplify
&lt;/h3&gt;

&lt;p&gt;Initialize Amplify in the project with the below command. When asked, choose the defaults.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;amplify init &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will set up basic Amplify-related resources in the cloud, including two IAM roles and an S3 bucket for deployment-related data. It will also generate a &lt;code&gt;src/aws-exports.js&lt;/code&gt; file that contains all the credentials your frontend needs to connect to the services you’ll deploy later. This file is updated with every call to the Amplify CLI, so you should never change it manually.&lt;/p&gt;
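To give a sense of what that generated file contains, here is a rough, illustrative sketch. The field names follow the usual &lt;code&gt;aws-exports.js&lt;/code&gt; conventions, but every value below is a placeholder — the real file is generated and maintained entirely by the Amplify CLI.

```javascript
// Illustrative sketch of a generated src/aws-exports.js — never write
// or edit this by hand; the Amplify CLI generates and updates it.
const awsmobile = {
  aws_project_region: 'us-east-1',
  aws_cognito_identity_pool_id: '…',
  aws_user_pools_id: '…',
  aws_user_pools_web_client_id: '…',
  aws_user_files_s3_bucket: '…',
  aws_user_files_s3_bucket_region: 'us-east-1',
}

export default awsmobile
```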

&lt;p&gt;Next, add &lt;code&gt;hosting&lt;/code&gt;, &lt;code&gt;auth&lt;/code&gt;, and &lt;code&gt;storage&lt;/code&gt;, in that order.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;amplify add hosting
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Select the defaults, which are listed below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;? Select the plugin module to execute: Hosting with Amplify Console (Managed hosting with custom domains, Continuous deployment)
? Choose a type: Manual deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;amplify add auth  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CLI will again ask a few questions, each with a default answer. The defaults are fine here too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Do you want to use the default authentication and security configuration? Default configuration
How do you want users to be able to sign in? Username
Do you want to configure advanced settings? No, I am done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;amplify add storage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The storage category needs a bit of customization. The default configuration only allows authenticated users to access the storage, but you also want unauthenticated users to view the images. For this, you’ll need the following configurations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;? Select from one of the below mentioned services: Content (Images, audio, video, etc.)
✔ Provide a friendly name for your resource that will be used to label this category in the project: ...
✔ Provide bucket name: ... 
✔ Who should have access: Auth and guest users
✔ What kind of access do you want for Authenticated users? create/update, read, delete
✔ What kind of access do you want for Guest users? read
✔ Do you want to add a Lambda Trigger for your S3 Bucket? no
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the storage category, select &lt;code&gt;Auth and guest users&lt;/code&gt; so that both signed-in and anonymous visitors can access the bucket. Authenticated users need permission to &lt;code&gt;create/update&lt;/code&gt;, &lt;code&gt;read&lt;/code&gt;, and &lt;code&gt;delete&lt;/code&gt; files, while guest users only need permission to &lt;code&gt;read&lt;/code&gt; files.&lt;/p&gt;
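Under the hood, Amplify Storage maps each access level to a key prefix inside the S3 bucket: public objects live under `public/`, while protected and private objects are additionally namespaced by the user's Cognito identity ID. The helper below is a sketch of that layout for illustration only — it is not part of the Amplify API.

```javascript
// Sketch of how Amplify Storage lays out objects in the S3 bucket
// by access level; identityId is the current user's Cognito identity.
function s3Key(level, identityId, key) {
  switch (level) {
    case 'public':
      return `public/${key}`
    case 'protected':
      return `protected/${identityId}/${key}`
    case 'private':
      return `private/${identityId}/${key}`
    default:
      throw new Error(`unknown access level: ${level}`)
  }
}

console.log(s3Key('public', null, 'cat.jpg')) // public/cat.jpg
```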

&lt;h3&gt;
  
  
  4. Writing the Source Code
&lt;/h3&gt;

&lt;p&gt;Create an &lt;code&gt;index.html&lt;/code&gt; file in the &lt;code&gt;dist&lt;/code&gt; directory with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;!DOCTYPE html&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;charset=&lt;/span&gt;&lt;span class="s"&gt;"utf-8"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;Amplify Image Gallery&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;link&lt;/span&gt; &lt;span class="na"&gt;rel=&lt;/span&gt;&lt;span class="s"&gt;"stylesheet"&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;"//unpkg.com/bamboo.css@1.3.7/dist/dark.min.css"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;style&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100%&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/style&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Amplify Image Gallery&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"signin-status"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;Sign Up or Sign In.&lt;span class="nt"&gt;&amp;lt;/div&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;br&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;h2&amp;gt;&lt;/span&gt;Step 1: Sign Up&lt;span class="nt"&gt;&amp;lt;/h2&amp;gt;&lt;/span&gt;

Username &lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"signup-username"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&amp;lt;br&amp;gt;&lt;/span&gt;

Email &lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"signup-email"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&amp;lt;br&amp;gt;&lt;/span&gt;

Password &lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"password"&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"signup-password"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&amp;lt;br&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;
  Passwords must be at least 8 characters long and contain one number and
  one special character.
&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;button&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"signup-button"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;Sign Up With Email&lt;span class="nt"&gt;&amp;lt;/button&amp;gt;&amp;lt;br&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;h2&amp;gt;&lt;/span&gt;Step 2: Confirm Sign Up&lt;span class="nt"&gt;&amp;lt;/h2&amp;gt;&lt;/span&gt;

Confirmation Code &lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"signup-confirmation-code"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;button&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"confirm-signup-button"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;Confirm Sign Up&lt;span class="nt"&gt;&amp;lt;/button&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;h2&amp;gt;&lt;/span&gt;Step 3: Sign In&lt;span class="nt"&gt;&amp;lt;/h2&amp;gt;&lt;/span&gt;

Username &lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"signin-username"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&amp;lt;br&amp;gt;&lt;/span&gt;

Password &lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"password"&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"signin-password"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;button&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"signin-button"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;Sign In&lt;span class="nt"&gt;&amp;lt;/button&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;h2&amp;gt;&lt;/span&gt;Step 4: Upload image&lt;span class="nt"&gt;&amp;lt;/h2&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"file"&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"upload-input"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;h2&amp;gt;&lt;/span&gt;Images&lt;span class="nt"&gt;&amp;lt;/h2&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"gallery"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/div&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"app.js"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This HTML page has a few buttons for the sign-up process. They will be hooked up with logic in the following JavaScript file.&lt;/p&gt;

&lt;p&gt;Create an &lt;code&gt;app.js&lt;/code&gt; file in the &lt;code&gt;src&lt;/code&gt; directory with this content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Amplify&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Auth&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Storage&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws-amplify&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;awsconfig&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./aws-exports&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="nx"&gt;Amplify&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;awsconfig&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getElement&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;statusDiv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;signin-status&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;refreshGallery&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;galleryDiv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;gallery&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="nx"&gt;galleryDiv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;innerHTML&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;files&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Storage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;files&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;img&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;src&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Storage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nx"&gt;galleryDiv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;appendChild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;refreshGallery&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nx"&gt;getElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;signup-button&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;click&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;username&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;signup-username&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;email&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;signup-email&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;signup-password&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;signUp&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;attributes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;email&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;

  &lt;span class="nx"&gt;statusDiv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;innerText&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`Check "&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;" inbox for a confirmation code.`&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="nx"&gt;getElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;confirm-signup-button&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;click&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;code&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;signup-confirmation-code&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;username&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;signup-username&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;confirmSignUp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;code&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="nx"&gt;statusDiv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;innerText&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Sinup confirmed. You can sign in now.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="nx"&gt;getElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;signin-button&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;click&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;username&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;signin-username&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;signin-password&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;signIn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="nx"&gt;statusDiv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;innerText&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Signed in as &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="nx"&gt;getElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;upload-input&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;change&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;files&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Storage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;refreshGallery&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;app.js&lt;/code&gt; file fills the HTML with life. The &lt;code&gt;aws-exports.js&lt;/code&gt; file contains all the credentials your client needs to interact with the deployed infrastructure. You’ll use it to initialize the Amplify SDK.&lt;/p&gt;
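
&lt;p&gt;As a minimal sketch of that initialization (assuming the SDK’s default export and the generated &lt;code&gt;aws-exports.js&lt;/code&gt; file in the same directory):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;import Amplify from 'aws-amplify'
import awsExports from './aws-exports'

// Point the SDK at the infrastructure the Amplify CLI deployed
Amplify.configure(awsExports)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;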

&lt;p&gt;The &lt;code&gt;refreshGallery&lt;/code&gt; function fetches a list of all the names of the files you uploaded to AWS Amplify Storage, which is essentially an S3 bucket. Then, it uses the &lt;code&gt;Storage.get&lt;/code&gt; method to generate signed URLs that can be put into &lt;code&gt;img&lt;/code&gt; HTML elements to display your images.&lt;/p&gt;

&lt;p&gt;Next, event listeners are added to all buttons to manage the authentication flow. The Amplify SDK offers an Auth object with various methods to sign up, sign in, and confirm email codes. A sign-in is required to use the upload feature later.&lt;/p&gt;
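
&lt;p&gt;A rough sketch of that flow with the Amplify &lt;code&gt;Auth&lt;/code&gt; object (variable names are illustrative, and the calls belong inside async handlers):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;import { Auth } from 'aws-amplify'

// Register a new user; Cognito then emails a confirmation code
await Auth.signUp({ username: email, password })

// Validate the emailed code to activate the account
await Auth.confirmSignUp(email, code)

// Sign in; afterwards, uploads via Storage.put are allowed
await Auth.signIn(email, password)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;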

&lt;p&gt;To install the Amplify SDK for JavaScript and ESBuild, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;npm i
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the libraries are installed, you can deploy a serverless infrastructure with the Amplify CLI and publish the changes to the AWS cloud:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;amplify publish
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will deploy the serverless infrastructure, then package and upload your frontend code. It can take a few minutes to complete because many resources have to be provisioned.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Using the App
&lt;/h3&gt;

&lt;p&gt;After the deployment is finished, Amplify will present you with a URL. The app will look like the image in Figure 1 below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BV-zddQ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.codersociety.com/uploads/aws-amplify-image-gallery.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BV-zddQ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.codersociety.com/uploads/aws-amplify-image-gallery.png" alt="" width="880" height="1028"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1: Image gallery app&lt;/p&gt;

&lt;p&gt;Open the URL in a browser and simply follow the instructions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign up with your email&lt;/li&gt;
&lt;li&gt;Enter the confirmation code you received from AWS Cognito and validate it&lt;/li&gt;
&lt;li&gt;Sign in&lt;/li&gt;
&lt;li&gt;Upload an image&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The image will be sent directly to an S3 bucket that Amplify created when you set up the storage category with the CLI. The Amplify SDK is then used to generate signed URLs, which allow you to access the images via HTTP. The whole code for loading the images is in the &lt;code&gt;refreshGallery&lt;/code&gt; function inside the &lt;code&gt;src/app.js&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;refreshGallery&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;galleryDiv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;gallery&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="nx"&gt;galleryDiv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;innerHTML&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;files&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Storage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;files&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createElement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;img&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;src&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Storage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;galleryDiv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;appendChild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  6. Cleanup
&lt;/h3&gt;

&lt;p&gt;After you’ve deployed and used the system, you can delete it with just one Amplify CLI command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;amplify delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will delete all local files Amplify generated and destroy the CloudFormation templates and all related resources deployed in the cloud.&lt;/p&gt;

&lt;p&gt;After running the above command, check the &lt;a href="https://s3.console.aws.amazon.com/s3/home"&gt;S3 console&lt;/a&gt; for buckets related to the project. To prevent data loss, Amplify (and, in turn, CloudFormation) won’t delete them automatically.&lt;/p&gt;

&lt;p&gt;Also, the IAM user Amplify created to deploy resources will remain after you call the delete command. You’ll find it in your &lt;a href="https://console.aws.amazon.com/iamv2/home?#/users"&gt;IAM dashboard&lt;/a&gt;, where you have to delete it manually.&lt;/p&gt;
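
&lt;p&gt;If you prefer the command line over the dashboard, the AWS CLI can do the same (the user name below is a placeholder; any access keys and attached policies must be removed before the user can be deleted):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ aws iam list-users --query 'Users[].UserName'
$ aws iam delete-user --user-name &amp;lt;amplify-cli-user&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;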

&lt;h2&gt;
  
  
  What About the Pricing?
&lt;/h2&gt;

&lt;p&gt;If you’ve ever used services like Firebase or Heroku, you probably know that solutions that ease developers’ interactions with the cloud can become expensive, especially when scaling up later. Usually, these solutions are built as totally stand-alone services, specially crafted for one purpose: making the cloud easier to use.&lt;/p&gt;

&lt;p&gt;AWS went in a different direction with Amplify. It’s built on the AWS infrastructure services you already know, so you pay exactly what you would pay using those services without Amplify. And since the services it uses are serverless, they all offer on-demand payment options. If you build a system that nobody wants to use, you don’t have to pay for the unused resources.&lt;/p&gt;

&lt;p&gt;If you build an Amplify-based app, check out the &lt;a href="https://calculator.aws/#/"&gt;AWS Pricing Calculator&lt;/a&gt; to estimate how much it will cost you. All resources follow the same pricing models with and without Amplify.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;AWS Amplify is an exciting approach to serverless infrastructure. It caters mainly to frontend and mobile developers, so its projects can be maintained even if you don’t have enough full-stack or backend developers at hand.&lt;/p&gt;

&lt;p&gt;Amplify differs from solutions like Firebase in that Amplify is just an abstraction layer above existing serverless AWS services. Besides the Amplify console and the Amplify Admin UI, which manage your app, it doesn’t introduce special services to run your app. This is also why the costs of running an app with Amplify are the same as when configuring all underlying services manually.&lt;/p&gt;

&lt;p&gt;For our latest insights and updates, &lt;a href="https://www.linkedin.com/company/codersociety/"&gt;follow us on LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>amplify</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Kubernetes Logging in Production</title>
      <dc:creator>Kentaro Wakayama</dc:creator>
      <pubDate>Tue, 26 Oct 2021 09:23:33 +0000</pubDate>
      <link>https://forem.com/coder_society/kubernetes-logging-in-production-1ld3</link>
      <guid>https://forem.com/coder_society/kubernetes-logging-in-production-1ld3</guid>
<description>&lt;p&gt;Historically, in monolithic architectures, logs were stored directly on bare metal or virtual machines. They never left the machine disk, and the operations team would log in to each machine to check the logs as needed.&lt;/p&gt;

&lt;p&gt;This worked on long-lived machines, but machines in the cloud are ephemeral. As more companies run their services on containers and orchestrate deployments with Kubernetes, logs can no longer be stored on machines and implementing a log management strategy is of the utmost importance.&lt;/p&gt;

&lt;p&gt;Logs are an effective way of debugging and monitoring your applications, and they need to be stored on a separate backend where they can be queried and analyzed in case of pod or node failures. These separate backends include systems like Elasticsearch, GCP’s Stackdriver, and AWS’ Cloudwatch.&lt;/p&gt;

&lt;p&gt;Storing logs off of the cluster in a storage backend is called cluster-level logging. In this article we’ll discuss how to implement this approach in your own Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logging Architectures
&lt;/h2&gt;

&lt;p&gt;In a Kubernetes cluster there are two main log sources: your application and the system components.&lt;/p&gt;

&lt;p&gt;Your application runs as a container in the Kubernetes cluster and writes its logs to the stdout and stderr streams. The container runtime, Docker in this case, picks up both streams, and in a Kubernetes cluster they are written to a JSON file on the cluster node.&lt;/p&gt;
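
&lt;p&gt;With Docker’s &lt;code&gt;json-file&lt;/code&gt; logging driver, each line in that file records one log message, roughly in this shape (the message and timestamp are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{"log":"GET /healthz 200\n","stream":"stdout","time":"2021-10-26T09:23:33.000000000Z"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;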

&lt;p&gt;These container logs can be fetched anytime with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs podname
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The other source of logs is the system components. Some of the system components (namely &lt;code&gt;kube-scheduler&lt;/code&gt; and &lt;code&gt;kube-proxy&lt;/code&gt;) run as containers and follow the same logging principles as your application.&lt;/p&gt;

&lt;p&gt;The other system components (the &lt;code&gt;kubelet&lt;/code&gt; and the container runtime itself) run as native services. If &lt;code&gt;systemd&lt;/code&gt; is available on the machine, the components write their logs to &lt;code&gt;journald&lt;/code&gt;; otherwise, they write &lt;code&gt;.log&lt;/code&gt; files to the &lt;code&gt;/var/log&lt;/code&gt; directory.&lt;/p&gt;
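
&lt;p&gt;You can inspect these native-service logs directly on a node, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# On a node with systemd, the kubelet logs live in journald
journalctl -u kubelet

# Otherwise, look for .log files under /var/log
ls /var/log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;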

&lt;p&gt;Now that we understand which components of your application and cluster generate logs and where they’re stored, let’s look at some common patterns to offload these logs to separate storage systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logging Patterns
&lt;/h2&gt;

&lt;p&gt;The two most prominent patterns for collecting logs are the sidecar pattern and the DaemonSet pattern.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. DaemonSet pattern
&lt;/h3&gt;

&lt;p&gt;In the DaemonSet pattern, logging agents are deployed as pods via the DaemonSet resource in Kubernetes. Deploying a DaemonSet ensures that each node in the cluster has one pod with a logging agent running. This logging agent is configured to read the logs from the &lt;code&gt;/var/log&lt;/code&gt; directory and send them to the storage backend. You can see a diagram of this configuration in figure 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TzJHu8lE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.codersociety.com/uploads/kubernetes-logging-daemonset.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TzJHu8lE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.codersociety.com/uploads/kubernetes-logging-daemonset.png" alt="DaemonSet pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1: A logging agent running per node via a DaemonSet&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Sidecar pattern
&lt;/h3&gt;

&lt;p&gt;Alternatively, in the sidecar pattern, a dedicated container runs alongside every application container in the same pod. This sidecar can be of two types: a streaming sidecar or a logging agent sidecar.&lt;/p&gt;

&lt;p&gt;The streaming sidecar is used when you are running an application that writes the logs to a file instead of stdout/stderr streams, or one that writes the logs in a nonstandard format. In that case, you can use a streaming sidecar container to publish the logs from the file to its own stdout/stderr stream, which can then be picked up by Kubernetes itself.&lt;/p&gt;

&lt;p&gt;The streaming sidecar can also bring parity to the log structure by transforming the log messages to standard log format. You can see this pattern in figure 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--plvsdl6_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.codersociety.com/uploads/kubernetes-logging-streaming-sidecar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--plvsdl6_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.codersociety.com/uploads/kubernetes-logging-streaming-sidecar.png" alt="Streaming sidecar pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2: Streaming sidecar pattern&lt;/p&gt;

&lt;p&gt;Another approach is the logging agent sidecar, where the sidecar itself ships the logs to the storage backend. Each pod contains a logging agent like Fluentd or Filebeat, which captures the logs from the application container and sends them directly to the storage backend, as illustrated in figure 3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z3bIP7Sn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.codersociety.com/uploads/kubernetes-logging-logging-agent-sidecar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z3bIP7Sn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.codersociety.com/uploads/kubernetes-logging-logging-agent-sidecar.png" alt="Logging agent sidecar pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 3: Logging agent sidecar pattern  &lt;/p&gt;

&lt;h2&gt;
  
  
  Pros and Cons
&lt;/h2&gt;

&lt;p&gt;Now that we’ve gone over both the DaemonSet and sidecar approaches, let’s get acquainted with the pros and cons of each.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. DaemonSet (Node Level)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Node-level logging is easier to implement because it hooks into the existing file-based logging, and it is less resource-intensive than a sidecar approach because fewer containers run per node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The logs are available via the kubectl command for debugging, as the log files are accessible to the kubelet, which returns the contents of the log file.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Less flexibility in supporting different log structures or applications that write to log files instead of streams. You would need to modify the application log structure to achieve parity, or handle the difference in your storage backend.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Since they’re stored as JSON files on the node disk, logs can’t be held forever. You need to have a log rotation mechanism in place to recycle old logs. If you are using the Container Runtime Interface (CRI), the kubelet takes care of rotating the logs and no explicit solution needs to be implemented.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
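
&lt;p&gt;With a CRI runtime, that rotation can be tuned in the kubelet configuration; a minimal sketch (the values here are illustrative, not recommendations):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate a container's log file at 10Mi, keeping at most 5 files
containerLogMaxSize: 10Mi
containerLogMaxFiles: 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;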

&lt;h3&gt;
  
  
  2. Sidecar
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You have the flexibility to customize sidecars per application container. For example, an application might not have the ability to write to &lt;code&gt;stdout/stderr&lt;/code&gt;, or it might have some different logging format. In these cases, a sidecar container can bring parity to the system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you’re using a logging agent sidecar without streaming, you don't need to rotate the logs because no logs are being stored on the node disk.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Running a sidecar for each application container is quite resource intensive when compared to node-level pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Adding a sidecar to each deployment creates an extra layer of complexity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you’re using a streaming sidecar for an application that writes its logs to files, you’ll use double the storage for the same logs because you’ll be duplicating the entries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you’re using a logging agent sidecar without streaming, you’ll lose the ability to access logs via &lt;code&gt;kubectl&lt;/code&gt;. This is because &lt;code&gt;kubelet&lt;/code&gt; no longer has access to the JSON logs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;With a logging agent sidecar you also need a node-level agent, otherwise you won’t be able to collect the system component logs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Putting Theory into Practice
&lt;/h2&gt;

&lt;p&gt;Now that we’ve looked at the possible patterns for logging in a Kubernetes cluster, let’s put them into action. We’ll deploy dummy containers generating logs and create Kubernetes resources to implement the logging patterns we discussed above.&lt;/p&gt;

&lt;p&gt;For this example, we’ll use Fluentd as the logging agent, Elasticsearch as the logging backend, and Kibana for visualization. We’ll install Elasticsearch and Kibana into the same cluster using Helm charts. Note, however, that your storage backend should not live on the same cluster; we’re doing so here for demo purposes only. Thanks to Fluentd’s pluggable architecture, it supports various sinks, so the Elasticsearch backend can be replaced by any cloud-native solution, including Stackdriver or Cloudwatch.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Installing Elasticsearch and Kibana
&lt;/h3&gt;

&lt;p&gt;We will deploy Elasticsearch and Kibana using the official Helm charts (&lt;a href="https://github.com/elastic/helm-charts/blob/master/elasticsearch"&gt;Elasticsearch&lt;/a&gt;, &lt;a href="https://github.com/elastic/helm-charts/blob/master/kibana"&gt;Kibana&lt;/a&gt;). Installing via Helm requires the helm binary on your path; installing Helm itself is outside the scope of this post.&lt;/p&gt;

&lt;p&gt;Let’s start by adding the Elastic Helm repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add elastic https://helm.elastic.co
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we’ll install the Elasticsearch and Kibana charts into our cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;elasticsearch elastic/elasticsearch
helm &lt;span class="nb"&gt;install &lt;/span&gt;kibana elastic/kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This will install the latest version of Elasticsearch and Kibana in your cluster, which can then be used as the storage backend for your logs.&lt;/p&gt;

&lt;p&gt;We used the default chart values here, but when installing in production you can change any parameter to fit your needs.&lt;/p&gt;

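
&lt;p&gt;To check that both charts came up, and to reach Kibana locally (the service name below assumes the default chart values), you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods
# Forward Kibana to http://localhost:5601
kubectl port-forward svc/kibana-kibana 5601:5601
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;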
&lt;h3&gt;
  
  
  2. DaemonSet
&lt;/h3&gt;

&lt;p&gt;We will deploy Fluentd as a DaemonSet. To keep the verbosity low, we won’t create a separate ServiceAccount and ClusterRole, but in a production environment, Fluentd pods should run with a dedicated service account that has limited access.&lt;/p&gt;

&lt;p&gt;You can deploy Fluentd as a DaemonSet by applying the following Kubernetes resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;extensions/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DaemonSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluentd&lt;/span&gt;
&lt;span class="na"&gt;  namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
&lt;span class="na"&gt;  labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;    k8s-app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluentd-logger&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;    metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;      labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;        k8s-app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluentd-logger&lt;/span&gt;
&lt;span class="na"&gt;    spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;      containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;      - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluentd&lt;/span&gt;
&lt;span class="na"&gt;        image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent/fluentd-kubernetes-daemonset:elasticsearch&lt;/span&gt;
&lt;span class="na"&gt;        env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;        - name:  FLUENT\_ELASTICSEARCH\_HOST&lt;/span&gt;
&lt;span class="na"&gt;          value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;elasticsearch-master"&lt;/span&gt;
&lt;span class="s"&gt;        - name:  FLUENT\_ELASTICSEARCH\_PORT&lt;/span&gt;
&lt;span class="na"&gt;          value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9200"&lt;/span&gt;
&lt;span class="na"&gt;        volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;        - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlog&lt;/span&gt;
&lt;span class="na"&gt;          mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log&lt;/span&gt;
&lt;span class="na"&gt;        - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dockerlogs&lt;/span&gt;
&lt;span class="na"&gt;          mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/docker/containers&lt;/span&gt;
&lt;span class="na"&gt;          readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;      volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;      - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlog&lt;/span&gt;
&lt;span class="na"&gt;        hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;          path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log&lt;/span&gt;
&lt;span class="na"&gt;      - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dockerlogs&lt;/span&gt;
&lt;span class="na"&gt;        hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;          path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/docker/containers&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, we’re mounting two volumes: one at &lt;code&gt;/var/log&lt;/code&gt; and another at &lt;code&gt;/var/lib/docker/containers&lt;/code&gt;, where the system components and the Docker runtime write their logs, respectively.&lt;/p&gt;

&lt;p&gt;The image we are using is already configured with smart defaults to be used with DaemonSet, but &lt;a href="https://hub.docker.com/r/fluent/fluentd-kubernetes-daemonset"&gt;you can change the configuration&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Save the above YAML resource in a file named &lt;code&gt;fluentd-ds.yaml&lt;/code&gt; and apply the resource via the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; fluentd-ds.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will start a Fluentd pod on each node in your Kubernetes cluster. &lt;/p&gt;
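
&lt;p&gt;You can verify the rollout with a label selector matching the manifest above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get daemonset fluentd -n kube-system
kubectl get pods -n kube-system -l k8s-app=fluentd-logger -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;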

&lt;p&gt;Now we’ll see how to implement streaming and logging agent sidecar patterns.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Sidecar
&lt;/h3&gt;

&lt;p&gt;First, let’s look at the streaming sidecar pattern, which applies when our application writes its logs to a file instead of a stream. We run a sidecar to read those logs and write them back to the stdout/stderr streams.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;legacy-app&lt;/span&gt;
&lt;span class="na"&gt;    image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
&lt;span class="na"&gt;    args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;    - /bin/sh&lt;/span&gt;
&lt;span class="s"&gt;    - -c&lt;/span&gt;
&lt;span class="s"&gt;    - &amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;      i=0;&lt;/span&gt;
&lt;span class="s"&gt;      while &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="s"&gt;;&lt;/span&gt;
&lt;span class="s"&gt;      do&lt;/span&gt;
&lt;span class="s"&gt;        echo "$i&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(date)" &amp;gt;&amp;gt; /var/log/output.log;&lt;/span&gt;
&lt;span class="s"&gt;        i=$((i+1));&lt;/span&gt;
&lt;span class="s"&gt;        sleep 1;&lt;/span&gt;
&lt;span class="s"&gt;      done      &lt;/span&gt;
&lt;span class="na"&gt;    volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;    - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlog&lt;/span&gt;
&lt;span class="na"&gt;      mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log&lt;/span&gt;
&lt;span class="na"&gt;  - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;streaming-sidecar&lt;/span&gt;
&lt;span class="na"&gt;    image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
&lt;span class="na"&gt;    args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;\[/bin/sh, -c, 'tail -n+1 -f /var/log/output.log'\]&lt;/span&gt;
&lt;span class="na"&gt;    volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;    - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlog&lt;/span&gt;
&lt;span class="na"&gt;      mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log&lt;/span&gt;
&lt;span class="na"&gt;  volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlog&lt;/span&gt;
&lt;span class="na"&gt;    emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, we have a dummy container writing logs to files in the &lt;code&gt;/var/log&lt;/code&gt; directory of the container. These logs can’t be picked up by the container runtime, which is why we implement a streaming sidecar that tails the logs in &lt;code&gt;/var/log&lt;/code&gt; and redirects them to the &lt;code&gt;stdout&lt;/code&gt; stream.&lt;/p&gt;

&lt;p&gt;This log stream will be picked up by the container runtime and stored as a JSON file in the &lt;code&gt;/var/log&lt;/code&gt; directory on the node, which will in turn be picked up by the node-level logging agent.&lt;/p&gt;
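
&lt;p&gt;Once the pod is running, the redirected stream is visible through the sidecar container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs my-app -c streaming-sidecar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;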

&lt;p&gt;Now, let’s look at the logging agent sidecar. In this pattern we’ll deploy Fluentd as a sidecar, which will directly write to our Elasticsearch storage backend.&lt;/p&gt;

&lt;p&gt;Unfortunately, there is no prebuilt image with an Elasticsearch plugin installed, and creating a custom Docker image is out of the scope of this article. Instead, we’ll use the same Fluentd image that we used in the DaemonSet example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;count&lt;/span&gt;
&lt;span class="na"&gt;    image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
&lt;span class="na"&gt;    args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;    - /bin/sh&lt;/span&gt;
&lt;span class="s"&gt;    - -c&lt;/span&gt;
&lt;span class="s"&gt;    - &amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;      i=0;&lt;/span&gt;
&lt;span class="s"&gt;      while &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="s"&gt;;&lt;/span&gt;
&lt;span class="s"&gt;      do&lt;/span&gt;
&lt;span class="s"&gt;        echo "$i&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(date)" &amp;gt;&amp;gt; /var/log/output.log;&lt;/span&gt;
&lt;span class="s"&gt;        i=$((i+1));&lt;/span&gt;
&lt;span class="s"&gt;        sleep 1;&lt;/span&gt;
&lt;span class="s"&gt;      done      &lt;/span&gt;
&lt;span class="na"&gt;    volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;    - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlog&lt;/span&gt;
&lt;span class="na"&gt;      mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log&lt;/span&gt;
&lt;span class="na"&gt;  - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;logging-agent&lt;/span&gt;
&lt;span class="na"&gt;    image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent/fluentd-kubernetes-daemonset:elasticsearch&lt;/span&gt;
&lt;span class="na"&gt;     env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;      - name:  FLUENT\_ELASTICSEARCH\_HOST&lt;/span&gt;
&lt;span class="na"&gt;        value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;elastisearch-master"&lt;/span&gt;
&lt;span class="s"&gt;      - name:  FLUENT\_ELASTICSEARCH\_PORT&lt;/span&gt;
&lt;span class="na"&gt;        value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9200"&lt;/span&gt;
&lt;span class="na"&gt;    volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;    - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlog&lt;/span&gt;
&lt;span class="na"&gt;      mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log&lt;/span&gt;
&lt;span class="na"&gt;  volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlog&lt;/span&gt;
&lt;span class="na"&gt;    emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Given the ephemeral nature of pods and nodes, it’s very important to store logs from your Kubernetes cluster in a separate storage backend. There are multiple patterns that you can use to set up the logging architecture that we discussed in this article.&lt;/p&gt;

&lt;p&gt;Note that we suggest a mix of both sidecar and node-level patterns for your production systems. This includes setting up cluster-wide, node-level logging using a DaemonSet pattern, and implementing a streaming sidecar container for applications that do not support writing logs to stream (&lt;code&gt;stdout/stderr&lt;/code&gt;) or that don’t write in a standard log format. This streaming container will automatically surface logs for node-level agents to be picked up.&lt;/p&gt;

&lt;p&gt;For the choice of storage backend, you can choose self-hosted, open-source solutions such as Elasticsearch, or you can go the managed service route with options like cloud-hosted Elasticsearch, Stackdriver, or Cloudwatch. The choice of backend that’s right for you will depend on the cost, query, and log analysis requirements that you want to implement with your architecture.&lt;/p&gt;

&lt;p&gt;For our latest insights and updates, &lt;a href="https://www.linkedin.com/company/codersociety/"&gt;follow us on LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>logging</category>
      <category>devops</category>
    </item>
    <item>
      <title>NoOps: What Does the Future Hold for DevOps Engineers?</title>
      <dc:creator>Kentaro Wakayama</dc:creator>
      <pubDate>Sun, 11 Jul 2021 14:40:02 +0000</pubDate>
      <link>https://forem.com/coder_society/noops-what-does-the-future-hold-for-devops-engineers-56f8</link>
      <guid>https://forem.com/coder_society/noops-what-does-the-future-hold-for-devops-engineers-56f8</guid>
      <description>&lt;p&gt;With cloud adoption on the rise, the level of abstraction in application architecture has increased— from traditional on-premises servers to containers and serverless deployments. The focus on automation has also increased to the point where manual intervention is no longer preferred, even for infrastructure-related activities like backups, security management, and patch updates. This desired state equates to a NoOps environment, which involves smaller teams that can manage your application lifecycle. Ideally, in such an environment, the efforts required by your operations team will be eliminated.&lt;/p&gt;

&lt;p&gt;It is beyond debate that DevOps is now deeply integrated into the DNA of all cloud-first organizations and is today more of a norm than a rarity. Cloud applications demand agility, and DevOps delivers it. However, does NoOps mean the end of the DevOps era? Or is it simply the next step in the progression of DevOps?&lt;/p&gt;

&lt;h2&gt;
  
  
  DevOps vs. NoOps
&lt;/h2&gt;

&lt;p&gt;The success of DevOps greatly depends on the synergy between your development and operations teams, as it brings together system administrators and developers who would otherwise work in silos. Continuous integration and continuous deployment are also crucial in DevOps, helping teams identify and avoid issues early on, which in turn results in faster delivery of solutions. However, the operations team is still very much in the picture: there is still a dependency on it to take care of details like infrastructure configuration management, security settings, backups, and patch management.&lt;/p&gt;

&lt;p&gt;With the cloud, abstraction and automation are scaling to new heights every day. You name it, and you can have it “as-a-service” in the cloud, be it compute, storage, network, or security, to list just a few. Cloud service providers are also investing heavily in their automation ecosystem. You can easily provision your application components using automation templates or just a few API calls. The ongoing management of these components can be automated as well, meaning less overhead to maintain environments and minimal to no human intervention. This leads us to NoOps—the increased abstraction of infrastructure, tightly integrated with development workflows, that requires no operations team to oversee the process. &lt;/p&gt;

&lt;p&gt;NoOps, a term originally coined by &lt;a href="https://go.forrester.com/blogs/11-02-07-i_dont_want_devops_i_want_noops/"&gt;Forrester&lt;/a&gt;, aims to improve productivity and deliver results much faster than DevOps. In the ideal scenario, developers never have to collaborate with a member of the operations team. Instead, they can use a set of tools and services to responsibly deploy the required cloud components in a secure manner, including both the code and infrastructure. Managed cloud services, like PaaS or serverless, serve as the backbone of NoOps and leverage CI/CD as their core engine for deployment. Hence, note that not all scenarios fit the bill for NoOps. &lt;/p&gt;

&lt;h2&gt;
  
  
  NoOps: Advantages and Challenges
&lt;/h2&gt;

&lt;p&gt;NoOps and DevOps essentially try to achieve the same thing: improve the software deployment process and reduce time to market. But while the collaboration between developers and operations team was emphasized in DevOps, the focus has shifted to complete automation in NoOps. This may sound like a silver bullet, but this new approach comes with both advantages and challenges. &lt;/p&gt;

&lt;h3&gt;
  
  
  Advantages
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. More automation, less headcount
&lt;/h4&gt;

&lt;p&gt;NoOps shifts the focus to services that are deployable by design without manual intervention. From infrastructure to management activities, the aim is to control everything using code, meaning every component should be deployed as part of the code and maintainable in the long run. NoOps essentially seeks to eliminate the manpower required to support the ecosystem for your code. &lt;/p&gt;

&lt;h4&gt;
  
  
  2. Use the full power of the cloud
&lt;/h4&gt;

&lt;p&gt;NoOps is best suited for born-in-the-cloud environments that leverage PaaS and serverless solutions. Microservices and API-based application architectures fit the bill perfectly, as they offer fine-grained modularity along with automation. Leading cloud service providers like AWS, Azure, and GCP have a laser focus on providing more services and capabilities in PaaS and serverless, which would help accelerate the adoption of NoOps. The current increase in database-as-a-service, container-as-a-service, and &lt;a href="https://azure.microsoft.com/en-in/services/functions/"&gt;function-as-a-service&lt;/a&gt; options in the cloud favor this trend as well, and all of these technologies support extreme automation. &lt;/p&gt;

&lt;h4&gt;
  
  
  3. Shift from operations to business results
&lt;/h4&gt;

&lt;p&gt;NoOps also shifts the focus from operations to business outcomes. Unlike DevOps, where the dev team and ops team work together to deliver value propositions to the customer, NoOps ideally eliminates any dependency on the operations team, which further reduces time to market. Again, the focus is shifted to priority tasks that deliver value to customers—in other words, “fast beats slow.” &lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. You Still Need Ops
&lt;/h4&gt;

&lt;p&gt;In theory, not requiring an operations team to take care of your infrastructure might sound attractive. However, depending on the level of automation that’s achievable, you might still need one to take care of exceptions or to monitor outcomes. Expecting developers to take on this work would nullify the benefits of NoOps and divert their focus from delivering business outcomes. It is also not a practical approach, considering that developers don’t necessarily have the skill sets required to address operational issues. For example, consider a Disaster Recovery (DR) scenario: you would still need support from an operations team to invoke the DR plan and switch traffic to the failover site.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Consider Your Environment
&lt;/h4&gt;

&lt;p&gt;Also, not all environments can transition to NoOps. Hybrid deployments and legacy infrastructures pose a bottleneck: automation is still possible, but human intervention cannot be entirely eliminated in these cases. While aiming for NoOps, PaaS and serverless could become a limiting factor as well, especially during digital transformation, since the additional effort required to refactor legacy monolithic applications to fit the paradigms of PaaS would be counterproductive. You would need to carefully evaluate the pros and cons on a case-by-case basis before embarking on the NoOps approach.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Who Will Take Care of Security?
&lt;/h4&gt;

&lt;p&gt;Last but not least is the question of security and compliance. Automated deployments aligned with security best practices will not completely eliminate the need for you to take care of security. Traditionally, there is a segregation of duties between operations and development teams. The operations team works together with the security team to enforce controls that protect applications from threats and vulnerabilities. The operations team, meanwhile, is also responsible for handling Identity and Access Management (IAM) solutions. &lt;/p&gt;

&lt;p&gt;Threat vectors and attack methods in the cloud evolve by the day, and so should your cloud security controls. The same goes for compliance. Not all organizations can delegate that responsibility to a set of automated processes. Reducing, or eliminating, the operations team could result in you needing to increase your investment in a security team to ensure the security and compliance of your environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Destination NoOps
&lt;/h2&gt;

&lt;p&gt;DevOps is considered a journey rather than a destination, one where the focus is on continuous improvement. It would be safer to say that NoOps is the evolution of DevOps, targeting a perfect end-state of extreme automation. It allows organizations to redirect time, effort, and resources from operations to business outcomes. However, this change cannot happen overnight.&lt;/p&gt;

&lt;p&gt;For NoOps to become a reality, a lot of groundwork is involved. You need to identify the right application stack and managed services in the cloud, i.e., PaaS and serverless, for the transition to happen. You need to bake in the component management, configuration, and security controls to get started. Even then, there would be some loose ends like legacy systems that would take more time and effort to transition or that cannot be transitioned at all. And if there’s even a single legacy system left behind, you would still need someone to take care of its operational aspects. &lt;/p&gt;

&lt;p&gt;In a NoOps world, the role of DevOps engineers also changes, as they get the opportunity to learn new skills and processes required for NoOps. Like DevOps, NoOps is more about the shift in culture and process, rather than technology. Organizations need to be intentional about this shift while staying grounded as to the practicalities of the transition.&lt;/p&gt;

&lt;p&gt;For our latest insights and updates, follow us on &lt;a href="https://www.linkedin.com/company/codersociety/"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://codersociety.com"&gt;https://codersociety.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloudnative</category>
      <category>operations</category>
    </item>
    <item>
      <title>Managing Secrets in Node.js with HashiCorp Vault</title>
      <dc:creator>Kentaro Wakayama</dc:creator>
      <pubDate>Sun, 04 Jul 2021 13:53:07 +0000</pubDate>
      <link>https://forem.com/coder_society/managing-secrets-in-node-js-with-hashicorp-vault-2p1k</link>
      <guid>https://forem.com/coder_society/managing-secrets-in-node-js-with-hashicorp-vault-2p1k</guid>
      <description>&lt;p&gt;As the number of services grows in an organization, the problem of secret management only gets worse. Between Zero Trust and the emergence of microservices, handling secrets such as tokens, credentials, and keys has become an increasingly challenging task. That’s where a solution like HashiCorp’s Vault can help organizations solve their secret management woes.&lt;/p&gt;

&lt;p&gt;Although there are secret management tools native to each cloud provider, using these solutions locks you in with a specific cloud provider. Vault, on the other hand, is open source and portable.&lt;/p&gt;

&lt;p&gt;In this article we’ll look at how &lt;a href="https://www.vaultproject.io/"&gt;HashiCorp’s Vault&lt;/a&gt; can help organizations manage their secrets and in turn enhance their cybersecurity posture. We’ll then set up Vault in dev mode on our machines and interact with it via its web UI and CLI. Finally, we’ll programmatically interact with Vault using Node.js.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vault Top Features
&lt;/h2&gt;

&lt;p&gt;Vault is HashiCorp’s open-source product for managing secrets and sensitive data. Here’s a list of Vault’s top features that make it a popular choice for secret management:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Built-in concept of low trust and enforcement of security by identity
&lt;/li&gt;
&lt;li&gt;Encryption at rest &lt;/li&gt;
&lt;li&gt;Several ways to &lt;a href="https://www.vaultproject.io/docs/auth"&gt;authenticate&lt;/a&gt; against Vault, e.g., tokens, LDAP, AppRole, etc.
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.vaultproject.io/docs/concepts/policies"&gt;Policies&lt;/a&gt;  to govern the level of access of each identity &lt;/li&gt;
&lt;li&gt;Lots of secret backends, each catering to specific needs, including key-value store, &lt;a href="https://www.vaultproject.io/docs/secrets/ad"&gt;Active Directory&lt;/a&gt;, etc. &lt;/li&gt;
&lt;li&gt;Support for multiple storage backends for high availability, e.g., databases (MySQL, Postgres), object stores (GCS, S3), HashiCorp’s Consul, etc. &lt;/li&gt;
&lt;li&gt;Ability to generate dynamic secrets, such as &lt;a href="https://www.vaultproject.io/docs/secrets/databases"&gt;database credentials&lt;/a&gt;, cloud service account keys (&lt;a href="https://www.vaultproject.io/docs/secrets/gcp"&gt;Google&lt;/a&gt;, &lt;a href="https://www.vaultproject.io/docs/secrets/aws"&gt;AWS&lt;/a&gt;, &lt;a href="https://www.vaultproject.io/docs/secrets/azure"&gt;Azure&lt;/a&gt;), &lt;a href="https://www.vaultproject.io/docs/secrets/pki"&gt;PKI certificates&lt;/a&gt;, etc.
&lt;/li&gt;
&lt;li&gt;Built-in &lt;a href="https://www.vaultproject.io/docs/concepts/lease"&gt;TTL and lease&lt;/a&gt; for provided credentials &lt;/li&gt;
&lt;li&gt;Built-in &lt;a href="https://www.vaultproject.io/docs/audit"&gt;audit&lt;/a&gt; trail which logs every interaction with Vault &lt;/li&gt;
&lt;li&gt;Several ways to interact with the Vault service, including Web UI, CLI, Rest API, and programmatic access via language libraries&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These features make Vault a compelling choice for cloud-based microservices architecture, where each microservice will authenticate with Vault in a distributed manner and access the secrets. The access to secrets can be managed for each individual microservice using policies following the principle of least privilege.&lt;/p&gt;

&lt;p&gt;In the next section, we’ll set up Vault in dev mode and discuss ways to set it up in production. We’ll then configure the dev Vault instance for our hands-on demo, learning different configuration options along the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup for Hands-On Demo
&lt;/h2&gt;

&lt;p&gt;We’ll use Docker to set up Vault on our local machine. Note that this setup is not production ready. We’ll start Vault in dev mode, which uses all the insecure default configurations.&lt;/p&gt;

&lt;p&gt;Running Vault in production isn’t easy. To do so, you can either choose &lt;a href="https://www.hashicorp.com/cloud-platform"&gt;HashiCorp Cloud Platform&lt;/a&gt;, the fully managed Vault in the cloud, or leave it to your organization’s infrastructure team to set up a secure and highly available Vault cluster. &lt;/p&gt;

&lt;p&gt;Let's get started. &lt;/p&gt;

&lt;h3&gt;
  
  
  Start Vault in Dev Mode
&lt;/h3&gt;

&lt;p&gt;We’ll start the Vault service by using the official Docker image &lt;code&gt;vault:1.7.3&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you run the container without any argument, it will start the Vault server in Dev mode by default.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--name&lt;/span&gt; vault &lt;span class="nt"&gt;-p&lt;/span&gt; 8200:8200 vault:1.7.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As Vault is starting, you’ll see a stream of logs. The most prominent log is a warning telling you that Vault is running in development mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;WARNING! dev mode is enabled! In this mode, Vault runs entirely &lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="nt"&gt;-memory&lt;/span&gt; and starts unsealed with a single unseal key. The root token is already authenticated to the CLI, so you can immediately begin using Vault.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you read the message closely, you’ll notice a few things. First, it says the Vault is unsealed with a single unseal key, and second, it mentions a root token. What does this mean?&lt;/p&gt;

&lt;p&gt;By default, when you start Vault in production mode it’s &lt;a href="https://www.vaultproject.io/docs/concepts/seal"&gt;sealed&lt;/a&gt;, meaning you can’t interact with it yet. To get started, you’ll need the unseal keys to unseal it and a root token to authenticate against Vault.&lt;/p&gt;

&lt;p&gt;In case a breach is detected, the Vault server can be sealed again to protect against malicious access.&lt;/p&gt;
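&lt;p&gt;In a production setup, initialization, unsealing, and sealing are all driven through the CLI. A minimal sketch of that workflow (dev mode performs the first two steps for you automatically):&lt;/p&gt;

```shell
# Initialize the server: prints the unseal keys and the initial root token
vault operator init

# Unseal by supplying the threshold number of unseal keys
# (by default 3 of 5), one key per invocation
vault operator unseal

# Seal the server again, e.g. after detecting a breach
vault operator seal
```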

&lt;p&gt;The other information that gets printed in logs is a root token, which can be used to authenticate against Vault. The option of authentication by tokens is enabled by default and the root token can be used to initiate the first interaction with Vault.&lt;/p&gt;

&lt;p&gt;Note that if your organization’s infrastructure team has set up the Vault, they might have enabled some other auth backends as discussed in the previous section.&lt;/p&gt;

&lt;p&gt;Copy the root token, as we’ll use it to login to Vault UI.&lt;/p&gt;

&lt;p&gt;Head over to &lt;a href="http://localhost:8200"&gt;http://localhost:8200&lt;/a&gt; and you’ll see the login screen below on the Vault web UI.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aQfGpLtT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.codersociety.com/uploads/vault-login.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aQfGpLtT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.codersociety.com/uploads/vault-login.png" alt=""&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;h3&gt;
  
  
  Enable KV Secret Backend
&lt;/h3&gt;

&lt;p&gt;Enter your root token (copied from the previous step) and hit “Sign In.” You’ll be greeted with the following screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p1l3Dp0C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.codersociety.com/uploads/vault-secret-backend.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p1l3Dp0C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.codersociety.com/uploads/vault-secret-backend.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that there is already a &lt;code&gt;KV backend&lt;/code&gt; enabled at path &lt;code&gt;secret&lt;/code&gt;. This comes enabled in dev mode by default.&lt;/p&gt;

&lt;p&gt;If it is not enabled in your Vault installation, you can enable it by clicking &lt;code&gt;Enable New Engine&lt;/code&gt;, selecting &lt;code&gt;KV backend&lt;/code&gt;, and following through the setup.&lt;/p&gt;

&lt;p&gt;We’ll use this backend to store our secrets and then later retrieve them in the Node.js demo.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure AppRole Auth Method
&lt;/h3&gt;

&lt;p&gt;We’ll now configure the AppRole auth method, which our Node.js application will use to retrieve the secrets from our key value backend.&lt;/p&gt;

&lt;p&gt;Select &lt;code&gt;Access&lt;/code&gt; from the top menu. You’ll see only the &lt;code&gt;token&lt;/code&gt; method enabled. &lt;/p&gt;

&lt;p&gt;Click &lt;code&gt;Enable New Method&lt;/code&gt; and select &lt;code&gt;AppRole&lt;/code&gt;. Leave the settings to default and click &lt;code&gt;Enable Method&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Policy for Secret Access
&lt;/h3&gt;

&lt;p&gt;We’ll create a policy that allows read-only access to the KV secret backend.&lt;/p&gt;

&lt;p&gt;Select &lt;code&gt;Policies&lt;/code&gt; from the top menu and click &lt;code&gt;Create ACL Policy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Enter &lt;code&gt;readonly-kv-backend&lt;/code&gt; as the name, and enter the following content for the &lt;code&gt;Policy&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;path "secret/data/mysql/webapp" {
  capabilities = [ "read" ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Following the principle of least privilege, this policy will only give read access to secrets at the specific path.&lt;/p&gt;

&lt;p&gt;Hit &lt;code&gt;Create Policy&lt;/code&gt; to save it.&lt;/p&gt;
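&lt;p&gt;Note the &lt;code&gt;data/&lt;/code&gt; segment in the policy path: version 2 of the KV backend (the default in dev mode) exposes secrets over an API path that inserts &lt;code&gt;data/&lt;/code&gt; between the mount point and the secret path, which is why the policy path differs from the logical path we’ll use with &lt;code&gt;vault kv put&lt;/code&gt;. A small sketch of that mapping (the helper name is ours, not part of any library):&lt;/p&gt;

```javascript
// Build the KV v2 API/policy path for a logical secret path by
// inserting "data/" after the mount point. The helper name is
// illustrative; it is not part of node-vault or Vault itself.
function kv2ApiPath(mount, secretPath) {
  return `${mount}/data/${secretPath}`;
}

console.log(kv2ApiPath("secret", "mysql/webapp")); // secret/data/mysql/webapp
```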

&lt;h3&gt;
  
  
  Create AppRole for Node.js Application
&lt;/h3&gt;

&lt;p&gt;We’re going to switch gears and use the Vault CLI to finish setting up our demo. There are two ways to access the Vault CLI: you can download the Vault binary, or you can exec into the Vault container. For this demo we’ll use the latter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; vault /bin/sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’ll then set up the &lt;code&gt;VAULT_ADDR&lt;/code&gt; and &lt;code&gt;VAULT_TOKEN&lt;/code&gt; environment variables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;VAULT_ADDR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:8200
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;VAULT_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;ROOT TOKEN&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s create an AppRole and attach our policy to this role.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vault write auth/approle/role/node-app-role &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nv"&gt;token_ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1h &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nv"&gt;token_max_ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4h &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nv"&gt;token_policies&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;readonly-kv-backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You should be able to see it being created successfully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Success! Data written to: auth/approle/role/node-app-role
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each AppRole has a &lt;code&gt;RoleID&lt;/code&gt; and &lt;code&gt;SecretID&lt;/code&gt;, much like a username and password. The application can exchange this &lt;code&gt;RoleID&lt;/code&gt; and &lt;code&gt;SecretID&lt;/code&gt; for a token, which can then be used in subsequent requests.&lt;/p&gt;
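&lt;p&gt;Once you have both values, you can verify the setup by performing this exchange manually against the &lt;code&gt;auth/approle/login&lt;/code&gt; endpoint (the IDs below are placeholders):&lt;/p&gt;

```shell
# Exchange RoleID and SecretID for a client token; the response
# contains a token carrying the readonly-kv-backend policy.
# Both IDs below are placeholders.
vault write auth/approle/login \
    role_id="YOUR_ROLE_ID" \
    secret_id="YOUR_SECRET_ID"
```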

&lt;h3&gt;
  
  
  Get RoleID and SecretID
&lt;/h3&gt;

&lt;p&gt;Now we’ll fetch the &lt;code&gt;RoleID&lt;/code&gt; pertaining to the node-app-role via the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vault &lt;span class="nb"&gt;read &lt;/span&gt;auth/approle/role/node-app-role/role-id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we’ll fetch the &lt;code&gt;SecretID&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vault write &lt;span class="nt"&gt;-f&lt;/span&gt; auth/approle/role/node-app-role/secret-id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure you store these values somewhere safe, as we’ll use them in our Node.js application.&lt;/p&gt;

&lt;p&gt;Please note that it's not safe to deliver &lt;code&gt;SecretID&lt;/code&gt; to our applications like this. You should use &lt;a href="https://learn.hashicorp.com/tutorials/vault/approle#response-wrap-the-secretid"&gt;response wrapping&lt;/a&gt; to securely deliver &lt;code&gt;SecretID&lt;/code&gt; to your application. For the purpose of this demo, we’ll pass &lt;code&gt;SecretID&lt;/code&gt; as an environment variable to our application.&lt;/p&gt;
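&lt;p&gt;As a sketch of what response wrapping looks like on the CLI: instead of handing out the &lt;code&gt;SecretID&lt;/code&gt; directly, you issue it wrapped in a short-lived, single-use token, and only that wrapping token travels to the application:&lt;/p&gt;

```shell
# Issue a response-wrapped SecretID; only the short-lived,
# single-use wrapping token is delivered to the application
vault write -wrap-ttl=120s -f auth/approle/role/node-app-role/secret-id

# The application unwraps it exactly once to obtain the SecretID
# (WRAPPING_TOKEN is a placeholder for the token from the previous step)
vault unwrap WRAPPING_TOKEN
```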

&lt;h3&gt;
  
  
  Create a Secret
&lt;/h3&gt;

&lt;p&gt;As the last step of our setup process, we’ll create a secret key-value pair that we will access via our Node.js application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vault kv put secret/mysql/webapp &lt;span class="nv"&gt;db_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"users"&lt;/span&gt; &lt;span class="nv"&gt;username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"admin"&lt;/span&gt; &lt;span class="nv"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"passw0rd"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have our setup ready, we can proceed to our Node.js application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manage Secrets via Node.js
&lt;/h2&gt;

&lt;p&gt;In this section we’ll see how to interact with Vault from Node.js, using the &lt;a href="https://www.npmjs.com/package/node-vault"&gt;node-vault&lt;/a&gt; package to talk to our Vault server.&lt;/p&gt;

&lt;p&gt;Install the &lt;code&gt;node-vault&lt;/code&gt; package first, if not already installed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;node-vault
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before we begin, set the &lt;code&gt;ROLE_ID&lt;/code&gt; and &lt;code&gt;SECRET_ID&lt;/code&gt; environment variables to pass these values to the application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ROLE_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;role &lt;span class="nb"&gt;id &lt;/span&gt;fetched &lt;span class="k"&gt;in &lt;/span&gt;previous section&amp;gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SECRET_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;secret &lt;span class="nb"&gt;id &lt;/span&gt;fetched &lt;span class="k"&gt;in &lt;/span&gt;previous section&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s write the sample Node application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;vault&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;node-vault&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)({&lt;/span&gt;
&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;v1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://127.0.0.1:8200&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;roleId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ROLE_ID&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;secretId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SECRET_ID&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;run&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;vault&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;approleLogin&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
&lt;span class="err"&gt;    &lt;/span&gt;&lt;span class="na"&gt;role_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;roleId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;    &lt;/span&gt;&lt;span class="na"&gt;secret_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;secretId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="nx"&gt;vault&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client_token&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Add token to vault object for subsequent requests.&lt;/span&gt;

&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;vault&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;read&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;secret/data/mysql/webapp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Retrieve the secret stored in previous steps.&lt;/span&gt;

&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;databaseName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db_name&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;username&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
&lt;span class="err"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;databaseName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Attempt to delete the secret&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;vault&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;secret/data/mysql/webapp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// This attempt will fail as the AppRole node-app-role doesn't have delete permissions.&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="nx"&gt;run&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Store this script as &lt;code&gt;index.js&lt;/code&gt; and run it via the &lt;code&gt;node index.js&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;If everything is set up correctly, your secrets will be printed to the screen, followed by an error from the delete attempt. This confirms that our AppRole has permission to read the secret but not to delete it.&lt;/p&gt;
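&lt;p&gt;If you’d rather distinguish the expected permission error from real failures instead of letting the promise rejection crash the script, you can wrap the delete call in a small helper. This is only a sketch: &lt;code&gt;isPermissionDenied&lt;/code&gt; and &lt;code&gt;tryDelete&lt;/code&gt; are our own names, and the exact error shape (&lt;code&gt;err.response.statusCode&lt;/code&gt;) can vary between &lt;code&gt;node-vault&lt;/code&gt; versions, so inspect the rejected error in your setup first.&lt;/p&gt;

```javascript
// Sketch: treat an HTTP 403 from Vault as "denied by policy",
// anything else as a real failure. The assumed error shape
// (err.response.statusCode) may differ between node-vault versions.
function isPermissionDenied(err) {
  return Boolean(err && err.response && err.response.statusCode === 403);
}

async function tryDelete(vault, path) {
  try {
    await vault.delete(path);
    return "deleted";
  } catch (err) {
    if (isPermissionDenied(err)) {
      return "permission denied";
    }
    throw err; // Unexpected failure: connectivity, sealed Vault, bad token, etc.
  }
}
```

&lt;p&gt;With this in place, the script can log a friendly message for the expected 403 and still fail loudly on anything else, such as a sealed Vault or an expired token.&lt;/p&gt;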

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we saw the importance of having a secret manager in a distributed systems architecture. We also learned how to configure Vault for our sample application via the Web UI and CLI, and how to access it from a Node.js application to retrieve secrets.&lt;/p&gt;

&lt;p&gt;From storage backends to auth backends, Vault comes with a lot of options, so you can tune it perfectly to your organization’s needs. If you’re looking for a secret management solution for your microservices architecture challenges, HashiCorp’s Vault should be at the top of your list.&lt;/p&gt;

&lt;p&gt;For our latest insights and updates, follow us on &lt;a href="https://www.linkedin.com/company/codersociety/"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://codersociety.com"&gt;https://codersociety.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Containers in the Cloud: What Are Your Options?</title>
      <dc:creator>Kentaro Wakayama</dc:creator>
      <pubDate>Wed, 30 Jun 2021 14:46:50 +0000</pubDate>
      <link>https://forem.com/coder_society/containers-in-the-cloud-what-are-your-options-5ea</link>
      <guid>https://forem.com/coder_society/containers-in-the-cloud-what-are-your-options-5ea</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TGRDcUVt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dtiuouuwo14q0calwnig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TGRDcUVt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dtiuouuwo14q0calwnig.png" alt="container-workloads-aws-azure-gcp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The cloud has opened up new possibilities for the deployment of microservice architectures. Managed and unmanaged container services, along with serverless hosting options, have revolutionized the deployment of container workloads in the cloud.&lt;/p&gt;

&lt;p&gt;While an unmanaged or build-your-own approach for your container ecosystem in the cloud gives you greater control over the stack, you need to own the end-to-end lifecycle, security, and operations of the solution. Managed container services, on the other hand, are hassle-free and more popular due to their built-in integration with your current cloud ecosystem, best practices, security, and modularity.&lt;/p&gt;

&lt;p&gt;All three leading cloud providers—AWS, Azure, and GCP—have a strong portfolio of products and services to support containerized workloads for cloud-native as well as hybrid deployments. Kubernetes (K8s) remains the most popular container orchestration solution in the cloud. This enterprise-class solution is also the preferred platform for production deployments. Each of the major cloud service providers offers a native managed Kubernetes service as well as standalone container solutions. Both of these easily integrate with the robust ecosystem of supporting services offered by your cloud platform, including container registries, identity and access management, and security monitoring.&lt;/p&gt;

&lt;p&gt;In this article, we’ll explore some of the more popular options for deploying containers in the cloud and use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Popular Options for Container Workloads in the Cloud
&lt;/h2&gt;

&lt;p&gt;Each of the major cloud service providers offers a number of available options for container workload hosting, which we’ll examine in the following sections.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon Web Services
&lt;/h2&gt;

&lt;p&gt;AWS provides a diverse set of services for container workloads, the most popular being EKS, ECS, and Fargate. It also offers an extended ecosystem of services and tools, like AWS Deep Learning Containers for machine learning, Amazon Elastic Container Registry, and EKS Anywhere (coming in 2021) for hybrid deployments.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Amazon Elastic Kubernetes Service
&lt;/h3&gt;

&lt;p&gt;Amazon Elastic Kubernetes Service (EKS) can be used to create managed Kubernetes clusters in AWS, where the deployment, scaling, and patching are all managed by the platform itself. The service is Kubernetes-certified and uses Amazon EKS Distro, an open-source Kubernetes distribution.&lt;/p&gt;

&lt;p&gt;Since the control plane is managed by AWS, the solution automatically gets the latest security updates with zero downtime and ensures a safe hosting environment for containers. The service also has assured high availability, with an SLA of 99.95% uptime—achieved by deploying the control plane of Kubernetes across multiple AWS Availability Zones. &lt;/p&gt;

&lt;p&gt;AWS charges a flat rate of $0.10 per hour for EKS clusters, as well as additional charges for the EC2 instances or EBS volumes used by the worker nodes. The cost can be reduced by opting for EC2 Spot Instances for development and testing environments, and Reserved Instances for production deployments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/eks"&gt;EKS&lt;/a&gt; is most beneficial if you’re planning for a production-scale deployment of microservices-based applications on AWS, easily scalable web applications, integration with machine learning models, batch processing jobs, and the like.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. AWS Fargate
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/fargate"&gt;AWS Fargate&lt;/a&gt; is a serverless compute service for containers that can be integrated with Amazon EKS and Amazon ECS. It reduces operational overhead, as you don’t have to deploy and configure the underlying infrastructure for hosting containers, and you’re charged only for the compute capacity being used to run the workloads.&lt;/p&gt;

&lt;p&gt;The containers run in an isolated environment with a dedicated kernel runtime, thereby ensuring improved security for the workloads. You can also leverage Spot Instances (for dev/test environments) and compute savings plans for committed usage to reduce the overall cost. If you’re looking to switch from a monolithic to a microservices-based architecture, with minimal development and management overhead, AWS Fargate offers great benefits.&lt;/p&gt;
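&lt;p&gt;For example, assuming an existing EKS cluster named &lt;code&gt;demo-cluster&lt;/code&gt;, a Fargate profile can be added with &lt;code&gt;eksctl&lt;/code&gt; so that pods in a given namespace run on Fargate instead of EC2 worker nodes:&lt;/p&gt;

```shell
# Pods scheduled in the "default" namespace of this cluster
# will now run on Fargate-managed capacity.
eksctl create fargateprofile \
  --cluster demo-cluster \
  --name fp-default \
  --namespace default
```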

&lt;h3&gt;
  
  
  3. Amazon Elastic Container Service
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html"&gt;Amazon Elastic Container Service&lt;/a&gt; (ECS) can be used to host container services either in a self-managed cluster of EC2 instances or on a serverless infrastructure managed by Fargate. The former approach provides better control over the end-to-end stack hosting the container workloads.&lt;/p&gt;

&lt;p&gt;In addition, it provides centralized visibility into your services and the capability to manage them via API calls. If you’re using EC2 for the underlying cluster, the same management features can be used for ECS as well, while the cluster management, scaling, and operations layer are all handled by the platform, thereby eliminating that overhead.&lt;/p&gt;

&lt;p&gt;ECS is a regional service that is highly available across Availability Zones within an AWS region, ensuring the availability of your hosted container workloads.&lt;/p&gt;
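&lt;p&gt;Creating an ECS cluster itself is a single API call; the cluster is just a logical grouping until you register capacity or run tasks on it (the cluster name below is a placeholder):&lt;/p&gt;

```shell
# Create a logical ECS cluster and confirm it exists.
aws ecs create-cluster --cluster-name demo-cluster
aws ecs list-clusters
```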

&lt;h2&gt;
  
  
  Microsoft Azure
&lt;/h2&gt;

&lt;p&gt;Azure offers a managed Kubernetes service as well as options for deploying standalone container instances. Azure Container Registry, integration with Security Center, and container image scanning are just a few other Azure value-added services to support container workload ecosystems.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Azure Kubernetes Service
&lt;/h3&gt;

&lt;p&gt;The managed Kubernetes service, &lt;a href="https://azure.microsoft.com/en-in/services/kubernetes-service/"&gt;Azure Kubernetes Service&lt;/a&gt; (AKS), is one of the most popular container hosting services in the public cloud today. It consists of a control plane, managed by the Azure platform, which hosts the master nodes and exposes the Kubernetes APIs, and customer-managed agent nodes, where the container workloads are deployed.&lt;/p&gt;

&lt;p&gt;The platform handles all cluster management activities, such as health monitoring and maintenance. It also offers easy integration with Azure RBAC and Azure AD for cluster management, built-in integration with Azure Monitor, and flexibility to use Docker Registry or Azure Container Registry for retrieving container images.&lt;/p&gt;

&lt;p&gt;AKS can be used without a cluster management fee, with a service level objective of 99.5%. You pay only for the VM instances, storage, and networking resources used by the AKS cluster on a per-second billing model. There is also an option to purchase an uptime SLA for $0.10 per cluster per hour, and high availability can be configured by using Azure Availability Zones during cluster deployment. If the optional uptime SLA is purchased, clusters that use Availability Zones have an assured SLA of 99.95%, while clusters that do not have an SLA of 99.9%. In both cases, the customer pays for the agent nodes that host the workloads.&lt;/p&gt;
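&lt;p&gt;As a sketch, a small AKS cluster can be created and wired up to &lt;code&gt;kubectl&lt;/code&gt; with two Azure CLI commands (the resource group and cluster names are placeholders, and the resource group is assumed to exist):&lt;/p&gt;

```shell
# Create a two-node AKS cluster, then merge its credentials
# into the local kubeconfig.
az aks create \
  --resource-group my-rg \
  --name my-aks \
  --node-count 2 \
  --generate-ssh-keys

az aks get-credentials --resource-group my-rg --name my-aks
```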

&lt;h3&gt;
  
  
  2. Azure Container Instances
&lt;/h3&gt;

&lt;p&gt;Azure Container Instances provides an easy-to-use solution for deploying containers in Azure without deploying an orchestration platform. Because &lt;a href="https://azure.microsoft.com/en-in/services/container-instances/"&gt;Container Instances&lt;/a&gt; does not require the provisioning of any VMs, instances start in a matter of seconds. The service also gives you the flexibility to configure the CPU cores and memory required for the workloads, and you’re charged only for what you allocate. The service can integrate with Azure Files for persistent storage, connect to Azure Virtual Network, and also integrate with Azure Monitor for resource-usage monitoring.&lt;/p&gt;

&lt;p&gt;Azure Container Instances is best suited for deploying isolated container instances for simple applications that don't require advanced capabilities such as on-demand scaling or multi-container service discovery.&lt;/p&gt;
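&lt;p&gt;For instance, a single public container instance can be launched with one command (the names are placeholders, and the DNS label must be unique within the region):&lt;/p&gt;

```shell
# Run one container with 1 vCPU and 1 GB of memory,
# exposed on port 80 behind a public DNS name.
az container create \
  --resource-group my-rg \
  --name aci-demo \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --cpu 1 \
  --memory 1 \
  --dns-name-label aci-demo-12345 \
  --ports 80
```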

&lt;h3&gt;
  
  
  3. Azure Web App for Containers 
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/services/app-service/containers/"&gt;Azure Web App&lt;/a&gt; allows you to deploy containers on the service using container images from Docker Hub or Azure Container Registry. The backend OS patching, capacity management, and load balancing of services are handled by the platform, and the service enables on-demand scaling, either through scale-up or scale-out options based on configured scaling rules. This also helps with cost management, where costs are automatically reduced during off-peak hours. The service ensures high availability as well since the container services can be deployed across multiple Azure Regions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Google Cloud Platform
&lt;/h2&gt;

&lt;p&gt;GCP, where Kubernetes originated as a Google in-house project, offers a strong suite of products for managed container hosting both in the cloud and on-premises.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Google Kubernetes Engine 
&lt;/h3&gt;

&lt;p&gt;Google Kubernetes Engine (GKE) is the managed Kubernetes service from GCP that can be used to host highly available and scalable container workloads. It also provides a &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/sandbox-pods"&gt;GKE sandbox&lt;/a&gt; option, if you need to run workloads prone to security threats in an isolated environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/kubernetes-engine"&gt;GKE clusters&lt;/a&gt; can be deployed as both multi-zonal and regional to protect workloads from cloud outages. GKE also comes with many out-of-the-box security features, such as data encryption and vulnerability scanning for container images through integration with the &lt;a href="https://cloud.google.com/container-analysis/docs/ar-quickstart"&gt;Container Analysis service&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As with the managed Kubernetes services provided by the other cloud service providers, GKE offers automated repair of faulty nodes, upgrades, and on-demand scaling. It can also be integrated with GCP monitoring services for in-depth visibility into the health of deployed applications. &lt;/p&gt;

&lt;p&gt;If you’re planning to host graphics-intensive, HPC, or ML workloads, you can augment GKE with specialized hardware accelerators like GPUs and TPUs during deployment. Finally, GKE offers per-second billing and—for development environments that can tolerate downtime—an option to use &lt;a href="https://cloud.google.com/preemptible-vms"&gt;preemptible VMs&lt;/a&gt; for your cluster to further reduce costs.&lt;/p&gt;
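&lt;p&gt;A minimal zonal GKE cluster, for example, can be created with a single &lt;code&gt;gcloud&lt;/code&gt; command (the cluster name and zone are placeholders):&lt;/p&gt;

```shell
# Create a zonal two-node GKE cluster.
gcloud container clusters create demo-cluster \
  --zone us-central1-a \
  --num-nodes 2
```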

&lt;h3&gt;
  
  
  2. Cloud Run
&lt;/h3&gt;

&lt;p&gt;If you’re looking to run containerized applications in GCP without the overhead of managing the underlying infrastructure, &lt;a href="https://cloud.google.com/run"&gt;Cloud Run&lt;/a&gt; is one option. With this fully managed serverless hosting service for containers, you’re charged only for the resources consumed by the containers.&lt;/p&gt;

&lt;p&gt;Cloud Run can also be deployed to Anthos &lt;a href="https://cloud.google.com/kuberun/docs/choosing-a-platform"&gt;GKE clusters&lt;/a&gt;, in the cloud or on-premises. Well integrated with other GCP services such as Cloud Code, Cloud Logging, Monitoring, Artifact Registry, and Cloud Build, Cloud Run meets all your containerized application development needs.&lt;/p&gt;
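&lt;p&gt;Deploying to fully managed Cloud Run is a one-liner; the example below uses Google’s public sample image, while the service name and region are placeholders:&lt;/p&gt;

```shell
# Deploy a container image to fully managed Cloud Run;
# the service scales to zero when idle.
gcloud run deploy demo-service \
  --image gcr.io/cloudrun/hello \
  --region us-central1 \
  --platform managed \
  --allow-unauthenticated
```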

&lt;h2&gt;
  
  
  Hybrid Deployments
&lt;/h2&gt;

&lt;p&gt;For many organizations, the first step of their cloud adoption journey is implementing hybrid deployments, where some of the components for their containerized application remain on-premises and others are moved to the cloud. There are several popular tools and services available to help meet the needs of hybrid and multicloud deployments, and all leading cloud providers have focused investments in this space.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Azure Arc
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-in/services/azure-arc/"&gt;Azure Arc&lt;/a&gt; provides a unified Azure-based management platform for servers, data services, and Kubernetes clusters deployed on-premises as well as across multicloud environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/overview"&gt;Azure Arc-enabled Kubernetes&lt;/a&gt; clusters support multiple popular distributions of Kubernetes that are certified by the Cloud Native Computing Foundation (CNCF). The service allows you to list Kubernetes clusters across heterogeneous environments in Azure for a unified view and enables integration with Azure management capabilities, such as Azure Policy and Azure Monitor.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Google Anthos 
&lt;/h3&gt;

&lt;p&gt;Google Cloud’s &lt;a href="https://cloud.google.com/anthos"&gt;Anthos&lt;/a&gt; is a comprehensive and advanced solution that can be leveraged to deploy managed Kubernetes clusters in the cloud and on-premises. Anthos provides a &lt;a href="https://cloud.google.com/anthos/gke/docs/on-prem/1.6"&gt;GKE on-premises&lt;/a&gt; option you can use to deploy new GKE clusters into your private cloud on-premises. It's also possible to &lt;a href="https://cloud.google.com/anthos/multicluster-management/connect/registering-a-cluster"&gt;register&lt;/a&gt; existing non-GKE clusters with Anthos. &lt;a href="https://cloud.google.com/anthos/gke/docs/aws"&gt;GKE on AWS&lt;/a&gt; helps in multicloud scenarios, where a compatible GKE environment in AWS can be created, updated, or deleted using a &lt;a href="https://cloud.google.com/anthos/gke/docs/aws/how-to/installation-overview"&gt;management service&lt;/a&gt; from the Anthos UI. Meanwhile, Anthos Config Management and Service Mesh solutions help with policy automation, security management, and visibility into your applications deployed across multiple clusters for a unified management experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. AWS Outposts
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/outposts/"&gt;AWS Outposts&lt;/a&gt; is a hybrid cloud service that brings AWS services, including container services like EKS, to your on-premises data center. Currently shipped as a 42U rack unit, this service is installed, updated, and fully managed by AWS. The solution can be connected to a local AWS Region for a hybrid experience, where the services in Outposts can connect directly to the services in the cloud.&lt;/p&gt;

&lt;p&gt;AWS Outposts targets customers who prefer to deploy containerized workloads on-premises for data residency, local processing, and more, while having the flexibility to use AWS Cloud’s supporting services for their applications. &lt;/p&gt;

&lt;p&gt;The recently announced EKS Anywhere is another option, designed to deliver the same experience as Amazon EKS for hybrid deployments. &lt;a href="https://aws.amazon.com/eks/eks-anywhere/"&gt;EKS Anywhere&lt;/a&gt; is expected to be available in 2021.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing the Right Container Service
&lt;/h2&gt;

&lt;p&gt;With the wide spectrum of services available in the cloud for hosting containerized workloads, the first step in choosing the right service for your requirements is to map your specific application requirements to features of a given service. Managed container services from the major cloud service providers are recommended, as they offer better integration with their own cloud platform services.&lt;/p&gt;

&lt;p&gt;When choosing the best container service for your application, opt for production-ready services while avoiding potential vendor lock-in. You should also take into consideration the solution’s future roadmap as well as ease of monitoring, logging, availability, scalability, security management, and automation.&lt;/p&gt;

&lt;p&gt;Starting with a managed Kubernetes service is a great option, as Kubernetes is best suited for scalable, secure, and highly available production deployments. If there are clear requirements for hybrid integration, where some of your containerized workloads might remain on-premises, opt for hybrid solutions like Azure Arc or Google Anthos. And finally, if you are looking for simple isolated container deployments, serverless solutions may be the best fit.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>13 Best Practices for using Helm</title>
      <dc:creator>Kentaro Wakayama</dc:creator>
      <pubDate>Mon, 21 Jun 2021 00:51:27 +0000</pubDate>
      <link>https://forem.com/coder_society/13-best-practices-for-using-helm-2mac</link>
      <guid>https://forem.com/coder_society/13-best-practices-for-using-helm-2mac</guid>
      <description>&lt;p&gt;Helm is an indispensable tool for deploying applications to Kubernetes clusters. But it is only by following best practices that you’ll truly reap the benefits of Helm. Here are 13 best practices to help you create, operate, and upgrade applications using Helm.&lt;/p&gt;

&lt;h2&gt;
  
  
  Take Your Helm Charts to the Next Level
&lt;/h2&gt;

&lt;p&gt;Helm is the package manager for Kubernetes. It reduces the effort of deploying complex applications thanks to its templating approach and rich ecosystem of reusable and production-ready packages, also known as Helm charts. With &lt;a href="https://helm.sh/"&gt;Helm&lt;/a&gt;, you can deploy packaged applications as a collection of versioned, pre-configured Kubernetes resources.&lt;/p&gt;

&lt;p&gt;Let's assume you’re deploying a database with Kubernetes—including multiple deployments, containers, secrets, volumes, and services. Helm allows you to install the same database with a single command and a single set of values. Its declarative and idempotent commands make Helm an ideal tool for continuous delivery (CD).&lt;/p&gt;
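&lt;p&gt;For example, a production-grade MySQL can be installed with two commands (the release name &lt;code&gt;my-db&lt;/code&gt; is our own choice):&lt;/p&gt;

```shell
# Add the Bitnami chart repository, then install MySQL
# as a release named "my-db".
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/mysql

# The same deployment written as an upgrade is idempotent,
# which is what makes Helm convenient in CD pipelines.
helm upgrade --install my-db bitnami/mysql
```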

&lt;p&gt;Helm is a Cloud Native Computing Foundation (CNCF) project created in 2015 that graduated in April 2020. With the &lt;a href="https://v3.helm.sh/docs/faq/#changes-since-helm-2"&gt;latest version, Helm 3&lt;/a&gt;, it has become even more integrated into the Kubernetes ecosystem.&lt;/p&gt;

&lt;p&gt;This article features 13 best practices for creating Helm charts to manage your applications running in Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Take Advantage of the Helm Ecosystem
&lt;/h2&gt;

&lt;p&gt;Helm gives you access to a wealth of community expertise—perhaps the tool’s greatest benefit. It collects charts from developers worldwide, which are then shared through chart repositories. You can add the &lt;a href="http://github.com/helm/charts"&gt;official stable chart repository&lt;/a&gt; to your local setup as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm repo add stable https://charts.helm.sh/stable

&lt;span class="s2"&gt;"stable"&lt;/span&gt; has been added to your repositories

Then you can search &lt;span class="k"&gt;for &lt;/span&gt;charts, &lt;span class="k"&gt;for &lt;/span&gt;example, MySQL:  

&lt;span class="nv"&gt;$ &lt;/span&gt;helm search hub mysql

URL CHART VERSION  APP VERSION DESCRIPTION

https://hub.helm.sh/charts/bitnami/mysql 8.2.3 8.0.22 Chart to create a Highly available MySQL cluster

https://hub.helm.sh/charts/t3n/mysql 0.1.0 8.0.22 Fast, reliable, scalable, and easy to use open-...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see a long list of results, which shows how big the Helm chart ecosystem is.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Use Subcharts to Manage Your Dependencies
&lt;/h2&gt;

&lt;p&gt;Because applications deployed to Kubernetes consist of fine-grained, interdependent pieces, their Helm charts have various resource templates and dependencies. For instance, let's assume your backend relies on a database and a message queue. The database and message queue are already standalone applications (e.g. PostgreSQL and RabbitMQ). Creating or using separate charts for these standalone applications and adding them to the parent chart is therefore recommended. These dependent charts are known as subcharts.&lt;/p&gt;

&lt;p&gt;There are three essential elements for creating and configuring subcharts:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Chart structure&lt;/strong&gt;&lt;br&gt;
The folder structure should look like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;backend-chart
  - Chart.yaml
  - charts
      - message-queue
          - Chart.yaml
          - templates
          - values.yaml
      - database
          - Chart.yaml
          - templates
          - values.yaml
  - values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Chart.yaml&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Additionally, the Chart.yaml of the parent chart should list any dependencies and their conditions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend-chart&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;A Helm chart for backend&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;message-queue&lt;/span&gt;
&lt;span class="s"&gt;    condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;message-queue.enabled&lt;/span&gt;
&lt;span class="s"&gt;  - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;database&lt;/span&gt;
&lt;span class="s"&gt;    condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;database.enabled&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. values.yaml&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finally, you can set or override the values of subcharts in the parent chart with the following values.yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;message-queue&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
&lt;span class="s"&gt;  image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;    repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;acme/rabbitmq&lt;/span&gt;
&lt;span class="s"&gt;    tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;
&lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creating and using subcharts establishes an abstraction layer between the parent and dependency applications. These separate charts make it easy to deploy, debug, and update applications in Kubernetes with their separate values and upgrade lifecycles. You can walk through the folder structure, dependencies, and value files in a sample chart like &lt;a href="https://github.com/bitnami/charts/tree/master/bitnami/wordpress"&gt;bitnami/wordpress&lt;/a&gt;.&lt;/p&gt;
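&lt;p&gt;After declaring subcharts as dependencies, you fetch them into the &lt;code&gt;charts/&lt;/code&gt; directory before packaging or installing the parent chart. A minimal sketch, assuming the parent chart from the example above lives in &lt;code&gt;./backend-chart&lt;/code&gt;:&lt;/p&gt;

```shell
# Resolve and download the declared dependencies into charts/
helm dependency update ./backend-chart

# Show which dependency versions were resolved
helm dependency list ./backend-chart
```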

&lt;h2&gt;
  
  
  3. Use Labels to Find Resources Easily
&lt;/h2&gt;

&lt;p&gt;Labels are crucial to Kubernetes’ internal operations and the daily work of Kubernetes operators. Almost every resource in Kubernetes offers labels for different purposes such as grouping, resource allocation, load balancing, or scheduling.&lt;/p&gt;

&lt;p&gt;A single Helm command can install many resources at once, so it’s vital to know where each resource originates. Labels enable you to quickly find the resources created by a Helm release.&lt;/p&gt;


&lt;p&gt;The most common method is to define shared labels in &lt;code&gt;_helpers.tpl&lt;/code&gt;, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;/*&lt;/span&gt;
&lt;span class="nv"&gt;Common labels&lt;/span&gt;
&lt;span class="nv"&gt;*/&lt;/span&gt;&lt;span class="pi"&gt;}}&lt;/span&gt;

&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- define "common.labels" -&lt;/span&gt;&lt;span class="pi"&gt;}}&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;
&lt;span class="s"&gt;app.kubernetes.io/instance&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Release.Name&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="s"&gt;app.kubernetes.io/managed-by&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Release.Service&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end -&lt;/span&gt;&lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You then need to use the “include” function with labels in the resource templates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-queue&lt;/span&gt;
&lt;span class="s"&gt;  labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;include "common.labels" . | indent 4&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you should be able to list all resources with the label selectors. For example, you can list all the pods of my-queue deployment with the &lt;code&gt;kubectl get pods -l app.kubernetes.io/instance=[Name of the Helm Release]&lt;/code&gt; command. This step is essential for locating and debugging those resources managed by Helm.&lt;/p&gt;
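&lt;p&gt;With the common labels in place, you can also query everything belonging to a release in one go. A sketch, assuming a hypothetical release named &lt;code&gt;my-release&lt;/code&gt;:&lt;/p&gt;

```shell
# List all core resources created by a given Helm release
kubectl get all -l app.kubernetes.io/instance=my-release

# Or narrow down to just the pods of that release
kubectl get pods -l app.kubernetes.io/instance=my-release
```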

&lt;h2&gt;
  
  
  4. Document Your Charts
&lt;/h2&gt;

&lt;p&gt;Documentation is essential for ensuring maintainable Helm charts. Adding comments in the resource templates and the README helps teams with the development and use of Helm charts.&lt;/p&gt;

&lt;p&gt;You should use the following three options to document your charts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Comments:&lt;/strong&gt; Template and values files are YAML files. You can add &lt;a href="https://yaml.org/spec/1.2/spec.html#comment//"&gt;comments&lt;/a&gt; and provide helpful information about the fields inside the YAML files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;README&lt;/strong&gt;: A chart’s README is a markdown file that explains how to use the charts. You can check the contents of a README file with the following command: &lt;code&gt;helm show readme [Name of the Chart]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NOTES.txt&lt;/strong&gt;: This is a special file located at &lt;code&gt;templates/NOTES.txt&lt;/code&gt; that provides helpful information about the deployment of releases. The content of the &lt;code&gt;NOTES.txt&lt;/code&gt; file can also be templated with functions and values similar to resource templates:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You have deployed the following release: {{ .Release.Name }}.
To get further information, you can run the commands:
  $ helm status {{ .Release.Name }}
  $ helm get all {{ .Release.Name }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the end of the &lt;code&gt;helm install&lt;/code&gt; or &lt;code&gt;helm upgrade&lt;/code&gt; command, Helm prints out the content of &lt;code&gt;NOTES.txt&lt;/code&gt; like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;RESOURCES:
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; v1/Secret
NAME        TYPE      DATA      AGE
my-secret   Opaque    1         0s

&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; v1/ConfigMap
NAME           DATA      AGE
db-configmap   3         0s

NOTES:
You have deployed the following release: precious-db.
To get further information, you can run the commands:
  &lt;span class="nv"&gt;$ &lt;/span&gt;helm status precious-db
  &lt;span class="nv"&gt;$ &lt;/span&gt;helm get all precious-db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Test Your Charts
&lt;/h2&gt;

&lt;p&gt;Helm charts consist of multiple resources that are to be deployed to the cluster. It is essential to check that all the resources are created in the cluster with the correct values. For instance, when deploying a database, you should check that the database passwords are set correctly.&lt;/p&gt;

&lt;p&gt;Fortunately, Helm offers test functionality that runs containers in the cluster to validate applications: resource templates annotated with &lt;code&gt;"helm.sh/hook": test-success&lt;/code&gt; are run by Helm as test cases.&lt;/p&gt;

&lt;p&gt;Let's assume you are deploying WordPress with the MariaDB database. The Helm chart maintained by &lt;a href="https://github.com/bitnami/charts/tree/master/bitnami/wordpress"&gt;Bitnami has a pod to validate the database connection&lt;/a&gt; with the following definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.Release.Name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}-credentials-test"&lt;/span&gt;
&lt;span class="s"&gt;  annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;    "helm.sh/hook"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-success&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="s"&gt;      env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;        - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MARIADB\_HOST&lt;/span&gt;
&lt;span class="s"&gt;          value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;include "wordpress.databaseHost" . | quote&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="s"&gt;        - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MARIADB\_PORT&lt;/span&gt;
&lt;span class="s"&gt;          value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3306"&lt;/span&gt;
&lt;span class="s"&gt;        - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WORDPRESS\_DATABASE\_NAME&lt;/span&gt;
&lt;span class="s"&gt;          value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;default "" .Values.mariadb.auth.database | quote&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="s"&gt;        - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WORDPRESS\_DATABASE\_USER&lt;/span&gt;
&lt;span class="s"&gt;          value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;default "" .Values.mariadb.auth.username | quote&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="s"&gt;        - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WORDPRESS\_DATABASE\_PASSWORD&lt;/span&gt;
&lt;span class="s"&gt;          valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;            secretKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;              name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;include "wordpress.databaseSecretName" .&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="s"&gt;              key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mariadb-password&lt;/span&gt;
&lt;span class="s"&gt;      command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;        - /bin/bash&lt;/span&gt;
&lt;span class="s"&gt;        - -ec&lt;/span&gt;
&lt;span class="s"&gt;        - |&lt;/span&gt;
&lt;span class="s"&gt;          mysql --host=$MARIADB\_HOST --port=$MARIADB\_PORT --user=$WORDPRESS\_DATABASE\_USER --password=$WORDPRESS\_DATABASE\_PASSWORD&lt;/span&gt;
&lt;span class="s"&gt;  restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Never&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is recommended to write tests for your charts and to run them after installation with the &lt;code&gt;helm test &amp;lt;RELEASE_NAME&amp;gt;&lt;/code&gt; command. These tests are a valuable asset for validating and finding issues in applications installed with Helm.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Ensure Secrets Are Secure
&lt;/h2&gt;

&lt;p&gt;Sensitive data, such as keys or passwords, is stored as Secrets in Kubernetes. But although Secrets can be protected inside the cluster, they are mostly kept in plain text as part of Helm templates and values files.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/jkroepke/helm-secrets"&gt;helm-secrets&lt;/a&gt; plugin offers secret management and protection for your critical information. It delegates the secret encryption to Mozilla &lt;a href="https://github.com/mozilla/sops"&gt;SOPS&lt;/a&gt;, which supports AWS KMS, Cloud KMS on GCP, Azure Key Vault, and PGP.&lt;/p&gt;

&lt;p&gt;Let's assume you’ve collected your sensitive data in a file named secrets.yaml as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;postgresql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  postgresqlUsername&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
&lt;span class="s"&gt;  postgresqlPassword&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WoZpCAlBsg&lt;/span&gt;
&lt;span class="s"&gt;  postgresqlDatabase&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can encrypt the file with the plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm secrets enc secrets.yaml
Encrypting secrets.yaml
Encrypted secrets.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, the file will be updated and all values will be encrypted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;postgresql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;    postgresqlUsername&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ENC\[AES256\_GCM,data:D14/CcA3WjY=,iv...==,type:str\]&lt;/span&gt;
&lt;span class="s"&gt;    postgresqlPassword&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ENC\[AES256\_GCM,data:Wd7VEKSoqV...,type:str\]&lt;/span&gt;
&lt;span class="s"&gt;    postgresqlDatabase&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ENC\[AES256\_GCM,data:8ur9pqDxUA==,iv:R...,type:str\]&lt;/span&gt;
&lt;span class="na"&gt;sops&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The original secrets.yaml file stored its values in plain text; helm-secrets solves the problem of keeping sensitive data as part of Helm charts.&lt;/p&gt;
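&lt;p&gt;In day-to-day use, the plugin wraps the usual Helm commands so decryption happens transparently. A sketch, with a hypothetical chart path and release name:&lt;/p&gt;

```shell
# View the decrypted values (requires access to the SOPS key)
helm secrets dec secrets.yaml

# Install or upgrade using the encrypted values file directly
helm secrets upgrade --install my-db ./db-chart -f secrets.yaml
```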

&lt;h2&gt;
  
  
  7. Make Your Chart Reusable by Using Template Functions
&lt;/h2&gt;

&lt;p&gt;Helm supports over 60 functions that can be used inside templates. The functions are defined in the &lt;a href="https://godoc.org/text/template"&gt;Go template language&lt;/a&gt; and &lt;a href="https://masterminds.github.io/sprig/"&gt;Sprig template library&lt;/a&gt;. Functions in template files significantly simplify Helm operations.&lt;/p&gt;

&lt;p&gt;Let's look at the following template file as an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Release.Name&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;&lt;span class="s"&gt;-configmap&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Values.environment | default "dev" | quote&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="s"&gt;  region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Values.region | upper | quote&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the environment value is not provided, the &lt;code&gt;default&lt;/code&gt; function falls back to "dev". The region field, in contrast, has no default value defined in the template; instead, it is piped through the &lt;code&gt;upper&lt;/code&gt; function to convert the provided value to uppercase.&lt;/p&gt;

&lt;p&gt;Another essential and useful function is &lt;code&gt;required&lt;/code&gt;. It enables you to set a value as required for template rendering. For instance, let's assume you need a name for your ConfigMap with the following template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;required "Name is required" .Values.configName&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the value is empty, template rendering will fail with the error &lt;code&gt;Name is required&lt;/code&gt;. Template functions are very useful when creating Helm charts: they improve templating, reduce code duplication, and can validate values before you deploy your applications to Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Update Your Deployments When ConfigMaps or Secrets Change
&lt;/h2&gt;

&lt;p&gt;It is common to have ConfigMaps or secrets mounted to containers. Although the deployments and container images change with new releases, the ConfigMaps or secrets do not change frequently. The following annotation makes it possible to roll out new deployments when the ConfigMap changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;    metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;      annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;        checksum/config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;include (print $.Template.BasePath "/configmap.yaml") . | sha256sum&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Any change in the ConfigMap produces a new &lt;code&gt;sha256sum&lt;/code&gt;, which changes the pod template and triggers a new rollout of the Deployment. This ensures the containers restart with the updated ConfigMap.&lt;/p&gt;
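&lt;p&gt;The mechanism is easy to see outside of Helm: hashing two versions of a config file (the file contents here are hypothetical) yields different checksums, and a changed checksum in the pod-template annotation is what triggers the rollout.&lt;/p&gt;

```shell
# Sketch: any edit to the rendered ConfigMap yields a new sha256,
# which changes the pod template and triggers a Deployment rollout.
printf 'app.properties: |\n  log.level=info\n'  > configmap-v1.yaml
printf 'app.properties: |\n  log.level=debug\n' > configmap-v2.yaml

sum1=$(sha256sum configmap-v1.yaml | cut -d' ' -f1)
sum2=$(sha256sum configmap-v2.yaml | cut -d' ' -f1)

# Different checksums => a new Deployment revision would be created
[ "$sum1" != "$sum2" ] && echo "config changed: rollout triggered"
```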

&lt;h2&gt;
  
  
  9. Opt Out of Resource Deletion with Resource Policies
&lt;/h2&gt;

&lt;p&gt;In a typical setup, installing a Helm chart creates multiple resources in the cluster. You can then upgrade the release by changing values and adding or removing resources. Once you no longer need the application, you can uninstall it, which removes all of its resources from the cluster. Some resources, however, should be kept in the cluster even after running &lt;code&gt;helm uninstall&lt;/code&gt;. Let's assume you’ve deployed a database with a PersistentVolumeClaim and want to keep the volumes even after deleting the database release. For such resources, you need to use the resource-policy annotation as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;    "helm.sh/resource-policy"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keep&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Helm commands such as &lt;code&gt;uninstall&lt;/code&gt;, &lt;code&gt;upgrade&lt;/code&gt;, or &lt;code&gt;rollback&lt;/code&gt; would normally delete the secret above. With the resource policy shown, however, Helm skips the deletion and allows the secret to be orphaned. The annotation should therefore be used with great care, and only for resources that are still needed after the Helm release has been deleted.&lt;/p&gt;

&lt;h2&gt;
  
  
  10. Useful Commands for Debugging Helm Charts
&lt;/h2&gt;

&lt;p&gt;Helm template files combine many different functions and multiple sources of values when creating Kubernetes resources, so it is essential to know exactly what gets deployed to the cluster. You therefore need to learn how to debug templates and verify charts. There are four essential commands for debugging:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;helm lint:&lt;/strong&gt; The linter tool conducts a series of tests to ensure your chart is correctly formed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;helm install --dry-run --debug:&lt;/strong&gt; This command renders the templates and shows the resulting resource manifests without deploying anything. It lets you check all the resources before deployment and ensure the values are set and the templating functions work as expected.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;helm get manifest:&lt;/strong&gt; This command retrieves the manifests of the resources that are installed to the cluster. If the release is not working as expected, this should be the first command you use to find out what is running in the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;helm get values:&lt;/strong&gt; This command is used to retrieve the release values installed to the cluster. If you have any doubts about computed or default values, this should definitely be in your toolbelt.&lt;/li&gt;
&lt;/ul&gt;
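&lt;p&gt;A typical debugging session chains these commands together. A sketch, with a hypothetical chart directory and release name:&lt;/p&gt;

```shell
helm lint ./my-chart                                  # static checks on the chart
helm install my-release ./my-chart --dry-run --debug  # render without deploying
helm get manifest my-release                          # what is actually in the cluster
helm get values my-release                            # computed values of the release
```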

&lt;h2&gt;
  
  
  11. Use the lookup Function to Avoid Secret Regeneration
&lt;/h2&gt;

&lt;p&gt;Helm template functions can generate random data, such as passwords, keys, and certificates. But random generation produces new arbitrary values and updates the resources in the cluster with every deployment and upgrade. For example, it can replace your database password in the cluster on every version upgrade, leaving clients unable to connect to the database after the password change.&lt;/p&gt;

&lt;p&gt;To address this, it is recommended to generate a random value only once and, on subsequent upgrades, reuse the value already stored in the cluster via the &lt;code&gt;lookup&lt;/code&gt; function. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- $rootPasswordValue&lt;/span&gt; &lt;span class="pi"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;= (randAlpha 16) | b64enc | quote&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- $secret&lt;/span&gt; &lt;span class="pi"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;= (lookup "v1" "Secret" .Release.Namespace "db-keys")&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- if $secret&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- $rootPasswordValue = index $secret.data "root-password"&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end -&lt;/span&gt;&lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db-keys&lt;/span&gt;
&lt;span class="s"&gt;  namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Release.Namespace&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  root-password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;$rootPasswordValue&lt;/span&gt;&lt;span class="pi"&gt;}}&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The template above first generates a random 16-character value with &lt;code&gt;randAlpha&lt;/code&gt;, then uses &lt;code&gt;lookup&lt;/code&gt; to check the cluster for an existing secret and its corresponding field. If the secret is found, its &lt;code&gt;root-password&lt;/code&gt; value overrides the freshly generated one, so the existing password is reused across upgrades.&lt;/p&gt;

&lt;h2&gt;
  
  
  12. Migrate to Helm 3 for Simpler and More Secure Kubernetes Applications
&lt;/h2&gt;

&lt;p&gt;The latest major Helm release, &lt;a href="https://helm.sh/docs/topics/v2_v3_migration/"&gt;Helm 3, offers many new features&lt;/a&gt; that make it a lighter, more streamlined tool. Helm v3 is recommended for its enhanced security and simplicity, which include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Removal of Tiller:&lt;/strong&gt; Tiller was Helm’s server-side component, but it has been removed in v3 because of the broad permissions it required to make changes on the cluster. It also created a security risk, as anyone gaining access to Tiller had excessive permissions over your cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved chart upgrade strategy:&lt;/strong&gt; Helm v2 relies on a two-way strategic merge patch: it compares the new release only with the previous manifest stored in its ConfigMap and applies the difference. Helm v3, in contrast, compares the old manifest, the live state in the cluster, and the new release, so manual changes in the cluster are not lost when you upgrade a Helm release. This simplifies the upgrade process and enhances application reliability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There is a &lt;code&gt;helm-2to3&lt;/code&gt; plugin, which you can install with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm3 plugin &lt;span class="nb"&gt;install &lt;/span&gt;https://github.com/helm/helm-2to3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is a small but helpful plugin with cleanup, convert, and move commands to help you migrate and clean up your v2 configuration and create releases for v3.&lt;/p&gt;
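&lt;p&gt;Migrating a single release with the plugin can be sketched as follows, with a hypothetical release name:&lt;/p&gt;

```shell
# Convert the v2 release state so Helm 3 can manage it
helm3 2to3 convert my-release

# Once everything is migrated, remove leftover v2 configuration and data
helm3 2to3 cleanup
```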

&lt;h2&gt;
  
  
  13. Keep Your Continuous Delivery Pipelines Idempotent
&lt;/h2&gt;

&lt;p&gt;Kubernetes resources are declarative, in the sense that their specification and status are stored in the cluster. Helm, similarly, is built around declarative templates and releases. Therefore, you need to design your continuous delivery and release management to be idempotent when using Helm. An idempotent operation is one that can be applied many times without changing the result beyond the first run.&lt;/p&gt;

&lt;p&gt;There are two essential rules to follow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always use the &lt;code&gt;helm upgrade --install&lt;/code&gt; command. It installs the charts if they are not already installed. If they are already installed, it upgrades them.&lt;/li&gt;
&lt;li&gt;Use the &lt;code&gt;--atomic&lt;/code&gt; flag to roll back changes if an operation fails during &lt;code&gt;helm upgrade&lt;/code&gt;. This ensures that Helm releases do not get stuck in a failed state.&lt;/li&gt;
&lt;/ul&gt;
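&lt;p&gt;Put together, an idempotent deploy step in a pipeline can be as small as a single command. A sketch, with hypothetical release, chart, and values-file names:&lt;/p&gt;

```shell
# Safe to run on every pipeline execution: installs on the first run,
# upgrades afterwards, and rolls back automatically on failure.
helm upgrade --install --atomic --wait \
  my-release ./my-chart \
  -f values-production.yaml
```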

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;Helm is an indispensable tool for deploying applications to Kubernetes clusters. But it is only by following best practices that you’ll truly reap the benefits of Helm.&lt;/p&gt;

&lt;p&gt;The best practices covered in this article will help your teams create, operate, and upgrade production-grade distributed applications. From the development side, your Helm charts will be easier to maintain and secure. From the operational side, you’ll enjoy automatically updated deployments, save resources from deletion, and learn how to test and debug.&lt;/p&gt;

&lt;p&gt;Helm’s official &lt;a href="https://helm.sh/docs/topics/"&gt;topics guide&lt;/a&gt; is another good resource for checking the Helm &lt;a href="https://helm.sh/docs/helm/helm/"&gt;commands&lt;/a&gt; and understanding their design philosophy. With these resources as well as the best practices and examples outlined in this blog, you’ll surely be armed and ready to create and manage production-grade Helm applications running on Kubernetes.&lt;/p&gt;

&lt;p&gt;For our latest insights and updates, follow us on LinkedIn.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Originally published at &lt;a href="https://codersociety.com"&gt;https://codersociety.com&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>helm</category>
    </item>
    <item>
      <title>Revisiting the Twelve Factor App Methodology</title>
      <dc:creator>Kentaro Wakayama</dc:creator>
      <pubDate>Sun, 13 Jun 2021 21:24:13 +0000</pubDate>
      <link>https://forem.com/coder_society/revisiting-the-twelve-factor-app-methodology-41di</link>
      <guid>https://forem.com/coder_society/revisiting-the-twelve-factor-app-methodology-41di</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xJWpB6KT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mub2akowrcaqyktpwie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xJWpB6KT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mub2akowrcaqyktpwie.png" alt="Twelve Factor App Methodology"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Having well-defined guidelines in place can facilitate your software projects, especially more complex ones. But you don’t necessarily need to invest the time and effort to document these practices from scratch. Instead, teams can leverage best practices documented by other reputable companies and adapt them for their own projects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://12factor.net/"&gt;The Twelve Factor App&lt;/a&gt;, first presented around 2011, is a set of language-agnostic guidelines used to develop modern enterprise-grade software as a service. It promotes discipline in software development while addressing architectural, deployment, and operational concerns in building software at scale. While suitable for cloud-native applications, it works just as well with software of a similar nature that is hosted on premises.&lt;/p&gt;

&lt;p&gt;So how does the Twelve Factor App hold up today? Let’s review each of the twelve factors and see.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Codebase
&lt;/h2&gt;

&lt;p&gt;A Twelve Factor App requires code to be stored in source control (e.g., Git), and defines the one-to-many relationship between a codebase and deploys (running instances of the app) resulting from it. It’s quite typical to have different versions of the app (from different commits) running in different environments.&lt;/p&gt;

&lt;p&gt;Nowadays, the advantages of source control are well understood. Storing source code and its history centrally prevents accidents like lost code. This also facilitates many operations that have become a staple of software development, such as branching, merging, reverting changes, cherry-picking, and others. Thanks to well-established processes such as git-flow, even large teams can work on the same code concurrently.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Dependencies
&lt;/h2&gt;

&lt;p&gt;Managing dependencies can be tricky, especially when they’re hidden or conflicting between apps. That’s why the Twelve Factor App recommends declaring dependencies using the relevant programming language’s package manager. It also recommends that dependencies are packaged with the application rather than relying on system-wide dependencies or tools.&lt;/p&gt;

&lt;p&gt;Declaring and isolating dependencies helps avoid conflicts between different applications that run on the same host but require different dependencies. Containers are a great way to package applications together with their dependencies, while at the same time abstracting the underlying environment and isolating them from each other.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Config
&lt;/h2&gt;

&lt;p&gt;The Twelve Factor App recognizes that while the same code is deployed across environments, the configuration differs for each environment. Therefore, configuration should be strictly separated from code and stored in environment variables. This means that only one application build needs to be created, which can then be tested, deployed, and run in multiple environments.&lt;/p&gt;

&lt;p&gt;The Twelve Factor App doesn’t say how to store secrets, such as passwords or API tokens. And while secrets can be thought of as configuration, their sensitive nature warrants special treatment. We recommend managing these using a secret management tool, such as HashiCorp Vault, AWS Secrets Manager, or any other valid tool in this category.&lt;/p&gt;
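&lt;p&gt;As a minimal Node.js sketch of this factor (the variable names and development defaults below are our own, not prescribed by the methodology), all configuration can be read from environment variables at startup:&lt;/p&gt;

```javascript
// Read all configuration from the environment (factor III).
// Variable names and the development fallbacks are illustrative.
const config = {
  port: parseInt(process.env.PORT || "3000", 10),
  databaseUrl: process.env.DATABASE_URL || "postgres://localhost:5432/dev",
  logLevel: process.env.LOG_LEVEL || "info",
};

// Fail fast at startup when a mandatory variable is missing.
function requireEnv(name) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return process.env[name];
}
```

&lt;p&gt;The same build can then run unchanged in every environment, with each environment injecting its own values.&lt;/p&gt;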

&lt;h2&gt;
  
  
  4. Backing Services
&lt;/h2&gt;

&lt;p&gt;Any external dependency accessed over a network (e.g., an SQL or NoSQL database, API, SMTP service, etc.) should be specified in the application’s configuration. By simply changing the configuration, an external dependency should also be replaceable with a similar service. &lt;/p&gt;

&lt;h2&gt;
  
  
  5. Build, Release, Run
&lt;/h2&gt;

&lt;p&gt;Taking a codebase and turning it into a running application in a particular environment involves the following stages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The build stage uses the code and its dependencies to produce a build.&lt;/li&gt;
&lt;li&gt;The release stage uses the build and its configuration to produce a release.&lt;/li&gt;
&lt;li&gt;The run stage runs one or more instances of the release.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This process enables a clear separation of concerns and should be automated using continuous integration and continuous deployment (CI/CD) pipelines. The CD part also maintains a history of deployments, making it easy to revert to an earlier version if necessary. Releases are generally immutable, so any change must create a new release.&lt;/p&gt;
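&lt;p&gt;The separation can be sketched in Node.js as plain data (the object shapes below are illustrative, not part of the methodology): a release is an immutable combination of a build artifact and environment-specific configuration:&lt;/p&gt;

```javascript
// A release immutably combines a build artifact with environment config.
function makeRelease(build, config) {
  return Object.freeze({
    id: `${build.version}-${Date.now()}`, // every release gets a unique id
    build,
    config,
  });
}

const build = { version: "1.4.2", artifact: "app-1.4.2.tar.gz" };
const release = makeRelease(build, { DATABASE_URL: "postgres://staging-db/app" });

// Releases are immutable: changing code or config means cutting a new release.
```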

&lt;p&gt;Developer time is expensive and should not be wasted, so this process needs to be fast. Small optimizations that reduce the time it takes to build and deploy new code (e.g., faster builds, quality checks, or deployments) can have a big positive impact.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Processes
&lt;/h2&gt;

&lt;p&gt;Apps and services should be stateless, with any state residing in a backing store. This is an important prerequisite for other Twelve Factor guidelines, such as concurrency and disposability.&lt;/p&gt;

&lt;p&gt;This approach provides important operational benefits, making it easier to scale and recover from failures. It’s also beneficial from a developer perspective: separating mutable data and side effects from logic makes the code deterministic, and therefore easier to test and parallelize.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Port Binding
&lt;/h2&gt;

&lt;p&gt;Twelve Factor Apps are self-contained and independent processes that do not run under the control of a parent process. They expose their services by listening on a port. This also means that they can act as backing services for other apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Concurrency
&lt;/h2&gt;

&lt;p&gt;Twelve Factor Apps can be scaled out by running multiple instances of the same application. Stateless processes are easy to spin up and down as necessary, either manually or automatically based on metrics or a schedule. At the same time, apps with isolated dependencies minimize the risk of issues when they run side by side on different hosts.&lt;/p&gt;

&lt;p&gt;In this context, minimizing startup time means that the system can adapt more quickly to varying load. This is another benefit of containers, which are lightweight and much quicker to boot up than virtual machines.&lt;/p&gt;

&lt;h2&gt;
  
  
  9. Disposability
&lt;/h2&gt;

&lt;p&gt;The Twelve Factor App assumes that an application can go down at any time for any reason: elastically scaling resources to meet demand, hardware failures, or even intentional application restarts. Where possible, the app should gracefully release its resources by handling the appropriate shutdown signal, such as SIGTERM.&lt;/p&gt;

&lt;p&gt;When designing apps around disposability, it helps if they are stateless. In that case, they only need to clean up the resources being used as part of request processing, as opposed to in-memory state that could be expensive to replenish (e.g., cache). Minimizing startup time is also useful, since running instances of an app can recover more quickly when they are physically relocated elsewhere.&lt;/p&gt;

&lt;h2&gt;
  
  
  10. Dev/Prod Parity
&lt;/h2&gt;

&lt;p&gt;It is a matter of good discipline in software development (as well as operations) to keep different environments in sync. This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using the same type of backing software (e.g., databases) in both development and production&lt;/li&gt;
&lt;li&gt;Deploying frequently to minimize the gap between code on different environments&lt;/li&gt;
&lt;li&gt;Giving developers the responsibility to deploy and monitor their own applications, rather than having a separate operations team take care of this&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although the Twelve Factor App suggests that developers take over operational responsibilities entirely, in practice this is not strictly necessary. It is quite common to have one or more DevOps engineers as part of a development team, ensuring that team goals are aligned across development and operational concerns while making the best use of skill specialization.&lt;/p&gt;

&lt;h2&gt;
  
  
  11. Logs
&lt;/h2&gt;

&lt;p&gt;The Twelve Factor App is very clear that logs should be written to standard output and treated like an event stream. The application doesn’t care where logs are ultimately stored; it is up to the execution environment to pick them up and route them to the appropriate destination.&lt;/p&gt;

&lt;p&gt;Conversely, an application that buffers logs in memory and periodically flushes them to a database is inherently flawed because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It becomes coupled with the log storage, taking on additional dependencies to communicate with it, and the log storage can’t be changed without changing the application.&lt;/li&gt;
&lt;li&gt;Log management has a direct and non-negligible impact on the application’s CPU and memory usage.&lt;/li&gt;
&lt;li&gt;It violates the requirement that processes be stateless.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Therefore, it is recommended to decouple the application from its log storage. It is also imperative to avoid logging sensitive data, such as personally identifiable information or credentials.&lt;/p&gt;
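&lt;p&gt;As a minimal Node.js sketch of this factor, a logger can write one JSON event per line to standard output (the field names are our own convention):&lt;/p&gt;

```javascript
// One structured JSON event per line on stdout; routing and storage
// are left entirely to the execution environment.
function log(level, message, fields = {}) {
  const event = {
    timestamp: new Date().toISOString(),
    level,
    message,
    ...fields,
  };
  process.stdout.write(JSON.stringify(event) + "\n");
  return event; // returned so callers can inspect the emitted event
}

log("info", "user signed in", { userId: 42 });
```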

&lt;h2&gt;
  
  
  12. Admin Processes
&lt;/h2&gt;

&lt;p&gt;One-off processes like database migrations are an integral part of a release. Many of the points from the Twelve Factor App itself also apply to these one-off processes. Such tasks are stored in the same codebase and released as part of the same release process.&lt;/p&gt;

&lt;p&gt;However, under this point, the Twelve Factor App also recommends using a REPL shell to “run arbitrary code or inspect the app’s models against the live database.” It also suggests using SSH to run arbitrary processes on production. We disagree with these approaches. For security and risk management reasons, all deploys should go through a proven CI/CD process, and there should be no developer access to production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;After 10 years, it's amazing to see how the Twelve Factor App holds up. It is still a great foundation for software teams building applications that need a certain level of scale, reliability, and maintainability. It is especially useful as acceptance criteria when evaluating whether software is production ready.&lt;/p&gt;

&lt;p&gt;For our latest insights and updates, &lt;a href="https://www.linkedin.com/company/codersociety"&gt;follow us on LinkedIn&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://codersociety.com"&gt;https://codersociety.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>software</category>
      <category>engineering</category>
      <category>bestpractices</category>
    </item>
    <item>
      <title>5 Reasons to use GraphQL at Your Company</title>
      <dc:creator>Kentaro Wakayama</dc:creator>
      <pubDate>Mon, 07 Dec 2020 17:23:50 +0000</pubDate>
      <link>https://forem.com/coder_society/5-reasons-to-use-graphql-at-your-company-2b24</link>
      <guid>https://forem.com/coder_society/5-reasons-to-use-graphql-at-your-company-2b24</guid>
<description>&lt;p&gt;GraphQL is on the rise. Companies like Facebook, Netflix, Shopify, and PayPal use this query language and API technology to drive their products. In this article, you'll learn why you should consider using it at your company too.&lt;/p&gt;

&lt;p&gt;This article was &lt;a href="https://codersociety.com/blog/articles/graphql-reasons"&gt;originally published at Coder Society&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Rise of GraphQL
&lt;/h2&gt;

&lt;p&gt;What is the best way to build an API today? REST probably comes to mind, but if you’re going to make the investment to build new software, it’s probably worth considering a few different options and choosing the best among them.&lt;/p&gt;

&lt;p&gt;GraphQL stands out as an alternative to the REST API architecture mainly (but not only) because it provides a discoverable API by design. It also comes with its own query language and a runtime for fulfilling queries via functions called resolvers.&lt;/p&gt;

&lt;p&gt;Originally developed in 2012 at Facebook as a better data-fetching solution for underpowered mobile devices, GraphQL was open-sourced in 2015. In 2018, it was moved under the care of the Linux Foundation, which maintains other important projects like Node.js, Kubernetes, and, of course, Linux itself.&lt;/p&gt;

&lt;p&gt;The general movement around GraphQL is very encouraging for anyone looking to adopt it. Its popularity has been rising rapidly over the last few years, as seen on &lt;a href="https://insights.stackoverflow.com/trends?tags=graphql"&gt;Stack Overflow Trends&lt;/a&gt;, for example. There are also several success stories at reputable companies such as &lt;a href="https://medium.com/paypal-engineering/graphql-a-success-story-for-paypal-checkout-3482f724fb53"&gt;PayPal&lt;/a&gt;, &lt;a href="https://netflixtechblog.com/our-learnings-from-adopting-graphql-f099de39ae5f"&gt;Netflix&lt;/a&gt;, and &lt;a href="https://medium.com/coursera-engineering/evolving-the-graph-4c587a4ad9a8"&gt;Coursera&lt;/a&gt;, where GraphQL was instrumental in building flexible, high-performance APIs in large, complex architectures.&lt;/p&gt;

&lt;p&gt;However, given the dynamic technology landscape in which we operate today, you would be forgiven for being skeptical. Could GraphQL be another fad? If it works for these companies, does that necessarily mean it will work for you? Let’s discuss the benefits and challenges of GraphQL, so that you can make an informed decision.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reasons to Use GraphQL
&lt;/h2&gt;

&lt;p&gt;As an API technology designed for flexibility, GraphQL is a strong enabler for both developers and consumers of APIs, as well as the organizations behind them. In this section, we’ll explore some of the key areas where GraphQL shines. &lt;/p&gt;

&lt;h3&gt;
  
  
  1. One Data Graph for All
&lt;/h3&gt;

&lt;p&gt;GraphQL is an excellent choice for organizations with multiple teams and systems that want to make their data easily available through one unified API.&lt;/p&gt;

&lt;p&gt;No matter how many databases, services, legacy systems, and third-party APIs you use, GraphQL can hide this complexity by providing a single endpoint that clients can talk to. The GraphQL server is responsible for fetching data from the right places, and clients never need to know the details of where different pieces of data are coming from. As a result, the GraphQL ecosystem provides maximum flexibility when it comes to making data easily available for customers and internal users.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. No Over-Fetching or Under-Fetching
&lt;/h3&gt;

&lt;p&gt;Another huge benefit for GraphQL API clients is that they can request exactly the data they need, even across related entities. This is especially important because different clients have different data requirements, either because of different business logic or because they present a different view of the data (e.g., web vs. mobile) and may also have different hardware limitations.&lt;/p&gt;

&lt;p&gt;By way of comparison, it’s much harder to efficiently retrieve nontrivial data from a REST API. Requesting data from a single endpoint will often return more data than is actually needed (overfetching), whereas requesting data about several related entities usually requires either several API calls (underfetching) or dedicated endpoints for specific client requests (which duplicates effort). GraphQL solves this issue by serving exactly the data which each client requests, nothing more and nothing less.&lt;/p&gt;
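&lt;p&gt;As an illustration (the endpoint URL and field names below are hypothetical), a client can ask a single GraphQL endpoint for exactly the fields it needs, where a REST client would typically need several calls:&lt;/p&gt;

```javascript
// One request for a user's name plus the titles of their posts; a REST
// client would typically call /users/42 and /users/42/posts separately.
const query = `
  query {
    user(id: 42) {
      name
      posts {
        title
      }
    }
  }
`;

// A single POST to the (hypothetical) GraphQL endpoint carries the query:
// fetch("https://api.example.com/graphql", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify({ query }),
// });
```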

&lt;h3&gt;
  
  
  3. Better Developer Experience
&lt;/h3&gt;

&lt;p&gt;The GraphQL ecosystem comes with a number of tools that make working with GraphQL a breeze. Tools such as &lt;a href="https://github.com/graphql/graphiql"&gt;GraphiQL&lt;/a&gt; and &lt;a href="https://github.com/graphql/graphql-playground"&gt;GraphQL Playground&lt;/a&gt; provide a rich experience, allowing developers to inspect and try out APIs with minimal effort, thanks to the self-documenting features which we will get to in the next section.&lt;/p&gt;

&lt;p&gt;Also, code generation tools like &lt;a href="https://graphql-code-generator.com/"&gt;GraphQL Code Generator&lt;/a&gt; can be used to further speed up development, while other tools and best practices exist to address specific problems including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Client-side caching is available out of the box in several client libraries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://relay.dev/graphql/connections.htm"&gt;Cursor-based pagination&lt;/a&gt; provides a way to offer pagination across lists of data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;a href="https://github.com/graphql/dataloader"&gt;DataLoader&lt;/a&gt; improves performance by batching data fetch requests and also provides a basic level of server-side caching.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Higher Quality of Your System
&lt;/h3&gt;

&lt;p&gt;GraphQL APIs are built around a type system, which lays out the name and type of each field as well as the relationships between different entities. This type system, or schema, is used to validate queries sent by the client. The schema can be queried via a feature called &lt;strong&gt;introspection&lt;/strong&gt;, which is often used to generate documentation and code that will be used when integrating the API on the client-side.&lt;/p&gt;

&lt;p&gt;As a result, it requires minimal effort to have a well-documented API when using GraphQL. This provides great transparency to developers who are working with an API for the first time and makes development smoother and more efficient.&lt;/p&gt;
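&lt;p&gt;As a small, hypothetical example, a schema is written in the GraphQL Schema Definition Language, commonly embedded as a string that server libraries such as graphql-js or Apollo Server consume (the types below are our own):&lt;/p&gt;

```javascript
// Hypothetical SDL schema: field names, types, and relationships are
// declared up front and used to validate every incoming query.
const typeDefs = `
  type Author {
    id: ID!
    name: String!
    posts: [Post!]!
  }

  type Post {
    id: ID!
    title: String!
    author: Author!
  }

  type Query {
    posts: [Post!]!
    author(id: ID!): Author
  }
`;
```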

&lt;h3&gt;
  
  
  5. Build for Change
&lt;/h3&gt;

&lt;p&gt;It is common for REST APIs to provide multiple versions of the same API so that it can change without breaking the existing functionality. GraphQL encourages a different approach to API modifications: evolution.&lt;/p&gt;

&lt;p&gt;When breaking changes are required (for instance, renaming a field or changing its type), you can introduce a new field and deprecate the old one, possibly removing it completely later on when you’re sure it’s no longer being used. This means that you can still change your API while maintaining backward compatibility and a single API.&lt;/p&gt;
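&lt;p&gt;In SDL, this is expressed with the built-in &lt;code&gt;@deprecated&lt;/code&gt; directive; the field names below are a hypothetical example:&lt;/p&gt;

```javascript
// The old field stays available, marked deprecated, while clients
// migrate to its replacement; no second API version is needed.
const typeDefs = `
  type User {
    name: String @deprecated(reason: "Use fullName instead.")
    fullName: String
  }
`;
```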

&lt;h2&gt;
  
  
  Considerations Before Adopting GraphQL
&lt;/h2&gt;

&lt;p&gt;GraphQL is an excellent tool to build scalable and flexible APIs, but it is not a panacea and is certainly not for everyone.&lt;/p&gt;

&lt;h3&gt;
  
  
  Learning Curve
&lt;/h3&gt;

&lt;p&gt;Whereas REST is a simple and familiar approach to building APIs, GraphQL is a different beast altogether. Developers and infrastructure engineers alike will need to learn how to effectively develop and deploy GraphQL APIs, a task that will take some getting used to.&lt;/p&gt;

&lt;p&gt;As a result, teams that are on a tight schedule are probably better off using a technology with which they’re already familiar.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure and Tooling
&lt;/h3&gt;

&lt;p&gt;Deploying GraphQL, especially at scale, can require significant investment in infrastructure and tooling. Using it does not save you from having to deploy virtual machines or containers, set up a networking infrastructure, and deploy and maintain GraphQL server software across a large environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance and Security
&lt;/h3&gt;

&lt;p&gt;You also have to be extra careful that the additional flexibility afforded by GraphQL does not result in queries that maliciously or accidentally degrade or take down your system. This can be addressed by rate limiting or limiting query complexity and depth.&lt;/p&gt;
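&lt;p&gt;As a naive sketch of the idea (production systems typically use a dedicated library such as graphql-depth-limit), query depth can be estimated and capped before execution:&lt;/p&gt;

```javascript
// Estimate nesting depth by counting braces; a real implementation
// would walk the parsed query's AST instead.
function queryDepth(query) {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === "{") {
      depth += 1;
      max = Math.max(max, depth);
    } else if (ch === "}") {
      depth -= 1;
    }
  }
  return max;
}

// Reject queries that nest deeper than the configured limit.
function enforceDepthLimit(query, limit = 7) {
  if (queryDepth(query) > limit) {
    throw new Error(`Query depth exceeds limit of ${limit}`);
  }
}
```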

&lt;p&gt;Finally, it is always important to protect data that should not be public. Authentication and authorization mechanisms that are popular among other web technologies can also be used with GraphQL. Plus, pay attention to introspection, as it can leak internal types if not correctly secured.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There is no doubt that REST gets the job done, but if you’re at a point where you need a better way to build APIs and serve diverse clients, then you should probably give GraphQL a try.&lt;/p&gt;

&lt;p&gt;GraphQL allows you to build evolvable and queryable APIs, hide the complexity of internal systems used to retrieve various pieces of data, and leverage a type system that results in automatic and up-to-date API documentation. These features, along with its tooling and ecosystem, make GraphQL an efficient and effective tool for API and client developers alike.&lt;/p&gt;

&lt;p&gt;Although GraphQL does require some investment, this is far outweighed by its advantages in situations where there are lots of data and services that should be made accessible to various existing and future API clients.&lt;/p&gt;

&lt;p&gt;For our latest insights and updates, &lt;a href="https://www.linkedin.com/company/codersociety"&gt;follow us on LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>graphql</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Introduction to GraphQL for Developers</title>
      <dc:creator>Kentaro Wakayama</dc:creator>
      <pubDate>Tue, 24 Nov 2020 11:13:44 +0000</pubDate>
      <link>https://forem.com/coder_society/introduction-to-graphql-for-developers-4c51</link>
      <guid>https://forem.com/coder_society/introduction-to-graphql-for-developers-4c51</guid>
      <description>&lt;p&gt;GraphQL is a powerful query language for APIs and a runtime for resolving queries with data. &lt;/p&gt;

&lt;p&gt;This article was &lt;a href="https://codersociety.com/blog/articles/introduction-graphql"&gt;originally published at Coder Society&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We’ll explore GraphQL’s core features, how to interact with a GraphQL API, and some development and operational challenges.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--udGYufnk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5r8aed0t48iio9ukkyt7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--udGYufnk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5r8aed0t48iio9ukkyt7.jpg" alt="GraphQL"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Story of GraphQL
&lt;/h2&gt;

&lt;p&gt;Nowadays, REST seems to be the default approach for building APIs, typically based on the familiar HTTP protocol. While REST is relatively simple to work with and enjoys widespread popularity, its use of multiple endpoints to address resources sometimes gets in the way of flexibility.&lt;/p&gt;

&lt;p&gt;With such a rigid approach, some clients will get more data than they actually need (overfetching), whereas others will not get enough from a single endpoint (underfetching). This is a common issue among endpoint-based APIs like REST, and API clients have to compensate for it, for example by issuing multiple requests and reshaping the data on the client side. &lt;/p&gt;

&lt;p&gt;With GraphQL, however, clients can request exactly the data that they need---no more and no less---similar to querying specific fields in a database. GraphQL was developed at Facebook in 2012, when the company was reworking its mobile apps and needed a data-fetching technology that was friendly even to low-resource devices. It was open-sourced in 2015 and moved to the GraphQL Foundation in 2018.&lt;/p&gt;

&lt;h2&gt;
  
  
  Features of GraphQL
&lt;/h2&gt;

&lt;p&gt;GraphQL shares some similarities with REST. It allows clients to request and manage data from APIs via a request-response protocol, typically run on top of HTTP. However, the way that data is organized, requested, and served is very different.&lt;/p&gt;

&lt;p&gt;GraphQL provides the following operations to work with data, via a single endpoint:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Queries: Allow clients to request data (similar to a GET request in REST).&lt;/li&gt;
&lt;li&gt;  Mutations: Allow clients to manipulate data (i.e., create, update, or delete, similar to POST, PUT, or DELETE, respectively).&lt;/li&gt;
&lt;li&gt;  Subscriptions: Allow clients to subscribe to real-time updates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Therefore, on the surface, GraphQL can cover your typical API requirements. Do not be misled by the "QL" into thinking that it's used just for data retrieval.&lt;/p&gt;

&lt;p&gt;A GraphQL API is based on a type system or schema which describes the capabilities and data structures of the API. The schema is defined using the GraphQL Schema Definition Language (SDL) and includes all the different types, their fields, and how they are related. Using a feature called introspection, clients can query the schema itself. This functionality is particularly useful for tooling, such as code generation and autogeneration of API documentation. For example, projects such as GraphiQL and GraphQL Playground leverage this functionality to provide rich documentation and integrated querying experiences.&lt;/p&gt;

&lt;p&gt;What makes GraphQL so powerful is that clients can request exactly the data they need, not more and not less. It's also possible to request related data in the same query without the need for additional API calls. We'll see some examples in the next section.&lt;/p&gt;

&lt;p&gt;A GraphQL server evaluates each incoming API request and resolves the values for each requested field using resolver functions. The resolver functions do the &lt;em&gt;real work&lt;/em&gt; by, for example, fetching data from databases or other systems. This makes GraphQL an ideal solution for heterogeneous environments where data is located in different sources.&lt;/p&gt;
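&lt;p&gt;As a minimal sketch in Node.js (the in-memory data mirrors the Star Wars example data, but the resolver shapes are our own), a resolver map ties each field to a function that fetches its value from some source:&lt;/p&gt;

```javascript
// In-memory stand-in for a real data source (database, REST API, ...).
const humansDb = [
  { id: "1000", name: "Luke Skywalker", homePlanet: "Tatooine" },
  { id: "1001", name: "Darth Vader", homePlanet: "Tatooine" },
];

// Each field of the schema gets a resolver; the server calls only the
// resolvers needed to satisfy the fields a client actually requested.
const resolvers = {
  Query: {
    humans: () => humansDb,
    human: (_parent, args) => humansDb.find((h) => h.id === String(args.id)),
  },
};
```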

&lt;h2&gt;
  
  
  Working with GraphQL
&lt;/h2&gt;

&lt;p&gt;In order to understand a little better what it's like to interact with a GraphQL API, let's take a look at a few simple examples. We're going to use an updated &lt;a href="https://github.com/coder-society/starwars-server"&gt;Star Wars example server&lt;/a&gt;, which provides a fully functional GraphQL API with some example data, as well as GraphQL Playground. Simply follow these instructions to get it up and running:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the example repository: &lt;code&gt;git clone https://github.com/coder-society/starwars-server&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Install dependencies: &lt;code&gt;cd starwars-server &amp;amp;&amp;amp; npm install&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Start the GraphQL server: &lt;code&gt;npm start&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Visit &lt;a href="http://localhost:8080/graphql"&gt;http://localhost:8080/graphql&lt;/a&gt; to access the GraphQL Playground interface.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--814cfBUG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/codersociety/image/fetch/https://cdn.codersociety.com/uploads/graphql-playground-doc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--814cfBUG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/codersociety/image/fetch/https://cdn.codersociety.com/uploads/graphql-playground-doc.png" alt="Figure 1: Exploring the API via GraphQL Playground's &amp;quot;Docs&amp;quot; tab"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1: Exploring the API via GraphQL Playground's "Docs" tab&lt;/p&gt;

&lt;p&gt;So here we are, looking at an unfamiliar API for the first time. Where do we start? What does it offer? Fortunately, GraphQL Playground provides rich API and schema documentation by leveraging GraphQL's introspection feature. Simply click on the "Docs" or "Schema" tab on the right to explore the API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Queries
&lt;/h3&gt;

&lt;p&gt;You can use the information in the API docs to create your first query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query  {

humans {

    id

    name

  }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;code&gt;humans&lt;/code&gt; query gives us a list of entities, which are of type Human, and we list the fields we want returned inside the inner curly brackets. In this case we're specifying that we want &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;name&lt;/code&gt;, but we could also have returned any combination of fields supported by the &lt;code&gt;Human&lt;/code&gt; type. This returns the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{

  "data":  {

    "humans":  [

      {

        "id":  "1000",

        "name":  "Luke Skywalker"

      },

      {

        "id":  "1001",

 "name":  "Darth Vader"

      },

      {

        "id":  "1002",

        "name":  "Han Solo"

      },

      {

        "id":  "1003",

        "name":  "Leia Organa"

      },

      {

        "id":  "1004",

        "name":  "Wilhuff Tarkin"

      }

    ]

  }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we wanted to get specific information about one of these humans, we could use the &lt;code&gt;human&lt;/code&gt; query, passing in the specific &lt;code&gt;id&lt;/code&gt; value as an input to the query, as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query  {

human (id:  1001)  {

    homePlanet

  }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, the data we get back in the response corresponds to the fields we requested, as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{

  "data":  {

    "human":  {

      "homePlanet":  "Tatooine"

    }

  }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Be774JUy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/codersociety/image/fetch/https://cdn.codersociety.com/uploads/graphql-playground-suggestions.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Be774JUy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/codersociety/image/fetch/https://cdn.codersociety.com/uploads/graphql-playground-suggestions.png" alt="Figure 2: GraphQL Playground suggests fields you can include as you type"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2: GraphQL Playground suggests fields you can include as you type&lt;/p&gt;

&lt;p&gt;You can add other fields that you want returned, which you can discover by examining the aforementioned API and schema documentation, or simply by seeing the suggestions in GraphQL Playground as you type. Let's try a slightly bigger query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query  {

  human(id:  1001)  {

    homePlanet

    name

    appearsIn

starships {

      id

      name

    }

  }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a little more interesting because it also queries related data, in this case a &lt;code&gt;starships&lt;/code&gt; field, in which we are arbitrarily retrieving the id and name of a list of starships (entities of type &lt;code&gt;Starship&lt;/code&gt;). The response for this query is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{

  "data":  {

    "human":  {

      "homePlanet":  "Tatooine",

      "name":  "Darth Vader",

      "appearsIn":  [

        "NEWHOPE",

        "EMPIRE",

        "JEDI"

      ],

      "starships":  [

        {

          "id":  "3002",

          "name":  "TIE Advanced x1"

        }

      ]

    }

  }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While there is a lot more to be said about querying GraphQL APIs, these simple examples demonstrate how easy it is to consume a GraphQL API. With a single API, it's possible to serve a wide variety of clients with very different needs. For example, a mobile client might want to request only a subset of the data that a web app would need.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mutations and Subscriptions
&lt;/h3&gt;

&lt;p&gt;GraphQL can do more than just query data. The Star Wars example we're using provides one mutation example (adding a review) and one subscription example (getting notified when a review is added).&lt;/p&gt;

&lt;p&gt;Let's go back to GraphQL Playground and execute the following to start a subscription:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;subscription {

reviewAdded {

    episode

    stars

    commentary

  }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This subscription waits for a review to be added and, whenever one is, returns its episode, stars, and commentary fields. Just as with queries, we choose what data we're interested in receiving.&lt;/p&gt;

&lt;p&gt;Since we can't continue writing queries while listening on a subscription, we can simply open a new tab in GraphQL Playground and execute the following mutation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mutation  {

  createReview(episode:  NEWHOPE,  review:  {

    stars:  5,

    commentary:  "Awesome"

  })  {

    episode

    stars

    commentary

  }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As with queries, the inputs are in the round brackets (including a non-trivial review object in this case), and the outputs are listed in curly brackets. In this particular example, the inputs and outputs are the same, which is not very useful; however, the output of a mutation can be used to retrieve information about a newly added entity, such as an identifier. The result of this mutation is the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{

  "data":  {

    "createReview":  {

      "episode":  "NEWHOPE",

      "stars":  5,

      "commentary":  "Awesome"

    }

  }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because there is an active subscription, you'll see the above data reflected in that subscription, even as you continue adding reviews.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--14EyUyXF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/codersociety/image/fetch/https://cdn.codersociety.com/uploads/graphql-playground-subscription.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--14EyUyXF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/codersociety/image/fetch/https://cdn.codersociety.com/uploads/graphql-playground-subscription.png" alt="Figure 3: Reviews are pushed to a subscription in GraphQL Playground"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 3: Reviews are pushed to a subscription in GraphQL Playground&lt;/p&gt;

&lt;p&gt;Whether you want clients to query data, manipulate it, or receive real-time updates, GraphQL has you covered. It also offers a lot more functionality that we haven't covered, so please check the &lt;a href="https://graphql.org/learn/queries/"&gt;Queries and Mutations documentation&lt;/a&gt; to learn more about the available features.&lt;/p&gt;

&lt;h2&gt;
  
  
  GraphQL in the Real World
&lt;/h2&gt;

&lt;p&gt;There is always a difference between developing software and running it in a production environment. In the case of GraphQL, it would be a shame to talk about its merits and fail to mention some of the challenges, tools, and techniques involved in using it successfully in real-world scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  Effort
&lt;/h3&gt;

&lt;p&gt;GraphQL's flexibility comes at a cost. It has a steeper learning curve than REST and takes some getting used to. Because GraphQL is declarative by nature, it also requires care to understand and define every field that can be requested.&lt;/p&gt;

&lt;p&gt;GraphQL involves a certain amount of complexity in developing and maintaining APIs. Tooling is evolving quickly, and it takes a fair amount of effort to keep up with all the new developments. Some tools, such as &lt;a href="https://graphql-code-generator.com/"&gt;GraphQL Code Generator&lt;/a&gt;, can help speed up development, but it's also important to be aware of any challenges and limitations (e.g., &lt;a href="https://www.apollographql.com/docs/federation/"&gt;Apollo Federation&lt;/a&gt; currently does not support subscriptions).&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance
&lt;/h3&gt;

&lt;p&gt;Optimizing APIs and making sure they remain performant is a common theme among API technologies. However, with GraphQL it's easy to run into what is called the N+1 problem. Let's say that each article on a blog has a list of comments. Because of the way GraphQL executes resolvers, it will retrieve a list of articles (1 query), and for each article it will again retrieve a list of comments (n queries). This may lead to performance problems where too many operations are executed for a single query.&lt;/p&gt;

&lt;p&gt;The GraphQL &lt;a href="https://github.com/graphql/dataloader"&gt;DataLoader&lt;/a&gt; provides a solution to this problem. Not only does it batch similar requests to minimize roundtrips, but it also offers a basic caching mechanism.&lt;/p&gt;
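&lt;p&gt;To illustrate the batching idea behind DataLoader (a simplified sketch of the concept, not the actual library): &lt;code&gt;load()&lt;/code&gt; calls made within the same tick are queued and resolved with a single batch call, turning n comment queries into one:&lt;/p&gt;

```javascript
// Minimal sketch of DataLoader-style batching (illustrative only):
// load() calls in the same tick are queued; one batch function call
// resolves them all, avoiding the N+1 query pattern.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn; // async (keys) => values, in the same order as keys
    this.queue = [];
  }

  load(key) {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      // Schedule a single flush once the current tick has finished.
      if (this.queue.length === 1) {
        process.nextTick(() => this.flush());
      }
    });
  }

  async flush() {
    const batch = this.queue.splice(0);
    const values = await this.batchFn(batch.map((item) => item.key));
    batch.forEach((item, i) => item.resolve(values[i]));
  }
}

// Hypothetical batch function standing in for a single
// "SELECT ... WHERE article_id IN (...)" database query.
let batchCalls = 0;
const commentLoader = new TinyLoader(async (articleIds) => {
  batchCalls += 1;
  return articleIds.map((id) => [`comment for article ${id}`]);
});

// Three loads, but only one batch call.
Promise.all([commentLoader.load(1), commentLoader.load(2), commentLoader.load(3)])
  .then((comments) => console.log(batchCalls, comments.length));
```

The real DataLoader also adds per-request caching on top of this, so repeated loads of the same key don't hit the backend again.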

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;p&gt;GraphQL APIs, like other types of APIs, need to be hardened to ensure that only &lt;a href="https://graphql.org/learn/authorization/"&gt;authorized clients&lt;/a&gt; can access the available data, and to prevent malicious or accidental denial of service. The former can be addressed with standard authentication/authorization techniques such as JSON Web Tokens, where restrictions can be applied across the whole API or at the field level. The latter involves making sure that heavy queries cannot cripple the API, by limiting the depth, complexity, and server time allocated to queries.&lt;/p&gt;
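&lt;p&gt;As a toy illustration of the last point (a naive sketch, not how production servers do it -- real implementations validate the parsed query AST), one could bound query depth by counting brace nesting in the raw query string:&lt;/p&gt;

```javascript
// Naive query-depth guard (illustrative only): counts the deepest
// nesting of curly braces in a query string. Note it would miscount
// braces inside string literals; AST-based validation avoids that.
function queryDepth(query) {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === '{') {
      depth += 1;
      max = Math.max(max, depth);
    } else if (ch === '}') {
      depth -= 1;
    }
  }
  return max;
}

function rejectTooDeep(query, maxDepth = 5) {
  const depth = queryDepth(query);
  if (depth > maxDepth) {
    throw new Error(`Query depth ${depth} exceeds limit ${maxDepth}`);
  }
}

const shallow = '{ human { name } }';                 // depth 2
const deep = '{ a { b { c { d { e { f } } } } } }';   // depth 6
```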

&lt;p&gt;If you are using internal types that should not be publicly accessible, it's also important to secure introspection queries to make sure internals are not leaked.&lt;/p&gt;

&lt;h3&gt;
  
  
  Federation
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.apollographql.com/docs/federation/"&gt;Apollo Federation&lt;/a&gt; is a solution for how to work with GraphQL in a distributed system. API clients should only be concerned with one data graph and a single GraphQL API endpoint, no matter how many GraphQL servers your architecture is made of.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other Considerations
&lt;/h3&gt;

&lt;p&gt;There are many other things to consider when deploying a GraphQL API, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Caching: GraphQL's great flexibility means that queries are very unpredictable and thus difficult to cache. Also, it is common practice to send queries over HTTP POST requests, which by their very nature are difficult to cache. Some client libraries, such as Apollo Client, mitigate this by providing advanced client-side caching features.&lt;/li&gt;
&lt;li&gt;  API versioning: GraphQL has a built-in &lt;a href="https://spec.graphql.org/June2018/#sec--deprecated"&gt;deprecation&lt;/a&gt; mechanism, and it generally encourages APIs to evolve and retain backwards compatibility rather than introducing completely separate versions.&lt;/li&gt;
&lt;li&gt;  Uploading files: File upload is not part of the GraphQL spec, so you'll need to resort to workarounds or choose specific GraphQL servers which provide this functionality.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;GraphQL is a fantastic technology for building evolvable and dynamic APIs that can adapt to the needs of diverse clients and use cases. It takes some getting used to, and it has its development and operational challenges, like any other technology. It's therefore important to understand that GraphQL is another tool in your toolkit, not something that makes REST obsolete. If you want to learn more about GraphQL and how it can help you build a future-proof API, don't hesitate to reach out to us.&lt;/p&gt;

&lt;p&gt;For our latest insights and updates, &lt;a href="https://www.linkedin.com/company/codersociety"&gt;follow us on LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>graphql</category>
    </item>
    <item>
      <title>Logging in Kubernetes with Loki and the PLG Stack</title>
      <dc:creator>Kentaro Wakayama</dc:creator>
      <pubDate>Thu, 19 Nov 2020 08:50:48 +0000</pubDate>
      <link>https://forem.com/coder_society/logging-in-kubernetes-with-loki-and-the-plg-stack-pe5</link>
      <guid>https://forem.com/coder_society/logging-in-kubernetes-with-loki-and-the-plg-stack-pe5</guid>
      <description>&lt;h2&gt;
  
  
  What is Loki?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://grafana.com/oss/loki/" rel="noopener noreferrer"&gt;Loki&lt;/a&gt; is an open-source, multi-tenant log aggregation system. It can be used with Grafana and Promtrail to collect and access logs, &lt;a href="https://grafana.com/docs/loki/latest/overview/comparisons/" rel="noopener noreferrer"&gt;similar to the ELK/EFK stack&lt;/a&gt;. While one can use Kibana and Elasticsearch to make advanced data analysis and visualizations, the Loki-based logging stack focuses on being light-weight and easy to operate.&lt;/p&gt;

&lt;p&gt;Loki provides a query language called &lt;a href="https://grafana.com/docs/loki/latest/logql" rel="noopener noreferrer"&gt;LogQL&lt;/a&gt;, which allows users to query logs. It is inspired by Prometheus' PromQL and can be considered to be a distributed "grep" that aggregates log sources.&lt;/p&gt;

&lt;p&gt;One of the main differences from conventional logging systems is that Loki indexes only the metadata rather than the logs' whole contents. The index is therefore smaller, which reduces memory consumption and ultimately lowers costs. One drawback of this design is that queries may be less performant than with everything indexed and loaded in memory.&lt;/p&gt;

&lt;p&gt;Logs are stored directly in cloud storage such as Amazon S3 or GCS, without the need to store files on disk. This simplifies operations and avoids issues such as running out of disk space.&lt;/p&gt;

&lt;p&gt;This article was originally published at &lt;a href="https://codersociety.com/blog/articles/loki-kubernetes-logging" rel="noopener noreferrer"&gt;Coder Society&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  8 Benefits of using Loki
&lt;/h2&gt;

&lt;p&gt;Here are some of the key benefits of using Loki in your stack:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Easy to use: It's simple to set up and easy to operate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lightweight: It indexes only metadata instead of the full log messages, as EFK does. As a result, Loki deployments require less RAM and are cheaper to run.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cloud-native: It works well together with other &lt;a href="https://codersociety.com/blog/articles/cloud-native-tools" rel="noopener noreferrer"&gt;cloud-native tools&lt;/a&gt; such as Kubernetes, where metadata such as Pod labels is automatically scraped and indexed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Uses object storage: It uses object storage like Amazon S3 or GCS, which is usually cheaper than block storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scales horizontally: It can run as a single binary for local or small-scale operations, and it can easily be scaled horizontally for large-scale operations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Quorum consistency: It uses Dynamo-style &lt;a href="https://grafana.com/docs/loki/latest/architecture/#quorum-consistency" rel="noopener noreferrer"&gt;quorum consistency&lt;/a&gt; for read and write operations to guarantee uniform query results.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multi-tenancy support: It supports &lt;a href="https://grafana.com/docs/loki/latest/overview/#multi-tenancy" rel="noopener noreferrer"&gt;multi-tenancy&lt;/a&gt; through a tenant ID so that each tenant's data is stored separately.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Native Grafana support: It has native support in Grafana (requires Grafana v6.0 or later).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
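&lt;p&gt;To make the multi-tenancy point concrete: clients identify their tenant by sending a tenant ID with each request, conventionally via the &lt;code&gt;X-Scope-OrgID&lt;/code&gt; header. Here is a rough sketch in Node.js (the helper name and labels are made up for illustration) of building a push request for Loki's HTTP API:&lt;/p&gt;

```javascript
// Sketch: building a log-push request for Loki's HTTP API.
// The X-Scope-OrgID header carries the tenant ID; the labels and
// helper name here are hypothetical.
function buildLokiPush(tenantId, stream, lines) {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Scope-OrgID': tenantId, // keeps each tenant's data separate
    },
    // Loki's push payload: streams of [timestamp_ns, line] pairs
    // grouped under a label set.
    body: JSON.stringify({
      streams: [{ stream, values: lines }],
    }),
  };
}

const push = buildLokiPush(
  'team-a',
  { app: 'checkout', env: 'prod' },
  [[`${Date.now()}000000`, 'order created']]
);

// A real call would target e.g. POST http://loki:3100/loki/api/v1/push
```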

&lt;h2&gt;
  
  
  Loki's Use Cases
&lt;/h2&gt;

&lt;p&gt;Now that we've talked about Loki's benefits, let's also look at some &lt;a href="https://grafana.com/blog/2020/05/12/an-only-slightly-technical-introduction-to-loki-the-prometheus-inspired-open-source-logging-system/#key-log-analysis-use-cases" rel="noopener noreferrer"&gt;popular use cases&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;Debugging and troubleshooting: Loki helps DevOps teams get to the bottom of problems faster by providing helpful information related to the issue at hand. For example, it is easy to see when a problem arose, what exactly happened, and how the issue came about.&lt;/p&gt;

&lt;p&gt;Monitoring: Prometheus is widely used in the industry for monitoring. However, you can identify many issues by monitoring your logs with Loki. For example, you can use it to keep an eye on your website's error rates and receive an alert whenever a certain threshold is exceeded.&lt;/p&gt;

&lt;p&gt;Cybersecurity: Loki allows you to identify threats, problems, and malicious activity in your company's systems. What's more, it helps you understand an attack's details after systems have already been compromised.&lt;/p&gt;

&lt;p&gt;Compliance: When regulations require companies to keep audit logs, Loki is a reliable and secure option to do so.&lt;/p&gt;

&lt;p&gt;Business Intelligence: Loki helps non-technical teams understand log data and develop new strategies and ideas for business growth. For example, marketers can use the data for conversion rate optimization: they can see where customers are coming from, which marketing channels are working best, and which channels need to be improved.&lt;/p&gt;

&lt;h2&gt;
  
  
  The PLG Stack (Promtail, Loki and Grafana)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Floki-plg-stack.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Floki-plg-stack.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://grafana.com/docs/loki/latest/clients/promtail/" rel="noopener noreferrer"&gt;Promtail&lt;/a&gt; is an agent that needs to be installed on each node running your applications or services. It detects targets (such as local log files), attaches labels to log streams from the pods, and ships them to Loki.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://codersociety.com/blog/articles/loki-kubernetes-logging#1-what-is-loki" rel="noopener noreferrer"&gt;Loki&lt;/a&gt; is the heart of the PLG Stack. It is responsible to store the log data.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://grafana.com/grafana/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt; is an open-source visualization platform that processes time-series data from Loki and makes the logs accessible in a web UI.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started with the PLG Stack in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Let's get some hands-on experience with Loki. In this example, we're going to use the Loki stack to visualize the logs of a Kubernetes API server in Grafana.&lt;/p&gt;

&lt;p&gt;Before you start, make sure you have a Kubernetes cluster up and running, and &lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; installed. When you're all set, we can install Loki:&lt;/p&gt;

&lt;h3&gt;
  
  
  Install the PLG Stack with Helm
&lt;/h3&gt;

&lt;p&gt;Create a Kubernetes namespace to deploy the PLG Stack to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create namespace loki
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add Loki's Helm Chart &lt;a href="https://github.com/grafana/loki/tree/master/production/helm/loki" rel="noopener noreferrer"&gt;repository&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm repo add loki https://grafana.github.io/loki/charts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following command to update the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy the Loki stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm upgrade --install loki loki/loki-stack --namespace=loki --set grafana.enabled=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will install Loki, Grafana and Promtail into your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Retrieve the password to log into Grafana:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get secret loki-grafana --namespace=loki -o jsonpath="{.data.admin-password}"  | base64 --decode ;  echo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The generated admin password will look something like this: &lt;code&gt;jvjqUy2nhsHplVwrX8V05UgSDYEDz6pSiBZOCPHf&lt;/code&gt;&lt;/p&gt;
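&lt;p&gt;If piping through &lt;code&gt;base64 --decode&lt;/code&gt; isn't convenient on your platform (e.g. on Windows), the same decoding can be done with Node's built-in &lt;code&gt;Buffer&lt;/code&gt; -- the encoded value below is a placeholder, not a real secret:&lt;/p&gt;

```javascript
// Decode a base64-encoded Kubernetes secret value with Node's Buffer.
// The encoded string is a placeholder standing in for the output of:
//   kubectl get secret loki-grafana -o jsonpath="{.data.admin-password}"
const encoded = 'c3VwZXItc2VjcmV0'; // base64 for "super-secret" (placeholder)
const decoded = Buffer.from(encoded, 'base64').toString('utf8');
console.log(decoded); // → super-secret
```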

&lt;p&gt;Finally, execute the command below to access the Grafana UI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl port-forward --namespace loki service/loki-grafana 3000:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now open your browser, and go to &lt;a href="http://localhost:3000/" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Log in with the user name "admin" and the password you retrieved previously.&lt;/p&gt;

&lt;h3&gt;
  
  
  Loki in Grafana
&lt;/h3&gt;

&lt;p&gt;The Grafana instance we installed comes with the Loki data source preconfigured, so we can start exploring our Kubernetes logs right away:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Fgrafana-loki.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Fgrafana-loki.png"&gt;&lt;/a&gt;Next, click on the Explore tab on the left side. Select Loki from the data source dropdown.&lt;/p&gt;

&lt;p&gt;Click on the Log labels dropdown &amp;gt; container &amp;gt; kube-apiserver.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Fgrafana-loki-query.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Fgrafana-loki-query.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you should get data in the Logs window!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Fgrafana-loki-logs-window.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Fgrafana-loki-logs-window.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down and you will find the details on the kube-apiserver logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Fgrafana-loki-container-logs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Fgrafana-loki-container-logs.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  LogQL
&lt;/h3&gt;

&lt;p&gt;LogQL provides the functionality to filter logs through operators. The following filter operators are supported:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;=&lt;/code&gt;: exactly equal.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;!=&lt;/code&gt;: not equal.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;=~&lt;/code&gt;: regex matches.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;!~&lt;/code&gt;: regex does not match.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's try it with another query. We start by selecting all logs of the &lt;code&gt;kube-apiserver&lt;/code&gt; container. In addition, we add filter operators to limit the results to logs that include the word &lt;code&gt;error&lt;/code&gt; but not &lt;code&gt;timeout&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;{container="kube-apiserver"}  |=  "error"  !=  "timeout"&lt;/p&gt;

&lt;p&gt;This was a simple example of setting up and working with Loki and Grafana. If you want to learn more, head over to the &lt;a href="https://grafana.com/docs/loki/latest/" rel="noopener noreferrer"&gt;Loki documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;Companies need a simple and cost-effective solution to collect, store, and analyze log files from apps and services in distributed systems. Loki can help you &lt;a href="https://grafana.com/blog/2019/11/19/how-loki-helped-paytm-insider-save-75-of-logging-and-monitoring-costs/" rel="noopener noreferrer"&gt;dramatically reduce logging and monitoring costs&lt;/a&gt; in your production environment. In combination with Promtail and Grafana it provides all the features needed for a full logging stack which can help you find and resolve problems faster and prevent malfunctions from occurring in the future.&lt;/p&gt;

&lt;p&gt;Would you like to learn how Loki can help you gain more insight into your software system, cut costs and &lt;a href="https://codersociety.com/blog/articles/devops-success-in-organization" rel="noopener noreferrer"&gt;strengthen DevOps in your company&lt;/a&gt;? Use our &lt;a href="https://codersociety.com/contact" rel="noopener noreferrer"&gt;contact form&lt;/a&gt;, and we will get back to you shortly.&lt;/p&gt;

&lt;p&gt;For our latest insights and updates, &lt;a href="https://www.linkedin.com/company/codersociety" rel="noopener noreferrer"&gt;follow us on LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>How To: Contract Testing for Node.js Microservices with Pact</title>
      <dc:creator>Kentaro Wakayama</dc:creator>
      <pubDate>Thu, 05 Nov 2020 11:59:10 +0000</pubDate>
      <link>https://forem.com/coder_society/how-to-contract-testing-for-node-js-microservices-with-pact-3ofl</link>
      <guid>https://forem.com/coder_society/how-to-contract-testing-for-node-js-microservices-with-pact-3ofl</guid>
      <description>&lt;p&gt;&lt;em&gt;In this article, you'll learn more about contract testing and how to use Pact to verify and ensure your Node.js microservices' API compatibility.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fma7nyoyheas6n1pieygc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fma7nyoyheas6n1pieygc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This article was originally published at &lt;a href="https://codersociety.com/blog/articles/contract-testing-pact" rel="noopener noreferrer"&gt;Coder Society&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Ensuring API compatibility in distributed systems
&lt;/h2&gt;

&lt;p&gt;The use of microservices is growing in popularity for good reasons. &lt;/p&gt;

&lt;p&gt;They allow software teams to develop, deploy, and scale software independently to deliver business value faster. &lt;/p&gt;

&lt;p&gt;Large software projects are broken down into smaller modules, which are easier to understand and maintain. &lt;/p&gt;

&lt;p&gt;While the internal functionality of each microservice becomes simpler, the complexity of a microservice architecture moves to the communication layer: services frequently need to communicate with, and integrate, one another.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Famazon-netflix-services.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Famazon-netflix-services.png"&gt;&lt;/a&gt;Figure 1: Distributed systems at Amazon and Netflix&lt;/p&gt;

&lt;p&gt;Traditional &lt;a href="https://martinfowler.com/articles/microservice-testing/#testing-integration-introduction" rel="noopener noreferrer"&gt;integration testing&lt;/a&gt; has proven to be a suitable tool to verify the compatibility of components in a distributed system. However, as the number of services increases, maintaining a fully integrated test environment can become complex, slow, and difficult to coordinate. The increased use of resources can also become a problem, for example when starting up a full system locally or during continuous integration (CI). &lt;/p&gt;

&lt;p&gt;Contract testing aims to address these challenges -- let's find out how.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is contract testing?
&lt;/h2&gt;

&lt;p&gt;Contract testing is a technique for checking and ensuring the interoperability of &lt;a href="https://codersociety.com/blog/articles/nodejs-application-monitoring-with-prometheus-and-grafana" rel="noopener noreferrer"&gt;software applications&lt;/a&gt; in isolation and enables teams to deploy their microservices independently of one another. &lt;/p&gt;

&lt;p&gt;Contracts are used to define the interactions between API consumers and providers. The two participants must meet the requirements set out in these contracts, such as endpoint definitions and request and response structures.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Fcontract-testing.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Fcontract-testing.png"&gt;&lt;/a&gt;Figure 2: A contract that defines a HTTP GET interaction&lt;/p&gt;

&lt;h2&gt;
  
  
  What is consumer-driven contract testing?
&lt;/h2&gt;

&lt;p&gt;Consumer-driven contract testing allows developers to start implementing the consumer (API client) even though the provider (API) isn't yet available. For this, the consumer writes the contract for the API provider using &lt;a href="https://martinfowler.com/bliki/TestDouble.html" rel="noopener noreferrer"&gt;test doubles&lt;/a&gt; (also known as API mocks or stubs). Thanks to these test doubles, teams can decouple the implementation and testing of consumer and provider applications so that they're not dependent on each other. Once the provider has verified its structure against the contract requirements, new consumer versions can be deployed with confidence knowing that the systems are compatible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Fconsumer-driven-contract-testing.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Fconsumer-driven-contract-testing.png"&gt;&lt;/a&gt;Figure 3: Consumer-driven contract testing&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Pact?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://pact.io/" rel="noopener noreferrer"&gt;Pact&lt;/a&gt; is a code-first consumer-driven contract testing tool. Consumer contracts, also called Pacts, are defined in code and are generated after successfully running the consumer tests. The Pact files use JSON format and are used to spin up a Pact Mock Service to test and verify the compatibility of the provider API.&lt;/p&gt;

&lt;p&gt;The tool also offers the so-called Pact Mock Provider, with which developers can implement and test the consumer using a mocked API. This, in turn, accelerates development time, as teams don't have to wait for the provider to be available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Fpact-overview.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.codersociety.com%2Fuploads%2Fpact-overview.png"&gt;&lt;/a&gt;Figure 4: Pact overview&lt;/p&gt;

&lt;p&gt;Pact was initially designed for request/response interactions and supports both REST and GraphQL APIs, as well as many different &lt;a href="https://docs.pact.io/implementation_guides" rel="noopener noreferrer"&gt;programming languages&lt;/a&gt;. For Providers written in languages that don't have native Pact support, you can still use the generic &lt;a href="https://github.com/pact-foundation/pact-provider-verifier" rel="noopener noreferrer"&gt;Pact Provider Verification tool&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try out Pact
&lt;/h2&gt;

&lt;p&gt;Why don't we test things ourselves and see how consumer-driven contract testing with Pact actually works? For this, we use &lt;a href="https://github.com/pact-foundation/pact-js" rel="noopener noreferrer"&gt;Pact JS&lt;/a&gt;, the Pact library for JavaScript, and Node.js. We've already created a &lt;a href="https://github.com/coder-society/contract-testing-nodejs-pact" rel="noopener noreferrer"&gt;sample repository&lt;/a&gt; containing an order API, which returns a list of orders. Let's start by cloning the project and installing the dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone https://github.com/coder-society/contract-testing-nodejs-pact.git

$ cd contract-testing-nodejs-pact

$ npm install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Writing a Pact consumer test
&lt;/h2&gt;

&lt;p&gt;We created a file called &lt;code&gt;consumer.spec.js&lt;/code&gt; to define the expected interactions between our order API client (consumer) and the order API itself (provider). We expect the following interactions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  HTTP GET request against path &lt;code&gt;/orders&lt;/code&gt; which returns a list of orders.&lt;/li&gt;
&lt;li&gt;  The order response matches a defined structure. For this we use &lt;a href="https://github.com/pact-foundation/pact-js#matching" rel="noopener noreferrer"&gt;Pact's Matchers&lt;/a&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const assert = require('assert')
const { Pact, Matchers } = require('@pact-foundation/pact')
const { fetchOrders } = require('./consumer')
const { eachLike } = Matchers

describe('Pact with Order API', () =&amp;gt; {
  const provider = new Pact({
    port: 8080,
    consumer: 'OrderClient',
    provider: 'OrderApi',
  })

  before(() =&amp;gt; provider.setup())

  after(() =&amp;gt; provider.finalize())

  describe('when a call to the API is made', () =&amp;gt; {
    before(async () =&amp;gt; {
      return provider.addInteraction({
        state: 'there are orders',
        uponReceiving: 'a request for orders',
        withRequest: {
          path: '/orders',
          method: 'GET',
        },
        willRespondWith: {
          body: eachLike({
            id: 1,
            items: eachLike({
              name: 'burger',
              quantity: 2,
              value: 100,
            }),
          }),
          status: 200,
        },
      })
    })

    it('will receive the list of current orders', async () =&amp;gt; {
      const result = await fetchOrders()
      assert.ok(result.length)
    })
  })
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
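
&lt;p&gt;The test above imports &lt;code&gt;fetchOrders&lt;/code&gt; from &lt;code&gt;./consumer&lt;/code&gt;. For reference, here is a minimal sketch of what such a client could look like, using only Node's built-in &lt;code&gt;http&lt;/code&gt; module (the actual implementation in the sample repository may differ):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const http = require('http')

// Fetch the list of orders from the order API. During the consumer test,
// this hits the Pact mock service listening on port 8080.
function fetchOrders() {
  return new Promise((resolve, reject) =&amp;gt; {
    http
      .get('http://127.0.0.1:8080/orders', (res) =&amp;gt; {
        let data = ''
        res.on('data', (chunk) =&amp;gt; (data += chunk))
        res.on('end', () =&amp;gt; resolve(JSON.parse(data)))
      })
      .on('error', reject)
  })
}

module.exports = { fetchOrders }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;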



&lt;p&gt;Run the Pact consumer tests using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm run test:consumer

&amp;gt; contract-testing-nodejs-pact@1.0.0 test:consumer /Users/kentarowakayama/CODE/contract-testing-nodejs-pact
&amp;gt; mocha consumer.spec.js

[2020-11-03T17:22:44.144Z]  INFO: pact-node@10.11.0/7575 on coder.local:
    Creating Pact Server with options:
    {"consumer":"OrderClient","cors":false,"dir":"/Users/kentarowakayama/CODE/contract-testing-nodejs-pact/pacts","host":"127.0.0.1","log":"/Users/kentarowakayama/CODE/contract-testing-nodejs-pact/logs/pact.log","pactFileWriteMode":"overwrite","port":8080,"provider":"OrderApi","spec":2,"ssl":false}

  Pact with Order API
[2020-11-03T17:22:45.204Z]  INFO: pact@9.13.0/7575 on coder.local:
    Setting up Pact with Consumer "OrderClient" and Provider "OrderApi"
        using mock service on Port: "8080"
    when a call to the API is made
[{"id":1,"items":[{"name":"burger","quantity":2,"value":100}]}]
      ✓ will receive the list of current orders
[2020-11-03T17:22:45.231Z]  INFO: pact@9.13.0/7575 on coder.local: Pact File Written
[2020-11-03T17:22:45.231Z]  INFO: pact-node@10.11.0/7575 on coder.local: Removing Pact process with PID: 7576
[2020-11-03T17:22:45.234Z]  INFO: pact-node@10.11.0/7575 on coder.local:
    Deleting Pact Server with options:
    {"consumer":"OrderClient","cors":false,"dir":"/Users/kentarowakayama/CODE/contract-testing-nodejs-pact/pacts","host":"127.0.0.1","log":"/Users/kentarowakayama/CODE/contract-testing-nodejs-pact/logs/pact.log","pactFileWriteMode":"overwrite","port":8080,"provider":"OrderApi","spec":2,"ssl":false}

  1 passing (1s)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The consumer tests generate a Pact contract file named "orderclient-orderapi.json" in the "pacts" folder, which looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "consumer": {
    "name": "OrderClient"
  },
  "provider": {
    "name": "OrderApi"
  },
  "interactions": [
    {
      "description": "a request for orders",
      "providerState": "there are orders",
      "request": {
        "method": "GET",
        "path": "/orders"
      },
      "response": {
        "status": 200,
        "headers": {
        },
        "body": [
          {
            "id": 1,
            "items": [
              {
                "name": "burger",
                "quantity": 2,
                "value": 100
              }
            ]
          }
        ],
        "matchingRules": {
          "$.body": {
            "min": 1
          },
          "$.body[*].*": {
            "match": "type"
          },
          "$.body[*].items": {
            "min": 1
          },
          "$.body[*].items[*].*": {
            "match": "type"
          }
        }
      }
    }
  ],
  "metadata": {
    "pactSpecification": {
      "version": "2.0.0"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verifying the consumer pact against the API provider
&lt;/h2&gt;

&lt;p&gt;We can now use the generated Pact contract file to verify our order API. To do so, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm run test:provider

&amp;gt; contract-testing-nodejs-pact@1.0.0 test:provider /Users/kentarowakayama/CODE/contract-testing-nodejs-pact
&amp;gt; node verify-provider.js

Server is running on http://localhost:8080
[2020-11-03T17:21:15.038Z]  INFO: pact@9.13.0/7077 on coder.local: Verifying provider
[2020-11-03T17:21:15.050Z]  INFO: pact-node@10.11.0/7077 on coder.local: Verifying Pacts.
[2020-11-03T17:21:15.054Z]  INFO: pact-node@10.11.0/7077 on coder.local: Verifying Pact Files
[2020-11-03T17:21:16.343Z]  WARN: pact@9.13.0/7077 on coder.local: No state handler found for "there are orders", ignoring
[2020-11-03T17:21:16.423Z]  INFO: pact-node@10.11.0/7077 on coder.local: Pact Verification succeeded.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code to verify the provider can be found in &lt;a href="https://github.com/coder-society/contract-testing-nodejs-pact/blob/master/verify-provider.js" rel="noopener noreferrer"&gt;verify-provider.js&lt;/a&gt; and looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const path = require('path')
const { Verifier } = require('@pact-foundation/pact')
const { startServer } = require('./provider')

startServer(8080, async (server) =&amp;gt; {
  console.log('Server is running on http://localhost:8080')

  try {
    await new Verifier({
      providerBaseUrl: 'http://localhost:8080',
      pactUrls: [path.resolve(__dirname, './pacts/orderclient-orderapi.json')],
    }).verifyProvider()
  } catch (error) {
    console.error('Error: ' + error.message)
    process.exit(1)
  }

  server.close()
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This starts the API server and runs the Pact Verifier. After successful verification, we know that the order API and the client are compatible and can be deployed with confidence.&lt;/p&gt;
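
&lt;p&gt;One detail worth noting in the verification output above is the warning &lt;code&gt;No state handler found for "there are orders"&lt;/code&gt;. Our sample provider always has orders in memory, so the warning is harmless here. In a real service, you would pass a &lt;code&gt;stateHandlers&lt;/code&gt; map to the &lt;code&gt;Verifier&lt;/code&gt; so each provider state defined by the consumer can seed the data it needs. A sketch (the &lt;code&gt;orderRepository&lt;/code&gt; helper is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;await new Verifier({
  providerBaseUrl: 'http://localhost:8080',
  pactUrls: [path.resolve(__dirname, './pacts/orderclient-orderapi.json')],
  stateHandlers: {
    // Seed the data this provider state expects before verification runs.
    'there are orders': async () =&amp;gt; {
      await orderRepository.insert({
        id: 1,
        items: [{ name: 'burger', quantity: 2, value: 100 }],
      })
    },
  },
}).verifyProvider()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;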

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;By now you should have a good understanding of contract testing and how consumer-driven contract testing works. You also learned about Pact and how to use it to ensure the compatibility of your Node.js microservices.&lt;/p&gt;

&lt;p&gt;To avoid exchanging the Pact JSON files manually, you can use &lt;a href="https://docs.pact.io/pact_broker" rel="noopener noreferrer"&gt;Pact Broker&lt;/a&gt; to share contracts and verification results. This way, Pact can be integrated into your CI/CD pipeline -- we will talk more about this in a future blog post.&lt;/p&gt;

&lt;p&gt;Visit the &lt;a href="https://docs.pact.io/" rel="noopener noreferrer"&gt;Pact documentation&lt;/a&gt; to learn more about Pact and consumer-driven contract testing for your microservices.&lt;/p&gt;

&lt;p&gt;For more articles like this, visit our &lt;a href="https://codersociety.com/blog" rel="noopener noreferrer"&gt;Coder Society Blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For our latest insights and updates, you can &lt;a href="https://www.linkedin.com/company/codersociety" rel="noopener noreferrer"&gt;follow us on LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>node</category>
    </item>
    <item>
      <title>13 Tools To Become Cloud Native</title>
      <dc:creator>Kentaro Wakayama</dc:creator>
      <pubDate>Fri, 30 Oct 2020 09:50:38 +0000</pubDate>
      <link>https://forem.com/coder_society/13-tools-to-become-cloud-native-4lbh</link>
      <guid>https://forem.com/coder_society/13-tools-to-become-cloud-native-4lbh</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I2ejykq1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zlokaaga02d46ufbtz7p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I2ejykq1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zlokaaga02d46ufbtz7p.png" alt="My Cloud Native Tools"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://codersociety.com/blog/articles/cloud-native-tools"&gt;The complete version of this article is available at Coder Society Blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is my list of cloud native tools. Companies that leverage the full suite of tools can often deliver faster, with less friction and lower development and maintenance costs.&lt;/p&gt;

&lt;p&gt;Are we missing anything? Let me know!&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Microservices
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/architecture/guide/architecture-styles/microservices"&gt;Microservices&lt;/a&gt; scope product functionality into units that can be individually deployed. For example, in traditional pre-cloud-native deployments, it was common to have a single website service that managed APIs and customer interactions. With microservices, you would decompose this website into multiple services, such as a checkout service and a user service. Then, you could develop, deploy, and scale these services individually.\&lt;br&gt;
Additionally, microservices are often stateless, and leveraging &lt;a href="https://12factor.net/"&gt;twelve-factor applications&lt;/a&gt; allows companies to take advantage of the flexibility that cloud native tooling offers.&lt;/p&gt;

&lt;p&gt;Recommended Technology: &lt;a href="https://nodejs.org/"&gt;Node.js&lt;/a&gt;\&lt;br&gt;
Alternative Technology: &lt;a href="https://kotlinlang.org/"&gt;Kotlin&lt;/a&gt;, &lt;a href="https://golang.org/"&gt;Golang&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Continuous Integration / Continuous Deployment
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.redhat.com/en/topics/devops/what-is-ci-cd"&gt;Continuous Integration / Continuous Deployment (CI/CD)&lt;/a&gt; is an infrastructure component that supports automatic test execution (and optionally deployments) in response to version control events such as pull requests and merges. CI/CD enables companies to implement quality gates such as unit tests, static analysis, or security analysis. Ultimately, CI/CD is a foundational tool in the cloud native ecosystem that can lead to substantial engineering efficiencies and reduced error counts.&lt;/p&gt;

&lt;p&gt;Recommended Technology: &lt;a href="https://docs.gitlab.com/ee/ci/"&gt;Gitlab CI/CD&lt;/a&gt;\&lt;br&gt;
Alternative Technology: &lt;a href="https://github.com/features/actions"&gt;Github Actions&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Containers
&lt;/h3&gt;

&lt;p&gt;Containers are at the heart of cloud native ecosystems and enable &lt;a href="https://medium.com/dm03514-tech-blog/devops-containers-velocity-through-reduced-coordination-532f0ac000e5"&gt;unmatched velocity and quality gains&lt;/a&gt; by simplifying developer operations. Using containers with tools like &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;, teams can specify their system dependencies while providing a uniform and generic execution layer. This layer enables infrastructure teams to operate a single infrastructure, such as a container orchestrator like Kubernetes. Engineering teams can store container images in a container registry, which, in most cases, also provides vulnerability analysis and fine-grained access control. Popular services for this include &lt;a href="https://hub.docker.com/"&gt;Docker Hub&lt;/a&gt;, &lt;a href="https://cloud.google.com/container-registry"&gt;Google Container Registry&lt;/a&gt;, and &lt;a href="https://quay.io/"&gt;Quay&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Recommended Technology: &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;\&lt;br&gt;
Alternative Technology: &lt;a href="https://podman.io/"&gt;Podman&lt;/a&gt;, &lt;a href="https://linuxcontainers.org/lxd/introduction/"&gt;LXD&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Container Orchestration
&lt;/h3&gt;

&lt;p&gt;Container orchestrators are tools for launching and managing large numbers of containers; they remove the need for language-specific or team-specific deployment strategies. Users specify a container image or group of images plus some configuration, and the orchestrator translates these specifications into running workloads.&lt;br&gt;
Container orchestrators enable infrastructure teams to maintain a single infrastructure component, which can execute any container that adheres to the &lt;a href="https://github.com/opencontainers/runtime-spec/blob/master/spec.md"&gt;OCI specification&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Recommended Technology: &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;\&lt;br&gt;
Alternative Technology: &lt;a href="https://cloud.google.com/run"&gt;Google Cloud Run&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Infrastructure as Code
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.hashicorp.com/resources/what-is-infrastructure-as-code"&gt;Infrastructure as code&lt;/a&gt; is a strategy that puts cloud configuration under version control. Companies often manage their cloud resources manually by configuring them through an admin panel. Manual configuration, however, makes it hard to keep track of changes. Infrastructure as code addresses this by defining cloud resources as code and putting them under version control. Changes are made in the infrastructure config in code and promoted through the company's deployment process, which can include peer reviews, CI, and CD. Version control provides an audit log that shows who has changed which resources and when.&lt;/p&gt;

&lt;p&gt;Recommended Technology: &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt;\&lt;br&gt;
Alternative Technology: &lt;a href="https://www.pulumi.com/"&gt;Pulumi&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Secrets
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.hashicorp.com/products/vault/secrets-management"&gt;Secret management&lt;/a&gt; is essential for cloud native solutions but is often neglected at smaller scales. Secrets are anything private, such as passwords, private keys, and API credentials. At the very least, secrets should be encrypted and stored in configuration. Mature solutions enable temporary database credentials or rotating credentials to be issued, making secret management even more secure. Finding a fitting solution for secret management is vital for cloud native applications since containerized services scale horizontally and may be scheduled on many different machines. Ultimately, organizations that ignore secret management could increase the surface area for credential leakage.&lt;/p&gt;

&lt;p&gt;Recommended Technology: &lt;a href="https://www.vaultproject.io/"&gt;Vault&lt;/a&gt;\&lt;br&gt;
Alternative Technology: &lt;a href="https://github.com/bitnami-labs/sealed-secrets"&gt;Sealed Secrets&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Certificates
&lt;/h3&gt;

&lt;p&gt;Secure communication over TLS is not only best practice but a must-have. This is especially important in container-based solutions because many different services may run on the same physical machine. Without encryption, an attacker who gains access to the host's network could read all traffic between these services. At the very least, it becomes untenable to manually update certificates for cloud native deployments, which is why some sort of automated solution is essential.&lt;/p&gt;

&lt;p&gt;Recommended Technology: &lt;a href="https://github.com/jetstack/cert-manager"&gt;cert-manager&lt;/a&gt;\&lt;br&gt;
Alternative Technology: &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs"&gt;Google managed certificates&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  8. API Gateway
&lt;/h3&gt;

&lt;p&gt;API gateways are reverse proxies that offer features beyond traditional reverse proxies such as &lt;a href="https://httpd.apache.org/"&gt;Apache&lt;/a&gt; and &lt;a href="https://www.nginx.com/"&gt;NGINX&lt;/a&gt;.&lt;br&gt;
API gateways support:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Protocols like gRPC, HTTP/2, and Websockets&lt;/li&gt;
&lt;li&gt;  Dynamic configuration&lt;/li&gt;
&lt;li&gt;  Mutual TLS&lt;/li&gt;
&lt;li&gt;  Routing&lt;/li&gt;
&lt;li&gt;  Resiliency primitives, such as rate limiting and circuit breaking&lt;/li&gt;
&lt;li&gt;  Visibility in the form of metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Recommended Technology: &lt;a href="https://github.com/datawire/ambassador"&gt;Ambassador&lt;/a&gt;\&lt;br&gt;
Alternative Technology: &lt;a href="https://konghq.com/"&gt;Kong&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  9. Logging
&lt;/h3&gt;

&lt;p&gt;Logging is a &lt;a href="https://www.oreilly.com/library/view/distributed-systems-observability/9781492033431/ch04.html"&gt;foundational pillar of observability&lt;/a&gt;. It is often the most familiar and accessible signal for teams, making it a great starting place to introduce observability. Logs are essential to understanding what is happening in systems. Cloud native tools emphasize &lt;a href="https://grafana.com/blog/2016/01/05/logs-and-metrics-and-graphs-oh-my"&gt;time series&lt;/a&gt; for metrics, since they are cheaper to store than logs. However, logs are an invaluable tool for debugging, and some systems are only observable through them, which makes logging a requirement.&lt;/p&gt;

&lt;p&gt;Recommended Technology: &lt;a href="https://www.cncf.io/blog/2020/07/27/logging-in-kubernetes-efk-vs-plg-stack/"&gt;EFK&lt;/a&gt;\&lt;br&gt;
Alternative Technology: &lt;a href="https://github.com/grafana/loki"&gt;Loki&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  10. Monitoring
&lt;/h3&gt;

&lt;p&gt;Monitoring systems store important events as a time series. Monitoring data is aggregated, which means that you don't store every individual event. This aggregation makes monitoring cost-effective for cloud native systems and essential for understanding their state by answering questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  How many operations are occurring?&lt;/li&gt;
&lt;li&gt;  What is the result of the operations (success, failure, or status codes)?&lt;/li&gt;
&lt;li&gt;  How long do operations take? &lt;/li&gt;
&lt;li&gt;  What are the counts of important resources such as queue depths or thread pools? &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can assign different dimensions to monitoring metrics to drill into performance on an individual machine, operating system, version, etc.&lt;/p&gt;

&lt;p&gt;Recommended Technology: &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt;/&lt;a href="https://prometheus.io/docs/visualization/grafana/"&gt;Grafana&lt;/a&gt;\&lt;br&gt;
Alternative Technology: &lt;a href="https://www.datadoghq.com/"&gt;Datadog&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  11. Alerting
&lt;/h3&gt;

&lt;p&gt;Alerting makes logs and metrics actionable, notifies operators of system issues, and pairs well with time series metrics. For example, alerts can notify teams when there is an increase in HTTP 500 status codes or when request duration increases. Alerting is essential for cloud native systems. Without alerts, you don't get notified of incidents, which in the worst case means that companies don't know that there have been problems.&lt;/p&gt;

&lt;p&gt;Recommended Technology: &lt;a href="https://prometheus.io/docs/alerting/latest/alertmanager/"&gt;Prometheus Alertmanager&lt;/a&gt;\&lt;br&gt;
Alternative Technology: &lt;a href="https://grafana.com/docs/grafana/latest/alerting/"&gt;Grafana Alerts&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  12. Tracing
&lt;/h3&gt;

&lt;p&gt;Cloud native technologies reduce the overhead in launching and scaling services. As a result, teams often launch more services than they did pre-cloud. &lt;a href="https://opentracing.io/docs/overview/what-is-tracing/"&gt;Tracing enables teams&lt;/a&gt; to monitor communication &lt;em&gt;between&lt;/em&gt; services and makes it easy to visualize an entire end-user transaction and each stage in that transaction. When performance issues arise, teams can see which services are producing errors and how long each phase of the transaction takes. Tracing is a next-level observability and debugging tool that can significantly reduce downtime by allowing teams to debug issues faster.&lt;/p&gt;

&lt;p&gt;Recommended Technology: &lt;a href="https://www.jaegertracing.io/"&gt;Jaeger&lt;/a&gt;\&lt;br&gt;
Alternative Technology: &lt;a href="https://zipkin.io/"&gt;Zipkin&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  13. Service Mesh
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.redhat.com/en/topics/microservices/what-is-a-service-mesh"&gt;Service meshes&lt;/a&gt; are the Swiss army knife of cloud networking. They can provide dynamic routing, load balancing, service discovery, networking policies, and resiliency primitives such as &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/circuit-breaker"&gt;circuit breakers&lt;/a&gt;, &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/retry"&gt;retries&lt;/a&gt;, and deadlines. Service meshes are an evolution in load balancing for cloud native architectures.&lt;/p&gt;

&lt;p&gt;Recommended Technology: &lt;a href="https://istio.io/"&gt;Istio&lt;/a&gt;\&lt;br&gt;
Alternative Technology: &lt;a href="https://www.consul.io/"&gt;Consul&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Stay Competitive With Cloud Native
&lt;/h2&gt;

&lt;p&gt;Cloud native tools help companies stay competitive by increasing both quality and availability while decreasing time to market. &lt;/p&gt;

&lt;p&gt;Companies that choose the right tools can maintain competitive advantages through increased delivery speed and agility. &lt;/p&gt;

&lt;p&gt;Adopting cloud native technologies may seem daunting, but just remember that beginning with a single technology can already provide considerable benefits.&lt;/p&gt;

&lt;h2&gt;
  
  
  About me, Kentaro Wakayama
&lt;/h2&gt;

&lt;p&gt;Kentaro is CEO and Solutions Architect at Coder Society. With his in-depth knowledge of software development and cloud technologies, Kentaro often takes on the lead engineer's role. His analytical, organized, and people-oriented nature makes him an apt advisor on software projects and flexible staffing.&lt;/p&gt;

&lt;h2&gt;
  
  
  About Coder Society
&lt;/h2&gt;

&lt;p&gt;We are a network of 150+ tech freelancers developing custom software solutions for leading companies on-site and remotely.&lt;/p&gt;

&lt;p&gt;If you need help getting started with your cloud native journey, &lt;a href="https://codersociety.com/"&gt;check out what we can do&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
