<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: appfleet</title>
    <description>The latest articles on Forem by appfleet (@appfleet).</description>
    <link>https://forem.com/appfleet</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2111%2Fc1289aaf-6761-45f7-bcfb-5b6b58d8a0eb.png</url>
      <title>Forem: appfleet</title>
      <link>https://forem.com/appfleet</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/appfleet"/>
    <language>en</language>
    <item>
      <title>appfleet, a new edge compute platform goes live</title>
      <dc:creator>Dmitriy A.</dc:creator>
      <pubDate>Mon, 01 Mar 2021 10:39:10 +0000</pubDate>
      <link>https://forem.com/appfleet/appfleet-a-new-edge-compute-platform-goes-live-39fl</link>
      <guid>https://forem.com/appfleet/appfleet-a-new-edge-compute-platform-goes-live-39fl</guid>
      <description>&lt;p&gt;First of all what is appfleet? &lt;a href="https://appfleet.com/"&gt;appfleet is an edge compute platform&lt;/a&gt; that allows people to deploy their web applications globally. Instead of running your code in a single centralized location you can now run it everywhere, at the same time.&lt;/p&gt;

&lt;p&gt;In simpler terms, appfleet is a next-gen CDN: instead of being limited to serving only static content closer to your users, you can now do the same thing for your whole codebase. Run the whole thing where just your cache used to be.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/7n617ZF-oT4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This results in drastic performance improvements, lower latency, better uptime and an enormous number of new use-cases and possibilities.&lt;/p&gt;

&lt;p&gt;In part this is because we do not limit our users to HTTP services. You have complete freedom to run any kind of service over any protocol you want, on any port. &lt;/p&gt;

&lt;p&gt;Do you want to build your own global nameservers? Deploy a container running a DNS server over UDP port 53. It takes only a few clicks. A globally distributed database? Sure thing. Or something simple like image optimization on the edge? No problem. How about a monstrous container running a web service, redis, ssh, DNS, MySQL and an admin service all on different ports, all at the same time? Who are we to stop you, go ahead!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q58ss_Yu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hdm665ssv1g3v3s1wjd1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q58ss_Yu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hdm665ssv1g3v3s1wjd1.png" alt="The appfleet dashboard showing your deployed applications"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We launched our closed beta quite a while ago. And since then we have worked with many developers and business owners to improve our platform and support the many exciting use-cases that people kept coming up with.&lt;/p&gt;

&lt;p&gt;And while the system was technically ready to accept real customers months ago, we decided it was best to stay in beta and ensure the stability of the platform.&lt;/p&gt;

&lt;p&gt;Since then we have polished our user-experience, redesigned many parts of our UI multiple times and made sure our backend is ready for whatever people throw at it.&lt;/p&gt;

&lt;p&gt;Do you know the first thing people run when given a container and asked to test it? A fork bomb. Of course we were ready for that, and while the specific instance became unavailable, it had no impact on our system or other clients - all thanks to the multiple layers of isolation we have built. And nowadays the instance won't even go down, so fork-bomb away.&lt;/p&gt;

&lt;p&gt;Each container runs in a virtualized box of its own, with its own resources, filesystem and security in place. Security and stability were top priorities for us and everything we built had that in mind.&lt;/p&gt;

&lt;p&gt;Whenever we were working on a new feature we kept asking ourselves "What if?". "What if this system goes down?", "What if the user does something unexpected?"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yvLPUpc9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3e41sfbu13p9jyirb8rx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yvLPUpc9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3e41sfbu13p9jyirb8rx.png" alt="appfleet cluster creation process and region selection"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;appfleet is based on multiple services and modules interacting with each other. We made sure that even if something breaks - our whole API, our DB, or anything else really - the already running applications would not feel a thing. Everything is built to run in standalone mode and, if the worst happens, to wait until things get better.&lt;/p&gt;

&lt;p&gt;We also drew from our experience building and maintaining &lt;a href="https://www.jsdelivr.com/"&gt;jsDelivr&lt;/a&gt;, a free CDN for open source projects that currently serves 100 billion requests and more than 3 petabytes of traffic every month! It is used by millions of websites all over the world, and all of them trust us to ensure it never goes down. We integrated multiple levels of failover with multiple checks at every step to ensure the system can automatically handle different kinds of issues and fix itself.&lt;/p&gt;

&lt;p&gt;The appfleet platform was built to be as simple as possible and to make edge compute accessible to everyone, from open source projects, to solo developers, and even big enterprises that need something that works without relying on an army of DevOps engineers. This is one of the reasons we decided to build on top of containers: it allows our users to easily migrate to or from appfleet and even to run legacy applications.&lt;/p&gt;

&lt;p&gt;Today, March 1st 2021, is the beginning of a long and exciting journey to make the web faster and accessible to all!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dashboard.appfleet.com/register"&gt;Register now and get $10 of free credits&lt;/a&gt; to use as you see fit. No need to enter your credit card until you are ready for production workloads.&lt;/p&gt;

&lt;p&gt;And if you are a non-profit or an open source project, let us know to get sponsored with free services. We even offer &lt;a href="https://github.com/jsdelivr/jsdelivr/issues/18154"&gt;free design services&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Join us and make sure you send us your feedback and ideas! Even better, email the founder directly at &lt;a href="mailto:d@appfleet.com"&gt;d@appfleet.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>cloud</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Demystifying Open-Source Orchestration of Unikernels With Unik</title>
      <dc:creator>Dmitriy A.</dc:creator>
      <pubDate>Thu, 24 Sep 2020 17:10:23 +0000</pubDate>
      <link>https://forem.com/appfleet/demystifying-open-source-orchestration-of-unikernels-with-unik-h01</link>
      <guid>https://forem.com/appfleet/demystifying-open-source-orchestration-of-unikernels-with-unik-h01</guid>
      <description>&lt;h1&gt;
  
  
  Abstract
&lt;/h1&gt;

&lt;p&gt;As the cloud-native ecosystem continues to evolve, many alternative solutions are popping up that challenge the status quo of application deployment methodologies. One of these solutions that is quickly gaining traction is &lt;strong&gt;unikernels, which are executable images that can run natively on a hypervisor without the need for a separate operating system&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;For cloud-native platforms to integrate unikernels into their ecosystem, they are required to provide unikernels with the same services they provide for containers. In this post, we are going to introduce &lt;strong&gt;UniK, an open-source orchestration system for unikernels&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;UniK (pronounced you-neek) is a tool for simplifying the compilation and orchestration of unikernels. Similar to the way Docker builds and orchestrates containers, UniK automates the compilation of popular languages (C/C++, Golang, Java, Node.js, Python) into unikernels. UniK deploys unikernels as virtual machines on various cloud platforms. &lt;/p&gt;

&lt;h1&gt;
  
  
  UniK Design
&lt;/h1&gt;

&lt;p&gt;The UniK Daemon consists of three major components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The API server&lt;/li&gt;
&lt;li&gt;Compilers&lt;/li&gt;
&lt;li&gt;Providers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;strong&gt;API Server&lt;/strong&gt; handles requests from the CLI or any HTTP Client, then determines which is the appropriate &lt;strong&gt;provider&lt;/strong&gt; and/or &lt;strong&gt;compiler&lt;/strong&gt; to service the request.&lt;/p&gt;

&lt;p&gt;When the &lt;strong&gt;API Server&lt;/strong&gt; receives a &lt;em&gt;build&lt;/em&gt; request (&lt;code&gt;POST /images/:image_name/create&lt;/code&gt;), it calls the specified &lt;strong&gt;compiler&lt;/strong&gt; to build the raw image, and then passes the raw image to the specified &lt;strong&gt;provider&lt;/strong&gt;, which processes the raw image with the &lt;code&gt;Stage()&lt;/code&gt; method, turning it into an infrastructure-specific bootable image (e.g. an &lt;em&gt;Amazon AMI on AWS&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;Let's go ahead and try spinning up a unikernel ourselves.&lt;/p&gt;

&lt;h1&gt;
  
  
  Deploying a Go HTTP daemon with UniK
&lt;/h1&gt;

&lt;p&gt;In this tutorial, we are going to be:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/solo-io/unik/blob/master/docs/getting_started.md#installing-unik"&gt;Installing UniK&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/solo-io/unik/blob/master/docs/getting_started.md#write-a-go-http-server"&gt;Writing a simple HTTP Daemon in Go&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/solo-io/unik/blob/master/docs/getting_started.md#compile-an-image-and-run-on-virtualbox"&gt;Compiling to a unikernel and launching an instance on Virtualbox&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Install, configure, and launch UniK
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure that each of the following is installed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="http://www.docker.com/"&gt;Docker&lt;/a&gt; installed and running with at least 6GB available space for building images&lt;/li&gt;
&lt;li&gt;&lt;code&gt;jq&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;make&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.virtualbox.org/"&gt;Virtualbox&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Install UniK&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone https://github.com/solo-io/unik.git
$ cd unik
$ make binary
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: &lt;code&gt;make&lt;/code&gt; will take quite a few minutes the first time it runs. The UniK &lt;code&gt;Makefile&lt;/code&gt; is pulling all of the Docker images that bundle UniK's dependencies.&lt;/p&gt;

&lt;p&gt;Then, place the &lt;code&gt;unik&lt;/code&gt; executable in your &lt;code&gt;$PATH&lt;/code&gt; to make running UniK commands easier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mv _build/unik /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Configure a Host-only network on VirtualBox&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1. Open VirtualBox.&lt;br&gt;
2. Open &lt;strong&gt;Preferences &amp;gt; Network &amp;gt; Host-only Networks&lt;/strong&gt;.&lt;br&gt;
3. Click the green &lt;em&gt;Add&lt;/em&gt; button on the right side of the UI.&lt;br&gt;
4. Record the name of the new Host-only adapter (e.g. "vboxnet0"). You will need this in your UniK configuration.&lt;br&gt;
5. Ensure that the VirtualBox DHCP Server is enabled for this Host-only Network.&lt;br&gt;
6. With the Host-only Network selected, click the edit button (screwdriver icon).&lt;br&gt;
7. In the &lt;strong&gt;Adapter&lt;/strong&gt; tab, note the IPv4 address and netmask of the adapter.&lt;br&gt;
8. In the &lt;strong&gt;DHCP Server&lt;/strong&gt; tab, check the &lt;strong&gt;Enable Server&lt;/strong&gt; box.&lt;br&gt;
9. Set &lt;strong&gt;Server Address&lt;/strong&gt; to an IP on the same subnet as the Adapter IP. For example, if the adapter IP is &lt;code&gt;192.168.100.1&lt;/code&gt;, set the DHCP server IP to &lt;code&gt;192.168.100.X&lt;/code&gt;, where X is a number between 2 and 254.&lt;br&gt;
10. Set &lt;strong&gt;Server Mask&lt;/strong&gt; to the netmask you just noted.&lt;br&gt;
11. Set &lt;strong&gt;Upper/Lower Address Bound&lt;/strong&gt; to a range of IPs on the same subnet. We recommend using the range &lt;code&gt;X-254&lt;/code&gt;, where X is one higher than the IP you used for the DHCP server itself. E.g., if your DHCP server is &lt;code&gt;192.168.100.2&lt;/code&gt;, you can set the lower and upper bounds to &lt;code&gt;192.168.100.3&lt;/code&gt; and &lt;code&gt;192.168.100.254&lt;/code&gt;, respectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configure UniK daemon&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;UniK configuration files are stored in &lt;code&gt;$HOME/.unik&lt;/code&gt;. Create this directory, if it is not present:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$mkdir $HOME/.unik

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Using a text editor, create and save the following to &lt;code&gt;$HOME/.unik/daemon-config.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;providers:
  virtualbox:
    - name: my-vbox
      adapter_type: host_only
      adapter_name: NEW_HOST_ONLY_ADAPTER
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;NEW_HOST_ONLY_ADAPTER&lt;/code&gt; with the name of the network adapter you created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Launch UniK and automatically deploy the Virtualbox Instance Listener&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open a new terminal window/tab. This terminal will be where we leave the UniK daemon running.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cd&lt;/code&gt; to the &lt;code&gt;_build&lt;/code&gt; directory created by &lt;code&gt;make&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;unik daemon --debug&lt;/code&gt; (the &lt;code&gt;--debug&lt;/code&gt; flag is optional; use it if you want more verbose output)&lt;/li&gt;
&lt;li&gt;UniK will compile and deploy its own 30 MB unikernel. This unikernel is the &lt;a href="https://github.com/solo-io/unik/blob/master/docs/instance_listener.md"&gt;Unik Instance Listener&lt;/a&gt;. The Instance Listener uses UDP broadcast to detect the IP addresses of, and bootstrap, instances running on VirtualBox.&lt;/li&gt;
&lt;li&gt;After this is finished, UniK is running and ready to accept commands.&lt;/li&gt;
&lt;li&gt;Open a new terminal window and type &lt;code&gt;unik target --host localhost&lt;/code&gt; to set the CLI target to your local machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Write a Go HTTP server
&lt;/h3&gt;

&lt;p&gt;Open a new terminal window, but leave the window with the daemon running. This window will be used for running UniK CLI commands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create the httpd.go file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a file &lt;code&gt;httpd.go&lt;/code&gt; using a text editor. Copy and paste the following code in that file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "my first unikernel!")
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Run the code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Try running this code with &lt;code&gt;go run httpd.go&lt;/code&gt;. Visit &lt;a href="http://localhost:8080/"&gt;http://localhost:8080/&lt;/a&gt; to see that the server is running.&lt;/p&gt;
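&lt;p&gt;If you'd rather sanity-check the handler without keeping a server bound to a port, Go's standard &lt;code&gt;net/http/httptest&lt;/code&gt; package can exercise it in-process. This sketch is an optional check, not part of the UniK workflow; the &lt;code&gt;fetchRoot&lt;/code&gt; helper is our own name, nothing UniK requires:&lt;br&gt;
&lt;/p&gt;

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// handler is the same handler as in httpd.go above.
func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "my first unikernel!")
}

// fetchRoot invokes the handler in-process via httptest,
// without binding to :8080, and returns the response body.
func fetchRoot() string {
	rec := httptest.NewRecorder()
	handler(rec, httptest.NewRequest(http.MethodGet, "/", nil))
	body, err := io.ReadAll(rec.Result().Body)
	if err != nil {
		panic(err)
	}
	return string(body)
}

func main() {
	fmt.Println(fetchRoot()) // my first unikernel!
}
```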

&lt;p&gt;&lt;strong&gt;3. Create a dummy &lt;code&gt;Godeps&lt;/code&gt; file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We need to create a dummy &lt;code&gt;Godeps&lt;/code&gt; file. This is necessary to tell the Go compiler how Go projects and their dependencies are structured. Fortunately, with this example, our project has no dependencies, and we can just fill out a simple &lt;code&gt;Godeps&lt;/code&gt; file without installing &lt;code&gt;godep&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Note: For Go projects with imported dependencies and nested packages, you will need to install godep and run &lt;code&gt;GO15VENDOREXPERIMENT=1 godep save ./...&lt;/code&gt; in your project. See &lt;a href="https://github.com/solo-io/unik/blob/master/docs/compilers/rump.md#golang"&gt;Compiling Go Apps with UniK&lt;/a&gt; for more information.&lt;/p&gt;

&lt;p&gt;To create the dummy &lt;code&gt;Godeps&lt;/code&gt; file, create a folder named Godeps in the same directory as &lt;code&gt;httpd.go&lt;/code&gt;. Inside, create a file named &lt;code&gt;Godeps.json&lt;/code&gt; and paste the following inside:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "ImportPath": "my_httpd",
    "GoVersion": "go1.6",
    "GodepVersion": "v63",
    "Packages": [
        "./.."
    ],
    "Deps": [
        {
            "ImportPath": "github.com/solo-io/unik/docs/examples",
            "Rev": "f8cc0dd435de36377eac060c93481cc9f3ae9688"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;For the purposes of this example, what matters here is &lt;code&gt;my_httpd&lt;/code&gt;: it instructs the Go compiler that the project should be installed from &lt;code&gt;$GOPATH/src/my_httpd&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Great! Now we're ready to compile this code to a unikernel.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Compile an image to run on VirtualBox
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Compile sources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the following command from the directory where your &lt;code&gt;httpd.go&lt;/code&gt; is located:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;unik build --name myImage --path ./ --base rump --language go --provider virtualbox

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This command will instruct UniK to compile the sources found in the working directory (&lt;code&gt;./&lt;/code&gt;) using the &lt;code&gt;rump-go-virtualbox&lt;/code&gt; compiler.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Watch the output&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can watch the output of the &lt;code&gt;build&lt;/code&gt; command in the terminal window running the daemon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Locate the disk image&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When &lt;code&gt;build&lt;/code&gt; finishes, the resulting disk image will reside at &lt;code&gt;$HOME/.unik/virtualbox/images/myImage/boot.vmdk&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Run an instance of the disk image&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run an instance of this image with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;unik run --instanceName myInstance --imageName myImage
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Check the IP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the instance finishes launching, let's check its IP and see if it is running our application. Run &lt;code&gt;unik instances&lt;/code&gt;; the instance's IP address should be listed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. View the instance!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Direct your browser to &lt;code&gt;http://instance-ip:8080&lt;/code&gt; and see that your instance is running! &lt;/p&gt;

&lt;h1&gt;
  
  
  Finishing up
&lt;/h1&gt;

&lt;p&gt;To clean up your image and the instance you created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;unik rmi --force --image myImage
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And we're done. We hope you found this post and tutorial useful - stay tuned for future posts!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Building Images Faster and Better With Multi-Stage Builds</title>
      <dc:creator>Dmitriy A.</dc:creator>
      <pubDate>Thu, 24 Sep 2020 17:09:55 +0000</pubDate>
      <link>https://forem.com/appfleet/building-images-faster-and-better-with-multi-stage-builds-3b1l</link>
      <guid>https://forem.com/appfleet/building-images-faster-and-better-with-multi-stage-builds-3b1l</guid>
      <description>&lt;p&gt;There is no doubt about the fact that Docker makes it very easy to deploy multiple applications on a single box. Be it different versions of the same tool, different applications with different version dependencies - Docker has you covered. But then nothing comes free. This flexibility comes with some problems - like &lt;strong&gt;high disk usage and large images&lt;/strong&gt;. With Docker, you have to be careful about writing your Dockerfile efficiently in order to reduce the image size and also improve the build times. &lt;/p&gt;

&lt;p&gt;Docker provides a set of &lt;a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/" rel="noopener noreferrer"&gt;standard practices&lt;/a&gt; to follow in order to keep your image size small; the guide also covers multi-stage builds in brief.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-stage&lt;/strong&gt; builds are specifically useful for use cases where we build an &lt;em&gt;artifact, binary or executable&lt;/em&gt;. Usually, lots of dependencies are required for building the binary - for example GCC, Maven, build-essentials, etc. - but once you have the executable, you don’t need those dependencies to run it. Multi-stage builds take advantage of this to trim the image size: they let you build the executable in a separate environment and then build the final image with only the executable and the minimal dependencies required to run it.&lt;/p&gt;

&lt;p&gt;For example, here’s a &lt;a href="https://github.com/go-training/helloworld" rel="noopener noreferrer"&gt;simple application&lt;/a&gt; written in &lt;em&gt;Go&lt;/em&gt;. All it does is to print “Hello World!!” as output. Let’s start without using multi-stage builds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Dockerfile
FROM golang
ADD . /app
WORKDIR /app
RUN go build # This will create a binary file named app
ENTRYPOINT /app/app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Build and run the image
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t goapp .
~/g/helloworld ❯❯❯ docker run -it --rm goapp
Hello World!!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now let us check the image size
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/g/helloworld ❯❯❯ docker images | grep goapp
goapp                                          latest              b4221e45dfa0        18 seconds ago      805MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;New Dockerfile
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build executable stage
FROM golang
ADD . /app
WORKDIR /app
RUN go build
ENTRYPOINT /app/app
# Build final image
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /app/app .
CMD ["./app"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Re-build and run the image
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t goapp .
~/g/helloworld ❯❯❯ docker run -it --rm goapp
Hello World!!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Let us check the image again
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/g/helloworld ❯❯❯ docker images | grep goapp
goapp                                          latest              100f92d756da        8 seconds ago       8.15MB
~/g/helloworld ❯❯❯

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see a massive reduction in the image size: from &lt;strong&gt;805 MB&lt;/strong&gt; we are down to &lt;strong&gt;8.15 MB&lt;/strong&gt;. This is mostly because the &lt;em&gt;Golang&lt;/em&gt; image has lots of dependencies which our final executable doesn’t even require for running. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s happening here?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We are building the image in two stages. First, we use a &lt;em&gt;Golang&lt;/em&gt; base image, copy our code inside it and build our executable file &lt;em&gt;app&lt;/em&gt;. In the next stage, we use a new &lt;em&gt;Alpine&lt;/em&gt; base image and copy the binary we built earlier into the new stage. An important point to note here is that the image built at each stage is entirely independent.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stage 0
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build executable stage
FROM golang
ADD . /app
WORKDIR /app
RUN go build
ENTRYPOINT /app/app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* Stage 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build final image
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /app/app .
CMD ["./app"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note the line &lt;code&gt;COPY --from=0 /app/app&lt;/code&gt; - this lets you access data from inside the image built in a previous stage. &lt;/p&gt;
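&lt;p&gt;Since Docker 17.05 you can also name stages with &lt;code&gt;AS&lt;/code&gt; and reference them by name instead of by index, which is easier to maintain when stages are added or reordered. Here is a sketch of the same Dockerfile using a named stage (the stage name &lt;code&gt;builder&lt;/code&gt; is our own choice):&lt;br&gt;
&lt;/p&gt;

```dockerfile
# Build stage, named "builder" instead of being referenced as stage 0
FROM golang AS builder
ADD . /app
WORKDIR /app
RUN go build

# Final image: copy the binary from the named stage
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/app .
CMD ["./app"]
```

&lt;p&gt;You can stop the build at a given stage with &lt;code&gt;docker build --target builder -t goapp:build .&lt;/code&gt;. Also note that if your binary uses cgo, you should build it with &lt;code&gt;CGO_ENABLED=0&lt;/code&gt; in the builder stage so it runs on musl-based Alpine.&lt;/p&gt;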

&lt;h1&gt;
  
  
  How do multi-stage builds work?
&lt;/h1&gt;

&lt;p&gt;If you look at the process carefully, multi-stage builds are not much different from ordinary Docker builds. The only major difference is that you build multiple independent images (one per stage) and you get the ability to copy artifacts/files from one image to another easily. What multi-stage builds provide was earlier achieved through scripts: people used to create the build image, copy the artifact out of it manually, and then copy it into a new image with no additional dependencies. In the above example, we build one image in &lt;em&gt;Stage 0&lt;/em&gt; and then in &lt;em&gt;Stage 1&lt;/em&gt; we build another image, to which we copy files from the older image - nothing complicated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsoafyo8g01m0wh02rzmu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsoafyo8g01m0wh02rzmu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NOTE: We are copying &lt;code&gt;/app&lt;/code&gt; from one image to another - not one container to another&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;This can speed up deployments and save cost in multiple ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You build efficient, lightweight images - hence you ship less data during deployment, which saves both cost and time.&lt;/li&gt;
&lt;li&gt;You can stop a multi-stage build at any stage, so a single Dockerfile can serve &lt;em&gt;dev&lt;/em&gt;, &lt;em&gt;staging&lt;/em&gt; and &lt;em&gt;production&lt;/em&gt; without resorting to the builder pattern. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The above was just a small example; multi-stage builds can be used to improve Docker images of applications written in other languages as well. They also help you avoid writing multiple Dockerfiles (the builder pattern) - instead, a single Dockerfile with multiple stages can be adapted to streamline the development process. If you haven't explored them already, go ahead and do it. &lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>docker</category>
    </item>
    <item>
      <title>Tutorial: Kubernetes-Native Backup and Recovery With Stash</title>
      <dc:creator>Dmitriy A.</dc:creator>
      <pubDate>Fri, 18 Sep 2020 09:29:42 +0000</pubDate>
      <link>https://forem.com/appfleet/tutorial-kubernetes-native-backup-and-recovery-with-stash-3dbd</link>
      <guid>https://forem.com/appfleet/tutorial-kubernetes-native-backup-and-recovery-with-stash-3dbd</guid>
      <description>&lt;h1&gt;
  
  
  Intro
&lt;/h1&gt;

&lt;p&gt;Having a proper backup and recovery plan is vital to any organization's IT operation. However, when you begin to distribute workloads across data centers and regions, that process becomes more and more complex. Container orchestration platforms such as Kubernetes have begun to ease this burden and have enabled the management of distributed workloads in areas that were previously very challenging. &lt;/p&gt;

&lt;p&gt;In this post, we are going to introduce you to a Kubernetes-native tool for taking backups of your disks, helping with the crucial recovery plan. &lt;strong&gt;Stash is a Restic Operator that accelerates the task of backing up and recovering your Kubernetes infrastructure&lt;/strong&gt;. You can read more about the Operator Framework via &lt;a href="https://appfleet.com/blog/first-steps-with-the-kubernetes-operator/" rel="noopener noreferrer"&gt;this blog post&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  How does Stash work?
&lt;/h1&gt;

&lt;p&gt;Using Stash, you can back up Kubernetes volumes mounted in the following types of workloads:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployment&lt;/li&gt;
&lt;li&gt;DaemonSet&lt;/li&gt;
&lt;li&gt;ReplicaSet&lt;/li&gt;
&lt;li&gt;ReplicationController&lt;/li&gt;
&lt;li&gt;StatefulSet&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the heart of Stash is a Kubernetes &lt;a href="https://book.kubebuilder.io/basics/what_is_a_controller.html" rel="noopener noreferrer"&gt;controller&lt;/a&gt; which uses &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="noopener noreferrer"&gt;Custom Resource Definitions (CRDs)&lt;/a&gt; to specify targets and behaviors of the backup and restore process in a Kubernetes-native way. A simplified architecture of Stash is shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwygjg2ki39t1e6trbj4p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwygjg2ki39t1e6trbj4p.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;Installing Stash&lt;/h1&gt;

&lt;h3&gt;Using Helm 3&lt;/h3&gt;

&lt;p&gt;Stash can be installed via &lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; using the &lt;a href="https://github.com/stashed/installer/tree/v0.9.0-rc.6/charts/stash" rel="noopener noreferrer"&gt;chart&lt;/a&gt; from the &lt;a href="https://github.com/appscode/charts" rel="noopener noreferrer"&gt;AppsCode Charts Repository&lt;/a&gt;. To install the chart with the release name &lt;code&gt;stash-operator&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm repo add appscode https://charts.appscode.com/stable/
$ helm repo update
$ helm search repo appscode/stash --version v0.9.0-rc.6
NAME            CHART VERSION  APP VERSION  DESCRIPTION
appscode/stash  v0.9.0-rc.6    v0.9.0-rc.6  Stash by AppsCode - Backup your Kubernetes Volumes

$ helm install stash-operator appscode/stash \
  --version v0.9.0-rc.6 \
  --namespace kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Using YAML&lt;/h3&gt;

&lt;p&gt;If you prefer not to use Helm, you can generate the YAML from the Stash chart and deploy it using &lt;code&gt;kubectl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm repo add appscode https://charts.appscode.com/stable/
$ helm repo update
$ helm search repo appscode/stash --version v0.9.0-rc.6
NAME            CHART VERSION APP VERSION DESCRIPTION
appscode/stash  v0.9.0-rc.6    v0.9.0-rc.6  Stash by AppsCode - Backup your Kubernetes Volumes

$ helm template stash-operator appscode/stash \
  --version v0.9.0-rc.6 \
  --namespace kube-system \
  --no-hooks | kubectl apply -f -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Installing on GKE Cluster&lt;/h3&gt;

&lt;p&gt;If you are installing Stash on a GKE cluster, you will need cluster-admin permissions to install the Stash operator. Run the following command to grant cluster-admin permission to your account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
  --clusterrole=cluster-admin \
  --user="$(gcloud config get-value core/account)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In addition, if your GKE cluster is a &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="noopener noreferrer"&gt;private cluster&lt;/a&gt;, you will need to either add a firewall rule that allows the master nodes to access port &lt;code&gt;8443/tcp&lt;/code&gt; on worker nodes, or change the existing rule that allows access to ports &lt;code&gt;443/tcp&lt;/code&gt; and &lt;code&gt;10250/tcp&lt;/code&gt; to also allow access to port &lt;code&gt;8443/tcp&lt;/code&gt;. The procedure for adding or modifying firewall rules is described in the official GKE documentation for private clusters linked above.&lt;/p&gt;

&lt;h3&gt;Verify installation&lt;/h3&gt;

&lt;p&gt;To check if Stash operator pods have started, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods --all-namespaces -l app=stash --watch

NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   stash-operator-859d6bdb56-m9br5   2/2       Running   2          5s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the operator pods are running, you can cancel the above command by typing &lt;code&gt;Ctrl+C&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now, to confirm CRD groups have been registered by the operator, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get crd -l app=stash

NAME                                 AGE
recoveries.stash.appscode.com        5s
repositories.stash.appscode.com      5s
restics.stash.appscode.com           5s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this, you are ready to take your first backup using Stash.&lt;/p&gt;

&lt;h1&gt;Configuring Auto Backup for Database&lt;/h1&gt;

&lt;p&gt;To keep everything isolated, we are going to use a separate namespace called &lt;code&gt;demo&lt;/code&gt; throughout this tutorial.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create ns demo
namespace/demo created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Prepare Backup Blueprint&lt;/h3&gt;

&lt;p&gt;We are going to use the &lt;a href="https://stash.run/docs/v0.9.0-rc.6/guides/latest/backends/gcs" rel="noopener noreferrer"&gt;GCS Backend&lt;/a&gt; to store the backed-up data. You can use any supported backend you prefer; you just have to configure the Storage Secret and the &lt;code&gt;spec.backend&lt;/code&gt; section of the &lt;code&gt;BackupBlueprint&lt;/code&gt; to match your backend. Visit &lt;a href="https://stash.run/docs/v0.9.0-rc.6/guides/latest/backends/overview" rel="noopener noreferrer"&gt;here&lt;/a&gt; to learn which backends are supported by Stash and how to configure them.&lt;/p&gt;

&lt;p&gt;For GCS backend, if the bucket does not exist, Stash needs &lt;code&gt;Storage Object Admin&lt;/code&gt; role permissions to create the bucket. For more details, please check the following &lt;a href="https://stash.run/docs/v0.9.0-rc.6/guides/latest/backends/gcs" rel="noopener noreferrer"&gt;guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create Storage Secret&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;First, let’s create a Storage Secret for the GCS backend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo -n 'changeit' &amp;gt; RESTIC_PASSWORD
$ echo -n '&amp;lt;your-project-id&amp;gt;' &amp;gt; GOOGLE_PROJECT_ID
$ mv downloaded-sa-json.key GOOGLE_SERVICE_ACCOUNT_JSON_KEY
$ kubectl create secret generic -n demo gcs-secret \
    --from-file=./RESTIC_PASSWORD \
    --from-file=./GOOGLE_PROJECT_ID \
    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
secret/gcs-secret created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create BackupBlueprint:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, we have to create a &lt;code&gt;BackupBlueprint&lt;/code&gt; CRD with a blueprint for the &lt;code&gt;Repository&lt;/code&gt; and &lt;code&gt;BackupConfiguration&lt;/code&gt; objects.&lt;/p&gt;

&lt;p&gt;Below is the YAML of the &lt;code&gt;BackupBlueprint&lt;/code&gt; object that we are going to create:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: stash.appscode.com/v1beta1
kind: BackupBlueprint
metadata:
  name: postgres-backup-blueprint
spec:
  # ============== Blueprint for Repository ==========================
  backend:
    gcs:
      bucket: appscode-qa
      prefix: stash-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME}
    storageSecretName: gcs-secret
  # ============== Blueprint for BackupConfiguration =================
  task:
    name: postgres-backup-${TARGET_APP_VERSION}
  schedule: "*/5 * * * *"
  retentionPolicy:
    name: 'keep-last-5'
    keepLast: 5
    prune: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we have used a few variables (format: &lt;code&gt;${&amp;lt;variable name&amp;gt;}&lt;/code&gt;) in the &lt;code&gt;spec.backend.gcs.prefix&lt;/code&gt; field. Stash will substitute these variables with values from the respective target. To learn which variables you can use in the &lt;code&gt;prefix&lt;/code&gt; field, please visit &lt;a href="https://stash.run/docs/v0.9.0-rc.6/concepts/crds/backupblueprint#repository-blueprint" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
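
&lt;p&gt;To make the substitution concrete, here is a plain-shell sketch of how the &lt;code&gt;prefix&lt;/code&gt; template from the blueprint expands for a target. The values mirror what Stash derives from the first database sample created later in this tutorial; the script itself is just an illustration, not part of Stash:&lt;/p&gt;

```shell
# Values Stash would derive from the backup target:
TARGET_NAMESPACE=demo
TARGET_APP_RESOURCE=postgres
TARGET_NAME=sample-postgres-1

# The prefix template from the BackupBlueprint:
prefix="stash-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME}"
echo "$prefix"
# stash-backup/demo/postgres/sample-postgres-1
```

The resulting path is exactly the `spec.backend.gcs.prefix` you will see in the generated `Repository` object.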

&lt;p&gt;Let’s create the &lt;code&gt;BackupBlueprint&lt;/code&gt; that we have shown above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f https://github.com/stashed/docs/raw/v0.9.0-rc.6/docs/examples/guides/latest/auto-backup/database/backupblueprint.yaml
backupblueprint.stash.appscode.com/postgres-backup-blueprint created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this, automatic backup is configured for PostgreSQL databases. We just have to add an annotation to the &lt;code&gt;AppBinding&lt;/code&gt; of the targeted database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Required Annotation for Auto-Backup Database:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You have to add the following annotation to the &lt;code&gt;AppBinding&lt;/code&gt; CRD of the targeted database to enable backup for it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stash.appscode.com/backup-blueprint: &amp;lt;BackupBlueprint name&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This annotation specifies the name of the &lt;code&gt;BackupBlueprint&lt;/code&gt; object where a blueprint for &lt;code&gt;Repository&lt;/code&gt; and &lt;code&gt;BackupConfiguration&lt;/code&gt; has been defined.&lt;/p&gt;

&lt;h3&gt;Prepare Databases&lt;/h3&gt;

&lt;p&gt;Next, we are going to deploy two sample PostgreSQL databases of two different versions using KubeDB. We are going to back up both databases using auto-backup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploy First PostgreSQL Sample:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Below is the YAML of the first &lt;code&gt;Postgres&lt;/code&gt; CRD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kubedb.com/v1alpha1
kind: Postgres
metadata:
  name: sample-postgres-1
  namespace: demo
spec:
  version: "11.2"
  storageType: Durable
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  terminationPolicy: Delete

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s create the &lt;code&gt;Postgres&lt;/code&gt; we have shown above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f https://github.com/stashed/docs/raw/v0.9.0-rc.6/docs/examples/guides/latest/auto-backup/database/sample-postgres-1.yaml
postgres.kubedb.com/sample-postgres-1 created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;KubeDB will deploy a PostgreSQL database according to the above specification and it will create the necessary secrets and services to access the database. It will also create an &lt;code&gt;AppBinding&lt;/code&gt; CRD that holds the necessary information to connect with the database.&lt;/p&gt;

&lt;p&gt;Verify that an &lt;code&gt;AppBinding&lt;/code&gt; has been created for this PostgreSQL sample:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get appbinding -n demo
NAME                AGE
sample-postgres-1   47s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you view the YAML of this &lt;code&gt;AppBinding&lt;/code&gt;, you will see it holds service and secret information. Stash uses this information to connect with the database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get appbinding -n demo sample-postgres-1 -o yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: appcatalog.appscode.com/v1alpha1
kind: AppBinding
metadata:
  name: sample-postgres-1
  namespace: demo
  ...
spec:
  clientConfig:
    service:
      name: sample-postgres-1
      path: /
      port: 5432
      query: sslmode=disable
      scheme: postgresql
  secret:
    name: sample-postgres-1-auth
  secretTransforms:
  - renameKey:
      from: POSTGRES_USER
      to: username
  - renameKey:
      from: POSTGRES_PASSWORD
      to: password
  type: kubedb.com/postgres
  version: "11.2"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Deploy Second PostgreSQL Sample:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Below is the YAML of the second &lt;code&gt;Postgres&lt;/code&gt; object:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kubedb.com/v1alpha1
kind: Postgres
metadata:
  name: sample-postgres-2
  namespace: demo
spec:
  version: "10.6-v2"
  storageType: Durable
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  terminationPolicy: Delete

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s create the &lt;code&gt;Postgres&lt;/code&gt; we have shown above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f https://github.com/stashed/docs/raw/v0.9.0-rc.6/docs/examples/guides/latest/auto-backup/database/sample-postgres-2.yaml
postgres.kubedb.com/sample-postgres-2 created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that an &lt;code&gt;AppBinding&lt;/code&gt; has been created for this PostgreSQL database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get appbinding -n demo
NAME                AGE
sample-postgres-1   2m49s
sample-postgres-2   10s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we can see &lt;code&gt;AppBinding&lt;/code&gt; &lt;code&gt;sample-postgres-2&lt;/code&gt; has been created for our second PostgreSQL sample.&lt;/p&gt;

&lt;h1&gt;Backup&lt;/h1&gt;

&lt;p&gt;Next, we are going to add the auto-backup annotation to the &lt;code&gt;AppBinding&lt;/code&gt; of our desired database. Stash watches for &lt;code&gt;AppBinding&lt;/code&gt; CRDs. Once it finds an &lt;code&gt;AppBinding&lt;/code&gt; with the auto-backup annotation, it creates a &lt;code&gt;Repository&lt;/code&gt; and a &lt;code&gt;BackupConfiguration&lt;/code&gt; CRD according to the respective &lt;code&gt;BackupBlueprint&lt;/code&gt;. From there, the rest of the backup process proceeds like a normal database backup.&lt;/p&gt;

&lt;h3&gt;Backup First PostgreSQL Sample&lt;/h3&gt;

&lt;p&gt;Let’s back up our first PostgreSQL sample using auto-backup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add Annotations&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;First, add the auto-backup annotation to the AppBinding &lt;code&gt;sample-postgres-1&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl annotate appbinding sample-postgres-1 -n demo --overwrite \
  stash.appscode.com/backup-blueprint=postgres-backup-blueprint

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the annotation has been added successfully:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get appbinding -n demo sample-postgres-1 -o yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: appcatalog.appscode.com/v1alpha1
kind: AppBinding
metadata:
  annotations:
    stash.appscode.com/backup-blueprint: postgres-backup-blueprint
  name: sample-postgres-1
  namespace: demo
  ...
spec:
  clientConfig:
    service:
      name: sample-postgres-1
      path: /
      port: 5432
      query: sslmode=disable
      scheme: postgresql
  secret:
    name: sample-postgres-1-auth
  secretTransforms:
  - renameKey:
      from: POSTGRES_USER
      to: username
  - renameKey:
      from: POSTGRES_PASSWORD
      to: password
  type: kubedb.com/postgres
  version: "11.2"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Following this, Stash will create a &lt;code&gt;Repository&lt;/code&gt; and a &lt;code&gt;BackupConfiguration&lt;/code&gt; CRD according to the blueprint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verify Repository:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify that the &lt;code&gt;Repository&lt;/code&gt; has been created successfully using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get repository -n demo
NAME                         INTEGRITY   SIZE   SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
postgres-sample-postgres-1                                                                2m23s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we view the YAML of this &lt;code&gt;Repository&lt;/code&gt;, we will see that the variables &lt;code&gt;${TARGET_NAMESPACE}&lt;/code&gt;, &lt;code&gt;${TARGET_APP_RESOURCE}&lt;/code&gt; and &lt;code&gt;${TARGET_NAME}&lt;/code&gt; have been replaced by &lt;code&gt;demo&lt;/code&gt;, &lt;code&gt;postgres&lt;/code&gt; and &lt;code&gt;sample-postgres-1&lt;/code&gt; respectively.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get repository -n demo postgres-sample-postgres-1 -o yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: stash.appscode.com/v1beta1
kind: Repository
metadata:
  creationTimestamp: "2019-08-01T13:54:48Z"
  finalizers:
  - stash
  generation: 1
  name: postgres-sample-postgres-1
  namespace: demo
  resourceVersion: "50171"
  selfLink: /apis/stash.appscode.com/v1beta1/namespaces/demo/repositories/postgres-sample-postgres-1
  uid: ed49dde4-b463-11e9-a6a0-080027aded7e
spec:
  backend:
    gcs:
      bucket: appscode-qa
      prefix: stash-backup/demo/postgres/sample-postgres-1
    storageSecretName: gcs-secret

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Verify BackupConfiguration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify that the &lt;code&gt;BackupConfiguration&lt;/code&gt; CRD has been created using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get backupconfiguration -n demo
NAME                         TASK                   SCHEDULE      PAUSED   AGE
postgres-sample-postgres-1   postgres-backup-11.2   */5 * * * *            3m39s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the &lt;code&gt;TASK&lt;/code&gt; field. It denotes that this backup will be performed using the &lt;code&gt;postgres-backup-11.2&lt;/code&gt; task. We specified &lt;code&gt;postgres-backup-${TARGET_APP_VERSION}&lt;/code&gt; as the task name in the &lt;code&gt;BackupBlueprint&lt;/code&gt;; here, the variable &lt;code&gt;${TARGET_APP_VERSION}&lt;/code&gt; has been substituted with the database version.&lt;/p&gt;

&lt;p&gt;Let’s check the YAML of this &lt;code&gt;BackupConfiguration&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get backupconfiguration -n demo postgres-sample-postgres-1 -o yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: stash.appscode.com/v1beta1
kind: BackupConfiguration
metadata:
  creationTimestamp: "2019-08-01T13:54:48Z"
  finalizers:
  - stash.appscode.com
  generation: 1
  name: postgres-sample-postgres-1
  namespace: demo
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: false
    kind: AppBinding
    name: sample-postgres-1
    uid: a799156e-b463-11e9-a6a0-080027aded7e
  resourceVersion: "50170"
  selfLink: /apis/stash.appscode.com/v1beta1/namespaces/demo/backupconfigurations/postgres-sample-postgres-1
  uid: ed4bd257-b463-11e9-a6a0-080027aded7e
spec:
  repository:
    name: postgres-sample-postgres-1
  retentionPolicy:
    keepLast: 5
    name: keep-last-5
    prune: true
  runtimeSettings: {}
  schedule: '*/5 * * * *'
  target:
    ref:
      apiVersion: v1
      kind: AppBinding
      name: sample-postgres-1
  task:
    name: postgres-backup-11.2
  tempDir: {}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that the &lt;code&gt;spec.target.ref&lt;/code&gt; is pointing to the AppBinding &lt;code&gt;sample-postgres-1&lt;/code&gt; that we have just annotated with auto-backup annotation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wait for BackupSession:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, wait for the next backup schedule. Run the following command to watch the &lt;code&gt;BackupSession&lt;/code&gt; CRD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ watch -n 1 kubectl get backupsession -n demo -l=stash.appscode.com/backup-configuration=postgres-sample-postgres-1

Every 1.0s: kubectl get backupsession -n demo -l=stash.appscode.com/backup-configuration=postgres-sample-postgres-1  workstation: Thu Aug  1 20:35:43 2019

NAME                                    INVOKER-TYPE          INVOKER-NAME                 PHASE       AGE
postgres-sample-postgres-1-1564670101   BackupConfiguration   postgres-sample-postgres-1   Succeeded   42s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: The backup CronJob creates each &lt;code&gt;BackupSession&lt;/code&gt; CRD with the label &lt;code&gt;stash.appscode.com/backup-configuration=&amp;lt;BackupConfiguration crd name&amp;gt;&lt;/code&gt;. We can use this label to watch only the &lt;code&gt;BackupSession&lt;/code&gt; of our desired &lt;code&gt;BackupConfiguration&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verify Backup&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;When the backup session is complete, Stash updates the respective &lt;code&gt;Repository&lt;/code&gt; to reflect the latest state of the backed-up data.&lt;/p&gt;

&lt;p&gt;Run the following command to check if a snapshot has been sent to the backend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get repository -n demo postgres-sample-postgres-1
NAME                         INTEGRITY   SIZE        SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
postgres-sample-postgres-1   true        1.324 KiB   1                73s                      6m7s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we navigate to the &lt;code&gt;stash-backup/demo/postgres/sample-postgres-1&lt;/code&gt; directory of our GCS bucket, we will see that the snapshot has been stored there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdlnzpwuv0azkeggdnztg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdlnzpwuv0azkeggdnztg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Backup Second PostgreSQL Sample&lt;/h3&gt;

&lt;p&gt;Now, let’s back up our second PostgreSQL sample using the same &lt;code&gt;BackupBlueprint&lt;/code&gt; we used for the first one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add Annotations:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Add the auto-backup annotation to the AppBinding &lt;code&gt;sample-postgres-2&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl annotate appbinding sample-postgres-2 -n demo --overwrite \
  stash.appscode.com/backup-blueprint=postgres-backup-blueprint
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Verify Repository:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify that the &lt;code&gt;Repository&lt;/code&gt; has been created successfully using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get repository -n demo
NAME                         INTEGRITY   SIZE        SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
postgres-sample-postgres-1   true        1.324 KiB   1                2m3s                     6m57s
postgres-sample-postgres-2                                                                     15s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, repository &lt;code&gt;postgres-sample-postgres-2&lt;/code&gt; has been created for the second PostgreSQL sample.&lt;/p&gt;

&lt;p&gt;If we view the YAML of this &lt;code&gt;Repository&lt;/code&gt;, we will see that the variables &lt;code&gt;${TARGET_NAMESPACE}&lt;/code&gt;, &lt;code&gt;${TARGET_APP_RESOURCE}&lt;/code&gt; and &lt;code&gt;${TARGET_NAME}&lt;/code&gt; have been replaced by &lt;code&gt;demo&lt;/code&gt;, &lt;code&gt;postgres&lt;/code&gt; and &lt;code&gt;sample-postgres-2&lt;/code&gt; respectively.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get repository -n demo postgres-sample-postgres-2 -o yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: stash.appscode.com/v1beta1
kind: Repository
metadata:
  creationTimestamp: "2019-08-01T14:37:22Z"
  finalizers:
  - stash
  generation: 1
  name: postgres-sample-postgres-2
  namespace: demo
  resourceVersion: "56103"
  selfLink: /apis/stash.appscode.com/v1beta1/namespaces/demo/repositories/postgres-sample-postgres-2
  uid: df58523c-b469-11e9-a6a0-080027aded7e
spec:
  backend:
    gcs:
      bucket: appscode-qa
      prefix: stash-backup/demo/postgres/sample-postgres-2
    storageSecretName: gcs-secret

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Verify BackupConfiguration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify that the &lt;code&gt;BackupConfiguration&lt;/code&gt; CRD has been created using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get backupconfiguration -n demo
NAME                         TASK                   SCHEDULE      PAUSED   AGE
postgres-sample-postgres-1   postgres-backup-11.2   */5 * * * *            7m52s
postgres-sample-postgres-2   postgres-backup-10.6   */5 * * * *            70s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, notice the &lt;code&gt;TASK&lt;/code&gt; field. This time, &lt;code&gt;${TARGET_APP_VERSION}&lt;/code&gt; has been replaced with &lt;code&gt;10.6&lt;/code&gt; which is the database version of our second sample.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wait for BackupSession:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, wait for the next backup schedule. Run the following command to watch the &lt;code&gt;BackupSession&lt;/code&gt; CRD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ watch -n 1 kubectl get backupsession -n demo -l=stash.appscode.com/backup-configuration=postgres-sample-postgres-2
Every 1.0s: kubectl get backupsession -n demo -l=stash.appscode.com/backup-configuration=postgres-sample-postgres-2  workstation: Thu Aug  1 20:55:40 2019

NAME                                    INVOKER-TYPE          INVOKER-NAME                 PHASE       AGE
postgres-sample-postgres-2-1564671303   BackupConfiguration   postgres-sample-postgres-2   Succeeded   37s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Verify Backup:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the following command to check if a snapshot has been sent to the backend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get repository -n demo postgres-sample-postgres-2
NAME                         INTEGRITY   SIZE        SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
postgres-sample-postgres-2   true        1.324 KiB   1                52s                      19m

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we navigate to the &lt;code&gt;stash-backup/demo/postgres/sample-postgres-2&lt;/code&gt; directory of our GCS bucket, we will see that the snapshot has been stored there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fov9oai8x895sr84rk86x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fov9oai8x895sr84rk86x.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;Cleanup&lt;/h1&gt;

&lt;p&gt;To clean up the Kubernetes resources created by this tutorial, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete -n demo pg/sample-postgres-1
kubectl delete -n demo pg/sample-postgres-2

kubectl delete -n demo repository/postgres-sample-postgres-1
kubectl delete -n demo repository/postgres-sample-postgres-2

kubectl delete -n demo backupblueprint/postgres-backup-blueprint
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;Final thoughts&lt;/h1&gt;

&lt;p&gt;You've now taken a deep dive into setting up a Kubernetes-native backup and disaster recovery solution with Stash. You can find a lot of helpful information on the official documentation site &lt;a href="https://stash.run/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. More content can be found on the &lt;a href="https://appfleet.com/blog" rel="noopener noreferrer"&gt;appfleet blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Enabling multicloud K8s communication with Skupper</title>
      <dc:creator>Dmitriy A.</dc:creator>
      <pubDate>Tue, 15 Sep 2020 20:25:24 +0000</pubDate>
      <link>https://forem.com/appfleet/enabling-multicloud-k8s-communication-with-skupper-4b51</link>
      <guid>https://forem.com/appfleet/enabling-multicloud-k8s-communication-with-skupper-4b51</guid>
      <description>&lt;h1&gt;
  
  
  Intro
&lt;/h1&gt;

&lt;p&gt;There are many challenges that engineering teams face when attempting to incorporate a multi-cloud approach into their infrastructure goals. Kubernetes does a good job of addressing some of these issues, but managing the communication of clusters that span multiple cloud providers in multiple regions can become a daunting task for teams. Often, this requires complex VPNs and special firewall rules just to enable multi-cloud cluster communication.&lt;/p&gt;

&lt;p&gt;In this post, I will introduce you to Skupper, an open-source project for enabling secure communication across Kubernetes clusters. Skupper allows your application to span multiple cloud providers, data centers, and regions. Let's see it in action!&lt;/p&gt;

&lt;h1&gt;Getting Started&lt;/h1&gt;

&lt;p&gt;This tutorial will demonstrate how to distribute the &lt;a href="https://istio.io/docs/examples/bookinfo/"&gt;Istio Bookinfo Application&lt;/a&gt; microservices across multiple public and private clusters. The services require no coding changes to work in the distributed application environment. With Skupper, the application behaves as if all the services are running in the same cluster.&lt;/p&gt;

&lt;p&gt;In this tutorial, you will deploy the &lt;em&gt;productpage&lt;/em&gt; and &lt;em&gt;ratings&lt;/em&gt; services on a remote, public cluster in namespace &lt;code&gt;aws-eu-west&lt;/code&gt; and the &lt;em&gt;details&lt;/em&gt; and &lt;em&gt;reviews&lt;/em&gt; services in a local, on-premises cluster in namespace &lt;code&gt;laptop&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Overview
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Figure 1 - Bookinfo service deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sFaO-zhO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jjgiavry2arh129ven17.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sFaO-zhO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jjgiavry2arh129ven17.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows how the services will be deployed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each cluster runs two of the application services.&lt;/li&gt;
&lt;li&gt;An ingress route to the &lt;em&gt;productpage&lt;/em&gt; service provides internet user access to the application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If all services were installed on the public cluster, the application would work as originally designed. However, since two of the services run on the &lt;em&gt;laptop&lt;/em&gt; cluster, the application fails: &lt;em&gt;productpage&lt;/em&gt; cannot send requests to &lt;em&gt;details&lt;/em&gt; or to &lt;em&gt;reviews&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This demo will show how Skupper can solve the connectivity problem presented by this arrangement of service deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 2 - Bookinfo service deployment with Skupper&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B9iABK83--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gye0ijj7xuw9ckq8jzdq.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B9iABK83--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gye0ijj7xuw9ckq8jzdq.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Skupper is a distributed system with installations running in one or more clusters or namespaces. Connected Skupper installations share information about what services each installation exposes. Each Skupper installation learns which services are exposed on every other installation. Skupper then runs proxy service endpoints in each namespace to properly route requests to or from every exposed service.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the public namespace, the &lt;em&gt;details&lt;/em&gt; and &lt;em&gt;reviews&lt;/em&gt; proxies intercept requests for their services and forward them to the Skupper network.&lt;/li&gt;
&lt;li&gt;In the private namespace, the &lt;em&gt;details&lt;/em&gt; and &lt;em&gt;reviews&lt;/em&gt; proxies receive requests from the Skupper network and send them to the related service.&lt;/li&gt;
&lt;li&gt;In the private namespace, the &lt;em&gt;ratings&lt;/em&gt; proxy intercepts requests for its service and forwards them to the Skupper network.&lt;/li&gt;
&lt;li&gt;In the public namespace, the &lt;em&gt;ratings&lt;/em&gt; proxy receives requests from the Skupper network and sends them to the related service.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To run this tutorial you will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;kubectl&lt;/code&gt; command-line tool, version 1.15 or later &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/"&gt;(installation guide)&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;skupper&lt;/code&gt; command-line tool, the latest version &lt;a href="https://skupper.io/start/index.html#step-1-install-the-skupper-command-line-tool-in-your-environment"&gt;(installation guide)&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Two Kubernetes namespaces, from any providers you choose, on any clusters you choose&lt;/li&gt;
&lt;li&gt;The yaml files from &lt;a href="https://github.com/skupperproject/skupper-examples-bookinfo.git"&gt;https://github.com/skupperproject/skupper-examples-bookinfo.git&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Two logged-in console terminals, one for each cluster or namespace&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Step 1: Deploy the Bookinfo application
&lt;/h1&gt;

&lt;p&gt;This step creates a service and a deployment for each of the four Bookinfo microservices.&lt;/p&gt;

&lt;p&gt;Namespace &lt;code&gt;aws-eu-west&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f public-cloud.yaml
service/productpage created
deployment.extensions/productpage-v1 created
service/ratings created
deployment.extensions/ratings-v1 created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Namespace &lt;code&gt;laptop&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f private-cloud.yaml 
service/details created
deployment.extensions/details-v1 created
service/reviews created
deployment.extensions/reviews-v3 created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  Step 2: Expose the public productpage service
&lt;/h1&gt;

&lt;p&gt;Namespace &lt;code&gt;aws-eu-west&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl expose deployment/productpage-v1 --port 9080 --type LoadBalancer

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The Bookinfo application is accessed from the public internet through this ingress port to the &lt;code&gt;productpage&lt;/code&gt; service.&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 3: Observe that the application does not work
&lt;/h1&gt;

&lt;p&gt;The web address for the Bookinfo application can be discovered from namespace &lt;code&gt;aws-eu-west&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo $(kubectl get service/productpage -o jsonpath='http://{.status.loadBalancer.ingress[0].hostname}:9080')

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Open the address in a web browser. &lt;em&gt;Productpage&lt;/em&gt; responds but the page will show errors as services in namespace &lt;code&gt;laptop&lt;/code&gt; are not reachable.&lt;/p&gt;

&lt;p&gt;We can fix that now.&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 4: Set up Skupper
&lt;/h1&gt;

&lt;p&gt;This step initializes the Skupper environment on each cluster.&lt;br&gt;
Namespace &lt;code&gt;laptop&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;skupper init
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Namespace &lt;code&gt;aws-eu-west&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;skupper init
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now the Skupper infrastructure is running. Use &lt;code&gt;skupper status&lt;/code&gt; in each console terminal to see that Skupper is available.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ skupper status
Namespace '&amp;lt;ns&amp;gt;' is ready.  It is connected to 0 other namespaces.

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you move through the steps that follow, you can use &lt;code&gt;skupper status&lt;/code&gt; at any time to check your progress.&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 5: Connect your Skupper installations
&lt;/h1&gt;

&lt;p&gt;Now you need to connect your namespaces with a Skupper connection.&lt;/p&gt;

&lt;p&gt;This is a two step process.&lt;br&gt;
The &lt;code&gt;skupper connection-token &amp;lt;file&amp;gt;&lt;/code&gt; command directs Skupper to generate a secret token file with certificates that grant permission to other Skupper instances to connect to this Skupper's network.&lt;/p&gt;

&lt;p&gt;Note: Protect this file as you would do for any file that holds login credentials.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;skupper connect &amp;lt;file&amp;gt;&lt;/code&gt; command directs Skupper to connect to another Skupper's network. This step completes the Skupper connection.&lt;/p&gt;

&lt;p&gt;Note that in this arrangement the Skupper instances join to form peer networks. Typically the Skupper opening the network port will be on the public cluster. A cluster running on &lt;code&gt;laptop&lt;/code&gt; may not even have an address that is reachable from the internet. After the connection is made, the Skupper network members are peers and it does not matter which Skupper opened the network port and which connected to it.&lt;/p&gt;

&lt;p&gt;The console terminals in this demo are run by the same user on the same host. This makes the token file in the ${HOME} directory available to both terminals. If your terminals are on different machines then you may need to use &lt;code&gt;scp&lt;/code&gt; or a similar tool to transfer the token file to the system hosting the &lt;code&gt;laptop&lt;/code&gt; terminal.&lt;/p&gt;
&lt;h3&gt;
  
  
  Generate a Skupper network connection token
&lt;/h3&gt;

&lt;p&gt;Namespace &lt;code&gt;aws-eu-west&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;skupper connection-token ${HOME}/PVT-to-PUB-connection-token.yaml

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Open a Skupper connection
&lt;/h3&gt;

&lt;p&gt;Namespace &lt;code&gt;laptop&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;skupper connect ${HOME}/PVT-to-PUB-connection-token.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Check the connection
&lt;/h3&gt;

&lt;p&gt;Namespace &lt;code&gt;aws-eu-west&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ skupper status
Skupper enabled for "aws-eu-west". It is connected to 1 other sites.

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Namespace &lt;code&gt;laptop&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ skupper status
Skupper enabled for "laptop". It is connected to 1 other sites.

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  Step 6: Virtualize the services you want shared
&lt;/h1&gt;

&lt;p&gt;You now have a Skupper network capable of multi-cluster communication but no services are associated with it. This step uses the &lt;code&gt;kubectl annotate&lt;/code&gt; command to notify Skupper that a service is to be included in the Skupper network.&lt;/p&gt;

&lt;p&gt;Skupper uses the annotation as the indication that a service must be virtualized. The service that receives the annotation is the physical target for network requests and the proxies that Skupper deploys in other namespaces are the virtual targets for network requests. The Skupper infrastructure then routes requests between the virtual services and the target service.&lt;/p&gt;

&lt;p&gt;Namespace &lt;code&gt;aws-eu-west&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl annotate service ratings skupper.io/proxy=http
service/ratings annotated

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Namespace &lt;code&gt;laptop&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl annotate service details skupper.io/proxy=http
service/details annotated

$ kubectl annotate service reviews skupper.io/proxy=http
service/reviews annotated

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Skupper is now making the annotated services available to every namespace in the Skupper network. The Bookinfo application now works: the &lt;code&gt;productpage&lt;/code&gt; service on the public cluster has access to the &lt;code&gt;details&lt;/code&gt; and &lt;code&gt;reviews&lt;/code&gt; services on the private cluster, and the &lt;code&gt;reviews&lt;/code&gt; service on the private cluster has access to the &lt;code&gt;ratings&lt;/code&gt; service on the public cluster.&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 7: Observe that the application works
&lt;/h1&gt;

&lt;p&gt;The web address for the Bookinfo app can be discovered from namespace &lt;code&gt;aws-eu-west&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo $(kubectl get service/productpage -o jsonpath='http://{.status.loadBalancer.ingress[0].hostname}:9080')

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Open the address in a web browser. The application should now work with no errors.&lt;/p&gt;

&lt;h1&gt;
  
  
  Clean up
&lt;/h1&gt;

&lt;p&gt;Skupper and the Bookinfo services may be removed from the clusters.&lt;/p&gt;

&lt;p&gt;Namespace &lt;code&gt;aws-eu-west&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;skupper delete
kubectl delete -f public-cloud.yaml

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Namespace &lt;code&gt;laptop&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;skupper delete
kubectl delete -f private-cloud.yaml 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  Final Thoughts
&lt;/h1&gt;

&lt;p&gt;Enabling a multi-cloud approach has a lot of benefits and is getting easier, thanks to tools like Skupper. If you have time, try some of Skupper's other examples in its &lt;a href="https://github.com/skupperproject"&gt;GitHub repo&lt;/a&gt;. I hope you learned something from this post. Stay tuned for more!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Cloud-native benchmarking with Kubestone</title>
      <dc:creator>Dmitriy A.</dc:creator>
      <pubDate>Mon, 14 Sep 2020 18:40:45 +0000</pubDate>
      <link>https://forem.com/appfleet/cloud-native-benchmarking-with-kubestone-107m</link>
      <guid>https://forem.com/appfleet/cloud-native-benchmarking-with-kubestone-107m</guid>
      <description>&lt;h1&gt;
  
  
  Intro
&lt;/h1&gt;

&lt;p&gt;Organizations are increasingly looking to containers and distributed applications to provide the agility and scalability needed to satisfy their clients. While doing so, modern enterprises also need the ability to benchmark their applications and track key performance metrics for their infrastructure. &lt;/p&gt;

&lt;p&gt;In this post, I am introducing you to a cloud-native benchmarking tool known as &lt;strong&gt;Kubestone&lt;/strong&gt;. This tool is meant to help your development teams get performance metrics from your Kubernetes clusters.&lt;/p&gt;

&lt;h1&gt;
  
  
  How does Kubestone work?
&lt;/h1&gt;

&lt;p&gt;At its core, Kubestone is implemented as a &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/"&gt;Kubernetes Operator&lt;/a&gt; written in &lt;a href="https://golang.org/"&gt;Go&lt;/a&gt; with the help of &lt;a href="https://kubebuilder.io/"&gt;Kubebuilder&lt;/a&gt;. You can find more info on the Operator Framework in &lt;a href="https://appfleet.com/blog/first-steps-with-the-kubernetes-operator/"&gt;this blog post&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Kubestone leverages Open Source benchmarks to measure Core Kubernetes and Application performance. As benchmarks are executed in Kubernetes, they must be containerized to work on the cluster. A certified set of benchmark containers is provided via &lt;a href="https://hub.docker.com/r/xridge/"&gt;xridge's DockerHub space&lt;/a&gt;. Here is a list of currently supported benchmarks:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Benchmark Name&lt;/th&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Core/CPU&lt;/td&gt;
&lt;td&gt;sysbench&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Core/Disk&lt;/td&gt;
&lt;td&gt;fio&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Core/Disk&lt;/td&gt;
&lt;td&gt;ioping&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Core/Memory&lt;/td&gt;
&lt;td&gt;sysbench&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Core/Network&lt;/td&gt;
&lt;td&gt;iperf3&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Core/Network&lt;/td&gt;
&lt;td&gt;qperf&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HTTP Load Tester&lt;/td&gt;
&lt;td&gt;drill&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Application/Etcd&lt;/td&gt;
&lt;td&gt;etcd&lt;/td&gt;
&lt;td&gt;Planned&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Application/K8S&lt;/td&gt;
&lt;td&gt;kubeperf&lt;/td&gt;
&lt;td&gt;Planned&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Application/PostgreSQL&lt;/td&gt;
&lt;td&gt;pgbench&lt;/td&gt;
&lt;td&gt;Supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Application/Spark&lt;/td&gt;
&lt;td&gt;sparkbench&lt;/td&gt;
&lt;td&gt;Planned&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Let's try installing Kubestone and running a benchmark ourselves and see how it works.&lt;/p&gt;

&lt;h1&gt;
  
  
  Installing Kubestone
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; v1.13 (or newer)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kustomize.io/"&gt;Kustomize v3.1.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Cluster admin privileges&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deploy Kubestone to &lt;code&gt;kubestone-system&lt;/code&gt; namespace with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kustomize build github.com/xridge/kubestone/config/default | kubectl create -f -
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once deployed, Kubestone will listen for Custom Resources created with the &lt;code&gt;kubestone.xridge.io&lt;/code&gt; group.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benchmarking
&lt;/h3&gt;

&lt;p&gt;Benchmarks can be executed via Kubestone by creating Custom Resources in your cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Namespace
&lt;/h3&gt;

&lt;p&gt;It is recommended to create a dedicated namespace for benchmarking.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create namespace kubestone
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After the namespace is created, you can use it to post a benchmark request to the cluster.&lt;/p&gt;

&lt;p&gt;The resulting benchmark executions will reside in this namespace.&lt;/p&gt;

&lt;h3&gt;
  
  
  Custom Resource rendering
&lt;/h3&gt;

&lt;p&gt;We will be using &lt;a href="https://kustomize.io/"&gt;kustomize&lt;/a&gt; to render the Custom Resource from the &lt;a href="https://github.com/xridge/kubestone/tree/master/config/samples/fio/"&gt;github repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Kustomize takes a &lt;a href="https://github.com/xridge/kubestone/blob/master/config/samples/fio/base/fio_cr.yaml"&gt;base yaml&lt;/a&gt; and patches it with an &lt;a href="https://github.com/xridge/kubestone/blob/master/config/samples/fio/overlays/pvc/patch.yaml"&gt;overlay file&lt;/a&gt; to render the final yaml file, which describes the benchmark.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kustomize build github.com/xridge/kubestone/config/samples/fio/overlays/pvc
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The Custom Resource (rendered yaml) looks as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: perf.kubestone.xridge.io/v1alpha1
kind: Fio
metadata:
  name: fio-sample
spec:
  cmdLineArgs: --name=randwrite --iodepth=1 --rw=randwrite --bs=4m --size=256M
  image:
    name: xridge/fio:3.13
  volume:
    persistentVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    volumeSource:
      persistentVolumeClaim:
        claimName: GENERATED
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When we create this resource in Kubernetes, the operator interprets it and creates the associated benchmark. The fields of the Custom Resource control how the benchmark will be executed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;metadata.name&lt;/code&gt;: Identifies the Custom Resource. Later, this can be used to query or delete the benchmark in the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cmdLineArgs&lt;/code&gt;: Arguments passed to the benchmark. In this case we are providing the arguments to &lt;em&gt;Fio&lt;/em&gt; (a filesystem benchmark), instructing it to execute a random-write test with a 4 MB block size and an overall transfer size of 256 MB.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;image.name&lt;/code&gt;: Describes the Docker Image of the benchmark. In case of &lt;a href="https://fio.readthedocs.io/"&gt;Fio&lt;/a&gt;, we are using &lt;a href="https://cloud.docker.com/u/xridge/repository/docker/xridge/fio"&gt;xridge's fio Docker Image&lt;/a&gt;, which is built from &lt;a href="https://github.com/xridge/fio-docker/"&gt;this repository&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;volume.persistentVolumeClaimSpec&lt;/code&gt;: Given that Fio is a disk benchmark, we can set a &lt;strong&gt;PersistentVolumeClaim&lt;/strong&gt; on which the benchmark is executed. The above setup instructs Kubernetes to take 1 GB of space from the default StorageClass and use it for the benchmark.&lt;/li&gt;
&lt;/ul&gt;
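&lt;p&gt;To see how these fields drive a run, here is a sketch of a variant Custom Resource. The name and &lt;code&gt;cmdLineArgs&lt;/code&gt; are illustrative (not taken from the Kubestone samples); it swaps the random-write test for a random-read test with a 4 KB block size while keeping the same image and 1 GB claim:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: perf.kubestone.xridge.io/v1alpha1
kind: Fio
metadata:
  name: fio-randread-sample
spec:
  cmdLineArgs: --name=randread --iodepth=4 --rw=randread --bs=4k --size=256M
  image:
    name: xridge/fio:3.13
  volume:
    persistentVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    volumeSource:
      persistentVolumeClaim:
        claimName: GENERATED
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;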

&lt;h1&gt;
  
  
  Running the benchmark
&lt;/h1&gt;

&lt;p&gt;Now that we understand the definition of the benchmark, we can execute it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Make sure you have installed the Kubestone operator and that it is running before executing this step&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kustomize build github.com/xridge/kubestone/config/samples/fio/overlays/pvc | kubectl create --namespace kubestone -f -

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Since we pipe the output of the &lt;code&gt;kustomize build&lt;/code&gt; command into &lt;code&gt;kubectl create&lt;/code&gt;, it will create the object in our Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;The resulting object can be queried using the object's type (&lt;code&gt;fio&lt;/code&gt;) and its name (&lt;code&gt;fio-sample&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl describe --namespace kubestone fio fio-sample
Name:         fio-sample
Namespace:    kubestone
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;
API Version:  perf.kubestone.xridge.io/v1alpha1
Kind:         Fio
Metadata:
  Creation Timestamp:  2019-09-14T11:31:02Z
  Generation:          1
  Resource Version:    31488293
  Self Link:           /apis/perf.kubestone.xridge.io/v1alpha1/namespaces/kubestone/fios/fio-sample
  UID:                 21cdbe92-d6e3-11e9-ba70-4439c4920abc
Spec:
  Cmd Line Args:  --name=randwrite --iodepth=1 --rw=randwrite --bs=4m --size=256M
  Image:
    Name:  xridge/fio:3.13
  Volume:
    Persistent Volume Claim Spec:
      Access Modes:
        ReadWriteOnce
      Resources:
        Requests:
          Storage:  1Gi
    Volume Source:
      Persistent Volume Claim:
        Claim Name:  GENERATED
Status:
  Completed:  true
  Running:    false
Events:
  Type    Reason           Age   From       Message
  ----    ------           ----  ----       -------
  Normal  Created  11s   kubestone  Created /api/v1/namespaces/kubestone/configmaps/fio-sample
  Normal  Created  11s   kubestone  Created /api/v1/namespaces/kubestone/persistentvolumeclaims/fio-sample
  Normal  Created  11s   kubestone  Created /apis/batch/v1/namespaces/kubestone/jobs/fio-sample
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As the &lt;code&gt;Events&lt;/code&gt; section shows, Kubestone has created a &lt;code&gt;ConfigMap&lt;/code&gt;, a &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; and a &lt;code&gt;Job&lt;/code&gt; for the provided Custom Resource. The &lt;code&gt;Status&lt;/code&gt; field tells us that the benchmark has completed.&lt;/p&gt;
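&lt;p&gt;If you are scripting around Kubestone, you can also block until the underlying Job finishes instead of polling the status. A small sketch using a standard &lt;code&gt;kubectl&lt;/code&gt; command (the Job name matches the Custom Resource name, as the events above show; the timeout is an arbitrary example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl wait --namespace kubestone --for=condition=complete job/fio-sample --timeout=120s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;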

&lt;h3&gt;
  
  
  Inspecting the benchmark
&lt;/h3&gt;

&lt;p&gt;The created objects related to the benchmark can be listed using &lt;code&gt;kubectl&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods,jobs,configmaps,pvc --namespace kubestone
NAME                   READY   STATUS      RESTARTS   AGE
pod/fio-sample-bqqmm   0/1     Completed   0          54s

NAME                   COMPLETIONS   DURATION   AGE
job.batch/fio-sample   1/1           15s        54s

NAME                   DATA   AGE
configmap/fio-sample   0      54s

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
persistentvolumeclaim/fio-sample   Bound    pvc-b3898236-c698-11e9-8071-4439c4920abc   1Gi        RWO            rook-ceph-block   54s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As shown above, the Fio controller has created a PersistentVolumeClaim and a ConfigMap, which are used by the Fio Job during benchmark execution. The Fio Job has an associated Pod which contains our test execution. The results of the run can be shown with the &lt;code&gt;kubectl logs&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl logs --namespace kubestone fio-sample-bqqmm
randwrite: (g=0): rw=randwrite, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=psync, iodepth=1
fio-3.13
Starting 1 process
randwrite: Laying out IO file (1 file / 256MiB)

randwrite: (groupid=0, jobs=1): err= 0: pid=47: Sat Aug 24 17:58:10 2019
  write: IOPS=470, BW=1882MiB/s (1974MB/s)(256MiB/136msec); 0 zone resets
    clat (usec): min=1887, max=2595, avg=2042.76, stdev=136.56
     lat (usec): min=1953, max=2688, avg=2107.35, stdev=142.94
    clat percentiles (usec):
     |  1.00th=[ 1893],  5.00th=[ 1926], 10.00th=[ 1926], 20.00th=[ 1958],
     | 30.00th=[ 1991], 40.00th=[ 2008], 50.00th=[ 2024], 60.00th=[ 2040],
     | 70.00th=[ 2057], 80.00th=[ 2073], 90.00th=[ 2114], 95.00th=[ 2409],
     | 99.00th=[ 2606], 99.50th=[ 2606], 99.90th=[ 2606], 99.95th=[ 2606],
     | 99.99th=[ 2606]
  lat (msec)   : 2=34.38%, 4=65.62%
  cpu          : usr=2.22%, sys=97.78%, ctx=1, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, &amp;gt;=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &amp;gt;=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &amp;gt;=64=0.0%
     issued rwts: total=0,64,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=1882MiB/s (1974MB/s), 1882MiB/s-1882MiB/s (1974MB/s-1974MB/s), io=256MiB (268MB), run=136-136msec

Disk stats (read/write):
  rbd7: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Listing benchmarks
&lt;/h3&gt;

&lt;p&gt;We have learned that Kubestone uses Custom Resources to define benchmarks. We can list the installed Custom Resource Definitions using the &lt;code&gt;kubectl get crds&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get crds | grep kubestone
drills.perf.kubestone.xridge.io         2019-09-08T05:51:26Z
fios.perf.kubestone.xridge.io           2019-09-08T05:51:26Z
iopings.perf.kubestone.xridge.io        2019-09-08T05:51:26Z
iperf3s.perf.kubestone.xridge.io        2019-09-08T05:51:26Z
pgbenches.perf.kubestone.xridge.io      2019-09-08T05:51:26Z
sysbenches.perf.kubestone.xridge.io     2019-09-08T05:51:26Z
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Using the CRD names above, we can list the executed benchmarks in the system.&lt;/p&gt;

&lt;p&gt;Kubernetes provides a convenience feature regarding CRDs: one can use the shortened name of the CRD, which is the singular part of the fully qualified CRD name. In our case, &lt;code&gt;fios.perf.kubestone.xridge.io&lt;/code&gt; can be shortened to &lt;code&gt;fio&lt;/code&gt;. Hence, we can list the executed &lt;code&gt;fio&lt;/code&gt; benchmark using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get --namespace kubestone fios.perf.kubestone.xridge.io
NAME         RUNNING   COMPLETED
fio-sample   false     true
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Cleaning up
&lt;/h3&gt;

&lt;p&gt;After a successful benchmark run, the resulting objects remain in the Kubernetes cluster. Given that Kubernetes can hold a limited number of pods, it is advisable to clean up benchmark runs from time to time. This can be achieved by deleting the Custom Resource that initiated the benchmark:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl delete --namespace kubestone fio fio-sample
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Since the Custom Resource owns the created resources, the underlying pods, jobs, configmaps, pvcs, etc. are also removed by this operation.&lt;/p&gt;
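&lt;p&gt;Under the hood this is standard Kubernetes garbage collection: each created object carries an &lt;code&gt;ownerReferences&lt;/code&gt; entry pointing back at the Custom Resource, roughly like the fragment below. The UID is the one from the &lt;code&gt;describe&lt;/code&gt; output earlier; the exact fields Kubestone sets may differ:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;metadata:
  ownerReferences:
  - apiVersion: perf.kubestone.xridge.io/v1alpha1
    kind: Fio
    name: fio-sample
    uid: 21cdbe92-d6e3-11e9-ba70-4439c4920abc
    controller: true
    blockOwnerDeletion: true
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;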

&lt;h1&gt;
  
  
  Next steps
&lt;/h1&gt;

&lt;p&gt;Now that you are familiar with the key concepts of Kubestone, it is time to explore and benchmark. You can play around with the Fio benchmark via its &lt;code&gt;cmdLineArgs&lt;/code&gt;, Persistent Volume, and scheduling-related settings. You can find more information about that in Fio's benchmark page. Hopefully you gained some valuable knowledge from this post!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Using Helm with Kubernetes</title>
      <dc:creator>Dmitriy A.</dc:creator>
      <pubDate>Mon, 14 Sep 2020 18:39:50 +0000</pubDate>
      <link>https://forem.com/appfleet/using-helm-with-kubernetes-c5n</link>
      <guid>https://forem.com/appfleet/using-helm-with-kubernetes-c5n</guid>
      <description>&lt;p&gt;Kubernetes is a powerful orchestration system, however, it can be really hard to configure its deployment process. Specific apps can help you manage multiple independent resources like pods, services, deployments, and replica sets. Yet, each must be described in the YAML manifest file.&lt;/p&gt;

&lt;p&gt;It’s not a problem for a single trivial app, but in production it’s best to simplify this process: search for, use, and share already-implemented configurations; create configuration templates; and deploy them without effort. In other words, we need an extended version of a package manager, like &lt;em&gt;APT&lt;/em&gt; for Ubuntu or &lt;em&gt;PIP&lt;/em&gt; for Python, for the Kubernetes cluster. Luckily, we have Helm as a package manager.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Helm?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; is an open-source package manager for Kubernetes that allows developers and operators to package, configure, and deploy applications and services onto Kubernetes clusters easily. It was inspired by Homebrew for macOS and now is a part of the Cloud Native Computing Foundation. &lt;/p&gt;

&lt;p&gt;In this article, we will explore Helm 3.x, the newest version at the time of writing. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuqznhziytlicj0j8wone.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuqznhziytlicj0j8wone.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Searches on Helm Hub for PostgreSQL from dozens of different repositories&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Helm can install software and its dependencies, upgrade software, configure software deployments, fetch packages from repositories, and manage repositories.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Some key features of Helm include&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Role-based access controls (RBAC)&lt;/li&gt;
&lt;li&gt;Go templates, which let you work with configuration as text&lt;/li&gt;
&lt;li&gt;Lua scripts to process configuration as an object&lt;/li&gt;
&lt;li&gt;A release version history with rollback support &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Templates allow you to configure your deployments by changing a few variable values without changing the template directly. Helm packages are called &lt;strong&gt;charts&lt;/strong&gt;, and they consist of a few YAML configuration files and templates that are rendered into Kubernetes manifest files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The basic package (chart) structure&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chart.yaml&lt;/strong&gt; - a YAML file containing metadata about the chart&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LICENSE (optional)&lt;/strong&gt; - a plain text file containing the license for the chart&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;README.md (optional)&lt;/strong&gt; - a human-readable README file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;values.yaml&lt;/strong&gt; - the default configuration values for this chart&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;values.schema.json (optional)&lt;/strong&gt; - a JSON Schema for imposing a structure on the values.yaml file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;charts/&lt;/strong&gt; - a directory holding chart dependencies (it is recommended to declare them in the &lt;code&gt;dependencies&lt;/code&gt; section of &lt;code&gt;Chart.yaml&lt;/code&gt; instead)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;crds/&lt;/strong&gt; - Custom Resource Definitions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;templates/&lt;/strong&gt; - directory of templates that when combined with values, will generate valid Kubernetes manifest files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Templates give you a wide range of capabilities. You can use variables from context, apply different functions (such as ‘quote’, sha256sum), use cycles and conditional cases, and import other files (also other templates or partials).&lt;/p&gt;
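&lt;p&gt;As an illustrative sketch (the names here are hypothetical, not taken from any real chart), a file in &lt;code&gt;templates/&lt;/code&gt; might combine values, the &lt;code&gt;quote&lt;/code&gt; function, and a conditional like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# templates/configmap.yaml (hypothetical example)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  appMode: {{ .Values.appMode | quote }}
  {{- if .Values.debug }}
  logLevel: "debug"
  {{- end }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here &lt;code&gt;.Release.Name&lt;/code&gt; is injected by Helm at install time, while &lt;code&gt;.Values.appMode&lt;/code&gt; and &lt;code&gt;.Values.debug&lt;/code&gt; would come from &lt;code&gt;values.yaml&lt;/code&gt; or &lt;code&gt;--set&lt;/code&gt; overrides.&lt;/p&gt;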
&lt;h1&gt;
  
  
  What are Helm’s abilities?
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;As you operate Helm through a command-line interface (CLI), the &lt;code&gt;helm search&lt;/code&gt; command allows you to search for a package by keywords across the repositories. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can inspect &lt;code&gt;Chart.yaml&lt;/code&gt;, &lt;code&gt;values.yaml&lt;/code&gt;, and &lt;code&gt;README.md&lt;/code&gt; for a given package, and create your own chart with the &lt;code&gt;helm create &amp;lt;chart-name&amp;gt;&lt;/code&gt; command. This command generates a folder with the specified name containing the structure described above.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helm can install a chart from either a directory or a &lt;code&gt;.tgz&lt;/code&gt; archive. To create a &lt;code&gt;.tgz&lt;/code&gt; from your chart folder, use the &lt;code&gt;helm package &amp;lt;path to folder&amp;gt;&lt;/code&gt; command. This creates a &lt;code&gt;&amp;lt;chart-name&amp;gt;-&amp;lt;version&amp;gt;.tgz&lt;/code&gt; archive in your working directory, using the name and version from the metadata defined in the &lt;code&gt;Chart.yaml&lt;/code&gt; file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helm has built-in support for installing packages from an HTTP server. Helm reads a repository index hosted on the server, which describes what chart packages are available and where they are located. This is how the default stable repository works.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can also host your own chart repository: any web server that serves the packaged charts along with an &lt;code&gt;index.yaml&lt;/code&gt; generated by &lt;code&gt;helm repo index&lt;/code&gt; will do. (Note that the &lt;code&gt;helm serve&lt;/code&gt; command from Helm 2 was removed in Helm 3.) This lets you run your own corporate repository or contribute to public ones.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can also run the &lt;code&gt;helm dependency update &amp;lt;chart-path&amp;gt;&lt;/code&gt; command, which verifies that the required charts, as expressed in &lt;code&gt;Chart.yaml&lt;/code&gt;, are present in &lt;code&gt;charts/&lt;/code&gt; at acceptable versions. It additionally pulls down the latest charts that satisfy the dependencies and cleans up old ones.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apart from &lt;em&gt;Chart&lt;/em&gt; and &lt;em&gt;Repository&lt;/em&gt;, another significant concept you should know is &lt;em&gt;Release&lt;/em&gt;, which is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster, and each time it is installed, a new release is created. So you can have multiple PostgreSQL releases in the same cluster, each with its own release name. You can think of this as 'multiple Docker containers from one image'.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
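&lt;p&gt;To make the release concept concrete, a sketch (the release names here are made up):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install blog-db stable/postgresql
helm install metrics-db stable/postgresql
helm list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both releases are instances of the same chart, but each has its own name, configuration, and upgrade history.&lt;/p&gt;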
&lt;h1&gt;
  
  
  How does it work?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fg7mz53leuupbhokztlq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fg7mz53leuupbhokztlq6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: developer.ibm.com&lt;/p&gt;

&lt;p&gt;The Helm client is used for installing, updating, and creating charts, as well as compiling them and submitting them to the Kubernetes API in an acceptable form. The previous major version had a client-server architecture, with a component running inside the Kubernetes cluster called Tiller, which was responsible for the lifecycle of deployments. This approach led to some security issues, which is one of the reasons all functions are now handled by the client.&lt;/p&gt;

&lt;p&gt;Installing Helm 3 is noticeably easier than the previous version since only the client needs to be installed. It is available for Windows, macOS, and Linux. You can install the program from binary releases, Homebrew, or through a configured installation script.&lt;/p&gt;
&lt;h1&gt;
  
  
  Let’s try an example
&lt;/h1&gt;

&lt;p&gt;1. Let's start by installing Helm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bash master $ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current                                 Dload  Upload   Total   Spent    Left  Speed100  6794  100  6794    0     0  25961      0 --:--:-- --:--:-- --:--:-- 25931Error: could not find tillerHelm v3.1.2 is available. Changing from version .Downloading https://get.helm.sh/helm-v3.1.2-linux-amd64.tar.gzPreparing to install helm into /usr/local/binhelm installed into /usr/local/bin/helm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. Check that everything is installed properly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;master $ helm version --short
v3.1.2+gd878d4d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. By default, Helm isn’t connected to any repository, so let’s add the most common one, &lt;em&gt;stable&lt;/em&gt;. (You can list the configured repositories with &lt;code&gt;helm repo list&lt;/code&gt;.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;master $ helm repo add stable 

https://kubernetes-charts.storage.googleapis.com/
"stable" has been added to your repositories
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4. After adding the repository, we should update Helm’s local cache of chart metadata. Helm keeps this local state in your home directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;master $ helm repo update

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The Helm command defaults to discovering the host already set in &lt;code&gt;~/.kube/config&lt;/code&gt;. There is a way to change or override the host, but that's beyond the scope of this scenario.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;master $ helm env

HELM_BIN="helm"
HELM_DEBUG="false"
HELM_KUBECONTEXT=""
HELM_NAMESPACE="default"
HELM_PLUGINS="/root/.local/share/helm/plugins"
HELM_REGISTRY_CONFIG="/root/.config/helm/registry.json"
HELM_REPOSITORY_CACHE="/root/.cache/helm/repository"
HELM_REPOSITORY_CONFIG="/root/.config/helm/repositories.yaml"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5. Let's search for WordPress on the Helm Hub.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;master $ helm search hub wordpress

URL                                                     CHART VERSION   APP VERSION     DESCRIPTION
https://hub.helm.sh/charts/presslabs/wordpress-...      v0.8.4          v0.8.4          Presslabs WordPress Operator Helm Chart
https://hub.helm.sh/charts/presslabs/wordpress-...      v0.8.3          v0.8.3          A Helm chart for deploying a WordPress site on ...
https://hub.helm.sh/charts/bitnami/wordpress            9.0.3           5.3.2           Web publishing platform for building blogs and ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And also search in our repositories (we have only &lt;em&gt;stable&lt;/em&gt; for now).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;master $ helm search repo wordpress

NAME                    CHART VERSION   APP VERSION     DESCRIPTION
stable/wordpress        9.0.2           5.3.2           DEPRECATED Web publishing platform for building...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;6. As mentioned earlier, you can inspect a chart. For example, let’s pull the info from &lt;code&gt;Chart.yaml&lt;/code&gt; for the &lt;em&gt;WordPress&lt;/em&gt; chart. You can also check &lt;code&gt;helm show readme stable/wordpress&lt;/code&gt; and &lt;code&gt;helm show values stable/wordpress&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;master $ helm show chart stable/wordpress

apiVersion: v1
appVersion: 5.3.2
dependencies:
- condition: mariadb.enabled
  name: mariadb
  repository: https://kubernetes-charts.storage.googleapis.com/
  tags:
  - wordpress-database
  version: 7.x.x
deprecated: true
description: DEPRECATED Web publishing platform for building blogs and websites.
home: http://www.wordpress.com/
icon: https://bitnami.com/assets/stacks/wordpress/img/wordpress-stack-220x234.png
keywords:
- wordpress
- cms
- blog
- http
- web
- application
- php
- php
name: wordpress
sources:
- https://github.com/bitnami/bitnami-docker-wordpress
version: 9.0.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;7. Let’s create a namespace for WordPress and install a test chart.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;master $ kubectl create namespace wordpress

namespace/wordpress created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;master $ helm install test-wordpress stable/wordpress --namespace wordpress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output of this command only looks messy because it is so large.&lt;/p&gt;

&lt;p&gt;You can also set variables, such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install test-wordpress \
  --set wordpressUsername=admin \
  --set wordpressPassword=password \
  --set mariadb.mariadbRootPassword=secretpassword \
    stable/wordpress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
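&lt;p&gt;Equivalently, the same overrides can be kept in a file and passed with the &lt;code&gt;-f&lt;/code&gt; flag (a sketch; the key names follow the chart's &lt;code&gt;values.yaml&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# my-values.yaml
wordpressUsername: admin
wordpressPassword: password
mariadb:
  mariadbRootPassword: secretpassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then install with &lt;code&gt;helm install test-wordpress -f my-values.yaml stable/wordpress&lt;/code&gt;.&lt;/p&gt;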



&lt;p&gt;8. Finally, let’s ensure that everything is deployed correctly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fp9mb7tp5xj59ynkp9xi9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fp9mb7tp5xj59ynkp9xi9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
As you can see, everything has been deployed properly.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Helm is a popular open-source package manager that offers a more flexible way to manage a Kubernetes cluster. You can create your own packages, or use public ones from your own or external repositories. Each package is quite flexible and, in most cases, all you need to do is define the right values from which the templates will be rendered to suit your needs. To create your own chart, you can use the power of Go templates and/or Lua scripts. Each update creates a revision in the release history, to which you can roll back at any time. With Helm, you have all the power of Kubernetes. And, in the end, Helm supports role-based access control, so you can manage your cluster as a team.&lt;/p&gt;

&lt;p&gt;This brings us to the end of this brief article explaining the basics and features of Helm. We hope you enjoyed it and were able to make use of it. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Autoscaling an Amazon Elastic Kubernetes Service cluster</title>
      <dc:creator>Dmitriy A.</dc:creator>
      <pubDate>Mon, 14 Sep 2020 18:39:11 +0000</pubDate>
      <link>https://forem.com/appfleet/autoscaling-an-amazon-elastic-kubernetes-service-cluster-2hmf</link>
      <guid>https://forem.com/appfleet/autoscaling-an-amazon-elastic-kubernetes-service-cluster-2hmf</guid>
<description>&lt;p&gt;In this article, we are going to consider the two most common methods of autoscaling in an EKS cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Horizontal Pod Autoscaler (HPA)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cluster Autoscaler (CA)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Horizontal Pod Autoscaler, or HPA,&lt;/strong&gt; is a Kubernetes component that automatically scales your service based on metrics such as CPU utilization, as reported through the Kubernetes metrics server. The HPA scales the pods in a deployment or replica set, and is implemented as a Kubernetes API resource and a controller. The controller manager queries resource utilization against the metrics specified in each HorizontalPodAutoscaler definition, obtaining them from either the resource metrics API (for per-pod resource metrics) or the custom metrics API (for all other metrics).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fow28ko03t4htuw3yi3f1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fow28ko03t4htuw3yi3f1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To see this in action, we are going to configure the HPA and then apply some load to our system. &lt;/p&gt;

&lt;p&gt;To begin, let us install Helm as a package manager for Kubernetes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get &amp;gt; helm.sh
 chmod +x helm.sh
 ./helm.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we are going to set up the server-side portion of Helm 2, called Tiller. This requires a service account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above defines a Tiller service account to which we have assigned the cluster admin role. Now let's go ahead and apply the configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f tiller.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;helm init&lt;/code&gt; using the Tiller service account we have just created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm init --service-account tiller
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this we have installed Tiller onto the cluster, giving it access to manage resources within it. &lt;/p&gt;

&lt;p&gt;With Helm installed, we can now deploy the metrics server. The metrics server is a cluster-wide aggregator of resource usage data: metrics are collected by the &lt;code&gt;kubelet&lt;/code&gt; on each worker node and are used to dictate the scaling behavior of deployments. &lt;/p&gt;

&lt;p&gt;So let's go ahead and install that now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install stable/metrics-server --name metrics-server --version 2.0.4 --namespace metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
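&lt;p&gt;Before moving on, it is worth verifying that the metrics pipeline works (assuming &lt;code&gt;kubectl&lt;/code&gt; is pointed at the cluster). After a minute or two, this should print per-node CPU and memory usage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl top nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;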



&lt;p&gt;Once all checks have passed, we are ready to scale the application. &lt;/p&gt;

&lt;p&gt;For the purpose of this article, we will deploy a special build of Apache and PHP designed to generate CPU utilization:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let us autoscale our deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above specifies that the HPA will increase or decrease the number of replicas to maintain an average CPU utilization across all pods of 50%. Since each pod requests 200 millicores (as specified in the previous command), this corresponds to an average CPU usage of 100 millicores per pod. &lt;/p&gt;
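&lt;p&gt;The same autoscaler can be expressed declaratively; a sketch equivalent to the &lt;code&gt;kubectl autoscale&lt;/code&gt; command above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;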

&lt;p&gt;Let's check the status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get hpa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Review the &lt;code&gt;Targets&lt;/code&gt; column; if it says &lt;code&gt;unknown/50%&lt;/code&gt;, the current CPU consumption is 0%, as we are not yet sending any requests to the server. It takes a couple of minutes for the first value to appear, so let us grab a cup of coffee and come back when we have some data. &lt;/p&gt;

&lt;p&gt;Rerun the last command and confirm that the &lt;code&gt;Targets&lt;/code&gt; column now shows &lt;code&gt;0%/50%&lt;/code&gt;. Now, let's generate some load to trigger scaling by running the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl run -i --tty load-generator --image=busybox /bin/sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
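&lt;p&gt;Once the shell comes up, a loop along these lines (as in the standard Kubernetes HPA walkthrough; &lt;code&gt;php-apache&lt;/code&gt; is the service we exposed above) keeps the service busy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;while true; do wget -q -O- http://php-apache; done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;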



&lt;p&gt;Inside this container, we are going to send an infinite number of requests to our service. If we flip back over to the other terminal, we can watch the autoscaler in action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get hpa -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can watch the HPA scale the pods up from 1 towards our configured maximum of 10, until the average CPU utilization drops below our target of 50%. This takes about 10 minutes, after which you should see 10 replicas running. If we flip back to the other terminal to terminate the load test, then return to the watch, we can see the HPA reduce the replica count back to the minimum. &lt;/p&gt;
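&lt;p&gt;Under the hood, the replica-count decision follows the standard HPA formula, which is worth keeping in mind when choosing targets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)

# e.g. 4 pods averaging 100% CPU against a 50% target:
# ceil(4 * 100 / 50) = 8 replicas
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;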

&lt;h3&gt;
  
  
  Cluster Autoscaler
&lt;/h3&gt;

&lt;p&gt;The Cluster Autoscaler is the standard Kubernetes component for scaling the nodes in a cluster. It automatically increases the size of an Auto Scaling group so that pending pods can be placed successfully, and it tries to remove under-utilized worker nodes (the ones with no pods running) from the Auto Scaling group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F34tnzgblef6m3rv3rg7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F34tnzgblef6m3rv3rg7h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following &lt;code&gt;eksctl&lt;/code&gt; command will create a managed node group backed by an Auto Scaling group with a minimum of one node and a maximum of ten:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create nodegroup --cluster &amp;lt;CLUSTER_NAME&amp;gt; --node-zones &amp;lt;REGION_CODE&amp;gt; --name &amp;lt;REGION_CODE&amp;gt; --asg-access --nodes-min 1 --nodes 5 --nodes-max 10 --managed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we need to apply an inline IAM policy to our worker nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "ec2:DescribeLaunchTemplateVersions"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This policy grants the EC2 worker nodes hosting the Cluster Autoscaler the ability to manipulate the Auto Scaling group. Copy it and attach it to your worker nodes' EC2 IAM role.&lt;/p&gt;
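&lt;p&gt;If you prefer the CLI to the console, the same inline policy can be attached with the following (the role name and file name are placeholders for your environment):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam put-role-policy \
    --role-name &amp;lt;NODE_INSTANCE_ROLE_NAME&amp;gt; \
    --policy-name ClusterAutoscalerPolicy \
    --policy-document file://ca-policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;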

&lt;p&gt;Next, download the following file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And update the following line with your cluster name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/&amp;lt;YOUR CLUSTER NAME&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we can deploy our Autoscaler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f cluster-autoscaler-autodiscover.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Of course we should wait for the pods to finish creating. Once done, we can scale our cluster out. We will consider a simple &lt;code&gt;nginx&lt;/code&gt; application with the following &lt;code&gt;yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: extensions/v1beta2
kind: Deployment
metadata:
  name: nginx-scale
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        resources: 
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 512Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's go ahead and deploy the application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f nginx.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And check the deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deployment/nginx-scale
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's scale a replica up to 10:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl scale --replicas=10 deployment/nginx-scale
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now see some pods in the &lt;code&gt;Pending&lt;/code&gt; state, which is the trigger the Cluster Autoscaler uses to scale out our fleet of EC2 instances.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -o wide --watch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this article, we considered both types of EKS cluster autoscaling. We learnt how the Cluster Autoscaler initiates scale-out and scale-in operations whenever it detects pending pods or under-utilized instances. The Horizontal Pod Autoscaler and the Cluster Autoscaler are essential Kubernetes features when it comes to scaling a microservice application. We hope you found this article useful, but there is more to come. Till then, happy scaling!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Optimize Ghost Blog Performance Including Rewriting Image Domains to a CDN</title>
      <dc:creator>Dmitriy A.</dc:creator>
      <pubDate>Thu, 10 Sep 2020 18:23:33 +0000</pubDate>
      <link>https://forem.com/appfleet/optimize-ghost-blog-performance-including-rewriting-image-domains-to-a-cdn-22dj</link>
      <guid>https://forem.com/appfleet/optimize-ghost-blog-performance-including-rewriting-image-domains-to-a-cdn-22dj</guid>
      <description>&lt;p&gt;The Ghost blogging platform offers a lean and minimalist experience. And that's why we love it. But unfortunately sometimes, it can be too lean for our requirements. &lt;/p&gt;

&lt;p&gt;Web performance has become more important and relevant than ever, especially since Google started including it as a parameter in its SEO rankings. We make sure to optimize our websites as much as possible, offering the best possible user experience. This article will walk you through the steps you can take to optimize a Ghost Blog's performance while keeping it lean and resourceful. &lt;/p&gt;

&lt;p&gt;When we started working on the &lt;a href="https://appfleet.com/blog"&gt;appfleet blog&lt;/a&gt; we began with a few simple things:&lt;/p&gt;

&lt;h3&gt;
  
  
  Ghost responsive images
&lt;/h3&gt;

&lt;p&gt;The featured image in a blog post has lots of parameters, which is a good thing. For example, you can set multiple sizes in &lt;code&gt;package.json&lt;/code&gt; and have Ghost automatically resize images for a responsive experience for users on mobile devices or smaller screens.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"config": {
        "posts_per_page": 10,
        "image_sizes": {
            "xxs": {
                "width": 30
            },
            "xs": {
                "width": 100
            },
            "s": {
                "width": 300
            },
            "m": {
                "width": 600
            },
            "l": {
                "width": 900
            },
            "xl": {
                "width": 1200
            }
                 }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then all you have to do is update the theme's code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;img class="feature-image"
    srcset="{{img_url feature_image size="s"}} 300w,
            {{img_url feature_image size="m"}} 600w,
            {{img_url feature_image size="l"}} 900w,
            {{img_url feature_image size="xl"}} 1200w"
    sizes="800px"
    src="{{img_url feature_image size="l"}}"
    alt="{{title}}"
/&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Common HTML tags for performance
&lt;/h3&gt;

&lt;p&gt;Next we take a few simple steps to optimize &lt;em&gt;Asset Download Time&lt;/em&gt;. That includes adding &lt;code&gt;preconnect&lt;/code&gt; and &lt;code&gt;preload&lt;/code&gt; headers in &lt;code&gt;default.hbs&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;link rel="preconnect" href="https://fonts.gstatic.com/" crossorigin="anonymous"&amp;gt;
&amp;lt;link rel="preconnect" href="https://cdn.jsdelivr.net/" crossorigin="anonymous"&amp;gt;
&amp;lt;link rel="preconnect" href="https://widget.appfleet.com/" crossorigin="anonymous"&amp;gt;

&amp;lt;link rel="preload" as="style" href="https://fonts.googleapis.com/css?family=Red+Hat+Display:400,500,700&amp;amp;display=swap" /&amp;gt;
&amp;lt;link rel="preload" as="style" href="https://cdn.jsdelivr.net/npm/@fortawesome/fontawesome-free@5.13.0/css/all.min.css" /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As we load many files from &lt;a href="https://www.jsdelivr.com/"&gt;jsDelivr&lt;/a&gt; to improve our performance, we instruct the browser to establish a connection with the domain as soon as possible. Same goes for Google Fonts and the sidebar widget that was custom coded.&lt;/p&gt;

&lt;p&gt;More often than not, users who land on a specific blog post from Google or another source will navigate to the homepage to check what else we have written. For that reason, on blog posts we also added &lt;code&gt;prefetch&lt;/code&gt; and &lt;code&gt;prerender&lt;/code&gt; tags for the main blog page.&lt;/p&gt;

&lt;p&gt;That way the browser will asynchronously download and cache it, making the next most probable action of the user almost instant:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;link rel="prefetch" href="https://appfleet.com/blog"&amp;gt;
&amp;lt;link rel="prerender" href="https://appfleet.com/blog"&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;These optimizations definitely helped, but we still had a big problem: our posts often contain many screenshots and images, which weigh heavily on page load time. &lt;/p&gt;

&lt;p&gt;To solve this we took two steps: lazy load the images and use a CDN. The catch is that Ghost doesn't let you modify or filter the contents of a post. All you can do is output the HTML.&lt;/p&gt;

&lt;p&gt;The easiest solution is a dynamic content CDN like &lt;a href="https://www.cloudflare.com/"&gt;Cloudflare&lt;/a&gt;. Such a CDN proxies the whole site: it won't cache the HTML, but it caches all static content like images. Cloudflare also offers an option to lazy load all images by injecting its own JavaScript.&lt;/p&gt;

&lt;p&gt;But we didn't want to use Cloudflare in this case. And didn't feel like injecting third-party JS to lazy load the images either. So what did we do?&lt;/p&gt;

&lt;h3&gt;
  
  
  Nginx to the rescue!
&lt;/h3&gt;

&lt;p&gt;Our blog is hosted on a &lt;a href="https://www.digitalocean.com/"&gt;DigitalOcean&lt;/a&gt; droplet created using its marketplace apps. It's basically an Ubuntu VM that comes pre-installed with Node.js, NPM, Nginx and Ghost.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note that even if you don't use DigitalOcean, it is still recommended to run Nginx in front of Ghost's Node.js app.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This makes the solution pretty simple: we use Nginx to rewrite the HTML, enabling a CDN and lazy loading of images at the same time, without any extra JS.&lt;/p&gt;

&lt;p&gt;For the CDN, you can use the free CDN that Google offers to all AMP projects. Not many people are aware that you can use it as a regular CDN without actually implementing AMP. &lt;/p&gt;

&lt;p&gt;All you have to do is use this URL in front of your images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://appfleet-com.cdn.ampproject.org/i/s/appfleet.com/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Replace the domains with your own and change your &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; tags, and you are done. All images are now served through Google's CDN.&lt;/p&gt;
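&lt;p&gt;The AMP Cache hostname above is derived from your own domain by replacing the dots with dashes. A simplified Python sketch of building such a URL (the real AMP Cache derivation has additional rules for domains that already contain dashes or non-ASCII characters, which this sketch ignores):&lt;/p&gt;

```python
def amp_cache_url(domain: str, image_path: str) -> str:
    # The AMP Cache host is derived from the origin domain by
    # replacing dots with dashes: appfleet.com becomes appfleet-com.
    cache_host = domain.replace(".", "-") + ".cdn.ampproject.org"
    # "/i/" selects the image endpoint; "/s/" marks an https origin.
    return "https://" + cache_host + "/i/s/" + domain + image_path

print(amp_cache_url("appfleet.com", "/blog/content/images/screenshot.png"))
```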

&lt;p&gt;The best part is that the images are not only served but optimized as well. Additionally, it will even serve a WebP version of the image when possible, further improving the performance of your site.&lt;/p&gt;

&lt;p&gt;As for lazy loading, you can use the native functionality of modern browsers: &lt;code&gt;&amp;lt;img loading="lazy"&amp;gt;&lt;/code&gt;. Adding &lt;code&gt;loading="lazy"&lt;/code&gt; to an image instructs the browser to defer loading it until it is about to enter the viewport.&lt;/p&gt;

&lt;p&gt;And now the code itself to achieve this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;

    server_name NAME;

    location ^~ /blog/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host       "appfleet.com";
        proxy_set_header        X-Forwarded-Proto https;
        proxy_pass http://127.0.0.1:2368;
        proxy_redirect off;

        #disable compression 
        proxy_set_header Accept-Encoding "";
        #rewrite the html
        sub_filter_once off;
        sub_filter_types text/html;
        sub_filter '&amp;lt;img src="https://appfleet.com' '&amp;lt;img loading="lazy" src="https://appfleet-com.cdn.ampproject.org/i/s/appfleet.com';
    }

}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;First we disable compression between Node.js and Nginx; otherwise Nginx can't modify the HTML, since it would arrive compressed. &lt;/p&gt;

&lt;p&gt;Next we use the &lt;code&gt;sub_filter&lt;/code&gt; directive to rewrite the HTML. Ghost uses absolute URLs for images, so we match the beginning of the URL as well. In one line we have enabled both the CDN and lazy loading.&lt;/p&gt;

&lt;p&gt;Reload the config and you are good to go. Check our blog to see this in action. &lt;/p&gt;

</description>
      <category>webperf</category>
      <category>webdev</category>
      <category>devops</category>
    </item>
    <item>
      <title>Local Kubernetes testing with KIND</title>
      <dc:creator>Dmitriy A.</dc:creator>
      <pubDate>Thu, 10 Sep 2020 18:22:22 +0000</pubDate>
      <link>https://forem.com/appfleet/local-kubernetes-testing-with-kind-42l</link>
      <guid>https://forem.com/appfleet/local-kubernetes-testing-with-kind-42l</guid>
      <description>&lt;h1&gt;
  
  
  Intro
&lt;/h1&gt;

&lt;p&gt;If you've spent days (or even weeks?) trying to spin up a Kubernetes cluster for learning purposes or to test your application, then your worries are over. Spawned from a Kubernetes Special Interest Group, KIND is a tool that provisions a Kubernetes cluster running IN Docker. &lt;/p&gt;

&lt;p&gt;From the docs:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;kind&lt;/code&gt; is a tool for running local Kubernetes clusters using Docker container "nodes".&lt;br&gt;
&lt;code&gt;kind&lt;/code&gt; is primarily designed for testing Kubernetes 1.11+, initially targeting the &lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/conformance-tests.md"&gt;conformance tests&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Installing KIND
&lt;/h1&gt;

&lt;p&gt;As it is built using &lt;code&gt;go&lt;/code&gt;, you will need to make sure you have the latest version of &lt;code&gt;golang&lt;/code&gt; installed on your machine. &lt;/p&gt;

&lt;p&gt;According to the k8s &lt;a href="https://kind.sigs.k8s.io/docs/contributing/getting-started/"&gt;docs&lt;/a&gt;, Go 1.11.5 or later is preferred. To install kind and create your first cluster, run these commands (it takes a while):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go get -u sigs.k8s.io/kind
kind create cluster
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then confirm the &lt;code&gt;kind&lt;/code&gt; cluster is available:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind get clusters
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  Setting up kubectl
&lt;/h1&gt;

&lt;p&gt;Also, install the latest &lt;code&gt;kubernetes-cli&lt;/code&gt; using &lt;a href="https://brew.sh/"&gt;Homebrew&lt;/a&gt; or &lt;a href="https://chocolatey.org/"&gt;Chocolatey&lt;/a&gt;. Recent Docker releases bundle a Kubernetes feature, but they may ship an older &lt;code&gt;kubectl&lt;/code&gt;. Check its version by running this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl version
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Make sure it shows &lt;code&gt;GitVersion: "v1.14.1"&lt;/code&gt; or above. If you find you are running the &lt;code&gt;kubectl&lt;/code&gt; bundled with Docker, try &lt;code&gt;brew link&lt;/code&gt; or reorder your PATH environment variable.&lt;/p&gt;

&lt;p&gt;Once &lt;code&gt;kubectl&lt;/code&gt; and kind are ready, open a bash console and run these commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export KUBECONFIG=”$(kind get kubeconfig-path)”
kubectl cluster-info
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If &lt;code&gt;kind&lt;/code&gt; is properly set up, cluster information will be shown. Now you are ready to proceed. Yay!&lt;/p&gt;

&lt;h1&gt;
  
  
  Deploying first application
&lt;/h1&gt;

&lt;p&gt;What should we deploy on the cluster? We are going to attempt deploying Cassandra since the docs have a pretty decent walk-through on it. &lt;/p&gt;

&lt;p&gt;First of all, download &lt;a href="https://kubernetes.io/examples/application/cassandra/cassandra-service.yaml"&gt;&lt;code&gt;cassandra-service.yaml&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://kubernetes.io/examples/application/cassandra/cassandra-statefulset.yaml"&gt;&lt;code&gt;cassandra-statefulset.yaml&lt;/code&gt;&lt;/a&gt; for later. Then create &lt;code&gt;kustomization.yaml&lt;/code&gt; by running two &lt;code&gt;cat&lt;/code&gt; commands. Once those &lt;code&gt;yaml&lt;/code&gt; files are prepared, lay them out as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k8s-wp/
  kustomization.yaml
  mysql-deployment.yaml
  wordpress-deployment.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then apply them to your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd k8s-wp
kubectl apply -k ./
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
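&lt;p&gt;The &lt;code&gt;kustomization.yaml&lt;/code&gt; itself only needs to list the two Cassandra manifests as resources. A minimal sketch of creating it from the shell (the exact &lt;code&gt;cat&lt;/code&gt; commands live in the upstream tutorial; this is an equivalent one-liner):&lt;/p&gt;

```shell
# Write a minimal kustomization.yaml listing the two Cassandra manifests.
printf 'resources:\n  - cassandra-service.yaml\n  - cassandra-statefulset.yaml\n' > kustomization.yaml

# Show what we wrote.
cat kustomization.yaml
```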



&lt;h3&gt;
  
  
  Validating (optional)
&lt;/h3&gt;

&lt;p&gt;Get the Cassandra Service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get svc cassandra
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The response is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cassandra   ClusterIP   None         &amp;lt;none&amp;gt;        9042/TCP   45s

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If anything else is returned, the Service creation may have failed. Read &lt;a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/"&gt;Debug Services&lt;/a&gt; for common issues.&lt;/p&gt;

&lt;h1&gt;
  
  
  Finishing up
&lt;/h1&gt;

&lt;p&gt;That's really all you need to know to get started with KIND, I hope this makes your life a little easier and lets you play with Kubernetes a little bit more :)&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Understanding Amazon Elastic Container Service for Kubernetes (EKS)</title>
      <dc:creator>Dmitriy A.</dc:creator>
      <pubDate>Fri, 04 Sep 2020 18:24:14 +0000</pubDate>
      <link>https://forem.com/appfleet/understanding-amazon-elastic-container-service-for-kubernetes-eks-3pji</link>
      <guid>https://forem.com/appfleet/understanding-amazon-elastic-container-service-for-kubernetes-eks-3pji</guid>
      <description>&lt;p&gt;Amazon Elastic Container Service for Kubernetes or EKS provides a &lt;em&gt;Managed Kubernetes Service&lt;/em&gt;. Amazon does the undifferentiated heavy lifting, such as provisioning the cluster, performing upgrades and patching. Although it is compatible with existing plugins and tooling, EKS is not a proprietary AWS fork of Kubernetes in any way. This means you can easily migrate any standard Kubernetes application to EKS without any changes to your code base. You'll connect to your EKS cluster with &lt;code&gt;kubectl&lt;/code&gt; in the same way you would have done in a &lt;em&gt;self-hosted&lt;/em&gt; Kubernetes.&lt;/p&gt;

&lt;p&gt;At this stage, EKS is only loosely integrated with other AWS services, though this is expected to change over time as EKS adoption increases. That said, Kubernetes is much more popular than either Elastic Beanstalk or ECS. &lt;/p&gt;

&lt;h3&gt;
  
  
  Managed Control Plane
&lt;/h3&gt;

&lt;p&gt;EKS provides a Managed Control Plane, which includes the Kubernetes master nodes, the API server and the &lt;code&gt;etcd&lt;/code&gt; persistence layer. As part of the &lt;em&gt;highly-available&lt;/em&gt; control plane, you get three master nodes and three &lt;code&gt;etcd&lt;/code&gt; nodes spread across availability zones, and AWS provides automatic backup snapshotting of the &lt;code&gt;etcd&lt;/code&gt; nodes alongside automated scaling. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbs4g4txgd7f4kmcr8lvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbs4g4txgd7f4kmcr8lvb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With EKS, AWS is responsible for maintaining the master nodes for you, provisioning them across multiple availability zones for redundancy. As your workload increases, AWS adds master capacity for you; if you were running your own Kubernetes cluster, you'd have to scale the control plane yourself as you added worker nodes.  &lt;/p&gt;

&lt;h3&gt;
  
  
  VPC Networking
&lt;/h3&gt;

&lt;p&gt;EKS runs a network topology that integrates tightly with your Virtual Private Cloud (VPC). EKS uses a &lt;strong&gt;Container Network Interface&lt;/strong&gt; (CNI) plugin that replaces the standard Kubernetes overlay network with native VPC networking. This plugin allows you to treat your EKS deployment as just another part of your existing AWS infrastructure: things like &lt;strong&gt;network access control lists, routing tables&lt;/strong&gt; and &lt;strong&gt;subnets&lt;/strong&gt; are all available to the Kubernetes applications running in EKS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyf6lpwmfmnk71x2i8bmk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyf6lpwmfmnk71x2i8bmk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each pod gets an IP address on an Elastic Network Interface, and these addresses belong to the CIDR block of the subnet where the worker node is deployed. In the diagram above, you can see the IP addresses assigned to the virtual Ethernet adapter of each pod. These pod IP addresses are &lt;em&gt;fully routable&lt;/em&gt; within the VPC, and they comply with all the policies and access controls at the network level, so things like security groups and network &lt;code&gt;ACL&lt;/code&gt;s remain in effect. On each EC2 instance (worker node), Kubernetes runs a daemon set that hosts the CNI plugin. The plugin is a thin layer that communicates with a node-local control plane, which maintains a pool of available IP addresses. When the &lt;code&gt;kubelet&lt;/code&gt; on a node schedules a pod, it asks the CNI plugin to allocate an IP address. The plugin grabs a secondary IP address from the pool, associates it with the pod, and hands that configuration back to the &lt;code&gt;kubelet&lt;/code&gt;.&lt;/p&gt;
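&lt;p&gt;The allocation flow described above can be modeled as a toy sketch. This is only an illustration of the warm-pool idea, not the actual VPC CNI code, and the class and pod names are made up:&lt;/p&gt;

```python
# Toy model of the warm-pool flow: a node-local control plane keeps a
# pool of pre-attached secondary IPs and hands one to each new pod.
class NodeLocalIpPool:
    def __init__(self, secondary_ips):
        self.free = list(secondary_ips)   # IPs already attached to the ENI
        self.assigned = {}                # pod name -> IP

    def allocate(self, pod_name):
        # The CNI plugin asks for the next free secondary IP
        # and associates it with the pod.
        ip = self.free.pop(0)
        self.assigned[pod_name] = ip
        return ip

pool = NodeLocalIpPool(["10.0.1.10", "10.0.1.11", "10.0.1.12"])
print(pool.allocate("web-0"))
print(pool.allocate("web-1"))
```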

&lt;h3&gt;
  
  
  EKS-Optimized AMI
&lt;/h3&gt;

&lt;p&gt;The EKS-optimized AMI is based on Amazon Linux 2. It comes pre-configured to work with EKS out of the box, with all the required services pre-installed, including Docker, the kubelet and the AWS IAM Authenticator. When you provision your EKS worker nodes with the AWS-supplied CloudFormation template, it launches the worker nodes with an EC2 user data script that bootstraps them with the configuration they need to join your EKS cluster automatically.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Amazon EKS provides a great way to run a &lt;em&gt;managed&lt;/em&gt; Kubernetes cluster on AWS. It is compatible with open-source Kubernetes, and workloads can be safely migrated to any other Kubernetes cluster at any time. It's worth mentioning that for users who rely on centralized management of Kubernetes clusters, it makes sense to go with EKS over an option such as ECS, since EKS exposes the same API as open-source Kubernetes. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Understanding and building Kubernetes Custom Resource Definitions (CRDs)</title>
      <dc:creator>Dmitriy A.</dc:creator>
      <pubDate>Fri, 04 Sep 2020 18:22:27 +0000</pubDate>
      <link>https://forem.com/appfleet/understanding-and-building-kubernetes-custom-resource-definitions-crds-23d7</link>
      <guid>https://forem.com/appfleet/understanding-and-building-kubernetes-custom-resource-definitions-crds-23d7</guid>
      <description>&lt;h1&gt;
  
  
  Intro
&lt;/h1&gt;

&lt;p&gt;So, let's say you are building a service or application on an orchestration platform such as Kubernetes. In doing so, you must also address an overflowing array of architectural issues, including security, multi-tenancy, API gateways, CLIs, configuration management, and logging.&lt;/p&gt;

&lt;p&gt;Wouldn't you like to save some manpower and development time and focus on creating something unique to your problem?&lt;/p&gt;

&lt;p&gt;Well, it just so happens that your solution lies in what's called a Custom Resource Definition, or CRD. A CRD enables engineers to plug their own objects and applications into the cluster as if they were native Kubernetes components. This is extremely powerful for creating tools and services built on Kubernetes.&lt;/p&gt;

&lt;p&gt;By doing this, you can build out the custom resources for your application as well as use Kubernetes RBAC to provide security and authentication to your application. These custom resources will be stored in the integrated &lt;a href="https://github.com/coreos/etcd" rel="noopener noreferrer"&gt;etcd&lt;/a&gt; repository with replication and proper lifecycle management. They will also leverage all the built-in cluster management features which come with Kubernetes.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why do I need a CRD?
&lt;/h1&gt;

&lt;p&gt;The easiest way to answer this is that you might not! It really depends on your specific project and needs. The Kubernetes Docs answer this question like so: &lt;/p&gt;

&lt;p&gt;Use a &lt;strong&gt;ConfigMap&lt;/strong&gt; if any of the following apply:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There is an existing, well-documented config file format, such as a &lt;code&gt;mysql.cnf&lt;/code&gt; or &lt;code&gt;pom.xml&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;You want to put the entire config file into one key of a ConfigMap.&lt;/li&gt;
&lt;li&gt;The main use of the config file is for a program running in a Pod on your cluster to consume the file to configure itself.&lt;/li&gt;
&lt;li&gt;Consumers of the file prefer to consume it via a file in a Pod or an environment variable in a Pod, rather than the Kubernetes API.&lt;/li&gt;
&lt;li&gt;You want to perform rolling updates via Deployment, etc., when the file is updated.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Use a &lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noopener noreferrer"&gt;secret&lt;/a&gt; for sensitive data, which is similar to a ConfigMap but more secure.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Use a &lt;strong&gt;Custom Resource Definition&lt;/strong&gt; (CRD or Aggregated API) if most of the following apply:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want to use Kubernetes client libraries and CLIs to create and update a new resource.&lt;/li&gt;
&lt;li&gt;You want top-level support from &lt;code&gt;kubectl&lt;/code&gt; (for example: &lt;code&gt;kubectl get my-object object-name&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;You want to build new automation that watches for updates on the new object, and then CRUD other objects, or vice versa.&lt;/li&gt;
&lt;li&gt;You want to write automation that handles updates to the object.&lt;/li&gt;
&lt;li&gt;You want to use Kubernetes API conventions like &lt;code&gt;.spec&lt;/code&gt;, &lt;code&gt;.status&lt;/code&gt;, and &lt;code&gt;.metadata&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;You want the object to be an abstraction over a collection of controlled resources or a summation of other resources.&lt;/li&gt;
&lt;/ul&gt;
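&lt;p&gt;For concreteness, here is a minimal sketch of what a CRD manifest looks like, using the &lt;code&gt;apiextensions.k8s.io/v1&lt;/code&gt; API. The group &lt;code&gt;example.com&lt;/code&gt; and the field names under &lt;code&gt;spec&lt;/code&gt; are made up for illustration:&lt;/p&gt;

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # The name must be spec.names.plural + "." + spec.group.
  name: githooks.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: githooks
    singular: githook
    kind: GitHook
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                repoUrl:
                  type: string
                events:
                  type: array
                  items:
                    type: string
```

&lt;p&gt;Once a manifest like this is applied, the custom kind behaves like a built-in resource: &lt;code&gt;kubectl get githooks&lt;/code&gt; just works.&lt;/p&gt;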

&lt;p&gt;Everyone has different goals in mind and context surrounding those goals; however, if your project needs a flexible way to extend Kubernetes and tries to stick closely to the "Kubernetes-native" way of doing things, then CRDs are right up your alley. You might be asking now: what are some typical CRDs? I'm glad you asked!&lt;/p&gt;

&lt;h3&gt;
  
  
  Githook example CRD
&lt;/h3&gt;

&lt;p&gt;This CRD is called &lt;strong&gt;GitHook&lt;/strong&gt;. It defines &lt;code&gt;git&lt;/code&gt; webhook events and a build pipeline. The GitHook controller subscribes to webhook events on the &lt;code&gt;git&lt;/code&gt; repo, and when those events fire, it runs the build pipeline defined in the CRD.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkwc0u9jfhr5avtq2yydy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkwc0u9jfhr5avtq2yydy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHook CRD controller’s job is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make sure the webhook is registered with the git repo with the correct information.&lt;/li&gt;
&lt;li&gt;Make sure there is a service running and waiting for the webhook events. It uses a Knative service to receive the webhooks, since it is easy to implement and can scale to zero when not in use.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With a little work and a good team, you could string some CRDs like this together and have a startup like &lt;strong&gt;CircleCI&lt;/strong&gt; or &lt;strong&gt;Gitlab&lt;/strong&gt;!&lt;/p&gt;

&lt;h1&gt;
  
  
  Final thoughts
&lt;/h1&gt;

&lt;p&gt;So in closing, CRDs are really amazing extensions of the Kubernetes API that allow a lot of flexibility in the creation of K8s-native applications. I hope this helps you wrap your head around CRDs and their uses a bit more. Thanks for reading!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
