<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: elanderholm</title>
    <description>The latest articles on Forem by elanderholm (@elanderholm).</description>
    <link>https://forem.com/elanderholm</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F536461%2F510fcc7f-3219-46da-96ad-f67d36ae9052.jpeg</url>
      <title>Forem: elanderholm</title>
      <link>https://forem.com/elanderholm</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/elanderholm"/>
    <language>en</language>
    <item>
      <title>Kubernetes Volumes: What They Are and How to Use Them</title>
      <dc:creator>elanderholm</dc:creator>
      <pubDate>Wed, 19 Oct 2022 16:05:23 +0000</pubDate>
      <link>https://forem.com/elanderholm/kubernetes-volumes-what-they-are-and-how-to-use-them-348a</link>
      <guid>https://forem.com/elanderholm/kubernetes-volumes-what-they-are-and-how-to-use-them-348a</guid>
      <description>&lt;h1&gt;
  
  
  Kubernetes Volumes: What They Are and How to Use Them
&lt;/h1&gt;

&lt;p&gt;By default, the file system available to a Kubernetes pod is limited to the pod's lifetime. As such, when the pod is deleted, all changes are lost.&lt;/p&gt;

&lt;p&gt;But many applications will need to store data persistently, irrespective of whether a pod is running or not. For example, we need to retain data that was updated in the database or files written. Also, we may want to share a file system across multiple containers, and those may be running on different nodes.&lt;/p&gt;

&lt;p&gt;Let's take a look at Kubernetes volumes, which can address these problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Basics
&lt;/h2&gt;

&lt;p&gt;Most data storage that applications use is ultimately file system-based, e.g., even though a database may keep some or all of its data in memory while running, it also keeps it updated in the data files on the file system for persistence. &lt;/p&gt;

&lt;p&gt;Volumes allow us to inject the application with a reference to a file system, which the application can then read from or write to. &lt;/p&gt;

&lt;p&gt;Injecting the file system makes it independent of the container's lifetime. We need to specify an absolute path where the injected file system should be mounted within the container's file system. &lt;/p&gt;

&lt;p&gt;Volumes may be persistent or not. There are many different types of volumes, as we shall see. &lt;/p&gt;

&lt;p&gt;A volume has to first be defined using the volumes key, and then used by a container using the volumeMounts key. &lt;/p&gt;

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;Below is a partial YAML snippet to illustrate how we can define and use volumes in a pod. Depending on the type of volume, its definition and usage could be in separate places.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - image: some-image-name
    name: my-container
    volumeMounts:
    - mountPath: /tempfiles
      name: temp-files-volume
  volumes:
  - name: temp-files-volume
    emptyDir: {}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we've defined a volume of the emptyDir type. We'll see more about this later. &lt;/p&gt;

&lt;p&gt;Since this type can only be used within a single pod, not across pods, it's defined along with the pod. There could be multiple containers in a pod (though usually not), and they could all use the same volume. &lt;/p&gt;

&lt;p&gt;So, if one container in a pod writes a new file to the volume, it would be visible to the other containers in that pod that use that volume. The name of the volume can be anything. &lt;/p&gt;

&lt;p&gt;The volumeMounts entry under the container specifies where to mount that volume within the container's file system. In this case, we want /tempfiles. &lt;/p&gt;

&lt;p&gt;When the application in the container writes to /tempfiles, it'll be writing to the volume named temp-files-volume. A container may use many different volumes or none. Note that in order to use volumes, the application in the container has to use the path that we specified in mountPath. &lt;/p&gt;

&lt;p&gt;So, if you want to use a container image with volumes, make sure that the path it uses to read/write files matches the path we specified in volumeMounts. &lt;/p&gt;

&lt;p&gt;A volume could—depending on its type—specify other attributes like &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes"&gt;access modes&lt;/a&gt;, i.e., what kind of access it allows. &lt;/p&gt;

&lt;p&gt;Modes can be ReadWriteOnce, ReadOnlyMany, ReadWriteMany, and ReadWriteOncePod. Note that specifying an access mode may not constrain the actual usage by the container. See access modes for details. &lt;/p&gt;

&lt;p&gt;Now, let's take a look at different types of volumes. &lt;/p&gt;

&lt;h2&gt;
  
  
  Volume Types
&lt;/h2&gt;

&lt;h3&gt;
  
  
  EmptyDir
&lt;/h3&gt;

&lt;p&gt;Kubernetes first creates an emptyDir volume when it assigns the &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/"&gt;pod&lt;/a&gt; using that volume to a &lt;a href="https://kubernetes.io/docs/concepts/architecture/nodes/"&gt;node&lt;/a&gt;. As the name suggests, it's empty to start with, i.e., it contains no files/directories. &lt;/p&gt;

&lt;p&gt;Containers in the same pod can share the volume so that changes made by any container are visible to others. The emptyDir volume persists as long as the pod using it does—a container crash does not delete a pod. &lt;/p&gt;

&lt;p&gt;Thus, it's an ephemeral or temporary kind of storage for things like cached files/data or intermediate results, etc. Also, we cannot use it to share data across pods. &lt;/p&gt;
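&lt;p&gt;As a minimal sketch (the pod and volume names here are hypothetical), an emptyDir can also be backed by RAM and capped in size via the optional medium and sizeLimit fields:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: cache-pod
spec:
  containers:
  - image: some-image-name
    name: cache-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory   # back the volume with tmpfs instead of node disk
      sizeLimit: 64Mi  # evict the pod if usage exceeds this limit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;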

&lt;h3&gt;
  
  
  Persistent
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/"&gt;Persistent volumes&lt;/a&gt; are defined by an administrator at the Kubernetes cluster level and can be used by multiple nodes in the cluster. They can retain their data even if we delete the pod using them. &lt;/p&gt;

&lt;p&gt;Applications in containers can request to use a persistent volume by specifying a persistent volume claim. The claim specifies how much storage of what type it requires and using which access mode. &lt;/p&gt;

&lt;p&gt;The cluster can allocate the storage for a claim in two ways: statically, if an already-provisioned volume satisfies the claim, or dynamically, where no existing volume matches and the cluster tries to provision one on the fly based on the specified storage class. &lt;/p&gt;

&lt;p&gt;The claim, with its allocated storage, remains valid as long as the claim itself exists, independent of the pods that use it. &lt;/p&gt;

&lt;p&gt;The reclaim policy of a volume specifies what to do with a volume once the application no longer needs the volume storage—for example, when we delete a pod using the volume. &lt;/p&gt;

&lt;p&gt;Accordingly, we can either retain or delete the data on the volume. Note also that the available access modes will depend on what type of volume is used. Since Kubernetes itself does not provide a file-sharing solution, we need to set that up first. &lt;/p&gt;

&lt;p&gt;For instance, when using NFS, we need to set up the NFS share first, and then we can refer to it when creating a persistent volume. Additionally, we may need to install drivers for supporting that volume on the cluster. &lt;/p&gt;

&lt;h3&gt;
  
  
  YAML Example
&lt;/h3&gt;

&lt;p&gt;Let's look at an example configuration for an NFS volume.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server-name
    path: "/"
  mountOptions:
    - nfsvers=4.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The name, capacity, and accessModes are common to all types of volumes, whereas the section at the end, "nfs" in this case, is specific to the type of volume. &lt;/p&gt;

&lt;p&gt;We can create the volume with kubectl apply. To get information about a volume, we would use&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get pv &amp;lt;volume-name&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, create a persistent volume claim, again applying it with kubectl apply:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we can request a particular storage class (useful for dynamic provisioning), the access mode, and the amount of storage needed. &lt;/p&gt;

&lt;p&gt;We can query for a claim using &lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get pvc &amp;lt;claim-name&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Finally, we can use the claim in a pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  volumes:
    - name: my-pv-storage
      persistentVolumeClaim:
        claimName: my-pv-claim
  containers:
    - name: my-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/data"
          name: my-pv-storage

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we link the persistent volume to the claim we created earlier. Then, as usual, we refer to the volume to mount it at the specified path in the container. &lt;/p&gt;

&lt;p&gt;Next, let's go through the supported types of persistent volumes. &lt;/p&gt;

&lt;h3&gt;
  
  
  HostPath
&lt;/h3&gt;

&lt;p&gt;This is probably the easiest way to test persistent volumes. &lt;/p&gt;

&lt;p&gt;HostPath mounts content from the node's file system into the pod. It has specific use cases, like when the container needs to run system tools or access Docker internals. Containers usually shouldn't make any assumptions about the host node, so good practice discourages such use. &lt;/p&gt;

&lt;p&gt;Also, hostPath exposes the host's file system—and potentially the cluster—to security flaws in the application. We should only use it for testing on a single node, as it doesn't work in a multi-node cluster. You can check out the local volume type instead. &lt;/p&gt;

&lt;h3&gt;
  
  
  Local
&lt;/h3&gt;

&lt;p&gt;Using local storage devices mounted on nodes is a better alternative to hostPath for sharing a file system between multiple pods on the same node. &lt;/p&gt;

&lt;p&gt;The volume definition contains node affinity, which identifies the particular node on which the local storage is available. The scheduler uses that node affinity to assign pods that use the volume to the node holding the local storage. &lt;/p&gt;

&lt;p&gt;If the node with the local storage becomes unhealthy, the storage will become unavailable, and pods using it will fail too. Thus, local storage is not suitable where fail safety is important. &lt;/p&gt;
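&lt;p&gt;To sketch what this looks like (the node name, device path, and storage class here are placeholders), a local persistent volume pins itself to a node via nodeAffinity:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1     # device or directory on the node
  nodeAffinity:               # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;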

&lt;h3&gt;
  
  
  Projected
&lt;/h3&gt;

&lt;p&gt;A projected volume maps several existing volume sources into the same directory. The supported volume types for this are downwardAPI, secret, configMap, and serviceAccountToken. &lt;/p&gt;
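&lt;p&gt;As an illustrative sketch (the secret and ConfigMap names are hypothetical), a projected volume combines multiple sources under one mount path:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: projected-pod
spec:
  containers:
  - name: my-container
    image: some-image-name
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:           # each source appears under the same directory
      - secret:
          name: my-secret
      - configMap:
          name: my-config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;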

&lt;h3&gt;
  
  
  ISCSI
&lt;/h3&gt;

&lt;p&gt;iSCSI—SCSI over IP—is an IP-based standard for transferring data that supports host access by carrying &lt;a href="https://en.wikipedia.org/wiki/SCSI"&gt;SCSI&lt;/a&gt; commands over IP networks. SCSI is a set of standards for physically connecting and transferring data between computers and peripheral devices. &lt;/p&gt;

&lt;h3&gt;
  
  
  CSI
&lt;/h3&gt;

&lt;p&gt;The container storage interface defined by Kubernetes is a standard for exposing arbitrary block and file storage systems to containerized workloads. To support using a new type of file system as a volume, we need to write a CSI driver for that file system and install it on the cluster. A list of CSI drivers can be seen &lt;a href="https://kubernetes-csi.github.io/docs/drivers.html"&gt;here&lt;/a&gt;, including drivers for file systems on popular cloud providers like AWS and Azure. &lt;/p&gt;
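&lt;p&gt;As a rough sketch, a persistent volume backed by a CSI driver refers to the driver by name and to a volume the driver already knows about. The driver name and volume handle below are placeholders; the real values come from your CSI driver's documentation.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: example.csi.vendor.com   # name of the installed CSI driver
    volumeHandle: existing-volume-id # ID of the volume on the storage system
    fsType: ext4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;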

&lt;h3&gt;
  
  
  Fc
&lt;/h3&gt;

&lt;p&gt;Fc, or Fibre Channel storage, is a high-speed network that attaches servers and storage devices. &lt;/p&gt;

&lt;h3&gt;
  
  
  Nfs
&lt;/h3&gt;

&lt;p&gt;The Network File System is a distributed file system protocol originally developed by Sun Microsystems, built on the Open Network Computing Remote Procedure Call (ONC RPC). &lt;/p&gt;

&lt;h3&gt;
  
  
  Cephfs
&lt;/h3&gt;

&lt;p&gt;A Ceph file system is a POSIX-compliant, open-source file system built on top of Ceph’s distributed object store, RADOS. It provides a multi-use, highly available, and performant file store. &lt;/p&gt;

&lt;h3&gt;
  
  
  RBD
&lt;/h3&gt;

&lt;p&gt;A RADOS block device is the device on which the Ceph file system is built. Block storage allows us to access storage as blocks of raw data rather than files and directories. &lt;/p&gt;

&lt;h3&gt;
  
  
  AwsElasticBlockStore (deprecated)
&lt;/h3&gt;

&lt;p&gt;We can use this volume type to mount an AWS EBS store. It is now deprecated, so we should use the CSI drivers instead. &lt;/p&gt;

&lt;h3&gt;
  
  
  AzureDisk (deprecated)
&lt;/h3&gt;

&lt;p&gt;This is used to mount an Azure disk. It is now deprecated, so we should use the CSI drivers instead. &lt;/p&gt;

&lt;p&gt;The above list of persistent volume types is not exhaustive, but it covers the commonly used types. &lt;/p&gt;

&lt;h2&gt;
  
  
  ConfigMap
&lt;/h2&gt;

&lt;p&gt;This type of volume exposes key-value pairs from a ConfigMap as files on the file system. &lt;/p&gt;

&lt;p&gt;Specifically, the key becomes the file name, and the value becomes the file contents. For example, the key-value pair log-level=debug is represented as a file named log-level whose contents are "debug". We can specify the path at which we want to mount the volume in the container. But first, we need to create a ConfigMap using kubectl create. &lt;/p&gt;

&lt;p&gt;We can create it from properties files or literal values. It's also possible to expose the values from the ConfigMap as environment variables for a pod. See the &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/"&gt;documentation&lt;/a&gt; for details. &lt;/p&gt;
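&lt;p&gt;For illustration (the ConfigMap, pod, and mount path names are hypothetical), we could create the map with &lt;code&gt;kubectl create configmap app-config --from-literal=log-level=debug&lt;/code&gt; and then mount it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: config-pod
spec:
  containers:
  - name: my-container
    image: some-image-name
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config   # each key appears as a file under /etc/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The container would then see a file /etc/config/log-level containing "debug". &lt;/p&gt;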

&lt;h3&gt;
  
  
  Downward API
&lt;/h3&gt;

&lt;p&gt;The downward API exposes pod and container field values to applications. The downward API volume exposes these key-value pairs as files on the file system, similar to ConfigMap above. &lt;/p&gt;
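&lt;p&gt;A minimal sketch (pod name, label, and mount path are hypothetical) that exposes the pod's labels as a file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: downward-pod
  labels:
    app: my-app
spec:
  containers:
  - name: my-container
    image: some-image-name
    volumeMounts:
    - name: pod-info
      mountPath: /etc/pod-info
  volumes:
  - name: pod-info
    downwardAPI:
      items:
      - path: "labels"          # written to /etc/pod-info/labels
        fieldRef:
          fieldPath: metadata.labels
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;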

&lt;h3&gt;
  
  
  Secret
&lt;/h3&gt;

&lt;p&gt;This is a tmpfs-based file system used to store secrets, e.g., for authentication. It's similar to ConfigMap. We need to first create a secret using the Kubernetes API. We can also expose secrets as environment variables. &lt;/p&gt;
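&lt;p&gt;As a sketch (the secret name, key, and mount path are hypothetical), we could create a secret with &lt;code&gt;kubectl create secret generic db-credentials --from-literal=password=s3cr3t&lt;/code&gt; and mount it read-only:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
spec:
  containers:
  - name: my-container
    image: some-image-name
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: db-credentials  # each key appears as a file under /etc/secrets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;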

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this post, we've highlighted how to inject file systems into Kubernetes pods using volumes. &lt;/p&gt;

&lt;p&gt;We've also explored the different kinds of volumes and their uses. Using volumes allows us to use various types of storage, persist data independent of the pod, and also share data across pods. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>opensource</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Environments as a Service (EaaS) - Top 3 benefits</title>
      <dc:creator>elanderholm</dc:creator>
      <pubDate>Thu, 11 Mar 2021 16:27:42 +0000</pubDate>
      <link>https://forem.com/elanderholm/environments-as-a-service-eaas-top-3-benefits-1dfl</link>
      <guid>https://forem.com/elanderholm/environments-as-a-service-eaas-top-3-benefits-1dfl</guid>
      <description>&lt;h1&gt;
  
  
  Environments as a Service
&lt;/h1&gt;

&lt;p&gt;Everything is a service these days; Everything as a Service (&lt;a href="https://simple.wikipedia.org/wiki/Everything_as_a_service"&gt;XaaS&lt;/a&gt;) actually exists. So you can be forgiven for not knowing exactly what Environments as a Service (EaaS) is and why it could be a game changer for your business.&lt;/p&gt;

&lt;p&gt;EaaS is the natural extension of infrastructure as a service (IaaS, e.g. AWS, GCP, etc.), but instead of just the hardware and base software,  EaaS includes all your code and settings as well as the infrastructure and software to run your application in an isolated environment.  You describe your application to the system and the EaaS platform does the rest.  &lt;/p&gt;

&lt;p&gt;These environments can be used for performance testing, QA, Sales Demos, large software and/or data migrations—even production.  Aside from production, these environments are ephemeral; they come and go based on your particular SDLC.&lt;/p&gt;

&lt;p&gt;There are a lot of reasons to implement an EaaS, but there are 3 specific benefits that can change the whole trajectory of your business. &lt;/p&gt;

&lt;h2&gt;
  
  
  Cost Control
&lt;/h2&gt;

&lt;p&gt;Using a cloud provider has huge advantages, but there are some downsides, and a big one is cost.  It isn’t that using a cloud provider means your infrastructure costs are necessarily higher, but the risk of accidentally spending a lot more money than you meant to is real!  An EaaS can make this a non-issue.  Once you know what an environment costs to create, you can understand your expenditures in a way not possible when dealing with AWS or GCP directly.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Don't forget to create all the correct billing alarms in AWS; they aren't on by default!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Your EaaS provider gives you total control over your cloud bill by allowing you to limit the number of environments created and what they consist of. Each environment can be scaled to match its purpose so that costs can be contained. For example, your demo environments need not be as big or fast as your load testing environment. Also, as noted above, your environments are ephemeral, so they can be spun up and deleted automatically for only as long as you are actually going to use them.&lt;/p&gt;

&lt;p&gt;The cost of building your own internal EaaS is complicated to calculate, but you need to take into account a team of specialized DevOps engineers working on the project for 6-18+ months (depending on complexity), maintenance of the platform each year, the cost of adopting new technologies, handling all your own AWS or other cloud costs, internal product management to make sure it stays competitive, and so on.  It’s not cheap, nor is it easy to understand the costs of this kind of system.  Even a team of only 3 DevOps engineers working on an EaaS for about 6 months costs over half a million dollars, and that does not take into account opportunity cost and maintenance/upgrading costs going forward.&lt;/p&gt;

&lt;p&gt;An EaaS you buy will be usable sooner than one you build yourself, and its costs will be simpler to understand and more affordable.  There are many things to spend your limited resources on to move your business forward, but building your own EaaS is not one of them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Massive Increases in Speed
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;“Speed Kills.” - Al Davis&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;All things being equal you have an advantage in football and business if you are more agile than your opponents or competitors.  Obviously, in software development you need to define speed with a component of quality.  Deploying a bunch of changes quickly that result in lower conversion or (worse) down-time is a huge problem, and that’s not how we want to measure speed: it’s too naive.  &lt;/p&gt;

&lt;p&gt;We need to measure our velocity by only counting product deliverables that meet or exceed your key metrics and don’t compromise the stability of the application.  Having a fast and capable EaaS could increase your teams’ velocity more than implementing any other kind of platform.  An EaaS can improve your velocity in at least two dimensions by removing bottlenecks and decreasing rework.  &lt;/p&gt;

&lt;p&gt;With an EaaS your releases will never get stuck because of a lack of staging or QA environments. The ability to create and destroy production like environments with your EaaS means your releases aren’t being delayed because of environmental bottlenecks. Since your environments are now ephemeral you can create more when your teams need them and shut them down when they aren’t needed.  Capacity planning can now be done in real time with the platform reacting to your teams’ changes in velocity.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/5x1nPw1sTjoz1kGpj2UBRn/b68500948be55a7cbc0f9c436c1f6c1d/Screen_Shot_2021-03-09_at_11.40.06_AM.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/5x1nPw1sTjoz1kGpj2UBRn/b68500948be55a7cbc0f9c436c1f6c1d/Screen_Shot_2021-03-09_at_11.40.06_AM.png" alt="Ephemeral Environments"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;So many beautiful Ephemeral Environments to work with!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Rework is the most costly kind of work you can do.  You always want to tackle rework as early as possible.  You can’t avoid it completely, but you can minimize its impact.  Often features are complex—even the smallest ones—with all the ways people may interact with it.  From mobile apps to APIs, features often need to be seen or experienced by many people and multiple teams before release to avoid rework.  Having access to an isolated, ephemeral environment that looks like ‘production + the new feature’ gives each team the ability to test earlier on and give feedback while the developers and designers are creating it.  A good EaaS will give you the confidence in your quality and remove bottlenecks, allowing all your teams to achieve more in less time than ever before. &lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing New Technology
&lt;/h2&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/6LgleidDIn2KnbbkRar4WA/3458272bd6d6c0be911672574396f930/devops-tools-690x460.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/6LgleidDIn2KnbbkRar4WA/3458272bd6d6c0be911672574396f930/devops-tools-690x460.png" alt="Devops Tools and Technolgies"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Simplified DevOps technologies image, courtesy of OSOLABS.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Technologies are constantly changing and evolving.  It wasn’t that long ago that open source databases and the cloud weren’t considered mature enough technologies for the enterprise.  It was even less time ago that you didn’t use containers, just virtual machines (VMs) on AWS using EC2.  If you started building an internal EaaS 4-5 years ago, you most likely did not build it using Kubernetes (k8s); it just wasn’t ready for production workloads at the time.  The retrofit of an internal EaaS from managing VMs or containers to managing k8s is a non-trivial process.  This is one of the major reasons you want to use an external EaaS.&lt;/p&gt;

&lt;p&gt;Kubernetes is amazing when it’s running well and managed by an experienced group of people.  Implementing it is not for the faint of heart and it changes rapidly compared to more mature software.  It’s probably the ultimate “&lt;a href="https://en.wiktionary.org/wiki/footgun"&gt;footgun&lt;/a&gt;” in devops at the moment.&lt;/p&gt;

&lt;p&gt;K8s is not an EaaS by itself, but just a piece of the system—albeit an important one.  In order to implement an internal EaaS on Kubernetes you need a deep understanding of it and a group of talented engineers to create an EaaS on top of it.  And after you do all that, what happens if Kubernetes gets supplanted by something else?  You will be stuck at that decision again, needing to invest all the time and resources to implement something new, or stay with what you have and hope it doesn’t hold you back compared to your competitors.  &lt;/p&gt;

&lt;p&gt;Cloud providers are always supporting new technologies and many times the documentation, support, and stability of these technologies leaves something to be desired.  An EaaS can help you avoid the time and distraction it takes to learn and implement all these changing technologies.  You can think of Kubernetes as an engine and an EaaS as a car.  Most of us buy a car, not all the parts of the car to make it yourself, unless we are in the business of building cars.  You are in the business of creating amazing AI products, collaboration software, or just about anything other than an EaaS.     &lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;An EaaS is crucial for your business to move as fast as possible while not sacrificing quality.  The ability to control and predict costs while producing high quality work as quickly as possible is the holy grail of product development.  The inability to quickly produce isolated environments of any specification will hold you back in innumerable ways. &lt;/p&gt;

&lt;p&gt;EaaS platforms are extremely complicated systems with rapidly changing technologies underpinning them.  Just as most companies shouldn’t create their own alerting and monitoring solution, but instead use Datadog or an analog, they shouldn’t be creating their own EaaS. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;At Release we work tirelessly to bring your application to life in an orchestrated, human interface. We write software to deal with all the complexity, difficulty, and strain so that no one else has to (unless they want to!). We create the engine that drives the Kubernetes vehicle, and we deliver solutions that our customers can use to get on with their business of doing business. Check out &lt;a href="https://releasehub.com"&gt;Release&lt;/a&gt; and let us help your business streamline feature development with EaaS!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@alschim?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Alexander Schimmeck&lt;/a&gt; on &lt;a href="https://unsplash.com/"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>eaas</category>
      <category>saas</category>
    </item>
    <item>
      <title>Feature Flags and Ephemeral Environments</title>
      <dc:creator>elanderholm</dc:creator>
      <pubDate>Tue, 08 Dec 2020 19:20:32 +0000</pubDate>
      <link>https://forem.com/elanderholm/feature-flags-and-ephemeral-environments-n23</link>
      <guid>https://forem.com/elanderholm/feature-flags-and-ephemeral-environments-n23</guid>
      <description>&lt;h1&gt;
  
  
  Feature Flags and Ephemeral Environments
&lt;/h1&gt;

&lt;p&gt;Feature Flags are a necessary and ubiquitous part of modern software development.  As your company and the complexity of your application grows it becomes imperative to be able to control what features are available to your internal development teams, stakeholders and customers.  In the long-ago, before-times, we would just have a variable that you would toggle between true and false to control behavior of your application.  However, as application development transitioned to the Web we needed the same kind of control, except that hard-coded feature flags just weren’t going to cut it. Enter Dynamic Feature Flags!&lt;/p&gt;

&lt;p&gt;Dynamic feature flags were a big improvement over static feature flags, but also added complexity and presented challenges different from static feature flags.  Gone were hard-coded flags, but they were replaced with if statements and more importantly, retrieving the appropriate flags for your application.  Most people started by rolling their own, but as developing with feature flags gained popularity many different companies popped into existence looking to solve the problems of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One interface to manage your flags&lt;/li&gt;
&lt;li&gt;Easy maintenance of your flags&lt;/li&gt;
&lt;li&gt;Very fast and reliable retrieval of your flags&lt;/li&gt;
&lt;li&gt;Splitting traffic to one feature or another &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While companies like LaunchDarkly, Optimizely, Rollout, Split.io, and others made it fairly easy to create and manage these flags, this doesn’t solve all of your issues.  Many software orgs, especially as they grow, need lots of environments for testing. This poses a challenge to your feature flag setup, especially if your environments are ephemeral.&lt;/p&gt;

&lt;p&gt;Ephemeral environments are like any environment except they will be removed in a relatively short amount of time unlike your staging or production environments.  Good examples are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Feature branches&lt;/li&gt;
&lt;li&gt;Sales Demos&lt;/li&gt;
&lt;li&gt;Load Testing&lt;/li&gt;
&lt;li&gt;Refactors&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These environments may not last a long time, but they are exceedingly important and can be just as complex as production.  While a sales demo environment may be able to function with seed data, a load testing environment will need production or production-like data and many replicas of each service to give a valid result.  These can be super complex to create and manage and their ephemeral nature can play havoc with your feature flag setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Feature Flag Environments to the Rescue…Sort of
&lt;/h2&gt;

&lt;p&gt;LaunchDarkly (and others) recognized this issue and created the concept of environments in their own applications.  You can read about their implementation here.  They have APIs that allow you to create and manipulate these sets of feature flags on an environment-by-environment basis. This works great if you have a finite set of environments that doesn’t change often, but with ephemeral environments the ability to spin them up and down is a feature, not a bug.&lt;/p&gt;

&lt;p&gt;In order to simplify this issue, most people create two kinds of environments in their favorite feature flag provider: one for development (or test) and one for production.  In larger organizations, development teams may have a few, such as development, test, UAT, staging, and production.  This works fine as long as you never need to add another one and never take the plunge toward truly ephemeral application environments.  &lt;/p&gt;

&lt;p&gt;Once you move to ephemeral environments, most people take the shortcut of assigning every ephemeral environment to a single feature flag environment, which is simple enough but creates a large problem with people stepping on each other’s toes.  &lt;/p&gt;

&lt;p&gt;Imagine you have 10 environments all pointing to a single database with writes happening from all those environments: it’s the same issue here.  The great thing about feature flags is the ability to toggle them and see different behavior, but if every environment is pointing to the same flag environment, you now have another resource contention problem.  If you toggle Feature A ‘on’, what’s to stop your co-worker from toggling it ‘off’?  Any issues you have with permanent staging environments are magnified with ephemeral environments.&lt;/p&gt;

&lt;p&gt;The best solution would be, upon the creation of an ephemeral environment, to create an environment in LaunchDarkly based on something unique about your ephemeral environment, and, when it comes up, make sure it is using the unique SDK key for that particular feature flag environment.  Let’s implement the workflow and see how that would work with Release!&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/6iJFF3Zu70PwOxkxXCKtp3/24f7412df0886df79f4ff1239d1eeffc/image9.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/6iJFF3Zu70PwOxkxXCKtp3/24f7412df0886df79f4ff1239d1eeffc/image9.png" alt="FF workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Working with Ephemeral Environments
&lt;/h2&gt;

&lt;p&gt;To try this out with Release we need a repository with a Dockerfile that has feature flags implemented with LaunchDarkly. I’m going to use &lt;a href="https://github.com/elanderholm/rails_postgres_redis"&gt;this&lt;/a&gt; repository on GitHub; you can do the same by first forking the repository so you can use it to create an application with Release.&lt;/p&gt;

&lt;p&gt;Once you have forked the repository, navigate to &lt;a href="https://releasehub.com"&gt;releasehub.com&lt;/a&gt; and sign in with GitHub to follow along with this example.&lt;/p&gt;

&lt;p&gt;The steps to get our ephemeral environments created in Release with support for environments in LaunchDarkly are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create our application in Release&lt;/li&gt;
&lt;li&gt;Create a job with Release to create the environment in LaunchDarkly&lt;/li&gt;
&lt;li&gt;Add some environment variables so the application can contact LaunchDarkly and pull in the SDK key from our newly created LaunchDarkly environment&lt;/li&gt;
&lt;li&gt;Deploy our Ephemeral Environment&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;If you don’t have a LaunchDarkly account, you can create a free 30-day trial account to use for this example. You will also need to create at least one feature flag. If you already have a LaunchDarkly account with feature flags, you can skip this step.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/4O6rr31peejXL43aLTPrAi/dc7022076bd7b28e2e189431a5262c7b/image3.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/4O6rr31peejXL43aLTPrAi/dc7022076bd7b28e2e189431a5262c7b/image3.png" alt="LaunchDarkly Test Flag"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the Application In Release
&lt;/h2&gt;

&lt;p&gt;Once we are logged into Release, we want to click &lt;strong&gt;Create New Application&lt;/strong&gt; in the left-hand sidebar. After doing that, we will be presented with the Create New Application workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/5LbYmJLNvAIZwuUqcjMtsv/5aeb59c6fef3b73532e04125a676dd0e/image12_png.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/5LbYmJLNvAIZwuUqcjMtsv/5aeb59c6fef3b73532e04125a676dd0e/image12_png.png" alt="Create Application With Refresh Button"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, we will click the "refresh" button to find our newly forked repository. Then, we will select that repository and choose “Docker” for the “api” service.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/7HFUIDbur1n6r5aELXgb01/f8e12dfed6ca2cb6130fbdb68c98342a/image2.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/7HFUIDbur1n6r5aELXgb01/f8e12dfed6ca2cb6130fbdb68c98342a/image2.png" alt="Pick your Repository"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lastly, name your application and click the purple “Generate App Template” button to generate the template for your configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/WKSmTIAeyTjNKCHLTVbEe/a17a56b380c61066b4fa89fc85690475/image4.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/WKSmTIAeyTjNKCHLTVbEe/a17a56b380c61066b4fa89fc85690475/image4.png" alt="Name your Application"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Modify the Application Template
&lt;/h2&gt;

&lt;p&gt;Before we can deploy our environment(s), we need to make a modification to our application template and add a few environment variables. We also need to create a job that will create our LaunchDarkly environment upon the initial environment deployment. Jobs in Release are described in detail &lt;a href="https://docs.releasehub.com/reference-guide/application-settings/application-template#jobs"&gt;here&lt;/a&gt;. The TL;DR is that with a small amount of configuration you can run any arbitrary script or task in a container; for example, jobs are very useful for running migrations before a deployment of your backend service. In this case we will run a rake task to set up our LaunchDarkly environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
- name: create-launch-darkly-env
  from_services: api
  args:
  - bundle
  - exec
  - rake
  - launch_darkly:create_environment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The above yaml represents a job in Release&lt;/p&gt;
&lt;/blockquote&gt;
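&lt;p&gt;The rake task itself isn’t shown in this post, but a rough sketch of what &lt;code&gt;launch_darkly:create_environment&lt;/code&gt; might do, assuming it calls LaunchDarkly’s create-environment REST endpoint with the &lt;strong&gt;RELEASE_ENV_ID&lt;/strong&gt; that Release injects, could look like this (the method names and the color value are assumptions):&lt;/p&gt;

```ruby
# Hypothetical sketch of the work behind launch_darkly:create_environment.
# POST /api/v2/projects/{projectKey}/environments creates an environment;
# "name", "key", and "color" are the required fields.
require "json"
require "net/http"
require "uri"

# Build the request body for a new environment keyed off the Release env id.
def environment_payload(env_id)
  { name: env_id, key: env_id, color: "417505" }
end

def create_launch_darkly_environment(env_id:, project:, api_key:)
  uri = URI("https://app.launchdarkly.com/api/v2/projects/#{project}/environments")
  request = Net::HTTP::Post.new(uri, "Authorization" => api_key,
                                     "Content-Type" => "application/json")
  request.body = JSON.generate(environment_payload(env_id))
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
end

# Only hit the API when the variables the Release job provides are present.
if ENV["LAUNCH_DARKLY_API_KEY"]
  create_launch_darkly_environment(
    env_id:  ENV.fetch("RELEASE_ENV_ID"),
    project: ENV.fetch("LAUNCH_DARKLY_PROJECT_NAME", "default"),
    api_key: ENV.fetch("LAUNCH_DARKLY_API_KEY")
  )
end
```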

&lt;p&gt;We will place the above lines right before the “services” stanza in our application template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;memory:
   limits: 1Gi
   requests: 100Mi
 replicas: 1
jobs:
  - name: create-launch-darkly-env
    from_services: api
    args:
    - bundle
    - exec
    - rake
    - launch_darkly:create_environment
services:
  - name: api
    image: erik-opsnuts-test-001/rails_postgres_redis/api
    has_repo: true
    static: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Place the jobs snippet into the Application Template&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For Release to run this job as part of the workflow that deploys an environment, we need to add one line near the bottom of the file, in the “workflows” section. Under the “setup” workflow’s “order_from” list, add &lt;strong&gt;jobs.create-launch-darkly-env&lt;/strong&gt;. Then, click “Save and Continue.”&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;workflows:
- name: setup
  order_from:
  - jobs.create-launch-darkly-env
  - services.all
- name: patch
  order_from:
  - services.api
  - services.sidekiq
  - services.db
  - services.redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Place jobs.create-launch-darkly-env before services.all under the workflows stanza&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;That’s all the configuration needed; now we just need to add two environment variables before we deploy!&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding Environment Variables
&lt;/h2&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/2afDqMwgfbBgsxc5WHAlCL/f43f8e929dcd4e73d0efdb46cab6dc3b/image11.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/2afDqMwgfbBgsxc5WHAlCL/f43f8e929dcd4e73d0efdb46cab6dc3b/image11.png" alt="Adding Env Variables"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click 'EDIT' next to 'Default Environment Variables' to bring up the editor. We will add two environment variables that contain information about LaunchDarkly. They are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LAUNCH_DARKLY_API_KEY&lt;/strong&gt;: Your LaunchDarkly API key, which you can find under your LaunchDarkly account settings. If you don’t have an API token, click the “+ TOKEN” button to make one. You will want to give it admin privileges; if you can’t do that, contact your administrator. &lt;em&gt;&lt;strong&gt;Once you create it, make sure you copy it and paste it somewhere you can retrieve it.&lt;/strong&gt;&lt;/em&gt; LaunchDarkly will obfuscate your token, and if you don’t save it somewhere you will need to generate a new one.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/6jdsjV6ETNJUOnmCNWP9tZ/75653f2eeb2217930777e900b4ff6746/image7.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/6jdsjV6ETNJUOnmCNWP9tZ/75653f2eeb2217930777e900b4ff6746/image7.png" alt="Create LD token"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LAUNCH_DARKLY_PROJECT_NAME&lt;/strong&gt;: We will just use ‘default’ for this example, but if there is another project you would like to test with, feel free to use it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;defaults:
- key: POSTGRES_USER
  value: postgres
- key: POSTGRES_PASSWORD
  value: postgres
- key: LAUNCH_DARKLY_PROJECT_NAME
  value: default
- key: LAUNCH_DARKLY_API_KEY
  value: your-api-key
  secret: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Click ‘Save’ to save your environment variables as part of your application configuration. Then, click ‘Build and Deploy’. You will be redirected to the activity dashboard for the application, and a Docker build will be kicked off in the background, followed by the deployment of the environment for your application. You can view the build and the deployment under the ‘builds’ and ‘deploys’ sections, respectively.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/0RxHLU0FDRkPRdE8NRzDf/b086b5e3ff3fb0cec0b744ec4e43f8d9/image10.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/0RxHLU0FDRkPRdE8NRzDf/b086b5e3ff3fb0cec0b744ec4e43f8d9/image10.png" alt="left-hand-nav"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Your Environment
&lt;/h2&gt;

&lt;p&gt;The Docker build will take a few minutes the first time. Once the build and deployment have finished, you can find the URL for your new environment by clicking ‘Environments’ on the left and then clicking into your new environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/3JfGh3cQnbXMea0fkcCZd1/061efc9db04bfcb0c34e66826a0b06cf/image6.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/3JfGh3cQnbXMea0fkcCZd1/061efc9db04bfcb0c34e66826a0b06cf/image6.png" alt="Enviro"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you click the URL for your newly created ephemeral environment, another browser window will open to the example Rails site with Postgres and Redis. It should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/7uUfgJGrmodl3QwVsAI8wk/07de4d7da560256378414160f1cca5c3/image8.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/7uUfgJGrmodl3QwVsAI8wk/07de4d7da560256378414160f1cca5c3/image8.png" alt="test-a"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have a flag named ‘test-flag’ in your LaunchDarkly account, you can toggle it between ‘false’ and ‘true’ and reload your environment to see the changes. If you would like to use a different flag, you only need to make one change in &lt;strong&gt;app/views/welcome/index.html.erb&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;% test_flag = Rails.configuration.ld_client.variation("test-flag", {key: "user@test.com"}, false) %&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once you have changed ‘test-flag’ to the flag name of your choosing, commit and push the change to GitHub. Release will automatically build and deploy your changes; when the process finishes, you will see the welcome page change based on your new flag.&lt;/p&gt;

&lt;p&gt;In your LaunchDarkly interface you will see a newly created environment with a name of the form ‘tedfe34’. This name is the same as your &lt;strong&gt;RELEASE_ENV_ID&lt;/strong&gt; environment variable, which Release creates automatically for your new environment. You will see this value in a few places in the Release UI besides the environment variable editor.&lt;/p&gt;
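&lt;p&gt;Since the LaunchDarkly environment key comes straight from &lt;strong&gt;RELEASE_ENV_ID&lt;/strong&gt;, it pays to keep that mapping deterministic. Release’s IDs like ‘tedfe34’ are already valid keys, but if you adapt this pattern to another naming scheme, a small helper (hypothetical, not part of the example repository) can keep the key within the characters LaunchDarkly accepts:&lt;/p&gt;

```ruby
# Hypothetical helper: derive a LaunchDarkly-safe environment key from an
# environment id. LaunchDarkly keys may contain letters, digits, '.', '_',
# and '-', so anything else is collapsed to a hyphen and the result downcased.
def launch_darkly_env_key(release_env_id)
  release_env_id.downcase.gsub(/[^a-z0-9._-]+/, "-")
end

launch_darkly_env_key("tedfe34")        # => "tedfe34"
launch_darkly_env_key("Feature/Login")  # => "feature-login"
```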

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/7LV1zBKzfkTPpleU7l91og/afcd2787a05df2c8ce2d3916e266c857/image1.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/7LV1zBKzfkTPpleU7l91og/afcd2787a05df2c8ce2d3916e266c857/image1.png" alt="LD FF env interface"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion - What’s next?
&lt;/h2&gt;

&lt;p&gt;Now that you can get pristine feature flag environments dedicated to your Release environments, what’s next? In this example the clean-up would need to be done manually; not a huge deal, but we can do better. Release will be implementing a deeper integration with LaunchDarkly in the near future to make this seamless and to handle deleting the environments in LaunchDarkly when your Release environment is removed.&lt;/p&gt;

&lt;p&gt;Stay tuned for integrations with other feature flag providers in the future. If you would like environments for your applications that are fast, simple to define, and incredibly powerful, send us a note at &lt;a href="mailto:support@releasehub.com"&gt;support@releasehub.com&lt;/a&gt; and we will help you and your team become more efficient using ephemeral environments with Release.&lt;/p&gt;

&lt;h3&gt;
  
  
  About Release
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Release is the simplest way to spin up even the most complicated environments. We specialize in taking your complicated application and data and making reproducible environments on-demand.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Hero Image by &lt;a href="https://unsplash.com/@dnevozhai?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Denys Nevozhai&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/traffic?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
  </channel>
</rss>
