<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: tmcclung</title>
    <description>The latest articles on Forem by tmcclung (@tmcclung).</description>
    <link>https://forem.com/tmcclung</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F523861%2Ff5ad4bb5-32b9-4bf0-aacb-45bd3b38aad7.jpg</url>
      <title>Forem: tmcclung</title>
      <link>https://forem.com/tmcclung</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tmcclung"/>
    <language>en</language>
    <item>
      <title>Rainbow Deployment: Why and How to Do It</title>
      <dc:creator>tmcclung</dc:creator>
      <pubDate>Wed, 14 Dec 2022 17:06:16 +0000</pubDate>
      <link>https://forem.com/tmcclung/rainbow-deployment-why-and-how-to-do-it-32d3</link>
      <guid>https://forem.com/tmcclung/rainbow-deployment-why-and-how-to-do-it-32d3</guid>
      <description>&lt;p&gt;How do you define if your application is modern? One of the defining factors is if it utilizes zero-downtime deployments. If you can deploy a new version of your application without your users realizing it, it's a good indicator that your application follows modern practices. In modern, cloud-native environments, it's relatively easy to achieve, however it's not always as simple as deploying a new version of your application and then very quickly switching traffic to it. Some applications may need to finish long-running tasks first. Others will have to somehow deal with not breaking user sessions. The bottom line is that, just like pretty much any technology, you can do basic zero-downtime deployments or more advanced zero-downtime deployments. &lt;/p&gt;

&lt;p&gt;In this post, you'll learn about the latter. We'll talk about what rainbow deployments are, and how you can use them for very efficient zero-downtime deployments. &lt;/p&gt;

&lt;h2&gt;
  
  
  Zero-Downtime Deployments
&lt;/h2&gt;

&lt;p&gt;In order to explain rainbow deployments, we need to have a good understanding of zero-downtime deployments in general. So, what are they? The name gives it away. Zero-downtime deployment is when you release a new version of your application without any downtime. This usually means that you deploy a new version of the application, and users are switched to that new version without even knowing. &lt;/p&gt;

&lt;p&gt;Zero-downtime deployments are superior to traditional deployments, where you schedule a "maintenance window" and show a "we are down for maintenance" message to your users for a certain amount of time. In the world of Kubernetes, there are two main ways of achieving (near) zero-downtime deployments: Kubernetes' own rolling update deployment, and blue/green deployments. Let's quickly go over both so we have a good base of knowledge before diving into rainbow deployments. &lt;/p&gt;

&lt;h2&gt;
  
  
  Rolling Update
&lt;/h2&gt;

&lt;p&gt;Kubernetes rolling updates are quite simple yet very effective in many cases. The traditional software update process is usually done by shutting down the old version and then deploying the new version. That, of course, will introduce some downtime. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Yu1lGzGz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fgukujv33fgai4q5ht0y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Yu1lGzGz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fgukujv33fgai4q5ht0y.png" alt="Image description" width="880" height="586"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Kubernetes rolling update reverses that order. It first deploys the new version of the application right next to the old one, and as soon as the new version is marked as up and running, it automatically switches traffic to it; only then does it delete the old version. Therefore, no downtime. &lt;/p&gt;

&lt;p&gt;However, a Kubernetes rolling update has some limitations. Your application needs to be able to handle such a process, you need to think about database access, and it's a very on/off process: you don't have any control over when, or how gradually, traffic switches to the new version. &lt;/p&gt;
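
&lt;p&gt;For reference, here's a minimal sketch of how that rollout behavior is configured on a Kubernetes Deployment; the names and values are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod during the rollout
      maxUnavailable: 0  # never drop below the desired replica count
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: your_application:0.1
        ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;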

&lt;h2&gt;
  
  
  Blue/Green Deployments
&lt;/h2&gt;

&lt;p&gt;Blue/green deployments are next-level deployments that address the limitations of simple rolling updates. In this model, you always keep two deployments (or two clones of the whole infrastructure): one called blue and one called green. At any given time, only one is active and serving traffic, while the other sits idle. Once you need to release an update, you apply it to the idle side, test that everything works, and then switch the traffic. &lt;/p&gt;

&lt;p&gt;This model is better than a simple rolling update because you have control over switching traffic, and you can have the new version running for a few minutes or even hours so that you can do testing to make sure you won't have any surprises once live traffic hits it. &lt;/p&gt;
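
&lt;p&gt;In Kubernetes, that controlled switch can be as simple as changing one label in a Service selector; a minimal sketch, with illustrative names and labels:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    slot: blue   # change to "green" to switch live traffic
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;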

&lt;p&gt;However, while better than rolling updates, blue/green deployments also have their limitations. The most important is that you're limited to two environments: blue and green. While in most cases that's enough, there are use cases where it becomes a limiting factor, such as when you have long-running tasks like database migrations or AI processing. &lt;/p&gt;

&lt;h2&gt;
  
  
  When Blue/Green Is Not Enough
&lt;/h2&gt;

&lt;p&gt;Imagine a situation where you deploy a new version of your long-running software to your blue environment, you test if it's okay, and you make it your live environment. Then you do the same again for the green environment—you deploy a new version there and switch again from blue to green. &lt;/p&gt;

&lt;p&gt;So now, if you'd like to deploy a new version again, you'd have to do it on the blue environment. But blue could still be working on that long-running task. You can't simply stop a database migration in the middle because you'll end up with a corrupted database. So you'll have to wait until the software on the blue environment is finished before you can make another deployment. And that's where rainbow deployments come into play. &lt;/p&gt;

&lt;h2&gt;
  
  
  What Is a Rainbow Deployment?
&lt;/h2&gt;

&lt;p&gt;Rainbow deployment is the next level of deployment method, one that solves the main limitation of blue/green deployments. In fact, rainbow deployments are very similar to blue/green deployments, but they're not limited to only two (blue and green) environments. You can have as many colorful environments as you like—thus the name. &lt;/p&gt;

&lt;p&gt;At Release we use Kubernetes namespaces along with our deployment system to automate the creation and removal of rainbow deployments for your application. Release will automatically create and manage a new namespace for each deployment.&lt;/p&gt;

&lt;p&gt;As we said, the working principle of rainbow deployment is the same as blue/green deployments, but you can operate on more copies of your application than just two. So, let's take our example from before, the one about the long-running task. Instead of waiting for the blue environment to finish in order to make another deployment, you can just add another environment. Let's call it yellow. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OlL2sy2H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3asocedo0rslttw0mfka.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OlL2sy2H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3asocedo0rslttw0mfka.jpeg" alt="Image description" width="880" height="587"&gt;&lt;/a&gt;&lt;br&gt;
‍&lt;/p&gt;

&lt;p&gt;Then we have three environments: blue, green, and yellow. Our blue is busy, and green is currently serving traffic. So if we want to deploy a new version of our application, we can deploy it to yellow and then switch traffic to it from green. And that's how rainbow deployment works. &lt;/p&gt;

&lt;p&gt;This is a very powerful method of deploying applications because it lets you avoid downtime as much as possible for as many users as possible. Long-running tasks blocking your deployments are just one example; there are more use cases. For instance, if your application uses &lt;a href="https://en.wikipedia.org/wiki/WebSocket"&gt;WebSockets&lt;/a&gt;, then no matter how fast and seamless your deployments are, you'll still have to disconnect users from their WebSockets sessions, so they'll potentially lose notifications or other data from your app. Rainbow deployments can solve that problem too: you deploy a new version of your application and keep the old one around until users finally disconnect from their WebSockets sessions. Then you kill the old version of the application. &lt;/p&gt;
&lt;h2&gt;
  
  
  How to Do a Rainbow Deployment
&lt;/h2&gt;

&lt;p&gt;Now that you know what rainbow deployments are, let's see how you actually do them. There is no single standard way of achieving rainbow deployments; in fact, there aren't even any tools you can install that will do rainbow deployments for you. It's more of a do-it-yourself approach. That may sound like bad news, because you can't simply install some tool and benefit from rainbow deployments, but you can leverage the tools you already have to enable them with just a few extra lines of logic. &lt;/p&gt;

&lt;p&gt;So, how do you do it, then? You use your current CI/CD pipelines. All you need to do is to point whatever network device you're using to a specific "color" of the application when you deploy one. In the case of Kubernetes, this could mean changing the &lt;strong&gt;Service&lt;/strong&gt; or &lt;strong&gt;Ingress&lt;/strong&gt; objects to point to a different deployment. Let's see an example. Below are some very simple and typical Kubernetes deployment and service definitions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: your_application:0.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxservice
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      name: nginxs
      targetPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have one deployment and one service that points to that deployment's pods. The service knows which pods to target based on labels: it's instructed to select pods that have the label app with a value of nginx. But what if we also selected by color? You'd pretty much end up with a rainbow deployment strategy. &lt;/p&gt;

&lt;h2&gt;
  
  
  Enter Rainbow Magic
&lt;/h2&gt;

&lt;p&gt;So, your definition would look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-[color]
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        color: [color]
    spec:
      containers:
      - name: nginx
        image: your_application:0.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxservice
spec:
  selector:
    app: nginx
    environment: [color]
  ports:
    - protocol: TCP
      port: 80
      name: nginxs
      targetPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It would then be your CI/CD job's responsibility to replace &lt;strong&gt;[color]&lt;/strong&gt; in the YAML definition every time you want to deploy a new version. You deploy your application together with its service. Then, the next time you want to release a new version, instead of updating the existing deployment, you create a new deployment and update the existing service to point to it. You can repeat that process as many times as you want, and once the old deployments aren't needed anymore, you delete them. That's the working principle of rainbow deployments. &lt;/p&gt;
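
&lt;p&gt;As a rough sketch of that CI/CD step (the file name deploy-template.yaml and the commands are illustrative, not a specific product's workflow):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use the short Git commit hash as this rollout's "color"
COLOR=$(git rev-parse --short HEAD)

# Render the template and apply the new deployment plus the updated service
sed "s/\[color\]/${COLOR}/g" deploy-template.yaml | kubectl apply -f -

# Once the previous deployment has drained, clean it up:
# kubectl delete deployment nginx-deployment-&lt;old-color&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;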

&lt;p&gt;It's also worth mentioning that you don't need to use colors to distinguish your deployments. It can be anything; a common choice is the Git commit hash. Another thing to know is that this method isn't exclusive to Kubernetes. You can use it in pretty much any infrastructure or environment, as long as you have a way to distinguish deployments and point your network traffic at a specific one. &lt;/p&gt;

&lt;h2&gt;
  
  
  Rainbow Deployment Summary
&lt;/h2&gt;

&lt;p&gt;With ReleaseHub, you have easy access to unlimited environments, so we extended the blue/green pattern to the infinite colors of the rainbow. Each deployment happens in a &lt;strong&gt;Namespace&lt;/strong&gt;, which is a copy of your production environment. Each Namespace gets a color, and you can have as many colors as you need.&lt;/p&gt;

&lt;p&gt;Rainbow deployments are sometimes a little difficult to grasp. They may seem wasteful or simply illogical at first. But they solve a lot of problems with common deployment methods, and they bring real benefits to your users. That said, they're definitely not a magic solution that will fix all your application's problems. Your infrastructure and your application need to be compatible with this approach. Database handling can be especially tricky (for example, you don't want two versions of the application writing to the same record in the same database). But these are typical problems you need to solve anyway when dealing with distributed systems. &lt;/p&gt;

&lt;p&gt;Once you improve the user experience, you can also think about improving your developer productivity. If you want to learn more, take a look at our &lt;a href="https://docs.releasehub.com/reference-documentation/workflows-in-release/rainbow-deployments"&gt;documentation here&lt;/a&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  About Release
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://releasehub.com"&gt;Release&lt;/a&gt; is the simplest way to spin up even the most complicated environments. We specialize in &lt;br&gt;
taking your complicated application and data and making reproducible environments on-demand.&lt;/p&gt;

&lt;p&gt;Cover Photo by &lt;a href="https://unsplash.com/@karson_?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Karson&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/rainbow?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>5 Ways To Improve Developer Velocity With Ephemeral Environments</title>
      <dc:creator>tmcclung</dc:creator>
      <pubDate>Tue, 23 Feb 2021 16:14:23 +0000</pubDate>
      <link>https://forem.com/tmcclung/5-ways-to-improve-developer-velocity-with-ephemeral-environments-5e74</link>
      <guid>https://forem.com/tmcclung/5-ways-to-improve-developer-velocity-with-ephemeral-environments-5e74</guid>
      <description>&lt;p&gt;Velocity is a measurement of how many story points a software development team can finish within a sprint (usually one or two weeks). These points are set by the software development team when they review a ticket and estimate how complex the ticket is. When a team measures this output over a period of time, generally they have a consistent amount of story points they can deliver in a sprint and their velocity is known.&lt;/p&gt;

&lt;p&gt;Improving developer velocity is directly correlated with performance. &lt;a href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/developer-velocity-how-software-excellence-fuels-business-performance"&gt;McKinsey published an article in April 2020&lt;/a&gt;, where they cite that companies in the top 25% on their Developer Velocity Index grow up to twice as fast as companies in their same industries. Intuitively this makes sense since delivering more allows the development team to learn through iterating and improving. &lt;/p&gt;

&lt;p&gt;One might argue that velocity alone doesn’t make for great software, but assuming a development team knows that quality matters, one can see how velocity usually helps: teams with high velocity can deliver better quality software because they’re able to address issues quickly.&lt;/p&gt;

&lt;p&gt;In the same study, McKinsey highlighted several factors that allow a software development team to move quickly. Specifically, they highlight that Technology Tools are an incredibly important dimension of velocity and business outcomes, and that the most important tools are Planning, Collaboration, Development, and DevOps tools. &lt;/p&gt;

&lt;p&gt;In this post I’m going to discuss the &lt;strong&gt;top 5 ways Ephemeral Environments can improve developer velocity&lt;/strong&gt; by touching on how they are a &lt;em&gt;Collaboration&lt;/em&gt;, &lt;em&gt;Development&lt;/em&gt; and &lt;em&gt;DevOps&lt;/em&gt; tool. As we’ve spoken about in our article &lt;a href="https://releasehub.com/ephemeral-environments"&gt;"What is an Ephemeral Environment?"&lt;/a&gt;, ephemeral environments are spun up on demand and contain the code and data that approximates production closely. These environments are used by development teams in the software development process to test, debug and ensure features are built correctly before code is pushed to production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Here are the top 5 ways ephemeral environments can be used to improve developer velocity
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Ephemeral environments are a DevOps tool designed to remove the staging or QA environment bottleneck&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Traditional pre-production ecosystems usually have a limited number of environments for developers. The staging or QA environment is generally used as a step before production where all code is merged and tested. Most organizations have one or very few of these environments, so as the organization grows, they become a bottleneck in the process, since all code must be tested there before production. &lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/57RYmuBRfBBWCRUWglqBnZ/14bafeeb0a47fd566938d2ff052a01c6/Screen_Shot_2021-02-22_at_2.43.18_PM.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/57RYmuBRfBBWCRUWglqBnZ/14bafeeb0a47fd566938d2ff052a01c6/Screen_Shot_2021-02-22_at_2.43.18_PM.png" alt="Example of ephemeral environments for each branch"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With ephemeral environments, the traditional idea of “staging” is gone. Every feature branch is contained in its own isolated environment and becomes its own integration environment. There is no longer a need to have a single testing and integration environment where all code must merge before going to production. With ephemeral environments you have a limitless supply of environments for any purpose.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Ephemeral environments are a collaboration tool designed to allow for “early and often” feedback&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Feedback is the lifeblood of great products. If you’ve ever read &lt;a href="https://www.amazon.com/High-Output-Management-Andrew-Grove/dp/0679762884/"&gt;Andy Grove’s book on high output management&lt;/a&gt;, you know he does an amazing job of discussing how rework is so costly. If you haven’t read this book, I highly recommend it, even if all you read are the first few chapters where he discusses trying to cook a high quality egg repeatedly, in under three minutes. In summary, Andy suggested through this analogy that finding issues/defects early in the egg cooking process is the most important part of consistently cooking a high quality egg in under three minutes.&lt;/p&gt;

&lt;p&gt;Likewise in software development, getting feedback and finding quality issues early in the development cycle reduces costly rework and improves velocity. If a product is delivered to a customer and doesn’t work or has bugs, it has to be reworked and go through the entire process again. Or if a product manager or designer has no way to see changes until an engineer is finished with development, there’s a high likelihood they’ll spot something wrong and the solution will need rework. These are all examples of rotten eggs in the process that hamper developer velocity. &lt;/p&gt;

&lt;p&gt;With ephemeral environments, rework can be minimized because stakeholders become a part of the development process. When an ephemeral environment is created, URLs to the environment are created so stakeholders can see progress while code is being developed. &lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/2dFjnsY2lBSSbkaOfqczSG/3b9ef1f1134d5cf21f464af8dbf8fa93/Screen_Shot_2021-02-22_at_2.40.44_PM.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/2dFjnsY2lBSSbkaOfqczSG/3b9ef1f1134d5cf21f464af8dbf8fa93/Screen_Shot_2021-02-22_at_2.40.44_PM.png" alt="Links in the PR"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://releasehub.com"&gt;Release&lt;/a&gt;, we highly recommend to our customers that they create a PR as soon as a developer starts working on a feature so a Release Ephemeral Environment is automatically created. When the developer pushes code to their source control system, the environment is updated making it a live reflection of the feature during development. Product managers, designers and QA are automatically notified when changes are live and they can preview those changes and give feedback immediately. &lt;/p&gt;

&lt;p&gt;At Release, we will also share our own ephemeral environments with our customers as we’re building a feature so we can get feedback directly from the people we’re making the software for before we release it to production.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Ephemeral environments can limit rework and thus increase developer velocity.&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Ephemeral environments are a developer tool that allows for full integration and smoke testing on isolated features.&lt;/p&gt;

&lt;p&gt;Traditional continuous integration (CI) is the idea that your development process should be constantly testing as developers push code. What this often leaves out is that most CI systems only run unit tests continuously. Unit tests are meant to test small units of code, not the entire system as a whole. Integration and Smoke tests are where full paths of the user experience can be tested. Usually, Integration and Smoke tests run only once the code makes its way, via a merge to the mainline code branch, to a traditional staging environment.&lt;/p&gt;

&lt;p&gt;Again, if we refer back to Andy Grove’s three minute egg analogy, this step of running Integration and Smoke tests only when the code branch is merged to the mainline is extremely late in the process. If issues are found during Integration and/or Smoke tests, the developer has to start the development cycle again from the beginning after finding this issue too late in the process.&lt;/p&gt;

&lt;p&gt;To add to the issue, if a team only has a single staging environment, the bottleneck around this staging environment is exacerbated with developers waiting for Integration and Smoke tests to be run on this single environment. On top of this, many code changes/features/branches may have been a part of the mainline merge making finding the cause of failed Integration/Smoke tests difficult and time consuming.&lt;/p&gt;

&lt;p&gt;With ephemeral environments, Integration and Smoke tests can be run when the ephemeral environment is created for a feature branch. This ensures that Integration and Smoke tests are run as frequently as unit tests so developers can find issues early in the process. Additionally, Integration and Smoke tests run against a single feature change/branch will isolate changes against the mainline and make finding the root cause much easier.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Ephemeral environments are a DevOps tool that allow for experimentation with infrastructure&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Making changes to infrastructure is hard, and when a developer introduces the need for an infrastructure change, it’s costly in time across the board. In a traditional environment setup (without ephemeral environments), this results in an overall slowdown in developer velocity, as the shared staging environments must be updated by the DevOps team so the developer has somewhere to test their changes and new infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/4clfoJM7gvr7ufFW8yqmax/e9731087d982502e619cba93d9cbafc6/Screen_Shot_2021-02-22_at_2.44.56_PM.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/4clfoJM7gvr7ufFW8yqmax/e9731087d982502e619cba93d9cbafc6/Screen_Shot_2021-02-22_at_2.44.56_PM.png" alt="Experiment with environment configuration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With ephemeral environments, this testing can be done in isolation and does not impact any other developer. For instance, with Release Ephemeral Environments, a developer can add services, environment variables, new infrastructure dependencies, new datasets/databases on their own through use of environment templates (environments as code) to experiment and develop without interfering with any other developers work or environments. This results in higher developer velocity again through minimization of rework and bottlenecks on shared resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;5. Ephemeral environments are a collaboration tool designed to be an agile/scrum catalyst&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Many organizations have made the move to Agile/Scrum but their infrastructure and technology haven’t adapted to support a more iterative approach to building software. The entire premise of Agile/Scrum is for teams to be empowered and driven by early and often feedback. If your organization is on Agile/Scrum and you’re still using a single or few staging environments, you’re technologically hampering your process improvements. Ephemeral environments are the homes and office buildings where agile teams live, work, build, and play.&lt;/p&gt;

&lt;p&gt;Ephemeral environments are a catalyst for the Agile/Scrum methodology. When a developer opens a pull request, the ephemeral environment is created and collaboration on the feature can begin. The team is free to iterate, share, and solicit feedback, all while keeping the rest of the organization freely moving with their own ephemeral environments. Stakeholders become part of the development process, and true customer-driven development, which is the heart of the Agile/Scrum methodology, can occur.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Ephemeral environments turbocharge development velocity by eliminating bottlenecks in the process (DevOps Tool), including stakeholders in the process (Collaboration Tool), and improving product quality (Developer Tool). All of these factors were highlighted as critical in the McKinsey report on developer velocity, and ephemeral environments are &lt;em&gt;an investment that will put your organization in the top 25%&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@maicoamorim?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Maico Amorim&lt;/a&gt; on &lt;a href="https://unsplash.com/@maicoamorim?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>agile</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Kubernetes Pods Advanced Concepts Explained</title>
      <dc:creator>tmcclung</dc:creator>
      <pubDate>Wed, 10 Feb 2021 16:09:43 +0000</pubDate>
      <link>https://forem.com/tmcclung/kubernetes-pods-advanced-concepts-explained-17hl</link>
      <guid>https://forem.com/tmcclung/kubernetes-pods-advanced-concepts-explained-17hl</guid>
      <description>&lt;p&gt;In this blog post we’ll investigate certain advanced concepts related to Kubernetes init containers, sidecars, config maps, and probes. We’ll show you how to implement these concepts in your own cluster, but more importantly how to apply these to your projects in &lt;a href="https://releasehub.com/"&gt;Release&lt;/a&gt; for both fun and profit.&lt;/p&gt;

&lt;p&gt;We’ll start with a brief introduction to pods and containers in Kubernetes, and then show specific examples of each item listed above. Below you will find a drawing of these examples to keep yourself oriented during our bumpy ride ahead.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/5StzQNBGjSiQRmuBATNdH1/c58328398227bd62299b87e5b70ed280/Understanding_Advanced_Kubernetes_Concepts.jpg" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/5StzQNBGjSiQRmuBATNdH1/c58328398227bd62299b87e5b70ed280/Understanding_Advanced_Kubernetes_Concepts.jpg" alt="Advanced Kubernetes Pod Concepts"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Kubernetes Pod Concepts
&lt;/h2&gt;

&lt;p&gt;Before we begin, let’s get a brief overview of some key concepts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Container
&lt;/h3&gt;

&lt;p&gt;In Docker, an image bundles layered filesystems into a deployable, runnable unit, and a container is a running instance of such an image. The image is usually built with a Dockerfile and specifies a startup binary or executable command.&lt;/p&gt;

&lt;h4&gt;
  
  
  Sidecar Container
&lt;/h4&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/#how-pods-manage-multiple-containers"&gt;sidecar container&lt;/a&gt; is simply a container that runs alongside other containers in the pod. There’s no official definition of a sidecar concept. The only thing that distinguishes a container as a sidecar container is that you consider it ancillary or secondary to the primary container. Running multiple sidecar containers does not scale well, but does have additional advantages of being able to reuse configuration files and container images. The reason sidecars do not scale well is that they may be overprisioned or wasteful based on the performance of the main application container. However, the tradeoffs can make sense in legacy applications or during migrations toward truly cloud-native designs.&lt;/p&gt;

&lt;h4&gt;
  
  
  Init Container
&lt;/h4&gt;

&lt;p&gt;An &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/"&gt;init container&lt;/a&gt; is simply a container that runs before any other containers in the pod. You can have several init containers that run sequentially. As each container finishes and exits properly (with a zero!), the next container will start. If an init container exits with an error or if it does not finish completely, the pod could go into a &lt;a href="https://releasehub.com/blog/kubernetes-how-to-debug-crashloopbackoff-in-a-container"&gt;dreaded CrashLoopBackoff&lt;/a&gt;. All of the containers share a filesystem, so the benefit here is that you can use or reuse container images to process, compile, or generate files or documents that can be picked up later by other containers.&lt;/p&gt;
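
&lt;p&gt;For example, a pod whose init container waits for a (hypothetical) database service before the main container starts might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: wait-for-db        # must exit 0 before the main container starts
    image: busybox
    command: ["sh", "-c", "until nc -z db-service 5432; do sleep 2; done"]
  containers:
  - name: app
    image: your_application:0.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;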

&lt;h4&gt;
  
  
  Probes
&lt;/h4&gt;

&lt;p&gt;Although the word "probes" may stir up visions of Alien tools used for discovery and investigation of humans, fear not. These probes will only make your services run better! Kubernetes has &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/"&gt;several probes&lt;/a&gt; for defining the health of containers inside a pod. A startup probe allows the scheduler to tolerate delays in a slow-startup container. A liveness probe allows Kubernetes to restart a faulty or stalled container. A readiness probe allows a container to receive traffic only when it is ready to do so.&lt;/p&gt;
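&lt;p&gt;All three probes can be declared on a container; the paths, port, and timings below are illustrative, not prescriptive:&lt;/p&gt;

```yaml
containers:
  - name: app
    image: my-backend:latest        # placeholder image
    startupProbe:                   # tolerates a slow-starting container
      httpGet: { path: /healthz, port: 8080 }
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:                  # restarts a faulty or stalled container
      httpGet: { path: /healthz, port: 8080 }
      periodSeconds: 10
    readinessProbe:                 # gates traffic until the app is ready
      httpGet: { path: /ready, port: 8080 }
      periodSeconds: 5
```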

&lt;h3&gt;
  
  
  Pod
&lt;/h3&gt;

&lt;p&gt;You may harbour some fear in the back of your mind of “pod people” or vegetable clones grown to replace humanity with mindless zombies who hunt and destroy mankind. In Kubernetes, however, the pod is the smallest managed unit. A pod can be composed of several containers that share a network namespace and, optionally, volumes and even a process namespace. A pod is usually composed of one container that runs a single process as a service, but there are several advanced use cases, which we will go into, that run multiple containers for expanded options.&lt;/p&gt;

&lt;h3&gt;
  
  
  Node
&lt;/h3&gt;

&lt;p&gt;A Kubernetes node is ultimately a physical machine (which can have several layers of virtualisation) that runs the pod or pods, providing the critical CPU, memory, disk, and network resources. Multiple pods can be spread across multiple nodes, but a single pod is contained on a single node.&lt;/p&gt;

&lt;h3&gt;
  
  
  Volumes
&lt;/h3&gt;

&lt;p&gt;Volumes are simply abstractions of filesystems that can be mounted inside containers. You cannot overlap or nest volume mounts. However, there are several mount types that might be very useful to your use case.&lt;/p&gt;

&lt;h4&gt;
  
  
  configMap
&lt;/h4&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#configmap"&gt;configMap&lt;/a&gt; is a so-called “blob” of information that can be mounted as a file inside your container. Remember, that this is not an evil, destructive blob out to devour our planet! It is a batch of text that is treated amorphously, like a... well... blob. The usual use case here is for a configuration file or secrets mount.&lt;/p&gt;
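&lt;p&gt;A sketch of the pattern (names and contents are illustrative): the configMap holds the text blob, and the pod mounts it as a file.&lt;/p&gt;

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # illustrative name
data:
  app.conf: |                 # the text "blob" stored in etcd
    log_level = info
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-backend:latest       # placeholder image
      volumeMounts:
        - name: config
          mountPath: /etc/app        # app.conf appears at /etc/app/app.conf
  volumes:
    - name: config
      configMap:
        name: app-config
```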

&lt;h4&gt;
  
  
  emptyDir
&lt;/h4&gt;

&lt;p&gt;An &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir"&gt;emptyDir&lt;/a&gt; is an empty filesystem that can be written into and used by containers inside a pod. The usual use case here is for temporary storage or initialization files that can be shared.&lt;/p&gt;
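&lt;p&gt;A minimal pod-spec fragment (names are illustrative) mounting an emptyDir as scratch space:&lt;/p&gt;

```yaml
spec:
  containers:
    - name: app
      image: my-backend:latest      # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /scratch       # shared between containers, erased with the pod
  volumes:
    - name: scratch
      emptyDir: {}
```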

&lt;h4&gt;
  
  
  hostPath
&lt;/h4&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath"&gt;hostPath&lt;/a&gt; is a filesystem that exists on the Kubernetes node directly and can be shared between containers in the pod. The usual use case here is to store cached files that could be primed from previous deployments if they are available.&lt;/p&gt;
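&lt;p&gt;A hostPath cache can be sketched like this (the node path is illustrative); the directory survives pod restarts on the same node:&lt;/p&gt;

```yaml
spec:
  containers:
    - name: app
      image: my-backend:latest        # placeholder image
      volumeMounts:
        - name: build-cache
          mountPath: /cache
  volumes:
    - name: build-cache
      hostPath:
        path: /var/cache/app-build    # illustrative node-local path
        type: DirectoryOrCreate       # create it on the node if absent
```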

&lt;h4&gt;
  
  
  Persistent Volume Claim (PVC)
&lt;/h4&gt;

&lt;p&gt;A &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/"&gt;persistent volume claim&lt;/a&gt; is a filesystem that lasts across nodes and pods inside a namespace. Data in a PVC are not erased or destroyed when a pod is removed, only when the namespace is removed. PVCs come in many underlying flavors of storage, depending on your cloud provider and infrastructure architecture.&lt;/p&gt;
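&lt;p&gt;A minimal claim might look like the following (name, access mode, and size are illustrative; the available storage classes depend on your provider):&lt;/p&gt;

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                   # illustrative name
spec:
  accessModes: ["ReadWriteMany"]   # e.g. for NFS/EFS-style shared storage
  resources:
    requests:
      storage: 5Gi
```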

&lt;h3&gt;
  
  
  Namespace
&lt;/h3&gt;

&lt;p&gt;A Kubernetes namespace is a collection of resources that are grouped together and generally have access to one another. Multiple pods, deployments, and volume claims (to list a few) will run together, potentially across multiple nodes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sidecars and Init Containers
&lt;/h2&gt;

&lt;p&gt;The first use case we will cover involves running several containers inside a single pod. Once again, a pod here refers to one or more containers grouped together in Kubernetes, not vegetable human clones grown for evil reasons. In the following scenario, we will examine how multiple containers can share a single process space, filesystem, and network stack.&lt;/p&gt;

&lt;p&gt;Keep in mind that most Docker and Kubernetes purists will tell you that running more than one process in a container, or having more than one container in a pod, is not a good design and will inevitably lead to scalability and architectural issues down the road. These concerns are generally well founded. However, careful application of the following supported and recommended patterns will allow you to thrive, either during your transition from a legacy stack to Kubernetes or once you are successfully running your application in a cluster.&lt;/p&gt;

&lt;p&gt;One particular use case we encounter with customers is that their application has a backend container that requires a reverse proxy like Nginx to perform routing, static file serving, and so forth. The best method to achieve this objective would be to create a separate pod with Nginx (for example) and run the two service pods in a single namespace. This gives us the flexibility to scale the backend pods and Nginx pods separately as needed. However, typically the backend service or application needs to also serve static files that are located inside the container filesystem and would not be available across the pod boundary. We agree this is not a preferred pattern to use, but it is common enough with legacy applications that we see it happen.&lt;/p&gt;

&lt;p&gt;In this scenario, we often recommend a sidecar container running Nginx which can be pulled directly from Docker Hub or a custom image can be created. We also recommend that customers reuse their backend application container as an init container that starts with a custom command for creating any initialization or other startup tasks that need to be completed before the application itself starts.&lt;/p&gt;

&lt;p&gt;One feature of this multi-container setup is that the Nginx container can use the “localhost” loopback to communicate with the backend service. Of course the sidecar container might be a logging or monitoring agent, but the principle is the same: the containers can speak with each other over a private network that is potentially not available outside of the pod, unless you make it available. In our Nginx example, the backend could be isolated so that all communication traffic inbound to the service container must be routed to the Nginx proxy.&lt;/p&gt;
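&lt;p&gt;To make this concrete, a hypothetical &lt;code&gt;nginx.conf&lt;/code&gt; fragment for the sidecar might proxy inbound traffic to the backend over the pod’s shared loopback (the backend port and static-file path are assumptions):&lt;/p&gt;

```nginx
server {
    listen 80;

    location /static/ {
        root /srv/app;                     # static files on the shared filesystem
    }

    location / {
        proxy_pass http://127.0.0.1:8080;  # backend container in the same pod
    }
}
```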

&lt;p&gt;The other nice feature of this configuration is that the containers all share a common file system so that the Nginx container can access static files generated by (or stored on) the backend service container.&lt;/p&gt;

&lt;p&gt;Here is a link to our documentation that shows an example of running &lt;a href="https://docs.releasehub.com/reference-guide/application-settings/application-template#sidecar-containers"&gt;sidecar&lt;/a&gt; and &lt;a href="https://docs.releasehub.com/reference-guide/application-settings/application-template#init-containers"&gt;init&lt;/a&gt; containers on Release.&lt;/p&gt;

&lt;h2&gt;
  
  
  Probes
&lt;/h2&gt;

&lt;p&gt;As we have noted, probes are not just for Aliens! Kubernetes uses them to test your application stack and report on its health. Kubernetes will also take action based on these probes, just like an Alien might. There are several probes that are supported natively by Kubernetes. The main use cases we support for our customers are the liveness probe and readiness probe.&lt;/p&gt;

&lt;p&gt;The liveness probe is a way to test whether a container is “alive” or not, and if it fails the probe, then Kubernetes will restart the container. We usually recommend that your application not freeze up or have memory leaks and so forth so that a liveness probe should not be necessary. This “reboot your app to fix the problems” philosophy is not generally considered good practice. However, perfect code is impossible and when services are running in a production container environment, we know that almost anything can (and will) happen.&lt;/p&gt;

&lt;p&gt;The readiness probe is a way to test whether a container is capable of serving traffic or not, and if it fails the probe, the pod is removed from the service’s endpoints so it stops receiving traffic. Contrary to our stance on the liveness probe, we strongly encourage and recommend that customers implement a readiness probe on any service that receives inbound traffic. In some sense, we consider a readiness probe mandatory for your production services.&lt;/p&gt;

&lt;p&gt;Here is a link to our documentation that shows an example of using a &lt;a href="https://docs.releasehub.com/reference-guide/application-settings/application-template#readiness-and-liveness-probes"&gt;liveness and readiness probe&lt;/a&gt; for services running in Release.&lt;/p&gt;

&lt;h2&gt;
  
  
  Volumes
&lt;/h2&gt;

&lt;p&gt;This section gets a bit technical and tricky. Of course, no actual customer stack would use every single type of volume, container, and probe listed in this article. But we do hope this overview shows all the features that are possible. Carefully consider the options presented below and choose the ones that best fit your use case.&lt;/p&gt;

&lt;p&gt;Here is a link to our documentation that shows options for our &lt;a href="https://docs.releasehub.com/reference-guide/application-settings/application-template#resources"&gt;storage volume types&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  configMap (Just in Time File Mounts)
&lt;/h3&gt;

&lt;p&gt;A configMap (purposely spelled in &lt;a href="https://en.wikipedia.org/wiki/Camel_case#Programming_and_coding"&gt;camelCase&lt;/a&gt;) is not itself a volume in Kubernetes. Strictly speaking, a configMap is just a blob of text that can be stored in the &lt;a href="https://kubernetes.io/docs/concepts/overview/components/#etcd"&gt;etcd key-value datastore&lt;/a&gt;. However, one convenient use case Release supports is creating a container storage volume that is mounted inside a container as a file whose contents are the text blob stored in etcd. At Release, we call this customer helper function a &lt;a href="https://docs.releasehub.com/reference-guide/application-settings/file-mounts"&gt;Just in Time File Mount&lt;/a&gt;. The common use case for a configMap at Release is being able to upload a file with configuration details. For example, in our previous example involving an Nginx sidecar, the &lt;a href="https://www.nginx.com/resources/wiki/start/topics/examples/full/"&gt;nginx.conf&lt;/a&gt; file could be uploaded as a Just in Time File Mount. &lt;em&gt;"What do we want? File Mounts! When do we want them? Just in Time!"&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  emptyDir (Scratch Volume)
&lt;/h3&gt;

&lt;p&gt;An emptyDir volume is a native Kubernetes construct Release supports for containers in a pod to share empty space that can be mounted locally. This volume is erased as soon as the pod ends its life-cycle, and it is blank to begin with. Thus, the most common use case is for a scratch or temporary location to store files that only need to be stored during the lifetime of the pod.&lt;/p&gt;

&lt;h3&gt;
  
  
  hostPath (Intra-pod Cache or Shared Volume)
&lt;/h3&gt;

&lt;p&gt;The next example is a native Kubernetes construct that Release supports for containers in a pod to share a filesystem path that stays on a node. The most common use case for a hostPath volume is to store cache or build data that can be generated and re-generated as needed inside a pod. Unlike an emptyDir volume that only lasts as long as the pod does, the hostPath can last as long as the application that deploys the pods. Thus, a container could generate (or compute) files, assets, or data that could be reused or incrementally updated with the next pod deployment on the same node. Release automatically sets the correct permissions and ensures that each namespace has unique files so that data are not leaked between customers.&lt;/p&gt;

&lt;h3&gt;
  
  
  PVC (Long Term Persistent Storage)
&lt;/h3&gt;

&lt;p&gt;The final example of a volume mount that Release offers is the ability to store data on persistent storage that is available across nodes and pods in a namespace. This long term storage is persistent and does not disappear during pod or node life cycles. Release uses Amazon Web Services (AWS) Elastic File System (EFS), which is their cloud offering of Network File System (NFSv4) storage. This allows customers to store long term data that will persist between deployments, availability zones (AZs), and node failures, and can be shared between multiple pods. The most common use cases for persistent storage of this type are for pre-production databases that need long term storage between deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we’ve given you an overview of key advanced concepts for Kubernetes pods that you will not find anywhere else. If you are confident and practiced in using these examples in your Kubernetes deployments, then you can consider yourself one of the members of an elite club of practitioners. This benefit does not just come with a distinguished title or piece of paper stating your qualifications: it also confers substantial success and accomplishment in your DevOps career journey.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@wynand_uys?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Wynand Uys&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/pod?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>What is an Ephemeral Environment?</title>
      <dc:creator>tmcclung</dc:creator>
      <pubDate>Fri, 05 Feb 2021 16:11:20 +0000</pubDate>
      <link>https://forem.com/tmcclung/what-is-an-ephemeral-environment-4jnd</link>
      <guid>https://forem.com/tmcclung/what-is-an-ephemeral-environment-4jnd</guid>
      <description>&lt;p&gt;An ephemeral environment is an environment meant to last for a limited amount of time, in which the &lt;a href="https://www.merriam-webster.com/dictionary/ephemeral"&gt;definition of ephemeral&lt;/a&gt; is &lt;em&gt;‘lasting a very short time’&lt;/em&gt;. The amount of time could be as short as the lifecycle of a CI/CD pipeline or as long as a week, but the key component being that eventually the environment goes away. Some other names for ephemeral environments could be ‘on-demand environments’, ‘dynamic environments’, or ‘temporary environments’. No matter the name, the use case is the same: the environment is created, used for a short period of time, and then removed without consequence.  &lt;/p&gt;

&lt;p&gt;Now that we have an idea of what ephemeral environments are, let’s go over some of their characteristics.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ephemeral Environments Should Look Like Production
&lt;/h2&gt;

&lt;p&gt;One of the most important factors in a successful ephemeral environment workflow is to have the environments look as close to a production replica as possible. To start, if you are using Docker images, the same image that you deploy onto an ephemeral environment should be eligible to be deployed to your production server. Taking memory as one example, if the production server is allocated 2GB of memory, then the ephemeral environment should be too. If the ephemeral environment has less memory, say only 1GB, and a memory-intensive part of the application fails, it is now unclear whether that part of the application would fail if the same image were deployed to production.  &lt;/p&gt;
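&lt;p&gt;In Kubernetes terms, matching memory could be sketched as the following container fragment (values and names are illustrative):&lt;/p&gt;

```yaml
containers:
  - name: app
    image: my-backend:latest   # the same image production would run
    resources:
      requests:
        memory: "2Gi"          # mirror the production allocation
      limits:
        memory: "2Gi"
```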

&lt;p&gt;As another slightly different example, if the application uses a database, the production version should be talking to a persistent database, like Amazon’s RDS, while the ephemeral environment may be talking to a containerized version, but having both databases on the exact same version ensures new code doesn’t accidentally use a database feature that isn’t available on production.  &lt;/p&gt;

&lt;p&gt;Every application is different, and we can’t cover every possible feature that needs to look the same here, but the premise remains: every ephemeral environment should look as close to your production environment as possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ephemeral Environments Are Automated And On Demand
&lt;/h2&gt;

&lt;p&gt;Now that we know our ephemeral environments should look like our production environment, the next step is to automate their creation to meet those criteria. Products like Terraform, AWS’s CloudFormation, or Release’s Application Template are what we call “environments as code” and ensure that the ephemeral environments are created the same way each time.  &lt;/p&gt;

&lt;p&gt;Once the template is created, the ephemeral environments should be set up to be automatically created on certain events, such as when a pull request is opened. They should also be able to be created on demand manually (not through an event driven process) in case a new environment is needed for any reason.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ephemeral Environments Have Replicated Data
&lt;/h2&gt;

&lt;p&gt;We previously mentioned that running the same version of database was a requirement of having our ephemeral environments look like production. Not only should they look the same, but they should have very similar datasets available to them in an isolated manner. This means that the database attached to the ephemeral environment will not be shared with any other environment. Because the database will also be removed as part of the cleanup process of the environment, it makes for the perfect place to test destructive actions without worrying about affecting anything else. A few ways to achieve this isolated replication would be to use a container with a seed file or using an RDS Snapshot based approach like Release’s Datasets.&lt;/p&gt;
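&lt;p&gt;As one hedged example of the containerized, seeded approach (image version and file names are placeholders), a docker-compose fragment could mount a seed file into the official Postgres image’s init directory:&lt;/p&gt;

```yaml
services:
  db:
    image: postgres:14               # match your production database version
    environment:
      POSTGRES_PASSWORD: example     # placeholder credential
    volumes:
      # Scripts in this directory run once when the database initializes.
      - ./seed.sql:/docker-entrypoint-initdb.d/seed.sql:ro
```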

&lt;h2&gt;
  
  
  Ephemeral Environments Are Shareable
&lt;/h2&gt;

&lt;p&gt;Being able to see the ephemeral environment and the code changes yourself is great, but garnering feedback from others is even more important. Ephemeral environments shine when multiple stakeholders such as product managers, the QA team, or even customers are able to preview changes before they are generally available. The early feedback cycle helps the engineering team dial in their changes and is accomplished by having the ephemeral environment live on a unique and shareable url. At Release, every environment receives a handle in the form of ‘ted’ + 4 alphanumeric characters and each service within that environment has a shareable url, such as &lt;a href="https://backend-teda1b2.releasehub.com"&gt;https://backend-teda1b2.releasehub.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With all these characteristics in mind, we can now talk about how ephemeral environments could be used practically in a CI/CD pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integration with Collaboration Tools
&lt;/h2&gt;

&lt;p&gt;The last characteristic we talked about was being able to share the link to the ephemeral environment with other people. One way to achieve this shareability without manually sending the link to everyone is to set up integrations with collaboration tools such as GitHub or Jira. If the creation of the ephemeral environment is automated, such as when a pull request is opened, having an integration back to GitHub to post the shareable URL is a great way for other engineers to discover the environment. GitHub provides many ways to share the URL, such as through comments, the status API, or deployments.  &lt;/p&gt;
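&lt;p&gt;For instance, a GitHub Actions step (a sketch; &lt;code&gt;ENV_URL&lt;/code&gt; is an assumed output of an earlier deployment step) could post the URL as a pull request comment:&lt;/p&gt;

```yaml
- name: Comment environment URL
  uses: actions/github-script@v7
  with:
    script: |
      // Posts the ephemeral environment URL on the triggering pull request.
      await github.rest.issues.createComment({
        owner: context.repo.owner,
        repo: context.repo.repo,
        issue_number: context.issue.number,
        body: `Ephemeral environment ready: ${process.env.ENV_URL}`,
      });
```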

&lt;p&gt;Another way to automate sharing the URL might be to connect to Jira and have a naming convention between Jira ticket numbers and branch names which allows the URL to be added to the ticket automatically. Collaboration is at the heart of using ephemeral environments and making it easier to discover the environments through integrations helps drive the team’s success.&lt;/p&gt;

&lt;h2&gt;
  
  
  Smoke or Integration Tests
&lt;/h2&gt;

&lt;p&gt;Unit tests are a core part of the development lifecycle, but one thing they can miss is how the system behaves as a whole outside of that single unit of work. This is where smoke or integration tests can shine. There are many different approaches to these tests depending on the type of application. For example, if the application has only API endpoints, a script using curl commands may suffice to create the test suite. However, if the application has a webpage, then tools like Selenium can be used to create a test suite that actually visits the website and ensures the pages are working. Either approach benefits from having a live ephemeral environment for each branch or unit under test, because without it, there could be a waiting line to deploy the code onto a single staging or testing server to see if the tests pass, and no one wants to wait around for that!&lt;/p&gt;
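&lt;p&gt;A minimal curl-based smoke test could be sketched as the shell function below; the base URL and endpoint paths are placeholders for your environment’s shareable URL and routes:&lt;/p&gt;

```shell
# Smoke-test sketch: hit each endpoint and fail fast on the first error.
# curl --fail returns a non-zero exit code for any unsuccessful response.
smoke_test() {
  base_url="$1"; shift
  for path in "$@"; do
    if curl --silent --fail --output /dev/null "$base_url$path"; then
      echo "PASS $path"
    else
      echo "FAIL $path"
      return 1
    fi
  done
}

# Example (hypothetical environment URL):
# smoke_test "https://backend-teda1b2.releasehub.com" /health /api/status
```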

&lt;p&gt;Putting everything that we’ve talked about together, we can begin to tell a story about how a company can start small with ephemeral environments and continue to scale up as the company grows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling Up With Ephemeral Environments
&lt;/h2&gt;

&lt;p&gt;Here is a fictional story of Acme, Inc., founded recently on the great idea of building completely customisable moose traps and selling them on their website. This story will illustrate how ephemeral environments can work for any size company, of any maturity, and with any workflow.  &lt;/p&gt;

&lt;p&gt;The beginning of a company starts with a pair of co-founders, an engineer and a product manager. The co-founders decide that using ephemeral environments is a good way for the engineer to showcase the work to the product manager and to their potential customers. The endeavour starts with only a few ephemeral environments as the engineering co-founder can only juggle so many projects at once. They are able to create an environment for one task and share it with the product manager and wait for feedback while moving onto the next task.  &lt;/p&gt;

&lt;p&gt;The company finds a few initial customers and decides to hire additional engineers. They also decide to set up automated ephemeral environments with every pull request and receive Slack messages when each environment is ready. The product manager is now able to review each product change without the need for the engineering team to send them the link and the number of ephemeral environments being created concurrently continues to grow.  &lt;/p&gt;

&lt;p&gt;After a successful year of work, the co-founders decide that they want to hire additional product managers and build out a QA team. With the added headcount, they introduce a ticketing system to track the work from inception to development to QA review and finally to deployment. To align the use of ephemeral environments throughout this process, the pull request and environment urls are automatically added to the ticket. Now the product manager and the QA team are able to use the same environment link on the ticket to assess the work and provide feedback. Nobody has to wait for an environment to be freed up for testing to complete their work.  &lt;/p&gt;

&lt;p&gt;After months of product polish and bootstrapping the sales process, the company decides to hire a sales team to kick start the customer acquisition process. The members of the sales team learn about the company’s product but are also introduced to the ephemeral environments. When a sales team member needs to demo the product for a customer, they’re able to create an environment with a clean dataset which they can use to showcase the full breadth of the product without worrying about questions like, “can I delete this?” and, “will everything be reset for my next demo?”, and “will anyone mess with my environment while I’m trying to demonstrate it to the customer?” That level of confidence in their ephemeral environment allows them to focus on the pitch to the customer, rather than dancing around features because using those features might interfere with someone else, or worrying that there might be clutter from previous demos.  &lt;/p&gt;

&lt;p&gt;From the early stages of the company, through the growth and expansion of the product and engineering team, and finally to the sales and customer acquisition front, ephemeral environments played an important role each step of the way. Without these, the company may have been hampered by engineers waiting for time in a single staging environment, or product managers and the QA team testing on different environments and getting different results, or the sales team attempting to close a customer and realizing that the data on their environment had already been changed by a previous sales demo.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding Ephemeral Environments To Your Workflow
&lt;/h2&gt;

&lt;p&gt;Hopefully we’ve shown some compelling features for using ephemeral environments in your (and your team’s) workflow. If you agree that ephemeral environments are valuable, you may want to see how we’ve made them easy at Release.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>startup</category>
    </item>
    <item>
      <title>YC startup, founding engineer,  Regis Wilson chats with Corey Quinn about joining a startup during the pandemic</title>
      <dc:creator>tmcclung</dc:creator>
      <pubDate>Thu, 28 Jan 2021 17:00:51 +0000</pubDate>
      <link>https://forem.com/tmcclung/yc-startup-founding-engineer-regis-wilson-chats-with-corey-quinn-about-joining-a-startup-during-the-pandemic-70p</link>
      <guid>https://forem.com/tmcclung/yc-startup-founding-engineer-regis-wilson-chats-with-corey-quinn-about-joining-a-startup-during-the-pandemic-70p</guid>
      <description>&lt;p&gt;I had the pleasure of reconnecting with Corey Quinn on his excellent podcast, &lt;a href="https://www.lastweekinaws.com/podcast/"&gt;"Screaming in the Cloud"&lt;/a&gt;. The far-ranging topics wandered working together at a company, to discussing how far internet technology has developed, to changing jobs in the middle of a pandemic, to what we do at Release.&lt;br&gt;
You can view the full episode and transcript &lt;a href="https://releasehub.com/blog/podcast-guest-on-screaming-in-the-cloud"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>startup</category>
      <category>devops</category>
    </item>
    <item>
      <title>Kubernetes - How to Debug CrashLoopBackOff in a Container</title>
      <dc:creator>tmcclung</dc:creator>
      <pubDate>Tue, 26 Jan 2021 15:59:08 +0000</pubDate>
      <link>https://forem.com/tmcclung/kubernetes-how-to-debug-crashloopbackoff-in-a-container-18mn</link>
      <guid>https://forem.com/tmcclung/kubernetes-how-to-debug-crashloopbackoff-in-a-container-18mn</guid>
      <description>&lt;h1&gt;
  
  
  Kubernetes - How to Debug CrashLoopBackOff in a Container
&lt;/h1&gt;

&lt;p&gt;If you’ve used Kubernetes (k8s), you’ve probably bumped into the dreaded CrashLoopBackOff. A CrashLoopBackOff can result from several types of k8s misconfiguration (not being able to connect to persistent volumes, init-container misconfiguration, etc.). We aren’t going to cover how to configure k8s properly in this article, but instead will focus on the harder problem of debugging your code or, even worse, someone else’s code 😱&lt;/p&gt;

&lt;p&gt;Here is the output from kubectl describe pod for a CrashLoopBackOff:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Name:           frontend-5c49b595fc-sjzkg
Namespace:      tedbf02-ac-david-nginx-golang-tmcclung-nginx-golang
Priority:       0
Start Time:     Wed, 23 Dec 2020 14:55:49 -0500
Labels:         app=frontend
                pod-template-hash=5c49b595fc
                tier=frontend
Status:         Running
IP:             10.1.31.0
IPs:            &amp;lt;none&amp;gt;
Controlled By:  ReplicaSet/frontend-5c49b595fc
Containers:
  frontend:
    Container ID:   docker://a4ed7efcaaa87fe36342cf7532ff1de5cd51b62d3d681dfb9857999300f6c587
    Image:          .amazonaws.com/tommyrelease/awesome-compose/frontend@sha256:dfd762c
    Image ID:       docker-pullable://.amazonaws.com/tommyrelease/awesome-compose/frontend@sha256:dfd762c
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 24 Jan 2021 20:25:26 -0500
      Finished:     Sun, 24 Jan 2021 20:25:26 -0500
    Ready:          False
    Restart Count:  9043
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two common problems when starting a container are “OCI runtime create failed” (which means you are referencing a binary or script that doesn’t exist in the container) and a container ending in “Completed” or “Error”, both of which mean that the code executing in the container failed to start a service and stay running.&lt;/p&gt;

&lt;p&gt;Here’s an example of an OCI runtime error, trying to execute: “hello crashloop”:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
    Port:          80/TCP
    Host Port:     0/TCP
    Command:
      hello
      crashloop
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "hello": executable file not found in $PATH: unknown
      Exit Code:    127
      Started:      Mon, 25 Jan 2021 22:20:04 -0500
      Finished:     Mon, 25 Jan 2021 22:20:04 -0500
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;K8s gives you the exit status of the process in the container when you look at a pod using kubectl or &lt;a href="https://github.com/derailed/k9s"&gt;k9s&lt;/a&gt;. Common exit statuses from unix processes range from 1 to 125, and each unix command usually has a man page that documents its exit codes. Exit code 137 (128 + 9, the SIGKILL signal number) means that k8s hit the memory limit for your pod and killed your container for you. &lt;/p&gt;
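&lt;p&gt;You can reproduce the 128 + signal arithmetic in any shell: a process killed by SIGKILL reports exit status 137, the same code Kubernetes surfaces for an OOM-killed container.&lt;/p&gt;

```shell
# A child shell killed by SIGKILL (signal 9) exits with 128 + 9 = 137.
sh -c 'kill -9 $$'
echo "exit status: $?"   # prints: exit status: 137
```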

&lt;p&gt;Here is the output from kubectl describe pod, showing the container exit code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 24 Jan 2021 20:25:26 -0500
      Finished:     Sun, 24 Jan 2021 20:25:26 -0500
    Ready:          False
    Restart Count:  9043
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Not all containers are created equal.
&lt;/h2&gt;

&lt;p&gt;Docker allows you to define an &lt;code&gt;Entrypoint&lt;/code&gt; and &lt;code&gt;Cmd&lt;/code&gt; which you can mix and match in a Dockerfile. &lt;code&gt;Entrypoint&lt;/code&gt; is the executable, and &lt;code&gt;Cmd&lt;/code&gt; are the arguments passed to the &lt;code&gt;Entrypoint&lt;/code&gt;. The Dockerfile schema is quite lenient and allows users to set &lt;code&gt;Cmd&lt;/code&gt; without &lt;code&gt;Entrypoint&lt;/code&gt;, which means that the first argument in &lt;code&gt;Cmd&lt;/code&gt; will be the executable to run. &lt;/p&gt;

&lt;p&gt;Note: k8s uses a different naming convention for Docker &lt;code&gt;Entrypoint&lt;/code&gt; and &lt;code&gt;Cmd&lt;/code&gt;. In Kubernetes &lt;code&gt;command&lt;/code&gt; is Docker &lt;code&gt;Entrypoint&lt;/code&gt; and Kubernetes &lt;code&gt;args&lt;/code&gt; is Docker &lt;code&gt;Cmd&lt;/code&gt;. &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Docker field name&lt;/th&gt;
&lt;th&gt;Kubernetes field name&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;The command run by the container&lt;/td&gt;
&lt;td&gt;Entrypoint&lt;/td&gt;
&lt;td&gt;command&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Arguments passed to the command&lt;/td&gt;
&lt;td&gt;Cmd&lt;/td&gt;
&lt;td&gt;args&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
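&lt;p&gt;To make the mapping concrete, here's a minimal container spec (the image and names are hypothetical) that sets both fields:&lt;/p&gt;

```yaml
# Hypothetical pod container spec showing the Docker-to-Kubernetes mapping
containers:
- name: app
  image: example/app:latest
  command: ["/docker-entrypoint.sh"]   # Docker Entrypoint
  args: ["eswrapper"]                  # Docker Cmd
```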

&lt;p&gt;There are a few tricks to understanding how the container you’re working with starts up. To reconstruct the startup command for someone else’s container, you need to know the intended Docker &lt;code&gt;Entrypoint&lt;/code&gt; and &lt;code&gt;Cmd&lt;/code&gt; of the image. If you have the Dockerfile that built the image, you likely already know both, unless you don’t define them yourself and instead inherit them from a base image that sets them.&lt;/p&gt;

&lt;p&gt;When you’re dealing with an off-the-shelf container, someone else’s container without its Dockerfile, or a base image whose Dockerfile you don’t have, you can use the following steps to get the values you need. First, pull the image locally using &lt;code&gt;docker pull&lt;/code&gt;, then inspect it to get the &lt;code&gt;Entrypoint&lt;/code&gt; and &lt;code&gt;Cmd&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;docker pull &amp;lt;image id&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker inspect &amp;lt;image id&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here we use &lt;code&gt;jq&lt;/code&gt; to filter the JSON response from &lt;code&gt;docker inspect&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;david@sega:~: docker pull docker.elastic.co/elasticsearch/elasticsearch:7.10.2
7.10.2: Pulling from elasticsearch/elasticsearch
ddf49b9115d7: Pull complete
e736878e27ad: Pull complete
7487c9dcefbe: Pull complete
9ccb7e6e1f0c: Pull complete
dcec6dec98db: Pull complete
8a10b4854661: Pull complete
1e595aee1b7d: Pull complete
06cc198dbf22: Pull complete
55b9b1b50ed8: Pull complete
Digest: sha256:d528cec81720266974fdfe7a0f12fee928dc02e5a2c754b45b9a84c84695bfd9
Status: Downloaded newer image for docker.elastic.co/elasticsearch/elasticsearch:7.10.2
docker.elastic.co/elasticsearch/elasticsearch:7.10.2
david@sega:~: docker inspect docker.elastic.co/elasticsearch/elasticsearch:7.10.2 | jq '.[0] .ContainerConfig .Entrypoint'
[
  "/tini",
  "--",
  "/usr/local/bin/docker-entrypoint.sh"
]
david@sega:~: docker inspect docker.elastic.co/elasticsearch/elasticsearch:7.10.2 | jq '.[0] .ContainerConfig .Cmd'
[
  "/bin/sh",
  "-c",
  "#(nop) ",
  "CMD [\"eswrapper\"]"
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
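&lt;p&gt;Note: on newer Docker releases the intermediate &lt;code&gt;.ContainerConfig&lt;/code&gt; object may be absent from the inspect output, in which case the final &lt;code&gt;Entrypoint&lt;/code&gt; and &lt;code&gt;Cmd&lt;/code&gt; live under &lt;code&gt;.Config&lt;/code&gt;. Here are the same &lt;code&gt;jq&lt;/code&gt; filters against a mocked-up (abridged, hypothetical) inspect payload, so you can see the shape without pulling an image:&lt;/p&gt;

```shell
# Mock of a `docker inspect` payload (abridged); the real thing has many more keys
printf '%s' '[{"Config":{"Entrypoint":["/tini","--","/usr/local/bin/docker-entrypoint.sh"],"Cmd":["eswrapper"]}}]' | jq '.[0].Config.Entrypoint'
# prints the Entrypoint array
```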



&lt;h2&gt;
  
  
  The Dreaded CrashLoopBackOff
&lt;/h2&gt;

&lt;p&gt;Now that you have all that background, let’s get to debugging the CrashLoopBackOff.&lt;/p&gt;

&lt;p&gt;In order to understand what’s happening, it’s important to be able to inspect the container inside of k8s so the application has all the environment variables and dependent services. Updating the deployment and setting the container &lt;code&gt;Entrypoint&lt;/code&gt; or k8s &lt;code&gt;command&lt;/code&gt; temporarily to &lt;code&gt;tail -f /dev/null&lt;/code&gt; or &lt;code&gt;sleep infinity&lt;/code&gt; will give you an opportunity to debug why the service doesn’t stay running. &lt;/p&gt;

&lt;p&gt;Here’s how to configure k8s to override the container &lt;code&gt;Entrypoint&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;extensions/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;elasticsearch&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;elasticsearch&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;progressDeadlineSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;600&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;revisionHistoryLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
      &lt;span class="na"&gt;tier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxSurge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;25%&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;25%&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RollingUpdate&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;creationTimestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;null&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
        &lt;span class="na"&gt;tier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;tail&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-f"&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/dev/null&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s the configuration in &lt;a href="https://releasehub.com"&gt;Release&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;elasticsearch&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.elastic.co/elasticsearch/elasticsearch:7.10.2&lt;/span&gt;
  &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;tail&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-f"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/dev/null&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now use &lt;code&gt;kubectl&lt;/code&gt; or &lt;code&gt;k9s&lt;/code&gt; to exec into the container and take a look around. Using the &lt;code&gt;Entrypoint&lt;/code&gt; and &lt;code&gt;Cmd&lt;/code&gt; you discovered earlier, you can execute the intended startup command and see how the application is failing.&lt;/p&gt;

&lt;p&gt;Depending on the container you're running, it may be missing many of the tools necessary to debug your problem, such as curl, lsof, and vim. If it’s someone else’s code, you probably don’t know which Linux distribution was used to create the image. We typically try all of the common package managers until we find the right one. Most containers these days are based on Alpine Linux (apk package manager) or on Debian or Ubuntu (apt-get package manager). In some cases we’ve seen CentOS and Fedora, which both use the yum package manager.&lt;/p&gt;

&lt;p&gt;One of the following commands should work depending on the operating system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;apk&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;apt-get&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;yum&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
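&lt;p&gt;If you're not sure which one the image ships, a quick probe (just a sketch) is to test for each in turn:&lt;/p&gt;

```shell
# Check for each common package manager and report the first one found
if command -v apk 1>/dev/null 2>/dev/null; then echo apk
elif command -v apt-get 1>/dev/null 2>/dev/null; then echo apt-get
elif command -v yum 1>/dev/null 2>/dev/null; then echo yum
else echo "no common package manager found"
fi
```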

&lt;p&gt;Dockerfile maintainers often remove the cache from the package manager to shrink the size of the image, so you may also need to run one of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;apk update&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;apt-get update&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;yum makecache&lt;/code&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now you need to add the necessary tools to help with debugging. Depending on the package manager you found, use one of the following commands to add useful debugging tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;apt-get install -y curl vim procps inetutils-tools net-tools lsof&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;apk add curl vim procps net-tools lsof&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;yum install curl vim procps lsof&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this point, it’s up to you to figure out the problem. You can edit files using vim to tweak the container until you understand what’s going on. If you lose track of the files you’ve touched on the container, you can always kill the pod and the container will restart without your changes. Always write down the steps you take to get the container working; you’ll want those notes to alter the Dockerfile or add commands to the container startup scripts. &lt;/p&gt;

&lt;h2&gt;
  
  
  Debugging Your Containers
&lt;/h2&gt;

&lt;p&gt;We have created a simple script that installs all of the debugging tools, as long as you are working with a container that has curl pre-installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# install debugging tools on a container with curl pre-installed&lt;/span&gt;
/bin/sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://raw.githubusercontent.com/releasehub-com/container-debug/main/install.sh&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>debugging</category>
      <category>crashloopbackoff</category>
    </item>
    <item>
      <title>What is a staging environment?</title>
      <dc:creator>tmcclung</dc:creator>
      <pubDate>Sun, 10 Jan 2021 01:13:42 +0000</pubDate>
      <link>https://forem.com/tmcclung/what-is-a-staging-environment-27m7</link>
      <guid>https://forem.com/tmcclung/what-is-a-staging-environment-27m7</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s6GEH1yy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7chi9wwth8ahdpxymk9p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s6GEH1yy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7chi9wwth8ahdpxymk9p.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  What is a Staging Environment?
&lt;/h1&gt;

&lt;p&gt;An environment, in the traditional sense, is defined as the surroundings in which a person, animal, or plant lives or operates. It’s where you exist, operate and thrive. The definition of an environment in the computer systems context would be the surroundings in which code, software or applications live or operate. Or simply, an environment is the surroundings where your code runs.&lt;/p&gt;

&lt;p&gt;There are many types of environments for software systems. Development environments, production environments, pre-production environments and staging environments, to name a few. All of these types of environments are just qualifying the purpose of the surroundings your code is running in. Each environment has a purpose. A development environment is where your code runs when you are developing your software. A production environment is where your code runs when it's in front of end users, i.e. in production.&lt;/p&gt;

&lt;p&gt;A staging environment (sometimes called a pre-production environment) is where your code is 'staged' before it runs in front of users, so you can ensure it works as designed. Staging environments can host automated tests, or let QA teams, Product Managers, and other stakeholders validate that features and functionality were built according to specification. Staging environments are critical to building software, but building them is costly and time-consuming, so many organizations have only one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Most organizations have a single staging environment to their detriment
&lt;/h2&gt;

&lt;p&gt;Traditionally most organizations rely on a single staging environment for their developers. As an organization grows, this becomes a major bottleneck in delivering software quickly. Because developers have to share time on the environment, it has to be carefully maintained as tests finish and data is changed. The last developer on the staging environment may have changed it in a material way causing confusion and issues for the next developer to use the environment. Maintaining this critical resource becomes incredibly important and incredibly difficult as the complexity and size of an organization grows.&lt;/p&gt;

&lt;p&gt;So why do most organizations rely on just one staging environment? The reason is usually unintentional if you think about the evolution of the development organization from the earliest days. When you have one or a few engineers, a single staging environment is sufficient. The complexity of your systems is low, and keeping a staging environment up to date is manageable.&lt;/p&gt;

&lt;p&gt;As the organization grows and complexity increases, evolving the environment ecosystem becomes a tax that most organizations don’t pay. They move fast and furious on new product features while doing their best to keep their infrastructure up and running. By the time product velocity has slowed, the complexity of their systems makes duplicating environments incredibly difficult. And now they’re faced with an expensive effort to play catch up and try to remove the bottlenecks around a single staging environment.&lt;/p&gt;

&lt;p&gt;The organization has two choices: invest in solving the problem, or live with slowing product velocity. The cost to solve the problem is high in all scenarios. They can hire specialists to continue manually managing more environments, or they can invest in building a platform to automate the creation of environments. There are plenty of problems to solve, including keeping environments in sync with production, ensuring pre-production data is representative of production, creating environments automatically, and moving code from one environment to another. Unfortunately, most organizations can’t afford to invest heavily in infrastructure, so they choose to live with the single staging environment and accept slowing product velocity.&lt;/p&gt;

&lt;p&gt;Companies that choose to build an automated solution in-house do so believing that a platform enabling developers to move quickly will pay off in the long run. Those with the resources to pull it off end up with a distinct competitive advantage. Companies such as Facebook, Google, Apple, and Netflix invest heavily in infrastructure and tooling for exactly this reason. As of this writing, Facebook has 338 open infrastructure roles and Google has 1072. There’s a reason the big players invest here: it gives them a competitive advantage, but it clearly isn’t cheap.&lt;/p&gt;

&lt;p&gt;What is a company to do? Invest heavily? Build internally? Buy off the shelf? There are solutions on the market, including &lt;a href="https://releasehub.com"&gt;Release&lt;/a&gt; that will reduce the cost dramatically. You can read more about this in our overview of the tradeoffs of &lt;a href="https://releasehub.com/build-vs-buy"&gt;building vs. buying&lt;/a&gt; a solution to staging environment management.&lt;/p&gt;

&lt;h2&gt;
  
  
  How can you move faster with multiple staging environments?
&lt;/h2&gt;

&lt;p&gt;What benefits do you gain as an organization if you can enable your organization with on-demand staging environments?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Higher product velocity means features can be released faster to customers.&lt;/li&gt;
&lt;li&gt;No more “works on my machine”, a common complaint from developers whose local environment doesn't match production.&lt;/li&gt;
&lt;li&gt;Higher quality software releases with fewer defects.&lt;/li&gt;
&lt;li&gt;Less frustration in your organization while waiting on shared resources.&lt;/li&gt;
&lt;li&gt;An advantage in time to market and experimentation against competitors.&lt;/li&gt;
&lt;li&gt;Happier customers, developers and stakeholders.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's no wonder the big companies invest so heavily in DevOps and infrastructure employees. They’ve built internal systems that remove development bottlenecks and environment scarcity.&lt;/p&gt;

&lt;p&gt;For organizations to compete in the modern software development age, environment management is a critical element of any organization that wants to move fast. On-demand staging environments are necessary to unlock the potential of your teams and are the development resources that are most needed to deliver ideas into the world.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>startup</category>
      <category>aws</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>You don't need to know what you’re doing, you need to iterate and remove bottlenecks</title>
      <dc:creator>tmcclung</dc:creator>
      <pubDate>Wed, 06 Jan 2021 18:58:58 +0000</pubDate>
      <link>https://forem.com/tmcclung/you-don-t-need-to-know-what-you-re-doing-you-need-to-iterate-and-remove-bottlenecks-3bf3</link>
      <guid>https://forem.com/tmcclung/you-don-t-need-to-know-what-you-re-doing-you-need-to-iterate-and-remove-bottlenecks-3bf3</guid>
      <description>&lt;p&gt;Do you ever feel like you have no idea what you’re doing? Like you’re just kind of going along with things, doing your &lt;em&gt;best&lt;/em&gt; but not really sure if you’re doing it &lt;em&gt;right&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;Here’s a little secret I’ve learned over the years: &lt;em&gt;Nobody knows what they’re doing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Denis, with his steadfast approach and surgeon-like precision? Hannah, with her pen and notebook, who writes down every detail to later review? Kendall, who is quick on the trigger to any bizarre question that upper management tosses her way and always leaves them with good laughs and big smiles?&lt;/p&gt;

&lt;p&gt;All of them? &lt;em&gt;They have no idea what they’re doing.&lt;/em&gt; They don’t &lt;strong&gt;have&lt;/strong&gt; to know what they’re doing. They’ve unlocked the biggest secret that formal education has desperately tried to unteach us: Failing is &lt;em&gt;fine&lt;/em&gt;! Failing is &lt;strong&gt;good&lt;/strong&gt;! Failing is &lt;strong&gt;&lt;em&gt;the fastest way to success&lt;/em&gt;&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/WJeSMrXAhgHYU4fZHfaaH/ef143f37f81945a4a0710040145433b3/1OL6dFH.gif" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/WJeSMrXAhgHYU4fZHfaaH/ef143f37f81945a4a0710040145433b3/1OL6dFH.gif" alt="slimemold"&gt;&lt;/a&gt;&lt;/p&gt;Brainless, single-cell slime hunting for food.

&lt;p&gt;In the world of software development, we’ve already accepted this as scientific fact. We embrace this and weave it into the foundation of our methodologies and systems. We know, with absolute certainty, that nothing is certain. The winner will be the little boat whose chart is scribbled on the back of a napkin and can pivot on a dime, not the monolithic Titanic who, despite the captain’s best efforts, is going to collide with that iceberg.&lt;/p&gt;

&lt;p&gt;However, even though we recognize the advantages and the &lt;em&gt;need&lt;/em&gt; to be &lt;strong&gt;agile&lt;/strong&gt; in this industry, that does not mean that we’ve mastered all the ways to optimize for it. While different software development methodologies have their place for different problem spaces (just as different programming languages are better suited to some problems than others), one particular approach to &lt;strong&gt;&lt;em&gt;failing fast&lt;/em&gt;&lt;/strong&gt; has gained a lot of traction over the past decade. The &lt;strong&gt;DevOps&lt;/strong&gt; methodology was forged from the fires of Agile, and today DevOps has been crowned the champion of how to build great software wicked fast. Modern technology companies that strive to compete place DevOps at the forefront of their minds.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is DevOps?
&lt;/h1&gt;

&lt;p&gt;If you’re not already familiar with DevOps, the term can be a little confusing. DevOps began as a cultural movement within companies. Rather than Developer teams taking their code and throwing it over the fence for the Operations team to deploy and monitor (while the Security team haughtily throws their arms in the air over any concern—be it major or minor—introduced by the other teams), DevOps works to tear down these artificial walls.&lt;/p&gt;

&lt;p&gt;The way this works in practice is through tight feedback loops and blurring the edges of responsibility. “DevOps” has turned from an idea into a career where you build a racetrack for product development. This enables the idea-to-deployment cycle to hasten. No longer do we need to take months to plan, build, test, release, deploy, evaluate. No longer do we need to make sure every release is perfect “because there’s no going back”. Now we can do what we do best: &lt;em&gt;Make mistakes.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You’ve heard of (or worked with) companies that deploy their code tens to thousands of times a day. This is incredibly powerful. Ideas always look a little different in practice and sometimes they turn out to be bad ideas. But sometimes those silly ideas that would have you laughed out of a boardroom turn out to be the ones worth their weight in gold. With the ability to experiment and quickly reset, we can fractal our way to the perfect solution for any problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/75VdZ4jr0QwzXgpU6gk6vW/fff263c59f6582aaf3fd60d72312ebfb/ComplexGargantuanFluke-size_restricted.gif" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/75VdZ4jr0QwzXgpU6gk6vW/fff263c59f6582aaf3fd60d72312ebfb/ComplexGargantuanFluke-size_restricted.gif" alt="ComplexGargantuanFluke"&gt;&lt;/a&gt;&lt;/p&gt;Twining motion of vines trying to find something to climb.
&lt;h1&gt;
  
  
  Why do I need Environments?
&lt;/h1&gt;

&lt;p&gt;If your company is big enough, has the capital, and understands the need, you may be lucky enough to have a team of DevOps Engineers who work to make sure everyone has the environments they need and the tools to build and deploy code. You have your build and deploy pipelines. Your code only takes one push, a merge, and a couple of button clicks to make its way into production. You’re. Living. The. Life. &lt;/p&gt;

&lt;p&gt;Okay, sure, there are rough edges. The Developers and QA might be a little agitated that they have “bad data” in their databases. This bad data causes weird shadow bugs that wouldn’t exist “for a real user”. You have Product Managers and UX/UI Engineers digging around your QA and Staging Environments to make sure the feature matches the requirements, but they run into these shadow bugs and in a panic, hold a meeting to discuss “why the application is broken”. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;C’est la vie.&lt;/em&gt; Your environments are starting to drift apart, but it’s okay. QA finds a bug, but they can’t be sure when it was introduced. The Developer isn’t sure either; several features others were working on have all been merged in. One major feature is ready, but the others are blocked. Hours tick into the evening as the team scrambles to fix the concerns, introducing more in their haste, until finally QA calls to cancel today’s deployment. Disappointed and mentally taxed, everyone goes home frustrated.&lt;/p&gt;

&lt;p&gt;This happens regularly. It’s a clog in the system, but it is manageable. However, what you don’t know yet is that your competitor has committed to allocating resources to addressing this problem directly. It took them twelve months (a bit off from their original six-month estimate), but now the tooling is built out and QA can create a build from any branch, on the fly. Now the data is fresh, every time, freeing QA to get their hands dirty in this sandbox. The Product team is excited to be able to look at new features side by side before they’re released, and features can even be put on hold or tinkered with in isolation. &lt;/p&gt;

&lt;p&gt;Meanwhile, your company is falling behind. The only way you’d be able to keep up is to spend the money &lt;em&gt;and the time&lt;/em&gt; to build this for yourselves. It’s a big investment, a big time commitment, and your company is worried. What if the project fails? There’s a lot at stake here.&lt;/p&gt;

&lt;p&gt;Environments are the key to rapid prototyping and quick feature releases, while maintaining a solid, battle-tested product. But environments are also expensive. The upfront and maintenance cost put them outside the scope for many companies and by the time a strong need rears its head, the architecture has evolved into a technical labyrinth. &lt;/p&gt;

&lt;p&gt;At Release, we understand this problem in depth. We have customers who use Release to give themselves a competitive advantage after they realized environments were holding them back. A new concept in DevOps has emerged, called &lt;a href="https://releasehub.com/ephemeral-environments"&gt;Ephemeral Environments&lt;/a&gt;, which eliminates the bottleneck of shared staging environments. An Ephemeral Environment is created automatically when a developer opens a pull request and contains just the changes on their branch. The environment spins up for UAT testing, and when the branch is merged, it disappears. Developers never wait for access to environments, because the environments appear as part of their development workflow.&lt;/p&gt;

&lt;p&gt;We're using Ephemeral Environments ourselves, and they have put us on the fast track to shipping deliverables we can stand behind. The unfortunate truth is that building this infrastructure is necessary to have a fighting chance against the heavyweights, but the cost and time required can be astronomical. The &lt;a href="https://releasehub.com"&gt;Release&lt;/a&gt; platform aims to solve this problem directly by providing Environments-as-a-Service. You get all the advantages of environments without the costs or headache of Doing It Yourself, freeing you up to focus on the business and the product needs. &lt;/p&gt;

&lt;h1&gt;
  
  
  In short...
&lt;/h1&gt;

&lt;p&gt;Remember: Nobody knows what they’re doing. And that’s okay. Nobody’s ever known what they’re doing; we’re all just stumbling around. But if we stumble with purpose, we can fall into something that works. If we fail fast, we get to success faster. Our methodologies in software development reflect this, but in practice, building supportive infrastructure is costly. If we can get to a place where we optimize for failing fast by deploying early and often, we can more quickly find what our product needs to be. We can find success &lt;em&gt;without ever having to know&lt;/em&gt; what we’re doing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ephemeral Environments
&lt;/h2&gt;

&lt;p&gt;Curious about Ephemeral Environments and what they can do for you? Check out &lt;a href="https://releasehub.com/ephemeral-environments"&gt;this article&lt;/a&gt; on what Ephemeral Environments are and what they can do for you. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Tutorial - Minecraft server running in Kubernetes on releasehub.com (free)</title>
      <dc:creator>tmcclung</dc:creator>
      <pubDate>Thu, 17 Dec 2020 23:39:06 +0000</pubDate>
      <link>https://forem.com/tmcclung/tutorial-minecraft-server-running-in-kubernetes-on-releaseapp-io-free-d1a</link>
      <guid>https://forem.com/tmcclung/tutorial-minecraft-server-running-in-kubernetes-on-releaseapp-io-free-d1a</guid>
      <description>&lt;h1&gt;
  
  
  Setup your own free Minecraft Server running on releasehub.com
&lt;/h1&gt;

&lt;p&gt;One of the coolest things about working at &lt;a href="https://releasehub.com"&gt;Release&lt;/a&gt; has been figuring out all of the fun stuff that we can do with the platform. While our main use case is helping people build environments for their applications, anything that runs in Docker will run easily on Release.&lt;/p&gt;

&lt;p&gt;Early on, I found a handful of repos that helped us build out our platform. The most fun all along has been the &lt;a href="https://github.com/itzg/docker-minecraft-server"&gt;docker-minecraft-server from itzg&lt;/a&gt;. I used it in the early days because it had a little complexity and a fully working docker-compose ecosystem to play around with. And it has a great side effect: whenever it runs, I let my kids test it out!&lt;/p&gt;

&lt;p&gt;So while you’re sipping on egg nog and enjoying a 2020 Holiday season on COVID lock-down, here’s a walkthrough of how to get your very own free Minecraft server up and running on Release. &lt;/p&gt;

&lt;p&gt;I highly recommend following along with the video tutorial. I've also included step-by-step instructions for anyone who learns better through reading, or in case you get confused about a step.&lt;/p&gt;

&lt;p&gt;If you want to see a live version of this setup, we fired up our own Minecraft server using these steps. So if you're bored over the Holidays, pop in and say hello! Here's our server name:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Play Minecraft With Us on the Release Team Minecraft Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;team-release-minecraft.releaseapp.io&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Full video walkthrough of this tutorial
&lt;/h1&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/cBDr5LwJb34"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h1&gt;
  
  
  Detailed instructions to get your Minecraft Server up and running
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;At the time of this writing, we have a &lt;a href="https://releasehub.com/pricing-page"&gt;“Starter” plan&lt;/a&gt; that's free, so you can give this a shot and have some Holiday fun on Release. Since we host all of the environments on Release, the Starter plan has a limit of 2Gb per container. That's sufficient for a Minecraft server for your kids and their friends.&lt;/p&gt;

&lt;p&gt;To get started, take a look at &lt;a href="https://github.com/awesome-release/docker-minecraft-server"&gt;https://github.com/awesome-release/docker-minecraft-server&lt;/a&gt;, which we cloned from &lt;a href="https://github.com/itzg/docker-minecraft-server"&gt;itzg&lt;/a&gt;. Fork or clone this repo into your GitHub account so you’ve got your own version of it to play around with.&lt;/p&gt;

&lt;p&gt;Once you’ve got your own repo to work with, I recommend taking a quick read through the &lt;a href="https://github.com/awesome-release/docker-minecraft-server/blob/master/README.md"&gt;README&lt;/a&gt;; there are a lot of configuration options and the documentation is extremely well done. &lt;/p&gt;

&lt;p&gt;We're also going to use the &lt;a href="https://github.com/rcon-web-admin/rcon-web-admin"&gt;Rcon Web Admin portal&lt;/a&gt;. Take a look at its documentation, &lt;em&gt;specifically the environment variables that can be configured.&lt;/em&gt; itzg made a version of this for Docker called &lt;a href="https://github.com/itzg/docker-rcon-web-admin"&gt;docker-rcon-web-admin&lt;/a&gt; that we are using when we load the rcon and rcon-ws services in this tutorial.&lt;/p&gt;

&lt;p&gt;For this walkthrough, we’re going to bring up a vanilla Minecraft server with an Rcon administrative portal running in a standalone container. This will let you and your kids have full control over the Minecraft server and ban friends who can’t fight off Zombie Pig Men.  Here's an overview of what the system architecture looks like.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/2wNwcLD1zGZRa5lebjeTtx/1f910bef3c2f3b07bdb076b6185493f9/Screen_Shot_2020-12-14_at_4.50.14_PM.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/2wNwcLD1zGZRa5lebjeTtx/1f910bef3c2f3b07bdb076b6185493f9/Screen_Shot_2020-12-14_at_4.50.14_PM.png" alt="High level overview of Minecraft with Rcon"&gt;&lt;/a&gt;&lt;/p&gt;
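&lt;p&gt;As a rough sketch of what that wiring looks like, here is a condensed compose file in the spirit of the repo's &lt;code&gt;examples/docker-compose-with-rcon.yml&lt;/code&gt; (trimmed for illustration; the image tags and port numbers below are assumptions, so check the actual file in the repo):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3'
services:
  minecraft:
    image: itzg/minecraft-server     # the Minecraft server itself
    ports:
      - "25565:25565"                # game port, exposed to players
    environment:
      EULA: "TRUE"                   # required: accept the Minecraft EULA
  rcon:
    image: itzg/rcon                 # Rcon Web Admin portal (assumed image name)
    ports:
      - "4326:4326"                  # admin web UI
      - "4327:4327"                  # websocket used by the UI
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;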

&lt;p&gt;The master branch of this repo is already set up to work with this docker-compose file in Release. Take a look at the .release.yaml file in the root of the repo. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;compose: examples/docker-compose-with-rcon.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This sets the &lt;code&gt;compose&lt;/code&gt; directive to &lt;code&gt;examples/docker-compose-with-rcon.yml&lt;/code&gt;, which tells Release that's the docker-compose file you want to use. If you want to play around with a Forge server or other examples, just point the .release.yaml file at the corresponding docker-compose file.&lt;/p&gt;
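&lt;p&gt;For example, to try another of the repo's examples, you would point the one-line .release.yaml at that compose file instead (the filename below is illustrative; use one that actually exists in the repo's &lt;code&gt;examples&lt;/code&gt; directory):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;compose: examples/docker-compose-forge.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;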

&lt;h2&gt;
  
  
  1. Create a new application in Release
&lt;/h2&gt;

&lt;p&gt;Ok, let’s setup the server.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fork, clone or copy this repo: &lt;a href="https://github.com/awesome-release/docker-minecraft-server"&gt;https://github.com/awesome-release/docker-minecraft-server&lt;/a&gt; &lt;a href="https://docs.github.com/en/free-pro-team@latest/github/creating-cloning-and-archiving-repositories/duplicating-a-repository"&gt;Here are some simple instructions on how to copy this repo over to your account.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Login or create an account on Release here: &lt;a href="https://releasehub.com"&gt;https://releasehub.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Follow the steps to create your account. Once your account is created, click the “Create an application” button.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/6VWjyyZyzQTIMKKOhMtDJR/968a10010225386a27a2fd8140bdf11c/Screen_Shot_2020-12-17_at_8.34.49_AM.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/6VWjyyZyzQTIMKKOhMtDJR/968a10010225386a27a2fd8140bdf11c/Screen_Shot_2020-12-17_at_8.34.49_AM.png" alt="Create new app button"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select your &lt;code&gt;docker-minecraft-server&lt;/code&gt; repo. If you don’t see it in the list, click &lt;code&gt;Configure the Release app on Github&lt;/code&gt; link to assign permissions to your repo.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/150rdMfSZdsZNfKjAO2wnp/58321bf2af11408179188ba4af26b477/Screen_Shot_2020-12-17_at_8.37.00_AM.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/150rdMfSZdsZNfKjAO2wnp/58321bf2af11408179188ba4af26b477/Screen_Shot_2020-12-17_at_8.37.00_AM.png" alt="Select your repo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add a name for your application. Note this name is used in your server hostname.&lt;/li&gt;
&lt;li&gt;Click Generate App Template. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/3AKC0zgywkvWOKaMANDqE2/0f6d197f208a81718c4ef13211ba23d2/Screen_Shot_2020-12-17_at_8.38.03_AM.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/3AKC0zgywkvWOKaMANDqE2/0f6d197f208a81718c4ef13211ba23d2/Screen_Shot_2020-12-17_at_8.38.03_AM.png" alt="Click Generate App Template"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Edit the generated application template
&lt;/h2&gt;

&lt;p&gt;Release automatically detects and creates an application template from the docker-compose file but there are a few edits we need to make based on how this repo works and to make sure we can fit the server into the Starter plan. If you want to dive in, &lt;a href="https://docs.releasehub.com/reference-guide/application-settings/application-template"&gt;read the documentation about Release Application Templates.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a little background, take a look at this diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/31tYP2pKpmfWZcrZ1BGXNP/f93c93fea06f66b3c203a3b792eb7268/Screen_Shot_2020-12-17_at_8.56.30_AM.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/31tYP2pKpmfWZcrZ1BGXNP/f93c93fea06f66b3c203a3b792eb7268/Screen_Shot_2020-12-17_at_8.56.30_AM.png" alt="docker-minecraft-server networking architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to make our application reflect this networking setup. &lt;/p&gt;

&lt;p&gt;In Release we have two different kinds of load balancers, based on Amazon's &lt;a href="https://aws.amazon.com/elasticloadbalancing/?elb-whats-new.sort-by=item.additionalFields.postDateTime&amp;amp;elb-whats-new.sort-order=desc"&gt;ELBs&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html"&gt;ALBs&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;We also need to make sure we're using the correct type of port for the use case. There are two types of ports: &lt;code&gt;container_port&lt;/code&gt; and &lt;code&gt;node_port&lt;/code&gt;. In short, a &lt;code&gt;node_port&lt;/code&gt; is exposed to the Internet and a &lt;code&gt;container_port&lt;/code&gt; is not. Because the rcon service is only internally facing, we want to set its port to a &lt;code&gt;container_port&lt;/code&gt;. For more info on setting the correct type of port, &lt;a href="https://docs.releasehub.com/reference-guide/application-settings/application-template#ports"&gt;read about ports in Release&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So let's make the changes necessary to setup the Application Template correctly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Update memory to 2Gb
&lt;/h3&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/1qhpslhwNOzPwpGPUjFdpg/eeebe9a0fcb4fb89fb2117492bcd777b/Screen_Shot_2020-12-17_at_8.49.13_AM.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/1qhpslhwNOzPwpGPUjFdpg/eeebe9a0fcb4fb89fb2117492bcd777b/Screen_Shot_2020-12-17_at_8.49.13_AM.png" alt="Increase to 2gi"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Minecraft server is set up to use 1Gb of max memory, so we need to set the default memory limit in Release to 2Gb to leave enough room for overhead. Edit the app template to allow the services to use up to 2Gb of memory.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Update hostnames and ports
&lt;/h3&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/2mbzeFvsEzBbHDcZDQZfNs/7a6d72c07e0a2962d1133c72a2863e32/ports-minecraft.gif" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/2mbzeFvsEzBbHDcZDQZfNs/7a6d72c07e0a2962d1133c72a2863e32/ports-minecraft.gif" alt="Change ports settings on the minecraft service"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Change the port &lt;code&gt;type&lt;/code&gt; for 25575 to &lt;code&gt;container_port&lt;/code&gt; and remove the &lt;code&gt;target_port&lt;/code&gt; line.&lt;/li&gt;
&lt;li&gt;Add &lt;code&gt;loadbalancer: true&lt;/code&gt; for port 25565.&lt;/li&gt;
&lt;li&gt;Add a hostname field at the same level as ports in the file and set it to &lt;code&gt;hostname: my-server-${env_id}-${domain}&lt;/code&gt;. You can set &lt;code&gt;my-server&lt;/code&gt; to anything you'd like. &lt;code&gt;${env_id}&lt;/code&gt; and &lt;code&gt;${domain}&lt;/code&gt; are variables that Release will automatically fill in to customize your domain.&lt;/li&gt;
&lt;/ul&gt;
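&lt;p&gt;After those edits, the hostname and ports section of the &lt;code&gt;minecraft&lt;/code&gt; service in your Application Template should look roughly like this (a sketch, not the full generated template; compare against the ports documentation linked above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minecraft:
  hostname: my-server-${env_id}-${domain}
  ports:
    - type: node_port        # exposed to the Internet for game clients
      port: '25565'
      loadbalancer: true
    - type: container_port   # internal only, used by rcon
      port: '25575'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;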

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/5Hq9WPLrep8YTOStorAv72/308cec8a27f9d421a15f151eb5168139/remove-minecraft-alb.gif" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/5Hq9WPLrep8YTOStorAv72/308cec8a27f9d421a15f151eb5168139/remove-minecraft-alb.gif" alt="remove minecraft alb"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove the ALB hostname for the &lt;code&gt;minecraft&lt;/code&gt; service. (We only need the &lt;code&gt;minecraft&lt;/code&gt; service exposed on port 25565 via an ELB, not an ALB, which is for HTTP/HTTPS.)&lt;/li&gt;
&lt;li&gt;Click "Save and Continue".&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3. Set Up Environment Variables
&lt;/h2&gt;

&lt;p&gt;We need to set a few passwords via environment variables and an &lt;a href="https://docs.releasehub.com/reference-guide/reference-examples/environment-variable-mappings"&gt;environment variable mapping&lt;/a&gt; for the &lt;code&gt;rcon&lt;/code&gt; websocket hostname. For more information about these environment variables, see the documentation/README files here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/awesome-release/docker-minecraft-server/blob/master/README.md#rcon"&gt;Rcon environment variables for the &lt;code&gt;minecraft&lt;/code&gt; service.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/rcon-web-admin/rcon-web-admin#environment-variables"&gt;Environment variables for rcon-web-admin&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this diagram we show the passwords that need to be set via environment variables.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/3xnIcjv4BCvfY0Ho2byE2p/8f5951634c0a30eae280e3c832d47867/Screen_Shot_2020-12-17_at_10.37.01_AM.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/3xnIcjv4BCvfY0Ho2byE2p/8f5951634c0a30eae280e3c832d47867/Screen_Shot_2020-12-17_at_10.37.01_AM.png" alt="Passwords via environment variables"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Set up passwords via environment variables
&lt;/h3&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/3pzhiCBkhpQ8R4JLKv1YEc/7f53d78a57268a98bdf530e520760407/rcon-password.gif" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/3pzhiCBkhpQ8R4JLKv1YEc/7f53d78a57268a98bdf530e520760407/rcon-password.gif" alt="rcon password envs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the &lt;code&gt;minecraft&lt;/code&gt; service we need to set a password for its local &lt;code&gt;rcon&lt;/code&gt; service on port 25575 so other containers can connect to it. &lt;code&gt;RCON_PASSWORD&lt;/code&gt; is the environment variable that needs to be set for this. On the &lt;code&gt;rcon&lt;/code&gt; and &lt;code&gt;rcon-ws&lt;/code&gt; services, we need to set &lt;code&gt;RWA_RCON_PASSWORD&lt;/code&gt; to the same value so those services can control the minecraft server.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/3Bqx50fbcFEQo8TeLaWRB1/fb36558ae0cf90fd33ae91a01909438b/rwa-password.gif" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/3Bqx50fbcFEQo8TeLaWRB1/fb36558ae0cf90fd33ae91a01909438b/rwa-password.gif" alt="rcon web password"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on "Edit" for "Default Environment Variables".&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;RCON_PASSWORD&lt;/code&gt; in the &lt;code&gt;minecraft&lt;/code&gt; service and add &lt;code&gt;secret: true&lt;/code&gt; to encrypt this value in the database.&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;RWA_RCON_PASSWORD&lt;/code&gt; to the same value as &lt;code&gt;RCON_PASSWORD&lt;/code&gt; on both the &lt;code&gt;rcon&lt;/code&gt; and &lt;code&gt;rcon-ws&lt;/code&gt; services.&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;RWA_PASSWORD&lt;/code&gt; which will be the default password used for the RCON Web Administration tool in both the &lt;code&gt;rcon&lt;/code&gt; and &lt;code&gt;rcon-ws&lt;/code&gt; services. Make sure to add &lt;code&gt;secret: true&lt;/code&gt; to encrypt this value.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Set up the RWA_WEBSOCKET_URL_SSL environment variable mapping
&lt;/h3&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/H7q1NXUl6g0oZ6iYgjnMH/c49e8375ed880c46c302dbb0468f2cd2/Screen_Shot_2020-12-17_at_10.55.38_AM.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/H7q1NXUl6g0oZ6iYgjnMH/c49e8375ed880c46c302dbb0468f2cd2/Screen_Shot_2020-12-17_at_10.55.38_AM.png" alt="rwa websocket hostname mapping"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The last environment variable we need to add is a mapping that tells Release to map &lt;code&gt;RWA_WEBSOCKET_URL_SSL&lt;/code&gt; to &lt;code&gt;RCON_WS_INGRESS_HOST&lt;/code&gt;, a hostname environment variable that Release creates dynamically. &lt;code&gt;RWA_WEBSOCKET_URL_SSL&lt;/code&gt; tells the Rcon Web Admin tool which host URL is running the websocket for this service: in our case, the &lt;code&gt;rcon-ws&lt;/code&gt; service on port 4327. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;RCON_WS_INGRESS_HOST&lt;/code&gt; is automatically created every time a new environment is created by Release and always contains the correct hostname for &lt;code&gt;rcon-ws&lt;/code&gt;. This value can change when new environments are created, so we can't just hard-code &lt;code&gt;RWA_WEBSOCKET_URL_SSL&lt;/code&gt;. This is where an environment variable mapping comes into play. The diagram above represents the change we need to add in our Default Environment Variables.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/1W93m0C2YME1uskGVo6fqA/d1c09555747b8d2578f597a44c3fd3d8/env-mappings.gif" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/1W93m0C2YME1uskGVo6fqA/d1c09555747b8d2578f597a44c3fd3d8/env-mappings.gif" alt="set env mapping"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add a &lt;code&gt;mapping:&lt;/code&gt; directive at the top of the file that maps &lt;code&gt;RWA_WEBSOCKET_URL_SSL&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mapping:
  RWA_WEBSOCKET_URL_SSL: wss://${RCON_WS_INGRESS_HOST}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When these changes and your env passwords have been made, your file should look like this:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mapping:
  RWA_WEBSOCKET_URL_SSL: wss://${RCON_WS_INGRESS_HOST}
defaults:
  - key: RWA_RCON_HOST
    value: minecraft
services:
  minecraft:
    - key: EULA
      value: 'TRUE'
    - key: MAX_MEMORY
      value: 1G
    - key: ENABLE_RCON
      value: true
    - key: RCON_PASSWORD
      value: "rcon_password"
      secret: true
    - key: VIEW_DISTANCE
      value: 15
    - key: MAX_BUILD_HEIGHT
      value: 256
  rcon:
    - key: RWA_RCON_HOST
      value: minecraft
    - key: RWA_RCON_PASSWORD
      value: "rcon_password"
      secret: true
    - key: RWA_PASSWORD
      value: "rwa_password"
      secret: true
  rcon-ws:
    - key: RWA_RCON_HOST
      value: minecraft
    - key: RWA_RCON_PASSWORD
      value: "rcon_password"
      secret: true
    - key: RWA_PASSWORD
      value: "rwa_password"
      secret: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Click 'Save &amp;amp; Deploy'&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/FKamQWEAPHoWSCN6y3rIJ/05f8d3e75578b3042fe253226625dc03/deploy-and-inspect.gif" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/FKamQWEAPHoWSCN6y3rIJ/05f8d3e75578b3042fe253226625dc03/deploy-and-inspect.gif" alt="deploy and view environment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your environment is now deploying. You can click on the deploy and watch its progress. When it's done, navigate to the environment screen and inspect your created hostnames.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  4. Set up the Minecraft Client to connect to your new server and log in to the RCON Web Admin tool
&lt;/h2&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/tsY55jHlEsGV1qQdVtJRl/3590ce1e1adde7289c182ed44d708d17/setup-minecraft-server.gif" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/tsY55jHlEsGV1qQdVtJRl/3590ce1e1adde7289c182ed44d708d17/setup-minecraft-server.gif" alt="Setup minecraft server"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using the &lt;code&gt;minecraft&lt;/code&gt; hostname that was created by Release, create a new server within the Minecraft Client.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/2bqtxnSkRmiTzFGDPe2nJw/13e466cf36005fac0ba61dd1a4501e31/rcon-web-admin.gif" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/2bqtxnSkRmiTzFGDPe2nJw/13e466cf36005fac0ba61dd1a4501e31/rcon-web-admin.gif" alt="rcon web admin tool"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on the &lt;code&gt;rcon&lt;/code&gt; hostname that was created by Release to access the RCON Web Admin user interface.&lt;/li&gt;
&lt;li&gt;Login using the same password you set for the &lt;code&gt;RWA_PASSWORD&lt;/code&gt; environment variable.&lt;/li&gt;
&lt;li&gt;Add the &lt;code&gt;minecraft&lt;/code&gt; server.&lt;/li&gt;
&lt;li&gt;Add the &lt;code&gt;console&lt;/code&gt; widget.&lt;/li&gt;
&lt;li&gt;Run admin commands on your server!&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What if it doesn't work???
&lt;/h2&gt;

&lt;p&gt;If for any reason you made a mistake and something doesn't work, you can navigate to your App Settings and edit your Application Template and your Default Environment Variables. Double-check you've made the proper settings. Once you've made these edits, navigate to your environments screen, delete your environment, and create a new one. The beauty of Release is that environments can be torn down and brought back up whenever you want. Here's the link to the docs on how to edit your App Template and Default Environment Variables.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.releasehub.com/reference-guide/application-settings"&gt;Modify Application Settings&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Delete and Create a new Environment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/690Qo5GAdls6r4zotrNRXD/09ecbd813c067044474a5e432dda5067/delete-and-create.gif" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/690Qo5GAdls6r4zotrNRXD/09ecbd813c067044474a5e432dda5067/delete-and-create.gif" alt="delete and create new environment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;You now have your very own Minecraft Server running on the Release Starter Plan. This server was created in an Ephemeral Environment in Release and will destroy itself in 7 days. If you'd like your server to remain indefinitely, you'll need to delete the environment and re-create it as a permanent environment. &lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/qf96nnjfyr2y/690Qo5GAdls6r4zotrNRXD/09ecbd813c067044474a5e432dda5067/delete-and-create.gif" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/qf96nnjfyr2y/690Qo5GAdls6r4zotrNRXD/09ecbd813c067044474a5e432dda5067/delete-and-create.gif" alt="delete and create new environment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make sure you choose permanent when creating the environment.&lt;/p&gt;

&lt;p&gt;With the RCON Web Admin tool you can control the server and make it your own special place. If you have any questions, please contact the Release team at &lt;a href="mailto:hello@releasehub.com"&gt;hello@releasehub.com&lt;/a&gt;. Jump in and say hello on our Release Team Minecraft Server here:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;team-release-minecraft.releaseapp.io&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Happy Holidays from the Release Team!!!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>startup</category>
      <category>gamedev</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
