<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Simon Emms</title>
    <description>The latest articles on Forem by Simon Emms (@mrsimonemms).</description>
    <link>https://forem.com/mrsimonemms</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F295787%2Fd44c020b-2074-4b61-bea4-6caabce416f2.jpeg</url>
      <title>Forem: Simon Emms</title>
      <link>https://forem.com/mrsimonemms</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mrsimonemms"/>
    <language>en</language>
    <item>
      <title>Zigflow: The Missing Temporal DSL</title>
      <dc:creator>Simon Emms</dc:creator>
      <pubDate>Mon, 02 Feb 2026 19:33:11 +0000</pubDate>
      <link>https://forem.com/mrsimonemms/zigflow-the-missing-temporal-dsl-5bfm</link>
      <guid>https://forem.com/mrsimonemms/zigflow-the-missing-temporal-dsl-5bfm</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: &lt;a href="https://zigflow.dev?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=intro" rel="noopener noreferrer"&gt;Zigflow&lt;/a&gt; lets you write Temporal workflows in YAML, so you can focus on what happens instead of how to make it reliable.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://temporal.io?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=intro" rel="noopener noreferrer"&gt;Temporal&lt;/a&gt; is one of those tools that feels like a superpower once it clicks. Durable workflows, automatic retries and crash recovery baked in.&lt;/p&gt;

&lt;p&gt;But getting there can feel heavy.&lt;/p&gt;

&lt;p&gt;Since I joined Temporal last April, I've seen first-hand how it helps engineers make their applications bullet-proof whilst writing less code.&lt;/p&gt;

&lt;p&gt;Last summer, I had a week of customer calls in which the same pattern kept repeating. All week, I heard some version of the same thing:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Temporal is great for developers, but we want our business users to be able to define workflows. Is there a DSL we can use?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;They loved Temporal for its durability and reliability, but they didn't want every workflow change to require an engineer, a code review and a deployment. For them, the workflow wasn't the complex part; the implementation was.&lt;/p&gt;

&lt;p&gt;That tension was the spark for &lt;a href="https://zigflow.dev?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=intro" rel="noopener noreferrer"&gt;Zigflow&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Temporal is powerful, but there is a learning curve
&lt;/h2&gt;

&lt;p&gt;Temporal workflows are just code, but not necessarily &lt;em&gt;normal&lt;/em&gt; code. Workflows are replayed. You have to understand determinism (and non-determinism). Things that once felt natural have to be reconsidered.&lt;/p&gt;

&lt;p&gt;For experienced Temporal users, that's fine. It can even feel elegant. &lt;br&gt;For newbies, it's a whole new paradigm.&lt;/p&gt;

&lt;p&gt;Also, most workflows aren't algorithmically complex:&lt;/p&gt;

&lt;p&gt;They're sequences.&lt;br&gt;They branch.&lt;br&gt;They wait.&lt;br&gt;They call services.&lt;br&gt;They loop.&lt;br&gt;They retry.&lt;/p&gt;

&lt;p&gt;Most workflows aren't complex in &lt;em&gt;what&lt;/em&gt; they do - they're complex in &lt;em&gt;how&lt;/em&gt; they're made reliable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter Zigflow
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://zigflow.dev?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=intro" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdsk7yts8olgfqml11f7.png" alt="Zigflow" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://zigflow.dev?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=intro" rel="noopener noreferrer"&gt;Zigflow&lt;/a&gt; is a declarative DSL for Temporal workflows.&lt;/p&gt;

&lt;p&gt;Instead of writing your workflow code directly, you describe your workflow in YAML. Zigflow turns that description into a real Temporal workflow with retries, durability and all the usual Temporal guarantees.&lt;/p&gt;

&lt;p&gt;Think of it as a way to start simple with Temporal, without painting yourself into a corner.&lt;/p&gt;

&lt;p&gt;You get all the reliability, observability and scalability without having to think about the sharp edges on day one.&lt;/p&gt;

&lt;p&gt;A DSL (domain-specific language) is a specialised language, focused on the problem. Rather than inventing yet another DSL, Zigflow builds on the CNCF's &lt;a href="https://serverlessworkflow.io" rel="noopener noreferrer"&gt;Serverless Workflow&lt;/a&gt; specification - a vendor-neutral standard designed for exactly this problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Show me the money
&lt;/h2&gt;

&lt;p&gt;If you've gotten this far, you'll want to see how it all works.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;document&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;dsl&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.0.0&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zigflow&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.0.0&lt;/span&gt;
&lt;span class="na"&gt;do&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;set&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;as&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${ . }&lt;/span&gt;
      &lt;span class="na"&gt;set&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Hello from Ziggy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This isn't pseudo-code - it's a full Temporal workflow, ready to receive triggers.&lt;/p&gt;

&lt;p&gt;Run this with the Zigflow CLI (&lt;code&gt;zigflow -f ./workflow.yaml&lt;/code&gt;) and you've got a running Temporal workflow. Trigger this from the Temporal UI (&lt;strong&gt;task queue&lt;/strong&gt;: &lt;code&gt;zigflow&lt;/code&gt; and &lt;strong&gt;workflow type&lt;/strong&gt;: &lt;code&gt;hello-world&lt;/code&gt;) and you'll see the response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Hello from Ziggy"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The goal of Zigflow isn't low-code for the sake of it, but to allow you to focus on what the workflow does. And you get the added bonus of Temporal best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best practices as guardrails
&lt;/h2&gt;

&lt;p&gt;As a workflow grows, Temporal concepts start to matter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Histories get long, so you need Continue-As-New&lt;/li&gt;
&lt;li&gt;Activities get slow, so you need heartbeats&lt;/li&gt;
&lt;li&gt;You want visibility, so you need search attributes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Zigflow bakes all these ideas in from the start, so you don't need to learn them upfront. You still benefit from them, but they're implemented as sensible defaults so you only need to know about them when they matter.&lt;/p&gt;
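
&lt;p&gt;For the concepts you do want to spell out, the Serverless Workflow spec that Zigflow builds on gives you declarative syntax. As a sketch using the spec's &lt;code&gt;try&lt;/code&gt;/&lt;code&gt;catch&lt;/code&gt; and retry constructs (the endpoint is illustrative, and Zigflow's exact coverage of the spec may differ), an HTTP call with exponential-backoff retries looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;do:
  - callInventoryService:
      try:
        - getStock:
            call: http
            with:
              method: get
              endpoint: https://example.com/api/stock
      catch:
        errors:
          with:
            type: https://serverlessworkflow.io/spec/1.0.0/errors/communication
        retry:
          delay:
            seconds: 2
          backoff:
            exponential: {}
          limit:
            attempt:
              count: 5
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
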

&lt;p&gt;And if you know Temporal already, Zigflow becomes a way to standardise and accelerate workflow creation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Written in Go. Not locked to Go
&lt;/h2&gt;

&lt;p&gt;Zigflow is written in Go, but the workflows it runs are just Temporal workflows.&lt;/p&gt;

&lt;p&gt;That means they're language-agnostic by nature.&lt;/p&gt;

&lt;p&gt;You can trigger them from TypeScript, signal from the Temporal UI, query from Python and update from Java.&lt;/p&gt;
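
&lt;p&gt;For example (assuming a local dev server, plus the task queue and workflow type from earlier), you can start the workflow from the Temporal CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Start the hello-world workflow on the zigflow task queue
temporal workflow start \
  --task-queue zigflow \
  --type hello-world \
  --workflow-id hello-world-demo
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
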

&lt;h2&gt;
  
  
  Why you might want to try it
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You're curious about Temporal, but put off by the learning curve&lt;/li&gt;
&lt;li&gt;You want to prototype workflows fast&lt;/li&gt;
&lt;li&gt;You like the idea of workflows being readable by humans&lt;/li&gt;
&lt;li&gt;You're building tooling or platforms on top of Temporal&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If any of this resonates, Zigflow is worth a look:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;👉 Docs: &lt;a href="https://zigflow.dev?utm_source=devto&amp;amp;utm_medium=blog&amp;amp;utm_campaign=intro" rel="noopener noreferrer"&gt;zigflow.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;👉 Code: &lt;a href="https://github.com/mrsimonemms/zigflow" rel="noopener noreferrer"&gt;github.com/mrsimonemms/zigflow&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you try it and like it, please add a GitHub star. I know it's a bit of a vanity-metric, but it's great seeing feedback from people who are using it. And that motivates me to keep building.&lt;/p&gt;

&lt;p&gt;This is just the beginning and I've got so many ideas about new features (including a drag-and-drop UI). If this resonates, try it out. And I'm planning a follow-up post diving into these ideas.&lt;/p&gt;

&lt;p&gt;And let me know what you build.&lt;/p&gt;

</description>
      <category>temporal</category>
      <category>dsl</category>
      <category>workflows</category>
      <category>yaml</category>
    </item>
    <item>
      <title>When "The Best" isn't good enough</title>
      <dc:creator>Simon Emms</dc:creator>
      <pubDate>Fri, 14 Jun 2024 00:00:00 +0000</pubDate>
      <link>https://forem.com/mrsimonemms/when-the-best-isnt-good-enough-3cn8</link>
      <guid>https://forem.com/mrsimonemms/when-the-best-isnt-good-enough-3cn8</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cgWIQmjx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://simonemms.com/img/blog/beekeeping.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cgWIQmjx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://simonemms.com/img/blog/beekeeping.jpg" alt="When " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am a beekeeper.&lt;/p&gt;

&lt;p&gt;This won't be of much surprise to anyone who's spent any time with me, or who follows &lt;a href="https://twitter.com/theshroppiebeek"&gt;@TheShroppieBeek&lt;/a&gt; on the &lt;a href="https://en.wikipedia.org/wiki/Information_superhighway"&gt;Information Superhighway&lt;/a&gt;. I'll bore for England on the subject of beekeeping. One thing beekeepers often say to each other is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ask a beekeeper a question and get two answers&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A question that might seem simple, such as "what's the best way of raising a new queen?", will come with a multitude of opinions, folklore and experience. You might ask a fan of the &lt;a href="http://www.dave-cushman.net/bee/millermethod.html"&gt;Miller Method&lt;/a&gt;, or a lover of &lt;a href="https://www.youtube.com/watch?v=PJ_79D1ASlg"&gt;Grafting&lt;/a&gt; or someone who likes using any of the other methods.&lt;/p&gt;

&lt;p&gt;The problem here is asking for "The Best". How do we know it's "The Best"? When I say "The Best", I'm looking for the easiest way of doing it. When you hear "The Best", you might think I'm looking for the most reliable way of doing it. These are subtly different things.&lt;/p&gt;

&lt;p&gt;My setup will be different to the person I'm asking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I have six hives over two apiaries&lt;/li&gt;
&lt;li&gt;I'm a hobbyist&lt;/li&gt;
&lt;li&gt;it's not my main income, so I don't need to turn a profit&lt;/li&gt;
&lt;li&gt;my bees are all fairly sheltered from the wind&lt;/li&gt;
&lt;li&gt;my bees all have south or east-facing entrances&lt;/li&gt;
&lt;li&gt;my bees are all around ~130 metres above sea-level&lt;/li&gt;
&lt;li&gt;my bees are all around 52°N&lt;/li&gt;
&lt;li&gt;I select my bees for calmness rather than productivity&lt;/li&gt;
&lt;li&gt;I use wooden, National boxes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The person I ask almost certainly won't keep bees in an identical fashion to me. And, even if they did, they would have a natural odour, style and demeanour which the bees will pick up on and react to differently. So if I ask for "The Best", I'm assuming that they know all about my bees and that theirs will be the same.&lt;/p&gt;

&lt;p&gt;Which is why you end up getting two answers from one beekeeper.&lt;/p&gt;

&lt;h2&gt;
  
  
  "The Best" in software engineering
&lt;/h2&gt;

&lt;p&gt;We see a similar behaviour in software engineering. I have been regularly asked for "The Best" without being given any other parameters.&lt;/p&gt;

&lt;p&gt;Let's examine the question "what's The Best cloud provider?", which is a question I'm asked with depressing regularity. Defining "The Best" is actually quite difficult:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;do you need a truly global application (including Africa and South America), or do you just need Europe and North America?&lt;/li&gt;
&lt;li&gt;are you using a VPN, or will everything be accessed over the public internet?&lt;/li&gt;
&lt;li&gt;are you going to be having vast quantities of traffic/data, or is it only going to be a few gigabytes per month?&lt;/li&gt;
&lt;li&gt;do you need to use Windows machines, or is everything on Linux?&lt;/li&gt;
&lt;li&gt;do you need managed Kubernetes, or are you comfortable using K3s?&lt;/li&gt;
&lt;li&gt;do you have any specialist requirements (like a &lt;a href="https://aws.amazon.com/ground-station"&gt;satellite&lt;/a&gt;), or are you just deploying some containers and storing data?&lt;/li&gt;
&lt;li&gt;is money no object, or do you need to watch the pennies?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are just some of the questions that need answering before "The Best" means anything. In my mind, I divide the cloud providers into "The Establishment" (AWS, GCP, Azure) and "The Challengers" (DigitalOcean, Civo, Hetzner and others). For (most of) the questions I asked, the first part is something that The Establishment caters for (and does well), but the second part is something that everyone does well.&lt;/p&gt;

&lt;p&gt;Tell me the things that matter to you and I'll be able to give an intelligent answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Define your parameters
&lt;/h2&gt;

&lt;p&gt;I've regularly run post-mortems in my capacity as a technical leader. I often end them by talking about better questions that we can ask in future to avoid the situation that's gone wrong. Telling me how you define "The Best" is useful in helping me understand the question you're actually asking.&lt;/p&gt;

&lt;p&gt;Instead of asking for "The Best", try asking a better question:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;my goal is &lt;em&gt;X&lt;/em&gt;. What are the different ways I could achieve this?&lt;/li&gt;
&lt;li&gt;am I on the right tracks with this? Are there any things I need to watch out for?&lt;/li&gt;
&lt;li&gt;I want to avoid &lt;em&gt;X&lt;/em&gt; - how might you do it?&lt;/li&gt;
&lt;li&gt;I'm doing a proof of concept that I want to move into production - what should I watch out for?&lt;/li&gt;
&lt;li&gt;I want to put this application on the internet - how are we doing it for other things like this?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The exception that proves the rule
&lt;/h2&gt;

&lt;p&gt;Of course, there are always exceptions. If you ask an &lt;a href="https://www.joshwiddicombe.com/"&gt;eager-to-please comedian&lt;/a&gt; to buy The Taskmaster "the best present", you might just get them to get a tattoo whilst filming an unbroadcast show on a channel known for just showing repeats.&lt;/p&gt;

&lt;p&gt;And comedy gold ensues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB.&lt;/strong&gt; If you're &lt;a href="https://en.wikipedia.org/wiki/Alex_Horne"&gt;Little Alex Horne&lt;/a&gt; reading this, please keep asking comedians/intelligent idiots for "The Best". The vagueness is what makes for varied, interesting and funny TV.&lt;/p&gt;

</description>
      <category>culture</category>
      <category>development</category>
      <category>questions</category>
    </item>
    <item>
      <title>Self-Hosted is dead - long live Self-Hosted</title>
      <dc:creator>Simon Emms</dc:creator>
      <pubDate>Sun, 05 Feb 2023 00:00:00 +0000</pubDate>
      <link>https://forem.com/mrsimonemms/self-hosted-is-dead-long-live-self-hosted-47jk</link>
      <guid>https://forem.com/mrsimonemms/self-hosted-is-dead-long-live-self-hosted-47jk</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dcftcx8u7tvebyuf785.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dcftcx8u7tvebyuf785.jpg" alt="Self-Hosted is dead - long live Self-Hosted" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Gitpod may have deprecated support for self-hosted, but that's no reason to let it die. I've created a &lt;a href="https://github.com/mrsimonemms/gitpod-self-hosted" rel="noopener noreferrer"&gt;Gitpod Self-Hosted&lt;/a&gt; community installation repository to fly the flag for those of us who want and need self-hosted.&lt;/p&gt;




&lt;p&gt;Regular readers will know that I worked at &lt;a href="https://www.gitpod.io" rel="noopener noreferrer"&gt;Gitpod&lt;/a&gt; until &lt;a href="https://simonemms.com/blog/2023/01/28/i-am-an-ex-podder" rel="noopener noreferrer"&gt;a couple of weeks ago&lt;/a&gt;. Whilst I was there, I provided the technical leadership on the self-hosted offering. I planned and built the Gitpod Installer to &lt;a href="https://www.gitpod.io/blog/gitpod-installer" rel="noopener noreferrer"&gt;remove the complexity of our old Helm charts&lt;/a&gt;, worked with &lt;a href="https://www.replicated.com" rel="noopener noreferrer"&gt;Replicated&lt;/a&gt; to package it for public consumption and did various other odds and sods.&lt;/p&gt;

&lt;p&gt;In December, &lt;a href="https://www.gitpod.io/blog/introducing-gitpod-dedicated" rel="noopener noreferrer"&gt;Gitpod pulled official support for self-hosted&lt;/a&gt;. For anyone who worked alongside me at Gitpod or worked with me in the &lt;a href="https://www.gitpod.io/community" rel="noopener noreferrer"&gt;Gitpod community&lt;/a&gt;, you will be aware that I had reservations about this as a strategy. I'm not going to get into the detail of that here (I may do so publicly in the future), but there are a couple of highlights that are appropriate to call out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gitpod SaaS works well - this is the shop-window and how the majority of people will interact with it&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://www.gitpod.io/dedicated" rel="noopener noreferrer"&gt;managed Gitpod service&lt;/a&gt; is a good idea (and something I argued in favour of over a year ago)&lt;/li&gt;
&lt;li&gt;Dropping self-hosted excludes a whole host of hobbyists and champions, who were crucial to getting Gitpod to where it is today&lt;/li&gt;
&lt;li&gt;It removes Gitpod as a viable option for businesses who need to guarantee things like data sovereignty, access policies or other compliance policies (Dedicated does this, but there will still be those for whom requiring the guarantees will be cost/time-prohibitive, so will insist on on-premise or nothing)&lt;/li&gt;
&lt;li&gt;Dedicated will be significantly more expensive than Self-Hosted&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.youtube.com/watch?v=eUJ_ifjKopM" rel="noopener noreferrer"&gt;This Town Ain't Big Enough For The Both Of Us&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Gitpod is difficult to run. It is a Kubernetes application that builds its own images. It provides access to the Docker socket so that Docker can be used inside it. A lot of work has gone into making this happen whilst maintaining security and isolation between workspaces.&lt;/p&gt;

&lt;p&gt;This means you can't just get any old Kubernetes instance and run it. Anyone who's thought "oooh, I'll spin up minikube and have a play with Gitpod for half an hour" will know that it doesn't work like that.&lt;/p&gt;

&lt;p&gt;Gitpod.io runs on &lt;a href="https://k3s.io" rel="noopener noreferrer"&gt;k3s&lt;/a&gt;. It used to run on &lt;a href="https://cloud.google.com/kubernetes-engine" rel="noopener noreferrer"&gt;Google's GKE&lt;/a&gt;, but it was found to have certain limitations that meant that it wouldn't run reliably on it. In self-hosted though, we supported GKE. And &lt;a href="https://aws.amazon.com/eks" rel="noopener noreferrer"&gt;EKS&lt;/a&gt;. And &lt;a href="https://azure.microsoft.com/en-gb/products/kubernetes-service" rel="noopener noreferrer"&gt;AKS&lt;/a&gt;. Strangely, we didn't officially support k3s (although I had created a &lt;a href="https://github.com/MrSimonEmms/gitpod-k3s-guide" rel="noopener noreferrer"&gt;k3s guide&lt;/a&gt; which was quite popular in the community). But that means we were officially supporting four times the number of distributions, including one that we couldn't get to work reliably for our own paid service.&lt;/p&gt;

&lt;p&gt;Gitpod also needs nodes that run on &lt;a href="https://releases.ubuntu.com/focal" rel="noopener noreferrer"&gt;Ubuntu 20.04&lt;/a&gt;. And the nodes could have &lt;a href="https://github.com/toby63/shiftfs-dkms" rel="noopener noreferrer"&gt;the &lt;code&gt;shiftfs&lt;/code&gt; module&lt;/a&gt; enabled, or use &lt;a href="https://www.kernel.org/doc/html/latest/filesystems/fuse.html" rel="noopener noreferrer"&gt;&lt;code&gt;FUSE&lt;/code&gt;&lt;/a&gt; if it wasn't. And then there was in-cluster or external database/registry/storage.&lt;/p&gt;

&lt;p&gt;And that's not counting the number of times where a feature was added that would have added more requirements on the nodes, such as the time a PR was opened that changed all the persistent volume claims to use a feature that was only available on Google Cloud Platform.&lt;/p&gt;

&lt;p&gt;This meant that we were supporting a matrix of many permutations. For a team of four engineers, that was a big task.&lt;/p&gt;

&lt;p&gt;It also meant that Self-Hosted was actually slowing down Gitpod's progression. A good example of this is &lt;a href="https://github.com/gitpod-io/gitpod/pull/14005" rel="noopener noreferrer"&gt;gitpod-io/gitpod#14005&lt;/a&gt;. This is a pull request to drop support for &lt;code&gt;FUSE&lt;/code&gt;. &lt;a href="https://github.com/aledbf" rel="noopener noreferrer"&gt;Alejandro&lt;/a&gt; (rightly) said that "Ubuntu stable already provides &lt;code&gt;shiftfs&lt;/code&gt; OOTB", but what that didn't take into account was that the managed Kubernetes nodes didn't necessarily have it enabled. In both GKE and AKS, &lt;code&gt;shiftfs&lt;/code&gt; wasn't actually enabled by default. So the ticket lay fallow for months, with no one really happy with this state of affairs.&lt;/p&gt;

&lt;p&gt;This would have had a practical benefit for Gitpod. By only supporting &lt;code&gt;shiftfs&lt;/code&gt;, the complexity of the installation would reduce, the image build speeds would dramatically increase and everyone would be happy.&lt;/p&gt;

&lt;p&gt;Except those customers of GKE and AKS...&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.youtube.com/watch?v=3KGypBFyaQQ" rel="noopener noreferrer"&gt;Bright Idea&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;There are two broad schools of thought for on-premise applications:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;provide support for a wide-range of features, potentially limiting the deployment options&lt;/li&gt;
&lt;li&gt;provide support for a wide-range of deployment options, potentially limiting the features&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Historically, Gitpod fell into the first camp as it was felt that by providing lots of deployment options, we'd get the most customers. Towards the end of 2022, it became clear that this was actually hurting us.&lt;/p&gt;

&lt;p&gt;For much of my last 6 months at Gitpod, I was advocating an approach where we limit our deployment options. Instead of supporting the managed Kubernetes of all the major cloud providers, we should only support specific versions of k3s. Our job in the Self-Hosted team would then be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;maintaining the known images of Ubuntu&lt;/li&gt;
&lt;li&gt;maintaining ways of autoscaling the VMs in those cloud providers&lt;/li&gt;
&lt;li&gt;being able to provide well-tested and reliable instructions to our users&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A few weeks before I left, I spoke to one of my teammates and they said:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you think you're so clever, why not build a proof-of-concept?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://github.com/mrsimonemms/gitpod-self-hosted" rel="noopener noreferrer"&gt;So I did&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And, even though I'm no longer a Gitpodder, I'm still a lover of Gitpod, so I continued working on it after I left (that's normal, right?).&lt;/p&gt;

&lt;p&gt;At this stage, it only has support for &lt;a href="https://www.hetzner.com" rel="noopener noreferrer"&gt;Hetzner&lt;/a&gt; as this is what I use (it's cheap as chips, fast and reliable), but there are plans to add more cloud providers in future. Importantly, we have control of the configuration of the Kubernetes runtime and the configuration of the nodes that it runs on.&lt;/p&gt;

&lt;p&gt;It creates a pool of Kubernetes managers and a pool of Kubernetes nodes. In a typical managed instance, you don't actually have access to the managers (usually known as the control-plane) and you only have access to the nodes. But, because this is k3s, you have access to both. Once Terraform has created the managers, it installs k3s to them.&lt;/p&gt;

&lt;p&gt;When you're scaling a cluster, you typically wouldn't be scaling the control-plane because these would not normally have any Gitpod resources on them. So, when you scale, you're adding new nodes.&lt;/p&gt;

&lt;p&gt;Both the managers and nodes make use of &lt;a href="https://cloudinit.readthedocs.io/en/latest/" rel="noopener noreferrer"&gt;cloud-init&lt;/a&gt; scripts to configure the node for us, which means we don't actually need to do anything clever like creating base images in &lt;a href="https://www.packer.io/" rel="noopener noreferrer"&gt;Packer&lt;/a&gt;. For the &lt;a href="https://github.com/mrsimonemms/gitpod-self-hosted/blob/main/cloud-init/k3s_manager.yaml" rel="noopener noreferrer"&gt;manager VMs&lt;/a&gt;, the cloud-init script installs the required packages, changes the SSH port from &lt;code&gt;22&lt;/code&gt; (Gitpod needs port &lt;code&gt;22&lt;/code&gt; for its own SSH access) and installs &lt;code&gt;shiftfs&lt;/code&gt;. For the &lt;a href="https://github.com/mrsimonemms/gitpod-self-hosted/blob/main/cloud-init/k3s_node.yaml" rel="noopener noreferrer"&gt;nodes&lt;/a&gt;, it does exactly the same and then installs k3s and connects it to the manager pool.&lt;/p&gt;

&lt;p&gt;The reason that the cloud-init script has the k3s connection information? So that we can add a new node via autoscaling without needing someone to run the Terraform scripts. Importantly, creating a new VM from scratch only takes about a minute or so, which is about the same amount of time that most managed Kubernetes services take to provision a new node.&lt;/p&gt;
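
&lt;p&gt;As a rough sketch of the shape of a node's cloud-init file (the port, package names and join command here are illustrative - the real scripts are linked above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;#cloud-config
packages:
  - curl
write_files:
  # free up port 22 for Gitpod's own SSH access
  - path: /etc/ssh/sshd_config.d/zz-port.conf
    content: |
      Port 2222
runcmd:
  - systemctl restart ssh
  # join the manager pool; the URL and token are injected by Terraform
  - curl -sfL https://get.k3s.io | K3S_URL=https://manager-ip:6443 K3S_TOKEN=injected-by-terraform sh -
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
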

&lt;h2&gt;
  
  
  &lt;a href="https://www.youtube.com/watch?v=vWP54WPwYlI" rel="noopener noreferrer"&gt;I Am The Resurrection&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;I don't know where I'm going with this project in truth. I'm a passionate and empathetic engineer, so I can't just turn off the taps on something that was my entire working life for the past 18 months. I guess if it had ended on my terms, then that's one thing - by being made redundant, I finished mid-ticket and I had the very hollow feeling of unfinished business.&lt;/p&gt;

&lt;p&gt;I worked bloody hard to make Gitpod Self-Hosted a success and I hated seeing that it wasn't as successful as it should have been. Gitpod is an open-source project, so this could well become a vibrant, community-remix of Gitpod. Equally, it could just be something I do for myself. I have no influence over the technical direction of Gitpod any more, so I guess this could all be closed down very quickly if they don't want it to exist any more.&lt;/p&gt;

&lt;p&gt;It's unlikely that I'll do work on other cloud providers just because. I use Hetzner and this works well for me. If you want me to open up other cloud providers, I will want sponsoring for this effort (and access to an account for the desired cloud) so that I'm not funding development of a commercial, &lt;a href="https://www.gitpod.io/blog/future-of-software-cdes" rel="noopener noreferrer"&gt;VC-backed&lt;/a&gt; company that recently decided it couldn't afford me any more. I love Gitpod and its community, but maybe not that much.&lt;/p&gt;

&lt;p&gt;I will be writing some extensive documentation for the project over the next few weeks, so please keep an eye on the project. And, if you want to discuss me working on additional cloud providers, please put a call in my &lt;a href="https://diary.simonemms.com" rel="noopener noreferrer"&gt;diary&lt;/a&gt; so we can discuss things.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitHub repository: &lt;a href="https://github.com/mrsimonemms/gitpod-self-hosted" rel="noopener noreferrer"&gt;github.com/mrsimonemms/gitpod-self-hosted&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Actual usage: &lt;a href="https://github.com/mrSimonEmms/gitpod-app" rel="noopener noreferrer"&gt;github.com/mrsimonemms/gitpod-app&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>gitpod</category>
      <category>selfhosted</category>
      <category>community</category>
      <category>hetzner</category>
    </item>
    <item>
      <title>I am an ex-Podder</title>
      <dc:creator>Simon Emms</dc:creator>
      <pubDate>Sun, 29 Jan 2023 15:02:01 +0000</pubDate>
      <link>https://forem.com/mrsimonemms/i-am-an-ex-podder-4phh</link>
      <guid>https://forem.com/mrsimonemms/i-am-an-ex-podder-4phh</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Hey Simon, I'm sorry to be sending this. We've made the hard decision to reduce the size of the Gitpod team and you are affected.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That was the Slack message I woke up to on the morning of Tuesday 24th January 2023 from our &lt;a href="https://www.linkedin.com/in/christian-weichel-740b4224/" rel="noopener noreferrer"&gt;CTO&lt;/a&gt; and my time with Gitpod was over, along with &lt;a href="https://www.gitpod.io/blog/building-for-the-long-run" rel="noopener noreferrer"&gt;20 other colleagues and friends&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So, guess it's time to find something else then.&lt;/p&gt;

&lt;p&gt;One of the most recurring questions I've had in the past few days is "what is it you're looking for Simon?". The first thing to say is that, until 10am on Tuesday morning, I wasn't planning on looking for anything. But that ain't going to pay the mortgage, so here's some thoughts...&lt;/p&gt;




&lt;p&gt;I have about 15 years of experience and &lt;a href="https://simonemms.com/testimonials" rel="noopener noreferrer"&gt;I'm bloody good at my job&lt;/a&gt;. I don't want to be &lt;em&gt;Engineer #7,384&lt;/em&gt; at a company. I have been a technical leader at many places and want to go somewhere that would benefit from that experience.&lt;/p&gt;

&lt;p&gt;The job title doesn't matter, but what I'm doing does (that said, bonus points if I have a job with a comical acronym). Dependent upon the structure, the ideal job might be CTO, Staff Engineer, Architect or Senior Engineer. The important thing is that I want a role where I can work with other brilliant people and provide the technical leadership to either the whole company or a section of it.&lt;/p&gt;

&lt;p&gt;I care about what I'm doing. When I went to Gitpod, I took both a salary cut and a responsibility cut, but I did that because I believed in what I was doing. I want to go somewhere where I'm working on something that matters.&lt;/p&gt;

&lt;p&gt;I like working on difficult problems. They're exciting, interesting and challenging.&lt;/p&gt;

&lt;p&gt;In 2022, &lt;a href="https://simonemms.com/speaking" rel="noopener noreferrer"&gt;I did four talks&lt;/a&gt; in the UK, Spain and the USA (I'm still claiming KubeCon as I wrote the talk, even though I caught Covid the week beforehand so CTO Chris had to deliver the talk). This was something that I really enjoyed and I had very positive feedback about the talks. I don't want&lt;br&gt;
&lt;a href="https://devrel.co/about/" rel="noopener noreferrer"&gt;DevRel&lt;/a&gt; to be the main focus of my next job, but I would very much like it if it was something that I was supported to do as a sideline.&lt;/p&gt;

&lt;p&gt;I want to work with a brilliant and diverse group of people. I've loved working with engineers from across the world at Gitpod, learning about different ways of working, different cultures and different parts of the world. Bonus points if you have a hiring policy that levels the &lt;a href="https://www.youtube.com/watch?v=2KlmvmuxzYE&amp;amp;ab_channel=BuzzFeedVideo" rel="noopener noreferrer"&gt;inherent privilege&lt;/a&gt; of straight, white male engineers (like myself).&lt;/p&gt;

&lt;p&gt;Remote working only. I'm fuelled by tea and I only buy the best (&lt;a href="https://www.yorkshiretea.co.uk/" rel="noopener noreferrer"&gt;Yorkshire Tea&lt;/a&gt;, natch). After the pandemic, this really shouldn't even be a discussion point any more, but I wrote &lt;a href="https://simonemms.com/blog/2020/01/23/in-defence-of-remote-working" rel="noopener noreferrer"&gt;some thoughts&lt;/a&gt; on the subject pre-pandemic.&lt;br&gt;
I live in a beautiful part of the country and my desk looks out onto my garden where I have goldfinches, robins, sparrows and blackbirds cavorting around in front of my desk all day long. When I need 10 minutes away from my desk to mull over a problem, I'll go and watch my bees foraging. A quiet, inspiring office with good tea makes me work better - it certainly beats commuting.&lt;/p&gt;

&lt;p&gt;Money is &lt;strong&gt;NOT&lt;/strong&gt; the motivating factor. As a guide, I'm looking in the region of £100,000 - the closer the role is to my ideal, the more negotiable this is. If the role isn't particularly close to my ideal, then the money &lt;em&gt;IS&lt;/em&gt; the deciding factor - in that case, to quote &lt;a href="https://en.wikipedia.org/wiki/Spike_Milligan" rel="noopener noreferrer"&gt;Spike Milligan&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;All I ask is the chance to prove that money can't buy you happiness&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And I'm not going to do a technical test. By all means, I'm happy to prove my credentials in a conversation, but technical tests are a fundamentally broken and flawed system. And asking someone with my &lt;a href="https://simonemms.com/profile" rel="noopener noreferrer"&gt;experience&lt;/a&gt; to do one is a waste of time (and fairly insulting). If you need to see I can code, you can look at my&lt;br&gt;
&lt;a href="https://github.com/MrSimonEmms" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; or &lt;a href="https://simonemms.com/testimonials" rel="noopener noreferrer"&gt;testimonials from the people I've worked with&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Finally, the only absolute no-no. &lt;strong&gt;NO GAMBLING COMPANIES&lt;/strong&gt;. Ever. They're a cancer on our society, preying on the most vulnerable people and providing little-to-no value except to their shareholders. I don't want to go to sleep at night knowing that my house is being paid for by people suffering with addiction problems.&lt;/p&gt;




&lt;p&gt;This is my ideal. I don't expect all of these to be met by my next role, but I want to get close. If you have something you think is suitable, please &lt;a href="https://diary.simonemms.com" rel="noopener noreferrer"&gt;put a time in my diary&lt;/a&gt; so you can tell me about it.&lt;/p&gt;

</description>
      <category>gitpod</category>
      <category>work</category>
      <category>hiring</category>
    </item>
    <item>
      <title>Building a RESTful API With Functions</title>
      <dc:creator>Simon Emms</dc:creator>
      <pubDate>Sun, 29 Jan 2023 15:00:33 +0000</pubDate>
      <link>https://forem.com/mrsimonemms/building-a-restful-api-with-functions-3527</link>
      <guid>https://forem.com/mrsimonemms/building-a-restful-api-with-functions-3527</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;There is an accompanying &lt;a href="https://github.com/MrSimonEmms/openfaas-rest-api"&gt;GitHub repository&lt;/a&gt; with a working demo&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Serverless functions are a great way of building a scalable application. They're small, independent, fast-to-build and infinitely scalable. If you need to build an application with just a couple of endpoints, they will work great.&lt;/p&gt;

&lt;p&gt;When it comes to building a big application or one that you want to expose as a public service, serverless functions can end up being just a bit cumbersome to develop. As most serverless frameworks focus on single functions, there can be a lot of repetition in getting a &lt;a href="https://en.wikipedia.org/wiki/Create,_read,_update_and_delete"&gt;CRUD&lt;/a&gt; application set up.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a RESTful application?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;For more information, check out the &lt;a href="https://en.wikipedia.org/wiki/Representational_state_transfer"&gt;Wikipedia article&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;REST (Representational state transfer) is a way of representing data between machines. It has several key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;no session state is maintained between calls&lt;/li&gt;
&lt;li&gt;the HTTP verb decides how the request is handled&lt;/li&gt;
&lt;li&gt;it returns the data model&lt;/li&gt;
&lt;/ul&gt;
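&lt;p&gt;As a quick sketch of the verb-driven idea, the same resource can be dispatched purely on the HTTP verb. The handler names below are illustrative only - they are not part of any framework used later in this post:&lt;/p&gt;

```javascript
// Dispatch a request on the HTTP verb alone - the URL stays the same,
// the verb decides which CRUD operation runs. Handler names are
// illustrative, not part of any real framework.
const handlers = {
  GET: function (id) { return id ? 'read product ' + id : 'list products'; },
  POST: function () { return 'create product'; },
  PUT: function (id) { return 'replace product ' + id; },
  DELETE: function (id) { return 'delete product ' + id; },
};

function dispatch(verb, id) {
  const handler = handlers[verb];
  if (!handler) { throw new Error('Unsupported verb: ' + verb); }
  return handler(id);
}

console.log(dispatch('GET')); // "list products"
console.log(dispatch('DELETE', '123')); // "delete product 123"
```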

&lt;h2&gt;
  
  
  What tech are we using?
&lt;/h2&gt;

&lt;p&gt;Whilst there's plenty of open-source tech out there that can help, this is my favoured way of achieving the end goal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.openfaas.com"&gt;OpenFaaS&lt;/a&gt; for the serverless functions&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://konghq.com/kong"&gt;Kong&lt;/a&gt; for the API gateway&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io"&gt;Kubernetes&lt;/a&gt; and &lt;a href="https://helm.sh"&gt;Helm&lt;/a&gt; for deployment&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://k3d.io"&gt;K3d&lt;/a&gt; and &lt;a href="https://skaffold.dev"&gt;Skaffold&lt;/a&gt; for local development&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://gitlab.com/MrSimonEmms/openfaas-templates/-/tree/master/template/mongoose-crud"&gt;custom NodeJS OpenFaaS template&lt;/a&gt;
that uses &lt;a href="https://www.mongodb.com"&gt;MongoDB&lt;/a&gt; and  &lt;a href="https://mongoosejs.com"&gt;Mongoose&lt;/a&gt; to manage the data models&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;As I like to keep things simple, all you need to do to get the development cluster up and running is to run &lt;code&gt;make&lt;/code&gt;. This will run a series of commands to check you have the correct dependencies, create your k3d cluster and provision your cluster. Once you've done that, you can access your cluster on &lt;a href="http://localhost:9999"&gt;localhost:9999&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you've got the cluster up and running, you can just use &lt;code&gt;make serve&lt;/code&gt; to reload the Kubernetes objects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your first function
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;In the example repo, look at the &lt;code&gt;product&lt;/code&gt; function as the first function&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To create your first function, run the command &lt;code&gt;FN_NAME=product make new&lt;/code&gt;. This will create a new OpenFaaS function in the &lt;code&gt;/components&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;The important file is &lt;code&gt;schema.js&lt;/code&gt; which is a standard &lt;a href="https://mongoosejs.com/docs/guide.html"&gt;Mongoose schema&lt;/a&gt;. In this example, we're just defining a &lt;code&gt;modelName&lt;/code&gt; of &lt;code&gt;Product&lt;/code&gt; - in a more complex example, we can add in both synchronous and asynchronous validation, but a simple name property will do for now.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;BaseSchema&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;modelName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Product&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ProductSchema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;BaseSchema&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;modelName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ProductSchema&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, add this to the &lt;code&gt;functions&lt;/code&gt; Helm chart in &lt;code&gt;/chart/functions/values.yaml&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;functions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;product&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/mrsimonemms/openfaas-rest-api/product&lt;/span&gt;
      &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;
    &lt;span class="na"&gt;envvars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;LOGGER_LEVEL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;info&lt;/span&gt;
      &lt;span class="na"&gt;MONGODB_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mongodb://openfaas-mongodb.openfaas.svc.cluster.local:27017/openfaas"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, add the &lt;code&gt;image&lt;/code&gt; to the &lt;code&gt;artifactOverrides&lt;/code&gt; section in the &lt;code&gt;skaffold.yaml&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;artifactOverrides&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;functions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;product&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/mrsimonemms/openfaas-rest-api/product&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you visit the &lt;a href="http://localhost:8080"&gt;OpenFaaS dashboard&lt;/a&gt;, you will see the &lt;code&gt;product&lt;/code&gt; function has been deployed to OpenFaaS.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To get the login credentials, run &lt;code&gt;make openfaas&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Making it into an endpoint
&lt;/h2&gt;

&lt;p&gt;Now we have an OpenFaaS function running, we need to configure Kong so that it acts as a gateway. OpenFaaS functions are exposed on &lt;code&gt;/function/:name&lt;/code&gt;, which is what we're going to redirect to - in this template, the &lt;a href="https://en.wikipedia.org/wiki/Create,_read,_update_and_delete"&gt;CRUD&lt;/a&gt; endpoints live under &lt;code&gt;/crud&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;First, in &lt;code&gt;/charts/openfaas/values.yaml&lt;/code&gt;, add a &lt;code&gt;product-gateway&lt;/code&gt; service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;product-gateway&lt;/span&gt;
    &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;konghq.com/path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/function/product/crud&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, in the &lt;code&gt;/charts/openfaas/template/ingress.yaml&lt;/code&gt;, tell the Kong ingress about the &lt;code&gt;product-gateway&lt;/code&gt; service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;openfaas&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;konghq.com/strip-path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ingressClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kong&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/api/v1/product&lt;/span&gt;
            &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
            &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;product-gateway&lt;/span&gt;
                &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you'll find this exposed on &lt;a href="http://localhost:9999/api/v1/product"&gt;localhost:9999/api/v1/product&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your second function and nested endpoints
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;In the example, this is the &lt;code&gt;product-size&lt;/code&gt; function&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So far, we've only built a fairly simple example. Next, we're going to up the complexity by adding a nested endpoint. A root endpoint is fairly straightforward because it just sends everything to the function and returns whatever that returns.&lt;/p&gt;

&lt;p&gt;Conversely, a nested endpoint is locked to a product ID. Fortunately, Kong makes this fairly straightforward for us with its plugin system.&lt;/p&gt;

&lt;p&gt;Firstly, repeat all the above steps to create a &lt;code&gt;product-size&lt;/code&gt; function. In the &lt;code&gt;schema.js&lt;/code&gt;, add a &lt;code&gt;productId&lt;/code&gt; parameter - this will store the &lt;code&gt;_id&lt;/code&gt; from the &lt;code&gt;product&lt;/code&gt; function.&lt;/p&gt;

&lt;p&gt;This time, when creating the &lt;code&gt;/charts/openfaas/template/ingress.yaml&lt;/code&gt;, we're going to add a named path parameter in the &lt;code&gt;path&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/api/v1/product/(?&amp;lt;productId&amp;gt;[\w-]+)/size&lt;/span&gt;
  &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
  &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;product-size-gateway&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This matches one or more word characters (letters, digits, underscores) or hyphens between &lt;code&gt;/product/&lt;/code&gt; and &lt;code&gt;/size&lt;/code&gt; in the URL and assigns the capture the name &lt;code&gt;productId&lt;/code&gt;.&lt;/p&gt;
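&lt;p&gt;You can try the capture out in plain JavaScript. This sketch uses a numbered capture group rather than Kong's named one, purely to illustrate what gets matched:&lt;/p&gt;

```javascript
// Mirror of the Kong ingress path: capture one or more word characters
// or hyphens between /product/ and /size. Kong names the group
// "productId"; a numbered group is used here for illustration.
const pattern = /\/api\/v1\/product\/([\w-]+)\/size/;

console.log('/api/v1/product/63bd21-abc/size'.match(pattern)[1]); // 63bd21-abc
console.log(pattern.test('/api/v1/product/size')); // false - no ID segment
```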

&lt;p&gt;Next, we have to tell it what to do with it. In the &lt;code&gt;/charts/openfaas/values.yaml&lt;/code&gt;, add this to the &lt;code&gt;services&lt;/code&gt; array:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;product-size-gateway&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;konghq.com/path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/function/productsize/crud&lt;/span&gt;
    &lt;span class="na"&gt;konghq.com/plugins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;product-request-transformer&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice how this refers to a &lt;code&gt;product-request-transformer&lt;/code&gt; plugin. So, let's define it in the same &lt;code&gt;values.yaml&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;product-request-transformer&lt;/span&gt;
    &lt;span class="na"&gt;plugin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;request-transformer&lt;/span&gt;
    &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;remove&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;productId&lt;/span&gt;
      &lt;span class="na"&gt;add&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;productId:$(uri_captures["productId"])&lt;/span&gt;
      &lt;span class="na"&gt;append&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;querystring&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;filter:productId||$eq||$(uri_captures["productId"])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This does a few things. Firstly, it removes any &lt;code&gt;productId&lt;/code&gt; from the body and then adds in the product ID captured from the URL. It also appends a query string &lt;code&gt;filter=productId||$eq||&amp;lt;productId&amp;gt;&lt;/code&gt; to the URL. This ensures that the &lt;code&gt;product-size&lt;/code&gt; function behaves like a nested API endpoint.&lt;/p&gt;
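&lt;p&gt;The filter string follows a &lt;code&gt;field||operator||value&lt;/code&gt; shape. A rough sketch of how such a string could be pulled apart - this is an illustration only, not the template's actual implementation:&lt;/p&gt;

```javascript
// Rough sketch of parsing a "field||operator||value" filter string -
// illustrative only, not the template's actual code.
function parseFilter(filter) {
  const parts = filter.split('||');
  return { field: parts[0], operator: parts[1], value: parts[2] };
}

console.log(parseFilter('productId||$eq||abc123'));
// { field: 'productId', operator: '$eq', value: 'abc123' }
```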

&lt;p&gt;Importantly, this won't return an HTTP 404 result if the &lt;code&gt;product&lt;/code&gt; doesn't exist. That's beyond the scope of this demo, although you could achieve this by adding a &lt;code&gt;middleware.js&lt;/code&gt; file to the function. This can be either a function or an array of functions and it follows the same basic interface as an &lt;a href="https://expressjs.com/en/guide/writing-middleware.html"&gt;Express middleware function&lt;/a&gt;.&lt;/p&gt;
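&lt;p&gt;A minimal sketch of what such a &lt;code&gt;middleware.js&lt;/code&gt; could look like - the &lt;code&gt;lookupProduct&lt;/code&gt; helper is a hypothetical stand-in for your own data access, not something the template provides:&lt;/p&gt;

```javascript
// Hypothetical middleware sketch: respond with a 404 when the parent
// product cannot be found. "lookupProduct" is an illustrative stand-in
// for a real (likely asynchronous, Mongoose-backed) lookup.
function productExists(lookupProduct) {
  return function (req, res, next) {
    if (!lookupProduct(req.body.productId)) {
      res.status(404).json({ message: 'Product not found' });
      return;
    }
    next(); // product exists - carry on to the CRUD handler
  };
}
```

&lt;p&gt;In a real function the lookup would be a Mongoose query and therefore asynchronous, but the overall shape of the middleware stays the same.&lt;/p&gt;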

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With this simple demo, you can see that it's very possible to create a fully RESTful API from a collection of serverless functions. It's important to remember that all these functions are entirely isolated from each other from a coding point of view (so you could even do something interesting like writing them in different languages). This makes them very powerful and almost infinitely scalable.&lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>kong</category>
      <category>openfaas</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Multi-Arch Docker Containers</title>
      <dc:creator>Simon Emms</dc:creator>
      <pubDate>Sun, 29 Jan 2023 14:51:49 +0000</pubDate>
      <link>https://forem.com/mrsimonemms/multi-arch-docker-containers-jp2</link>
      <guid>https://forem.com/mrsimonemms/multi-arch-docker-containers-jp2</guid>
      <description>&lt;p&gt;Containers have revolutionised computing since they were first popularised by &lt;a href="https://docker.io" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; in the first half of the 2010s. Containers have made&lt;br&gt;
developing software and deploying it far simpler than it has ever been by creating an artifact that can work across different systems. By maintaining it's own dependencies independent of the host machine, we no longer have to ensure that all developers and deployment environments are using &lt;em&gt;version x.y.z&lt;/em&gt; of a language and the correct versions of all our databases.&lt;/p&gt;

&lt;p&gt;With the exception of multi-architectural deployments that is.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is multi-arch?
&lt;/h2&gt;

&lt;p&gt;I am currently writing this post on my laptop - it's a 64 bit machine running Ubuntu 19.10. If I run &lt;code&gt;uname -p&lt;/code&gt;, it proves that it's a 64 bit machine by printing &lt;code&gt;x86_64&lt;/code&gt;. When I come to deploy it to my Raspberry Pi cluster (yes, I'm that cool, kids), that'll be on a 32 bit processor - &lt;code&gt;uname -p&lt;/code&gt; now gives me &lt;code&gt;armv7l&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This describes the processor on the computer itself and its architecture. When you're running your containers, you get used to largely being able to ignore the host machine and its capabilities entirely - you might be running Windows or OSX, but if your container is an Ubuntu or Alpine-based image, you'll be doing everything in Linux.&lt;/p&gt;

&lt;p&gt;The processor is one of the few things that the container does not virtualise. If you have a 64 bit host machine, your container &lt;strong&gt;MUST&lt;/strong&gt; be compatible with a 64 bit processor. For the most part, this causes us few problems - we all tend to develop on 64 bit machines and deploy to our cloud provider of choice, who provides us with a fleet of 64 bit machines. This only becomes a problem if we need to support multiple architectures at any stage of the software development lifecycle.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why are multi-arch containers a good idea?
&lt;/h2&gt;

&lt;p&gt;Until fairly recently, if you wanted to deploy your containerised application, you were pretty much limited to 64 bit machines. It was possible to get &lt;a href="https://blog.hypriot.com/post/how-to-setup-rpi-docker-swarm/" rel="noopener noreferrer"&gt;Docker Swarm&lt;/a&gt; onto a Raspberry Pi, but Kubernetes was (officially) off limits.&lt;/p&gt;

&lt;p&gt;Then along came &lt;a href="https://www.scaleway.com/en/virtual-instances/arm-instances" rel="noopener noreferrer"&gt;Scaleway&lt;/a&gt; with their very cheap ARM clouds and &lt;a href="https://k3s.io" rel="noopener noreferrer"&gt;K3S&lt;/a&gt; with a lightweight Kubernetes that was perfect for Raspberry Pis. Now you can have very cost-effective and (with enough nodes) high-performing clusters running on machines lying around your office. These are perfect for development and staging clusters to test out your applications.&lt;/p&gt;

&lt;p&gt;In recent years too, the rise of the Internet of Things (IoT) has largely been made possible by lightweight processors. I've worked with many IoT companies over the years and it's always useful to have a virtual device with which to interact in development and testing - this process is simplified greatly if you have that software in a container.&lt;/p&gt;

&lt;p&gt;If none of these reasons convince you based on your requirements today, think about what the future might bring. There have been &lt;a href="https://www.macrumors.com/guide/arm-macs" rel="noopener noreferrer"&gt;persistent rumours that Apple will switch to ARM processors&lt;/a&gt; in the future (which may or may not require multi-archness). You also rarely know exactly where your application will be heading in 3+ years' time - I've lost count of the number of times I've been told by architects and product owners "no Simon, we definitely will never do &lt;em&gt;x&lt;/em&gt; feature" only to find I'm building that exact feature 6 months later.&lt;/p&gt;

&lt;p&gt;Finally, it's a very simple change that adds almost no time or effort to the build pipeline - for the effort involved, is it not worth just having it there in the background?&lt;/p&gt;
&lt;h2&gt;
  
  
  Docker Setup
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Even though the experimental version of Docker is not recommended for production use, it is fine for building the containers. You do &lt;strong&gt;NOT&lt;/strong&gt; need to enable experimental mode on your deployment machine. As further evidence for its use, this is how Docker provides multi-arch support for all officially supported containers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In order to make truly multi-arch containers, we need to use the experimental version of Docker. To do that, go to your command line and edit the file &lt;code&gt;~/.docker/config.json&lt;/code&gt;. This is a JSON file, and you need to ensure that &lt;code&gt;experimental&lt;/code&gt; is set to &lt;code&gt;enabled&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"experimental"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"enabled"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's likely you'll have additional settings in there - leave those as they are. To prove you have enabled experimental mode, type &lt;code&gt;docker manifest --help&lt;/code&gt; and you should see the help page.&lt;/p&gt;

&lt;p&gt;Next, you need to &lt;a href="https://hub.docker.com/r/multiarch/qemu-user-static" rel="noopener noreferrer"&gt;enable&lt;/a&gt; &lt;a href="https://www.qemu.org/" rel="noopener noreferrer"&gt;Qemu&lt;/a&gt; support for your Docker host instance. Qemu is a generic&lt;br&gt;
machine emulator and virtualiser. In simple terms, it lets a machine with one type of processor emulate other types, so that it can execute binaries built for different processor architectures.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This updates the Docker machine instance and only needs running once per instance. If you restart your machine or the Docker daemon, you will need to run it again.&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you have your Docker instance set up. It's time to consider the Dockerfile. Under normal circumstances, your Dockerfile would start off with using the &lt;code&gt;FROM&lt;/code&gt; command to pull in an image, perhaps &lt;code&gt;FROM node:12-alpine&lt;/code&gt;. Now you're in the multi-arch world, you cannot simply do that. Since &lt;a href="https://www.docker.com/blog/docker-official-images-now-multi-platform/" rel="noopener noreferrer"&gt;2017&lt;/a&gt;, all Docker images are multi-arch - if you just use the image name, eg &lt;code&gt;node&lt;/code&gt;, it queries the manifest against your host machine's processor and pulls that down. On a 64 bit machine, it actually pulls down &lt;code&gt;amd64/node&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To get around that, we need to specify the architecture. Since Docker v17, we can add build arguments before the &lt;code&gt;FROM&lt;/code&gt; tag. So you will need to change your Dockerfile like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dockerfile"&gt;&lt;code&gt;ARG ARCH="amd64"
FROM ${ARCH}/node:12-alpine
CMD [ "uname", "-a" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We're defaulting the &lt;code&gt;ARCH&lt;/code&gt; argument to &lt;code&gt;amd64&lt;/code&gt; and then telling Docker to pull down that exact image. If we now build two different images, one for amd64 and one for a Raspberry Pi 3B, we should see differences.&lt;br&gt;
&lt;/p&gt;
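&lt;p&gt;As an aside, the names reported by &lt;code&gt;uname -m&lt;/code&gt; and the Docker image prefixes don't match. A rough mapping of the common cases - my assumption, not an exhaustive list - can be sketched as:&lt;/p&gt;

```shell
# Map a `uname -m` machine name to the matching Docker image prefix.
# Illustrative only - covers the architectures discussed in this post.
docker_arch() {
  case "$1" in
    x86_64) echo "amd64" ;;
    armv7l) echo "arm32v7" ;;   # Raspberry Pi 3B in 32 bit mode
    aarch64) echo "arm64v8" ;;
    *) echo "$1" ;;             # fall back to the raw name
  esac
}

# Print the prefix for the current host
docker_arch "$(uname -m)"
```

&lt;p&gt;This is only illustrative - the authoritative list is whatever architecture prefixes actually exist on Docker Hub.&lt;/p&gt;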

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t riggerthegeek/multiarch-test:amd64 .
docker build --build-arg=ARCH=arm32v7 -t riggerthegeek/multiarch-test:arm32v7 .
docker run -it --rm riggerthegeek/multiarch-test:amd64 # String includes x86_64
docker run -it --rm riggerthegeek/multiarch-test:arm32v7 # String includes armv7l
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we have all the images built, we need to combine them into a manifest. As touched on above, a manifest is a way of combining multiple images into a single image, allowing the machine that's pulling the image to decide which one it actually wants to use. There are many use-cases for this, such as whether the host's operating system is Windows. In our case, we only want to differentiate by the processor.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You will need to push the above images before running this command&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker manifest create riggerthegeek/multiarch-test \
    riggerthegeek/multiarch-test:amd64 \
    riggerthegeek/multiarch-test:arm32v7
docker manifest push riggerthegeek/multiarch-test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If you run &lt;code&gt;docker manifest inspect riggerthegeek/multiarch-test&lt;/code&gt; now, you will see how the host Docker engine will decide which image to use.&lt;/p&gt;
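&lt;p&gt;The output looks something like this - abridged and illustrative, with placeholder digests (the real output also includes &lt;code&gt;size&lt;/code&gt; fields and full digests):&lt;/p&gt;

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:aaaa...",
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:bbbb...",
      "platform": { "architecture": "arm", "os": "linux", "variant": "v7" }
    }
  ]
}
```

&lt;p&gt;The pulling engine matches its own platform against the &lt;code&gt;platform&lt;/code&gt; block of each entry.&lt;/p&gt;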

&lt;p&gt;Finally, test your container on an AMD64 machine and on a Raspberry Pi. You should see different results, but notice how you're only specifying the image name, not the tag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -it --rm riggerthegeek/multiarch-test # x86_64 on AMD64, armv7l on Raspberry Pi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You have now built a multi-arch Docker image, deployable to both 64 bit machines and the Raspberry Pi 3B/3B+.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building with GitLab
&lt;/h2&gt;

&lt;p&gt;I use GitLab for my CI. Enabling multi-arch builds in GitLab is really simple - it's one environment variable and then the same commands as above. This is my usual setup in my &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; - I've removed all branching guards and strategies for brevity.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variables:
  DOCKER_CLI_EXPERIMENTAL: enabled # Required for docker manifests - the only change required
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://docker:2375

stages:
  - build
  - publish

# Extensible commands
.docker_base:
  image: docker:stable
  services:
    - docker:dind
  tags:
    - docker
  before_script:
    - docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}
    - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

.docker_build:
  extends: .docker_base
  stage: build
  script:
    - docker build --build-arg=ARCH=${ARCH} -t ${CI_REGISTRY_IMAGE}/${ARCH}:${CI_COMMIT_SHA} .
    - docker push ${CI_REGISTRY_IMAGE}/${ARCH}:${CI_COMMIT_SHA}

# Build amd64 image
docker_build_amd64:
  extends: .docker_build
  variables:
    ARCH: amd64

# Build arm32v7 image
docker_build_arm32v7:
  extends: .docker_build
  variables:
    ARCH: arm32v7

# Combine both images in a manifest and publish
docker_publish:
  extends: .docker_base
  stage: publish
  script:
    - |
      docker manifest create ${CI_REGISTRY_IMAGE} \
        ${CI_REGISTRY_IMAGE}/amd64:${CI_COMMIT_SHA} \
        ${CI_REGISTRY_IMAGE}/arm32v7:${CI_COMMIT_SHA}
    - docker manifest push ${CI_REGISTRY_IMAGE}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Warnings
&lt;/h2&gt;

&lt;p&gt;Building through the emulator is slower. Unless you absolutely have to, I would recommend only building the full manifest on &lt;code&gt;master&lt;/code&gt; and &lt;code&gt;develop&lt;/code&gt; branches.&lt;/p&gt;
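&lt;p&gt;One way to express that restriction - a sketch, assuming the job names from the GitLab config above - is to guard the publish job with &lt;code&gt;only&lt;/code&gt;:&lt;/p&gt;

```yaml
# Only build the combined manifest on long-lived branches
docker_publish:
  extends: .docker_base
  stage: publish
  only:
    - master
    - develop
```

&lt;p&gt;The per-architecture build jobs can still run on every branch; it's only the slow, emulated manifest publish that gets gated.&lt;/p&gt;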

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>containers</category>
    </item>
    <item>
      <title>Using a non-Ubuntu base image in Gitpod</title>
      <dc:creator>Simon Emms</dc:creator>
      <pubDate>Sat, 30 Apr 2022 00:00:00 +0000</pubDate>
      <link>https://forem.com/mrsimonemms/using-a-non-ubuntu-base-image-in-gitpod-1j5h</link>
      <guid>https://forem.com/mrsimonemms/using-a-non-ubuntu-base-image-in-gitpod-1j5h</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyt0l102nrwqpaliasq7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyt0l102nrwqpaliasq7.jpg" alt="Using a non-Ubuntu base image in Gitpod" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;About 3 weeks ago, the technical lead of one of Gitpod's self-hosted customers messaged me saying:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hey Simon. I know we have to use Ubuntu for our images, but one of our teams uses a testing framework that only runs on CentOS.&lt;/p&gt;

&lt;p&gt;Please help.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;They had conflated two facts about Gitpod:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;when running a Gitpod installation, you must &lt;a href="https://www.gitpod.io/docs/self-hosted/latest/cluster-set-up#node-and-container-requirements" rel="noopener noreferrer"&gt;run Ubuntu on your Kubernetes nodes&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;the &lt;a href="https://github.com/gitpod-io/workspace-images" rel="noopener noreferrer"&gt;Gitpod workspace images&lt;/a&gt; use an Ubuntu image (currently &lt;code&gt;buildpack-deps:focal&lt;/code&gt;) as the base&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;However, this doesn't mean that you &lt;strong&gt;MUST&lt;/strong&gt; use Ubuntu. In fact, you can theoretically use any Linux distribution as your base image (except Alpine - see &lt;a href="https://github.com/gitpod-io/gitpod/issues/3356" rel="noopener noreferrer"&gt;#3356&lt;/a&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a CentOS base image
&lt;/h2&gt;

&lt;p&gt;In principle, this is just a question of copying the Ubuntu image, swapping all the &lt;code&gt;apt-get install&lt;/code&gt; commands for &lt;code&gt;yum install&lt;/code&gt; and changing the package names where relevant.&lt;/p&gt;

&lt;p&gt;Take a look at the &lt;a href="https://github.com/MrSimonEmms/gitpod-centos-base-image/blob/main/Dockerfile" rel="noopener noreferrer"&gt;Dockerfile&lt;/a&gt; to see how you could achieve this. There are quite a few packages in here which make it all work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;bash-completion&lt;/li&gt;
&lt;li&gt;other Unix shells, such as Fish and ZSH&lt;/li&gt;
&lt;li&gt;development tools&lt;/li&gt;
&lt;li&gt;Git and Git LFS&lt;/li&gt;
&lt;li&gt;Docker and Docker Compose&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It also creates a &lt;code&gt;gitpod&lt;/code&gt; user with the user ID &lt;code&gt;33333&lt;/code&gt; and grants it passwordless &lt;code&gt;sudo&lt;/code&gt; access.&lt;/p&gt;
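&lt;p&gt;That user setup can be sketched like this - an approximation of the pattern in that Dockerfile, not a verbatim copy (the exact flags and group names may differ):&lt;/p&gt;

```dockerfile
# Create the gitpod user with the fixed UID 33333 that Gitpod expects,
# and grant it passwordless sudo (sketch - check the real Dockerfile for exact flags)
RUN groupadd -g 33333 gitpod \
    && useradd -l -u 33333 -g gitpod -md /home/gitpod -s /bin/bash gitpod \
    && echo "gitpod ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers.d/gitpod
```

&lt;p&gt;The fixed UID matters: Gitpod mounts the workspace with that ownership, so a base image with a different UID will hit permission errors.&lt;/p&gt;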

&lt;h2&gt;
  
  
  Using this in your project
&lt;/h2&gt;

&lt;p&gt;Now we have a base image, we can use this in our project. This is as simple as defining a &lt;a href="https://www.gitpod.io/docs/config-docker" rel="noopener noreferrer"&gt;custom Docker image&lt;/a&gt; in your &lt;code&gt;.gitpod.yml&lt;/code&gt;. Then you can create a &lt;code&gt;.gitpod.Dockerfile&lt;/code&gt; like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ghcr.io/mrsimonemms/gitpod-centos-base-image:latest

### Node.js ###
LABEL dazzle/layer=lang-node
LABEL dazzle/test=tests/lang-node.yaml
USER gitpod
ENV NODE_VERSION=16.13.0
ENV TRIGGER_REBUILD=1
RUN curl -fsSL https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | PROFILE=/dev/null bash \
  &amp;amp;&amp;amp; bash -c ". .nvm/nvm.sh \
    &amp;amp;&amp;amp; nvm install $NODE_VERSION \
    &amp;amp;&amp;amp; nvm alias default $NODE_VERSION \
    &amp;amp;&amp;amp; npm install -g typescript yarn node-gyp" \
  &amp;amp;&amp;amp; echo ". ~/.nvm/nvm-lazy.sh" &amp;gt;&amp;gt; /home/gitpod/.bashrc.d/50-node
# above, we are adding the lazy nvm init to .bashrc, because one is executed on interactive shells, the other for non-interactive shells (e.g. plugin-host)
COPY --chown=gitpod:gitpod nvm-lazy.sh /home/gitpod/.nvm/nvm-lazy.sh
ENV PATH=$PATH:/home/gitpod/.nvm/versions/node/v${NODE_VERSION}/bin

USER gitpod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will need to copy &lt;a href="https://github.com/gitpod-io/workspace-images/blob/main/chunks/lang-node/nvm-lazy.sh" rel="noopener noreferrer"&gt;nvm-lazy.sh&lt;/a&gt; into your project. You could also use a cURL command to download it from GitHub, but I couldn't be bothered for a simple app.&lt;/p&gt;

&lt;p&gt;Once you've done that, your workspace is ready for development in Gitpod.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further work
&lt;/h2&gt;

&lt;p&gt;If you're having to use this for multiple projects, you will most probably want to create a &lt;a href="https://hub.docker.com/r/gitpod/workspace-full" rel="noopener noreferrer"&gt;workspace-full&lt;/a&gt; equivalent (or images for whatever languages you want to support). Creating a standalone image and publishing it to your own Docker registry isn't covered here, but it isn't any more difficult than creating any other Docker image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/MrSimonEmms/gitpod-centos-base-image" rel="noopener noreferrer"&gt;CentOS Base Image&lt;/a&gt; - a Gitpod CentOS base image you can use yourselves&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/MrSimonEmms/gitpod-centos-node-example" rel="noopener noreferrer"&gt;CentOS Node Example&lt;/a&gt; - an example GitHub repository with a Node application&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>development</category>
      <category>cloudnative</category>
      <category>gitpod</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Setting Terraform Service Principal Permissions to Work With Azure Active Directory</title>
      <dc:creator>Simon Emms</dc:creator>
      <pubDate>Sun, 10 Jan 2021 00:00:00 +0000</pubDate>
      <link>https://forem.com/mrsimonemms/setting-terraform-service-principal-permissions-to-work-with-azure-active-directory-4le4</link>
      <guid>https://forem.com/mrsimonemms/setting-terraform-service-principal-permissions-to-work-with-azure-active-directory-4le4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2mgc0cnnhskfknaro4g6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2mgc0cnnhskfknaro4g6.jpg" alt="Setting Terraform Service Principal Permissions to Work With Azure Active Directory" width="640" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are certain things that, no matter how many times I do them, I always have to look up. Symbolic links, for instance, are one thing I must have done at least once a week for the past 10 years, but I just can't remember whether the &lt;code&gt;/path/to/file&lt;/code&gt; or the &lt;code&gt;/path/to/symlink&lt;/code&gt; comes first (it's &lt;code&gt;ln -s /path/to/file /path/to/symlink&lt;/code&gt; for the record).&lt;/p&gt;

&lt;p&gt;Configuring the permissions for a service principal to work with Azure Active Directory is a close second. Unlike the symlink though, the documentation for this is &lt;a href="https://registry.terraform.io/providers/hashicorp/azuread/latest/docs" rel="noopener noreferrer"&gt;dreadful&lt;/a&gt; - all the commands are there, but it's just so verbose and wordy that I always miss something.&lt;/p&gt;

&lt;p&gt;For anyone just interested in the answer (including Future Simon), the tl;dr is here&lt;/p&gt;

&lt;h2&gt;
  
  
  What's a Service Principal?
&lt;/h2&gt;

&lt;p&gt;From the &lt;a href="https://docs.microsoft.com/en-us/powershell/azure/create-azure-service-principal-azureps" rel="noopener noreferrer"&gt;Azure documentation&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An Azure service principal is an identity created for use with applications, hosted services, and automated tools to access Azure resources. This access is restricted by the roles assigned to the service principal, giving you control over which resources can be accessed and at which level. For security reasons, it's always recommended to use service principals with automated tools rather than allowing them to log in with a user identity.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What's So Special About Active Directory?
&lt;/h2&gt;

&lt;p&gt;In the topology of Azure, the &lt;a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs" rel="noopener noreferrer"&gt;AzureRM&lt;/a&gt; resources and the &lt;a href="https://registry.terraform.io/providers/hashicorp/azuread/latest" rel="noopener noreferrer"&gt;AzureAD&lt;/a&gt; resources occupy different places. AzureRM (short for "Azure Resource Manager") lives under a subscription. In normal circumstances, and by default, Azure will only have a single subscription per account. All the resources (such as a virtual machine) will live in here. When you want to manage one of these resources with Terraform, simply give the service principal the appropriate permissions (I usually go with &lt;code&gt;Owner&lt;/code&gt;) on the subscription and everything will work fine.&lt;/p&gt;
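&lt;p&gt;For context, that subscription-scoped setup is typically done with the Azure CLI. This is a sketch of the common invocation, with a placeholder subscription ID - check the &lt;code&gt;az&lt;/code&gt; documentation for your CLI version:&lt;/p&gt;

```shell
# Create a service principal with the Owner role on a single subscription
# (placeholder subscription ID - substitute your own)
az ad sp create-for-rbac \
  --name terraform \
  --role Owner \
  --scopes /subscriptions/00000000-0000-0000-0000-000000000000
```

&lt;p&gt;Nothing in that command touches Active Directory permissions, which is exactly why the extra steps below are needed.&lt;/p&gt;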

&lt;p&gt;Active Directory is different. Logically, the Active Directory resources sit outside the subscription. These are on the account itself. That means that, if you have multiple subscriptions on your Azure account, you would still only have a single Active Directory which manages things for both subscriptions.&lt;/p&gt;

&lt;p&gt;This means that the permissions model is different in Active Directory to the subscriptions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://simonemms.com/img/blog/azure-active-directory/subscription-iam.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkeynjd7u68tu1hxbldmq.png" alt="Subscription IAM" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the Identity and Access Management (IAM) page for a subscription. As you can see, the "Terraform" Service Principal has the "Owner" role. The "Access control (IAM)" blade doesn't exist for the Active Directory.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Would You Ever Want To Mess About With The Active Directory?
&lt;/h2&gt;

&lt;p&gt;There are plenty of legitimate reasons to work with the Active Directory in Terraform. One of my most regular reasons is, when setting up a Kubernetes cluster, I also set up an "admins" and a "users" group for managing which users can get access to the cluster. Those in the "admins" group have full admin access to the cluster, those in the "users" group can only get limited access via the role-based access control (RBAC) settings. The configuration for that is fairly simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "azurerm_client_config" "current" {}

resource "azuread_group" "admin" {
  name = "k8s-admin"
  description = "Admin-level members for Kubernetes"
  owners = [data.azurerm_client_config.current.object_id]
  prevent_duplicate_names = true
}

resource "azuread_group" "user" {
  name = "k8s-user"
  description = "User-level members for Kubernetes"
  owners = [data.azurerm_client_config.current.object_id]
  prevent_duplicate_names = true
}

resource "azurerm_kubernetes_cluster" "k8s" {
  role_based_access_control {
    enabled = true
    azure_active_directory {
      tenant_id = data.azurerm_client_config.current.tenant_id
      managed = true
      admin_group_object_ids = [azuread_group.admin.id]
    }
  }

  # Additional configuration - see Terraform docs for details
}

resource "azurerm_role_assignment" "admin" {
  principal_id = azuread_group.admin.id
  scope = azurerm_resource_group.k8s.id
  role_definition_name = "Azure Kubernetes Service Cluster Admin Role"
}

resource "azurerm_role_assignment" "user" {
  principal_id = azuread_group.user.id
  scope = azurerm_resource_group.k8s.id
  role_definition_name = "Azure Kubernetes Service Cluster User Role"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Any user added to either of these groups will get the appropriate permissions on the Kubernetes cluster.&lt;/p&gt;
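&lt;p&gt;Group membership can also be managed with the Azure CLI - a sketch using the group name from the config above and a placeholder object ID:&lt;/p&gt;

```shell
# Add a user to the admin group by their AAD object ID (placeholder value)
az ad group member add \
  --group k8s-admin \
  --member-id 00000000-0000-0000-0000-000000000000
```

&lt;p&gt;If the membership list should be source-controlled instead, the &lt;code&gt;azuread_group_member&lt;/code&gt; resource keeps it in Terraform.&lt;/p&gt;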

&lt;h2&gt;
  
  
  How To Set It Up
&lt;/h2&gt;

&lt;p&gt;We'll be setting up the service principal as a Group Administrator and also giving the service principal the appropriate API access.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting as a Group Administrator
&lt;/h3&gt;

&lt;p&gt;This gives the service principal the ability to administer groups. This is needed to add and remove groups, and to assign members to them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log into &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;portal.azure.com&lt;/a&gt; and navigate to &lt;a href="https://portal.azure.com/?quickstart=True#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview" rel="noopener noreferrer"&gt;Azure Active Directory&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Select the &lt;a href="https://portal.azure.com/?quickstart=True#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RolesAndAdministrators" rel="noopener noreferrer"&gt;Roles and Administrators&lt;/a&gt; blade&lt;/li&gt;
&lt;li&gt;Select the "Groups Administrator" role&lt;/li&gt;
&lt;li&gt;Select "Add assignments" and add your service principal&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Granting API Access
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Important. The Terraform provider still uses the old (and deprecated) API rather than the new Microsoft Graph API. &lt;a href="https://github.com/hashicorp/terraform-provider-azuread/issues/323" rel="noopener noreferrer"&gt;Work&lt;/a&gt; is happening to move over to that but, at the time of writing, it is still incomplete.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Log into &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;portal.azure.com&lt;/a&gt; and navigate to &lt;a href="https://portal.azure.com/?quickstart=True#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview" rel="noopener noreferrer"&gt;Azure Active Directory&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Select the &lt;a href="https://portal.azure.com/?quickstart=True#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps" rel="noopener noreferrer"&gt;App Registrations&lt;/a&gt; blade&lt;/li&gt;
&lt;li&gt;Select your service principal&lt;/li&gt;
&lt;li&gt;Select "API permissions" from the blade on the left&lt;/li&gt;
&lt;li&gt;Select "Add a permission" and select the legacy "Azure Active Directory Graph" at the very bottom of the page.&lt;/li&gt;
&lt;li&gt;Under "Delegated permissions", select "Directory.ReadWrite.All" and "Group.ReadWrite.All". Then click "Add permissions" to save.&lt;/li&gt;
&lt;li&gt;Select "Add a permission", select the "Azure Active Directory Directory Graph" again&lt;/li&gt;
&lt;li&gt;Under "Application permissions", select "Application.ReadWrite.All". Then click "Add permissions" to save&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it. Now when you run &lt;code&gt;terraform apply&lt;/code&gt;, it will have the permissions to create the groups with your desired configuration. Importantly, if you &lt;code&gt;terraform destroy&lt;/code&gt;, it will also have the permissions to delete the configuration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Delegation Permissions
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://simonemms.com/img/blog/azure-active-directory/delegated-permissions.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw09nfpln8gvaied20yd9.png" alt="Delegate Permissions" width="800" height="769"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Application Permissions
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://simonemms.com/img/blog/azure-active-directory/application-permissions.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55h2dp421sth7453lzw9.png" alt="Application Permissions" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>azure</category>
      <category>serviceprincipal</category>
      <category>permissions</category>
    </item>
    <item>
      <title>Using GitLab Pages to Host a Helm Registry</title>
      <dc:creator>Simon Emms</dc:creator>
      <pubDate>Sun, 27 Dec 2020 00:00:00 +0000</pubDate>
      <link>https://forem.com/mrsimonemms/using-gitlab-pages-to-host-a-helm-registry-50pm</link>
      <guid>https://forem.com/mrsimonemms/using-gitlab-pages-to-host-a-helm-registry-50pm</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ifipNB2n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://simonemms.com/img/blog/bruce-warrington-eMqG0_PpoGg-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ifipNB2n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://simonemms.com/img/blog/bruce-warrington-eMqG0_PpoGg-unsplash.jpg" alt="Using GitLab Pages to Host a Helm Registry" width="640" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To see this in action, check out &lt;a href="https://gitlab.com/MrSimonEmms/helm-repo"&gt;gitlab.com/MrSimonEmms/helm-repo&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Helm is a great way of sharing Kubernetes resources and making them reusable. The &lt;a href="https://helm.sh/docs/topics/registries"&gt;documentation&lt;/a&gt; provides a way of creating a registry using a Docker image that you can host yourself. This provides lots of functionality, such as authentication and commands to interact with it. If you have your own infrastructure and need authentication, this is a great way to start. However, if you're publishing an open-source project, or you don't need authentication, then managing infrastructure is an expense and overhead you don't need.&lt;/p&gt;

&lt;p&gt;Enter &lt;a href="https://docs.gitlab.com/ee/user/project/pages/"&gt;GitLab Pages&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is GitLab Pages
&lt;/h2&gt;

&lt;p&gt;GitLab Pages is a way of publishing static files to the internet. It also allows you to use any URL you want and can be configured to use Let's Encrypt TLS certificates.&lt;/p&gt;

&lt;p&gt;As a Helm Registry is simply an &lt;code&gt;index.yaml&lt;/code&gt; file and a collection of &lt;code&gt;.tar.gz&lt;/code&gt; files, this makes GitLab Pages a great option for hosting your registry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Your Repo
&lt;/h2&gt;

&lt;p&gt;To set the repository up, you actually only need three files configured.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The &lt;code&gt;packages&lt;/code&gt; directory is used in case you want to set up a website to read the &lt;code&gt;index.yaml&lt;/code&gt;. This is outside the scope of this post, but can be copied from my &lt;a href="https://gitlab.com/MrSimonEmms/helm-repo/-/merge_requests/1"&gt;Helm Registry source code&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  /packages/index.html
&lt;/h3&gt;

&lt;p&gt;This file is required for GitLab Pages to trigger building of the website. Even though it's not needed by the Helm Registry itself, without it the site won't be published.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html lang="en"&amp;gt;
&amp;lt;head&amp;gt;
  &amp;lt;meta charset="UTF-8"&amp;gt;
  &amp;lt;title&amp;gt;Helm Registry&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
  My Helm Registry
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  /packages/index.yaml
&lt;/h3&gt;

&lt;p&gt;This is the contents of the Helm Registry. Eventually, this will contain a list of all the packages published to your registry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
entries: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  /.gitlab-ci.yml
&lt;/h3&gt;

&lt;p&gt;This file controls how the GitLab CI/CD builds the package. There are two tasks here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the &lt;code&gt;add_helm_chart&lt;/code&gt; task is run when a trigger is received. It downloads the Helm chart, adds it to the &lt;code&gt;index.yaml&lt;/code&gt; file and commits it to the repository&lt;/li&gt;
&lt;li&gt;the &lt;code&gt;pages&lt;/code&gt; task is run when a commit is pushed to the &lt;code&gt;master&lt;/code&gt; branch. It publishes the &lt;code&gt;packages&lt;/code&gt; directory as your website.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stages:
  - init
  - publish

image: node

variables:
  GIT_REPO_DIR: ./git-repo
  HELM_REPO_DIR: ./packages

add_helm_chart:
  rules:
    - if: '$CI_PIPELINE_TRIGGERED == "true" &amp;amp;&amp;amp; $PROJECT_CHART_REPO != null &amp;amp;&amp;amp; $PROJECT_OWNER != null &amp;amp;&amp;amp; $TAG_NAME != null &amp;amp;&amp;amp; $CHART_DIR != null &amp;amp;&amp;amp; $CHART_NAME != null'
  image: registry.gitlab.com/mrsimonemms/gitlab-ci-tasks/kubectl-helm
  stage: init
  before_script:
    - git remote set-url origin https://${GITLAB_USER_LOGIN}:${GITLAB_TOKEN}@gitlab.com/${CI_PROJECT_PATH}.git
    - git config --global user.email "${GITLAB_USER_EMAIL}"
    - git config --global user.name "${GITLAB_USER_NAME}"
    - git checkout -B ${CI_COMMIT_REF_NAME}
    - git pull origin ${CI_COMMIT_REF_NAME}
    - cd ${HELM_REPO_DIR}
  script:
    - git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com/${PROJECT_OWNER}/${PROJECT_CHART_REPO}.git ${GIT_REPO_DIR}
    - cd ${GIT_REPO_DIR}
    - git checkout ${TAG_NAME}
    - cd -
    - helm package ${GIT_REPO_DIR}/${CHART_DIR}/${CHART_NAME} -d .
    - helm repo index --url ${HELM_REPO_URL} --merge index.yaml .
    - rm -Rf ${GIT_REPO_DIR}
    - git status
    - git add .
    - "git commit -m \"chore: add ${PROJECT_OWNER}/${PROJECT_CHART_REPO} ${TAG_NAME} to Helm repo\""
    - git status
    - git push origin ${CI_COMMIT_REF_NAME}

pages:
  stage: publish
  script: mv ${HELM_REPO_DIR} public
  artifacts:
    paths:
      - public
  only:
    - master
  except:
    - triggers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;p&gt;For this to work, various bits of configuration must be done:&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Custom URL (optional)
&lt;/h3&gt;

&lt;p&gt;If you want to host this on a custom URL, you can add this in the Settings -&amp;gt; Pages section. Follow the instructions on screen to add the DNS records.&lt;/p&gt;

&lt;p&gt;If you don't do this, you can use the default &lt;a href="https://gitlab.io"&gt;gitlab.io&lt;/a&gt; URL.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a Personal Access Token
&lt;/h3&gt;

&lt;p&gt;Create a &lt;a href="https://gitlab.com/-/profile/personal_access_tokens"&gt;Personal Access Token&lt;/a&gt; with the &lt;code&gt;api&lt;/code&gt; scope selected (from the documentation, you should also be able to use the &lt;code&gt;write_repository&lt;/code&gt; scope, although I've not tested it with that).&lt;/p&gt;

&lt;h3&gt;
  
  
  Add CI/CD Variables
&lt;/h3&gt;

&lt;p&gt;In the Settings -&amp;gt; CI/CD section for your repository, create some variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;GITLAB_TOKEN&lt;/code&gt; - this is the value of the Personal Access Token above. This value should be both &lt;code&gt;protected&lt;/code&gt; and &lt;code&gt;masked&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;HELM_REPO_URL&lt;/code&gt; - this is the URL to host the repository on. This must be the fully qualified domain, including &lt;code&gt;https://&lt;/code&gt; at the start. As an example, my value is &lt;code&gt;https://helm.simonemms.com&lt;/code&gt;. This doesn't need to be &lt;code&gt;protected&lt;/code&gt; or &lt;code&gt;masked&lt;/code&gt;, but it won't hurt if it is.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Create a Pipeline Trigger
&lt;/h3&gt;

&lt;p&gt;In the Settings -&amp;gt; CI/CD section for your repository, create a Pipeline Trigger. A good description would be "Add Helm chart to registry", although the exact wording is up to you.&lt;/p&gt;

&lt;p&gt;Keep a note of both the project ID (shown in the example URL, in the format &lt;code&gt;https://gitlab.com/api/v4/projects/xxxx/trigger/pipeline&lt;/code&gt;) and the token.&lt;/p&gt;




&lt;h2&gt;
  
  
  Adding a Chart to your Registry
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;As this uses &lt;code&gt;git clone&lt;/code&gt; to get the project, this can get any public repository or any private repo that's owned by the same user/group as the Helm Registry.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now you've set the Helm Registry repository up, you can begin to integrate this with other repositories that contain the Helm charts you wish to publish. Ultimately, this is a simple cURL call.&lt;/p&gt;

&lt;p&gt;This worked example will use values from &lt;a href="https://gitlab.com/MrSimonEmms/openfaas-amqp1.0-connector"&gt;gitlab.com/MrSimonEmms/openfaas-amqp1.0-connector&lt;/a&gt;. You will need to replace these with your own values.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export CHART_DIR=chart # Location of the Helm chart directory in the repository
export CHART_NAME=openfaas-amqp1.0-connector # Name of the chart within the Helm chart directory
export CI_PROJECT_NAMESPACE=MrSimonEmms # In GitLab CI/CD, this is pre-filled
export CI_PROJECT_NAME=openfaas-amqp1.0-connector # In GitLab CI/CD, this is pre-filled
export HELM_REPO_TRIGGER_TOKEN=xxxxxxxx # The trigger token for the Helm Registry project (generated above)
export HELM_REPO_PROJECT_ID=123456 # The project ID of the Helm Registry project (see the trigger configuration above)
export VERSION=v1.0.0 # The branch or tag to publish

curl -f -X POST \
  -F token=${HELM_REPO_TRIGGER_TOKEN} \
  -F ref=master \
  -F variables[PROJECT_OWNER]=${CI_PROJECT_NAMESPACE} \
  -F variables[PROJECT_CHART_REPO]=${CI_PROJECT_NAME} \
  -F variables[TAG_NAME]=${VERSION} \
  -F variables[CHART_DIR]=${CHART_DIR} \
  -F variables[CHART_NAME]=${CHART_NAME} \
  https://gitlab.com/api/v4/projects/${HELM_REPO_PROJECT_ID}/trigger/pipeline
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will trigger the &lt;code&gt;add_helm_chart&lt;/code&gt; job inside the GitLab CI/CD config. After a few minutes, you will see a new commit on the &lt;code&gt;master&lt;/code&gt; branch and the chart added to the &lt;code&gt;index.yaml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;That's pretty much all there is to hosting your own Helm Registry in GitLab Pages. It also doesn't matter if a specific version is pushed via the trigger multiple times - Helm will manage the duplicates for you.&lt;/p&gt;

</description>
      <category>helm</category>
      <category>kubernetes</category>
      <category>gitlab</category>
      <category>devops</category>
    </item>
    <item>
      <title>An introduction to Git Rebase</title>
      <dc:creator>Simon Emms</dc:creator>
      <pubDate>Mon, 16 Nov 2020 00:00:00 +0000</pubDate>
      <link>https://forem.com/mrsimonemms/an-introduction-to-git-rebase-30c0</link>
      <guid>https://forem.com/mrsimonemms/an-introduction-to-git-rebase-30c0</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmardxqiwownkzk4pwd0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmardxqiwownkzk4pwd0.jpg" alt="An introduction to Git Rebase" width="640" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This post assumes some familiarity with Git. This is an advanced concept and shouldn't be tried as your first foray into Git and version control.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;During development on a feature branch, there will be times when we need to update our branch because the &lt;code&gt;master&lt;/code&gt; branch has received an update. In order to keep a linear Git history, avoid using the &lt;code&gt;git merge master&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;Having a linear history is more work, but it offers some advantages over a merge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;much easier to find the true source of a commit&lt;/li&gt;
&lt;li&gt;no "merge commit" messages taking up space&lt;/li&gt;
&lt;li&gt;reduced opportunity for conflicts&lt;/li&gt;
&lt;li&gt;easier to revert a commit that's not been introduced as part of a pull request&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introducing Git Rebase
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;git rebase master&lt;/code&gt; command is what we need to use instead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Assume you're in a branch feature/my-wonderful-feature
git checkout master
git pull
git checkout - # or git checkout feature/my-wonderful-feature
git rebase master
git push --force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If there are any issues, you can always use &lt;code&gt;git rebase --abort&lt;/code&gt;.&lt;/p&gt;
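&lt;p&gt;If the rebase stops on a conflict, the loop is: fix the files, stage them and continue. A sketch in a throwaway repo (the branch and file names are made up for the demo):&lt;/p&gt;

```shell
# Demo: reproduce and resolve a rebase conflict in a throwaway repo
cd "$(mktemp -d)" && git init -q .
git config user.email demo@example.com && git config user.name demo
echo base > file.txt && git add file.txt && git commit -qm "base"
git branch -M master               # make sure the demo branch is called master
git checkout -qb feature/my-wonderful-feature
echo feature > file.txt && git commit -qam "feat: my change"
git checkout -q master
echo master > file.txt && git commit -qam "fix: master change"

git checkout -q feature/my-wonderful-feature
git rebase master || echo "conflict - stop and resolve"  # rebase halts on the clash
echo resolved > file.txt                                 # ...fix the file by hand...
git add file.txt
GIT_EDITOR=true git rebase --continue                    # replays the remaining commits
# git rebase --abort would instead restore the pre-rebase state
```

&lt;p&gt;After the &lt;code&gt;--continue&lt;/code&gt;, the branch contains &lt;code&gt;master&lt;/code&gt;'s commits with your commit replayed on top.&lt;/p&gt;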

&lt;h2&gt;
  
  
  What is Git Rebase?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;"This is the way it should have gone"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Git rebase is a way of rewriting your history. At a simple level, it rewinds your branch to the last commit it has in common with &lt;code&gt;master&lt;/code&gt;, applies the new &lt;code&gt;master&lt;/code&gt; commits and then replays your commits on top of them.&lt;/p&gt;

&lt;p&gt;By contrast, &lt;code&gt;git merge master&lt;/code&gt; would create a merge commit &lt;strong&gt;AFTER&lt;/strong&gt; your commits. When your feature branch gets promoted to the &lt;code&gt;master&lt;/code&gt; branch, you will end up with merge commits polluting the history.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Git Rebase to squash commits
&lt;/h2&gt;

&lt;p&gt;In a feature branch, there will often be commits in your log like this (&lt;code&gt;git log --oneline&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;067c88e gah, I'm stupid. I can see why CI broke
bf2a09e erm, not sure why CI has broken so another go
7fa9388 (feature/my-wonderful-feature): feat(some-brilliant-feat): this is a brilliant feature I've worked hard on
d4193f5 (master) fix(some-fix): some fix
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We've all been there. CI/CD pipelines can be a pain to get right. These commits are fine in a feature branch, but we don't want them littering the history in &lt;code&gt;master&lt;/code&gt; - a feature branch should be the work we did, logically separated into "good" commits (it makes finding problems later much, much easier).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It's a good idea to create a backup branch. With a Git Rebase, you are changing the branch irreparably. Run &lt;code&gt;git checkout -b backup/my-wonderful-feature&lt;/code&gt; to create a backup.&lt;/p&gt;
&lt;/blockquote&gt;
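&lt;p&gt;The number to put after &lt;code&gt;HEAD~&lt;/code&gt; is the number of commits unique to your branch, and &lt;code&gt;git rev-list&lt;/code&gt; can count them for you. A sketch in a throwaway repo mirroring the log above:&lt;/p&gt;

```shell
# Demo: count the commits on the branch that are not in master,
# i.e. the N to use in `git rebase -i HEAD~N`
cd "$(mktemp -d)" && git init -q .
git config user.email demo@example.com && git config user.name demo
git commit -qm "fix(some-fix): some fix" --allow-empty
git branch -M master
git checkout -qb feature/my-wonderful-feature
git commit -qm "feat(some-brilliant-feat): a brilliant feature" --allow-empty
git commit -qm "erm, not sure why CI has broken so another go" --allow-empty
git commit -qm "gah, I can see why CI broke" --allow-empty
git rev-list --count master..HEAD   # prints 3
```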

&lt;p&gt;Run &lt;code&gt;git rebase -i HEAD~3&lt;/code&gt; to work on the 3 latest commits - the &lt;code&gt;3&lt;/code&gt; can be changed to anything, but you shouldn't go beyond the last common commit (in this case &lt;code&gt;d4193f5&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;This will present an interactive screen that looks like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;NB. The latest commit is at the bottom.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pick 7fa9388 (feature/my-wonderful-feature): feat(some-brilliant-feat): this is a brilliant feature I've worked hard on
pick bf2a09e erm, not sure why CI has broken so another go
pick 067c88e gah, I'm stupid. I can see why CI broke

# Rebase f4b6d01..9249bd0 onto f4b6d01 (3 commands)
#
# Commands:
# p, pick &amp;lt;commit&amp;gt; = use commit
# r, reword &amp;lt;commit&amp;gt; = use commit, but edit the commit message
# e, edit &amp;lt;commit&amp;gt; = use commit, but stop for amending
# s, squash &amp;lt;commit&amp;gt; = use commit, but meld into previous commit
# f, fixup &amp;lt;commit&amp;gt; = like "squash", but discard this commit's log message
# x, exec &amp;lt;command&amp;gt; = run command (the rest of the line) using shell
# b, break = stop here (continue rebase later with 'git rebase --continue')
# d, drop &amp;lt;commit&amp;gt; = remove commit
# l, label &amp;lt;label&amp;gt; = label current HEAD with a name
# t, reset &amp;lt;label&amp;gt; = reset HEAD to a label
# m, merge [-C &amp;lt;commit&amp;gt; | -c &amp;lt;commit&amp;gt;] &amp;lt;label&amp;gt; [# &amp;lt;oneline&amp;gt;]
# . create a merge commit using the original merge commit's
# . message (or the oneline, if no original merge commit was
# . specified). Use -c &amp;lt;commit&amp;gt; to reword the commit message.
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now change your history. In our example, we want to combine all the commits together - this is the &lt;code&gt;fixup&lt;/code&gt; command. Change the file so that it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pick 7fa9388 (feature/my-wonderful-feature): feat(some-brilliant-feat): this is a brilliant feature I've worked hard on
f bf2a09e erm, not sure why CI has broken so another go
f 067c88e gah, I'm stupid. I can see why CI broke
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now save your changes and exit. If you now look at &lt;code&gt;git log --oneline&lt;/code&gt;, you will see that only the first commit exists. You can now run &lt;code&gt;git push --force&lt;/code&gt; to replace your remote branch with your local branch.&lt;/p&gt;
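&lt;p&gt;A note on that final push: &lt;code&gt;--force-with-lease&lt;/code&gt; does the same job as &lt;code&gt;--force&lt;/code&gt;, but refuses to push if the remote has commits you haven't fetched - useful insurance when rewriting history on a shared branch. A demo against a throwaway local "remote":&lt;/p&gt;

```shell
# Demo: rewrite history, then push it with --force-with-lease, which
# fails safely if someone else has pushed to the remote in the meantime
cd "$(mktemp -d)"
git init -q --bare remote.git                   # stand-in for the real remote
git clone -q remote.git work && cd work
git config user.email demo@example.com && git config user.name demo
git commit -qm "feat: brilliant feature" --allow-empty
git push -q origin HEAD:master
git commit -q --amend -m "feat: squashed history" --allow-empty
git push -q --force-with-lease origin HEAD:master
```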

&lt;p&gt;You can also use this method for doing other things. Generally, I find that I use the &lt;code&gt;pick&lt;/code&gt;, &lt;code&gt;reword&lt;/code&gt; and &lt;code&gt;fixup&lt;/code&gt; commands the most, although I also find &lt;code&gt;edit&lt;/code&gt; and &lt;code&gt;squash&lt;/code&gt; useful for editing my history before submitting a change.&lt;/p&gt;

</description>
      <category>git</category>
      <category>devops</category>
      <category>repository</category>
    </item>
    <item>
      <title>Dockerising an R App</title>
      <dc:creator>Simon Emms</dc:creator>
      <pubDate>Thu, 27 Aug 2020 00:00:00 +0000</pubDate>
      <link>https://forem.com/mrsimonemms/dockerising-an-r-app-2d03</link>
      <guid>https://forem.com/mrsimonemms/dockerising-an-r-app-2d03</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37cs9bsbd39m9kxmr60x.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37cs9bsbd39m9kxmr60x.jpg" alt="Dockerising an R App" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;R is a statistical programming language with a huge repository of tools to crunch numbers, manipulate data and do all manner of data science tasks. I've worked with a couple of teams with data scientists who use it and love it. Which is great, except for one problem.&lt;/p&gt;

&lt;p&gt;R is a pain in the arse to Dockerise.&lt;/p&gt;

&lt;p&gt;One irritating, but not insurmountable, problem is the lack of official images for R. Personally, I tend to stick to the official images built by Docker, which I then extend. That way you know that they're safe to use, are built to proper standards and are kept (reasonably) up-to-date. Fortunately, there is the &lt;a href="https://hub.docker.com/u/rocker" rel="noopener noreferrer"&gt;Rocker&lt;/a&gt; organisation, which maintains a series of images that you can use.&lt;/p&gt;

&lt;p&gt;However, the biggest pain point by far is dependency management and the final size of the images. By design, when using &lt;a href="https://rstudio.com/" rel="noopener noreferrer"&gt;RStudio&lt;/a&gt;, developers will typically install dependencies at runtime. That's fine there, because it's a development environment. When you're containerising your R app, it's not acceptable, as containers should be immutable, pre-compiled and fast-loading. Some of these dependencies take many &lt;strong&gt;MINUTES&lt;/strong&gt; to download, compile and install.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsimonemms.com%2Fimg%2Fblog%2Fnode_modules.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsimonemms.com%2Fimg%2Fblog%2Fnode_modules.jpg" title="Largest mass in the universe" alt="Largest mass in the universe" width="647" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And R dependencies are big. Really big. If you thought &lt;code&gt;node_modules&lt;/code&gt; was big, R is something else. I recently developed a fairly simple R app for the &lt;a href="https://github.com/britishredcrosssociety/local-lockdown" rel="noopener noreferrer"&gt;British Red Cross&lt;/a&gt; and the final image size was over 2GB (yes, that's &lt;strong&gt;GIGABYTES&lt;/strong&gt;). Rocker doesn't provide an Alpine image, which doesn't help, but I don't think that's a big problem due to the size of the dependencies and even R itself. Rocker's &lt;a href="https://hub.docker.com/r/rocker/r-base" rel="noopener noreferrer"&gt;r-base&lt;/a&gt; image comes out at over 800MB. This is built on &lt;a href="https://hub.docker.com/_/debian" rel="noopener noreferrer"&gt;debian&lt;/a&gt;, which is 118MB - using Alpine would only reduce that by around 100MB which, seeing as the R base is over 700MB, hardly seems worth the effort.&lt;/p&gt;

&lt;p&gt;There are other issues with R dependencies. With NodeJS you have your &lt;code&gt;package.json&lt;/code&gt;, with Python you have your &lt;code&gt;requirements.txt&lt;/code&gt;. R doesn't really have any matching concept (although there are some &lt;a href="https://stackoverflow.com/questions/38928326/is-there-something-like-requirements-txt-for-r" rel="noopener noreferrer"&gt;workarounds&lt;/a&gt;) so you have to maintain your dependencies in both your &lt;code&gt;Dockerfile&lt;/code&gt; and where you call it in your R app.&lt;/p&gt;
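&lt;p&gt;One way to ease the double bookkeeping is to scrape the &lt;code&gt;library()&lt;/code&gt; calls out of your R files and compare the list against the &lt;code&gt;Dockerfile&lt;/code&gt;. A rough sketch (it won't catch &lt;code&gt;require()&lt;/code&gt; or &lt;code&gt;pkg::fun&lt;/code&gt; usage, and the file names are made up for the demo):&lt;/p&gt;

```shell
# Demo: list every package pulled in via library() across the R files,
# ready to compare with the install2.r line in the Dockerfile
cd "$(mktemp -d)"
printf 'library(dplyr)\nlibrary(readr)\n' > app.R
printf 'library(dplyr)\n' > helpers.R
grep -hoE 'library\([^)]+\)' ./*.R | sed 's/library(//; s/)//' | sort -u
# prints:
# dplyr
# readr
```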

&lt;p&gt;Finally, any dependencies that you need in your OS are not installed. This is fairly standard, but the default behaviour of the installer is to exit without an error which is incredibly frustrating.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, how do you do it then?
&lt;/h2&gt;

&lt;p&gt;The key to installing the dependencies is the &lt;code&gt;install2.r&lt;/code&gt; application which is bundled with all Rocker images. This installs dependencies from the &lt;a href="https://cran.r-project.org/web/packages/" rel="noopener noreferrer"&gt;CRAN installation repository&lt;/a&gt;. There is also a corresponding &lt;code&gt;installGithub.r&lt;/code&gt; binary which installs dependencies from GitHub.&lt;/p&gt;

&lt;p&gt;In your &lt;code&gt;*.R&lt;/code&gt; files, you will use the &lt;code&gt;library()&lt;/code&gt; function to call your dependencies at the top of the script. Basically, every time you use it, you need to update your &lt;code&gt;Dockerfile&lt;/code&gt; with each dependency. Yes, it's a pain to do it each time, but that's what you have to do.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;library(dplyr)
library(readr)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  install2.r
&lt;/h3&gt;

&lt;p&gt;As mentioned above, this doesn't error by default. It'll print the errors in the logs (along with lots of other things), so you'll never know if the build has failed. Therefore, you need to use the &lt;code&gt;--error&lt;/code&gt; flag with this.&lt;/p&gt;

&lt;p&gt;There is also a &lt;code&gt;--skipinstalled&lt;/code&gt; flag which stops reinstalling any dependency that's already present in the system.&lt;/p&gt;

&lt;h3&gt;
  
  
  installGithub.r
&lt;/h3&gt;

&lt;p&gt;Again, make sure you pass the &lt;code&gt;--error&lt;/code&gt; flag otherwise any errors won't break the build.&lt;/p&gt;

&lt;p&gt;Typically, you wouldn't need to use this. I only had to use it for the Red Cross because of a bug with the latest version of Tidyverse when using Ubuntu 18.04 (which is the basis of the R image). I would only suggest using this if you need to install a specific version of a dependency, because I can't work out how to do that with &lt;code&gt;install2.r&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;One final note: this requires &lt;code&gt;remotes&lt;/code&gt; to be installed, so you will need to run &lt;code&gt;install2.r --error remotes&lt;/code&gt; before installing anything with &lt;code&gt;installGithub.r&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Full Dockerfile example
&lt;/h2&gt;

&lt;p&gt;This is an example using Shiny server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM rocker/shiny:3.6.3
COPY . ./src/shiny-server

# Install any OS dependencies - this is just an example and not required for these dependencies
RUN apt-get update \
    &amp;amp;&amp;amp; apt-get install -y libudunits2-dev

# List of dependencies - ensure corresponds with `library()` calls in *.R files
RUN install2.r --error \
    --skipinstalled \
    readr \
    dplyr

USER shiny
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you're here, the usual Docker command of &lt;code&gt;docker build -t r-app .&lt;/code&gt; will build this &lt;code&gt;Dockerfile&lt;/code&gt; into your image.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>containers</category>
    </item>
    <item>
      <title>Live Reload for OpenFaas</title>
      <dc:creator>Simon Emms</dc:creator>
      <pubDate>Wed, 12 Aug 2020 00:00:00 +0000</pubDate>
      <link>https://forem.com/mrsimonemms/live-reload-for-openfaas-33dn</link>
      <guid>https://forem.com/mrsimonemms/live-reload-for-openfaas-33dn</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ldxexjSR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://simonemms.com/img/blog/braden-collum-9HI8UJMSdZA-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ldxexjSR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://simonemms.com/img/blog/braden-collum-9HI8UJMSdZA-unsplash.jpg" alt="Live Reload for OpenFaas" width="880" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;2020-08-15: Added GoLang example&lt;/p&gt;

&lt;p&gt;This article assumes a degree of familiarity with OpenFaaS. I won't be covering how to get started in it or the key concepts in any detail. If you want to get started with it, please see their &lt;a href="https://docs.openfaas.com/"&gt;excellent documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I am a big fan of "serverless" functions. They help your application scale to a theoretically unlimited capacity. They are so useful that there is a whole myriad of implementations - AWS, Azure and GCP all have their own native versions, you can use the Serverless framework and then there are the likes of Knative. And these are just the ones I can remember off the top of my head.&lt;/p&gt;

&lt;p&gt;One of my favourite versions is &lt;a href="https://openfaas.com"&gt;OpenFaaS&lt;/a&gt; by the very excellent &lt;a href="https://www.alexellis.io/"&gt;Alex Ellis&lt;/a&gt;. This has the advantage of being open source, a vibrant community and really easy to get started. It's also cloud native, so you're not tied into a particular cloud provider (yes, I'm looking at you AWS/Azure/GCP) - if it runs in Kubernetes, it'll run in OpenFaaS.&lt;/p&gt;

&lt;p&gt;One of my biggest gripes with OpenFaaS is that it can be painful developing new functions. It's fine if you want to stick it in the public Docker Registry and/or you don't mind waiting for a minute or two each time you change your code. I regularly work for companies that don't want their proprietary code being published on the internet for anyone to use.&lt;/p&gt;

&lt;p&gt;Also, I'm impatient and hate compile time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://xkcd.com/303/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RtgrzwN9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://imgs.xkcd.com/comics/compiling.png" alt="Compiling" title="Compiling" width="413" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The official workflow is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;faas-cli build # Build the image
faas-cli push # Push to Docker Hub
faas-cli deploy # Deploy to your cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each of these steps could well take in excess of 30 seconds. Painful when you're hacking away.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Compose To The Rescue
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;There is an accompanying Git repo with this. Please check that out to see the source code - &lt;a href="https://gitlab.com/MrSimonEmms/openfaas-docker-compose"&gt;gitlab.com/MrSimonEmms/openfaas-docker-compose&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Docker Compose is perhaps a little unfashionable with Kubernetes-enabled teams, but it's a great fit for efficient local development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IMPORTANT&lt;/strong&gt; : this development workflow exists outside OpenFaaS. It uses the template that you will use when it moves to production, but when developing your function you won't have access to things like the async workflow. However, as you would be focusing on getting your function working rather than accessing it, that's usually an easy concession to make.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Process Explained
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;If you are using the Classic Watchdog, there is nothing to do here and your function will automatically reflect any changes each time you invoke the function. This is because the Classic Watchdog runs the whole command each time the function is invoked.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The OF Watchdog is the modern and (in my opinion) better watchdog. The function runs as a single HTTP endpoint on all methods, which allows improved error handling. However, because it's an application inside the function, we need to change how that application works - basically, you need to add a file watcher that restarts the application each time you make a change to the code.&lt;/p&gt;

&lt;p&gt;The important one here is the &lt;code&gt;fprocess&lt;/code&gt; environment variable. This is the command that runs the application. For example, in the &lt;code&gt;node12&lt;/code&gt; template the &lt;code&gt;fprocess&lt;/code&gt; envvar is &lt;code&gt;node index.js&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We also need to set the &lt;code&gt;volumes&lt;/code&gt; and set the &lt;code&gt;user&lt;/code&gt; to &lt;code&gt;root&lt;/code&gt;. The reason we have to set the &lt;code&gt;user&lt;/code&gt; to &lt;code&gt;root&lt;/code&gt; is to be able to install any global dependencies to the container at runtime. Even though this is an anti-pattern in a production container, I would argue that this is ok in a dev-only container to avoid having to maintain duplicate Dockerfiles.&lt;/p&gt;

&lt;h2&gt;
  
  
  Secrets
&lt;/h2&gt;

&lt;p&gt;OpenFaaS supports &lt;a href="https://docs.openfaas.com/reference/secrets/"&gt;secrets&lt;/a&gt; by putting the file in &lt;code&gt;/var/openfaas/secrets&lt;/code&gt;. Docker Compose stores secrets in &lt;code&gt;/run/secrets&lt;/code&gt;, so you will need to do some form of either/or load, appropriate to the language. I suggest making the OpenFaaS path the default to reduce load in the production environment.&lt;/p&gt;
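&lt;p&gt;In shell terms, that either/or load is just a fallback between the two paths. A sketch (the directory overrides exist purely so the function can be exercised outside a container; inside the function you'd use the two paths directly, and adapt the idea to your language):&lt;/p&gt;

```shell
# Read a named secret, preferring the OpenFaaS path and falling back to
# the Docker Compose one; the directories are overridable for the demo
OPENFAAS_SECRETS="${OPENFAAS_SECRETS:-/var/openfaas/secrets}"
COMPOSE_SECRETS="${COMPOSE_SECRETS:-/run/secrets}"

read_secret() {
  if [ -f "$OPENFAAS_SECRETS/$1" ]; then
    cat "$OPENFAAS_SECRETS/$1"   # production: OpenFaaS mounts secrets here
  else
    cat "$COMPOSE_SECRETS/$1"    # local dev: Docker Compose mounts them here
  fi
}
```

&lt;p&gt;Checking the OpenFaaS path first means the production environment never touches the Docker Compose fallback.&lt;/p&gt;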

&lt;p&gt;&lt;strong&gt;IMPORTANT:&lt;/strong&gt; You should never store sensitive data in a repo.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Let's start by creating a &lt;code&gt;Makefile&lt;/code&gt; in the root of your project. The purpose of this is to be able to easily install all the templates found in the &lt;code&gt;functions.yml&lt;/code&gt; file. Strictly speaking, this is optional, but it makes life a lot easier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FUNC_FILE ?= './functions.yml'

templates:
    which faas-cli || (echo "Please install 'faas-cli' package" &amp;amp;&amp;amp; exit 1)
    which jq || (echo "Please install 'jq' package" &amp;amp;&amp;amp; exit 1)
    which yq || (echo "Please install 'yq' package" &amp;amp;&amp;amp; exit 1)

    # NB: 'yq r - -j' is yq v3 syntax; yq v4 replaced it with 'yq eval'
    $(eval templates := $(shell cat ${FUNC_FILE} | yq r - -j | jq -r '.functions | values[].lang'))

    for template in $(templates) ; do \
        faas-cli template store pull $$template ; \
    done
.PHONY: templates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;make templates&lt;/code&gt; to download all the templates you need.&lt;/p&gt;

&lt;p&gt;We now need a &lt;code&gt;functions.yml&lt;/code&gt; in the root of your project. This is what we're using as the definition of all the functions we're writing. This is the exact same format as the OpenFaaS yaml file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 1.0
provider:
  name: openfaas
functions: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Language Implementations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Node12
&lt;/h3&gt;

&lt;h4&gt;
  
  
  functions.yml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 1.0
provider:
  name: openfaas
functions:
  node12:
    lang: node12
    handler: ./functions/node12
    image: node12:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  docker-compose.yml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.7'
services:
  node12:
    build:
      context: ./template/node12
    ports:
      - 3000:3000
    environment:
      fprocess: nodemon index.js
    secrets:
      - example
    volumes:
      - ./functions/node12:/home/app/function
    user: root
    command: sh -c "npm i -g nodemon &amp;amp;&amp;amp; fwatchdog"

secrets:
  example:
    file: ./secrets/example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To get live reload in NodeJS we use the &lt;a href="https://nodemon.io/"&gt;nodemon&lt;/a&gt; application. If you've ever done any NodeJS work, it's incredibly likely that you'll have used nodemon as it works really &lt;strong&gt;REALLY&lt;/strong&gt; well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Python3-Flask
&lt;/h3&gt;

&lt;h4&gt;
  
  
  functions.yml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 1.0
provider:
  name: openfaas
functions:
  python3-flask:
    lang: python3-flask
    handler: ./functions/python3-flask
    image: python:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  docker-compose.yml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.7'
services:
  python3-flask:
    build:
      context: ./template/python3-flask
    ports:
      - 3001:8080
    environment:
      FLASK_APP: /home/app/index.py
      FLASK_ENV: development
      fprocess: flask run
    volumes:
      - ./functions/python3-flask:/home/app/function
    user: root
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unlike the &lt;code&gt;node12&lt;/code&gt; template, we don't need to install any dependencies at runtime. The &lt;code&gt;fprocess&lt;/code&gt; needs to use &lt;code&gt;flask&lt;/code&gt;, which is already installed, as that's the framework used in this template. I've kept the &lt;code&gt;user&lt;/code&gt; as &lt;code&gt;root&lt;/code&gt; for consistency, but it probably isn't actually necessary.&lt;/p&gt;

&lt;h3&gt;
  
  
  GoLang-HTTP
&lt;/h3&gt;

&lt;h4&gt;
  
  
  functions.yml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 1.0
provider:
  name: openfaas
functions:
  golang-http:
    lang: golang-http
    handler: ./functions/golang-http
    image: golang-http:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  docker-compose.yml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.7'
services:
  golang-http:
    build:
      context: ./template/golang-http
      target: build
    ports:
      - 3002:8080
    environment:
      fprocess: air -c /go/src/handler/function/.air.toml
      mode: http
      upstream_url: http://127.0.0.1:8082
    volumes:
      - ./functions/golang-http:/go/src/handler/function
    command: sh -c "go get -u github.com/cosmtrek/air &amp;amp;&amp;amp; fwatchdog"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This one is a little more involved than NodeJS/Python because the Go template builds a binary and puts it into an empty Alpine container. We have to intercept the build process to use the GoLang template by setting the &lt;code&gt;build.target&lt;/code&gt; parameter to &lt;code&gt;build&lt;/code&gt; (see &lt;a href="https://docs.docker.com/develop/develop-images/multistage-build/"&gt;Docker Multi-Stage builds&lt;/a&gt; for more information). As the OpenFaaS configuration is done in the final container, we need to apply this configuration to the &lt;code&gt;build&lt;/code&gt; target. This means setting the &lt;code&gt;mode&lt;/code&gt; and &lt;code&gt;upstream_url&lt;/code&gt; environment variables.&lt;/p&gt;

&lt;p&gt;Finally, this uses the Go package &lt;a href="https://github.com/cosmtrek/air"&gt;Air&lt;/a&gt; to provide the live reload facility. This requires a config file, even if there's nothing inside it. As we won't need it in production, there's a &lt;code&gt;.dockerignore&lt;/code&gt; file so it isn't sent to the image when building. However, the Docker Compose &lt;code&gt;volumes&lt;/code&gt; mount ignores this, which is exactly what we need.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;I want to write a version for this for each officially supported OpenFaaS template in the template store. If you want to help with that, please fork the repo and add a PR.&lt;/p&gt;

&lt;p&gt;GoLang (now done) and CSharp are the ones I'd really like to get done as they seem to be fairly popular within the community.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>containers</category>
    </item>
  </channel>
</rss>
