<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Tom Watt</title>
    <description>The latest articles on Forem by Tom Watt (@tomowatt).</description>
    <link>https://forem.com/tomowatt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F382024%2Fc3f5b9c6-7e67-4d94-8d9f-b7f8f6508d03.jpeg</url>
      <title>Forem: Tom Watt</title>
      <link>https://forem.com/tomowatt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tomowatt"/>
    <language>en</language>
    <item>
      <title>Things I learnt migrating an application to Kubernetes</title>
      <dc:creator>Tom Watt</dc:creator>
      <pubDate>Tue, 13 Jul 2021 15:02:58 +0000</pubDate>
      <link>https://forem.com/tomowatt/things-i-learnt-migrating-an-application-to-kubernetes-2nlj</link>
      <guid>https://forem.com/tomowatt/things-i-learnt-migrating-an-application-to-kubernetes-2nlj</guid>
<description>&lt;p&gt;Let me paint the picture: an application forgotten, left to gather dust and bugs. Created in a rush, deployed solely using a custom in-house deployment system on ageing and unmaintained Virtual Machines.&lt;/p&gt;

&lt;p&gt;Python2. Django 1. Hardcoding.&lt;/p&gt;

&lt;p&gt;It was a mess. It was hard.&lt;/p&gt;

&lt;p&gt;But it was a rewarding experience.&lt;/p&gt;




&lt;h2&gt;
  
  
  Knowledge of tooling is sometimes sparse
&lt;/h2&gt;

&lt;p&gt;The application used Docker - thankfully - but used a lot of scripts to do &lt;em&gt;little&lt;/em&gt; things.&lt;/p&gt;

&lt;p&gt;Times change and knowledge drifts. But we can still overcome issues with a bash script.&lt;/p&gt;

&lt;p&gt;The first thing I did was remove all these scripts.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The script to run commands in the container was replaced with a Makefile, serving as both documentation and a one-command setup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The script to set up local environment variables was replaced with default values set either in Docker Compose or in the application. Local development shouldn't require extra effort to set up and should work from a fresh pull.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The script to pull additional libraries was replaced with git submodules. Love them or hate them, submodules are useful and cleaner when done right.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
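
&lt;p&gt;As an illustration of the Makefile approach (target and service names here are hypothetical, not the actual project's), such a file can be tiny and still serve as documentation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical targets - adjust service and command names to your project
up: ## Build and start the local environment
	docker-compose up --build -d

shell: ## Open a shell inside the running app container
	docker-compose exec app bash

test: ## Run the test suite inside the container
	docker-compose exec app python manage.py test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running &lt;code&gt;make up&lt;/code&gt; from a fresh pull is then the whole setup.&lt;/p&gt;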

&lt;p&gt;If you're ever thinking about using a script to overcome an issue, double-check what you are already using and you &lt;em&gt;might&lt;/em&gt; find a better solution.&lt;/p&gt;




&lt;h2&gt;
  
  
  Focus on what the Application needs when building a container
&lt;/h2&gt;

&lt;p&gt;The container, when originally built, was over a whopping 1GB! But as I mentioned, it was a Django application, so how was it so big?&lt;/p&gt;

&lt;p&gt;"Useful tools and packages"&lt;/p&gt;

&lt;p&gt;Database Client, Text editors, etc., you name it and it was probably there.&lt;/p&gt;

&lt;p&gt;The application didn't need any of it. A developer did.&lt;/p&gt;

&lt;p&gt;The convenience of pre-installed tools and packages adds unnecessary bulk. This slows build times and adds extra maintenance.&lt;/p&gt;

&lt;p&gt;Ideally, you shouldn't need to shell into a container to debug. But if you do, you can just install the necessary tools while debugging and then destroy the container when done.&lt;/p&gt;

&lt;p&gt;Containers are meant to be &lt;em&gt;ephemeral&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;After removing all those 'convenient' tools and packages, the image size dropped to about 300MB. 💪&lt;/p&gt;
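
&lt;p&gt;For illustration, a slimmer image sticks to the application's own needs. This is a generic sketch, not the project's actual Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Slim base image: no database clients, editors or other developer tools
FROM python:2.7-slim

WORKDIR /app

# Install only what the application itself needs
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;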




&lt;h2&gt;
  
  
  Moving code repositories to a new host is &lt;em&gt;easy?&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;The code needed to be moved from BitBucket to GitHub. But I had never done anything like this before.&lt;/p&gt;

&lt;p&gt;After a bit of searching around and trying GitHub's Importer, which didn't work, I came across this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git push --mirror {destination}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I had the &lt;strong&gt;power&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;16 repositories pulled. 16 repositories created. 16 &lt;em&gt;mirror&lt;/em&gt; pushes. Done.&lt;/p&gt;

&lt;p&gt;Though, as easy as it was with fresh, empty repositories, the command is &lt;strong&gt;destructive&lt;/strong&gt;, so be wary.&lt;/p&gt;
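
&lt;p&gt;For each repository, the full move is just a few commands (repository URLs here are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Bare clone with all branches, tags and refs
git clone --mirror git@bitbucket.org:team/app.git
cd app.git

# Push everything to the new (empty!) repository
git push --mirror git@github.com:org/app.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;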




&lt;h2&gt;
  
  
  Helm + Terraform works but better used separately
&lt;/h2&gt;

&lt;p&gt;We use a combination of tools. Terraform for all the infrastructure and Helm to template the application settings.&lt;/p&gt;

&lt;p&gt;But Terraform &lt;em&gt;can&lt;/em&gt; do the things Helm does.&lt;/p&gt;

&lt;p&gt;The application is aged and reliant on older versions of services, e.g., Elasticsearch. I tried using the Elasticsearch Helm chart, but I couldn't get the exact version that the application was previously using, and it added extra things that I didn't need, e.g., multiple pods.&lt;/p&gt;

&lt;p&gt;Using Terraform, I created a simple Kubernetes Deployment for the Elasticsearch service, and that's all I needed. Job done.&lt;/p&gt;

&lt;p&gt;Another challenge I encountered, having created the Helm template for the application and referenced it locally with Terraform, was that updating the template wouldn't cause Terraform to apply the change.&lt;/p&gt;

&lt;p&gt;A way around this was to ensure the values passed into the &lt;code&gt;helm_release&lt;/code&gt; resource were referenced and easily updatable, e.g., the Image Tag.&lt;/p&gt;
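
&lt;p&gt;In practice that means pinning the values that change often as explicit &lt;code&gt;set&lt;/code&gt; blocks on the resource, so a new value forces Terraform to see a diff. The names below are illustrative, not the actual configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "helm_release" "app" {
  name  = "app"
  chart = "./charts/app"

  # Changing this variable changes the resource, triggering an upgrade
  set {
    name  = "image.tag"
    value = var.image_tag
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;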

&lt;p&gt;But looking back, if I had just created it all in Terraform, I wouldn't have had to do such things and I feel it would have been &lt;em&gt;nicer&lt;/em&gt; overall.&lt;/p&gt;

&lt;p&gt;Plus Helm templates are &lt;strong&gt;painful&lt;/strong&gt; to debug. Too many spaces here, too little there. 🙃&lt;/p&gt;




&lt;h2&gt;
  
  
  Embracing Failing Fast
&lt;/h2&gt;

&lt;p&gt;I thought I was being smart when taking out the hardcoded settings for Django, e.g., Database Name, User, etc., and using Environment Variables, &lt;em&gt;but&lt;/em&gt; providing defaults.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DATABASE_NAME = os.environ.get('DATABASE_NAME', 'postgres')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It worked fine for the local development setup as I didn't need to update the Compose file with an extra Environment Variable.&lt;/p&gt;

&lt;p&gt;But then I noticed an issue in the Live environment on the Cluster: the application was using the wrong Database. But how?!&lt;/p&gt;

&lt;p&gt;...&lt;/p&gt;

&lt;p&gt;I forgot to include it in the Helm Values.&lt;/p&gt;

&lt;p&gt;...&lt;/p&gt;

&lt;p&gt;A quick fix.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DATABASE_NAME = os.environ['DATABASE_NAME']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And now I'll never forget to include &lt;em&gt;important&lt;/em&gt; settings.&lt;/p&gt;
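
&lt;p&gt;If you want a friendlier failure than a bare &lt;code&gt;KeyError&lt;/code&gt;, a small helper (my own sketch, not from the original code) keeps the fail-fast behaviour but names the missing setting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

def require_env(name):
    """Fail fast at startup with a clear message if a setting is missing."""
    try:
        return os.environ[name]
    except KeyError:
        raise RuntimeError("Missing required environment variable: %s" % name)

DATABASE_NAME = require_env('DATABASE_NAME')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;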




&lt;h2&gt;
  
  
  You won't always get thanks but should always take on the challenge
&lt;/h2&gt;

&lt;p&gt;This took me a lot of time. Research, distractions, failures, repeat.&lt;/p&gt;

&lt;p&gt;And from the Client and User side, nothing changed. The site is up and running.&lt;/p&gt;

&lt;p&gt;But it was worth it.&lt;/p&gt;

&lt;p&gt;There didn't need to be any fanfare. When you put in a lot of effort to make something a little bit better, more secure, reliable and can be proud of what you've managed, then that's all you need.&lt;/p&gt;

&lt;p&gt;The relief from finishing it and merging all the code after a cleanup was pure bliss.&lt;/p&gt;




&lt;p&gt;I've learned a lot doing this and would happily do it again.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>terraform</category>
      <category>helm</category>
    </item>
    <item>
      <title>Going "Repository Native" with Continuous Integration (CI)</title>
      <dc:creator>Tom Watt</dc:creator>
      <pubDate>Fri, 02 Apr 2021 14:13:09 +0000</pubDate>
      <link>https://forem.com/tomowatt/going-repository-native-with-continuous-integration-ci-36pk</link>
      <guid>https://forem.com/tomowatt/going-repository-native-with-continuous-integration-ci-36pk</guid>
<description>&lt;p&gt;The idea of Continuous Integration in Software Development has become so popular over the years that it almost seems like everyone &lt;em&gt;knows&lt;/em&gt; about it. A decade of worldwide growth in people searching for it has now given way to a declining trend over the past few years.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QwEEDG7---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/asgx2jpdirqvdt1qwda0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QwEEDG7---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/asgx2jpdirqvdt1qwda0.png" alt='Google Search Trends: Interest over time for the term "Continuous Integration" - 1 January 2004 to 2 April 2021' width="800" height="275"&gt;&lt;/a&gt;Data source: Google Trends (&lt;a href="https://www.google.com/trends"&gt;https://www.google.com/trends&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;This has led to the creation of many opinions and many more tools to implement Continuous Integration. To date, the &lt;a href="https://landscape.cncf.io/card-mode?category=continuous-integration-delivery&amp;amp;grouping=category"&gt;Cloud Native Computing Foundation&lt;/a&gt; recognises 36 tools under the term "Continuous Integration &amp;amp; Delivery", but there are many more in the wilds of the Internet.&lt;/p&gt;

&lt;p&gt;Not all of them are equal: some are specific to a Platform or Cloud Provider, or require additional setup; some target unique use cases, while others are generic enough that switching between them can be &lt;em&gt;easy&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;So, what do I mean by "Repository Native" CI? I'm talking about the CI tools provided by a Repository Host e.g., BitBucket Pipelines, GitHub Actions and GitLab CI/CD.&lt;/p&gt;

&lt;p&gt;These CI tools provide ease of use and require minimal setup and maintenance. Most of them only require adding one additional file, e.g., &lt;code&gt;bitbucket-pipelines.yml&lt;/code&gt;, to the code repository, and it 'just' works, providing appropriate insight into the running tasks and feedback.&lt;/p&gt;
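
&lt;p&gt;A minimal &lt;code&gt;bitbucket-pipelines.yml&lt;/code&gt; (a generic sketch, not tied to any particular project) is just a few lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image: python:3.9

pipelines:
  default:
    - step:
        name: Lint and test
        script:
          - pip install -r requirements.txt
          - flake8
          - pytest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;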

&lt;p&gt;New commits and branches can easily trigger your CI process. Environment Variables for the CI process can be set and controlled in the Code Repository. And most of these Repository Native CI tools can be easily linked to external applications for Notifications.&lt;/p&gt;

&lt;p&gt;If you host your Repositories on a Cloud Provider and use their CI offering, you'll get a similar experience, but with the additional benefit of easily integrating with the other services that the Cloud Provider offers. For example, using Google Cloud Build to push a Container Image to Google Container Registry without needing to set up Service Accounts - in most cases.&lt;/p&gt;

&lt;p&gt;In terms of maintenance, there is very little; it's pretty much a 'hands-off' approach. Keeping your CI process up to date should always be the main focus.&lt;/p&gt;

&lt;p&gt;Compared to external CI tools, e.g., Concourse, Jenkins and Tekton, "Repository Native" CI offerings are somewhat more generic. External CI tools usually have unique conventions and appeal when creating more complex CI processes, but they suffer from added complexity, which requires additional knowledge.&lt;/p&gt;

&lt;p&gt;There is an initial cost to external CI tools, usually in the form of granting access to the code - "Repository Native" CI tools already have it. While this cost is usually small, it can quickly build with the number of repositories needing access and the complexity of the CI process. For instance, a task may require access to multiple repositories to run.&lt;/p&gt;

&lt;p&gt;Another cost comes from giving the external CI tool appropriate access to other services, e.g., Container Registries or a Cloud Provider. Though this is also experienced when using Code Repository Hosting - e.g., BitBucket - and needing to create a VM on a Cloud Provider - e.g., AWS - it is slowly easing as integrations are developed.&lt;/p&gt;

&lt;p&gt;One benefit of external CI tools is usually the option to self-host. This allows for enhanced security and tweaking them to run how and where you'd like. But with that comes the greatest cost: maintenance. When your CI server is down or slow, you've got to fix it.&lt;/p&gt;

&lt;p&gt;Though, in the world of more complex systems and intensive CI tasks, I'll say that external CI tools do offer some distinct advantages: if self-hosting, being able to add more compute resources, as some "Repository Native" CI tools are limited; and being able to create unique triggers for tasks, e.g., from Slack.&lt;/p&gt;

&lt;p&gt;I’ve usually found that the CI process will do short lived &lt;em&gt;simple&lt;/em&gt; tasks – style checks, static code analysis and unit tests. These &lt;em&gt;simple&lt;/em&gt; tasks benefit more from less setup. Less setup means less maintenance, and less maintenance means more productivity. This promotes the use of “Repository Native” CI tools – following the &lt;a href="https://en.wikipedia.org/wiki/Pareto_principle"&gt;80-20 rule&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>ci</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Creating a k3s Cluster with k3sup &amp; Multipass 💻☸️</title>
      <dc:creator>Tom Watt</dc:creator>
      <pubDate>Thu, 28 Jan 2021 13:39:16 +0000</pubDate>
      <link>https://forem.com/tomowatt/creating-a-k3s-cluster-with-k3sup-multipass-h26</link>
      <guid>https://forem.com/tomowatt/creating-a-k3s-cluster-with-k3sup-multipass-h26</guid>
<description>&lt;p&gt;This started off due to frustration with my current Single Board Computer setup and the limitations of Docker Desktop, meaning I couldn't fully utilise all the features of Kubernetes.&lt;/p&gt;

&lt;p&gt;I'd previously set up a k3s cluster using k3sup on my Raspberry Pis and ASUS Tinker Board running Ubuntu 20.04. Due to using older models – Pi 2 &amp;amp; 3 – with limited resources, it wasn't a great experience for testing demanding applications or load testing.&lt;/p&gt;

&lt;p&gt;Docker Desktop's Kubernetes is great for starting out, but the limitations kick in when you want to test multiple nodes or install additional features like Ingress Controllers. Plus, running it on a MacBook Pro 2019, I found the battery life draining quickly – running just Docker Desktop is bad enough at times.&lt;/p&gt;

&lt;p&gt;So in comes Multipass. I'd seen it a few times and was very curious. I'd previously used Vagrant combined with VirtualBox with various levels of success and agony, so the idea of being able to run an Ubuntu VM and Kubernetes Cluster without additional programs to manage and debug got me hooked. I like things to be as native as possible, and Multipass can run on HyperKit and Hyper-V.&lt;/p&gt;

&lt;p&gt;Installing Multipass is pretty slick, and having a CLI – like Vagrant – means there is a way to wrap it all up in a script for repeatability. And installing k3sup is even easier.&lt;/p&gt;
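
&lt;p&gt;For reference, both can be installed in a couple of commands (shown here for macOS with Homebrew; Multipass is also available as a snap on Linux):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Multipass
brew install --cask multipass

# k3sup
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;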

&lt;p&gt;A bonus feature of Multipass is being able to pass in cloud-init files for additional configuration, which adds many benefits beyond this example.&lt;/p&gt;

&lt;p&gt;The plan is to create a 3 node – 1 master, 2 workers – k3s cluster, which is disposable and portable.&lt;/p&gt;

&lt;p&gt;Tools used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/alexellis/k3sup" rel="noopener noreferrer"&gt;k3sup&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://multipass.run" rel="noopener noreferrer"&gt;Multipass&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud-init.io" rel="noopener noreferrer"&gt;cloud-init&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So let's begin!&lt;/p&gt;

&lt;p&gt;k3sup uses SSH to install, and by default Multipass uses a predefined User – &lt;code&gt;ubuntu&lt;/code&gt; – and SSH Key-Pair, which can be found after a bit of digging - see &lt;a href="https://github.com/canonical/multipass/issues/913" rel="noopener noreferrer"&gt;Issue #913&lt;/a&gt;. This is where cloud-init comes in.&lt;/p&gt;

&lt;p&gt;Using cloud-init, we can easily create a user, assign it to groups, and pass in a Public SSH Key of our choice. This is the minimum needed to do so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;users:
- name: tom
  groups: sudo
  sudo: ALL=(ALL) NOPASSWD:ALL
  ssh_authorized_keys: 
  - ssh-rsa...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Next is to spin up the nodes, which can all be done with the Multipass CLI. We'll leave the VMs with the default values for CPU, RAM, Disk Space and OS - latest LTS - but include our cloud-init configuration:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;multipass launch -n master --cloud-init - &amp;lt;&amp;lt;EOF
users:
- name: tom
  groups: sudo
  sudo: ALL=(ALL) NOPASSWD:ALL
  ssh_authorized_keys: 
  - ssh-rsa...
EOF

multipass launch -n node1 --cloud-init - &amp;lt;&amp;lt;EOF
users:
- name: tom
  groups: sudo
  sudo: ALL=(ALL) NOPASSWD:ALL
  ssh_authorized_keys: 
  - ssh-rsa...
EOF

multipass launch -n node2 --cloud-init - &amp;lt;&amp;lt;EOF
users:
- name: tom
  groups: sudo
  sudo: ALL=(ALL) NOPASSWD:ALL
  ssh_authorized_keys: 
  - ssh-rsa...
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Using the Multipass CLI, we can then view the IP Addresses of our new VMs:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;multipass list

Name                    State             IPv4             Image
master                  Running           192.168.64.8     Ubuntu 20.04 LTS
node1                   Running           192.168.64.9     Ubuntu 20.04 LTS
node2                   Running           192.168.64.10    Ubuntu 20.04 LTS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now to get our k3s cluster built, again using the default values, first by setting up our master node:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k3sup install --ip 192.168.64.8 --context k3s-cluster --user tom --ssh-key ./demo-key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This will install k3s, setup the instance as the master node and return the &lt;code&gt;kubeconfig&lt;/code&gt; to the current directory.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://192.168.64.8:6443
  name: k3s-cluster
contexts:
- context:
    cluster: k3s-cluster
    user: k3s-cluster
  name: k3s-cluster
current-context: k3s-cluster
kind: Config
preferences: {}
users:
- name: k3s-cluster
  user:
    client-certificate-data: ...
    client-key-data: ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Next, to set up our worker nodes to join the cluster:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k3sup join --server-ip 192.168.64.8 --ip 192.168.64.9 --user tom --ssh-key demo-key

k3sup join --server-ip 192.168.64.8 --ip 192.168.64.10 --user tom --ssh-key demo-key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Finally, we can now check our k3s cluster is up and running:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;KUBECONFIG=kubeconfig kubectl get nodes

NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   7m17s   v1.19.7+k3s1
node1    Ready    &amp;lt;none&amp;gt;   6m47s   v1.19.7+k3s1
node2    Ready    &amp;lt;none&amp;gt;   6m24s   v1.19.7+k3s1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And now, time to tear it all down:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;multipass delete --all &amp;amp;&amp;amp; multipass purge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  The Next Level
&lt;/h2&gt;

&lt;p&gt;I mentioned that I wanted this setup to be disposable and portable. So let's wrap this all up in a good ol' Shell script.&lt;/p&gt;

&lt;p&gt;So with a bit of refactoring and Command Line tooling - Piping &amp;amp; Parsing - we can end up with a script like so:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh

master="master"
nodes=("node1" "node2")
context="k3s-cluster"

createInstance () {
    multipass launch -n "$1" --cloud-init - &amp;lt;&amp;lt;EOF
users:
- name: ${USER}
  groups: sudo
  sudo: ALL=(ALL) NOPASSWD:ALL
  ssh_authorized_keys: 
  - $(cat "$PUBLIC_SSH_KEY_PATH")
EOF
}

getNodeIP() {
    echo $(multipass list | grep $1 | awk '{print $3}')
}

installK3sMasterNode() {
    MASTER_IP=$(getNodeIP $1)
    k3sup install --ip "$MASTER_IP" --context "$context" --user "$USER" --ssh-key  "${PRIVATE_SSH_KEY_PATH}"
}

installK3sWorkerNode() {
    NODE_IP=$(getNodeIP $1)
    k3sup join --server-ip "$MASTER_IP" --ip "$NODE_IP" --user "$USER" --ssh-key "${PRIVATE_SSH_KEY_PATH}"
}

createInstance $master

for node in "${nodes[@]}"
do
    createInstance "$node"
done

installK3sMasterNode $master

for node in "${nodes[@]}"
do
    installK3sWorkerNode "$node"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And with only needing two Environment Variables, we can now bring up a disposable cluster within &lt;strong&gt;minutes&lt;/strong&gt; - I clocked just under 2 minutes!&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PUBLIC_SSH_KEY_PATH=./demo-key.pub PRIVATE_SSH_KEY_PATH=./demo-key ./minimal-k3s-multipass-bootstrap.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Making it a script also allows for easy use on multiple systems without needing to install more than the necessary tools.&lt;/p&gt;

&lt;p&gt;Multipass offers extra configuration options for the VMs, and cloud-init can do more inside the OS. Plus, k3sup has even more options, but I'll keep it simple for now!&lt;/p&gt;
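
&lt;p&gt;For example, Multipass's launch flags let you size each VM beyond the defaults (the values here are just for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;multipass launch -n master --cpus 2 --mem 2G --disk 10G --cloud-init config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;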

&lt;p&gt;Enjoy!&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/tomowatt" rel="noopener noreferrer"&gt;
        tomowatt
      &lt;/a&gt; / &lt;a href="https://github.com/tomowatt/k3s-multipass-bootstrap" rel="noopener noreferrer"&gt;
        k3s-multipass-bootstrap
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Bootstrap script to get a k3s Cluster created with Multipass for local development.
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;



</description>
      <category>devops</category>
      <category>ubuntu</category>
      <category>kubernetes</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Why we need Code Reviews</title>
      <dc:creator>Tom Watt</dc:creator>
      <pubDate>Sun, 08 Nov 2020 10:19:44 +0000</pubDate>
      <link>https://forem.com/tomowatt/why-we-need-code-reviews-hg8</link>
      <guid>https://forem.com/tomowatt/why-we-need-code-reviews-hg8</guid>
      <description>&lt;p&gt;I’ll start off by saying that this is more of a personal vent. I’ve not been in the coding game for too long but I’ve experienced different cultures when it comes to software engineering.&lt;/p&gt;

&lt;p&gt;So here’s why I think we need Code Reviews - in no specific order.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;Code Reviews allow us to check for any potential security issues. As systems become more complex, we all want to make it easier to ship the code or want to test a new feature and, sometimes, we let security standards slip. We might accidentally include personal credentials in the code, expose APIs or misunderstand the consequences of our code which could have security implications. &lt;br&gt;
Code Reviews allow us to work together to ensure these potential issues don't become reality.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quality
&lt;/h2&gt;

&lt;p&gt;We all should want to produce great quality code, but we are faced with the reality that it's not going to happen all the time. We are faced with combinations of deadlines, obscure requirements, growing complexity and, most importantly, being human. &lt;br&gt;
Code Reviews allow our peers to let us know if the code isn't up to standard and to improve it before it becomes a maintenance burden later on.&lt;br&gt;
We have all written code that we look back at and question why.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reflection
&lt;/h2&gt;

&lt;p&gt;Knowing that our Code is going to be reviewed gives us the opportunity to reflect on what we've done. We don't want to send &lt;em&gt;bad&lt;/em&gt; code and get told to redo it. &lt;br&gt;
When preparing our Code for review, we should reflect on what we've done: question whether it meets the requirement(s), whether the Commit messages make sense, whether it meets internal/external standards, etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  Personal Development
&lt;/h2&gt;

&lt;p&gt;Being on both sides of a Code Review - receiving and giving - has its benefits. &lt;br&gt;
As someone who's put their code up for review, we can be given knowledge and tips from others about how to improve it, told about functionality we never knew about that simplifies things, or told about the dreaded &lt;em&gt;typo&lt;/em&gt; that we've totally overlooked and are now embarrassed about.&lt;br&gt;
When giving the Code Review, we can learn from looking over new code and styles, which we can implement back into our own work.&lt;br&gt;
Code Reviews, if done properly, enable us to learn and develop to become better.&lt;/p&gt;

&lt;h2&gt;
  
  
  Teamwork
&lt;/h2&gt;

&lt;p&gt;As systems grow and become more complex, we will start having less understanding of all the moving parts. We may have to use code that we never built but someone else in our team did.&lt;br&gt;
Using Code Reviews, we can enable team members to share their work and give more people a better understanding of the changes being made that we might have to use or work on in the future.&lt;/p&gt;

&lt;p&gt;👨‍💻📝👩‍💻&lt;/p&gt;

</description>
      <category>devops</category>
      <category>devrel</category>
      <category>codequality</category>
      <category>git</category>
    </item>
    <item>
      <title>New Job, New Challenges</title>
      <dc:creator>Tom Watt</dc:creator>
      <pubDate>Sat, 24 Oct 2020 09:00:26 +0000</pubDate>
      <link>https://forem.com/tomowatt/new-job-new-challenges-43lm</link>
      <guid>https://forem.com/tomowatt/new-job-new-challenges-43lm</guid>
<description>&lt;p&gt;So during this chaotic year I managed to motivate myself and seek new pastures. As much as I liked working at my previous employer, I wanted to grow and learn.&lt;/p&gt;

&lt;p&gt;I managed to go from a Junior DevOps title to DevOps Engineer, from being mainly focused on working on the infrastructure and maintaining pipelines to being brought in to do a whole lot more.&lt;/p&gt;

&lt;p&gt;It's been 3 weeks since I started my new role and it has been tough. So much going on and so much to learn, from the code and culture to working practices. Reflecting back on my previous employment, I learnt so much about AWS, containers, CI/CD, etc., but mainly about the way a team works and grows to deal with new challenges.&lt;/p&gt;

&lt;p&gt;The biggest change I noticed was the culture: from a small business where I knew everyone and had a working relationship with them all, being able to chat and grow as a team, to a bigger business - which is currently rearranging its structure - where there are multiple historic cultures, different practices and, noticeably, a lack of chat.&lt;/p&gt;

&lt;p&gt;And now, with a bit of insight and pushing from my new manager, I'm starting to see the true nature of the DevOps role. I've got a lot to learn still but I'm starting to see what I'll need to do to fulfil my role.&lt;/p&gt;

&lt;p&gt;Previously, I focused on automation, infrastructure and keeping the Developers 'happy' - either by simplifying things or taking away some of the burdens. &lt;/p&gt;

&lt;p&gt;Now I'm challenged with setting new practices and standards and creating a new productive culture. I know it's going to be a long and uphill struggle at times, but I'm looking forward to putting good and sustainable standards in place and building a more connected team.&lt;/p&gt;

&lt;p&gt;There will always be new technologies but I see building a strong culture of knowledge sharing, standards and shared responsibility as the biggest driver of software development.&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>career</category>
      <category>devops</category>
      <category>devrel</category>
    </item>
    <item>
      <title>P3 - Personal Porting Project - Update</title>
      <dc:creator>Tom Watt</dc:creator>
      <pubDate>Sun, 06 Sep 2020 09:17:08 +0000</pubDate>
      <link>https://forem.com/tomowatt/p3-personal-porting-project-update-1lgp</link>
      <guid>https://forem.com/tomowatt/p3-personal-porting-project-update-1lgp</guid>
<description>&lt;p&gt;So, a little follow-up on my Personal Project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Progress
&lt;/h2&gt;

&lt;p&gt;I've added two more Languages: Ballerina and Rust. Both presented unique challenges, but it was very satisfying to finally get something working.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ballerina&lt;/strong&gt; always stood out to me as a 'modern' language. Previous studies using Java allowed me to get a good start on porting over the program, but only got me so far.&lt;/p&gt;

&lt;p&gt;Dealing with the &lt;code&gt;@tainted&lt;/code&gt; and &lt;code&gt;@untainted&lt;/code&gt; annotations stumped me when handling Errors, so I feel my port doesn't truly represent how Ballerina &lt;em&gt;should&lt;/em&gt; be written.&lt;/p&gt;

&lt;p&gt;With that, I didn't use the &lt;code&gt;@docker&lt;/code&gt; annotation - which I feel is where Ballerina excels as a language - but that was just to be able to build and run with Docker Compose.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rust&lt;/strong&gt;. This was a whole new level of learning for me. I started off wrong and ended up taking a lot longer to Port compared to the other Languages - so far. &lt;/p&gt;

&lt;p&gt;I restarted probably about 5 or 6 times just with the basics of opening a File, reading lines, etc., not realising that some examples of how to do such linear actions had been deprecated as Rust evolved.&lt;/p&gt;

&lt;p&gt;But it then presented me with an issue: I had described my Tasks for the Program in an abstract manner that wasn't easily implemented in Code. I broke these down into smaller, more manageable Tasks, which allowed me to finish porting into Rust.&lt;/p&gt;

&lt;p&gt;Once I finished and compiled the Rust code into a single Binary and saw how small the entire application and Docker Image could be, I was amazed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Future Plans
&lt;/h2&gt;

&lt;p&gt;I'm definitely aiming to add Ruby soon, with the possibility of Swift, and then we'll see if I can deal with another 2, to get to a total of 9 different languages.&lt;/p&gt;

&lt;p&gt;Still need to refactor, add tests and HTML &amp;amp; CSS.&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/tomowatt" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N2zyiHxn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://res.cloudinary.com/practicaldev/image/fetch/s--sJQMW4bJ--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/382024/c3f5b9c6-7e67-4d94-8d9f-b7f8f6508d03.jpeg" alt="tomowatt"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/tomowatt/p3-personal-porting-project-12ba" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;P3 - Personal Porting Project&lt;/h2&gt;
      &lt;h3&gt;Tom Watt ・ Aug 16 '20&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#personal&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#project&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#productivity&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;



&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/tomowatt"&gt;
        tomowatt
      &lt;/a&gt; / &lt;a href="https://github.com/tomowatt/P3"&gt;
        P3
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      P3 - Personal Porting Project
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>personal</category>
      <category>project</category>
      <category>docker</category>
      <category>productivity</category>
    </item>
    <item>
      <title>P3 - Personal Porting Project</title>
      <dc:creator>Tom Watt</dc:creator>
      <pubDate>Sun, 16 Aug 2020 08:38:55 +0000</pubDate>
      <link>https://forem.com/tomowatt/p3-personal-porting-project-12ba</link>
      <guid>https://forem.com/tomowatt/p3-personal-porting-project-12ba</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ih2rUkCa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://imgs.xkcd.com/comics/password_strength.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ih2rUkCa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://imgs.xkcd.com/comics/password_strength.png" alt="Password Strength - XKCD comic" width="740" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What
&lt;/h2&gt;

&lt;p&gt;It's a small idea based on the &lt;a href="https://xkcd.com/936/"&gt;Password Strength - XKCD comic&lt;/a&gt;: create a &lt;em&gt;simple&lt;/em&gt; application that generates a &lt;em&gt;random&lt;/em&gt; password from a collection of nouns, then recreate it in other programming languages.&lt;/p&gt;
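
&lt;p&gt;As a rough sketch of the core idea (the file name and word list here are assumptions, not the actual P3 code), it's just picking a few random nouns and joining them together:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import random

# Assumed word list - one noun per line
with open('nouns.txt') as f:
    nouns = [line.strip() for line in f if line.strip()]

def generate_password(words=4):
    """Join a handful of random nouns, XKCD-style."""
    return '-'.join(random.choice(nouns) for _ in range(words))

print(generate_password())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;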

&lt;h2&gt;
  
  
  Why
&lt;/h2&gt;

&lt;p&gt;I like to take on new challenges and learn at the same time. I felt this is a great way to be able to learn new skills and techniques - and different Programming Languages - but also reinforce what I already know.&lt;/p&gt;

&lt;h2&gt;
  
  
  How
&lt;/h2&gt;

&lt;p&gt;To keep it simple and quick to be able to test and run locally, I've used Docker Compose. This allows me to bring up multiple containers for the various Programming Languages and keep them separate.&lt;/p&gt;
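
&lt;p&gt;As an illustration (the service names and paths here are assumptions), the Compose file ends up along these lines, with one service per language:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3"
services:
  python:
    build: ./python
  go:
    build: ./go
  node:
    build: ./node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;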

&lt;p&gt;I started with Python, as that's what I'm most comfortable with, and was able to get a skeleton/template to implement in the other languages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Progress
&lt;/h2&gt;

&lt;p&gt;I've started to make some progress and recreated the application in Go and Node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Go&lt;/strong&gt; presented me with the challenge of being entirely new to me. The syntax felt somewhat familiar after Python, but with added static typing, the need to use Scanners to read files and needing to make random actually random(!?). It was more verbose with the required error handling, but I have to say I liked it.&lt;br&gt;
Additionally, building the binary at the end and being able to shrink the Docker image from 300MB+ to 13MB using a multi-stage build was satisfying.&lt;/p&gt;
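
&lt;p&gt;For anyone curious, a multi-stage build along these lines (the stage names and paths are assumptions, not the actual P3 Dockerfile) is what makes that shrink possible - the final image contains only the compiled binary:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build stage - needs the full Go toolchain
FROM golang:alpine AS builder
WORKDIR /app
COPY . .
RUN go build -o p3 .

# Final stage - just the binary on a minimal base
FROM alpine
COPY --from=builder /app/p3 /p3
CMD ["/p3"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;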

&lt;p&gt;&lt;strong&gt;Node&lt;/strong&gt; was entirely new to me as well. I originally intended it to be Deno for TypeScript, but came across no official Docker image for Deno, so to support long-term use I defaulted to Node. The use of callbacks and async totally threw me, and I gave up and tried Rust - that didn't work out.&lt;br&gt;
I then returned after thinking it through and reading more examples and documentation. The syntax caught me out a few times, but I managed to work my way through by going back to the skeleton/template to simplify my way of thinking.&lt;/p&gt;
&lt;h2&gt;
  
  
  Future Plans
&lt;/h2&gt;

&lt;p&gt;I'm going to aim to add Ballerina, Ruby &amp;amp; Rust. Possibly more, but I feel that's a good range of different languages and styles to start with. Next is moving on to refactoring and making the applications &lt;em&gt;suited&lt;/em&gt; to their languages.&lt;/p&gt;

&lt;p&gt;Tests will need to be added as you always &lt;strong&gt;need&lt;/strong&gt; tests.&lt;/p&gt;

&lt;p&gt;Finally, adding use of HTML and CSS to create a UI and making the Docker Images as small as possible.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/tomowatt"&gt;
        tomowatt
      &lt;/a&gt; / &lt;a href="https://github.com/tomowatt/P3"&gt;
        P3
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      P3 - Personal Porting Project
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>personal</category>
      <category>project</category>
      <category>docker</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Tips for using Ansible</title>
      <dc:creator>Tom Watt</dc:creator>
      <pubDate>Tue, 14 Jul 2020 18:38:03 +0000</pubDate>
      <link>https://forem.com/tomowatt/tips-for-using-ansible-33ff</link>
      <guid>https://forem.com/tomowatt/tips-for-using-ansible-33ff</guid>
      <description>&lt;p&gt;I use Ansible on a daily basis and it brings me so much joy when I finally write up a new Role or update a Playbook and everything works.&lt;/p&gt;

&lt;p&gt;That’s usually after a while of failures - a lot of failures. But that’s okay: if Ansible is used in the right way, then you’ll always make progress.&lt;/p&gt;

&lt;p&gt;So here are some tips that help!&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Use ‘ansible-lint’
&lt;/h2&gt;

&lt;p&gt;This simple but useful tool will help keep your playbooks and roles clean of errors and provide useful hints to improve them, e.g. ensuring tasks are named.&lt;br&gt;
It’s best run when creating or editing roles and playbooks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-lint playbook.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Be Verbose
&lt;/h2&gt;

&lt;p&gt;Certain modules have default values set, and whilst it may be fine not to mention them when writing up tasks, there can be consequences for your future self or others when they come to update the task. For instance, &lt;a href="https://docs.ansible.com/ansible/latest/modules/apt_module.html"&gt;apt&lt;/a&gt; installs packages by default but doesn't update the cache. So you can end up with a task like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Install Nginx
  apt:
    name: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But your future self or others might hit an issue with this if the cache hasn't been updated beforehand, so by making it more verbose, the state and the task provide more clarity about what's actually being run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Install Nginx
  apt:
    name: nginx
    state: present
    update_cache: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Don’t be afraid to use ‘shell’ or ‘command’
&lt;/h2&gt;

&lt;p&gt;There are a lot of modules out there, but sometimes it's easier to write a simple one-liner to do what you want done - although Ansible doesn't recommend it. The &lt;a href="https://docs.ansible.com/ansible/latest/modules/acme_certificate_module.html"&gt;ACME Certificate module&lt;/a&gt; is very complex to use, and it has to be run twice to work. When I tried to implement Let's Encrypt/Certbot with a wildcard certificate, I found it easier to use shell instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Get TLS Certificates
  shell: "certbot certonly --dns-route53 -d {{ server_name }} -n --agree-tos --email {{ email }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's clearer what's being run and it can easily be changed in the future by others.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Enable Timer
&lt;/h2&gt;

&lt;p&gt;Add this to your &lt;code&gt;ansible.cfg&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;callback_whitelist = timer, profile_tasks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will let you see how long each task and the whole playbook took to run. That gives estimates for when others run them, but also insight into which tasks are slow and could be improved.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PLAY RECAP
**************************************************************
rpi2                       : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
rpi3                       : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
tinker                     : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Tuesday 14 July 2020  19:20:11 +0100 (0:00:00.603)       0:01:19.673 ********** 
==============================================================
Gathering Facts - 13.72s
/Users/tomwatt/DEV/Ansible-Local/playbooks/test.yaml:2 
--------------------------------------------------------------
test-connection : Ensure the right sudo password - 6.13s
/Users/tomwatt/DEV/Ansible-Local/roles/test-connection/tasks/main.yaml:2 
--------------------------------------------------------------
debug-host-info : Print Host Information - 0.60s
/Users/tomwatt/DEV/Ansible-Local/roles/debug-host-info/tasks/main.yaml:2 
--------------------------------------------------------------
Playbook run took 0 days, 0 hours, 1 minutes, 19 seconds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Keep Ansible up to date
&lt;/h2&gt;

&lt;p&gt;Newer versions have more functionality and fixes. One caveat, though, is being aware of changes between versions, e.g. modules renamed, deprecations, etc. So check the &lt;a href="https://docs.ansible.com/ansible/latest/porting_guides/porting_guides.html"&gt;Porting Guides&lt;/a&gt; prior to updating.&lt;br&gt;
Running your playbooks with verbosity on - &lt;code&gt;-vvvvv&lt;/code&gt; - will give hints about what’s being changed or removed in later versions.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Many and Small Roles
&lt;/h2&gt;

&lt;p&gt;I've done it, and it's easily done: creating a playbook that does everything that needs doing. It will work, but will it work in the future? What happens when it needs a minor change to work on a different server? This is where having more, smaller roles helps.&lt;/p&gt;

&lt;p&gt;Keeping a role to a simple grouping of tasks, with any needed variables, handlers or templates, improves its reusability. It also organises the code for easier testing and maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Use ‘ansible-galaxy role init’ to create Roles
&lt;/h2&gt;

&lt;p&gt;Following on from that, be lazy and use it to create the basic structure for your new roles, then remove what you don’t need.&lt;/p&gt;
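
&lt;p&gt;For example (the role name here is made up), running it should give you something like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-galaxy role init nginx
- Role nginx was created successfully
$ tree nginx
nginx
├── README.md
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;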

&lt;p&gt;I hope some of these are helpful for those who use Ansible or are looking to use it. Let me know and share any tips you have as well!&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>devops</category>
      <category>iac</category>
    </item>
    <item>
      <title>Running a Unity WebGL game within Docker</title>
      <dc:creator>Tom Watt</dc:creator>
      <pubDate>Sun, 05 Jul 2020 10:10:38 +0000</pubDate>
      <link>https://forem.com/tomowatt/running-an-unity-webgl-game-within-docker-5039</link>
      <guid>https://forem.com/tomowatt/running-an-unity-webgl-game-within-docker-5039</guid>
      <description>&lt;p&gt;So whilst learning how to make Unity games, I got curious about what would be a good way to test them, share them and get feedback.&lt;/p&gt;

&lt;p&gt;There are various websites that will host the games for you but I like the idea of making a game that’s Open for people to contribute to, learn from and ultimately play.&lt;/p&gt;

&lt;p&gt;So this is where I went down the path of looking into using Docker to host a WebGL game. After searching the web, I came across a few others that had done something similar.&lt;/p&gt;

&lt;p&gt;So to start, I've built and exported the Unity game and kept a simple file structure.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

.
├── Dockerfile
├── docker-compose.yaml
├── webgl
└── webgl.conf


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;This allowed me to easily copy necessary files with a single &lt;code&gt;COPY&lt;/code&gt; within the Dockerfile.&lt;/p&gt;

&lt;p&gt;To host the game within Docker while keeping things simple, I've used Nginx as the base image, as the HTML files only need to be served.&lt;/p&gt;

&lt;p&gt;But the default configuration needed to be updated to point to the copied files. This resulted in the following Nginx configuration, just using the &lt;code&gt;index.html&lt;/code&gt; created by Unity and updating the location root to where the files were copied.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
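
&lt;p&gt;A minimal &lt;code&gt;webgl.conf&lt;/code&gt; along these lines would do it (the root path here is an assumption, tied to wherever the Dockerfile copies the files):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;

    location / {
        root /usr/share/nginx/html/webgl;
        index index.html;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;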



&lt;p&gt;The next part is the Dockerfile itself, putting all the pieces together to host the WebGL game.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
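
&lt;p&gt;A minimal sketch of such a Dockerfile (the paths here are assumptions) comes down to two &lt;code&gt;COPY&lt;/code&gt; steps on top of the Nginx base image:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM nginx:alpine

# Replace the default site configuration with our own
COPY webgl.conf /etc/nginx/conf.d/default.conf

# Copy the exported WebGL build into the web root
COPY webgl /usr/share/nginx/html/webgl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;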


&lt;p&gt;Finally, using Docker Compose, I can launch the Docker image and play the game within a browser with a single &lt;code&gt;docker-compose up -d&lt;/code&gt;.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
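
&lt;p&gt;The Compose file for this can be as small as the following (the host port is an assumption):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3"
services:
  webgl:
    build: .
    ports:
      - "8080:80"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;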


&lt;p&gt;All the code can be found here: &lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/tomowatt" rel="noopener noreferrer"&gt;
        tomowatt
      &lt;/a&gt; / &lt;a href="https://github.com/tomowatt/unity-docker-example" rel="noopener noreferrer"&gt;
        unity-docker-example
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Example of running Unity WebGL within Docker
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;unity-docker-example&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/tomowatt/unity-docker-examplescreenshot.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ftomowatt%2Funity-docker-examplescreenshot.png" alt="Game Start Screen"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;An example of how to run a Unity WebGL game using Docker.
Although not the most exciting game, it could prove useful to be able to build and share games using Docker.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h4 class="heading-element"&gt;Run the game:&lt;/h4&gt;

&lt;/div&gt;
&lt;p&gt;&lt;code&gt;docker-compose up -d&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Then visit &lt;strong&gt;localhost:8080&lt;/strong&gt; in a Browser.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Stop the game:&lt;/h3&gt;

&lt;/div&gt;
&lt;p&gt;&lt;code&gt;docker-compose down&lt;/code&gt;&lt;/p&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/tomowatt/unity-docker-example" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;p&gt;Hope this helps anyone who is curious about doing a similar thing. I hope to improve this as I learn more about Unity, WebGL and Docker.&lt;/p&gt;




&lt;p&gt;2022 Update: Added linked code examples via gists and an embedded repository; updated the Dockerfile code.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>webgl</category>
      <category>unity3d</category>
      <category>game</category>
    </item>
    <item>
      <title>AWS - Save Costs with Lambda</title>
      <dc:creator>Tom Watt</dc:creator>
      <pubDate>Fri, 05 Jun 2020 13:35:43 +0000</pubDate>
      <link>https://forem.com/tomowatt/aws-save-costs-with-lambda-12o3</link>
      <guid>https://forem.com/tomowatt/aws-save-costs-with-lambda-12o3</guid>
      <description>&lt;p&gt;When I first started at my current role, AWS was completely new to me. And I mean everything. I'd previously worked with physical servers, and now I was dealing with just the CLI and Web Console to interact with and manage servers.&lt;/p&gt;

&lt;p&gt;One thing that caught me by chance was when I saw the billing every month and wondered: why were we paying for development instances, databases, etc. when no one was there to work on them?&lt;/p&gt;

&lt;p&gt;Whilst learning more as time went by, I found inspiration in previous colleagues' work to deal with these unused resources. And that's when I found the use and power of AWS Lambda.&lt;/p&gt;

&lt;p&gt;I won't go in-depth, but to keep things simple: using cron schedules from CloudWatch Events to trigger Lambdas that shut down or terminate instances saves money.&lt;/p&gt;

&lt;p&gt;Why is that? Because in most cases, unless you use Lambda a lot, it's free under the &lt;a href="https://aws.amazon.com/free/"&gt;AWS Free Tier&lt;/a&gt;, and simple tasks such as deleting EC2 instances take less than 1 second to run. Finally, it's automated.&lt;/p&gt;

&lt;p&gt;Here's an example of what is needed to filter and terminate EC2 instances with specific tags:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

EC2_CLIENT = boto3.client('ec2')
EC2_RESOURCE = boto3.resource('ec2')

PROJECTS = ['super', 'special', 'ultra',]
ENV = ['dev']

def get_ec2_instances():
    ec2_instances = EC2_RESOURCE.instances.filter(
        Filters=[
            {
                'Name': 'tag:project',
                'Values': PROJECTS
            },
            {
                'Name': 'tag:env',
                'Values': ENV
            },
        ]
    )

    return [instance.id for instance in ec2_instances]

def delete_ec2_instances(instance_ids):
    if instance_ids:
        EC2_CLIENT.terminate_instances(InstanceIds=instance_ids)

def delete_instances(event=None, context=None):
    delete_ec2_instances(get_ec2_instances())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can easily use environment variables and add logging and error handling if needed, but sometimes keeping it simple is best.&lt;/p&gt;

&lt;p&gt;And if you need to change when instances get terminated, just update your CloudWatch Event.&lt;/p&gt;
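
&lt;p&gt;For example, a CloudWatch Events schedule expression that triggers the Lambda at 8pm UTC on weekdays (the schedule itself is just an example) would be:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cron(0 20 ? * MON-FRI *)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;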

</description>
      <category>aws</category>
      <category>devops</category>
      <category>serverless</category>
      <category>python</category>
    </item>
  </channel>
</rss>
