<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: David O' Connor</title>
    <description>The latest articles on Forem by David O' Connor (@bit-of-a-git).</description>
    <link>https://forem.com/bit-of-a-git</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1108193%2Faed158a5-3e76-4043-a1e9-caa92e148e74.png</url>
      <title>Forem: David O' Connor</title>
      <link>https://forem.com/bit-of-a-git</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/bit-of-a-git"/>
    <language>en</language>
    <item>
      <title>Kubernetes Resume Challenge - Google Cloud GKE</title>
      <dc:creator>David O' Connor</dc:creator>
      <pubDate>Thu, 21 Mar 2024 17:40:56 +0000</pubDate>
      <link>https://forem.com/bit-of-a-git/kubernetes-resume-challenge-google-cloud-gke-395l</link>
      <guid>https://forem.com/bit-of-a-git/kubernetes-resume-challenge-google-cloud-gke-395l</guid>
      <description>&lt;h3&gt;
  
  
  Background
&lt;/h3&gt;

&lt;p&gt;One of the most valuable experiences for me last year was completing the Cloud Resume Challenge. The things that I learnt proved very useful when working on different projects. So when I saw that the CRC's Forrest Brazzeal and KodeKloud were coming together to create a &lt;a href="https://cloudresumechallenge.dev/docs/extensions/kubernetes-challenge/"&gt;Kubernetes Resume Challenge&lt;/a&gt;, I leapt at the opportunity to try it!&lt;/p&gt;

&lt;h3&gt;
  
  
  The Challenge
&lt;/h3&gt;

&lt;p&gt;Deploy a scalable, consistent, and highly available e-commerce website with a database to Kubernetes using a cloud provider of your choosing.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Kubernetes and why is it important?
&lt;/h3&gt;

&lt;p&gt;Let's go back to the 2000s. When you visited a company's website, it was probably hosted on a server sitting in a room somewhere in their office. Think of a server as being like the computer you might be reading this on now, but dedicated to a single purpose - serving a website. This can be wasteful - a server may have resources that go unused. And what if the business suddenly receives a surge of traffic? The website could go down and cost the company sales and revenue.&lt;/p&gt;

&lt;p&gt;Enter Kubernetes. Kubernetes was created at Google and is essentially a way of running and coordinating lots of small, isolated workloads (called containers). It allows you to automatically scale your applications with traffic, check that containers are healthy and replace them when they are not, and make more efficient use of your resources. It is very likely you have used a site or app deployed on Kubernetes - Airbnb, Spotify, and Reddit are three of the most prominent. And for IT professionals, a big advantage of Kubernetes is that it can run almost anywhere - including on major cloud providers such as Amazon Web Services, Microsoft Azure, and Google Cloud.&lt;/p&gt;

&lt;h3&gt;
  
  
  Week 1 - Steps 1 to 5
&lt;/h3&gt;

&lt;p&gt;I had previous Kubernetes experience from one of my first projects and had recently made it a little over halfway through KodeKloud's CKA course, so I felt ready to start. I created a Dockerfile, built and pushed the image to DockerHub, and additionally created a K8s ConfigMap for the database init script.&lt;/p&gt;
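&lt;p&gt;As a sketch, the init-script ConfigMap looked along these lines - the script contents below are illustrative placeholders, not my actual schema:&lt;/p&gt;

```yaml
# A ConfigMap holding a database init script, mounted later by the
# MariaDB deployment. Table and column names here are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-init
data:
  db-load-script.sql: |
    USE ecomdb;
    CREATE TABLE IF NOT EXISTS products (
      id INT AUTO_INCREMENT PRIMARY KEY,
      name VARCHAR(100),
      price DECIMAL(10, 2)
    );
```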

&lt;p&gt;Although most of my experience is with AWS, I decided to try Google Cloud as I had heard great things about their Kubernetes service (GKE) and I knew that new users get free credits. To deploy a cluster, I used GCP's very useful and easy cluster creation wizard. I came across a small issue where my selected region did not have enough resources but this was easily solved by changing to a different region. I installed gcloud CLI, gcloud's auth plugin, and updated kubectl to use the new cluster.&lt;/p&gt;

&lt;p&gt;I realised that to deploy the website I needed a database first. I created a deployment using a MariaDB image, configured it to use the db-init ConfigMap to populate the database, and added a service which would allow the front-end pods to connect to the database. When I deployed the website pods I noticed that they were unable to connect. I exec'd into one of the pods and checked the PHP code and environment variables, but it all looked fine. I then checked the MariaDB pod logs before realising it was actually an init-script issue. After fixing that, the connection was up and running.&lt;/p&gt;
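&lt;p&gt;In outline, the database deployment and service looked something like the following - the labels and database name are illustrative, and credentials are omitted here (they were later moved into a secret):&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - name: mariadb
          image: mariadb:latest
          ports:
            - containerPort: 3306
          env:
            - name: MARIADB_DATABASE
              value: ecomdb
          volumeMounts:
            # The official image runs any scripts found in this directory
            # on first startup - this is how the init script is applied.
            - name: db-init
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: db-init
          configMap:
            name: db-init
---
# ClusterIP service so the front-end pods can reach the database by name.
apiVersion: v1
kind: Service
metadata:
  name: mariadb-service
spec:
  selector:
    app: mariadb
  ports:
    - port: 3306
```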

&lt;p&gt;The last step this week was to create a load balancer to expose the website to the internet. This was a quick and easy process as cloud providers have seamless K8s/LB integrations. I deployed the LB targeting the front-end pods on port 80, used the gcloud CLI to fetch the IP address, and successfully accessed the website.&lt;/p&gt;
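&lt;p&gt;A LoadBalancer service of this shape is all GKE needs in order to provision a cloud load balancer - the selector label is an assumption:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: website-service
spec:
  type: LoadBalancer   # GKE provisions an external load balancer for this
  selector:
    app: website
  ports:
    - port: 80
      targetPort: 80
```

&lt;p&gt;Once the load balancer is provisioned, kubectl get service website-service shows the external IP alongside the cluster IP.&lt;/p&gt;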

&lt;h3&gt;
  
  
  Week 2 - Steps 6 to 10
&lt;/h3&gt;

&lt;p&gt;The first task was to set a feature toggle that enabled dark mode for the website. I used my browser's developer tools to change various CSS rules to create a dark mode and created new stylesheets using these rules. I wrote some simple PHP code that enabled different stylesheets depending on whether the FEATURE_DARK_MODE environment variable was true or not. I built a new Docker image with the changes, pushed to DockerHub, and was able to successfully deploy the website with the new dark mode feature.&lt;/p&gt;
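&lt;p&gt;The toggle itself is only a few lines of logic. The site is PHP, but the same idea can be sketched in Python - the stylesheet filenames here are illustrative:&lt;/p&gt;

```python
import os

def pick_stylesheet():
    """Return the stylesheet to serve based on the FEATURE_DARK_MODE env var.

    The stylesheet paths are illustrative placeholders.
    """
    # Treat common truthy strings as "on"; anything else falls back to light.
    flag = os.environ.get("FEATURE_DARK_MODE", "false").strip().lower()
    if flag in ("1", "true", "yes", "on"):
        return "css/style-dark.css"
    return "css/style.css"
```

&lt;p&gt;Because the flag is read from the environment, flipping the feature on a deployment is just a matter of setting the variable in the pod spec - no image rebuild required.&lt;/p&gt;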

&lt;p&gt;Next I manually scaled the application up and down, built a new image with a promotional banner, deployed the new image, and rolled back the application to the previously deployed image.&lt;/p&gt;

&lt;p&gt;The last step this week was to implement autoscaling. I used the guide's commands to create a Horizontal Pod Autoscaler and then used Apache Bench to simulate traffic and CPU load. However, I noticed that the website pods were not scaling. After checking the HPA's events and Googling the behaviour I realised it was because I had not set resource requests on the pods. I used kubectl top to check the current resource consumption and then set the requests based on those values. After experimenting with different values I was able to see the pods autoscaling up and back down when tested with Apache Bench.&lt;/p&gt;
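&lt;p&gt;Roughly, the two pieces that made autoscaling work were resource requests on the website container and the HPA itself - all values below are illustrative:&lt;/p&gt;

```yaml
# Fragment of the website container spec: without a CPU request the HPA
# has no baseline to compute utilisation against.
resources:
  requests:
    cpu: 100m
    memory: 128Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: website-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: website
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```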

&lt;h3&gt;
  
  
  Week 3 - Steps 11 to 13
&lt;/h3&gt;

&lt;p&gt;I added liveness and readiness probes to the application and was able to see Kubernetes delaying traffic to unready pods and additionally restarting pods if they became unhealthy. I also configured the database and website pods to pull credentials from a secret. I created a GitHub repository and pushed my code.&lt;/p&gt;
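&lt;p&gt;Sketched at the container level, the probes and secret reference looked along these lines - the paths, timings, and secret name are assumptions:&lt;/p&gt;

```yaml
# Fragment of the website container spec.
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
env:
  # Credentials come from a Kubernetes Secret rather than plain env values.
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: password
```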

&lt;h3&gt;
  
  
  Extra Credit
&lt;/h3&gt;

&lt;p&gt;I also decided to try the extra credit steps. I started by adding a persistent volume to the database which would allow data to remain stored upon pod restarts or other events. I found a useful MariaDB guide that I followed, first creating a persistent volume claim and then adding and mounting a volume to the database deployment. To test, I logged into the DB pod to create a new entry. I restarted the deployment and saw that the new entry had persisted.&lt;/p&gt;
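&lt;p&gt;A minimal sketch of the claim, with the mount shown in comments - the storage size and names are illustrative:&lt;/p&gt;

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
# In the database deployment, the claim is mounted over MariaDB's data dir:
#   volumeMounts:
#     - name: mariadb-storage
#       mountPath: /var/lib/mysql
#   volumes:
#     - name: mariadb-storage
#       persistentVolumeClaim:
#         claimName: mariadb-pvc
```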

&lt;p&gt;Next I used Helm to package the application. At a very high level, Helm is a way of streamlining the deployment of your Kubernetes code. I followed the Quick Start guide in the documentation. It was a bit daunting to convert all the Kubernetes templates to Helm, but I found a useful tool called Helmify that I used to create rough drafts. I went through the generated values and templates and changed them both for clarity and to parameterise values as much as possible. Once I had done that I deleted my previous deployment and created a new one using Helm. I was impressed by how quick and easy it was.&lt;/p&gt;
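&lt;p&gt;A values.yaml along these lines is the kind of thing Helmify's rough draft gets refined into - every name and default below is illustrative rather than my actual chart:&lt;/p&gt;

```yaml
# values.yaml - parameters extracted from the raw manifests (illustrative)
image:
  repository: example/ecom-web   # placeholder repository
  tag: latest
replicaCount: 2
service:
  type: LoadBalancer
  port: 80
featureDarkMode: false
db:
  database: ecomdb
  secretName: db-credentials
  storageSize: 5Gi
```

&lt;p&gt;With the chart in place, a single helm upgrade --install command deploys or updates the whole stack.&lt;/p&gt;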

&lt;p&gt;The last step was to implement a CI/CD pipeline which would allow me to automatically build and deploy code. It was quite easy to create a GitHub Actions job that built a Docker image and pushed it to DockerHub. However, it was a trickier process to deploy the Helm charts to GKE.&lt;/p&gt;

&lt;p&gt;I followed a Google guide and started by creating a service account and adding the necessary IAM roles to it. I stored the generated JSON key securely in GitHub Secrets and used a GitHub action to authenticate the job to GCP. However, this failed with an error about an auth plugin. While searching for a solution I found a very useful GitHub Action that allowed authentication without gcloud. After experimenting with a few Helm commands I was able to successfully deploy to GKE via GitHub Actions!&lt;/p&gt;
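&lt;p&gt;Pieced together, the workflow had roughly this shape. The image name, secret names, cluster details, and chart path are all placeholders, and the two google-github-actions steps shown are one plausible arrangement rather than necessarily the exact actions I used:&lt;/p&gt;

```yaml
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      IMAGE: docker.io/example/ecom-web:${{ github.sha }}
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        run: |
          echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USER }}" --password-stdin
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
      # Authenticate to GCP with the service account key from GitHub Secrets
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      # Generate kubeconfig credentials for the cluster
      - uses: google-github-actions/get-gke-credentials@v2
        with:
          cluster_name: example-cluster
          location: europe-west1
      - name: Deploy with Helm
        run: helm upgrade --install ecom ./chart --set image.tag="${{ github.sha }}"
```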

&lt;h3&gt;
  
  
  Finished Product
&lt;/h3&gt;

&lt;p&gt;Normal:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fml69d6cs8c31p1wuig6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fml69d6cs8c31p1wuig6f.png" alt="Normal" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Dark Mode:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxn7ppy7gcro9qw60hhg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxn7ppy7gcro9qw60hhg.png" alt="Dark Mode" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Thoughts
&lt;/h3&gt;

&lt;p&gt;This was a very enjoyable challenge that allowed me to use knowledge gained both from work projects and the Cloud Resume Challenge. Although I had Kubernetes experience I did not have experience with Helm or HPA. After this challenge, I am really interested in exploring Helm further and will hopefully have more opportunities to use this and Kubernetes in the future.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/bit-of-a-git/kubernetes-resume-challenge"&gt;GitHub repository&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>googlecloud</category>
      <category>helm</category>
      <category>cicd</category>
    </item>
    <item>
      <title>A Security-Focused Cloud Resume Challenge</title>
      <dc:creator>David O' Connor</dc:creator>
      <pubDate>Tue, 11 Jul 2023 22:08:24 +0000</pubDate>
      <link>https://forem.com/bit-of-a-git/a-security-focused-cloud-resume-challenge-16aa</link>
      <guid>https://forem.com/bit-of-a-git/a-security-focused-cloud-resume-challenge-16aa</guid>
      <description>&lt;p&gt;Hi! My name is David O' Connor and I am a Cloud/DevOps engineer based in Ireland. Before transitioning to this field, I worked as a musician and music teacher. Last year I decided on a career change. I have always had a passion for tech, and after completing Cybersecurity and Cloud courses I found a job in Cloud. I am really enjoying working in this area.&lt;/p&gt;

&lt;p&gt;It may seem like quite a jump from music to Cloud, but believe it or not they have a lot in common! Both fields require a strong understanding of your tools, including identifying potential failure points, recognising symptoms of issues, and troubleshooting problems quickly. Preparation, versatility, and continuous learning are crucial. In music, I had to be able to play genres like jazz, rock, pop, and even classical – sometimes all on the same day! Similarly, in Cloud so far I’ve worked with CloudFormation, Kubernetes, Python, Docker, JavaScript, Terraform, C#, the list goes on!&lt;/p&gt;

&lt;h3&gt;
  
  
  The Challenge
&lt;/h3&gt;

&lt;p&gt;I came across &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/aws/" rel="noopener noreferrer"&gt;this challenge&lt;/a&gt; while in training for a Cloud role with Deloitte Ireland. This consisted of a three-month course on AWS/DevOps resources, ending with the AWS Certified Cloud Practitioner certification. I enjoyed learning about these technologies and wanted to put my newfound skills to the test – the Cloud Resume Challenge was exactly what I was looking for.&lt;/p&gt;

&lt;p&gt;The idea of this challenge is to create and host your own CV in the Cloud using HTML/CSS and an S3 static website. Next, you create a visitor counter using JavaScript, Lambda, an API, and a database. Lastly, you define your resources as Infrastructure as Code and create CI/CD pipelines to automatically deploy them when changes are pushed to your GitHub repository. A full list of steps can be found &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/aws/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;I started the project by setting up an AWS Organisation in my root account. Next, I used org-formation to create dev and production OUs and accounts with budget alarms, password policies, and region restrictions. I also set up SSO with MFA so I could easily and securely access the accounts. I learnt a lot from this step as it closely resembles what you might typically encounter in a professional environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chunk 1: Front-end
&lt;/h3&gt;

&lt;p&gt;I spent some time getting the HTML/CSS exactly as I wanted it before uploading it to S3. I set up a static HTTPS website using S3, CloudFront, and ACM. Next, I purchased a domain, created a Route 53 hosted zone pointing to my CloudFront distribution, and updated my domain to use the provided AWS nameservers.&lt;/p&gt;

&lt;p&gt;When I was recreating the front-end with Terraform I decided to improve the overall security. While researching best practices, I came across a very interesting blog on security headers. These HTTP headers help protect against threats like cross-site scripting and clickjacking attacks. Luckily, these headers were very easy to add to CloudFront through Terraform, and I was able to quickly upgrade my site’s rating from an F to an A.&lt;/p&gt;
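&lt;p&gt;As a sketch, a CloudFront response headers policy in Terraform looks along these lines - the resource name and values are illustrative, and the policy still needs to be attached to the distribution's cache behaviour:&lt;/p&gt;

```hcl
resource "aws_cloudfront_response_headers_policy" "security_headers" {
  name = "security-headers"

  security_headers_config {
    # X-Content-Type-Options: nosniff
    content_type_options {
      override = true
    }
    # X-Frame-Options, to mitigate clickjacking
    frame_options {
      frame_option = "DENY"
      override     = true
    }
    # Strict-Transport-Security, roughly one year
    strict_transport_security {
      access_control_max_age_sec = 31536000
      include_subdomains         = true
      override                   = true
    }
  }
}
```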

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8x1bw3giivodkj9lg6wd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8x1bw3giivodkj9lg6wd.png" alt="Improved security rating with headers" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also wanted to try and implement DNSSEC. This protocol helps shield against &lt;a href="https://bluecatnetworks.com/blog/four-major-dns-attack-types-and-how-to-mitigate-them/" rel="noopener noreferrer"&gt;DNS-based attacks&lt;/a&gt; by adding cryptographic signatures to DNS records. I found a guide on implementing this through Terraform which was very helpful. I was able to create a key, associate it with Route53, and configure my domain to use it.&lt;/p&gt;

&lt;p&gt;Lastly, I decided to disable the S3 static website configuration, as this requires public read access. Instead, I limited S3 access to CloudFront using an origin access control. This provides essentially the same functionality as an S3 static website while also following the principle of least privilege.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chunk 2: Back-end
&lt;/h3&gt;

&lt;p&gt;I experimented a lot with this part of the project and was able to make a visitor counter I was quite happy with. However, when I revisited this step in Terraform I couldn’t resist trying Forrest’s mod, which was to create a unique visitors counter.&lt;/p&gt;

&lt;p&gt;I wrote a Lambda function in Python that would retrieve, hash, and store the visitor’s IP address in a DynamoDB table along with a time-to-live value. DynamoDB uses TTL to delete items after a specified time – in my case I decided upon a month, meaning it would be monthly unique visitors.&lt;/p&gt;
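&lt;p&gt;The core of that logic can be sketched as follows - the table attribute names are assumptions, and in the real function the item is written with boto3 and TTL is enabled on the expiry attribute:&lt;/p&gt;

```python
import hashlib
import time

# Roughly one month; DynamoDB deletes items once this epoch time passes.
TTL_SECONDS = 30 * 24 * 60 * 60

def unique_visitor_item(ip_address, now=None):
    """Build the DynamoDB item for one visitor.

    The IP is hashed so no raw addresses are stored; the TTL attribute
    makes the count a rolling monthly-unique-visitors figure.
    """
    now = int(now if now is not None else time.time())
    hashed_ip = hashlib.sha256(ip_address.encode("utf-8")).hexdigest()
    return {
        "hashed_ip": hashed_ip,           # partition key (assumed name)
        "expires_at": now + TTL_SECONDS,  # TTL attribute, epoch seconds
    }
```

&lt;p&gt;Hashing keeps the counter privacy-friendly: two visits from the same address map to the same item, so a conditional or idempotent write counts each address once per TTL window.&lt;/p&gt;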

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovcwg5qya8dtt77xwmsx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovcwg5qya8dtt77xwmsx.png" alt="Hashed IPs with TTL" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I modified my other Lambda function to fetch the count of hashed IPs from this table while also incrementing and returning the value from a hit count table. I integrated these with the POST and GET functions of my API and implemented rate limiting to enhance API security.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chunk 3: Front-end/Back-end Integration &amp;amp; Testing
&lt;/h3&gt;

&lt;p&gt;I experimented quite a bit with JavaScript and ended up with some code I was happy with. I decided to use Cypress for end-to-end testing, as the guide suggested. I looked at previous examples and wrote tests to confirm that the API Gateway could update and retrieve from the databases correctly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chunk 4: Automation/CICD
&lt;/h3&gt;

&lt;p&gt;Now for the fun part! I started by recreating my resources in Terraform. Next, I researched how to configure a remote backend, eventually settling on S3. Following the challenge’s DevOps mods, I wrote a GitHub Actions file to deploy code changes to the dev account and only merge and deploy to production if tests pass. I also worked out how to uniquely name dev resources using the Git commit ID and tear them down after successful tests.&lt;/p&gt;
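&lt;p&gt;The dev-then-prod flow had roughly this shape - job names, variable names, and commands below are illustrative rather than my exact workflow:&lt;/p&gt;

```yaml
jobs:
  dev:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy uniquely-named dev stack
        # The commit SHA suffix keeps parallel dev stacks from colliding.
        run: terraform apply -auto-approve -var="suffix=${{ github.sha }}"
      - name: Run Cypress tests against dev
        run: npx cypress run
      - name: Tear down dev stack
        if: always()   # clean up even when tests fail
        run: terraform destroy -auto-approve -var="suffix=${{ github.sha }}"
  prod:
    needs: dev   # only runs if the dev job (and its tests) succeeded
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to production
        run: terraform apply -auto-approve
```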

&lt;p&gt;For security, I set up OIDC instead of storing my AWS access keys directly in GitHub. I also enforced signed Git commits and set up CodeQL to scan my code monthly and on pull requests. Lastly, I set up the front-end to automatically update the site and refresh CloudFront whenever changes were made.&lt;/p&gt;

&lt;h3&gt;
  
  
  My site
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fbit-of-a-git%2Fcloud-resume-challenge-back-end%2Fmain%2Fimg%2FCloudResumeChallenge.drawio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fbit-of-a-git%2Fcloud-resume-challenge-back-end%2Fmain%2Fimg%2FCloudResumeChallenge.drawio.png" alt="Architecture of the site using AWS services" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Above is a high-level overview of the architecture of my site. You can check it out at &lt;a href="https://davidoconnor.me" rel="noopener noreferrer"&gt;davidoconnor.me&lt;/a&gt;, and my GitHub repositories can be found &lt;a href="https://github.com/bit-of-a-git/cloud-resume-challenge-back-end/tree/main" rel="noopener noreferrer"&gt;here&lt;/a&gt; and &lt;a href="https://github.com/bit-of-a-git/cloud-resume-challenge-front-end" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reflections and next steps
&lt;/h3&gt;

&lt;p&gt;I thoroughly enjoyed the challenge and learnt a lot in the process. I took my time and explored almost everything suggested in the guide. When met with a problem, I kept trying until I was able to find my way past it – something I always tried to do in music too. Whether learning a difficult tune or how to use complicated new equipment, I always tried to explore all the possibilities and persevere until I finally succeeded.&lt;/p&gt;

&lt;p&gt;I particularly enjoyed the Python and JavaScript parts of the challenge, and I would like my next project to focus on one of these. In terms of certs, I am planning to study for the AWS Solutions Architect Associate next.&lt;/p&gt;

&lt;p&gt;Thank you for reading, and thanks to Forrest Brazeal for this really enjoyable challenge!&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
