<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Shawn Gestupa</title>
    <description>The latest articles on Forem by Shawn Gestupa (@smgestupa).</description>
    <link>https://forem.com/smgestupa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F987226%2F7338f64e-a6c9-4ab4-a029-367dd1bed275.jpg</url>
      <title>Forem: Shawn Gestupa</title>
      <link>https://forem.com/smgestupa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/smgestupa"/>
    <language>en</language>
    <item>
      <title>Improving on DevOps with Kubernetes, Helm, and GitOps</title>
      <dc:creator>Shawn Gestupa</dc:creator>
      <pubDate>Sun, 12 Apr 2026 02:54:30 +0000</pubDate>
      <link>https://forem.com/smgestupa/improving-on-devops-with-kubernetes-helm-and-gitops-bgd</link>
      <guid>https://forem.com/smgestupa/improving-on-devops-with-kubernetes-helm-and-gitops-bgd</guid>
      <description>&lt;p&gt;For the past few weeks, I shifted my focus on building a three-tier application declaratively with &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, making it more configurable with Helm, and implementing GitOps with ArgoCD for automated deployments:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8em2811y2o2v57yrnwqk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8em2811y2o2v57yrnwqk.png" alt=" " width="512" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the added help of Prometheus and Grafana deployed via Helm, I was also able to improve the observability of my application. Loki and Tempo were also implemented to enhance log tracing within my cluster.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://smgestupa.dev/posts/20260106/improving-my-containerization-skills-with-docker/" rel="noopener noreferrer"&gt;one of my posts&lt;/a&gt;, I've mentioned the growing demand for containerization skills, which was why I decided to learn more about &lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt;, and even tinker with it, as a way to further understand containers.&lt;/p&gt;

&lt;p&gt;Add to that my previous experience with GitOps, which started when I was &lt;a href="https://smgestupa.dev/posts/20250811/taking-the-cloud-resume-challenge-gcp-style/" rel="noopener noreferrer"&gt;building CI/CD pipelines&lt;/a&gt;, and the training went smoothly; passing it made me truly believe that I'm getting closer to my aspirations.&lt;/p&gt;

&lt;h1&gt;Improving my DevOps skills&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; I didn't link the repositories used for the training, since I'm not sure if I'm allowed to -- as of writing, the training was held internally by my employer.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Unfortunately, I had to temporarily put off getting certificates, but on the bright side, I was given the opportunity to sharpen my DevOps engineering principles.&lt;/p&gt;

&lt;p&gt;I've learned that DevOps engineering is more than just containerizing apps or automating deployments: it's about having a proactive mindset. Building systems where problems are mitigated before they reach customers, through automation &amp;amp; observability, turns a proactive engineer into one who reacts in real time and is constantly looking for bottlenecks and improvements.&lt;/p&gt;

&lt;p&gt;Embracing the DevOps philosophy means accepting that the systems we're building will never be "finished".&lt;/p&gt;

&lt;p&gt;Throughout the training, I spent most of my time reducing human error -- including my own -- by recording my progress in Git and keeping declarative configurations that let me set up my local Kubernetes cluster with a single command. I created three GitLab repositories for my architecture: one for the application code; one for the Kubernetes manifests, including Helm charts; and one for the ArgoCD configurations.&lt;/p&gt;

&lt;p&gt;I also leveraged Helm charts to conveniently deploy not only my application but also the systems necessary for observability &amp;amp; GitOps, such as Prometheus, Grafana, and ArgoCD.&lt;/p&gt;

&lt;p&gt;Then I integrated Helm with my local ArgoCD so my services are continuously monitored, benefiting from its self-healing, continuous delivery, and monitoring capabilities.&lt;/p&gt;

&lt;p&gt;Lastly, I linked all three of my repositories, each with its own deployment pipeline, to commit to a true continuous integration (CI) and continuous delivery (CD) setup:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujwf33wydqsqni6ukndn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujwf33wydqsqni6ukndn.png" alt=" " width="800" height="595"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The application code repository, holding the frontend &amp;amp; backend sources, creates a merge request against the repository where my Helm charts live to update the image tag of the frontend or backend -- but only if the respective source code was changed. Otherwise, no merge request is made and only the Static Application Security Testing (SAST) job runs.&lt;/p&gt;

&lt;p&gt;I chose to manually approve the merge requests instead of pushing changes directly, since I wanted to simulate working in a team where requests must be reviewed &amp;amp; approved before being deployed to production; this also made it easy to comprehend how my deployment pipeline flows from one repository to another.&lt;/p&gt;

&lt;p&gt;ArgoCD polls each repository every 3 minutes and automatically deploys any new changes to the respective systems without human intervention.&lt;/p&gt;

&lt;p&gt;The journey was quite exhilarating: it gave me new insights into solving more complex problems with creative solutions, actively drawing on my past experiences to continuously improve my architecture. It may still be lacking in some ways, but thanks to what I've learned in the training, it'll be easier to discover and understand what I'm missing, and to apply those improvements to the next system I'm trusted to build.&lt;/p&gt;

&lt;h2&gt;Some tinkering with Docker&lt;/h2&gt;

&lt;p&gt;I've used Docker with Kubernetes, and considering I had previous experience with the former, it was easy to digest the Docker-related resources provided in the training -- to the point that I was watching related YouTube videos at twice the normal playback speed.&lt;/p&gt;

&lt;p&gt;My three-tier application -- the frontend (&lt;em&gt;presentation layer&lt;/em&gt;), backend (&lt;em&gt;logic layer&lt;/em&gt;), and database (&lt;em&gt;data layer&lt;/em&gt;) -- is deployed entirely with Docker, with the data layer using an external Postgres image from the &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;Docker Hub&lt;/a&gt; registry.&lt;/p&gt;

&lt;p&gt;The backend is a Node application that serves APIs, with the help of Express, to the frontend -- a single-page React application that connects to the backend via an environment variable by default.&lt;/p&gt;

&lt;p&gt;I also opted to use Alpine variants of Docker images wherever I could, to reduce the resources used by containers and speed up the provisioning of my three-tier application.&lt;/p&gt;

&lt;p&gt;In addition, both the frontend &amp;amp; backend were initially deployed with root privileges, meaning anyone with access to their containers could control them. I sought to improve the security posture of my application by creating separate &lt;code&gt;Dockerfile&lt;/code&gt; files for the dev (&lt;em&gt;also used for testing&lt;/em&gt;) &amp;amp; prod environments, where the prod &lt;code&gt;Dockerfile&lt;/code&gt; uses a non-root user, greatly mitigating the risk of container breakout and the impact of an attacker inside the container.&lt;/p&gt;

&lt;p&gt;The frontend's initial &lt;code&gt;Dockerfile&lt;/code&gt; used a Node image for deployment; however, I leveraged an Nginx configuration file -- already included in the initial training repository -- to implement a multi-stage build, where the deployment stage uses a non-privileged Nginx image, specifically &lt;a href="https://hub.docker.com/r/nginxinc/nginx-unprivileged" rel="noopener noreferrer"&gt;&lt;code&gt;nginxinc/nginx-unprivileged&lt;/code&gt;&lt;/a&gt;, while copying only the necessary files from the build stage to reduce the final image size.&lt;/p&gt;
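&lt;p&gt;A minimal sketch of such a multi-stage build -- the stage layout, file paths, base image tags, and the &lt;code&gt;REACT_APP_API_URL&lt;/code&gt; build argument here are my own illustrative assumptions, not the actual training &lt;code&gt;Dockerfile&lt;/code&gt;:&lt;/p&gt;

```dockerfile
# --- build stage: compile the React frontend with Node ---
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
# Build-time values must exist as ARG/ENV at this point, or the
# fallback hardcoded in the source code gets baked in instead.
ARG REACT_APP_API_URL
ENV REACT_APP_API_URL=$REACT_APP_API_URL
COPY . .
RUN npm run build

# --- deploy stage: serve the static files with unprivileged Nginx ---
FROM nginxinc/nginx-unprivileged:stable-alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Copy only the build output, keeping the final image small.
COPY --from=build /app/build /usr/share/nginx/html
# The unprivileged image listens on 8080 instead of 80 by default.
EXPOSE 8080
```

Only the compiled assets reach the final image, so the Node toolchain never ships to production.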

&lt;p&gt;Despite not being required by the training itself, it didn't feel right to me to keep the three-tier application "as-is" when I knew that I had the means to improve it, starting with its security.&lt;/p&gt;

&lt;p&gt;As usual, I still had my own share of problems with Docker:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The frontend threw a &lt;code&gt;host `backend` is not reachable&lt;/code&gt; error when trying to reach the backend's API server through Nginx, which is why I had to provision my backend right after the database.&lt;/li&gt;
&lt;li&gt;I had a hard time changing the value of the frontend's environment variable specifically in the prod &lt;code&gt;Dockerfile&lt;/code&gt;; apparently I had to add an &lt;code&gt;ENV&lt;/code&gt; or &lt;code&gt;ARG&lt;/code&gt; with the correct value for the said environment variable, because it wasn't available at build time and the build simply used the fallback value specified in the source code.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;My struggles with Kubernetes&lt;/h2&gt;

&lt;p&gt;When I was learning Kubernetes for the first time, I struggled to digest the information, so I decided to document everything I learned -- to the point that my notes reached more than 20,000 lines and took me twice as long as refreshing my knowledge of Docker.&lt;/p&gt;

&lt;p&gt;For my cluster, I opted to use &lt;a href="https://k3d.io/stable/" rel="noopener noreferrer"&gt;k3d&lt;/a&gt;, a lightweight wrapper to run &lt;a href="https://github.com/rancher/k3s" rel="noopener noreferrer"&gt;k3s&lt;/a&gt;, which I believed to be a great way to start learning without worrying too much about resource overhead.&lt;/p&gt;

&lt;p&gt;My strategy for Kubernetes was to replicate how I deployed my three-tier application previously, by building the necessary manifests:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Using either &lt;code&gt;ConfigMap&lt;/code&gt; (&lt;em&gt;for non-sensitive values&lt;/em&gt;) or &lt;code&gt;Secret&lt;/code&gt; (&lt;em&gt;for sensitive credentials&lt;/em&gt;) to match the environment variables from &lt;code&gt;docker-compose.yaml&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Replacing Docker's volume mounting (&lt;code&gt;volumes&lt;/code&gt;) with Kubernetes' &lt;code&gt;volumeMounts&lt;/code&gt;, along with &lt;code&gt;PersistentVolume&lt;/code&gt; &amp;amp; &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; for external volume creation.&lt;/li&gt;
&lt;li&gt;Mirroring Docker's &lt;code&gt;healthcheck&lt;/code&gt; with Kubernetes' &lt;code&gt;livenessProbe&lt;/code&gt;, while also adding the ability to check whether an application is ready with &lt;code&gt;readinessProbe&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Connecting Kubernetes deployments with a &lt;code&gt;Service&lt;/code&gt; -- of type &lt;code&gt;ClusterIP&lt;/code&gt; by default -- to mimic the private connectivity within Docker Compose.&lt;/li&gt;
&lt;li&gt;Using &lt;code&gt;Ingress&lt;/code&gt; resources explicitly for public-facing applications, similar to mapping an available host port to a Docker container's internal port so it can be accessed from the host machine.&lt;/li&gt;
&lt;/ol&gt;
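&lt;p&gt;A rough sketch of what that mapping can look like for the backend layer -- the names, image, ports, and probe paths below are illustrative assumptions, not my actual manifests:&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: registry.example.com/backend:1.0.0
          envFrom:
            - configMapRef:
                name: backend-config   # non-sensitive values
            - secretRef:
                name: backend-secret   # sensitive credentials
          livenessProbe:               # mirrors Docker's healthcheck
            httpGet:
              path: /healthz
              port: 3000
          readinessProbe:              # gates traffic until ready
            httpGet:
              path: /healthz
              port: 3000
---
apiVersion: v1
kind: Service          # ClusterIP by default, mimicking the private
metadata:              # networking between Docker Compose services
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 3000
      targetPort: 3000
```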

&lt;p&gt;Nevertheless, learning Kubernetes equipped me with the knowledge to move from a simple Docker deployment to complex yet flexible, scalable deployments -- such as knowing when to choose between a &lt;code&gt;Deployment&lt;/code&gt; or a &lt;code&gt;StatefulSet&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Of course, I don't think I'm really learning when I don't run into problems:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I had to expose k3d with &lt;code&gt;--api-port 127.0.0.1:&amp;lt;PORT&amp;gt;&lt;/code&gt; to map it to my &lt;code&gt;localhost&lt;/code&gt;; otherwise, my Docker Desktop's &lt;code&gt;kubectl&lt;/code&gt; couldn't communicate with the cluster's API server.&lt;/li&gt;
&lt;li&gt;There's a slight difference between Docker and Kubernetes when running a command inside a container/pod from a terminal: I had to add &lt;code&gt;--&lt;/code&gt; before the command I wanted to run, and avoid the &lt;code&gt;/&lt;/code&gt; prefix on the command, because it pointed to my local directory instead of a path inside the container.&lt;/li&gt;
&lt;li&gt;Encountering the &lt;code&gt;One or more containers do not have resources&lt;/code&gt; error once made me always add limits on how many resources my deployments can use.&lt;/li&gt;
&lt;li&gt;Persistent volumes should be deleted right after their claims, since I unknowingly waited longer than I needed to when I deleted the volumes first.&lt;/li&gt;
&lt;li&gt;I couldn't correctly mount a SQL initialization file when I pasted its contents into a &lt;code&gt;ConfigMap&lt;/code&gt;, so I opted to visualize how the contents should be added with &lt;code&gt;kubectl create configmap init-sql --from-file=init.sql=./init.sql --dry-run=client -o yaml&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;I ran into issues when creating a &lt;code&gt;Secret&lt;/code&gt; for sensitive environment variables with &lt;code&gt;echo&lt;/code&gt;; I should've used &lt;code&gt;echo -n&lt;/code&gt; instead to omit the trailing newline.&lt;/li&gt;
&lt;/ol&gt;
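&lt;p&gt;The &lt;code&gt;echo&lt;/code&gt; pitfall above is easy to demonstrate in a plain shell (the cluster-side commands are shown as comments with illustrative names, since they need a running cluster):&lt;/p&gt;

```shell
# Cluster-side commands mentioned above, for reference only:
#   k3d cluster create demo --api-port 127.0.0.1:6445
#   kubectl exec -it backend-pod -- sh        # note the -- separator
#   kubectl create configmap init-sql --from-file=init.sql=./init.sql \
#       --dry-run=client -o yaml

# Plain echo appends a newline, which ends up inside the
# base64-encoded Secret value and silently breaks credentials.
plain=$(echo 'secret' | base64)      # encodes "secret" plus a newline
clean=$(echo -n 'secret' | base64)   # encodes just "secret"
echo "$plain"   # c2VjcmV0Cg==
echo "$clean"   # c2VjcmV0
```

Decoding `c2VjcmV0Cg==` yields `secret` followed by a newline, which is why the password appears correct when printed but fails authentication.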

&lt;h2&gt;Optimizing my infrastructure with Helm&lt;/h2&gt;

&lt;p&gt;I built separate Kubernetes manifests for each layer of my three-tier application, each with its own &lt;code&gt;ConfigMap&lt;/code&gt; or &lt;code&gt;Secret&lt;/code&gt; and services; deploying the same applications with similar configurations felt redundant (&lt;em&gt;to me&lt;/em&gt;), and the separate files risked misconfigurations.&lt;/p&gt;

&lt;p&gt;As part of my training, I had to learn Helm, and I came to understand how useful it is for consistent deployments: it enabled me to install third-party applications, known as "charts", from registries to complement my deployments without creating the manifests myself. Charts are stored in registries, commonly indexed on &lt;a href="https://artifacthub.io/" rel="noopener noreferrer"&gt;Artifact Hub&lt;/a&gt;, similar to container images on the Docker Hub registry; Helm acts as a "package manager", much like &lt;a href="https://www.npmjs.com/" rel="noopener noreferrer"&gt;NPM&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;helm create&lt;/code&gt; and some tinkering, I was able to migrate my manifests to Helm as a single package, allowing me to manage every layer through a single file named &lt;code&gt;values.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Working with Helm was a breeze, especially thanks to its reliance on the Go templating language, which let me control the flow of my deployments and inject or replace data in a single manifest shared by all the layers of my application. A few more things I picked up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Helm is often described as the "Homebrew" (&lt;em&gt;a package manager used on macOS&lt;/em&gt;) of Kubernetes.&lt;/li&gt;
&lt;li&gt;Helm is similar to third-party Terraform modules: it simplifies deployments and avoids maintaining multiple, nearly identical YAML files.&lt;/li&gt;
&lt;li&gt;The values in a Helm chart differ between charts, as they are entirely up to their respective developers, which makes reading the documentation as necessary as ever.&lt;/li&gt;
&lt;li&gt;We can avoid "snowflake servers/clusters" -- where software/packages are installed imperatively, making them hard to rebuild -- by opting for a declarative workflow with Helm whose configurations can be stored in source control, resulting in a "phoenix server".&lt;/li&gt;
&lt;li&gt;A library chart is a "library" of functions shared across multiple charts, while an application chart is a collection of templates.&lt;/li&gt;
&lt;li&gt;Helm itself is written in Go.&lt;/li&gt;
&lt;li&gt;Helm template functions can be chained into "pipelines", similar to Terraform's built-in functions, using the pipe (&lt;code&gt;|&lt;/code&gt;) symbol: &lt;code&gt;{{ .Values.globalNamespace | default "default" }}&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Template flow control can be done with if-statements, such as &lt;code&gt;{{ if .Values.development }}-dev{{ end }}&lt;/code&gt; or even &lt;code&gt;{{ if .Values.development }}-dev{{ else }}-prod{{ end }}&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The templating language leaves whitespace (including newlines) behind by default; adding &lt;code&gt;-&lt;/code&gt; inside the delimiters, such as &lt;code&gt;{{- end }}&lt;/code&gt;, trims it.&lt;/li&gt;
&lt;li&gt;Professional Helm templates can be huge and rely heavily on values/variables for declarative workflows, avoiding hardcoded values.&lt;/li&gt;
&lt;/ol&gt;
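&lt;p&gt;A hypothetical template fragment putting those notes together -- the resource, keys, and values are assumptions for illustration, showing the &lt;code&gt;default&lt;/code&gt; pipeline, if/else flow control, and whitespace trimming:&lt;/p&gt;

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Falls back to "default" when globalNamespace is unset.
  namespace: {{ .Values.globalNamespace | default "default" }}
  # Renders "-dev" or "-prod" depending on values.yaml.
  name: app-config{{ if .Values.development }}-dev{{ else }}-prod{{ end }}
data:
  {{- if .Values.development }}
  LOG_LEVEL: debug
  {{- else }}
  LOG_LEVEL: info
  {{- end }}
```

The `{{-` forms trim the newline each control line would otherwise leave behind, so the rendered YAML stays valid.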

&lt;p&gt;I'm always open to encountering problems with Helm, because they let me reinforce what I've learned and be creative with my solutions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Some of my pods hit &lt;code&gt;ImagePullBackOff&lt;/code&gt; from Helm charts that I used, which meant the chart may be outdated or no longer exist, so I looked for alternatives -- such as using Bitnami's MariaDB chart while the tutorials I consumed were still using their MySQL chart.&lt;/li&gt;
&lt;li&gt;There's a difference between &lt;code&gt;indent&lt;/code&gt; and &lt;code&gt;nindent&lt;/code&gt; when programmatically injecting data into my Helm templates: both indent every line of the input, but &lt;code&gt;nindent&lt;/code&gt; also prepends a newline, which is usually what makes the template render as expected.&lt;/li&gt;
&lt;/ol&gt;
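&lt;p&gt;To illustrate the &lt;code&gt;nindent&lt;/code&gt; point -- assuming a small &lt;code&gt;labels&lt;/code&gt; map in &lt;code&gt;values.yaml&lt;/code&gt;, purely for demonstration:&lt;/p&gt;

```yaml
# values.yaml is assumed to contain:
#   labels:
#     app: backend
#     tier: api
metadata:
  labels:
    {{- toYaml .Values.labels | nindent 4 }}
# nindent 4 emits a newline first, then indents every injected line
# by four spaces, rendering to:
#   labels:
#     app: backend
#     tier: api
# With plain indent, the first injected line lands directly after the
# template's own leading whitespace, producing broken indentation.
```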

&lt;p&gt;After learning Helm, I used it heavily in my deployments, which helped me reduce the risk of misconfigurations as much as possible.&lt;/p&gt;

&lt;h2&gt;Hands-off deployments with GitOps thru ArgoCD&lt;/h2&gt;

&lt;p&gt;To achieve the "continuous delivery" part of my architecture, I opted to deploy ArgoCD as a Helm chart; it was also required software to learn during my training.&lt;/p&gt;

&lt;p&gt;ArgoCD was necessary to implement the GitOps framework, since the majority of its capabilities are tied to Git repositories; permitting it to read my repositories allowed my deployments to self-heal whenever a critical component becomes unhealthy, reducing the time it takes for them to be available again.&lt;/p&gt;

&lt;p&gt;Fortunately, Helm charts are compatible with ArgoCD, which is why I combined my three Git repositories and third-party Helm charts to set up my architecture with it. ArgoCD pushed me toward a creative solution for setting up my architecture correctly, and taught me that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An imperative setup introduces the risk of snowflake servers/clusters, requires extensively configuring a cluster's RBAC, and lacks visibility into deployment status; ArgoCD solves this by delegating access to Git repositories while providing visibility into deployed applications.&lt;/li&gt;
&lt;li&gt;The best practice is to keep unrelated deployments in separate Git repositories instead of a single repository for everything -- especially a separate repository for system configurations, to reduce the risk of exposing sensitive credentials.&lt;/li&gt;
&lt;li&gt;ArgoCD supports both Kubernetes manifests &amp;amp; Helm charts, combining the best of both worlds for deployments; ArgoCD is basically an extension of Kubernetes, and I used Helm to deploy it.&lt;/li&gt;
&lt;li&gt;Git repositories must be synced with ArgoCD for continuous monitoring, which also gives applications "easy rollback": ArgoCD watches for changes in those repositories and applies updates automatically, though automation can be disabled and an alert sent instead when new changes arrive.&lt;/li&gt;
&lt;li&gt;ArgoCD can be used to control a Kubernetes cluster -- by acting as an agent -- which avoids providing external access to the cluster, since management can be done indirectly through Git repos.&lt;/li&gt;
&lt;li&gt;Git repositories hold the desired state while Kubernetes clusters hold the actual live state, and ArgoCD ensures the two stay in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;ArgoCD has its own Custom Resource Definitions (CRDs): my three-tier application, as well as the necessary components of my Kubernetes cluster, are deployed with &lt;code&gt;Application&lt;/code&gt; manifests -- especially for the Helm charts I chose to deploy that aren't in my Git repositories, whose default values I override inside the same manifest.&lt;/p&gt;
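&lt;p&gt;A sketch of such an &lt;code&gt;Application&lt;/code&gt; manifest deploying a third-party Helm chart -- the chart version and the overridden value are illustrative assumptions, not my actual configuration:&lt;/p&gt;

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kube-prometheus-stack
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://prometheus-community.github.io/helm-charts
    chart: kube-prometheus-stack
    targetRevision: 65.1.0   # pin the chart version (assumed)
    helm:
      values: |              # override the chart's default values
        grafana:
          adminPassword: changeme
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
  syncPolicy:
    automated:
      selfHeal: true         # re-sync when the live state drifts
      prune: true            # remove resources deleted from Git
```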

&lt;p&gt;I did have my own fair share of problems when I was deploying with ArgoCD:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Before I could deploy my applications with ArgoCD, I had to deploy ArgoCD itself first, since it contains the CRDs used for deployments, especially the &lt;code&gt;Application&lt;/code&gt; manifests.&lt;/li&gt;
&lt;li&gt;I had a hard time using my GitLab repository's Deploy Keys with ArgoCD, which led me to read the &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/argocd-repositories-yaml" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; to find the correct syntax for SSH-related keys.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To avoid committing sensitive credentials in my Git repository for ArgoCD, I utilized &lt;a href="https://external-secrets.io/latest/" rel="noopener noreferrer"&gt;External Secrets Operator&lt;/a&gt; to create a Kubernetes secret that syncs with an AWS Secrets Manager secret:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgap6rh2dlobh8n7z363.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgap6rh2dlobh8n7z363.png" alt=" " width="800" height="595"&gt;&lt;/a&gt;&lt;/p&gt;
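&lt;p&gt;A minimal sketch of that sync, assuming a &lt;code&gt;ClusterSecretStore&lt;/code&gt; for AWS Secrets Manager is already configured -- the store, secret, and key names here are hypothetical:&lt;/p&gt;

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: argocd-repo-creds
  namespace: argocd
spec:
  refreshInterval: 1h         # how often to re-sync from AWS
  secretStoreRef:
    name: aws-secrets-manager # a SecretStore configured separately
    kind: ClusterSecretStore
  target:
    name: argocd-repo-creds   # the Kubernetes Secret to create
  data:
    - secretKey: sshPrivateKey
      remoteRef:
        key: argocd/deploy-key  # key in AWS Secrets Manager
```

The credential itself lives only in AWS Secrets Manager; the Git repository holds just this declarative pointer.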

&lt;p&gt;I'm aware that I could've used a webhook for immediate syncing between my Git repository and ArgoCD, though that required more setup than I needed; I've taken note of it and will implement it in the future.&lt;/p&gt;

&lt;h2&gt;Enhancing observability starting with Prometheus &amp;amp; Grafana&lt;/h2&gt;

&lt;p&gt;The last part of my training was to implement Prometheus and Grafana to enhance the observability of my deployments, which I did as Helm charts deployed with ArgoCD.&lt;/p&gt;

&lt;p&gt;Prometheus &amp;amp; Grafana were originally separate external Helm charts; however, I opted to use the &lt;a href="https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack/" rel="noopener noreferrer"&gt;&lt;code&gt;kube-prometheus-stack&lt;/code&gt;&lt;/a&gt; chart, which combines both in a single package. At first, &lt;code&gt;kube-prometheus-stack&lt;/code&gt; was deployed as an external Helm chart, but throughout the training I found myself needing more flexibility to configure Prometheus, Grafana, or any other component included in the chart, which is why I ended up adding it to my Git repository for Kubernetes manifests &amp;amp; Helm charts through &lt;code&gt;helm pull&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I also integrated &lt;a href="https://artifacthub.io/packages/helm/grafana/loki-stack" rel="noopener noreferrer"&gt;Loki&lt;/a&gt; &amp;amp; &lt;a href="https://artifacthub.io/packages/helm/grafana/tempo" rel="noopener noreferrer"&gt;Tempo&lt;/a&gt; as external Helm charts, specifically to enhance log aggregation and tracing in my applications.&lt;/p&gt;

&lt;p&gt;Improving my cluster's observability allowed me to learn:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Prometheus can be used as a monitoring tool for highly dynamic container environments as well as traditional bare-metal servers -- it's a mainstream monitoring solution of choice for containers and microservice architectures.&lt;/li&gt;
&lt;li&gt;It's used to constantly monitor all deployments/services in order to identify problems before they escalate, as well as to check containers' resource usage against thresholds and alert on breaches; automated monitoring &amp;amp; alerting can be achieved with observability-related components.&lt;/li&gt;
&lt;li&gt;Metrics are usually retrieved from a &lt;code&gt;/metrics&lt;/code&gt; endpoint by default; client libraries can expose this endpoint in an application, since many services don't have native Prometheus support.&lt;/li&gt;
&lt;li&gt;Prometheus pulls metrics from targets, while Grafana can be used to visualize those metrics to gain insights.&lt;/li&gt;
&lt;li&gt;Other monitoring systems, such as AWS CloudWatch, use a push model where agents &lt;em&gt;push&lt;/em&gt; data to a centralized collection platform, which can result in a high load of network traffic; in contrast, Prometheus uses a pull model, which can give better detection/insight since it knows immediately whether a target is dead or alive.&lt;/li&gt;
&lt;li&gt;Prometheus can use the Alertmanager component (&lt;em&gt;included in the &lt;code&gt;kube-prometheus-stack&lt;/code&gt; chart&lt;/em&gt;) to notify the respective recipients/communication channels.&lt;/li&gt;
&lt;li&gt;PromQL is used to query the Prometheus server (or a target directly) and is what visualization tools such as Grafana rely on; PromQL runs in the background when building a Grafana dashboard.&lt;/li&gt;
&lt;li&gt;Prometheus is designed to be reliable and to keep working even when other services are broken, resulting in a less complex &amp;amp; extensive setup; however, it can be difficult to scale or has limits on monitoring (&lt;em&gt;which can be addressed by increasing the Prometheus server's capacity or limiting the number of metrics pulled&lt;/em&gt;).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;My three-tier application wasn't initially designed to emit metrics, which is why I enjoyed going back to programming when I implemented metrics for my frontend &amp;amp; backend: the backend was set up with the &lt;a href="https://www.npmjs.com/package/prom-client" rel="noopener noreferrer"&gt;&lt;code&gt;prom-client&lt;/code&gt;&lt;/a&gt; Node library to create the metrics for both frontend &amp;amp; backend, which are then exposed on a &lt;code&gt;/metrics&lt;/code&gt; endpoint.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;prom-client&lt;/code&gt; library wasn't compatible with the client side, especially with the frontend deployed behind Nginx. My solution was to have the frontend send the data for its own metrics to the backend's &lt;code&gt;/metrics&lt;/code&gt; endpoint via the &lt;code&gt;fetch&lt;/code&gt; API:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fph2n5mz35xs3y97dio3h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fph2n5mz35xs3y97dio3h.png" alt=" " width="656" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The initial metrics for my frontend were simply the First Contentful Paint, retrieved with the &lt;a href="https://www.npmjs.com/package/web-vitals" rel="noopener noreferrer"&gt;&lt;code&gt;web-vitals&lt;/code&gt;&lt;/a&gt; Node library, and the page load duration.&lt;/p&gt;

&lt;p&gt;Since this was more or less my first time dealing with Prometheus &amp;amp; Grafana, my problems with them forced me to be resourceful:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I had to manually (&lt;em&gt;in a declarative way&lt;/em&gt;) create a &lt;code&gt;ServiceMonitor&lt;/code&gt; for each of my layers in order for Prometheus to retrieve their metrics; otherwise, Prometheus won't see them as targets.&lt;/li&gt;
&lt;li&gt;I thought the &lt;code&gt;kube-prometheus-stack&lt;/code&gt; chart was incompatible with the Loki chart; in the end, all I had to do was stop setting the latter as the default data source, since the former already uses Prometheus as the default data source.&lt;/li&gt;
&lt;li&gt;I had to add the &lt;a href="https://www.npmjs.com/package/@opentelemetry/api" rel="noopener noreferrer"&gt;&lt;code&gt;@opentelemetry/api&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://www.npmjs.com/package/@opentelemetry/auto-instrumentations-node" rel="noopener noreferrer"&gt;&lt;code&gt;@opentelemetry/auto-instrumentations-node&lt;/code&gt;&lt;/a&gt; libraries to my Node applications in order for Tempo to retrieve their traces and display them within Grafana.&lt;/li&gt;
&lt;/ol&gt;
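&lt;p&gt;For reference, a hypothetical &lt;code&gt;ServiceMonitor&lt;/code&gt; like the ones described above -- the labels, port name, and interval are assumptions for illustration:&lt;/p&gt;

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: backend
  labels:
    release: kube-prometheus-stack   # must match Prometheus' selector
spec:
  selector:
    matchLabels:
      app: backend                   # targets the backend Service
  endpoints:
    - port: http                     # named port on that Service
      path: /metrics
      interval: 30s
```

Without a matching `release` label (or whatever label the Prometheus instance selects on), the target simply never appears, which is easy to mistake for a broken exporter.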

&lt;h1&gt;The journey ahead&lt;/h1&gt;

&lt;p&gt;I prioritized the DevOps training because I believed it would keep me ahead in this ever-changing field, where automation and efficiency are no longer optional -- and I believe it was the right decision; I emerged from the training equipped with new, comprehensive knowledge and deeper insights into architecture and system design.&lt;/p&gt;

&lt;p&gt;Combined with my existing knowledge in Cloud Engineering, the training also provided me with the principles necessary to build robust, scalable, and stable infrastructure that I can bring with me wherever I go, achieving the adaptability I always aim to have.&lt;/p&gt;

&lt;p&gt;I'm thankful for being given the opportunity to improve myself, as well as having the guidance of my seniors which gave me the confidence to finish the training.&lt;/p&gt;

&lt;p&gt;Maybe I'll get back to pursuing the AWS CloudOps Engineer Associate certification, since I was in the middle of it when the training came up and had to stop abruptly.&lt;/p&gt;




&lt;p&gt;I originally planned to learn Kubernetes through the &lt;a href="https://courses.mooc.fi/org/uh-cs/courses/devops-with-kubernetes" rel="noopener noreferrer"&gt;DevOps with Kubernetes&lt;/a&gt; course from the same instructors as &lt;a href="https://courses.mooc.fi/org/uh-cs/courses/devops-with-docker" rel="noopener noreferrer"&gt;DevOps with Docker&lt;/a&gt;; despite not taking the Kubernetes one, I still recommend both courses, going by my great experience with the Docker course.&lt;/p&gt;

&lt;p&gt;If you want to upskill in DevOps engineering: take your time with what you can digest, and don't rush your learning -- in the long term, it's important to build the necessary principles organically.&lt;/p&gt;

&lt;p&gt;Don't forget to believe in yourself!&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Taking The Cloud Resume Challenge: GCP Style</title>
      <dc:creator>Shawn Gestupa</dc:creator>
      <pubDate>Mon, 11 Aug 2025 12:10:43 +0000</pubDate>
      <link>https://forem.com/smgestupa/taking-the-cloud-resume-challenge-gcp-style-23md</link>
      <guid>https://forem.com/smgestupa/taking-the-cloud-resume-challenge-gcp-style-23md</guid>
      <description>&lt;p&gt;What's self-learning without some challenges to shape us, hence for the past few weeks, I've taken upon &lt;a href="https://cloudresumechallenge.dev/" rel="noopener noreferrer"&gt;The Cloud Resume Challenge&lt;/a&gt; by Forrest Brazeal and re-created my resume using &lt;a href="https://cloud.google.com" rel="noopener noreferrer"&gt;Google Cloud&lt;/a&gt; services, instead of my usual cloud platform (&lt;em&gt;nothing wrong with exploring a little hehe&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;Feel free to check out my recreated resume here: &lt;a href="https://resume.smgestupa.dev" rel="noopener noreferrer"&gt;https://resume.smgestupa.dev&lt;/a&gt; -- this won't replace the resume on my portfolio website.&lt;/p&gt;

&lt;p&gt;I followed most of the requirements/steps, except for steps &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/#1-certification" rel="noopener noreferrer"&gt;1. Certification&lt;/a&gt; (&lt;em&gt;got too excited to do this&lt;/em&gt;), and &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/#12-infrastructure-as-code" rel="noopener noreferrer"&gt;12. Infrastructure as Code&lt;/a&gt; (&lt;em&gt;saving this for another blog post&lt;/em&gt;). Still, I was able to finish the challenge:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxnbwf5nxxo9a2hvpg4w.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxnbwf5nxxo9a2hvpg4w.webp" alt="resume.smgestupa.dev Preview" width="800" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;: this isn't a step-by-step guide; instead, it focuses on my process and the decisions I made from my research. I do give some tips on certain things, but finding things out yourself will always be better.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  How did I start this?
&lt;/h1&gt;

&lt;p&gt;This challenge tested me, especially when building the CI/CD pipeline for my repositories. Thankfully, I got to put my skills as a Full-Stack Developer to good use -- thanks to my previous clients who trusted me to build their theses, capstones, etc.&lt;/p&gt;

&lt;p&gt;I went with &lt;a href="https://svelte.dev/" rel="noopener noreferrer"&gt;SvelteKit&lt;/a&gt; to make everything easier for me (&lt;em&gt;feel free to use what works for you to achieve your goal&lt;/em&gt;). I also used TailwindCSS's &lt;a href="https://tailwindcss.com/docs/preflight" rel="noopener noreferrer"&gt;Preflight&lt;/a&gt; to reset the default browser styles, which made styling super easy.&lt;/p&gt;

&lt;p&gt;From there, I was able to recreate my &lt;a href="https://smgestupa.dev/resume.pdf" rel="noopener noreferrer"&gt;original resume&lt;/a&gt; for steps &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/#2-html" rel="noopener noreferrer"&gt;2. HTML&lt;/a&gt; and &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/#3-css" rel="noopener noreferrer"&gt;3. CSS&lt;/a&gt;, though it's not an exact 1-to-1 match -- I did my best to keep the look somewhat similar, but things like the exact font and dimensions were hard to determine.&lt;/p&gt;

&lt;p&gt;At first, everything was just running locally:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbqwe0q8u9ivh3fliy51.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbqwe0q8u9ivh3fliy51.webp" alt="Local architecture where I can only access my website" width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since no one could visit what I'd made, my next step was to deploy it to Google Cloud as a static website.&lt;/p&gt;

&lt;h1&gt;
  
  
  Enter Google Cloud
&lt;/h1&gt;

&lt;p&gt;Before deploying, I had to activate the free $300 credits, since some services require billing to be enabled beforehand, such as &lt;a href="https://cloud.google.com/storage" rel="noopener noreferrer"&gt;Cloud Storage&lt;/a&gt;, which I used to &lt;a href="https://cloud.google.com/storage/docs/hosting-static-website" rel="noopener noreferrer"&gt;host my recreated resume as a static website&lt;/a&gt; (&lt;em&gt;as part of &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/#4-static-website" rel="noopener noreferrer"&gt;4. Static Website&lt;/a&gt;&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You may have to enable the Cloud Storage API before creating a bucket.&lt;/p&gt;

&lt;p&gt;Hosting a static website with Cloud Storage was simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create my bucket.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Uniform Access&lt;/strong&gt; (&lt;em&gt;as recommended by Google Cloud&lt;/em&gt;).&lt;/li&gt;
&lt;li&gt;Upload my website files.&lt;/li&gt;
&lt;li&gt;Disable the &lt;strong&gt;Prevent public access&lt;/strong&gt; setting.&lt;/li&gt;
&lt;li&gt;Add the &lt;strong&gt;allUsers&lt;/strong&gt; principal with the &lt;strong&gt;Storage Legacy Object Reader&lt;/strong&gt; permission.&lt;/li&gt;
&lt;li&gt;Set the bucket's website configuration to point to &lt;code&gt;index.html&lt;/code&gt;, and boom! I have a static website with HTTPS, for free!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The architecture for this deployment was:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwnsswgdt2v2zyt5yt7p.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwnsswgdt2v2zyt5yt7p.webp" alt="Architecture using GCS for static website deployment" width="795" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My estimated cost for the GCS bucket is &lt;strong&gt;$0.00&lt;/strong&gt;/month since my files total well under a gigabyte; storage in Singapore (&lt;em&gt;&lt;code&gt;asia-southeast1&lt;/code&gt;&lt;/em&gt;) incurs $0.020 per GB per month, and rates vary depending on the bucket's region.&lt;/p&gt;

&lt;p&gt;But this was not enough: we can't expect visitors to remember a long domain, which is why my website needed a user-friendly URL -- and this can be done with &lt;a href="https://cloud.google.com/dns" rel="noopener noreferrer"&gt;Cloud DNS&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  DNS
&lt;/h2&gt;

&lt;p&gt;I created a managed zone in Cloud DNS for my subdomain: &lt;a href="https://resume.smgestupa.dev/" rel="noopener noreferrer"&gt;https://resume.smgestupa.dev/&lt;/a&gt;, as part of step &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/#6-dns" rel="noopener noreferrer"&gt;6. DNS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You may have to enable the Cloud DNS API before creating a managed zone.&lt;/p&gt;

&lt;p&gt;But before that, I bought my own personal domain (&lt;a href="https://smgestupa.dev" rel="noopener noreferrer"&gt;smgestupa.dev&lt;/a&gt;), which is required for this step -- I paid &lt;strong&gt;$12.62&lt;/strong&gt; for one year, upfront. If you plan on purchasing a domain, decide as if you'll be using it for the rest of your life.&lt;/p&gt;

&lt;p&gt;It was also pretty easy to migrate my domain to Cloud DNS -- specifically a subdomain, since I didn't want to migrate my whole domain.&lt;/p&gt;

&lt;p&gt;If you don't plan on moving your whole domain (&lt;em&gt;like mine!&lt;/em&gt;), then the process will be simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Look up the nameservers generated by your Cloud DNS.&lt;/li&gt;
&lt;li&gt;Import the nameservers to your primary DNS management service.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After importing the nameservers, you'll have to create new DNS records in your managed zone moving forward.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; you should consult your DNS management service's documentation for importing nameservers.&lt;/p&gt;

&lt;p&gt;My estimated cost for a managed zone is &lt;strong&gt;$0.20&lt;/strong&gt;/month.&lt;/p&gt;

&lt;h2&gt;
  
  
  Load Balancing
&lt;/h2&gt;

&lt;p&gt;As part of &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/#5-https" rel="noopener noreferrer"&gt;5. HTTPS&lt;/a&gt;, I used a &lt;a href="https://cloud.google.com/load-balancing" rel="noopener noreferrer"&gt;Cloud Load Balancer&lt;/a&gt; to add a user-friendly domain with HTTPS for my static website by linking it with my managed zone. It's also used to distribute user traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You may have to enable the Compute Engine API before creating a load balancer.&lt;/p&gt;

&lt;p&gt;I deployed a public-facing, global load balancer with a rule that uses port 443 (&lt;em&gt;for HTTPS&lt;/em&gt;) with a static IP address for the &lt;strong&gt;Frontend configuration&lt;/strong&gt;, which will be used to map my subdomain to it. Since I'm using a managed zone, I secured the rule with a Google-managed SSL certificate.&lt;/p&gt;

&lt;p&gt;For the &lt;strong&gt;Backend configuration&lt;/strong&gt;, I created a backend bucket that points to the bucket hosting my static website, with &lt;a href="https://cloud.google.com/cdn" rel="noopener noreferrer"&gt;Cloud CDN&lt;/a&gt; enabled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Routing rules&lt;/strong&gt; were left as-is. After finalizing my changes, I created the load balancer and waited a few seconds for it to initialize.&lt;/p&gt;

&lt;p&gt;The architecture for this deployment was:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs4t2bxi1cqv9eyjlbct.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs4t2bxi1cqv9eyjlbct.webp" alt="Architecture with Cloud CDN, Cloud Load Balancer (with Cloud CDN enabled)" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My estimated cost is &lt;strong&gt;$20.44&lt;/strong&gt;/month, since my rule incurs $0.028/hour (for the first 5 rules) in Singapore. The rates will vary depending on the load balancer's region.&lt;/p&gt;
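Working backwards from the estimate: a forwarding rule is billed hourly, and a month averages about 730 hours, so $20.44/month corresponds to roughly $0.028 per rule-hour. A quick sanity check in Python (the 730-hour figure is the usual billing convention, 8760 hours / 12 months):

```python
# Sanity-check the load balancer estimate: one forwarding rule,
# billed hourly, over an average 730-hour month.
HOURS_PER_MONTH = 730  # 8760 hours in a year / 12 months

monthly_cost = 20.44  # pricing calculator estimate, USD
hourly_rate = monthly_cost / HOURS_PER_MONTH

print(f"~${hourly_rate:.3f}/hour per forwarding rule")  # ~$0.028/hour
```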

&lt;h2&gt;
  
  
  Database
&lt;/h2&gt;

&lt;p&gt;For step &lt;em&gt;&lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/#8-database" rel="noopener noreferrer"&gt;8. Database&lt;/a&gt;&lt;/em&gt;, I used a regional Firestore database in Native mode to track the number of visits to my static website.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You may have to enable the Cloud Firestore API before creating a database.&lt;/p&gt;

&lt;p&gt;Firestore is a NoSQL document database that was easy to set up. In my case, I created one database for each environment: production and staging.&lt;/p&gt;

&lt;p&gt;The production environment uses the &lt;code&gt;(default)&lt;/code&gt; database, since Firestore's free quota applies to this database, which I believe helps prevent incurring any extra charges.&lt;/p&gt;

&lt;p&gt;Additionally, I use a named database for staging; named databases have no free quota, but this one is useful for my pipelines.&lt;/p&gt;

&lt;p&gt;My estimated total cost for my Firestore databases is &lt;strong&gt;$0.00&lt;/strong&gt;/month, since I don't expect my production database to exceed the free quota, while my pipelines make fewer than 10 operations per workflow.&lt;/p&gt;

&lt;p&gt;Of course, I can't just directly give my static website permissions to modify my databases, which is why I created a &lt;a href="https://cloud.google.com/functions" rel="noopener noreferrer"&gt;Cloud Function&lt;/a&gt; as a "middle-man" -- we should always assume there will be malicious actors that will cause irreparable damage if they have direct access to a database (&lt;em&gt;I don't want to get charged by Google Cloud hehe&lt;/em&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  Function
&lt;/h2&gt;

&lt;p&gt;To give a controlled way for my static website to increment the total visits, I provisioned a Python function (&lt;em&gt;as part of steps &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/#9-api" rel="noopener noreferrer"&gt;9. API&lt;/a&gt; &amp;amp; &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/#10-python" rel="noopener noreferrer"&gt;10. Python&lt;/a&gt;&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You may have to enable the Cloud Functions API before creating a function.&lt;/p&gt;

&lt;p&gt;The process for my code was straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;My static website sends an HTTP GET request to my function.&lt;/li&gt;
&lt;li&gt;The function increments the counter in Firestore by 1.&lt;/li&gt;
&lt;li&gt;Right before the function returns a response, it retrieves the counter's value, which is added to the response body.&lt;/li&gt;
&lt;/ol&gt;
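The flow above is just a read-modify-write around a single counter document. Here's a minimal sketch of that logic with a plain dict standing in for Firestore -- the real function uses the Firestore client library, and the names below are illustrative, not my actual code:

```python
# Stand-in for the Firestore document holding the visit counter.
_db = {"counter": 0}

def increment_visits(db=_db):
    """Increment the visit counter, then return it in a response body."""
    db["counter"] += 1           # step 2: increment the counter by 1
    value = db["counter"]        # step 3: read it back right before responding
    return {"statusCode": 200, "body": {"counter": value}}

response = increment_visits()
print(response["body"]["counter"])  # 1 on the first visit
```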

&lt;p&gt;I opted to use a first-generation environment for faster cold starts, and configured the function with 128 MiB of RAM and 0.167 vCPU -- I believe my function doesn't need more for such simple operations.&lt;/p&gt;

&lt;p&gt;An environment variable is also used to determine which Firestore database to use, depending on the environment, for my pipeline's testing job.&lt;/p&gt;
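Driving the database choice from an environment variable keeps the function code identical across environments. A small sketch of the selection logic -- the &lt;code&gt;FIRESTORE_DATABASE&lt;/code&gt; name comes from my setup, while &lt;code&gt;resume-staging&lt;/code&gt; is a hypothetical database name:

```python
import os

def firestore_database() -> str:
    """Return the Firestore database to target: the free-quota
    '(default)' database unless the pipeline overrides it."""
    return os.environ.get("FIRESTORE_DATABASE", "(default)")

# The testing job sets FIRESTORE_DATABASE to the staging database;
# in production the variable is unset, so '(default)' is used.
os.environ.pop("FIRESTORE_DATABASE", None)
assert firestore_database() == "(default)"

os.environ["FIRESTORE_DATABASE"] = "resume-staging"  # hypothetical staging name
assert firestore_database() == "resume-staging"
```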

&lt;p&gt;My estimated cost is &lt;strong&gt;$0.00&lt;/strong&gt;/month, since billing is request-based and I don't expect the function to be invoked anywhere near a million times per month.&lt;/p&gt;

&lt;p&gt;For my static website, I created a new section to display the total number of visits. Additionally, I added JavaScript code that sends a request to the function to increment the total visits, then displays the new value from the response body.&lt;/p&gt;

&lt;p&gt;The architecture for this deployment was:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffw7j9zqm5zefvvck261b.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffw7j9zqm5zefvvck261b.webp" alt="Architecture with Cloud Functions and Firestore" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Source Control
&lt;/h1&gt;

&lt;p&gt;Of course, I didn't forget to use source control for all of my code. I've always had the habit of creating a repository before starting a new project.&lt;/p&gt;

&lt;p&gt;For this challenge, I used &lt;a href="https://github.com/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; (&lt;em&gt;as part of step &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/#13-source-control" rel="noopener noreferrer"&gt;13. Source Control&lt;/a&gt;&lt;/em&gt;) to store my front-end (&lt;em&gt;static website&lt;/em&gt;) and back-end (&lt;em&gt;Python function&lt;/em&gt;) in separate, private repositories.&lt;/p&gt;

&lt;h2&gt;
  
  
  Front-end
&lt;/h2&gt;

&lt;p&gt;Manually building my website and then uploading it to my bucket was good and all, for about 5 seconds. I realized that I'd be repeating this tedious process for weeks while doing this challenge, which is why I decided to set up a pipeline with GitHub Actions (&lt;em&gt;as part of step &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/#15-cicd-front-end" rel="noopener noreferrer"&gt;15. CI/CD (Front-end)&lt;/a&gt;&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;For my pipeline, I created two workflows depending on the action:&lt;/p&gt;

&lt;h3&gt;
  
  
  Pull Request to &lt;code&gt;main&lt;/code&gt; Branch
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Use &lt;code&gt;actions/setup-node@v4&lt;/code&gt; to install Node.js, making it available to the whole runner.&lt;/li&gt;
&lt;li&gt;Check out the repo with &lt;code&gt;actions/checkout@v4&lt;/code&gt;, then install &lt;code&gt;pnpm&lt;/code&gt;, install dependencies, and build, all with &lt;code&gt;pnpm&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Push to &lt;code&gt;main&lt;/code&gt; Branch
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Use &lt;code&gt;actions/setup-node@v4&lt;/code&gt; to install Node.js, making it available to the whole runner.&lt;/li&gt;
&lt;li&gt;Check out the repo with &lt;code&gt;actions/checkout@v4&lt;/code&gt;, then install &lt;code&gt;pnpm&lt;/code&gt;, install dependencies, and build, all with &lt;code&gt;pnpm&lt;/code&gt;. The compiled code is saved as an artifact via &lt;code&gt;actions/upload-artifact@v4&lt;/code&gt; for reuse.&lt;/li&gt;
&lt;li&gt;Authenticate to Google Cloud, set up the &lt;a href="https://cloud.google.com/sdk/docs/install" rel="noopener noreferrer"&gt;gcloud&lt;/a&gt; CLI, and download the compiled code artifact with &lt;code&gt;actions/download-artifact@v4&lt;/code&gt;. The artifact is then uploaded to my bucket via &lt;code&gt;gcloud storage cp&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Back-end
&lt;/h2&gt;

&lt;p&gt;Cloud Functions has an option to link a function to a repo, but I opted to manually set up my automation process (&lt;em&gt;as part of step &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/#14-cicd-back-end" rel="noopener noreferrer"&gt;14. CI/CD (Back-end)&lt;/a&gt;&lt;/em&gt;) so that I could deepen my understanding of pipelines a little bit more.&lt;/p&gt;

&lt;p&gt;I also had a similar realization to the one from my front-end repo: I would be repeatedly copy-and-pasting my code to update my function, which led me to build a new pipeline.&lt;/p&gt;

&lt;p&gt;Additionally, I implemented unit testing (&lt;em&gt;as part of step &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/googlecloud/#11-tests" rel="noopener noreferrer"&gt;11. Tests&lt;/a&gt;&lt;/em&gt;) for my pipelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Should return an HTTP 200 status code.&lt;/li&gt;
&lt;li&gt;Should return an HTTP 200 status code, a &lt;code&gt;counter&lt;/code&gt; should be in the response, and the &lt;code&gt;counter&lt;/code&gt; should be greater than or equal to 0.&lt;/li&gt;
&lt;li&gt;Should return an HTTP 204 status code, and includes &lt;code&gt;Access-Control-Allow-Origin&lt;/code&gt;, &lt;code&gt;Access-Control-Allow-Methods&lt;/code&gt;, and &lt;code&gt;Access-Control-Allow-Headers&lt;/code&gt; in the headers with these values:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Access-Control-Allow-Origin&lt;/code&gt;: &lt;code&gt;['https://resume.smgestupa.dev']&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Access-Control-Allow-Methods&lt;/code&gt;: &lt;code&gt;['GET']&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Access-Control-Allow-Headers&lt;/code&gt;: &lt;code&gt;['Accept']&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Should return an HTTP 200 status code, and &lt;code&gt;Access-Control-Allow-Origin&lt;/code&gt; with &lt;code&gt;['https://resume.smgestupa.dev']&lt;/code&gt;.&lt;/li&gt;

&lt;/ul&gt;
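Those cases map directly onto plain assertions. The stub below mimics the function's CORS behavior so the shape of the tests is easy to see -- the real unit tests run against the actual function with the staging database:

```python
ALLOWED_ORIGIN = "https://resume.smgestupa.dev"

def handle(method: str):
    """Tiny stand-in for the Cloud Function: OPTIONS preflight and GET."""
    cors = {
        "Access-Control-Allow-Origin": [ALLOWED_ORIGIN],
        "Access-Control-Allow-Methods": ["GET"],
        "Access-Control-Allow-Headers": ["Accept"],
    }
    if method == "OPTIONS":               # preflight: 204, headers only
        return 204, cors, None
    return 200, cors, {"counter": 0}      # GET: counter in the body

# The unit tests above, expressed as assertions:
status, headers, body = handle("OPTIONS")
assert status == 204 and body is None
assert headers["Access-Control-Allow-Methods"] == ["GET"]

status, headers, body = handle("GET")
assert status == 200 and body["counter"] >= 0
```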

&lt;p&gt;Similar to the pipeline in my front-end repo, I created two workflows depending on the action:&lt;/p&gt;

&lt;h3&gt;
  
  
  Pull Request to &lt;code&gt;main&lt;/code&gt; Branch
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Update the &lt;code&gt;FIRESTORE_DATABASE&lt;/code&gt; environment variable to point to the staging database, check out the repo with &lt;code&gt;actions/checkout@v4&lt;/code&gt;, and run the unit tests.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Push to &lt;code&gt;main&lt;/code&gt; Branch
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Update the &lt;code&gt;FIRESTORE_DATABASE&lt;/code&gt; environment variable to point to the staging database, check out the repo with &lt;code&gt;actions/checkout@v4&lt;/code&gt;, and run the unit tests.&lt;/li&gt;
&lt;li&gt;Check out the repo with &lt;code&gt;actions/checkout@v4&lt;/code&gt;, authenticate to Google Cloud, set up the gcloud CLI, create a &lt;code&gt;build&lt;/code&gt; folder, copy &lt;code&gt;main.py&lt;/code&gt; and &lt;code&gt;requirements.txt&lt;/code&gt; into the &lt;code&gt;build&lt;/code&gt; folder, and update the function via &lt;code&gt;gcloud run deploy&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  CI/CD
&lt;/h2&gt;

&lt;p&gt;I made heavy use of composite actions to reuse repetitive steps, like authenticating to Google Cloud, reducing the time I'd spend manually updating the same steps across multiple jobs.&lt;/p&gt;

&lt;p&gt;For authentication, I utilized OIDC via &lt;a href="https://cloud.google.com/iam/docs/workload-identity-federation" rel="noopener noreferrer"&gt;Workload Identity Federation&lt;/a&gt;, which lets me selectively choose which repos can deploy to my Google Cloud project without needing to download a service account's credentials.&lt;/p&gt;

&lt;p&gt;Fortunately, GitHub Actions offers a free quota for private repos, and I estimated the cost to be &lt;strong&gt;$0.00&lt;/strong&gt;/month since I don't expect to run my pipelines for more than 33 hours in total per month.&lt;/p&gt;

&lt;h1&gt;
  
  
  Final Architecture
&lt;/h1&gt;

&lt;p&gt;I recreated my resume with SvelteKit, deployed it as a static website with Cloud Storage, connected it to a Cloud Load Balancer, then pointed my subdomain to it with Cloud DNS.&lt;/p&gt;

&lt;p&gt;To streamline the deployment of both my static website &amp;amp; function, I automated the process with GitHub Actions.&lt;/p&gt;

&lt;p&gt;With everything done, I'm proud to present the best part: the final architecture that is now running my static website.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feljsmwejc5bpbl3tgssj.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feljsmwejc5bpbl3tgssj.webp" alt="Final architecture with Google Cloud &amp;amp; CI/CD pipelines" width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the help of &lt;a href="https://cloud.google.com/products/calculator" rel="noopener noreferrer"&gt;Google Cloud Pricing Calculator&lt;/a&gt;, I estimated my total monthly cost to be &lt;strong&gt;$21.69&lt;/strong&gt; -- I only included the cost of provisioning services and omitted specific metrics such as data transfers:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;Monthly&lt;/th&gt;
&lt;th&gt;Yearly&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Personal Domain (1-year, upfront)&lt;/td&gt;
&lt;td&gt;$1.052&lt;/td&gt;
&lt;td&gt;$12.62&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud Storage&lt;/td&gt;
&lt;td&gt;$0.00&lt;/td&gt;
&lt;td&gt;$0.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud Load Balancer&lt;/td&gt;
&lt;td&gt;$20.44&lt;/td&gt;
&lt;td&gt;$245.28&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud DNS&lt;/td&gt;
&lt;td&gt;$0.20&lt;/td&gt;
&lt;td&gt;$2.40&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud Firestore&lt;/td&gt;
&lt;td&gt;$0.00&lt;/td&gt;
&lt;td&gt;$0.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud Run Functions&lt;/td&gt;
&lt;td&gt;$0.00&lt;/td&gt;
&lt;td&gt;$0.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Actions&lt;/td&gt;
&lt;td&gt;$0.00&lt;/td&gt;
&lt;td&gt;$0.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total Cost&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$21.69/month&lt;/td&gt;
&lt;td&gt;$260.30/year&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
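The totals can be sanity-checked with simple arithmetic -- the domain is billed yearly upfront, so its monthly share is the price divided by 12:

```python
# Monthly estimates from the table above, in USD.
monthly = {
    "Personal Domain": 12.62 / 12,   # 1-year upfront, spread across 12 months
    "Cloud Storage": 0.00,
    "Cloud Load Balancer": 20.44,
    "Cloud DNS": 0.20,
    "Cloud Firestore": 0.00,
    "Cloud Run Functions": 0.00,
    "GitHub Actions": 0.00,
}

total_monthly = round(sum(monthly.values()), 2)
total_yearly = round(12.62 + 20.44 * 12 + 0.20 * 12, 2)

print(total_monthly, total_yearly)  # 21.69 260.3
```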

&lt;p&gt;I still have more than $200 in free credits, which expire in less than 90 days, so at least for now, I can keep everything running.&lt;/p&gt;

&lt;p&gt;If you're reading this and want to deploy something similar, I do have an alternative in mind that should bring your monthly cost to $0.00/month.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alternative
&lt;/h2&gt;

&lt;p&gt;Assuming you already bought a personal domain, this setup should reduce your monthly cost to &lt;strong&gt;$0.00&lt;/strong&gt;/month by removing both the Cloud Load Balancer and Cloud DNS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qa6pw2u842rus4m0u80.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qa6pw2u842rus4m0u80.webp" alt="Alternative architecture without Cloud Load Balancer &amp;amp; Cloud CDN" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How this works: you can redirect/forward your domain/subdomain to the public URL of the &lt;code&gt;index.html&lt;/code&gt; with your DNS management service. The trade-off for this is that the public URL will always be displayed after redirection.&lt;/p&gt;
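For reference, a public object's URL follows a predictable pattern, so the redirect target is easy to construct. A small sketch -- the bucket name below is a placeholder, not my actual bucket:

```python
def public_object_url(bucket: str, obj: str = "index.html") -> str:
    """Build the public Cloud Storage URL for an object in a public bucket."""
    return f"https://storage.googleapis.com/{bucket}/{obj}"

print(public_object_url("my-resume-bucket"))
# https://storage.googleapis.com/my-resume-bucket/index.html
```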

&lt;p&gt;Ultimately, it's about deciding what you'll compromise on, or which trade-offs you'll be comfortable with -- especially if you want to avoid paying every month just to host a resume.&lt;/p&gt;

&lt;h1&gt;
  
  
  What I've learned
&lt;/h1&gt;

&lt;p&gt;This was an exciting journey. I had already planned on doing this a few months ago, but couldn't bring myself to proceed because I felt the challenge was unnecessary at the time.&lt;/p&gt;

&lt;p&gt;I was fortunate to see myself fail, learn, and grow in new ways -- especially on a new cloud platform. I've always wanted to grow beyond my current responsibilities, starting with automating more of my deployments.&lt;/p&gt;

&lt;p&gt;Failing with automation and monitoring (&lt;em&gt;there were moments I couldn't understand something&lt;/em&gt;) was essential preparation for becoming a DevOps or Site Reliability Engineer. I could clearly see my progress over time, one milestone being the successful deployment of my static website.&lt;/p&gt;

&lt;p&gt;I also applied as much as I could of what I've learned on my usual cloud platform: designing an architecture that is operational, secure, reliable, and performant, while optimizing for cost.&lt;/p&gt;

&lt;p&gt;I know this can be better and there's always room for improvement, but I'm proud to say that what I've learned here and now will help me become better at whatever comes next.&lt;/p&gt;

&lt;h1&gt;
  
  
  Next Steps
&lt;/h1&gt;

&lt;p&gt;I've actually already started preparing for the &lt;a href="https://cloud.google.com/learn/certification/cloud-engineer" rel="noopener noreferrer"&gt;Google Cloud Associate Cloud Engineer&lt;/a&gt; certification, since I want to broaden my skills even more while pushing myself further.&lt;/p&gt;

&lt;p&gt;Nevertheless, I'll keep going on this self-learning journey, focusing more on automation, monitoring, or anything related to DevOps or Site Reliability Engineering (&lt;em&gt;hehe&lt;/em&gt;).&lt;/p&gt;




&lt;p&gt;All in all, it was fun to fail and learn. &lt;/p&gt;

&lt;p&gt;Thanks for making it this far, I truly appreciate it!&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>googlecloud</category>
      <category>devops</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Is it worth taking HashiCorp's Terraform Associate exam?</title>
      <dc:creator>Shawn Gestupa</dc:creator>
      <pubDate>Fri, 25 Jul 2025 02:58:27 +0000</pubDate>
      <link>https://forem.com/smgestupa/is-it-worth-taking-hashicorps-terraform-associate-exam-4n2i</link>
      <guid>https://forem.com/smgestupa/is-it-worth-taking-hashicorps-terraform-associate-exam-4n2i</guid>
      <description>&lt;p&gt;Initially, I overlooked Terraform for the past few years, something I never imagined I'll be using. &lt;/p&gt;

&lt;p&gt;Over the past few months, though, I fell in love with it, and now I am proud to announce that I've passed the HashiCorp Certified: Terraform Associate (003) examination and am officially certified.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxj5wkvujhvxghxivw1m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxj5wkvujhvxghxivw1m.jpg" alt="HashiCorp Certified: Terraform Associate Certificate (003)" width="800" height="618"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As someone new to Terraform (just started this year), I aimed to get this certificate not because I wanted to, but out of necessity: I won't be able to use Terraform anymore due to the different tooling in my current team, so I took the initiative to apply what I've learned to the fullest and make it count as much as I can. So for me, &lt;strong&gt;I truly believe the Terraform Associate exam is worth it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I had a hard time learning Terraform's opinionated structure -- it's much more opinionated than the web frameworks I've used before. Like anyone starting with something new (&lt;em&gt;except those who are built different&lt;/em&gt;), I struggled, but over the next few months I started to get the hang of it and even found beauty in the way it handles automation for you and simplifies how you provision large infrastructure, making you love your job more (hopefully hehe).&lt;/p&gt;

&lt;p&gt;About the exam: it's 1 hour long and must be scheduled 48 hours in advance. The questions are less situational and lean more toward being "direct", so if you want to pass the exam, you can start with the &lt;a href="https://developer.hashicorp.com/terraform/docs" rel="noopener noreferrer"&gt;official Terraform documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Personally, I don't think you have to buy courses, but it would be hypocritical of me not to admit that I used one. In addition to courses, here are some resources that helped me pass the exam:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.pluralsight.com/courses/hashicorp-certified-terraform-associate" rel="noopener noreferrer"&gt;Pluralsight HashiCorp Certified Terraform Associate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.udemy.com/course/terraform-associate-practice-exam/" rel="noopener noreferrer"&gt;Bryan Krausen's HashiCorp Certified: Terraform Associate Practice Exam 2025&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.examtopics.com/exams/hashicorp/terraform-associate/" rel="noopener noreferrer"&gt;ExamTopics HashiCorp Terraform Associate Exam&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you don't want to read the Terraform documentation, you can start with the practice exams instead (which is what I did hehe).&lt;/p&gt;

&lt;p&gt;Beyond these resources, don't forget to keep learning Terraform, especially through hands-on practice; practical experience was one of the key factors that built my confidence. I'd also recommend taking notes if that works for you -- typing out what I was watching and hearing from the courses helped me remember.&lt;/p&gt;

&lt;p&gt;All in all, keep learning, believe in yourself, and be as confident as you can.&lt;/p&gt;

&lt;p&gt;Now, on to the next challenge! I'm aiming for a new AWS Associate certificate: either SysOps Administrator (soon to be CloudOps Engineer) or Developer.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>infrastructureascode</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>What started me to (re-)learn programming</title>
      <dc:creator>Shawn Gestupa</dc:creator>
      <pubDate>Thu, 15 Dec 2022 14:44:04 +0000</pubDate>
      <link>https://forem.com/smgestupa/what-started-me-to-re-learn-programming-5bke</link>
      <guid>https://forem.com/smgestupa/what-started-me-to-re-learn-programming-5bke</guid>
      <description>&lt;p&gt;Before I started to love programming, I stressed about how and where I should start learning: should I learn C++ or Java instead? Should I start on web development or game development?&lt;/p&gt;

&lt;p&gt;This is my first post, and I'll be writing about how I pushed myself to become a programmer.&lt;/p&gt;

&lt;h2&gt;
  🤔 How I started and stopped learning programming
&lt;/h2&gt;

&lt;p&gt;I always dreamt of making websites; I even dreamt of building something cool on the internet, to the point of considering making a game after playing so much AdventureQuest Worlds, despite having no experience in game development.&lt;/p&gt;

&lt;p&gt;Fast forward: I had my first exposure to programming in Grade 11. We were taught Visual Basic, and one of our final projects was to make a 2D game with it.&lt;/p&gt;

&lt;p&gt;Since this was my first exposure to a programming language, making a game was hard. I made one anyway, even though it looked simple, played badly, and was riddled with bugs and glitches. Still, the experience felt rewarding.&lt;/p&gt;

&lt;p&gt;Then in Grade 12, we were taught how to program an Arduino board. For me, it was extremely hard compared to making software or games, and I realized I'm bad at hardware programming. Nonetheless, the experience was amazing, even if the board was simply programmed to blink LEDs at intervals.&lt;/p&gt;

&lt;p&gt;Even after stepping into college and doing some simple web development for our final project, I still couldn't push myself to keep on learning or commit to a specific language.&lt;/p&gt;

&lt;p&gt;I was stuck thinking about where to start. The dilemma made me anxious, pushing me further away from my dream of becoming a programmer.&lt;/p&gt;

&lt;h2&gt;
  🎉 Open source to the rescue!
&lt;/h2&gt;

&lt;p&gt;After my 1st year of college, I had the idea to learn something related to web development since I was familiar with HTML, CSS, and JavaScript.&lt;/p&gt;

&lt;p&gt;I thought about making a Twitter bot that checks whether articles contain false information (too hard; I don't even know why I came up with it), or a website for looking up food information. I settled on the latter.&lt;/p&gt;

&lt;p&gt;Then it was time to choose a framework. I contemplated using &lt;a href="https://reactjs.org/" rel="noopener noreferrer"&gt;React&lt;/a&gt; because: 1. it is widely adopted; 2. it has an extensive ecosystem of libraries to choose from; and 3. experience in React could get me a job.&lt;/p&gt;

&lt;p&gt;However, I settled on &lt;a href="https://svelte.dev/" rel="noopener noreferrer"&gt;Svelte&lt;/a&gt;, because it caught my eye with how simple it is to learn and use, even though: 1. it is less popular; 2. it has fewer libraries to choose from; and 3. experience in Svelte likely won't get me a job.&lt;/p&gt;

&lt;p&gt;But I pushed on, because I was getting comfortable with the framework.&lt;/p&gt;

&lt;p&gt;After a month of developing the website, I started to love programming. The joy of making something you want to share felt like what I had been searching for all along, and the idea of open source helped me find it.&lt;/p&gt;

&lt;p&gt;It took me a few months to complete the website, but I did it. I didn't care how slow the website was, how convoluted the HTML was, or how much I struggled to understand how JSON and APIs work, because I was so eager to publish and share what I'd made with the internet. I didn't even know how components worked in Svelte or how to use them.&lt;/p&gt;

&lt;p&gt;After the website was complete, I hastily published its code on &lt;a href="https://gitlab.com/laazyCmd/unnamed-food-data-search" rel="noopener noreferrer"&gt;GitLab&lt;/a&gt;, then later on &lt;a href="https://github.com/laazyCmd/unnamed-food-data-search" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; after switching platforms.&lt;/p&gt;

&lt;p&gt;Publishing the code on a code-hosting platform helped me find the motivation to keep making and learning. I even found the confidence to contribute to other open source projects -- and even when my pull request was just a documentation or CSS fix, I was happy to have contributed.&lt;/p&gt;

&lt;p&gt;Now I'm happily publishing what I make on &lt;a href="https://github.com/laazyCmd" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. Even when a project is simple, I'm happy to share what I've made. I've also decided to publish my programming-related college projects, just to showcase what my group made.&lt;/p&gt;

&lt;p&gt;I even rewrote the first website I published and put the new version on &lt;a href="https://github.com/laazyCmd/food-product-search" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, after watching and reading various videos &amp;amp; articles on code efficiency and better standards (I'm generalizing here because I genuinely can't remember the specific resources, though their lessons stuck with me).&lt;/p&gt;

&lt;p&gt;I feel like I now have a better grasp of learning different technologies and programming languages.&lt;/p&gt;

&lt;p&gt;The idea of open source gave me a purpose. Pushing myself to explore different technologies pays off: it lets me contribute to more open source projects and exposes me to more opportunities.&lt;/p&gt;

&lt;h2&gt;
  👌 Conclusion
&lt;/h2&gt;

&lt;p&gt;Joining the open source community became an amazing experience. It slowly moved me out of my comfort zone and allowed me to appreciate the combined efforts of programmers to make something happen.&lt;/p&gt;

&lt;p&gt;If you want to learn programming, start with something you're passionate about or interested in. Of course, pick something easy first so you stay motivated to finish it; you can then apply what you've learned to your next project.&lt;/p&gt;

&lt;p&gt;Whoever you are, wherever you are, it's not too late to join the open source community; you may even find your purpose to keep learning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;tl;dr&lt;/strong&gt; I love open source!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>webdev</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
