<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Josh Kasuboski</title>
    <description>The latest articles on Forem by Josh Kasuboski (@kasuboski).</description>
    <link>https://forem.com/kasuboski</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F159537%2Fe5f981fd-2659-4a39-aa6a-c34aba0e07ca.jpg</url>
      <title>Forem: Josh Kasuboski</title>
      <link>https://forem.com/kasuboski</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kasuboski"/>
    <language>en</language>
    <item>
      <title>One API to Rule Them All: Building an OpenAI Gateway</title>
      <dc:creator>Josh Kasuboski</dc:creator>
      <pubDate>Thu, 01 May 2025 23:23:59 +0000</pubDate>
      <link>https://forem.com/kasuboski/one-api-to-rule-them-all-building-an-openai-gateway-349</link>
      <guid>https://forem.com/kasuboski/one-api-to-rule-them-all-building-an-openai-gateway-349</guid>
<description>&lt;p&gt;I only speak the OpenAI API now. Plus logging.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I Needed This
&lt;/h2&gt;

&lt;p&gt;I’ve been using different AI providers’ APIs for various projects, but I faced two main challenges:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Each provider has their own SDK and API format, requiring different code paths in my applications&lt;/li&gt;
&lt;li&gt;I wanted better visibility into what was happening with my requests across all providers&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Cloudflare’s AI Gateway provides great logging out of the box, but I still had the problem of juggling multiple SDKs and API formats. I wanted a single, consistent interface that would work with any AI provider while giving me all the logging benefits.&lt;/p&gt;

&lt;p&gt;That’s why I created &lt;a href="https://github.com/kasuboski/openai-gateway" rel="noopener noreferrer"&gt;openai-gateway&lt;/a&gt; - a service that exposes an OpenAI-compatible API but can route to different AI providers behind the scenes.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;The concept is straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Your application sends requests to my gateway using the OpenAI API format&lt;/li&gt;
&lt;li&gt;The gateway authenticates your request with a simple API key&lt;/li&gt;
&lt;li&gt;It translates the request if needed and forwards it through Cloudflare AI Gateway to the appropriate provider (currently supporting Gemini)&lt;/li&gt;
&lt;li&gt;The response comes back, gets converted to OpenAI format if necessary, and returns to your application&lt;/li&gt;
&lt;li&gt;Cloudflare logs everything along the way&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The whole thing runs as a Cloudflare Worker using Hono, making it lightweight and globally distributed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Using it is as simple as changing your base URL. If you’re using the OpenAI SDK, it’s just:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const openai = new OpenAI({
 apiKey: "your-api-key",
 baseURL: "https://your-gateway-url/v1"
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or with curl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://your-gateway-url/v1/chat/completions \
 -H "Content-Type: application/json" \
 -H "Authorization: Bearer your-api-key" \
 -d '{
 "model": "gemini/gemini-pro",
 "messages": [{ "role": "user", "content": "Hello!" }]
 }'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that even though we’re using Gemini as the provider, we’re still using the OpenAI SDK and API format. The model name &lt;code&gt;gemini/gemini-pro&lt;/code&gt; tells the gateway which provider and model to use.&lt;/p&gt;
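
&lt;p&gt;A rough sketch of that provider/model split (the shell function below is purely illustrative, not the gateway’s actual code):&lt;/p&gt;

```shell
# Hypothetical sketch: split a "provider/model" name into its two parts,
# the way the gateway picks a provider from the model name.
parse_model() {
  local full="$1"
  # text before the first "/" is the provider, the rest is the model
  printf '%s %s\n' "${full%%/*}" "${full#*/}"
}

parse_model "gemini/gemini-pro"   # prints: gemini gemini-pro
```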

&lt;h2&gt;
  
  
  What You See in Cloudflare
&lt;/h2&gt;

&lt;p&gt;This is where the magic happens. The Cloudflare dashboard gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complete request and response logging&lt;/li&gt;
&lt;li&gt;Token counts for each request&lt;/li&gt;
&lt;li&gt;Cost estimates based on your usage (I was using a free tier key in the screenshot)&lt;/li&gt;
&lt;li&gt;Response times and error rates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All without changing how your application works or adding any custom logging code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9w6v1wiffvkdmu4jj6s7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9w6v1wiffvkdmu4jj6s7.png" alt="cloudflare-dashboard" width="800" height="99"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;p&gt;Building this was surprisingly simple. The entire project is just a few hundred lines of code, but it solves a real problem I was having.&lt;/p&gt;

&lt;p&gt;I’ve liked making these small “glue” projects. They don’t need to be complex to be useful. Having many small tools also makes it easier for AI coding assistants to work on them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next
&lt;/h2&gt;

&lt;p&gt;I’ve already implemented Gemini as the first provider, and I’m planning to expand support to include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More AI providers, like Anthropic&lt;/li&gt;
&lt;li&gt;More Cloudflare AI Gateway features, such as caching&lt;/li&gt;
&lt;li&gt;Request fallbacks that try another model if the first fails&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Eventually it could become something more like OpenRouter.&lt;/p&gt;

&lt;p&gt;But for now, it does exactly what I need: visibility without complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;If you want to set up your own gateway, clone the repo and run it locally first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/kasuboski/openai-gateway.git
npm install
npm run dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or just star the repo and follow along as I add more features!&lt;/p&gt;

</description>
      <category>cloudflare</category>
      <category>openai</category>
      <category>llm</category>
    </item>
    <item>
      <title>Building a Python Docker Image with Distroless and Uv</title>
      <dc:creator>Josh Kasuboski</dc:creator>
      <pubDate>Sat, 08 Mar 2025 19:03:13 +0000</pubDate>
      <link>https://forem.com/kasuboski/building-a-python-docker-image-with-distroless-and-uv-1clg</link>
      <guid>https://forem.com/kasuboski/building-a-python-docker-image-with-distroless-and-uv-1clg</guid>
<description>&lt;p&gt;The images are too damn big! 📈 Let’s use a sane project manager and build an image with minimal dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tooling
&lt;/h2&gt;

&lt;p&gt;This assumes you use &lt;a href="https://docs.astral.sh/uv/" rel="noopener noreferrer"&gt;uv&lt;/a&gt; to manage your Python project. It’s really made me consider actually using Python now… Uv has a pretty nice Docker tutorial on GitHub that we’re going to base ours on. You can find it &lt;a href="https://github.com/astral-sh/uv-docker-example" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It’s an example fastapi project that returns &lt;code&gt;hello world&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s different?
&lt;/h2&gt;

&lt;p&gt;Their example includes a standalone multi-stage build that is pretty good. The standalone aspect comes from letting uv install its own version of Python to match your project. The multi-stage build ensures you end up with a base Debian image containing just Python and your app.&lt;/p&gt;

&lt;p&gt;I wanted to avoid including the app’s source code separately, since it’ll be in the virtualenv anyway, and to get rid of Debian.&lt;/p&gt;

&lt;p&gt;So we’ll base the final image on a Google distroless image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Breaking down the Dockerfile
&lt;/h2&gt;

&lt;p&gt;If you’re impatient, you can just read the Dockerfile. The first new thing is that the builder installs its own version of Python. Two environment variables tell uv to use only the Python it manages.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;UV_PYTHON_INSTALL_DIR&lt;/code&gt; sets the directory where the Python version will be installed, and &lt;code&gt;UV_PYTHON_PREFERENCE=only-managed&lt;/code&gt; tells uv to use only the Python it manages, ignoring any system interpreter.&lt;/p&gt;

&lt;p&gt;It then actually installs Python. With Python in place, we can install the project dependencies. Uv can install just the dependencies with &lt;code&gt;--no-install-project&lt;/code&gt;; it only needs the lock file and &lt;code&gt;pyproject.toml&lt;/code&gt; for this step. Installing the dependencies separately gives better caching, since you probably change dependencies less often than code.&lt;/p&gt;

&lt;p&gt;After dependencies, we can copy the app and install the rest. Using &lt;code&gt;--no-editable&lt;/code&gt; tells uv to not install the project with any dependency on the source code. Then our final image can be created from just the virtualenv.&lt;/p&gt;

&lt;p&gt;The final runtime image is &lt;code&gt;gcr.io/distroless/cc&lt;/code&gt;. This is a Google &lt;a href="https://github.com/GoogleContainerTools/distroless?tab=readme-ov-file" rel="noopener noreferrer"&gt;distroless&lt;/a&gt; image that’s a little smaller than &lt;code&gt;debian:slim&lt;/code&gt;. Some of your Python dependencies might still expect glibc at runtime, so we use the &lt;code&gt;cc&lt;/code&gt; variant rather than &lt;code&gt;static&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The only thing we need then is to copy the python we installed with uv and the virtualenv. Adding the virtualenv path lets us reference any of the tools like fastapi.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ghcr.io/astral-sh/uv:bookworm-slim AS builder
ENV UV_COMPILE_BYTECODE=1 UV_LINK_MODE=copy

# Configure the Python directory so it is consistent
ENV UV_PYTHON_INSTALL_DIR=/python

# Only use the managed Python version
ENV UV_PYTHON_PREFERENCE=only-managed

# Install Python before the project for caching
RUN uv python install 3.12

WORKDIR /app
RUN --mount=type=cache,target=/root/.cache/uv \
 --mount=type=bind,source=uv.lock,target=uv.lock \
 --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
 uv sync --frozen --no-install-project --no-dev --no-editable
COPY . /app
RUN --mount=type=cache,target=/root/.cache/uv \
 uv sync --frozen --no-dev --no-editable

# Then, use a final image without uv
FROM gcr.io/distroless/cc

# Copy the Python version
COPY --from=builder --chown=python:python /python /python

WORKDIR /app
# Copy the application from the builder
COPY --from=builder --chown=app:app /app/.venv /app/.venv

# Place executables in the environment at the front of the path
ENV PATH="/app/.venv/bin:$PATH"

# Run the FastAPI application by default
CMD ["fastapi", "run", "--host", "0.0.0.0", "/app/.venv/lib/python3.12/site-packages/uv_docker_example"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  But why?
&lt;/h2&gt;

&lt;p&gt;Making the smallest image can help with startup time, since there is less to download and load before you get to your app. For me, though, the biggest reason to go for a smaller image is to reduce the vulnerability surface. You can’t have vulnerabilities if there’s nothing there to be vulnerable.&lt;/p&gt;

&lt;p&gt;Python isn’t super conducive to an ultra-minimal image. The Python interpreter is around 75MB, and our virtualenv is around 55MB, so the smallest image we could hope for is about 130MB. The distroless base adds only 23.5MB, a small fraction of the overall size.&lt;/p&gt;
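
&lt;p&gt;The same back-of-the-envelope math as a quick sanity check (all sizes approximate, from this post):&lt;/p&gt;

```shell
# Approximate sizes in MB
python_mb=75   # managed Python interpreter
venv_mb=55     # project virtualenv
base_mb=24     # gcr.io/distroless/cc base, rounded from 23.5

echo "floor without base: $((python_mb + venv_mb)) MB"
echo "estimated total:    $((python_mb + venv_mb + base_mb)) MB"
```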

&lt;p&gt;The &lt;code&gt;debian:bookworm-slim&lt;/code&gt; image is around 75MB. It also includes things like bash, which can be nice for debugging but also widens the attack surface.&lt;/p&gt;

&lt;p&gt;It is worth noting that, when I was doing this, &lt;code&gt;debian:bookworm-slim&lt;/code&gt; had no vulnerabilities reported by &lt;code&gt;trivy&lt;/code&gt;. The base images are generally kept up to date with patches, but, by their nature, more vulnerabilities are found in them over time. If you rebase your images often, this may not matter to you.&lt;/p&gt;

&lt;p&gt;I previously explored continuously scanning images using &lt;a href="https://www.joshkasuboski.com/posts/image-scanning-trivy/" rel="noopener noreferrer"&gt;&lt;code&gt;trivy&lt;/code&gt;&lt;/a&gt; with GitHub Actions in a separate post.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Talos Cluster</title>
      <dc:creator>Josh Kasuboski</dc:creator>
      <pubDate>Wed, 30 Oct 2024 19:46:23 +0000</pubDate>
      <link>https://forem.com/kasuboski/talos-cluster-3pi6</link>
      <guid>https://forem.com/kasuboski/talos-cluster-3pi6</guid>
      <description>&lt;p&gt;Switching to a Kubernetes OS… Make the machine manage itself 🤖.&lt;/p&gt;

&lt;h2&gt;
  
  
  You what
&lt;/h2&gt;

&lt;p&gt;I’ve now migrated just about everything from my old &lt;a href="https://www.joshkasuboski.com/posts/multi-region-k3s/" rel="noopener noreferrer"&gt;k3s cluster&lt;/a&gt; to a cluster run by &lt;a href="https://www.talos.dev/" rel="noopener noreferrer"&gt;Talos&lt;/a&gt;. From the Talos website, “Talos Linux is Linux designed for Kubernetes – secure, immutable, and minimal.” It has only what is needed for kubernetes and nothing else. Talos also claims to manage kubernetes operations for you (or at least expose commands for you to do it).&lt;/p&gt;

&lt;p&gt;There is no SSH, only a gRPC API. You get to install yet another &lt;code&gt;*ctl&lt;/code&gt; on your machine: &lt;code&gt;talosctl&lt;/code&gt;. It applies declarative config files that manage the system. Most of my setup is checked in to my existing repo &lt;a href="https://github.com/kasuboski/k8s-gitops" rel="noopener noreferrer"&gt;kasuboski/k8s-gitops&lt;/a&gt;. Depending on when you’re reading this, it might still be on its own branch, &lt;code&gt;feature/talos&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The migration
&lt;/h2&gt;

&lt;p&gt;Just changing the kubernetes distribution surely wasn’t enough to keep me interested. I also needed to drastically change the nodes, ingress, and templating /s.&lt;/p&gt;

&lt;p&gt;My previous k3s cluster had problems running a Cloudflare tunnel consistently. I had ended up moving it out of the cluster until I figured out what was going on. While I was looking for a project to do it automatically, I found the &lt;a href="https://github.com/pl4nty/cloudflare-kubernetes-gateway" rel="noopener noreferrer"&gt;cloudflare-kubernetes-gateway&lt;/a&gt; project. It implements the Gateway API instead of Ingress and will set up and manage a Cloudflare tunnel for you. I was surprised how much this just worked after giving it a Cloudflare API key.&lt;/p&gt;

&lt;p&gt;I also moved my local ingress from an nginx ingress controller to the &lt;a href="https://gateway.envoyproxy.io/" rel="noopener noreferrer"&gt;envoy gateway&lt;/a&gt;. It implements the Gateway API by spinning up Envoy pods. I had MetalLB deployed in my old cluster, but never really used it for anything. Now my Envoy Gateway gets an IP address on my home network.&lt;/p&gt;
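
&lt;p&gt;For flavor, a minimal HTTPRoute of the kind the Gateway API uses looks roughly like this (the names and hostname below are placeholders, not my actual config):&lt;/p&gt;

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: feedreader
spec:
  parentRefs:
    - name: envoy-gateway      # the Gateway object the controller watches
  hostnames:
    - "feedreader.example.com"
  rules:
    - backendRefs:
        - name: feedreader     # the backing Service
          port: 80
```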

&lt;p&gt;This let me run split DNS: if you’re on my network, my domain routes to the Envoy gateway, but from outside you are routed through the Cloudflare tunnel. I had generally been too chicken to mess with that before, as DNS is always the problem. Indeed, I’m still not sure I have the correct setup on my MacBook, where Tailscale MagicDNS likes to overwrite my settings.&lt;/p&gt;

&lt;p&gt;My home network DNS points to &lt;a href="https://github.com/ori-edge/k8s_gateway" rel="noopener noreferrer"&gt;k8s_gateway&lt;/a&gt; running in the cluster, which sets the records for the domain to the MetalLB IP of the Envoy Gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="talos-cluster.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30dkg7fjidts3of6y4o9.png" alt="talos architecture" width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Hardware Changes
&lt;/h2&gt;

&lt;p&gt;I had stopped running my Raspberry Pis a while ago since I didn’t really have a use for them. Everything I needed to run was fine on just the mini PC or my NAS. Talos promises to make an HA control plane easier to manage, though, so I figured now’s the time.&lt;/p&gt;

&lt;p&gt;I now have three Raspberry Pi 4s (4GB) running the kubernetes control plane, with etcd on a USB thumb drive. Another Raspberry Pi 4 (8GB) is a worker. The HP mini PC is another worker; it has a bigger drive to store the SQLite databases for the various media apps.&lt;/p&gt;

&lt;p&gt;The Raspberry Pi worker is nice for various apps that don’t need storage. For example, ArgoCD and the cluster gateway controllers can run there. I would like to set up replicated storage so more things can move around, but that’s more for fun than actual need.&lt;/p&gt;

&lt;p&gt;My free Oracle VM is still outside the cluster as I didn’t want to deal with indirect connectivity while also learning about Talos. I may try and add it back in now. Talos has a &lt;a href="https://www.talos.dev/v1.8/talos-guides/network/kubespan" rel="noopener noreferrer"&gt;KubeSpan&lt;/a&gt; feature that seems like it should make it fine. I am running the Tailscale extension on all of the nodes so far anyway so they should have connectivity.&lt;/p&gt;

&lt;p&gt;&lt;a href="server-rack.jpg"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fokl7u9oa0f0ke63e8ms2.jpg" alt="Server rack" width="800" height="1062"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Templating Changes
&lt;/h2&gt;

&lt;p&gt;I switched the repo to use &lt;a href="https://cuelang.org/" rel="noopener noreferrer"&gt;Cuelang&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Previously, I was already handcrafting most of the yaml and then using kustomize to tie it together. There were a few upstreams that had kustomizations I could pull in and then some helm charts I would manually run &lt;code&gt;helm template&lt;/code&gt; on.&lt;/p&gt;

&lt;p&gt;Now I have a wrapper that reads a &lt;code&gt;vendor.cue&lt;/code&gt; file and will either download a YAML file from a URL or run kustomize against a path. It then imports the YAML into Cue, and I can use it in my configuration from there. The rest of my repo is structured to match how ArgoCD apps work: resources are assigned to an app that is then tied to a namespace.&lt;/p&gt;

&lt;p&gt;If you look in the repo you’ll see a &lt;code&gt;manifests/&lt;/code&gt; folder that is the output of the Cue files as JSON. ArgoCD is then set up to watch this folder. I’d like to eventually get rid of this arrangement and store the manifests in an OCI registry instead. In the short time this has been set up, I still constantly forget to regenerate that manifests folder. I don’t know if I’ll modify ArgoCD to work with that setup or do something else. I could perhaps &lt;em&gt;gasp&lt;/em&gt; generate helm charts that are stored in OCI.&lt;/p&gt;

&lt;p&gt;I’ve liked working with Cue so far. I wish the autocomplete setup in VSCode was better, but I also usually only need to look up a field once since everything else can inherit from a base.&lt;/p&gt;

&lt;h2&gt;
  
  
  Moving Forward
&lt;/h2&gt;

&lt;p&gt;I like the setup so far. I’m still worried about etcd burning through those USB drives, so I should probably be setting up backups… I’ve already alluded to my other planned changes: I want to get the free Oracle VM added to the cluster, and I want to get rid of the &lt;code&gt;manifests&lt;/code&gt; folder that I need to remember to update.&lt;/p&gt;

&lt;p&gt;Maybe further down the line I’ll try out &lt;a href="https://kubevirt.io/" rel="noopener noreferrer"&gt;kubevirt&lt;/a&gt; to mess with development VMs. I’ve also been looking at running &lt;a href="https://piraeus.io/" rel="noopener noreferrer"&gt;Piraeus&lt;/a&gt; since it seems like I could have replicated storage with only 2 nodes and have it be region and zone aware.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>talos</category>
      <category>gitops</category>
    </item>
    <item>
      <title>Running a Buildkit ARM Builder</title>
      <dc:creator>Josh Kasuboski</dc:creator>
      <pubDate>Sun, 08 May 2022 20:34:19 +0000</pubDate>
      <link>https://forem.com/kasuboski/running-a-buildkit-arm-builder-2dnd</link>
      <guid>https://forem.com/kasuboski/running-a-buildkit-arm-builder-2dnd</guid>
<description>&lt;p&gt;I was sick of my hour-long ARM docker builds. A 15x speedup using existing infrastructure isn't bad.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;I build my feedreader for ARM in GitHub Actions. The workflow builds a multi-arch docker image on push. The x86 build was pretty quick, but the ARM build ran under QEMU, which made it take around an hour. Emulation certainly didn't help, but the GitHub Actions runners also aren't exactly the biggest machines at 2 vCPUs and 7GB of RAM.&lt;/p&gt;

&lt;p&gt;My build was using the &lt;a href="https://github.com/docker/setup-buildx-action"&gt;docker buildx&lt;/a&gt; action. This makes the build use the newish buildkit backend for docker, but it's still running on the actions runner. I wanted to see if I could run my own buildkit backend. There was the option to connect to a remote docker endpoint or a kubernetes cluster, neither of which really appealed to me, although exposing the docker daemon over Tailscale could be fun.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;Right as I was looking to run my own &lt;code&gt;buildkitd&lt;/code&gt;, &lt;code&gt;buildx&lt;/code&gt; had a PR merged that would enable a remote builder driver. This lets you run &lt;code&gt;buildkitd&lt;/code&gt; somewhere and expose it over TCP. My &lt;a href="https://www.joshkasuboski.com/posts/multi-region-k3s/"&gt;kubernetes cluster&lt;/a&gt; has a free ARM node from Oracle that is pretty big (4 CPUs, 24GB of RAM). It's usually nowhere near fully utilized.&lt;/p&gt;

&lt;p&gt;Running a builder on it seemed like a great way to use the excess resources. Combined with &lt;code&gt;tailscale&lt;/code&gt; and the recommended mTLS auth I could have a rather secure build runner on my existing infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting it up
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/moby/buildkit#expose-buildkit-as-a-tcp-service"&gt;buildkit&lt;/a&gt; repo has instructions for running it over TCP. There is also an &lt;a href="https://github.com/moby/buildkit/tree/master/examples/kubernetes#deployment--service"&gt;example&lt;/a&gt; that shows how to run it in kubernetes with a deployment. I chose the deployment and service option vs a statefulset with consistent hashing because I was planning to use registry caching anyway and don't have immediate plans for many different builds to use this.&lt;/p&gt;

&lt;p&gt;I decided to expose it with Tailscale using the same &lt;a href="https://www.joshkasuboski.com/posts/tailscale-connect-kubernetes-pods/"&gt;process&lt;/a&gt; I had previously used for my feedreader. This means connecting to it requires you be on my tailnet (authenticated with Tailscale).&lt;/p&gt;

&lt;p&gt;In addition to requiring that you be authenticated with Tailscale, the doc still recommends mTLS because build steps running in the builder could potentially access the daemon as well. The example has a script to set up the certs for you, but I wanted to use the step CLI from &lt;a href="https://smallstep.com/"&gt;Smallstep&lt;/a&gt;. It's still very simple, but I could control exactly what is set up.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Certificates
&lt;/h3&gt;

&lt;p&gt;The first step to run &lt;code&gt;buildkitd&lt;/code&gt; was to create the certificates it wants. I decided to make a Root CA for this along with an Intermediate CA and then server and client certificates. I didn't spend too long debating this and just followed a Smallstep guide…&lt;/p&gt;

&lt;p&gt;Creating the CA&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;step certificate create --profile root-ca "Buildkit Root CA" root_ca.crt root_ca.key

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creating the Intermediate CA&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;step certificate create "Buildkit Intermediate CA 1" \
 intermediate_ca.crt intermediate_ca.key \
 --profile intermediate-ca --ca ./root_ca.crt --ca-key ./root_ca.key

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creating the server cert&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;step certificate create buildkitd --san buildkitd --san localhost --san 127.0.0.1 buildkitd.crt buildkitd.key \
 --profile leaf --not-after=8760h \
 --ca ./intermediate_ca.crt --ca-key ./intermediate_ca.key --bundle --no-password --insecure

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creating the client cert&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;step certificate create client client.crt client.key \
 --profile leaf --not-after=8760h \
 --ca ./intermediate_ca.crt --ca-key ./intermediate_ca.key --bundle --no-password --insecure

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll notice the server cert has a &lt;code&gt;buildkitd&lt;/code&gt; SAN, which is how I'll access it over Tailscale. The &lt;code&gt;localhost&lt;/code&gt; and &lt;code&gt;127.0.0.1&lt;/code&gt; SANs were for testing while port forwarding to the cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running the Server
&lt;/h3&gt;

&lt;p&gt;You can find the example kubernetes yaml &lt;a href="https://github.com/moby/buildkit/blob/master/examples/kubernetes/deployment%2Bservice.rootless.yaml"&gt;here&lt;/a&gt;. It expects a kubernetes secret with &lt;code&gt;ca.pem&lt;/code&gt; and &lt;code&gt;key.pem&lt;/code&gt; keys. You can generate it with the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret generic buildkit-daemon-certs --from-file=key.pem=buildkitd.key --from-file=ca.pem=root_ca.crt --dry-run=client -oyaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My actual deployment can be found in &lt;a href="https://github.com/kasuboski/k8s-gitops/tree/main/builder/buildkit"&gt;kasuboski/k8s-gitops&lt;/a&gt;. It includes the &lt;a href="https://github.com/kasuboski/tailscale-proxy"&gt;tailscale-proxy&lt;/a&gt; as well as a &lt;code&gt;nodeSelector&lt;/code&gt; to make sure it schedules on the ARM node. It requests 1 CPU and 512Mi of memory, with the limit set to 3.5 CPUs and 3Gi. That's more CPU than GitHub Actions offers, and it isn't emulated. The memory is less, but that hasn't been an issue.&lt;/p&gt;
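
&lt;p&gt;That resource stanza looks roughly like this in the container spec (values from above; the rest of the deployment is elided):&lt;/p&gt;

```yaml
resources:
  requests:
    cpu: "1"
    memory: 512Mi
  limits:
    cpu: 3500m     # 3.5 CPUs
    memory: 3Gi
```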

&lt;p&gt;Once the server is running, it will be available on the tailnet as &lt;code&gt;buildkitd&lt;/code&gt;, since the proxy uses the deployment name.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting as a Client
&lt;/h3&gt;

&lt;p&gt;The client needs access to the tailnet and a client cert. The easiest way to test the connection is with &lt;code&gt;buildctl&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;buildctl --addr 'tcp://buildkitd:1234' \
 --tlscacert root_ca.crt \
 --tlscert client.crt \
 --tlskey client.key \
 build --frontend dockerfile.v0 --local context=. --local dockerfile=.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Building on multiple platforms with different builders requires &lt;code&gt;docker buildx&lt;/code&gt;. The remote driver is on &lt;code&gt;master&lt;/code&gt;, but isn't in a release yet. You can build buildx yourself to get access to the feature, but I only used it from GitHub Actions.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/docker/setup-buildx-action"&gt;setup buildx action&lt;/a&gt; has the option to build buildx from a specific commit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running in Github Actions
&lt;/h3&gt;

&lt;p&gt;If you want to skip to the workflow it's at &lt;a href="https://github.com/kasuboski/feedreader/blob/696debe2da1d26f1e4047806ff5e1f5ca5fbe347/.github/workflows/ci.yaml"&gt;kasuboski/feedreader&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The workflow needs secrets for Tailscale and the certificates. I use &lt;a href="https://doppler.com/join?invite=390F66AC"&gt;Doppler&lt;/a&gt; &lt;em&gt;(referral link)&lt;/em&gt; to manage the secrets. It synced super fast and has a nicer interface than managing them per repo in GitHub, IMO.&lt;/p&gt;

&lt;p&gt;Tailscale has a GitHub Action that will install and set it up given an auth key. They support ephemeral auth keys, so you won't have a bunch of leftover machines in their system. Once installed, your workflow has access to your tailnet and can reach &lt;code&gt;buildkitd&lt;/code&gt;. It's worth noting DNS magically works thanks to &lt;a href="https://tailscale.com/kb/1081/magicdns/"&gt;MagicDNS&lt;/a&gt;. Connecting to a kubernetes pod with a nice name and no other network setup is life-changing.&lt;/p&gt;

&lt;p&gt;I had problems using the remote buildx driver with a different builder type. I ended up just running another &lt;code&gt;buildkitd&lt;/code&gt; on the actions runner. In the future, I'd like to run an x86 builder on one of my nodes.&lt;/p&gt;

&lt;p&gt;That's set up following inspiration from the buildx tests. This builder doesn't have mTLS set up, but I guess I'm fine for now since it's an ephemeral runner on GitHub's infrastructure 🤷‍♂️.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name buildkitd --privileged -p 1234:1234 moby/buildkit:buildx-stable-1 --addr tcp://0.0.0.0:1234
docker buildx create --name gh-builder --driver remote --use tcp://0.0.0.0:1234
docker buildx inspect --bootstrap

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My ARM builder is added afterward, as below. The certs have already been written to disk from the GitHub secrets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker buildx create --append --name gh-builder \
 --node arm \
 --driver remote \
 --driver-opt key="$GITHUB_WORKSPACE/key.pem" \
 --driver-opt cert="$GITHUB_WORKSPACE/client_cert.pem" \
 --driver-opt cacert="$GITHUB_WORKSPACE/ca_cert.pem" \
 tcp://buildkitd:1234
docker buildx ls

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;docker buildx ls&lt;/code&gt; output should then show a &lt;code&gt;gh-builder&lt;/code&gt; with two nodes, one supporting &lt;code&gt;amd64&lt;/code&gt; and the other &lt;code&gt;arm&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="//buildx-ls.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BRzEVwWP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.joshkasuboski.com/posts/buildkit-builder/buildx-ls.png" alt="Docker Buildx List" width="880" height="79"&gt; &lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After setting up the builder, the workflow went from an hour to under four minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;Building on my excess capacity has been great, and I want to add an x86 node as well. I could have run my own GitHub runners, but that seems much more involved. All of my repos are public as well, so I figure I might as well use the free Actions minutes.&lt;/p&gt;

&lt;p&gt;I do want to potentially make a service that will just give you a remote buildkit builder on demand. It's particularly helpful for ARM builds since those can be slow in emulation.&lt;/p&gt;

&lt;p&gt;I also looked into the cross-compilation options, but just getting a native builder seemed easier and more flexible. Your &lt;code&gt;Dockerfile&lt;/code&gt; still can't explicitly download binaries for a specific architecture, but otherwise most Dockerfiles should build multi-arch with this setup.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>ci</category>
      <category>docker</category>
      <category>github</category>
    </item>
    <item>
      <title>Connect to Kubernetes Pods with Tailscale</title>
      <dc:creator>Josh Kasuboski</dc:creator>
      <pubDate>Sun, 03 Apr 2022 18:52:19 +0000</pubDate>
      <link>https://forem.com/kasuboski/connect-to-kubernetes-pods-with-tailscale-65l</link>
      <guid>https://forem.com/kasuboski/connect-to-kubernetes-pods-with-tailscale-65l</guid>
      <description>&lt;p&gt;I wasn't ready to add auth to my new feedreader. So I built a moat with Tailscale.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does that &lt;em&gt;mean&lt;/em&gt;?
&lt;/h2&gt;

&lt;p&gt;I added a kubernetes pod to my tailnet so it's accessible from anywhere that can route to Tailscale nodes. It also gets a nice domain name so &lt;code&gt;http://feedreader&lt;/code&gt; works.&lt;/p&gt;

&lt;p&gt;Tailscale actually has a &lt;a href="https://tailscale.com/blog/kubecon-21/"&gt;blog post&lt;/a&gt; and &lt;a href="https://github.com/tailscale/tailscale/tree/main/docs/k8s"&gt;example&lt;/a&gt; for how to set this up. I wanted it to be mildly different, so I modified their run script. They also don't publish their example image, so I needed to build one.&lt;/p&gt;

&lt;p&gt;You can see my version at &lt;a href="https://github.com/kasuboski/tailscale-proxy"&gt;kasuboski/tailscale-proxy&lt;/a&gt;. The main difference is that it takes &lt;code&gt;HOSTNAME&lt;/code&gt; and &lt;code&gt;DEST_PORT&lt;/code&gt; parameters.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;HOSTNAME&lt;/code&gt; is so you can set the name the node will show up as. This was important for kubernetes because I don't want the generated name of the pod to be how I access it. &lt;code&gt;DEST_PORT&lt;/code&gt; is so you can have it forward to a different port. This allows you to run your app on &lt;code&gt;8080&lt;/code&gt;, but the &lt;code&gt;tailscale-proxy&lt;/code&gt; will route any port to &lt;code&gt;8080&lt;/code&gt;, meaning you can hit it on &lt;code&gt;80&lt;/code&gt; in your browser. This avoids &lt;code&gt;:8080&lt;/code&gt; ugliness after your URL.&lt;/p&gt;
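&lt;p&gt;As a rough sketch, the forwarding boils down to an &lt;code&gt;iptables&lt;/code&gt; REDIRECT rule built from &lt;code&gt;DEST_PORT&lt;/code&gt;. The exact rule in the run script may differ; this just shows the shape, printing the command instead of executing it (applying it for real needs NET_ADMIN in the container):&lt;/p&gt;

```shell
# Build the redirect rule from DEST_PORT (default 8080).
# Any TCP port hitting the pod gets redirected to the app port.
DEST_PORT="${DEST_PORT:-8080}"
RULE="iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port ${DEST_PORT}"
echo "${RULE}"
```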

&lt;h2&gt;
  
  
  But Why?
&lt;/h2&gt;

&lt;p&gt;I've been making a &lt;a href="https://github.com/kasuboski/feedreader"&gt;feedreader&lt;/a&gt; to replace my running &lt;a href="https://miniflux.app/"&gt;miniflux&lt;/a&gt;. It's my first &lt;em&gt;real&lt;/em&gt; rust project and I wanted an even more minimal feedreader.&lt;/p&gt;

&lt;p&gt;I haven't gotten around to figuring out users or authentication in the feedreader though. Despite this, it finally reached the point where I can use it as my main feedreader. However, I didn't exactly want an unauthenticated app hanging out on the internet for someone to ruin my day.&lt;/p&gt;

&lt;p&gt;If you don't want to make authentication, just make it inaccessible. This is where &lt;a href="https://tailscale.com/"&gt;Tailscale&lt;/a&gt; comes in. I already run Tailscale on all the nodes, which is how I'm able to have a &lt;a href="https://www.joshkasuboski.com/posts/multi-region-k3s/"&gt;multi-region k3s cluster&lt;/a&gt;. That doesn't make my pods routable though.&lt;/p&gt;

&lt;p&gt;Tailscale has an option for a &lt;a href="https://github.com/tailscale/tailscale/tree/main/docs/k8s#subnet-router"&gt;subnet router&lt;/a&gt; that is actually highlighted as how to access all things k8s in the examples. This probably would have been nice (and I might still add it), but I believe I wouldn't automatically get DNS routing.&lt;/p&gt;

&lt;p&gt;You can see how it's all put together in my &lt;a href="https://github.com/kasuboski/k8s-gitops/blob/main/default/feedreader/add-tailscale-proxy.yaml"&gt;k8s-gitops repo&lt;/a&gt;. The gist is that you add a sidecar container that starts tailscale and, in my case, adds an &lt;code&gt;iptables&lt;/code&gt; rule which forwards all traffic to the app port. You can see my poor Doppler secret naming in there too, where everything is using &lt;code&gt;miniflux-secret&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, no more auth?
&lt;/h2&gt;

&lt;p&gt;I still want to add users to the feedreader so that you can have multiple users on one instance. It remains to be seen whether I'll just add basic auth or something more complicated. Tailscale let me focus on getting something usable quickly though.&lt;/p&gt;

&lt;p&gt;After seeing how convenient it was to expose a pod with a dns name, I also want to make a debugging tool injecting tailscale into pods. I could then finally be rid of the finicky &lt;code&gt;kubectl port-forward&lt;/code&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>selfhost</category>
      <category>tailscale</category>
      <category>devops</category>
    </item>
    <item>
      <title>Multi-Region K3s</title>
      <dc:creator>Josh Kasuboski</dc:creator>
      <pubDate>Thu, 11 Nov 2021 21:56:57 +0000</pubDate>
      <link>https://forem.com/kasuboski/multi-region-k3s-4jdd</link>
      <guid>https://forem.com/kasuboski/multi-region-k3s-4jdd</guid>
      <description>&lt;p&gt;I had stood up a cluster in the cloud before moving. Now, I've added some nodes at home to round it out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why not just the cloud?
&lt;/h2&gt;

&lt;p&gt;As previously mentioned in &lt;a href="https://www.joshkasuboski.com/posts/moving-cluster-to-the-cloud/"&gt;moving my cluster to the cloud&lt;/a&gt;, I run a VM for the control plane and then a free Oracle ARM 4x24 VM as a worker. You might think this is more than enough (and you'd probably be right), but there are some things that make more sense to run at home.&lt;/p&gt;

&lt;p&gt;I currently run media services and backups at home. I also hope to mess around with more home automation things in the future. It's nice to have media consumption local, and it also saves on cloud network transfer costs. Overall, the cloud worker VM is going to be much more reliable than the small form factor PCs that make up my home region.&lt;/p&gt;

&lt;h2&gt;
  
  
  How's it actually work?
&lt;/h2&gt;

&lt;p&gt;I install &lt;a href="https://tailscale.com/"&gt;Tailscale&lt;/a&gt; on all of the machines. This lets them easily connect to each other as if they were on the same local network. The kubernetes API server only binds to the Tailscale interface so all of the workers are able to reach it. After setting this up, it mostly just works™️.&lt;/p&gt;
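&lt;p&gt;For reference, the server side of that looks roughly like the below in &lt;code&gt;/etc/rancher/k3s/config.yaml&lt;/code&gt;. This is a sketch: the address is a placeholder, and your exact flags may differ, so check the k3s server docs.&lt;/p&gt;

```yaml
# Bind the API server and node traffic to the Tailscale interface.
# 100.64.0.1 is a placeholder for this node's Tailscale address.
bind-address: 100.64.0.1
advertise-address: 100.64.0.1
node-ip: 100.64.0.1
flannel-iface: tailscale0
```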

&lt;p&gt;The nodes are separated into a cloud and home region. Each region is broken down into zones for the cloud provider or location. That ends up as the region-zone pairs cloud-oracle, cloud-racknerd, home-austin, and home-wisconsin. I'm then able to use those labels for scheduling decisions.&lt;/p&gt;
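&lt;p&gt;Pinning a workload to a zone is then just a &lt;code&gt;nodeSelector&lt;/code&gt;. The label keys and values below are illustrative; use whichever keys you actually applied to the nodes.&lt;/p&gt;

```yaml
# Example: pin a pod to the wisconsin zone of the home region.
spec:
  nodeSelector:
    topology.kubernetes.io/region: home
    topology.kubernetes.io/zone: wisconsin
```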

&lt;p&gt;So far I'm only using the zones for austin and wisconsin to pin my media services to the respective machines. Other things, including my RSS reader, are able to float between regions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;The cluster is still managed with GitOps from the repo &lt;a href="https://github.com/kasuboski/k8s-gitops"&gt;kasuboski/k8s-gitops&lt;/a&gt;. I'm using ArgoCD to apply the manifests, but might mess with other options.&lt;/p&gt;

&lt;p&gt;I recently changed the secrets management from &lt;a href="https://github.com/bitnami-labs/sealed-secrets"&gt;SealedSecrets&lt;/a&gt; to &lt;a href="https://www.doppler.com/"&gt;Doppler&lt;/a&gt;. There wasn't anything wrong with SealedSecrets, but it felt less magical since I had to manage the keys myself and re-encrypt on changes. I have a script under &lt;code&gt;hack/&lt;/code&gt; in the repo that manages importing the correct Doppler project token. After creating their &lt;code&gt;DopplerSecret&lt;/code&gt; CRD, the secrets then just show up.&lt;/p&gt;

&lt;p&gt;Previously, I had run a separate VM to act as the entrypoint to the cluster. Now I use a load balancer from Oracle that points to the free ARM VM there. The DNS then points to this load balancer. I still want to add a separate ingress locally so I can avoid always going out and back in, but haven't gotten around to it.&lt;/p&gt;

&lt;p&gt;I also still have 4 Raspberry Pis that I haven't set up yet since moving. The general layout is as below.&lt;/p&gt;

&lt;p&gt;&lt;a href="//homelab.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2eacHWX5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.joshkasuboski.com/posts/multi-region-k3s/homelab.png" alt="Homelab Diagram" width="880" height="590"&gt; &lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Improvements
&lt;/h2&gt;

&lt;p&gt;I have a lot of the setup documented in &lt;a href="https://github.com/kasuboski/home-infra"&gt;kasuboski/home-infra&lt;/a&gt;, but realized as I was setting up the machine in Austin that it leaves a lot to be desired. In particular, the setup for storage had me looking back through the command history of the wisconsin machine to figure out what I had done.&lt;/p&gt;

&lt;p&gt;I'm thinking of making a tool that will set this up or at least walk me through it. The &lt;code&gt;home-infra&lt;/code&gt; repo needs to be cleaned up a little as well. It still contains instructions for multiple past iterations of my homelab.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>tailscale</category>
    </item>
    <item>
      <title>Add Multi-Arch Dependencies Easily</title>
      <dc:creator>Josh Kasuboski</dc:creator>
      <pubDate>Thu, 19 Nov 2020 17:02:45 +0000</pubDate>
      <link>https://forem.com/kasuboski/add-multi-arch-dependencies-easily-1k9o</link>
      <guid>https://forem.com/kasuboski/add-multi-arch-dependencies-easily-1k9o</guid>
      <description>&lt;p&gt;I wanted to build a multi-arch docker image for media transcoding. Time to get the dependencies from someone who already did it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Goal
&lt;/h2&gt;

&lt;p&gt;I wanted a multi-arch image to run &lt;a href="https://github.com/mdhiggins/sickbeard_mp4_automator"&gt;mdhiggins/sickbeard_mp4_automator&lt;/a&gt;. I just needed the &lt;code&gt;manual.py&lt;/code&gt; script, not the radarr integration. The images published by mdhiggins are based on images like the linuxserver/radarr image and aren't multi-arch.&lt;/p&gt;

&lt;p&gt;You can skip ahead to just see the Dockerfile at &lt;a href="https://github.com/kasuboski/manual-sma/blob/main/Dockerfile"&gt;kasuboski/manual-sma&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The main issue for making the image multi-arch is &lt;code&gt;ffmpeg&lt;/code&gt;. In the &lt;a href="https://github.com/mdhiggins/radarr-sma"&gt;mdhiggins/radarr-sma&lt;/a&gt; Dockerfile, it is always downloading the amd64 version of ffmpeg. This obviously won't go well for other architectures.&lt;/p&gt;

&lt;p&gt;There are ffmpeg builds published for other architectures. You would just need to make sure to download the correct one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Lazy Solution
&lt;/h2&gt;

&lt;p&gt;It wouldn't be too bad to figure out which architecture is being built and then download the correct version of ffmpeg. It is one more script to maintain though.&lt;/p&gt;

&lt;p&gt;I noticed the linuxserver repo has a multi-arch ffmpeg container already. They include a statically compiled ffmpeg so getting it into my image is as easy as copying the binary.&lt;/p&gt;

&lt;p&gt;Dockerfiles have a &lt;code&gt;COPY --from=&amp;lt;image&amp;gt;&lt;/code&gt; option. This lets you copy files from another image. That image will automatically be the correct one for your architecture (as long as the image supports it).&lt;/p&gt;

&lt;p&gt;So instead of figuring out the correct architecture and downloading the corresponding ffmpeg, you can just add &lt;code&gt;FROM linuxserver/ffmpeg as ffmpeg&lt;/code&gt; and then &lt;code&gt;COPY --from=ffmpeg /usr/local/bin/ff* /usr/local/bin/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This will copy both ffmpeg and ffprobe to the built container.&lt;/p&gt;
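&lt;p&gt;Put together, the relevant part of a &lt;code&gt;Dockerfile&lt;/code&gt; looks roughly like this. The final base image here is just an example; the real image installs the script's actual dependencies.&lt;/p&gt;

```dockerfile
# Stage that exists only to be copied from; buildx resolves it
# to the matching architecture automatically.
FROM linuxserver/ffmpeg AS ffmpeg

# Example final image (an assumption; pick whatever base you need).
FROM python:3-alpine
COPY --from=ffmpeg /usr/local/bin/ff* /usr/local/bin/
```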

&lt;p&gt;To build the image, I used the same method as my &lt;a href="https://www.joshkasuboski.com/posts/build-multiarch-image/"&gt;building multiarch images post&lt;/a&gt;. Basically, setting up docker buildx in GitHub Actions. You can see the workflow at &lt;a href="https://github.com/kasuboski/manual-sma/blob/main/.github/workflows/docker.yml"&gt;kasuboski/manual-sma&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>github</category>
    </item>
    <item>
      <title>Leaving Feedly for Miniflux</title>
      <dc:creator>Josh Kasuboski</dc:creator>
      <pubDate>Fri, 13 Nov 2020 19:57:42 +0000</pubDate>
      <link>https://forem.com/kasuboski/leaving-feedly-for-miniflux-7f3</link>
      <guid>https://forem.com/kasuboski/leaving-feedly-for-miniflux-7f3</guid>
      <description>&lt;p&gt;I've wanted to move away from Feedly for awhile and finally found my alternative in Miniflux.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the move
&lt;/h2&gt;

&lt;p&gt;I had been having issues with the Feedly app where it would suddenly sign me out and take a while to log back in. Apparently that's all it takes for me to drop a service… I also wanted to run something myself and possibly build on top of it.&lt;/p&gt;

&lt;p&gt;I had contemplated moving to microsub as outlined in my &lt;a href="https://www.joshkasuboski.com/posts/replacing-feedly/"&gt;replacing Feedly&lt;/a&gt; post. I tried out &lt;a href="https://github.com/pstuifzand/ekster"&gt;ekster&lt;/a&gt; and I think it's still running 🤷‍♂️. I didn't like the readers I tried and wasn't a fan of how ekster doesn't seem to store data persistently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Moving to Miniflux
&lt;/h2&gt;

&lt;p&gt;I recently found &lt;a href="https://miniflux.app/"&gt;miniflux&lt;/a&gt;. It bills itself as “a minimalist and opinionated feed reader”. Minimalist and opinionated is all I can hope to be. Thankfully the &lt;a href="https://miniflux.app/opinionated.html"&gt;opinions&lt;/a&gt; aligned pretty well with mine.&lt;/p&gt;

&lt;p&gt;It's just a go binary and PostgreSQL. I run the binary on my &lt;a href="https://www.joshkasuboski.com/posts/home-k8s-raspberry-update/"&gt;kubernetes cluster&lt;/a&gt; and PostgreSQL on a Lenovo Thinkcentre I recently bought. I'm still not super happy with my OpenEBS storage, so I'm running the database using &lt;a href="https://podman.io/"&gt;podman&lt;/a&gt; with a volume mount directly from the node.&lt;/p&gt;

&lt;p&gt;Running containers as a non-root account is pretty easy with podman and can be managed by systemd. I did the below to set it up.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;podman create --name postgres -v /home/postgres/postgres:/bitnami/postgresql -e POSTGRESQL_PASSWORD=&amp;lt;root password&amp;gt; -p 5432:5432 bitnami/postgresql:13
podman generate systemd postgres -n

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll notice I'm using the bitnami postgresql image. This image doesn't run as root 👍. The output of the &lt;code&gt;podman generate systemd&lt;/code&gt; can then be copied to &lt;code&gt;$HOME/.config/systemd/user&lt;/code&gt;. More info can be found on the podman &lt;a href="http://docs.podman.io/en/latest/markdown/podman-generate-systemd.1.html#installation-of-generated-systemd-unit-files"&gt;site&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I like the minimal UI, and it works pretty well on mobile as well. The delay in fetching feeds took a little getting used to. I did lower the thresholds, which was just an environment variable config. You can see my kubernetes config in &lt;a href="https://github.com/kasuboski/k8s-gitops/blob/master/default/miniflux/deploy.yaml"&gt;kasuboski/k8s-gitops&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lZ73bGKX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.joshkasuboski.com/posts/switching-to-miniflux/miniflux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lZ73bGKX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.joshkasuboski.com/posts/switching-to-miniflux/miniflux.png" alt="miniflux"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Moving Forward 🐱‍🏍
&lt;/h2&gt;

&lt;p&gt;With Feedly I had a limited set of feed categories because that was a limitation of the free plan. I want to decide what I actually want now that I have no limits with miniflux.&lt;/p&gt;

&lt;p&gt;There is also an API that I want to build some things around. My first thought would be to make a way for others to follow what I follow as well.&lt;/p&gt;

</description>
      <category>selfhost</category>
      <category>productivity</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Copy files to and from Kubernetes Pods with kubectl</title>
      <dc:creator>Josh Kasuboski</dc:creator>
      <pubDate>Fri, 06 Nov 2020 02:46:06 +0000</pubDate>
      <link>https://forem.com/kasuboski/copy-files-to-and-from-kubernetes-pods-with-kubectl-1n80</link>
      <guid>https://forem.com/kasuboski/copy-files-to-and-from-kubernetes-pods-with-kubectl-1n80</guid>
      <description>&lt;p&gt;I wanted a simple backup of some OpenEBS volumes. Why not just copy them out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why bother
&lt;/h2&gt;

&lt;p&gt;I run a number of things on my &lt;a href="https://www.joshkasuboski.com/posts/home-k8s-raspberry-pi/"&gt;Raspberry Pi kubernetes cluster&lt;/a&gt; that require storage. The main one I actually care about is my &lt;a href="https://github.com/usefathom/fathom"&gt;fathom&lt;/a&gt; instance.&lt;/p&gt;

&lt;p&gt;It has the website analytics for my site &lt;a href="https://www.joshkasuboski.com"&gt;joshkasuboski.com&lt;/a&gt;. It's certainly not the end of the world to lose this information, but I'd really rather not. Previously, I had tried to get &lt;a href="https://velero.io/"&gt;velero&lt;/a&gt; setup.&lt;/p&gt;

&lt;p&gt;I tried to use both the OpenEBS specific plugin and the generic restic. The OpenEBS one failed because my setup couldn't take snapshots. My USB drives are pretty slow, so I just chalk it up to that. The restic option seemed to work, but it didn't actually restore data in my test. In addition, it kept exceeding my &lt;a href="https://www.backblaze.com/"&gt;Backblaze&lt;/a&gt; S3 API limits.&lt;/p&gt;

&lt;p&gt;Anyway, I just wanted something simple before I upgraded OpenEBS. Then I found the &lt;code&gt;kubectl cp&lt;/code&gt; command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backing up the volumes
&lt;/h2&gt;

&lt;p&gt;I manually backed up all of my persistent volumes. This is only five items for me, but could get out of hand quickly. It was really as simple as running the command for each and storing the files locally.&lt;/p&gt;

&lt;p&gt;Copying files from a pod to your machine: &lt;code&gt;kubectl cp &amp;lt;namespace&amp;gt;/&amp;lt;podname&amp;gt;:/mount/path /local/path&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I did this for all paths I cared about. If something goes wrong you can always copy the files back to the pod. It's the same command just swapping source and destination.&lt;/p&gt;

&lt;p&gt;Copying files to a pod from your machine: &lt;code&gt;kubectl cp /local/path &amp;lt;namespace&amp;gt;/&amp;lt;podname&amp;gt;:/mount/path&lt;/code&gt;&lt;/p&gt;
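&lt;p&gt;With only a handful of volumes, a tiny loop covers it. The pod names and paths below are made-up examples; drop the &lt;code&gt;echo&lt;/code&gt; to actually run the copies.&lt;/p&gt;

```shell
#!/bin/sh
# Dry-run backup loop over namespace/pod:path entries.
# The entries are hypothetical examples, not my real pods.
backup_dir="pv-backup"
mkdir -p "${backup_dir}"
for entry in "default/fathom-0:/app/data" "default/miniflux-0:/data"; do
  pod="${entry%%:*}"                 # e.g. default/fathom-0
  path="${entry#*:}"                 # e.g. /app/data
  name=$(echo "${pod}" | tr '/' '-') # flatten for the local dir name
  echo kubectl cp "${pod}:${path}" "${backup_dir}/${name}"
done
```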

&lt;p&gt;I still hope to get OpenEBS snapshots working in the future, but this worked great for now.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>learning</category>
    </item>
    <item>
      <title>Client Side Shopping Cart</title>
      <dc:creator>Josh Kasuboski</dc:creator>
      <pubDate>Thu, 29 Oct 2020 19:15:57 +0000</pubDate>
      <link>https://forem.com/kasuboski/client-side-shopping-cart-3j9j</link>
      <guid>https://forem.com/kasuboski/client-side-shopping-cart-3j9j</guid>
      <description>&lt;p&gt;I have my beer can working as a buy button, but what if you want to add products to a cart first.&lt;/p&gt;

&lt;h2&gt;
  
  
  But Why
&lt;/h2&gt;

&lt;p&gt;If you're trying to sell things easily (and cheaply) you can connect Stripe directly to a product page. You can see an example of this from the &lt;a href="https://www.joshkasuboski.com/posts/stripe-beer-money/"&gt;stripe beer money&lt;/a&gt; article.&lt;/p&gt;

&lt;p&gt;The main downside is that a customer would have to buy one thing at a time. We can add a cart, but still don't want to require a server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Cart
&lt;/h2&gt;

&lt;p&gt;For a cart, we just need to keep track of items added and how many of each. We can store this information in &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage"&gt;localstorage&lt;/a&gt;. This means if a user comes back to the page, their cart will still be there.&lt;/p&gt;

&lt;p&gt;I tested this out with a sample store page. You can see the code at &lt;a href="https://github.com/kasuboski/client-side-cart-example"&gt;kasuboski/client-side-cart-example&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="//example-store.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CAvbPtYm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.joshkasuboski.com/posts/client-side-cart-1/example-store.png" alt="example store"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's basically a single &lt;code&gt;index.html&lt;/code&gt; with &lt;code&gt;javascript&lt;/code&gt;. It looks for products by finding &lt;code&gt;buttons&lt;/code&gt; with &lt;code&gt;data&lt;/code&gt; attributes. These attributes specify the product id, description and price.&lt;/p&gt;

&lt;p&gt;When a user clicks the button, the item is added to the cart. This just loads the cart from &lt;code&gt;localstorage&lt;/code&gt;, updates the item quantity, and saves it back.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;addToCart&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;cart&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getCart&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;prevQuantity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;quantity&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prevQuantity&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;localStorage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;setItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;cart&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

    &lt;span class="nx"&gt;populateCart&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;populateCart&lt;/code&gt; function sets up the cart area every time. There isn't anything fancy here… it just deletes all of the cart elements and recreates them based on what's in &lt;code&gt;localstorage&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps 🦶
&lt;/h2&gt;

&lt;p&gt;This works as a generic cart… but you can't buy anything. I'm going to make an example store to show buying items using Stripe.&lt;/p&gt;

&lt;p&gt;Each item will need a Stripe Price, and then when you check out, it will call the Stripe redirect. Eventually, I want to make it easier to integrate as well. Maybe making this an actual library.&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>html</category>
    </item>
    <item>
      <title>Scan Images in my GitOps Repo</title>
      <dc:creator>Josh Kasuboski</dc:creator>
      <pubDate>Thu, 22 Oct 2020 19:41:31 +0000</pubDate>
      <link>https://forem.com/kasuboski/scan-images-in-my-gitops-repo-14hm</link>
      <guid>https://forem.com/kasuboski/scan-images-in-my-gitops-repo-14hm</guid>
      <description>&lt;p&gt;Scanning the container images deployed to my cluster used to be manual. Now it happens automatically every night. 🐱‍🏍&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Scan
&lt;/h2&gt;

&lt;p&gt;I use Trivy to scan container images. I wrote about scanning my GitOps repo for images earlier &lt;a href="https://www.joshkasuboski.com/posts/scan-images-in-files/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Basically, there's a fancy grep that matches &lt;code&gt;image: (&amp;lt;name&amp;gt;)&lt;/code&gt; and that name is sent to Trivy.&lt;/p&gt;
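&lt;p&gt;A stripped-down version of that grep looks like this. The real script is a bit fancier, and it walks the repo's yaml files instead of taking piped-in sample text.&lt;/p&gt;

```shell
#!/bin/sh
# Extract image references from manifest text and de-duplicate them.
# Sample yaml lines are piped in here for illustration.
images=$(printf 'image: nginx:1.25\nimage: redis:7\nimage: nginx:1.25\n' \
  | grep -oE 'image: *[A-Za-z0-9./:@_-]+' \
  | sed 's/image: *//' \
  | sort -u)
echo "${images}"
# Each result would then be scanned, e.g.: trivy image --exit-code 1 IMAGE
```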

&lt;p&gt;My GitOps repo is on GitHub at &lt;a href="https://github.com/kasuboski/k8s-gitops" rel="noopener noreferrer"&gt;kasuboski/k8s-gitops&lt;/a&gt;. It seemed natural to run the scan periodically using GitHub Actions. The scan will happen on every push and every night.&lt;/p&gt;

&lt;p&gt;I needed a way to exclude an image that couldn't be scanned on an x86 host. The one-liner from my previous post needed a &lt;code&gt;grep -v&lt;/code&gt; to exclude certain patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Making the GitHub Actions Workflow
&lt;/h2&gt;

&lt;p&gt;I had a lot of trouble getting &lt;code&gt;ack&lt;/code&gt; configured in a runner. I ended up making a docker image that downloads trivy, finds the images, and scans them.&lt;/p&gt;

&lt;p&gt;This image has its own repo &lt;a href="https://github.com/kasuboski/trivy-scan-dir" rel="noopener noreferrer"&gt;kasuboski/trivy-scan-dir&lt;/a&gt;. If you just want to scan a repo you can run &lt;code&gt;docker run -it --rm -v /path/to/yaml:/gitops -e EXCLUDED='no/scan also/noscan' kasuboski/trivy-scan-dir&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To run this in a workflow, add the below step.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Scan Images
  uses: docker://kasuboski/trivy-scan-dir:latest
  env:
    EXCLUDED: 'no/scan also/noscan'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My full workflow can be found in &lt;a href="https://github.com/kasuboski/k8s-gitops/blob/master/.github/workflows/scan-images.yaml" rel="noopener noreferrer"&gt;kasuboski/k8s-gitops&lt;/a&gt;. It triggers on &lt;code&gt;workflow_dispatch&lt;/code&gt;, &lt;code&gt;cron&lt;/code&gt;, and &lt;code&gt;push&lt;/code&gt; to yaml files.&lt;/p&gt;

&lt;p&gt;Workflow Dispatch lets you run the workflow manually from the GitHub Actions UI. This was really convenient for testing. The cron schedule runs every morning at 4:03am.&lt;/p&gt;
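&lt;p&gt;The trigger section of the workflow is roughly the below. This is a sketch (the cron schedule is interpreted in UTC, and the paths filter is illustrative); see the full workflow linked above for the real thing.&lt;/p&gt;

```yaml
# Workflow triggers: manual, nightly, and on pushes touching yaml files.
on:
  workflow_dispatch:
  schedule:
    - cron: '3 4 * * *'   # 4:03 every morning
  push:
    paths:
      - '**.yaml'
```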

&lt;p&gt;&lt;a href="manual-workflow-run.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.joshkasuboski.com%2Fposts%2Ftrivy-gitops-repo-scan%2Fmanual-workflow-run.png" alt="manual trigger"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;This workflow has alerted me to multiple vulnerabilities. If the workflow fails, I get an email and then can look into updating the image.&lt;/p&gt;

&lt;p&gt;The results even look pretty decent in the GitHub app so I can tell which images I need to be worried about. An example failing run is shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="failed-image-scan.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.joshkasuboski.com%2Fposts%2Ftrivy-gitops-repo-scan%2Ffailed-image-scan.png" alt="failed run"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I still want to add something in-cluster to enforce that only the images I want are running. Finding the images to scan also needs to be more robust. For instance, some images only show up once manifests are rendered.&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>docker</category>
    </item>
    <item>
      <title>Accept Beer Money with Stripe - Sans Server</title>
      <dc:creator>Josh Kasuboski</dc:creator>
      <pubDate>Thu, 15 Oct 2020 15:51:52 +0000</pubDate>
      <link>https://forem.com/kasuboski/accept-beer-money-with-stripe-sans-server-3e2i</link>
      <guid>https://forem.com/kasuboski/accept-beer-money-with-stripe-sans-server-3e2i</guid>
      <description>&lt;p&gt;I wanted to explore Stripe Payments, but didn't want to mess with a server. You can see the result as the little beer can in the footer of &lt;a href="https://www.joshkasuboski.com" rel="noopener noreferrer"&gt;my site&lt;/a&gt; 😉.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;Stripe has a Checkout option where you redirect to their page and it has your product and a payment form. This means you don't have to deal with &lt;a href="https://www.pcisecuritystandards.org/" rel="noopener noreferrer"&gt;PCI&lt;/a&gt; anything and basically just have to redirect correctly. They have a library to handle the redirect as well, so it's fairly easy.&lt;/p&gt;

&lt;p&gt;You need to place a button that, when clicked, calls the Stripe JavaScript library. Since I'm “selling” beer money, I put a little beer can in my site footer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.joshkasuboski.com%2Fposts%2Fstripe-beer-money%2Ffooter-beercan.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.joshkasuboski.com%2Fposts%2Fstripe-beer-money%2Ffooter-beercan.png" alt="footer-beercan"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding to Your Site
&lt;/h2&gt;

&lt;p&gt;I followed this &lt;a href="https://stripe.com/docs/payments/checkout/client" rel="noopener noreferrer"&gt;guide&lt;/a&gt; from Stripe. It was a little difficult to find while navigating the Stripe docs, but searching &lt;code&gt;Stripe Checkout without server&lt;/code&gt; brought me there.&lt;/p&gt;

&lt;p&gt;I won't reiterate the guide, but basically you use the Stripe Dashboard to create a Product that has a Price. That Price has an ID you'll need. The dashboard will also generate the code snippet with the price ID and your publishable API key filled in. My edited snippet is below.&lt;/p&gt;

&lt;p&gt;You'll notice it also expects a success and a cancel URL. I added two pages that just say “success” and “uh oh,” respectively.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"https://js.stripe.com/v3"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;script&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;function &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;DOMAIN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://www.joshkasuboski.com/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;pk_livekey&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;price&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;price_key&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;stripe&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Stripe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;checkoutButton&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;checkout-button-beermoney&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;checkoutButton&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;click&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;function &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// When the customer clicks on the button, redirect&lt;/span&gt;
      &lt;span class="c1"&gt;// them to Checkout.&lt;/span&gt;
      &lt;span class="nx"&gt;stripe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;redirectToCheckout&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;lineItems&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
        &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;payment&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c1"&gt;// Do not rely on the redirect to the successUrl for fulfilling&lt;/span&gt;
        &lt;span class="c1"&gt;// purchases, customers may not always reach the success_url after&lt;/span&gt;
        &lt;span class="c1"&gt;// a successful payment.&lt;/span&gt;
        &lt;span class="c1"&gt;// Instead use one of the strategies described in&lt;/span&gt;
        &lt;span class="c1"&gt;// https://stripe.com/docs/payments/checkout/fulfillment&lt;/span&gt;
        &lt;span class="na"&gt;successUrl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DOMAIN&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;success&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;cancelUrl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DOMAIN&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;canceled&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;function &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// If `redirectToCheckout` fails due to a browser or network&lt;/span&gt;
            &lt;span class="c1"&gt;// error, display the localized error message to your customer.&lt;/span&gt;
            &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;displayError&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;error-message&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;displayError&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;textContent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;})();&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That snippet and the button were all I needed. Stripe also provides test keys for both the price and the API, so you can verify everything works before going live.&lt;/p&gt;
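
&lt;p&gt;If you want to flip between test and live mode without editing the snippet by hand, one option is to pick the key from the hostname. A minimal sketch, with placeholder key values and assuming my production domain:&lt;/p&gt;

```javascript
// Placeholder values: Stripe test publishable keys start with pk_test_,
// live keys with pk_live_. Substitute your own from the Dashboard.
var LIVE_KEY = 'pk_live_xxx';
var TEST_KEY = 'pk_test_xxx';

// Use the live key only on the production domain; everywhere else
// (localhost, preview deploys) fall back to the test key.
function keyFor(hostname) {
  return hostname === 'www.joshkasuboski.com' ? LIVE_KEY : TEST_KEY;
}

// In the snippet above you would then call:
// var stripe = Stripe(keyFor(window.location.hostname));
```

&lt;p&gt;The same trick works for the price ID, since test mode has its own Products and Prices.&lt;/p&gt;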

&lt;p&gt;After setting that up, I can click my beer can and land on a page like the one below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.joshkasuboski.com%2Fposts%2Fstripe-beer-money%2Fbeermoney-checkout.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.joshkasuboski.com%2Fposts%2Fstripe-beer-money%2Fbeermoney-checkout.png" alt="beer can checkout"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💰💰💰 &lt;strong&gt;Profit&lt;/strong&gt; 💰💰💰&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Cases
&lt;/h2&gt;

&lt;p&gt;This works pretty well if people only buy one item at a time. You could probably build a cart entirely on the frontend: keep track of the items a user wants, and when they click &lt;code&gt;checkout&lt;/code&gt;, send multiple &lt;code&gt;lineItems&lt;/code&gt; in the Stripe redirect.&lt;/p&gt;
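
&lt;p&gt;As a rough sketch of that idea (the cart contents and price IDs here are made up):&lt;/p&gt;

```javascript
// Hypothetical cart held in memory on the client; each entry references
// a Price ID created in the Stripe Dashboard.
var cart = [
  { price: 'price_beer', quantity: 2 },
  { price: 'price_pretzel', quantity: 1 },
  { price: 'price_beer', quantity: 1 },
];

// Collapse duplicate price IDs into single line items with summed
// quantities, which is the shape redirectToCheckout expects.
function toLineItems(items) {
  var byPrice = {};
  items.forEach(function (item) {
    byPrice[item.price] = (byPrice[item.price] || 0) + item.quantity;
  });
  return Object.keys(byPrice).map(function (price) {
    return { price: price, quantity: byPrice[price] };
  });
}

// On the checkout click you would then pass the whole cart at once:
// stripe.redirectToCheckout({ lineItems: toLineItems(cart), mode: 'payment', ... });
```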

&lt;p&gt;This may not be good enough for a real store, but it's pretty convenient to have a fully client-side storefront.&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>serverless</category>
    </item>
  </channel>
</rss>
