<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Dave Sudia</title>
    <description>The latest articles on Forem by Dave Sudia (@dsudia).</description>
    <link>https://forem.com/dsudia</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1186657%2F4e6145b9-6b59-4ba8-9845-37c5b665a36f.jpg</url>
      <title>Forem: Dave Sudia</title>
      <link>https://forem.com/dsudia</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/dsudia"/>
    <language>en</language>
    <item>
      <title>How to Integrate Docker &amp; JetBrains into Telepresence</title>
      <dc:creator>Dave Sudia</dc:creator>
      <pubDate>Tue, 07 Nov 2023 16:56:47 +0000</pubDate>
      <link>https://forem.com/dsudia/how-to-integrate-docker-jetbrains-into-telepresence-31op</link>
      <guid>https://forem.com/dsudia/how-to-integrate-docker-jetbrains-into-telepresence-31op</guid>
      <description>&lt;p&gt;You are a developer who enjoys experimenting while striving for optimal solutions. In the past, this was straightforward because your development work occurred on your own workstation. However, you now find yourself in a situation where your applications run within a container managed by a &lt;a href="https://www.getambassador.io/resources/multi-cluster-kubernetes?utm_source=Dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=Q423-Better-Together-Integrations" rel="noopener noreferrer"&gt;Kubernetes cluster&lt;/a&gt;. To implement any changes, you must first build a container and then deploy it to the cluster to have them tested.&lt;/p&gt;

&lt;p&gt;When the container malfunctions, &lt;a href="https://www.getambassador.io/blog/service-mesh-debugging?utm_source=Dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=Q423-Better-Together-Integrations" rel="noopener noreferrer"&gt;debugging&lt;/a&gt; becomes challenging; you are forced to rely on log outputs or various metrics to make educated guesses about the underlying issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  The missing piece
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.getambassador.io/products/telepresence?utm_source=Dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=Q423-Better-Together-Integrations" rel="noopener noreferrer"&gt;Telepresence&lt;/a&gt; enables the interception of a container within the cluster, redirecting all its traffic to a container running on your local workstation. The local container will have access to identical &lt;a href="https://www.getambassador.io/use-case/productive-local-dev-environment?utm_source=Dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=Q423-Better-Together-Integrations" rel="noopener noreferrer"&gt;environment&lt;/a&gt; variables, share the same mounted directories, and connect to a network that acts as a proxy for the cluster container's network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.getambassador.io/products/telepresence?utm_source=Dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=Q423-Better-Together-Integrations" rel="noopener noreferrer"&gt;Telepresence&lt;/a&gt; virtually positions the local container within the cluster, empowering you to debug, modify, rebuild, and restart the container as often as needed, all without the need to commit or deploy any of these changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Debugging the container
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Remote run/debug configuration
&lt;/h3&gt;

&lt;p&gt;Debugging code running in containers is fairly trivial. Typically, it involves a debugger frontend, often integrated into an IDE, which connects to a debugger backend in the container via a TCP port. The backend may be an integral part of a runtime environment like the Java Virtual Machine (JVM), or it could exist as a distinct binary application, exerting precise control over another compiled binary.&lt;/p&gt;

&lt;p&gt;IDEs like &lt;a href="https://www.getambassador.io/news/press-release/new-plugin-for-jetbrains-marketplace?utm_source=Dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=Q423-Better-Together-Integrations" rel="noopener noreferrer"&gt;JetBrains&lt;/a&gt; and VSCode can be configured to perform debugging via a TCP port.&lt;/p&gt;
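&lt;p&gt;As a concrete sketch of what the backend side looks like for the JVM, the debug backend (JDWP) is enabled with a single agent option; the port and suspend behavior shown here are illustrative choices, not requirements:&lt;/p&gt;

```shell
# JDWP agent options that turn a JVM into a remote-debug backend.
# transport=dt_socket : the debugger frontend connects over TCP
# server=y            : the JVM listens for the debugger to attach
# suspend=n           : run immediately instead of waiting for an attach
# address=*:5005      : listen on all interfaces, port 5005 (any free port)
JDWP_OPTS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005'
# A typical launch would then look like: java $JDWP_OPTS -jar app.jar
echo "$JDWP_OPTS"
```

&lt;p&gt;The Dockerfile later in this article uses the same mechanism, listening on port 40000.&lt;/p&gt;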

&lt;h2&gt;
  
  
  Example using IntelliJ IDEA
&lt;/h2&gt;

&lt;p&gt;Prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A running docker environment&lt;/li&gt;
&lt;li&gt;IntelliJ IDEA&lt;/li&gt;
&lt;li&gt;Telepresence 2.16.1 or later&lt;/li&gt;
&lt;li&gt;A Kubernetes cluster where the container can be deployed and intercepted&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prepare a container with a development and a production target
&lt;/h2&gt;

&lt;p&gt;This example builds on the &lt;a href="https://docs.docker.com/language/java/" rel="noopener noreferrer"&gt;Docker Getting started with Java&lt;/a&gt; guide. Reading it is recommended.&lt;/p&gt;

&lt;p&gt;We'll employ a multi-stage Dockerfile as &lt;a href="https://docs.docker.com/language/java/develop/#multi-stage-dockerfile-for-development" rel="noopener noreferrer"&gt;outlined in the guide&lt;/a&gt;. The development container runs the code using ./mvnw spring-boot:run with JVM options that enable debugging. The production container, on the other hand, uses the java command to execute the precompiled JAR files generated by ./mvnw package.&lt;/p&gt;

&lt;p&gt;This Dockerfile is placed at the root of the project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM eclipse-temurin:17-jdk-jammy as base
WORKDIR /app
COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:resolve dependency:resolve-plugins
COPY src ./src

FROM base as development
CMD ["./mvnw", "spring-boot:run", "-Dspring-boot.run.jvmArguments='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:40000'"]

FROM base as build
RUN ./mvnw package

FROM eclipse-temurin:17-jre-jammy as production
EXPOSE 8080
COPY --from=build /app/target/spring-petclinic-*.jar /spring-petclinic.jar
CMD ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/spring-petclinic.jar"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need a .dockerignore file to prevent intermediate build artifacts from being copied into the container. It contains a single line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Build and push the image
&lt;/h2&gt;

&lt;p&gt;Use Docker to build and tag the image. In this example I push to the “thhal” registry namespace; you’ll need to swap that for a registry that you can push images to.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker build . --tag petclinic --tag thhal/petclinic:1.0.0
$ docker push thhal/petclinic:1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy the image in the cluster
&lt;/h2&gt;

&lt;p&gt;We need a service and a deployment in the cluster, so we add the following petclinic.yaml to define those.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: Service
metadata:
  name: petclinic
spec:
  type: ClusterIP
  selector:
    service: petclinic
  ports:
    - name: proxied
      port: 80
      targetPort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic
  labels:
    service: petclinic
spec:
  replicas: 1
  selector:
    matchLabels:
      service: petclinic
  template:
    metadata:
      labels:
        service: petclinic
    spec:
      containers:
        - name: petclinic
          image: thhal/petclinic:1.0.0
          ports:
            - containerPort: 8080
              name: http
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then we apply that yaml using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f petclinic.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Point your browser to the service. It should show the home page of the Petclinic app. If your cluster doesn’t have an ingress controller configured that allows you to access the service from a browser, see the tip below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prepare a Run/Debug Configuration in the IDE
&lt;/h2&gt;

&lt;p&gt;In the IntelliJ IDE, you can create a “Remote Run/Debug Configuration”. Its only purpose is to connect to a debugger that runs on a given port. That’s exactly what we want.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;From the “Run” menu, select “Edit configurations”&lt;/li&gt;
&lt;li&gt;Click the plus-sign in the upper left corner.&lt;/li&gt;
&lt;li&gt;Select “Remote JVM Debug” in the list that appears.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You’ll end up with a configuration that looks like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpimlzahm69eeqyx44kiq.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpimlzahm69eeqyx44kiq.jpeg" alt="Run/Debug Configuration in the IDE"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, I named this configuration “Remote on port 40000”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect Telepresence to the cluster
&lt;/h2&gt;

&lt;p&gt;Use the following command to connect Telepresence in Docker mode so that the daemon runs in a container. We also use --expose 40000:40000 here to ensure that the port the JVM will listen on can be reached from the IDE.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ telepresence connect --docker --expose 40000:40000
Launching Telepresence User Daemon
Connected to context default, namespace default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running the daemon in a container ensures that the proxied cluster network is isolated from the host network, and that any volume mounts are invisible to the host.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add your breakpoints
&lt;/h2&gt;

&lt;p&gt;Use the IDE to add some breakpoints to your source code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Debug the intercept
&lt;/h2&gt;

&lt;p&gt;Now start the intercept in a terminal using the --docker-build flag, which builds the development container, starts it, and ensures that it uses the correct network, environment, and volume mounts. Once the container is up and running, the Java debugger awaits commands on port 40000.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ telepresence intercept petclinic --docker-build . \
  --docker-build-opt target=development -- IMAGE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Start debugging
&lt;/h2&gt;

&lt;p&gt;Start the debugger in your IDE using the “Remote on port 40000” configuration that we created above. Your debug session is now up and running. Send some traffic to the cluster that is routed to the intercepted service and watch your breakpoints get hit.&lt;/p&gt;
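&lt;p&gt;One simple way to generate that traffic, assuming the petclinic service from this example and no ingress, is to forward a local port to the service and hit it with curl (see the port-forwarding tip at the end of this article):&lt;/p&gt;

```shell
# In one terminal: map localhost:8080 to the petclinic service (port 80).
kubectl port-forward svc/petclinic 8080:80
# In another terminal: each request is routed through the intercept to the
# container on your workstation, so breakpoints set in the IDE will be hit.
curl http://localhost:8080/
```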

&lt;h2&gt;
  
  
  Modify code
&lt;/h2&gt;

&lt;p&gt;Code modification is a four-step process.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Modify the source code.&lt;/li&gt;
&lt;li&gt;Stop the Run/Debug configuration&lt;/li&gt;
&lt;li&gt;Start the intercept again.&lt;/li&gt;
&lt;li&gt;Start the Run/Debug configuration.&lt;/li&gt;
&lt;/ol&gt;
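&lt;p&gt;From the terminal side, a single pass through that loop looks roughly like this (step 1 is the edit itself, done in the IDE; IMAGE is the same placeholder used in the intercept command above):&lt;/p&gt;

```shell
# 2. Stop the "Remote on port 40000" Run/Debug configuration in the IDE.
# 3. Restart the intercept; --docker-build rebuilds the development image,
#    so the new container picks up the modified source:
telepresence intercept petclinic --docker-build . \
  --docker-build-opt target=development -- IMAGE
# 4. Re-attach the IDE debugger using the same Run/Debug configuration.
```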

&lt;h2&gt;
  
  
  Example using Jetbrains Goland IDE
&lt;/h2&gt;

&lt;p&gt;Prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A running docker environment&lt;/li&gt;
&lt;li&gt;Jetbrains Goland IDE&lt;/li&gt;
&lt;li&gt;Telepresence 2.16.1 or later&lt;/li&gt;
&lt;li&gt;Source code for the docker container&lt;/li&gt;
&lt;li&gt;A Kubernetes cluster where the container is deployed and interceptable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The source code used in this example can be found &lt;a href="https://github.com/telepresenceio/telepresence/tree/release/v2/integration_test/testdata/echo-server" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prepare a debug version of the container
&lt;/h2&gt;

&lt;p&gt;Go is a compiled language, and debugging requires a debugger called Delve to control the binary. This implies that the container hosting the binary must also include Delve, necessitating a Dockerfile customized for this purpose. The original container (the one running in the cluster) used for this example is built from this Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM golang:alpine AS builder

WORKDIR /echo-server
COPY go.mod .
COPY go.sum .
# Get dependencies - cached unless go.mod/go.sum change
RUN go mod download

COPY frontend.go .
COPY main.go .
RUN go build -o echo-server .

FROM alpine
COPY --from=builder /echo-server/echo-server /
CMD ["/echo-server"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We name the Delve-annotated copy Dockerfile.debug:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM golang:alpine AS builder

# Build Delve
RUN go install github.com/go-delve/delve/cmd/dlv@latest

WORKDIR /echo-server
COPY go.mod .
COPY go.sum .
# Get dependencies - cached unless go.mod/go.sum change
RUN go mod download

COPY frontend.go .
COPY main.go .
RUN go build -gcflags="all=-N -l" -o echo-server .

EXPOSE 40000
CMD ["/go/bin/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/echo-server/echo-server"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notable additions to the debug container are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Go build disables inlining and optimizations using -gcflags="all=-N -l".&lt;/li&gt;
&lt;li&gt;Delve is installed.&lt;/li&gt;
&lt;li&gt;The container exposes port 40000 (any free port can be used here).&lt;/li&gt;
&lt;li&gt;The CMD is modified so that Delve listens on the exposed port and executes the Go binary.&lt;/li&gt;
&lt;li&gt;The extra FROM and COPY steps that minimize the container are removed, because this container will never be published.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prepare a Run/Debug Configuration in the IDE
&lt;/h2&gt;

&lt;p&gt;In the Goland IDE, you can create a “Go Remote Run/Debug Configuration”. Its only purpose is to connect to a debugger that runs on a given port. That’s exactly what we want.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;From the “Run” menu, select “Edit configurations”&lt;/li&gt;
&lt;li&gt;Click the plus-sign in the upper left corner.&lt;/li&gt;
&lt;li&gt;Select “Go remote” in the list that appears.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You’ll end up with a configuration that looks like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvkp0dkmg7esv4dd73vd.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvkp0dkmg7esv4dd73vd.jpeg" alt="Run/Debug Configuration in the IDE"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, I named this configuration “Remote on port 40000”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect Telepresence to the cluster
&lt;/h2&gt;

&lt;p&gt;Use the following command to connect Telepresence in Docker mode so that the daemon runs in a container. We also use --expose 40000:40000 here to ensure that the port Delve will listen on can be reached from the IDE.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ telepresence connect --docker --expose 40000:40000
Launching Telepresence User Daemon
Connected to context default, namespace default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running the daemon in a container ensures that the proxied cluster network is isolated from the host network, and that any volume mounts are invisible to the host.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add your breakpoints
&lt;/h2&gt;

&lt;p&gt;Use the IDE to add some breakpoints to your source code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Debug the intercept
&lt;/h2&gt;

&lt;p&gt;Now start the intercept in a terminal using the --docker-debug flag. This starts the container with relaxed security and ensures that it uses the correct network, environment, and volume mounts. Once the container is up and running, the Delve debugger awaits commands on port 40000.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ telepresence intercept echo --docker-debug Dockerfile.debug -- IMAGE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s assumed that the name of the cluster deployment that runs our container remotely is “echo”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start debugging
&lt;/h2&gt;

&lt;p&gt;Start the debugger in your IDE using the “Remote on port 40000” configuration that we created above. Your debug session is now up and running. Send some traffic to the cluster that is routed to the intercepted service and watch your breakpoints get hit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modify code
&lt;/h2&gt;

&lt;p&gt;Code modification is a four-step process.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Modify the source code.&lt;/li&gt;
&lt;li&gt;Stop the Run/Debug configuration&lt;/li&gt;
&lt;li&gt;Start the intercept again.&lt;/li&gt;
&lt;li&gt;Start the Run/Debug configuration.&lt;/li&gt;
&lt;/ol&gt;
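&lt;p&gt;The terminal side of that loop mirrors the Java example (step 1 is the edit itself; IMAGE is the same placeholder used in the intercept command above):&lt;/p&gt;

```shell
# 2. Stop the "Remote on port 40000" configuration in Goland.
# 3. Restart the intercept so the Delve-annotated debug image is rebuilt
#    with the modified source:
telepresence intercept echo --docker-debug Dockerfile.debug -- IMAGE
# 4. Re-attach the "Go Remote" debugger configuration.
```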

&lt;p&gt;This example was inspired by the excellent Goland blog-post &lt;a href="https://blog.jetbrains.com/go/2020/05/06/debugging-a-go-application-inside-a-docker-container/" rel="noopener noreferrer"&gt;Debugging a Go application inside a Docker container&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tips
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Just run the container
&lt;/h3&gt;

&lt;p&gt;If you just want to try out source changes without starting a debugger, build and run the original container using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ telepresence intercept echo --docker-build . -- IMAGE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Bash with cluster network access
&lt;/h2&gt;

&lt;p&gt;Start a bash shell with cluster network access so that you can curl your services by name. The trick here is to start a container that uses the same network as the Telepresence daemon. The name of that network is included in the output from the telepresence status command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run --network $(telepresence status --output json | jq -r .user_daemon.container_network) \
  --rm -it jonlabelle/network-tools
[network-tools]$ curl echo
Request served by echo-76547fc7f8-hr2sg

GET / HTTP/1.1

Host: echo
Accept: */*
User-Agent: curl/8.3.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On a Windows box, you’ll need to first execute the telepresence status command, copy the entry for “Container network”, and then use that value for the --network option in the docker run command.&lt;/p&gt;

&lt;p&gt;See &lt;a href="https://hub.docker.com/r/jonlabelle/network-tools" rel="noopener noreferrer"&gt;jonlabelle/network-tools&lt;/a&gt; for more information about this very useful container.&lt;/p&gt;

&lt;h2&gt;
  
  
  Browser access without Ingress
&lt;/h2&gt;

&lt;p&gt;You can utilize Kubernetes port-forwarding to establish a connection between your browser and a service within your cluster. This proves especially useful when you lack a dedicated ingress for the service. For instance, if you have a "petclinic" service running on port 80 (as demonstrated in the Java example) and you wish to access it from your browser, execute the following command in your terminal, which maps "localhost:8080" to that service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl port-forward svc/petclinic 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now point your browser to &lt;a href="http://localhost:8080/" rel="noopener noreferrer"&gt;http://localhost:8080/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>jetbrains</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Best Kubernetes DevOps Tools: A Comprehensive Guide</title>
      <dc:creator>Dave Sudia</dc:creator>
      <pubDate>Mon, 16 Oct 2023 22:37:44 +0000</pubDate>
      <link>https://forem.com/dsudia/best-kubernetes-devops-tools-a-comprehensive-guide-da9</link>
      <guid>https://forem.com/dsudia/best-kubernetes-devops-tools-a-comprehensive-guide-da9</guid>
      <description>&lt;p&gt;&lt;a href="https://www.getambassador.io/kubernetes-glossary/kubernetes?utm_source=Dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=Corporate"&gt;Kubernetes&lt;/a&gt; has become the standard for container orchestration and is integral to modern DevOps workflows. However, realizing Kubernetes' full potential requires adopting the proper DevOps tools tailored for it. These &lt;a href="https://www.getambassador.io/products/telepresence?utm_source=Dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=Corporate"&gt;Kubernetes DevOps tools&lt;/a&gt; enable building, testing, deploying, monitoring, and managing applications on Kubernetes efficiently.&lt;/p&gt;

&lt;p&gt;This comprehensive guide explores the top DevOps tools purpose-built for Kubernetes to streamline workflows. It covers solutions for CI/CD, deployment, monitoring, automation, and more. The guide also highlights Telepresence as an innovative Kubernetes DevOps tool for accelerated &lt;a href="https://www.getambassador.io/blog/dev-workflow-intro?utm_source=Dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=Corporate"&gt;development workflows&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With a robust Kubernetes DevOps toolkit, teams can optimize workflows for application development and delivery. The ecosystem of specialized tools addresses processes and collaboration on top of Kubernetes’ core orchestration capabilities. Selecting the right solutions unlocks improved productivity, resilience, and agility.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Intersection of DevOps and Kubernetes
&lt;/h2&gt;

&lt;p&gt;DevOps emphasizes practices like &lt;a href="https://www.getambassador.io/kubernetes-learning-center/courses/continuous-integration?utm_source=Dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=Corporate"&gt;continuous integration&lt;/a&gt;, infrastructure as code, monitoring, and team collaboration. Kubernetes naturally complements these principles.&lt;/p&gt;

&lt;p&gt;Its API-driven architecture allows infrastructure changes to be version controlled and replicated identically across environments. Packaging applications as Kubernetes resources makes automated deployments easier.&lt;/p&gt;

&lt;p&gt;Runtime logging and monitoring give observability into apps. The portability of Kubernetes clusters enables multiple teams to work together.&lt;/p&gt;

&lt;p&gt;This synergy makes &lt;a href="https://www.getambassador.io/kubernetes-learning-center?utm_source=Dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=Corporate"&gt;Kubernetes&lt;/a&gt; a catalyst for DevOps transformation. But the technology is only one piece. Having the proper Kubernetes tooling is key to unlocking the full benefits.&lt;/p&gt;
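&lt;p&gt;The infrastructure-as-code half of that synergy can be as simple as keeping manifests in git and applying them; the file name below is a hypothetical example:&lt;/p&gt;

```shell
# A deployment manifest is version controlled like any other source file...
git add deployment.yaml
git commit -m "scale api deployment to 3 replicas"
# ...and the same apply reproduces the change identically in whatever
# cluster the current kubectl context points at:
kubectl apply -f deployment.yaml
```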

&lt;h2&gt;
  
  
  Best Kubernetes DevOps Tools
&lt;/h2&gt;

&lt;p&gt;Here are some of the top Kubernetes DevOps tools to streamline your workflow:&lt;/p&gt;

&lt;h3&gt;
  
  
  Continuous Integration Tools
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://blog.getambassador.io/automating-microservice-testing-with-jenkins-ed49321a4f1"&gt;&lt;strong&gt;Jenkins&lt;/strong&gt;&lt;/a&gt; is an open source automation server that enables continuous integration and delivery pipelines. The &lt;a href="https://plugins.jenkins.io/kubernetes-cli/"&gt;Kubernetes plugin&lt;/a&gt; dynamically provisions Jenkins agents as pods on a Kubernetes cluster, which enables scaling up CI capacity on demand when workloads increase. Agents can build Docker images, execute tests, and deploy artifacts directly within a Kubernetes environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GitLab CI&lt;/strong&gt; has integrated support for Kubernetes to natively &lt;a href="https://about.gitlab.com/solutions/kubernetes/#:~:text=Everything%20you%20need%20to%20build%2C%20test%2C%20deploy%2C%20and%20run%20your%20app%20at%20scale"&gt;build, test, and deploy&lt;/a&gt; applications to Kubernetes clusters through pipelines. GitLab can deploy review apps and production apps to Kubernetes out of the box. Pipelines can launch Kubernetes jobs to run CI steps in pods with required dependencies. GitLab also offers Kubernetes cluster management, auto-scaling, monitoring, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CircleCI&lt;/strong&gt; - CircleCI provides flexible workflows and orchestration to build, test, and deploy applications securely onto Kubernetes for teams. It enables you to seamlessly integrate &lt;a href="https://circleci.com/integrations/kubernetes/#:~:text=Execute%20pre%2Dconfigured%20Kubernetes%20operations%20in%20your%20CircleCI%20pipelines%20using%20orbs."&gt;pre-configured Kubernetes operations&lt;/a&gt; into your CI/CD pipelines using orbs. These orbs serve as reusable packages of configuration, allowing you to manage various Kubernetes-related tasks within your CircleCI workflows efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Continuous Deployment Tools
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://helm.sh/"&gt;&lt;strong&gt;Helm&lt;/strong&gt;&lt;/a&gt; is a package manager that helps define, install, and manage complex Kubernetes applications packaged as charts with manifests, configs, and docs. Developers can create configurable Helm charts wrapping all the YAML manifests, configs, and services needed to run an app; ops teams can then deploy those charts easily across different environments and clusters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kustomize.io/"&gt;&lt;strong&gt;Kustomize&lt;/strong&gt;&lt;/a&gt; provides a template-free way to customize Kubernetes YAML configurations using overlays and generators without templates. It is ideal for customizing YAML configs for multiple Kubernetes environments like dev, staging, and prod. Engineering teams can define common resources in a base and then apply overlays with patches, variable substitutions, and images per environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://fluxcd.io/"&gt;&lt;strong&gt;Flux CD&lt;/strong&gt;&lt;/a&gt; enables continuous deployment to Kubernetes through GitOps by syncing Git repositories with Kubernetes clusters. Flux CD enables GitOps for Kubernetes through source control integration. It manages Kubernetes manifests as code and syncs git repo changes to clusters. Flux automates checks, deployments, and updates within clusters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://keda.sh/"&gt;&lt;strong&gt;KEDA&lt;/strong&gt;&lt;/a&gt; introduces event-driven scaling to Kubernetes workloads. It integrates with Kubernetes Horizontal Pod Autoscalers and can scale pods based on external metrics from services like databases and message queues (Kafka, RabbitMQ, MongoDB).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
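&lt;p&gt;As a minimal sketch of the Helm workflow described above (the chart and release names are made up for illustration):&lt;/p&gt;

```shell
# Scaffold a chart, install it as a release into the current cluster,
# then roll out a configuration change as an upgrade.
helm create mychart
helm install my-release ./mychart
helm upgrade my-release ./mychart --set replicaCount=3
```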

&lt;h3&gt;
  
  
  Monitoring &amp;amp; Logging
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.getambassador.io/docs/emissary/latest/howtos/prometheus/?utm_source=Dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=Corporate"&gt;&lt;strong&gt;Prometheus&lt;/strong&gt;&lt;/a&gt; is a leading open source monitoring and alerting system explicitly designed for Kubernetes environments with native support for metrics. Prometheus auto-discovers Kubernetes pods, services, and nodes to collect metrics seamlessly. Its Kubernetes service discovery integration scrapes metrics from API objects like deployments, jobs, and ingresses. Prometheus alerts can trigger autoscaling and remediation based on Kubernetes events and statuses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Grafana&lt;/strong&gt; provides an intuitive dashboard interface to visualize metrics collected from sources like Prometheus. It offers out-of-the-box dashboards tailored for monitoring Kubernetes clusters, nodes, deployments, and pods. Users can create custom panels and graphs to build dashboards optimized for their Kubernetes workloads and services. Through dynamic metric visualizations, Grafana helps gain visibility into cluster resource usage, application performance, user activity, and more. Its annotation feature can mark deployment events on graphs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Datadog&lt;/strong&gt; provides end-to-end observability, including dashboards, alerts, and log management tailored for monitoring Kubernetes clusters and cloud-native apps. It integrates with Kubernetes to collect metrics and logs from containers, pods, nodes, and controllers. It offers out-of-the-box dashboards for Kubernetes monitoring, namespace mapping, cluster troubleshooting, and more. Datadog's Kubernetes autodiscovery enables tracking dynamic changes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Automation &amp;amp; Configuration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt; provides robust support for provisioning and managing Kubernetes infrastructure as code. The Kubernetes provider integrates deeply to manage resources like clusters, nodes, ingress, storage, RBAC controls, and more. Terraform modules help configure secure and production-ready Kubernetes setups across providers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pulumi&lt;/strong&gt; takes an infrastructure-as-code approach to Kubernetes using real programming languages like JavaScript, Python, and Go instead of declarative configs. Pulumi's Kubernetes support lets you define clusters, configmaps, deployments, and infrastructure in code. It integrates seamlessly with the Kubernetes CLI and APIs for full control through code. Pulumi provides flexible abstractions and reuse through packages and libraries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ansible&lt;/strong&gt; provides over 500 modules in the Kubernetes collection for automating tasks within clusters. Modules can deploy apps, configure clusters, manage nodes, handle networking, autoscaling, and security. Ansible is agentless, using OpenSSH to connect and leverage the Kubernetes API. Ansible integrates smoothly with Kubernetes tools like Helm, Kubespray, and Terraform. Ansible playbooks and Kubernetes modules enable automated and idempotent management of production Kubernetes infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kubespray&lt;/strong&gt; automates production-grade deployment of Kubernetes clusters across cloud providers. It integrates natively with tools like Ansible, Terraform, Helm, and Kustomize for full lifecycle management. Kubespray handles cluster provisioning, configuration, upgrading, scaling, and more to simplify Kubernetes cluster operations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
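
&lt;p&gt;To make the infrastructure-as-code approach concrete, here is a minimal Terraform sketch that points the Kubernetes provider at an existing cluster and manages a namespace declaratively (the kubeconfig path and namespace name are illustrative):&lt;/p&gt;

```hcl
# Configure the Kubernetes provider against an existing cluster.
provider "kubernetes" {
  config_path = "~/.kube/config"
}

# Manage a namespace as code: "terraform apply" creates it,
# "terraform destroy" removes it, and drift is detected on plan.
resource "kubernetes_namespace" "staging" {
  metadata {
    name = "staging"
  }
}
```

&lt;p&gt;The same pattern extends to deployments, ingress, RBAC, and storage resources, giving the whole cluster configuration a reviewable change history.&lt;/p&gt;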

&lt;h3&gt;
  
  
  Secret Management Tools
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CyberArk Conjur&lt;/strong&gt; integrates with Kubernetes to provide robust secret management, access controls, and identity management capabilities essential for secure DevOps workflows. It enables teams to securely manage credentials, keys, certificates, and other secrets needed across Kubernetes environments and pipelines. Conjur brings auditing visibility, granular access policies, and RBAC integration to strengthen security across the Kubernetes stack. Its automation and integration with CI/CD pipelines and infrastructure-as-code tools give engineering teams more control over secrets management as they adopt Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HashiCorp Vault&lt;/strong&gt; manages and secures sensitive secrets like tokens, passwords, keys, and certificates used by Kubernetes clusters, applications, and tools. It centralizes secrets management with encryption, revocation, renewal, and auditing to provide teams visibility and control. Vault integrates with CI/CD and infrastructure as code tools to inject secrets safely into Kubernetes environments. Its dynamic secrets and automatic rotation remove manual burdens for teams. These capabilities make Vault a crucial DevOps tool for securely automating secrets handling as part of Kubernetes workflows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Secrets Manager&lt;/strong&gt; integrates deeply with Kubernetes to control access and reduce risks related to important secrets like database credentials and API keys used by applications. It brings fine-grained access controls, least-privilege permissions, and audit trails to improve Kubernetes secrets security. Secrets Manager eliminates manual secret handling through automated rotation and versioning. Its ability to manage credentials at scale while providing visibility makes Secrets Manager an essential DevOps tool for teams adopting Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
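
&lt;p&gt;As one example of how these tools integrate, HashiCorp Vault's agent injector can mount secrets into a pod via annotations instead of baking them into images or environment variables. A hedged, abbreviated sketch (the deployment name, Vault role, and secret path are placeholders, and the usual selector/container fields are omitted for brevity):&lt;/p&gt;

```yaml
# Pod template annotations consumed by the Vault agent injector webhook.
# The injected sidecar fetches the secret and writes it under /vault/secrets/.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api            # illustrative name
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "payments"   # placeholder Vault role
        vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/payments/db"
```

&lt;p&gt;Because the application reads the secret from a file the sidecar maintains, rotation happens without rebuilding or redeploying the image.&lt;/p&gt;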

&lt;h2&gt;
  
  
  Using Telepresence for Kubernetes Development
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.getambassador.io/products/telepresence"&gt;Telepresence&lt;/a&gt; is a developer productivity tool that connects your local development environment to a cluster, allowing you to maintain your favorite local development practices while working as if you were in your integration environment.&lt;/p&gt;

&lt;p&gt;You can run &lt;code&gt;telepresence connect&lt;/code&gt; and talk to pods in the cluster via your browser or &lt;code&gt;curl&lt;/code&gt; as if you were a pod in the cluster. You can also intercept pods in the cluster and have requests to those pods routed to the code running locally on your laptop, bringing the fast feedback of local development to Kubernetes. Some unique benefits of Telepresence include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fewer environments to manage&lt;/strong&gt;: With Telepresence, developers can share a development or staging cluster and receive just their test requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost savings&lt;/strong&gt;: When developers don’t need their own dev environments and databases, your cloud bill shrinks with every node you can turn off.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No more tedious build-test-deploy cycles&lt;/strong&gt;: Developers work faster and use less CI time by making live code changes proxied to remote Kubernetes clusters.&lt;/li&gt;
&lt;/ul&gt;
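
&lt;p&gt;In practice the workflow looks roughly like this (the service name, namespace, and port are illustrative):&lt;/p&gt;

```shell
# Connect your laptop to the cluster's network.
telepresence connect

# Cluster DNS now resolves locally, so you can hit in-cluster services:
curl http://my-service.default.svc.cluster.local:8080/

# Route traffic destined for the service's pod to a process on your laptop.
telepresence intercept my-service --port 8080
```

&lt;p&gt;While the intercept is active, you edit and restart code locally and every request to the remote service exercises your changes immediately.&lt;/p&gt;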

&lt;p&gt;This lightweight, local development experience accelerates iterating on apps interacting with remote Kubernetes services. Teams can catch issues early before deploying to production. Telepresence simplifies developing microservices on Kubernetes, bridging local and remote environments seamlessly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Criteria for Evaluating Kubernetes DevOps Tools
&lt;/h2&gt;

&lt;p&gt;With the plethora of tools available, focus on these factors when choosing solutions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;User-Friendliness&lt;/strong&gt;: Seek tools with intuitive interfaces and easy adoption. Complexity hinders productivity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compatibility&lt;/strong&gt;: Integration with other tools in the stack is key. Prioritize open standards over walled gardens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Community Backing&lt;/strong&gt;: Look for active user communities that drive improvements and provide learning resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pricing&lt;/strong&gt;: Balance feature set against the total cost of ownership for commercial tools. Avoid vendor lock-in.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability &amp;amp; Performance&lt;/strong&gt;: Tools must scale alongside usage without degradation. Review benchmarks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Audit security practices and access controls. This is critical when dealing with sensitive data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Evaluating against these key focus areas will help you choose a cohesive DevOps toolkit for Kubernetes. Prioritize capabilities that map to your specific workflows and constraints. This thoughtful selection process leads to long-term efficiency gains, optimized workflows, and getting the most from Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Kubernetes has become the leading platform for deploying containerized applications at scale. However, fully realizing its benefits depends on adopting the right set of Kubernetes DevOps tools and workflows.&lt;/p&gt;

&lt;p&gt;This guide provided an overview of the most essential Kubernetes DevOps tools across CI/CD, deployment, monitoring, automation, and other areas. While Kubernetes solves major technology challenges, complementary tools address processes and collaboration.&lt;/p&gt;

&lt;p&gt;By leveraging solutions like Jenkins, Helm, and Datadog, teams can optimize productivity and application quality. Adopting this new DevOps toolkit tailored for Kubernetes will accelerate your software delivery.&lt;/p&gt;

&lt;p&gt;The variety of options also means evaluating your needs, environment, and constraints before choosing solutions. Focus on capabilities, integration, usability, and community support during assessments.&lt;/p&gt;

&lt;p&gt;This new generation of purpose-built Kubernetes DevOps tools represents a turning point for optimizing Kubernetes productivity, resilience, and delivery.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
