<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Tosin Akinosho</title>
    <description>The latest articles on Forem by Tosin Akinosho (@tosin2013).</description>
    <link>https://forem.com/tosin2013</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F411431%2Ff04acea6-fa0d-4a9c-b1c3-59f5fe7bf67a.png</url>
      <title>Forem: Tosin Akinosho</title>
      <link>https://forem.com/tosin2013</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tosin2013"/>
    <language>en</language>
    <item>
      <title>Stop Using CI Scripts to Validate Jupyter Notebooks. Use a Kubernetes Operator Instead.</title>
      <dc:creator>Tosin Akinosho</dc:creator>
      <pubDate>Wed, 25 Mar 2026 13:00:57 +0000</pubDate>
      <link>https://forem.com/tosin2013/stop-using-ci-scripts-to-validate-jupyter-notebooks-use-a-kubernetes-operator-instead-4h5k</link>
      <guid>https://forem.com/tosin2013/stop-using-ci-scripts-to-validate-jupyter-notebooks-use-a-kubernetes-operator-instead-4h5k</guid>
      <description>&lt;p&gt;&lt;code&gt;jupyter nbconvert --execute&lt;/code&gt; tells you the notebook ran. It doesn't tell you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Whether it ran with the right GPU, memory limits, or node type&lt;/li&gt;
&lt;li&gt;Whether the secrets it needs are actually accessible&lt;/li&gt;
&lt;li&gt;Whether the model endpoint it calls is returning correct predictions&lt;/li&gt;
&lt;li&gt;Whether cell outputs regressed from last week's golden baseline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That gap is why notebooks keep breaking in production despite green CI. The &lt;a href="https://github.com/tosin2013/jupyter-notebook-validator-operator" rel="noopener noreferrer"&gt;Jupyter Notebook Validator Operator&lt;/a&gt; closes it by running validation inside Kubernetes — same environment, same resources, same model endpoints as production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Start
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone and install CRDs&lt;/span&gt;
git clone https://github.com/tosin2013/jupyter-notebook-validator-operator.git
&lt;span class="nb"&gt;cd &lt;/span&gt;jupyter-notebook-validator-operator
make deploy &lt;span class="nv"&gt;IMG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;quay.io/tosin2013/jupyter-notebook-validator-operator:latest

&lt;span class="c"&gt;# Verify the controller is running&lt;/span&gt;
kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; jupyter-notebook-validator-operator-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then submit a validation job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mlops.mlops.dev/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NotebookValidationJob&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-notebook-validation&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mlops-staging&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;notebook&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;git&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://github.com/your-org/ml-models"&lt;/span&gt;
      &lt;span class="na"&gt;ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;notebooks/inference.ipynb"&lt;/span&gt;
  &lt;span class="na"&gt;podConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;containerImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;quay.io/jupyter/scipy-notebook:latest"&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8Gi"&lt;/span&gt;
  &lt;span class="na"&gt;validation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;goldenNotebook&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; validation-job.yaml
kubectl get notebookvalidationjob my-notebook-validation &lt;span class="nt"&gt;-n&lt;/span&gt; mlops-staging &lt;span class="nt"&gt;-w&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;When you submit a &lt;code&gt;NotebookValidationJob&lt;/code&gt;, the controller:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clones your repo into an ephemeral volume (no manual file staging)&lt;/li&gt;
&lt;li&gt;Schedules a validation pod with your exact resource spec — GPU nodes, memory limits, node selectors&lt;/li&gt;
&lt;li&gt;Executes the notebook via &lt;strong&gt;Papermill&lt;/strong&gt; — cell-by-cell, full output capture&lt;/li&gt;
&lt;li&gt;Diffs outputs against the golden baseline (catches silent regressions)&lt;/li&gt;
&lt;li&gt;Calls your model serving endpoint and validates predictions&lt;/li&gt;
&lt;li&gt;Writes results back to the CR status — queryable with &lt;code&gt;kubectl&lt;/code&gt;, Prometheus metrics included&lt;/li&gt;
&lt;li&gt;Pod terminates&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;No persistent services. No management overhead.&lt;/p&gt;
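Step 6 means you can script pass/fail checks straight off the status block. Below is a sketch of pulling the failure reason out of a saved dump; the condition field names are invented for illustration, so check the operator's CRD for the real schema:

```shell
# In a live cluster you would produce status.json with:
#   kubectl get notebookvalidationjob my-notebook-validation -n mlops-staging -o json > status.json
# The condition shape below is illustrative, not the operator's actual schema.
cat > status.json <<'EOF'
{"status":{"conditions":[{"type":"Validated","status":"False","reason":"GoldenDiffMismatch"}]}}
EOF

# Extract the reason of the failing condition with plain POSIX tools.
grep -o '"reason":"[^"]*"' status.json | cut -d'"' -f4
```

The same one-liner works in a CI gate: a non-empty reason on a `False` condition is your signal to fail the pipeline.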




&lt;h2&gt;
  
  
  Model-Aware Validation
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 This is the differentiator. Most validators check for exceptions. This checks whether your model is actually behaving correctly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Connect a validation job directly to your serving infrastructure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;validation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;goldenNotebook&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;modelEndpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;fraud-model"&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;kserve"&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://fraud-model.kserve-inference.svc.cluster.local/v1/models/fraud-model:predict"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Supported serving platforms: &lt;strong&gt;KServe, OpenShift AI, vLLM, TorchServe, TensorFlow Serving, Triton Inference Server, Ray Serve, Seldon, BentoML.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Authentication via Kubernetes Secrets, External Secrets Operator, or HashiCorp Vault — no plaintext credentials in notebooks.&lt;/p&gt;
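Under the hood, the endpoint check amounts to a predict call in the serving platform's wire format. For the KServe v1 protocol used in the example above, the request body looks like this; the feature values are made up, and the in-cluster URL requires cluster network access or a port-forward:

```shell
# KServe v1 "predict" payload shape; the feature values are illustrative.
payload='{"instances": [[0.12, 0.87, 0.44]]}'

# The actual call (only works with in-cluster DNS or a port-forward):
#   curl -s -X POST -H "Content-Type: application/json" -d "$payload" \
#     http://fraud-model.kserve-inference.svc.cluster.local/v1/models/fraud-model:predict

# Local sanity check that the payload has the expected top-level key.
echo "$payload" | grep -c '"instances"'
```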




&lt;h2&gt;
  
  
  GPU Workloads
&lt;/h2&gt;

&lt;p&gt;For GPU-dependent notebooks, set the resource limits and the scheduler handles the rest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;podConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containerImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;quay.io/jupyter/pytorch-notebook:cuda12-latest"&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;32Gi"&lt;/span&gt;
      &lt;span class="na"&gt;nvidia.com/gpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
  &lt;span class="na"&gt;nodeSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nvidia.com/gpu.product&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;A100-SXM4-80GB"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The validation pod lands on a GPU node. Your CI runner doesn't need CUDA anywhere near it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Debugging Failed Validations
&lt;/h2&gt;

&lt;p&gt;When a job fails, check the CR status first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe notebookvalidationjob my-notebook-validation &lt;span class="nt"&gt;-n&lt;/span&gt; mlops-staging
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;status.conditions&lt;/code&gt; block will show where it failed — clone, execution, golden diff, or model endpoint check.&lt;/p&gt;

&lt;p&gt;For deeper inspection, grab the validation pod logs before they terminate (or increase the TTL in spec):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Get the pod name from the CR status&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;-n&lt;/span&gt; mlops-staging &amp;lt;validation-pod-name&amp;gt; &lt;span class="nt"&gt;-c&lt;/span&gt; notebook-executor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Tip:&lt;/strong&gt; If the model endpoint check fails but execution passes, check your Kubernetes Secret first. The operator surfaces auth errors in the CR status under &lt;code&gt;modelValidation.error&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;RBAC with minimal required permissions — controller only has the API access it needs&lt;/li&gt;
&lt;li&gt;Pod Security Standards compliant validation pods&lt;/li&gt;
&lt;li&gt;External Secrets Operator integration for secret rotation&lt;/li&gt;
&lt;li&gt;Resource quotas to prevent runaway notebooks consuming cluster capacity&lt;/li&gt;
&lt;li&gt;Runs safely in multi-tenant clusters without granting data scientists cluster-admin&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;⭐ &lt;strong&gt;Star the repo:&lt;/strong&gt; &lt;a href="https://github.com/tosin2013/jupyter-notebook-validator-operator" rel="noopener noreferrer"&gt;github.com/tosin2013/jupyter-notebook-validator-operator&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is v0.1.0 — early days. If you're running notebooks in production and have opinions on the CRD design, serving platform integrations, or validation patterns, open an issue or drop a comment below.&lt;/p&gt;

&lt;p&gt;Drop a ❤️ or 🦄 if this was useful — helps more platform engineers find it.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>mlops</category>
      <category>devops</category>
      <category>jupyter</category>
    </item>
    <item>
      <title>Escaping the "Blind Phase": How to Debug OpenShift 4 LDAP &amp; Active Directory Logins</title>
      <dc:creator>Tosin Akinosho</dc:creator>
      <pubDate>Fri, 13 Mar 2026 16:15:45 +0000</pubDate>
      <link>https://forem.com/tosin2013/escaping-the-blind-phase-how-to-debug-openshift-4-ldap-active-directory-logins-56h0</link>
      <guid>https://forem.com/tosin2013/escaping-the-blind-phase-how-to-debug-openshift-4-ldap-active-directory-logins-56h0</guid>
      <description>&lt;p&gt;If you manage an OpenShift 4 cluster, you’ve likely stared down this exact scenario: A user pings you saying they can’t log into the web console. You confidently pull up the logs for the &lt;code&gt;oauth-openshift&lt;/code&gt; pods, fully expecting to see a typo in a password or an expired LDAP bind account. &lt;/p&gt;

&lt;p&gt;Instead, you see... absolutely nothing. &lt;/p&gt;

&lt;p&gt;The logs show a generic &lt;code&gt;HTTP 401 Unauthorized&lt;/code&gt; response, but there is zero trace of the actual LDAP network handshakes, TLS negotiations, or payload exchanges. &lt;/p&gt;

&lt;p&gt;Welcome to the &lt;strong&gt;"Blind Phase"&lt;/strong&gt; of OpenShift troubleshooting.&lt;/p&gt;

&lt;p&gt;Because OpenShift 4 relies on a declarative Authentication Operator, the default log level (&lt;code&gt;Normal&lt;/code&gt;) deliberately suppresses verbose directory traffic. This is great for keeping noisy logs from filling your Elasticsearch PVCs and for preventing credential leakage, but it makes diagnosing a basic LDAP outage nearly impossible. &lt;/p&gt;

&lt;p&gt;A firewall drop (I/O Timeout) looks exactly the same in the logs as an Active Directory account lockout (Result Code 49). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmid6q9oquodjlgjh7155.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmid6q9oquodjlgjh7155.png" alt=" " width="800" height="952"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the exact, systematic workflow to pierce the blind phase, expose the root cause, prove it to your network team, and clean up afterward.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 1: Turn on the "X-Ray" (Enable Debug Logging)
&lt;/h3&gt;

&lt;p&gt;You can't fix what you can't see. You must temporarily mutate the cluster's global authentication resource to force the &lt;code&gt;oauth-openshift&lt;/code&gt; pods into &lt;code&gt;Debug&lt;/code&gt; mode. &lt;/p&gt;

&lt;p&gt;Execute this merge patch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc patch authentications.operator.openshift.io/cluster &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;merge &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'{"spec":{"logLevel":"Debug"}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The operator will immediately trigger a rolling restart of your authentication pods with the new verbosity injected. Once they are ready, tail the logs (filtering out the noisy health checks):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc logs &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;oauth-openshift &lt;span class="nt"&gt;-n&lt;/span&gt; openshift-authentication | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; healthz &lt;span class="nt"&gt;-e&lt;/span&gt; metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Step 2: Look for the "Holy Trinity" of LDAP Failures
&lt;/h3&gt;

&lt;p&gt;With the X-Ray on, every LDAP transaction is exposed in real-time. Watch the logs for failures in these three sequential phases. A failure in phase 1 prevents phase 2, and so on.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Connectivity (Network &amp;amp; Cryptography)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Symptom:&lt;/strong&gt; &lt;code&gt;dial tcp 10.X.X.X:389: i/o timeout&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Fix:&lt;/strong&gt; This is a pure network block. Check your OVN-Kubernetes egress IPs, EgressNetworkPolicies, and external enterprise firewalls.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;The Symptom:&lt;/strong&gt; &lt;code&gt;x509: certificate signed by unknown authority&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Fix:&lt;/strong&gt; The Active Directory server is using an internal CA. You must provide a ConfigMap containing the PEM-encoded CA bundle and reference it in the &lt;code&gt;ca.name&lt;/code&gt; field of your OpenShift OAuth configuration.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  2. Binding (Authentication)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Symptom:&lt;/strong&gt; &lt;code&gt;error binding to ou=abc... for search phase: LDAP Result Code 49 "Invalid Credentials"&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Fix:&lt;/strong&gt; Your &lt;strong&gt;Service Account&lt;/strong&gt; (the &lt;code&gt;bindDN&lt;/code&gt; OpenShift uses to search the directory) has a bad password, the wrong DN, or is locked out.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;The Symptom:&lt;/strong&gt; &lt;code&gt;Error authenticating login "user_name"... LDAP Result Code 49 "Invalid Credentials"&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Fix:&lt;/strong&gt; The Service Account is fine. The specific &lt;strong&gt;End User&lt;/strong&gt; just typed their password wrong. &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
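When Active Directory returns Result Code 49, it usually appends a vendor-specific sub-code (the "data NNN" field in the diagnostic message) that tells you which of these two cases, or a third, you are actually in. A small helper for decoding the commonly documented values:

```shell
# Map Active Directory's "data NNN" sub-code (appended to LDAP Result Code 49)
# to its real cause. These are Microsoft's documented logon error sub-codes.
decode_ad49() {
  case "$1" in
    525) echo "user not found" ;;
    52e) echo "invalid credentials (bad password)" ;;
    530) echo "logon not permitted at this time" ;;
    532) echo "password expired" ;;
    533) echo "account disabled" ;;
    775) echo "account locked out" ;;
    *)   echo "unrecognized sub-code: $1" ;;
  esac
}

decode_ad49 775   # prints: account locked out
```

This is the fastest way to distinguish a locked-out service account (775) from a plain bad password (52e) without calling the AD team.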

&lt;h4&gt;
  
  
  3. Mapping (Schema Translation)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Symptom:&lt;/strong&gt; The logs show a successful bind, but the user is still denied access. 

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Fix:&lt;/strong&gt; OpenShift authenticated the password but couldn't map the user to an internal Identity object. This is usually an Active Directory schema mismatch. Make sure you are mapping &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;preferredUsername&lt;/code&gt; to &lt;code&gt;sAMAccountName&lt;/code&gt; (Active Directory), NOT &lt;code&gt;uid&lt;/code&gt; (RFC-2307 LDAP).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
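That attribute mapping lives in the LDAP identity provider section of the cluster OAuth resource. A minimal sketch for Active Directory follows; the hostname, DNs, and Secret/ConfigMap names are placeholders for your environment:

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: active-directory
    type: LDAP
    mappingMethod: claim
    ldap:
      url: "ldaps://ldaphost.example.com/ou=Users,dc=office,dc=example,dc=com?sAMAccountName"
      bindDN: "CN=ocp-svc,OU=ServiceAccounts,DC=example,DC=com"
      bindPassword:
        name: ldap-bind-password   # Secret in openshift-config
      ca:
        name: ldap-ca-bundle       # ConfigMap in openshift-config
      attributes:
        id: ["sAMAccountName"]
        preferredUsername: ["sAMAccountName"]
        name: ["cn"]
        email: ["mail"]
```

The critical lines for the AD schema mismatch are the two `sAMAccountName` mappings; swapping in `uid` here is exactly the RFC-2307 mistake described above.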




&lt;h3&gt;
  
  
  Step 3: The Ultimate "It's Not OpenShift's Fault" Test
&lt;/h3&gt;

&lt;p&gt;Sometimes, network teams insist the firewall is open, or identity teams insist the service account works. You need to isolate OpenShift's Go-based LDAP client from the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;You do this by bypassing the &lt;code&gt;oauth-openshift&lt;/code&gt; pods entirely and running a raw &lt;code&gt;ldapsearch&lt;/code&gt; directly from an administrative Linux jumpbox that has line-of-sight to the directory network.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. From your administrative jumpbox, install the standard LDAP utilities&lt;/span&gt;
yum &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; openldap-clients &lt;span class="c"&gt;# (or apt-get install ldap-utils)&lt;/span&gt;

&lt;span class="c"&gt;# 2. Execute the raw LDAP query mirroring your OAuth config&lt;/span&gt;
ldapsearch &lt;span class="nt"&gt;-x&lt;/span&gt; &lt;span class="nt"&gt;-D&lt;/span&gt; &lt;span class="s2"&gt;"CN=ocp-svc,OU=ServiceAccounts,DC=example,DC=com"&lt;/span&gt; &lt;span class="nt"&gt;-W&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; ldaps://ldaphost.example.com &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="s2"&gt;"ou=Users,dc=office,dc=example,dc=com"&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; sub &lt;span class="s1"&gt;'(sAMAccountName=user1)'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If &lt;code&gt;ldapsearch&lt;/code&gt; times out, it's a network issue. If it throws an invalid credential error, the AD team gave you the wrong password. If it returns the full user payload, OpenShift's mapping configuration is the culprit. You now have definitive proof to attach to your support tickets.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 4: The Cleanup (CRITICAL)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Do not leave your cluster in Debug mode.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Running &lt;code&gt;oauth-openshift&lt;/code&gt; at elevated verbosity in a production environment will generate an enormous volume of log spam. It will chew through your OpenShift Logging (Elasticsearch/Loki) PVCs, potentially causing cluster-wide logging aggregation failures and risking the exposure of sensitive directory payloads.&lt;/p&gt;

&lt;p&gt;Once you have solved the login issue, safely revert to the default operational state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc patch authentications.operator.openshift.io/cluster &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;merge &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'{"spec":{"logLevel":"Normal"}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By understanding the operator intent model and systematically navigating the blind phase, you can turn a frustrating "Login Failed" screen into a precise, actionable root-cause analysis in minutes.&lt;/p&gt;




&lt;h3&gt;
  
  
  Learn More &amp;amp; Dive Deeper
&lt;/h3&gt;

&lt;p&gt;Want to master the intricacies of OpenShift authentication and RBAC? Check out the official resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📖 &lt;strong&gt;&lt;a href="https://docs.redhat.com/en/documentation/openshift_container_platform/4.21/html/authentication_and_authorization/configuring-identity-providers#configuring-ldap-identity-provider" rel="noopener noreferrer"&gt;OpenShift Docs: Configuring an LDAP Identity Provider&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;📖 &lt;strong&gt;&lt;a href="https://docs.redhat.com/en/documentation/openshift_container_platform/4.21/html/authentication_and_authorization/ldap-syncing" rel="noopener noreferrer"&gt;OpenShift Docs: Syncing LDAP Groups&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;🔧 &lt;strong&gt;&lt;a href="https://access.redhat.com/articles/6990472" rel="noopener noreferrer"&gt;Red Hat Knowledgebase: Troubleshooting LDAP Authentication in OpenShift 4&lt;/a&gt;&lt;/strong&gt; &lt;em&gt;(Requires Red Hat Login)&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>openshift</category>
      <category>authentication</category>
      <category>identitymanagement</category>
      <category>activedirectory</category>
    </item>
    <item>
      <title>Migrating from DAS to DRA in OpenShift: The Pragmatic Guide</title>
      <dc:creator>Tosin Akinosho</dc:creator>
      <pubDate>Wed, 11 Mar 2026 13:25:59 +0000</pubDate>
      <link>https://forem.com/tosin2013/migrating-from-das-to-dra-in-openshift-the-pragmatic-guide-2a03</link>
      <guid>https://forem.com/tosin2013/migrating-from-das-to-dra-in-openshift-the-pragmatic-guide-2a03</guid>
      <description>&lt;h1&gt;
  
  
  Migrating from DAS to DRA in OpenShift: The Pragmatic Guide
&lt;/h1&gt;

&lt;p&gt;If you are running high-density AI/ML workloads on OpenShift 4.20 or later, it's time to have a serious talk about your GPU partitioning strategy. &lt;/p&gt;

&lt;p&gt;For a long time, we relied on the Dynamic Accelerator Slicer (DAS)—usually via the InstaSlice operator—to dynamically carve out NVIDIA MIG partitions. It worked, but it was essentially a set of webhooks and custom scheduler plugins hacking the legacy device plugin model. &lt;/p&gt;

&lt;p&gt;With the introduction of &lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/" rel="noopener noreferrer"&gt;Dynamic Resource Allocation (DRA)&lt;/a&gt; as a native Kubernetes standard, Red Hat is officially deprecating DAS. DRA isn't just an upgrade; it is a total "cut-over" replacement that moves away from treating GPUs as dumb integer counts (&lt;code&gt;nvidia.com/gpu: 1&lt;/code&gt;) to treating them as complex objects you can query using CEL.&lt;/p&gt;
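As a taste of that query model, here is a sketch of a ResourceClaimTemplate that selects a MIG profile with a CEL expression. Treat the API version, device class name, and attribute keys as illustrative; they vary across NVIDIA DRA driver releases, so check the driver's reference before copying:

```yaml
# Illustrative DRA objects; exact group/version and attribute names
# depend on your Kubernetes release and NVIDIA DRA driver version.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: mig-1g-5gb
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: mig.nvidia.com        # placeholder device class
        selectors:
        - cel:
            expression: device.attributes["gpu.nvidia.com"].profile == "1g.5gb"
```

The point is the shift in mental model: instead of counting opaque integers, the scheduler evaluates a predicate over device attributes and binds a matching device to the claim.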

&lt;p&gt;Here is the pragmatic, step-by-step guide to ripping out DAS and implementing the &lt;a href="https://github.com/NVIDIA/k8s-dra-driver" rel="noopener noreferrer"&gt;NVIDIA DRA Driver&lt;/a&gt; in OpenShift.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites for the DRA Stack
&lt;/h2&gt;

&lt;p&gt;Before you start tearing things down, verify your software stack. DRA is a complex orchestration of the container runtime and hardware:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Version:&lt;/strong&gt; v1.34.2+ (Foundational support for stable DRA APIs. Note: OpenShift 4.20 aligns with Kubernetes ~1.33, but full DRA support matures in the 4.21/1.34+ timeframe.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NVIDIA Driver:&lt;/strong&gt; v580+ (Required for CDI and dynamic reconfiguration)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPU Operator:&lt;/strong&gt; v25.10.0+&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runtime:&lt;/strong&gt; CRI-O with Container Device Interface (CDI) enabled.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Phase 1: Scorched Earth (Decommissioning DAS)
&lt;/h2&gt;

&lt;p&gt;Because DAS and DRA use fundamentally different scheduling logic, they cannot co-exist. You must completely nuke the DAS environment. This isn't just deleting the operator pod; you have to clean the cluster state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# File: scripts/cleanup-das.sh&lt;/span&gt;
&lt;span class="c"&gt;# Lines: 1-14&lt;/span&gt;
&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="c"&gt;# 1. Stop Existing Workloads (Find and terminate pods using DAS resources)&lt;/span&gt;
&lt;span class="c"&gt;# Add --dry-run or echo before xargs in production to verify targets first.&lt;/span&gt;
&lt;span class="c"&gt;# Note: This checks primary containers. Adjust the jq path if your initContainers also request MIG slices.&lt;/span&gt;
oc get pods &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; json | &lt;span class="se"&gt;\&lt;/span&gt;
  jq &lt;span class="s1"&gt;'.items[] | select(.spec.containers[].resources.requests["mig.das.com"] != null) | .metadata.name'&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
  xargs &lt;span class="nt"&gt;-I&lt;/span&gt; &lt;span class="o"&gt;{}&lt;/span&gt; oc delete pod &lt;span class="o"&gt;{}&lt;/span&gt;

&lt;span class="c"&gt;# 2. Delete all legacy AllocationClaims (Note: verify the exact CRD name for your InstaSlice version)&lt;/span&gt;
oc delete allocationclaims &lt;span class="nt"&gt;--all&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; das-operator

&lt;span class="c"&gt;# 3. Verify the blast radius is clear&lt;/span&gt;
oc get crd | &lt;span class="nb"&gt;grep &lt;/span&gt;allocationclaim

&lt;span class="c"&gt;# 4. (Manual Step) Remove the DAS subscription, OperatorGroup, and the das-operator namespace.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Phase 2: Deploying the DRA Driver
&lt;/h2&gt;

&lt;p&gt;Once the nodes are clean, you need to prepare the workers and deploy the NVIDIA GPU Operator differently than you used to. &lt;/p&gt;

&lt;p&gt;First, label your DRA-targeted nodes to prevent the driver manager from evicting critical plugins during reconfiguration:&lt;br&gt;
&lt;code&gt;oc label node &amp;lt;node-name&amp;gt; nvidia.com/dra-kubelet-plugin=true&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;When installing the GPU Operator, &lt;strong&gt;you must disable the legacy device plugin&lt;/strong&gt;. If you don't, you'll end up with resource advertisement conflicts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# File: values/gpu-operator-values.yaml&lt;/span&gt;
&lt;span class="c1"&gt;# Lines: 1-10&lt;/span&gt;
&lt;span class="c1"&gt;# GPU Operator Helm Values for DRA&lt;/span&gt;
&lt;span class="na"&gt;devicePlugin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="c1"&gt;# Critical: hands control over to DRA&lt;/span&gt;

&lt;span class="c1"&gt;# When installing the separate DRA Driver chart (conceptual illustration):&lt;/span&gt;
&lt;span class="na"&gt;nvidiaDriverRoot&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/run/nvidia/driver&lt;/span&gt;
&lt;span class="na"&gt;gpuResourcesEnabledOverride&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="c1"&gt;# Required for full GPU &amp;amp; MIG allocation support via DRA&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Gotcha warning:&lt;/em&gt; If you are running A100s, changing the MIG partition layout via DRA currently requires a manual restart of the DRA kubelet plugin pod (e.g., &lt;code&gt;oc delete pod -l app.kubernetes.io/name=nvidia-dra-driver-kubelet-plugin -n gpu-operator&lt;/code&gt;). The manager doesn't auto-evict the plugin during reconfiguration yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 3: Rewriting Your Pod Manifests
&lt;/h2&gt;

&lt;p&gt;This is the biggest change for developers. GPU requests no longer go through &lt;code&gt;resources.limits&lt;/code&gt;; they move to &lt;code&gt;resources.claims&lt;/code&gt;, which reference named entries in the pod's &lt;code&gt;resourceClaims&lt;/code&gt; list.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Old Way (DAS):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# File: legacy-pod.yaml&lt;/span&gt;
&lt;span class="c1"&gt;# Lines: 1-10&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gemma-inference-legacy&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;llm&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;mig.das.com/1g.5gb&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The New Way (DRA):&lt;/strong&gt;&lt;br&gt;
You now reference a &lt;code&gt;ResourceClaimTemplate&lt;/code&gt;. Instead of just a single snippet, let's look at what a full, multi-container pod manifest actually looks like when interacting with DRA:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# File: dra-pod.yaml&lt;/span&gt;
&lt;span class="c1"&gt;# Lines: 1-28&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gemma-inference-dra&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;llm-worker&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-registry/vllm:latest&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;claims&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gpu-primary&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;monitoring-sidecar&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-registry/gpu-telemetry:latest&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# Sidecars don't need the GPU claim, they run alongside the workload&lt;/span&gt;
      &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;100m"&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt;
  &lt;span class="na"&gt;resourceClaims&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gpu-primary&lt;/span&gt;
    &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;resourceClaimTemplateName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;standard-mig-template&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This structural shift ensures the hardware is partitioned, reserved, and healthy &lt;em&gt;before&lt;/em&gt; the pod is even scheduled.&lt;/p&gt;
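&lt;p&gt;The manifest above points at &lt;code&gt;standard-mig-template&lt;/code&gt; without showing it. Here's a minimal sketch of what that &lt;code&gt;ResourceClaimTemplate&lt;/code&gt; might look like — the device class name and the CEL attribute path are illustrative assumptions, so check what your DRA driver actually advertises in its &lt;code&gt;ResourceSlices&lt;/code&gt;:&lt;/p&gt;

```yaml
# Hypothetical ResourceClaimTemplate backing the "standard-mig-template"
# reference above. deviceClassName and the CEL attribute path are
# assumptions -- verify against the attributes your driver publishes.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: standard-mig-template
  namespace: my-ai-namespace
spec:
  spec:
    devices:
      requests:
      - name: mig-slice
        deviceClassName: mig.nvidia.com
        selectors:
        - cel:
            expression: 'device.attributes["gpu.nvidia.com"].profile == "1g.5gb"'
```

&lt;p&gt;This is where CEL enters the picture: the scheduler evaluates the expression against every advertised device before binding the claim.&lt;/p&gt;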

&lt;h3&gt;
  
  
  Operator Dependencies
&lt;/h3&gt;

&lt;p&gt;When managing this lifecycle, you must ensure the NVIDIA GPU Operator (v25.10.0+) is orchestrating the DRA Driver components correctly. If you've been managing GPU operators through the Red Hat OpenShift OperatorHub via OLM (Operator Lifecycle Manager), be aware that the transition requires explicit &lt;code&gt;Subscription&lt;/code&gt; and &lt;code&gt;CSV&lt;/code&gt; (ClusterServiceVersion) awareness. You can't just apply the new driver and hope OLM understands the state change.&lt;/p&gt;
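&lt;p&gt;Concretely, that means checking which channel your &lt;code&gt;Subscription&lt;/code&gt; tracks before the driver upgrade lands. A hedged example — the channel and catalog names below are assumptions, so compare them against your own cluster:&lt;/p&gt;

```yaml
# Illustrative OLM Subscription for the NVIDIA GPU Operator. The channel
# value is an assumption; confirm available channels with
# `oc get packagemanifest gpu-operator-certified -o yaml`.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: gpu-operator-certified
  namespace: nvidia-gpu-operator
spec:
  channel: v25.10
  name: gpu-operator-certified
  source: certified-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual   # avoid surprise upgrades mid-migration
```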

&lt;h3&gt;
  
  
  Debugging DRA Allocations
&lt;/h3&gt;

&lt;p&gt;What happens when your pod is pending and you aren't sure why? In the old DAS days, you checked the &lt;code&gt;AllocationClaim&lt;/code&gt; objects. In DRA, the nodes advertise their capacity via the &lt;code&gt;ResourceSlice&lt;/code&gt; API.&lt;/p&gt;

&lt;p&gt;Here is the exact workflow for figuring out why your GPU hasn't attached:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Check if the cluster even sees your node's physical GPUs&lt;/span&gt;
oc get resourceslices

&lt;span class="c"&gt;# 2. Check the specific status of your pod's claim&lt;/span&gt;
oc get resourceclaim &lt;span class="nt"&gt;-n&lt;/span&gt; my-ai-namespace
&lt;span class="c"&gt;# Look for STATUS: Pending or Allocated&lt;/span&gt;

&lt;span class="c"&gt;# 3. If Pending, describe the claim to see the K8s scheduler's CEL evaluation&lt;/span&gt;
oc describe resourceclaim my-gpu-claim-abc12 &lt;span class="nt"&gt;-n&lt;/span&gt; my-ai-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the scheduler cannot match your pod's hardware request (e.g., you asked for &lt;code&gt;device.attributes.vram &amp;gt;= 80GB&lt;/code&gt; but only have 40GB A100s), the &lt;code&gt;describe&lt;/code&gt; output will explicitly tell you the CEL evaluation failed on all available nodes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The NVLink Bonus: ComputeDomains
&lt;/h2&gt;

&lt;p&gt;While dynamic MIG slicing is the primary DAS replacement, DRA brings a massive upgrade for folks running NVIDIA GB200 or HGX systems: &lt;strong&gt;ComputeDomains&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Instead of just dividing a single GPU, ComputeDomains allow you to securely share GPU memory &lt;em&gt;across multiple nodes&lt;/em&gt; via Multi-Node NVLink (MNNVL). By specifying a &lt;code&gt;computeDomainName&lt;/code&gt; in your pod claims, the DRA driver handles all the heavy lifting to establish connectivity among pods in that domain. This keeps them isolated from other namespaces while operating at full NVLink speeds. For large-scale distributed training, this alone is worth the migration effort.&lt;/p&gt;
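&lt;p&gt;For a rough idea of the shape, here's a sketch modeled on the NVIDIA DRA driver's published examples — the API group, version, and field names may differ in your driver release, so treat this as illustrative only:&lt;/p&gt;

```yaml
# Illustrative ComputeDomain; verify the exact schema against the
# k8s-dra-driver version you deploy.
apiVersion: resource.nvidia.com/v1beta1
kind: ComputeDomain
metadata:
  name: training-domain
spec:
  numNodes: 4
  channel:
    resourceClaimTemplate:
      name: training-domain-channel   # pods reference this template in their claims
```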

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Moving from DAS to DRA is a paradigm shift. It requires coordination between platform teams and developers to rewrite manifests, but the payoff is a native, stable, and highly expressive hardware scheduling API. Query the &lt;code&gt;ResourceSlice&lt;/code&gt; API, watch your devices get cleanly allocated, and enjoy retiring those hacky mutating webhooks.&lt;/p&gt;

&lt;h3&gt;
  
  
  External References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://github.com/NVIDIA/k8s-dra-driver" rel="noopener noreferrer"&gt;NVIDIA k8s-dra-driver Repository&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/" rel="noopener noreferrer"&gt;Kubernetes Core DRA Documentation&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://kubernetes.io/docs/reference/using-api/cel/" rel="noopener noreferrer"&gt;CEL Rules in Kubernetes&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>gpu</category>
      <category>openshift</category>
    </item>
    <item>
      <title>DNS Governance for OpenShift Beginners: A Friendly Guide</title>
      <dc:creator>Tosin Akinosho</dc:creator>
      <pubDate>Tue, 24 Feb 2026 20:22:54 +0000</pubDate>
      <link>https://forem.com/tosin2013/dns-governance-for-openshift-beginners-a-friendly-guid-172n</link>
      <guid>https://forem.com/tosin2013/dns-governance-for-openshift-beginners-a-friendly-guid-172n</guid>
      <description>&lt;h2&gt;
  
  
  Wait, Why Should I Care About DNS?
&lt;/h2&gt;

&lt;p&gt;Let me start with a story. Early in my career, I got paged at 2 AM because "nothing was working." Applications were timing out, users couldn't access services, and my monitoring was completely useless. After hours of panic, we discovered someone had accidentally modified the DNS configuration, and the entire cluster couldn't resolve internal service names.&lt;/p&gt;

&lt;p&gt;That night changed how I think about DNS. It's the foundation everything else builds on, and when it breaks, nothing works.&lt;/p&gt;

&lt;p&gt;In this guide, I'll walk you through how to set up DNS governance for OpenShift using Red Hat Advanced Cluster Management (RHACM). Don't worry if you're new to this - I'll explain everything from scratch.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Even Is DNS in OpenShift?
&lt;/h2&gt;

&lt;p&gt;First, let's talk about what DNS does in your cluster. You can think of DNS as the phonebook of the internet. When your application wants to talk to another service (like a database or API), it needs to know the IP address. DNS translates service names to IP addresses.&lt;/p&gt;

&lt;p&gt;In OpenShift, this is handled by CoreDNS - a DNS server that runs on every node in your cluster. Each node has its own DNS resolver, which means your pods don't need to go far to resolve names.&lt;/p&gt;

&lt;p&gt;Here's the cool part: OpenShift automatically creates DNS entries for your services. If you create a service called &lt;code&gt;my-app&lt;/code&gt; in the namespace &lt;code&gt;production&lt;/code&gt;, other pods can reach it just by using &lt;code&gt;my-app.production.svc.cluster.local&lt;/code&gt;. No manual configuration needed.&lt;/p&gt;
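&lt;p&gt;The name itself is pure convention — service, then namespace, then the cluster suffix. You can see the pattern without touching a cluster (the actual lookup, of course, has to run from inside a pod):&lt;/p&gt;

```shell
# Assemble the in-cluster FQDN for a Service. "cluster.local" is the
# default cluster domain; yours may be configured differently.
svc="my-app"
ns="production"
fqdn="${svc}.${ns}.svc.cluster.local"
echo "${fqdn}"   # -> my-app.production.svc.cluster.local

# From inside any pod, you would verify resolution with:
#   nslookup "${fqdn}"
```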

&lt;p&gt;The DNS Operator in OpenShift manages all of this. It watches for services you create and automatically updates the DNS records. Pretty handy, right?&lt;/p&gt;




&lt;h2&gt;
  
  
  What's RHACM and Why Do I Need It?
&lt;/h2&gt;

&lt;p&gt;Now, let's talk about RHACM. If you're managing multiple OpenShift clusters (like a development cluster, staging, and production), RHACM is like a "hub" that lets you control all of them from one place.&lt;/p&gt;

&lt;p&gt;One of RHACM's superpowers is &lt;strong&gt;policies&lt;/strong&gt;. A policy is basically a rule that says "this is how things should be configured." You define the policy on your hub cluster, and RHACM makes sure all your managed clusters comply with it.&lt;/p&gt;

&lt;p&gt;For DNS governance, we want policies that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check if DNS is healthy&lt;/li&gt;
&lt;li&gt;Verify the configuration hasn't drifted&lt;/li&gt;
&lt;li&gt;Alert us if something goes wrong&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Four Things We Need to Monitor
&lt;/h2&gt;

&lt;p&gt;Here's my simple framework for DNS governance. We're going to check four things:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Is the DNS Operator Happy?
&lt;/h3&gt;

&lt;p&gt;The DNS Operator is the thing that manages CoreDNS. If it's sad (degraded), nothing else will work properly. We monitor this using something called a &lt;code&gt;ClusterOperator&lt;/code&gt; resource.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Is the Corefile Correct?
&lt;/h3&gt;

&lt;p&gt;The Corefile is the configuration file for CoreDNS. It tells DNS what plugins to use and how to handle queries. We want to make sure critical plugins are always present.&lt;/p&gt;
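&lt;p&gt;For context, a stripped-down Corefile in roughly the shape OpenShift generates looks like this — illustrative, not copied from a live cluster; the real one carries more plugins and your upstream resolvers:&lt;/p&gt;

```text
.:5353 {
    errors
    health
    cache 900
    forward . /etc/resolv.conf
}
```

&lt;p&gt;If &lt;code&gt;forward&lt;/code&gt; disappears, external names stop resolving; if &lt;code&gt;health&lt;/code&gt; disappears, Kubernetes can't probe CoreDNS. That's why we check for these plugins specifically.&lt;/p&gt;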

&lt;h3&gt;
  
  
  3. Are All DNS Pods Running?
&lt;/h3&gt;

&lt;p&gt;CoreDNS runs as a DaemonSet (one pod per node). Sometimes pods show as "Running" but aren't actually working. We need to verify all expected pods are truly available.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Do We Get Alerted?
&lt;/h3&gt;

&lt;p&gt;If something goes wrong, we need to know about it. We'll set up alerts that page us when DNS has problems.&lt;/p&gt;




&lt;h2&gt;
  
  
  Let's Build It: Step-by-Step
&lt;/h2&gt;

&lt;p&gt;Ready to see how this works? Here's how to set up DNS governance on your cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;You'll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An OpenShift cluster with RHACM installed&lt;/li&gt;
&lt;li&gt;Access to the &lt;code&gt;oc&lt;/code&gt; command line tool&lt;/li&gt;
&lt;li&gt;Permission to create policies on the hub cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 1: Clone the Repository
&lt;/h3&gt;

&lt;p&gt;First, grab the policy templates from my repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/tosin2013/dns-policy-config.git
&lt;span class="nb"&gt;cd &lt;/span&gt;dns-policy-config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Create the Policy Namespace
&lt;/h3&gt;

&lt;p&gt;On your RHACM hub cluster, create a namespace to hold your DNS policies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; demo/namespace.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a namespace called &lt;code&gt;dns-governance-policies&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Bind Your ClusterSet
&lt;/h3&gt;

&lt;p&gt;Next, connect your managed cluster to this namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; demo/clusterset-binding.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells RHACM which clusters should receive these policies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Apply the DNS Policies
&lt;/h3&gt;

&lt;p&gt;Now let's add the four DNS governance policies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Monitor DNS Operator health&lt;/span&gt;
oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; policies/dns/operator-health-check.yaml

&lt;span class="c"&gt;# Check Corefile configuration&lt;/span&gt;
oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; policies/dns/corefile-integrity.yaml

&lt;span class="c"&gt;# Verify all DNS pods are running&lt;/span&gt;
oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; policies/dns/resource-exhaustion.yaml

&lt;span class="c"&gt;# Set up alerting&lt;/span&gt;
oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; policies/observability/dns-alerting-rule.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: Point to Your Cluster
&lt;/h3&gt;

&lt;p&gt;Edit the &lt;code&gt;demo/placement.yaml&lt;/code&gt; file to target your managed cluster. Look for the cluster name and change it to match yours:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster.open-cluster-management.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Placement&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dns-policy-placement&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dns-governance-policies&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;predicates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;requiredClusterSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labelSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;name&lt;/span&gt;
          &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
          &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;your-cluster-name&lt;/span&gt;  &lt;span class="c1"&gt;# Change this!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then apply it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; demo/placement.yaml
oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; demo/placement-binding.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 6: Check Compliance
&lt;/h3&gt;

&lt;p&gt;Let's see if everything is working:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc get policy &lt;span class="nt"&gt;-n&lt;/span&gt; dns-governance-policies
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                              REMEDIATION   COMPLIATION
policy-dns-operator-health        inform        Compliant
policy-dns-corefile-integrity    inform        Compliant
policy-dns-resource-exhaustion   inform        Compliant
policy-dns-alerting-rule         enforce       Compliant
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All four policies should show "Compliant"!&lt;/p&gt;




&lt;h2&gt;
  
  
  What Does Each Policy Actually Do?
&lt;/h2&gt;

&lt;p&gt;Let me break down each policy in plain English:&lt;/p&gt;

&lt;h3&gt;
  
  
  Policy 1: operator-health-check
&lt;/h3&gt;

&lt;p&gt;This watches the DNS Operator and makes sure it's not degraded. If the operator has problems, this policy will tell you.&lt;/p&gt;
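&lt;p&gt;Under the hood, a policy like this wraps an RHACM &lt;code&gt;ConfigurationPolicy&lt;/code&gt; that asserts the &lt;code&gt;dns&lt;/code&gt; ClusterOperator isn't degraded. A simplified sketch — the actual policy in the repo may differ in its details:&lt;/p&gt;

```yaml
# Simplified ConfigurationPolicy: report non-compliance if the "dns"
# ClusterOperator ever shows Degraded=True. Field values are illustrative.
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: dns-operator-health-check
spec:
  remediationAction: inform   # report, never auto-fix
  severity: high
  object-templates:
  - complianceType: mustnothave
    objectDefinition:
      apiVersion: config.openshift.io/v1
      kind: ClusterOperator
      metadata:
        name: dns
      status:
        conditions:
        - type: Degraded
          status: "True"
```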

&lt;h3&gt;
  
  
  Policy 2: corefile-integrity
&lt;/h3&gt;

&lt;p&gt;This checks that your CoreDNS configuration has the essential plugins: &lt;code&gt;forward&lt;/code&gt;, &lt;code&gt;errors&lt;/code&gt;, &lt;code&gt;health&lt;/code&gt;, and &lt;code&gt;cache&lt;/code&gt;. If any are missing, you'll know.&lt;/p&gt;

&lt;h3&gt;
  
  
  Policy 3: resource-exhaustion
&lt;/h3&gt;

&lt;p&gt;This verifies that the number of DNS pods actually running matches what should be running. Sometimes pods can be in a weird state - this catches that.&lt;/p&gt;

&lt;h3&gt;
  
  
  Policy 4: dns-alerting-rule
&lt;/h3&gt;

&lt;p&gt;This creates Prometheus alerts that will page you when DNS has problems. This is the only policy that uses "enforce" mode because it creates new alerting rules.&lt;/p&gt;
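&lt;p&gt;What it distributes is essentially a &lt;code&gt;PrometheusRule&lt;/code&gt;, something along these lines — the alert name, metric, and thresholds here are examples, not the exact rules shipped in the repo:&lt;/p&gt;

```yaml
# Illustrative PrometheusRule for DNS alerting; names and thresholds
# are examples only.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: dns-governance-alerts
  namespace: openshift-dns
spec:
  groups:
  - name: dns.rules
    rules:
    - alert: CoreDNSPanicsDetected
      expr: increase(coredns_panics_total[10m]) > 0
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: "CoreDNS reported panics on {{ $labels.instance }}"
```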




&lt;h2&gt;
  
  
  Why "Inform" Instead of "Enforce"?
&lt;/h2&gt;

&lt;p&gt;You might notice most policies use "inform" mode instead of "enforce." Here's why that's intentional:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Inform&lt;/strong&gt; means "tell me if something is wrong, but don't fix it automatically"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enforce&lt;/strong&gt; means "automatically fix things"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For DNS, automatic fixes are risky. Imagine if a policy accidentally overwrote your DNS configuration - you'd have a cluster-wide outage. By using "inform" mode, we get alerted to problems but don't risk making things worse automatically.&lt;/p&gt;

&lt;p&gt;The only exception is the alerting rule, which creates new alert definitions - that's safe to enforce.&lt;/p&gt;




&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;And that's it! You've now got DNS governance set up for your OpenShift cluster. Here's what you've accomplished:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Created policies that monitor your DNS Operator&lt;/li&gt;
&lt;li&gt;Verified your CoreDNS configuration is correct&lt;/li&gt;
&lt;li&gt;Ensured all DNS pods are actually running&lt;/li&gt;
&lt;li&gt;Set up alerts for when things go wrong&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DNS might seem like background infrastructure, but it deserves attention. With these policies in place, you'll know about DNS problems before they become cluster-wide outages.&lt;/p&gt;

&lt;p&gt;Remember: the best time to set up governance was when you deployed your cluster. The second best time is now.&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick Reference
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Commands used:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/tosin2013/dns-policy-config.git
oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; demo/namespace.yaml
oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; demo/clusterset-binding.yaml
oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; policies/dns/operator-health-check.yaml
oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; policies/dns/corefile-integrity.yaml
oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; policies/dns/resource-exhaustion.yaml
oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; policies/observability/dns-alerting-rule.yaml
oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; demo/placement.yaml
oc apply &lt;span class="nt"&gt;-f&lt;/span&gt; demo/placement-binding.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What to check:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc get policy &lt;span class="nt"&gt;-n&lt;/span&gt; dns-governance-policies
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;em&gt;Questions? Want to learn more? Check out the full repository at &lt;a href="https://github.com/tosin2013/dns-policy-config" rel="noopener noreferrer"&gt;github.com/tosin2013/dns-policy-config&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>openshift</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>dns</category>
    </item>
    <item>
      <title>How to Automate ADR Reviews and Keep Your Architectural Decisions in Sync with Your Codebase</title>
      <dc:creator>Tosin Akinosho</dc:creator>
      <pubDate>Sun, 25 Jan 2026 15:56:52 +0000</pubDate>
      <link>https://forem.com/tosin2013/how-to-automate-adr-reviews-and-keep-your-architectural-decisions-in-sync-with-your-codebase-kdn</link>
      <guid>https://forem.com/tosin2013/how-to-automate-adr-reviews-and-keep-your-architectural-decisions-in-sync-with-your-codebase-kdn</guid>
      <description>&lt;h2&gt;
  
  
  The Problem Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;If you've been in software architecture for any length of time, you know the pain. You carefully document an architectural decision in an ADR (Architectural Decision Record). You link it to pull requests, reference it in code reviews, and feel good about having actual documentation for once. Six months later, that ADR describes a system architecture that no longer exists.&lt;/p&gt;

&lt;p&gt;I've seen this pattern repeat across dozens of teams. ADRs start as living documents and end up as digital dust collectors. New team members read them and make decisions based on outdated information. Code reviews reference constraints that were removed ages ago. And nobody notices until something breaks.&lt;/p&gt;

&lt;p&gt;The core issue is that maintaining ADRs manually is time-consuming and easy to skip when deadlines loom. But what if you could automate it? What if an AI agent could review your ADRs against your actual codebase and tell you exactly what needs to be updated?&lt;/p&gt;

&lt;p&gt;That's exactly what the ADR Review and Synchronization Prompt enables. Let me walk you through how it works.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin, you'll need access to the MCP ADR Analysis Server:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MCP ADR Analysis Server&lt;/strong&gt;: &lt;a href="https://github.com/tosin2013/mcp-adr-analysis-server" rel="noopener noreferrer"&gt;https://github.com/tosin2013/mcp-adr-analysis-server&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ADR Aggregator&lt;/strong&gt;: &lt;a href="https://adraggregator.com/" rel="noopener noreferrer"&gt;https://adraggregator.com/&lt;/a&gt; (for managing multiple ADRs)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your ADRs should follow the standard format with status fields (Proposed, Accepted, Deprecated, Superseded, Implemented) stored in a directory like &lt;code&gt;docs/adrs/&lt;/code&gt; or &lt;code&gt;adr/&lt;/code&gt;.&lt;/p&gt;
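&lt;p&gt;If your ADRs don't have an explicit status yet, the minimal skeleton the review flow expects looks like this (the title and section text are just an example):&lt;/p&gt;

```markdown
# ADR-001: Use PostgreSQL as the primary datastore

## Status

Accepted

## Context

Why the decision was needed.

## Decision

What we chose and why.

## Consequences

What becomes easier or harder as a result.
```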




&lt;h2&gt;
  
  
  Full Prompt
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# ADR Review and Synchronization Prompt&lt;/span&gt;

&lt;span class="gu"&gt;## Persona&lt;/span&gt;

You are an expert Senior Software Architect responsible for maintaining the integrity and accuracy of our project's Architectural Decision Records (ADRs). Your primary goal is to ensure that ADRs are not just historical documents, but living artifacts that accurately reflect the current state of the codebase.

&lt;span class="gs"&gt;**Current Date:**&lt;/span&gt; 2026-01-24 20:02:21 EST

&lt;span class="gu"&gt;## Core Task&lt;/span&gt;

Your task is to perform a comprehensive review of all ADRs within the project located at &lt;span class="sb"&gt;`{PROJECT_PATH}`&lt;/span&gt;. You will use the &lt;span class="sb"&gt;`mcp-adr-analysis-server`&lt;/span&gt; to analyze the codebase and compare it against each ADR. The ultimate goal is to synchronize the ADRs with the code, updating their status and content as necessary.

&lt;span class="gu"&gt;## Step-by-Step Instructions&lt;/span&gt;
&lt;span class="p"&gt;
1.&lt;/span&gt;  &lt;span class="gs"&gt;**Initiate the Review:**&lt;/span&gt;
&lt;span class="p"&gt;    *&lt;/span&gt;   Invoke the &lt;span class="sb"&gt;`reviewExistingAdrs`&lt;/span&gt; tool from the &lt;span class="sb"&gt;`mcp-adr-analysis-server`&lt;/span&gt;.
&lt;span class="p"&gt;    *&lt;/span&gt;   Set &lt;span class="sb"&gt;`projectPath`&lt;/span&gt; to the root of the project you are analyzing.
&lt;span class="p"&gt;    *&lt;/span&gt;   Use &lt;span class="sb"&gt;`analysisDepth: 'comprehensive'`&lt;/span&gt; and &lt;span class="sb"&gt;`includeTreeSitter: true`&lt;/span&gt; to ensure the most accurate and in-depth analysis.
&lt;span class="p"&gt;    *&lt;/span&gt;   Set &lt;span class="sb"&gt;`generateUpdatePlan: true`&lt;/span&gt; to get actionable recommendations.
&lt;span class="p"&gt;    *&lt;/span&gt;   Example call:&lt;span class="sb"&gt;

        ```json
&lt;/span&gt;
        {
          "tool": "reviewExistingAdrs",
          "args": {
            "projectPath": "{PROJECT_PATH}",
            "adrDirectory": "docs/adrs",
            "analysisDepth": "comprehensive",
            "includeTreeSitter": true,
            "generateUpdatePlan": true
          }
        }
&lt;span class="sb"&gt;

        ```

&lt;/span&gt;&lt;span class="p"&gt;2.&lt;/span&gt;  &lt;span class="gs"&gt;**Analyze Each ADR:**&lt;/span&gt;
    For each ADR returned by the &lt;span class="sb"&gt;`reviewExistingAdrs`&lt;/span&gt; tool, perform the following analysis based on the &lt;span class="sb"&gt;`complianceScore`&lt;/span&gt;, &lt;span class="sb"&gt;`gaps`&lt;/span&gt;, and &lt;span class="sb"&gt;`recommendations`&lt;/span&gt;:&lt;span class="sb"&gt;

    *   **Case 1: Full Implementation (Compliance Score &amp;gt;= 8.0)**
        *   **Action:** Update the ADR status to `Implemented`.
        *   **Justification:** The code fully implements the decision recorded in the ADR.

    *   **Case 2: Partial Implementation (5.0 &amp;lt;= Compliance Score &amp;lt; 8.0)**
        *   **Action:** Do not change the status. Create a `todo.md` file or update an existing one with tasks to address the identified `gaps`.
        *   **Justification:** The ADR is not fully implemented, and the remaining work needs to be tracked.

    *   **Case 3: Code is a Better Solution**
        *   **Condition:** The analysis indicates that the code has evolved beyond the ADR, implementing a different, but superior, solution.
        *   **Action:**
            1.  Update the ADR content to reflect the new implementation. Clearly state that the original decision was superseded by a better approach and describe the new approach.
            2.  Set the ADR status to `Implemented`.
        *   **Justification:** The ADR should reflect the current, superior state of the architecture.

    *   **Case 4: Not Implemented (Compliance Score &amp;lt; 5.0)**
        *   **Action:** Do not change the status. Review the ADR to determine if it is still relevant. If not, propose to supersede or deprecate it in a separate action.
        *   **Justification:** The decision has not been implemented, and its relevance needs to be re-evaluated.

&lt;/span&gt;&lt;span class="p"&gt;3.&lt;/span&gt;  &lt;span class="gs"&gt;**Generate Final Report:**&lt;/span&gt;
&lt;span class="p"&gt;    *&lt;/span&gt;   Produce a summary report of the actions taken.
&lt;span class="p"&gt;    *&lt;/span&gt;   For each ADR, list the file name, original status, new status, and a brief justification for the change.
&lt;span class="p"&gt;    *&lt;/span&gt;   If a &lt;span class="sb"&gt;`todo.md`&lt;/span&gt; was created or updated, include its content in the report.

&lt;span class="gu"&gt;## Output Format&lt;/span&gt;

Your final output should be a single markdown document containing:
&lt;span class="p"&gt;
1.&lt;/span&gt;  A summary table of all reviewed ADRs with their old and new statuses.
&lt;span class="p"&gt;2.&lt;/span&gt;  A "Justification" section detailing the reasoning for each change.
&lt;span class="p"&gt;3.&lt;/span&gt;  The content of the &lt;span class="sb"&gt;`todo.md`&lt;/span&gt; file, if any tasks were generated.

&lt;span class="gu"&gt;### Example Summary Table&lt;/span&gt;

| ADR File | Original Status | New Status | Justification |
| --- | --- | --- | --- |
| &lt;span class="sb"&gt;`adr-001.md`&lt;/span&gt; | Accepted | Implemented | Code fully aligns with the ADR. |
| &lt;span class="sb"&gt;`adr-002.md`&lt;/span&gt; | Accepted | Implemented | Code implements a superior solution; ADR updated. |
| &lt;span class="sb"&gt;`adr-003.md`&lt;/span&gt; | Accepted | Accepted | Partial implementation; tasks added to &lt;span class="sb"&gt;`todo.md`&lt;/span&gt;. |

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 1: Initiate the ADR Review
&lt;/h2&gt;

&lt;p&gt;The first step is invoking the analysis server to examine your ADRs against your codebase. Here's the core invocation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"tool"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"reviewExistingAdrs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"projectPath"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/path/to/your/project"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"adrDirectory"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docs/adrs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"analysisDepth"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"comprehensive"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"includeTreeSitter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"generateUpdatePlan"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Parameter explanations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;projectPath&lt;/code&gt;: Root directory of your project&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;adrDirectory&lt;/code&gt;: Where your ADRs are stored (e.g., &lt;code&gt;docs/adrs&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;analysisDepth&lt;/code&gt;: Use &lt;code&gt;"comprehensive"&lt;/code&gt; for thorough analysis&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;includeTreeSitter&lt;/code&gt;: Enables deep code parsing for accurate verification&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;generateUpdatePlan&lt;/code&gt;: Produces actionable recommendations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The server analyzes your codebase structure, dependencies, and implementations, then compares findings against each ADR to calculate a &lt;strong&gt;compliance score&lt;/strong&gt; (0-10 scale).&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2: Interpret Compliance Scores
&lt;/h2&gt;

&lt;p&gt;The compliance score tells you how well your code matches your documentation:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Score Range&lt;/th&gt;
&lt;th&gt;Meaning&lt;/th&gt;
&lt;th&gt;Action&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;≥ 8.0&lt;/td&gt;
&lt;td&gt;Full Implementation&lt;/td&gt;
&lt;td&gt;Update status to "Implemented"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5.0 - 7.9&lt;/td&gt;
&lt;td&gt;Partial Implementation&lt;/td&gt;
&lt;td&gt;Don't change status; add tasks to &lt;code&gt;todo.md&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;lt; 5.0&lt;/td&gt;
&lt;td&gt;Not Implemented or Superseded&lt;/td&gt;
&lt;td&gt;Review relevance; may need to deprecate&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
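&lt;p&gt;As a minimal sketch, the score bands in this table can be encoded as a small helper. The function name and return labels below are illustrative, not part of the server's API:&lt;/p&gt;

```python
def recommended_action(score: float) -> str:
    """Map a compliance score (0-10 scale) to the review action from the table."""
    if score >= 8.0:
        return "update status to Implemented"
    if score >= 5.0:
        return "keep status; add tasks to todo.md"
    return "review relevance; may need to deprecate"
```

&lt;p&gt;Encoding the bands once, in one place, keeps the thresholds consistent across manual reviews and any automation you build on top of them.&lt;/p&gt;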




&lt;h2&gt;
  
  
  Step 3: Handle Each Case
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Case 1: Full Implementation (Score ≥ 8.0)
&lt;/h3&gt;

&lt;p&gt;When code fully implements a documented decision:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Update the ADR status to &lt;code&gt;Implemented&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Justification: "Code fully aligns with the ADR"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Status&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; [x] Accepted
&lt;span class="p"&gt;-&lt;/span&gt; [x] Implemented
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Deprecated
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Superseded
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Case 2: Partial Implementation (5.0 ≤ Score &amp;lt; 8.0)
&lt;/h3&gt;

&lt;p&gt;When some aspects are implemented but gaps remain:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Keep the current status (typically "Accepted")&lt;/li&gt;
&lt;li&gt;Create or update &lt;code&gt;todo.md&lt;/code&gt; with tasks to complete implementation&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Example todo.md:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# ADR Implementation Tasks&lt;/span&gt;

&lt;span class="gu"&gt;## adr-003.md - API Gateway Integration&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Implement rate limiting middleware
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Add circuit breaker configuration
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Document error handling patterns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Case 3: Code Implements a Better Solution
&lt;/h3&gt;

&lt;p&gt;When the codebase has evolved beyond the ADR with a superior approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Update the ADR content to reflect the new implementation&lt;/li&gt;
&lt;li&gt;Explain why the original decision was superseded&lt;/li&gt;
&lt;li&gt;Set the status to &lt;code&gt;Implemented&lt;/code&gt;, marking the original approach as superseded where a replacement ADR exists&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Update format:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Status&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; [x] Accepted
&lt;span class="p"&gt;-&lt;/span&gt; [x] Implemented
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Deprecated
&lt;span class="p"&gt;-&lt;/span&gt; [x] Superseded by ADR-XXX

&lt;span class="gs"&gt;**Note:**&lt;/span&gt; This ADR was originally accepted for [original approach].
However, during implementation, we discovered [reason for change]
and adopted [new approach] instead. See ADR-XXX for the current implementation.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Case 4: Not Implemented (Score &amp;lt; 5.0)
&lt;/h3&gt;

&lt;p&gt;When an ADR hasn't been implemented and may no longer be relevant:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Do NOT change the status&lt;/li&gt;
&lt;li&gt;Review whether the ADR is still relevant&lt;/li&gt;
&lt;li&gt;Propose to supersede or deprecate if needed&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Step 4: Generate the Final Report
&lt;/h2&gt;

&lt;p&gt;After completing your review, generate a summary report documenting the changes:&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary Table Template
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;ADR File&lt;/th&gt;
&lt;th&gt;Original Status&lt;/th&gt;
&lt;th&gt;New Status&lt;/th&gt;
&lt;th&gt;Justification&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;adr-001.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Accepted&lt;/td&gt;
&lt;td&gt;Implemented&lt;/td&gt;
&lt;td&gt;Code fully aligns with the ADR&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;adr-002.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Accepted&lt;/td&gt;
&lt;td&gt;Implemented&lt;/td&gt;
&lt;td&gt;Code implements a superior solution; ADR updated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;adr-003.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Accepted&lt;/td&gt;
&lt;td&gt;Accepted&lt;/td&gt;
&lt;td&gt;Partial implementation; tasks added to todo.md&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Full Example Workflow
&lt;/h2&gt;

&lt;p&gt;Here's a complete example of the review process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"tool"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"reviewExistingAdrs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"projectPath"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/workspace/my-api-project"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"adrDirectory"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docs/adrs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"analysisDepth"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"comprehensive"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"includeTreeSitter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"generateUpdatePlan"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Sample response from the server:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"adrs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"file"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"adr-001.md"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Use GraphQL for API Layer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"currentStatus"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Accepted"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"complianceScore"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;9.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"gaps"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"recommendations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Update status to Implemented"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"file"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"adr-002.md"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Event-Driven Architecture"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"currentStatus"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Accepted"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"complianceScore"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;6.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"gaps"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"Event schema registry not implemented"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"Dead letter queue configuration missing"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"recommendations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Create implementation tasks for remaining gaps"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"file"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"adr-003.md"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Monolithic Database"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"currentStatus"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Accepted"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"complianceScore"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;2.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"gaps"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"Codebase has migrated to microservices with separate databases"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"recommendations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Supersede this ADR with new microservice data patterns"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Actions taken:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;adr-001.md&lt;/strong&gt;: Status updated to "Implemented"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;adr-002.md&lt;/strong&gt;: Created &lt;code&gt;todo.md&lt;/code&gt; with event architecture tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;adr-003.md&lt;/strong&gt;: Updated ADR to document microservices migration, status set to "Superseded"&lt;/li&gt;
&lt;/ol&gt;
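&lt;p&gt;These actions follow mechanically from the scores in the response. A hedged sketch of that triage step (the &lt;code&gt;file&lt;/code&gt; and &lt;code&gt;complianceScore&lt;/code&gt; fields match the sample response above; the function and bucket names are ours):&lt;/p&gt;

```python
def triage(adrs: list[dict]) -> dict[str, list[str]]:
    """Bucket each ADR into the review case its compliance score falls under."""
    buckets = {"full": [], "partial": [], "review": []}
    for adr in adrs:
        score = adr["complianceScore"]
        if score >= 8.0:
            buckets["full"].append(adr["file"])
        elif score >= 5.0:
            buckets["partial"].append(adr["file"])
        else:
            buckets["review"].append(adr["file"])
    return buckets

# Scores taken from the sample server response above.
sample = [
    {"file": "adr-001.md", "complianceScore": 9.2},
    {"file": "adr-002.md", "complianceScore": 6.5},
    {"file": "adr-003.md", "complianceScore": 2.1},
]
```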




&lt;h2&gt;
  
  
  Integration Tips
&lt;/h2&gt;

&lt;p&gt;Here are some practical ways to integrate ADR reviews into your workflow:&lt;/p&gt;

&lt;h3&gt;
  
  
  Schedule Regular Reviews
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Add ADR review to your sprint tasks&lt;/li&gt;
&lt;li&gt;Run before major releases&lt;/li&gt;
&lt;li&gt;Include in quarterly architecture audits&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  CI/CD Integration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Trigger ADR analysis on significant architectural changes&lt;/li&gt;
&lt;li&gt;Add compliance checks to pull request automation&lt;/li&gt;
&lt;li&gt;Fail builds if ADR gaps exceed thresholds&lt;/li&gt;
&lt;/ul&gt;
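&lt;p&gt;A threshold gate like the last bullet can be a short script. This sketch assumes the server's response has been saved as a JSON file in the format shown earlier; the 5.0 threshold is illustrative and should match whatever bar your team sets:&lt;/p&gt;

```python
import json
import sys

def failing_adrs(report: dict, threshold: float = 5.0) -> list[str]:
    """Return the ADR files whose compliance score falls below the threshold."""
    return [
        adr["file"]
        for adr in report.get("adrs", [])
        if adr["complianceScore"] < threshold
    ]

if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: python adr_gate.py report.json
    with open(sys.argv[1]) as f:
        report = json.load(f)
    failing = failing_adrs(report)
    if failing:
        print("ADR compliance gate failed:", ", ".join(failing))
        sys.exit(1)
```

&lt;p&gt;Exiting non-zero is all most CI systems need to fail the build, so the same script works in GitHub Actions, GitLab CI, or Jenkins without changes.&lt;/p&gt;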

&lt;h3&gt;
  
  
  Team Workflow
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Make ADR updates explicit tasks in story estimation&lt;/li&gt;
&lt;li&gt;Include ADR review in code review checklists&lt;/li&gt;
&lt;li&gt;Celebrate teams that keep ADRs synchronized&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Benefits of Automated ADR Synchronization
&lt;/h2&gt;

&lt;p&gt;When you make ADR maintenance an ongoing practice rather than an occasional cleanup, you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Trustworthy onboarding&lt;/strong&gt;: New team members can rely on ADRs being accurate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Effective code reviews&lt;/strong&gt;: Reviewers can reference current architectural constraints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visible technical debt&lt;/strong&gt;: Gaps in ADR implementation highlight unfinished work&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cultural shift&lt;/strong&gt;: Documentation becomes valuable rather than optional&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;ADRs only work when they're accurate. The drift between documentation and implementation is the silent killer of architectural integrity. But with tools like the MCP ADR Analysis Server and a systematic review process, you can keep your ADRs synchronized with reality.&lt;/p&gt;

&lt;p&gt;The tools are available at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/tosin2013/mcp-adr-analysis-server" rel="noopener noreferrer"&gt;https://github.com/tosin2013/mcp-adr-analysis-server&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://adraggregator.com/" rel="noopener noreferrer"&gt;https://adraggregator.com/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What remains is making ADR maintenance an ongoing commitment. Start with one review. See what you find. Then decide how to make it part of your regular workflow.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>adr</category>
      <category>documentation</category>
      <category>automation</category>
    </item>
    <item>
      <title>A Deep Dive into Multi-Transport Protocol Abstraction in Python</title>
      <dc:creator>Tosin Akinosho</dc:creator>
      <pubDate>Tue, 04 Nov 2025 15:09:10 +0000</pubDate>
      <link>https://forem.com/tosin2013/a-deep-dive-into-multi-transport-protocol-abstraction-in-python-3gp5</link>
      <guid>https://forem.com/tosin2013/a-deep-dive-into-multi-transport-protocol-abstraction-in-python-3gp5</guid>
      <description>&lt;p&gt;As developers, we often build clients to communicate with servers. But what happens when that server can speak multiple languages? Not human languages, but transport protocols. One moment you're talking over &lt;code&gt;stdio&lt;/code&gt;, the next over &lt;code&gt;Server-Sent Events (SSE)&lt;/code&gt;, and tomorrow it might be raw &lt;code&gt;HTTP&lt;/code&gt; or &lt;code&gt;WebSockets&lt;/code&gt;. This is a common challenge in modern infrastructure, and it's one we tackled head-on in our &lt;strong&gt;ansible-collection-mcp-audit&lt;/strong&gt; project.&lt;/p&gt;

&lt;p&gt;In this post, we'll go deep on the design patterns we used to build a clean, transport-agnostic client in Python. We'll look at how an async context manager, a simple factory, and a well-defined class structure can tame the complexity of multi-protocol communication. This isn't just about the Model Context Protocol (MCP); these patterns are applicable to any project that needs to support multiple ways of talking to a service.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenge: One Client, Three Protocols
&lt;/h2&gt;

&lt;p&gt;The goal was to create a single, unified client that could communicate with an MCP server regardless of the underlying transport. The initial requirements were:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;stdio&lt;/strong&gt;: For local, process-based communication. Ideal for testing local scripts and servers.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;SSE (Server-Sent Events)&lt;/strong&gt;: For persistent, one-way communication over HTTP. Great for remote servers that push updates.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;HTTP&lt;/strong&gt;: For standard request/response communication. A common requirement for web-based services.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The naive approach would be to write a bunch of &lt;code&gt;if/elif/else&lt;/code&gt; statements every time we need to make a call. You've seen that code. We've all written that code. It quickly becomes a tangled mess that's impossible to maintain.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We needed an abstraction. A clean interface that would hide the messy details of each protocol and present a simple, consistent set of methods to the rest of the application.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Solution: The &lt;code&gt;MCPClient&lt;/code&gt; Class
&lt;/h2&gt;

&lt;p&gt;The heart of our solution is the &lt;code&gt;MCPClient&lt;/code&gt; class. It serves as the single entry point for all interactions with an MCP server. Here's the core design philosophy:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Initialize with Configuration&lt;/strong&gt;: The client is initialized with all the necessary configuration for &lt;em&gt;all&lt;/em&gt; supported transports.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;A Single &lt;code&gt;connect&lt;/code&gt; Method&lt;/strong&gt;: A powerful &lt;code&gt;connect&lt;/code&gt; method, implemented as an async context manager, handles the protocol-specific connection logic.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Consistent API&lt;/strong&gt;: Once connected, all other methods (&lt;code&gt;list_tools&lt;/code&gt;, &lt;code&gt;call_tool&lt;/code&gt;, etc.) don't need to know or care about the underlying transport.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's look at the &lt;code&gt;__init__&lt;/code&gt; method to see how this is set up.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# File: plugins/module_utils/mcp_client.py
# Lines: 73-121
&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MCPClient&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;SUPPORTED_TRANSPORTS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ClassVar&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stdio&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sse&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;transport&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stdio&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;server_command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;server_args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;server_url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;server_headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;transport&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SUPPORTED_TRANSPORTS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;MCPClientError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Unsupported transport &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;transport&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;transport&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;transport&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;timeout&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;timeout&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ClientSession&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;

        &lt;span class="c1"&gt;# Validate transport-specific parameters
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;transport&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stdio&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;server_command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;MCPClientError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;server_command is required for stdio transport&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;server_command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;server_command&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;server_args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;server_args&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
        &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;transport&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sse&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;server_url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;MCPClientError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;server_url is required for &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;transport&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; transport&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;server_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;server_url&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;server_headers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;server_headers&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice how the constructor takes all possible parameters but only validates and stores the ones relevant to the selected &lt;code&gt;transport&lt;/code&gt;. This keeps the initialization logic clean and ensures that the client is in a valid state from the moment it's created.&lt;/p&gt;
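&lt;p&gt;To see those validation rules in isolation, here is a standalone re-sketch of the per-transport check, simplified from the constructor above (&lt;code&gt;MCPClientError&lt;/code&gt; is stubbed for illustration):&lt;/p&gt;

```python
class MCPClientError(Exception):
    """Stub of the client's error type, for illustration."""

SUPPORTED_TRANSPORTS = ("stdio", "sse", "http")

def validate_transport_config(transport, server_command=None, server_url=None):
    """Mirror the constructor's per-transport validation, in isolation."""
    if transport not in SUPPORTED_TRANSPORTS:
        raise MCPClientError(f"Unsupported transport '{transport}'")
    if transport == "stdio" and not server_command:
        raise MCPClientError("server_command is required for stdio transport")
    if transport in ("sse", "http") and not server_url:
        raise MCPClientError(f"server_url is required for {transport} transport")
```

&lt;p&gt;Failing fast at construction time, rather than at first use, means a misconfigured client never makes it into the rest of the application.&lt;/p&gt;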

&lt;h2&gt;
  
  
  The Magic of the Async Context Manager
&lt;/h2&gt;

&lt;p&gt;The real power of this abstraction comes from the &lt;code&gt;connect&lt;/code&gt; method. We implemented it as an &lt;code&gt;asynccontextmanager&lt;/code&gt;, which is a perfect fit for managing the lifecycle of a network connection.&lt;/p&gt;

&lt;p&gt;Here's a simplified view of the implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# File: plugins/module_utils/mcp_client.py
# Lines: 122-171
&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;contextlib&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asynccontextmanager&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mcp.client.stdio&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;stdio_client&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mcp.client.sse&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sse_client&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mcp&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ClientSession&lt;/span&gt;

&lt;span class="nd"&gt;@asynccontextmanager&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Establish connection to the MCP server as an async context manager.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;transport&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stdio&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;server_params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;StdioServerParameters&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;server_command&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;server_args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;stdio_client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;server_params&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;as &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;ClientSession&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                    &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;
                    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;initialize&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;

        &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;transport&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sse&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;sse_client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;server_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;server_headers&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;as &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;ClientSession&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                    &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;
                    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;initialize&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;

        &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;transport&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;MCPTransportError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;HTTP transport not yet implemented&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;MCPConnectionError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Failed to connect via &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;transport&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;!s}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;
    &lt;span class="k"&gt;finally&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="c1"&gt;# Ensure session is cleared on exit
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern is incredibly powerful. Let's break down what it's doing:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Protocol-Specific Logic&lt;/strong&gt;: The &lt;code&gt;if/elif&lt;/code&gt; block contains the &lt;em&gt;only&lt;/em&gt; protocol-specific connection logic in the entire client.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Leveraging the SDK&lt;/strong&gt;: It uses the appropriate client function from the MCP Python SDK (&lt;code&gt;stdio_client&lt;/code&gt; or &lt;code&gt;sse_client&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Session Management&lt;/strong&gt;: It creates a &lt;code&gt;ClientSession&lt;/code&gt; and performs the initial handshake (&lt;code&gt;session.initialize()&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Yielding Control&lt;/strong&gt;: The &lt;code&gt;yield self&lt;/code&gt; is the crucial part. It passes the connected, ready-to-use client instance to the calling code.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Guaranteed Cleanup&lt;/strong&gt;: The nested &lt;code&gt;async with&lt;/code&gt; blocks tear down the transport and session on exit, and the &lt;code&gt;finally&lt;/code&gt; block clears the stale &lt;code&gt;self.session&lt;/code&gt; reference, no matter what happens inside the &lt;code&gt;with&lt;/code&gt; block. This prevents resource leaks and dangling sessions, which are notoriously hard to debug.&lt;/li&gt;
&lt;/ol&gt;
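&lt;p&gt;To see the cleanup guarantee in action, here's a minimal, runnable sketch of the same lifecycle shape (a toy stand-in for &lt;code&gt;MCPClient&lt;/code&gt;, not the collection's real code):&lt;/p&gt;

```python
import asyncio
from contextlib import asynccontextmanager

class ToyClient:
    """Toy stand-in for MCPClient, showing the connect() lifecycle shape."""

    def __init__(self):
        self.session = None

    @asynccontextmanager
    async def connect(self):
        try:
            # Stand-in for opening the transport and calling session.initialize()
            self.session = "open"
            yield self
        finally:
            # Runs even if the body of the async with block raises
            self.session = None

async def main():
    client = ToyClient()
    async with client.connect() as c:
        print(c.session)   # "open" while inside the block
    print(client.session)  # None after exit, guaranteed

asyncio.run(main())
```

&lt;p&gt;The same &lt;code&gt;try&lt;/code&gt;/&lt;code&gt;finally&lt;/code&gt; shape is what makes the real client safe in automation that can be cancelled or time out mid-run.&lt;/p&gt;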

&lt;h2&gt;
  
  
  A Clean and Consistent API
&lt;/h2&gt;

&lt;p&gt;With the connection handled by the context manager, the rest of the client's methods become beautifully simple. They don't need to know anything about the transport; they just use the &lt;code&gt;self.session&lt;/code&gt; object that was set up during the connection.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# File: plugins/module_utils/mcp_client.py
# Lines: 173-213
&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;list_tools&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Tool&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;List all tools available on the MCP server.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;MCPClientError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Not connected to MCP server&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;list_tools&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;MCPClientError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Failed to list tools: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;!s}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;call_tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tool_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;arguments&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Call a tool on the MCP server.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;MCPClientError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Not connected to MCP server&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call_tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tool_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;arguments&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="p"&gt;{})&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;MCPClientError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Failed to call tool &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;tool_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;!s}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the payoff for our architectural efforts. The code is clean, readable, and easy to test. Adding a new method is trivial, and it will automatically work with all supported transports.&lt;/p&gt;
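&lt;p&gt;That "easy to test" claim is worth demonstrating. Because these methods depend only on &lt;code&gt;self.session&lt;/code&gt;, you can exercise them with a stub session and no real server. Here's a hedged sketch with minimal stand-ins (&lt;code&gt;StubSession&lt;/code&gt; and &lt;code&gt;MiniClient&lt;/code&gt; are my own illustrations, not the collection's test suite):&lt;/p&gt;

```python
import asyncio
from types import SimpleNamespace

class StubSession:
    """Stand-in for mcp.ClientSession: returns a canned tools response."""
    async def list_tools(self):
        return SimpleNamespace(tools=["ping", "echo"])

class MiniClient:
    """Just enough of the client to exercise list_tools() in isolation."""
    def __init__(self, session=None):
        self.session = session

    async def list_tools(self):
        if not self.session:
            raise RuntimeError("Not connected to MCP server")
        response = await self.session.list_tools()
        return response.tools

tools = asyncio.run(MiniClient(StubSession()).list_tools())
print(tools)  # ['ping', 'echo']
```

&lt;p&gt;No transport, no subprocess, no network: the method contract is fully testable against the session interface alone.&lt;/p&gt;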

&lt;h2&gt;
  
  
  Visualizing the Abstraction
&lt;/h2&gt;

&lt;p&gt;Here's a diagram that illustrates the abstraction layers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;graph TD
    subgraph "Application Layer"
        A[Ansible Module]
    end

    subgraph "Abstraction Layer (MCPClient)"
        B["connect() Context Manager"]
        C["list_tools(), call_tool(), etc."]
    end

    subgraph "Transport Layer"
        D[stdio_client]
        E[sse_client]
        F["http_client (future)"]
    end

    subgraph "Protocol Layer"
        G[MCP Python SDK]
    end

    A --&amp;gt; B
    A --&amp;gt; C
    B --&amp;gt; D
    B --&amp;gt; E
    B --&amp;gt; F
    C --&amp;gt; G
    D --&amp;gt; G
    E --&amp;gt; G
    F --&amp;gt; G
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The application layer (our Ansible modules) only ever talks to the &lt;code&gt;MCPClient&lt;/code&gt;. The &lt;code&gt;connect&lt;/code&gt; method acts as a gateway to the transport layer, and all subsequent calls go through the consistent API provided by the protocol layer. It's a clean separation of concerns that makes the entire system robust and extensible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Patterns for Maintainable Code
&lt;/h2&gt;

&lt;p&gt;Building a multi-transport client doesn't have to be a nightmare of nested &lt;code&gt;if&lt;/code&gt; statements. By applying a few key design patterns, we were able to create a solution that is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Maintainable&lt;/strong&gt;: Protocol-specific code is isolated in one place.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Extensible&lt;/strong&gt;: Adding a new transport (like WebSockets) would involve modifying only the &lt;code&gt;connect&lt;/code&gt; method.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Robust&lt;/strong&gt;: The context manager ensures that connections are always cleaned up properly.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Easy to Use&lt;/strong&gt;: The rest of the application interacts with a simple, consistent API.&lt;/li&gt;
&lt;/ul&gt;
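&lt;p&gt;To make the extensibility point concrete, here's one possible refactor sketch (my own illustration, not the collection's actual code): drive the &lt;code&gt;connect&lt;/code&gt; dispatch from a registry, so a new transport like WebSockets becomes one more registered factory instead of another &lt;code&gt;elif&lt;/code&gt; branch. The &lt;code&gt;fake_*&lt;/code&gt; factories below are stand-ins for the SDK's &lt;code&gt;stdio_client&lt;/code&gt; and &lt;code&gt;sse_client&lt;/code&gt;:&lt;/p&gt;

```python
import asyncio
from contextlib import asynccontextmanager

# Hypothetical registry: transport name -> async context manager factory
TRANSPORTS = {}

def register(name):
    def wrap(factory):
        TRANSPORTS[name] = factory
        return factory
    return wrap

@register("stdio")
@asynccontextmanager
async def fake_stdio(config):
    # Real code would wrap stdio_client(...) here
    yield ("stdio-read", "stdio-write")

@register("sse")
@asynccontextmanager
async def fake_sse(config):
    # Real code would wrap sse_client(...) here
    yield ("sse-read", "sse-write")

async def open_streams(transport, config):
    if transport not in TRANSPORTS:
        raise ValueError(f"Unsupported transport: {transport}")
    async with TRANSPORTS[transport](config) as streams:
        return streams

print(asyncio.run(open_streams("sse", {})))  # ('sse-read', 'sse-write')
```

&lt;p&gt;The trade-off: a registry hides the dispatch from readers skimming &lt;code&gt;connect&lt;/code&gt;, so for only two or three transports the explicit &lt;code&gt;if/elif&lt;/code&gt; is arguably the clearer choice.&lt;/p&gt;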

&lt;p&gt;I've found this pattern to be incredibly effective in a variety of projects. Whether you're working with different database drivers, message queues, or cloud APIs, the core principles of configuration-driven initialization and a context-managed connection lifecycle can save you from a world of technical debt.&lt;/p&gt;

&lt;p&gt;What are your favorite patterns for handling multi-protocol or multi-provider clients? Share your thoughts in the comments below!&lt;/p&gt;




&lt;h3&gt;
  
  
  Links
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Tosin Akinosho. (2025). &lt;em&gt;ansible-collection-mcp-audit&lt;/em&gt;. GitHub Repository. &lt;a href="https://github.com/tosin2013/ansible-collection-mcp-audit" rel="noopener noreferrer"&gt;https://github.com/tosin2013/ansible-collection-mcp-audit&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Model Context Protocol. (2025). &lt;em&gt;Protocol Documentation&lt;/em&gt;. &lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;https://modelcontextprotocol.io/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>python</category>
      <category>designpatterns</category>
      <category>async</category>
    </item>
    <item>
      <title>Building Better Documentation: My Journey with DocuMCP and the Model Context Protocol</title>
      <dc:creator>Tosin Akinosho</dc:creator>
      <pubDate>Wed, 29 Oct 2025 14:28:27 +0000</pubDate>
      <link>https://forem.com/tosin2013/building-better-documentation-my-journey-with-documcp-and-the-model-context-protocol-3686</link>
      <guid>https://forem.com/tosin2013/building-better-documentation-my-journey-with-documcp-and-the-model-context-protocol-3686</guid>
      <description>&lt;p&gt;What I've learned over the years is that great documentation isn't just about the words on the page—it's about making complex technical concepts accessible while maintaining accuracy and depth. In my experience working with various documentation tools and frameworks, I've found that the intersection of AI and traditional technical writing presents both exciting opportunities and unique challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenge of Modern Documentation Deployment
&lt;/h2&gt;

&lt;p&gt;In most cases, setting up documentation for open-source projects feels like reinventing the wheel every single time. From what I've observed, developers often struggle with choosing the right static site generator, structuring content effectively, and maintaining consistency across projects. It's worth noting that how much this complexity hurts depends on your team's technical background and the scope of your project.&lt;/p&gt;

&lt;p&gt;Consider a scenario where you're leading a team that's just launched a new microservice architecture. You have multiple repositories, each needing its own documentation, and you want consistency across all of them. Meanwhile, your developers are spending more time figuring out Jekyll configurations than actually writing meaningful documentation content.&lt;/p&gt;

&lt;p&gt;This is where the Model Context Protocol (MCP) and tools like DocuMCP come into play, though your mileage may vary based on your specific use case.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[DIAGRAM: Traditional vs MCP-Powered Documentation Workflow]

Traditional Approach:
Developer → Manual Tool Selection → Manual Setup → Manual Deployment
    ↓              ↓                   ↓              ↓
Time Lost    Configuration Hell    Inconsistency   Maintenance

MCP-Powered Approach:
Developer → AI Analysis → Automated Setup → GitHub Pages
    ↓            ↓             ↓             ↓
Quick Start  Smart Choices   Consistency   Automation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Understanding the Model Context Protocol
&lt;/h2&gt;

&lt;p&gt;Let me share what I've discovered about MCP during my exploration of AI-assisted development workflows. The Model Context Protocol is essentially a standardized way for AI models to interact with external tools and data sources. Think of it as creating a universal language that allows AI assistants to understand and manipulate your development environment.&lt;/p&gt;

&lt;p&gt;In my experience, MCP addresses a fundamental problem: the fragmentation of AI tool integrations. Before MCP, every AI application needed custom code to interact with different services. What I've found particularly interesting is how MCP creates a bridge between the conversational nature of AI and the structured requirements of development tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  The DocuMCP Approach
&lt;/h3&gt;

&lt;p&gt;Imagine a scenario where you could simply tell an AI assistant: "analyze my repository and deploy documentation to GitHub Pages." That's the vision behind DocuMCP—an intelligent MCP server specifically designed for documentation deployment.&lt;/p&gt;

&lt;p&gt;From what I've observed in the project structure and implementation, DocuMCP follows the Diataxis framework for organizing documentation. This approach, which I've used in various projects, separates content into four categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tutorials&lt;/strong&gt; (learning-oriented)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How-to guides&lt;/strong&gt; (task-oriented) &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reference&lt;/strong&gt; (information-oriented)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explanation&lt;/strong&gt; (understanding-oriented)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "mcpServers": {
    "documcp": {
      "command": "npx",
      "args": ["documcp"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Technical Architecture Behind DocuMCP
&lt;/h2&gt;

&lt;p&gt;Let's say you're working with a team that needs to understand how DocuMCP actually works under the hood. Based on my analysis of the project, it appears to follow a client-server architecture typical of MCP implementations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[DIAGRAM: DocuMCP Architecture Flow]

User Input
    ↓
Claude Desktop (MCP Client)
    ↓
DocuMCP Server
    ↓
Repository Analysis → Static Site Generator Selection → GitHub Pages Deployment
    ↓                        ↓                              ↓
README.md            Framework Choice                 Automated Setup
Code Structure       (Jekyll/Hugo/etc.)             Documentation Site
Documentation        Template Generation              Live URL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In most cases, this should work smoothly, though your experience might vary depending on your repository structure and existing documentation assets. What I've learned is that the intelligence lies in the analysis phase—the system examines your codebase, identifies documentation patterns, and makes informed decisions about the best deployment strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Your Documentation Pipeline
&lt;/h2&gt;

&lt;p&gt;Consider a team that wants to implement DocuMCP in their workflow. The setup process, from what I've gathered, is refreshingly straightforward compared to traditional approaches.&lt;/p&gt;

&lt;p&gt;The installation involves adding the MCP server configuration to your Claude Desktop setup. In my experience with similar tools, this type of configuration management has become much more standardized, which is a welcome improvement from the fragmented approaches of the past.&lt;/p&gt;

&lt;h3&gt;
  
  
  Practical Implementation Steps
&lt;/h3&gt;

&lt;p&gt;Picture this scenario: you're tasked with standardizing documentation across multiple repositories in your organization. Here's the approach I'd recommend based on the DocuMCP methodology:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Repository Analysis Phase&lt;/strong&gt;: The system analyzes your existing content structure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Framework Recommendation&lt;/strong&gt;: Based on project characteristics, it suggests the optimal static site generator&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Template Generation&lt;/strong&gt;: Creates a professional documentation structure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Deployment&lt;/strong&gt;: Sets up GitHub Pages integration&lt;/li&gt;
&lt;/ol&gt;
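&lt;p&gt;As a rough illustration of what the analysis phase might look for (this is my own sketch of the idea, not DocuMCP's actual implementation), repository analysis can boil down to simple heuristics over the file tree:&lt;/p&gt;

```python
from pathlib import Path

def analyze_repo(root: str) -> dict:
    """Toy heuristic sketch of a repository-analysis phase (illustrative only)."""
    root_path = Path(root)
    files = [p.name.lower() for p in root_path.rglob("*") if p.is_file()]
    signals = {
        "has_readme": any(f.startswith("readme") for f in files),
        "has_docs_dir": (root_path / "docs").is_dir(),
        "language": "python" if any(f.endswith(".py") for f in files) else "unknown",
    }
    # Crude mapping: a Python-heavy repo might suit MkDocs; otherwise
    # fall back to Jekyll, GitHub Pages' default generator.
    signals["suggested_generator"] = (
        "mkdocs" if signals["language"] == "python" else "jekyll"
    )
    return signals
```

&lt;p&gt;The real tool layers AI analysis on top of signals like these, but even this crude version shows why well-structured repositories get better recommendations than sparse ones.&lt;/p&gt;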

&lt;p&gt;From what I've observed, this automated approach tends to produce more consistent results than manual setup, though it's worth noting that complex projects might require some manual refinement.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Human Element in AI-Assisted Documentation
&lt;/h2&gt;

&lt;p&gt;What I've found particularly interesting about tools like DocuMCP is how they change the relationship between technical writers and their tools. Instead of being documentation architects, we become documentation curators and reviewers.&lt;/p&gt;

&lt;p&gt;In my experience, the most successful AI-assisted documentation workflows maintain a balance between automation and human insight. The AI handles the structural and repetitive aspects—choosing frameworks, setting up deployment pipelines, ensuring consistency—while humans focus on content strategy, user experience, and the nuanced communication that makes technical writing effective.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges and Considerations
&lt;/h2&gt;

&lt;p&gt;Let's be honest about the limitations. While MCP and DocuMCP represent significant steps forward, they're not silver bullets. In most cases, these tools work best with well-structured projects that follow common patterns. If your repository has unique organizational requirements or non-standard documentation needs, you might find yourself needing to customize the output.&lt;/p&gt;

&lt;p&gt;From what I've observed, the success of automated documentation deployment depends heavily on the quality of your source material. If your README files are sparse or your code comments are minimal, even the most sophisticated AI won't be able to generate comprehensive documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Looking Forward: The Future of Documentation Workflows
&lt;/h2&gt;

&lt;p&gt;What excites me most about developments like DocuMCP is how they're democratizing good documentation practices. Picture this: a junior developer who's never set up a documentation site before can now create professional, well-structured documentation with minimal friction.&lt;/p&gt;

&lt;p&gt;In my experience, tools that lower the barrier to entry for best practices tend to have the most significant impact on overall quality across an organization. When it's easier to do the right thing than the wrong thing, teams naturally gravitate toward better practices.&lt;/p&gt;

&lt;p&gt;The integration of AI into documentation workflows isn't about replacing technical writers—it's about amplifying our ability to create clear, consistent, and accessible technical content. What I've learned is that the most powerful applications of AI in documentation are those that handle the mechanical aspects while preserving space for human creativity and insight.&lt;/p&gt;

&lt;p&gt;As we continue to explore these new possibilities, I remain optimistic about the potential for AI-assisted documentation tools to help us communicate technical concepts more effectively. The key, as always, is maintaining our focus on the end user while leveraging these powerful new capabilities to create better experiences for everyone involved in the documentation lifecycle.&lt;/p&gt;




&lt;h2&gt;
  
  
  Resources and Further Reading
&lt;/h2&gt;

&lt;p&gt;If you're interested in exploring DocuMCP further, here are the key resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/tosin2013/documcp" rel="noopener noreferrer"&gt;DocuMCP GitHub Repository&lt;/a&gt;&lt;/strong&gt; - The main project repository with source code, installation instructions, and contribution guidelines&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://tosin2013.github.io/documcp/" rel="noopener noreferrer"&gt;DocuMCP Documentation&lt;/a&gt;&lt;/strong&gt; - Comprehensive documentation following the Diataxis framework, including tutorials, how-to guides, and reference materials&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What I've found is that having both the technical implementation details and well-structured documentation makes it much easier to get started with any new tool. The documentation site itself is actually a great example of what DocuMCP can help you achieve—clean, organized, and automatically deployed.&lt;/p&gt;

&lt;p&gt;Feel free to explore the project, contribute if you find it useful, or reach out if you have questions about implementing similar documentation workflows in your own projects.&lt;/p&gt;

</description>
      <category>documentation</category>
      <category>ai</category>
      <category>mcp</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Keep Your Python Package Metadata in Sync with GitHub Release Tags</title>
      <dc:creator>Tosin Akinosho</dc:creator>
      <pubDate>Wed, 26 Mar 2025 14:43:02 +0000</pubDate>
      <link>https://forem.com/tosin2013/how-to-keep-your-python-package-metadata-in-sync-with-github-release-tags-fkb</link>
      <guid>https://forem.com/tosin2013/how-to-keep-your-python-package-metadata-in-sync-with-github-release-tags-fkb</guid>
      <description>&lt;p&gt;Keeping versions aligned across &lt;code&gt;setup.py&lt;/code&gt;, &lt;code&gt;pyproject.toml&lt;/code&gt;, and GitHub tags is critical for maintaining a healthy Python project. It prevents mismatches, enables CI/CD automation, and ensures seamless releases.  &lt;/p&gt;

&lt;p&gt;In this guide, you'll learn &lt;strong&gt;best practices for versioning Python packages and syncing metadata with GitHub release tags&lt;/strong&gt; using &lt;code&gt;bump-my-version&lt;/code&gt;, GitHub Actions, and automation scripts.  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;📌 Table of Contents&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
Why Versioning is Crucial
&lt;/li&gt;
&lt;li&gt;
The Right Way to Define Your Version
&lt;/li&gt;
&lt;li&gt;
Aligning Versions with GitHub Tags
&lt;/li&gt;
&lt;li&gt;
Keeping Dependencies in Sync
&lt;/li&gt;
&lt;li&gt;
Using &lt;code&gt;bump-my-version&lt;/code&gt; for Automated Versioning
&lt;/li&gt;
&lt;li&gt;
Validating Versions in CI/CD
&lt;/li&gt;
&lt;li&gt;
Mistakes to Avoid
&lt;/li&gt;
&lt;li&gt;
Final Checklist
&lt;/li&gt;
&lt;li&gt;
FAQs
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why Versioning is Crucial&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Imagine deploying a package and realizing later that the version in &lt;code&gt;setup.py&lt;/code&gt; differs from &lt;code&gt;pyproject.toml&lt;/code&gt;. 🤦‍♂️ This &lt;strong&gt;breaks automation, confuses users, and complicates debugging&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;When you keep versions in sync, you:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure smooth CI/CD deployments
&lt;/li&gt;
&lt;li&gt;Reduce version conflicts in dependencies
&lt;/li&gt;
&lt;li&gt;Automate and streamline release workflows
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Following best practices prevents the dreaded "version mismatch" error and keeps your project organized.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Right Way to Define Your Version&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A common &lt;strong&gt;pitfall&lt;/strong&gt; is defining the version in multiple places. Instead, &lt;strong&gt;define it in a single source of truth&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1️⃣ Store Version in &lt;code&gt;__version__.py&lt;/code&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Create a &lt;code&gt;__version__.py&lt;/code&gt; file inside your package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# my_package/__version__.py
&lt;/span&gt;&lt;span class="n"&gt;__version__&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.2.3&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
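
&lt;p&gt;As a quick sanity check, confirm the version is importable from the command line (this assumes your package is importable as &lt;code&gt;my_package&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python -c "from my_package.__version__ import __version__; print(__version__)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;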



&lt;h3&gt;
  
  
  &lt;strong&gt;2️⃣ Use It in &lt;code&gt;setup.py&lt;/code&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Instead of manually entering a version, import it dynamically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;my_package.__version__&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;__version__&lt;/span&gt;

&lt;span class="nf"&gt;setup&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my_package&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;__version__&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="bp"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;3️⃣ Sync with &lt;code&gt;pyproject.toml&lt;/code&gt; (for Poetry)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you're using Poetry, manually update &lt;code&gt;pyproject.toml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[tool.poetry]&lt;/span&gt;
&lt;span class="py"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"my-package"&lt;/span&gt;
&lt;span class="py"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"1.2.3"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📌 &lt;em&gt;Poetry does not support dynamic version imports—so keeping this updated manually (or via automation) is necessary.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Aligning Versions with GitHub Tags&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To ensure GitHub releases match your code, follow this &lt;strong&gt;release process&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Update the version&lt;/strong&gt; in &lt;code&gt;__version__.py&lt;/code&gt; and &lt;code&gt;pyproject.toml&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Commit the change&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   git commit &lt;span class="nt"&gt;-am&lt;/span&gt; &lt;span class="s2"&gt;"Release version 1.2.3"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create a tag matching the version&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   git tag v1.2.3
   git push origin main &lt;span class="nt"&gt;--tags&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Ensure the tag and package version match before deploying.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;🚨 If the tag and package version &lt;strong&gt;don’t match&lt;/strong&gt;, CI/CD should catch the issue and stop the release.&lt;/p&gt;
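
&lt;p&gt;You can run the same check locally before pushing. This is a minimal sketch; it assumes your tags follow the &lt;code&gt;v*&lt;/code&gt; convention and that your package is importable as &lt;code&gt;my_package&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Compare the most recent v-prefixed tag against the package version
TAG_VERSION=$(git describe --tags --abbrev=0 | sed 's/^v//')
PACKAGE_VERSION=$(python -c "from my_package.__version__ import __version__; print(__version__)")

if [ "$TAG_VERSION" != "$PACKAGE_VERSION" ]; then
  echo "Version mismatch: tag is $TAG_VERSION, package is $PACKAGE_VERSION" &amp;gt;&amp;amp;2
  exit 1
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;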




&lt;h2&gt;
  
  
  &lt;strong&gt;Keeping Dependencies in Sync&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Beyond versioning, managing dependencies properly prevents unexpected failures.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Lock Dependencies in &lt;code&gt;requirements.txt&lt;/code&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For reproducible builds, lock dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip freeze &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Separate Dev Dependencies&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Use a &lt;strong&gt;separate file&lt;/strong&gt; for development dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements-dev.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, if using &lt;strong&gt;Poetry&lt;/strong&gt;, add it to the dev dependency group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;poetry add pytest &lt;span class="nt"&gt;--group&lt;/span&gt; dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures production installs don’t pull unnecessary dev dependencies.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Using &lt;code&gt;bump-my-version&lt;/code&gt; for Automated Versioning&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What is &lt;code&gt;bump-my-version&lt;/code&gt;?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://callowayproject.github.io/bump-my-version/" rel="noopener noreferrer"&gt;&lt;code&gt;bump-my-version&lt;/code&gt;&lt;/a&gt; is the modern replacement for &lt;code&gt;bump2version&lt;/code&gt; (which is no longer maintained).   &lt;/p&gt;

&lt;p&gt;It updates version numbers across &lt;strong&gt;all necessary files&lt;/strong&gt; (e.g., &lt;code&gt;__version__.py&lt;/code&gt;, &lt;code&gt;setup.py&lt;/code&gt;, &lt;code&gt;pyproject.toml&lt;/code&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How to Install It&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;bump-my-version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;How to Use It&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Increment version numbers automatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bump-my-version patch   &lt;span class="c"&gt;# Updates 1.2.3 → 1.2.4&lt;/span&gt;
bump-my-version minor   &lt;span class="c"&gt;# Updates 1.2.3 → 1.3.0&lt;/span&gt;
bump-my-version major   &lt;span class="c"&gt;# Updates 1.2.3 → 2.0.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures versioning consistency, preventing human errors in updates.&lt;/p&gt;
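
&lt;p&gt;For &lt;code&gt;bump-my-version&lt;/code&gt; to know which files to rewrite, declare them in &lt;code&gt;pyproject.toml&lt;/code&gt;. A minimal sketch (the file paths are illustrative; adjust them to your package layout):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;[tool.bumpversion]
current_version = "1.2.3"
commit = true
tag = true
tag_name = "v{new_version}"

[[tool.bumpversion.files]]
filename = "my_package/__version__.py"

[[tool.bumpversion.files]]
filename = "pyproject.toml"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With &lt;code&gt;tag = true&lt;/code&gt;, each bump also creates the matching Git tag, keeping tags and package metadata in lockstep.&lt;/p&gt;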




&lt;h2&gt;
  
  
  &lt;strong&gt;Validating Versions in CI/CD&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To prevent mismatched versions between GitHub tags and your package metadata, &lt;strong&gt;add a validation step to GitHub Actions&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;CI Workflow to Validate Versions&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Create &lt;code&gt;.github/workflows/version-check.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Version Check&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;v*'&lt;/span&gt;  &lt;span class="c1"&gt;# Runs only on version tags&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;check-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Validate package version consistency&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;TAG_VERSION=${GITHUB_REF#refs/tags/v}&lt;/span&gt;
          &lt;span class="s"&gt;PACKAGE_VERSION=$(python -c "import my_package.__version__ as v; print(v.__version__)")&lt;/span&gt;

          &lt;span class="s"&gt;if [ "$TAG_VERSION" != "$PACKAGE_VERSION" ]; then&lt;/span&gt;
            &lt;span class="s"&gt;echo "Version mismatch! GitHub tag is $TAG_VERSION but package version is $PACKAGE_VERSION."&lt;/span&gt;
            &lt;span class="s"&gt;exit 1&lt;/span&gt;
          &lt;span class="s"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ &lt;strong&gt;If the versions don’t match, the pipeline will fail&lt;/strong&gt;, preventing a broken release.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Mistakes to Avoid&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;🚨 &lt;strong&gt;Hardcoding Versions in Multiple Files&lt;/strong&gt; – Instead, use &lt;code&gt;__version__.py&lt;/code&gt;&lt;br&gt;&lt;br&gt;
🚨 &lt;strong&gt;Pushing GitHub Tags Without Updating Files First&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
🚨 &lt;strong&gt;Ignoring Dependency Locking (&lt;code&gt;requirements.txt&lt;/code&gt;)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
🚨 &lt;strong&gt;Manual Version Updates Instead of Automation (&lt;code&gt;bump-my-version&lt;/code&gt;)&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Avoid these, and your releases will be smooth! 🚀&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Final Checklist&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;✅ Keep version &lt;strong&gt;centralized&lt;/strong&gt; in &lt;code&gt;__version__.py&lt;/code&gt;&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Always&lt;/strong&gt; sync &lt;code&gt;pyproject.toml&lt;/code&gt; (for Poetry users)&lt;br&gt;&lt;br&gt;
✅ Automate with &lt;code&gt;bump-my-version&lt;/code&gt;&lt;br&gt;&lt;br&gt;
✅ Validate version consistency in &lt;strong&gt;CI/CD&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
✅ Lock dependencies for &lt;strong&gt;reliable builds&lt;/strong&gt;  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;FAQs&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Why is &lt;code&gt;bump2version&lt;/code&gt; no longer recommended?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;bump2version&lt;/code&gt; is &lt;strong&gt;no longer maintained&lt;/strong&gt;. &lt;code&gt;bump-my-version&lt;/code&gt; is the modern alternative with active support.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. How do I ensure my GitHub release matches my package version?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Use GitHub Actions to verify that the &lt;strong&gt;Git tag matches &lt;code&gt;__version__.py&lt;/code&gt;&lt;/strong&gt; before releasing.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Should I use &lt;code&gt;setup.py&lt;/code&gt; or Poetry?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;If you use &lt;strong&gt;setuptools&lt;/strong&gt;, update &lt;code&gt;setup.py&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;If you use &lt;strong&gt;Poetry&lt;/strong&gt;, manually update &lt;code&gt;pyproject.toml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Do I still need &lt;code&gt;requirements.txt&lt;/code&gt; if using Poetry?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Generally, no. Poetry &lt;strong&gt;manages dependencies through its lock file&lt;/strong&gt; (&lt;code&gt;poetry.lock&lt;/code&gt;), so a separate &lt;code&gt;requirements.txt&lt;/code&gt; is usually unnecessary.&lt;/p&gt;
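
&lt;p&gt;That said, some deployment targets (legacy Docker builds, certain platforms) still expect one. Poetry can generate it from the lock file; note that on recent Poetry versions this requires the &lt;code&gt;poetry-plugin-export&lt;/code&gt; plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;poetry export -f requirements.txt --output requirements.txt --without-hashes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;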

&lt;h3&gt;
  
  
  &lt;strong&gt;5. Is &lt;code&gt;bump-my-version&lt;/code&gt; required?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;No, but it &lt;strong&gt;automates versioning&lt;/strong&gt;, preventing human mistakes.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Keeping your Python packaging metadata in sync with GitHub release tags &lt;strong&gt;prevents deployment issues, enables automation, and ensures smooth releases&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;By following best practices like &lt;strong&gt;centralized versioning, GitHub Actions validation, and automated version bumps&lt;/strong&gt;, you'll create a &lt;strong&gt;robust, foolproof versioning system&lt;/strong&gt;! &lt;/p&gt;

&lt;p&gt;Want to take it a step further? &lt;strong&gt;Integrate this workflow into your CI/CD pipeline today!&lt;/strong&gt; 🚀&lt;/p&gt;

</description>
      <category>programming</category>
      <category>python</category>
      <category>github</category>
      <category>git</category>
    </item>
    <item>
      <title>How to Use Migration Toolkit for Virtualization 2.7 with OpenShift Internal Registry</title>
      <dc:creator>Tosin Akinosho</dc:creator>
      <pubDate>Wed, 19 Feb 2025 18:03:54 +0000</pubDate>
      <link>https://forem.com/tosin2013/how-to-use-migration-toolkit-for-virtualization-27-with-openshift-internal-registry-ie3</link>
      <guid>https://forem.com/tosin2013/how-to-use-migration-toolkit-for-virtualization-27-with-openshift-internal-registry-ie3</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Migration Toolkit for Virtualization (MTV) 2.7 provides a comprehensive solution to migrate virtual machines (VMs) from VMware to OpenShift Virtualization. This guide outlines the steps required to install and configure MTV 2.7 while leveraging OpenShift's internal registry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before proceeding, ensure you have met the following requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Review VMware Prerequisites&lt;/strong&gt;: Familiarize yourself with the VMware prerequisites as outlined in the &lt;a href="https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.7/html-single/installing_and_using_the_migration_toolkit_for_virtualization/index" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Creating a VDDK Image&lt;/strong&gt;: Follow the steps for &lt;a href="https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.7/html-single/installing_and_using_the_migration_toolkit_for_virtualization/index#creating-vddk-image_mtv" rel="noopener noreferrer"&gt;creating a VDDK image&lt;/a&gt; for VMware disk introspection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenShift Cluster Requirements&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;A functional OpenShift Container Platform (OCP) cluster with OpenShift Virtualization enabled.&lt;/li&gt;
&lt;li&gt;Access to the OpenShift internal registry.&lt;/li&gt;
&lt;li&gt;Sufficient resources (CPU, Memory, and Storage) to accommodate migrated workloads.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 1: Install Migration Toolkit for Virtualization
&lt;/h2&gt;

&lt;p&gt;MTV 2.7 can be installed via the OperatorHub in OpenShift.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Login to OpenShift&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   oc login &lt;span class="nt"&gt;--server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;OCP_API_SERVER&amp;gt; &lt;span class="nt"&gt;--token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;TOKEN&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Install MTV Operator&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Navigate to &lt;em&gt;Operators &amp;gt; OperatorHub&lt;/em&gt; in the OpenShift Web Console.&lt;/li&gt;
&lt;li&gt;Search for &lt;em&gt;Migration Toolkit for Virtualization&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;em&gt;Install&lt;/em&gt; and follow the guided setup.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify Installation&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   oc get pods &lt;span class="nt"&gt;-n&lt;/span&gt; openshift-mtv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ensure all pods in the &lt;code&gt;openshift-mtv&lt;/code&gt; namespace are running.&lt;/p&gt;
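
&lt;p&gt;If you prefer the CLI to the web console, the same installation can be expressed as manifests. This is a sketch; the package and channel names (&lt;code&gt;mtv-operator&lt;/code&gt;, &lt;code&gt;release-v2.7&lt;/code&gt;) are assumptions to verify against your operator catalog:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc apply -f - &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-mtv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: migration
  namespace: openshift-mtv
spec:
  targetNamespaces:
    - openshift-mtv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: mtv-operator
  namespace: openshift-mtv
spec:
  channel: release-v2.7
  name: mtv-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then confirm the install with &lt;code&gt;oc get csv -n openshift-mtv&lt;/code&gt;.&lt;/p&gt;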

&lt;h2&gt;
  
  
  Step 2: Configure OpenShift Internal Registry for MTV
&lt;/h2&gt;

&lt;p&gt;MTV needs access to OpenShift’s internal image registry to store migrated VM images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expose the Internal Registry&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   oc patch configs.imageregistry.operator.openshift.io cluster &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;merge &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'{"spec":{"defaultRoute":true}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Retrieve the internal registry route:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   oc get route default-route &lt;span class="nt"&gt;-n&lt;/span&gt; openshift-image-registry &lt;span class="nt"&gt;--template&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{{ .spec.host }}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Authenticate with the Registry&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   podman login &lt;span class="nt"&gt;-u&lt;/span&gt; kubeadmin &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;oc &lt;span class="nb"&gt;whoami&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="si"&gt;$(&lt;/span&gt;oc get route default-route &lt;span class="nt"&gt;-n&lt;/span&gt; openshift-image-registry &lt;span class="nt"&gt;--template&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{{ .spec.host }}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Download and Save the VDDK Archive
&lt;/h2&gt;

&lt;p&gt;Download the required VDDK archive from VMware and save it in a temporary directory. The steps below cover downloading, extracting, and packaging it as a container image.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create and Configure the VDDK Image&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Migration Toolkit for Virtualization requires a VMware Virtual Disk Development Kit (VDDK) image for efficient data transfer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prepare the VDDK Libraries&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In a browser, navigate to the &lt;a href="https://developer.broadcom.com/sdks/vmware-virtual-disk-development-kit-vddk/8.0" rel="noopener noreferrer"&gt;VMware VDDK version 8 download page&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Select version 8.0.1 and click Download.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Note&lt;/strong&gt;: In order to migrate to OpenShift Virtualization 4.12, download VDDK version 7.0.3.2 from the &lt;a href="https://developer.broadcom.com/sdks/vmware-virtual-disk-development-kit-vddk/7.0" rel="noopener noreferrer"&gt;VMware VDDK version 7 download page&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Extract the tar file &lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xzf&lt;/span&gt; VMware-vix-disklib-&amp;lt;version&amp;gt;.x86_64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Build the Container Image&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create Dockerfile&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Dockerfile &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
RUN mkdir -p /opt
ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build the VDDK container image&lt;/span&gt;
podman build &lt;span class="nt"&gt;-t&lt;/span&gt; vddk-image:latest &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Tag the VDDK Image for OpenShift's Internal Registry&lt;/strong&gt;:&lt;br&gt;
&lt;em&gt;The image will live in the &lt;code&gt;openshift-mtv&lt;/code&gt; namespace.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   podman tag vddk-image:latest &lt;span class="si"&gt;$(&lt;/span&gt;oc get route default-route &lt;span class="nt"&gt;-n&lt;/span&gt; openshift-image-registry &lt;span class="nt"&gt;--template&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{{ .spec.host }}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/openshift-mtv/vddk-image:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Push the VDDK Image to OpenShift's Internal Registry&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   podman push &lt;span class="si"&gt;$(&lt;/span&gt;oc get route default-route &lt;span class="nt"&gt;-n&lt;/span&gt; openshift-image-registry &lt;span class="nt"&gt;--template&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{{ .spec.host }}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/openshift-mtv/vddk-image:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Give image pull permissions to openshift-mtv namespace&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc adm policy add-role-to-group system:image-puller system:serviceaccounts:openshift-mtv &lt;span class="nt"&gt;-n&lt;/span&gt; openshift-mtv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Configure MTV to Use the VDDK Image&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to &lt;em&gt;Operators &amp;gt; Installed Operators&lt;/em&gt; in OpenShift Web Console.&lt;/li&gt;
&lt;li&gt;Locate the &lt;code&gt;ForkliftController&lt;/code&gt; resource and edit its configuration.&lt;/li&gt;
&lt;li&gt;Specify the VDDK image location:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;vddkInitImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;image-registry.openshift-image-registry.svc:5000/openshift-mtv/vddk-image:latest'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Save and apply the changes.&lt;/li&gt;
&lt;/ul&gt;
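
&lt;p&gt;The same change can be made from the CLI, assuming the controller resource uses the default name &lt;code&gt;forklift-controller&lt;/code&gt; (verify with &lt;code&gt;oc get forkliftcontroller -n openshift-mtv&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc patch forkliftcontroller forklift-controller -n openshift-mtv \
  --type=merge \
  -p '{"spec":{"vddkInitImage":"image-registry.openshift-image-registry.svc:5000/openshift-mtv/vddk-image:latest"}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;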

&lt;h2&gt;
  
  
  Step 4: Configure Migration Plan
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Access the MTV Web UI&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Navigate to &lt;em&gt;Operators &amp;gt; Installed Operators&lt;/em&gt; in OpenShift.&lt;/li&gt;
&lt;li&gt;Click &lt;em&gt;Migration Toolkit for Virtualization&lt;/em&gt; and select &lt;em&gt;Open Console&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Define a Migration Plan&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Add VMware as a source provider.&lt;/li&gt;
&lt;li&gt;Select OpenShift Virtualization as the destination.&lt;/li&gt;
&lt;li&gt;Map VM workloads accordingly.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start Migration&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validate VM connectivity.&lt;/li&gt;
&lt;li&gt;Initiate migration and monitor the process via OpenShift logs:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; oc logs &lt;span class="nt"&gt;-f&lt;/span&gt; &amp;lt;mtv-pod-name&amp;gt; &lt;span class="nt"&gt;-n&lt;/span&gt; openshift-mtv
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By following these steps, you have successfully migrated virtual machines from VMware to OpenShift Virtualization using Migration Toolkit for Virtualization 2.7 while leveraging the OpenShift internal registry. Be sure to monitor workloads post-migration and optimize resources as needed.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>opensource</category>
      <category>openshift</category>
      <category>redhat</category>
    </item>
    <item>
      <title>Turbocharge Your OpenDevin Development: A Deep Dive into a Time-Saving Bash Script</title>
      <dc:creator>Tosin Akinosho</dc:creator>
      <pubDate>Fri, 14 Jun 2024 00:42:04 +0000</pubDate>
      <link>https://forem.com/tosin2013/turbocharge-your-opendevin-development-a-deep-dive-into-a-time-saving-bash-script-3p31</link>
      <guid>https://forem.com/tosin2013/turbocharge-your-opendevin-development-a-deep-dive-into-a-time-saving-bash-script-3p31</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Embarking on your &lt;a href="https://opendevin.github.io/OpenDevin/" rel="noopener noreferrer"&gt;OpenDevin&lt;/a&gt; journey?  This innovative platform opens doors to endless possibilities, but the initial setup can sometimes be a time sink. What if I told you there's a way to skip the tedious configuration and dive straight into coding? Meet our Bash script, your new best friend in the world of OpenDevin development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What This Script Does (and Why You'll Love It)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's break down the magic behind this script and how it can supercharge your workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Effortless Dependency Management:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Say goodbye to manual installations:&lt;/strong&gt;  The script does the heavy lifting by automatically checking for and installing essential tools like Docker, Node.js, Conda (a package and environment manager), and Poetry (a Python dependency manager).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why this matters:&lt;/strong&gt; No more scouring the web for installation instructions or troubleshooting compatibility issues. Your development environment is ready to roll in minutes.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Seamless Environment Configuration:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Your OpenDevin sanctuary:&lt;/strong&gt; The script sets up a dedicated Conda environment exclusively for &lt;a href="https://opendevin.github.io/OpenDevin/" rel="noopener noreferrer"&gt;OpenDevin&lt;/a&gt;. This keeps your project dependencies organized and prevents conflicts with other projects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git integration:&lt;/strong&gt; It effortlessly clones the OpenDevin repository (if you don't already have it), saving you a manual step.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ollama LLM setup (optional):&lt;/strong&gt; For those interested in working with Large Language Models (LLMs), the script can even help you get started with Ollama, a powerful LLM service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why this matters:&lt;/strong&gt; You get a clean, isolated workspace where you can experiment and build without worrying about messing up your system.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Configuration and Workspace Automation:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No more missing files:&lt;/strong&gt; The script ensures you have the essential configuration files (&lt;code&gt;config.toml&lt;/code&gt;) and a designated workspace directory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why this matters:&lt;/strong&gt; You won't waste time tracking down missing components or wondering where to store your project files.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Dockerized Development Bliss:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Build and run with a single command:&lt;/strong&gt; The script leverages the power of Docker to build and launch your OpenDevin application within a container.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency is key:&lt;/strong&gt; Docker guarantees that your development environment mirrors the production environment, minimizing those frustrating "it works on my machine" bugs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why this matters:&lt;/strong&gt; Docker streamlines testing and deployment, giving you the confidence that your application will behave as expected wherever it runs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Unleash the Script's Potential&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Getting started is a breeze:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Grab the Script:&lt;/strong&gt;  Head over to &lt;a href="https://raw.githubusercontent.com/tosin2013/OpenDevin/live/opendevin.sh" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/tosin2013/OpenDevin/live/opendevin.sh&lt;/a&gt; and download it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Make it Executable:&lt;/strong&gt;  Open your terminal and run &lt;code&gt;chmod +x opendevin.sh&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Command Your Environment:&lt;/strong&gt; Execute the script with different flags to perform specific actions:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;./opendevin.sh -i -b&lt;/code&gt; (Install dependencies and build)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;./opendevin.sh -r&lt;/code&gt; (Run the project)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;./opendevin.sh -h&lt;/code&gt; (View all options)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Who Should Use This Script?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OpenDevin Newcomers:&lt;/strong&gt;  Hit the ground running without getting bogged down in setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seasoned Developers:&lt;/strong&gt;  Automate repetitive tasks and reclaim your precious time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Teams:&lt;/strong&gt; Ensure everyone on your team has a consistent development environment, leading to smoother collaboration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Going Beyond the Basics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ready to customize? This script is your starting point. Dive into the code, tweak it to match your preferences, and even contribute back to the OpenDevin community!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's Get Developing!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Don't let setup slow you down. Let this Bash script be your trusty sidekick as you explore the exciting world of OpenDevin development. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are you waiting for? Share your OpenDevin creations in the comments below!&lt;/strong&gt; &lt;/p&gt;

</description>
      <category>opendevin</category>
      <category>ai</category>
      <category>opensource</category>
      <category>development</category>
    </item>
    <item>
      <title>Ansible Vault Secrets Documentation</title>
      <dc:creator>Tosin Akinosho</dc:creator>
      <pubDate>Fri, 19 Apr 2024 17:48:17 +0000</pubDate>
      <link>https://forem.com/tosin2013/ansible-vault-secrets-documentation-3g1a</link>
      <guid>https://forem.com/tosin2013/ansible-vault-secrets-documentation-3g1a</guid>
      <description>&lt;p&gt;This post outlines the necessary secrets required for Ansible playbooks. It includes details on how to use the &lt;a href="https://github.com/tosin2013/ansiblesafe" rel="noopener noreferrer"&gt;ansiblesafe&lt;/a&gt; tool to manage these secrets securely.&lt;/p&gt;

&lt;h2&gt;
  
  
  Red Hat Subscription Manager (RHSM) Variables
&lt;/h2&gt;

&lt;p&gt;These variables are used to register the Ansible Automation Platform instance with Red Hat Subscription Manager and attach the necessary subscriptions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;rhsm_username&lt;/code&gt;: The username for your Red Hat account. (&lt;a href="https://access.redhat.com/solutions/253273" rel="noopener noreferrer"&gt;More info&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;rhsm_password&lt;/code&gt;: The password for your Red Hat account. (&lt;a href="https://access.redhat.com/solutions/253273" rel="noopener noreferrer"&gt;More info&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;rhsm_org&lt;/code&gt;: The ID of the organization to register the system to. (&lt;a href="https://access.redhat.com/articles/1378093" rel="noopener noreferrer"&gt;More info&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;rhsm_activationkey&lt;/code&gt;: The activation key used to register the system. (&lt;a href="https://access.redhat.com/articles/1378093" rel="noopener noreferrer"&gt;More info&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Admin User Variables
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;admin_user_password&lt;/code&gt;: The password for the admin user in Virtual Machines using kcli-pipelines. (&lt;a href="https://github.com/tosin2013/kcli-pipelines" rel="noopener noreferrer"&gt;More info&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Offline Token Variables
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;offline_token&lt;/code&gt;: The offline token used for Red Hat Subscription Manager. (&lt;a href="https://access.redhat.com/solutions/3868301" rel="noopener noreferrer"&gt;More info&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;automation_hub_offline_token&lt;/code&gt;: The offline token used for Automation Hub. (&lt;a href="https://console.redhat.com/ansible/automation-hub/token/" rel="noopener noreferrer"&gt;More info&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  OpenShift Pull Secret
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;openshift_pull_secret&lt;/code&gt;: The pull secret used to deploy OpenShift Clusters. (&lt;a href="https://cloud.redhat.com/openshift/install/pull-secret" rel="noopener noreferrer"&gt;More info&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  FreeIPA Server Admin Password
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;freeipa_server_admin_password&lt;/code&gt;: The password for the FreeIPA server admin user using the freeipa-workshop-deployer. (&lt;a href="https://github.com/tosin2013/freeipa-workshop-deployer" rel="noopener noreferrer"&gt;More info&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
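
&lt;p&gt;Putting the variables above together, a &lt;code&gt;vault.yml&lt;/code&gt; might look like the following sketch. All values are placeholders; include only the keys your playbooks actually use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rhsm_username: your-redhat-username
rhsm_password: your-redhat-password
rhsm_org: "1234567"
rhsm_activationkey: my-activation-key
admin_user_password: changeme
offline_token: your-offline-token
automation_hub_offline_token: your-automation-hub-token
openshift_pull_secret: '{"auths": {}}'
freeipa_server_admin_password: changeme
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
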

&lt;h2&gt;
  
  
  Managing Secrets with Ansiblesafe
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;ansiblesafe&lt;/code&gt; is a command-line tool written in Go that simplifies encrypting and decrypting YAML files with the Ansible Vault CLI. It supports operations such as encrypting, decrypting, and syncing secrets with HashiCorp Vault.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dnf install ansible-core -y 
curl -OL https://github.com/tosin2013/ansiblesafe/releases/download/v0.0.8/ansiblesafe-v0.0.8-linux-amd64.tar.gz
tar -zxvf ansiblesafe-v0.0.8-linux-amd64.tar.gz
chmod +x ansiblesafe-linux-amd64 
sudo mv ansiblesafe-linux-amd64 /usr/local/bin/ansiblesafe
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Usage
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;If you do not pass any flags, everything will be auto-generated for you.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansiblesafe -h
Usage of /tmp/go-build1657505477/b001/exe/ansiblesafe:
  -f, --file string     Path to YAML file (default: $HOME/vault.yml)
  -o, --operation int   Operation to perform (1: encrypt, 2: decrypt, 3: Write secrets to HashiCorp Vault, 4: Read secrets from HashiCorp Vault, 5: skip encrypting/decrypting)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To use &lt;code&gt;ansiblesafe&lt;/code&gt;, run one of the following commands depending on your needs (drop the &lt;code&gt;./&lt;/code&gt; prefix if the binary is installed to &lt;code&gt;/usr/local/bin&lt;/code&gt; as shown above):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Encrypt a YAML file:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  ./ansiblesafe &lt;span class="nt"&gt;-f&lt;/span&gt; path_to_your_file &lt;span class="nt"&gt;-o&lt;/span&gt; 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Decrypt a YAML file:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  ./ansiblesafe &lt;span class="nt"&gt;-f&lt;/span&gt; path_to_your_file &lt;span class="nt"&gt;-o&lt;/span&gt; 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  HashiCorp Vault Examples
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Write secrets to HashiCorp Vault&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ export VAULT_ADDRESS=http://127.0.0.1:8200/
$ export VAULT_TOKEN=token
$ export SECRET_PATH=ansiblesafe/example
$ ansiblesafe -o 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Read secrets from HashiCorp Vault and save them to vault.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ export VAULT_ADDRESS=http://127.0.0.1:8200/
$ export VAULT_TOKEN=token
$ export SECRET_PATH=ansiblesafe/example
$ ansiblesafe -o 4
$ ansiblesafe -o 1 # Optional encrypt the file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Security Considerations
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Using ansiblesafe without a password prompt&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ touch ~/.vault_password
$ chmod 600 ~/.vault_password
# The leading space keeps the command out of your shell history
# (requires HISTCONTROL to include "ignorespace"; replace "password" with your vault password)
$  echo password &amp;gt;&amp;gt; ~/.vault_password
# Link the password file into the current working directory
$ ln ~/.vault_password .
# Set the environment variable to the location of the file
$ export ANSIBLE_VAULT_PASSWORD_FILE=.vault_password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remember to keep your vault password and tokens secure and limit access to authorized users only.&lt;/p&gt;
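
&lt;p&gt;With the password file configured, an encrypted &lt;code&gt;vault.yml&lt;/code&gt; can be supplied directly to a playbook run. Here, &lt;code&gt;site.yml&lt;/code&gt; is a placeholder for your own playbook:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ansible-playbook site.yml -e @~/vault.yml --vault-password-file ~/.vault_password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
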

&lt;h2&gt;
  
  
  More Information
&lt;/h2&gt;

&lt;p&gt;For more details on &lt;code&gt;ansiblesafe&lt;/code&gt; and its capabilities, visit the &lt;a href="https://github.com/tosin2013/ansiblesafe.git" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>redhat</category>
      <category>devops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Streamlining Git Repository Management with Bash</title>
      <dc:creator>Tosin Akinosho</dc:creator>
      <pubDate>Wed, 03 Apr 2024 00:25:24 +0000</pubDate>
      <link>https://forem.com/tosin2013/streamlining-git-repository-management-with-bash-3mpa</link>
      <guid>https://forem.com/tosin2013/streamlining-git-repository-management-with-bash-3mpa</guid>
      <description>&lt;p&gt;Managing multiple Git repositories can be a challenging task, especially when it comes to keeping track of their status, merging changes, and ensuring everything is up to date. However, with the power of Bash scripting, you can streamline this process and save valuable time and effort. In this article, we'll delve into how a Bash script can simplify Git repository management, focusing on automating status checks, merging changes, and discussing potential enhancements to the script.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges of Manual Git Repository Management
&lt;/h2&gt;

&lt;p&gt;Manually managing multiple Git repositories involves several challenges. Firstly, keeping track of the status of each repository, including whether there are uncommitted changes, untracked files, or incoming changes from remote branches, can be time-consuming. Moreover, performing repetitive tasks like merging changes between repositories or pushing updates can lead to errors and inconsistencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating Git Repository Management with Bash
&lt;/h2&gt;

&lt;p&gt;To address these challenges, a Bash script can be incredibly useful. Let's take a look at a sample script that automates the process of merging changes from a source repository to a target repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="c"&gt;# Source repository configuration (HTTPS)&lt;/span&gt;
&lt;span class="nv"&gt;source_repo_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/SOURCE_REPO_OWNER/SOURCE_REPO_NAME.git"&lt;/span&gt;
&lt;span class="nv"&gt;source_branch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"main"&lt;/span&gt;

&lt;span class="c"&gt;# Target repository configuration (SSH)&lt;/span&gt;
&lt;span class="nv"&gt;target_repo_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"git@gitea.example.com:TARGET_REPO_OWNER/TARGET_REPO_NAME.git"&lt;/span&gt;
&lt;span class="nv"&gt;target_branch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"main"&lt;/span&gt;

&lt;span class="c"&gt;# Temporary directory for cloning repositories&lt;/span&gt;
&lt;span class="nv"&gt;temp_dir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;mktemp&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Clone the source repository&lt;/span&gt;
git clone &lt;span class="nt"&gt;--branch&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$source_branch&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$source_repo_url&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$temp_dir&lt;/span&gt;&lt;span class="s2"&gt;/source_repo"&lt;/span&gt;

&lt;span class="c"&gt;# Clone the target repository&lt;/span&gt;
git clone &lt;span class="nt"&gt;--branch&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$target_branch&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$target_repo_url&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$temp_dir&lt;/span&gt;&lt;span class="s2"&gt;/target_repo"&lt;/span&gt;

&lt;span class="c"&gt;# Navigate to the target repository&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$temp_dir&lt;/span&gt;&lt;span class="s2"&gt;/target_repo"&lt;/span&gt;

&lt;span class="c"&gt;# Add the source repository as a remote&lt;/span&gt;
git remote add &lt;span class="nb"&gt;source&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$source_repo_url&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Fetch the latest changes from the source repository&lt;/span&gt;
git fetch &lt;span class="nb"&gt;source&lt;/span&gt;

&lt;span class="c"&gt;# Create a new branch for merging&lt;/span&gt;
git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="s2"&gt;"merge-from-source"&lt;/span&gt;

&lt;span class="c"&gt;# Merge the changes from the source branch&lt;/span&gt;
git merge &lt;span class="s2"&gt;"source/&lt;/span&gt;&lt;span class="nv"&gt;$source_branch&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Push the merged changes to the target repository&lt;/span&gt;
git push origin &lt;span class="s2"&gt;"merge-from-source"&lt;/span&gt;

&lt;span class="c"&gt;# Clean up the temporary directory&lt;/span&gt;
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$temp_dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Merge completed successfully!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script automates the process of merging changes from a specified branch (&lt;code&gt;main&lt;/code&gt; in this case) of a source repository to a target repository. It clones both repositories into a temporary directory, fetches the latest changes from the source repository, creates a new branch for merging, performs the merge, pushes the changes to the target repository, and finally cleans up the temporary directory.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Components of the Script
&lt;/h2&gt;

&lt;p&gt;Let's break down the key components of the script:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Repository Configuration:&lt;/strong&gt; The script begins by defining the URLs and branches for the source and target repositories.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Temporary Directory:&lt;/strong&gt; It creates a temporary directory (&lt;code&gt;temp_dir&lt;/code&gt;) to store the cloned repositories and perform the merge operation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cloning Repositories:&lt;/strong&gt; The script clones the source and target repositories into the temporary directory using the specified branch (&lt;code&gt;main&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Merging Changes:&lt;/strong&gt; After adding the source repository as a remote, fetching the latest changes, and creating a new branch for merging, the script performs the actual merge operation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pushing Changes:&lt;/strong&gt; Once the merge is successful, the script pushes the merged changes to a new &lt;code&gt;merge-from-source&lt;/code&gt; branch on the target repository, ready for review.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cleanup:&lt;/strong&gt; Finally, the script cleans up by removing the temporary directory.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Benefits of Using the Script
&lt;/h2&gt;

&lt;p&gt;Using this Bash script offers several benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Time Savings:&lt;/strong&gt; Automating the merge process saves time compared to manual intervention.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced Errors:&lt;/strong&gt; The script reduces the risk of errors that can occur during manual repository management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency:&lt;/strong&gt; By automating repetitive tasks, the script ensures consistency across repositories.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Potential Enhancements and Additional Features
&lt;/h2&gt;

&lt;p&gt;While the provided script streamlines basic Git repository management tasks, several enhancements and additional features can be added to further improve its functionality:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Error Handling:&lt;/strong&gt; Incorporate error handling mechanisms to gracefully handle failures during cloning, merging, or pushing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Branch Selection:&lt;/strong&gt; Allow users to specify source and target branches dynamically rather than hardcoding them in the script.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interactive Mode:&lt;/strong&gt; Implement an interactive mode that prompts users for inputs such as repository URLs, branches, and confirmation before proceeding with actions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging:&lt;/strong&gt; Add logging capabilities to track the execution of the script and capture relevant information for troubleshooting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parallel Processing:&lt;/strong&gt; For large repositories or multiple merges, consider implementing parallel processing to improve performance.&lt;/li&gt;
&lt;/ol&gt;
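
&lt;p&gt;As a sketch of the first enhancement, the script could fail fast and still clean up the temporary directory by adding the following near the top (standard Bash built-ins; no external tools assumed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Abort on errors, unset variables, and failed pipeline stages
set -euo pipefail

temp_dir=$(mktemp -d)

# Always remove the temporary directory, even if a git command fails mid-run
cleanup() {
  rm -rf "$temp_dir"
}
trap cleanup EXIT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With the &lt;code&gt;trap&lt;/code&gt; in place, the explicit &lt;code&gt;rm -rf "$temp_dir"&lt;/code&gt; at the end of the script becomes unnecessary.&lt;/p&gt;
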

&lt;p&gt;By incorporating these enhancements, the script can become more robust, user-friendly, and suitable for a wider range of use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, Bash scripting offers a powerful way to streamline Git repository management tasks. By automating processes like checking status, merging changes, and pushing updates, scripts like the one discussed in this article can significantly save time, reduce errors, and improve overall efficiency. With the potential for further enhancements and customization, Bash scripts become invaluable tools for developers and teams managing multiple Git repositories.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
