<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Animesh Pathak</title>
    <description>The latest articles on Forem by Animesh Pathak (@sonichigo).</description>
    <link>https://forem.com/sonichigo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F491160%2Fc1d3df1a-9ada-43c4-af12-8738cebdf995.jpg</url>
      <title>Forem: Animesh Pathak</title>
      <link>https://forem.com/sonichigo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sonichigo"/>
    <language>en</language>
    <item>
      <title>How diffChangelog and Snapshots Work Together</title>
      <dc:creator>Animesh Pathak</dc:creator>
      <pubDate>Thu, 12 Feb 2026 04:01:24 +0000</pubDate>
      <link>https://forem.com/sonichigo/how-diffchangelog-and-snapshots-work-together-2l0i</link>
      <guid>https://forem.com/sonichigo/how-diffchangelog-and-snapshots-work-together-2l0i</guid>
      <description>&lt;p&gt;When I started formalizing &lt;a href="https://www.harness.io/products/database-devops" rel="noopener noreferrer"&gt;Database DevOps practices&lt;/a&gt;, one recurring issue kept surfacing: &lt;strong&gt;schema drift&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Development was ahead of staging. Production had emergency hotfixes. QA sometimes had “just one small tweak” that never made it back to &lt;a href="https://developer.harness.io/docs/database-devops/gitops/maintaining-database-schema" rel="noopener noreferrer"&gt;version control&lt;/a&gt;. Keeping schemas aligned across environments wasn’t just operational hygiene; it became foundational for reliability.&lt;/p&gt;

&lt;p&gt;That’s where &lt;strong&gt;Liquibase OSS&lt;/strong&gt;, and specifically &lt;code&gt;diffChangelog&lt;/code&gt;, became a core part of my schema-syncing strategy.&lt;/p&gt;

&lt;p&gt;This article explains two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;How I use &lt;code&gt;diffChangelog&lt;/code&gt; for database &lt;a href="https://developer.harness.io/docs/database-devops/use-database-devops/schema-syncronisation" rel="noopener noreferrer"&gt;schema synchronization&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;How it actually works internally using snapshots&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Because understanding the mechanics changes how confidently you automate it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Schema Sync Problem
&lt;/h2&gt;

&lt;p&gt;In practice, &lt;a href="https://developer.harness.io/docs/database-devops/use-database-devops/schema-syncronisation" rel="noopener noreferrer"&gt;schema syncing&lt;/a&gt; usually looks like one of these scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dev schema contains new tables not yet in staging&lt;/li&gt;
&lt;li&gt;Production has an index created manually during incident mitigation&lt;/li&gt;
&lt;li&gt;A column datatype differs across environments&lt;/li&gt;
&lt;li&gt;Constraints exist in one environment but not another&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Traditional approaches involve manual inspection or ad-hoc SQL comparison scripts. Both are error-prone. With Liquibase OSS, I approach this differently. Instead of comparing raw SQL, I compare &lt;strong&gt;database states&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Command That Powers Schema Sync
&lt;/h2&gt;

&lt;p&gt;Here’s the command I typically use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;liquibase diffChangelog &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--referenceUrl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;jdbc:postgresql://localhost:5432/dev &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;jdbc:postgresql://localhost:5432/prod &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--changeLogFile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;schema-sync.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Conceptually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reference database&lt;/strong&gt; → Desired schema&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Target database&lt;/strong&gt; → Actual schema&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output&lt;/strong&gt; → ChangeLog needed to align target with reference&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But what’s happening under the hood?&lt;/p&gt;

&lt;h2&gt;
  
  
  How diffChangelog Actually Works
&lt;/h2&gt;

&lt;p&gt;The power of diffChangelog lies in &lt;a href="https://docs.liquibase.com/reference-guide/database-inspection-change-tracking-and-utility-commands/snapshot" rel="noopener noreferrer"&gt;Liquibase’s snapshot engine&lt;/a&gt;. It does not compare SQL files. It does not parse DDL text. Instead, it performs a structured, object-level comparison.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Snapshot Creation
&lt;/h3&gt;

&lt;p&gt;Liquibase generates a snapshot of each database. A snapshot is an in-memory representation of schema metadata. This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Schemas&lt;/li&gt;
&lt;li&gt;Tables&lt;/li&gt;
&lt;li&gt;Columns&lt;/li&gt;
&lt;li&gt;Indexes&lt;/li&gt;
&lt;li&gt;Primary keys&lt;/li&gt;
&lt;li&gt;Foreign keys&lt;/li&gt;
&lt;li&gt;Unique constraints&lt;/li&gt;
&lt;li&gt;Sequences&lt;/li&gt;
&lt;li&gt;Views&lt;/li&gt;
&lt;li&gt;Data types&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s the lifecycle:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flowchart TD
    A[Reference Database] --&amp;gt; B[Snapshot Generator]
    C[Target Database] --&amp;gt; D[Snapshot Generator]
    B --&amp;gt; E[Reference Snapshot Object]
    D --&amp;gt; F[Target Snapshot Object]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Liquibase queries database metadata (e.g., &lt;code&gt;information_schema&lt;/code&gt; in PostgreSQL) and converts it into structured objects. Instead of raw SQL text, it now has two object graphs representing each schema.&lt;/p&gt;
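
&lt;p&gt;The snapshot is also exposed as a standalone command, which is handy for debugging a confusing diff. An illustrative invocation (the &lt;code&gt;--snapshotFormat&lt;/code&gt; and &lt;code&gt;--outputFile&lt;/code&gt; flags are available in Liquibase OSS; verify the exact spelling against your version’s help output):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;liquibase snapshot \
  --url=jdbc:postgresql://localhost:5432/prod \
  --snapshotFormat=json \
  --outputFile=prod-snapshot.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The JSON file contains the same object types listed above, so you can see exactly what the diff engine will compare.&lt;/p&gt;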

&lt;h3&gt;
  
  
  Step 2: Object Graph Comparison
&lt;/h3&gt;

&lt;p&gt;Once both snapshots are built, Liquibase runs its diff engine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flowchart TD
    E[Reference Snapshot] --&amp;gt; G[Diff Engine]
    F[Target Snapshot] --&amp;gt; G
    G --&amp;gt; H[Difference Classification]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Differences are categorized as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Missing → Exists in reference, not in target&lt;/li&gt;
&lt;li&gt;Unexpected → Exists in target, not in reference&lt;/li&gt;
&lt;li&gt;Changed → Exists in both but attributes differ&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A table in dev but not prod → Missing&lt;/li&gt;
&lt;li&gt;An index in prod but not dev → Unexpected&lt;/li&gt;
&lt;li&gt;Column length differs → Changed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This classification becomes the foundation for changelog generation.&lt;/p&gt;
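
&lt;p&gt;Running the plain &lt;code&gt;diff&lt;/code&gt; command (instead of &lt;code&gt;diffChangelog&lt;/code&gt;) prints this classification directly. An abridged, illustrative excerpt with made-up object names; the exact formatting varies across Liquibase versions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Missing Table(s):
     orders
Unexpected Index(s):
     idx_orders_hotfix
Changed Column(s):
     public.users.email
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;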

&lt;h3&gt;
  
  
  Step 3: ChangeLog Generation
&lt;/h3&gt;

&lt;p&gt;Liquibase then converts differences into changeSets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flowchart TD
    H[Difference Classification] --&amp;gt; I[ChangeLog Generator]
    I --&amp;gt; J[Generated ChangeLog File]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example outputs:&lt;/p&gt;

&lt;p&gt;If a table is missing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;lt;createTable &lt;span class="nv"&gt;tableName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"orders"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
   ...
&amp;lt;/createTable&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a datatype changed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;lt;modifyDataType &lt;span class="nv"&gt;tableName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"users"&lt;/span&gt; &lt;span class="nv"&gt;columnName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"email"&lt;/span&gt; &lt;span class="nv"&gt;newDataType&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"varchar(255)"&lt;/span&gt;/&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is a deployable changelog that synchronizes the target schema.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Use This for Schema Syncing
&lt;/h2&gt;

&lt;p&gt;Understanding the internal mechanics allows me to apply it confidently in real workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Environment Alignment (Dev → Staging → Prod)
&lt;/h3&gt;

&lt;p&gt;When dev is the source of truth:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flowchart LR
    A[Dev Schema] --&amp;gt; B[Snapshot]
    C[Prod Schema] --&amp;gt; D[Snapshot]
    B --&amp;gt; E[Diff Engine]
    D --&amp;gt; E
    E --&amp;gt; F[Sync ChangeLog]
    F --&amp;gt; G[Apply to Prod]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures production moves forward predictably.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Drift Detection in CI
&lt;/h3&gt;

&lt;p&gt;I often run diffChangelog during CI to detect unauthorized drift:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deploy known changelog to temp DB&lt;/li&gt;
&lt;li&gt;Snapshot production&lt;/li&gt;
&lt;li&gt;Compare&lt;/li&gt;
&lt;li&gt;Fail pipeline if differences exist
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flowchart TD
    A[Git Controlled Schema] --&amp;gt; B[Temp DB]
    C[Production DB] --&amp;gt; D[Snapshot]
    B --&amp;gt; E[Snapshot]
    D --&amp;gt; F[Diff Engine]
    E --&amp;gt; F
    F --&amp;gt; G{Drift Detected?}
    G --&amp;gt;|Yes| H[Fail Build]
    G --&amp;gt;|No| I[Continue]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
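
&lt;p&gt;A minimal sketch of that pipeline stage, with hypothetical environment variables. Note that Liquibase OSS does not fail on differences by itself, so this parses the textual output; that format is version-dependent and should be hardened before real use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Rebuild the expected schema from Git in a throwaway database
liquibase update --url="$TEMP_DB_URL" --changeLogFile=changelog.xml

# Compare expected vs. actual production state
liquibase diff --referenceUrl="$TEMP_DB_URL" --url="$PROD_DB_URL" | tee diff.txt

# Fail the build if any category reports something other than NONE
grep -E "Missing|Unexpected|Changed" diff.txt | grep -vq "NONE" &amp;&amp; exit 1
exit 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;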



&lt;p&gt;This gives me governance without slowing velocity.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Legacy Database Onboarding
&lt;/h3&gt;

&lt;p&gt;For existing systems not yet version-controlled:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Snapshot production&lt;/li&gt;
&lt;li&gt;Generate baseline changelog&lt;/li&gt;
&lt;li&gt;Commit to Git&lt;/li&gt;
&lt;li&gt;Transition into controlled migration model&lt;/li&gt;
&lt;/ol&gt;
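
&lt;p&gt;In command form, the snapshot-and-baseline steps look roughly like this; &lt;code&gt;changelogSync&lt;/code&gt; records the baseline as already applied so Liquibase will not try to re-execute it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Capture the production schema as a baseline changelog
liquibase generateChangeLog \
  --url=jdbc:postgresql://localhost:5432/prod \
  --changeLogFile=baseline.xml

# Mark the baseline as deployed without executing it
liquibase changelogSync \
  --url=jdbc:postgresql://localhost:5432/prod \
  --changeLogFile=baseline.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;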

&lt;p&gt;&lt;code&gt;diffChangelog&lt;/code&gt; becomes a bridge between unmanaged and managed schemas.&lt;/p&gt;

&lt;h2&gt;
  
  
  Important Practical Lessons
&lt;/h2&gt;

&lt;p&gt;Over time, I’ve learned several nuances.&lt;/p&gt;

&lt;h3&gt;
  
  
  Snapshot Scope Matters
&lt;/h3&gt;

&lt;p&gt;For large schemas, snapshot generation can be heavy. I use filters like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nt"&gt;--schemas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;public
&lt;span class="nt"&gt;--includeObjects&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;table:users,index:users_email_idx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This keeps comparisons focused and performant.&lt;/p&gt;

&lt;h3&gt;
  
  
  Not Every Difference Should Be Deployed
&lt;/h3&gt;

&lt;p&gt;Auto-generated constraint names can differ across environments. Index naming strategies may vary. I always review generated changelogs before applying them. Schema syncing is powerful, but it must be deliberate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Snapshots Enable Determinism
&lt;/h3&gt;

&lt;p&gt;The biggest realization for me was this: &lt;code&gt;diffChangelog&lt;/code&gt; is not comparing SQL text. It is comparing structured schema models. That abstraction layer is what makes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cross-database comparison possible&lt;/li&gt;
&lt;li&gt;CI automation reliable&lt;/li&gt;
&lt;li&gt;Drift detection accurate&lt;/li&gt;
&lt;li&gt;Schema syncing deterministic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without snapshots, diffing would be brittle and vendor-specific.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters in Database DevOps
&lt;/h2&gt;

&lt;p&gt;Schema syncing is not just about keeping environments tidy. It enables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Predictable deployments&lt;/li&gt;
&lt;li&gt;Audit traceability&lt;/li&gt;
&lt;li&gt;Reduced incident risk&lt;/li&gt;
&lt;li&gt;Environment parity&lt;/li&gt;
&lt;li&gt;Controlled rollback strategies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Liquibase OSS gives me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Snapshot generation&lt;/li&gt;
&lt;li&gt;Object-level diffing&lt;/li&gt;
&lt;li&gt;ChangeLog generation&lt;/li&gt;
&lt;li&gt;Automated synchronization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All without requiring enterprise extensions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;When I think about database schema syncing today, I no longer see it as a manual reconciliation process. I see it as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Snapshot state&lt;/li&gt;
&lt;li&gt;Compare object graphs&lt;/li&gt;
&lt;li&gt;Generate delta&lt;/li&gt;
&lt;li&gt;Apply controlled synchronization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;diffChangelog&lt;/code&gt;, powered by snapshots, turns schema comparison into a structured, automatable workflow. And once I understood how it works internally, I stopped treating it as a convenience command and started treating it as a foundational component of my Database DevOps architecture.&lt;/p&gt;

</description>
      <category>database</category>
      <category>devops</category>
      <category>postgres</category>
      <category>schema</category>
    </item>
    <item>
      <title>Observability for Databases in CI/CD</title>
      <dc:creator>Animesh Pathak</dc:creator>
      <pubDate>Mon, 08 Sep 2025 09:36:44 +0000</pubDate>
      <link>https://forem.com/sonichigo/observability-for-databases-in-cicd-15bd</link>
      <guid>https://forem.com/sonichigo/observability-for-databases-in-cicd-15bd</guid>
      <description>&lt;p&gt;When organizations think about &lt;strong&gt;continuous integration and continuous delivery (CI/CD)&lt;/strong&gt;, the focus often centers on application code: unit tests, build pipelines, automated deployments, and monitoring for microservices. But there’s a blind spot that frequently gets overlooked - “the &lt;strong&gt;database&lt;/strong&gt;.”&lt;/p&gt;

&lt;p&gt;Databases aren’t just another service; they are the backbone of modern applications. A schema change, performance regression, or even a small migration error can bring an entire release to a halt. This is why &lt;strong&gt;observability for databases in CI/CD pipelines&lt;/strong&gt; has become critical, though it often remains under-prioritized.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll explore why observability matters for databases, what unique challenges it presents, and how teams can begin embedding observability into their database delivery workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Observability Matters for Databases
&lt;/h2&gt;

&lt;p&gt;When we think about observability in modern engineering, it often conjures up images of dashboards filled with service metrics, traces, and logs. But databases demand a different lens. They are stateful systems, deeply intertwined with application logic, and they evolve in ways that are more delicate and often riskier than application code itself.&lt;/p&gt;

&lt;p&gt;A typical software deployment can be rolled back with relative ease. If an API misbehaves, the deployment can be reverted, and the system is usually restored quickly. Databases, however, do not follow this forgiving pattern. Schema changes are not simply “&lt;strong&gt;versioned&lt;/strong&gt;” like code; they alter the shape of the data itself. Once an &lt;code&gt;ALTER TABLE&lt;/code&gt; or a &lt;code&gt;DROP COLUMN&lt;/code&gt; runs in production, undoing it can be slow, painful, and in some cases impossible without a full restore from backups. This makes the stakes of database delivery inherently higher.&lt;/p&gt;

&lt;p&gt;Another reason observability matters is performance. Many organizations have faced situations where a release seemed successful, only to discover days later that a single schema tweak had increased query latency across the board. The application monitoring tools might show a spike in response times, but tracing that back to the exact migration can be like finding a needle in a haystack. With proper observability, the connection between “deployment event” and “query performance regression” becomes visible and actionable.&lt;/p&gt;

&lt;p&gt;Finally, there’s the matter of &lt;strong&gt;speed versus safety&lt;/strong&gt;. The promise of CI/CD is agility, faster releases, quicker iterations, and reduced time to market. Yet, rushing database deployments without adequate visibility is like driving a car at top speed with no dashboard indicators. You don’t know if you’re running low on fuel, if the engine is overheating, or if a tire is about to burst. Observability provides that feedback loop, enabling teams to move quickly while still protecting data integrity and system reliability.&lt;/p&gt;

&lt;p&gt;In short: observability isn’t a &lt;strong&gt;&lt;em&gt;“nice to have”&lt;/em&gt;&lt;/strong&gt; for databases; it’s the foundation that makes modern database delivery feasible at scale.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5id2o8icjd9racnei0pc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5id2o8icjd9racnei0pc.png" alt="frustration of devops engineer" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Observability Challenges Unique to Databases
&lt;/h2&gt;

&lt;p&gt;Adding observability for databases isn’t as straightforward as reusing traditional APM tools. Databases behave differently than stateless services. Here are a few pain points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Black Box Migrations&lt;/strong&gt; – Most teams treat migrations as fire-and-forget scripts. When they fail, root cause analysis is often tedious.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Drift Detection&lt;/strong&gt; – Environments fall out of sync easily, leading to inconsistencies and unpredictable behavior in production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Metrics Granularity&lt;/strong&gt; – Beyond CPU and memory, teams need visibility into query execution times, index usage, and lock contention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tooling Fragmentation&lt;/strong&gt; – Application monitoring stacks rarely integrate cleanly with database-native metrics.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Adding Observability in Database CI/CD Pipelines
&lt;/h2&gt;

&lt;p&gt;So, how do we solve this? The answer lies in shifting observability left, i.e. building it into the pipeline rather than treating it as an afterthought.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pre-Deployment Checks&lt;/strong&gt; – Validate schema compatibility and dependencies before deploying.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Migration Visibility&lt;/strong&gt; – Capture execution times, before/after states, and log outputs for every migration. Interestingly, migration strategies themselves can influence observability. For example, whether teams adopt a &lt;a href="http://harness.io/blog/state-vs-script-migrations-in-modern-database-devops" rel="noopener noreferrer"&gt;&lt;strong&gt;state-based&lt;/strong&gt; or &lt;strong&gt;script-based&lt;/strong&gt;&lt;/a&gt; model directly impacts how changes are tracked and monitored.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-Time Performance Monitoring&lt;/strong&gt; – Extend existing observability stacks to monitor query latency and slow queries after deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Drift Alerts&lt;/strong&gt; – Automate schema comparison across environments to catch unapproved changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feedback Loops&lt;/strong&gt; – Build dashboards for developers and DBAs alike, encouraging shared ownership.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
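
&lt;p&gt;For migration visibility, even minimal instrumentation in the pipeline helps. A hedged sketch assuming a POSIX shell and hypothetical file names; adapt it to whatever migration tool and metrics sink you use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Time the migration and keep the full log as a build artifact
start=$(date +%s)
liquibase update --url="$DB_URL" --changeLogFile=changelog.xml 2&amp;gt;&amp;amp;1 | tee migration.log
echo "migration_duration_seconds $(( $(date +%s) - start ))" &amp;gt;&amp;gt; metrics.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;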

&lt;h2&gt;
  
  
  The Payoff: Faster, Safer Releases
&lt;/h2&gt;

&lt;p&gt;By investing in observability, organizations gain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Confidence in Deployments&lt;/strong&gt;: Teams know changes are safe before they hit production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fewer Firefights&lt;/strong&gt;: Early detection reduces late-night incidents and downtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Shared Responsibility&lt;/strong&gt;: Observability bridges the gap between DevOps engineers and DBAs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Better Business Outcomes&lt;/strong&gt;: Faster releases, less downtime, and improved customer experience.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, observability doesn’t just protect your database; it accelerates your delivery pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Database DevOps is no longer optional. As organizations adopt &lt;a href="https://www.harness.io/blog/trunk-vs-feature-vs-environment-database-deployment" rel="noopener noreferrer"&gt;trunk-based development&lt;/a&gt; and rapid release cycles, databases must keep pace. Observability is the missing piece that ensures every schema change, migration, or deployment is executed with confidence.&lt;/p&gt;

&lt;p&gt;Start small: integrate migration visibility, add drift detection, and connect your databases to your observability stack. The sooner you embed observability, the sooner your pipeline becomes both &lt;strong&gt;faster and safer&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>database</category>
      <category>cicd</category>
      <category>devops</category>
      <category>learning</category>
    </item>
    <item>
      <title>Smarter Database DevOps with AI: From Changelogs to Intelligent Pipelines</title>
      <dc:creator>Animesh Pathak</dc:creator>
      <pubDate>Thu, 04 Sep 2025 10:02:34 +0000</pubDate>
      <link>https://forem.com/sonichigo/smarter-database-devops-with-ai-from-changelogs-to-intelligent-pipelines-1642</link>
      <guid>https://forem.com/sonichigo/smarter-database-devops-with-ai-from-changelogs-to-intelligent-pipelines-1642</guid>
      <description>&lt;p&gt;Databases have always been the “final boss” of DevOps. You can automate your CI/CD pipelines all you want, but when it comes to database deployments, teams often slow down. Manual changelogs, risky rollbacks, schema drift sound familiar?&lt;br&gt;
But what if AI could help?&lt;br&gt;
That’s exactly what we’re experimenting with in my side project based on &lt;a href="https://www.harness.io/blog/ai-in-database-devops-from-manual-bottlenecks-to-autonomous-change" rel="noopener noreferrer"&gt;Harness Database DevOps&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Databases Lag Behind in DevOps
&lt;/h2&gt;

&lt;p&gt;Unlike application code, database changes are stateful and persistent. If you mess up a deployment, you can’t just roll back by redeploying a container. The cost of errors is high, and the tooling hasn’t always kept up with the speed of modern CI/CD.&lt;br&gt;
Some common pain points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing and maintaining changelogs is tedious.&lt;/li&gt;
&lt;li&gt;Keeping environments (dev, staging, prod) in sync is tough.&lt;/li&gt;
&lt;li&gt;Rollbacks are not always straightforward.&lt;/li&gt;
&lt;li&gt;Drift happens, and often you catch it too late.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Where AI Fits In
&lt;/h2&gt;

&lt;p&gt;This is where AI and intelligent agents come into play. Imagine describing your database change in plain English, e.g. “Add a created_at column to the users table”, and getting back a production-ready changelog file.&lt;br&gt;
Or even better: pasting your existing changelog and asking the AI to insert new changes in the correct order while preserving history.&lt;br&gt;
Some of the things we’ve been exploring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI-generated changelogs&lt;/strong&gt;: From natural language to YAML/XML/JSON.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Editing existing changelogs&lt;/strong&gt;: Insert new changesets without breaking old ones.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment-specific changes&lt;/strong&gt;: Generate context-aware migrations for dev, staging, or prod.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conflict detection&lt;/strong&gt;: Use AI to flag duplicates or risky changes before deployment.&lt;/li&gt;
&lt;/ul&gt;
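
&lt;p&gt;For the &lt;code&gt;created_at&lt;/code&gt; example above, the generator’s output would be an ordinary Liquibase changeset. A plausible YAML rendering (the &lt;code&gt;id&lt;/code&gt;, &lt;code&gt;author&lt;/code&gt;, and default value are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;databaseChangeLog:
  - changeSet:
      id: add-users-created-at
      author: ai-generator
      changes:
        - addColumn:
            tableName: users
            columns:
              - column:
                  name: created_at
                  type: timestamp
                  defaultValueComputed: CURRENT_TIMESTAMP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;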

&lt;h2&gt;
  
  
  Beyond Changelogs: Towards Intelligent Pipelines
&lt;/h2&gt;

&lt;p&gt;Changelogs are just the start. Once you bring AI into the Database DevOps loop, the possibilities get exciting:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Rollback strategy suggestions → Learn from past patterns to recommend safer rollbacks.&lt;/li&gt;
&lt;li&gt;Conversational approvals → Approve or reject deployments via chat with natural language.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This isn’t about replacing DBAs or developers; it’s about giving them better tools, automating the repetitive parts, and reducing the risk of human error.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;p&gt;We’re still experimenting, but you can try the &lt;a href="https://huggingface.co/spaces/Sonichigo/harness-database-devops" rel="noopener noreferrer"&gt;AI Changeset Generator on Hugging Face&lt;/a&gt;. And if you’re already deep into Database DevOps, we’d love to hear how you’d want AI to fit into your workflow. Would you trust it to write migrations? Spot drift? Recommend rollbacks?&lt;/p&gt;

</description>
      <category>database</category>
      <category>ai</category>
      <category>devops</category>
      <category>cicd</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Animesh Pathak</dc:creator>
      <pubDate>Wed, 30 Jul 2025 08:34:32 +0000</pubDate>
      <link>https://forem.com/sonichigo/-46e8</link>
      <guid>https://forem.com/sonichigo/-46e8</guid>
      <description>&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/sonichigo/why-your-git-branching-strategy-is-breaking-your-database-deployments-4k0" class="crayons-story__hidden-navigation-link"&gt;Why Your Git Branching Strategy Is Breaking Your Database Deployments&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/sonichigo" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F491160%2Fc1d3df1a-9ada-43c4-af12-8738cebdf995.jpg" alt="sonichigo profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/sonichigo" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Animesh Pathak
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Animesh Pathak
                
              
              &lt;div id="story-author-preview-content-2741338" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/sonichigo" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F491160%2Fc1d3df1a-9ada-43c4-af12-8738cebdf995.jpg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Animesh Pathak&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/sonichigo/why-your-git-branching-strategy-is-breaking-your-database-deployments-4k0" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Jul 30 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/sonichigo/why-your-git-branching-strategy-is-breaking-your-database-deployments-4k0" id="article-link-2741338"&gt;
          Why Your Git Branching Strategy Is Breaking Your Database Deployments
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/git"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;git&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/database"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;database&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/devops"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;devops&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/tooling"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;tooling&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/sonichigo/why-your-git-branching-strategy-is-breaking-your-database-deployments-4k0" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/exploding-head-daceb38d627e6ae9b730f36a1e390fca556a4289d5a41abb2c35068ad3e2c4b5.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/multi-unicorn-b44d6f8c23cdd00964192bedc38af3e82463978aa611b4365bd33a0f1f4f3e97.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;6&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/sonichigo/why-your-git-branching-strategy-is-breaking-your-database-deployments-4k0#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            2 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
      <category>git</category>
      <category>database</category>
      <category>devops</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Why Your Git Branching Strategy Is Breaking Your Database Deployments</title>
      <dc:creator>Animesh Pathak</dc:creator>
      <pubDate>Wed, 30 Jul 2025 08:33:41 +0000</pubDate>
      <link>https://forem.com/sonichigo/why-your-git-branching-strategy-is-breaking-your-database-deployments-4k0</link>
      <guid>https://forem.com/sonichigo/why-your-git-branching-strategy-is-breaking-your-database-deployments-4k0</guid>
      <description>&lt;p&gt;As DevOps and GitOps practices evolve, one area remains notoriously fragile: &lt;strong&gt;&lt;em&gt;database deployments&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;While application delivery has matured with CI/CD, many teams still struggle with schema changes, rollbacks, and environment consistency. One hidden reason? &lt;strong&gt;Poor Git branching strategies.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many teams default to a "branch-per-environment" model (&lt;code&gt;main&lt;/code&gt; → &lt;code&gt;dev&lt;/code&gt; → &lt;code&gt;qa&lt;/code&gt; → &lt;code&gt;prod&lt;/code&gt;). It feels logical… until it isn't. Merge conflicts spike. Hotfixes drift. QA stops reflecting production. Your pipeline becomes a patchwork of manual interventions.&lt;/p&gt;

&lt;p&gt;This approach creates drift, increases cognitive overhead, and slows down delivery. For stateful systems like databases, that risk multiplies.&lt;/p&gt;

&lt;p&gt;Without a clean GitOps model, visibility is lost and rollbacks &lt;a href="https://open.spotify.com/episode/4sIHumDhC0RWVEksk4QNck" rel="noopener noreferrer"&gt;become nightmares&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅ The Better Approach: Trunk-Based GitOps for Databases
&lt;/h2&gt;

&lt;p&gt;Instead of long-lived branches, adopt a &lt;a href="https://www.harness.io/blog/how-git-strategy-can-break-your-database-pipeline" rel="noopener noreferrer"&gt;trunk-based development model&lt;/a&gt; with GitOps principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use a single main branch as the source of truth for all database changes.&lt;/li&gt;
&lt;li&gt;Apply contexts (e.g., &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;qa&lt;/code&gt;, &lt;code&gt;prod&lt;/code&gt;) to control environment targeting.&lt;/li&gt;
&lt;li&gt;Manage environments declaratively via metadata, not folders or branches.&lt;/li&gt;
&lt;li&gt;Promote through environments using pipeline stages, not Git merges.&lt;/li&gt;
&lt;li&gt;Keep your Git history clean and auditable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This creates cleaner workflows, simplifies automation, and keeps all environments aligned with the same deployment logic.&lt;/p&gt;
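&lt;p&gt;As a rough sketch, a context-tagged changeset in a single changelog on &lt;code&gt;main&lt;/code&gt; might look like this (the table and changeset names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;databaseChangeLog:
  - changeSet:
      id: add-audit-table
      author: animesh
      context: dev, qa        # applied only when the pipeline passes these contexts
      changes:
        - createTable:
            tableName: audit_log
            columns:
              - column:
                  name: id
                  type: bigint
                  autoIncrement: true
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Each pipeline stage then runs something like &lt;code&gt;liquibase update --contexts=qa&lt;/code&gt;, so promotion happens through the pipeline rather than a Git merge.&lt;/p&gt;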

&lt;h2&gt;
  
  
  🧰 Tools to Make It Work
&lt;/h2&gt;

&lt;p&gt;We’re using this strategy with &lt;a href="https://harness.io/products/database-devops" rel="noopener noreferrer"&gt;Harness Database DevOps&lt;/a&gt;, which supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Liquibase-native changelogs with context-based targeting.&lt;/li&gt;
&lt;li&gt;CI/CD pipelines that pull from main and apply changes declaratively.&lt;/li&gt;
&lt;li&gt;Rollbacks using rollback blocks, backups, or roll-forward techniques.&lt;/li&gt;
&lt;li&gt;Git as the single source of truth.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The result?&lt;/strong&gt; Database deployments that are safe, scalable, and reproducible.&lt;/p&gt;
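&lt;p&gt;For example, a changeset that drops a column can carry an explicit rollback block, so the change is reversible without ad-hoc scripts (table and column names here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;databaseChangeLog:
  - changeSet:
      id: drop-legacy-column
      author: animesh
      changes:
        - dropColumn:
            tableName: orders
            columnName: legacy_status
      rollback:                 # how to undo this change, if needed
        - addColumn:
            tableName: orders
            columns:
              - column:
                  name: legacy_status
                  type: varchar(32)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;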

&lt;h2&gt;
  
  
  💡 Conclusion
&lt;/h2&gt;

&lt;p&gt;Avoiding per-environment branching is critical in modern Database DevOps. While it may appear organized at first, this approach often results in drift, merge conflicts, and inconsistent environments over time. Instead of creating separate branches for &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;qa&lt;/code&gt;, and &lt;code&gt;prod&lt;/code&gt;, consolidate your workflow into a single mainline development branch.&lt;/p&gt;

&lt;p&gt;By tagging changelogs with appropriate contexts, you can control where and how changes are applied—without duplicating files or relying on directory structures like &lt;code&gt;/dev&lt;/code&gt; or &lt;code&gt;/prod&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Maintaining all database changelogs in a single branch ensures consistency, traceability, and a reliable source of truth. Promotions between environments should be handled through automated pipelines, not Git merges.&lt;/p&gt;

&lt;p&gt;Ultimately, embracing GitOps brings visibility, policy enforcement, and rollback control to database workflows. By combining declarative tooling with robust pipeline orchestration, teams can ship database changes as confidently as application code.&lt;/p&gt;

</description>
      <category>git</category>
      <category>database</category>
      <category>devops</category>
      <category>tooling</category>
    </item>
    <item>
      <title>What’s Happening Inside Your Linux Kernel?</title>
      <dc:creator>Animesh Pathak</dc:creator>
      <pubDate>Sat, 17 May 2025 10:18:06 +0000</pubDate>
      <link>https://forem.com/sonichigo/whats-happening-inside-your-linux-kernel-gcg</link>
      <guid>https://forem.com/sonichigo/whats-happening-inside-your-linux-kernel-gcg</guid>
      <description>&lt;p&gt;Have you ever wondered how exactly your Linux system knows when a program is running, a file system is being mounted, or a new module is being loaded? It’s all happening deep inside the Linux kernel, and most of the time, we’re completely blind to it.&lt;/p&gt;

&lt;p&gt;But what if I told you there’s a way to peek inside, without rebooting the system, installing special software, or breaking anything?&lt;/p&gt;

&lt;p&gt;Welcome to the world of "&lt;strong&gt;kprobes&lt;/strong&gt;", where you can trace important kernel events in real-time, like a system detective 🕵️‍♂️. &lt;/p&gt;

&lt;p&gt;Let’s dive in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Should You Care? 🤔
&lt;/h2&gt;

&lt;p&gt;Let’s say you’re running containers in production. One day, something feels off—a container might be doing something it shouldn’t.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is it spawning weird processes?&lt;/li&gt;
&lt;li&gt;Is it trying to mount a new filesystem?&lt;/li&gt;
&lt;li&gt;Is it trying to gain extra privileges?&lt;/li&gt;
&lt;li&gt;Is someone loading a sketchy kernel module?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are early warning signs of something going wrong—maybe a misconfigured app, maybe an attack. &lt;strong&gt;&lt;em&gt;Kprobes let you catch these signs early&lt;/em&gt;&lt;/strong&gt;. &lt;/p&gt;

&lt;h3&gt;
  
  
  Which kernel functions can we trace?
&lt;/h3&gt;

&lt;p&gt;Linux is full of functions, but we’ll focus on four powerful ones that reveal a lot about what’s going on:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. &lt;code&gt;do_execve&lt;/code&gt;: Process Launch Tracker 🚀
&lt;/h4&gt;

&lt;p&gt;Whenever you run a program like &lt;code&gt;ls&lt;/code&gt;, &lt;code&gt;python&lt;/code&gt;, or a shell script, the Linux kernel calls a function named &lt;code&gt;do_execve&lt;/code&gt;. This function is the gateway to launching any new executable on the system. Why is this important? Because if you’re monitoring for suspicious activity, like an unexpected script suddenly running on your server, &lt;code&gt;do_execve&lt;/code&gt; is your best friend. It’s essentially a signal that &lt;em&gt;something new is being executed&lt;/em&gt;. Think of it like someone opening a door and entering a new room in a secure building: naturally, you’d want to know who just walked in and whether they belong there.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. &lt;code&gt;security_capable&lt;/code&gt;: Permission Checkpoint 🛡️
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;security_capable&lt;/code&gt; function is called whenever a process tries to perform an action that requires special privileges like changing network configurations, modifying system time, or accessing restricted resources. It's a key part of Linux's internal permission checking system. In essence, this function asks: &lt;em&gt;Does this process have the right capabilities?&lt;/em&gt; Monitoring this can reveal when processes are trying to act like an admin or escalate privileges. Imagine someone attempting to unlock the admin panel in a web app. Wouldn’t you want to know who they are and whether they should be allowed?&lt;/p&gt;

&lt;h4&gt;
  
  
  3. &lt;code&gt;security_sb_mount&lt;/code&gt;: Filesystem Mount Auditor 🗂️
&lt;/h4&gt;

&lt;p&gt;Every time a filesystem is mounted, whether it’s an external drive, a virtual filesystem, or a container volume, the &lt;code&gt;security_sb_mount&lt;/code&gt; function is called. Mount operations are usually routine, but they can also be exploited to access unauthorized data or escape from container environments. From a security perspective, this function lets you keep tabs on what’s being attached to your system and by whom. Think of it like plugging a USB stick into a laptop: what’s on it? Is it safe? Who’s doing it? eBPF can help you find out.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. &lt;code&gt;load_module&lt;/code&gt;: Kernel Code Gatekeeper 🧩
&lt;/h4&gt;

&lt;p&gt;Linux is modular: the kernel allows dynamic loading of new code modules, such as drivers or extensions, at runtime. This is managed by the &lt;code&gt;load_module&lt;/code&gt; function. While many modules are harmless or necessary, some can introduce vulnerabilities or even backdoors. By tracing &lt;code&gt;load_module&lt;/code&gt;, you can detect whenever new kernel code is being added. It’s like someone installing an app on your phone. &lt;em&gt;Is it coming from a trusted source? Should it be there at all?&lt;/em&gt; Keeping an eye on this function gives you insight into what’s being added to the core of your OS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvugg0nas408gdll6hkj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvugg0nas408gdll6hkj.png" alt="Architecture" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do We Trace These Functions?
&lt;/h2&gt;

&lt;p&gt;This is where &lt;code&gt;kprobes&lt;/code&gt; come in. Think of &lt;code&gt;kprobes&lt;/code&gt; as little spy cameras inside the kernel. You can place them at any function and they’ll quietly record what’s happening.&lt;/p&gt;

&lt;p&gt;Here’s how the flow works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You choose a kernel function, like &lt;code&gt;do_execve&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;You set a kprobe on it, which says: “Whenever this function is called, tell me.”&lt;/li&gt;
&lt;li&gt;The system logs each call, including key details like PID, UID, and arguments.&lt;/li&gt;
&lt;li&gt;You read the trace from a special file: &lt;code&gt;/sys/kernel/debug/tracing/trace_pipe&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
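&lt;p&gt;As a minimal sketch of that flow (requires root; the tracefs mount point may be &lt;code&gt;/sys/kernel/tracing&lt;/code&gt; on newer kernels, and symbol availability varies by kernel build):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# 1. Register a kprobe named watch_exec on do_execve
echo 'p:watch_exec do_execve' &amp;gt; /sys/kernel/debug/tracing/kprobe_events

# 2. Enable the new event
echo 1 &amp;gt; /sys/kernel/debug/tracing/events/kprobes/watch_exec/enable

# 3. Stream calls live (process name, PID, CPU, timestamp)
cat /sys/kernel/debug/tracing/trace_pipe
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;When you’re done, disable the event by echoing &lt;code&gt;0&lt;/code&gt; to the enable file and clear the probe by echoing an empty string to &lt;code&gt;kprobe_events&lt;/code&gt;.&lt;/p&gt;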

&lt;h2&gt;
  
  
  💬 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Think of kernel tracing like having X-ray vision into your Linux system. You don’t need to guess what your containers or processes are doing; you can see it.&lt;/p&gt;

&lt;p&gt;Kprobes give you power, visibility, and control, especially when something feels off but you don’t know where to start looking.&lt;/p&gt;

&lt;p&gt;Ready to try it out? Start small: trace one function and see what insights pop up. The Linux kernel is full of secrets; it’s time to uncover them.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. What is &lt;code&gt;do_execve&lt;/code&gt;, and why trace it?
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;do_execve&lt;/code&gt; function is the kernel’s internal handler for the &lt;code&gt;execve&lt;/code&gt; system call, responsible for loading a new program into a process’s address space and starting its execution. Tracing &lt;code&gt;do_execve&lt;/code&gt; reveals exactly when and how processes are launched, making it invaluable for detecting unauthorized or unexpected binaries running on a system.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. What does &lt;code&gt;security_capable&lt;/code&gt; check?
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;security_capable&lt;/code&gt; function is part of the Linux Security Modules (LSM) framework and is invoked whenever a process requests a privileged capability (e.g., &lt;code&gt;CAP_SYS_ADMIN&lt;/code&gt;). Tracing this hook shows when processes attempt actions that require elevated privileges, helping to catch privilege escalations or policy violations.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. What is &lt;code&gt;security_sb_mount&lt;/code&gt;, and why is it important?
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;security_sb_mount&lt;/code&gt; is the LSM hook called whenever a filesystem is mounted (including container volumes). Monitoring this function can detect unauthorized mounts—which may indicate container escapes or illicit data access—by reporting details such as device path and filesystem type.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. How do kprobes attach hooks to kernel functions?
&lt;/h3&gt;

&lt;p&gt;Kprobes dynamically insert breakpoints into almost any kernel routine. You define a probe by writing a line like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'p:myprobe do_execve'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /sys/kernel/debug/tracing/kprobe_events
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This registers a trace event that fires whenever &lt;code&gt;do_execve&lt;/code&gt; runs. The recorded data is then read from &lt;code&gt;/sys/kernel/debug/tracing/trace_pipe&lt;/code&gt; for live monitoring.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>ebpf</category>
      <category>security</category>
      <category>github</category>
    </item>
    <item>
      <title>How to Effectively Vet Your Supply Chain for Optimal Performance</title>
      <dc:creator>Animesh Pathak</dc:creator>
      <pubDate>Thu, 15 May 2025 09:09:50 +0000</pubDate>
      <link>https://forem.com/sonichigo/how-to-effectively-vet-your-supply-chain-for-optimal-performance-1fkn</link>
      <guid>https://forem.com/sonichigo/how-to-effectively-vet-your-supply-chain-for-optimal-performance-1fkn</guid>
      <description>&lt;p&gt;In today’s world, software projects often rely on many open source libraries. While these libraries speed up development, they can also bring hidden risks if they are not checked carefully. A single unsafe library can compromise an entire project. SafeDep’s &lt;strong&gt;vet&lt;/strong&gt; tool helps you guard your software supply chain by checking every library you use. Below is a detailed, step-by-step guide to understanding, installing, and using &lt;strong&gt;vet&lt;/strong&gt; in your projects. But first: &lt;strong&gt;&lt;em&gt;why does supply chain security matter?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Risk of Supply Chain Attacks
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;supply chain attack&lt;/strong&gt; happens when an attacker hides bad code in a library or package that many developers download. When you add that library to your project, the bad code can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Steal sensitive information&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Break parts of your application&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Spread malware to users&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Beyond Traditional Scanning
&lt;/h3&gt;

&lt;p&gt;Supply chain attacks have surged, leveraging tactics such as malicious code injection, dependency confusion, and typosquatting. Conventional scanners focus narrowly on known &lt;code&gt;CVEs&lt;/code&gt;, leaving blind spots in popularity, maintenance status, license compliance, and more. Modern organizations require a holistic solution that codifies and automates risk evaluation across multiple dimensions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing vet: Policy-Driven Supply Chain Protection
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;vet&lt;/strong&gt; transforms security requirements into executable policies using the Common Expression Language (CEL). By treating security guardrails as code, you achieve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated Compliance:&lt;/strong&gt; Define rules once and enforce them across every build, pull request, and release.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Customizable Risk Tolerance:&lt;/strong&gt; Craft filters for critical vulnerabilities, unacceptable licenses, low adoption, or missing maintenance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Extensible Metadata:&lt;/strong&gt; Leverage OSV vulnerability feeds, popularity metrics, maintenance indicators, extended license attributes, and OpenSSF Scorecards for third-party risk assessment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
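&lt;p&gt;For instance, a single CEL filter can gate a scan directly from the CLI (the expression below follows vet’s documented filter syntax, but check the vet docs for the exact attribute names available in your version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# fail the scan if any dependency carries a critical vulnerability
vet scan -D . --filter 'vulns.critical.exists(p, true)' --fail-on-filter
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;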

&lt;h3&gt;
  
  
  Key Features
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Capability&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Description&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code Analysis &amp;amp; Filtering&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Focus on high-impact risks using CEL filters to target critical issues only.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Direct OSV Integration&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Pull up-to-date vulnerability data from the OSV ecosystem.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Popularity &amp;amp; Maintenance Checks&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Block unvetted or unmaintained packages before they enter your codebase.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;License &amp;amp; Compliance Controls&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Enforce acceptable license policies automatically.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OpenSSF Scorecard Insights&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Incorporate third-party security posture into approval workflows.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Transitive Dependency Coverage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Analyze both direct and transitive components for end-to-end visibility.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Filter Suites&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Group multiple CEL filters into a single policy suite for complex guardrails.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Installation and Setup
&lt;/h2&gt;

&lt;p&gt;Getting started with &lt;strong&gt;vet&lt;/strong&gt; is straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Local CLI Installation&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;brew tap safedep/tap&lt;/span&gt;
&lt;span class="s"&gt;brew install safedep/tap/vet&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Initial Configuration&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Create a &lt;code&gt;policy.yml&lt;/code&gt; defining your filter suites:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;filter_suites&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;filters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;block_high_severity&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;require_popularity&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;enforce_scorecard&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Repository Scan&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vet scan &lt;span class="nt"&gt;-D&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--filter-suite&lt;/span&gt; default &lt;span class="nt"&gt;--fail-on-filter&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Integrating vet into CI/CD Workflows
&lt;/h2&gt;

&lt;p&gt;Seamless CI/CD integration is critical for “shift-left” security. &lt;strong&gt;vet&lt;/strong&gt; offers native GitHub Action support, enabling policy execution on every pull request, commit, and release:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Supply Chain Security&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**/*.go'&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**/*.js'&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vet_scan&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;safedep/vet-action@v1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;filter-suite&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
          &lt;span class="na"&gt;fail-on-filter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upon policy violation, &lt;strong&gt;vet&lt;/strong&gt; automatically annotates pull requests with inline comments, detailing the failed filters and suggesting remediation steps. For teams using GitLab or Jenkins, &lt;strong&gt;vet&lt;/strong&gt; can be executed via CLI in pipeline stages, with exit codes controlling build success. Furthermore, &lt;strong&gt;vet&lt;/strong&gt; can emit SARIF output, integrating with security dashboards and code scanning interfaces for unified visibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Case Studies
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scenario:&lt;/strong&gt; A global bank with hundreds of Java microservices sought to harden its supply chain against high-severity vulnerabilities and unmaintained packages.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Baseline Vulnerability Exposure:&lt;/strong&gt; Prior to &lt;strong&gt;vet&lt;/strong&gt;, the bank’s DevSecOps team found that, on average, &lt;strong&gt;52%&lt;/strong&gt; of pulled dependencies contained at least one high- or critical-severity CVE.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Policy Implementation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Block High-Severity CVEs&lt;/strong&gt;: &lt;code&gt;dependency.osv.vulnerabilities.any(v | v.severity in ["HIGH","CRITICAL"])&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintenance Check&lt;/strong&gt;: Reject libraries with zero commits or releases in the last 12 months.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration:&lt;/strong&gt; Embedded &lt;strong&gt;vet&lt;/strong&gt; into the GitHub Actions pipeline across 120 repositories, running scans on every pull request.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Results Over Three Months:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;85% reduction&lt;/strong&gt; in dependencies with high-severity CVEs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;70% fewer&lt;/strong&gt; unmaintained packages entering the codebase, compared to a 30% industry average for mature banking organizations generating SBOMs and enforcing security policies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mean Time to Remediation (MTTR)&lt;/strong&gt; for critical vulnerabilities improved from &lt;strong&gt;72 hours&lt;/strong&gt; pre-&lt;strong&gt;vet&lt;/strong&gt; to &lt;strong&gt;18 hours&lt;/strong&gt;, outpacing the 66% of organizations that remediate within a day&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaways:&lt;/strong&gt; By codifying high-severity and maintenance policies in CEL and automating enforcement, the bank not only slashed vulnerable dependencies but also accelerated remediation workflows, aligning with top-quartile performance in financial services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;SafeDep’s &lt;strong&gt;vet&lt;/strong&gt; redefines software supply chain security through a policy-as-code approach that integrates seamlessly into development workflows. By consolidating metadata from OSV, OpenSSF Scorecards, popularity and maintenance indicators, and license attributes, &lt;strong&gt;vet&lt;/strong&gt; provides a comprehensive, real-time defense against diverse supply chain threats.&lt;/p&gt;

&lt;p&gt;Organizations can codify their security and compliance requirements in CEL filters, enforce them automatically in CI/CD pipelines, and empower developers with immediate, actionable feedback. Embracing &lt;strong&gt;vet&lt;/strong&gt; enables teams to balance innovation and risk, ensuring only secure, compliant open source components advance through their pipelines - ultimately fostering resilient, trustworthy software delivery.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;FAQs&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What policies can I define with vet?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You can express any security or compliance requirement as CEL filters—critical CVEs, banned licenses, low popularity thresholds, end-of-life projects, or OpenSSF Scorecard minima—and group them into reusable filter suites.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How does vet obtain vulnerability data?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;vet integrates directly with the OSV ecosystem, pulling the latest vulnerability feeds and mapping them to your dependencies in real time.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Can I scan transitive dependencies?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Yes. vet analyzes both direct and transitive dependencies to ensure end-to-end supply chain visibility and enforcement.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How do I integrate vet into my CI/CD pipeline?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Use the &lt;code&gt;safedep/vet-action&lt;/code&gt; for GitHub Actions (or analogous steps for other systems) to run &lt;code&gt;vet scan&lt;/code&gt; on every pull request and build, automatically blocking policy violations before code merges.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Is vet extensible to custom metadata sources?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;While vet natively supports OSV, popularity, maintenance, license, and OpenSSF Scorecard data, the CEL-based architecture allows you to incorporate additional metadata feeds or internal risk indicators as needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Visit SafeDep - &lt;a href="https://safedep.io" rel="noopener noreferrer"&gt;https://safedep.io&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Explore vet on GitHub - &lt;a href="https://github.com/safedep/vet" rel="noopener noreferrer"&gt;https://github.com/safedep/vet&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>safedep</category>
      <category>opensource</category>
      <category>security</category>
      <category>startup</category>
    </item>
    <item>
      <title>How Liquibase Makes Life Easy for DB Admins</title>
      <dc:creator>Animesh Pathak</dc:creator>
      <pubDate>Wed, 07 May 2025 06:30:00 +0000</pubDate>
      <link>https://forem.com/sonichigo/how-liquibase-makes-life-easy-for-db-admins-23ej</link>
      <guid>https://forem.com/sonichigo/how-liquibase-makes-life-easy-for-db-admins-23ej</guid>
      <description>&lt;p&gt;Let’s be honest - managing database changes can sometimes feel like juggling fire. You’ve got multiple developers making updates, environments to manage, rollbacks to worry about, and let’s not forget those late-night “&lt;strong&gt;It worked on dev!&lt;/strong&gt;” surprises.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;But guess what?&lt;/em&gt;&lt;/strong&gt; There's a tool that can help you stay ahead of the chaos. It’s called &lt;strong&gt;Liquibase&lt;/strong&gt;, and it's like having a helpful assistant who always remembers what changes were made, who made them, and when. Today, we're going to break it down - what Liquibase is, how it works (especially with YAML), why it's useful, and how it's being used in real projects like &lt;a href="https://github.com/Sonichigo/mux-sql/blob/main/liquibase.yml" rel="noopener noreferrer"&gt;mux-sql&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So grab your favourite cup of coffee ☕️, and let’s dive in!&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Liquibase?
&lt;/h2&gt;

&lt;p&gt;Liquibase is an open-source database change management tool. Think of it as version control, but for your database.&lt;/p&gt;

&lt;p&gt;Just like Git helps developers manage changes in their code, Liquibase helps you manage changes in your database schema. It keeps track of all the changes you've made (like adding tables, modifying columns, or creating indexes) and applies them in a controlled, consistent way across different environments - dev, test, staging, production.&lt;/p&gt;

&lt;p&gt;And yes, it works with most major databases: MySQL, PostgreSQL, Oracle, SQL Server, and many others.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Should You Care?
&lt;/h2&gt;

&lt;p&gt;You might be thinking, “My team already handles DB scripts manually. Why switch?”&lt;/p&gt;

&lt;p&gt;Here’s why Liquibase can make your life easier:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No More Manual Scripts&lt;/strong&gt;: Say goodbye to writing and tracking &lt;code&gt;V1__create_table.sql&lt;/code&gt;, &lt;code&gt;V2__add_column.sql&lt;/code&gt;, and so on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tracks What’s Been Applied&lt;/strong&gt;: It keeps a changelog and logs every change in a special table (&lt;code&gt;DATABASECHANGELOG&lt;/code&gt;) inside your DB.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Works with CI/CD&lt;/strong&gt;: Automate your DB updates during deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Supports Rollbacks&lt;/strong&gt;: Made a mistake? You can roll back changes with a command.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Clear Audit Trail&lt;/strong&gt;: Know who changed what and when.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, Liquibase gives you control, clarity, and confidence when managing DB updates.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does It Work?
&lt;/h2&gt;

&lt;p&gt;Liquibase works using something called a &lt;strong&gt;changelog&lt;/strong&gt;. This is a file where you define all your database changes using a format like YAML, XML, JSON, or SQL. Each change is grouped into a &lt;strong&gt;changeset&lt;/strong&gt;—a small, trackable unit of change.&lt;/p&gt;

&lt;p&gt;Here's an example from the &lt;a href="https://github.com/Sonichigo/mux-sql/blob/main/liquibase.yml" rel="noopener noreferrer"&gt;mux-sql app's Liquibase YAML file&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;databaseChangeLog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;includeAll&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sql&lt;/span&gt;
      &lt;span class="na"&gt;relativeToChangelogFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;changeSet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;product-table&lt;/span&gt;
      &lt;span class="na"&gt;author&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;claude&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;products-api&lt;/span&gt; 
      &lt;span class="na"&gt;comment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Creating product table for REST API&lt;/span&gt;
      &lt;span class="na"&gt;changes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;createTable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;tableName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;products&lt;/span&gt;
            &lt;span class="na"&gt;columns&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;column&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;id&lt;/span&gt;
                  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SERIAL&lt;/span&gt;
                  &lt;span class="na"&gt;constraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                    &lt;span class="na"&gt;primaryKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;column&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;name&lt;/span&gt;
                  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VARCHAR(100)&lt;/span&gt;
                  &lt;span class="na"&gt;constraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                    &lt;span class="na"&gt;nullable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;column&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;price&lt;/span&gt;
                  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NUMERIC(10,2)&lt;/span&gt;
                  &lt;span class="na"&gt;constraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                    &lt;span class="na"&gt;nullable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
                  &lt;span class="na"&gt;defaultValue&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.00&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet says: “Hey, pull in any SQL files from the &lt;code&gt;sql&lt;/code&gt; directory, then create a table called &lt;code&gt;products&lt;/code&gt; with &lt;code&gt;id&lt;/code&gt;, &lt;code&gt;name&lt;/code&gt;, and &lt;code&gt;price&lt;/code&gt; columns. The &lt;code&gt;id&lt;/code&gt; is the primary key, and &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;price&lt;/code&gt; cannot be null.”&lt;/p&gt;

&lt;p&gt;Once you run Liquibase, it reads this changelog, checks which changes haven’t been applied yet (based on the &lt;code&gt;DATABASECHANGELOG&lt;/code&gt; table), and runs the SQL under the hood to make the changes. &lt;strong&gt;&lt;em&gt;Easy, right?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
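
&lt;p&gt;If you want to try this yourself, a typical invocation looks like the one below. The JDBC URL and credentials are just placeholders - point them at your own database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# apply any pending changesets from the changelog
liquibase update \
  --changelog-file=liquibase.yml \
  --url=jdbc:postgresql://localhost:5432/appdb \
  --username=postgres \
  --password=examplepassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;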

&lt;h2&gt;
  
  
  YAML + Liquibase: A Match Made in Heaven
&lt;/h2&gt;

&lt;p&gt;Many DBAs are familiar with SQL, but YAML might feel new. Don’t worry - YAML is just a human-readable way to structure data. It’s like writing your changes in plain English.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why use YAML?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It’s &lt;strong&gt;clean and readable&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Great for &lt;strong&gt;code reviews&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Less error-prone&lt;/strong&gt; than long SQL scripts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supported natively by Liquibase.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’ve ever worked with configuration files in Kubernetes, Docker Compose, or CI tools like GitHub Actions - you’ve already used YAML. You’re ahead of the game!&lt;/p&gt;

&lt;h2&gt;
  
  
  A Real-World Example: Mux + PostgreSQL
&lt;/h2&gt;

&lt;p&gt;Let’s talk about something real.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/Sonichigo/mux-sql" rel="noopener noreferrer"&gt;mux-sql&lt;/a&gt; project uses Liquibase with a YAML changelog to manage its database schema. It's a backend app built with Go, and like many projects, it needs to manage a growing database as new features are added.&lt;/p&gt;

&lt;p&gt;Here’s how they’ve set things up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;They use a &lt;code&gt;liquibase.yml&lt;/code&gt; file in the root of their project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;All database changes (like new tables, updates) are defined inside it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Developers make schema changes by adding new &lt;code&gt;changesets&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When the app is deployed, Liquibase applies only the new changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The team doesn’t have to guess what version the database is on—Liquibase handles it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This approach keeps things clean, consistent, and avoids the &lt;strong&gt;&lt;em&gt;“Did we run that script on staging?”&lt;/em&gt;&lt;/strong&gt; drama.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjx97yu2dzuc85o0ftbqo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjx97yu2dzuc85o0ftbqo.png" alt="Flowchart detailing a database update process with Liquibase. It begins with " width="800" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CI/CD and Liquibase: The New Power Duo
&lt;/h2&gt;

&lt;p&gt;Now, let’s level up. If your team is using a CI/CD pipeline (like GitHub Actions, GitLab CI, or Jenkins), you can run Liquibase automatically whenever you deploy. Imagine this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A developer creates a new table.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They add a new &lt;code&gt;changeset&lt;/code&gt; to the YAML changelog.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They push their code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Your CI pipeline runs, and Liquibase applies the new schema changes as part of the deploy.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Boom! Database updated, and everyone’s happy. No more hunting down SQL scripts or forgetting to run migrations.&lt;/p&gt;
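
&lt;p&gt;In a pipeline, that deploy step usually boils down to a couple of CLI calls. This is a sketch - the &lt;code&gt;DB_URL&lt;/code&gt;, &lt;code&gt;DB_USER&lt;/code&gt;, and &lt;code&gt;DB_PASS&lt;/code&gt; variables are assumptions you’d wire up as secrets in your CI tool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# preview which changesets are still pending, then apply them
liquibase status --verbose --changelog-file=liquibase.yml \
  --url="$DB_URL" --username="$DB_USER" --password="$DB_PASS"
liquibase update --changelog-file=liquibase.yml \
  --url="$DB_URL" --username="$DB_USER" --password="$DB_PASS"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;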

&lt;h2&gt;
  
  
  But What About Rollbacks?
&lt;/h2&gt;

&lt;p&gt;Mistakes happen. Maybe a changeset dropped the wrong column. Don’t worry—Liquibase has rollback support.&lt;/p&gt;

&lt;p&gt;You can define a rollback inside your &lt;code&gt;changeset&lt;/code&gt; like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;databaseChangeLog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;includeAll&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sql&lt;/span&gt;
      &lt;span class="na"&gt;relativeToChangelogFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;changeSet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;product-table&lt;/span&gt;
      &lt;span class="na"&gt;author&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;claude&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;products-api&lt;/span&gt; 
      &lt;span class="na"&gt;comment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Creating product table for REST API&lt;/span&gt;
      &lt;span class="na"&gt;changes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;createTable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;tableName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;products&lt;/span&gt;
            &lt;span class="na"&gt;columns&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;column&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;id&lt;/span&gt;
                  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SERIAL&lt;/span&gt;
                  &lt;span class="na"&gt;constraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                    &lt;span class="na"&gt;primaryKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;column&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;name&lt;/span&gt;
                  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VARCHAR(100)&lt;/span&gt;
                  &lt;span class="na"&gt;constraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                    &lt;span class="na"&gt;nullable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;column&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;price&lt;/span&gt;
                  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NUMERIC(10,2)&lt;/span&gt;
                  &lt;span class="na"&gt;constraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                    &lt;span class="na"&gt;nullable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
                  &lt;span class="na"&gt;defaultValue&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.00&lt;/span&gt;
      &lt;span class="na"&gt;rollback&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;dropTable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;tableName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;products&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if something goes wrong, you can just run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;liquibase rollbackCount 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And it will undo that change. Peace of mind, built-in.&lt;/p&gt;
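
&lt;p&gt;Tagging gives you an even safer net: roll back to a known-good point instead of counting changesets. A sketch, assuming you tag the database right before each release:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# mark the current database state before deploying
liquibase tag release-1.0

# later, revert everything applied after that tag
liquibase rollback --tag=release-1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;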

&lt;h2&gt;
  
  
  Best Practices for DB Admins Using Liquibase
&lt;/h2&gt;

&lt;p&gt;Here are a few tips to make the most of Liquibase:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use a descriptive &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;author&lt;/code&gt;&lt;/strong&gt; in each changeset to trace who did what.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Keep changelogs in version control (Git)&lt;/strong&gt; and treat schema as code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test locally before pushing changes&lt;/strong&gt; - always!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Validate before applying changes&lt;/strong&gt; with &lt;code&gt;liquibase validate&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Modularize your changelogs&lt;/strong&gt; as your project grows - you can &lt;code&gt;include&lt;/code&gt; files.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Learning Curve? Not So Steep!
&lt;/h2&gt;

&lt;p&gt;Some folks might worry that using Liquibase adds complexity. But in practice, it actually &lt;strong&gt;reduces&lt;/strong&gt; complexity. No more guesswork. No more broken SQL files. No more “It worked on my machine.”&lt;/p&gt;

&lt;p&gt;Instead, you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A single source of truth.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Audit history of every change.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Easy rollbacks and repeatable deploys.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you use it on a few projects, it becomes second nature.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Liquibase is like the superhero sidekick you didn’t know you needed. It helps you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Track database changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apply them in a consistent way.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Roll them back when things go sideways.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Work better with your team and CI/CD pipelines.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re a DB Admin tired of manual SQL chaos, it’s worth giving Liquibase a try. The &lt;a href="https://github.com/Sonichigo/mux-sql/blob/main/liquibase.yml" rel="noopener noreferrer"&gt;mux-sql project&lt;/a&gt; shows just how clean and simple a YAML-based changelog can be. You’ve already got the skills - Liquibase just helps you use them smarter. So go ahead - set up that &lt;code&gt;liquibase.yml&lt;/code&gt;, commit it to Git, and start managing your database like a boss.&lt;/p&gt;

&lt;h2&gt;
  
  
  ❓ FAQs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Do I need to know Java to use Liquibase?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Nope!&lt;/strong&gt; Liquibase is built in Java, but you don’t need to write any Java code to use it. You just need the Java Runtime Environment (JRE) installed to run the Liquibase CLI. Everything else - YAML, SQL, or XML changelogs - is stuff you already know.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Can I use Liquibase with my existing database?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Yes.&lt;/strong&gt; Liquibase can integrate with an existing database by using the &lt;code&gt;generateChangeLog&lt;/code&gt; command to capture the current state. From there, you can start tracking future changes incrementally with changesets.&lt;/p&gt;
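
&lt;p&gt;For example (connection details illustrative), you can snapshot the current schema into a baseline changelog and then tell Liquibase to treat those changesets as already applied:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# capture the existing schema as a baseline changelog
liquibase generateChangeLog --changelog-file=baseline.yml \
  --url=jdbc:postgresql://localhost:5432/appdb \
  --username=postgres --password=examplepassword

# mark the baseline changesets as already executed
liquibase changelogSync --changelog-file=baseline.yml \
  --url=jdbc:postgresql://localhost:5432/appdb \
  --username=postgres --password=examplepassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;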

&lt;h3&gt;
  
  
  3. &lt;strong&gt;What if I accidentally apply the wrong changeset?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;That’s where &lt;strong&gt;rollback&lt;/strong&gt; comes in. If you’ve defined a rollback block inside the changeset, you can undo changes safely using a simple command like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;liquibase rollbackCount 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without a defined rollback, you’ll need to handle it manually—but Liquibase will still show you what was applied and when.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Can multiple developers work on the same changelog?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Yes, but with structure.&lt;/strong&gt; Each developer should add their own changesets with unique &lt;code&gt;id&lt;/code&gt;s and authors. Keeping changelogs in version control (like Git) and modularizing them with &lt;code&gt;include&lt;/code&gt; files helps avoid merge conflicts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;📘&lt;a href="https://www.harness.io/blog/introducing-harness-database-devops" rel="noopener noreferrer"&gt;&lt;strong&gt;Introducing Harness Database DevOps&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;🤔&lt;a href="https://www.harness.io/blog/automating-environment-specific-verification-queries-with-liquibase-and-harness-database-devops" rel="noopener noreferrer"&gt;&lt;strong&gt;Automating Environment-Specific Verification Queries with Liquibase&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;💻 &lt;a href="https://github.com/Sonichigo/mux-sql/blob/main/liquibase.yml" rel="noopener noreferrer"&gt;mux-sql Liquibase Example&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>opensource</category>
      <category>database</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Running WebAssembly with ContainerD + CRUN + WasmEdge</title>
      <dc:creator>Animesh Pathak</dc:creator>
      <pubDate>Mon, 21 Apr 2025 11:30:00 +0000</pubDate>
      <link>https://forem.com/sonichigo/running-webassembly-with-containerd-crun-wasmedge-1dm5</link>
      <guid>https://forem.com/sonichigo/running-webassembly-with-containerd-crun-wasmedge-1dm5</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;WebAssembly (Wasm) isn’t just for the browser anymore. It’s conquering the cloud too — and if you're feeling adventurous, let's plug it into container runtimes like &lt;code&gt;containerd&lt;/code&gt;, run it with &lt;code&gt;crun&lt;/code&gt;, and power up workloads in Kubernetes like a boss. Oh, and yes — we'll even touch on &lt;strong&gt;KubeVirt&lt;/strong&gt; to show Wasm's flexibility in a virtualized world.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  🛠️ Step 1: Install WasmEdge (The Wasm Runtime Hero)
&lt;/h2&gt;

&lt;p&gt;Let’s kick things off by installing &lt;a href="https://wasmedge.org/" rel="noopener noreferrer"&gt;WasmEdge&lt;/a&gt; — a blazing-fast Wasm runtime optimized for cloud-native.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sSf&lt;/span&gt; https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash
&lt;span class="nb"&gt;source&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.wasmedge/env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ✅ Verify Installation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wasmedge &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wasmedge version 0.13.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
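
&lt;p&gt;Optionally, smoke-test the runtime against any standalone WASI module (&lt;code&gt;hello.wasm&lt;/code&gt; below is a placeholder for your own build):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# execute a local .wasm file directly with WasmEdge
wasmedge run hello.wasm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;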



&lt;p&gt;Now that you’re equipped with Wasm superpowers, let’s bring in the container orchestration party.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧱 Step 2: Set Up ContainerD, CRUN, and Kubernetes
&lt;/h2&gt;

&lt;p&gt;One-liner to install everything you need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget &lt;span class="nt"&gt;-qO-&lt;/span&gt; https://raw.githubusercontent.com/sonichigo/wasmedge-demo-example/main/install.sh | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script handles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Installing &lt;code&gt;containerd&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configuring &lt;code&gt;crun&lt;/code&gt; as the runtime&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Patching containerd for Wasm support&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Setting up a local Kubernetes cluster&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Expected terminal logs include:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; Installing ContainerD and CRUN
&amp;lt;==============================&amp;gt;
...
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Boom. That’s your Wasm-ready k8s cluster firing up 🎉&lt;/p&gt;

&lt;h2&gt;
  
  
  ☁️ Step 3: Run a Wasm Container in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Switch into the Kubernetes source tree and set up your config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;kubernetes &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git checkout v1.22.4

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBERNETES_PROVIDER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;local

sudo &lt;/span&gt;cluster/kubectl.sh config set-cluster &lt;span class="nb"&gt;local&lt;/span&gt; &lt;span class="nt"&gt;--server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://localhost:6443 &lt;span class="nt"&gt;--certificate-authority&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/run/kubernetes/server-ca.crt
&lt;span class="nb"&gt;sudo &lt;/span&gt;cluster/kubectl.sh config set-credentials myself &lt;span class="nt"&gt;--client-key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/run/kubernetes/client-admin.key &lt;span class="nt"&gt;--client-certificate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/var/run/kubernetes/client-admin.crt
&lt;span class="nb"&gt;sudo &lt;/span&gt;cluster/kubectl.sh config set-context &lt;span class="nb"&gt;local&lt;/span&gt; &lt;span class="nt"&gt;--cluster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;local&lt;/span&gt; &lt;span class="nt"&gt;--user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;myself
&lt;span class="nb"&gt;sudo &lt;/span&gt;cluster/kubectl.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check cluster status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;cluster/kubectl.sh cluster-info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see your cluster nodes happily registered.&lt;/p&gt;

&lt;h2&gt;
  
  
  🌐 Step 4: Deploy a WebAssembly HTTP Service
&lt;/h2&gt;

&lt;p&gt;Time to run a Wasm image in the cluster!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;cluster/kubectl.sh run &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Never http-server &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;wasmedge/example-wasi-http:latest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--annotations&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"module.wasm.image/variant=compat-smart"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--overrides&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{"kind":"Pod", "apiVersion":"v1", "spec": {"hostNetwork": true}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then hit it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"name=WasmEdge"&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://127.0.0.1:1234
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo: name=WasmEdge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🥳 Success! You just ran a Wasm container natively in Kubernetes.&lt;/p&gt;
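
&lt;p&gt;If you want to peek under the hood, the usual pod checks work exactly as they do for Linux containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# confirm the Wasm pod is running and inspect its details
&lt;span class="nb"&gt;sudo &lt;/span&gt;cluster/kubectl.sh get pod http-server
&lt;span class="nb"&gt;sudo &lt;/span&gt;cluster/kubectl.sh describe pod http-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;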




&lt;h2&gt;
  
  
  🧪 Bonus: Run This on KubeVirt!
&lt;/h2&gt;

&lt;p&gt;Want to get even fancier? Try this in a &lt;strong&gt;KubeVirt&lt;/strong&gt; virtualized environment. Since KubeVirt can run Kubernetes inside virtual machines, you can embed this Wasm-ready containerd setup inside those VMs — making it super versatile for hybrid environments and edge deployments.&lt;/p&gt;

&lt;p&gt;The steps are similar - just run them inside a KubeVirt VM with an Ubuntu or Fedora base image. Follow this same guide within that VM and enjoy full Wasm-k8s magic, powered by KubeVirt's virtualization layer.&lt;/p&gt;




&lt;h2&gt;
  
  
  3 Ways to Run WASM with OCI &amp;amp; Container Runtimes
&lt;/h2&gt;

&lt;p&gt;There’s more than one way to cook this goose. Here's a breakdown of the 3 main approaches:&lt;/p&gt;

&lt;h3&gt;
  
  
  🧩 Option #1: With &lt;code&gt;containerd-shim&lt;/code&gt; (runwasi)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Detects Wasm images via target platform (&lt;code&gt;wasm32&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Uses &lt;code&gt;runwasi&lt;/code&gt; for Wasm, &lt;code&gt;runc&lt;/code&gt; for regular containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Backed by Docker, Microsoft&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Powering Docker+WASM preview releases&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ⚙️ Option #2: With &lt;code&gt;crun&lt;/code&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A C-based OCI runtime (from Red Hat)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Detects Wasm via &lt;strong&gt;annotations&lt;/strong&gt; (&lt;code&gt;module.wasm.image/variant&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supports full k8s stack: CRI-O, containerd, Podman, kind, microk8s&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🦀 Option #3: With &lt;code&gt;youki&lt;/code&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A Rust-based OCI runtime&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Also annotation-based detection&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can power CRI-O, containerd, Podman, kind, microk8s, and k8s&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these can be visualized via their respective integration diagrams — worth including one if you're making this into a slide deck or dev talk!&lt;/p&gt;

&lt;h2&gt;
  
  
  🔚 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;WebAssembly isn’t the future - it’s the &lt;strong&gt;present&lt;/strong&gt;. With WasmEdge, CRUN, and Kubernetes (even in KubeVirt), we now have the flexibility to run lightweight, fast, and secure applications anywhere. Whether you're targeting the cloud, the edge, or somewhere in between — you’re set up for success.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Why use Wasm instead of Linux containers?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Wasm provides faster startup, smaller size, and better isolation — ideal for microservices and edge computing.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Do I need to change my Kubernetes setup to run Wasm?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Nope! With runtimes like &lt;code&gt;crun&lt;/code&gt; or &lt;code&gt;youki&lt;/code&gt; and proper annotations, it plugs right into your existing Kubernetes flow.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;What languages can I use to build Wasm apps?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Rust, Go (with TinyGo), C/C++, Python (via Pyodide), and even JavaScript!&lt;/p&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Can I run Wasm apps alongside Linux containers?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Yes, mixed workloads are totally supported — just annotate your Wasm images and the runtime handles the rest.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. &lt;strong&gt;How does this differ from running containers in Docker?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Docker (via containerd) supports Wasm via &lt;code&gt;runwasi&lt;/code&gt;, but this approach goes deeper into Kubernetes and custom runtime integrations using &lt;code&gt;crun&lt;/code&gt;/&lt;code&gt;youki&lt;/code&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>webassembly</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>🔥 Google Launches Firebase Studio: A New Era for AI App Development</title>
      <dc:creator>Animesh Pathak</dc:creator>
      <pubDate>Fri, 11 Apr 2025 10:11:18 +0000</pubDate>
      <link>https://forem.com/sonichigo/google-launches-firebase-studio-a-new-era-for-ai-app-development-1p36</link>
      <guid>https://forem.com/sonichigo/google-launches-firebase-studio-a-new-era-for-ai-app-development-1p36</guid>
      <description>&lt;p&gt;Google has just introduced Firebase Studio, a revolutionary cloud-based development environment tailored for building full-stack AI applications. This new toolset seamlessly integrates with Firebase services and leverages Gemini AI, creating a smart, collaborative, and agentic workspace for developers, all accessible via the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 What’s Firebase Studio All About?
&lt;/h2&gt;

&lt;p&gt;Firebase Studio reimagines the development workflow, combining AI-driven tools with a visual interface to make app building faster, smarter, and more collaborative. Whether you're prototyping ideas or deploying production-ready apps, Firebase Studio covers it all - no switching between multiple platforms or environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  🔑 Key Features at a Glance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal Prototyping&lt;/strong&gt;: Build app prototypes using natural language, images, and even hand-drawn sketches. This makes it incredibly easy for developers to bring ideas to life, especially during early-stage brainstorming and MVP creation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI-Powered Development&lt;/strong&gt;: Refine your application in real-time with an integrated AI assistant. The Gemini-powered chat experience helps you iterate quickly by answering questions, suggesting code, and improving UX—all within the Studio.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seamless Code + Visual Workflow&lt;/strong&gt;: Firebase Studio bridges the gap between visual prototyping and hands-on coding. Developers can dive into code whenever needed without disrupting the visual flow—perfect for teams with diverse skill sets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Live Preview on Multiple Devices&lt;/strong&gt;: Preview your app across different screen sizes and devices instantly. This ensures responsiveness and design consistency from the get-go.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Effortless Deployment&lt;/strong&gt;: Once your app is ready, publish it with a click using Firebase App Hosting. The integration cuts down manual steps and gets your product to users faster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Built-in Real-Time Collaboration&lt;/strong&gt;: Work on the same project with teammates in real-time. Whether it's pair programming, design reviews, or debugging, Firebase Studio supports seamless collaboration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  💡 Why It Matters
&lt;/h2&gt;

&lt;p&gt;Firebase Studio isn't just another development tool—it’s a leap toward making AI app development more accessible and agile. With real-time collaboration, AI support, and end-to-end deployment tools baked in, it simplifies what used to be a fragmented and time-consuming process.&lt;/p&gt;

&lt;p&gt;This launch positions Firebase Studio as a must-watch tool for devs working in the AI, startup, and rapid prototyping space. Whether you’re a solo founder, product engineer, or part of a large dev team, Firebase Studio offers a unified platform to build, iterate, and launch smarter.&lt;/p&gt;

&lt;p&gt;👉 Ready to try it out?&lt;br&gt;
Check out the official announcement and dive into Firebase Studio &lt;a href="https://firebase.blog/posts/2025/04/introducing-firebase-studio/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>firebase</category>
      <category>googlecloud</category>
      <category>development</category>
      <category>google</category>
    </item>
    <item>
      <title>Breaking Language Barriers with Azure OpenAI and Next.js</title>
      <dc:creator>Animesh Pathak</dc:creator>
      <pubDate>Thu, 10 Apr 2025 07:38:24 +0000</pubDate>
      <link>https://forem.com/sonichigo/breaking-language-barriers-with-azure-openai-and-nextjs-4aof</link>
      <guid>https://forem.com/sonichigo/breaking-language-barriers-with-azure-openai-and-nextjs-4aof</guid>
      <description>&lt;p&gt;Ever found yourself struggling to communicate in another language? Yeah, me too. That’s why I built &lt;a href="https://github.com/Sonichigo/translate.ai?utm_source=blog_devto" rel="noopener noreferrer"&gt;&lt;strong&gt;Translate.AI&lt;/strong&gt;&lt;/a&gt; - an AI-powered translation tool that makes breaking language barriers not just seamless but actually fun!&lt;/p&gt;

&lt;p&gt;As a developer, I wanted something &lt;strong&gt;fast, accurate, and scalable&lt;/strong&gt;, so I turned to &lt;strong&gt;Azure OpenAI&lt;/strong&gt; for its powerful language models and &lt;strong&gt;Next.js&lt;/strong&gt; for a slick, performant frontend. What started as a simple idea quickly turned into an exciting project, blending AI magic with modern web development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Did I Create My Own Translator? 🤔
&lt;/h2&gt;

&lt;p&gt;I’ve used plenty of translation tools, and let’s be real, some of them completely miss the mark when it comes to &lt;strong&gt;accuracy and context&lt;/strong&gt;. You’ve probably seen translations that are technically correct but sound robotic or awkward. That’s because many tools struggle to grasp the &lt;strong&gt;nuances&lt;/strong&gt; of language, idioms, and contextual meaning.&lt;/p&gt;

&lt;p&gt;I wanted to build something smarter - &lt;strong&gt;a translation tool that doesn’t just translate words but understands intent and context&lt;/strong&gt;. By leveraging the power of &lt;strong&gt;Azure OpenAI&lt;/strong&gt;, I created Translate.AI, a fast, accurate, and context-aware translation tool that feels natural.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Makes Translate.AI Special?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;🧠 &lt;strong&gt;AI-Powered Smarts&lt;/strong&gt;: No more weirdly structured sentences; Azure OpenAI keeps it natural and context-aware.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;🌍 &lt;strong&gt;Multi-Language Support&lt;/strong&gt;: Whether it’s Spanish, French, or Klingon (okay, not yet, but soon?), it’s got you covered.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;⚡ &lt;strong&gt;Blazing Fast&lt;/strong&gt;: Powered by Next.js API routes, making translations nearly instant.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;🔌 &lt;strong&gt;Easy Integration&lt;/strong&gt;: Works smoothly with other apps via API, your app can talk to the world effortlessly!&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  My Stack of Choice
&lt;/h3&gt;

&lt;p&gt;Building &lt;strong&gt;Translate.AI&lt;/strong&gt; required a robust and efficient tech stack. Here’s what I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Next.js&lt;/strong&gt;: Handles server-side rendering (SSR) and API routes, making translations lightning-fast.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Azure OpenAI&lt;/strong&gt;: The GPT-based engine that powers the AI translations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Axios&lt;/strong&gt;: Makes fetching data easy and efficient.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Keploy&lt;/strong&gt;: Ensures my code is bug-free by handling testing and mocking API calls.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Next.js? 🚀
&lt;/h2&gt;

&lt;p&gt;If you're building a modern web app that needs to be fast, scalable, and SEO-friendly, &lt;strong&gt;Next.js is the way to go&lt;/strong&gt;. For Translate.AI, Next.js provided the perfect blend of &lt;strong&gt;server-side rendering (SSR)&lt;/strong&gt; and &lt;strong&gt;API routes&lt;/strong&gt;, ensuring that translations are lightning-fast and the site loads almost instantly.&lt;/p&gt;

&lt;p&gt;Since &lt;strong&gt;Next.js&lt;/strong&gt; is built on React, development was smooth, and I could use my favorite UI components without any hassle.&lt;/p&gt;

&lt;p&gt;One of the biggest advantages of &lt;strong&gt;Next.js&lt;/strong&gt; is its built-in SEO optimizations. When you’re building a tool that needs to be easily found on Google, &lt;strong&gt;having proper meta tags, structured data, and blazing-fast load speeds&lt;/strong&gt; is a game-changer.&lt;/p&gt;

&lt;p&gt;In short, Next.js made Translate.AI &lt;strong&gt;efficient, scalable, and search engine-friendly&lt;/strong&gt;. What more could a dev ask for? 😎&lt;/p&gt;

&lt;h2&gt;
  
  
  Let’s Peek Under the Hood 🧐
&lt;/h2&gt;

&lt;p&gt;Here’s a quick breakdown of how it processes translation requests:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Handling the Request 📩
&lt;/h3&gt;

&lt;p&gt;The API takes in three key parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;text&lt;/code&gt;: The text that needs to be translated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;sourceLang&lt;/code&gt;: The language it’s currently in.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;targetLang&lt;/code&gt;: The language it needs to be translated into.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
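
&lt;p&gt;To make the flow concrete, here’s a minimal, framework-free sketch of that validation step. In the real app this logic lives inside a Next.js API route; the function and error messages here are illustrative, not the project’s actual code.&lt;/p&gt;

```typescript
// Hypothetical sketch of validating the three request parameters.
// In Translate.AI this runs inside a Next.js API route handler.
type TranslateRequest = {
  text?: string;
  sourceLang?: string;
  targetLang?: string;
};

// Returns an error message for the first missing field, or null if valid.
function validateRequest(body: TranslateRequest): string | null {
  if (!body.text) return "Missing 'text'";
  if (!body.sourceLang) return "Missing 'sourceLang'";
  if (!body.targetLang) return "Missing 'targetLang'";
  return null; // all three parameters present
}
```

&lt;p&gt;A handler would typically return a 400 response with that message before ever calling the model, so malformed requests never cost a token.&lt;/p&gt;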

&lt;h3&gt;
  
  
  Step 2: Talking to Azure OpenAI 🤖
&lt;/h3&gt;

&lt;p&gt;The request is framed as a conversation with the AI, so context carries through into the &lt;a href="http://translate-ai.sonichigo.com/" rel="noopener noreferrer"&gt;translations&lt;/a&gt;. Once received, the request is structured like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; 
    &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Translate the following text from &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;sourceLang&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; to &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;targetLang&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:

"&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"

Only provide the translated text without any additional commentary or explanation.`&lt;/span&gt;
  &lt;span class="p"&gt;}],&lt;/span&gt;
  &lt;span class="na"&gt;max_tokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;temperature&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DEPLOYMENT_NAME&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Setting the &lt;strong&gt;temperature to 0.3&lt;/strong&gt; keeps translations precise and consistent by limiting the model’s creativity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Sending Back the Magic ✨
&lt;/h3&gt;

&lt;p&gt;Once &lt;strong&gt;Azure OpenAI&lt;/strong&gt; processes the request, the translated text is extracted and sent back:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;NextResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;originalText&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;translatedText&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;sourceLang&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;targetLang&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
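
&lt;p&gt;The extraction itself can be sketched as a small pure function. The response shape below follows the standard chat-completions format that Azure OpenAI returns; the error handling is simplified for illustration.&lt;/p&gt;

```typescript
// Sketch of pulling the translated text out of an Azure OpenAI
// chat-completion response (standard chat completions shape).
type ChatResponse = {
  choices: { message: { content: string } }[];
};

function extractTranslation(response: ChatResponse): string {
  const content = response.choices[0]?.message?.content;
  if (!content) throw new Error("No translation returned");
  // Trim whitespace the model sometimes adds around the translation.
  return content.trim();
}
```

&lt;p&gt;Keeping this step pure makes it trivial to unit-test against recorded responses without hitting the API.&lt;/p&gt;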



&lt;h2&gt;
  
  
  Making Sure the Project Ranks High on Google
&lt;/h2&gt;

&lt;p&gt;No point in making an awesome tool if no one finds it, right? Here’s how I made sure &lt;strong&gt;Translate.AI&lt;/strong&gt; is &lt;strong&gt;SEO-friendly&lt;/strong&gt;:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Catchy Page Titles &amp;amp; Meta Descriptions&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Keywords like &lt;strong&gt;AI-powered translation&lt;/strong&gt;, &lt;strong&gt;Next.js translation API&lt;/strong&gt;, and &lt;strong&gt;Azure OpenAI translator&lt;/strong&gt; are sprinkled throughout.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Super Fast Load Speeds&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Thanks to &lt;strong&gt;Next.js server-side rendering (SSR)&lt;/strong&gt; and &lt;strong&gt;static generation (SSG)&lt;/strong&gt;, pages load almost instantly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Mobile-Friendly FTW&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Built with a &lt;strong&gt;mobile-first&lt;/strong&gt; approach, because let’s face it - most people are on their phones.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Google-Friendly Schema Markup&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Structured data helps Google show Translate.AI in rich search results.&lt;/li&gt;
&lt;/ul&gt;
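
&lt;p&gt;For illustration, here’s the kind of JSON-LD structured data that helps Google surface rich results. The exact fields Translate.AI uses are an assumption on my part; this just follows the generic schema.org WebApplication shape.&lt;/p&gt;

```typescript
// Hypothetical example of schema.org structured data for the app.
// The field values are illustrative, not the site's actual markup.
const schemaMarkup = {
  "@context": "https://schema.org",
  "@type": "WebApplication",
  name: "Translate.AI",
  applicationCategory: "UtilitiesApplication",
  description: "AI-powered translation tool built with Next.js and Azure OpenAI",
};

// Serialized form, ready to embed in a script tag with
// type="application/ld+json" in the page head.
const jsonLd = JSON.stringify(schemaMarkup);
```

&lt;p&gt;Next.js makes it easy to inject this into the document head so crawlers pick it up on first render.&lt;/p&gt;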

&lt;h3&gt;
  
  
  5. &lt;strong&gt;Great Content &amp;amp; Backlinks&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Blog posts (like this one!) and backlinks help &lt;strong&gt;boost authority and trust&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What’s Next for Translate.AI? 🔮
&lt;/h2&gt;

&lt;p&gt;This is just the start! Here’s what’s coming next:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Voice Translation&lt;/strong&gt;: Speak, and it translates in real time - magic!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AI Context Learning&lt;/strong&gt;: The more you use it, the smarter it gets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Browser Extension&lt;/strong&gt;: Translate anything while you browse.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>nextjs</category>
      <category>openai</category>
      <category>javascript</category>
    </item>
    <item>
      <title>6 AI Tools every developer must try</title>
      <dc:creator>Animesh Pathak</dc:creator>
      <pubDate>Wed, 12 Jul 2023 08:39:01 +0000</pubDate>
      <link>https://forem.com/sonichigo/6-ai-tools-every-developer-must-try-47g6</link>
      <guid>https://forem.com/sonichigo/6-ai-tools-every-developer-must-try-47g6</guid>
<description>&lt;p&gt;Modern code editors and development tools increasingly integrate AI, automating complex steps in the development process and giving developers real-time feedback on code quality. Below, we have listed the best AI tools of 2023 based on their usability: tools every developer should try to stay ahead of the competition.&lt;/p&gt;

&lt;h3&gt;
  
  
  TabNine
&lt;/h3&gt;

&lt;p&gt;With AI-driven code suggestions, &lt;a href="https://www.tabnine.com/" rel="noopener noreferrer"&gt;TabNine&lt;/a&gt; is a hit in the developer community. It leverages machine learning to offer context-aware predictions, suggesting completions based on the developer's coding patterns. Some features of TabNine include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrates with IDEs like Visual Studio Code, IntelliJ, Sublime Text, and Atom.&lt;/li&gt;
&lt;li&gt;Supports 20+ programming languages, including C/C++, TypeScript, React, and more.&lt;/li&gt;
&lt;li&gt;Converts natural language descriptions into functional code.&lt;/li&gt;
&lt;li&gt;Provides privacy and security features for your code.&lt;/li&gt;
&lt;li&gt;Delivers quality output in any development field, whether web, mobile, or data science.&lt;/li&gt;
&lt;li&gt;Connects your code to repositories such as GitHub, GitLab, and more.&lt;/li&gt;
&lt;li&gt;Suggests automatic code refactoring to maintain consistency and reduce review iterations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  GitHub Copilot
&lt;/h3&gt;

&lt;p&gt;GitHub and OpenAI collaborated to develop GitHub Copilot. This AI-powered coding tool auto-generates and auto-completes code snippets, using OpenAI’s advanced GPT models to provide context-based predictions. Some features of GitHub Copilot include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generates code segments based on descriptions, patterns, and context.&lt;/li&gt;
&lt;li&gt;Integrates with preferred coding tools such as Visual Studio Code and other IDEs.&lt;/li&gt;
&lt;li&gt;Can test your code, act on selected code, and review existing code.&lt;/li&gt;
&lt;li&gt;Supports various programming languages, including Python, TypeScript, Ruby, and more.&lt;/li&gt;
&lt;li&gt;Lets developers share their code in real time with other developers.&lt;/li&gt;
&lt;li&gt;Tracks project progress, including code suggestions.&lt;/li&gt;
&lt;li&gt;Speeds up coding through its integration with GitHub’s code editor.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://aws.amazon.com/codeguru/" rel="noopener noreferrer"&gt;Amazon CodeGuru&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Amazon CodeGuru is an AI-driven code review tool developed and managed by Amazon Web Services (AWS). With its automated code review functionality, CodeGuru reviews pull requests in the corresponding repositories, helping developers deliver quality software and improve resource efficiency. Some key features of CodeGuru include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses machine learning to identify potential issues and recommend code optimizations.&lt;/li&gt;
&lt;li&gt;Reviews code to identify bugs and security vulnerabilities.&lt;/li&gt;
&lt;li&gt;Speeds up development by reducing manual code review and optimizing performance.&lt;/li&gt;
&lt;li&gt;Integrates with preferred IDEs through plugins and extensions.&lt;/li&gt;
&lt;li&gt;Provides real-time feedback and code suggestions to improve code quality.&lt;/li&gt;
&lt;li&gt;Includes a performance profiling feature that helps developers find and fix performance issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  DeepCode
&lt;/h3&gt;

&lt;p&gt;DeepCode is an AI-driven code review tool that identifies potential coding errors and suggests improvements, helping developers write cleaner code and raise their coding standards. Some key features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Leverages machine learning to analyze code, spot bugs, and clean them up.&lt;/li&gt;
&lt;li&gt;Supports most major programming languages and is quick to identify errors.&lt;/li&gt;
&lt;li&gt;Works as a standalone platform or integrates with your preferred code editor.&lt;/li&gt;
&lt;li&gt;Lets developers share, review, and receive feedback on code within a team.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  IntelliCode
&lt;/h3&gt;

&lt;p&gt;IntelliCode is an AI-powered coding tool from Microsoft for Visual Studio. Developers prefer it for its context-specific code suggestions: as you code, IntelliCode identifies common coding tasks and suggests efficient actions. Some key features of IntelliCode include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud-based, with support for programming languages including Python, Kotlin, Ruby, Swift, and more.&lt;/li&gt;
&lt;li&gt;Supports additional languages through extensions and plugins.&lt;/li&gt;
&lt;li&gt;Leverages machine learning to provide context-aware code suggestions.&lt;/li&gt;
&lt;li&gt;Lets developers share code across multiple contributors to get valuable recommendations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Keploy
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://keploy.io" rel="noopener noreferrer"&gt;Keploy&lt;/a&gt; is an open-source, end-to-end (E2E) testing toolkit for developers. It creates test cases and data mocks/stubs by recording API calls, database queries, etc., making releases faster and more reliable.&lt;br&gt;
Keploy works by being added as a middleware to your application. It captures and replays all network interaction served to the application from any source. This allows Keploy to generate test cases for all of your API endpoints, including those that are not explicitly tested by your unit tests. This can help you to identify and fix bugs that would otherwise go undetected.&lt;br&gt;
Keploy can create data &lt;a href="https://docs.keploy.io/docs/concepts/reference/glossary/mocks/" rel="noopener noreferrer"&gt;mocks&lt;/a&gt;/stubs for your APIs, which can help you to isolate your tests and make them more reliable. It can automatically compare test cases generated from previously collected traffic against updated behaviour of your application, and bring any differences to your attention. This can help you to identify regressions in your production code early on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With AI-powered tools, developers are coding smarter, not harder. They not only save time and work more efficiently, but also improve the quality and maintainability of their software.&lt;/p&gt;

&lt;p&gt;From code completion and bug detection to code review assistance and automated testing, the tools above cover the AI assistance every developer should try.&lt;/p&gt;

&lt;p&gt;In this blog post, we have covered the best AI tools of 2023 that assist developers throughout the development process. As AI advances, expect these tools to keep reshaping how software gets built.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
