<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Gleno</title>
    <description>The latest articles on Forem by Gleno (@naysmith).</description>
    <link>https://forem.com/naysmith</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3813728%2F00eeab43-d78e-4199-a6fb-d8446669c49f.jpg</url>
      <title>Forem: Gleno</title>
      <link>https://forem.com/naysmith</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/naysmith"/>
    <language>en</language>
    <item>
      <title>What's your stack?</title>
      <dc:creator>Gleno</dc:creator>
      <pubDate>Sun, 22 Mar 2026 23:52:43 +0000</pubDate>
      <link>https://forem.com/naysmith/whats-your-stack-4hp1</link>
      <guid>https://forem.com/naysmith/whats-your-stack-4hp1</guid>
      <description>&lt;p&gt;So I asked chatGPT to list my core "AI-assisted coding" stack items and what they are and it went waaaay overboard. I think it thinks I'm a moron.&lt;/p&gt;

&lt;p&gt;What's your stack? Is the below stack really the best combination (chatGPT seems to think so)? If anybody wants to add their stack in the comments, maybe just do it in one line, like chatGPT did right at the end: myStack=&lt;strong&gt;TypeScript + React + Next.js + Tailwind + Supabase + PostgreSQL + Vercel&lt;/strong&gt; with &lt;strong&gt;Tauri&lt;/strong&gt; added when I want a desktop app.&lt;/p&gt;

&lt;h2&gt;
  
  
  The core stack items
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1) JavaScript
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; The core programming language of the web.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Frontend and backend code.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; The foundation everything else sits on.&lt;/p&gt;




&lt;h3&gt;
  
  
  2) TypeScript
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; JavaScript with static typing.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Based on:&lt;/strong&gt; JavaScript.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Safer code, better autocomplete, fewer silly mistakes.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; JavaScript with guard rails.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Rule of thumb:&lt;/strong&gt; TypeScript is usually the better default for app projects.&lt;/p&gt;
&lt;/blockquote&gt;
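
&lt;p&gt;As a tiny, hedged sketch of those guard rails (the names are invented, not from any real project):&lt;/p&gt;

```typescript
// A price list: the type annotation documents exactly what's allowed.
const prices: { [item: string]: number } = { widget: 10, gadget: 25 };

// The declared return type is checked at compile time, so returning a
// string here would be a build error rather than a runtime surprise.
function total(item: string, qty: number): number {
  return (prices[item] ?? 0) * qty;
}

console.log(total("widget", 3)); // 30
```

&lt;p&gt;The same code pasted into a plain JavaScript file runs identically; TypeScript only adds the compile-time checks.&lt;/p&gt;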




&lt;h3&gt;
  
  
  3) React
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; A UI library.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Based on:&lt;/strong&gt; JavaScript / TypeScript.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Building screens, components, forms, dashboards, and reusable UI pieces.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; The system for building the visible parts of an app.&lt;/p&gt;




&lt;h3&gt;
  
  
  4) Next.js
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; A full-stack framework built on React.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Based on:&lt;/strong&gt; React.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Routing, layouts, server-side logic, app structure, and production-ready web apps.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; React plus the app framework around it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Simple view:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
React = components&lt;br&gt;&lt;br&gt;
Next.js = complete app structure&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  5) HTML
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; The structure of web pages.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Markup and page content.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; The bones of the UI.&lt;/p&gt;




&lt;h3&gt;
  
  
  6) CSS
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; The styling language for the web.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Layout, spacing, colours, typography, responsiveness.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; The visual design layer.&lt;/p&gt;




&lt;h3&gt;
  
  
  7) Tailwind CSS
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; A utility-first CSS framework.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Based on:&lt;/strong&gt; CSS.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Fast, consistent styling directly in components.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; A quicker, more structured way to style apps.&lt;/p&gt;




&lt;h3&gt;
  
  
  8) Node.js
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; A JavaScript runtime for running code outside the browser.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Based on:&lt;/strong&gt; JavaScript.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Dev servers, build tools, package scripts, backend code.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; What powers the development/tooling side of modern JS apps.&lt;/p&gt;




&lt;h3&gt;
  
  
  9) npm / pnpm
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What they are:&lt;/strong&gt; Package managers for the Node.js ecosystem.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Installing libraries and running project scripts.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; Dependency management for JavaScript projects.&lt;/p&gt;




&lt;h3&gt;
  
  
  10) Supabase
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; A backend-as-a-service (BaaS) platform.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Based on:&lt;/strong&gt; PostgreSQL.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Database, auth, storage, APIs, realtime features.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; A ready-made backend platform for modern apps.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important distinction:&lt;/strong&gt; Supabase is &lt;strong&gt;not&lt;/strong&gt; the database language. It is a platform built around PostgreSQL.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  11) PostgreSQL
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; A relational database.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Based on:&lt;/strong&gt; SQL.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Storing structured application data.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; The actual database engine underneath Supabase.&lt;/p&gt;




&lt;h3&gt;
  
  
  12) SQL
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; A query language for relational databases.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Reading and writing data.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; The language used to talk to PostgreSQL.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Easy chain to remember:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
SQL → language&lt;br&gt;&lt;br&gt;
PostgreSQL → database&lt;br&gt;&lt;br&gt;
Supabase → platform built around PostgreSQL&lt;/p&gt;
&lt;/blockquote&gt;
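
&lt;p&gt;To make the chain concrete, here is a small illustrative query (table and column names invented) that you would write in SQL and run against PostgreSQL, for example via Supabase:&lt;/p&gt;

```
-- SQL is the language you write; PostgreSQL is the engine that runs it;
-- Supabase is the platform wrapped around that engine.
SELECT id, email
FROM users
WHERE active = true
ORDER BY created_at DESC
LIMIT 10;
```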




&lt;h3&gt;
  
  
  13) Auth
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; Authentication and user access control.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Sign in, sign up, passwords, roles, permissions.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Typical provider in this stack:&lt;/strong&gt; Supabase Auth.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; User accounts and access rules.&lt;/p&gt;




&lt;h3&gt;
  
  
  14) Vercel
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; A hosting and deployment platform.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Publishing Next.js web apps.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; Where the app lives online.&lt;/p&gt;




&lt;h3&gt;
  
  
  15) Tauri
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; A desktop app wrapper/framework.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Based on:&lt;/strong&gt; A web frontend plus a Rust native layer.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Turning a web app into a Windows/Mac/Linux desktop app.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; The desktop shell around a React/Next-style app.&lt;/p&gt;




&lt;h3&gt;
  
  
  16) Rust
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; A systems programming language.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; The native/backend layer in Tauri apps.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; The engine under the bonnet of the desktop wrapper.&lt;/p&gt;




&lt;h3&gt;
  
  
  17) SQLite
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; A lightweight embedded SQL database.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Based on:&lt;/strong&gt; SQL.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Local desktop storage.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; A simple local database when you do not want a full cloud backend.&lt;/p&gt;




&lt;h3&gt;
  
  
  18) JSX / TSX
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; The syntax used in React component files (JSX for JavaScript, TSX for TypeScript).&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Based on:&lt;/strong&gt; JavaScript / TypeScript plus HTML-like markup.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Used for:&lt;/strong&gt; Writing React components.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Think of it as:&lt;/strong&gt; The format React code is usually written in.&lt;/p&gt;
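
&lt;p&gt;A minimal sketch (component and prop names invented; TSX needs a React build step, so this is illustrative rather than standalone-runnable):&lt;/p&gt;

```
// Greeting.tsx: TypeScript plus HTML-like markup in one file
export function Greeting({ name }: { name: string }) {
  return &lt;p&gt;Hello, {name}!&lt;/p&gt;;
}
```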




&lt;h2&gt;
  
  
  The default stack combo I’ve mostly been using
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Web app / SaaS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;TypeScript&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;React&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Next.js&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tailwind CSS&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Supabase&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;PostgreSQL&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Vercel&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Desktop app
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;TypeScript&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;React&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tailwind CSS&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tauri&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rust&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQLite&lt;/strong&gt; &lt;em&gt;or&lt;/em&gt; &lt;strong&gt;Supabase/PostgreSQL&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The plain-English mapping
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JavaScript&lt;/strong&gt; = programming language
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TypeScript&lt;/strong&gt; = safer JavaScript
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;React&lt;/strong&gt; = UI building system
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Next.js&lt;/strong&gt; = full web app framework
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tailwind&lt;/strong&gt; = styling approach
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supabase&lt;/strong&gt; = backend platform
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt; = relational database
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQL&lt;/strong&gt; = database query language
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vercel&lt;/strong&gt; = hosting
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tauri&lt;/strong&gt; = desktop app shell
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rust&lt;/strong&gt; = native layer under Tauri
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;If I had to summarise the core modern vibe-coding stack in one line:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TypeScript + React + Next.js + Tailwind + Supabase + PostgreSQL + Vercel&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
with &lt;strong&gt;Tauri&lt;/strong&gt; added when I want a desktop app.&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>opensource</category>
      <category>react</category>
      <category>architecture</category>
    </item>
    <item>
      <title>When the Cure Is the Disease: New Zealand’s Health IT</title>
      <dc:creator>Gleno</dc:creator>
      <pubDate>Sun, 15 Mar 2026 01:12:36 +0000</pubDate>
      <link>https://forem.com/naysmith/when-the-cure-is-the-disease-new-zealands-health-it-2ij2</link>
      <guid>https://forem.com/naysmith/when-the-cure-is-the-disease-new-zealands-health-it-2ij2</guid>
      <description>&lt;div class="crayons-card c-embed"&gt;

  
&lt;h3&gt;
  
  
  Why New Zealand’s Health IT Systems Are Such a Mess (And Why Fixing Them Is So Hard)
&lt;/h3&gt;

&lt;p&gt;Every time there’s a hospital IT outage in New Zealand, the same question comes up:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How can a modern healthcare system still rely on fragile, outdated technology?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The answer isn’t simple. The current situation is the result of &lt;strong&gt;25 years of decentralised decision-making, legacy systems, and extremely complex integration layers&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To understand the debate around recent digital job cuts at Health New Zealand, you first need to understand the architecture underneath it all.&lt;br&gt;

&lt;/p&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The Origin of the Problem: 20 Separate Health Systems&lt;/li&gt;
&lt;li&gt;The Regional Cluster Attempt&lt;/li&gt;
&lt;li&gt;Legacy Systems That Never Die&lt;/li&gt;
&lt;li&gt;The Hidden Monster: Integration&lt;/li&gt;
&lt;li&gt;National Systems Layered on Top&lt;/li&gt;
&lt;li&gt;The Attempt to Modernise&lt;/li&gt;
&lt;li&gt;Why the IT Job Cuts Became Controversial&lt;/li&gt;
&lt;li&gt;What Hospital IT Architecture Actually Looks Like&lt;/li&gt;
&lt;li&gt;Final Thoughts&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Origin of the Problem: 20 Separate Health Systems
&lt;/h2&gt;

&lt;p&gt;For over two decades, healthcare in New Zealand was run by &lt;strong&gt;20 District Health Boards (DHBs)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Each DHB had the authority to choose its own technology stack.&lt;/p&gt;

&lt;p&gt;That meant every region could select different vendors for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Patient administration systems (PAS)&lt;/li&gt;
&lt;li&gt;Electronic medical records&lt;/li&gt;
&lt;li&gt;Laboratory systems&lt;/li&gt;
&lt;li&gt;Radiology systems&lt;/li&gt;
&lt;li&gt;Referral platforms&lt;/li&gt;
&lt;li&gt;Scheduling tools&lt;/li&gt;
&lt;li&gt;Integration engines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Twenty different ecosystems evolving independently.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some DHBs invested heavily in modern digital systems. Others moved more slowly or kept older infrastructure running longer.&lt;/p&gt;

&lt;p&gt;When &lt;strong&gt;Health New Zealand – Te Whatu Ora&lt;/strong&gt; replaced the DHB system in 2022, it inherited &lt;strong&gt;20 different IT environments&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Sources:&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.health.govt.nz/new-zealand-health-system/key-health-sector-organisations-and-people/district-health-boards" rel="noopener noreferrer"&gt;https://www.health.govt.nz/new-zealand-health-system/key-health-sector-organisations-and-people/district-health-boards&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.tewhatuora.govt.nz/about-us/our-health-system/" rel="noopener noreferrer"&gt;https://www.tewhatuora.govt.nz/about-us/our-health-system/&lt;/a&gt;&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;br&gt;
Think of it like merging twenty companies that all used different ERPs, CRMs, and databases — and expecting them to instantly behave like one system.&lt;br&gt;

&lt;/div&gt;





&lt;h2&gt;
  
  
  The Regional Cluster Attempt
&lt;/h2&gt;

&lt;p&gt;Before the national merger, DHBs tried to reduce fragmentation by forming &lt;strong&gt;four regional digital clusters&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Northern&lt;/li&gt;
&lt;li&gt;Midland&lt;/li&gt;
&lt;li&gt;Central&lt;/li&gt;
&lt;li&gt;South Island&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Within each cluster, hospitals attempted to standardise some systems.&lt;/p&gt;

&lt;p&gt;But between clusters, major differences remained.&lt;/p&gt;

&lt;p&gt;Instead of &lt;strong&gt;20 separate stacks&lt;/strong&gt;, New Zealand effectively ended up with &lt;strong&gt;four large incompatible stacks&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Sources:&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.health.govt.nz/system/files/documents/pages/national-health-it-plan-update.pdf" rel="noopener noreferrer"&gt;https://www.health.govt.nz/system/files/documents/pages/national-health-it-plan-update.pdf&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.hinz.org.nz/page/nz-digital-health-history" rel="noopener noreferrer"&gt;https://www.hinz.org.nz/page/nz-digital-health-history&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Legacy Systems That Never Die
&lt;/h2&gt;

&lt;p&gt;Healthcare systems are notoriously difficult to replace.&lt;/p&gt;

&lt;p&gt;Some hospital software currently in use dates back decades.&lt;/p&gt;

&lt;p&gt;Examples seen in healthcare environments worldwide include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Patient administration systems from the &lt;strong&gt;1990s&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Infrastructure built around older Windows environments&lt;/li&gt;
&lt;li&gt;Specialist clinical software with extremely long vendor lifecycles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why not just upgrade everything?&lt;/p&gt;

&lt;p&gt;Because healthcare systems cannot simply shut down for migration.&lt;/p&gt;

&lt;p&gt;Hospitals operate &lt;strong&gt;24 hours a day&lt;/strong&gt;, and replacing a clinical system introduces enormous risk.&lt;/p&gt;

&lt;p&gt;Source:&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.healthit.gov/topic/scientific-initiatives/health-it-modernization" rel="noopener noreferrer"&gt;https://www.healthit.gov/topic/scientific-initiatives/health-it-modernization&lt;/a&gt;&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;br&gt;
Replacing hospital software is like changing the engines on a plane while it is still flying.&lt;br&gt;

&lt;/div&gt;





&lt;h2&gt;
  
  
  The Hidden Monster: Integration
&lt;/h2&gt;

&lt;p&gt;The real complexity sits between systems.&lt;/p&gt;

&lt;p&gt;Because different vendors and platforms are used across regions, hospitals rely heavily on integration standards and middleware.&lt;/p&gt;

&lt;p&gt;One of the most common standards is &lt;strong&gt;HL7&lt;/strong&gt;, which allows systems to exchange clinical messages.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lab results flowing to patient records&lt;/li&gt;
&lt;li&gt;Radiology images linking to clinical systems&lt;/li&gt;
&lt;li&gt;Referrals moving between hospitals&lt;/li&gt;
&lt;li&gt;Admission notifications sent across services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These messages travel through &lt;strong&gt;interface engines&lt;/strong&gt; and middleware that translate and route data.&lt;/p&gt;

&lt;p&gt;Over time, hundreds or thousands of integrations accumulate.&lt;/p&gt;

&lt;p&gt;Source:&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.hl7.org/about/" rel="noopener noreferrer"&gt;https://www.hl7.org/about/&lt;/a&gt;&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;br&gt;
This creates what architects often call &lt;strong&gt;“spaghetti integration architecture”&lt;/strong&gt; — a dense web of connections between systems that all depend on each other.&lt;br&gt;

&lt;/div&gt;





&lt;h2&gt;
  
  
  National Systems Layered on Top
&lt;/h2&gt;

&lt;p&gt;Rather than replacing regional systems, New Zealand introduced several &lt;strong&gt;national digital platforms&lt;/strong&gt; that sit above hospital systems.&lt;/p&gt;

&lt;p&gt;Examples include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;National Health Index (NHI)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;National screening registers&lt;/li&gt;
&lt;li&gt;ePrescribing services&lt;/li&gt;
&lt;li&gt;Shared health record initiatives&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These systems rely on the regional infrastructure continuing to function.&lt;/p&gt;

&lt;p&gt;So instead of simplifying the architecture, they often &lt;strong&gt;add another layer of dependency&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Source:&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.tewhatuora.govt.nz/for-health-professionals/digital-health/" rel="noopener noreferrer"&gt;https://www.tewhatuora.govt.nz/for-health-professionals/digital-health/&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  The Attempt to Modernise
&lt;/h2&gt;

&lt;p&gt;When the government created &lt;strong&gt;Health New Zealand – Te Whatu Ora&lt;/strong&gt;, one of the goals was to finally create a &lt;strong&gt;nationally coherent digital health system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The vision included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Greater standardisation&lt;/li&gt;
&lt;li&gt;Shared platforms&lt;/li&gt;
&lt;li&gt;Improved data interoperability&lt;/li&gt;
&lt;li&gt;Reduced duplication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, transforming a system this complex takes &lt;strong&gt;many years and enormous investment&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Large health IT programmes around the world have struggled with exactly the same challenge.&lt;/p&gt;

&lt;p&gt;Source:&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.health.govt.nz/publication/new-zealand-health-strategy" rel="noopener noreferrer"&gt;https://www.health.govt.nz/publication/new-zealand-health-strategy&lt;/a&gt;&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;br&gt;
The UK’s NHS once attempted a massive national digital transformation that became one of the most expensive failed IT projects in history.&lt;br&gt;

&lt;/div&gt;


&lt;p&gt;Source:&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.nao.org.uk/reports/the-national-programme-for-it-in-the-nhs/" rel="noopener noreferrer"&gt;https://www.nao.org.uk/reports/the-national-programme-for-it-in-the-nhs/&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Why the IT Job Cuts Became Controversial
&lt;/h2&gt;

&lt;p&gt;Recent changes to Health New Zealand’s digital workforce sparked debate.&lt;/p&gt;

&lt;p&gt;Critics argued that cutting digital staff could increase the risk of outages because the infrastructure is already fragile.&lt;/p&gt;

&lt;p&gt;The government argued the restructuring was aimed at reducing duplication and administrative overhead created when the DHBs were merged.&lt;/p&gt;

&lt;p&gt;Sources:&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.rnz.co.nz/news/political/589440/documents-reveal-health-nz-knew-it-job-cuts-would-risk-patient-care-hospital-resilience" rel="noopener noreferrer"&gt;https://www.rnz.co.nz/news/political/589440/documents-reveal-health-nz-knew-it-job-cuts-would-risk-patient-care-hospital-resilience&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.rnz.co.nz/news/political/584179/health-nz-confirms-another-major-tech-outage" rel="noopener noreferrer"&gt;https://www.rnz.co.nz/news/political/584179/health-nz-confirms-another-major-tech-outage&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both perspectives reflect a deeper truth:&lt;/p&gt;

&lt;p&gt;The system itself is &lt;strong&gt;incredibly complicated&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Even small organisational changes can ripple across hundreds of interconnected systems.&lt;/p&gt;


&lt;h2&gt;
  
  
  What Hospital IT Architecture Actually Looks Like
&lt;/h2&gt;

&lt;p&gt;A simplified view of a hospital technology stack might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Clinicians
   |
   v
Electronic Medical Record (EMR)
   |
   +-----------------------------+
   |                             |
Lab System (LIS)           Radiology (RIS/PACS)
   |                             |
   +-------------+---------------+
                 |
           Integration Engine
            (HL7 / Messaging)
                 |
        National Health Systems
                 |
        Identity (NHI) / Registries
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But in reality, each box above often contains &lt;strong&gt;multiple systems from different vendors&lt;/strong&gt;, with hundreds of message interfaces connecting them.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;New Zealand’s healthcare technology environment didn’t become complex overnight.&lt;/p&gt;

&lt;p&gt;It is the product of &lt;strong&gt;decades of regional autonomy, vendor ecosystems, and cautious upgrades in a high-risk industry&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Fixing it requires more than just funding or restructuring.&lt;/p&gt;

&lt;p&gt;It requires carefully untangling a web of systems that hospitals depend on every minute of every day.&lt;/p&gt;

&lt;p&gt;And that is why healthcare IT remains one of the hardest architecture problems in the world.&lt;/p&gt;

</description>
      <category>healthydebate</category>
      <category>career</category>
      <category>news</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>5ive ways to make good AI code exceptional</title>
      <dc:creator>Gleno</dc:creator>
      <pubDate>Sat, 14 Mar 2026 07:19:12 +0000</pubDate>
      <link>https://forem.com/naysmith/5ive-ways-to-making-good-ai-code-exceptional-3edl</link>
      <guid>https://forem.com/naysmith/5ive-ways-to-making-good-ai-code-exceptional-3edl</guid>
      <description>&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Why this matters&lt;/li&gt;
&lt;li&gt;1. Keep scope brutally clear&lt;/li&gt;
&lt;li&gt;2. Give your model a stronger quality bar&lt;/li&gt;
&lt;li&gt;3. Make it critique itself before and after&lt;/li&gt;
&lt;li&gt;4. Separate build passes from polish passes&lt;/li&gt;
&lt;li&gt;5. Make it work against a reference standard&lt;/li&gt;
&lt;li&gt;The playbook&lt;/li&gt;
&lt;li&gt;The biggest unlock&lt;/li&gt;
&lt;li&gt;My strongest recommendation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why this matters
&lt;/h2&gt;

&lt;p&gt;I'm interested to hear how other vibe-coders are getting the best out of whatever model they're using. For me (and Claude), I've been reading &lt;strong&gt;everything, everywhere&lt;/strong&gt; in order to improve my already pretty great outcomes and make more robust software and applications.&lt;/p&gt;

&lt;p&gt;I’ve found that the difference between average AI-assisted coding and genuinely impressive output often has less to do with the model itself and more to do with how you direct it. In other words, the biggest improvement usually comes from becoming a better editor, product owner, and critic of the work.&lt;/p&gt;

&lt;p&gt;This article is basically my current thinking on how to get better results. Not just more code, but better code. Better UX. Better structure. Better judgment. Less fluff. Less fake completeness. Less "looks great in a screenshot, falls over in real life."&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Keep scope brutally clear
&lt;/h2&gt;

&lt;p&gt;The first big improvement for me was realising that the model does best when the task is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;narrow&lt;/li&gt;
&lt;li&gt;concrete&lt;/li&gt;
&lt;li&gt;sequenced&lt;/li&gt;
&lt;li&gt;judged against a clear standard&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What hurts quality is asking for too much at once:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;coding&lt;/li&gt;
&lt;li&gt;product decisions&lt;/li&gt;
&lt;li&gt;architecture&lt;/li&gt;
&lt;li&gt;UX redesign&lt;/li&gt;
&lt;li&gt;future planning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;all bundled into one giant prompt.&lt;/p&gt;

&lt;p&gt;The best pattern I’ve found is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one task&lt;/li&gt;
&lt;li&gt;one success definition&lt;/li&gt;
&lt;li&gt;one summary at the end&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;Good prompt:&lt;/strong&gt;

&lt;p&gt;Improve the dashboard filter UX only. Do not add scope. Focus on clarity, spacing, active filter visibility, and reducing clicks. After changes, summarize what improved and what tradeoffs remain.&lt;br&gt;

&lt;/p&gt;
&lt;/div&gt;



&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;Less good prompt:&lt;/strong&gt;

&lt;p&gt;Make the whole app more modern, smarter, and production-ready.&lt;br&gt;

&lt;/p&gt;
&lt;/div&gt;


&lt;p&gt;That second prompt sounds ambitious, but it usually produces mush. The model starts solving five different problems badly instead of one problem well.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Give your model a stronger quality bar
&lt;/h2&gt;

&lt;p&gt;Another thing that helped me a lot was stopping myself from only telling the model &lt;em&gt;what to do&lt;/em&gt; and instead telling it &lt;em&gt;what good looks like&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Don’t settle for technically correct if what you actually want is product-quality work.&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;Quality bar prompt:&lt;/strong&gt;

&lt;p&gt;Aim for senior product-quality work, not just technically correct implementation.&lt;/p&gt;

&lt;p&gt;Bar for quality:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;obvious, calm, low-friction UX&lt;/li&gt;
&lt;li&gt;strong visual hierarchy&lt;/li&gt;
&lt;li&gt;consistent naming and spacing&lt;/li&gt;
&lt;li&gt;no unnecessary complexity&lt;/li&gt;
&lt;li&gt;no fake completeness&lt;/li&gt;
&lt;li&gt;preserve user familiarity where helpful&lt;/li&gt;
&lt;li&gt;improve common workflows, not edge-case cleverness

&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;



&lt;p&gt;For UI work, I’ve also found it helps to be even more direct:&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;UI quality prompt:&lt;/strong&gt;

&lt;p&gt;Do not settle for generic dashboard SaaS. Make this feel immediately understandable to experienced users, but cleaner, calmer, and easier to scan.&lt;br&gt;

&lt;/p&gt;
&lt;/div&gt;


&lt;p&gt;That little shift makes a big difference. Otherwise the model often gives you something that is fine in a technical sense but generic in every other sense.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Make it critique itself before and after
&lt;/h2&gt;

&lt;p&gt;This is one of the best tricks I’ve found.&lt;/p&gt;

&lt;p&gt;Before it changes anything, get it to identify weak spots. After it changes things, get it to critique the result honestly.&lt;/p&gt;

&lt;p&gt;That helps push it out of “task completed” mode and into “quality review” mode.&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;Before coding:&lt;/strong&gt;

&lt;p&gt;Before coding, identify:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;the 3 weakest parts of the current implementation for this task&lt;/li&gt;
&lt;li&gt;the biggest risk of making it worse&lt;/li&gt;
&lt;li&gt;the standard you will use to judge success

&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;




&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;After coding:&lt;/strong&gt;

&lt;p&gt;After coding, critique your own work:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;what improved materially&lt;/li&gt;
&lt;li&gt;what still feels weak or generic&lt;/li&gt;
&lt;li&gt;what a strong human product designer would probably still want changed

&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;



&lt;p&gt;That has been incredibly useful for me because the model will often otherwise sound pleased with itself far too early.&lt;/p&gt;
&lt;h2&gt;
  
  
  4. Separate build passes from polish passes
&lt;/h2&gt;

&lt;p&gt;One thing I’ve had to learn is not to expect the first pass to also be the best pass.&lt;/p&gt;

&lt;p&gt;The model can build quickly, but quality usually comes from doing the work in layers.&lt;/p&gt;

&lt;p&gt;My preferred sequence is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;build it&lt;/li&gt;
&lt;li&gt;verify it&lt;/li&gt;
&lt;li&gt;refine UX&lt;/li&gt;
&lt;li&gt;clean code&lt;/li&gt;
&lt;li&gt;document next gaps&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That feels much closer to how good teams actually work.&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;Refinement pass prompt:&lt;/strong&gt;

&lt;p&gt;Do not add features.&lt;/p&gt;

&lt;p&gt;Now do a refinement pass on the existing implementation only.&lt;/p&gt;

&lt;p&gt;Improve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;spacing&lt;/li&gt;
&lt;li&gt;hierarchy&lt;/li&gt;
&lt;li&gt;wording&lt;/li&gt;
&lt;li&gt;empty states&lt;/li&gt;
&lt;li&gt;action discoverability&lt;/li&gt;
&lt;li&gt;friction in common interactions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Do not redesign the product. Tighten what already exists.&lt;br&gt;

&lt;/p&gt;
&lt;/div&gt;


&lt;p&gt;That distinction between &lt;em&gt;build pass&lt;/em&gt; and &lt;em&gt;polish pass&lt;/em&gt; has improved my results a lot.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Make it work against a reference standard
&lt;/h2&gt;

&lt;p&gt;The model does better when it has something to aim at.&lt;/p&gt;

&lt;p&gt;If you already know the kind of experience you want, say so clearly and repeatedly. Don’t assume the model will infer your taste.&lt;/p&gt;

&lt;p&gt;For me, this often means defining a few non-negotiables around familiarity, clarity, calmness, cognitive load, and usability.&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;Design standard prompt:&lt;/strong&gt;

&lt;p&gt;Design standard:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;preserve familiar mental models where helpful&lt;/li&gt;
&lt;li&gt;reduce clutter&lt;/li&gt;
&lt;li&gt;make active state obvious&lt;/li&gt;
&lt;li&gt;improve scannability&lt;/li&gt;
&lt;li&gt;reduce clicks&lt;/li&gt;
&lt;li&gt;make common actions easier to find&lt;/li&gt;
&lt;li&gt;use calmer, cleaner visual hierarchy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Prefer familiarity for core workflows and innovation only where it clearly improves speed, confidence, or clarity.&lt;br&gt;

&lt;/p&gt;
&lt;/div&gt;


&lt;p&gt;This helps stop the model wandering off into novelty for novelty’s sake.&lt;/p&gt;

&lt;h2&gt;
  
  
  The playbook
&lt;/h2&gt;

&lt;p&gt;Here’s the practical version of how I now try to work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Review every summary properly
&lt;/h3&gt;

&lt;p&gt;When the model says it has completed something, I ask myself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Did it solve the real user problem?&lt;/li&gt;
&lt;li&gt;Did it stay in scope?&lt;/li&gt;
&lt;li&gt;Did it introduce unnecessary complexity?&lt;/li&gt;
&lt;li&gt;Does it feel generic?&lt;/li&gt;
&lt;li&gt;Would a busy user understand this quickly?&lt;/li&gt;
&lt;li&gt;Does it still feel like the kind of product I’m trying to make?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the answer is “technically yes, emotionally meh,” that usually means it needs a refinement pass.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ask for rationale on UI tasks
&lt;/h3&gt;

&lt;p&gt;Not chain of thought. Just design rationale.&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;UI rationale prompt:&lt;/strong&gt;

&lt;p&gt;For each significant UI change, explain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what stayed familiar&lt;/li&gt;
&lt;li&gt;what improved&lt;/li&gt;
&lt;li&gt;why it reduces cognitive load&lt;/li&gt;
&lt;li&gt;any tradeoff between clarity and familiarity

&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;



&lt;p&gt;That helps surface whether the model is making deliberate choices or just decorating things.&lt;/p&gt;
&lt;h3&gt;
  
  
  Keep a quality debt list
&lt;/h3&gt;

&lt;p&gt;This has been useful too.&lt;/p&gt;

&lt;p&gt;I like having the model maintain a small file of things that are still weak, awkward, or unfinished, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;awkward labels&lt;/li&gt;
&lt;li&gt;generic empty states&lt;/li&gt;
&lt;li&gt;spacing that still feels off&lt;/li&gt;
&lt;li&gt;interactions that are too hidden&lt;/li&gt;
&lt;li&gt;things that need real user testing&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;Quality debt prompt:&lt;/strong&gt;

&lt;p&gt;Maintain a short docs/quality-debt.md with only meaningful remaining UX/code quality issues. Keep it concise and prioritized.&lt;br&gt;

&lt;/p&gt;
&lt;/div&gt;


&lt;p&gt;That stops “good enough” from becoming invisible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Make it do a cleanup pass
&lt;/h3&gt;

&lt;p&gt;Before a task is truly finished, I often want one more sweep for consistency and simplification.&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;Cleanup pass prompt:&lt;/strong&gt;

&lt;p&gt;Before you finish this task, do a cleanup pass for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;naming consistency&lt;/li&gt;
&lt;li&gt;dead code&lt;/li&gt;
&lt;li&gt;duplicate logic&lt;/li&gt;
&lt;li&gt;awkward wording&lt;/li&gt;
&lt;li&gt;spacing inconsistencies&lt;/li&gt;
&lt;li&gt;unnecessary abstractions

&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Force human-grade restraint
&lt;/h3&gt;

&lt;p&gt;This one matters more than people think.&lt;/p&gt;

&lt;p&gt;The model loves adding layers, helpers, hooks, abstractions, and future-proofing when they are not actually needed.&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;Restraint prompt:&lt;/strong&gt;

&lt;p&gt;Do not add abstractions, helper layers, hooks, or configuration unless they clearly reduce present complexity. Prefer simple, readable code over future-proofing theater.&lt;br&gt;

&lt;/p&gt;
&lt;/div&gt;


&lt;p&gt;That line alone can prevent a surprising amount of nonsense.&lt;/p&gt;

&lt;h2&gt;
  
  
  For UI/UX specifically
&lt;/h2&gt;

&lt;p&gt;If I want the best UI/UX possible, I try to push the model toward these principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;10-second scannability&lt;/li&gt;
&lt;li&gt;recognition over recall&lt;/li&gt;
&lt;li&gt;fewer competing focal points&lt;/li&gt;
&lt;li&gt;obvious primary actions&lt;/li&gt;
&lt;li&gt;visible active state&lt;/li&gt;
&lt;li&gt;trustworthy summary-to-detail flows&lt;/li&gt;
&lt;li&gt;calm density, not empty Dribbble fluff&lt;/li&gt;
&lt;li&gt;tables and controls that are genuinely usable&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;Full UI polish prompt:&lt;/strong&gt;

&lt;p&gt;Do not add scope.&lt;/p&gt;

&lt;p&gt;Refine the current UI to improve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;scannability in under 10 seconds&lt;/li&gt;
&lt;li&gt;visual hierarchy&lt;/li&gt;
&lt;li&gt;spacing consistency&lt;/li&gt;
&lt;li&gt;filter clarity&lt;/li&gt;
&lt;li&gt;table readability&lt;/li&gt;
&lt;li&gt;action discoverability&lt;/li&gt;
&lt;li&gt;trust and calmness of the interface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keep the core mental model familiar.&lt;br&gt;
Do not turn this into a flashy SaaS dashboard.&lt;br&gt;
Explain what changed, why it is better, and what still feels weak.&lt;br&gt;
&lt;/p&gt;
&lt;/div&gt;

&lt;p&gt;That prompt has been a good one for me because it pushes the model toward clarity instead of visual showing off.&lt;/p&gt;

&lt;h2&gt;
  
  
  The biggest unlock
&lt;/h2&gt;

&lt;p&gt;One of the best prompts I’ve used is asking the model what a great human would still dislike about the work.&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;Critique from multiple perspectives:&lt;/strong&gt;

&lt;p&gt;Critique this implementation from the perspective of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a strong senior engineer&lt;/li&gt;
&lt;li&gt;a strong product designer&lt;/li&gt;
&lt;li&gt;a skeptical internal business user&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What would each still dislike or question?&lt;br&gt;
Then improve the top issues that are in scope.&lt;br&gt;

&lt;/p&gt;
&lt;/div&gt;


&lt;p&gt;That often gets you a better result than simply saying “make it better.”&lt;/p&gt;

&lt;h2&gt;
  
  
  My role in all this
&lt;/h2&gt;

&lt;p&gt;The way I think about my job now is this:&lt;/p&gt;

&lt;p&gt;My role is to stop the model becoming:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;too broad&lt;/li&gt;
&lt;li&gt;too clever&lt;/li&gt;
&lt;li&gt;too generic&lt;/li&gt;
&lt;li&gt;too pleased with itself&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and keep pushing it toward being:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;clearer&lt;/li&gt;
&lt;li&gt;tighter&lt;/li&gt;
&lt;li&gt;calmer&lt;/li&gt;
&lt;li&gt;more honest&lt;/li&gt;
&lt;li&gt;more product-quality&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That has made a massive difference.&lt;/p&gt;

&lt;h2&gt;
  
  
  My strongest recommendation
&lt;/h2&gt;

&lt;p&gt;If I had to pick one thing that consistently improves output, it would be adding an &lt;strong&gt;excellence pass&lt;/strong&gt; after each meaningful chunk of work.&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;The excellence pass prompt:&lt;/strong&gt;

&lt;p&gt;Do not add scope.&lt;/p&gt;

&lt;p&gt;Now do an excellence pass on the existing implementation.&lt;/p&gt;

&lt;p&gt;Raise the quality of the work without changing the product scope.&lt;/p&gt;

&lt;p&gt;Focus on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;code clarity&lt;/li&gt;
&lt;li&gt;naming consistency&lt;/li&gt;
&lt;li&gt;removal of dead or unnecessary complexity&lt;/li&gt;
&lt;li&gt;UI hierarchy&lt;/li&gt;
&lt;li&gt;spacing and readability&lt;/li&gt;
&lt;li&gt;action discoverability&lt;/li&gt;
&lt;li&gt;empty/loading states&lt;/li&gt;
&lt;li&gt;reduction of cognitive load&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then critique the result honestly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what is now strong&lt;/li&gt;
&lt;li&gt;what still feels average&lt;/li&gt;
&lt;li&gt;what still needs human judgment

&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;



&lt;p&gt;That has probably been the single best quality multiplier for me.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thought
&lt;/h2&gt;

&lt;p&gt;I’d genuinely love to hear how other people are getting the best out of their models.&lt;/p&gt;

&lt;p&gt;What are you doing that consistently improves outcomes?&lt;/p&gt;

&lt;p&gt;Are you using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;tighter prompting&lt;/li&gt;
&lt;li&gt;staged passes&lt;/li&gt;
&lt;li&gt;self-critique&lt;/li&gt;
&lt;li&gt;design standards&lt;/li&gt;
&lt;li&gt;test-first workflows&lt;/li&gt;
&lt;li&gt;something else entirely?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because the more I do this, the more I think the real skill in vibe-coding is not just getting the model to produce code.&lt;/p&gt;

&lt;p&gt;It’s getting it to produce work you’d actually be proud to ship.&lt;/p&gt;

</description>
      <category>claude</category>
      <category>ai</category>
      <category>tutorial</category>
      <category>discuss</category>
    </item>
    <item>
      <title>I reviewed a lot of chatter about AI-assisted engineering (vibe-coding) and it's a little scary TBH</title>
      <dc:creator>Gleno</dc:creator>
      <pubDate>Fri, 13 Mar 2026 18:42:57 +0000</pubDate>
      <link>https://forem.com/naysmith/i-reviewed-a-lot-of-chatter-about-ai-assisted-engineering-vibe-coding-and-its-a-little-scary-tbh-30p3</link>
      <guid>https://forem.com/naysmith/i-reviewed-a-lot-of-chatter-about-ai-assisted-engineering-vibe-coding-and-its-a-little-scary-tbh-30p3</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/naysmith" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3813728%2F00eeab43-d78e-4199-a6fb-d8446669c49f.jpg" alt="naysmith"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/naysmith/ai-assisted-engineering-saas-anxiety-and-the-new-mood-in-software-5ai4" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;AI-assisted engineering, SaaS anxiety, and the new mood in software&lt;/h2&gt;
      &lt;h3&gt;Gleno ・ Mar 13&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#discuss&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#saas&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#news&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>discuss</category>
      <category>ai</category>
      <category>saas</category>
      <category>news</category>
    </item>
    <item>
      <title>AI-assisted engineering, SaaS anxiety, and the new mood in software #Episode 1</title>
      <dc:creator>Gleno</dc:creator>
      <pubDate>Fri, 13 Mar 2026 18:41:08 +0000</pubDate>
      <link>https://forem.com/naysmith/ai-assisted-engineering-saas-anxiety-and-the-new-mood-in-software-5ai4</link>
      <guid>https://forem.com/naysmith/ai-assisted-engineering-saas-anxiety-and-the-new-mood-in-software-5ai4</guid>
      <description>&lt;p&gt;Everyone is asking some version of the same question now: if AI makes software dramatically easier to build, what happens to software businesses?&lt;/p&gt;

&lt;p&gt;The strongest mood I’m seeing is not “SaaS is dead,” but something more unsettling: &lt;strong&gt;the cost of building software is falling fast, which means weak software moats are getting exposed&lt;/strong&gt;. The moat is shifting away from raw implementation and toward distribution, trust, proprietary data, deep workflow ownership, compliance, and operational excellence.&lt;/p&gt;

&lt;p&gt;That is the backdrop to a lot of the sharpest opinion pieces being written right now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;My take&lt;/li&gt;
&lt;li&gt;Articles and blogs&lt;/li&gt;
&lt;li&gt;Themes showing up across the debate&lt;/li&gt;
&lt;li&gt;Final thoughts&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  My take
&lt;/h2&gt;

&lt;p&gt;Here is the broad feeling in the software world right now: &lt;strong&gt;excitement on the surface, anxiety underneath&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;There is genuine optimism that AI-assisted engineering (vibe coding) can make small teams absurdly productive. It is easier than ever to prototype, ship, test ideas, and build useful internal or niche software without a giant engineering org. That part is real, and a lot of writers are no longer treating it as a novelty.&lt;/p&gt;

&lt;p&gt;But there is also a growing suspicion that the old software business playbook is weakening. If code generation becomes cheaper, faster, and more accessible, then software companies can no longer lean so heavily on “it took years to build” as their main justification for valuation, pricing, or defensibility.&lt;/p&gt;

&lt;p&gt;So the current vibe is not quite doom, and not quite triumph. It is more like this:&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  &lt;strong&gt;Software is getting easier to create, harder to defend, and more important than ever to operate well.&lt;/strong&gt;
&lt;/div&gt;


&lt;p&gt;That is why the conversation has become so charged. People are not really arguing about code editors. They are arguing about whether the economics of software are changing under everyone’s feet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Articles and blogs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://a16z.com/good-news-ai-will-eat-application-software/" rel="noopener noreferrer"&gt;Good news: AI Will Eat Application Software&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;One of the boldest takes in the whole debate. The argument is that application software is becoming less defensible as AI lowers the cost of reproducing product functionality, and that users increasingly care more about outcomes than about seat-based tools. This is a strong inclusion if you want a fearless, investor-style view of the pressure building under traditional SaaS.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://a16z.com/insights-for-enterprise-ai-builders/" rel="noopener noreferrer"&gt;From Demos to Deals: Insights for Building in Enterprise AI&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;A useful piece on how AI companies may need to operate differently from classic SaaS companies. The emphasis is not just on building the product, but on how product commoditization changes go-to-market, value capture, and what an enduring moat might actually look like in AI.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://a16z.com/need-for-speed-in-ai-sales-ai-doesnt-just-change-what-you-sell-it-also-changes-how-you-sell-it/" rel="noopener noreferrer"&gt;The New Rhythm of the Enterprise Sale&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;This one is less about coding and more about business mechanics. The key idea is that AI is changing not only what companies buy, but how quickly they evaluate and replace software. That matters because shorter replacement cycles are bad news for sleepy incumbents.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://stackoverflow.blog/2025/12/25/whether-ai-is-a-bubble-or-revolution-how-does-software-survive/" rel="noopener noreferrer"&gt;Whether AI is a bubble or revolution, how does software survive?&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;A thoughtful middle-ground essay. The framing here is that cheap software generation does not mean all software value goes to zero; it means the economics of smaller, simpler tools are changing much faster than the economics of high-stakes systems. Good for readers who want nuance instead of chest-beating.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://stackoverflow.blog/2026/02/09/why-demand-for-code-is-infinite-how-ai-creates-more-developer-jobs/" rel="noopener noreferrer"&gt;Why demand for code is infinite: How AI creates more developer jobs&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;A useful counterweight to the “software jobs are over” narrative. The argument is that lowering the cost of creating software can increase the total amount of software the world wants, which may expand demand even while changing what developers actually do.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://stackoverflow.blog/2026/01/02/a-new-worst-coder-has-entered-the-chat-vibe-coding-without-code-knowledge/" rel="noopener noreferrer"&gt;A new worst coder has entered the chat: vibe coding without code knowledge&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;This captures the backlash side of the conversation. The point is not that AI-assisted engineering is fake, but that it can make it much easier to generate impressive-looking nonsense and push brittle software further than it deserves to go.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://simonwillison.net/2025/Mar/19/vibe-coding/" rel="noopener noreferrer"&gt;Not all AI-assisted programming is vibe coding&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;One of the most useful framing pieces on the whole subject. Simon Willison separates carefree, high-trust prompting from more accountable AI-assisted programming. It is a good reminder that not all AI-heavy coding practices are equally reckless.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://simonwillison.net/2025/Mar/6/vibe-coding/" rel="noopener noreferrer"&gt;Will the future of software development run on vibes?&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;A short but sharp warning. The basic message is that AI-first development is fantastic for experiments and prototypes, but dangerous when people let a “good enough” prototype slide into production without the engineering discipline that production systems require.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://simonwillison.net/2025/jun/10/ai-assisted-coding/" rel="noopener noreferrer"&gt;AI-assisted coding for teams that can't get away with vibes&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;This is a good bridge piece between fast AI experimentation and serious delivery. It speaks directly to teams that still need maintainability, accountability, and reliability, even while using AI heavily.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://addyo.substack.com/p/my-llm-coding-workflow-going-into" rel="noopener noreferrer"&gt;My LLM coding workflow going into 2026&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;A practical and disciplined view of how experienced engineers are folding AI into real work. Rather than treating AI as magic, it presents a workflow where AI speeds things up but the human still owns the outcome.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://addyo.substack.com/p/the-reality-of-ai-assisted-software" rel="noopener noreferrer"&gt;The reality of AI-Assisted software engineering productivity&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;A useful reality check. The argument is that AI can absolutely save time, but coding is only one part of software delivery, so the gains are often uneven. It is a good antidote to simplistic “10x developer” claims.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://addyo.substack.com/p/how-to-write-a-good-spec-for-ai-agents" rel="noopener noreferrer"&gt;How to write a good spec for AI agents&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;This piece is really about where the world may be heading next: from prompting by instinct to building with clearer specs and more structured intent. It hints at a future where software teams become more like editors, orchestrators, and reviewers of machine-generated work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Themes showing up across the debate
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. The moat is moving, not disappearing
&lt;/h3&gt;

&lt;p&gt;The strongest recurring idea is that &lt;strong&gt;code itself is becoming less scarce&lt;/strong&gt;, but software advantage is not vanishing — it is relocating. The defensibility is moving toward distribution, trust, data, integration depth, workflow ownership, and the ability to operate reliably at scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Weak SaaS looks shakier than strong SaaS
&lt;/h3&gt;

&lt;p&gt;The software businesses under the most pressure seem to be the ones selling generic, shallow, seat-based tools with limited differentiation. If your product can be approximated quickly by a smart team using AI, the market is going to ask hard questions.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. “Can build” and “can run” are diverging
&lt;/h3&gt;

&lt;p&gt;This is one of the most important distinctions in the current discussion. AI makes it easier to build software. Running resilient, secure, supportable systems is still a different discipline. The gap between those two things is becoming more visible.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. The winners may look more like operators than coders
&lt;/h3&gt;

&lt;p&gt;A lot of the opinion pieces point to the same conclusion: the next winners may not be the teams writing the most code, but the teams that understand the workflow best, own the customer relationship, integrate deeply, and use AI to move faster without losing control.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;The overall vibe in the software world right now is &lt;strong&gt;restless, opportunistic, and slightly paranoid&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;There is a sense that something big has already shifted, even if nobody agrees yet on where it ends. The loud optimists think AI-assisted engineering will blow up the old SaaS model and turn software into a faster, cheaper, more fluid business. The more grounded voices think that is only half true: software creation is being commoditized, but software operations, trust, and workflow ownership are becoming even more valuable.&lt;/p&gt;

&lt;p&gt;That feels closest to reality.&lt;/p&gt;

&lt;p&gt;So no, SaaS does not look like it is dying any time soon. But it does look less comfortable. Less protected. Less able to rely on historical effort as proof of future value.&lt;/p&gt;

&lt;p&gt;The old story was: &lt;em&gt;building software is hard, so incumbents are safe.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The new story seems to be: &lt;em&gt;building software is getting easier, so incumbents have to prove they deserve to survive.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And that is the mood now more than anything else.&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>ai</category>
      <category>saas</category>
      <category>news</category>
    </item>
    <item>
      <title>Claude's take on the Slawk Codebase (14-day build)</title>
      <dc:creator>Gleno</dc:creator>
      <pubDate>Thu, 12 Mar 2026 23:07:13 +0000</pubDate>
      <link>https://forem.com/naysmith/claudes-take-on-the-slawk-codebase-14-day-build-26o6</link>
      <guid>https://forem.com/naysmith/claudes-take-on-the-slawk-codebase-14-day-build-26o6</guid>
      <description>&lt;h1&gt;
  
  
  Engineering Review
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Overall Assessment: B+
&lt;/h2&gt;

&lt;p&gt;This is a strong result for a 14-day build.&lt;/p&gt;

&lt;p&gt;It shows real engineering judgment in the places that matter: security awareness, validation discipline, transactional correctness, and test coverage. It does not read like a fragile demo or a pure UI clone. It reads like a serious prototype built by someone who understands backend risk and has made a genuine effort to control it.&lt;/p&gt;

&lt;p&gt;The codebase is not production-ready yet, but the gap is mostly operational maturity rather than fundamental incompetence or weak foundations.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is strong
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Security posture is materially better than average for a fast build
&lt;/h3&gt;

&lt;p&gt;There are several decisions here that indicate actual security thinking rather than cosmetic hardening:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;timing attack mitigation&lt;/li&gt;
&lt;li&gt;token revocation through &lt;code&gt;tokenVersion&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;per-user WebSocket rate limiting&lt;/li&gt;
&lt;li&gt;UUID-based filenames&lt;/li&gt;
&lt;li&gt;bcrypt with cost factor 10&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is a better baseline than many production systems shipped under normal timelines.&lt;/p&gt;
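
&lt;p&gt;For anyone unfamiliar with the tokenVersion pattern, here is a minimal sketch of the idea. This is illustrative only, not the Slawk code: each user record carries a version number, every issued token embeds the version current at issue time, and verification rejects tokens whose version no longer matches. Bumping the version revokes every outstanding token for that user at once.&lt;/p&gt;

```javascript
// Illustrative sketch of tokenVersion-based revocation.
// Names (isTokenCurrent, revokeAllTokens) are hypothetical.

// Verification step: a token is only valid while its embedded
// version matches the version stored on the user record.
function isTokenCurrent(tokenPayload, userRecord) {
  return tokenPayload.tokenVersion === userRecord.tokenVersion;
}

// Revocation step: bumping the stored version invalidates every
// token issued before the bump, with no token blacklist needed.
function revokeAllTokens(userRecord) {
  userRecord.tokenVersion = userRecord.tokenVersion + 1;
  return userRecord;
}
```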

&lt;h3&gt;
  
  
  2. Input validation is consistently applied
&lt;/h3&gt;

&lt;p&gt;Validation appears to be taken seriously across the API surface:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zod is used across most endpoints&lt;/li&gt;
&lt;li&gt;null byte filtering is present&lt;/li&gt;
&lt;li&gt;channel naming includes path traversal prevention&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That consistency matters. A lot of rushed applications have one or two “secure” endpoints and then obvious gaps elsewhere. This does not appear to be one of those cases.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Transaction boundaries are correctly used
&lt;/h3&gt;

&lt;p&gt;Prisma transactions are being used in the right places, especially around message creation and counter updates. That suggests a correct understanding of atomicity and reduces the likelihood of subtle race-condition bugs in core workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Architecture is clean and understandable
&lt;/h3&gt;

&lt;p&gt;The separation between Express routes, middleware, Prisma access, Socket.io handling, and Zustand stores is sensible. The code appears structured for maintainability rather than just speed of initial assembly.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Test coverage is meaningful
&lt;/h3&gt;

&lt;p&gt;Sixty-eight backend tests, including multi-user and security-oriented scenarios, is a strong showing for a project of this age. More importantly, the tests are exercising the right categories of risk rather than just happy-path CRUD.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real issues
&lt;/h2&gt;

&lt;h3&gt;
  
  
  High severity
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;In-memory rate limiting and account lockout&lt;/strong&gt;&lt;br&gt;
This is the one serious production blocker.&lt;/p&gt;

&lt;p&gt;Both controls are stateful security mechanisms, and in-memory implementations fail in exactly the situations where they matter most:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;process restart clears enforcement state&lt;/li&gt;
&lt;li&gt;horizontal scaling causes inconsistent enforcement across instances&lt;/li&gt;
&lt;li&gt;failover behavior becomes unpredictable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a real deployment, this needs to move to Redis or to durable database-backed counters with clear expiry semantics.&lt;/p&gt;
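
&lt;p&gt;A rough sketch of what that migration path can look like, assuming nothing about the actual Slawk implementation: put the counter behind a small store interface whose semantics mirror Redis INCR, so the in-memory version used below for illustration can be swapped for a Redis-backed one (INCR plus EXPIRE on first hit) without touching the limiter logic.&lt;/p&gt;

```javascript
// Sketch only: a fixed-window rate limiter behind a pluggable
// counter store. All names here are hypothetical. A Redis-backed
// store with the same incr(key) contract would make counts survive
// restarts and stay consistent across scaled-out instances.
class MemoryCounterStore {
  constructor() {
    this.counts = new Map();
  }
  // Increment and return the new count for this key.
  incr(key) {
    const n = (this.counts.get(key) || 0) + 1;
    this.counts.set(key, n);
    return n;
  }
}

// limit: max requests per user per window; windowMs: window length.
function makeRateLimiter(store, limit, windowMs) {
  return function allow(userId, nowMs) {
    const windowStart = Math.floor(nowMs / windowMs);
    const key = "rl:" + userId + ":" + windowStart;
    const count = store.incr(key);
    // True while the caller is still at or under the window limit.
    return !(count > limit);
  };
}
```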

&lt;h2&gt;
  
  
  Medium severity
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Duplicated message creation logic across REST and WebSocket handlers&lt;/strong&gt;&lt;br&gt;
This is not immediately dangerous, but it is a maintainability and consistency risk. Message creation rules should live in a shared service layer or domain function so validation, side effects, and persistence semantics stay aligned across transports.&lt;/p&gt;
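
&lt;p&gt;The shape of that fix is simple, sketched below with entirely hypothetical names and fields rather than anything from the actual codebase: one domain function owns validation and persistence, and the REST route and Socket.io handler both reduce to thin adapters over it.&lt;/p&gt;

```javascript
// Illustrative sketch of a single shared message-creation function.
// Field names (channelId, authorId, body) and the db.insert call are
// assumptions for the example, not the Slawk schema.
function createMessage(db, input) {
  const body = (input.body || "").trim();
  // Validation lives in one place, so the REST route and the
  // WebSocket handler cannot drift apart over time.
  if (body.length === 0) {
    throw new Error("message body is required");
  }
  return db.insert({
    channelId: input.channelId,
    authorId: input.authorId,
    body: body,
  });
}
```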

&lt;p&gt;&lt;strong&gt;Weak HTML stripping approach&lt;/strong&gt;&lt;br&gt;
Using a regex such as &lt;code&gt;/&amp;lt;[^&amp;gt;]*&amp;gt;/g&lt;/code&gt; is not a reliable sanitization strategy and can be bypassed. If rich text or user-supplied markup is in scope, sanitization should be handled by a proper HTML sanitizer with an explicit allowlist strategy. If markup is not needed, escaping on output is safer than attempting ad hoc stripping.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No audit logging for sensitive administrative operations&lt;/strong&gt;&lt;br&gt;
Role changes, user deactivation, and similar privileged actions should produce durable audit records. Without that, incident review, internal accountability, and enterprise-readiness are all weaker than they should be.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overly broad WebSocket CSP policy&lt;/strong&gt;&lt;br&gt;
Allowing all &lt;code&gt;wss:&lt;/code&gt; origins is unnecessarily permissive. This should be constrained to same-origin or to a strict allowlist of expected WebSocket endpoints.&lt;/p&gt;

&lt;h2&gt;
  
  
  Low severity
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;No frontend error boundaries&lt;/strong&gt;&lt;br&gt;
A render failure taking down the whole app is not unusual in an early build, but it should be corrected before broader usage. Error boundaries around the main application shell and higher-risk UI surfaces would materially improve resilience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ephemeral JWT secret fallback in development&lt;/strong&gt;&lt;br&gt;
This is mostly a developer-experience issue rather than a production risk, assuming production secrets are properly configured. Still, random secret fallback causes token invalidation on restart and can obscure auth-related debugging.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verdict
&lt;/h2&gt;

&lt;p&gt;For a 14-day experiment, this is impressive work.&lt;/p&gt;

&lt;p&gt;The most notable thing is that the author spent effort in the right places. The foundations are not superficial. Security, validation, transaction safety, and tests all indicate competent engineering judgment.&lt;/p&gt;

&lt;p&gt;The main deficiency is operational readiness. The current design still assumes a single-process, non-distributed execution model for some important control paths. That is acceptable for a prototype, but it is the first thing that breaks when the system is exposed to real production conditions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Non-negotiable production fix
&lt;/h2&gt;

&lt;p&gt;Before this should be considered for production use with real users:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Replace in-memory rate limiting and account lockout with Redis or an equivalent durable shared store.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is the only issue here I would classify as a true release blocker.&lt;/p&gt;

&lt;p&gt;Everything else is real, but secondary:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reduce duplication in message creation paths&lt;/li&gt;
&lt;li&gt;replace regex-based HTML stripping with proper sanitization or output escaping&lt;/li&gt;
&lt;li&gt;add audit logging for privileged actions&lt;/li&gt;
&lt;li&gt;tighten WebSocket CSP&lt;/li&gt;
&lt;li&gt;add frontend error boundaries&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final view
&lt;/h2&gt;

&lt;p&gt;This is better than most 14-day builds, and frankly better than plenty of software that is already in production.&lt;/p&gt;

&lt;p&gt;The right summary is not “finished,” and it is not “just a demo” either.&lt;/p&gt;

&lt;p&gt;It is a credible early-stage system with solid engineering instincts and one clear operational maturity gap that must be addressed before production.&lt;/p&gt;

</description>
      <category>slawk</category>
      <category>ai</category>
      <category>vibecoding</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Building a Real Product With AI: Progress on the “AI Dev Team of One”</title>
      <dc:creator>Gleno</dc:creator>
      <pubDate>Tue, 10 Mar 2026 06:55:21 +0000</pubDate>
      <link>https://forem.com/naysmith/building-a-real-product-with-ai-progress-on-the-ai-dev-team-of-one-209b</link>
      <guid>https://forem.com/naysmith/building-a-real-product-with-ai-progress-on-the-ai-dev-team-of-one-209b</guid>
      <description>&lt;p&gt;Firstly, excuse the AI generated image. This is the Lone Ranger on a robot horse apparently..&lt;/p&gt;

&lt;p&gt;Anyhoo, a few days ago I wrote about the idea of the &lt;strong&gt;AI developer team of one&lt;/strong&gt; — the notion that a single developer, working with modern AI coding tools, can operate more like a small engineering team than an individual contributor.&lt;/p&gt;

&lt;p&gt;At the time that post was mostly theoretical. I had only just started building something to test the idea.&lt;/p&gt;

&lt;p&gt;Since then I’ve been running the experiment properly.&lt;/p&gt;

&lt;p&gt;The project I’ve been building is a small application designed to answer a surprisingly common business question:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Will this invoice actually be accepted by the company I'm sending it to?”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It turns out that this question is more complicated than it sounds.&lt;/p&gt;

&lt;p&gt;Different organisations have very specific requirements for invoices. Some require purchase orders in strict formats. Others require exact site names. Some ingest invoices via OCR, others via EDI or Peppol networks. When those rules aren't followed, invoices get rejected or delayed.&lt;/p&gt;

&lt;p&gt;So the system I'm building tries to check invoices &lt;em&gt;before they are sent&lt;/em&gt; and predict whether the buyer's systems will accept them.&lt;/p&gt;

&lt;p&gt;The interesting part, though, isn't just the software itself.&lt;/p&gt;

&lt;p&gt;It's &lt;strong&gt;how it was built.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The development experiment
&lt;/h2&gt;

&lt;p&gt;Instead of building the application entirely by hand, I used AI coding tools as collaborators.&lt;/p&gt;

&lt;p&gt;In practice my workflow looks something like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Code&lt;/strong&gt; handles most implementation work&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChatGPT&lt;/strong&gt; helps with architecture, design thinking, documentation, and code review&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests and type checking run in GitHub&lt;/strong&gt; act as guardrails&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rather than typing every line of code myself, the process feels more like directing a small engineering team.&lt;/p&gt;

&lt;p&gt;I describe the architecture or change I want.&lt;br&gt;&lt;br&gt;
The AI proposes an implementation plan.&lt;br&gt;&lt;br&gt;
I review it, adjust the direction if needed, and then let it execute.&lt;/p&gt;

&lt;p&gt;The result feels surprisingly similar to working with a junior or mid-level developer: the AI does the bulk of the typing, while I focus on the structure of the system.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the system does now
&lt;/h2&gt;

&lt;p&gt;The application converts invoices into a &lt;strong&gt;canonical internal format&lt;/strong&gt;, which allows invoices from different accounting systems to be processed consistently.&lt;/p&gt;

&lt;p&gt;Once the invoice is in that format, the system runs several layers of analysis.&lt;/p&gt;

&lt;p&gt;First it validates the structure of the invoice to ensure the data is coherent. Then it applies buyer-specific rules — things like purchase order formatting or required fields.&lt;/p&gt;

&lt;p&gt;After that it calculates a &lt;strong&gt;readiness score&lt;/strong&gt;, which indicates how close the invoice is to meeting the buyer's requirements.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;p&gt;Invoice readiness: 82%&lt;/p&gt;

&lt;p&gt;Errors: 1&lt;br&gt;&lt;br&gt;
Warnings: 2  &lt;/p&gt;

&lt;p&gt;Issues detected:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Purchase order must be 10 digits
&lt;/li&gt;
&lt;li&gt;Site name must match official buyer list&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is not just to say “valid” or “invalid,” but to give clear guidance on what needs fixing before the invoice is sent.&lt;/p&gt;
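&lt;p&gt;As a rough sketch of how a readiness score like the one above could be computed (the type names and weights here are illustrative assumptions, not the project's actual scoring logic):&lt;/p&gt;

```typescript
// Hypothetical sketch of readiness scoring; Issue and scoreReadiness
// are invented names, not the project's real API.

type Severity = "error" | "warning";

interface Issue {
  severity: Severity;
  message: string;
}

// Weight errors more heavily than warnings and clamp the score at zero.
function scoreReadiness(issues: Issue[]): number {
  let penalty = 0;
  for (const issue of issues) {
    penalty += issue.severity === "error" ? 10 : 4;
  }
  return Math.max(0, 100 - penalty);
}

const issues: Issue[] = [
  { severity: "error", message: "Purchase order must be 10 digits" },
  { severity: "warning", message: "Site name must match official buyer list" },
  { severity: "warning", message: "Missing buyer reference" },
];

console.log(scoreReadiness(issues)); // 82
```

&lt;p&gt;With one error and two warnings, this toy weighting happens to reproduce the 82% figure shown above; a real implementation would tune weights per buyer.&lt;/p&gt;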

&lt;p&gt;The system also runs a &lt;strong&gt;buyer acceptance simulation&lt;/strong&gt;, which predicts whether the buyer's system is likely to accept the invoice automatically.&lt;/p&gt;

&lt;p&gt;That step turns the tool from a simple validator into something closer to a &lt;strong&gt;buyer compliance engine&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Guardrails matter more than prompts
&lt;/h2&gt;

&lt;p&gt;One of the biggest lessons from this experiment is that &lt;strong&gt;good guardrails matter more than clever prompts&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It’s tempting to think that the secret to AI-assisted development is prompt engineering.&lt;/p&gt;

&lt;p&gt;In reality, the most important factor has been putting constraints around the system.&lt;/p&gt;

&lt;p&gt;Those guardrails include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;strong TypeScript typing&lt;/li&gt;
&lt;li&gt;automated tests&lt;/li&gt;
&lt;li&gt;deterministic validation rules&lt;/li&gt;
&lt;li&gt;versioned invoice models&lt;/li&gt;
&lt;li&gt;audit logging&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These systems allow the AI to make large code changes safely.&lt;/p&gt;

&lt;p&gt;If something breaks, the tests catch it immediately.&lt;/p&gt;

&lt;p&gt;Without those guardrails, the system would drift quickly.&lt;/p&gt;

&lt;p&gt;With them, the AI can move very quickly without losing control of the codebase.&lt;/p&gt;
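&lt;p&gt;To make "deterministic validation rules" concrete, a single buyer rule might look like the following (a hypothetical sketch; the rule shape and names are not the project's real API):&lt;/p&gt;

```typescript
// Illustrative deterministic buyer rule: same input always yields the
// same result, so AI-generated changes can be checked mechanically.

interface RuleResult {
  ok: boolean;
  message?: string;
}

interface BuyerRule {
  id: string;
  check: (invoice: { purchaseOrder?: string }) => RuleResult;
}

const poMustBeTenDigits: BuyerRule = {
  id: "po-10-digits",
  check: (invoice) => {
    if (invoice.purchaseOrder !== undefined) {
      if (/^\d{10}$/.test(invoice.purchaseOrder)) {
        return { ok: true };
      }
    }
    return { ok: false, message: "Purchase order must be 10 digits" };
  },
};

console.log(poMustBeTenDigits.check({ purchaseOrder: "1234567890" })); // { ok: true }
```

&lt;p&gt;Because the rule is pure and deterministic, a test suite can assert its behaviour exactly, which is what lets the AI refactor surrounding code without silently changing validation outcomes.&lt;/p&gt;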




&lt;h2&gt;
  
  
  How much time did this actually take?
&lt;/h2&gt;

&lt;p&gt;One of the most interesting questions people have asked is how much time this project required.&lt;/p&gt;

&lt;p&gt;So far, I’ve spent roughly &lt;strong&gt;8–12 hours of my own time&lt;/strong&gt; working on the project across a few sessions.&lt;/p&gt;

&lt;p&gt;That includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;architecture thinking&lt;/li&gt;
&lt;li&gt;reviewing AI implementation plans&lt;/li&gt;
&lt;li&gt;adjusting prompts&lt;/li&gt;
&lt;li&gt;checking test results&lt;/li&gt;
&lt;li&gt;writing documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The code produced in those hours is significantly more than I would normally write by hand in the same window.&lt;/p&gt;




&lt;h2&gt;
  
  
  How long would a traditional team take?
&lt;/h2&gt;

&lt;p&gt;This is obviously a rough comparison, but it’s useful to think about.&lt;/p&gt;

&lt;p&gt;To reach the current stage — which includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;canonical invoice modelling&lt;/li&gt;
&lt;li&gt;validation engine&lt;/li&gt;
&lt;li&gt;buyer rule system&lt;/li&gt;
&lt;li&gt;readiness scoring&lt;/li&gt;
&lt;li&gt;acceptance simulation&lt;/li&gt;
&lt;li&gt;artifact generation (Peppol XML, CSV, etc.)&lt;/li&gt;
&lt;li&gt;API endpoints&lt;/li&gt;
&lt;li&gt;automated tests&lt;/li&gt;
&lt;li&gt;documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A traditional development process might involve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a product owner&lt;/li&gt;
&lt;li&gt;a backend developer&lt;/li&gt;
&lt;li&gt;a frontend developer&lt;/li&gt;
&lt;li&gt;possibly a QA engineer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even with a small team moving quickly, that work could easily represent &lt;strong&gt;120–200 hours of engineering time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Spread across a typical development cycle, that would likely mean &lt;strong&gt;three to six weeks&lt;/strong&gt; of work before reaching the same level of functionality.&lt;/p&gt;

&lt;p&gt;That difference doesn’t mean AI replaces developers.&lt;/p&gt;

&lt;p&gt;But it does change the economics of experimentation.&lt;/p&gt;

&lt;p&gt;Ideas that previously required weeks of engineering effort can now reach a working prototype in a matter of hours.&lt;/p&gt;




&lt;h2&gt;
  
  
  The most interesting direction the project is heading
&lt;/h2&gt;

&lt;p&gt;The original goal was simply to validate invoices.&lt;/p&gt;

&lt;p&gt;But something more interesting started to emerge while building the system.&lt;/p&gt;

&lt;p&gt;As buyer rules are added — for example, Mitre 10 requirements, government invoicing rules, or Peppol constraints — the system starts to accumulate &lt;strong&gt;knowledge about how different buyers operate&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Over time this could become a shared registry of buyer requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;purchase order formats&lt;/li&gt;
&lt;li&gt;accepted invoice structures&lt;/li&gt;
&lt;li&gt;known rejection patterns&lt;/li&gt;
&lt;li&gt;integration formats&lt;/li&gt;
&lt;/ul&gt;
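&lt;p&gt;Such a registry entry could be modelled roughly like this (the field names and the sample values are invented for illustration, not the project's actual schema):&lt;/p&gt;

```typescript
// Hypothetical shape for a buyer-requirements registry entry.

interface BuyerProfile {
  buyerId: string;
  poFormat: string;                // regex the purchase order must match
  requiredFields: string[];
  integration: "peppol" | "edi" | "ocr" | "portal";
  knownRejectionPatterns: string[];
}

const registry: { [buyerId: string]: BuyerProfile } = {
  "example-hardware": {
    buyerId: "example-hardware",
    poFormat: "^\\d{10}$",
    requiredFields: ["purchaseOrder", "siteName"],
    integration: "edi",
    knownRejectionPatterns: ["site name not on official list"],
  },
};

// Looking up a buyer's rules before validation:
const profile = registry["example-hardware"];
console.log(new RegExp(profile.poFormat).test("1234567890")); // true
```

&lt;p&gt;The value of a shared registry is that every new buyer integration enriches the same data set instead of living in one-off code.&lt;/p&gt;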

&lt;p&gt;At that point the product stops being just an invoice validator.&lt;/p&gt;

&lt;p&gt;It becomes a &lt;strong&gt;buyer compliance knowledge system&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What this experiment changed for me
&lt;/h2&gt;

&lt;p&gt;The biggest change isn’t just productivity.&lt;/p&gt;

&lt;p&gt;It’s how development feels.&lt;/p&gt;

&lt;p&gt;Instead of spending most of the time typing code, a lot of the work becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;designing the architecture&lt;/li&gt;
&lt;li&gt;defining constraints&lt;/li&gt;
&lt;li&gt;reviewing implementation plans&lt;/li&gt;
&lt;li&gt;steering the system toward the correct design&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The role becomes closer to &lt;strong&gt;directing an engineering team&lt;/strong&gt; than acting as a single developer.&lt;/p&gt;

&lt;p&gt;And surprisingly, that workflow works.&lt;/p&gt;




&lt;h2&gt;
  
  
  What happens next
&lt;/h2&gt;

&lt;p&gt;At this point the focus shifts from building features to something more important:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;showing the system to real users.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Because ultimately, software isn’t valuable for how it’s built.&lt;/p&gt;

&lt;p&gt;It’s valuable if it solves a real problem.&lt;/p&gt;

&lt;p&gt;In this case, the question is simple:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Can a system like this help businesses avoid invoice rejections and get paid faster?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That’s my next experiment.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>api</category>
      <category>learning</category>
    </item>
    <item>
      <title>The Missing Guardrail in AI Coding: Protecting Architecture</title>
      <dc:creator>Gleno</dc:creator>
      <pubDate>Tue, 10 Mar 2026 00:43:58 +0000</pubDate>
      <link>https://forem.com/naysmith/the-missing-guardrail-in-ai-coding-protecting-architecture-2jmj</link>
      <guid>https://forem.com/naysmith/the-missing-guardrail-in-ai-coding-protecting-architecture-2jmj</guid>
      <description>&lt;p&gt;Most discussions about AI-assisted development focus on prompting.&lt;/p&gt;

&lt;p&gt;Better prompts.&lt;br&gt;
Better instructions.&lt;br&gt;
Better discipline when asking the AI to write code.&lt;/p&gt;

&lt;p&gt;And while that absolutely helps, I think prompting discipline is only the first layer of guardrails.&lt;/p&gt;

&lt;p&gt;The deeper guardrails are the ones that live in the system itself.&lt;/p&gt;

&lt;p&gt;In the last few weeks I've been experimenting with a workflow where AI writes and modifies code inside a repository that constantly checks whether those changes are acceptable. The AI can propose changes freely, but the system pushes back if something drifts too far.&lt;/p&gt;

&lt;p&gt;The interesting thing is that these guardrails are not particularly exotic. They are mostly the same tools developers have been using for years.&lt;/p&gt;

&lt;p&gt;Type checking.&lt;br&gt;&lt;br&gt;
Tests.&lt;br&gt;&lt;br&gt;
Linting.&lt;br&gt;&lt;br&gt;
Structured workflows.&lt;/p&gt;

&lt;p&gt;But once AI enters the loop, those tools become much more important.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Risk Isn't Bad Code
&lt;/h2&gt;

&lt;p&gt;The biggest problem I see with AI-generated code is not that it produces obviously broken programs.&lt;/p&gt;

&lt;p&gt;The bigger problem is something subtler.&lt;/p&gt;

&lt;p&gt;AI often produces changes that are individually reasonable but collectively incoherent.&lt;/p&gt;

&lt;p&gt;A function gets renamed here.&lt;br&gt;&lt;br&gt;
A data structure gets reshaped there.&lt;br&gt;&lt;br&gt;
A validation rule moves to a different layer.&lt;/p&gt;

&lt;p&gt;Each change looks fine in isolation. But over time the system slowly drifts away from its original architecture.&lt;/p&gt;

&lt;p&gt;Eventually the code still compiles and even passes some tests, but the structure that made the system understandable has been eroded.&lt;/p&gt;

&lt;p&gt;I think of this as the &lt;strong&gt;plausible but incoherent&lt;/strong&gt; problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Guardrails That Actually Work
&lt;/h2&gt;

&lt;p&gt;The most effective guardrails I've found so far are the ones that run automatically after every change.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;p&gt;TypeScript type checking immediately catches structural mistakes.&lt;/p&gt;

&lt;p&gt;Unit tests verify that the core logic still works.&lt;/p&gt;

&lt;p&gt;End-to-end tests ensure the user workflows are still intact.&lt;/p&gt;

&lt;p&gt;These are not new ideas, but they become far more valuable once AI is writing large portions of the codebase.&lt;/p&gt;

&lt;p&gt;Instead of relying on the developer to remember rules, the system itself becomes the enforcer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Missing Guardrail: Architecture Protection
&lt;/h2&gt;

&lt;p&gt;There is another guardrail that I think teams will start adopting soon.&lt;/p&gt;

&lt;p&gt;Protecting the architecture itself.&lt;/p&gt;

&lt;p&gt;Traditional tests verify behaviour. They ask questions like:&lt;/p&gt;

&lt;p&gt;Does this function return the right result?&lt;/p&gt;

&lt;p&gt;Does this API endpoint respond correctly?&lt;/p&gt;

&lt;p&gt;But architecture tests ask a different question:&lt;/p&gt;

&lt;p&gt;Did the structure of the system remain the same?&lt;/p&gt;

&lt;p&gt;For example, in the system I'm currently building, everything revolves around a canonical invoice data structure. If that structure drifts, the entire pipeline breaks.&lt;/p&gt;

&lt;p&gt;So one approach I'm exploring is &lt;strong&gt;golden output tests&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Given a known invoice input, the system must always produce the same validated output structure.&lt;/p&gt;

&lt;p&gt;If AI-generated changes alter that structure unexpectedly, the tests fail immediately.&lt;/p&gt;

&lt;p&gt;This protects the boundaries between layers of the system, not just the behaviour of individual functions.&lt;/p&gt;
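&lt;p&gt;A minimal golden output test along these lines might look like this (the normalize function and the canonical shape are hypothetical stand-ins for the real pipeline):&lt;/p&gt;

```typescript
// Illustrative golden-output test: the canonical structure is pinned,
// so any AI-generated change that reshapes it fails immediately.

interface CanonicalInvoice {
  invoiceNumber: string;
  buyerId: string;
  totalCents: number;
}

function normalize(raw: { number: string; buyer: string; total: number }): CanonicalInvoice {
  return {
    invoiceNumber: raw.number,
    buyerId: raw.buyer,
    totalCents: Math.round(raw.total * 100),
  };
}

// The "golden" output, captured when the architecture was last reviewed.
const golden: CanonicalInvoice = {
  invoiceNumber: "INV-001",
  buyerId: "MITRE10",
  totalCents: 12050,
};

const actual = normalize({ number: "INV-001", buyer: "MITRE10", total: 120.5 });

// Structural drift in the canonical format fails loudly here.
if (JSON.stringify(actual) !== JSON.stringify(golden)) {
  throw new Error("Canonical invoice structure drifted from golden output");
}
console.log("golden output test passed");
```

&lt;p&gt;In a real project the golden structure would live in a fixture file and the comparison would run in the test suite, but the principle is the same: structural drift fails fast.&lt;/p&gt;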

&lt;h2&gt;
  
  
  AI Inside Guardrails
&lt;/h2&gt;

&lt;p&gt;What I'm gradually discovering is that AI works best when it operates inside a constrained system.&lt;/p&gt;

&lt;p&gt;The workflow ends up looking something like this:&lt;/p&gt;

&lt;p&gt;The AI proposes changes.&lt;br&gt;&lt;br&gt;
The repository runs automated checks.&lt;br&gt;&lt;br&gt;
Failures are reported back.&lt;br&gt;&lt;br&gt;
The AI adjusts the implementation.&lt;/p&gt;

&lt;p&gt;Instead of treating prompting as the only control mechanism, the repository itself becomes a feedback loop.&lt;/p&gt;

&lt;p&gt;The AI can explore solutions, but the guardrails prevent it from drifting too far from the intended design.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future: Policy-As-Code for Architecture
&lt;/h2&gt;

&lt;p&gt;This idea is not entirely new.&lt;/p&gt;

&lt;p&gt;Infrastructure teams have been moving toward &lt;strong&gt;policy-as-code&lt;/strong&gt; for years. Security rules, deployment constraints, and compliance requirements are expressed as machine-checkable rules.&lt;/p&gt;

&lt;p&gt;I suspect application architecture will move in a similar direction.&lt;/p&gt;

&lt;p&gt;Instead of relying purely on documentation and conventions, systems will increasingly encode architectural constraints directly into tests and automated checks.&lt;/p&gt;

&lt;p&gt;AI will still generate code.&lt;/p&gt;

&lt;p&gt;But it will do so inside an environment that continuously evaluates whether those changes respect the architecture.&lt;/p&gt;

&lt;p&gt;That feels like a much more sustainable model than relying on perfect prompting.&lt;/p&gt;

&lt;p&gt;And if AI-assisted development keeps accelerating, I suspect these kinds of architectural guardrails will become essential.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>typescript</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>AI Might Create a New Job: The Developer Inside Every Business</title>
      <dc:creator>Gleno</dc:creator>
      <pubDate>Mon, 09 Mar 2026 08:47:06 +0000</pubDate>
      <link>https://forem.com/naysmith/ai-might-create-a-new-job-the-developer-inside-every-business-2i4f</link>
      <guid>https://forem.com/naysmith/ai-might-create-a-new-job-the-developer-inside-every-business-2i4f</guid>
      <description>&lt;p&gt;For the past two years we've been asking the wrong question about AI and software development.&lt;/p&gt;

&lt;p&gt;The question everyone keeps asking is:&lt;/p&gt;

&lt;p&gt;Will AI replace developers?&lt;/p&gt;

&lt;p&gt;But after spending time building software with AI coding tools, I’ve started to suspect something very different might happen.&lt;/p&gt;

&lt;p&gt;AI might not eliminate developers.&lt;/p&gt;

&lt;p&gt;It might &lt;strong&gt;create demand for more of them — just in places that never hired developers before.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In particular, it may create a new role that many companies have never had:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;the internal developer.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Company With No IT Department
&lt;/h2&gt;

&lt;p&gt;Imagine a typical mid-sized business.&lt;/p&gt;

&lt;p&gt;Maybe 30–60 employees.&lt;/p&gt;

&lt;p&gt;They are good at what they do, but their internal systems look familiar to anyone who has worked outside the tech industry.&lt;/p&gt;

&lt;p&gt;Spreadsheets everywhere.&lt;/p&gt;

&lt;p&gt;Quotes built in Word.&lt;/p&gt;

&lt;p&gt;Customer notes stored in email threads.&lt;/p&gt;

&lt;p&gt;Information scattered across accounting software, shared drives, and manual processes.&lt;/p&gt;

&lt;p&gt;None of it is completely broken. But none of it is particularly efficient either.&lt;/p&gt;

&lt;p&gt;Historically these businesses solved software problems in three ways:&lt;/p&gt;

&lt;p&gt;• buy SaaS products&lt;br&gt;&lt;br&gt;
• hire consultants&lt;br&gt;&lt;br&gt;
• live with inefficient processes  &lt;/p&gt;

&lt;p&gt;Custom software was rarely an option.&lt;/p&gt;

&lt;p&gt;Not because the problems weren’t worth solving.&lt;/p&gt;

&lt;p&gt;But because building software felt expensive, risky, and slow.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Software Has Traditionally Been Built
&lt;/h2&gt;

&lt;p&gt;For most companies, the model looked something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkdjmgge2unoln50p13z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkdjmgge2unoln50p13z.png" alt="Traditional consultancy model" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This approach works, but it creates friction.&lt;/p&gt;

&lt;p&gt;Projects take time. Requirements must be carefully written. The people building the system are often external to the business itself.&lt;/p&gt;

&lt;p&gt;As a result, many operational problems simply never get solved with software.&lt;/p&gt;

&lt;p&gt;They remain spreadsheets.&lt;/p&gt;




&lt;h2&gt;
  
  
  AI Changes the Starting Point
&lt;/h2&gt;

&lt;p&gt;AI-assisted development tools change one critical thing:&lt;/p&gt;

&lt;p&gt;They dramatically lower the cost of &lt;strong&gt;creating the first version of a system.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now imagine someone inside that business starts experimenting with AI coding tools.&lt;/p&gt;

&lt;p&gt;Within a few days they produce a rough internal tool:&lt;/p&gt;

&lt;p&gt;• a quoting calculator&lt;br&gt;&lt;br&gt;
• a small workflow form&lt;br&gt;&lt;br&gt;
• a dashboard showing job status  &lt;/p&gt;

&lt;p&gt;It isn’t perfect.&lt;/p&gt;

&lt;p&gt;But it works.&lt;/p&gt;

&lt;p&gt;And suddenly the company realises something important.&lt;/p&gt;

&lt;p&gt;Custom software might no longer be unreachable.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Different Model Starts to Appear
&lt;/h2&gt;

&lt;p&gt;Instead of software always being built outside the business, the workflow might start to look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbowhlycf2zpjff21xtoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbowhlycf2zpjff21xtoc.png" alt="AI-assisted internal software creation model" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The company no longer needs to start with a consultancy engagement.&lt;/p&gt;

&lt;p&gt;They can start with experimentation.&lt;/p&gt;

&lt;p&gt;That’s a very different starting point.&lt;/p&gt;




&lt;h2&gt;
  
  
  The First Internal Tool
&lt;/h2&gt;

&lt;p&gt;Imagine the first real success.&lt;/p&gt;

&lt;p&gt;Someone builds a small quoting system.&lt;/p&gt;

&lt;p&gt;The impact is immediate:&lt;/p&gt;

&lt;p&gt;• quotes are generated faster&lt;br&gt;&lt;br&gt;
• pricing becomes consistent&lt;br&gt;&lt;br&gt;
• information is stored properly&lt;br&gt;&lt;br&gt;
• reporting becomes possible  &lt;/p&gt;

&lt;p&gt;The tool might only save a few hours each week.&lt;/p&gt;

&lt;p&gt;But the business now sees something it has rarely experienced before:&lt;/p&gt;

&lt;p&gt;A problem solved &lt;strong&gt;exactly the way their workflow needs it solved.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That’s when the mindset changes.&lt;/p&gt;

&lt;p&gt;Instead of asking:&lt;/p&gt;

&lt;p&gt;“Which software should we buy?”&lt;/p&gt;

&lt;p&gt;The company begins asking:&lt;/p&gt;

&lt;p&gt;“Could we build something for this?”&lt;/p&gt;




&lt;h2&gt;
  
  
  The Hidden Reality
&lt;/h2&gt;

&lt;p&gt;Once that first internal system exists, a new reality appears.&lt;/p&gt;

&lt;p&gt;Software isn’t just something you build.&lt;/p&gt;

&lt;p&gt;It’s something you &lt;strong&gt;own&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The system needs improvements.&lt;/p&gt;

&lt;p&gt;Someone reports a bug.&lt;/p&gt;

&lt;p&gt;Another team wants a similar tool.&lt;/p&gt;

&lt;p&gt;AI may make building software easier, but it does not remove the need for someone who understands how the system works.&lt;/p&gt;

&lt;p&gt;That responsibility has to live somewhere.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Internal Developer
&lt;/h2&gt;

&lt;p&gt;At this point a new role quietly emerges.&lt;/p&gt;

&lt;p&gt;Not a full engineering team.&lt;/p&gt;

&lt;p&gt;Not necessarily even a traditional software engineer.&lt;/p&gt;

&lt;p&gt;But someone inside the business who becomes responsible for the company’s internal software capability.&lt;/p&gt;

&lt;p&gt;Someone who can:&lt;/p&gt;

&lt;p&gt;• understand business workflows&lt;br&gt;&lt;br&gt;
• design simple systems&lt;br&gt;&lt;br&gt;
• use AI tools to build and evolve them&lt;br&gt;&lt;br&gt;
• maintain those systems over time  &lt;/p&gt;

&lt;p&gt;This person becomes the bridge between &lt;strong&gt;how the business operates&lt;/strong&gt; and &lt;strong&gt;the software that supports it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Historically many companies never needed that role.&lt;/p&gt;

&lt;p&gt;AI might change that.&lt;/p&gt;




&lt;h2&gt;
  
  
  A New Kind of Demand
&lt;/h2&gt;

&lt;p&gt;For decades, most companies outside the tech industry simply didn’t hire developers.&lt;/p&gt;

&lt;p&gt;They relied on vendors, consultants, and packaged software.&lt;/p&gt;

&lt;p&gt;But if AI makes building internal tools dramatically cheaper, the economics change.&lt;/p&gt;

&lt;p&gt;Instead of outsourcing every project, many companies may decide it makes sense to have &lt;strong&gt;one internal builder&lt;/strong&gt; who understands their systems and can create solutions when needed.&lt;/p&gt;

&lt;p&gt;Across thousands of companies, that could represent a significant shift.&lt;/p&gt;

&lt;p&gt;Not fewer developers.&lt;/p&gt;

&lt;p&gt;Just developers working in different places.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Opportunity
&lt;/h2&gt;

&lt;p&gt;The interesting implication is not that AI replaces developers.&lt;/p&gt;

&lt;p&gt;It’s that AI might make software creation accessible to businesses that previously couldn’t justify it.&lt;/p&gt;

&lt;p&gt;That means:&lt;/p&gt;

&lt;p&gt;• more internal tools&lt;br&gt;&lt;br&gt;
• more automation&lt;br&gt;&lt;br&gt;
• more operational software solving real problems  &lt;/p&gt;

&lt;p&gt;In other words:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;more software built in more places.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Places that previously had none.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Question I'm Curious About
&lt;/h2&gt;

&lt;p&gt;If AI continues lowering the barrier to building software, do you think we'll start seeing more companies hire &lt;strong&gt;one internal developer&lt;/strong&gt; instead of relying entirely on agencies?&lt;/p&gt;

&lt;p&gt;Or will consultancies simply adopt AI themselves and keep the existing model?&lt;/p&gt;

&lt;p&gt;I'm curious what other developers are seeing in the real world.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>discuss</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>The Prompt Pattern That Made AI Coding Actually Work</title>
      <dc:creator>Gleno</dc:creator>
      <pubDate>Mon, 09 Mar 2026 05:05:12 +0000</pubDate>
      <link>https://forem.com/naysmith/the-prompt-pattern-that-made-ai-coding-actually-work-3c88</link>
      <guid>https://forem.com/naysmith/the-prompt-pattern-that-made-ai-coding-actually-work-3c88</guid>
      <description>&lt;p&gt;When I first started experimenting seriously with AI-assisted development, I ran into the same problem many people seem to hit.&lt;/p&gt;

&lt;p&gt;The AI could write code.&lt;/p&gt;

&lt;p&gt;But the results were inconsistent.&lt;/p&gt;

&lt;p&gt;Sometimes the output was excellent. Other times it was strange, overly complicated, or subtly wrong. Even when the code worked, it sometimes didn’t quite fit the system it was supposed to belong to.&lt;/p&gt;

&lt;p&gt;At first I thought this was just the nature of the tools.&lt;/p&gt;

&lt;p&gt;But after a few weeks of experimenting I realised the real issue wasn’t the AI.&lt;/p&gt;

&lt;p&gt;It was the way I was asking it to work.&lt;/p&gt;

&lt;p&gt;The biggest mistake I was making was simple: I was asking the AI to start coding immediately.&lt;/p&gt;

&lt;p&gt;Once I changed that, everything improved.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with "just build this"
&lt;/h2&gt;

&lt;p&gt;Early on my prompts looked something like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Build an API endpoint for validating invoices.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Or:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Refactor this validation function.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Or:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Add support for buyer-specific rules.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The AI would respond quickly and confidently. Files would appear, functions would change, and new structures would sometimes emerge that looked quite reasonable.&lt;/p&gt;

&lt;p&gt;But there was a subtle issue.&lt;/p&gt;

&lt;p&gt;The AI was making architectural decisions on my behalf.&lt;/p&gt;

&lt;p&gt;Sometimes those decisions were fine. Other times they introduced inconsistencies or patterns that didn’t quite match the rest of the system.&lt;/p&gt;

&lt;p&gt;The problem wasn’t that the AI was bad at coding.&lt;/p&gt;

&lt;p&gt;The problem was that it was &lt;strong&gt;improvising&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pattern that changed everything
&lt;/h2&gt;

&lt;p&gt;Eventually I started using a much stricter prompt pattern.&lt;/p&gt;

&lt;p&gt;Instead of asking the AI to implement something directly, I ask it to go through a small planning process first.&lt;/p&gt;

&lt;p&gt;The pattern looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scan the repository
&lt;/li&gt;
&lt;li&gt;Explain the relevant architecture
&lt;/li&gt;
&lt;li&gt;Propose a minimal implementation plan
&lt;/li&gt;
&lt;li&gt;Wait for approval
&lt;/li&gt;
&lt;li&gt;Then implement the change&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The difference this makes is surprisingly large.&lt;/p&gt;

&lt;p&gt;When the AI explains the architecture first, it usually discovers existing structures that should be reused. When it proposes a plan, it often reveals assumptions that can be corrected before any code is written.&lt;/p&gt;

&lt;p&gt;By the time implementation begins, the AI is no longer guessing.&lt;/p&gt;

&lt;p&gt;It is executing a plan.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this works
&lt;/h2&gt;

&lt;p&gt;There’s a simple reason this pattern is effective.&lt;/p&gt;

&lt;p&gt;AI models are very good at generating code, but they don’t naturally pause to reason about the system unless you explicitly ask them to.&lt;/p&gt;

&lt;p&gt;If you jump straight to implementation, the model fills in the missing context however it thinks is best.&lt;/p&gt;

&lt;p&gt;That can work sometimes.&lt;/p&gt;

&lt;p&gt;But if the system has any real complexity, it’s much safer to force the AI to describe the system first.&lt;/p&gt;

&lt;p&gt;This accomplishes two things.&lt;/p&gt;

&lt;p&gt;First, it gives the AI a chance to understand the existing architecture before making changes.&lt;/p&gt;

&lt;p&gt;Second, it gives the human developer a chance to review the plan and catch mistakes before they turn into code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this looks like in practice
&lt;/h2&gt;

&lt;p&gt;When I’m working on my current project, the prompt often looks something like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Scan the repository and explain how invoice validation currently works.  &lt;/p&gt;

&lt;p&gt;Then propose the smallest safe implementation plan for adding buyer-specific rules.  &lt;/p&gt;

&lt;p&gt;List the files that would need to change, and wait for approval before implementing anything.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The response usually includes a clear explanation of the existing system along with a step-by-step implementation plan.&lt;/p&gt;

&lt;p&gt;Only after reviewing and approving that plan do I ask the AI to proceed.&lt;/p&gt;

&lt;p&gt;At that point, implementation becomes very fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  The role of the developer changes here
&lt;/h2&gt;

&lt;p&gt;This workflow also changes how the developer interacts with the AI.&lt;/p&gt;

&lt;p&gt;Instead of acting like someone requesting code, the developer starts acting more like someone directing an engineering process.&lt;/p&gt;

&lt;p&gt;The AI proposes plans.&lt;br&gt;&lt;br&gt;
The developer approves or modifies them.&lt;br&gt;&lt;br&gt;
Then the AI implements the result.&lt;/p&gt;

&lt;p&gt;In other words, the human remains responsible for the architecture, while the AI accelerates the implementation.&lt;/p&gt;

&lt;p&gt;That separation turns out to be extremely important.&lt;/p&gt;

&lt;h2&gt;
  
  
  The surprising result
&lt;/h2&gt;

&lt;p&gt;Once I started using this pattern consistently, the quality of the AI’s output improved dramatically.&lt;/p&gt;

&lt;p&gt;The code fit the existing system better. Changes were smaller and more targeted. The AI was less likely to invent unnecessary abstractions or refactor unrelated parts of the repository.&lt;/p&gt;

&lt;p&gt;The tools didn’t change.&lt;/p&gt;

&lt;p&gt;The prompt did.&lt;/p&gt;

&lt;p&gt;And that turned out to make all the difference.&lt;/p&gt;

&lt;h2&gt;
  
  
  A simple rule
&lt;/h2&gt;

&lt;p&gt;If you’re experimenting with AI coding tools, one small change can improve your results immediately.&lt;/p&gt;

&lt;p&gt;Don’t start with implementation.&lt;/p&gt;

&lt;p&gt;Start with understanding.&lt;/p&gt;

&lt;p&gt;Ask the AI to explain the system first, propose a plan second, and implement only after the plan is clear.&lt;/p&gt;

&lt;p&gt;It slows the process down by a few minutes.&lt;/p&gt;

&lt;p&gt;But it saves a lot of confusion later.&lt;/p&gt;




</description>
    </item>
    <item>
      <title>What I’ve Been Building With AI (ChatGPT + Claude)</title>
      <dc:creator>Gleno</dc:creator>
      <pubDate>Mon, 09 Mar 2026 04:00:12 +0000</pubDate>
      <link>https://forem.com/naysmith/what-ive-been-building-with-ai-chatgpt-claude-4ea6</link>
      <guid>https://forem.com/naysmith/what-ive-been-building-with-ai-chatgpt-claude-4ea6</guid>
      <description>&lt;p&gt;Hi everyone — I'm Glen.&lt;/p&gt;

&lt;p&gt;I live and work in New Zealand and spend most of my time building software. My day job is as a FileMaker developer, but lately I’ve been spending a lot of my spare time experimenting with AI-assisted development and seeing how far a single developer can push things with the current generation of tools.&lt;/p&gt;

&lt;p&gt;I've been in IT for quite a while (though relatively new to the FileMaker ecosystem), and the speed at which AI tools can help you move from idea to working system is pretty remarkable.&lt;/p&gt;

&lt;p&gt;Over the last few weeks I decided to test something properly.&lt;/p&gt;

&lt;p&gt;Instead of just experimenting with prompts or small scripts, I started building a real system to see what AI-first development actually looks like in practice.&lt;/p&gt;

&lt;p&gt;The project is called &lt;strong&gt;InvoiceReady&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'm building
&lt;/h2&gt;

&lt;p&gt;The goal of InvoiceReady is fairly straightforward: help companies validate invoices before sending them into networks like &lt;strong&gt;Peppol&lt;/strong&gt;, where invoices often get rejected because of formatting issues, missing fields, or buyer-specific rules.&lt;/p&gt;

&lt;p&gt;If you've worked with systems like Peppol, you’ll know that the hardest part isn’t generating the invoice — it’s making sure the invoice satisfies all the rules before it gets sent.&lt;/p&gt;

&lt;p&gt;So the platform focuses on catching those issues early.&lt;/p&gt;

&lt;p&gt;Right now the system includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a canonical invoice data model&lt;/li&gt;
&lt;li&gt;a multi-layer validation engine&lt;/li&gt;
&lt;li&gt;buyer rule management&lt;/li&gt;
&lt;li&gt;destination profile configuration&lt;/li&gt;
&lt;li&gt;invoice versioning and history&lt;/li&gt;
&lt;li&gt;a sandbox testing environment&lt;/li&gt;
&lt;li&gt;an external API designed for integration with systems like FileMaker&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The validation engine works in layers.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;System checks
&lt;/li&gt;
&lt;li&gt;Peppol baseline rules
&lt;/li&gt;
&lt;li&gt;Buyer-specific requirements
&lt;/li&gt;
&lt;li&gt;Destination rules
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each layer validates the invoice data before it ever gets sent to a network.&lt;/p&gt;

&lt;p&gt;The idea is simple: &lt;strong&gt;catch problems before an invoice ever leaves your system.&lt;/strong&gt;&lt;/p&gt;
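&lt;p&gt;To make the layering concrete, here's a rough sketch of the idea in TypeScript. All of the names here (&lt;code&gt;Invoice&lt;/code&gt;, &lt;code&gt;ValidationLayer&lt;/code&gt;, the example rules) are hypothetical illustrations, not the actual InvoiceReady code:&lt;/p&gt;

```typescript
interface InvoiceLine {
  description: string;
  amount: number;
}

interface Invoice {
  id: string;
  buyerId: string;
  destination: string;
  total: number;
  lines: InvoiceLine[];
}

interface ValidationIssue {
  layer: string;
  message: string;
}

// Each layer is just a function from an invoice to a list of issues.
type ValidationLayer = (invoice: Invoice) => ValidationIssue[];

// 1) System checks: structural sanity.
const systemChecks: ValidationLayer = (inv) =>
  inv.lines.length === 0
    ? [{ layer: "system", message: "Invoice has no line items" }]
    : [];

// 2) Baseline rules: e.g. line amounts must reconcile with the total.
const baselineRules: ValidationLayer = (inv) => {
  const sum = inv.lines.reduce((acc, line) => acc + line.amount, 0);
  return sum === inv.total
    ? []
    : [{ layer: "baseline", message: "Line amounts do not match total" }];
};

// 3) Buyer-specific requirements, looked up per buyer.
const buyerRules: ValidationLayer = (inv) =>
  inv.buyerId === ""
    ? [{ layer: "buyer", message: "Missing buyer identifier" }]
    : [];

// 4) Destination rules, e.g. per-network formatting requirements.
const destinationRules: ValidationLayer = (inv) =>
  inv.destination === ""
    ? [{ layer: "destination", message: "No destination profile set" }]
    : [];

// Run every layer in order and collect all issues before rejecting,
// so the sender sees the full list of problems at once.
function validateInvoice(invoice: Invoice): ValidationIssue[] {
  const layers = [systemChecks, baselineRules, buyerRules, destinationRules];
  return layers.flatMap((layer) => layer(invoice));
}
```

&lt;p&gt;The useful property of this shape is that each layer is independent: rules can be added per buyer or per destination without touching the layers before them.&lt;/p&gt;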

&lt;h2&gt;
  
  
  The development setup
&lt;/h2&gt;

&lt;p&gt;The interesting part isn’t just the project itself, but how it’s being built.&lt;/p&gt;

&lt;p&gt;Instead of a traditional workflow, I’ve been using a combination of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ChatGPT&lt;/strong&gt; for architectural thinking, planning, and writing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude Code&lt;/strong&gt; for repository implementation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Next.js + TypeScript&lt;/strong&gt; for the application&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supabase&lt;/strong&gt; for the database&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For anyone curious about the tooling: I'm not using the free tiers.&lt;/p&gt;

&lt;p&gt;I'm currently on the &lt;strong&gt;paid plans just above the free tier&lt;/strong&gt; for both ChatGPT and Claude.&lt;/p&gt;

&lt;p&gt;That gives enough usage to work with them continuously while building real software, without constantly hitting limits. Even combined, the monthly cost is still fairly modest compared to traditional development tooling.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the workflow actually works
&lt;/h2&gt;

&lt;p&gt;The most important thing I've learned so far is that AI development only works well if you control the workflow.&lt;/p&gt;

&lt;p&gt;The loop I'm using looks something like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Think through the architecture
&lt;/li&gt;
&lt;li&gt;Define a clear implementation plan
&lt;/li&gt;
&lt;li&gt;Ask Claude to implement the change in the repository
&lt;/li&gt;
&lt;li&gt;Run type checks and tests
&lt;/li&gt;
&lt;li&gt;Iterate
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In other words, the AI isn't just asked to "build something".&lt;/p&gt;

&lt;p&gt;It is asked to &lt;strong&gt;implement a plan&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The rule I've been following is simple:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Never let the AI start coding without a plan.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the plan is clear, the implementation phase becomes dramatically faster.&lt;/p&gt;

&lt;p&gt;Tasks like scaffolding routes, wiring APIs, generating types, creating migrations, and even producing tests can happen in minutes.&lt;/p&gt;

&lt;p&gt;The bottleneck stops being implementation.&lt;/p&gt;

&lt;p&gt;The bottleneck becomes decision quality.&lt;/p&gt;

&lt;h2&gt;
  
  
  What surprised me
&lt;/h2&gt;

&lt;p&gt;The biggest surprise wasn’t that AI can generate code.&lt;/p&gt;

&lt;p&gt;It was how quickly the &lt;strong&gt;implementation layer stopped being the constraint&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Once the architecture is clear, the AI can produce large amounts of surrounding implementation very quickly.&lt;/p&gt;

&lt;p&gt;What ends up taking the most time is deciding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what the system should actually do&lt;/li&gt;
&lt;li&gt;where the logic should live&lt;/li&gt;
&lt;li&gt;what the smallest safe change is&lt;/li&gt;
&lt;li&gt;what shouldn't be built at all&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are architectural questions.&lt;/p&gt;

&lt;p&gt;AI can help with them, but they still require human judgment.&lt;/p&gt;

&lt;h2&gt;
  
  
  What still requires human oversight
&lt;/h2&gt;

&lt;p&gt;AI is very good at producing code that looks plausible.&lt;/p&gt;

&lt;p&gt;But plausible code isn’t the same as a coherent system.&lt;/p&gt;

&lt;p&gt;The areas that still require the most careful attention are things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;system architecture&lt;/li&gt;
&lt;li&gt;domain modelling&lt;/li&gt;
&lt;li&gt;product decisions&lt;/li&gt;
&lt;li&gt;defining boundaries between components&lt;/li&gt;
&lt;li&gt;deciding when something is the wrong feature entirely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, the AI can accelerate implementation, but it doesn’t replace the thinking that keeps a system stable.&lt;/p&gt;

&lt;p&gt;If anything, those decisions become more important because implementation is now so fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I'm writing about this
&lt;/h2&gt;

&lt;p&gt;There’s a lot of hype around AI coding tools right now.&lt;/p&gt;

&lt;p&gt;Some of it is deserved.&lt;/p&gt;

&lt;p&gt;But the real shift isn't simply that AI can write code.&lt;/p&gt;

&lt;p&gt;The real shift is that a single developer can now coordinate something that behaves more like a small engineering team — if the workflow is structured properly.&lt;/p&gt;

&lt;p&gt;That’s what I’m experimenting with.&lt;/p&gt;

&lt;p&gt;Building a real system.&lt;/p&gt;

&lt;p&gt;Using AI as the primary implementation layer.&lt;/p&gt;

&lt;p&gt;And documenting what works, what breaks, and what turns into chaos if you're not careful.&lt;/p&gt;

&lt;p&gt;I'll keep sharing what I learn as the project evolves.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>chatgpt</category>
      <category>sideprojects</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>The AI Development Loop</title>
      <dc:creator>Gleno</dc:creator>
      <pubDate>Mon, 09 Mar 2026 03:54:42 +0000</pubDate>
      <link>https://forem.com/naysmith/the-ai-development-loop-3dld</link>
      <guid>https://forem.com/naysmith/the-ai-development-loop-3dld</guid>
      <description>&lt;p&gt;A lot of discussion around AI-assisted development focuses on tools.&lt;/p&gt;

&lt;p&gt;Which model should you use?&lt;br&gt;&lt;br&gt;
Which coding assistant is best?&lt;br&gt;&lt;br&gt;
Which IDE plugin is fastest?&lt;/p&gt;

&lt;p&gt;Those questions are interesting, but they miss something more important.&lt;/p&gt;

&lt;p&gt;AI-assisted development is not really about tools.&lt;/p&gt;

&lt;p&gt;It is about workflow.&lt;/p&gt;

&lt;p&gt;When people struggle with AI coding tools, it is usually not because the model is bad. It is because the workflow is wrong. They ask the AI to produce code immediately, without context, planning, or structure.&lt;/p&gt;

&lt;p&gt;That works occasionally for small tasks.&lt;/p&gt;

&lt;p&gt;But for real systems, it quickly creates confusion.&lt;/p&gt;

&lt;p&gt;What actually works is a loop.&lt;/p&gt;

&lt;p&gt;A repeatable cycle that separates thinking from implementation and keeps the human in control of the system.&lt;/p&gt;

&lt;p&gt;I think of it as &lt;strong&gt;the AI development loop&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The basic structure
&lt;/h2&gt;

&lt;p&gt;The loop is simple.&lt;/p&gt;

&lt;p&gt;An idea appears.&lt;br&gt;&lt;br&gt;
The architecture is examined.&lt;br&gt;&lt;br&gt;
A plan is created.&lt;br&gt;&lt;br&gt;
AI implements the plan.&lt;br&gt;&lt;br&gt;
The result is tested.&lt;br&gt;&lt;br&gt;
Then the system evolves again.&lt;/p&gt;

&lt;p&gt;Visually it looks something like this:&lt;/p&gt;

&lt;p&gt;Idea&lt;br&gt;&lt;br&gt;
↓&lt;br&gt;&lt;br&gt;
Architecture Review&lt;br&gt;&lt;br&gt;
↓&lt;br&gt;&lt;br&gt;
Implementation Plan&lt;br&gt;&lt;br&gt;
↓&lt;br&gt;&lt;br&gt;
AI Implementation&lt;br&gt;&lt;br&gt;
↓&lt;br&gt;&lt;br&gt;
Testing&lt;br&gt;&lt;br&gt;
↓&lt;br&gt;&lt;br&gt;
Iteration  &lt;/p&gt;

&lt;p&gt;The important detail is not the diagram itself.&lt;/p&gt;

&lt;p&gt;The important detail is &lt;strong&gt;the order&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Thinking happens before implementation.&lt;/p&gt;
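&lt;p&gt;One way to see why the order matters: the loop can be written down as a tiny state machine that only moves forward through the stages. This is a hypothetical sketch, not any particular tool's API:&lt;/p&gt;

```typescript
// The stages of the loop, in the order they must run.
const STAGES = ["idea", "architecture review", "plan", "implement", "test"] as const;
type Stage = (typeof STAGES)[number];

// Advance to the next stage; after testing, the loop starts over
// with a refined idea. There is no transition that reaches
// "implement" without passing through "plan" first.
function nextStage(current: Stage): Stage {
  const i = STAGES.indexOf(current);
  return i === STAGES.length - 1 ? STAGES[0] : STAGES[i + 1];
}
```

&lt;p&gt;It's a toy, but it captures the constraint: there is no edge that jumps from idea straight to implementation.&lt;/p&gt;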

&lt;h2&gt;
  
  
  Why this loop matters
&lt;/h2&gt;

&lt;p&gt;In traditional development, the implementation phase was expensive. Writing code took time, and that time naturally slowed developers down enough to think about what they were doing.&lt;/p&gt;

&lt;p&gt;AI removes that friction.&lt;/p&gt;

&lt;p&gt;You can go from idea to code almost instantly. The temptation is to skip the thinking stage entirely and just see what happens.&lt;/p&gt;

&lt;p&gt;That usually produces one of two outcomes.&lt;/p&gt;

&lt;p&gt;Sometimes the AI gets lucky and produces something close to what you wanted.&lt;/p&gt;

&lt;p&gt;More often, it produces something that technically works but does not quite fit the system. The architecture drifts, patterns diverge, and small inconsistencies begin to accumulate.&lt;/p&gt;

&lt;p&gt;The loop prevents that drift.&lt;/p&gt;

&lt;p&gt;It creates a structure where AI accelerates implementation but does not replace architectural thinking.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step one: the idea
&lt;/h2&gt;

&lt;p&gt;Every change starts with an idea.&lt;/p&gt;

&lt;p&gt;A feature request.&lt;br&gt;&lt;br&gt;
A bug report.&lt;br&gt;&lt;br&gt;
A new capability the product needs.&lt;/p&gt;

&lt;p&gt;The temptation is to jump straight to implementation.&lt;/p&gt;

&lt;p&gt;Instead, the idea should first be examined in the context of the system.&lt;/p&gt;

&lt;p&gt;Where does this belong?&lt;br&gt;&lt;br&gt;
Does the system already contain something similar?&lt;br&gt;&lt;br&gt;
Is this actually the right feature?&lt;/p&gt;

&lt;p&gt;AI can help answer those questions, but it should not start writing code yet.&lt;/p&gt;

&lt;p&gt;The first step is always understanding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step two: architecture review
&lt;/h2&gt;

&lt;p&gt;Before any implementation happens, the system should be examined.&lt;/p&gt;

&lt;p&gt;This means looking at the current architecture and asking:&lt;/p&gt;

&lt;p&gt;How does this part of the system work today?&lt;/p&gt;

&lt;p&gt;What services, components, or models already exist?&lt;/p&gt;

&lt;p&gt;What patterns are already in place?&lt;/p&gt;

&lt;p&gt;AI can be surprisingly good at this stage. When asked to scan a repository and explain the relevant architecture, it can quickly surface relationships between files and components that might otherwise take time to rediscover.&lt;/p&gt;

&lt;p&gt;The goal is not to change anything yet.&lt;/p&gt;

&lt;p&gt;The goal is to understand the system before touching it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step three: the implementation plan
&lt;/h2&gt;

&lt;p&gt;Once the system is understood, the next step is to define the smallest safe change.&lt;/p&gt;

&lt;p&gt;A good implementation plan answers a few key questions:&lt;/p&gt;

&lt;p&gt;Which files need to change?&lt;br&gt;&lt;br&gt;
What new code needs to exist?&lt;br&gt;&lt;br&gt;
What existing patterns should be reused?&lt;br&gt;&lt;br&gt;
What should not change?&lt;/p&gt;

&lt;p&gt;This step is where AI becomes particularly useful. The model can propose a structured plan that outlines the steps required to implement the idea while preserving the existing architecture.&lt;/p&gt;

&lt;p&gt;But the plan should always be reviewed by a human before implementation begins.&lt;/p&gt;

&lt;p&gt;This is the moment where architectural mistakes are easiest to catch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step four: AI implementation
&lt;/h2&gt;

&lt;p&gt;Only now does the AI begin writing code.&lt;/p&gt;

&lt;p&gt;Because the plan is clear, the implementation phase becomes dramatically faster. The AI knows what it is supposed to build, which files to touch, and what patterns to follow.&lt;/p&gt;

&lt;p&gt;This is where AI shines.&lt;/p&gt;

&lt;p&gt;Tasks that used to take hours — scaffolding routes, wiring handlers, creating models, generating types, and writing tests — can now happen in minutes.&lt;/p&gt;

&lt;p&gt;The key is that the AI is implementing a plan, not improvising.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step five: testing
&lt;/h2&gt;

&lt;p&gt;Implementation is never the end of the loop.&lt;/p&gt;

&lt;p&gt;Every change must pass through testing.&lt;/p&gt;

&lt;p&gt;Type checks, automated tests, and manual verification all play a role here. AI can help generate tests, but the human still needs to validate that the behaviour matches the intention.&lt;/p&gt;

&lt;p&gt;Testing acts as the control layer that prevents fast implementation from becoming fast mistakes.&lt;/p&gt;

&lt;p&gt;If something breaks, the loop simply runs again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step six: iteration
&lt;/h2&gt;

&lt;p&gt;Software development is not linear.&lt;/p&gt;

&lt;p&gt;The result of one change often reveals a better version of the same idea. Maybe the architecture can be simplified. Maybe the feature belongs in a different place. Maybe the product should evolve slightly.&lt;/p&gt;

&lt;p&gt;The loop makes iteration natural.&lt;/p&gt;

&lt;p&gt;Each pass through the cycle refines the system without losing coherence.&lt;/p&gt;

&lt;p&gt;Instead of chaotic bursts of AI-generated code, development becomes a controlled series of improvements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this loop scales surprisingly well
&lt;/h2&gt;

&lt;p&gt;One of the most interesting things about the AI development loop is how well it works for a single developer.&lt;/p&gt;

&lt;p&gt;Because AI handles much of the mechanical implementation, the human can spend more time on the higher-leverage parts of development:&lt;/p&gt;

&lt;p&gt;understanding the system&lt;br&gt;&lt;br&gt;
making architectural decisions&lt;br&gt;&lt;br&gt;
reviewing plans&lt;br&gt;&lt;br&gt;
testing outcomes&lt;/p&gt;

&lt;p&gt;This is one of the reasons the idea of a “dev team of one” is becoming more realistic.&lt;/p&gt;

&lt;p&gt;The loop turns AI into a multiplier for thoughtful engineering rather than a replacement for it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real lesson
&lt;/h2&gt;

&lt;p&gt;The most important shift in AI-assisted development is not that AI writes code.&lt;/p&gt;

&lt;p&gt;It is that developers now have the ability to separate thinking from implementation more cleanly than ever before.&lt;/p&gt;

&lt;p&gt;When that separation is respected, the results are powerful.&lt;/p&gt;

&lt;p&gt;When it is ignored, the result is chaos.&lt;/p&gt;

&lt;p&gt;The AI development loop is simply a way of keeping those two worlds in the right order.&lt;/p&gt;

&lt;p&gt;Think first.&lt;/p&gt;

&lt;p&gt;Build second.&lt;/p&gt;

&lt;p&gt;Repeat.&lt;/p&gt;




</description>
      <category>ai</category>
      <category>productivity</category>
      <category>programming</category>
      <category>softwaredevelopment</category>
    </item>
  </channel>
</rss>
