<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Brunelly</title>
    <description>The latest articles on Forem by Brunelly (@brunellyai).</description>
    <link>https://forem.com/brunellyai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F11714%2Fbe9493cc-b719-4069-a1b1-38624886cfc2.png</url>
      <title>Forem: Brunelly</title>
      <link>https://forem.com/brunellyai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/brunellyai"/>
    <language>en</language>
    <item>
      <title>Vibe Coding Isn't Bad, You Are</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Wed, 04 Mar 2026 15:37:29 +0000</pubDate>
      <link>https://forem.com/brunellyai/vibe-coding-isnt-bad-you-are-31</link>
      <guid>https://forem.com/brunellyai/vibe-coding-isnt-bad-you-are-31</guid>
      <description>&lt;p&gt;I cannot get through a day without reading that everything produced by vibe coding is awful. The code isn’t production ready, config values are hardcoded, secrets leak out on the client side and entire databases are exposed. It’s best to not even consider a multi-tenant SaaS solution. &lt;/p&gt;

&lt;p&gt;And we’re in a polarised world – so everyone is either for or against vibe coding. &lt;/p&gt;

&lt;p&gt;The vibers? It democratises development, anyone can put their ideas into practice, and it saves a tonne of time testing a concept before investing months and thousands of dollars. &lt;/p&gt;

&lt;p&gt;The haters? The quality is poor, developers will lose their jobs, it’s just AI generated slop, it’s impossible to finish off an idea when things get complicated. &lt;/p&gt;

&lt;p&gt;Is vibe coding actually awful, or are you just doing it wrong? &lt;/p&gt;

&lt;h2&gt;
  
  
  It’s A Tool, Not A Religion
&lt;/h2&gt;

&lt;p&gt;Vibe coding isn’t a philosophical principle. It’s a set of tools. If you gave me a bag of tools and some raw materials and told me to build a house, I’d laugh and politely decline. That’s not my skillset. &lt;/p&gt;

&lt;p&gt;If you gave me a construction crew, told me that they’ll do what I tell them but no more and asked me to build a house – I might be tempted to have a go. &lt;/p&gt;

&lt;p&gt;I’d most likely not consider foundation depth sufficiently, miss a huge amount of bracing in the roof and put bathrooms all over the place even if the pipework made no sense. I’d have an idea of what I wanted, but at the end of it the house would fall down – or never be finished at all. &lt;/p&gt;

&lt;p&gt;I shouldn’t kid myself about construction. I’ve got a vague idea what a lintel is, I’ve got a reasonable idea of how plumbing and electrical wiring works, but I have none of the skills to design, oversee or construct a house. For that reason, I’m also woefully unqualified to oversee a team of people following my instructions. &lt;/p&gt;

&lt;p&gt;This is vibe coding. &lt;/p&gt;

&lt;p&gt;You’re given an incredibly capable team of experts that will help you – but the catch is they’ll only do what you ask and nothing more. &lt;/p&gt;

&lt;h2&gt;
  
  
  My Buddy Claude
&lt;/h2&gt;

&lt;p&gt;I started working with Claude last July. I signed up, started a subscription and shouted “create me a game in the style of Monkey Island but based around a cat that rides a crinkle cut chip” into the ether. &lt;/p&gt;

&lt;p&gt;I first had a yellow HTML canvas with a rectangle on it. &lt;/p&gt;

&lt;p&gt;Then I had several failed Unity projects that wouldn’t build. &lt;/p&gt;

&lt;p&gt;Finally, I created a console app that was merely pretty awful. &lt;/p&gt;

&lt;p&gt;I’d spent $200 on AI credits. Oy. &lt;/p&gt;

&lt;p&gt;We’ve evolved since then. I started using Claude to work through architectural ideas.  I’ve written previously about the five-day coding binge where I worked with Claude to create my own programming language, compiler, assembler and virtual machine from scratch with zero experience. But Claude wrote none of it. &lt;/p&gt;

&lt;p&gt;Over the last three months things have changed, and Claude probably writes 75% of my code. It’s a combination of model improvements and how I manage Claude. &lt;/p&gt;

&lt;p&gt;I no longer treat it as the fount of all knowledge – rather, I tightly define what I want done, I focus on testing, I review as if a member of my team had done the work, and I iterate. &lt;/p&gt;

&lt;p&gt;The key to it all, though, is that I tightly define exactly what I want it to create, I can verify that it is correct, and it goes through exactly the same review process as code that a member of my team would write. &lt;/p&gt;
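&lt;p&gt;One hypothetical shape for that loop (the function and checks below are invented for illustration, not taken from any real project): the human writes the acceptance checks first, then asks the AI for an implementation that must pass them. &lt;/p&gt;

```python
# A hypothetical example of "define tightly, then verify": the checks are
# written by a human first and act as the specification. The function
# below stands in for whatever the AI was asked to produce.

def slugify(title):
    """Turn an article title into a URL slug (the AI-written part)."""
    cleaned = "".join(ch if ch.isalnum() else " " for ch in title.lower())
    return "-".join(cleaned.split())

# Human-written acceptance checks, agreed before any code was generated.
assert slugify("Vibe Coding Isn't Bad, You Are") == "vibe-coding-isn-t-bad-you-are"
assert slugify("  Hello   World  ") == "hello-world"
assert slugify("100% Real") == "100-real"
```

&lt;p&gt;If the generated code fails a check, the failing assertion goes straight back into the next prompt – the same loop a human code review would trigger. &lt;/p&gt;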

&lt;p&gt;I can do this because I’ve been coding since I was seven and, thirty-five years later, I have a fairly good idea how this is supposed to work. But I still couldn’t vibe code a Unity app, as I have no experience in that sphere. &lt;/p&gt;

&lt;p&gt;With any tool you need to know how to use it. Vibe coding tools are no different. &lt;/p&gt;

&lt;p&gt;I’m sorry – but vibe coding isn’t the problem. The users are. &lt;/p&gt;

&lt;h2&gt;
  
  
  If You Can’t Code, You Can’t Vibe Code
&lt;/h2&gt;

&lt;p&gt;I can code. I get phenomenal results out of AI coding tools now. This year they have finally reached the ‘code per hour’ that I can produce, giving me the ability to oversee rather than code. &lt;/p&gt;

&lt;p&gt;But if you can’t code, that is not going to be your experience. &lt;/p&gt;

&lt;p&gt;Sure, you can create a quick proof of concept and see if an idea works. But it will be no better than my hypothetical wonky house. It will fall down. &lt;/p&gt;

&lt;p&gt;To use any vibe coding tool, you need to understand code, architecture, security, performance, deployments and monitoring. If you don’t know what to ask for, the AI won’t magically think of it for you. &lt;/p&gt;

&lt;p&gt;And if you’re a developer and you know all that? It still might not be for you. What feature are you building? What does a user need? What should it look like? What kind of test plan do you need to follow? How should you sequence your features to engage with users more quickly and what adds the most business value? &lt;/p&gt;

&lt;p&gt;None of those were technical questions but they all need answering if you want to build an actual product. &lt;/p&gt;

&lt;h2&gt;
  
  
  Not All Is Lost Though
&lt;/h2&gt;

&lt;p&gt;If you want to vibe code, then that's great. Use it to learn. Ask questions. Read widely - not just about code but about all the disciplines that make a product. Throw away what you created and start again. Use multiple tools and learn how to master them. &lt;/p&gt;

&lt;p&gt;The future absolutely contains 'vibe coder' as a role - but it will be someone with sufficient inter-disciplinary experience to oversee the AI tools they're using. You don't need mile-deep knowledge in everything, but you do need mile-wide, inch-deep understanding across the board to even have a chance of asking the right questions. &lt;/p&gt;

&lt;p&gt;It's not a magic box. It's a set of tools. &lt;/p&gt;

&lt;p&gt;Master them and you can create something extraordinary. &lt;/p&gt;

&lt;p&gt;Shout into the void and a yellow HTML canvas can be yours. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>discuss</category>
      <category>coding</category>
      <category>software</category>
    </item>
    <item>
      <title>Your AI Product Is Not A Real Business</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Mon, 23 Feb 2026 10:19:57 +0000</pubDate>
      <link>https://forem.com/brunellyai/your-ai-product-is-not-a-real-business-4knn</link>
      <guid>https://forem.com/brunellyai/your-ai-product-is-not-a-real-business-4knn</guid>
      <description>&lt;p&gt;I just got back from STEP 2026 in Dubai. Whilst there were some genuinely amazing businesses there, I also saw a lot of companies that won’t make their first year. &lt;/p&gt;

&lt;p&gt;Most startups now splash AI onto all their marketing. AI is not your product. AI itself does not deliver business value. Unless you are a frontier lab, AI is nothing more than a tool in your stack. Nobody is out there shouting ‘MongoDB-enabled trading platform’. &lt;/p&gt;

&lt;p&gt;Users don’t care if it’s AI. Investors don’t care if it’s AI. They care about what it does, what problem it solves and whether there’s space for it in the market. &lt;/p&gt;

&lt;p&gt;And if you want to sell to real businesses? I've sat across the table from $5bn consultancies evaluating AI tools. They ask about your architecture, your data residency, how to deploy it on-prem and what you actually own. If the answer is 'we call the OpenAI API' – the meeting is over. &lt;/p&gt;

&lt;h2&gt;
  
  
  Wrappers… Everywhere
&lt;/h2&gt;

&lt;p&gt;There are tens of thousands of AI startups right now whose core premise is: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A vague idea for a product &lt;/li&gt;
&lt;li&gt;Put a bit of a wrapper around an AI model &lt;/li&gt;
&lt;li&gt;Display it to the user &lt;/li&gt;
&lt;li&gt;Charge $29/month&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not a business. Your users could most likely just use ChatGPT – why would they want another subscription? &lt;/p&gt;

&lt;p&gt;It’s not defensible. There’s no IP there. There’s nothing unique. On the contrary, your whole business is at risk from changes to a model. &lt;/p&gt;

&lt;p&gt;Remember when everyone built apps on top of Twitter, and then the API rules changed overnight? That can happen to you if you’re just wrapping a model. It’s even worse here, as the frontier labs have an incentive to compete against you when you come up with a good, simple idea. &lt;/p&gt;

&lt;p&gt;Let’s not even get into the fact that you’re exposed to a huge cost base you don’t control – input and output tokens just rack up an AI bill behind the scenes. &lt;/p&gt;

&lt;p&gt;The playbook right now seems to be: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Wrapper launches and gets traction &lt;/li&gt;
&lt;li&gt;Model provider notices traction &lt;/li&gt;
&lt;li&gt;Model provider adds features to handle some of this in-house &lt;/li&gt;
&lt;li&gt;Business case evaporates &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You’re doing market research for OpenAI – and they can execute better than you can. &lt;/p&gt;

&lt;p&gt;Stop doing this. &lt;/p&gt;

&lt;h2&gt;
  
  
  Vibe Coding Is Making This Worse
&lt;/h2&gt;

&lt;p&gt;My most successful summary of &lt;a href="https://go.brunelly.com/devto" rel="noopener noreferrer"&gt;Brunelly&lt;/a&gt; at STEP 2026 was ‘You know what vibe coding is right? We’re the opposite of that. We actually create real-world enterprise quality software’. &lt;/p&gt;

&lt;p&gt;That has to be the opener because vibe coding has such a bad reputation in the real world. Security, scalability, deployments, infrastructure management, compliance – all non-existent, and bugs everywhere. &lt;/p&gt;

&lt;p&gt;And vibe coded AI products take the worst of all worlds. The simplest AI wrapper around some basic CRUD operations but lacking any scalability. &lt;/p&gt;

&lt;p&gt;Please stop. &lt;/p&gt;

&lt;h2&gt;
  
  
  There’s A Better Way To Do AI
&lt;/h2&gt;

&lt;p&gt;I’ve spent the last year building Maitento – our AI-native operating system. Think of it as a cross between Unix and AWS, but AI native. Models are drivers. There are different process types (Linux containers, AIs interacting with each other, apps developed in our own programming language, code-generation orchestration). Every agent can connect to any OpenAPI or MCP server out there. Applications are defined declaratively. Shell. RAG. Memory system. Context management. Multi-modal. There’s a lot. &lt;/p&gt;

&lt;p&gt;This is the iceberg we needed to create a real enterprise-ready AI-enabled application. &lt;/p&gt;

&lt;p&gt;Why did we need it? Extensibility. Quality. Scalability. Performance. Speed of development. Duct-taping a bunch of Python scripts together didn’t cut it. &lt;/p&gt;

&lt;p&gt;I’m not saying you need the level of orchestration that we have – but I wanted to emphasise that the moving pieces in enterprise-grade AI orchestration are far more complex. &lt;/p&gt;

&lt;p&gt;Do you think ChatGPT is just a wrapper around OpenAI’s own API with some system prompts? There’s file management, prompt-injection detection, context analysis, memory management, rolling context windows, deployments, scalability, backend queueing, real-time streaming across millions of users, multi-modal input and distributed Python execution environments. ChatGPT itself has a ‘call the model’ step, but it’s the tiniest part of the overall infrastructure. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Truth
&lt;/h2&gt;

&lt;p&gt;It’s easy to call an API. It’s far harder to build real infrastructure than many founders realise. &lt;/p&gt;

&lt;p&gt;Founders want to ship, so they rush to deliver. But that doesn’t mean you’re actually building a business – you’re building a tech demo. &lt;/p&gt;

&lt;p&gt;A demo is not a product. It’s a controlled environment that doesn’t replicate reality. &lt;/p&gt;

&lt;p&gt;The gap between impressive demo and production-grade product in AI is wider than in any other category of software. Because AI systems fail in ways that traditional software doesn't. They hallucinate, they lose context, they confidently produce wrong outputs.  &lt;/p&gt;

&lt;p&gt;Managing that failure mode requires infrastructure. Real infrastructure. Not a try/catch block around an API call. &lt;/p&gt;

&lt;h2&gt;
  
  
  Build Something That Matters
&lt;/h2&gt;

&lt;p&gt;The AI gold rush is producing a lot of shovels. &lt;/p&gt;

&lt;p&gt;Most of those shovels are made of cardboard. &lt;/p&gt;

&lt;p&gt;The companies that will still exist in five years are the ones building real infrastructure today. Not just calling APIs. Not chaining prompts. Not wrapping someone else's intelligence in a pretty interface and calling it innovation. &lt;/p&gt;

&lt;p&gt;Build the thing that's hard to build. That's the only strategy that works. It always has been. &lt;/p&gt;

&lt;p&gt;If you were able to build it in a few days, anyone else can too. &lt;/p&gt;

&lt;p&gt;If it’s difficult for you then it is for your competitors. &lt;/p&gt;

&lt;p&gt;And then you may actually have a genuinely novel business. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>opensource</category>
      <category>devops</category>
    </item>
    <item>
      <title>How Many Rs Are There Really In Strawberry? AI Is So Stupid</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Mon, 16 Feb 2026 15:16:12 +0000</pubDate>
      <link>https://forem.com/brunellyai/how-many-rs-are-there-really-in-strawberry-ai-is-so-stupid-2f5k</link>
      <guid>https://forem.com/brunellyai/how-many-rs-are-there-really-in-strawberry-ai-is-so-stupid-2f5k</guid>
      <description>&lt;p&gt;How many Rs are there in the word &lt;em&gt;strawberry&lt;/em&gt;? AI can’t tell you. Apparently. You’ve all seen it. Screenshots, Reddit threads, smug tweets. Models tripping over letters like toddlers. Everyone pointing and laughing. Reassuring stuff.&lt;/p&gt;

&lt;p&gt;Wind the clock back a little.&lt;/p&gt;

&lt;p&gt;It’s 2023. Image generation is exploding. It’s magical. Also: why does that hand have five fingers &lt;em&gt;and&lt;/em&gt; a thumb?&lt;/p&gt;

&lt;p&gt;A year later and we’ve uncovered a new, devastating limitation. AI cannot render a wine glass completely full. Half the internet concludes: preposterous technology, case closed.&lt;/p&gt;

&lt;p&gt;By 2025 things are truly dire. Models still can’t reliably count the Rs in strawberry. Ask for a seahorse emoji and they spiral into what looks suspiciously like an existential crisis.&lt;/p&gt;

&lt;p&gt;These examples don’t matter. Not really.&lt;/p&gt;

&lt;p&gt;What’s interesting is how obsessively we return to them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cgdimx9gp85p6r599mw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cgdimx9gp85p6r599mw.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  It Will Never Be Able To Code Though
&lt;/h2&gt;

&lt;p&gt;The memes are obvious if you use AI regularly. But this reflex isn’t limited to casual users. Technical people do it too and often more loudly.&lt;/p&gt;

&lt;p&gt;Early 2023: ChatGPT can spit out a half-decent for loop. Sometimes it even answers technical questions correctly. Incredible. But obviously it can’t build an app.&lt;/p&gt;

&lt;p&gt;Late 2024: we’ve got basic code-generation tools. Still, no danger. It makes too many mistakes. Barely junior level.&lt;/p&gt;

&lt;p&gt;2025: the year of the vibe coder. Suddenly everyone can spin up a website. Sure, it’s riddled with security holes and questionable decisions. So again: no threat. We’ll just clean it up. AI is junk.&lt;/p&gt;

&lt;p&gt;For years now, we’ve watched models repeatedly blow past their previous ceilings. Each time, the criticism simply slides sideways to the next obvious limitation.&lt;/p&gt;

&lt;p&gt;Reddit is still full of people pointing out how stupid AI is. They’re not wrong. They’re just always late and missing the important part.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Is AI Stupid Though?
&lt;/h2&gt;

&lt;p&gt;Before getting philosophical, it’s worth grounding this in reality. These glitches exist for reasons. If you’re building with AI, you need to understand them.&lt;/p&gt;

&lt;p&gt;How often have you seen a photograph of a wine glass filled perfectly to the brim? Until recently: almost never. That means the model hasn’t either. It’s not failing; it’s interpolating from a deeply human dataset. &lt;/p&gt;

&lt;p&gt;Why do seahorse emojis cause chaos? Because at some point the internet collectively decided a seahorse emoji existed. Reddit talked about it. Joked about it. Imagined it. The model learns that a seahorse emoji is plausible and goes to insert it. Then, mid-generation, it realises that it doesn’t exist and starts chasing its own tail ad infinitum. &lt;/p&gt;

&lt;p&gt;Why does AI-generated code contain errors? Because it’s trained on Stack Overflow, blogs, gists, half-finished examples and heroic hacks. You didn’t ask it to be secure. You didn’t constrain it. It’s doing exactly what humanity taught it to do.&lt;/p&gt;

&lt;p&gt;People say AI is a mirror to the user.  It’s also a mirror to humanity… and a lot of what we’re seeing reflected back isn’t flattering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4k237de0q7kxi8j29s76.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4k237de0q7kxi8j29s76.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Does It Matter?
&lt;/h2&gt;

&lt;p&gt;Because this isn’t abstract. It has real consequences – for society and for anyone building real products with AI baked in. If you’re developing on top of AI and you don’t understand how it fails, you’re already in trouble.&lt;/p&gt;

&lt;p&gt;At Brunelly we assume AI is an intern who found a 20-year-old Stack Overflow answer and ran with it. We prompt heavily, guide explicitly, and still don’t trust the output. Everything passes through multiple agents to surface bugs, performance issues, and security concerns.&lt;/p&gt;

&lt;p&gt;The only viable starting point is: it will underperform… so how do we correct it?&lt;/p&gt;
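&lt;p&gt;As a minimal sketch of that starting point (every name here is a hypothetical stand-in, not a real API): wrap the model call in a contract check, and feed failures back as corrections rather than trusting the first answer. &lt;/p&gt;

```python
# A minimal sketch of "assume it will underperform, so correct it".
# call_model and the output contract are hypothetical stand-ins.

def call_model(prompt):
    # Placeholder for a real model call; returns raw text.
    return "TOTAL: 42"

def extract_total(text):
    # Verification step: only accept output matching the agreed contract.
    prefix = "TOTAL: "
    if text.startswith(prefix) and text[len(prefix):].isdigit():
        return int(text[len(prefix):])
    return None

def total_with_retries(prompt, attempts=3):
    for _ in range(attempts):
        result = extract_total(call_model(prompt))
        if result is not None:
            return result
        # Feed the failure back so the next attempt can do better.
        prompt = prompt + "\nPrevious answer was malformed; reply exactly as TOTAL: N"
    raise ValueError("model never produced a valid answer")
```

&lt;p&gt;In a real system call_model would hit an actual model; here the stub answers correctly, so the loop terminates on the first pass – the point is that nothing reaches the rest of the system without passing the check. &lt;/p&gt;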

&lt;p&gt;But this misunderstanding goes wider than product design.&lt;/p&gt;

&lt;p&gt;Stack Overflow is effectively dead. Let that sink in. Once the backbone of developer knowledge, now barely visited. Why? Because ChatGPT gives faster, better, contextual answers.&lt;/p&gt;

&lt;p&gt;Music, images, stock photography – already flooded. Half of the lo-fi playlists on Spotify are AI-generated. We just stopped calling it slop.&lt;br&gt;
Remember when everyone complained about AI slop in early 2025?  Bad news: it’s still AI. It’s just a lot less sloppy.&lt;/p&gt;

&lt;p&gt;Jobs are changing. Trust is changing. Evidence is changing.  When you can’t trust photos, videos, reviews or faces then everything downstream shifts with it.&lt;/p&gt;

&lt;p&gt;If you’re focused on strawberries, you’re going to wake up one day and wonder when the world quietly re-organised itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkbetnv533sgdw2w60jz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkbetnv533sgdw2w60jz.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Do We Fixate Though?
&lt;/h2&gt;

&lt;p&gt;Because known failure modes are comforting.&lt;/p&gt;

&lt;p&gt;They give us a boundary. Something to point at. Something to laugh at. A place where we still feel safely on top.&lt;/p&gt;

&lt;p&gt;Finding a bug in YouTube is annoying. Finding a bug in AI is reassuring.&lt;br&gt;
The problem is that these failures don’t last.&lt;/p&gt;

&lt;p&gt;Our mental model of AI already lags reality, and that gap is widening. Even if AI progress stopped tomorrow, it would take years for organisations to fully exploit what already exists. Orchestration is immature. Skills are scarce. Understanding is shallow.&lt;/p&gt;

&lt;p&gt;This isn’t about whether LLMs lead to AGI or consciousness. It doesn’t matter. The systems we already have are enough to reshape everything if we actually learn how to use them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Does It Mean For Builders?
&lt;/h2&gt;

&lt;p&gt;It means stability is gone. The model you used last month is obsolete. The workaround you wrote last week no longer applies. Every solved edge case is replaced by three new ones.&lt;/p&gt;

&lt;p&gt;This isn’t like JavaScript frameworks. This is orders of magnitude faster.&lt;br&gt;
You have to design for an environment that mutates continuously. Trust becomes a UX problem, not a marketing one. AI labels actively reduce confidence.&lt;/p&gt;

&lt;p&gt;Textbox-and-send is not a product strategy.&lt;/p&gt;

&lt;p&gt;Trust nothing. Convert outputs into constrained state machines. Design experiences that absorb failure gracefully.&lt;/p&gt;
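&lt;p&gt;One way to read ‘constrained state machines’ (the states and transitions below are invented for illustration): whatever free text the model produces, the application only ever moves between a fixed set of states, and illegal or unrecognised moves are absorbed rather than surfaced. &lt;/p&gt;

```python
# A minimal sketch of turning free-text model output into a constrained
# state machine: the UI only ever sees one of a fixed set of states, and
# only legal transitions are accepted. All names here are hypothetical.

VALID_STATES = {"drafting", "needs_review", "approved", "rejected"}
LEGAL_MOVES = {
    "drafting": {"needs_review"},
    "needs_review": {"approved", "rejected"},
    "approved": set(),
    "rejected": {"drafting"},
}

def next_state(current, model_output):
    # Normalise whatever the model said into a candidate state.
    candidate = model_output.strip().lower().replace(" ", "_")
    # Trust nothing: unknown states and illegal jumps fall back safely.
    if candidate in VALID_STATES and candidate in LEGAL_MOVES[current]:
        return candidate
    return current  # absorb the failure instead of surfacing it
```

&lt;p&gt;The model can hallucinate all it likes – the worst it can do is leave the state unchanged. &lt;/p&gt;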

&lt;p&gt;We didn’t build Brunelly because AI is magical. We built it because AI is a tool that can be harnessed and nobody else was doing it right. And the orchestrator underneath it evolves almost as fast as the models themselves – because it has to.&lt;/p&gt;

&lt;h2&gt;
  
  
  And What Does It Mean For All Of Us?
&lt;/h2&gt;

&lt;p&gt;That’s the real question.&lt;/p&gt;

&lt;p&gt;I was coding in the 90s during the original internet boom and bust. It wasn’t like this. Code lasted years. Systems were stable. Patterns endured.&lt;/p&gt;

&lt;p&gt;This time is different – not because the tech is smarter, but because the pace is relentless.&lt;/p&gt;

&lt;p&gt;Laughing at AI’s mistakes is fine. It &lt;em&gt;is&lt;/em&gt; funny.  But it’s also a distraction.&lt;/p&gt;

&lt;p&gt;Assume the world is changing before you notice it.&lt;/p&gt;

&lt;p&gt;If you’re building: design for failure, assume the system will outgrow you mid-flight, and plan accordingly.&lt;/p&gt;

&lt;p&gt;And maybe stop counting Rs.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>architecture</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
