<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Guy</title>
    <description>The latest articles on Forem by Guy (@guypowell).</description>
    <link>https://forem.com/guypowell</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3472509%2F5008d526-a21f-4fd1-818e-a4e88b29a0f5.jpg</url>
      <title>Forem: Guy</title>
      <link>https://forem.com/guypowell</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/guypowell"/>
    <language>en</language>
    <item>
      <title>Vibe Coding Isn't Bad, You Are</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Wed, 04 Mar 2026 15:37:29 +0000</pubDate>
      <link>https://forem.com/brunellyai/vibe-coding-isnt-bad-you-are-31</link>
      <guid>https://forem.com/brunellyai/vibe-coding-isnt-bad-you-are-31</guid>
      <description>&lt;p&gt;I cannot get through a day without reading that everything produced by vibe coding is awful. The code isn’t production ready, config values are hardcoded, secrets leak out on the client side and entire databases are exposed. It’s best to not even consider a multi-tenant SaaS solution. &lt;/p&gt;

&lt;p&gt;And we’re in a polarised world – so everyone is either for or against vibe coding. &lt;/p&gt;

&lt;p&gt;The vibers? It democratises development, anyone can put their ideas into practice, and it saves a tonne of time testing a concept before investing months and thousands of dollars. &lt;/p&gt;

&lt;p&gt;The haters? The quality is poor, developers will lose their jobs, it’s just AI generated slop, it’s impossible to finish off an idea when things get complicated. &lt;/p&gt;

&lt;p&gt;Is vibe coding actually awful, or are you just doing it wrong? &lt;/p&gt;

&lt;h2&gt;
  
  
  It’s A Tool, Not A Religion
&lt;/h2&gt;

&lt;p&gt;Vibe coding isn’t a philosophical principle. It’s a set of tools. If you gave me a bag of tools and some raw materials and told me to build a house, I’d laugh and politely decline. That’s not my skillset. &lt;/p&gt;

&lt;p&gt;If you gave me a construction crew, told me that they’ll do what I tell them but no more and asked me to build a house – I might be tempted to have a go. &lt;/p&gt;

&lt;p&gt;I’d most likely not give the foundation depth enough thought, miss a huge amount of bracing in the roof and put bathrooms all over the place even if the pipework made no sense. I’d have an idea of what I wanted, but at the end of it the house would fall down – or construction would never finish. &lt;/p&gt;

&lt;p&gt;I shouldn’t kid myself about construction. I’ve got a vague idea what a lintel is, and a reasonable idea of how plumbing and electrical wiring work, but I have none of the skills to design, oversee or construct a house. For that reason, I am also woefully unqualified to oversee a team of people following my instructions. &lt;/p&gt;

&lt;p&gt;This is vibe coding. &lt;/p&gt;

&lt;p&gt;You’re given an incredibly capable team of experts that will help you – but the gotcha is they’ll do only what you ask and nothing more. &lt;/p&gt;

&lt;h2&gt;
  
  
  My Buddy Claude
&lt;/h2&gt;

&lt;p&gt;I started working with Claude last July. I signed up, started a subscription and shouted “create me a game in the style of Monkey Island but based around a cat that rides a crinkle cut chip” into the ether. &lt;/p&gt;

&lt;p&gt;First I had a yellow HTML canvas with a rectangle on it. &lt;/p&gt;

&lt;p&gt;Then I had several failed Unity projects that wouldn’t build. &lt;/p&gt;

&lt;p&gt;Finally, I created a console app that was merely pretty awful. &lt;/p&gt;

&lt;p&gt;I’d spent $200 on AI credits. Oy. &lt;/p&gt;

&lt;p&gt;We’ve evolved since then. I started using Claude to work through architectural ideas. I’ve written previously about the five-day coding binge where I worked with Claude to create my own programming language, compiler, assembler and virtual machine from scratch with zero experience. But Claude wrote none of the code. &lt;/p&gt;

&lt;p&gt;Over the last three months things have changed, and Claude probably writes 75% of my code. It’s a combination of model improvements and how I manage Claude. &lt;/p&gt;

&lt;p&gt;I no longer treat it as a fount of all knowledge – instead I tightly define what I want done, I focus on testing, I review as if a member of my team had done the work, and I iterate. &lt;/p&gt;

&lt;p&gt;The key to it all, though, is that I define exactly what I want it to create, I can verify that it’s correct, and it goes through exactly the same review process as code that a member of my team would write. &lt;/p&gt;

&lt;p&gt;I can do this because I’ve been coding since I was seven and thirty-five years later, I have a fairly good idea how this is supposed to work. But I still couldn’t vibe code a Unity app as I have no experience in that sphere. &lt;/p&gt;

&lt;p&gt;With any tool you need to know how to use it. Vibe coding tools are no different. &lt;/p&gt;

&lt;p&gt;I’m sorry – but vibe coding isn’t the problem. The users are. &lt;/p&gt;

&lt;h2&gt;
  
  
  If You Can’t Code, You Can’t Vibe Code
&lt;/h2&gt;

&lt;p&gt;I can code. I get phenomenal results out of AI coding tools now. This year they have finally reached the ‘code per hour’ that I can produce, giving me the ability to oversee rather than code. &lt;/p&gt;

&lt;p&gt;But if you can’t code, that is not going to be your experience. &lt;/p&gt;

&lt;p&gt;Sure, you can create a quick proof of concept and see if an idea works. But it will be no better than my theoretically wonky house.  It will fall down. &lt;/p&gt;

&lt;p&gt;To use any vibe coding tool, you need to understand code, architecture, security, performance, deployments and monitoring. If you don’t know what to ask for, the AI won’t magically think of it for you. &lt;/p&gt;

&lt;p&gt;And if you’re a developer and you know all that? It still might not be for you. What feature are you building? What does a user need? What should it look like? What kind of test plan do you need to follow? How should you sequence your features to engage with users more quickly and what adds the most business value? &lt;/p&gt;

&lt;p&gt;None of those were technical questions but they all need answering if you want to build an actual product. &lt;/p&gt;

&lt;h2&gt;
  
  
  Not All Is Lost Though
&lt;/h2&gt;

&lt;p&gt;If you want to vibe code, then that's great. Use it to learn. Ask questions. Read widely - not just about code but about all the disciplines that make a product. Throw away what you created and start again. Use multiple tools and learn how to master them. &lt;/p&gt;

&lt;p&gt;The future absolutely contains 'vibe coder' as a role - but it will be someone with sufficient inter-disciplinary experience to oversee the AI tools they're using. You don't need mile-deep knowledge in everything, but you do need mile-wide, inch-deep understanding across the board to even have a chance of asking the right questions. &lt;/p&gt;

&lt;p&gt;It's not a magic box. It's a set of tools. &lt;/p&gt;

&lt;p&gt;Master them and you can create something extraordinary. &lt;/p&gt;

&lt;p&gt;Shout into the void and a yellow HTML canvas can be yours. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>discuss</category>
      <category>coding</category>
      <category>software</category>
    </item>
    <item>
      <title>Vibe Coding Isn't Bad, You Are</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Wed, 04 Mar 2026 15:27:31 +0000</pubDate>
      <link>https://forem.com/guypowell/vibe-coding-isnt-bad-you-are-b4l</link>
      <guid>https://forem.com/guypowell/vibe-coding-isnt-bad-you-are-b4l</guid>
      <description>&lt;p&gt;I cannot get through a day without reading that everything produced by vibe coding is awful. The code isn’t production ready, config values are hardcoded, secrets leak out on the client side and entire databases are exposed. It’s best to not even consider a multi-tenant SaaS solution. &lt;/p&gt;

&lt;p&gt;And we’re in a polarised world – so everyone is either for or against vibe coding. &lt;/p&gt;

&lt;p&gt;The vibers? It democratises development, anyone can put their ideas into practice, and it saves a tonne of time testing a concept before investing months and thousands of dollars. &lt;/p&gt;

&lt;p&gt;The haters? The quality is poor, developers will lose their jobs, it’s just AI generated slop, it’s impossible to finish off an idea when things get complicated. &lt;/p&gt;

&lt;p&gt;Is vibe coding actually awful, or are you just doing it wrong? &lt;/p&gt;

&lt;h2&gt;
  
  
  It’s A Tool, Not A Religion
&lt;/h2&gt;

&lt;p&gt;Vibe coding isn’t a philosophical principle. It’s a set of tools. If you gave me a bag of tools and some raw materials and told me to build a house, I’d laugh and politely decline. That’s not my skillset. &lt;/p&gt;

&lt;p&gt;If you gave me a construction crew, told me that they’ll do what I tell them but no more and asked me to build a house – I might be tempted to have a go. &lt;/p&gt;

&lt;p&gt;I’d most likely not give the foundation depth enough thought, miss a huge amount of bracing in the roof and put bathrooms all over the place even if the pipework made no sense. I’d have an idea of what I wanted, but at the end of it the house would fall down – or construction would never finish. &lt;/p&gt;

&lt;p&gt;I shouldn’t kid myself about construction. I’ve got a vague idea what a lintel is, and a reasonable idea of how plumbing and electrical wiring work, but I have none of the skills to design, oversee or construct a house. For that reason, I am also woefully unqualified to oversee a team of people following my instructions. &lt;/p&gt;

&lt;p&gt;This is vibe coding. &lt;/p&gt;

&lt;p&gt;You’re given an incredibly capable team of experts that will help you – but the gotcha is they’ll do only what you ask and nothing more. &lt;/p&gt;

&lt;h2&gt;
  
  
  My Buddy Claude
&lt;/h2&gt;

&lt;p&gt;I started working with Claude last July. I signed up, started a subscription and shouted “create me a game in the style of Monkey Island but based around a cat that rides a crinkle cut chip” into the ether. &lt;/p&gt;

&lt;p&gt;First I had a yellow HTML canvas with a rectangle on it. &lt;/p&gt;

&lt;p&gt;Then I had several failed Unity projects that wouldn’t build. &lt;/p&gt;

&lt;p&gt;Finally, I created a console app that was merely pretty awful. &lt;/p&gt;

&lt;p&gt;I’d spent $200 on AI credits. Oy. &lt;/p&gt;

&lt;p&gt;We’ve evolved since then. I started using Claude to work through architectural ideas. I’ve written previously about the five-day coding binge where I worked with Claude to create my own programming language, compiler, assembler and virtual machine from scratch with zero experience. But Claude wrote none of the code. &lt;/p&gt;

&lt;p&gt;Over the last three months things have changed, and Claude probably writes 75% of my code. It’s a combination of model improvements and how I manage Claude. &lt;/p&gt;

&lt;p&gt;I no longer treat it as a fount of all knowledge – instead I tightly define what I want done, I focus on testing, I review as if a member of my team had done the work, and I iterate. &lt;/p&gt;

&lt;p&gt;The key to it all, though, is that I define exactly what I want it to create, I can verify that it’s correct, and it goes through exactly the same review process as code that a member of my team would write. &lt;/p&gt;

&lt;p&gt;I can do this because I’ve been coding since I was seven and thirty-five years later, I have a fairly good idea how this is supposed to work. But I still couldn’t vibe code a Unity app as I have no experience in that sphere. &lt;/p&gt;

&lt;p&gt;With any tool you need to know how to use it. Vibe coding tools are no different. &lt;/p&gt;

&lt;p&gt;I’m sorry – but vibe coding isn’t the problem. The users are. &lt;/p&gt;

&lt;h2&gt;
  
  
  If You Can’t Code, You Can’t Vibe Code
&lt;/h2&gt;

&lt;p&gt;I can code. I get phenomenal results out of AI coding tools now. This year they have finally reached the ‘code per hour’ that I can produce, giving me the ability to oversee rather than code. &lt;/p&gt;

&lt;p&gt;But if you can’t code, that is not going to be your experience. &lt;/p&gt;

&lt;p&gt;Sure, you can create a quick proof of concept and see if an idea works. But it will be no better than my theoretically wonky house.  It will fall down. &lt;/p&gt;

&lt;p&gt;To use any vibe coding tool, you need to understand code, architecture, security, performance, deployments and monitoring. If you don’t know what to ask for, the AI won’t magically think of it for you. &lt;/p&gt;

&lt;p&gt;And if you’re a developer and you know all that? It still might not be for you. What feature are you building? What does a user need? What should it look like? What kind of test plan do you need to follow? How should you sequence your features to engage with users more quickly and what adds the most business value? &lt;/p&gt;

&lt;p&gt;None of those were technical questions but they all need answering if you want to build an actual product. &lt;/p&gt;

&lt;h2&gt;
  
  
  Not All Is Lost Though
&lt;/h2&gt;

&lt;p&gt;If you want to vibe code, then that's great. Use it to learn. Ask questions. Read widely - not just about code but about all the disciplines that make a product. Throw away what you created and start again. Use multiple tools and learn how to master them. &lt;/p&gt;

&lt;p&gt;The future absolutely contains 'vibe coder' as a role - but it will be someone with sufficient inter-disciplinary experience to oversee the AI tools they're using. You don't need mile-deep knowledge in everything, but you do need mile-wide, inch-deep understanding across the board to even have a chance of asking the right questions. &lt;/p&gt;

&lt;p&gt;It's not a magic box. It's a set of tools. &lt;/p&gt;

&lt;p&gt;Master them and you can create something extraordinary. &lt;/p&gt;

&lt;p&gt;Shout into the void and a yellow HTML canvas can be yours. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>discuss</category>
      <category>coding</category>
      <category>software</category>
    </item>
    <item>
      <title>Your AI Product Is Not A Real Business</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Mon, 23 Feb 2026 10:19:57 +0000</pubDate>
      <link>https://forem.com/brunellyai/your-ai-product-is-not-a-real-business-4knn</link>
      <guid>https://forem.com/brunellyai/your-ai-product-is-not-a-real-business-4knn</guid>
      <description>&lt;p&gt;I just got back from STEP 2026 in Dubai. Whilst there were some genuinely amazing businesses there, I also saw a lot of companies that won’t make their first year. &lt;/p&gt;

&lt;p&gt;Most startups now splash AI onto all their marketing. AI is not your product. AI itself does not deliver business value. Unless you are a frontier lab, AI is nothing more than a tool in your stack. Nobody is there shouting ‘MongoDB-enabled trading platform’. &lt;/p&gt;

&lt;p&gt;Users don’t care if it’s AI. Investors don’t care if it’s AI. They care about what it does, what problem it solves and whether there’s space for it in the market. &lt;/p&gt;

&lt;p&gt;And if you want to sell to real businesses? I've sat across the table from $5bn consultancies evaluating AI tools. They ask about your architecture, your data residency, how to deploy it on-prem and what you actually own. If the answer is 'we call the OpenAI API' – the meeting is over. &lt;/p&gt;

&lt;h2&gt;
  
  
  Wrappers… Everywhere
&lt;/h2&gt;

&lt;p&gt;There are tens of thousands of AI startups right now whose core premise is: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vague idea about product &lt;/li&gt;
&lt;li&gt;Put a bit of a wrapper around an AI model &lt;/li&gt;
&lt;li&gt;Display it to the user &lt;/li&gt;
&lt;li&gt;Charge $29/month&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not a business. Your users could most likely just use ChatGPT – why would they want another subscription? &lt;/p&gt;

&lt;p&gt;It’s not defensible. There’s no IP there. There’s nothing unique. On the contrary, your whole business is at risk from changes to a model. &lt;/p&gt;

&lt;p&gt;Remember when everyone built apps on top of Twitter and then Twitter changed its API rules overnight? That can happen to you if you’re just wrapping a model. It’s even worse here, as the frontier labs have an incentive to compete against you the moment you come up with a good, simple idea. &lt;/p&gt;

&lt;p&gt;Let’s not even get into the fact that you’re exposed to a huge cost base where you aren’t in control of input or output tokens and just rack up an AI bill behind the scenes. &lt;/p&gt;

&lt;p&gt;The playbook right now seems to be: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Wrapper launches and gets traction &lt;/li&gt;
&lt;li&gt;Model provider notices traction &lt;/li&gt;
&lt;li&gt;Model provider adds features to handle some of this in house &lt;/li&gt;
&lt;li&gt;Business case evaporates &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You’re doing market research for OpenAI – and they can execute better than you can. &lt;/p&gt;

&lt;p&gt;Stop doing this. &lt;/p&gt;

&lt;h2&gt;
  
  
  Vibe Coding Is Making This Worse
&lt;/h2&gt;

&lt;p&gt;My most successful summary of &lt;a href="https://go.brunelly.com/devto" rel="noopener noreferrer"&gt;Brunelly&lt;/a&gt; at STEP 2026 was ‘You know what vibe coding is right? We’re the opposite of that. We actually create real-world enterprise quality software’. &lt;/p&gt;

&lt;p&gt;That has to be the opener because vibe coding has got such a bad reputation in the real world. Security, testing, scalability, deployments, infrastructure management, compliance – all non-existent. &lt;/p&gt;

&lt;p&gt;And vibe coded AI products take the worst of all worlds: the simplest AI wrapper around some basic CRUD operations, lacking any scalability. &lt;/p&gt;

&lt;p&gt;Please stop. &lt;/p&gt;

&lt;h2&gt;
  
  
  There’s A Better Way To Do AI
&lt;/h2&gt;

&lt;p&gt;I’ve spent the last year building Maitento – our AI native operating system. Think of it as a cross between Unix and AWS, but AI native. Models are drivers. There are different process types (Linux containers, AIs interacting with each other, apps developed in our own programming language, code generation orchestration). Every agent can connect to any OpenAPI or MCP server out there. Applications are defined declaratively. Shell. RAG. Memory system. Context management. Multi-modal. There’s a lot. &lt;/p&gt;

&lt;p&gt;This is the iceberg we needed to create a real enterprise-ready AI-enabled application. &lt;/p&gt;

&lt;p&gt;Why did we need it? Extensibility. Quality. Scalability. Performance. Speed of development. Duct-taping a bunch of Python scripts together didn’t cut it. &lt;/p&gt;

&lt;p&gt;I’m not saying you need the level of orchestration that we have – but I wanted to emphasise that the moving pieces in enterprise-grade AI orchestration are far more complex than a single API call. &lt;/p&gt;

&lt;p&gt;Do you think ChatGPT is just a wrapper around their own API with some system prompts?  There’s file management, prompt injection detection, context analysis, memory management, rolling context windows, deployments, scalability, backend queueing, real-time streaming across millions of users, multi-modal input, distributed Python execution environments. ChatGPT itself has a ‘call the model’ step but it’s the tiniest part of the overall infrastructure. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Truth
&lt;/h2&gt;

&lt;p&gt;It’s easy to call an API. It’s far harder to build real infrastructure than many founders realise. &lt;/p&gt;

&lt;p&gt;Founders want to ship, so they rush to deliver. But that doesn’t mean you’re actually building a business – you’re building a tech demo. &lt;/p&gt;

&lt;p&gt;A demo is not a product. It’s a controlled environment that doesn’t replicate reality. &lt;/p&gt;

&lt;p&gt;The gap between impressive demo and production-grade product in AI is wider than in any other category of software. Because AI systems fail in ways that traditional software doesn't. They hallucinate, they lose context, they confidently produce wrong outputs.  &lt;/p&gt;

&lt;p&gt;Managing that failure mode requires infrastructure. Real infrastructure. Not a try/catch block around an API call. &lt;/p&gt;

&lt;h2&gt;
  
  
  Build Something That Matters
&lt;/h2&gt;

&lt;p&gt;The AI gold rush is producing a lot of shovels. &lt;/p&gt;

&lt;p&gt;Most of those shovels are made of cardboard. &lt;/p&gt;

&lt;p&gt;The companies that will still exist in five years are the ones building real infrastructure today. Not just calling APIs. Not chaining prompts. Not wrapping someone else's intelligence in a pretty interface and calling it innovation. &lt;/p&gt;

&lt;p&gt;Build the thing that's hard to build. That's the only strategy that works. It always has been. &lt;/p&gt;

&lt;p&gt;If you were able to build it in a few days, so can anyone else. &lt;/p&gt;

&lt;p&gt;If it’s difficult for you then it is for your competitors. &lt;/p&gt;

&lt;p&gt;And then you may actually have a genuinely novel business. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>opensource</category>
      <category>devops</category>
    </item>
    <item>
      <title>Your AI Product Is Not A Real Business</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Mon, 23 Feb 2026 10:08:59 +0000</pubDate>
      <link>https://forem.com/guypowell/your-ai-product-is-not-a-real-business-5cac</link>
      <guid>https://forem.com/guypowell/your-ai-product-is-not-a-real-business-5cac</guid>
      <description>&lt;p&gt;I just got back from STEP 2026 in Dubai. Whilst there were some genuinely amazing businesses there, I also saw a lot of companies that won’t make their first year. &lt;/p&gt;

&lt;p&gt;Most startups now splash AI onto all their marketing. AI is not your product. AI itself does not deliver business value. Unless you are a frontier lab, AI is nothing more than a tool in your stack. Nobody is there shouting ‘MongoDB-enabled trading platform’. &lt;/p&gt;

&lt;p&gt;Users don’t care if it’s AI. Investors don’t care if it’s AI. They care about what it does, what problem it solves and whether there’s space for it in the market. &lt;/p&gt;

&lt;p&gt;And if you want to sell to real businesses? I've sat across the table from $5bn consultancies evaluating AI tools. They ask about your architecture, your data residency, how to deploy it on-prem and what you actually own. If the answer is 'we call the OpenAI API' – the meeting is over. &lt;/p&gt;

&lt;h2&gt;
  
  
  Wrappers… Everywhere
&lt;/h2&gt;

&lt;p&gt;There are tens of thousands of AI startups right now whose core premise is: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vague idea about product &lt;/li&gt;
&lt;li&gt;Put a bit of a wrapper around an AI model &lt;/li&gt;
&lt;li&gt;Display it to the user &lt;/li&gt;
&lt;li&gt;Charge $29/month&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not a business. Your users could most likely just use ChatGPT – why would they want another subscription? &lt;/p&gt;

&lt;p&gt;It’s not defensible. There’s no IP there. There’s nothing unique. On the contrary, your whole business is at risk from changes to a model. &lt;/p&gt;

&lt;p&gt;Remember when everyone built apps on top of Twitter and then Twitter changed its API rules overnight? That can happen to you if you’re just wrapping a model. It’s even worse here, as the frontier labs have an incentive to compete against you the moment you come up with a good, simple idea. &lt;/p&gt;

&lt;p&gt;Let’s not even get into the fact that you’re exposed to a huge cost base where you aren’t in control of input or output tokens and just rack up an AI bill behind the scenes. &lt;/p&gt;

&lt;p&gt;The playbook right now seems to be: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Wrapper launches and gets traction &lt;/li&gt;
&lt;li&gt;Model provider notices traction &lt;/li&gt;
&lt;li&gt;Model provider adds features to handle some of this in house &lt;/li&gt;
&lt;li&gt;Business case evaporates &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You’re doing market research for OpenAI – and they can execute better than you can. &lt;/p&gt;

&lt;p&gt;Stop doing this. &lt;/p&gt;

&lt;h2&gt;
  
  
  Vibe Coding Is Making This Worse
&lt;/h2&gt;

&lt;p&gt;My most successful summary of &lt;a href="https://go.brunelly.com/devto" rel="noopener noreferrer"&gt;Brunelly&lt;/a&gt; at STEP 2026 was ‘You know what vibe coding is right? We’re the opposite of that. We actually create real-world enterprise quality software’. &lt;/p&gt;

&lt;p&gt;That has to be the opener because vibe coding has got such a bad reputation in the real world. Security, testing, scalability, deployments, infrastructure management, compliance – all non-existent. &lt;/p&gt;

&lt;p&gt;And vibe coded AI products take the worst of all worlds: the simplest AI wrapper around some basic CRUD operations, lacking any scalability. &lt;/p&gt;

&lt;p&gt;Please stop. &lt;/p&gt;

&lt;h2&gt;
  
  
  There’s A Better Way To Do AI
&lt;/h2&gt;

&lt;p&gt;I’ve spent the last year building Maitento – our AI native operating system. Think of it as a cross between Unix and AWS, but AI native. Models are drivers. There are different process types (Linux containers, AIs interacting with each other, apps developed in our own programming language, code generation orchestration). Every agent can connect to any OpenAPI or MCP server out there. Applications are defined declaratively. Shell. RAG. Memory system. Context management. Multi-modal. There’s a lot. &lt;/p&gt;

&lt;p&gt;This is the iceberg we needed to create a real enterprise-ready AI-enabled application. &lt;/p&gt;

&lt;p&gt;Why did we need it? Extensibility. Quality. Scalability. Performance. Speed of development. Duct-taping a bunch of Python scripts together didn’t cut it. &lt;/p&gt;

&lt;p&gt;I’m not saying you need the level of orchestration that we have – but I wanted to emphasise that the moving pieces in enterprise-grade AI orchestration are far more complex than a single API call. &lt;/p&gt;

&lt;p&gt;Do you think ChatGPT is just a wrapper around their own API with some system prompts?  There’s file management, prompt injection detection, context analysis, memory management, rolling context windows, deployments, scalability, backend queueing, real-time streaming across millions of users, multi-modal input, distributed Python execution environments. ChatGPT itself has a ‘call the model’ step but it’s the tiniest part of the overall infrastructure. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Truth
&lt;/h2&gt;

&lt;p&gt;It’s easy to call an API. It’s far harder to build real infrastructure than many founders realise. &lt;/p&gt;

&lt;p&gt;Founders want to ship, so they rush to deliver. But that doesn’t mean you’re actually building a business – you’re building a tech demo. &lt;/p&gt;

&lt;p&gt;A demo is not a product. It’s a controlled environment that doesn’t replicate reality. &lt;/p&gt;

&lt;p&gt;The gap between impressive demo and production-grade product in AI is wider than in any other category of software. Because AI systems fail in ways that traditional software doesn't. They hallucinate, they lose context, they confidently produce wrong outputs.  &lt;/p&gt;

&lt;p&gt;Managing that failure mode requires infrastructure. Real infrastructure. Not a try/catch block around an API call. &lt;/p&gt;

&lt;h2&gt;
  
  
  Build Something That Matters
&lt;/h2&gt;

&lt;p&gt;The AI gold rush is producing a lot of shovels. &lt;/p&gt;

&lt;p&gt;Most of those shovels are made of cardboard. &lt;/p&gt;

&lt;p&gt;The companies that will still exist in five years are the ones building real infrastructure today. Not just calling APIs. Not chaining prompts. Not wrapping someone else's intelligence in a pretty interface and calling it innovation. &lt;/p&gt;

&lt;p&gt;Build the thing that's hard to build. That's the only strategy that works. It always has been. &lt;/p&gt;

&lt;p&gt;If you were able to build it in a few days, so can anyone else. &lt;/p&gt;

&lt;p&gt;If it’s difficult for you then it is for your competitors. &lt;/p&gt;

&lt;p&gt;And then you may actually have a genuinely novel business. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How Many Rs Are There Really In Strawberry? AI Is So Stupid</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Mon, 16 Feb 2026 15:16:12 +0000</pubDate>
      <link>https://forem.com/brunellyai/how-many-rs-are-there-really-in-strawberry-ai-is-so-stupid-2f5k</link>
      <guid>https://forem.com/brunellyai/how-many-rs-are-there-really-in-strawberry-ai-is-so-stupid-2f5k</guid>
      <description>&lt;p&gt;How many Rs are there in the word &lt;em&gt;strawberry&lt;/em&gt;? AI can’t tell you. Apparently. You’ve all seen it. Screenshots, Reddit threads, smug tweets. Models tripping over letters like toddlers. Everyone pointing and laughing. Reassuring stuff.&lt;/p&gt;

&lt;p&gt;Wind the clock back a little.&lt;/p&gt;

&lt;p&gt;It’s 2023. Image generation is exploding. It’s magical. Also: why does that hand have five fingers &lt;em&gt;and&lt;/em&gt; a thumb?&lt;/p&gt;

&lt;p&gt;A year later and we’ve uncovered a new, devastating limitation. AI cannot render a wine glass completely full. Half the internet concludes: preposterous technology, case closed.&lt;/p&gt;

&lt;p&gt;By 2025 things are truly dire. Models still can’t reliably count the Rs in strawberry. Ask for a seahorse emoji and they spiral into what looks suspiciously like an existential crisis.&lt;/p&gt;

&lt;p&gt;These examples don’t matter. Not really.&lt;/p&gt;

&lt;p&gt;What’s interesting is how obsessively we return to them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cgdimx9gp85p6r599mw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cgdimx9gp85p6r599mw.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  It Will Never Be Able To Code Though
&lt;/h2&gt;

&lt;p&gt;The memes are obvious if you use AI regularly. But this reflex isn’t limited to casual users. Technical people do it too and often more loudly.&lt;/p&gt;

&lt;p&gt;Early 2023: ChatGPT can spit out a half-decent for loop. Sometimes it even answers technical questions correctly. Incredible. But obviously it can’t build an app.&lt;/p&gt;

&lt;p&gt;Late 2024: we’ve got basic code-generation tools. Still, no danger. It makes too many mistakes. Barely junior level.&lt;/p&gt;

&lt;p&gt;2025: the year of the vibe coder. Suddenly everyone can spin up a website. Sure, it’s riddled with security holes and questionable decisions. So again: no threat. We’ll just clean it up. AI is junk.&lt;/p&gt;

&lt;p&gt;For years now, we’ve watched models repeatedly blow past their previous ceilings. Each time, the criticism simply slides sideways to the next obvious limitation.&lt;/p&gt;

&lt;p&gt;Reddit is still full of people pointing out how stupid AI is. They’re not wrong. They’re just always late and missing the important part.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Is AI Stupid Though?
&lt;/h2&gt;

&lt;p&gt;Before getting philosophical, it’s worth grounding this in reality. These glitches exist for reasons. If you’re building with AI, you need to understand them.&lt;/p&gt;

&lt;p&gt;How often have you seen a photograph of a wine glass filled perfectly to the brim? Until recently: almost never. That means the model hasn’t seen one either. It’s not failing, it’s interpolating from a deeply human dataset.&lt;/p&gt;

&lt;p&gt;Why do seahorse emojis cause chaos? Because at some point the internet collectively decided a seahorse emoji existed. Reddit talked about it. Joked about it. Imagined it. The model learns that a seahorse emoji is plausible and goes to insert it. Then, mid-generation, it realises that it doesn’t exist and starts chasing its own tail ad infinitum.&lt;/p&gt;

&lt;p&gt;Why does AI-generated code contain errors? Because it’s trained on Stack Overflow, blogs, gists, half-finished examples and heroic hacks. You didn’t ask it to be secure. You didn’t constrain it. It’s doing exactly what humanity taught it to do.&lt;/p&gt;

&lt;p&gt;People say AI is a mirror to the user.  It’s also a mirror to humanity… and a lot of what we’re seeing reflected back isn’t flattering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4k237de0q7kxi8j29s76.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4k237de0q7kxi8j29s76.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Does It Matter?
&lt;/h2&gt;

&lt;p&gt;Because this isn’t abstract. It has real consequences – for society and for anyone building real products with AI baked in. If you’re developing on top of AI and you don’t understand how it fails, you’re already in trouble.&lt;/p&gt;

&lt;p&gt;At Brunelly we assume AI is an intern who found a 20-year-old Stack Overflow answer and ran with it. We prompt heavily, guide explicitly, and still don’t trust the output. Everything passes through multiple agents to surface bugs, performance issues, and security concerns.&lt;/p&gt;

&lt;p&gt;The only viable starting point is: it will underperform… so how do we correct it?&lt;/p&gt;

&lt;p&gt;But this misunderstanding goes wider than product design.&lt;/p&gt;

&lt;p&gt;Stack Overflow is effectively dead. Let that sink in. Once the backbone of developer knowledge, now barely visited. Why? Because ChatGPT gives faster, better, contextual answers.&lt;/p&gt;

&lt;p&gt;Music, images, stock photography – already flooded. Half of the lo-fi playlists on Spotify are AI-generated. We just stopped calling it slop.&lt;br&gt;
Remember when everyone complained about AI slop in early 2025?  Bad news: it’s still AI. It’s just a lot less sloppy.&lt;/p&gt;

&lt;p&gt;Jobs are changing. Trust is changing. Evidence is changing.  When you can’t trust photos, videos, reviews or faces then everything downstream shifts with it.&lt;/p&gt;

&lt;p&gt;If you’re focused on strawberries, you’re going to wake up one day and wonder when the world quietly re-organised itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkbetnv533sgdw2w60jz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkbetnv533sgdw2w60jz.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Do We Fixate Though?
&lt;/h2&gt;

&lt;p&gt;Because known failure modes are comforting.&lt;/p&gt;

&lt;p&gt;They give us a boundary. Something to point at. Something to laugh at. A place where we still feel safely on top.&lt;/p&gt;

&lt;p&gt;Finding a bug in YouTube is annoying. Finding a bug in AI is reassuring.&lt;br&gt;
The problem is that these failures don’t last.&lt;/p&gt;

&lt;p&gt;Our mental model of AI already lags reality, and that gap is widening. Even if AI progress stopped tomorrow, it would take years for organisations to fully exploit what already exists. Orchestration is immature. Skills are scarce. Understanding is shallow.&lt;/p&gt;

&lt;p&gt;This isn’t about whether LLMs lead to AGI or consciousness. It doesn’t matter. The systems we already have are enough to reshape everything if we actually learn how to use them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Does It Mean For Builders?
&lt;/h2&gt;

&lt;p&gt;It means stability is gone. The model you used last month is obsolete. The workaround you wrote last week no longer applies. Every solved edge case is replaced by three new ones.&lt;/p&gt;

&lt;p&gt;This isn’t like JavaScript frameworks. This is orders of magnitude faster.&lt;br&gt;
You have to design for an environment that mutates continuously. Trust becomes a UX problem, not a marketing one. AI labels actively reduce confidence.&lt;/p&gt;

&lt;p&gt;Textbox-and-send is not a product strategy.&lt;/p&gt;

&lt;p&gt;Trust nothing. Convert outputs into constrained state machines. Design experiences that absorb failure gracefully.&lt;/p&gt;

&lt;p&gt;We didn’t build Brunelly because AI is magical. We built it because AI is a tool that can be harnessed and nobody else was doing it right. And the orchestrator underneath it evolves almost as fast as the models themselves – because it has to.&lt;/p&gt;

&lt;h2&gt;
  
  
  And What Does It Mean For All Of Us?
&lt;/h2&gt;

&lt;p&gt;That’s the real question.&lt;/p&gt;

&lt;p&gt;I was coding in the 90s during the original internet boom and bust. It wasn’t like this. Code lasted years. Systems were stable. Patterns endured.&lt;/p&gt;

&lt;p&gt;This time is different – not because the tech is smarter, but because the pace is relentless.&lt;/p&gt;

&lt;p&gt;Laughing at AI’s mistakes is fine. It &lt;em&gt;is&lt;/em&gt; funny.  But it’s also a distraction.&lt;/p&gt;

&lt;p&gt;Assume the world is changing before you notice it.&lt;/p&gt;

&lt;p&gt;If you’re building: design for failure, assume the system will outgrow you mid-flight, and plan accordingly.&lt;/p&gt;

&lt;p&gt;And maybe stop counting Rs.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>architecture</category>
      <category>programming</category>
    </item>
    <item>
      <title>How Many Rs Are In Strawberry: AI Is So Stupid</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Mon, 16 Feb 2026 14:42:21 +0000</pubDate>
      <link>https://forem.com/guypowell/how-many-rs-are-in-strawberry-ai-is-so-stupid-27e3</link>
      <guid>https://forem.com/guypowell/how-many-rs-are-in-strawberry-ai-is-so-stupid-27e3</guid>
      <description>&lt;p&gt;How many Rs are there in the word &lt;em&gt;strawberry&lt;/em&gt;? AI can’t tell you. Apparently. You’ve all seen it. Screenshots, Reddit threads, smug tweets. Models tripping over letters like toddlers. Everyone pointing and laughing. Reassuring stuff.&lt;/p&gt;

&lt;p&gt;Wind the clock back a little.&lt;/p&gt;

&lt;p&gt;It’s 2023. Image generation is exploding. It’s magical. Also: why does that hand have five fingers &lt;em&gt;and&lt;/em&gt; a thumb?&lt;/p&gt;

&lt;p&gt;A year later and we’ve uncovered a new, devastating limitation. AI cannot render a wine glass completely full. Half the internet concludes: preposterous technology, case closed.&lt;/p&gt;

&lt;p&gt;By 2025 things are truly dire. Models still can’t reliably count the Rs in strawberry. Ask for a seahorse emoji and they spiral into what looks suspiciously like an existential crisis.&lt;/p&gt;

&lt;p&gt;These examples don’t matter. Not really.&lt;/p&gt;

&lt;p&gt;What’s interesting is how obsessively we return to them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F063pyj9cfnp2actawt66.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F063pyj9cfnp2actawt66.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  It Will Never Be Able To Code Though
&lt;/h2&gt;

&lt;p&gt;The memes are obvious if you use AI regularly. But this reflex isn’t limited to casual users. Technical people do it too and often more loudly.&lt;/p&gt;

&lt;p&gt;Early 2023: ChatGPT can spit out a half-decent for loop. Sometimes it even answers technical questions correctly. Incredible. But obviously it can’t build an app.&lt;/p&gt;

&lt;p&gt;Late 2024: we’ve got basic code-generation tools. Still, no danger. It makes too many mistakes. Barely junior level.&lt;/p&gt;

&lt;p&gt;2025: the year of the vibe coder. Suddenly everyone can spin up a website. Sure, it’s riddled with security holes and questionable decisions. So again: no threat. We’ll just clean it up. AI is junk.&lt;/p&gt;

&lt;p&gt;For years now, we’ve watched models repeatedly blow past their previous ceilings. Each time, the criticism simply slides sideways to the next obvious limitation.&lt;/p&gt;

&lt;p&gt;Reddit is still full of people pointing out how stupid AI is. They’re not wrong. They’re just always late and missing the important part.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Is AI Stupid Though?
&lt;/h2&gt;

&lt;p&gt;Before getting philosophical, it’s worth grounding this in reality. These glitches exist for reasons. If you’re building with AI, you need to understand them.&lt;/p&gt;

&lt;p&gt;How often have you seen a photograph of a wine glass filled perfectly to the brim? Until recently: almost never. Which means the model hasn’t either. It’s not failing; it’s interpolating from a deeply human dataset.&lt;/p&gt;

&lt;p&gt;Why do seahorse emojis cause chaos? Because at some point the internet collectively decided a seahorse emoji existed. Reddit talked about it. Joked about it. Imagined it. The model learns that a seahorse emoji is plausible and goes to insert it. Then, mid-generation, it realises that it doesn’t exist and starts chasing its own tail ad infinitum.&lt;/p&gt;

&lt;p&gt;Why does AI-generated code contain errors? Because it’s trained on Stack Overflow, blogs, gists, half-finished examples and heroic hacks. You didn’t ask it to be secure. You didn’t constrain it. It’s doing exactly what humanity taught it to do.&lt;/p&gt;

&lt;p&gt;People say AI is a mirror to the user. It’s also a mirror to humanity… and a lot of what we’re seeing reflected back isn’t flattering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvg2zxa518m31apy9wjf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvg2zxa518m31apy9wjf.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Does It Matter?
&lt;/h2&gt;

&lt;p&gt;Because this isn’t abstract. It has real consequences – for society and for anyone building real products with AI baked in. If you’re developing on top of AI and you don’t understand how it fails, you’re already in trouble.&lt;/p&gt;

&lt;p&gt;At Brunelly we assume AI is an intern who found a 20-year-old Stack Overflow answer and ran with it. We prompt heavily, guide explicitly, and still don’t trust the output. Everything passes through multiple agents to surface bugs, performance issues, and security concerns.&lt;/p&gt;

&lt;p&gt;The only viable starting point is: it will underperform… so how do we correct it?&lt;/p&gt;

&lt;p&gt;But this misunderstanding goes wider than product design.&lt;/p&gt;

&lt;p&gt;Stack Overflow is effectively dead. Let that sink in. Once the backbone of developer knowledge, now barely visited. Why? Because ChatGPT gives faster, better, contextual answers.&lt;/p&gt;

&lt;p&gt;Music, images, stock photography – already flooded. Half of the lo-fi playlists on Spotify are AI-generated. We just stopped calling it slop.&lt;br&gt;
Remember when everyone complained about AI slop in early 2025? Bad news: it’s still AI. It’s just a lot less sloppy.&lt;/p&gt;

&lt;p&gt;Jobs are changing. Trust is changing. Evidence is changing. When you can’t trust photos, videos, reviews or faces then everything downstream shifts with it.&lt;/p&gt;

&lt;p&gt;If you’re focused on strawberries, you’re going to wake up one day and wonder when the world quietly re-organised itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5akk82rojiza7ef6058c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5akk82rojiza7ef6058c.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Do We Fixate Though?
&lt;/h2&gt;

&lt;p&gt;Because known failure modes are comforting.&lt;/p&gt;

&lt;p&gt;They give us a boundary. Something to point at. Something to laugh at. A place where we still feel safely on top.&lt;/p&gt;

&lt;p&gt;Finding a bug in YouTube is annoying. Finding a bug in AI is reassuring.&lt;br&gt;
The problem is that these failures don’t last.&lt;/p&gt;

&lt;p&gt;Our mental model of AI already lags reality, and that gap is widening. &lt;/p&gt;

&lt;p&gt;Even if AI progress stopped tomorrow, it would take years for organisations to fully exploit what already exists. Orchestration is immature. Skills are scarce. Understanding is shallow.&lt;/p&gt;

&lt;p&gt;This isn’t about whether LLMs lead to AGI or consciousness. It doesn’t matter. The systems we already have are enough to reshape everything if we actually learn how to use them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Does It Mean For Builders?
&lt;/h2&gt;

&lt;p&gt;It means stability is gone. The model you used last month is obsolete. The workaround you wrote last week no longer applies. Every solved edge case is replaced by three new ones.&lt;/p&gt;

&lt;p&gt;This isn’t like JavaScript frameworks. This is orders of magnitude faster.&lt;br&gt;
You have to design for an environment that mutates continuously. Trust becomes a UX problem, not a marketing one. AI labels actively reduce confidence.&lt;/p&gt;

&lt;p&gt;Textbox-and-send is not a product strategy.&lt;/p&gt;

&lt;p&gt;Trust nothing. Convert outputs into constrained state machines. Design experiences that absorb failure gracefully.&lt;/p&gt;
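
&lt;p&gt;One possible reading of “constrained state machines”, as a minimal sketch with invented states: accept a model’s proposed action only if it is a legal transition, and absorb anything else gracefully.&lt;/p&gt;

```python
# Minimal sketch of constraining model output to a state machine.
# The states and transitions are illustrative, not from any real product.
TRANSITIONS = {
    "draft":     {"review", "draft"},
    "review":    {"approved", "draft"},
    "approved":  {"published"},
    "published": set(),
}

def apply_model_action(state, proposed):
    """Accept the model's proposed next state only if it is a legal
    transition; otherwise stay put and flag it for graceful handling."""
    if proposed in TRANSITIONS.get(state, set()):
        return proposed, None
    return state, f"rejected: {state} to {proposed}"

state, err = apply_model_action("review", "published")  # illegal jump
# state stays "review"; err explains why
```

&lt;p&gt;The point isn’t the toy states – it’s that the model never drives the system directly; it only proposes moves that deterministic code is free to refuse.&lt;/p&gt;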

&lt;p&gt;We didn’t build Brunelly because AI is magical. We built it because AI is a tool that can be harnessed and nobody else was doing it right. And the orchestrator underneath it evolves almost as fast as the models themselves – because it has to.&lt;/p&gt;

&lt;h2&gt;
  
  
  And What Does It Mean For All Of Us?
&lt;/h2&gt;

&lt;p&gt;That’s the real question.&lt;/p&gt;

&lt;p&gt;I was coding in the 90s during the original internet boom and bust. It wasn’t like this. Code lasted years. Systems were stable. Patterns endured.&lt;/p&gt;

&lt;p&gt;This time is different – not because the tech is smarter, but because the pace is relentless.&lt;/p&gt;

&lt;p&gt;Laughing at AI’s mistakes is fine. It is funny.  But it’s also a distraction.&lt;/p&gt;

&lt;p&gt;Assume the world is changing before you notice it.&lt;/p&gt;

&lt;p&gt;If you’re building: design for failure, assume the system will outgrow you mid-flight, and plan accordingly.&lt;/p&gt;

&lt;p&gt;And maybe stop counting Rs.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>discuss</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Beyond the Vibes: Vibe Coding Changed Who Can Build, Not How Software Should Be Built</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Thu, 05 Feb 2026 06:00:00 +0000</pubDate>
      <link>https://forem.com/guypowell/beyond-the-vibes-vibe-coding-changed-who-can-build-not-how-software-should-be-built-1n0e</link>
      <guid>https://forem.com/guypowell/beyond-the-vibes-vibe-coding-changed-who-can-build-not-how-software-should-be-built-1n0e</guid>
      <description>&lt;p&gt;In the last few years, vibe coding has taken center stage by changing who can build software, but not what it takes to build it well. It is a development style defined by natural language prompts, rapid iteration, and an emphasis on getting things working fast. &lt;/p&gt;

&lt;p&gt;Powered by AI-assisted tools and accessible platforms, vibe coding has genuinely democratized building. Startups, solo devs, and even non-technical founders can now create prototypes in hours, not months. That’s worth celebrating.&lt;/p&gt;

&lt;p&gt;But as the hype grows, an important distinction is getting lost in the noise.&lt;/p&gt;

&lt;p&gt;We’re starting to confuse &lt;strong&gt;vibe coding with software engineering.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And while they both involve code, they serve very different purposes and come with very different risks. &lt;/p&gt;

&lt;h2&gt;
  
  
  Where Vibe Coding Shines
&lt;/h2&gt;

&lt;p&gt;Vibe coding works best when you’re: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Testing an idea &lt;/li&gt;
&lt;li&gt;Prototyping fast &lt;/li&gt;
&lt;li&gt;Building internal tools &lt;/li&gt;
&lt;li&gt;Exploring creatively&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It accelerates iteration and lowers the cost of experimentation. It’s a massive enabler for innovation, especially in early-stage product work.&lt;/p&gt;

&lt;p&gt;The market agrees. According to &lt;a href="https://www.rootsanalysis.com/vibe-coding-market" rel="noopener noreferrer"&gt;Roots Analysis&lt;/a&gt;, the global vibe coding market is expected to grow from &lt;strong&gt;$2.96B in 2025 to $325B by 2040&lt;/strong&gt; - a &lt;strong&gt;36.79% CAGR&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;But the faster something grows, the more important it becomes to ask: &lt;br&gt;
&lt;strong&gt;Is this still the right tool for the job?&lt;/strong&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  The Foundations Vibe Coding Often Skips
&lt;/h2&gt;

&lt;p&gt;What vibe coding often skips, and what experienced developers obsess over, are the foundations that keep systems standing: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clear, stable requirements &lt;/li&gt;
&lt;li&gt;Non-functional constraints (scale, security, latency) &lt;/li&gt;
&lt;li&gt;Architectural boundaries &lt;/li&gt;
&lt;li&gt;Testing strategies &lt;/li&gt;
&lt;li&gt;Maintainability &lt;/li&gt;
&lt;li&gt;Long-term risk &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Most Expensive Problems Don’t Show Up in Demos
&lt;/h2&gt;

&lt;p&gt;In vibe coding, it’s easy to build something that feels finished, but ultimately collapses when it’s time to expose it to real users, real load, or when it’s time to scale. We’ve seen projects that look great on the surface but require complete rewrites just to support users, integrate with systems, or handle basic growth. &lt;/p&gt;

&lt;p&gt;It’s not a failure of intent but a &lt;strong&gt;misunderstanding of complexity.&lt;/strong&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Traditional Engineering Brings Weight
&lt;/h2&gt;

&lt;p&gt;Professional software development brings structure and, with it, intentional weight:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It’s more expensive &lt;/li&gt;
&lt;li&gt;It takes longer &lt;/li&gt;
&lt;li&gt;It often requires external talent (agencies, architects, senior engineers) &lt;/li&gt;
&lt;li&gt;And it can feel heavy for early-stage work &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But when the goal is durability, this is the discipline that delivers it. You’re building something to last. You need it to handle change, load, integration, regulation - things that don’t show up in a prototype demo. &lt;/p&gt;

&lt;p&gt;Still, this is where many builders hit a wall: &lt;strong&gt;cost and speed.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  A New Middle Ground: Orchestrated Multi-Agent Systems
&lt;/h2&gt;

&lt;p&gt;So, what comes next? &lt;/p&gt;

&lt;p&gt;We believe the next evolution isn’t about choosing between speed or structure, it’s about deliberately combining both. &lt;/p&gt;

&lt;p&gt;Enter &lt;strong&gt;multi-agent systems (MAS)&lt;/strong&gt;: autonomous agents that specialize in different aspects of the software lifecycle (planning, architecture, coding, testing, optimization).&lt;/p&gt;

&lt;h2&gt;
  
  
  Without Orchestration, AI Just Scales Chaos
&lt;/h2&gt;

&lt;p&gt;Crucially, the breakthrough isn't the agents themselves. It's in the &lt;strong&gt;orchestration layer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Without orchestration, agents operate in silos. &lt;br&gt;
With orchestration, they act like a coordinated engineering team. &lt;/p&gt;

&lt;p&gt;What MAS Orchestration Enables: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sequenced collaboration&lt;/strong&gt; across AI agents (e.g. planner → coder → reviewer → tester) &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrated workflows&lt;/strong&gt; across tools, platforms, and services &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parallel execution&lt;/strong&gt; to reduce latency and speed up delivery &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintainability&lt;/strong&gt; through modular agent updates without breaking the system &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smarter fallback and reliability mechanisms&lt;/strong&gt; (e.g. retries, circuit breakers, role reassignment)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short: orchestration turns "vibe" into "system". &lt;/p&gt;
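
&lt;p&gt;As a toy sketch of what “sequenced collaboration” plus fallback can mean in code (the agents here are placeholder functions, not real LLM calls):&lt;/p&gt;

```python
# Toy sketch of sequenced agent orchestration with a retry fallback.
# The agent callables are stand-ins for real LLM-backed agents.
def run_pipeline(task, agents, max_retries=2):
    artifact = task
    for name, agent in agents:
        for attempt in range(max_retries + 1):
            try:
                artifact = agent(artifact)   # hand off to the next role
                break
            except RuntimeError:
                if attempt == max_retries:
                    raise  # fallback exhausted; surface the failure
    return artifact

agents = [
    ("planner",  lambda t: t + " planned;"),
    ("coder",    lambda t: t + " coded;"),
    ("reviewer", lambda t: t + " reviewed;"),
    ("tester",   lambda t: t + " tested"),
]
result = run_pipeline("feature:", agents)
```

&lt;p&gt;Real orchestrators add parallel branches, circuit breakers and role reassignment on top, but the core idea is the same: each agent’s output becomes the next agent’s input, under supervision.&lt;/p&gt;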

&lt;h2&gt;
  
  
  We Use This Because We Want To Ship
&lt;/h2&gt;

&lt;p&gt;At Brunelly, we didn’t adopt orchestration as a theory. We use it because we have to ship real systems. Our CTO refers to LLMs as “a slightly messier version of me.” And that is impressive. &lt;/p&gt;

&lt;p&gt;If you want to read more about Brunelly’s orchestration, check out our CTO &lt;a href="https://substack.com/@guypowell1" rel="noopener noreferrer"&gt;Guy Powell’s Substack&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Or if you prefer to test it out for yourself, feel free: it’s live! &lt;/p&gt;

&lt;h2&gt;
  
  
  Three Phases of Modern Software Building
&lt;/h2&gt;

&lt;p&gt;As we move into 2026, here’s the shift we see: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5rx08rkv766zhqwwo26x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5rx08rkv766zhqwwo26x.png" alt=" " width="719" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  You Don’t Need Extremes. You Need Intent.
&lt;/h2&gt;

&lt;p&gt;You don’t need to abandon vibe coding or overinvest in full-stack teams before you’re ready. &lt;/p&gt;

&lt;p&gt;But if you're trying to build something credible and scalable, and you're looking for that elusive balance between speed and structure, multi-agent orchestration may offer a smarter third path. &lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thought: Speed Is Optional. Clarity Isn’t.
&lt;/h2&gt;

&lt;p&gt;The real question isn’t whether vibe coding is “good” or “bad.” &lt;br&gt;
The question is: &lt;strong&gt;What are you building, and what will it take to get it there?&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;If you're testing the waters, move fast and explore. &lt;br&gt;
If you're building the backbone of a product or company, slow down, think deeply, and choose the right system. &lt;/p&gt;

&lt;p&gt;And 2026 is going to reward the teams who can do both intelligently. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>architecture</category>
      <category>vibecoding</category>
    </item>
    <item>
      <title>AI Context in the Real-World</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Mon, 02 Feb 2026 16:00:00 +0000</pubDate>
      <link>https://forem.com/guypowell/ai-context-in-the-real-world-3a6b</link>
      <guid>https://forem.com/guypowell/ai-context-in-the-real-world-3a6b</guid>
      <description>&lt;h2&gt;
  
  
  Context in Context
&lt;/h2&gt;

&lt;p&gt;It’s time to make context concrete.  I recently wrote about AI context management techniques including optimisation, compaction, trimming and rolling.  Today I wanted to show what these techniques look like in action. &lt;/p&gt;

&lt;p&gt;I’ve been working on some large enterprise AI agents recently and I’m going to share exactly how we use our in-house orchestration to build long-running agents that never exhaust their context. &lt;/p&gt;

&lt;h2&gt;
  
  
  A Real-World Example
&lt;/h2&gt;

&lt;p&gt;We’ve recently added fully automated AI test agents to Brunelly.  You give them a URL, a brief set of instructions and off they go to explore, generate test cases and use the app as a real user. &lt;/p&gt;

&lt;p&gt;The first time I set an agent running an exploratory test, things were great. Screenshots, bug reports, proposed new test cases. Then a failure once the context window was exhausted. It took less than ten minutes for the agent to fail. &lt;/p&gt;

&lt;p&gt;To enable a test agent we provide tools that translate webpage DOMs into a much shorter JSON schema that is friendly to AI models.  But there’s still a lot of data there – maybe 5KB for a complex page load.  Several pages in and the context was full of past test steps. &lt;/p&gt;

&lt;p&gt;Our in-house AI orchestrator, Maitento, provides built-in context compaction and trimming functionality. Each AI interaction can set a couple of lines of configuration that define whether to enable trimming, how to apply it to tool calls, whether to enable rolling context, which messages to protect and whether to enable compaction. &lt;/p&gt;

&lt;p&gt;Our test runner is configured to auto-trim tool calls once they are five cycles deep, simply replacing the response JSON with ‘Tool call result was trimmed’ so that the model is aware it made a call and that its result has now been removed. We also enable rolling context but keep the system messages protected so the agent stays aware of the overall task. The rolling context is set up to be bursty to get some benefit from caching: once we hit 90% of the window we roll back to 75% and then grow to 90% again. We do not enable compaction at all. &lt;/p&gt;
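
&lt;p&gt;As a rough illustration of that trim-and-bursty-roll behaviour, here is a minimal Python sketch. The function names, message shapes and counting-by-messages simplification are invented for the example – Maitento is in-house and its actual configuration API is not shown here.&lt;/p&gt;

```python
# Rough sketch of trim-plus-bursty-rolling context management.
# Sizes are message counts for simplicity; real systems count tokens.
TRIM_DEPTH = 5          # tool results older than this many cycles get trimmed
HIGH, LOW = 0.9, 0.75   # roll back from 90% of the window to 75%

def prepare_context(messages, current_cycle, window):
    view = []
    for m in messages:
        m = dict(m)
        too_deep = current_cycle - m["cycle"] > TRIM_DEPTH
        if m["role"] == "tool" and too_deep:
            m["content"] = "Tool call result was trimmed"
        view.append(m)
    # bursty rolling: only shrink once we cross the high-water mark
    if len(view) > window * HIGH:
        keep = [m for m in view if m["role"] == "system"]   # protected
        rest = [m for m in view if m["role"] != "system"]
        budget = int(window * LOW) - len(keep)
        view = keep + rest[-budget:]                        # newest survive
    return view
```

&lt;p&gt;The key design point mirrors the text above: shrink only after crossing the high-water mark, so the context prefix stays stable (and cache-friendly) between rolls.&lt;/p&gt;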

&lt;p&gt;In this scenario we balance the cost of the interaction (trimming invalidates the cache quite frequently) with a smaller context size and never losing the overall strategy that’s being followed.  This is very important for an autonomous agent that needs to know exactly what it’s exploring and where it’s up to. &lt;/p&gt;

&lt;p&gt;Our orchestration engine is aware of the context window of each model and so presents different context to each agent based on the configuration.  We could have several agents working together all seeing a different representation of the full context.  We keep the full transcript in storage – trimming only affects what the model sees and the ground truth is never lost. &lt;/p&gt;

&lt;p&gt;The test agent is given access to an internal API within Brunelly to perform its tests. Firstly, we craft each endpoint to be tailored to how a model will work best, combining actions into composite functionality that makes sense to an AI rather than object-level views of things. Even so, the schemas for these tools could fill up the rest of the context pretty quickly. &lt;/p&gt;

&lt;p&gt;That’s where the last hidden gem of Maitento comes in. The orchestrator provides pre-populate and post-transform phases that can be defined on every tool call. The ID of the test that’s being run, specific credentials, tenant details, etc. are all completely removed from the model’s view of the world. Our runtime takes the OpenAPI or MCP schema, removes pre-bound elements and then presents a much smaller version to the model for requests, and it does the same for responses, returning just the relevant paths the model needs. This means that even with an already optimised API we reduce the context bloat of tool calls by around 30% on average in our workloads. &lt;/p&gt;
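
&lt;p&gt;A hedged sketch of what a pre-populate phase like this could look like. The tool name, parameter names and bound values are hypothetical, not Maitento’s real schema:&lt;/p&gt;

```python
# Hypothetical sketch of pre-populating bound tool parameters.
# Field and parameter names are invented for illustration.
BOUND = {"test_id": "t-123", "tenant": "acme"}   # runtime-known values

def schema_for_model(full_schema):
    """Hide pre-bound parameters so the model never sees them or
    spends context tokens on them."""
    params = {k: v for k, v in full_schema["parameters"].items()
              if k not in BOUND}
    return {"name": full_schema["name"], "parameters": params}

def execute_call(full_schema, model_args, call_api):
    """Re-inject the bound values before making the real API call."""
    return call_api(full_schema["name"], {**model_args, **BOUND})

schema = {"name": "run_step",
          "parameters": {"test_id": "string", "tenant": "string",
                         "action": "string"}}
# the model only ever sees {"name": "run_step",
#                           "parameters": {"action": "string"}}
```

&lt;p&gt;A matching post-transform would do the inverse on responses: filter the API result down to just the paths the model actually needs before it enters the context.&lt;/p&gt;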

&lt;p&gt;Your orchestrator matters. &lt;/p&gt;

&lt;h2&gt;
  
  
  Don’t Be Afraid to Combine Techniques
&lt;/h2&gt;

&lt;p&gt;Each agent is different. This setup increases token costs, as our tool trimming invalidates the cache quickly, but we trade that against intent drift and longevity. &lt;/p&gt;

&lt;p&gt;We combine rolling and trimming with window ranges to create our bursty rolling window with this agent.  Others don’t need it. &lt;/p&gt;

&lt;p&gt;Our tool calls are optimised to reduce the number of calls an agent needs to make to one per cycle and pipeline transforms sort the rest out. &lt;/p&gt;

&lt;p&gt;We have other agents that just take in a huge amount of data, call dozens of APIs and then translate it into a large JSON blob. They don’t need any of this, as their lifecycle and tool content are too small to matter. Optimise a lot where it matters and less so elsewhere. &lt;/p&gt;

&lt;h2&gt;
  
  
  Ultimately… It’s Just Short-Term Memory
&lt;/h2&gt;

&lt;p&gt;In some ways AI models aren’t that different to humans.  We have short-term and long-term memory.  Context is the short-term memory. &lt;/p&gt;

&lt;p&gt;If you’re writing code how often can you remember exactly what each line of code said in the file you were in 30 seconds ago?  Would it even help if you did remember? &lt;/p&gt;

&lt;p&gt;We generally have extremely vivid short-term memory in the realm of seconds that gradually tails off to less detail over minutes, hours and lifetimes.  I’m sure I don’t need to remember in excruciating detail what some nginx log told me 15 years ago. &lt;/p&gt;

&lt;p&gt;The same is true for models. Give them what they need to work their best – a context tailored to the task at hand, presented in a way your model prefers. &lt;/p&gt;

&lt;p&gt;There is no one-size-fits-all approach to context management, but if you aren’t actively architecting your context there’s no way you’re ever going to create long-running agentic systems that provide real-world use. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>architecture</category>
      <category>agents</category>
    </item>
    <item>
      <title>Managing Your AI Context in Real Apps</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Thu, 29 Jan 2026 09:40:06 +0000</pubDate>
      <link>https://forem.com/guypowell/managing-your-ai-context-in-real-apps-4328</link>
      <guid>https://forem.com/guypowell/managing-your-ai-context-in-real-apps-4328</guid>
      <description>&lt;h2&gt;
  
  
  What is Context and Why Does it Matter?
&lt;/h2&gt;

&lt;p&gt;Everything that a running model is aware of is its context.  Its messages from you, its previous responses, tool calls, tool responses and even system and developer messages.  Every interaction with a model grows its context. &lt;/p&gt;

&lt;p&gt;Context is a limited and scarce resource.  Each model has a context limit measured in tokens – the numbers can seem large but are often deceptive.  Put a few MCP tool calls in there and what seems like an endless context can fill up rapidly.  Once it’s full no more messages can be sent to the model. &lt;/p&gt;

&lt;p&gt;If you’re creating any serious AI application you’re going to fill your context and have to decide what happens next. The model is only aware of what’s in its context – start a new one from scratch and it knows nothing of the conversation to date. &lt;/p&gt;

&lt;p&gt;Understanding context is the difference between playing with AI wrappers and truly building something that lasts. &lt;/p&gt;

&lt;h2&gt;
  
  
  What Impacts Context
&lt;/h2&gt;

&lt;p&gt;Most people underestimate what impacts context and often forget all the invisible parts that quickly flood in. &lt;/p&gt;

&lt;p&gt;Input and output messages fill up context.  A window of 300,000 words may seem like a lot, but if you’re getting ChatGPT to edit a 5,000-word article for you it translates to only 30 full back-and-forth drafts before the whole context is full. &lt;/p&gt;

&lt;p&gt;Now imagine you’re developing an agent that’s scraping websites and receiving JSON API responses that are several KB each.  Without management your agent is going to grind to a halt quickly. &lt;/p&gt;

&lt;p&gt;Uploaded files count towards your context too – depending on the model, the full file or only part of it may be counted. &lt;/p&gt;

&lt;p&gt;Any model that is given access to tool calls will create at least two context messages for each use – one to call the tool and one for the response.  The amount of data in the request and response will depend on the tool but you can imagine that scraping several websites when average HTML page sizes now sit around 100k will add a lot to your context. &lt;/p&gt;

&lt;p&gt;Every message is also wrapped in a structure for each model defining the message type.  This all eats into your context as well. &lt;/p&gt;

&lt;p&gt;If you’ve got several tools available, their definitions are going to be described by the orchestrator to the model.  That’s more context. &lt;/p&gt;

&lt;p&gt;This all adds up very quickly. &lt;/p&gt;
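&lt;p&gt;To make the accounting concrete, here’s a rough sketch in Python.  It uses the common ~4-characters-per-token heuristic – real counts vary by tokenizer and model – and all the message sizes are invented for illustration. &lt;/p&gt;

```python
# Rough token accounting for a context window.
# Assumes ~4 characters per token - a common rule of thumb,
# not an exact tokenizer; real counts vary by model.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

CONTEXT_LIMIT = 200_000  # hypothetical model limit in tokens

context = [
    ("system prompt", "You are a research agent..." * 50),
    ("tool schemas", "{...json schema...}" * 400),  # 20 invented MCP tools
    ("tool response", "x" * 100_000),               # one scraped ~100 KB HTML page
    ("user message", "Summarise the page above."),
]

used = sum(estimate_tokens(text) for _, text in context)
print(f"Used ~{used:,} of {CONTEXT_LIMIT:,} tokens "
      f"({100 * used / CONTEXT_LIMIT:.0f}%)")
```

&lt;p&gt;A single scraped page plus a modest set of tool schemas already eats a double-digit percentage of the window before any real conversation has happened. &lt;/p&gt;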

&lt;p&gt;So how do you deal with it when your context fills up? &lt;/p&gt;

&lt;h2&gt;
  
  
  Memories are Not Directly Context
&lt;/h2&gt;

&lt;p&gt;The concepts of memories and context are separate.  Memory is retrieval, context is execution.  Many AI systems (but importantly not models) contain memory systems.  These allow a model to access information that was discussed with them previously.  These sit outside of the model and can be thought of as a datastore. &lt;/p&gt;

&lt;p&gt;In some cases, models may explicitly say “give me a memory about this” whilst in other instances the orchestrator may detect the need for a memory to be present and provide it to the model. &lt;/p&gt;

&lt;p&gt;In either case the relevant memories end up in the context (either as system messages or tool calls) – but the memory store itself is separate.  So, memories do impact context – but only in the same way as other system or tool messages. &lt;/p&gt;

&lt;h2&gt;
  
  
  Compaction is a Blunt Tool
&lt;/h2&gt;

&lt;p&gt;Context compaction takes the entire context and summarises it.  When the context is almost full the orchestrator will give the entire context to another model and say ‘Create a summary of this context in a few thousand words’.  If you use Claude Code you’ll see this happen quite often and can view the generated summary.  A compaction will often take up more than 20% of your new context. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5h5yoeu3rpwkss28e74.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5h5yoeu3rpwkss28e74.png" alt=" " width="618" height="213"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Image: Claude Code – compaction allocated 22.5% of entire context&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This works – but it’s a blunt tool.  All detail is lost.  Compaction optimises space, not intent.  Specific messages won’t be present and key knowledge may be lost.  It’s up to another AI model to determine what it thinks is relevant and everything else is gone. &lt;/p&gt;
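&lt;p&gt;A minimal sketch of that compaction loop in Python – the &lt;code&gt;summarize&lt;/code&gt; function stands in for the real model call, and the names and thresholds are illustrative: &lt;/p&gt;

```python
# Minimal compaction sketch: when the context nears its limit,
# everything except the system message is replaced by a summary.

def summarize(messages: list[dict]) -> str:
    # Placeholder: a real orchestrator would ask another model to
    # "create a summary of this context in a few thousand words".
    return f"[summary of {len(messages)} earlier messages]"

def maybe_compact(context: list[dict], tokens_used: int,
                  limit: int, threshold: float = 0.9) -> list[dict]:
    if tokens_used >= threshold * limit:
        system = [m for m in context if m["role"] == "system"]
        rest = [m for m in context if m["role"] != "system"]
        # The summary becomes the new start of the conversation;
        # all per-message detail in `rest` is lost.
        return system + [{"role": "user", "content": summarize(rest)}]
    return context  # plenty of room left

ctx = [{"role": "system", "content": "You are a helpful agent."},
       {"role": "user", "content": "Scrape these ten sites..."}]
ctx += [{"role": "assistant", "content": f"page {i} contents..."} for i in range(10)]

compacted = maybe_compact(ctx, tokens_used=95_000, limit=100_000)
print(len(ctx), "->", len(compacted))  # 12 -> 2
```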

&lt;p&gt;This gets worse when you fill your context window up again (with 20% of it already consumed by the summary) and the next agent will create a summary of both the original summary and the new messages.  Earlier information gradually becomes a copy of a copy. &lt;/p&gt;

&lt;p&gt;If you’re building a long-running app do you really want your agent to forget what it’s been working towards? &lt;/p&gt;

&lt;h2&gt;
  
  
  Trim or Roll Your Context
&lt;/h2&gt;

&lt;p&gt;Trimming the context is the more surgical approach.  The orchestrator will dynamically select how to manage individual messages in the context. &lt;/p&gt;

&lt;p&gt;You could remove older tool calls entirely from the context, or delete large responses (such as big API responses). &lt;/p&gt;

&lt;p&gt;If a large part of the context is filled with system messages you may dynamically adapt them (such as removing unused memories). &lt;/p&gt;

&lt;p&gt;The exact way to trim context depends on what you are doing with your agent.  Some context may be critical and should never be trimmed but other context may be irrelevant moments later. &lt;/p&gt;

&lt;p&gt;Any serious app is going to need to implement some form of context trimming otherwise you’ll end up with agents that can’t run for extended periods of time without losing their focus and purpose. &lt;/p&gt;

&lt;p&gt;Another related technique is a rolling context.  A rolling context is a first-in-first-out approach where old messages age out of context as it fills up.  Think of rolling as coarse removal of whole messages and trimming as surgical-precision alteration of individual ones. &lt;/p&gt;

&lt;p&gt;You can combine rolling and trimming together.  You could even combine rolling, trimming and compaction if it makes sense in your context. &lt;/p&gt;
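&lt;p&gt;Here’s one way trimming and rolling might be combined, as a Python sketch – the roles, budget and pinning scheme are assumptions for illustration, not any particular framework’s API: &lt;/p&gt;

```python
# Sketch of trimming plus rolling combined. Pinned messages
# (e.g. the system prompt) never age out; oversized tool responses
# are truncated first, then the oldest unpinned messages roll off
# until the context is under budget.

def rough_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # ~4 chars/token heuristic

def manage(context: list[dict], budget: int,
           max_tool_tokens: int = 500) -> list[dict]:
    # 1. Trim: truncate oversized tool responses in place.
    for msg in context:
        if msg["role"] == "tool" and rough_tokens(msg["content"]) > max_tool_tokens:
            msg["content"] = msg["content"][: max_tool_tokens * 4] + " ...[trimmed]"
    # 2. Roll: drop the oldest unpinned messages until under budget.
    while sum(rough_tokens(m["content"]) for m in context) > budget:
        for i, msg in enumerate(context):
            if not msg.get("pinned"):
                del context[i]
                break
        else:
            break  # everything left is pinned
    return context

ctx = [
    {"role": "system", "content": "Agent goals...", "pinned": True},
    {"role": "tool", "content": "x" * 40_000},   # a huge API response
    {"role": "user", "content": "old question"},
    {"role": "assistant", "content": "old answer"},
]
ctx = manage(ctx, budget=150)
print([m["role"] for m in ctx])  # the tool blob rolled out first
```

&lt;p&gt;In a real orchestrator the “pinned” set and the trim rules would be chosen per task – the point is that the policy is yours to design. &lt;/p&gt;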

&lt;h2&gt;
  
  
  Don’t Just Throw Everything in There
&lt;/h2&gt;

&lt;p&gt;In your own system the only thing going into your context is what you put in it.  If you add 100 MCP servers that each expose 20 tools then your model is going to have a huge amount of its context filled up with JSON schema definitions that don’t actually help you. &lt;/p&gt;

&lt;p&gt;Is it better to provide hundreds of tools to your agent or a handful of perfectly crafted tools that require only a minimal input and minimal output to be given to the model?  Does it need all those properties on that gigantic object or just a couple? &lt;/p&gt;

&lt;p&gt;How does an agent need to use the context?  Does it need to read a whole file?  If not, give it a tool to search within it and never load the full file into context. &lt;/p&gt;
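&lt;p&gt;For example, instead of a tool that dumps a whole file into context, a hypothetical &lt;code&gt;search_file&lt;/code&gt; tool can return only the matching lines – a sketch: &lt;/p&gt;

```python
# Instead of loading a whole file into context, expose a search
# tool that returns only matching lines (names are illustrative).

def search_file(contents: str, query: str, max_hits: int = 5) -> str:
    """Return only the lines matching the query, not the whole file."""
    hits = [f"{i}: {line}"
            for i, line in enumerate(contents.splitlines(), 1)
            if query.lower() in line.lower()]
    return "\n".join(hits[:max_hits]) or "no matches"

# A 10,000-line log would swamp the context; the search result won't.
log = "\n".join(f"line {i}: ok" for i in range(10_000))
log += "\nline 10000: ERROR connection refused"

result = search_file(log, "error")
print(result)  # one line instead of ~10,000
```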

&lt;p&gt;In our orchestrator we treat tool design as an integral part of agentic and context design.  I’ve learned the hard way that shrinking the surface area of what we expose makes a big difference. &lt;/p&gt;

&lt;p&gt;Do your system prompts need to be the size of a small novel?  Probably not. &lt;/p&gt;

&lt;p&gt;Doing a different task than you were a minute ago that doesn’t need full continuity?  Wipe your context and start afresh. &lt;/p&gt;

&lt;p&gt;Be smarter with context.  Any technique to tidy up your context – whether trimming or compacting – ultimately impacts an agent’s working memory and that’s never as good as just not filling your context in the first place.  Give a model just what it needs and no more. &lt;/p&gt;

&lt;h2&gt;
  
  
  Context Impacts Performance
&lt;/h2&gt;

&lt;p&gt;Some models provide better output with a certain amount of their context filled and then it degrades after a certain point.  Other models may just become slow at a certain point.  Each model is different. &lt;/p&gt;

&lt;p&gt;I find that Anthropic’s Opus model gives its best outputs at around 60-75% of the context window.  The first half can be a slog getting it up to speed and the final part feels like it has a tendency to drift. &lt;/p&gt;

&lt;p&gt;Plan your context to the task and the models. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Real-World Cost
&lt;/h2&gt;

&lt;p&gt;It’s not often that developers like to think about cost – but your context directly impacts it.  There is a cost for every token sent to a model and received from a model.  Each time you send a request the entire previous context is sent with it. &lt;/p&gt;

&lt;p&gt;Many providers offer caching whereby if you make a follow-up call with starting tokens that match a previous request then the cost of those tokens will be less than non-cached tokens. &lt;/p&gt;

&lt;p&gt;Giant context costs a lot of tokens, but a lot of it may be cached. &lt;/p&gt;

&lt;p&gt;Constantly trimmed context may be smaller in token count but will never be cached because the bulk of it changes every time. &lt;/p&gt;

&lt;p&gt;Compacted context where the compaction message sits immediately after the system messages provides a good basis for cache optimisation, but it will grow again quickly and may force extra messages because detail was lost in the summary. &lt;/p&gt;

&lt;p&gt;Context size does not necessarily equate to cost.  Everything is a trade-off of: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Number of tokens in the context &lt;/li&gt;
&lt;li&gt;Proportion of cacheable context &lt;/li&gt;
&lt;li&gt;Whether a shorter context results in the model ultimately requiring more context than it would otherwise &lt;/li&gt;
&lt;/ul&gt;
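&lt;p&gt;A back-of-the-envelope comparison in Python – the prices are invented for the sketch, not any provider’s actual rates, though cached input is often charged at a fraction of the uncached rate: &lt;/p&gt;

```python
# Illustrative cost comparison: big-but-mostly-cached context vs
# small-but-uncached context. Prices are made up for the sketch.

PRICE_IN = 3.00      # $ per 1M uncached input tokens (invented)
PRICE_CACHED = 0.30  # $ per 1M cached input tokens (invented)

def request_cost(total_tokens: int, cached_tokens: int) -> float:
    uncached = total_tokens - cached_tokens
    return (uncached * PRICE_IN + cached_tokens * PRICE_CACHED) / 1_000_000

# 100 follow-up calls against a large but stable (mostly cached) context:
big_cached = 100 * request_cost(150_000, cached_tokens=140_000)
# 100 calls against a trimmed context whose start keeps changing (no cache):
small_uncached = 100 * request_cost(40_000, cached_tokens=0)

print(f"large, mostly cached: ${big_cached:.2f}")
print(f"small, uncached:      ${small_uncached:.2f}")
```

&lt;p&gt;In this made-up example the larger-but-cached context works out cheaper than the smaller uncached one – exactly the kind of trade-off you should be checking against your own numbers. &lt;/p&gt;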

&lt;p&gt;You should optimise on a case-by-case basis, blending compaction, rolling and trimming where each makes sense. &lt;/p&gt;

&lt;p&gt;How you structure your context will have a direct impact on your monthly AI bill and the difference can be staggering.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Next Time…
&lt;/h2&gt;

&lt;p&gt;I’ll be following up shortly with part two taking a look at a real-world example of how I’ve used a combination of all the techniques detailed here to build a software testing AI agent that can run for hours without exhausting context or drifting off-task. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>productivity</category>
      <category>architecture</category>
    </item>
    <item>
      <title>From ScrumBuddy to Brunelly: Bad Requirements Are Still Killing Software Projects</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Wed, 28 Jan 2026 12:46:50 +0000</pubDate>
      <link>https://forem.com/guypowell/from-scrumbuddy-to-brunellybad-requirements-are-stillkilling-software-projects-1apd</link>
      <guid>https://forem.com/guypowell/from-scrumbuddy-to-brunellybad-requirements-are-stillkilling-software-projects-1apd</guid>
      <description>&lt;p&gt;A note from Brunelly's CEO:&lt;/p&gt;

&lt;p&gt;ScrumBuddy started with a problem every seasoned developer or tech lead still runs into: bad requirements quietly killing otherwise capable teams.&lt;/p&gt;

&lt;p&gt;If you’ve built software in the real world, this will sound familiar.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alignment Breaks Long Before Code Fails
&lt;/h2&gt;

&lt;p&gt;Most projects kick off with what looks like shared understanding, until delivery reveals the team never aligned on the real users’ needs.&lt;/p&gt;

&lt;p&gt;The result? Predictable: rework.&lt;/p&gt;

&lt;p&gt;Closely behind came another frustration: estimation failures caused not by inexperience, but by underspecified thinking.&lt;/p&gt;

&lt;p&gt;Even in mature Agile or Scrum environments, teams miss estimates roughly 70% of the time. Planning becomes fragile, delivery unpredictable, and trust harder to maintain.&lt;/p&gt;

&lt;p&gt;The tools don’t fix this. Clarity does.&lt;/p&gt;

&lt;p&gt;Jira.&lt;/p&gt;

&lt;p&gt;Azure DevOps.&lt;/p&gt;

&lt;p&gt;Agile coaches.&lt;/p&gt;

&lt;p&gt;Ceremonies.&lt;/p&gt;

&lt;p&gt;Retrospectives.&lt;/p&gt;

&lt;p&gt;The tooling is there. The process is there. But outcomes rarely change.&lt;br&gt;
It isn’t the people. It isn’t the tools. It’s missing clarity at the foundation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Iteration Without Clarity Just Accelerates Churn
&lt;/h2&gt;

&lt;p&gt;Iteration is vital. But iteration without clarity isn’t progress, it’s faster churn.&lt;/p&gt;

&lt;p&gt;Vague requirements build assumptions into your foundation, accumulate technical debt, and increase expensive course corrections.&lt;/p&gt;

&lt;p&gt;Every product team hits the same tension: when is it safe enough to start building?&lt;/p&gt;

&lt;p&gt;Momentum alone can’t rescue broken thinking. Speed moves you faster, but potentially in the wrong direction.&lt;/p&gt;

&lt;p&gt;That’s what led us to build ScrumBuddy.&lt;/p&gt;

&lt;h2&gt;
  
  
  ScrumBuddy Started as a Fix, Not a Product
&lt;/h2&gt;

&lt;p&gt;Good requirements change everything.&lt;/p&gt;

&lt;p&gt;Clear requirements lift the entire delivery chain: improving estimation, planning, quality, and decision-making.&lt;/p&gt;

&lt;p&gt;ScrumBuddy surfaced gaps, contradictions, and assumptions before code was written. Teams moved faster, waste declined, and planning became grounded.&lt;/p&gt;

&lt;p&gt;But over time, we realized a deeper truth: requirements don’t just define scope, they define everything downstream.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context Loss Is the Real Scaling Problem
&lt;/h2&gt;

&lt;p&gt;Requirements often get written once, split into tickets, copied across tools, re-explained in meetings, and reinterpreted by different people.&lt;/p&gt;

&lt;p&gt;As work moves through modern delivery systems, context erodes faster than teams realize. Changes trigger compensations: more meetings, more process, more coordination. Less progress. The system itself becomes the bottleneck.&lt;/p&gt;

&lt;h2&gt;
  
  
  Non-Functional Requirements Are Where Products Fail
&lt;/h2&gt;

&lt;p&gt;Functional features are just the surface. Most failures come from missed non-functional requirements (NFRs):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Performance&lt;/li&gt;
&lt;li&gt;Scaling&lt;/li&gt;
&lt;li&gt;Security&lt;/li&gt;
&lt;li&gt;Resilience&lt;/li&gt;
&lt;li&gt;Operational realities&lt;/li&gt;
&lt;li&gt;Data growth and compliance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;NFRs are expensive to bolt on later and every new requirement interacts with existing systems. Without clear understanding, “small” changes destabilize the system. Technical debt, more often than not, accumulates from incomplete understanding.&lt;/p&gt;

&lt;p&gt;Requirements must stay connected to architecture, code, and quality throughout the lifecycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  We Didn’t Need Another Tool. We Needed a System
&lt;/h2&gt;

&lt;p&gt;Improving requirements alone wasn’t enough. Fragmentation was a problem.&lt;/p&gt;

&lt;p&gt;What teams really need is clarity that travels: from planning to architecture to implementation to review.&lt;/p&gt;

&lt;p&gt;Not another plugin. Not another Jira layer. A connected system that keeps context intact.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter Brunelly
&lt;/h2&gt;

&lt;p&gt;ScrumBuddy helped fix requirements in Scrum teams. Over time, we realized the same problems exist across planning, architecture, code, and tests far beyond Scrum. The old name was boxing us in. We needed something bigger: Brunelly.&lt;/p&gt;

&lt;p&gt;Brunelly remembers requirements as they flow into architecture, code, and tests. It keeps non-functional requirements visible. It shows what a new requirement touches before approval. It maintains clarity, so teams can act confidently.&lt;/p&gt;

&lt;p&gt;Brunelly is a semi-autonomous engineering system. Humans set direction, validate assumptions, and apply judgment. Brunelly takes on the structured, repetitive, execution-heavy work that slows teams down.&lt;/p&gt;

&lt;p&gt;It’s AI for teams who care about longevity, structure, and clarity, and not just living under the illusion that momentum equates to progress.&lt;/p&gt;

&lt;p&gt;The name? Inspired by Isambard Kingdom Brunel, one of history’s most ambitious engineers. He built systems that scaled with purpose and endured. That’s what Brunelly represents.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vibe Coding Works. Until It Doesn’t
&lt;/h2&gt;

&lt;p&gt;Fast, fluid, AI-assisted development is what we call “vibe coding”. It’s a great way to experiment. But when it’s time to build for real users, real scale, and real change, momentum isn’t enough.&lt;/p&gt;

&lt;p&gt;We’ll explore this in depth in the next article – stay tuned.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Clearer View of the Next Phase of Software
&lt;/h2&gt;

&lt;p&gt;Software teams need clarity, structure, and the ability to evolve. Brunelly is built for that next phase: building software that lasts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://brunelly.com/" rel="noopener noreferrer"&gt;Try Brunelly NOW!&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>devops</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Test AI Orchestration and Start Building Software Right</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Mon, 12 Jan 2026 08:52:48 +0000</pubDate>
      <link>https://forem.com/guypowell/test-ai-orchestration-and-start-building-software-right-5aol</link>
      <guid>https://forem.com/guypowell/test-ai-orchestration-and-start-building-software-right-5aol</guid>
      <description>&lt;p&gt;I have developed by own AI orchestration and I'm looking for users to provide me with constructive feedback so my team and I can build a platform users actually want.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is ScrumBuddy?
&lt;/h2&gt;

&lt;p&gt;ScrumBuddy is an AI-powered product development platform engineered to solve the oldest and most costly problem in software: broken requirements lead to broken software.&lt;/p&gt;

&lt;p&gt;We replicate how the best software teams think, plan, refine, and deliver, but with automation, orchestration, and intelligence that scales far beyond human capacity.&lt;/p&gt;

&lt;p&gt;ScrumBuddy is built for founders, solo developers, and teams who want to deliver production-ready systems without chaos, rework, or fragmented toolchains.&lt;/p&gt;

&lt;p&gt;ScrumBuddy helps you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate clean, structured user stories&lt;/li&gt;
&lt;li&gt;Identify gaps, risks, and missing requirements&lt;/li&gt;
&lt;li&gt;Rewrite unclear or ambiguous specs&lt;/li&gt;
&lt;li&gt;Produce acceptance criteria &amp;amp; test cases&lt;/li&gt;
&lt;li&gt;Create refinement-ready content for Jira / Azure DevOps&lt;/li&gt;
&lt;li&gt;Generate summaries &amp;amp; documentation for sprint ceremonies&lt;/li&gt;
&lt;li&gt;Clearer specs → better software → faster delivery&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have an idea, but not the team to deliver it, try ScrumBuddy NOW and build your dream product in half the time for a fraction of the cost: &lt;a href="https://app.scrumbuddy.com/" rel="noopener noreferrer"&gt;Build with ScrumBuddy&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>learning</category>
      <category>showdev</category>
    </item>
    <item>
      <title>ScrumBuddy Beta Is Live! Here’s the Real Story Behind It (For the Solopreneurs Who’ll Get It Most)</title>
      <dc:creator>Guy</dc:creator>
      <pubDate>Thu, 11 Dec 2025 15:04:41 +0000</pubDate>
      <link>https://forem.com/guypowell/scrumbuddy-beta-is-live-heres-the-real-story-behind-it-for-the-solopreneurs-wholl-get-it-most-4b79</link>
      <guid>https://forem.com/guypowell/scrumbuddy-beta-is-live-heres-the-real-story-behind-it-for-the-solopreneurs-wholl-get-it-most-4b79</guid>
      <description>&lt;p&gt;When we first started building ScrumBuddy, we thought we knew what we were doing. &lt;/p&gt;

&lt;p&gt;The plan was harmless enough: &lt;br&gt;
&lt;strong&gt;A light companion to help scrum teams do refinement better, clean up backlogs, and survive ceremonies with fewer headaches.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nothing revolutionary, just useful. &lt;/p&gt;

&lt;p&gt;But then we made the classic mistake every solopreneur, indie hacker, and small team eventually makes: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We built something… and immediately ran face-first into our own real problems.&lt;/strong&gt; &lt;br&gt;
Not scrum problems. &lt;br&gt;
Not process problems. &lt;br&gt;
Not textbook-productivity problems. &lt;/p&gt;

&lt;p&gt;Software development problems. &lt;br&gt;
The same ones all of you deal with daily: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vague requirements that feel like an incomplete grocery list. &lt;/li&gt;
&lt;li&gt;AI generating code that looks right until you try to run it. &lt;/li&gt;
&lt;li&gt;Estimates that are basically dice rolls. &lt;/li&gt;
&lt;li&gt;Context scattered across Jira, Slack, Google Docs, your brain, and four forgotten Notion pages. &lt;/li&gt;
&lt;li&gt;A thousand tabs = a thousand tiny failures waiting to happen. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And it hit us: &lt;br&gt;
&lt;strong&gt;Scrum wasn’t the pain. Software development itself was the pain.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;So we pivoted. &lt;br&gt;
Then pivoted again. &lt;br&gt;
Then again, until ScrumBuddy stopped being a “scrum tool” and started becoming &lt;strong&gt;a new engineering system.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Early Days (AKA: The “Maybe We Should Throw This Out” Phase)
&lt;/h2&gt;

&lt;p&gt;Early versions of ScrumBuddy were &lt;em&gt;fine&lt;/em&gt;. &lt;br&gt;
They refined stories. They helped with estimates. They made ceremonies less painful. &lt;/p&gt;

&lt;p&gt;But they didn’t solve &lt;em&gt;our&lt;/em&gt; problems. &lt;/p&gt;

&lt;p&gt;And nothing wakes a product team up faster than the realization that they wouldn’t even use the thing they’re building. &lt;/p&gt;

&lt;p&gt;Market research confirmed another uncomfortable truth: &lt;br&gt;
&lt;strong&gt;Developers hear the word “scrum” and instantly brace for impact.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So we stopped building for “scrum teams” and started building for: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Solo devs &lt;/li&gt;
&lt;li&gt;Founders &lt;/li&gt;
&lt;li&gt;Freelancers &lt;/li&gt;
&lt;li&gt;Vibe coders &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Anyone building software without a 30-person engineering department doing PRDs, QA, and architecture reviews. &lt;/p&gt;

&lt;p&gt;Once we focused on solving our own problems, the direction snapped into place. &lt;/p&gt;

&lt;h2&gt;
  
  
  Building in Public (And Unexpected Turning Points)
&lt;/h2&gt;

&lt;p&gt;We built in public because, honestly, we didn’t have the luxury not to. We needed early feedback, real users, and honest reactions. &lt;/p&gt;

&lt;p&gt;There were moments that shaped the product: &lt;/p&gt;

&lt;p&gt;1. “No tool actually fixes this problem.” &lt;/p&gt;

&lt;p&gt;This wasn’t just something users said, it was our own team’s frustration. &lt;br&gt;
There was no platform that solved the core issue of &lt;strong&gt;incomplete requirements leading to broken AI outputs and endless rework.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;So we built the tool we wished existed. &lt;/p&gt;

&lt;p&gt;2. Surprising use cases &lt;/p&gt;

&lt;p&gt;We saw solo devs using ScrumBuddy not for Scrum at all, but for: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scoping new SaaS ideas &lt;/li&gt;
&lt;li&gt;Validating architecture &lt;/li&gt;
&lt;li&gt;Grooming freelance client requirements &lt;/li&gt;
&lt;li&gt;Rewriting terrible briefs into something actually buildable &lt;/li&gt;
&lt;li&gt;Auto-generating test plans &lt;/li&gt;
&lt;li&gt;Turning vague prompts into production-ready specs &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The solopreneurs basically taught us what ScrumBuddy &lt;em&gt;really&lt;/em&gt; was. &lt;/p&gt;

&lt;p&gt;3. Feedback that changed direction &lt;/p&gt;

&lt;p&gt;The biggest shift: &lt;br&gt;
Users didn’t want another AI assistant. &lt;br&gt;
They wanted an AI system. Something orchestrated, structured, opinionated. &lt;/p&gt;

&lt;p&gt;That one insight changed the entire architecture. &lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Challenges (AKA: The Octopus Wrestling Matches)
&lt;/h2&gt;

&lt;p&gt;If you’ve read the Substack posts, you’ve seen the scars. &lt;/p&gt;

&lt;p&gt;Some of the biggest battles: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LLMs hallucinating architecture&lt;/strong&gt; → solved through strict spec-driven pipelines &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Jira/Azure integration madness&lt;/strong&gt; → APIs that behave like moody houseplants &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Estimation logic modelling&lt;/strong&gt; → building estimation that isn’t “AI guessing” but actual scope logic &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-agent code generation&lt;/strong&gt; → managing multiple LLMs coordinating without stepping on each other &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UI that doesn’t overwhelm&lt;/strong&gt; → how do you show structured specifications without building a cockpit dashboard? &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every victory took three failures to earn. &lt;br&gt;
But the system we have now is something we’re genuinely proud of. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Emotional Journey: All the Messy Parts
&lt;/h2&gt;

&lt;p&gt;This is the stuff solopreneurs will understand immediately: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Days when momentum felt unstoppable &lt;/li&gt;
&lt;li&gt;Nights where one dependency bug made us question everything &lt;/li&gt;
&lt;li&gt;Long loops of “are we insane?” mixed with “this might actually change things” &lt;/li&gt;
&lt;li&gt;A Slack channel full of screenshots labeled simply: WHY?? &lt;/li&gt;
&lt;li&gt;The quiet 3am breakthroughs no one sees except your version history &lt;/li&gt;
&lt;li&gt;The growing belief that we weren’t chasing an app, we were chasing a new way to build &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ScrumBuddy is the product of equal parts frustration, obsession, and refusal to accept the status quo of modern dev workflows. &lt;/p&gt;

&lt;h2&gt;
  
  
  So… What Does the Beta Mean? And Why Should Solopreneurs Care?
&lt;/h2&gt;

&lt;p&gt;The beta isn’t “a launch.” &lt;br&gt;
It’s our &lt;strong&gt;first public step into AI-orchestrated software engineering&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;ScrumBuddy is built on a simple belief: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When requirements are complete, AI can do extraordinary things. When they’re incomplete, everything breaks.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So ScrumBuddy doesn’t guess. &lt;br&gt;
It structures. &lt;/p&gt;

&lt;p&gt;It takes ideas and turns them into: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Features &lt;/li&gt;
&lt;li&gt;Scenarios &lt;/li&gt;
&lt;li&gt;Acceptance criteria &lt;/li&gt;
&lt;li&gt;Architecture &lt;/li&gt;
&lt;li&gt;Estimates &lt;/li&gt;
&lt;li&gt;Test plans &lt;/li&gt;
&lt;li&gt;Safe, multi-agent code generation &lt;/li&gt;
&lt;li&gt;Validation &lt;/li&gt;
&lt;li&gt;PR guidance &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All inside one system. &lt;/p&gt;

&lt;p&gt;No scattered tools. &lt;br&gt;
No duct-taped workflows. &lt;br&gt;
No hallucinated code that ignores half the requirements. &lt;/p&gt;

&lt;p&gt;Just clarity → structure → orchestration → delivery. &lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next? (And Where We Want Solopreneurs to Walk With Us)
&lt;/h2&gt;

&lt;p&gt;This beta is built for: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Indie hackers launching fast &lt;/li&gt;
&lt;li&gt;Solo founders shipping without a team &lt;/li&gt;
&lt;li&gt;Freelancers needing leverage &lt;/li&gt;
&lt;li&gt;Micro teams wanting enterprise-level consistency &lt;/li&gt;
&lt;li&gt;Anyone tired of the “reinvent the process every project” chaos &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But long term? &lt;/p&gt;

&lt;p&gt;We want ScrumBuddy to be the platform that unifies the ecosystem: where design, engineering, AI agents, validation, and delivery all live inside one intelligent system. &lt;/p&gt;

&lt;p&gt;We’re not replacing developers. &lt;br&gt;
&lt;strong&gt;We’re removing the waste between their ideas and their output.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And this beta is the beginning. &lt;/p&gt;

&lt;h2&gt;
  
  
  If You’re a Solopreneur on Indie Hackers… This Is Your Invitation
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://app.scrumbuddy.com/" rel="noopener noreferrer"&gt;Try the beta.&lt;/a&gt; &lt;br&gt;
Break it. &lt;br&gt;
Push it. &lt;br&gt;
Tell us where it sucks and where it shines. &lt;/p&gt;

&lt;p&gt;You’re exactly the type of builder we designed ScrumBuddy for because you’re the ones who don’t have 20 engineers, QA teams, PMs, and architects smoothing the edges for you. &lt;/p&gt;

&lt;p&gt;We built this because the tools we wanted didn’t exist. &lt;/p&gt;

&lt;p&gt;Now they do. &lt;/p&gt;

&lt;p&gt;And we’d love for you to build the next chapter with us. &lt;/p&gt;

&lt;p&gt;Test ScrumBuddy for yourself: &lt;a href="https://app.scrumbuddy.com/" rel="noopener noreferrer"&gt;https://app.scrumbuddy.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>devops</category>
      <category>testing</category>
    </item>
  </channel>
</rss>
