<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ollie Church</title>
    <description>The latest articles on Forem by Ollie Church (@olliechurch).</description>
    <link>https://forem.com/olliechurch</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F897838%2Fce9c5f70-0486-4f14-9144-1d4df1c3a1ae.jpg</url>
      <title>Forem: Ollie Church</title>
      <link>https://forem.com/olliechurch</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/olliechurch"/>
    <language>en</language>
    <item>
      <title>Don't understand the system? Start fixing it anyway.</title>
      <dc:creator>Ollie Church</dc:creator>
      <pubDate>Wed, 01 Apr 2026 13:29:09 +0000</pubDate>
      <link>https://forem.com/olliechurch/dont-understand-the-system-start-fixing-it-anyway-3ka2</link>
      <guid>https://forem.com/olliechurch/dont-understand-the-system-start-fixing-it-anyway-3ka2</guid>
      <description>&lt;p&gt;My first professional engineering job dropped me into a sizeable microservices platform carrying significant technical debt. Nearly 1,000 user applications were stuck in a catch-all "fault" status. The faults were caused by bugs across different services, but the status didn't distinguish between them. You couldn't tell whether you were looking at a one-off edge case or a symptom of something affecting hundreds of users. From the outside, the system was impenetrable.&lt;/p&gt;

&lt;p&gt;The engineering department was split into multiple teams, each owning a handful of services with deep knowledge of their slice. That created silos. Problems that spanned services fell between the cracks, and communication across team boundaries was slow. It puts me in mind of Conway's Law: the observation that a system's architecture tends to reflect the communication structure of the organisation that built it (Melvin Conway, 1967).&lt;/p&gt;

&lt;p&gt;The team was overwhelmed. Customer support tickets were flooding in, and developers were handling them one at a time. Everyone knew the system had deep-rooted problems. But investigating root causes takes time and headspace, and both were consumed by the immediate backlog. Without any way to distinguish between faults, you couldn't even prioritise where a deep dive would have the most impact.&lt;/p&gt;

&lt;p&gt;The instinct when you land in a situation like this is to try to understand the whole system first. Read the docs. Trace the architecture. Build a complete mental model. But the docs had been written by people with deep context at a specific point in time, hadn't been kept up to date, and lacked the detail that someone coming to them fresh would need. That's not a criticism. It's one of the most common problems in software engineering. Monitoring was basically nonexistent. And I was a junior engineer on my first professional job with no existing knowledge to fall back on.&lt;/p&gt;

&lt;p&gt;So I didn't try to understand the whole system. I asked a smaller question: how can I break up this fault status into something useful?&lt;/p&gt;

&lt;p&gt;The first thing I built was a simple alert: how many applications went into fault in the last 24 hours. That number was significantly higher than the number of customers getting in touch. For the first time, we had a true picture of the scale of the problem, and that made it far easier to build the case for investing time in root cause work.&lt;/p&gt;
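&lt;p&gt;The real alert ran against a production database, so the sketch below only shows the shape of it: a count over an assumed &lt;code&gt;applications&lt;/code&gt; table. The table and column names are illustrative, not the originals.&lt;/p&gt;

```python
import sqlite3

# Illustrative schema -- the real platform spanned many services, but the
# alert only needed one question answered: how many faults in the last day?
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE applications (id INTEGER, status TEXT, faulted_at TEXT)")
conn.executemany(
    "INSERT INTO applications VALUES (?, ?, ?)",
    [
        (1, "fault", "2026-04-01 09:00:00"),
        (2, "fault", "2026-03-15 09:00:00"),  # outside the 24-hour window
        (3, "active", None),
    ],
)

def faults_in_last_24h(conn, now="2026-04-01 13:00:00"):
    """Count applications that entered the fault status in the last day."""
    row = conn.execute(
        "SELECT COUNT(*) FROM applications "
        "WHERE status = 'fault' AND faulted_at >= datetime(?, '-1 day')",
        (now,),
    ).fetchone()
    return row[0]

print(faults_in_last_24h(conn))  # counts only application 1
```

&lt;p&gt;A number like this, compared against the volume of support tickets, is what exposed the gap between reported and actual faults.&lt;/p&gt;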

&lt;p&gt;From there, it was about finding patterns. I built a spreadsheet and asked developers to add context to their tickets as they closed them, so I could review and correlate the entries. It was a necessary short-term step to give us an initial direction, but it added friction to an already overwhelming workload, and the resistance was understandable. Once I had enough to identify initial targets, I started encoding what I'd found into alerts: database queries that detected known data patterns and surfaced them in Slack. If a new problem emerged that didn't match any existing alert, it would still land in the generic fault bucket, and that bucket becoming noisy again was its own signal that something new needed investigating.&lt;/p&gt;
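&lt;p&gt;The alerts themselves were nothing exotic. Here's a hedged sketch of the idea, with hypothetical table, column, and pattern names; the real queries encoded data signatures specific to our domain, and the payload would be POSTed to a Slack incoming webhook rather than printed.&lt;/p&gt;

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE applications (id INTEGER, status TEXT, postcode TEXT)")
conn.executemany(
    "INSERT INTO applications VALUES (?, ?, ?)",
    [(1, "fault", None), (2, "fault", "AB1 2CD"), (3, "fault", None)],
)

# One alert per known pattern: a query encoding a data signature we had
# learned to recognise, e.g. applications faulted with a missing postcode.
# Pattern name and query are hypothetical.
PATTERN_ALERTS = {
    "missing-postcode": (
        "SELECT COUNT(*) FROM applications "
        "WHERE status = 'fault' AND postcode IS NULL"
    ),
}

def build_slack_payload(conn):
    """Run each pattern query and format a Slack webhook message body."""
    lines = []
    for name, query in PATTERN_ALERTS.items():
        count = conn.execute(query).fetchone()[0]
        if count > 0:
            lines.append(f"{name}: {count} application(s) matched")
    return json.dumps({"text": "\n".join(lines)})

print(build_slack_payload(conn))
```

&lt;p&gt;Each new pattern we identified became another entry in that mapping, which is how the generic bucket got split up over time.&lt;/p&gt;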

&lt;p&gt;Some patterns were simple. Others required tracing back through a user's history, cross-referencing changes made across services and reconstructing what the user had done on the front end, then figuring out how to recognise that in the data. Some of those investigations had real conspiracy-board energy: pinning fragments together until a shape emerged. And a pattern didn't always map to one root cause. Sometimes it turned out to be two or three distinct problems, which got fed back into the alerting to split the bucket further. It was a constant feedback loop: detect, investigate, refine.&lt;/p&gt;

&lt;p&gt;The biggest buckets got investigated first. I built workaround documentation as a stopgap: quick, repeatable steps that any developer could follow to resolve a known issue at an individual level, getting that application back on track before the customer even noticed. That kept the backlog from growing while the longer work of root cause fixes played out, and for the first time in a long while, freed up developer time for new feature work.&lt;/p&gt;

&lt;p&gt;The first root causes we resolved cut the backlog by half almost immediately. Within a couple of months, it had dropped by 98.8% and stayed down.&lt;/p&gt;

&lt;p&gt;None of that started with understanding every microservice in the platform. It started with building observability in focused areas: one pattern, one alert, one bucket at a time. The deep system knowledge came as a byproduct of the fixing, not as a prerequisite for it. The low-hanging fruit taught me individual services. The harder problems taught me how they connected. That is systems thinking: stepping back from individual problems and building the layer that turns chaos into something you can reason about.&lt;/p&gt;

&lt;p&gt;This all happened before AI coding tools existed. The pattern recognition was entirely manual. I'd be curious to see what a problem like this looks like with today's tools. The systems thinking still needs a human: knowing what to look for, what matters, how to structure the problem. But the grunt work of finding patterns in data feels like exactly the kind of thing that could be dramatically accelerated.&lt;/p&gt;

&lt;p&gt;If you're staring at a messy, opaque system, don't wait until you understand everything. Pick one thing. Build visibility on that one thing. The rest follows.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Human written, AI assisted.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Martin Fowler, &lt;a href="https://martinfowler.com/bliki/ConwaysLaw.html" rel="noopener noreferrer"&gt;Conway's Law&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>monitoring</category>
      <category>microservices</category>
      <category>architecture</category>
      <category>devops</category>
    </item>
    <item>
      <title>The Biggest AI Productivity Hack? Doing What We Should Have Done All Along</title>
      <dc:creator>Ollie Church</dc:creator>
      <pubDate>Mon, 16 Mar 2026 19:15:42 +0000</pubDate>
      <link>https://forem.com/olliechurch/the-biggest-ai-productivity-hack-doing-what-we-should-have-done-all-along-b9i</link>
      <guid>https://forem.com/olliechurch/the-biggest-ai-productivity-hack-doing-what-we-should-have-done-all-along-b9i</guid>
      <description>&lt;p&gt;Everyone's optimising for AI right now. Writing clearer requirements. Documenting features properly. Structuring code cleanly. Maintaining READMEs. Breaking work into small, well-defined tasks. Keeping track of technical debt.&lt;/p&gt;

&lt;p&gt;It's all good stuff. It genuinely helps AI produce better output. But here's what's nagging at me: we already knew all of this was worthwhile. We knew it before AI showed up. We just never had enough reason to do it consistently.&lt;/p&gt;

&lt;p&gt;These are the practices that every engineering team has preached for years. Understand the requirements before you start coding. Do some upfront design. Write things down. Break the work into the smallest possible tasks. None of this is new. And human beings have always produced better results when these things are done well. The problem was never that the practices didn't work. It was that the feedback loop was too slow. A developer who receives well-written requirements delivers better code, but that plays out over weeks and across handoffs. The business never had a clean comparison, because you can't rewind the clock and run the same project again, without the good practices, to prove the difference.&lt;/p&gt;

&lt;p&gt;AI changed that. With AI, you can see the difference in minutes. Give it vague requirements, get mediocre output. Give it clear, well-structured input, get something genuinely useful. The cause and effect are immediate, and that makes the value of the groundwork undeniable to stakeholders who previously couldn't connect the dots. That's the real shift: not that these practices suddenly became valuable, but that the speed of the feedback loop finally made the value visible.&lt;/p&gt;

&lt;p&gt;There's also a time-saving element. AI makes the groundwork itself easier to produce. Documentation, detailed ticket descriptions, technical write-ups — the overhead of doing the right thing has dropped. But the practices themselves aren't new. The enthusiasm is.&lt;/p&gt;

&lt;p&gt;This is actually why I chose Kiro as my AI coding tool. When AWS launched it, the headline feature was "spec-driven development." In practice, that means: understand the requirements, produce a design, write up a task list, then start work. At the time, that was a unique approach. Most mainstream alternatives, Cursor included, leaned more towards just generating code. Kiro's insistence on doing the thinking first was exactly what good engineering teams have been trying to get developers to do for years. The fact that it works so well as an AI workflow is, to me, evidence that these were always the right practices. AI just proved it faster than we ever could with humans alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  The headline numbers don't hold up
&lt;/h2&gt;

&lt;p&gt;People are reporting enormous productivity gains from AI. 40%, 80%, even higher. The research paints a different picture.&lt;/p&gt;

&lt;p&gt;The METR study from mid-2025 found that experienced developers took 19% longer to complete tasks when using AI, despite believing they'd been sped up by 20%. That study used early-2025 models, and things have moved significantly since. METR themselves acknowledged this when attempting a follow-up in early 2026, but struggled to complete it. As one developer in the study put it: "my head's going to explode if I try to do too much the old fashioned way because it's like trying to get across the city walking when all of a sudden I was more used to taking an Uber."&lt;/p&gt;

&lt;p&gt;More recent is Laura Tacho's research at DX, presented at the Pragmatic Summit in February 2026. DX measures productivity using a combination of direct metrics (time saved per developer per week) and indirect metrics (PR throughput, delivery rate, developer experience). Their survey covered 121,000 developers across 450+ companies. The headline: measured productivity gains from AI sit around 10%. A long way from what you see in LinkedIn posts.&lt;/p&gt;

&lt;p&gt;The more striking finding came from a deeper analysis of 67,000 developers over the same period. The outcomes were sharply divided: some companies saw customer-facing incidents cut by 50%, while others saw them double. The difference wasn't the AI. It was the organisation around it. In well-structured teams, AI acted as a force multiplier. In struggling ones, it exposed existing problems. As Tacho put it, AI won't fix deeper organisational issues unless you tackle those problems first.&lt;/p&gt;

&lt;p&gt;Anthropic's own research adds another layer. In a controlled trial with junior developers, the group using AI completed tasks faster but scored 17% lower on comprehension. They shipped code they didn't understand. The developers who retained understanding were the ones who used AI to ask questions and build comprehension, not just to generate output.&lt;/p&gt;

&lt;h2&gt;
  
  
  This is good news, actually
&lt;/h2&gt;

&lt;p&gt;I'm not arguing that AI provides no productivity gain. It clearly does. The speed of output is faster, the friction on routine tasks is lower, and the tooling keeps improving. AI is absolutely a variable in the productivity equation.&lt;/p&gt;

&lt;p&gt;But it's not the only variable, and I think we're over-crediting it while under-crediting the groundwork. If your team started writing better requirements, documenting decisions, breaking work into smaller tasks, and structuring code more thoughtfully, you'd see productivity gains with or without AI in the picture.&lt;/p&gt;

&lt;p&gt;The optimistic read is that AI has been the catalyst we needed. It gave us a reason to finally do what we always knew was right. The practices are sticking because the feedback loop is visible, the tooling makes them easier, and the results are immediate.&lt;/p&gt;

&lt;p&gt;Just don't let anyone tell you that those productivity gains are all down to the AI.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Human written, AI assisted.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;METR, "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity" (July 2025) — &lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;metr.org&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;METR, "We are Changing our Developer Productivity Experiment Design" (February 2026) — &lt;a href="https://metr.org/blog/2026-02-24-uplift-update/" rel="noopener noreferrer"&gt;metr.org&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Laura Tacho / DX, "Measuring Developer Productivity &amp;amp; AI Impact" — presented at Pragmatic Summit, February 2026. Reported by &lt;a href="https://shiftmag.dev/this-cto-says-93-of-developers-use-ai-but-productivity-is-still-10-8013/" rel="noopener noreferrer"&gt;ShiftMag&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Anthropic, "How AI assistance impacts the formation of coding skills" — &lt;a href="https://www.anthropic.com/research/AI-assistance-coding-skills" rel="noopener noreferrer"&gt;anthropic.com&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>discuss</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>The AI Calculation Everyone's Making</title>
      <dc:creator>Ollie Church</dc:creator>
      <pubDate>Sun, 01 Mar 2026 16:44:14 +0000</pubDate>
      <link>https://forem.com/olliechurch/the-ai-calculation-everyones-making-3p5h</link>
      <guid>https://forem.com/olliechurch/the-ai-calculation-everyones-making-3p5h</guid>
      <description>&lt;p&gt;If you've been anywhere near AI discourse this last week, you've probably seen the Citrini Research piece, "The 2028 Global Intelligence Crisis." If you haven't: it's a speculative memo written from June 2028, describing how AI-driven productivity gains trigger mass white-collar unemployment, collapsing consumer spending, and cascading financial crisis. S&amp;amp;P down 38%, unemployment above 10%. &lt;a href="https://www.citriniresearch.com/p/2028gic" rel="noopener noreferrer"&gt;You can read the full thing here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The piece went viral over the weekend, racking up around 16 million views on X. Michael Burry shared it with the comment "And you think I'm bearish." Then the markets opened on Monday, and the Dow dropped over 800 points. IBM fell 13%, its biggest single-day decline since 2000. Software, payments, and delivery stocks all took hits. A speculative thought experiment, clearly labelled as fiction, moved billions.&lt;/p&gt;

&lt;p&gt;That's why the response has been so fierce. Economists and strategists have lined up to explain why the mechanics don't work - production generates income, compute costs create natural brakes, institutions adapt. They're probably right. And 2028 feels like shock value. AI is moving fast, but "two years to economic collapse" is doing a lot of heavy lifting.&lt;/p&gt;

&lt;p&gt;What I can't shake is that the rebuttals are all about timeline and mechanics. They're not about direction. And the direction feels true.&lt;/p&gt;

&lt;p&gt;I work at a fintech startup. We're a small team, and we use AI deliberately to stay that way - not just in engineering, but across the business. Research, operations, manual processes. Not because we're callous about employment, but because if we don't, someone else will. Every startup I come across is making the same calculation. It's not even a calculation anymore. It's just how you build now.&lt;/p&gt;

&lt;p&gt;This is the dynamic Citrini describes: individually rational decisions that sum to something collectively worrying. I can see where this leads. I'm walking there with everyone else. And I don't know what the alternative is. Stopping wouldn't change anything - we'd just get trampled while everyone else kept moving.&lt;/p&gt;

&lt;p&gt;The economists say institutions will adapt. Governments will respond. New industries will emerge. Maybe. But I watched governments try to coordinate during COVID, and I don't have much faith that systems designed for slower-moving problems can react to something that compounds every quarter.&lt;/p&gt;

&lt;p&gt;That's the macro concern, and there's not much I can do about it. So I find myself thinking about what this means closer to home - for my career, for the decisions I make in my role, for what kind of work actually matters in this environment.&lt;/p&gt;

&lt;p&gt;I've been mulling over the difference between what I'd call a developer and what I'd call a software engineer. This isn't a distinction most people make - the terms get used interchangeably - but I think it matters now.&lt;/p&gt;

&lt;p&gt;A developer, in this framing, receives requirements and builds to spec. The ticket says what to do, they do it, the ticket closes. They trust that someone else has thought about the bigger picture. That's the work AI does well. Requirements in, code out, no need to question the wider context.&lt;/p&gt;

&lt;p&gt;A software engineer is different. They take ownership. They hold the whole system in their head. They interrogate requirements rather than just executing them. They ask whether this is even the right thing to build, whether there's a better way, whether the edge cases have been considered. They're part of the decision-making, not downstream of it.&lt;/p&gt;

&lt;p&gt;That work still needs humans. Probably for a while yet.&lt;/p&gt;

&lt;p&gt;But it raises a question I keep coming back to: where do future software engineers come from?&lt;/p&gt;

&lt;p&gt;The craft has always been learned by doing. You write bad code, ship things that break, debug production incidents at 2am, slowly build intuition for why certain patterns exist. The junior and mid-level years aren't just about output - they're about developing judgment. If those roles hollow out, what happens to the pipeline?&lt;/p&gt;

&lt;p&gt;I don't know. History suggests this worry comes up every time the industry abstracts upward. When the move happened from assembly to higher-level languages, people said something essential would be lost. The industry found ways to cope. Maybe this is the same.&lt;/p&gt;

&lt;p&gt;But previous abstractions still required human judgment at every layer. This one might not. And we won't know until the current generation of senior engineers starts retiring and we see who's behind them.&lt;/p&gt;

&lt;p&gt;I'm not predicting anything. I don't have the economics background to say whether Citrini's scenario holds together, and the people who do seem sceptical.&lt;/p&gt;

&lt;p&gt;But I'm also not dismissing it. Something about the direction feels right, even if the timeline doesn't. And if there's any safeguard for engineers in this environment, I think it's in ownership - understanding why something should exist, not just how to make it exist. Being part of the thinking, not just the execution. That's harder to automate than writing code to spec.&lt;/p&gt;

&lt;p&gt;At least for now.&lt;/p&gt;

&lt;p&gt;I don't have a neat conclusion. I suspect most of us don't. We're just building, and watching, and wondering what it adds up to.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Human written, AI assisted.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Citrini Research, &lt;a href="https://www.citriniresearch.com/p/2028gic" rel="noopener noreferrer"&gt;"The 2028 Global Intelligence Crisis"&lt;/a&gt;, February 2026&lt;/li&gt;
&lt;li&gt;Fortune, &lt;a href="https://fortune.com/2026/02/23/will-ai-take-my-job-cause-recession-crash-james-val-geelen-citrini/" rel="noopener noreferrer"&gt;"'Ghost GDP,' a white-collar recession, and the death of friction: Substack's top finance writer warns of AI's 2028 crisis that nobody sees coming"&lt;/a&gt;, February 2026&lt;/li&gt;
&lt;li&gt;Fortune, &lt;a href="https://fortune.com/2026/02/26/citadel-demolishes-viral-doomsday-ai-essay-citrini-macro-fundamentals-engels-pause/" rel="noopener noreferrer"&gt;"Citadel Securities demolishes viral AI doomsday essay, arguing the real 'Global Intelligence Crisis' is ignorance of macro fundamentals"&lt;/a&gt;, February 2026&lt;/li&gt;
&lt;li&gt;Carlo Iacono, &lt;a href="https://hybridhorizons.substack.com/p/the-2028-global-intelligence-dividend" rel="noopener noreferrer"&gt;"The 2028 Global Intelligence Dividend"&lt;/a&gt;, Hybrid Horizons, February 2026&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>softwareengineering</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Every service I build will die</title>
      <dc:creator>Ollie Church</dc:creator>
      <pubDate>Sun, 01 Mar 2026 09:00:00 +0000</pubDate>
      <link>https://forem.com/olliechurch/every-service-i-build-will-die-3i8g</link>
      <guid>https://forem.com/olliechurch/every-service-i-build-will-die-3i8g</guid>
      <description>&lt;p&gt;And that's exactly the point.&lt;/p&gt;

&lt;p&gt;I'm a senior software engineer at Ontime Payments, a fintech startup enabling direct-from-salary bill payments. We've deliberately built a modular, event-driven serverless architecture, and every service within it is expected to be replaced eventually. Some won't be. But we build as if they will, and that shapes everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  The basic idea
&lt;/h2&gt;

&lt;p&gt;Serverless has well-known benefits: no servers to manage, no patches to apply, compute that scales automatically. But the thing that's changed how we work isn't just the managed infrastructure. It's what happens when you combine small, focused functions with event-driven communication and a philosophy that any component can be killed and replaced when required.&lt;/p&gt;

&lt;p&gt;If you're unfamiliar with how serverless infrastructure works at its most basic: a user hits an API Gateway, which triggers a Lambda function. That function does its job (say, processing a payment), returns a response to the user, and raises an event to EventBridge.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi13wk4wh0f5mzqvlk39a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi13wk4wh0f5mzqvlk39a.png" alt="Event-driven architecture diagram: User to API Gateway to Lambda (Process Payment) to EventBridge, which fans out to two Lambdas (Notify Warehouse and Send Receipt). Response flows back from Lambda to API Gateway" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;EventBridge sits in the middle, routing events based on rules you define. The payment function doesn't know anything about warehouse operations or email receipts. It just announces what it did, and other services listen and act accordingly. The producer has no idea how many consumers are listening, or who they are. Each function does one thing. That combination of small components and loose coupling is what makes everything else possible.&lt;/p&gt;
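&lt;p&gt;The routing idea can be pictured with a toy in-process bus. To be clear, this is not the EventBridge API (publishing there goes through the AWS SDK and rule definitions); it's just the decoupling principle in miniature, with made-up event and consumer names.&lt;/p&gt;

```python
from collections import defaultdict

class EventBus:
    """A toy sketch of rule-based event routing, not EventBridge itself."""

    def __init__(self):
        self._rules = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._rules[event_name].append(handler)

    def publish(self, event_name, detail):
        # The producer has no idea how many consumers exist, or who they are.
        for handler in self._rules[event_name]:
            handler(detail)

bus = EventBus()
received = []
bus.subscribe("payment.processed", lambda d: received.append(("warehouse", d)))
bus.subscribe("payment.processed", lambda d: received.append(("receipt", d)))

# The payment function just announces what it did.
bus.publish("payment.processed", {"payment_id": "p-123", "amount": 4200})
print(len(received))  # both consumers acted; the producer knows neither
```

&lt;p&gt;Swapping a consumer out, or adding a new one, never touches the producer. That property is what makes the replacement stories below unremarkable.&lt;/p&gt;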

&lt;p&gt;So that's the infrastructure. But the real benefit comes when you pair it with a mindset: build for today's requirements, expect to replace things tomorrow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Everything is smoother
&lt;/h2&gt;

&lt;p&gt;We had a monitoring service that started simple but grew into a mess of conditional logic as the system expanded. Every new feature meant more edge cases, more setup requirements, more brittleness. So we built a v2 from scratch.&lt;/p&gt;

&lt;p&gt;That goes against the usual advice (never rewrite, always iterate). But because the original service was small and focused, starting fresh was actually less work than continuing to patch. The scope was manageable.&lt;/p&gt;

&lt;p&gt;Both versions ran side by side, similar to the strangler fig pattern you'd use when migrating away from a monolith. They consumed the same events from EventBridge. The services raising those events didn't need to change anything; they had no idea whether one consumer or two (or ten) were listening. We validated v2 was working, then switched off v1. The rest of the system didn't notice.&lt;/p&gt;

&lt;p&gt;That's not a special story. We've merged modules, split them apart, replaced entire services. An early email-sending service got absorbed into a broader notifications module handling Slack, webhooks, and email together. The old service just switched off. This is the normal way we work, not some heroic migration effort. The benefit isn't a single moment that made it all worthwhile. It's that everything, all the time, is easier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prediction isn't flexibility
&lt;/h2&gt;

&lt;p&gt;A lot of developers try to achieve flexibility through prediction, building for every possible future requirement from day one. But that's not flexibility. That's just widening the scope of your rigidity.&lt;/p&gt;

&lt;p&gt;The flexibility we've found doesn't come from planning ahead. It comes from the architecture itself. Lambda encourages small, focused components. Event-driven design keeps them isolated from each other. Each piece is small enough to understand, focused enough to replace, and isolated enough that replacing it doesn't ripple outward.&lt;/p&gt;

&lt;p&gt;The keep-it-simple-stupid philosophy, but taken seriously at the architecture level. We build for today's requirements, not for every possible future. That frees up mental energy, increases deployment velocity, and means we're not overcomplicating things trying to cope with scenarios that may never arrive. We build for the foreseeable future. When a service no longer fits the requirements, we kill it and replace it with something that does. And because of how we've built, that replacement is easier than it would otherwise be.&lt;/p&gt;

&lt;p&gt;Every service will die. That's what makes the system live.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Human written, AI assisted.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Martin Fowler, &lt;a href="https://martinfowler.com/bliki/StranglerFigApplication.html" rel="noopener noreferrer"&gt;Strangler Fig Application&lt;/a&gt;. The pattern describes incrementally replacing a legacy system by building new functionality around it, rather than attempting a risky big-bang rewrite.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://en.wikipedia.org/wiki/KISS_principle" rel="noopener noreferrer"&gt;KISS principle&lt;/a&gt; ("Keep it simple, stupid") was coined by Kelly Johnson at Lockheed Skunk Works in the 1960s. The idea: systems work best when kept simple, and unnecessary complexity introduces failure points.&lt;/li&gt;
&lt;li&gt;For more on event-driven serverless architecture with AWS Lambda and EventBridge, see the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/concepts-event-driven-architectures.html" rel="noopener noreferrer"&gt;AWS documentation on event-driven architectures&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ontime.co" rel="noopener noreferrer"&gt;Ontime Payments&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>architecture</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Google Is About to Kill the Laptop</title>
      <dc:creator>Ollie Church</dc:creator>
      <pubDate>Sun, 22 Feb 2026 14:29:36 +0000</pubDate>
      <link>https://forem.com/olliechurch/google-is-about-to-kill-the-laptop-8nl</link>
      <guid>https://forem.com/olliechurch/google-is-about-to-kill-the-laptop-8nl</guid>
      <description>&lt;p&gt;Here's a prediction: Google is going to kill the laptop. And the feature that does it is already available on Pixel 8 and later devices, buried in developer settings.&lt;/p&gt;

&lt;p&gt;I've spent the last couple of days using my Pixel as a desktop computer. Not through some hacky workaround or third-party app, but through Android's hidden desktop mode. After a morning of writing, emailing, and switching between apps on a portable monitor with a Bluetooth keyboard and mouse, I'm convinced this is the future of personal computing.&lt;/p&gt;

&lt;p&gt;What strikes me is how ready this already is. If someone sat me down and told me it was a Chromebook, I wouldn't bat an eyelid. I can drag windows around and switch between apps like I would on a laptop. Google Maps, Docs, Slides, Sheets. They all work as you'd expect. Even apps that were never designed for desktop work well. There are quirks, like a floating keyboard bar that doesn't quite know where to live and a resolution cap at 1080p that makes 4K monitors look rough. But these feel like polish issues, not fundamental problems.&lt;/p&gt;

&lt;p&gt;The groundwork is there. ChromeOS is already transitioning to the Android Linux kernel, a multi-year project Google started in 2024. At I/O 2025, they announced 'connected displays' as a way to transform Android devices into large-screen workstations. The pieces are moving into place.&lt;/p&gt;

&lt;p&gt;The comparison I keep coming back to is Android Auto and Apple CarPlay. Car manufacturers spent years trying to build infotainment systems before collectively realising they were never going to win the software race. Now most have conceded: plug in your phone, and it becomes the brain of the car. Why fight it? Your phone already has your contacts, your music, your maps, your preferences. The laptop market feels ripe for the same disruption.&lt;/p&gt;

&lt;p&gt;Think about what this means practically. When I commute into the office, I carry a laptop. Any laptop, even the lightest, has a size and weight you can't escape. It takes up space. You know it's there. With Android's desktop mode, I'd carry my phone (which I'm carrying anyway) and nothing else. The keyboard, mouse, and monitor would already be at my desk. For travel, you'd need a Bluetooth keyboard and mouse, but you'd need those with a laptop anyway if you want a decent setup. The difference is swapping the laptop and its bulky charger for a lightweight portable monitor and a small power supply. The space and weight savings are significant.&lt;/p&gt;

&lt;p&gt;Google has every reason to push this hard. The Chromebook isn't a status symbol; nobody's queuing up for the latest model. Disrupting that market costs them little. And if they can position the Pixel as a phone that's also a laptop, that's a genuine differentiator against Samsung and Apple. Apple, meanwhile, has no motivation to follow. They've historically held back the iPad to protect MacBook sales. Cannibalising their own product line isn't in their playbook. For Google, it's an open goal.&lt;/p&gt;

&lt;p&gt;Everyone is talking about AI. Every company is racing to ship their latest model, and the improvements are starting to blur together. Claude, ChatGPT, Gemini. They're all converging on similar capabilities. It's becoming table stakes. A genuine hardware and software play, something that changes how people use their devices rather than just what the devices can answer, could cut through that noise in a way another model update can't.&lt;/p&gt;

&lt;p&gt;My prediction: Google announces this properly at I/O 2026. The feature is already functional enough that the timeline could be aggressive. Resolution support needs work. Some UI elements need homes. But the core experience is there, hidden in plain sight, waiting for its moment. When it arrives, it won't just be a nice feature for power users. It'll be a reason to switch.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Human written, AI assisted.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;References&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Chrome Unboxed on ChromeOS at Google I/O 2025: &lt;a href="https://chromeunboxed.com/chromeos-was-quiet-at-google-i-o-2025-heres-likely-why/" rel="noopener noreferrer"&gt;chromeunboxed.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;9to5Google on the Chromebook's Android future: &lt;a href="https://9to5google.com/2025/06/23/chromebook-plus-future-android/" rel="noopener noreferrer"&gt;9to5google.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>android</category>
      <category>google</category>
      <category>mobile</category>
      <category>productivity</category>
    </item>
    <item>
      <title>I love vibe coding. I don't trust it.</title>
      <dc:creator>Ollie Church</dc:creator>
      <pubDate>Sun, 22 Feb 2026 14:21:46 +0000</pubDate>
      <link>https://forem.com/olliechurch/i-love-vibe-coding-i-dont-trust-it-312n</link>
      <guid>https://forem.com/olliechurch/i-love-vibe-coding-i-dont-trust-it-312n</guid>
      <description>&lt;p&gt;Greg Brockman says engineers should reach for an agent before they reach for an editor. I've been testing that for months. The answer is: it depends - and knowing when is everything.&lt;/p&gt;




&lt;p&gt;I vibe code constantly. My side project, TaskGlitch, got more features in the last month than it had in the previous two years. A Vue 2 to Vue 3 migration that I would have put off forever? Ten minutes. A confetti burst when you complete your schedule? Done before lunch. Features I would never have bothered to build myself are now just... there.&lt;/p&gt;

&lt;p&gt;The appeal is real. When I started working with Claude Code on TaskGlitch, I had a list of ideas that had been gathering dust for years. Life gets in the way - new job, other priorities - and the fun side project stays frozen. But now I can set the AI running, do something else, come back, test it, give a couple of notes, and ship. The pace is intoxicating.&lt;/p&gt;

&lt;p&gt;And there's something else: the rubber duck that responds. I'd been carrying this project around in my head for years, but nobody in my life particularly cares about a to-do app that does a very specific thing I want. Suddenly I had something to discuss it with. To get excited about features with. To turn a conversation into a roadmap, and then turn that roadmap into working code. That felt like magic.&lt;/p&gt;

&lt;p&gt;So why don't I trust it?&lt;/p&gt;




&lt;p&gt;Because stakes change everything.&lt;/p&gt;

&lt;p&gt;TaskGlitch is used by me. If it breaks, it's annoying. If the scheduling algorithm does something weird, I'll notice, I'll fix it, life goes on. The risk is low, so the trade-off makes sense: I get features fast, I don't fully understand how they work, and that's fine.&lt;/p&gt;

&lt;p&gt;But my day job involves people's money. Every role I've had in software has involved payments, banking, financial infrastructure. You can't shrug that off the same way. You might get the right outcome - £50 was meant to go to Kate, and Kate received £50 - but where did it come from? Did it go through the right processes? Did we just give away money from the wrong account? The stakes are completely different.&lt;/p&gt;

&lt;p&gt;And it's not just money. Health systems. Military applications. Security infrastructure. The higher the stakes, the more you need to actually understand how the code reaches its outcome. Because AI will confidently tell you it's done it correctly. It will tell you it's secure. And then you look closer, and it isn't.&lt;/p&gt;

&lt;p&gt;We saw this play out spectacularly with Moltbook. The founder proudly announced he "didn't write a single line of code" - just had a vision, and AI made it reality. Within days, security researchers found the entire database exposed. 1.5 million API keys, 35,000 email addresses, private messages - all accessible to anyone who looked. The platform that captured the internet's imagination became a cautionary tale almost overnight.&lt;/p&gt;

&lt;p&gt;That's what happens when you focus entirely on outcomes without understanding the process. The AI got him to a working product. It just happened to be a security nightmare.&lt;/p&gt;




&lt;p&gt;Here's the part that worries me most about the "all-in" narrative.&lt;/p&gt;

&lt;p&gt;Every article, every interview, every thought leader pushing agentic coding will throw in a caveat: "of course, you should maintain good code review practices." As if that solves the problem.&lt;/p&gt;

&lt;p&gt;But code review is already imperfect. Developers skim large PRs. They say "looks good to me" when they're busy. And AI generates &lt;em&gt;a lot&lt;/em&gt; of code - often more than necessary, full of abstractions for futures that will never arrive. Thousands of lines that need reviewing by humans who, increasingly, didn't write any of it themselves.&lt;/p&gt;

&lt;p&gt;If everyone is vibe coding, who actually understands the system?&lt;/p&gt;

&lt;p&gt;The knowledge that comes from writing code yourself - from thinking through edge cases as you go, from debugging and discovering the weird interactions between components - that knowledge doesn't transfer when the AI does the work. You might read the code afterwards. But reading code you didn't write, for a system you don't intimately know, to find bugs the AI couldn't see? That's a much harder job.&lt;/p&gt;

&lt;p&gt;And debugging is where you really learn a system. When something breaks and you have to trace through the logic, understand why it's failing, figure out the fix - that's when you build the mental model that lets you spot similar issues in future. Hand that to the AI, and you've robbed yourself of that understanding.&lt;/p&gt;

&lt;p&gt;I've been there. We've all been there. The begging cycle, where you're four prompts deep into "please just make the tests pass" and the AI keeps telling you it's fixed it, and it hasn't, and you realise you don't actually know enough about what it built to fix it yourself. That's the trap.&lt;/p&gt;




&lt;p&gt;There's a study that captures this perfectly. METR ran a randomised controlled trial with experienced open-source developers working on their own codebases. When they used AI tools, they took 19% longer to complete tasks. But here's the striking part: those same developers believed the AI had made them 20% &lt;em&gt;faster&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The dopamine hit is real. Code appears on screen at superhuman speed. The blank page problem vanishes. It &lt;em&gt;feels&lt;/em&gt; like progress. But the actual work - the reviewing, the debugging, the cleaning up - that all still happens. It just happens later, and we don't count it the same way.&lt;/p&gt;




&lt;p&gt;I recently listened to Grady Booch on The Pragmatic Engineer podcast, talking about AI as part of the "third golden age of software." One thing he said stuck with me: we're in the hobbyist phase.&lt;/p&gt;

&lt;p&gt;Think about home computers in the early days. They weren't mainstream business tools. They were things that people with an interest could get hold of, take apart, experiment with. And slowly, through that experimental play, those hobbyists found uses that made their way into productivity and business and everything else.&lt;/p&gt;

&lt;p&gt;That's where I think we are with AI coding. The place where it works best for me - where I'm finding the most value - is exactly those hobby projects where I can experiment and play and see what happens. Where the consequences of getting it wrong are low, and the joy of seeing ideas come to life is high.&lt;/p&gt;

&lt;p&gt;We've seen abstraction cycles before. Every time a new layer appears - higher-level languages, frameworks, cloud services - people worry that it will make the previous skills redundant, that it won't be "real coding" anymore. And every time, we've adapted. Software engineering hasn't disappeared; it's evolved.&lt;/p&gt;

&lt;p&gt;I think there's still a place for the engineer in a future where AI handles more of the code. But I don't know exactly what that looks like yet. We're not there. And pretending we are - going all-in on production systems with the same approach I use for TaskGlitch - feels like a mistake we'll regret.&lt;/p&gt;




&lt;p&gt;Keep playing with this stuff. It's groundbreaking, it's cool, and it's genuinely fun. But know when the stakes demand more. Know when you need to understand the code, not just the outcome. Know when "it works" isn't the same as "it's right."&lt;/p&gt;

&lt;p&gt;At its highest stakes, there are lives at risk. We cannot gamble with that.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Human written, AI assisted.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;References&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Greg Brockman's post on agentic software development at OpenAI: &lt;a href="https://x.com/gdb/status/2019566641491963946" rel="noopener noreferrer"&gt;x.com/gdb&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Wiz security research on Moltbook: &lt;a href="https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys" rel="noopener noreferrer"&gt;wiz.io/blog&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Grady Booch on The Pragmatic Engineer podcast: &lt;a href="https://newsletter.pragmaticengineer.com/p/the-third-golden-age-of-software" rel="noopener noreferrer"&gt;newsletter.pragmaticengineer.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;METR study on AI coding productivity: &lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;metr.org/blog&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;TaskGlitch - intelligent task scheduling that scores your backlog and builds an optimised work session automatically (early access): &lt;a href="https://taskglitch.netlify.app" rel="noopener noreferrer"&gt;taskglitch&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
