<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Patrick Müller</title>
    <description>The latest articles on Forem by Patrick Müller (@sneakypad).</description>
    <link>https://forem.com/sneakypad</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1121999%2F2d2eecdd-925b-4148-a422-b3fd2c5ee8ac.jpeg</url>
      <title>Forem: Patrick Müller</title>
      <link>https://forem.com/sneakypad</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sneakypad"/>
    <language>en</language>
    <item>
      <title>The LLM Hype Train: A Pamphlet You Should Read With Your Manager</title>
      <dc:creator>Patrick Müller</dc:creator>
      <pubDate>Tue, 17 Mar 2026 16:09:01 +0000</pubDate>
      <link>https://forem.com/sneakypad/the-llm-hype-train-a-pamphlet-you-should-read-with-your-manager-5gec</link>
      <guid>https://forem.com/sneakypad/the-llm-hype-train-a-pamphlet-you-should-read-with-your-manager-5gec</guid>
      <description>&lt;p&gt;What if I told you ChatGPT is the end of software engineering? Would you believe it? Three years ago OpenAI changed the game in the AI field with ChatGPT. ChatGPT is based on the foundation of Large Language Models, in short LLM. Since then, AI has finally made it onto the aisles of the majority of companies. Even in Germany.&lt;/p&gt;

&lt;h2&gt;
  
  
  When AI Hit the Office
&lt;/h2&gt;

&lt;p&gt;I believe that every extreme is bad. That includes the current LLM and agentic hype. There’s always a trending topic in tech. It usually starts in academia, catches fire in startups, and soon gets glorified on LinkedIn or some other social media platform. That’s not new, and the same happened with LLMs. But what is new is how accessible LLMs are, and with that, AI became mainstream.&lt;/p&gt;

&lt;p&gt;Everyone can come up with a killer use case for applying this technology to turn around a sinking ship. That’s good in the sense that everyone is enabled to contribute ideas. But there's a dark side: people oversimplify what LLMs actually are.&lt;/p&gt;

&lt;p&gt;Prompt + text in → Solution out.&lt;/p&gt;

&lt;p&gt;So simple. So seductive. So bad for (ML) software engineers. All of a sudden, every regex becomes a prompt. Every problem is solvable - just ask an LLM.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Illusion of Simplicity
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Prompt in, solution out — but at what cost?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let me quote what Pydantic says on their website as of July 12, 2025:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://ai.pydantic.dev/logfire/?ref=patrickm.de" rel="noopener noreferrer"&gt;https://ai.pydantic.dev/logfire/&lt;/a&gt; (Debugging &amp;amp; Monitoring)&lt;br&gt;&lt;br&gt;
Applications that use LLMs have some challenges that are well known and understood: LLMs are  &lt;strong&gt;slow&lt;/strong&gt; ,  &lt;strong&gt;unreliable&lt;/strong&gt;  and  &lt;strong&gt;expensive&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;These applications also have some challenges that most developers have encountered much less often: LLMs are  &lt;strong&gt;fickle&lt;/strong&gt;  and  &lt;strong&gt;non-deterministic&lt;/strong&gt;. Subtle changes in a prompt can completely change a model's performance, and there's no &lt;code&gt;EXPLAIN&lt;/code&gt; query you can run to understand why.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Warning&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
From a software engineers point of view, you can think of LLMs as the worst database you've ever heard of, but worse.&lt;br&gt;&lt;br&gt;
If LLMs weren't so bloody useful, we'd never touch them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Before LLMs we had plain LMs - Language Models. I remember a university tutorial in which we built a tweet bot imitating a very active politician on Twitter. That bot could generate one tweet after another. It worked on the same basic principle as the first Large Language Models, but was far more limited in its general capabilities and much more like a parrot.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Autonomous Agents Often Fail in The Real World
&lt;/h3&gt;

&lt;p&gt;Now, just when you thought the hype couldn’t get any bigger, agents came along and knocked on your company’s door. Agents are LLM-powered bots that autonomously execute tasks using well-defined APIs, typically via MCP (Model Context Protocol). The foundation is still a language model, but wrapped in orchestration logic that chains steps together. The chat interface is what makes it feel so magical. The LLM then decides, in a chain-of-thought-like flow, when and what information it needs to request to fulfill a task.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;p&gt;You ask your agent to book the cheapest flight from Berlin to Paris next weekend. It looks up APIs, navigates websites, compares prices, reads content - and bam, your flight is booked! &lt;strong&gt;Très bien&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Except … maybe you are going to Prague. On a Tuesday. In business class. It’s bittersweet, because the technology is great, but errors accumulate. In May 2025, at PyCon DE &amp;amp; PyData 2025 in Wiesbaden, I saw a talk called "The Future of AI: Building the Most Impactful Technology Together" by &lt;em&gt;Leandro von Werra&lt;/em&gt;, who works at HuggingFace, in which he explained exactly this issue. If an agent solves each of five subtasks of that one big flight-booking task with 90% accuracy, you end up with a &lt;code&gt;0.9 × 0.9 × 0.9 × 0.9 × 0.9 ≈ 0.59&lt;/code&gt; success rate, i.e. 59%. That’s almost a Bernoulli experiment, like a coin flip, just with your travel budget as the coin.&lt;/p&gt;
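&lt;p&gt;The compounding effect can be sketched in a few lines of Python (a toy calculation, assuming five independent subtasks at 90% accuracy each; both numbers are illustrative):&lt;/p&gt;

```python
# Toy calculation: per-step accuracy compounds over an agent's task chain.
# The 5 steps and the 90% per-step accuracy are illustrative assumptions.
def chained_success_rate(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step in a chain of independent steps succeeds."""
    return per_step_accuracy ** steps

print(round(chained_success_rate(0.9, 5), 2))  # 0.59 -> roughly a coin flip
```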

&lt;p&gt;To be fair, the APIs that the agent uses via MCP can introduce determinism, bringing much more joy to this rigged game. If the function call is stable and predictable, you regain some control. But the agent still decides what input to send, and that’s where the chaos may return.&lt;/p&gt;
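&lt;p&gt;To make the split between the non-deterministic model and the deterministic tools concrete, here is a minimal, framework-agnostic sketch of an agent loop. The action format, the &lt;code&gt;llm&lt;/code&gt; callable, and the tool names are all made-up illustrations, not any specific MCP API:&lt;/p&gt;

```python
# Minimal agent-loop sketch: the LLM (non-deterministic) picks actions,
# the tools (deterministic functions/APIs) execute them.
# The action dict format and the `llm` callable are illustrative assumptions.
def run_agent(task, tools, llm, max_steps=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = llm(history)  # the chaotic part: the model decides what to do
        if action["type"] == "final":
            return action["content"]                       # task finished
        result = tools[action["tool"]](**action["args"])   # stable API call
        history.append({"role": "tool", "content": str(result)})
    raise RuntimeError("step budget exhausted")

# Scripted stand-in for a real model, so the loop runs offline.
script = iter([
    {"type": "tool", "tool": "search_flights", "args": {"route": "BER-PAR"}},
    {"type": "final", "content": "Booked the 49 EUR flight."},
])
print(run_agent("book cheapest BER-PAR flight",
                {"search_flights": lambda route: ["49 EUR", "120 EUR"]},
                lambda history: next(script)))  # prints "Booked the 49 EUR flight."
```

With a real model in place of the scripted stand-in, the tool calls stay deterministic, but the choice and arguments of each call do not, which is exactly where the error compounding comes from.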

&lt;h2&gt;
  
  
  LLMs - The Swiss Army Knife of AI Models
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Why generalist tools aren’t always the right choice&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;LLMs are impressive. But they are generalists (so far). Whenever I talk to someone about LLMs and their capabilities, I tell them that I see them as a Swiss Army knife: good at many things, but not specialists, and therefore excellent at only a few. Let’s circle back three years. Before LLMs came along, I’d argue we had the major fields of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Natural Language Processing&lt;/li&gt;
&lt;li&gt;Computer Vision&lt;/li&gt;
&lt;li&gt;Recommender Systems&lt;/li&gt;
&lt;li&gt;Tabular data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;… plus cross-cutting fields you would find across all of those, independent of any one of them: for example, Explainable AI (XAI), on-device ML, federated learning, and generative AI - the list goes on.&lt;/p&gt;

&lt;p&gt;… wait.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;gt; Generative AI?!&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Yes, that’s right. We had this before. Not only does a Twitter bot count as generative AI; sampling from a learned distribution to generate sophisticated, close-to-real input data counts too.&lt;/p&gt;

&lt;p&gt;Today, this feels like the ancient way, parts that have been forgotten, buried as relics in many ML temples across the globe. I recently saw an explanatory poster in the coffee corner of the company I work for, which organizes AI terms hierarchically. It went approximately like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;generative AI (subfield of) → Deep Learning (subfield of) → ML (subfield of) → AI.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;That’s misleading on two levels. First, generative AI is not restricted to Deep Learning. Yes, DL is a subfield of ML, so that part of the chain holds, but the chain wrongly makes DL a prerequisite for generative AI. Second, our ML/AI zoo is full of plenty of other beautiful technologies and fields. Don’t reduce it to gen AI. Educate holistically instead of serving people the sugar they already get everywhere anyway.&lt;/p&gt;

&lt;p&gt;Coming back to the Swiss Army knife analogy, let's take a sentiment classification use case. I’ve seen this plenty of times before LLMs were a thing. What’s the solution?&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;gt; “A supervised classifier” you say?&amp;lt; “Good”, I reply, you learned much.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;But I reckon we’ve all seen people throw LLMs at this. It’s overkill. Remember: they’re slow, expensive, and non-deterministic. Only do that for prototyping. If you think this is useful as a feature or product, then build a proper model for it. In the Swiss Army knife analogy, a proper classification model is more like a drilling machine. Any “traditional” classifier, in fact: perfect for one task, but only for one. You would certainly fail if you used it to hammer in a nail, but you wouldn’t think of that anyway, because it’s not the right tool and you know it. That’s something most people haven’t grasped yet when it comes to AI/ML and LLMs.&lt;/p&gt;
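&lt;p&gt;To illustrate the drilling machine, here is a deliberately tiny, stdlib-only bag-of-words Naive Bayes classifier - a stand-in for any “traditional” supervised sentiment model, not a production recipe. The training sentences are made up:&lt;/p&gt;

```python
from collections import Counter
import math

class TinyNaiveBayes:
    """Deliberately tiny bag-of-words Naive Bayes with Laplace smoothing:
    a 'drilling machine' for one task, vs. the LLM Swiss Army knife."""

    def fit(self, texts, labels):
        self.labels = set(labels)
        self.label_counts = Counter(labels)
        self.word_counts = {lbl: Counter() for lbl in self.labels}
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        def log_score(label):
            prior = math.log(self.label_counts[label] / sum(self.label_counts.values()))
            total = sum(self.word_counts[label].values()) + len(self.vocab)
            return prior + sum(
                math.log((self.word_counts[label][w] + 1) / total)  # add-one smoothing
                for w in text.lower().split()
            )
        return max(self.labels, key=log_score)

# Made-up toy training data, just to show the shape of the approach.
clf = TinyNaiveBayes().fit(
    ["great product love it", "awful waste of money",
     "love the design", "terrible support awful"],
    ["pos", "neg", "pos", "neg"],
)
print(clf.predict("love this great design"))  # pos
```

A real version of this would use a held-out test set and an established library, but even this toy runs in microseconds, costs nothing per call, and gives the same answer every time - the three properties the quoted Pydantic warning says LLMs lack.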

&lt;h2&gt;
  
  
  Coding With LLMs: Why Thinking Still Matters
&lt;/h2&gt;

&lt;p&gt;LLMs for coding are impressive, but keep in mind that they are trained on vast amounts of data of all kinds: genius and garbage. That means they learn from code that is very high in quality, but also from code that is poor.&lt;/p&gt;

&lt;p&gt;If the two balanced each other out, we would get an average software engineer (since there are fewer experts than juniors, they don’t balance out). I’ve heard this many times, for example on &lt;a href="https://realpython.com/podcasts/rpp/248/?ref=patrickm.de" rel="noopener noreferrer"&gt;&lt;strong&gt;The Real Python Podcast: Ep 248&lt;/strong&gt;&lt;/a&gt; with &lt;a href="https://www.raymondcamden.com/?ref=patrickm.de" rel="noopener noreferrer"&gt;Raymond Camden&lt;/a&gt;, in which he said that it was a huge help to him with Python, since he was a novice in that field, but not so with JavaScript, where he’s an expert. For me, it’s the other way around. I learn a lot from LLMs in JS and get stuff done, but I believe that the Python code I mostly get is not ideal.&lt;/p&gt;

&lt;p&gt;Even though I don’t own a crystal ball, I can imagine that the trend is going more towards specialized LLMs.&lt;/p&gt;

&lt;p&gt;However, the problem with LLMs is that we partially have better solutions for specific use cases, and differentiating between “Yes, that’s a good LLM task” and “No, we use a traditional ML/DL approach” seems difficult. I found them bloody useful myself, but most often for creative tasks. I think first, before I ask the LLM. No vibe coding for me. Why would I let the LLM do the fun part? Software engineering is a craft. I cheerfully take the productivity boost I get from LLMs, but I never forget that I need to think through problems, design, and architecture myself before I start ping-ponging my ideas with an LLM. If you don’t train a muscle, it degrades, and the same most likely happens to your coding skills when you let an LLM do all your coding for you. (Besides, LLMs still make a lot of errors anyway, and how would you even recognize an error if you don’t have the expertise? Busted.)&lt;/p&gt;

&lt;p&gt;We also see a lot of videos about software engineers being replaced by sophisticated AI software agents. On a bad day, I might listen to that, but on any other day, I see other jobs being replaced much earlier, if we are talking about replacement at all. Everything that is mostly about organizing, managing, and decision-making is far easier to handle with LLMs. It’s just that the image of software engineering being replaced is so much more powerful. For sure, as software engineers, we need to adjust and use what boosts our productivity, but always keep in mind that you are your own greatest human capital. Who do you think will perform better in a software engineering job interview: the person who vibe-coded an app in seven days, or the software engineer who thoughtfully crafted that app, with high test coverage, good code design, and a modular architecture, in four weeks?&lt;/p&gt;

&lt;h2&gt;
  
  
  Disruption → Drawbacks → Mitigation: Is That a Typical Tech Cycle?!
&lt;/h2&gt;

&lt;p&gt;We’ve seen this before, and I believe it is somehow typical for new technology. A disruptive technology arrives, and it comes with drawbacks. Once there is a breakthrough, we start using the technology and figure out ways to mitigate its drawbacks along the way. We’ve seen this with other technologies too, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automobiles: they added to accident rates and air pollution. Later on, we got seatbelts, airbags, and emission standards.&lt;/li&gt;
&lt;li&gt;The internet itself: Privacy is a big pain these days, but at least we got GDPR.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another bummer is the sheer energy consumption of LLMs. Did you know that generating an image with ChatGPT reportedly consumes about as much energy as fully charging your phone? We already knew that training an LLM is costly, but inference - that is, the process of computing an answer for you, whether text, image, or voice (multi-modal) - costs a lot of computation too. And it is ongoing.&lt;/p&gt;

&lt;p&gt;Another challenge: most of our AI tools are made in the US. We in Europe shouldn’t neglect this. First, it will become a huge privacy issue in the future, because LLMs like ChatGPT know a lot about you, your work, your preferences, and so on. Second, we make ourselves dependent on the big vendors. This is an issue, in my opinion, for two reasons:&lt;/p&gt;

&lt;p&gt;a) We rely on what they offer&lt;/p&gt;

&lt;p&gt;b) We assume that this will always be available, but the first days of the trade dispute between Europe and the US showed that it can be a mistake to assume we can always rely on them. Luckily, the European ecosystem is getting stronger with companies such as HuggingFace, Mistral, Aleph Alpha, Stability.ai, DeepL, and Black Forest Labs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts: Be a Thinker, Not Just a Prompt Engineer
&lt;/h2&gt;

&lt;p&gt;I currently see two types of companies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Those still struggling to digitize their ecosystem&lt;/li&gt;
&lt;li&gt;Companies that are at the forefront of innovation (or at least they think they are) and are applying the latest research&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even at big firms like SAP or SICK, there’s a wide spectrum. An IT consultant from Freiburg I know tells me the same thing: you can't use AI if you haven’t even digitized your workflows yet. So whenever you hear somebody talking about LLMs, ChatGPT, agents, and all the latest hypes, think about whether those technologies are the right tool to get the job done. Chances are, you can guess, they're mostly not. Have that discussion. Eventually, you will help build a better AI ecosystem within your company.&lt;/p&gt;

&lt;p&gt;As I mentioned, LLMs are powerful. I use them. Often. I just recently reverse-engineered an API. Creativity. Boilerplate. Those are the things I aim for. But I still think before I prompt.&lt;/p&gt;

&lt;p&gt;I also tried out the agentic mode in VS Code. Yes, it's impressive! It's a very powerful tool, particularly when it runs code and fixes its own mistakes and bugs. I've been trying it with a Svelte app, and it was blazingly fast. It's ideal for prototyping. But although it's phenomenal, I haven't learnt much. I wouldn't be able to replicate the LLM's work. I believe my Svelte coding skills are generally not as good as the LLM's. So if I continue to use the LLM in agent mode, I need to ensure that I maintain a certain level of expertise and keep developing myself. What would that look like?&lt;/p&gt;

&lt;p&gt;And, in case you were wondering: I used LLMs to critique the structure and style of this article. That’s it. I wrote it entirely myself, but I find it helpful to get feedback. I don't let them write my articles, because writing is fun. I enjoy it. It keeps me sharp. And it’s how I keep the edge that no LLM can replicate: &lt;em&gt;my own thinking&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I publish deeper Python and software engineering guides like this on my blog: &lt;a href="https://patrickm.de" rel="noopener noreferrer"&gt;https://patrickm.de&lt;/a&gt; 💙&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>agentic</category>
    </item>
    <item>
      <title>How bugs made me believe in TDD</title>
      <dc:creator>Patrick Müller</dc:creator>
      <pubDate>Fri, 02 May 2025 06:30:45 +0000</pubDate>
      <link>https://forem.com/sneakypad/how-bugs-made-me-believe-in-tdd-3696</link>
      <guid>https://forem.com/sneakypad/how-bugs-made-me-believe-in-tdd-3696</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgf96dkemx368up7gov2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgf96dkemx368up7gov2.png" alt="How bugs made me believe in TDD" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I always knew that testing was important, but I neglected it for a long time. During my studies, the subject was unfortunately given far too little attention and there was also a lack of practical relevance. However, as I gained more professional experience, I learned that I always have to expect a certain error rate and unpredictable bugs. TDD is crucial for recognizing these at an early stage and achieving good productivity in the long term.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Software Testing at SAP&lt;/li&gt;
&lt;li&gt;Software Testing During Self-Employment &amp;amp; Open Source&lt;/li&gt;
&lt;li&gt;How Does TDD Work?&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Software Testing at SAP
&lt;/h2&gt;

&lt;p&gt;During my time at SAP and in the first team I was in, we wrote no or very few unit tests (I definitely didn't). The focus back then was on end-to-end testing (E2E). Later, in a different team at SAP, I came into contact with tests and test coverage for the first time, but still had little intrinsic motivation to write any. I saw it more as a necessary evil, with the idea of writing a test after the actual function.&lt;/p&gt;

&lt;h2&gt;
  
  
  Software Testing During Self-Employment &amp;amp; Open Source
&lt;/h2&gt;

&lt;p&gt;With the launch of my company LemonHeap GmbH and the LemonSpeak product, I then wrote unit tests occasionally, depending on their importance, but I was still very far away from a TDD (Test-Driven Development) approach. LemonSpeak was a one-man show. Hence the question: why write so many tests if I'm the only one developing the software anyway? My opinion at the time was that tests only have a right to exist for software that several people work on. In that case, the advantage is that you don't have to completely understand the overall construct, only the individual components that are changed or added. The existing tests then check whether errors occur and, in the case of a “pass”, allow the conclusion that there are no side effects.&lt;/p&gt;

&lt;p&gt;I changed my opinion towards the end of LemonSpeak, when I looked back and assessed how much support effort was caused by bugs and whether I could have avoided it through testing. You guessed it: most of it could have been avoided. At that time, I also got more involved with TDD and familiarized myself with the topic.&lt;/p&gt;

&lt;p&gt;The second moment I realized the importance of tests was when I made my first open-source contribution, to Pydantic Logfire. My first pull request came without tests; the maintainer told me to add some, and shortly after the code was merged, it caused another bug for a user. That was a real eye-opener for me, because if the tests had been more thorough, the bug would have been found. The user wouldn't have opened a bug report, the maintainer wouldn't have had to point it out to me, and I wouldn't have had to spend time fixing the bug again. Three people were directly affected. For me, avoiding this has something to do with professionalism.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does TDD Work?
&lt;/h2&gt;

&lt;p&gt;TDD stands for Test-Driven Development and is not new: the concept was introduced by Kent Beck at the end of the 1990s. The idea is as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Come up with a list of test conditions.&lt;/li&gt;
&lt;li&gt;Pick the first one from that list.&lt;/li&gt;
&lt;li&gt;Write the test before the actual function, and decide what counts as a “pass” and what counts as a “fail”.&lt;/li&gt;
&lt;li&gt;Now write your function so that the test passes.&lt;/li&gt;
&lt;li&gt;Optionally, think about the abstraction and design of your code and refactor it. That was one full cycle. If you still have test conditions left, the cycle starts again from step 2. Iterations are part of TDD, because you want your function to fulfill further test conditions.&lt;/li&gt;
&lt;/ol&gt;
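&lt;p&gt;One such cycle can be sketched with plain Python asserts (in real projects you would typically use a test runner like pytest; the &lt;code&gt;is_palindrome&lt;/code&gt; function is a made-up example, not from this article):&lt;/p&gt;

```python
# One red-green iteration, sketched with plain asserts.
# `is_palindrome` is a made-up example task.

# Step 3: write the test first. Running it at this point fails ("red"),
# because is_palindrome does not exist yet.
def test_is_palindrome():
    assert is_palindrome("racecar")      # condition for "pass"
    assert not is_palindrome("rocket")   # condition for "fail"

# Step 4: write just enough code to turn the test green.
def is_palindrome(text: str) -> bool:
    return text == text[::-1]

test_is_palindrome()  # green: no AssertionError raised
```

The refactor step would now happen with this safety net in place: restructure `is_palindrome` however you like, rerun the test, and green tells you nothing broke.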

&lt;p&gt;This cycle is also known as red-green-refactor. Red, because your assert fails first. Congratulations if you reach green, because then your test passes. The refactor step was difficult for me to understand at first, mainly because I took it for granted. Once you have a green, you can refactor your code and structure it differently. Be it a pattern or a different approach - that's entirely up to you. The nice thing about it is that you have the assurance that everything still works, since your previous tests still have to pass. Here is a small visualization of TDD:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdis0n2sfegm5pmcl4e6.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdis0n2sfegm5pmcl4e6.webp" alt="How bugs made me believe in TDD" width="800" height="519"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Iteration Cycle of TDD&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In my opinion, while TDD is great in theory, it needs a certain amount of repetition in practice to become routine. Martin Fowler has written a very good introduction to &lt;a href="https://martinfowler.com/bliki/TestDrivenDevelopment.html?ref=patrickm.de" rel="noopener noreferrer"&gt;TDD&lt;/a&gt;. Even more interesting, however, is the article &lt;a href="https://tidyfirst.substack.com/p/canon-tdd?ref=patrickm.de" rel="noopener noreferrer"&gt;“Canon TDD”&lt;/a&gt; by Kent Beck himself, in which he clears up some misunderstandings and misconceptions about TDD. The negative examples Beck points out make it especially informative.&lt;/p&gt;

&lt;p&gt;In my opinion, the advantage of TDD lies not only in the increased reliability of the software, but also in the fact that I have to think intensively about how I design the interface to my code and the function (keyword: differentiation between interface and implementation → good design). To illustrate this: When I write a new function, TDD forces me to define the interface first, otherwise I couldn't even test it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There are many books, such as "Clean Code" or "Practical Engineer", that argue that high test coverage is a must. And although I now see the necessity, without the TDD approach I would find it difficult to write the tests afterwards.&lt;/p&gt;

&lt;p&gt;Because as soon as I have developed a feature or fixed a bug, the next issue is already waiting around the corner. Writing a test for the previous component then often sinks into oblivion. It's like tidying up at home: if something is lying around, it's tidier in the end if you put it away straight away instead of postponing the task.&lt;/p&gt;

&lt;p&gt;How much and how intensively you test naturally depends on the importance of the software. But testing has become indispensable for professional software. Whether this involves unit tests, integration tests or end-to-end tests depends heavily on the architecture, the goal and the aforementioned importance of the software.&lt;/p&gt;

&lt;p&gt;Meanwhile, I have become very familiar with TDD. In my current work, I have already been able to use it to prevent bugs during development, which just feels great. Nevertheless, at this intensity the topic is still new territory for me. As I am learning a lot in this area myself, I would like to share this knowledge with you in the next few articles.&lt;/p&gt;

&lt;p&gt;Have you made good experiences with TDD or do you see it differently? Do you know any good resources? Let me know!&lt;/p&gt;

&lt;p&gt;I publish deeper Python and software engineering guides like this on my blog: &lt;a href="https://patrickm.de" rel="noopener noreferrer"&gt;https://patrickm.de&lt;/a&gt; 💙&lt;/p&gt;

</description>
      <category>softwaretesting</category>
      <category>tdd</category>
      <category>testdrivendevelopment</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Could a Simple Title Change Skyrocket Your Podcast’s Popularity?</title>
      <dc:creator>Patrick Müller</dc:creator>
      <pubDate>Wed, 12 Jul 2023 06:17:12 +0000</pubDate>
      <link>https://forem.com/sneakypad/could-a-simple-title-change-skyrocket-your-podcasts-popularity-2g04</link>
      <guid>https://forem.com/sneakypad/could-a-simple-title-change-skyrocket-your-podcasts-popularity-2g04</guid>
      <description>&lt;p&gt;A Small Change, A Big Impact: Unraveling the Hidden Influence of Your Podcast Titles on Audience Reach, Engagement, and the Overall Success of Your Show.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5l2q7ze1nwfs8jnad0ex.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5l2q7ze1nwfs8jnad0ex.png" alt="Chewbacca as a Podcaster — AI Generated Art By SDXL" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Episode titles are often neglected, even though they add a lot of value. Not only for SEO, but also for users. This short issue explains why and shows you a tool that grades your title and generates optimized alternatives.&lt;/p&gt;

&lt;p&gt;Surely, you’ve heard of SEO and how much it helps your podcast grow, but how do you differentiate between people who are trying to sell you their latest product and what is actually helpful? I’ll touch on two points.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The actual value of SEO for search engines&lt;/li&gt;
&lt;li&gt;The far more important value for listeners&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  SEO: Increasing your content visibility online 🔎
&lt;/h3&gt;

&lt;p&gt;SEO means Search Engine Optimization and describes how well a search engine finds your content. This can be a podcast, a website, or anything else that is online. Since I develop tools for podcasters, I’ve come across quite a few podcasts. A good title should be the default, but I’ve seen many that don't put much weight on it: Episode 20, Episode 21, Ep 05 - there are many variations. This is not very appealing, even though the content might be great. I understand the trouble: preparing, recording, editing, and now a creative title on top? This might be annoying for some people. But imagine the best book with a non-expressive title: Book 1, Book 2, Book 3 - it won’t become a New York Times bestseller, don’t you think?&lt;/p&gt;

&lt;p&gt;Think of it this way: how should a search engine pick up on your great podcast without being given something to work with? An SEO-friendly title is not a novel concept; nevertheless, it’s sometimes overlooked because it can be tedious and time-consuming. Creating a transcript would also help. I wrote about this and another free tool as well: &lt;a href="https://lemonspeak.com/blog/free-podcast-transcription-app" rel="noopener noreferrer"&gt;LemonSpeak&lt;/a&gt;🍋&lt;/p&gt;

&lt;h3&gt;
  
  
  UEO (User Engagement Optimization): Engaging your audience beyond clickbaits 👥
&lt;/h3&gt;

&lt;p&gt;This is far more important, so let’s break it down. You don’t come across many articles about UEO, so it’s a bit of a leap of faith, but the pieces fit together pretty well. User engagement describes how much a user engages with something, physical or digital. To get people to interact with your content, you have to trigger something in them. There are many sides to this. One of them is known as “clickbaiting”, which usually works by either inducing fear or making you, the user, feel like you’re missing out (FOMO, e.g. If you don’t read this … ). It’s the bad side of SEO and the internet.&lt;/p&gt;

&lt;h4&gt;
  
  
  From ‘The Sea Cook’ to ‘Treasure Island’: A lesson in User Engagement Optimization
&lt;/h4&gt;

&lt;p&gt;I’ll give you a positive example that’s actually super old! Have you ever heard of “Treasure Island”? It's an adventure novel about sailing, pirates, and buried gold, published in 1883. It’s a bestseller, but it didn’t start out that way: it had already been published as “The Sea Cook: A Story for Boys” in 1881, and under that title it was not particularly successful. The title was probably changed to “Treasure Island” because it better conveyed the adventure and intrigue of the story. The new title is more evocative, suggesting a story of exploration, adventure and, of course, treasure.&lt;/p&gt;

&lt;p&gt;Do you feel the difference? “Treasure Island” sounds much more exciting to me. It’s a perfect example of User Engagement Optimization.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;TLDR&lt;/em&gt;: A well-chosen episode title helps with SEO and UEO. Please do it.✅&lt;/p&gt;

&lt;h4&gt;
  
  
  The episode title grader: A free app to optimize podcast titles 🛠️
&lt;/h4&gt;

&lt;p&gt;Best of all, I have developed a free tool that grades your chosen title and then suggests alternative, optimized titles.&lt;/p&gt;

&lt;p&gt;Note: What started as a free tool has been integrated into LemonSpeak, providing a seamless flow for you. You can find it here: &lt;a href="https://lemonspeak.com" rel="noopener noreferrer"&gt;LemonSpeak&lt;/a&gt;🎈&lt;/p&gt;

&lt;h3&gt;
  
  
  Exploring more
&lt;/h3&gt;

&lt;p&gt;I build tools for podcasters to create more content: &lt;a href="https://lemonspeak.com" rel="noopener noreferrer"&gt;https://lemonspeak.com&lt;/a&gt; 👈🏆&lt;/p&gt;

</description>
      <category>podcast</category>
      <category>podcastingtips</category>
      <category>machinelearning</category>
      <category>seo</category>
    </item>
  </channel>
</rss>
