<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Evan Morris</title>
    <description>The latest articles on Forem by Evan Morris (@evanmorris).</description>
    <link>https://forem.com/evanmorris</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3665850%2F5a577bb6-b265-464f-ad0d-06e3d737dfb7.png</url>
      <title>Forem: Evan Morris</title>
      <link>https://forem.com/evanmorris</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/evanmorris"/>
    <language>en</language>
    <item>
      <title>AI Sycophancy: Is AI Too Nice?</title>
      <dc:creator>Evan Morris</dc:creator>
      <pubDate>Mon, 22 Dec 2025 01:41:17 +0000</pubDate>
      <link>https://forem.com/evanmorris/ai-sycophancy-is-ai-too-nice-4pda</link>
      <guid>https://forem.com/evanmorris/ai-sycophancy-is-ai-too-nice-4pda</guid>
      <description>&lt;p&gt;AI tools are incredibly helpful — and sometimes that’s the problem.&lt;/p&gt;

&lt;p&gt;Large language models tend to agree with you. They validate your approach, confirm your assumptions, and tell you your code “looks good.” That confidence boost can feel earned, even when it isn’t.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;As engineers, we should be cautious of that.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I use tools like Cursor, Gemini, and Copilot every day. They’ve absolutely increased my productivity. But I’ve noticed a consistent pattern: getting high-quality output often takes multiple attempts. The first response is usually fine, but it’s rarely critical.&lt;/p&gt;

&lt;p&gt;That’s not because the model is bad. It’s because it’s doing exactly what it was trained to do: be helpful. &lt;/p&gt;

&lt;p&gt;And “helpful” often means agreeable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why This Matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you ask an AI model to review code in a vague way, you’ll usually get a vague review. Polite suggestions. Nothing that seriously challenges your implementation.&lt;/p&gt;

&lt;p&gt;For example, a generic prompt:&lt;/p&gt;

&lt;p&gt;“Can you review this code for bugs?”&lt;/p&gt;

&lt;p&gt;You’ll get something that sounds reasonable, but likely misses deeper issues — security assumptions, error handling gaps, or production risks.&lt;/p&gt;

&lt;p&gt;Now compare that to an improved prompt:&lt;/p&gt;

&lt;p&gt;“Act as a strict senior software engineer. Review this code as if it will run in production and handle sensitive data. Focus on security issues, poor error handling, and unsafe assumptions. Call out anything that could cause failures and suggest concrete fixes.”&lt;/p&gt;

&lt;p&gt;The difference in output quality is usually immediate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Changed?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You set a clear role (strict senior engineer)&lt;/li&gt;
&lt;li&gt;You defined scope (security, error handling, production risk)&lt;/li&gt;
&lt;li&gt;You explicitly asked for pushback, not validation&lt;/li&gt;
&lt;li&gt;You required actionable feedback&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This matters because AI models are optimized to agree unless you give them permission — and direction — to challenge you.&lt;/p&gt;
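&lt;p&gt;Those four ingredients can be sketched as a small prompt builder (the function name and fields are illustrative, not any model’s API):&lt;/p&gt;

```javascript
// Illustrative sketch: compose a review prompt that asks for pushback,
// not validation. Nothing here is a real model API.
function buildReviewPrompt({ role, scope, code }) {
  return [
    `Act as a ${role}.`,
    "Review this code as if it will run in production and handle sensitive data.",
    `Focus on: ${scope.join(", ")}.`,
    "Call out anything that could cause failures; do not soften criticism.",
    "For every issue you find, suggest a concrete fix.",
    "",
    code,
  ].join("\n");
}

const prompt = buildReviewPrompt({
  role: "strict senior software engineer",
  scope: ["security issues", "poor error handling", "unsafe assumptions"],
  code: "function login(user, pw) { /* ... */ }",
});
console.log(prompt);
```

&lt;p&gt;The point isn’t the code; it’s that role, scope, and an explicit request for pushback are stated every time, instead of left to the model’s defaults.&lt;/p&gt;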

&lt;p&gt;&lt;strong&gt;The Real Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The problem isn’t that AI is “too dumb” or that we need better models. The problem is that vague prompts turn AI into a yes-man.&lt;/p&gt;

&lt;p&gt;If you want value, don’t ask AI to review your work.&lt;/p&gt;

&lt;p&gt;Ask it to try to break it.&lt;/p&gt;

&lt;p&gt;Just like a Quality Assurance Engineer on a development team tries to break the software before approving an implementation.&lt;/p&gt;

&lt;p&gt;AI works best when you stop asking it to be nice and start asking it to be honest.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>productivity</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Selenium vs Puppeteer vs Playwright: The Browser Automation Reality Check</title>
      <dc:creator>Evan Morris</dc:creator>
      <pubDate>Wed, 17 Dec 2025 02:07:52 +0000</pubDate>
      <link>https://forem.com/evanmorris/selenium-vs-puppeteer-vs-playwright-the-browser-automation-reality-check-404a</link>
      <guid>https://forem.com/evanmorris/selenium-vs-puppeteer-vs-playwright-the-browser-automation-reality-check-404a</guid>
      <description>&lt;p&gt;It’s nearly 2026. The "Holy Trinity" of browser automation is still fighting for dominance in your tech stack.&lt;/p&gt;

&lt;p&gt;If you’re starting a new project today, which one actually deserves your time? Let’s do a fast, brutally honest breakdown.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Selenium: The "OG"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The industry veteran that has defined automation since 2004. It’s reliable and universal, but it’s showing its age.&lt;/p&gt;

&lt;p&gt;✅ The Good:&lt;br&gt;
• Universal: Supports nearly every browser (even IE) and language (Java, Python, C#, JS, Ruby).&lt;br&gt;
• Community: Two decades of StackOverflow answers cover every possible edge case.&lt;/p&gt;

&lt;p&gt;❌ The Bad:&lt;br&gt;
• WebDriver Hell: Managing separate driver versions for every browser update is a maintenance nightmare.&lt;br&gt;
• Flakiness: Its architecture is slower and notoriously prone to flaky tests.&lt;/p&gt;
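&lt;p&gt;To make the flakiness point concrete: here’s the kind of hand-rolled polling helper that creeps into Selenium suites (an illustrative sketch, not Selenium’s API). Playwright, covered below, builds this retry loop into every action.&lt;/p&gt;

```javascript
// Illustrative sketch of a hand-rolled wait, the kind of glue code
// Selenium tests often accumulate; not any framework's real API.
async function waitFor(predicate, timeoutMs = 2000, intervalMs = 50) {
  const start = Date.now();
  while (true) {
    if (await predicate()) return true; // condition finally satisfied
    if (Date.now() - start > timeoutMs) {
      throw new Error(`Timed out after ${timeoutMs} ms`);
    }
    // sleep briefly before re-checking
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// Stand-in for "wait until the element is clickable": a flag that
// flips after a short delay.
(async () => {
  let ready = false;
  setTimeout(() => { ready = true; }, 100);
  await waitFor(() => ready);
  console.log("condition met");
})();
```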

&lt;p&gt;&lt;strong&gt;Puppeteer: The Chrome Specialist&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Google’s answer to WebDriver overhead. It skips WebDriver entirely and drives Chrome directly over the Chrome DevTools Protocol (CDP).&lt;/p&gt;

&lt;p&gt;✅ The Good:&lt;br&gt;
• Speed: Without the WebDriver middleman, it is blazing fast.&lt;br&gt;
• DevTools Power: Incredible network interception and performance analysis capabilities.&lt;/p&gt;

&lt;p&gt;❌ The Bad:&lt;br&gt;
• The "Chrome" Lock: It’s meant for Chromium browsers. No native Safari (WebKit) or Firefox support.&lt;br&gt;
• JS/TS Only: It’s a Node library.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Playwright: The Modern Standard&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Microsoft’s contender designed to fix the flaws of its predecessors.&lt;/p&gt;

&lt;p&gt;✅ The Good:&lt;br&gt;
• Auto-Waits: The killer feature. Playwright automatically waits for elements to be actionable before clicking. This kills flakiness.&lt;br&gt;
• True Cross-Browser: Runs on Chromium, Firefox, and WebKit (Safari) with one unified API.&lt;br&gt;
• Elite Tooling: The Trace Viewer (time-travel debugging) and Codegen are best-in-class.&lt;/p&gt;

&lt;p&gt;❌ The Bad:&lt;br&gt;
• Younger Ecosystem: The community is exploding, but it doesn't have Selenium's 20-year backlog of plugins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 2025 Verdict&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For any greenfield project in 2025, the choice is Playwright.&lt;/p&gt;

&lt;p&gt;It solved automation’s biggest pain point (flakiness, via auto-waits) and supports multiple languages (Python, Java, JS, .NET).&lt;br&gt;
• Stick with Selenium only for legacy enterprise suites that need IE support.&lt;br&gt;
• Stick with Puppeteer only for niche Chrome-scraping tasks.&lt;/p&gt;

&lt;p&gt;What are you running in your pipelines next year? Let me know!&lt;/p&gt;

</description>
      <category>testing</category>
      <category>automation</category>
      <category>javascript</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
