<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Emjey</title>
    <description>The latest articles on Forem by Emjey (@michle).</description>
    <link>https://forem.com/michle</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F474325%2Fd0a04f9d-36ce-4db5-8af5-7ad2f23f5b13.jpeg</url>
      <title>Forem: Emjey</title>
      <link>https://forem.com/michle</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/michle"/>
    <language>en</language>
    <item>
      <title>Your pytest retries are lying to you. The hidden cost of --reruns, and the plugin I wrote so I could actually see what my tests were doing.</title>
      <dc:creator>Emjey</dc:creator>
      <pubDate>Wed, 22 Apr 2026 11:55:01 +0000</pubDate>
      <link>https://forem.com/michle/your-pytest-retries-are-lying-to-you-the-hidden-cost-of-reruns-and-the-plugin-i-wrote-so-i-27fh</link>
      <guid>https://forem.com/michle/your-pytest-retries-are-lying-to-you-the-hidden-cost-of-reruns-and-the-plugin-i-wrote-so-i-27fh</guid>
      <description>&lt;p&gt;Picture this. A test fails in CI. It's been flaky all week — fails on push, passes when you rerun. So you add &lt;code&gt;--reruns 2&lt;/code&gt; to the pytest command. Now the suite passes. Green build. Ship it.&lt;/p&gt;

&lt;p&gt;A week later, the same test fails in production in a way that only happens under load. You go back to look at the build that passed, and the report says... "passed." One line. No context. No hint that the test ever failed before, let alone what it failed with.&lt;/p&gt;

&lt;p&gt;This is what pytest looks like to most of us: a final verdict. It's not wrong, exactly — the test did pass, eventually. But "eventually" is hiding the interesting information. Why did it fail the first two times? What error? Was it a race condition? A flaky fixture? A genuine bug that only manifests one in three runs?&lt;/p&gt;

&lt;p&gt;pytest doesn't tell you, and by default &lt;code&gt;pytest-rerunfailures&lt;/code&gt; doesn't preserve that context in a form you can easily inspect. Add &lt;code&gt;-n auto&lt;/code&gt; via xdist and it gets worse — your reports become a collage of retry artifacts spread across worker JSONs, and figuring out which attempt ran first on which worker is its own forensic exercise.&lt;/p&gt;

&lt;p&gt;When a test goes &lt;code&gt;fail → fail → pass&lt;/code&gt;, I want to see all three attempts. I want to see each error message. I want to see the order. I want to be able to go, "oh, the first two failures were &lt;code&gt;ConnectionError&lt;/code&gt; but the third was clean — that's a network flake, I'll mark it as such" — instead of assuming a pass is a pass is a pass.&lt;/p&gt;

&lt;p&gt;So I wrote &lt;a href="https://pypi.org/project/pytest-html-plus/" rel="noopener noreferrer"&gt;pytest-html-plus&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it does (just this, for now)
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;pytest-html-plus&lt;/code&gt; hooks into pytest and &lt;code&gt;pytest-rerunfailures&lt;/code&gt; and preserves the full retry history in its HTML report. When a test has multiple attempts, you see all of them — passed, failed, errored — with their individual logs, errors, and tracebacks, in the order they ran.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zq29eqjq17oaoy4c0hy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zq29eqjq17oaoy4c0hy.gif" alt="Show it" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Crucially, it also merges xdist worker JSONs back into a single cohesive story, so even if a test ran across two workers, you see the full chronological attempt history in one place.&lt;/p&gt;

&lt;p&gt;Everything else the plugin does — combined XML export, automatic screenshots on failure, markers, email — is secondary. The retry visibility is the reason I built it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install and see it&lt;/strong&gt;&lt;br&gt;
The setup is three commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;pytest-html-plus pytest-rerunfailures
pytest &lt;span class="nt"&gt;--reruns&lt;/span&gt; 2
open report_output/report.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. No config file. No conftest changes. No hooks to register. It works with or without xdist. If a test goes through retries, you'll see the full story in the HTML report automatically.&lt;/p&gt;

&lt;p&gt;If you want to try it without touching your own suite, there's a &lt;a href="https://reporterplus.io/pytest-html-plus/" rel="noopener noreferrer"&gt;live demo report here&lt;/a&gt;. The flaky tests on that page show exactly the retry history I'm describing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters more than it sounds like
&lt;/h2&gt;

&lt;p&gt;"Knowing which attempt failed" sounds like a nice-to-have until you're actually triaging flakes in a 2,000-test suite. Three concrete things it unlocks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Distinguishing flaky from genuinely broken.&lt;/strong&gt;&lt;br&gt;
If a test goes &lt;code&gt;fail → pass&lt;/code&gt;, that's a flake. If it goes &lt;code&gt;fail → fail → pass&lt;/code&gt;, there's probably a real bug that just doesn't reproduce deterministically. The attempt count alone is a diagnostic signal, and you lose it in a standard report.&lt;/p&gt;

&lt;p&gt;And when a test goes &lt;code&gt;fail → fail → fail&lt;/code&gt;, the &lt;em&gt;first&lt;/em&gt; failure is often the most diagnostic one. Later attempts can fail for downstream reasons (polluted state left behind by the first failure, for example), so the symptom you see on attempt three may have nothing to do with the original bug. A classic tell is &lt;em&gt;a test that quietly depends on state from another test that ran before it; you only see it clearly in the first attempt.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finding correlated flakes.&lt;/strong&gt; When three tests all fail their first attempt with &lt;code&gt;ConnectionError&lt;/code&gt; but pass on retry, you don't have three flaky tests — you have one network issue. The cross-test retry log makes that pattern visible.&lt;/p&gt;
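&lt;p&gt;The grouping idea is easy to sketch in a few lines (the record shape below is purely illustrative, not the plugin's actual JSON schema):&lt;/p&gt;

```python
from collections import defaultdict

# Illustrative attempt records: the field names are made up for this
# sketch, not the plugin's actual JSON schema.
records = [
    {"test": "test_login",    "attempt": 1, "error": "ConnectionError"},
    {"test": "test_search",   "attempt": 1, "error": "ConnectionError"},
    {"test": "test_checkout", "attempt": 1, "error": "ConnectionError"},
    {"test": "test_totals",   "attempt": 1, "error": "AssertionError"},
]

def correlate(rows):
    """Group first-attempt failures by error type to expose shared causes."""
    by_error = defaultdict(list)
    for row in rows:
        if row["attempt"] == 1 and row["error"]:
            by_error[row["error"]].append(row["test"])
    return by_error
```

&lt;p&gt;Three tests landing in one &lt;code&gt;ConnectionError&lt;/code&gt; bucket is one network issue, not three flaky tests.&lt;/p&gt;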

&lt;p&gt;&lt;strong&gt;Honest CI reports.&lt;/strong&gt; There's a real difference between "this build passed on the first try" and "this build passed after eight retries across twelve tests." Both show &lt;code&gt;passed&lt;/code&gt; in a default pytest run. They should not be treated the same by a reviewer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I didn't build (on purpose)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you're looking for a full test-management platform with dashboards, trends, or historical tracking across runs — this isn't that. &lt;code&gt;pytest-html-plus&lt;/code&gt; is a single-run reporter. It writes one self-contained HTML file per run. That's by design: it's portable, it works without a backend, you can archive it as a CI artifact, and you can email it (by enabling the &lt;code&gt;--email&lt;/code&gt; flag).&lt;/p&gt;

&lt;p&gt;If you want Allure, use Allure — it's a different product solving a different problem. If you want a server with a database tracking flakes across months, that's also a different product. This plugin is for the specific moment when you're triaging a single build and want to see what actually happened.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try it, tell me it's broken&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The project is on GitHub at &lt;a href="https://github.com/reporterplus/pytest-html-plus" rel="noopener noreferrer"&gt;reporterplus/pytest-html-plus&lt;/a&gt;. I maintain it alone. If you install it and something doesn't work on your suite, please open an issue — the most valuable feedback is from people whose pytest setup is weirder than mine. Exotic fixtures, unusual xdist configurations, custom reruns logic — those are the corners where bugs hide, and I can't find them alone.&lt;/p&gt;

&lt;p&gt;And if the retry history helps you catch one bug you would have otherwise shipped, that's enough. That's what I wrote it for.&lt;/p&gt;

</description>
      <category>python</category>
      <category>pytest</category>
      <category>testing</category>
      <category>playwright</category>
    </item>
    <item>
      <title>pytest-html-plus — Your Default Pytest Reporter</title>
      <dc:creator>Emjey</dc:creator>
      <pubDate>Sat, 24 Jan 2026 08:37:47 +0000</pubDate>
      <link>https://forem.com/michle/pytest-html-plus-your-default-pytest-reporter-203g</link>
      <guid>https://forem.com/michle/pytest-html-plus-your-default-pytest-reporter-203g</guid>
      <description>&lt;h2&gt;
  
  
  Why another Pytest HTML reporter?
&lt;/h2&gt;

&lt;p&gt;If you’ve been running Pytest for a while, chances are you’ve used pytest-html. It works. It’s stable. It does the job.&lt;/p&gt;

&lt;p&gt;But over time, many teams run into the same quiet problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reports feel static&lt;/li&gt;
&lt;li&gt;Debugging still requires jumping between logs, terminals, and CI artifacts&lt;/li&gt;
&lt;li&gt;Customization often means patching or workarounds&lt;/li&gt;
&lt;li&gt;The report exists… but reading it is work, not quick debugging&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/reporterplus/pytest-html-plus" rel="noopener noreferrer"&gt;pytest-html-plus&lt;/a&gt; was built to solve those gaps — not by replacing testing or test fundamentals, but by rewriting how simpler debugging has to be or  what a default test report should feel like in 2025.&lt;/p&gt;

&lt;h2&gt;
  
  
  What “default reporter” actually means
&lt;/h2&gt;

&lt;p&gt;A default reporter should:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Work out of the box&lt;/li&gt;
&lt;li&gt;Require zero configuration or code changes to be useful&lt;/li&gt;
&lt;li&gt;Guide you when data is missing&lt;/li&gt;
&lt;li&gt;Stay readable even as test suites grow&lt;/li&gt;
&lt;li&gt;Fit naturally into CI pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/reporterplus/pytest-html-plus" rel="noopener noreferrer"&gt;pytest-html-plus&lt;/a&gt; is designed with exactly that mindset.&lt;/p&gt;

&lt;p&gt;You install it, run Pytest, and you already get something meaningful. No Java needed, no extra decorators, no conftest changes, no merge plugins (even if you use xdist or rerun-failures).&lt;/p&gt;

&lt;p&gt;No ceremony.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What makes &lt;a href="https://github.com/reporterplus/pytest-html-plus" rel="noopener noreferrer"&gt;pytest-html-plus&lt;/a&gt; different&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Reports that explain themselves&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of silently showing empty values, the report tells you why something is missing and how to fix it.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;Missing environment?&lt;br&gt;
→ The report nudges you to use &lt;code&gt;--rp-env&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Metadata not populated?&lt;br&gt;
→ It tells you what flag or config to add&lt;/p&gt;

&lt;p&gt;This removes tribal knowledge and makes reports usable even for someone new to the project.&lt;/p&gt;
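&lt;p&gt;For example, the environment nudge resolves with a single flag (check &lt;code&gt;pytest --help&lt;/code&gt; for the exact spelling and accepted values in your installed version):&lt;/p&gt;

```shell
# Attach environment metadata so the report's environment field is populated.
pytest --rp-env staging
```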

&lt;p&gt;&lt;strong&gt;2. Designed for real debugging with auto screenshots&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The report is structured to help you answer real questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What failed?&lt;/strong&gt; pytest-html-plus displays all the failures right there&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Where did it fail?&lt;/strong&gt; pytest-html-plus displays all the errors, traces, and logs in a separate block that you can copy with one click to send to your team or to debug.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Under which environment and configuration?&lt;/strong&gt; pytest-html-plus lets you filter with your own markers when you tag tests with &lt;code&gt;pytest.mark.{filtername}&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What context do I need right now?&lt;/strong&gt; &lt;code&gt;pytest-html-plus&lt;/code&gt; shows linked JIRA tickets or test cases if you tagged them with markers like &lt;code&gt;pytest.mark.link&lt;/code&gt; in your codebase&lt;/li&gt;
&lt;/ul&gt;
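&lt;p&gt;In code, that tagging looks roughly like this (the &lt;code&gt;smoke&lt;/code&gt; marker name and the &lt;code&gt;link&lt;/code&gt; argument format are illustrative; check the plugin's README for the exact marker signatures it reads):&lt;/p&gt;

```python
import pytest

# Illustrative markers: "smoke" is an arbitrary example filter, and the
# exact argument format pytest.mark.link expects may differ; see the
# plugin README for the marker signatures it actually reads.
@pytest.mark.smoke
@pytest.mark.link("https://jira.example.com/browse/PROJ-123")
def test_checkout_smoke():
    assert 2 + 2 == 4
```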

&lt;p&gt;You spend less time re-running tests just to reproduce context.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. CI-first, not CI-afterthought
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/reporterplus/pytest-html-plus" rel="noopener noreferrer"&gt;pytest-html-plus&lt;/a&gt; plays nicely with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Actions&lt;/li&gt;
&lt;li&gt;Archived artifacts&lt;/li&gt;
&lt;li&gt;JSON and optional XML outputs that can be reused by other tooling or test-management systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The report is not just an HTML file — it’s part of a pipeline, unified without making you stack a dozen other plugins or keep writing conftest code.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Opinionated, but not heavy
&lt;/h2&gt;

&lt;p&gt;The goal is clarity, not feature overload.&lt;/p&gt;

&lt;p&gt;Instead of adding 20 toggles and panels, the focus is on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean layout&lt;/li&gt;
&lt;li&gt;Warm, readable colors&lt;/li&gt;
&lt;li&gt;Information density without noise&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;If a feature makes the report harder to scan, it doesn’t belong in pytest-html-plus.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why should it be my default reporter?
&lt;/h2&gt;

&lt;p&gt;If pytest-html is the reliable base,&lt;br&gt;
&lt;strong&gt;&lt;a href="https://github.com/reporterplus/pytest-html-plus" rel="noopener noreferrer"&gt;pytest-html-plus&lt;/a&gt;&lt;/strong&gt; is the version shaped by years of daily test runs, CI failures at 2 a.m., and “why didn’t the report tell me this?” moments.&lt;/p&gt;

&lt;p&gt;For many teams, it naturally becomes the default reporter. It’s a natural fit for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Teams running Pytest daily&lt;/li&gt;
&lt;li&gt;Engineers debugging from CI and from local runs&lt;/li&gt;
&lt;li&gt;Projects where reports are read by humans, not just machines&lt;/li&gt;
&lt;li&gt;Anyone who wants reports to help, not just exist&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Closing thought
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Good tooling fades into the background. Great tooling quietly makes you faster.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://github.com/reporterplus/pytest-html-plus" rel="noopener noreferrer"&gt;pytest-html-plus&lt;/a&gt; aims to be the reporter you stop thinking about — because it already gave you what you needed.
&lt;/h2&gt;

</description>
      <category>testing</category>
      <category>python</category>
      <category>pytest</category>
      <category>programming</category>
    </item>
    <item>
      <title>A Small VS Code Extension to Shorten the Pytest Fix Loop</title>
      <dc:creator>Emjey</dc:creator>
      <pubDate>Thu, 01 Jan 2026 09:36:15 +0000</pubDate>
      <link>https://forem.com/michle/a-small-vs-code-extension-to-shorten-the-pytest-fix-loop-52gp</link>
      <guid>https://forem.com/michle/a-small-vs-code-extension-to-shorten-the-pytest-fix-loop-52gp</guid>
      <description>&lt;p&gt;When working with pytest locally, the loop after a test failure usually looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;run tests&lt;/li&gt;
&lt;li&gt;scan terminal output&lt;/li&gt;
&lt;li&gt;find the failed test&lt;/li&gt;
&lt;li&gt;locate the file&lt;/li&gt;
&lt;li&gt;scroll to the failure&lt;/li&gt;
&lt;li&gt;repeat if there are multiple failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It works, but it’s not particularly calm — especially when failures are spread across files or when you’re re-running tests frequently.&lt;/p&gt;

&lt;p&gt;As part of maintaining pytest-html-plus, we’ve been thinking about how to make the local fix loop a little smoother — without introducing new concepts or replacing existing tools.&lt;/p&gt;

&lt;p&gt;We’ve taken a similar approach before with the GitHub Actions integration: instead of asking users to learn a new workflow, the action simply plugs into existing CI YAML and publishes the same report artifact automatically.&lt;/p&gt;

&lt;p&gt;Along the same lines, we’ve been experimenting with a small VS Code extension that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reads the generated report&lt;/li&gt;
&lt;li&gt;surfaces failed tests in a sidebar&lt;/li&gt;
&lt;li&gt;lets you jump directly to the failure location in code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjk2vl1lu93rcevbg8oza.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjk2vl1lu93rcevbg8oza.png" alt=" " width="800" height="751"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One deliberate design choice is that the extension does not try to run or discover tests.&lt;/p&gt;

&lt;p&gt;Instead, it consumes a stable artifact — &lt;code&gt;final_report.json&lt;/code&gt; — produced by pytest-html-plus.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That means it doesn’t matter:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;where pytest was run (terminal, CI, container, xdist, Docker, etc.)&lt;/li&gt;
&lt;li&gt;how the tests were executed&lt;/li&gt;
&lt;li&gt;whether they ran inside or outside the editor&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As long as a report exists, the extension can surface failures and navigate to their source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This keeps the extension:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;runner-agnostic&lt;/li&gt;
&lt;li&gt;predictable&lt;/li&gt;
&lt;li&gt;decoupled from test execution itself&lt;/li&gt;
&lt;/ul&gt;
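&lt;p&gt;That contract also makes the report easy to consume from your own scripts. A minimal sketch (the field names here are hypothetical; inspect a real &lt;code&gt;final_report.json&lt;/code&gt; for the actual schema):&lt;/p&gt;

```python
import json

# Hypothetical report content: the field names are illustrative, not
# the plugin's actual schema. A real consumer would json.load() the
# final_report.json file instead of parsing a string.
raw = """
{"tests": [
  {"name": "test_login",  "outcome": "failed", "file": "tests/test_auth.py",   "line": 42},
  {"name": "test_search", "outcome": "passed", "file": "tests/test_search.py", "line": 7}
]}
"""

def failed_tests(report):
    """Return the failed entries, ready to map to editor locations."""
    return [t for t in report["tests"] if t["outcome"] == "failed"]

report = json.loads(raw)
```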

&lt;h2&gt;
  
  
  What we added
&lt;/h2&gt;

&lt;p&gt;We’ve released a small VS Code extension that focuses on one thing:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;helping you stay in the editor while fixing failures, and only switch to the HTML report when you actually need it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The extension reads the existing generated report produced by pytest-html-plus and presents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a quick summary (passed / failed / skipped)&lt;/li&gt;
&lt;li&gt;failed tests grouped by file&lt;/li&gt;
&lt;li&gt;inline error context&lt;/li&gt;
&lt;li&gt;one-click navigation to the failure line&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There’s no test execution, no report rendering, and no screenshots in the editor.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this extension is not
&lt;/h2&gt;

&lt;p&gt;This is important to be clear about.&lt;/p&gt;

&lt;p&gt;The VS Code extension does not:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;replace the HTML report&lt;/li&gt;
&lt;li&gt;render screenshots or CI artifacts&lt;/li&gt;
&lt;li&gt;analyze failures&lt;/li&gt;
&lt;li&gt;re-run tests&lt;/li&gt;
&lt;li&gt;change how pytest works&lt;/li&gt;
&lt;li&gt;try to be just another extension that runs pytest and shows results&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The HTML report remains the source of truth for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;detailed inspection&lt;/li&gt;
&lt;li&gt;screenshots&lt;/li&gt;
&lt;li&gt;metadata&lt;/li&gt;
&lt;li&gt;CI usage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The extension is intentionally scoped to local development, when the goal is simply to move from failure → fix as quickly as possible while already inside the editor.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why we kept it small
&lt;/h2&gt;

&lt;p&gt;There’s a temptation to turn editor extensions into dashboards.&lt;/p&gt;

&lt;p&gt;We resisted that.&lt;/p&gt;

&lt;p&gt;Instead, the extension is designed to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;complement the terminal, not replace it&lt;/li&gt;
&lt;li&gt;respect existing workflows&lt;/li&gt;
&lt;li&gt;stay out of the way when you don’t need it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You open it when you want orientation. You close it when you don’t.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuration, on your terms&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The extension supports a few lightweight ways to configure the report path:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;browse for the report file&lt;/li&gt;
&lt;li&gt;enter the path manually&lt;/li&gt;
&lt;li&gt;auto-detect reports in the workspace&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing is auto-configured without asking, and nothing is mandatory.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where this fits in the ecosystem
&lt;/h2&gt;

&lt;p&gt;With this addition, the pytest-html-plus ecosystem now looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Core library → generate a unified report (local + CI)&lt;/li&gt;
&lt;li&gt;GitHub Action → publish reports automatically in CI&lt;/li&gt;
&lt;li&gt;VS Code extension → act on failures locally, inside the editor&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each piece does one thing, at one moment, for one context.&lt;/p&gt;

&lt;p&gt;If your workflow already feels smooth with the terminal alone, you may not need this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But if you often:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;deal with multiple failures&lt;/li&gt;
&lt;li&gt;jump between test files&lt;/li&gt;
&lt;li&gt;re-run tests while iterating locally&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;this extension might reduce a bit of friction in that loop.&lt;/p&gt;

&lt;p&gt;As with the rest of pytest-html-plus, the goal isn’t to change how you work — just to remove a few unnecessary steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;p&gt;VS Code extension: &lt;a href="https://marketplace.visualstudio.com/items?itemName=reporterplus.pytest-html-plus-vscode" rel="noopener noreferrer"&gt;Pytest HTML Plus&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Core library: &lt;a href="https://pypi.org/project/pytest-html-plus/" rel="noopener noreferrer"&gt;pytest-html-plus&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub Action: &lt;a href="https://github.com/marketplace/actions/pytest-html-plus-action" rel="noopener noreferrer"&gt;pytest-html-plus&lt;/a&gt;&lt;/p&gt;

</description>
      <category>pytest</category>
      <category>testing</category>
      <category>programming</category>
      <category>playwright</category>
    </item>
  </channel>
</rss>
