<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Cypress</title>
    <description>The latest articles on Forem by Cypress (@cypress).</description>
    <link>https://forem.com/cypress</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F1689%2F7be136d1-5863-4f25-804d-90437a10c851.jpg</url>
      <title>Forem: Cypress</title>
      <link>https://forem.com/cypress</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/cypress"/>
    <language>en</language>
    <item>
      <title>How I stopped declaring login in each of my 5k tests</title>
      <dc:creator>Marcelo C.</dc:creator>
      <pubDate>Fri, 27 Feb 2026 19:26:44 +0000</pubDate>
      <link>https://forem.com/cypress/how-i-stopped-declaring-login-in-each-of-my-5k-tests-37km</link>
      <guid>https://forem.com/cypress/how-i-stopped-declaring-login-in-each-of-my-5k-tests-37km</guid>
      <description>&lt;p&gt;Have you ever encountered a testing codebase that many portions are repeated over and over? We all have! Of course, I could be talking about DRY princples (Don't Repeat Yourself), but lets keep that aside for now and focus on a Cypress trick up it's sleeve that can go unnoticed for many senior devs: the global hooks.&lt;/p&gt;

&lt;p&gt;And what do I mean by "global" hooks? They're called that because you declare them once and they apply to all your tests automatically. &lt;/p&gt;

&lt;p&gt;So let's get down to it: a Cypress installation already comes with a file created at &lt;code&gt;cypress/support/e2e.&amp;lt;ts|js&amp;gt;&lt;/code&gt;. This is usually where you declare the important command &lt;em&gt;imports&lt;/em&gt; that your E2E tests need in order to run properly.&lt;/p&gt;

&lt;p&gt;But it's also the place to add &lt;em&gt;before&lt;/em&gt;, &lt;em&gt;beforeEach&lt;/em&gt;, &lt;em&gt;after&lt;/em&gt;, or &lt;em&gt;afterEach&lt;/em&gt; hooks that will be applied to all your tests. These can handle login, database clean-up after the tests run, screenshot resolution configuration -- a million possibilities here.&lt;/p&gt;
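&lt;p&gt;To make "declared once, applied to every test" concrete, here's a tiny framework-free model of a hook registry (an illustration only, not Cypress internals):&lt;/p&gt;

```javascript
// Toy model of global hooks: register once, run before every test.
const hooks = { beforeEach: [] }

function beforeEach(fn) {
  hooks.beforeEach.push(fn)
}

function runTest(name, testFn) {
  hooks.beforeEach.forEach((hook) => hook()) // global hooks run first
  testFn()
}

let logins = 0
beforeEach(() => { logins += 1 }) // declared once...

runTest('test A', () => {})
runTest('test B', () => {})
// ...and applied to every test: logins is now 2
```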

&lt;p&gt;I guess the name "global hooks" makes more sense now, right? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The challenge&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At my workplace, I found the login command, &lt;code&gt;cy.login&lt;/code&gt;, declared in each of the 5,634 tests that we have. For a while it bothered me a lot: I always wanted to remove these lines and make my life easier with a single &lt;code&gt;e2e.ts&lt;/code&gt; file that handled all login logic, existing and future.&lt;/p&gt;

&lt;p&gt;But if I knew how it worked, why hadn't I just done it already? &lt;/p&gt;

&lt;p&gt;Because, for the first time (for me), I was dealing with a really complex system: a big chunk of the legacy tests ran with &lt;code&gt;testIsolation: false&lt;/code&gt;. That meant they only log in once, and the &lt;code&gt;it&lt;/code&gt; blocks don't reload the baseUrl after each one finishes. They do it all in one session.&lt;/p&gt;

&lt;p&gt;Why? Well, that would be another story, so let's just accept that they're done that way because of "system requirements" at the time.&lt;/p&gt;

&lt;p&gt;Ok, so I basically needed:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;before() → cy.login()&lt;br&gt;
beforeEach() → cy.login() only if testIsolation is true&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Easy, right? Not yet. The complexity comes from different spec families having different needs: different credentials, different environments, different session strategies, and different testIsolation settings. &lt;/p&gt;

&lt;p&gt;There are two questions to answer about login:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Which login method?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;cy.login()&lt;/code&gt;: default specs against the standard app with default credentials.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;cy.loginDemo()&lt;/code&gt;: presentation specs in a different environment, likely with SSO or different credentials.&lt;/li&gt;
&lt;li&gt;Custom (own login): specs that need specific credentials, or totally different E2E tests outside the environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to login? (before vs beforeEach)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is driven by Cypress's &lt;code&gt;testIsolation&lt;/code&gt; setting:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;testIsolation: true&lt;/code&gt; — Cypress clears browser state (cookies, storage) between each test. Session is lost → must re-login before each test.&lt;br&gt;
&lt;code&gt;testIsolation: false&lt;/code&gt; — Cookies persist across tests in the same spec → login once is enough.&lt;/p&gt;
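&lt;p&gt;In config form, this is just one flag. A minimal sketch of the relevant slice of &lt;code&gt;cypress.config.js&lt;/code&gt; (the baseUrl is hypothetical):&lt;/p&gt;

```javascript
// Sketch of the relevant slice of cypress.config.js. testIsolation
// defaults to true in Cypress 12+; an individual suite can opt out
// with describe('legacy suite', { testIsolation: false }, ...).
const config = {
  e2e: {
    baseUrl: 'https://app.example.com', // hypothetical URL
    testIsolation: true, // clear cookies/storage between tests
  },
}

module.exports = config
```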

&lt;p&gt;Now, here is where the plot thickens. I had to account for folders, tests, and paths that needed to be skipped (or not), because within the same folder I had both &lt;code&gt;testIsolation: true&lt;/code&gt; and &lt;code&gt;testIsolation: false&lt;/code&gt;. So, follow along:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Legacy specs (/legacy/ folder)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;before()&lt;/code&gt; → skip entirely&lt;br&gt;
&lt;code&gt;beforeEach()&lt;/code&gt; → only call cy.login() if testIsolation is true&lt;/p&gt;

&lt;p&gt;Legacy specs use specific credentials. If the global before() called cy.login() first, cy.session() would cache the wrong credentials, causing 500 errors when the spec then tries to login with different ones. So global before() is completely skipped. In beforeEach(), it only re-validates when state is actually cleared (testIsolation: true).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Training portal specs (/training/ folder) + Custom login specs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;before()&lt;/code&gt; → skip entirely&lt;br&gt;
&lt;code&gt;beforeEach()&lt;/code&gt; → skip entirely&lt;/p&gt;

&lt;p&gt;These specs manage their own login end-to-end. The global hooks stay completely out of the way. Some tests likely test login flows themselves or use role-specific credentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. BEFORE_LOGIN_DEMO_SPECS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;before()&lt;/code&gt; → &lt;code&gt;cy.loginDemo() + cy.goToHome()&lt;/code&gt;&lt;br&gt;
&lt;code&gt;beforeEach()&lt;/code&gt; → skip (already logged in)&lt;/p&gt;

&lt;p&gt;Login once per spec, reused across all tests. These are demo/presentation specs where re-logging in per test would be slow and unnecessary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. BEFORE_EACH_LOGIN_DEMO_SPECS&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;code&gt;before()&lt;/code&gt; → nothing&lt;br&gt;
&lt;code&gt;beforeEach()&lt;/code&gt; → &lt;code&gt;cy.loginDemo() + cy.goToHome()&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Similar to the above, but login is repeated per test: with &lt;code&gt;testIsolation: true&lt;/code&gt;, the state is cleared between tests, so they must re-login each time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. LOGIN_DEMO_SPECIAL_SPECS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;before()&lt;/code&gt; → nothing&lt;br&gt;
&lt;code&gt;beforeEach()&lt;/code&gt; → override baseUrl + timeout + idp_active env, then &lt;code&gt;cy.loginDemo()&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This spec runs against a completely different server, with an IDP/SSO active flag and a much longer timeout. The config must be set before each test because test isolation may reset Cypress config state.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Default specs (everything else, a.k.a. the MOST important logic!)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;before() → cy.login()&lt;br&gt;
beforeEach() → cy.login() only if testIsolation is true&lt;/p&gt;

&lt;p&gt;Standard app, default credentials. Login once in before(), then beforeEach() only re-validates the session if Cypress actually cleared it (testIsolation: true). If testIsolation is false, cookies persist and calling cy.login() again would navigate back to home, breaking any test that expects to be on a specific page.&lt;/p&gt;
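&lt;p&gt;Putting the six families together, the routing can be sketched as a single pure function that the global hooks consult. The spec-list names mirror the families above, but the helper itself is illustrative, not my exact implementation:&lt;/p&gt;

```javascript
// Illustrative resolver: given a spec path and its testIsolation
// setting, decide what the global before()/beforeEach() should do.
// The `lists` membership checks stand in for the real spec lists.
function resolveHooks(specPath, testIsolation, lists) {
  if (specPath.includes('/legacy/')) {
    // 1. Legacy: skip before(); re-login only when state is cleared.
    return { before: 'skip', beforeEach: testIsolation ? 'login' : 'skip' }
  }
  if (specPath.includes('/training/') || lists.customLogin.includes(specPath)) {
    // 2. Training portal + custom login: fully self-managed.
    return { before: 'skip', beforeEach: 'skip' }
  }
  if (lists.beforeLoginDemo.includes(specPath)) {
    // 3. Demo specs that log in once per spec.
    return { before: 'loginDemo', beforeEach: 'skip' }
  }
  if (lists.beforeEachLoginDemo.includes(specPath)) {
    // 4. Demo specs with testIsolation: true -> login per test.
    return { before: 'skip', beforeEach: 'loginDemo' }
  }
  if (lists.loginDemoSpecial.includes(specPath)) {
    // 5. Special demo specs: override config, then login.
    return { before: 'skip', beforeEach: 'overrideConfigThenLoginDemo' }
  }
  // 6. Default specs: the most important logic.
  return { before: 'login', beforeEach: testIsolation ? 'login' : 'skip' }
}
```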

&lt;p&gt;&lt;strong&gt;The key insight: why before AND beforeEach?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You might be asking, "Are you mental? Why on earth would you declare login TWICE?" Basically, because:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;before()&lt;/code&gt; → establishes the initial session (runs once)&lt;br&gt;
&lt;code&gt;beforeEach()&lt;/code&gt; → re-validates/restores the session if testIsolation cleared it&lt;/p&gt;

&lt;p&gt;For specs with testIsolation: false, beforeEach() is a no-op (or returns early) because the session is still alive. For specs with testIsolation: true, beforeEach() must re-run the login to restore the cleared session — but it uses cy.session() internally which caches credentials.&lt;/p&gt;
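&lt;p&gt;The caching is the subtle part. Here's a toy model of what "caches credentials" means; this is a simplification, since &lt;code&gt;cy.session()&lt;/code&gt; actually snapshots and restores browser state keyed by a session id:&lt;/p&gt;

```javascript
// Toy model of cy.session() caching: the setup callback runs only
// on a cache miss; repeat calls with the same id restore the cached
// session instead of logging in again.
function makeSessionStore() {
  const cache = new Map()
  let setupRuns = 0
  function session(id, setup) {
    if (!cache.has(id)) {
      cache.set(id, setup())
      setupRuns += 1
    }
    return cache.get(id)
  }
  return { session, getSetupRuns: () => setupRuns }
}

const store = makeSessionStore()
store.session('default-user', () => 'cookie-A') // cache miss: setup runs
store.session('default-user', () => 'cookie-A') // cache hit: restored
```

&lt;p&gt;This is also why letting the global hooks cache the wrong credentials first is so costly: any later call with the same session id restores the wrong session.&lt;/p&gt;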

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4p9u788bh4lr91levn43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4p9u788bh4lr91levn43.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's a sneak peek of how my &lt;code&gt;e2e.ts&lt;/code&gt; file looks in its (hopefully) final form:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdf703lpxrsubqex85z3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdf703lpxrsubqex85z3.png" alt=" " width="800" height="782"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And I removed the &lt;code&gt;cy.login&lt;/code&gt; call from more than 1k files in my codebase. Neither I nor any other engineer has to worry about login being declared at the test level anymore; the support file handles everything now, as it should.&lt;/p&gt;

&lt;p&gt;What about you? Have you ever had to deal with a challenging, complex test codebase? What was the most difficult change you had to make? Leave me a comment; I would love to hear about it!&lt;/p&gt;

</description>
      <category>cypress</category>
      <category>e2etesting</category>
      <category>typescript</category>
      <category>login</category>
    </item>
    <item>
      <title>Cypress in the Age of AI Agents: Orchestration, Trust, and the Tests That Run Themselves</title>
      <dc:creator>Vladimir Mikhalev</dc:creator>
      <pubDate>Thu, 26 Feb 2026 11:33:21 +0000</pubDate>
      <link>https://forem.com/cypress/cypress-in-the-age-of-ai-agents-orchestration-trust-and-the-tests-that-run-themselves-43go</link>
      <guid>https://forem.com/cypress/cypress-in-the-age-of-ai-agents-orchestration-trust-and-the-tests-that-run-themselves-43go</guid>
      <description>&lt;p&gt;Last year, I wrote about &lt;a href="https://dev.to/cypress/docker-cypress-in-2025-how-ive-perfected-my-e2e-testing-setup-4f7j"&gt;Docker and Cypress&lt;/a&gt; for this blog. It covered containers, layer caching, and parallel runners. Good stuff. Useful stuff.&lt;/p&gt;

&lt;p&gt;But I'm not writing that article again.&lt;/p&gt;

&lt;p&gt;Here's why.&lt;/p&gt;

&lt;p&gt;I could write a perfect container config in my sleep. So could Claude. So could GPT. So could any intern with a prompt. &lt;strong&gt;Syntax has become a commodity.&lt;/strong&gt; The Dockerfile isn't the hard part anymore.&lt;/p&gt;

&lt;p&gt;The hard part?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Orchestration and trust when AI agents run the tests.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let me explain.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Shift Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;In 2025, Cypress shipped &lt;code&gt;cy.prompt()&lt;/code&gt;. Write tests in plain English. The AI figures out the selectors. It even self-heals when your UI changes.&lt;/p&gt;

&lt;p&gt;That's powerful. And that's dangerous.&lt;/p&gt;

&lt;p&gt;Not because the tool is bad. It's genuinely impressive. But because it changes &lt;strong&gt;who is making decisions&lt;/strong&gt; in your pipeline. And most teams haven't thought about that.&lt;/p&gt;

&lt;p&gt;Before &lt;code&gt;cy.prompt()&lt;/code&gt;, the chain of trust was simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A human wrote the test&lt;/li&gt;
&lt;li&gt;A human reviewed it&lt;/li&gt;
&lt;li&gt;CI ran it&lt;/li&gt;
&lt;li&gt;If it failed, a human fixed it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every link in that chain had a name attached.&lt;/p&gt;

&lt;p&gt;Now?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AI writes the test&lt;/li&gt;
&lt;li&gt;An AI picks the selectors&lt;/li&gt;
&lt;li&gt;An AI heals the test when it breaks&lt;/li&gt;
&lt;li&gt;The human sees green checkmarks&lt;/li&gt;
&lt;li&gt;Everybody ships&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Until something goes wrong. And nobody knows why.&lt;/p&gt;




&lt;h2&gt;
  
  
  Autonomy vs. Augmentation: The Framework That Matters
&lt;/h2&gt;

&lt;p&gt;The industry keeps confusing two very different things.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Autonomy&lt;/strong&gt; means the agent acts &lt;em&gt;for you&lt;/em&gt;. You find out later what happened.&lt;br&gt;
Think: self-driving car. You're the passenger. The AI makes every turn.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Augmentation&lt;/strong&gt; means the agent helps &lt;em&gt;you decide&lt;/em&gt;. You still make the call.&lt;br&gt;
Think: GPS navigation. It suggests the route. You drive.&lt;/p&gt;

&lt;p&gt;Most AI testing tools sell autonomy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Never write a test again!"&lt;/li&gt;
&lt;li&gt;"Self-healing pipelines!"&lt;/li&gt;
&lt;li&gt;"Zero maintenance!"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That sounds great in a demo.&lt;/p&gt;

&lt;p&gt;It falls apart in production.&lt;/p&gt;

&lt;p&gt;Google's testing team found that 1.5% of all test runs were flaky (2016 study). Nearly 16% of tests showed some flakiness over time. Microsoft reported 49,000 flaky tests across 100+ product teams (2022). These numbers haven't gotten better. Now imagine those tests were written by AI.&lt;/p&gt;

&lt;p&gt;You don't have a testing problem.&lt;/p&gt;

&lt;p&gt;You have a &lt;strong&gt;trust problem&lt;/strong&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  What Actually Happens When AI Writes Your Cypress Tests
&lt;/h2&gt;

&lt;p&gt;I've watched AI code assistants generate test suites. Here's the pattern I see every time:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day one:&lt;/strong&gt; Beautiful. High coverage numbers. Clean syntax. The PR merges fast. Everyone celebrates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week two:&lt;/strong&gt; A UI change breaks three tests. The self-healing kicks in. Tests pass again. Nobody checks what changed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month two:&lt;/strong&gt; The self-healed selectors are now targeting the wrong elements. The tests pass. But they're testing the wrong things. Your coverage number says 90%. Your real coverage is closer to 40%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quarter end:&lt;/strong&gt; A production bug ships. The test suite was green. The post-mortem reveals the AI "healed" a critical login test. It now clicks a decorative button instead of the submit button. Both are blue. Both say "Continue."&lt;/p&gt;

&lt;p&gt;The AI didn't fail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The architecture failed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nobody designed a system where AI decisions get verified.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Architecture Cypress Teams Actually Need
&lt;/h2&gt;

&lt;p&gt;Here's the playbook I'd build for any team using Cypress with AI in 2026.&lt;/p&gt;


&lt;h3&gt;
  
  
  Layer 1: AI Generates, Humans Gate
&lt;/h3&gt;

&lt;p&gt;Use &lt;code&gt;cy.prompt()&lt;/code&gt; (or any AI tool) to draft tests. That's the accelerator.&lt;/p&gt;

&lt;p&gt;But treat AI-generated tests like pull requests from a junior developer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// cy.prompt() generates the test&lt;/span&gt;
&lt;span class="nx"&gt;cy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Visit the login page&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Type admin@company.com into the email field&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Type the password into the password field&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Click the sign in button&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Verify the dashboard loads&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then eject that code. Review the selectors. Commit the explicit version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// The reviewed, committed version&lt;/span&gt;
&lt;span class="nx"&gt;cy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;visit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/login&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;cy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[data-cy=email]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;type&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;admin@company.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;cy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[data-cy=password]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;type&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Cypress&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;env&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;TEST_PASSWORD&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="nx"&gt;cy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[data-cy=submit-login]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;click&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nx"&gt;cy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;url&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;should&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;include&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/dashboard&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;cy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[data-cy=welcome-banner]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;should&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;be.visible&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AI got you there faster. A human verified the result.&lt;/p&gt;

&lt;p&gt;That's augmentation.&lt;/p&gt;




&lt;h3&gt;
  
  
  Layer 2: The Trust Boundary in CI
&lt;/h3&gt;

&lt;p&gt;Your pipeline needs a clear line:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On one side: things AI can do alone&lt;/li&gt;
&lt;li&gt;On the other: things that need human eyes
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# GitHub Actions - Trust Architecture&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ai-generated-tests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v6&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run AI-assisted test suite&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;docker compose -f docker-compose.cypress.yml up \&lt;/span&gt;
            &lt;span class="s"&gt;--abort-on-container-exit \&lt;/span&gt;
            &lt;span class="s"&gt;--exit-code-from cypress&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Validate no self-healed selectors&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;# Flag any tests that healed since last commit&lt;/span&gt;
          &lt;span class="s"&gt;# Note: Requires a custom script to parse&lt;/span&gt;
          &lt;span class="s"&gt;# Cypress Cloud API or stdout logs&lt;/span&gt;
          &lt;span class="s"&gt;node ./scripts/check-healed-tests.js&lt;/span&gt;
          &lt;span class="s"&gt;# If selectors changed, block the merge&lt;/span&gt;
          &lt;span class="s"&gt;# Force a human review&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Screenshot diff on healed tests&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;failure()&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;# Capture what the AI "fixed"&lt;/span&gt;
          &lt;span class="s"&gt;# Attach to PR for human review&lt;/span&gt;
          &lt;span class="s"&gt;npx cypress run --spec "healed-tests/**" \&lt;/span&gt;
            &lt;span class="s"&gt;--config screenshotOnRunFailure=true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key:&lt;/p&gt;

&lt;p&gt;Self-healed tests don't auto-merge. They create a review request. A human looks at what changed. Then decides.&lt;/p&gt;
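&lt;p&gt;The &lt;code&gt;check-healed-tests.js&lt;/code&gt; script referenced above has to be custom-built today. A minimal sketch, assuming healing events can be grepped from captured run output (the "Self-Healed" marker format is an assumption, not a documented Cypress log format):&lt;/p&gt;

```javascript
// scripts/check-healed-tests.js (sketch) -- scan captured Cypress
// output for healing markers and flag them so CI can block the merge.
function findHealedTests(output) {
  const healed = []
  for (const line of output.split('\n')) {
    if (line.includes('Self-Healed')) healed.push(line.trim())
  }
  return healed
}

// Example against a captured log (the log format is hypothetical):
const log = [
  'passing: login works',
  'Self-Healed: selector .btn-primary replaced by .btn-action',
].join('\n')

const healed = findHealedTests(log)
const shouldBlockMerge = healed.length > 0
if (shouldBlockMerge) {
  console.error('Healed tests detected; human review required:')
  healed.forEach((line) => console.error('  ' + line))
  // In CI: set process.exitCode = 1 so the job fails and blocks auto-merge.
}
```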




&lt;h3&gt;
  
  
  Layer 3: The Accountability Layer
&lt;/h3&gt;

&lt;p&gt;Every AI decision in your pipeline needs a log.&lt;/p&gt;

&lt;p&gt;Not just "test passed."&lt;/p&gt;

&lt;p&gt;But: "test healed selector from &lt;code&gt;.btn-primary&lt;/code&gt; to &lt;code&gt;.btn-action&lt;/code&gt; on Feb 15."&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// cypress.config.js&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;e2e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;experimentalPromptCommand&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nf"&gt;setupNodeEvents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;after:spec&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Parse Cypress stdout or Cloud API for healing events.&lt;/span&gt;
        &lt;span class="c1"&gt;// Self-healing data appears in the Command Log&lt;/span&gt;
        &lt;span class="c1"&gt;// but isn't yet exposed in results.stats.&lt;/span&gt;
        &lt;span class="c1"&gt;//&lt;/span&gt;
        &lt;span class="c1"&gt;// Option A: Parse terminal output for "Self-Healed" tags&lt;/span&gt;
        &lt;span class="c1"&gt;// Option B: Query Cypress Cloud API for spec run details&lt;/span&gt;
        &lt;span class="c1"&gt;// Option C: Build a custom Cypress plugin that listens&lt;/span&gt;
        &lt;span class="c1"&gt;//           to command events during the run&lt;/span&gt;

        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;healingEvents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;parseHealingFromLogs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;healingEvents&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nf"&gt;logToAuditTrail&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;healed&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;healingEvents&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toISOString&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="na"&gt;details&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;healingEvents&lt;/span&gt;
          &lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When something breaks in production, you can trace it back:&lt;/p&gt;

&lt;p&gt;"The AI changed this selector on this date. Nobody reviewed it. That's the gap."&lt;/p&gt;

&lt;p&gt;Without this layer, your pipeline is a black box.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Green doesn't mean correct. It means unchallenged.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Layer 4: Docker as the Trust Container
&lt;/h3&gt;

&lt;p&gt;Docker isn't just for consistency anymore.&lt;/p&gt;

&lt;p&gt;It's your isolation boundary for AI-generated tests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# docker-compose.cypress.yml&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;cypress-human-authored&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile.cypress&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="s"&gt;npx cypress run&lt;/span&gt;
      &lt;span class="s"&gt;--spec "cypress/e2e/human-authored/**"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./results/human:/results&lt;/span&gt;

  &lt;span class="na"&gt;cypress-ai-generated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile.cypress&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="s"&gt;npx cypress run&lt;/span&gt;
      &lt;span class="s"&gt;--spec "cypress/e2e/ai-generated/**"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./results/ai:/results&lt;/span&gt;
    &lt;span class="c1"&gt;# AI tests run in a separate container&lt;/span&gt;
    &lt;span class="c1"&gt;# Different reporting, different trust level&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Separate the results. Report them differently.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Human-authored tests&lt;/strong&gt; are your source of truth&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI-generated tests&lt;/strong&gt; are your early warning system&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When both agree: high confidence.&lt;br&gt;
When they disagree: investigate.&lt;br&gt;
When only AI tests pass: be suspicious.&lt;/p&gt;
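&lt;p&gt;That triage rule is simple enough to encode next to the reporting step. An illustrative helper (parsing the two result folders into pass/fail booleans is left out):&lt;/p&gt;

```javascript
// Turn the two suite outcomes into the triage verdict described above.
// Both suites failing also warrants a look, so it maps to 'investigate'.
function verdict(humanSuiteGreen, aiSuiteGreen) {
  if (humanSuiteGreen) {
    return aiSuiteGreen ? 'high confidence' : 'investigate'
  }
  // Human-authored tests failed.
  return aiSuiteGreen ? 'be suspicious' : 'investigate'
}
```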




&lt;h2&gt;
  
  
  The Uncomfortable Question
&lt;/h2&gt;

&lt;p&gt;Here's where I need to be honest.&lt;/p&gt;

&lt;p&gt;I've been in tech for 20 years, and spent the last 15 building delivery pipelines. I can debug a failing Docker container at 2 AM with my eyes half closed. I've configured CI/CD systems that run thousands of tests across dozens of services.&lt;/p&gt;

&lt;p&gt;And I'm watching AI tools do parts of that job faster than I can.&lt;/p&gt;

&lt;p&gt;That's not a threat.&lt;/p&gt;

&lt;p&gt;That's a signal.&lt;/p&gt;

&lt;p&gt;The value isn't in writing the &lt;code&gt;cy.get()&lt;/code&gt; selector anymore.&lt;/p&gt;

&lt;p&gt;The value is in designing the system where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI-generated selectors get verified&lt;/li&gt;
&lt;li&gt;Self-healing gets audited&lt;/li&gt;
&lt;li&gt;Trust has a paper trail&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Executor writes the test.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Architect designs the trust system.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most teams are building AI-powered testing without building AI-accountable testing. They're adding speed without adding trust.&lt;/p&gt;

&lt;p&gt;That's technical debt with a new name.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I'd Do This Week
&lt;/h2&gt;

&lt;p&gt;If I ran a Cypress team today, here's my Monday morning plan:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Separate your test suites.&lt;/strong&gt; Human-authored in one folder. AI-generated in another. Track them separately.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Add an audit log for self-healing.&lt;/strong&gt; Every time &lt;code&gt;cy.prompt()&lt;/code&gt; (or any AI tool) changes a selector, log it. Make it visible.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Block auto-merge on healed tests.&lt;/strong&gt; Self-healed tests go into a review queue. A human approves. Every time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Run AI tests in a separate Docker container.&lt;/strong&gt; Different reporting pipeline. Compare results against human-authored tests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Measure real coverage.&lt;/strong&gt; Not line coverage. Not selector coverage. "Does this test actually verify the behavior we care about?" AI can inflate coverage numbers without testing anything meaningful.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;None of this is anti-AI.&lt;/p&gt;

&lt;p&gt;All of this is pro-trust.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;Cypress + AI is the future. I believe that. &lt;code&gt;cy.prompt()&lt;/code&gt; is a genuine leap forward.&lt;/p&gt;

&lt;p&gt;The ability to write tests in plain English, the self-healing, the lower barrier to entry — all of it matters.&lt;/p&gt;

&lt;p&gt;But the teams that win won't be the ones who automate the most.&lt;/p&gt;

&lt;p&gt;They'll be the ones who &lt;strong&gt;trust the right things&lt;/strong&gt; and &lt;strong&gt;verify everything else.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The bot that ships the wrong build doesn't get fired. You do.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Design accordingly.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.cypress.io/api/commands/prompt" rel="noopener noreferrer"&gt;cy.prompt() Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.cypress.io/app/continuous-integration/overview" rel="noopener noreferrer"&gt;Cypress Docker Images&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://testing.googleblog.com/2016/05/flaky-tests-at-google-and-how-we.html" rel="noopener noreferrer"&gt;Google: Flaky Tests and How We Mitigate Them (2016)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devblogs.microsoft.com/engineering-at-microsoft/improving-developer-productivity-via-flaky-test-management/" rel="noopener noreferrer"&gt;Microsoft: Flaky Test Management (2022)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Valdemar is a Docker Captain and Cypress Ambassador based in Canada. He builds CI/CD pipelines that don't lie to you. Find him at &lt;a href="https://valdemar.ai" rel="noopener noreferrer"&gt;valdemar.ai&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cypress</category>
      <category>ai</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Leading Quality Through Change: Balancing Speed, AI, and the Fundamentals That Matter</title>
      <dc:creator>Ronald Williams</dc:creator>
      <pubDate>Thu, 29 Jan 2026 16:41:46 +0000</pubDate>
      <link>https://forem.com/cypress/leading-quality-through-change-balancing-speed-ai-and-the-fundamentals-that-matter-oi7</link>
      <guid>https://forem.com/cypress/leading-quality-through-change-balancing-speed-ai-and-the-fundamentals-that-matter-oi7</guid>
      <description>&lt;p&gt;As software delivery accelerates and AI-driven tooling reshapes how teams approach testing, many QA leaders are facing the same challenge: how to evolve quality practices without losing the fundamentals that keep teams effective, scalable, and trusted.&lt;/p&gt;

&lt;p&gt;This tension shows up in real leadership decisions every day: framework selection, automation trade-offs, skill development, and responsible adoption of emerging tools. The conversations are rarely just about tools. They are about judgment, mindset, and how to guide teams through constant change without sacrificing long-term quality.&lt;/p&gt;

&lt;p&gt;In this reflection, Lyle Smart, Director of Quality Assurance and Test Automation (SDET) at &lt;a href="https://www.continued.com/" rel="noopener noreferrer"&gt;Continued&lt;/a&gt;, shares a perspective shaped by real-world leadership decisions. His experience offers practical guidance for QA and engineering leaders navigating speed, complexity, and sustainability in today’s delivery landscape.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When a technical decision becomes a leadership one&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One defining moment for Lyle came during a new platform build, when he was faced with choosing a test automation framework.&lt;/p&gt;

&lt;p&gt;On the surface, the decision appeared technical. In reality, it carried significant leadership weight.&lt;/p&gt;

&lt;p&gt;That choice would influence how the team collaborated, how quickly engineers could onboard, and how quality practices would scale over time. For Lyle, the decision was less about picking the “best” framework and more about setting the foundation for how the team would work and grow together.&lt;/p&gt;

&lt;p&gt;Framework decisions shape culture. They signal what a team values, how approachable quality is for new contributors, and whether testing becomes a shared responsibility or a bottleneck.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Excitement, innovation, and the need for discipline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Lyle describes QA leadership today as genuinely exciting. The pace of innovation, especially with AI, has opened up new possibilities for how teams think about testing and quality engineering.&lt;/p&gt;

&lt;p&gt;At the same time, strong fundamentals still matter.&lt;/p&gt;

&lt;p&gt;From his perspective, leading QA requires balancing innovation with discipline. Skilled QA professionals remain essential to guide quality decisions, apply context, and ensure tools are used intentionally rather than for novelty. AI can accelerate workflows, but it cannot replace judgment.&lt;/p&gt;

&lt;p&gt;The role of QA leadership is increasingly about knowing when to lean into new capabilities and when to slow down and ask harder questions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What really keeps QA leaders up at night&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For leaders like Lyle, the biggest concern is not adopting the next tool. It is ensuring the department has the right mix of skills, both technical and interpersonal, to succeed in the future. This matters at a leadership level because it directly affects sustainability.&lt;/p&gt;

&lt;p&gt;Teams need more than expertise with tools. They need communication skills, critical thinking, and the ability to adapt as systems, products, and expectations evolve. Without those skills, even the most advanced tooling can become a liability rather than an advantage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fast adoption, thoughtful use&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Lyle points to real-world examples where new capabilities required careful leadership, not just enthusiasm. When new features such as &lt;code&gt;cy.prompt()&lt;/code&gt; were introduced, adoption needed to be fast but thoughtful. The challenge was ensuring teams understood not only how the feature worked, but when it should be used and when it should not.&lt;br&gt;
As a leader, he felt responsible for helping the team avoid unnecessary complexity or misuse that could reduce effectiveness instead of improving it. Clear guidance, shared standards, and open conversations became just as important as documentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Slowing down to move forward&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These experiences shaped how Lyle approaches leadership today. Pressure can push teams toward fast solutions, but rushed quality decisions often create more work later. He now places greater emphasis on evaluating broader impact and long-term consequences, especially when introducing new tools or practices.&lt;br&gt;
Slowing down is not resistance to progress. It is a way to protect teams from churn, burnout, and fragile systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A final reflection for QA and engineering leaders&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If Lyle could leave other QA or engineering leaders with one reflection, it would be this:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Keep learning. The future is exciting, and new tools and skills are essential. Just do not lose sight of the core principles of quality that make those tools effective in the first place.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you are looking for leadership content that keeps you and your team ahead of the curve, &lt;a href="https://cypress.registration.goldcast.io/events/670deb6c-06ee-4ce8-858b-8a4db3a62eb1?utm_source=dev_to&amp;amp;utm_medium=lyle_leadership_blog&amp;amp;utm_campaign=cypressconf2026&amp;amp;utm_term=01-29-2026&amp;amp;utm_content=cypressconf" rel="noopener noreferrer"&gt;register for CypressConf 2026&lt;/a&gt; and learn from industry leaders defining success in modern software development.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>leadership</category>
      <category>testing</category>
    </item>
    <item>
      <title>🚀 Enhancing Cypress Test Stability and Retry Capabilities</title>
      <dc:creator>Laerte Neto</dc:creator>
      <pubDate>Tue, 13 Jan 2026 15:31:18 +0000</pubDate>
      <link>https://forem.com/cypress/enhancing-cypress-test-stability-and-retry-capabilities-1497</link>
      <guid>https://forem.com/cypress/enhancing-cypress-test-stability-and-retry-capabilities-1497</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;cypress-retry-after-run&lt;/code&gt; plugin brings a smart way to &lt;strong&gt;rerun only the tests that failed in a previous Cypress run&lt;/strong&gt;, saving pipeline time, infrastructure resources, and frustration when dealing with flaky tests. It was designed mainly for real-world teams that run large suites in CI/CD and do not want to pay the price of re-executing the entire test set just because a handful of tests were unstable.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem this plugin solves
&lt;/h2&gt;

&lt;p&gt;In modern QA pipelines, it is very common to have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Large suites that take minutes (or hours) to run and consume a lot of CI resources.&lt;/li&gt;
&lt;li&gt;Intermittent tests (flaky tests) that fail occasionally due to environment instability, networking, data issues, and so on.&lt;/li&gt;
&lt;li&gt;A real need to isolate and rerun only what failed, instead of manually re-running everything or retrying on demand, since test data can become corrupted at runtime.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Native Cypress retries will retry the test within the same execution, but that is not always what you want. In many pipelines, you first want to run the full suite, then do something (deploy fresh data, restart a service, clean up the environment), and &lt;strong&gt;only then&lt;/strong&gt; trigger a new execution focused exclusively on the failures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core idea of &lt;code&gt;cypress-retry-after-run&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The plugin implements a two-step flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;During the normal run, it &lt;strong&gt;listens to test execution and records failed tests&lt;/strong&gt; into a &lt;code&gt;.cypress-failures.json&lt;/code&gt; file at the project root. &lt;/li&gt;
&lt;li&gt;Then you trigger a CLI command (&lt;code&gt;cypress-retry&lt;/code&gt;) that reads this file, uses &lt;code&gt;@cypress/grep&lt;/code&gt; under the hood, and starts a new Cypress run that &lt;strong&gt;executes only the tests that failed before&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, the plugin turns the workflow into “run once, record failures, and rerun only what matters.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Concrete benefits for CI/CD
&lt;/h2&gt;

&lt;p&gt;For CI/CD pipelines, &lt;code&gt;cypress-retry-after-run&lt;/code&gt; delivers very tangible advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Time savings&lt;/strong&gt;: instead of running the entire suite twice, the second run will usually be much smaller, focused only on the failed specs and cases.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lower infrastructure cost&lt;/strong&gt;: fewer runner minutes, fewer containers, less CPU and memory usage on shared environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More focused feedback&lt;/strong&gt;: you quickly get a clean “retry run” that shows only the behavior of the failed tests, which helps distinguish real bugs from pure flakiness (whether caused by the tests themselves, the environment, or bad data) and makes debugging much faster, especially in large suites.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This pattern is especially useful for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pipelines that run multiple times a day.
&lt;/li&gt;
&lt;li&gt;Monorepos with dozens or hundreds of specs (which was my case, by the way).&lt;/li&gt;
&lt;li&gt;Quality gates that only block merges if failures persist even after a dedicated retry run.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How the plugin works under the hood
&lt;/h2&gt;

&lt;p&gt;The internal logic is simple but powerful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Execution hook:&lt;/strong&gt; the plugin plugs into Cypress via &lt;code&gt;setupNodeEvents&lt;/code&gt; and listens to information about failed tests as the run progresses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistence:&lt;/strong&gt; at the end of the run, it writes a &lt;code&gt;.cypress-failures.json&lt;/code&gt; file with identifiers of the tests/specs that failed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smart CLI:&lt;/strong&gt; the &lt;code&gt;cypress-retry&lt;/code&gt; command reads this file, builds the proper filters, and starts a new Cypress execution using &lt;code&gt;@cypress/grep&lt;/code&gt; to run only the relevant tests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Effectively, you get a &lt;strong&gt;selective replay&lt;/strong&gt; of the failing tests, fully automated and integrated into your normal Cypress workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Easy installation
&lt;/h2&gt;

&lt;p&gt;Installation follows the standard pattern for modern Cypress plugins, with no extra friction.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With npm:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;npm install --save-dev cypress-retry-after-run @cypress/grep&lt;/code&gt;. &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;With yarn:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;yarn add -D cypress-retry-after-run @cypress/grep&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;The only additional requirement is &lt;code&gt;@cypress/grep&lt;/code&gt;, which the plugin uses to filter tests on the retry run, so it is installed alongside the plugin in a single command.&lt;/p&gt;

&lt;h2&gt;
  
  
  JavaScript and TypeScript configuration
&lt;/h2&gt;

&lt;p&gt;This plugin can be used in both JS and TS projects. Refer to the plugin's official npm link to get the full instructions on how to set it up and how to run it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/cypress-retry-after-run" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/cypress-retry-after-run&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Pipeline and automation integration
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;cypress-retry-after-run&lt;/code&gt; fits naturally into any CI/CD pipeline design.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt; – Main run:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A standard job that runs the full suite (or whatever subset you choose) and generates &lt;code&gt;.cypress-failures.json&lt;/code&gt; if there are failures.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step 2 (Optional)&lt;/strong&gt; – Perform any environment operation you need, such as cleaning a database or restarting a service.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt; – Automated retry:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A second job/step that executes &lt;code&gt;npm run retry&lt;/code&gt; (or an equivalent command you created) only if the failures file exists and/or if the previous job failed.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This design enables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Conditional pipelines (retry only if there were actual failures).
&lt;/li&gt;
&lt;li&gt;Richer monitoring (separate dashboards for the full run and the retry run).
&lt;/li&gt;
&lt;li&gt;Smarter alerting (a test that still fails even after a dedicated retry can trigger a stronger alert or block a merge).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why this plugin stands out
&lt;/h2&gt;

&lt;p&gt;A few things make &lt;code&gt;cypress-retry-after-run&lt;/code&gt; stand out in the Cypress ecosystem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built from real QA pain points: the “run everything, fix the environment, then rerun only failures” flow comes directly from production CI/CD needs.&lt;/li&gt;
&lt;li&gt;Native integration with &lt;code&gt;@cypress/grep&lt;/code&gt;: instead of reinventing filtering, it relies on a widely used community library, staying aligned with the Cypress ecosystem.&lt;/li&gt;
&lt;li&gt;Minimal configuration: just a few lines in &lt;code&gt;cypress.config&lt;/code&gt; and in the support file are enough to adopt it in both new and legacy projects.&lt;/li&gt;
&lt;li&gt;Lightweight and focused: small package, no unnecessary dependencies, easy to drop into any repository without bloating your project.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For teams that already care deeply about automation quality and are tired of flakiness and wasted CI resources, &lt;code&gt;cypress-retry-after-run&lt;/code&gt; is a strong ally to make pipelines more efficient, predictable, and truly professional.&lt;/p&gt;

&lt;p&gt;You can find the plugin, and a LinkedIn post summarizing everything covered here, at the links below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/cypress-retry-after-run" rel="noopener noreferrer"&gt;Npm Official plugin page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/feed/update/urn:li:activity:7407547409788215296/" rel="noopener noreferrer"&gt;Linkedin post&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>cypress</category>
      <category>automation</category>
      <category>qa</category>
      <category>ci</category>
    </item>
    <item>
      <title>Triggering Cypress End-to-End Tests Manually on Different Browsers with GitHub Actions</title>
      <dc:creator>Talking About Testing</dc:creator>
      <pubDate>Fri, 19 Dec 2025 19:29:32 +0000</pubDate>
      <link>https://forem.com/cypress/triggering-cypress-end-to-end-tests-manually-on-different-browsers-with-github-actions-223i</link>
      <guid>https://forem.com/cypress/triggering-cypress-end-to-end-tests-manually-on-different-browsers-with-github-actions-223i</guid>
      <description>&lt;h2&gt;
  
  
  A Practical Guide to Cross-Browser Testing
&lt;/h2&gt;

&lt;p&gt;One of the most practical features of GitHub Actions is the ability to manually trigger workflows and pass parameters at runtime. This is especially useful for end-to-end (E2E) testing, where you may want to select which browser to run the tests against rather than hard-coding that choice.&lt;/p&gt;

&lt;p&gt;In this post, I'll walk you through a GitHub Actions workflow written in YAML that allows you to manually trigger Cypress tests and select the target browser directly from the GitHub UI.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Full Workflow
&lt;/h3&gt;

&lt;p&gt;Here's the workflow I'll be explaining:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;End-to-end tests 🧪&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;browser&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Browser&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;run&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;tests"&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;choice&lt;/span&gt;
        &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;chrome&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;chrome&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;edge&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;electron&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;firefox&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;safari&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;cypress-run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-24.04&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v6&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install WebKit system deps (Safari)&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.event.inputs.browser == 'safari' }}&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npx playwright install-deps webkit&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cypress run&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cypress-io/github-action@v6&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm run test:${{ github.event.inputs.browser }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Upload screenshots (selected browser, on failure)&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;failure()&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/upload-artifact@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;screenshots-${{ github.event.inputs.browser }}&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cypress/screenshots&lt;/span&gt;
          &lt;span class="na"&gt;if-no-files-found&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ignore&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Naming the Workflow
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;End-to-end tests 🧪&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the friendly name shown in the GitHub Actions UI. Adding an emoji is optional, but it makes workflows easier to scan—especially when you have many of them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Manual Trigger with &lt;code&gt;workflow_dispatch&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;workflow_dispatch&lt;/code&gt; event enables manual execution of the workflow. This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The workflow won't run automatically on &lt;code&gt;push&lt;/code&gt; or &lt;code&gt;pull_request&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;A "Run workflow" button will appear in the Actions tab&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is ideal for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ad-hoc test runs&lt;/li&gt;
&lt;li&gt;Debugging browser-specific issues&lt;/li&gt;
&lt;li&gt;Running tests before a release&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Defining Input Parameters
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;browser&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Browser&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;run&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;tests"&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;choice&lt;/span&gt;
    &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;chrome&lt;/span&gt;
    &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;chrome&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;edge&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;electron&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;firefox&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;safari&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the heart of the workflow.&lt;/p&gt;

&lt;h4&gt;
  
  
  What's happening here?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;browser&lt;/code&gt; is a required input&lt;/li&gt;
&lt;li&gt;The user must select one value from a predefined list&lt;/li&gt;
&lt;li&gt;The default option is &lt;code&gt;chrome&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On GitHub, this renders as a dropdown selector in the UI.&lt;/p&gt;

&lt;p&gt;This prevents invalid values and makes the workflow safer and more user-friendly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Defining the Job
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;cypress-run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-24.04&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The workflow has a single job called &lt;code&gt;cypress-run&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;It runs on the &lt;code&gt;ubuntu-24.04&lt;/code&gt; GitHub-hosted runner&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ubuntu runners are commonly used for Cypress because they're fast, stable, and well supported by the Cypress GitHub Action.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Checking Out the Code
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout&lt;/span&gt;
  &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v6&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This step pulls your repository code into the runner so that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cypress tests&lt;/li&gt;
&lt;li&gt;&lt;code&gt;package.json&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Configuration files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;are available during execution.&lt;/p&gt;

&lt;p&gt;This step is required in almost every CI workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Installing Safari (WebKit) Dependencies Conditionally
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install WebKit system deps (Safari)&lt;/span&gt;
  &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.event.inputs.browser == 'safari' }}&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npx playwright install-deps webkit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is an excellent example of conditional execution in GitHub Actions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why is this needed?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Cypress runs Safari tests via WebKit&lt;/li&gt;
&lt;li&gt;WebKit requires additional system dependencies on Linux&lt;/li&gt;
&lt;li&gt;These dependencies are unnecessary for other browsers&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What the condition does
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;if&lt;/code&gt; expression ensures that this step:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs only when &lt;code&gt;safari&lt;/code&gt; is selected&lt;/li&gt;
&lt;li&gt;Is skipped for all other browsers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This keeps the workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster&lt;/li&gt;
&lt;li&gt;Cleaner&lt;/li&gt;
&lt;li&gt;Easier to maintain&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Notes:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's worth mentioning that for Cypress to work with WebKit:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;experimentalWebKitSupport&lt;/code&gt; property has to be set to &lt;code&gt;true&lt;/code&gt; in the &lt;code&gt;cypress.config.js&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;playwright-webkit&lt;/code&gt; has to be installed as a dev dependency (e.g., &lt;code&gt;npm i playwright-webkit -D&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
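
&lt;p&gt;For reference, here is a minimal sketch of what that configuration could look like (&lt;code&gt;experimentalWebKitSupport&lt;/code&gt; is the documented flag; the rest of the skeleton is illustrative):&lt;/p&gt;

```javascript
// cypress.config.js - minimal sketch, not a full config.
// Enables the experimental WebKit support so `cypress run --browser webkit`
// works once `playwright-webkit` is installed as a dev dependency.
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    experimentalWebKitSupport: true,
    setupNodeEvents(on, config) {
      // node event listeners / tasks go here
    },
  },
})
```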

&lt;p&gt;&lt;strong&gt;Pro tip:&lt;/strong&gt; Make sure to version not only the &lt;code&gt;package.json&lt;/code&gt; file, but also the &lt;code&gt;package-lock.json&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Running Cypress with the Selected Browser
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cypress run&lt;/span&gt;
  &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cypress-io/github-action@v6&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm run test:${{ github.event.inputs.browser }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This step uses the official Cypress GitHub Action.&lt;/p&gt;

&lt;h4&gt;
  
  
  Key idea
&lt;/h4&gt;

&lt;p&gt;The browser input is injected dynamically into the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm run &lt;span class="nb"&gt;test&lt;/span&gt;:chrome
npm run &lt;span class="nb"&gt;test&lt;/span&gt;:firefox
npm run &lt;span class="nb"&gt;test&lt;/span&gt;:safari
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This implies that your &lt;code&gt;package.json&lt;/code&gt; contains scripts like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"test:chrome"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cypress run --browser chrome"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"test:firefox"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cypress run --browser firefox"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"test:safari"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cypress run --browser webkit"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern keeps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The workflow generic&lt;/li&gt;
&lt;li&gt;Browser-specific logic inside your project configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 4: Uploading Screenshots on Failure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Upload screenshots (selected browser, on failure)&lt;/span&gt;
  &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;failure()&lt;/span&gt;
  &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/upload-artifact@v4&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;screenshots-${{ github.event.inputs.browser }}&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cypress/screenshots&lt;/span&gt;
    &lt;span class="na"&gt;if-no-files-found&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ignore&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This step runs only if the job fails.&lt;/p&gt;

&lt;h4&gt;
  
  
  What it does
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Uploads Cypress screenshots as workflow artifacts&lt;/li&gt;
&lt;li&gt;Names the artifact based on the selected browser&lt;/li&gt;
&lt;li&gt;Avoids failing the workflow if no screenshots exist&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why this matters
&lt;/h4&gt;

&lt;p&gt;When an E2E test fails:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Screenshots provide visual context&lt;/li&gt;
&lt;li&gt;Browser-specific issues are easier to diagnose&lt;/li&gt;
&lt;li&gt;Artifacts are preserved for later analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This workflow demonstrates a clean and scalable way to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manually trigger Cypress tests&lt;/li&gt;
&lt;li&gt;Select the target browser at runtime&lt;/li&gt;
&lt;li&gt;Handle browser-specific dependencies&lt;/li&gt;
&lt;li&gt;Collect meaningful artifacts on failure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's a powerful pattern for teams that care about cross-browser confidence without overloading their CI pipelines with unnecessary runs.&lt;/p&gt;

&lt;p&gt;More than just automation, this approach puts control and observability back in the team's hands—exactly where quality belongs.&lt;/p&gt;




&lt;p&gt;For the complete implementation, take a look at the &lt;a href="https://github.com/wlsf82/cross-browser-testing-gha" rel="noopener noreferrer"&gt;cross-browser-testing-gha&lt;/a&gt; GitHub repository.&lt;/p&gt;




&lt;p&gt;Would you like to learn E2E Testing with Cypress from scratch until your tests are running on GitHub Actions and integrated with the Cypress Cloud?&lt;/p&gt;

&lt;p&gt;Consider subscribing to my course: "&lt;a href="https://www.udemy.com/course/cypress-from-zero-to-the-cloud/?referralCode=CABCDDFA5ADBB7BE2E1A" rel="noopener noreferrer"&gt;Cypress, from Zero to the Cloud&lt;/a&gt;."&lt;/p&gt;

</description>
      <category>testing</category>
      <category>cicd</category>
      <category>github</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to validate tables, rows or any content of an Excel file using Cypress</title>
      <dc:creator>Marcelo C.</dc:creator>
      <pubDate>Fri, 31 Oct 2025 11:43:24 +0000</pubDate>
      <link>https://forem.com/cypress/how-to-validate-a-content-of-an-xlsx-file-using-cypress-45da</link>
      <guid>https://forem.com/cypress/how-to-validate-a-content-of-an-xlsx-file-using-cypress-45da</guid>
      <description>&lt;p&gt;At the company I work for, we already have many test cases to validate a key behavior of our SaaS, which through the user downloads a table as an Excel file of the information needed. But there was a need to validate some edge cases, in which we also needed to validate that the content corresponds to what the table showed.&lt;/p&gt;

&lt;p&gt;This means that Cypress needs to deterministically validate rows, numbers, names, and even colors inside the Excel files produced by our user flows. After some research, we basically came upon two &lt;em&gt;Node.js&lt;/em&gt; libs: &lt;code&gt;@e965/xlsx&lt;/code&gt; and &lt;code&gt;exceljs&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;While &lt;code&gt;@e965/xlsx&lt;/code&gt; is mostly used for data content validation, as in asserting on JSON rows read straight from the sheet, &lt;code&gt;exceljs&lt;/code&gt; is more focused on style assertions, meaning checks like “is A1 light-green?”. Splitting the two concerns this way keeps tests readable and fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring the &lt;code&gt;@e965/xlsx&lt;/code&gt; library&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, wire up the configuration in Cypress. Hand the filesystem work off to Node with &lt;code&gt;cy.task()&lt;/code&gt;. It’s the official way to run filesystem code from Cypress tests: register tasks in &lt;code&gt;setupNodeEvents&lt;/code&gt; and they’ll return values back to your spec.&lt;/p&gt;

&lt;p&gt;Remember to also import the package in the config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//cypress.config.ts

const xlsx = require('@e965/xlsx');
...
...
...
      on('task', {
        async readExcelByPattern(pattern: string, timeoutMs = 15000) {
          const re = new RegExp(pattern);
          const end = Date.now() + timeoutMs;

          while (Date.now() &amp;lt; end) {
            const files = fs.readdirSync(downloadsDir).filter(f =&amp;gt; re.test(f) &amp;amp;&amp;amp; !f.endsWith('.crdownload') &amp;amp;&amp;amp; !f.endsWith('.tmp'));

            if (files.length) {
              const { fullPath } = files
                .map(f =&amp;gt; {
                  const fullPath = path.join(downloadsDir, f);
                  return { fullPath, mtime: fs.statSync(fullPath).mtimeMs };
                })
                .sort((a, b) =&amp;gt; b.mtime - a.mtime)[0];

              await sleep(200);

              const wb = xlsx.readFile(fullPath);
              const sheet = wb.Sheets[wb.SheetNames[0]];
              const data = xlsx.utils.sheet_to_json(sheet);
              return { fileName: path.basename(fullPath), data };
            }

            await sleep(300);
          }

          throw new Error(`No .xlsx file matching /${pattern}/ was found in "${downloadsDir}" within ${timeoutMs}ms`);
        },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see that &lt;code&gt;readExcelByPattern&lt;/code&gt; is the task we call to validate content such as rows, tables, and any other information inside the Excel file. You can then define it inside your test context and methods (or define it globally in &lt;code&gt;commands.ts&lt;/code&gt; if you plan to use it in many tests), but for a single test it should look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//my-testing-context.ts

  @step('Read downloaded excel values')
  readExcelDownloadedFile(
    pathToFile: string = 'excel_export/bugs/',
    fixture: string,
    fileName: string
  ): ExportPrintableReportContext&amp;lt;TParent&amp;gt; {
    cy.fixture(pathToFile + fixture).then((expected: any[]) =&amp;gt; {
      cy.task('readExcelByPattern', fileName).then(({ data }: { data: any[] }) =&amp;gt; {
        expect(data.length, 'Table length').to.equal(expected.length);
        expected.forEach((expectedRow, i) =&amp;gt; {
          const actualRow = data[i];
          Object.entries(expectedRow).forEach(([key, expectedValue]) =&amp;gt; {
            const actualValue = actualRow[key] === undefined ? null : actualRow[key];
            expect(actualValue, `Row ${i} - Column "${key}"`).to.equal(expectedValue);
          });
        });
      });
    });
    return this;
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, it's pretty straightforward: it loads a JSON fixture from 'fixtures/excel_export/bugs/' that already holds the values the Excel file should contain, asserts that the table lengths match, and then iterates over each expected row, comparing every column value against the file.&lt;/p&gt;
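
&lt;p&gt;Stripped of the Cypress and fixture plumbing, that comparison boils down to a row-by-row, column-by-column equality check. Here is a minimal standalone sketch of the same logic (the function name is illustrative, not part of the code above):&lt;/p&gt;

```javascript
// Standalone sketch of the row/column comparison, without the Cypress wrappers.
function compareRows(expected, actual) {
  const errors = [];
  if (actual.length !== expected.length) {
    errors.push(`Table length: expected ${expected.length}, got ${actual.length}`);
  }
  expected.forEach((expectedRow, i) => {
    const actualRow = actual[i] || {};
    Object.entries(expectedRow).forEach(([key, expectedValue]) => {
      // Missing cells come back as undefined; normalize them to null.
      const actualValue = actualRow[key] === undefined ? null : actualRow[key];
      if (actualValue !== expectedValue) {
        errors.push(`Row ${i} - Column "${key}": expected ${expectedValue}, got ${actualValue}`);
      }
    });
  });
  return errors; // an empty array means the sheet matches the fixture
}
```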

&lt;p&gt;And this is how it would look inside a test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { testContext } from '@/my-testing-context'

const testingContext = new testContext();

describe('example of reading excel files', () =&amp;gt; {
  it('case 1', () =&amp;gt; {
    // `tk` holds the fixture file name, defined elsewhere in the suite
    testingContext.readExcelDownloadedFile('excel_export/bugs/', tk, 'Excel.xlsx');
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our case, it checked 171 rows of file content and succeeded in 40 seconds.&lt;/p&gt;

&lt;p&gt;For the second part of this tutorial, I'll expand on how to validate Excel colors as well. Happy testing!&lt;/p&gt;

</description>
      <category>node</category>
      <category>javascript</category>
      <category>testing</category>
      <category>automation</category>
    </item>
    <item>
      <title>How to get most out of cy.prompt() - 6 tips and tricks for your new AI tool!</title>
      <dc:creator>Marcelo C.</dc:creator>
      <pubDate>Thu, 09 Oct 2025 13:54:06 +0000</pubDate>
      <link>https://forem.com/cypress/how-to-get-most-out-of-cyprompt-6-tips-and-tricks-for-your-new-ai-tool-425l</link>
      <guid>https://forem.com/cypress/how-to-get-most-out-of-cyprompt-6-tips-and-tricks-for-your-new-ai-tool-425l</guid>
      <description>&lt;p&gt;I know, I know, Cypress has just announced a game changing feature with &lt;a href="https://www.cypress.io/blog/cy-prompt-frequently-asked-questions" rel="noopener noreferrer"&gt;cy.prompt()&lt;/a&gt; that is going to change the way we test - or at least approach how we think of it. You're going to use Natural Language all the way to test your new app? Read through my recommendations then!&lt;/p&gt;

&lt;p&gt;As a Cypress Ambassador I was lucky enough to be using &lt;a href="https://dev.to/marcelo_sqe/how-cypress-will-revolutionize-the-use-of-ai-in-testing-with-cyprompt-5gm2-temp-slug-2006966?preview=9c4ef5e51e58155f38ad1dcbf9f431475aa6874a9ce433001e022fdfd3b97ca85fd89420f5701d27bf06a28628c31b13e4802cb49d6966bf50a7aaee"&gt;cy.prompt&lt;/a&gt; for the past few weeks, and here are a few tips to make your testing and usage go a bit more smoothly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1) Start your phrase with the action or assertion you want&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of giving it an instruction like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;When the page loads, check that the header is seen and then click on Create button&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It would be better to write:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Wait 8 seconds for the page to load&lt;/code&gt;&lt;br&gt;
&lt;code&gt;Assert that the header is visible&lt;/code&gt;&lt;br&gt;
&lt;code&gt;Click on create button&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now Cypress will translate your instructions more easily - a hardcoded &lt;em&gt;wait&lt;/em&gt;, followed by an &lt;em&gt;assertion&lt;/em&gt;, followed by a &lt;em&gt;click&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2) Try to separate instructions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The previous tip gave it away already! Cypress's prompt works like any other LLM: give it clear instructions about what you want to do and it'll have a better chance of executing them.&lt;/p&gt;

&lt;p&gt;Do not mix assertions, force clicks, and reloads in the same line of action! The prompt needs to get through, so try to act like a &lt;em&gt;prompt engineer&lt;/em&gt;: step by step you'll get there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3) You can have up to 20 steps for each prompt you execute&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;20 is the limit, OK. But that doesn't mean you need 20 steps in each prompt. Also, the more steps you add, the more prone it is to ask for clarifications or make mistakes.&lt;/p&gt;

&lt;p&gt;Think of it this way: each plain-English line you introduce is an abstraction layer, right? Do you want an over-complicated test, or an easy-to-read, understandable test (especially for non-developers)?&lt;/p&gt;

&lt;p&gt;Less is better in some cases!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxcdw6hj3hgfmfmt31sfs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxcdw6hj3hgfmfmt31sfs.png" alt=" " width="800" height="686"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4) Leave some tests in prompt form in order to catch flaky behavior&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Got a new feature? Want to avoid brittleness in your E2E tests? Want to check for any weird behavior here and there? Then cy.prompt is the way to go! You can always leave your tests in plain English to see whether the BDD/TDD behavior stays the same.&lt;/p&gt;

&lt;p&gt;Remember: it works both locally and in CI - but it only supports Chrome and Chromium-based browsers (Edge/Electron). Any others are out (sorry, Firefox!). Leave it in your CI in prompt form for a few days or weeks and see what happens.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5) Portability first?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For me, any test written in plain English has a natural advantage over the others: it doesn't need to be refactored into any other programming language. So if you already know that the application you're working on is starting to be ported to another modern framework, leave your tests in prompt format. Your devs will appreciate it!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6) It's always cached&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Another advantage of cy.prompt is that once it runs, it caches the steps in order to avoid further LLM interaction. But if you change one line in your prompt - wait 15 seconds instead of 8, for example - it will execute all over again.&lt;/p&gt;

&lt;p&gt;Keep this in mind to keep your tests fast and reliable!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>testing</category>
      <category>javascript</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How Cypress will revolutionize the use of AI in testing with cy.prompt()</title>
      <dc:creator>Marcelo C.</dc:creator>
      <pubDate>Thu, 09 Oct 2025 10:28:22 +0000</pubDate>
      <link>https://forem.com/cypress/how-cypress-will-revolutionize-the-use-of-ai-in-testing-with-cyprompt-fe9</link>
      <guid>https://forem.com/cypress/how-cypress-will-revolutionize-the-use-of-ai-in-testing-with-cyprompt-fe9</guid>
      <description>&lt;p&gt;Cypress has become the go-to testing framework for SDETs and QA engineers to validate modern web apps. It’s fast, reliable, and backed by a mature ecosystem—both in &lt;a href="https://docs.cypress.io/app/references/changelog" rel="noopener noreferrer"&gt;software updates&lt;/a&gt; and excellent &lt;a href="https://docs.cypress.io/app/get-started/why-cypress" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. Add to that the vibrant community building powerful &lt;a href="https://dev.to/sebastianclavijo/my-top-191-favorite-cypress-plugins-for-testing-with-wick-like-precision-3fhh"&gt;plugins and extensions&lt;/a&gt;, and it’s clear why Cypress dominates the testing landscape.&lt;/p&gt;

&lt;p&gt;Cypress is taking a bold step into AI-powered testing with the upcoming &lt;a href="https://www.cypress.io/blog/cy-prompt-frequently-asked-questions" rel="noopener noreferrer"&gt;cy.prompt()&lt;/a&gt;. Unlike typical AI integrations that act as external copilots or rely on general-purpose &lt;em&gt;MCP-style&lt;/em&gt; assistants, &lt;code&gt;cy.prompt()&lt;/code&gt; builds the intent (what we want) directly into the testing workflow.&lt;/p&gt;

&lt;p&gt;This means no context switching, no juggling between an IDE plugin and your test runner. Instead, Cypress allows you to describe your intent in plain English, and the AI automatically generates selectors, actions, and assertions right inside your test.&lt;/p&gt;

&lt;p&gt;It’s a shift from writing tests line by line to guiding your tests conversationally. Think less about &lt;code&gt;cy.get()&lt;/code&gt; or &lt;code&gt;cy.click()&lt;/code&gt; and more about telling Cypress what you want verified, letting the framework translate that into executable code.&lt;/p&gt;

&lt;p&gt;Here’s a video demonstration of &lt;code&gt;cy.prompt()&lt;/code&gt; in action:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/Z_u8R3Z5spw"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;This is the code that I used in the validation:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz55buxl9xe4od78p2oox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz55buxl9xe4od78p2oox.png" alt=" raw `cy.prompt()` endraw  in action"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here are the code locators the prompt suggests right after it is executed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fag177k1he4h8hj438fik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fag177k1he4h8hj438fik.png" alt="After usage,  raw `cy.prompt()` endraw  can insert the needed code into your IDE"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, you can leave the prompt as it is and push to your CI/CD pipeline:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxcvxemz08kdy0nwyw91.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxcvxemz08kdy0nwyw91.png" alt="Github Actions running directly with the prompt and passing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;cy.prompt()&lt;/code&gt;, Cypress is no longer “just” a testing framework—it’s stepping into the AI-assisted development era. For SDETs and QA engineers, this means faster authoring, smarter locator handling, and easier onboarding for teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The possibilities of cy.prompt()&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;cy.prompt() focuses on the intent:&lt;/strong&gt; what we want, not how to do it. It's a great tool for non-developers, or anyone who doesn’t want to dive deep into the app's implementation.

&lt;/li&gt;
&lt;li&gt;Imagine writing the &lt;strong&gt;BDD&lt;/strong&gt; (Behavior-Driven Development) acceptance criteria directly into the test. You'll have the best of both worlds here: BDD criteria understood by all stakeholders (Project Managers, Product Owners), and the code being executed in the background.

&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TDD&lt;/strong&gt; (Test-Driven Development) is also covered for the developers. Imagine developing a feature and, step by step (word by word, line by line), watching the test start to pass, until the feature is ready for deployment.

&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Portability is here:&lt;/strong&gt; Need to refactor your project? Move from one programming language to another? You don't need to change a thing in your tests written in plain English; they can be easily shared, exported, or integrated across different systems.

&lt;/li&gt;
&lt;li&gt;Also, another great benefit here is &lt;strong&gt;self-healing&lt;/strong&gt; tests: they’re more resilient to changes in the DOM or selectors. This feature could fundamentally change how we approach automation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The future of QA is not just code—it’s collaboration between AI and testers.&lt;/p&gt;

&lt;p&gt;What are your thoughts? &lt;/p&gt;

&lt;p&gt;Share, comment or connect with me directly in &lt;a href="https://www.linkedin.com/in/marceloc/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>cypress</category>
      <category>ai</category>
      <category>promptengineering</category>
      <category>testing</category>
    </item>
    <item>
      <title>Six Technical Sessions That Will Change How You Think About Testing</title>
      <dc:creator>Ronald Williams</dc:creator>
      <pubDate>Tue, 02 Sep 2025 15:04:00 +0000</pubDate>
      <link>https://forem.com/cypress/six-technical-sessions-that-will-change-how-you-think-about-testing-1ngb</link>
      <guid>https://forem.com/cypress/six-technical-sessions-that-will-change-how-you-think-about-testing-1ngb</guid>
      <description>&lt;p&gt;Tired of expensive learning materials that take weeks to complete but teach you nothing you can't Google? Fed up with content that assumes you're still figuring out basic assertions when you're managing complex test architectures?&lt;/p&gt;

&lt;p&gt;CypressConf 2025 workshops solve the learning problem that plagues experienced developers: finding advanced, practical content that matches your skill level and is specific to your setup, without the cost barriers or time wasted. Over two exclusive workshop days (October 23–24), global industry practitioners will teach you competitive skills you can implement Monday morning. These workshops were designed based on years of attendee feedback from our global community, focusing on the real problems you've told us you're solving right now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cypress.registration.goldcast.io/events/5e06455f-45f2-49c3-98dd-e0ae952e79a0?utm_source=dev_to&amp;amp;utm_medium=cyconf_workshops&amp;amp;utm_campaign=cypressconf" rel="noopener noreferrer"&gt;&lt;strong&gt;Slack 'n' Roll: CI/CD Pipelines with GHA&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Led by: Tanya Sahni, Software Developer in Test at Fashion Cloud&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Your CI/CD pipeline should work for you, not against you. Tanya will show how to integrate Cypress into GitHub Actions so your tests run automatically and notify your team intelligently. This intermediate-to-advanced workshop assumes you're already comfortable with CI/CD concepts and want to build pipelines that scale reliably, eliminating the manual check dance that wastes hours every sprint and positioning you as the developer who builds systems that move as fast as your code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cypress.registration.goldcast.io/events/5e06455f-45f2-49c3-98dd-e0ae952e79a0?utm_source=dev_to&amp;amp;utm_medium=cyconf_workshops&amp;amp;utm_campaign=cypressconf" rel="noopener noreferrer"&gt;&lt;strong&gt;Advocate for Quality Within Your Company&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Led by: Péter Földházi, Quality Architect at EPAM Systems&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Quality isn't just a QA responsibility. It's an organizational capability. This senior-level workshop teaches experienced QA professionals how to become effective advocates for quality across engineering teams. You'll learn to speak business language while maintaining technical standards, turning quality from a cost center into a competitive advantage and transforming how your organization views testing so you lead change instead of reacting to it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cypress.registration.goldcast.io/events/5e06455f-45f2-49c3-98dd-e0ae952e79a0?utm_source=dev_to&amp;amp;utm_medium=cyconf_workshops&amp;amp;utm_campaign=cypressconf" rel="noopener noreferrer"&gt;&lt;strong&gt;Operationalizing Quality with Data That Matters&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Presented by: Dan Johansen, Senior Product Manager at Cypress.io&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Raw testing data is noise. Actionable insights are signal. Dan demonstrates how to turn test results into meaningful metrics that improve release decisions. This intermediate to senior workshop teaches which data points actually matter and how to present them in ways that influence engineering strategy, enabling you to make data-driven quality decisions that leadership understands and supports.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cypress.registration.goldcast.io/events/5e06455f-45f2-49c3-98dd-e0ae952e79a0?utm_source=dev_to&amp;amp;utm_medium=cyconf_workshops&amp;amp;utm_campaign=cypressconf" rel="noopener noreferrer"&gt;&lt;strong&gt;Data Driven Testing with Cypress&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Delivered by: Marko Kolasinac, CEO at Assert QA, and Dejan Živković, QA Automation Engineer at Assert QA&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Hardcoded test data creates maintenance nightmares. This intermediate workshop shows how to design resilient, data-powered testing strategies that don't interfere with production systems. Marko and Dejan assume you understand testing fundamentals and focus on building migration approaches that scale without the complexity overhead that kills productivity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cypress.registration.goldcast.io/events/5e06455f-45f2-49c3-98dd-e0ae952e79a0?utm_source=dev_to&amp;amp;utm_medium=cyconf_workshops&amp;amp;utm_campaign=cypressconf" rel="noopener noreferrer"&gt;&lt;strong&gt;Simplifying Cypress Testing&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Led by: Walmyr Filho, Instructor and Founder at Talking About Testing&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Writing Cypress tests isn't just writing JavaScript. It requires different thinking. Whether you're new to Cypress or have been using it for years, Walmyr shares practical techniques for writing maintainable tests that grow with your product complexity. You'll learn patterns that prevent technical debt before it accumulates, building tests that remain valuable as your codebase evolves rather than becoming liabilities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cypress.registration.goldcast.io/events/5e06455f-45f2-49c3-98dd-e0ae952e79a0?utm_source=dev_to&amp;amp;utm_medium=cyconf_workshops&amp;amp;utm_campaign=cypressconf" rel="noopener noreferrer"&gt;&lt;strong&gt;Authentication Workflows with Cypress &amp;amp; Mailosaur&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Led by: Filip Hric, Developer Educator at filiphric.com&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Authentication testing is notoriously brittle. Filip walks through testing authentication flows so your login and access systems remain reliable across environments. This intermediate to advanced workshop handles real-world scenarios including email verification, multi-factor authentication, and role-based access, helping you secure reliable user experiences without compromising test stability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Makes These Workshops Different&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These sessions were built from years of global community feedback. Developers told us they needed advanced content that respects their experience level, practical sessions they could apply immediately, and learning opportunities that didn't require expensive course subscriptions or weeks of commitment.&lt;/p&gt;

&lt;p&gt;Each workshop delivers concentrated expertise from practitioners who've built testing systems at scale. No generic tutorials. No basic concepts you already know. Just advanced techniques that solve real problems you're encountering in production environments.&lt;/p&gt;

&lt;p&gt;Workshop seats are intentionally limited and fill fast. You must be registered for CypressConf 2025 to access workshops, and early registrants get first access before standby lists open.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cypress.registration.goldcast.io/events/5e06455f-45f2-49c3-98dd-e0ae952e79a0?utm_source=dev_to&amp;amp;utm_medium=cyconf_workshops&amp;amp;utm_campaign=cypressconf" rel="noopener noreferrer"&gt;Register for CypressConf 2025&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;More workshops are coming soon. &lt;/p&gt;

</description>
      <category>cypress</category>
      <category>qualityengineering</category>
      <category>devto</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Cypress v15: A Better User Experience</title>
      <dc:creator>Talking About Testing</dc:creator>
      <pubDate>Tue, 19 Aug 2025 23:45:47 +0000</pubDate>
      <link>https://forem.com/cypress/cypress-v15-a-better-user-experience-10d3</link>
      <guid>https://forem.com/cypress/cypress-v15-a-better-user-experience-10d3</guid>
      <description>&lt;h2&gt;
  
  
  Streamlined Features and Improvements for Modern Testing
&lt;/h2&gt;

&lt;p&gt;Cypress v15 is just around the corner, and one of the exciting changes is an improved user experience in the command logs of the test runner. If you’ve spent time debugging tests in v14, you’ll immediately notice how v15 makes test execution logs easier to scan, parse, and act upon.&lt;/p&gt;

&lt;p&gt;In this post, we’ll walk through the key UX improvements.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;At the end of this post, screenshots will illustrate the differences between v14 and v15.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Hierarchical Grouping
&lt;/h3&gt;

&lt;p&gt;In v15, the test log is segmented into clear sections &lt;strong&gt;with borders&lt;/strong&gt; to distinguish between SESSIONS, BEFORE EACH, and TEST BODY.&lt;br&gt;
In v14, the same SESSIONS, BEFORE EACH, and TEST BODY sections appear, but there are no borders to separate and isolate them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cleaner Visual Density
&lt;/h3&gt;

&lt;p&gt;The v15 layout introduces tighter grouping and spacing, so related blocks are easier to scan. The result is less eye travel compared to v14’s list.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Cypress v15 isn’t just a version bump—it’s a thoughtful step forward in how developers and testers interact with their test runner. By rethinking the command log experience with hierarchical grouping and cleaner visual density, Cypress reduces cognitive load and makes debugging more intuitive.&lt;/p&gt;

&lt;p&gt;If you’ve ever found yourself lost in the flat logs of v14, v15 will feel like a breath of fresh air. These changes may seem subtle, but they directly improve day-to-day productivity and test clarity. And when it comes to testing, minor UX improvements often translate into big wins for speed, focus, and confidence.&lt;/p&gt;

&lt;p&gt;As Cypress continues to evolve, v15 is a reminder that great tooling isn’t just about raw features—it’s about delivering a user experience that helps teams ship quality software faster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Illustrations comparing Cypress versions 14 and 15
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;v14 session collapsed&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1e832m2njts5ha8kil3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1e832m2njts5ha8kil3.png" alt=" " width="800" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v15 session collapsed&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcme21op16sjrtrvat2xv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcme21op16sjrtrvat2xv.png" alt=" " width="800" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v14 session expanded&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F374kwenx8suhjcn0t2dv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F374kwenx8suhjcn0t2dv.png" alt=" " width="800" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;v15 session expanded&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66wdms8xx7qu1oag2lka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66wdms8xx7qu1oag2lka.png" alt=" " width="800" height="558"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Would you like to learn more about web testing with Cypress?&lt;br&gt;
Check out the "&lt;a href="https://www.udemy.com/course/cypress-from-zero-to-the-cloud/?referralCode=CABCDDFA5ADBB7BE2E1A" rel="noopener noreferrer"&gt;Cypress, from Zero to the Cloud&lt;/a&gt;" course from the Talking About Testing online school, and happy testing!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>XLSX File Uploads in Cypress</title>
      <dc:creator>Paul Astorga</dc:creator>
      <pubDate>Tue, 29 Jul 2025 19:01:25 +0000</pubDate>
      <link>https://forem.com/cypress/xlsx-file-uploads-in-cypress-4l7e</link>
      <guid>https://forem.com/cypress/xlsx-file-uploads-in-cypress-4l7e</guid>
      <description>&lt;p&gt;XLSX File Uploads in Cypress: A Comprehensive Guide&lt;/p&gt;

&lt;p&gt;As QA automation engineers, we often rely on fixtures to manage test data in Cypress. However, there are times when fixtures aren't enough, especially when dealing with file uploads. In my experience, interacting with XLSX files can present unique challenges, particularly when the application requires dynamic, rather than static, file uploads. While resources exist online, they often provide fragmented information, making it difficult to piece together a complete solution.&lt;/p&gt;

&lt;p&gt;This blog post aims to consolidate that scattered knowledge into a single, comprehensive guide. If you're looking to add robust XLSX file upload testing to your Cypress test suite, this post will provide you with a step-by-step approach, complete with code examples and explanations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge: Dynamic XLSX File Generation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While Cypress provides the &lt;strong&gt;cy.writeFile&lt;/strong&gt; command for file creation, it falls short when it comes to generating functional XLSX files. Attempting to create an XLSX file using &lt;strong&gt;cy.writeFile&lt;/strong&gt; results in a file that either fails to open or is recognized as an unsupported/corrupted format. This is because the command doesn't write the file in the correct binary format required for XLSX files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Solution: Leveraging Node.js and the XLSX Library&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To overcome this limitation, we can harness the power of &lt;strong&gt;Node.js&lt;/strong&gt; and the xlsx library. This library allows us to create XLSX files programmatically in the correct format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Installing the XLSX Package&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, let's install the xlsx package using npm:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm install xlsx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Configuring Cypress to Use the XLSX Library&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, we need to configure Cypress to use the xlsx library by adding a task to our cypress.config.js (or plugins/index.js) file. This task will handle the XLSX file creation process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// cypress.config.js (or plugins/index.js)
const xlsx = require('xlsx');
module.exports = {
  e2e: {
    setupNodeEvents(on, config) {
      on('task', {
        writeXLSX({ filePath, data, sheetName = 'Sheet1' }) {
          const ws = xlsx.utils.json_to_sheet(data);
          const wb = xlsx.utils.book_new();
          xlsx.utils.book_append_sheet(wb, ws, sheetName);
          xlsx.writeFile(wb, filePath);
          return null; // Tasks must return a value or a Promise
        },
      });
      return config;
    },
  },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We require the xlsx library.&lt;/li&gt;
&lt;li&gt;We define a task called &lt;strong&gt;writeXLSX&lt;/strong&gt; that accepts the file path, data, and sheet name as arguments.&lt;/li&gt;
&lt;li&gt;Inside the task, we use the xlsx library to convert the data (in JSON format) to a worksheet, create a workbook, append the worksheet to the workbook, and write the workbook to the specified file path.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Creating a Cypress Test to Generate and Upload the XLSX File&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that we have configured Cypress to use the xlsx library, we can create a test to generate and upload the XLSX file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;describe('XLSX Creation and Upload Test', () =&amp;gt; {
  it('should write data to an XLSX file and upload it', () =&amp;gt; {
    const testData = [
      { Name: 'John Doe', Age: 30, City: 'New York' },
      { Name: 'Jane Smith', Age: 25, City: 'London' },
    ];
    const filePath = 'cypress/fixtures/output.xlsx';
    cy.task('writeXLSX', { filePath, data: testData }).then(() =&amp;gt; {
      cy.log('XLSX file written successfully!');
      cy.get('[type="file"]').selectFile(filePath, { force: true });
    });
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We define an array of objects (testData) representing the data we want to write to the XLSX file. Each object represents a row in the spreadsheet.&lt;/li&gt;
&lt;li&gt;We define the file path where we want to save the XLSX file.&lt;/li&gt;
&lt;li&gt;We use the &lt;strong&gt;cy.task&lt;/strong&gt; command to call the writeXLSX task we defined in the &lt;strong&gt;cypress.config.js&lt;/strong&gt; file.&lt;/li&gt;
&lt;li&gt;Once the task completes, we use the &lt;strong&gt;cy.get&lt;/strong&gt; command to locate the file input element on the page (identified by its &lt;strong&gt;type="file"&lt;/strong&gt; attribute) and chain the &lt;strong&gt;.selectFile()&lt;/strong&gt; command to select the generated XLSX file. The &lt;strong&gt;{ force: true }&lt;/strong&gt; option bypasses any visibility checks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Understanding cy.selectFile&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;cy.selectFile&lt;/strong&gt; command is a powerful tool for simulating file uploads in Cypress. It attaches a file from our project to an HTML file input, simulating the user picking it in the file dialog (or, with the &lt;strong&gt;action: 'drag-drop'&lt;/strong&gt; option, dragging it into the browser). This eliminates the need to interact with the "Browse" button and the operating system's file picker, making our tests more robust and easier to maintain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Additional Resources:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cypress writeFile command: &lt;a href="https://docs.cypress.io/api/commands/writefile" rel="noopener noreferrer"&gt;https://docs.cypress.io/api/commands/writefile&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Cypress selectFile command: &lt;a href="https://docs.cypress.io/api/commands/selectfile" rel="noopener noreferrer"&gt;https://docs.cypress.io/api/commands/selectfile&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;XLSX library: &lt;a href="https://www.npmjs.com/package/xlsx" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/xlsx&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This guide provides a clear and concise approach to handling XLSX file uploads in Cypress. By combining the power of Node.js, the xlsx library, and Cypress commands like &lt;strong&gt;cy.task&lt;/strong&gt; and &lt;strong&gt;cy.selectFile&lt;/strong&gt;, you can create robust and reliable tests that accurately simulate user interactions with file uploads. I hope this guide helps you streamline your Cypress testing process and confidently tackle XLSX file uploads in your applications.&lt;/p&gt;

&lt;p&gt;Happy testing!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>From 1-Hour Nightmares to 7-Minute Dreams: Our Cypress Cloud Journey</title>
      <dc:creator>S Chathuranga Jayasinghe</dc:creator>
      <pubDate>Wed, 23 Jul 2025 06:00:46 +0000</pubDate>
      <link>https://forem.com/cypress/from-1-hour-nightmares-to-7-minute-dreams-our-cypress-cloud-journey-23h7</link>
      <guid>https://forem.com/cypress/from-1-hour-nightmares-to-7-minute-dreams-our-cypress-cloud-journey-23h7</guid>
      <description>&lt;p&gt;&lt;em&gt;How we transformed our testing workflow at TrackMan and why Cypress Cloud became our testing superhero&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Picture this: It’s 3 PM on a Friday, you’ve just pushed what you think is a small fix, and now you’re staring at your terminal watching Cypress tests crawl by at a snail’s pace. One test… two tests… still going… Your weekend plans are slowly evaporating as you realize you’ve got another 45 minutes to wait before you know if your code actually works.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note: Don’t deploy on Fridays!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Sound familiar? That was our reality at TrackMan not too long ago.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;The Dark Ages: When Testing Felt Like Punishment&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Let me paint you a picture of how things used to be. Our Cypress test suite had grown into this massive beast that took nearly an hour to complete. An entire hour! We were running everything sequentially, watching tests execute one by one like we were back in the dial-up internet era.&lt;/p&gt;

&lt;p&gt;The worst part? When something broke (and trust me, things broke), debugging was an absolute nightmare. You’d get a cryptic failure message, maybe a screenshot if you were lucky, and then you’d have to play detective, trying to figure out what went wrong. It was like trying to solve a murder mystery with half the clues missing.&lt;/p&gt;

&lt;p&gt;We were using Cypress Custom Commands, which seemed like a good idea at the time, but as our test suite grew, maintaining and understanding the flow became increasingly difficult. The whole experience was just… painful.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;The Lightbulb Moment: Enter Cypress Cloud&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;After months of frustration and countless hours lost to slow test runs, we finally decided enough was enough. We’d heard whispers about Cypress Cloud, but like many teams, we were hesitant to make the switch. “Another tool to learn? Another service to manage?” But honestly, we were desperate.&lt;/p&gt;

&lt;p&gt;Best decision we ever made!&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;The Transformation: From Hours to Minutes&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Here’s where things get exciting. Remember that nearly 1-hour test suite I mentioned? After moving to Cypress Cloud and implementing parallelization, it now runs in just 7 minutes.&lt;/p&gt;

&lt;p&gt;Let me repeat that: &lt;strong&gt;7 minutes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I’m not exaggerating. We went from grabbing coffee, checking emails, and sometimes even taking walks during test runs to barely having time to refill our water bottles.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;What Made the Magic Happen&lt;/strong&gt;
&lt;/h1&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Smart Test Orchestration: The Brain Behind the Operation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Cypress Cloud’s Smart Test Orchestration is like having a really smart project manager for your tests. Instead of running tests in some random order, it analyzes your test history and intelligently distributes them across multiple machines. Tests that typically take longer get started first, while quicker tests fill in the gaps. It’s beautiful to watch in action.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Parallelization: Divide and Conquer&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We’re currently running with parallelization enabled, and it’s a game-changer. Instead of one machine plodding through all our tests, we have multiple machines working simultaneously. Think of it like having multiple checkout lanes at a grocery store instead of making everyone wait in one long line.&lt;/p&gt;

&lt;p&gt;With around 44,000 test results per month (yes, we test a lot!), this parallel execution saves us countless hours every single month.&lt;/p&gt;
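&lt;p&gt;In CI, wiring this up is mostly a matter of the run command. As a hedged sketch (the environment variables here are placeholders for whatever your CI provides), every parallel container executes the same thing and Cypress Cloud balances the specs across them:&lt;/p&gt;

```shell
# Run on each parallel CI container; Cypress Cloud load-balances the specs.
# CYPRESS_RECORD_KEY and BUILD_ID are placeholders from your CI environment.
npx cypress run --record --key "$CYPRESS_RECORD_KEY" --parallel --ci-build-id "$BUILD_ID"
```

&lt;p&gt;The shared &lt;code&gt;--ci-build-id&lt;/code&gt; is what groups the containers into one logical run.&lt;/p&gt;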

&lt;h2&gt;
  
  
  &lt;strong&gt;Page Object Model: A Much-Needed Architectural Shift&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While we were at it, we also moved from Cypress Custom Commands to the Page Object Model (POM). This wasn’t directly related to Cypress Cloud, but it complemented our new setup perfectly.&lt;/p&gt;

&lt;p&gt;Custom Commands felt scattered and hard to maintain as our test suite grew. With POM, everything is organized, reusable, and much easier to understand. Each page has its own class with methods that represent the actions you can perform on that page. It’s clean, it’s logical, and it makes onboarding new team members so much smoother.&lt;/p&gt;
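&lt;p&gt;As a rough sketch of the shape this takes (the class name and selectors below are illustrative placeholders, not our actual code), each page class keeps its selectors and actions together:&lt;/p&gt;

```javascript
// Hypothetical page object; selectors are illustrative placeholders.
// `cy` is the global injected by the Cypress runner at test time.
class LoginPage {
  constructor() {
    this.selectors = {
      email: '[data-cy="email"]',
      password: '[data-cy="password"]',
      submit: '[data-cy="login-submit"]',
    };
  }

  visit() {
    cy.visit('/login');
  }

  logIn(email, password) {
    cy.get(this.selectors.email).type(email);
    cy.get(this.selectors.password).type(password);
    cy.get(this.selectors.submit).click();
  }
}

module.exports = { LoginPage };
```

&lt;p&gt;A spec then reads as intent rather than plumbing: &lt;code&gt;loginPage.logIn(user.email, user.password)&lt;/code&gt;, with selector changes confined to one file.&lt;/p&gt;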

&lt;h2&gt;
  
  
  &lt;strong&gt;Test Replay: The Debugging Superhero We Never Knew We Needed&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now, let’s talk about the feature that has truly saved our sanity: Test Replay.&lt;/p&gt;

&lt;p&gt;Remember those debugging nightmares I mentioned earlier? They’re basically extinct now. When a test fails, Test Replay captures everything. And I mean everything: every click, every hover, every network request, every DOM change. It’s all there, recorded and ready to be analyzed.&lt;/p&gt;

&lt;p&gt;You can literally watch your test execution step by step, like you’re sitting right next to the browser as it runs. You can see exactly where things went wrong, what the page looked like at that moment, and what data was flowing through your application.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Test Analytics: Your Personal Test Performance Detective&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;But Test Replay is just the tip of the iceberg. Cypress Cloud’s Test Analytics dashboard has become our command center for test optimization. Here’s where things get really interesting.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Slowest Tests&lt;/strong&gt; view is pure gold. It shows you exactly which tests are dragging down your entire suite. We discovered that three specific tests were accounting for nearly 40% of our total execution time! Once we optimized those, we shaved off another couple of minutes from our already improved 7-minute runtime.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Failure Reasons&lt;/strong&gt; analytics helped us identify patterns we never would have caught manually. Turns out, we had several tests failing due to timing issues that only happened when running in parallel. The dashboard made it crystal clear which tests were problematic and why.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Flaky Test Detection: No More “It Works on My Machine”&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here’s something that used to drive us absolutely crazy: flaky tests. You know, those tests that pass 90% of the time and then randomly fail, usually right before an important release.&lt;/p&gt;

&lt;p&gt;Cypress Cloud’s &lt;strong&gt;Flaky Test Detection&lt;/strong&gt; is like having a data scientist dedicated to analyzing your test stability. It tracks your test results over time and flags tests that show inconsistent behavior. We can now see exactly how flaky a test is (like “passes 85% of the time”) and prioritize which ones to fix first.&lt;/p&gt;

&lt;p&gt;The best part? It gives you insights into what might be causing the flakiness. Network timeouts? Element loading issues? Race conditions? The data helps you pinpoint the root cause instead of just crossing your fingers and hoping for the best.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Cypress Cloud AI: The Future is Here&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;And then there’s the newest addition that honestly feels like magic: &lt;strong&gt;Cypress Cloud AI&lt;/strong&gt;. When a test fails, it doesn’t just show you what happened; it analyzes the failure and suggests what might have gone wrong.&lt;/p&gt;

&lt;p&gt;I’m talking about actual AI-powered suggestions like “This test appears to be failing due to a network timeout. Consider increasing the timeout or mocking this network request.” It’s like having a senior QA engineer looking over your shoulder, except this one never gets tired or takes coffee breaks.&lt;/p&gt;

&lt;p&gt;The level of detail you get across all these features is incredible. Network timings, console logs, screenshots at every step: it’s like having X-ray vision for your tests, but with a smart assistant helping you interpret what you’re seeing.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;The Real-World Impact: Beyond Just Speed&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Sure, going from 1 hour to 7 minutes is impressive, but the real benefits go much deeper:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer Happiness&lt;/strong&gt;: Our team actually looks forward to running tests now. No more dreading that pre-deployment test run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Faster Feedback Loops&lt;/strong&gt;: We can iterate much quicker. Push a fix, wait 7 minutes, know if it works. The faster feedback makes us more confident and more productive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Better Test Coverage&lt;/strong&gt;: When tests run quickly, you’re more likely to write more of them. We’ve actually expanded our test coverage because the pain of long execution times is gone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Easier Onboarding&lt;/strong&gt;: New team members can understand our test failures quickly thanks to Test Replay. No more spending hours explaining “what probably went wrong.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Predictable Releases&lt;/strong&gt;: With reliable, fast tests and amazing debugging tools, our releases became much more predictable and less stressful.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;The Numbers Don’t Lie&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Let me put this in perspective with some real numbers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test execution time&lt;/strong&gt;: From ~60 minutes to 7 minutes (88% reduction)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monthly test results&lt;/strong&gt;: 44,000+ tests handled smoothly&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Debugging time&lt;/strong&gt;: Reduced by approximately 70% thanks to Test Replay&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Developer productivity&lt;/strong&gt;: Immeasurably better (seriously, the mood in our standups improved!)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;What We Learned Along the Way&lt;/strong&gt;
&lt;/h1&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;It’s Not Just About the Tools&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While Cypress Cloud provided the technical foundation for our improvement, we learned that success also required some process changes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Embrace parallelization thinking&lt;/strong&gt;: We had to restructure some tests that were inadvertently dependent on each other.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Invest in good test data management&lt;/strong&gt;: With parallel execution, you need to be more careful about test data isolation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitor and optimize continuously&lt;/strong&gt;: The analytics provided by Cypress Cloud help us continuously improve our test suite.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The ROI is Real&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Yes, Cypress Cloud is a paid service, but the time savings alone justify the cost. When you factor in developer productivity, faster release cycles, and reduced debugging time, it’s honestly a no-brainer.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Looking Forward&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Moving to Cypress Cloud wasn’t just a technical upgrade — it was a complete transformation of how we approach testing. We went from viewing tests as a necessary evil to treating them as a competitive advantage.&lt;/p&gt;

&lt;p&gt;If you’re sitting there reading this while waiting for your own slow test suite to finish, or if you’re tired of spending hours debugging mysterious test failures, I can’t recommend Cypress Cloud enough. The combination of Smart Test Orchestration, parallelization, and Test Replay creates a testing experience that’s not just faster, but genuinely enjoyable.&lt;/p&gt;

&lt;p&gt;Trust me, your future self will thank you!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Have you made the switch to Cypress Cloud, or are you still on the fence? I’d love to hear about your testing experiences in the comments below. And if you decide to give Cypress Cloud a try, let me know how it goes, I’m always excited to hear about teams escaping the slow-test nightmare!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cypresscloud</category>
      <category>testautomation</category>
      <category>qualityengineering</category>
      <category>cypress</category>
    </item>
  </channel>
</rss>
