<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Mario Casciaro</title>
    <description>The latest articles on Forem by Mario Casciaro (@mariocasciaro).</description>
    <link>https://forem.com/mariocasciaro</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F201610%2Fcbd60348-918a-4bf9-948f-a5dc8aed713b.jpg</url>
      <title>Forem: Mario Casciaro</title>
      <link>https://forem.com/mariocasciaro</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mariocasciaro"/>
    <language>en</language>
    <item>
      <title>Implementing smoke testing in production and staging. DIY vs Ad-Hoc Tool</title>
      <dc:creator>Mario Casciaro</dc:creator>
      <pubDate>Thu, 27 Feb 2020 11:30:28 +0000</pubDate>
      <link>https://forem.com/mariocasciaro/implementing-smoke-testing-in-production-and-staging-diy-vs-ad-hoc-tool-1n6b</link>
      <guid>https://forem.com/mariocasciaro/implementing-smoke-testing-in-production-and-staging-diy-vs-ad-hoc-tool-1n6b</guid>
      <description>&lt;h2&gt;
  
  
  What's smoke testing?
&lt;/h2&gt;

&lt;p&gt;Smoke testing allows us to &lt;strong&gt;quickly assess the status&lt;/strong&gt; of an application by running a set of end-to-end tests targeted at the most important user flows.&lt;br&gt;
It should be run just after a fresh deploy and ideally at regular intervals after that. &lt;/p&gt;

&lt;p&gt;Smoke testing differs from full &lt;strong&gt;regression testing&lt;/strong&gt; in two main respects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The coverage of smoke testing is &lt;strong&gt;wide and shallow&lt;/strong&gt;. Its purpose is to roughly test as much functionality as possible in a short time. Regression testing, on the other hand, is meant to be as thorough as possible, to make sure that existing functionality was not negatively affected by the changes introduced in a new commit or release.&lt;/li&gt;
&lt;li&gt;Smoke testing should be &lt;strong&gt;fast&lt;/strong&gt; compared to regression testing, as its main purpose is to quickly assess the main user flows within an application. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For these two reasons, smoke tests are well suited to being &lt;strong&gt;run continuously&lt;/strong&gt; at regular intervals to check the status of an application over time. Running tests just after a fresh deploy only validates the application in that particular state, right after a restart. Running a suite of smoke tests at regular intervals, instead, makes sure that the application behaves as it should across different states at different moments in time.&lt;/p&gt;

&lt;p&gt;Now, let's go through some frequently asked questions about smoke testing before moving on to more practical matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Smoke testing FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---3F56FoA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/givdaasv4efomcvt9pmm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---3F56FoA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/givdaasv4efomcvt9pmm.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Should I run my smoke tests on production or on staging?
&lt;/h4&gt;

&lt;p&gt;Ideally on both. Smoke testing is useful on staging because it notifies you of important problems faster. A thorough regression suite usually takes several minutes to run, sometimes 30 minutes, an hour, or more, depending on the size of the application. This means that after a new commit or release we may have to wait a long time before knowing whether our code broke something major. With smoke testing, instead, we get almost &lt;strong&gt;immediate feedback&lt;/strong&gt; on the status of a new commit or release. Usually, on staging or testing environments a smoke test is &lt;strong&gt;followed by a more detailed regression test&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  How long should a smoke test suite take?
&lt;/h4&gt;

&lt;p&gt;For a smoke test to be effective it should be &lt;strong&gt;fast&lt;/strong&gt;. I'd say a reasonable upper limit is 5 minutes for a small application, 10 for a medium one and 15 for a large one. But of course, the shorter the better.&lt;/p&gt;

&lt;h4&gt;
  
  
  How can I make sure that my smoke test takes as little time as possible?
&lt;/h4&gt;

&lt;p&gt;There are a few tricks you can use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Limit the use of static &lt;code&gt;wait&lt;/code&gt; instructions, such as waiting a predefined time (e.g. 10 seconds) for a page to load. Use smart wait instructions instead, such as &lt;em&gt;wait for an element to appear&lt;/em&gt;. Even better, use a tool that can do that for you under the hood (implicit waiting).&lt;/li&gt;
&lt;li&gt;Run the tests in parallel. Make sure that there are no relationships between the various tests in a suite, so that they can be run in parallel. This can usually make a big difference in the total running time.&lt;/li&gt;
&lt;li&gt;Remember to only test the most significant parts of an application. Don't go too much into the details and make sure to test features that can indicate, with a certain degree of confidence, if other parts of the application are broken too.&lt;/li&gt;
&lt;/ul&gt;
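
&lt;p&gt;To illustrate the first two tricks, here is a minimal sketch of a generic "smart wait" helper in plain JavaScript. It's only an illustration: the predicate below is hypothetical, and most frameworks (Cypress, Taiko, Puppeteer) already provide equivalent built-in waiting APIs.&lt;/p&gt;

```javascript
// A generic "smart wait": poll a condition instead of sleeping a fixed time.
// Resolves as soon as the predicate returns a truthy value; rejects on timeout.
function waitFor(predicate, { timeout = 5000, interval = 100 } = {}) {
  const deadline = Date.now() + timeout;
  return new Promise((resolve, reject) => {
    const poll = async () => {
      const value = await predicate();
      if (value) {
        resolve(value);
      } else if (Date.now() >= deadline) {
        reject(new Error('waitFor: timed out'));
      } else {
        setTimeout(poll, interval);
      }
    };
    poll();
  });
}

// Independent tests can then run in parallel with Promise.all, e.g.:
//   await Promise.all([testLogin(), testSearch(), testCheckout()]);
```

&lt;p&gt;Instead of waiting a fixed 10 seconds for a page, you would then write something like &lt;code&gt;await waitFor(() =&gt; document.querySelector('#checkout'))&lt;/code&gt; (here &lt;code&gt;document.querySelector&lt;/code&gt; stands in for your framework's selector API), returning as soon as the element appears.&lt;/p&gt;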

&lt;h4&gt;
  
  
  When should I run my smoke tests?
&lt;/h4&gt;

&lt;p&gt;Smoke tests should be run at least &lt;strong&gt;after every new deploy&lt;/strong&gt; to staging or production. Even better, you can run your smoke tests after each commit, since they won't take long to run anyway. Finally, you can also use your smoke tests to check the status of your production or staging environment at regular intervals.&lt;/p&gt;

&lt;h4&gt;
  
  
  Is there a preferred framework for running smoke tests?
&lt;/h4&gt;

&lt;p&gt;Generally speaking, no. Any end-to-end testing framework should do the job. However, as we will see later, you can also go framework-free and use a pre-packaged cloud testing solution to simplify your smoke testing setup and reduce its maintenance.&lt;/p&gt;

&lt;h4&gt;
  
  
  Can smoke testing replace regression testing?
&lt;/h4&gt;

&lt;p&gt;No. Smoke testing won't cover most of the functionality of an application, and its purpose is different from that of regression testing. For a serious product, I recommend having both. That said, if you currently have neither of the two, then having at least a smoke test suite is a &lt;strong&gt;giant leap forward&lt;/strong&gt;, because it can catch a good chunk of critical issues before they become a problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Option 1. Building a smoke testing solution from scratch
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--91yrUuds--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/oci11mzxe99jum4qzlc2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--91yrUuds--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/oci11mzxe99jum4qzlc2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let's see what it takes to build a smoke testing solution from scratch. The core requirement is that the smoke test suite must run after every new deploy and, ideally, also continuously at regular intervals.&lt;/p&gt;

&lt;p&gt;The first requirement is actually simple to meet if you already have a structured build process or a &lt;strong&gt;CI infrastructure in place&lt;/strong&gt;: you can just add another step to your build script that runs your smoke tests.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choose a testing framework
&lt;/h3&gt;

&lt;p&gt;Next, choose a good &lt;strong&gt;end-to-end testing framework&lt;/strong&gt;. If you are using JavaScript for your tests, then you have many good options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.cypress.io/"&gt;Cypress&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://taiko.dev/"&gt;Taiko&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/selenium-webdriver"&gt;Selenium&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://webdriver.io/"&gt;WebdriverIO&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/puppeteer/puppeteer"&gt;Puppeteer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devexpress.github.io/testcafe/"&gt;Testcafe&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these frameworks has its pros and cons. For example, Cypress and Taiko only support Chromium-based browsers, while Selenium is notoriously the most difficult to deal with. At the same time, Cypress, Taiko and Puppeteer have a very good &lt;a href="https://frontendrobot.com/blog/5-web-testing-tools-with-the-best-tester-experience/"&gt;tester experience&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choose a test runner (optional)
&lt;/h3&gt;

&lt;p&gt;Optionally, if your testing framework doesn't already include one, you have to choose a &lt;strong&gt;test runner&lt;/strong&gt; (make sure it's compatible with the testing framework you chose). Here too, there are many options to choose from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://mochajs.org/"&gt;Mocha&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://jestjs.io/"&gt;Jest&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://jasmine.github.io/"&gt;Jasmine&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cucumber.io/"&gt;Cucumber&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Get coding
&lt;/h3&gt;

&lt;p&gt;Next, code your tests manually or use a &lt;strong&gt;test recorder&lt;/strong&gt; such as &lt;a href="https://www.selenium.dev/selenium-ide/"&gt;Selenium IDE&lt;/a&gt;. Note, however, that if you use Selenium IDE, your framework choices may be limited. &lt;/p&gt;

&lt;p&gt;Once your tests are ready, make sure they are checked into a repository (either the main application's repository or a separate one) and that they are available to the CI when it runs.&lt;/p&gt;

&lt;p&gt;Finally, when a test fails, most CIs will notify you (via email, Slack message, SMS, etc.), so there is nothing else to add on that front.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choose a CI platform (if you don't already have one)
&lt;/h3&gt;

&lt;p&gt;It probably goes without saying that if you don't have a CI already in place, you'll need one to run your smoke tests. Here too, there are many options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://travis-ci.org/"&gt;Travis CI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://codeship.com/"&gt;Codeship&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://circleci.com/"&gt;CircleCI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://semaphoreci.com/"&gt;Semaphore&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.gitlab.com/ee/ci/"&gt;Gitlab CI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/features/actions"&gt;Github actions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Running a test suite at regular intervals
&lt;/h3&gt;

&lt;p&gt;Now, if you want to run your smoke tests at &lt;strong&gt;regular intervals&lt;/strong&gt;, that's a different beast altogether. In this case your CI infrastructure is mostly useless, because you need an always-on infrastructure to run your tests continuously. There are mainly two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have a dedicated machine in the cloud for running your tests, with a cron job that starts the smoke test suite at the desired interval.&lt;/li&gt;
&lt;li&gt;Set up your tests to run in a serverless environment, such as &lt;a href="https://aws.amazon.com/lambda/"&gt;AWS Lambda&lt;/a&gt;, &lt;a href="https://azure.microsoft.com/en-us/services/functions/"&gt;Azure Functions&lt;/a&gt; or &lt;a href="https://cloud.google.com/functions"&gt;Google Cloud Functions&lt;/a&gt;. In this case the function can be scheduled to run at specified intervals natively, without any cron job.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The second option is probably easier to maintain in the long run, but it requires some prior knowledge of serverless programming and of the particular cloud platform in use. In both cases, however, you would have to implement a way to keep the test suites up to date on the remote machine or serverless platform, plus a solution for getting notified of failing tests. You could, for example, send an email to yourself or to your entire team, or post a Slack notification to your preferred channel.&lt;/p&gt;
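
&lt;p&gt;As a rough sketch (not tied to any specific provider), a scheduled serverless smoke run could look like the following. The &lt;code&gt;checks&lt;/code&gt; and &lt;code&gt;notify&lt;/code&gt; functions are hypothetical stand-ins for your real test flows and notification channel; the schedule itself would be configured on the cloud platform.&lt;/p&gt;

```javascript
// Run every check, collecting failures instead of stopping at the first one.
async function runChecks(checks) {
  const failures = [];
  for (const [name, check] of Object.entries(checks)) {
    try {
      await check();
    } catch (err) {
      failures.push(`${name}: ${err.message}`);
    }
  }
  return failures;
}

// Stand-in for a real notifier (email, Slack webhook, ...).
async function notify(failures) {
  console.error(`smoke run failed:\n${failures.join('\n')}`);
}

// In a real deployment this function would be exported as the scheduled
// function's entry point (e.g. the Lambda handler).
async function handler() {
  const checks = {
    // Hypothetical check: open the homepage and assert a key element exists.
    homepage: async () => {},
  };
  const failures = await runChecks(checks);
  if (failures.length) {
    await notify(failures);
  }
  return { ok: failures.length === 0 };
}
```

&lt;p&gt;Collecting all failures before notifying (rather than aborting at the first error) gives a fuller picture of the application's status in a single alert.&lt;/p&gt;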

&lt;h2&gt;
  
  
  Option 2. Using a web testing tool built for the purpose
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6GKIe5KG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6fecjwyqr5oi7f3am4kx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6GKIe5KG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6fecjwyqr5oi7f3am4kx.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we have seen in the previous section, implementing a web smoke testing solution from scratch can be quite laborious. If you already have a CI infrastructure in place you may get away with a bit less work, but if you want to run a test suite continuously, it's almost impossible to avoid implementing and maintaining extra tooling. All those homemade solutions are likely to become &lt;strong&gt;technical debt&lt;/strong&gt;, as over time the focus shifts to writing tests rather than maintaining the infrastructure they run on.&lt;/p&gt;

&lt;p&gt;Given those premises, choosing a tool built specifically for the purpose (instead of reinventing one) is often the most sensible option, and one that will repay itself many times over, even in the short run. To give you an idea of what those tools are capable of, take a look at this list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With a dedicated tool, usually you just need to create tests and run them. &lt;strong&gt;There is no need to choose frameworks, CI platforms, test runners or test recorders&lt;/strong&gt;. All of them are integrated into a single product and completely hidden from you.&lt;/li&gt;
&lt;li&gt;You can easily &lt;strong&gt;run a test suite on-demand&lt;/strong&gt;, using webhooks.&lt;/li&gt;
&lt;li&gt;You can schedule a test suite to &lt;strong&gt;run at regular intervals&lt;/strong&gt; using a built-in scheduler.&lt;/li&gt;
&lt;li&gt;You can have a &lt;strong&gt;full report&lt;/strong&gt; of the last test run as well as stats about the previous runs.&lt;/li&gt;
&lt;li&gt;All kinds of &lt;strong&gt;notifications&lt;/strong&gt; are taken care of out-of-the-box.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In essence, the purpose of those tools is to shift the testing effort from setting up and maintaining an infrastructure to actually writing and maintaining tests. The only disadvantage I can think of is that most of these tools are not well suited to very custom requirements, in which case you may need to go back to doing it yourself. But, to be honest, that's a common trade-off in software development.&lt;/p&gt;

&lt;p&gt;There are a few tools out there that match the description above. Each has its pros and cons, some are cheap, others more expensive, but in general they can all dramatically improve your smoke testing experience.&lt;/p&gt;

&lt;p&gt;One of those tools is &lt;a href="https://frontendrobot.com/"&gt;Frontend Robot&lt;/a&gt;. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nFAqtwum--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mygtqygkbh9439bkwj5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nFAqtwum--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mygtqygkbh9439bkwj5w.png" alt="The Frontend Robot Dashboard"&gt;&lt;/a&gt;&lt;br&gt;
Frontend Robot is a fully cloud-based tool that simplifies the whole testing experience. It provides &lt;a href="https://frontendrobot.com/docs/smart-triggers/"&gt;Smart Triggers&lt;/a&gt; to trigger the execution of a test suite on-demand, and the ability to run tests at regular intervals with its &lt;a href="https://frontendrobot.com/docs/scheduling-runs/"&gt;Run Scheduler&lt;/a&gt;. But this only scratches the surface; &lt;a href="https://frontendrobot.com/"&gt;take a look at our homepage&lt;/a&gt; for more information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Smoke testing is a crucial part of a software development pipeline, yet it is often overlooked and misunderstood. Like any other type of test, setting up and maintaining a custom-built solution is often tedious and expensive, with results that are most of the time sub-optimal. Those are exactly the pain points that modern testing tools try to solve. Using a dedicated tool for your smoke testing is often the best choice in terms of adoption time, costs and overall developer/tester happiness. The only exception is when your requirements are so custom that a pre-packaged solution is not an option, but that's usually a rare occurrence.&lt;/p&gt;

&lt;p&gt;PS: For an overview of more testing tools with a good tester experience, refer to the following article: &lt;a href="https://frontendrobot.com/blog/5-web-testing-tools-with-the-best-tester-experience/"&gt;5 web testing tools with the best Tester Experience (TX)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;As always, if you have any questions or feedback you can send me an email at &lt;a href="mailto:support@frontendrobot.com"&gt;mario@frontendrobot.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>devops</category>
    </item>
    <item>
      <title>5 web testing tools with the best Tester Experience (TX)</title>
      <dc:creator>Mario Casciaro</dc:creator>
      <pubDate>Thu, 06 Feb 2020 11:36:29 +0000</pubDate>
      <link>https://forem.com/mariocasciaro/5-web-testing-tools-with-the-best-tester-experience-tx-45k2</link>
      <guid>https://forem.com/mariocasciaro/5-web-testing-tools-with-the-best-tester-experience-tx-45k2</guid>
      <description>&lt;p&gt;&lt;em&gt;Disclaimer: I work at &lt;a href="https://frontendrobot.com"&gt;Frontend Robot&lt;/a&gt;, so I'm a little biased here. Where I express personal considerations I will try to make it as clear as possible that they are my own views.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The best thing that happened to the software industry
&lt;/h2&gt;

&lt;p&gt;If we look at the evolution of web development in the last few years, I can only think of one aspect that has really changed the face of the industry, and that's &lt;strong&gt;Developer Experience&lt;/strong&gt; (DX). Based on the slogan "developers are people too", the DX movement is bringing the ergonomics and concepts of User Experience (UX) into the developer's world.&lt;/p&gt;

&lt;p&gt;Developer experience spans tools, APIs, SDKs, documentation and workflows; essentially, most aspects of a developer's job. In general, if a developer uses or interacts with a product of some kind, then DX is involved.&lt;/p&gt;

&lt;p&gt;Personally, the first time I became DX-aware was in 2015, when I started using &lt;a href="https://github.com/gaearon/react-hot-loader"&gt;React Hot Loader&lt;/a&gt; after watching a &lt;a href="https://www.youtube.com/watch?v=xsSnOQynTHs"&gt;talk by Dan Abramov&lt;/a&gt; and his subsequent &lt;a href="https://www.youtube.com/watch?v=qXVakfdA040"&gt;interview with Kent C. Dodds&lt;/a&gt;, where they explicitly talk about "Developer Experience and Tools". For those not familiar with the tool, React Hot Loader allows developers to see changes made to a React application live, without reloading the page. It works like magic. React Hot Loader became for me the perfect example of what a good Developer Experience means: &lt;strong&gt;less frustration, more productivity and putting the word &lt;em&gt;fun&lt;/em&gt; back into software development&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tester Experience&lt;/strong&gt; (TX), encompasses the same ergonomics principles that characterize UX and DX, but applied to software testers. Luckily, in the last few years, the industry has made giant steps in the area of TX as well. Similarly to UX and DX, Tester Experience is not just about the ease of use of a product, but also about the &lt;strong&gt;emotional impact&lt;/strong&gt; that it has on the person.&lt;/p&gt;

&lt;p&gt;From open-source tools to cloud-based services, we have witnessed a surge of web testing tools aimed at radically improving the testing experience, making it simpler, more rewarding and even fun.&lt;/p&gt;

&lt;p&gt;What follows is a list of the top 5 testing tools born in the last few years that shine for providing a great Tester Experience. Each one has its pros and cons, so just choose the one that works best for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. &lt;a href="https://frontendrobot.com"&gt;Frontend Robot&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://frontendrobot.com"&gt;Frontend Robot&lt;/a&gt; is a cloud platform which will get you &lt;strong&gt;up and running with automated testing in just a couple of minutes&lt;/strong&gt; (literally). Frontend Robot was born two years ago from my frustration with setting up and writing automated tests for the web. Just to write a simple test, I needed to choose a framework, set it up, and have a build machine to run it on. On top of that, writing tests was a laborious matter with a constant back and forth between the code editor and the browser to check the results and create element selectors. That wasn't fun at all, I just wanted to create and run tests! &lt;/p&gt;

&lt;p&gt;Now, with Frontend Robot it's just a matter of opening the browser, creating a test, and adding actions and assertions during a &lt;strong&gt;live test session that immediately shows the results on screen&lt;/strong&gt;. Paste the &lt;em&gt;smart hook&lt;/em&gt; URL into your deployment script and your tests will run at every deploy. It's as simple as that.&lt;/p&gt;

&lt;p&gt;Frontend Robot also provides a nice set of goodies that will help with writing tests and debugging, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DOM Snapshots&lt;/li&gt;
&lt;li&gt;Smart CSS selector picking&lt;/li&gt;
&lt;li&gt;Time travel&lt;/li&gt;
&lt;li&gt;Variables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Frontend Robot was designed from the ground up to provide the &lt;strong&gt;best possible Tester Experience&lt;/strong&gt;, refined in every detail to remove obstacles and complications and to make writing tests less frustrating and fun again. If you are mainly after &lt;strong&gt;simplicity and an enjoyable testing experience&lt;/strong&gt;, then Frontend Robot is the right tool for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. &lt;a href="https://ghostinspector.com/"&gt;Ghost Inspector&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Born in 2014, Ghost Inspector was created by Justin Klemm from motivations similar to those that pushed me to create Frontend Robot. I definitely consider Ghost Inspector one of the main sources of inspiration for Frontend Robot. If you are curious about the story behind it, take a look at Justin's blog post &lt;a href="https://ghostinspector.com/blog/why-build-ghost-inspector/"&gt;Why Did We Build Ghost Inspector?&lt;/a&gt;, where he highlights the main pain points that Ghost Inspector tries to solve.&lt;/p&gt;

&lt;p&gt;I consider Ghost Inspector one of the most &lt;strong&gt;seasoned and stable tools&lt;/strong&gt; in this list. If you look at their documentation, it's clear that they have worked closely with their customers over the years to build a tool that covers almost every major need in test automation. From screenshot comparison to file uploads and email testing, they have almost &lt;strong&gt;everything you need to write a complete test&lt;/strong&gt;. The only drawbacks, in my opinion, are a UI that looks outdated and a Tester Experience that could be improved.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. &lt;a href="https://www.cypress.io/"&gt;Cypress&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;While the other tools in this list bet on codeless test authoring, Cypress focused mainly on &lt;strong&gt;reinventing the traditional code-based testing framework&lt;/strong&gt;. Cypress built a test runner from scratch, which allows it to provide some interesting features out-of-the-box, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.cypress.io/guides/guides/stubs-spies-and-clocks.html#Capabilities"&gt;Spies, stubs and clocks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Network traffic control&lt;/li&gt;
&lt;li&gt;Time travel and DOM snapshots&lt;/li&gt;
&lt;li&gt;Real-time reload&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Being code-based, plus having a custom runner built on Chromium, allows developers and testers to write very &lt;strong&gt;advanced and complex test routines&lt;/strong&gt;, which would be impossible to create with the other tools in this list.&lt;/p&gt;

&lt;p&gt;Despite being code-based, Cypress strives to provide the &lt;strong&gt;best Developer Experience in the industry&lt;/strong&gt;. In fact, their API is crafted for maximum efficiency, and their documentation is very detailed and easy to read. They even have a dedicated &lt;em&gt;Developer Experience&lt;/em&gt; team!&lt;/p&gt;

&lt;p&gt;The Cypress team has big plans for the future; however, being still a young product, it has some limitations. For example, as of today it only supports Chromium-based browsers. Plus, it's not a 100% cloud-based solution, which can be good news for some and bad news for others. Cypress has an open-source runner that everybody can use for free, and a paid dashboard service to collect and organize test results; however, it's still the developer's or tester's responsibility to set up the tests to run on a separate CI (Continuous Integration) provider.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. &lt;a href="https://www.testim.io/"&gt;Testim&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Testim launched in 2014 and is currently &lt;strong&gt;backed by $20M in funding&lt;/strong&gt;, which makes it one of the most well-funded products in this list. Testim is a &lt;strong&gt;hybrid tool&lt;/strong&gt; that lets you either record tests using its Chrome extension or write them as code in your favourite IDE. Testim is marketed as an AI-powered testing tool, featuring self-healing selectors that should require less maintenance. But to be honest, besides this feature, I'm not sure how much other AI is in the rest of the product. That said, Testim looks like a &lt;strong&gt;solid, featureful product&lt;/strong&gt; that strives to provide a simplified and more enjoyable testing experience.&lt;/p&gt;

&lt;p&gt;One interesting feature is the ability to integrate with third-party browser grids, such as &lt;a href="https://saucelabs.com/"&gt;Saucelabs&lt;/a&gt; and &lt;a href="https://www.browserstack.com/"&gt;Browserstack&lt;/a&gt;, allowing you to run your tests on a variety of desktop and mobile browsers.&lt;/p&gt;

&lt;p&gt;For some, the only real downside of Testim may be its price (at the time of writing, starting at $450/month), which puts it off limits for many individual developers and small companies.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. &lt;a href="https://www.mabl.com/"&gt;Mabl&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Mabl is the most well-funded tool in this list, with a staggering &lt;strong&gt;$30M invested in it&lt;/strong&gt;. Founded by two ex-Google employees, Mabl defines itself as a "unified DevTestOps platform". In other words, Mabl is a &lt;strong&gt;cloud-based codeless tool&lt;/strong&gt; that simplifies the entire testing experience, from setting up tests to running them on its Google Cloud based grid. Its test recorder, called the &lt;em&gt;mabl trainer&lt;/em&gt;, is a Chrome plugin that turns browser actions into test steps.&lt;/p&gt;

&lt;p&gt;Mabl is the only product in this list to natively support &lt;strong&gt;cross-browser testing&lt;/strong&gt; on its own cloud, with Chrome, Firefox, IE, and Safari as the available browsers. Like Testim, Mabl has some AI embedded in it, enabling self-healing tests that adapt to UI changes, resulting in more robust tests that break less often.&lt;/p&gt;

&lt;p&gt;Mabl doesn't publish pricing information, so my guess is that it's mainly targeted at medium to large businesses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;The products we've just analyzed are, at the end of the day, very similar in their goal. They all aim to improve the &lt;em&gt;Tester Experience&lt;/em&gt;, making the tester's job simpler, more productive and more enjoyable. However, each of them is targeted at a specific group of testers/developers. &lt;a href="https://frontendrobot.com"&gt;Frontend Robot&lt;/a&gt; and Ghost Inspector are the products to choose if you don't want any fuss or extra complications when writing your tests. Cypress is a good choice if you still want to write code for your tests and need more powerful features and control. Testim and Mabl are mainly targeted at larger companies that need more support and enterprise features, at the cost of a more expensive subscription.&lt;/p&gt;

&lt;p&gt;There are many other products I've left out of this list, and that doesn't mean they are any less good. One important player I've left out only because it's still very young and in active development is the new &lt;a href="https://selenium.dev/selenium-ide/"&gt;Selenium IDE&lt;/a&gt;, which recently replaced the old Selenium IDE. The team is working on a new Electron-based editor, which promises to bring some interesting features.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;As always, if you have any questions or feedback you can send me an email at &lt;a href="mailto:support@frontendrobot.com"&gt;mario@frontendrobot.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>productivity</category>
      <category>ux</category>
    </item>
    <item>
      <title>Run your end-to-end tests on production like a boss</title>
      <dc:creator>Mario Casciaro</dc:creator>
      <pubDate>Fri, 26 Jul 2019 10:38:10 +0000</pubDate>
      <link>https://forem.com/mariocasciaro/run-your-end-to-end-tests-on-production-like-a-boss-3dkf</link>
      <guid>https://forem.com/mariocasciaro/run-your-end-to-end-tests-on-production-like-a-boss-3dkf</guid>
      <description>&lt;p&gt;End-to-end tests are usually run on a testing or staging environment, and for good reasons. But there are instances when running your tests on production is not that crazy after all.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why not on production?
&lt;/h2&gt;

&lt;p&gt;There are good reasons for not running your tests on a production environment. &lt;/p&gt;

&lt;p&gt;First of all, &lt;strong&gt;when your code is on production it's usually already too late for testing 😓&lt;/strong&gt;. You want to run your tests so that your customers don't get a broken product, so usually you should test your code &lt;em&gt;before&lt;/em&gt; it arrives into your customers' hands.&lt;/p&gt;

&lt;p&gt;Secondly, &lt;strong&gt;you don't want to mess with production&lt;/strong&gt; without first messing with staging. This is really a consequence of the point made above. If you haven't run your tests on a staging environment first and your tests break things, they'll break production, and that's - to put it mildly - undesirable. Tests are meant to find bugs, and bugs are unpredictable; you can never be sure of their extent. You may end up tearing down your production environment or even destroying data, who knows. On the positive side, it's better that your tests find the problem instead of your customers; at least you get notified 😝.&lt;/p&gt;

&lt;p&gt;Another reason for not running your tests on production is data noise. Imagine running your tests multiple times a day for years. You will &lt;strong&gt;end up with a lot of noise in your production data&lt;/strong&gt;, which will make it difficult to extract meaningful information from it. And data nowadays is a precious resource. On a testing/staging environment you can simply refresh the database every so often, if you ever need to, to get rid of all the data created by tests. Clearly, this is not so simple on production.&lt;/p&gt;

&lt;h2&gt;
  
  
  So why run on production after all?
&lt;/h2&gt;

&lt;p&gt;Even if running your tests on production may not seem like a good idea at first, it has some important advantages. Of course, as we will see later, you shouldn't run your tests on production without a proper plan in place to mitigate the negative aspects we've just talked about. That said, let's see why &lt;strong&gt;you should&lt;/strong&gt; run some tests on production.&lt;/p&gt;

&lt;p&gt;Well, first of all, &lt;strong&gt;your staging environment is not what your customers use&lt;/strong&gt;. A staging environment should replicate the production setup as closely as possible; however, it will never be exactly the same as your production environment. At the very least you will have different URLs, but you may also have different API keys for external services, or variables that enable/disable features on production rather than on staging. These are the dark spots in your QA strategy, and they can only be removed by testing your production environment as well.&lt;/p&gt;
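&lt;p&gt;One common way to make those differences explicit is to keep a small per-environment configuration that the same test suite reads at startup. Here is a minimal, hypothetical sketch - the environment names, keys and the &lt;code&gt;configFor&lt;/code&gt; helper are all illustrative, not part of any real tool:&lt;/p&gt;

```javascript
// Illustrative sketch only: per-environment configuration shared by one
// test suite. All names here (environments, configFor, the config keys)
// are hypothetical stand-ins for your own setup.
const environments = {
  staging: {
    baseUrl: 'https://staging.example.com',
    paymentsApiKey: process.env.STAGING_PAYMENTS_KEY,
    sendRealEmails: false, // staging should never email real customers
  },
  production: {
    baseUrl: 'https://example.com',
    paymentsApiKey: process.env.PROD_PAYMENTS_KEY,
    sendRealEmails: true,
  },
};

// Look up the configuration for the environment under test,
// failing loudly on a typo rather than silently testing the wrong URL.
function configFor(name) {
  const cfg = environments[name];
  if (!cfg) throw new Error(`Unknown environment: ${name}`);
  return cfg;
}
```

&lt;p&gt;With this shape, the only thing that changes between a staging run and a production run is which entry the suite loads, which keeps the production run exercising exactly the same test code.&lt;/p&gt;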

&lt;p&gt;Also, the &lt;strong&gt;good health of your application doesn't come only from its code&lt;/strong&gt;, but from a combination of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;integrity of the data&lt;/strong&gt; it uses.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;status of any external system&lt;/strong&gt; (e.g. databases, storage services, etc.) it depends on.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;status of the runtime environment&lt;/strong&gt; on which it runs (e.g. the server, the PaaS, etc.).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All the above factors are necessarily different between your staging and your production environments. If your data becomes corrupt on production because of a bug you missed, your application may become unstable. If your storage provider has a hiccup, your users won't be able to upload their files. If you have wrongly set up some file permissions on your production server only, you may not be able to run some parts of your application or access some of the files it needs.&lt;/p&gt;

&lt;p&gt;After all, end-to-end testing &lt;strong&gt;is also a type of integration testing&lt;/strong&gt;, so with a properly designed test plan running on production you will be able to catch all the above issues - hopefully before your customers notice them.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to properly run your tests on production
&lt;/h2&gt;

&lt;p&gt;So we have established that running some end-to-end tests on production is indeed beneficial, but also that there are some caveats we should take into account. Below is a list of good practices to consider when running end-to-end tests on production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Don't rely just on "production" for your tests
&lt;/h3&gt;

&lt;p&gt;This is probably the most important aspect. You should at least run the same set of tests on your application &lt;em&gt;before&lt;/em&gt; it reaches production, on a staging or testing environment. This will make sure that your tests won't break anything on production, and it will also catch any major bug before your customers do.&lt;/p&gt;

&lt;p&gt;In an ideal setup, we would have a three-tier testing plan:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run a full regression testing suite on staging before the code goes to production.&lt;/li&gt;
&lt;li&gt;Run a smaller set of tests (smoke test) on staging after each commit and deploy.&lt;/li&gt;
&lt;li&gt;Run an even smaller set of tests on production at regular time intervals.&lt;/li&gt;
&lt;/ul&gt;
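&lt;p&gt;The three tiers above can share a single test codebase. The following is a minimal sketch of that idea, assuming a suite registry and a &lt;code&gt;TEST_TIER&lt;/code&gt; environment variable - both hypothetical names, to be mapped onto whatever your own runner provides:&lt;/p&gt;

```javascript
// Sketch: one registry of user flows, three subsets of it.
// Suite and flow names are illustrative, not a real API.
const suites = {
  'staging-regression': ['signup', 'login', 'checkout', 'settings', 'billing'],
  'staging-smoke': ['signup', 'login', 'checkout'],
  'production-monitor': ['login', 'checkout'],
};

// Pick the subset for the current tier, failing on unknown tier names.
function selectTests(tier) {
  const tests = suites[tier];
  if (!tests) throw new Error(`Unknown tier: ${tier}`);
  return tests;
}

// Each CI stage (or the production scheduler) sets TEST_TIER and runs
// only its own subset of the shared flows.
const tier = process.env.TEST_TIER || 'staging-smoke';
const selected = selectTests(tier);
```

&lt;p&gt;The important property is that the production subset is a strict selection from the flows already exercised on staging, so nothing runs against production that hasn't passed a lower tier first.&lt;/p&gt;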

&lt;h3&gt;
  
  
  Test less, test often
&lt;/h3&gt;

&lt;p&gt;What we really want to do here is to &lt;em&gt;monitor&lt;/em&gt; the production environment rather than just testing it. This means that there should be a very limited number of highly meaningful tests, running continuously at short time intervals. This will &lt;strong&gt;ensure that your application is actually running and its major components work as they should&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;The interval at which the tests should run may vary, and it depends on how long the tests take to run, but a good value falls between 15 and 30 minutes.&lt;/p&gt;
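&lt;p&gt;The scheduling loop itself is trivial; as a sketch, assuming a hypothetical &lt;code&gt;runChecks()&lt;/code&gt; function that kicks off your production smoke suite (in practice a CI scheduler or a cron entry plays this role):&lt;/p&gt;

```javascript
// Sketch of continuous scheduling. runChecks is a hypothetical callback
// that triggers the production smoke suite.
const TWENTY_MINUTES = 20 * 60 * 1000;

function schedule(runChecks, intervalMs = TWENTY_MINUTES) {
  runChecks(); // run once right away, e.g. just after a deploy
  return setInterval(runChecks, intervalMs); // then keep repeating
}
```

&lt;p&gt;Running once immediately matters: the riskiest moment for production is right after a deploy, not twenty minutes later.&lt;/p&gt;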

&lt;h3&gt;
  
  
  Cleanup after your tests
&lt;/h3&gt;

&lt;p&gt;Your production data is important. You can use it to improve your product and your customer experience, and nowadays even to build powerful Machine Learning models. So it's important not to have unnecessary noise created by your tests. &lt;/p&gt;

&lt;p&gt;A solution is to remove all the data created during the tests. There are various strategies we can use - and we'll have a separate blog post to talk about them in more detail - but probably the easiest approach is to use the same test automation to perform the cleanup.&lt;/p&gt;
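&lt;p&gt;A minimal sketch of that approach, assuming each test registers whatever it creates so a cleanup phase can delete it all afterwards - &lt;code&gt;track&lt;/code&gt; and the &lt;code&gt;deleteRecord&lt;/code&gt; callback are hypothetical names:&lt;/p&gt;

```javascript
// Sketch of "reuse the automation for cleanup": every record a test
// creates on production is remembered, then deleted after the run.
const createdRecords = [];

function track(record) {
  createdRecords.push(record); // remember everything the tests create
  return record;
}

async function cleanup(deleteRecord) {
  // Delete in reverse creation order, so dependent records go first
  // (e.g. an order created for a test user is removed before the user).
  while (createdRecords.length > 0) {
    await deleteRecord(createdRecords.pop());
  }
}
```

&lt;p&gt;The same API calls or UI flows the tests already use for setup can serve as the &lt;code&gt;deleteRecord&lt;/code&gt; implementation, so no separate cleanup tooling is needed.&lt;/p&gt;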

&lt;h2&gt;
  
  
  Let's call it frontend monitoring
&lt;/h2&gt;

&lt;p&gt;The practice of regularly checking that a website or web application is running as it should is also called &lt;strong&gt;monitoring&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;At its most low-level implementation we have &lt;strong&gt;uptime monitoring&lt;/strong&gt;, which simply makes sure that our services are up and running. This is usually done by checking the response to a call to an HTTP endpoint. There are &lt;a href="https://www.supermonitoring.com/blog/the-updated-list-of-website-monitoring-services/"&gt;many providers&lt;/a&gt; out there to choose from.&lt;/p&gt;

&lt;p&gt;If we decide to run our end-to-end tests on production at regular intervals, we are essentially implementing some &lt;strong&gt;frontend monitoring&lt;/strong&gt; (some call it &lt;em&gt;user flow monitoring&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;The scope and purpose of the two techniques are different, but they overlap in many points. Both are intended to make sure that a production application is running as it should. Uptime monitoring usually runs very basic checks that take only an instant to run, which is why the monitoring interval is usually very short, down to 1 minute in many instances. The checks (or tests) performed for frontend monitoring, on the other hand, take longer to run, but they are also more exhaustive, as they also take into account the User Interface subsystem and its integration with the backend.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;The words &lt;em&gt;test&lt;/em&gt; and &lt;em&gt;production&lt;/em&gt; rarely appear in the same sentence, and that's for very good reasons. However, we have seen that with a proper testing strategy, we can safely implement a &lt;strong&gt;frontend monitoring&lt;/strong&gt; solution on production. &lt;/p&gt;

&lt;p&gt;If you are interested in setting up a frontend monitoring solution, &lt;a href="https://frontendrobot.com"&gt;Frontend Robot&lt;/a&gt; has a &lt;a href="https://frontendrobot.com/docs/scheduling-runs/"&gt;test scheduler&lt;/a&gt; specifically designed for this use case.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>automation</category>
      <category>frontend</category>
      <category>monitoring</category>
    </item>
  </channel>
</rss>
