<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Thomas Ladd</title>
    <description>The latest articles on Forem by Thomas Ladd (@tryeladd).</description>
    <link>https://forem.com/tryeladd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F267305%2F5604ef79-9253-4981-8278-25fde67778c8.jpg</url>
      <title>Forem: Thomas Ladd</title>
      <link>https://forem.com/tryeladd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tryeladd"/>
    <language>en</language>
    <item>
      <title>Preact in the Shadow DOM</title>
      <dc:creator>Thomas Ladd</dc:creator>
      <pubDate>Fri, 01 Jan 2021 23:36:38 +0000</pubDate>
      <link>https://forem.com/tryeladd/preact-in-the-shadow-dom-ao8</link>
      <guid>https://forem.com/tryeladd/preact-in-the-shadow-dom-ao8</guid>
      <description>&lt;p&gt;The &lt;a href="https://developer.mozilla.org/en-US/docs/Web/Web_Components/Using_shadow_DOM"&gt;shadow DOM&lt;/a&gt; is typically associated with &lt;a href="https://developer.mozilla.org/en-US/docs/Web/Web_Components"&gt;Web Components&lt;/a&gt;, but its style encapsulation properties can also be useful on its own. Up until recently, React's event system presented problems in the Shadow DOM, but those issues have been &lt;a href="https://github.com/facebook/react/issues/10422#issuecomment-674928774"&gt;resolved in React 17&lt;/a&gt;. So while this post focuses on Preact since its small size is a good fit for the cases that style encapsulation is also useful, the same process will also work with React.&lt;/p&gt;

&lt;h2&gt;Benefits of Shadow DOM&lt;/h2&gt;

&lt;p&gt;The main reason to use the shadow DOM is style encapsulation: CSS rules do not cross the shadow boundary in either direction. Inherited properties (for instance, &lt;code&gt;font-family&lt;/code&gt; and &lt;code&gt;color&lt;/code&gt;) are still inherited as usual, however.&lt;/p&gt;

&lt;p&gt;&lt;iframe height="600" src="https://codepen.io/TLadd/embed/PoGoQeV?height=600&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The above example demonstrates the style-encapsulation properties of the shadow DOM. The red box is in the normal light DOM, and the purple box's contents are in a shadow DOM. Even though there is a style rule in the &lt;code&gt;index.html&lt;/code&gt; file setting the &lt;code&gt;background-color&lt;/code&gt; of all buttons to red, it does not affect the button in the shadow DOM. Conversely, the styles set in the shadow DOM, which set the &lt;code&gt;color&lt;/code&gt; of all &lt;code&gt;p&lt;/code&gt; tags to purple and their &lt;code&gt;font-weight&lt;/code&gt; to bold, do not affect the paragraph in the light DOM.&lt;/p&gt;
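
&lt;p&gt;A minimal sketch of how such a shadow root might be set up (the host selector is illustrative):&lt;/p&gt;

```javascript
// Attach a shadow root to a host element; "open" lets page scripts reach it later
const host = document.querySelector('#widget'); // hypothetical host element
const shadowRoot = host.attachShadow({ mode: 'open' });

// Styles defined inside the shadow root apply only inside it
const style = document.createElement('style');
style.textContent = 'p { color: purple; font-weight: bold; }';
shadowRoot.appendChild(style);

// This paragraph is purple and bold; paragraphs in the light DOM are unaffected
const paragraph = document.createElement('p');
paragraph.textContent = 'Hello from the shadow DOM';
shadowRoot.appendChild(paragraph);
```
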

&lt;p&gt;For most apps, this sort of encapsulation is not necessary. Assuming you are in full control of all app styles, you can ensure the styles do not interfere. Style encapsulation can be incredibly useful, however, if you are building something that gets embedded onto host pages that you do not control. Take, for instance, the &lt;a href="https://www.grow.me/"&gt;Grow.me&lt;/a&gt;, &lt;a href="https://onesignal.com/"&gt;OneSignal&lt;/a&gt;, or &lt;a href="https://www.intercom.com/"&gt;Intercom&lt;/a&gt; widgets (note that not all of them use the shadow DOM). In cases like these, the style encapsulation the shadow DOM provides is very useful.&lt;/p&gt;

&lt;h2&gt;Shadow DOM with Preact&lt;/h2&gt;

&lt;p&gt;Rendering Preact or React into the shadow DOM is pretty simple. The target element that the initial Preact render call attaches to just needs to be within a shadow DOM.&lt;/p&gt;
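
&lt;p&gt;A minimal sketch of what that looks like (the component and element names are illustrative, and &lt;code&gt;h&lt;/code&gt; calls are used in place of JSX):&lt;/p&gt;

```javascript
import { h, render } from 'preact';

// Hypothetical root component
function App() {
  return h('p', null, 'Rendered inside a shadow root');
}

// Create a shadow root on a host element and render the app into it
const host = document.getElementById('app'); // assumed to exist on the page
const shadowRoot = host.attachShadow({ mode: 'open' });
render(h(App, null), shadowRoot);
```
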

&lt;p&gt;&lt;iframe height="600" src="https://codepen.io/TLadd/embed/xxExYJX?height=600&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;That's all there is to it.&lt;/p&gt;

&lt;h2&gt;Caveats&lt;/h2&gt;

&lt;p&gt;For the most part, everything works normally. I have, however, come across a few cases that required extra consideration.&lt;/p&gt;

&lt;h4&gt;styled-components&lt;/h4&gt;

&lt;p&gt;By default, styled-components injects styles into the head node. When rendering components into the shadow DOM, this doesn't work, since those styles can't cross the shadow boundary. Luckily, styled-components provides a &lt;a href="https://styled-components.com/docs/api#stylesheetmanager"&gt;StyleSheetManager component&lt;/a&gt; that allows customizing the target node that the styles are injected into. Setting the target to the root element inside the shadow DOM works.&lt;/p&gt;
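
&lt;p&gt;A rough sketch of that wiring, with illustrative names and &lt;code&gt;h&lt;/code&gt; calls in place of JSX:&lt;/p&gt;

```javascript
import { h, render } from 'preact';
import { StyleSheetManager } from 'styled-components';

// Hypothetical root component
function App() {
  return h('p', null, 'Widget UI');
}

const host = document.getElementById('widget'); // hypothetical host element
const shadowRoot = host.attachShadow({ mode: 'open' });

// Give styled-components a node inside the shadow root to inject styles into
const styleTarget = document.createElement('div');
shadowRoot.appendChild(styleTarget);

render(
  h(StyleSheetManager, { target: styleTarget }, h(App, null)),
  shadowRoot
);
```
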

&lt;h4&gt;Global Click Listeners&lt;/h4&gt;

&lt;p&gt;Click events still bubble out of the shadow DOM, but the events are &lt;a href="https://polymer-library.polymer-project.org/2.0/docs/devguide/shadow-dom#event-retargeting"&gt;retargeted&lt;/a&gt; when observed outside of the originating shadow DOM. One particularly problematic case is menu libraries that set up click listeners on &lt;code&gt;window&lt;/code&gt; to determine whether you clicked outside of the menu so it can close automatically. When observed from the window event listener, the target ends up being the shadow host, so that logic likely no longer functions properly.&lt;/p&gt;
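
&lt;p&gt;One possible workaround sketch: for open shadow roots, a global listener can use &lt;code&gt;composedPath()&lt;/code&gt; to see the nodes the event actually passed through (&lt;code&gt;menuElement&lt;/code&gt; and &lt;code&gt;closeMenu&lt;/code&gt; here are hypothetical):&lt;/p&gt;

```javascript
// A global "click outside" handler that works across open shadow DOM boundaries.
// menuElement is an illustrative reference to the open menu's DOM node.
window.addEventListener('click', (event) => {
  // composedPath() lists every node the event passed through, including nodes
  // inside open shadow roots; event.target alone would be retargeted to the
  // shadow host by the time it reaches window.
  const path = event.composedPath();
  const clickedInsideMenu = path.includes(menuElement);
  if (!clickedInsideMenu) {
    closeMenu(); // hypothetical close function
  }
});
```
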

&lt;h2&gt;Comparison to iframe&lt;/h2&gt;

&lt;p&gt;For building apps that get embedded onto others' sites, iframes have long been the most common means of ensuring encapsulation. Typically a very thin script is loaded onto the page that is primarily responsible for initializing an iframe that loads the app. One thing iframes get you that the shadow DOM does not is JavaScript encapsulation in addition to style encapsulation. The hosting site could do any number of heinous things to the global JavaScript namespace and your app would continue to work unaffected.&lt;/p&gt;

&lt;p&gt;The cost of that full encapsulation is a lot of overhead when it comes to interacting with the host site, or with other iframes if your embedded app requires multiple widgets. The &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage"&gt;postMessage API&lt;/a&gt; is great for cross-frame communication, but not having to communicate across frames at all is a whole lot less hassle. If your application doesn't demand the guarantees iframes provide, I think using the shadow DOM is preferable.&lt;/p&gt;

&lt;h2&gt;Final Note&lt;/h2&gt;

&lt;p&gt;When I read Shadow DOM, it is always in the voice of a Yugioh villain.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/yFm5V__72yg"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>react</category>
      <category>shadowdom</category>
      <category>preact</category>
    </item>
    <item>
      <title>How We Perform Frontend Testing on Our Customer Portal</title>
      <dc:creator>Thomas Ladd</dc:creator>
      <pubDate>Fri, 08 Nov 2019 17:56:35 +0000</pubDate>
      <link>https://forem.com/stackpath/how-we-perform-frontend-testing-on-our-customer-portal-985</link>
      <guid>https://forem.com/stackpath/how-we-perform-frontend-testing-on-our-customer-portal-985</guid>
      <description>&lt;p&gt;An effective automated testing strategy is crucial for ensuring teams can deliver quality updates to web applications quickly. We are lucky to have a lot of great options in the space right now for testing. However, with a lot of options comes the difficulty of sorting through which one(s) to pick. Then, once the tools are chosen, you need to decide when to use each one.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://www.stackpath.com"&gt;StackPath&lt;/a&gt; we're very happy with the level of confidence we've achieved in our customer portal. So, in this post, we will share the set of tools we use to test our customer portal and inspire confidence in its performance.&lt;/p&gt;

&lt;h1&gt;Testing Principles&lt;/h1&gt;

&lt;p&gt;Before diving into specific tools, it’s worth thinking about what good tests look like. Prior to starting work on &lt;a href="https://control.stackpath.com/login"&gt;the customer portal&lt;/a&gt;, we wrote down the principles we wanted to follow when writing tests. Going through that process first helped us decide which tools to select.&lt;/p&gt;

&lt;p&gt;The four principles we wrote down (with a little bit of hindsight thrown in) are listed below.&lt;/p&gt;

&lt;h3&gt;1. Tests should be thought of as an optimization problem&lt;/h3&gt;

&lt;p&gt;An effective testing strategy is about maximizing value (confidence in the application working) and minimizing cost (time spent maintaining tests and running tests). Questions we often ask when writing tests related to this principle are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What’s the possibility that this test actually catches a bug?&lt;/li&gt;
&lt;li&gt;Is this test adding value, and does that value justify its cost?&lt;/li&gt;
&lt;li&gt;Could I derive the same level of confidence as I do from this test with another test that is easier to write/maintain/run?&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;2. Avoid excessive mocking&lt;/h3&gt;

&lt;p&gt;One of my favorite explanations of mocking is &lt;a href="http://blog.testdouble.com/posts/2018-03-06-please-dont-mock-me"&gt;Justin Searls’s talk at Assert.js 2018&lt;/a&gt;. He goes into a lot more detail and subtlety than I will here, but in the talk, he refers to mocking as punching holes in reality, and I think that’s a very instructive way of looking at mocks. While mocking does have a place in our tests, we have to weigh the reduction of cost the mock provides by making the test easier to write and run against the reduction in value caused by punching that hole in reality.&lt;/p&gt;

&lt;p&gt;Previously, engineers on our team relied heavily on unit tests where all child dependencies were mocked using &lt;a href="https://airbnb.io/enzyme/docs/api/shallow.html"&gt;enzyme’s shallow rendering API&lt;/a&gt;. The shallow rendered output would then be verified using &lt;a href="https://jestjs.io/docs/en/snapshot-testing"&gt;Jest snapshots&lt;/a&gt;. All of these sorts of tests followed a similar template:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;renders &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;wrapper&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;shallow&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="c1"&gt;// Optionally interact with wrapper to get the component in a certain state&lt;/span&gt;
  &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;wrapper&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;toMatchSnapshot&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These sorts of tests punch a ton of holes in reality. You can pretty easily get to 100% test coverage with this strategy. The tests take very little thought to write, but without something testing all of the numerous integration points, they provide very little value. The tests may all pass, but I’m not too sure whether my app works or not. Even worse, all of the mocking has a hidden cost that pops up later.&lt;/p&gt;

&lt;h3&gt;3. Tests should facilitate refactoring—not make it more painful&lt;/h3&gt;

&lt;p&gt;Tests like the one shown above make refactoring more difficult. If I notice I have the same repeated code over and over again and extract it to another component later, every test I had for components that use that new component will fail. The shallow rendered output is different; where before I had the repeated markup, now I have the new component.&lt;/p&gt;

&lt;p&gt;A more complicated refactoring that involves adding some components and removing others results in even more churn as I have to add new test files and remove others. Regenerating the snapshots is easy, but what value are these tests really providing me? Even if they could catch a bug, I’m more likely to miss it amongst the number of snapshot changes and just accept the newly generated ones without thinking too hard about it.&lt;/p&gt;

&lt;p&gt;So these sorts of tests don’t help much with refactoring. Ideally, no test should fail when I refactor and no user-facing behavior is changed. Conversely, if I do change user-facing behavior, at least one test should fail. If our tests follow these two rules, they are the perfect tool for ensuring I didn’t change any user-facing behavior while refactoring.&lt;/p&gt;

&lt;h3&gt;4. Tests should mimic how a user actually uses the application&lt;/h3&gt;

&lt;p&gt;If I want my tests to only fail when user-facing behavior changes, it follows that my tests ought to interact with my application in the same sort of way an actual user would. For example, my tests should actually interact with form elements and type in input fields the same way a user would. They should never reach into a component and manually call lifecycle methods, set state, or anything else that is implementation-specific. Since the user-facing behavior is what I’m ultimately wanting to assert, it’s logical that the tests should be operating in a way that closely matches a real user.&lt;/p&gt;
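
&lt;p&gt;As a rough sketch of what that looks like with @testing-library/react (the &lt;code&gt;LoginForm&lt;/code&gt; component, its labels, and its &lt;code&gt;onSubmit&lt;/code&gt; contract are all hypothetical, and &lt;code&gt;React.createElement&lt;/code&gt; stands in for JSX):&lt;/p&gt;

```javascript
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import LoginForm from './LoginForm'; // hypothetical component under test

test('submits the entered email', () => {
  const handleSubmit = jest.fn();
  render(React.createElement(LoginForm, { onSubmit: handleSubmit }));

  // Interact through the accessible UI, the way a user would --
  // no reaching into component internals or setting state directly
  fireEvent.change(screen.getByLabelText('Email'), {
    target: { value: 'user@example.com' },
  });
  fireEvent.click(screen.getByRole('button', { name: 'Log in' }));

  expect(handleSubmit).toHaveBeenCalledWith('user@example.com');
});
```
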

&lt;h1&gt;Testing Tools&lt;/h1&gt;

&lt;p&gt;Now that we’ve defined what our goals are for our tests, let’s look at what tools we ultimately chose.&lt;/p&gt;

&lt;h3&gt;TypeScript&lt;/h3&gt;

&lt;p&gt;We use TypeScript throughout our codebase. Our backend services are written in Go and communicate using gRPC, which allows us to generate typed gRPC clients for use in our GraphQL server. The GraphQL server’s resolvers are typed using generated types from &lt;a href="https://github.com/dotansimha/graphql-code-generator"&gt;graphql-code-generator&lt;/a&gt;. Finally, our query, mutation, and subscription components/hooks are also generated with full type coverage. End-to-end type coverage eliminates an entire class of bugs resulting from the shape of data not being what you expect. Generating types from schema and protobuf files ensures our entire system remains consistent across the stack.&lt;/p&gt;

&lt;h3&gt;Jest (Unit Tests)&lt;/h3&gt;

&lt;p&gt;We use &lt;a href="https://jestjs.io/"&gt;Jest&lt;/a&gt; as our unit testing framework along with &lt;a href="https://testing-library.com/docs/react-testing-library/intro"&gt;@testing-library/react&lt;/a&gt;. In these tests, we test functions or components in isolation from the rest of the larger system. We typically test functions/components that are used frequently throughout the app and/or have a lot of different code paths that are difficult to target all of in an integration or end-to-end (E2E) test.&lt;/p&gt;

&lt;p&gt;For us, unit tests are about testing the fine-grained details. Integration and E2E tests do a good job of handling the broad strokes of the application generally working, but sometimes you need to make sure little details are correct and it would be too costly to write an integration test for each possible case.&lt;/p&gt;

&lt;p&gt;For instance, we want to ensure that keyboard navigation works for our dropdown select component, but we don’t need to verify every instance of it in our app. We test the behavior in depth in isolation so that we can just focus on higher-level concerns when testing the pages that use that component.&lt;/p&gt;

&lt;h3&gt;Cypress (Integration Tests)&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.cypress.io/"&gt;Cypress&lt;/a&gt; integration tests are at the core of our testing suite. When we started building out the StackPath portal they were the first tests we wrote because they deliver a lot of value for fairly small cost. Cypress renders our whole app in a browser and runs through test scenarios. Our entire frontend is running exactly as it would for a user. The network layer, however, is mocked. Every network request that would go to our GraphQL server is instead mocked with fixture data.&lt;/p&gt;

&lt;p&gt;Mocking the network layer provides a number of benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tests are faster.&lt;/strong&gt; Even if your backend is super fast, the calls made across an entire test suite run add up. With the responses being mocked, they can return instantly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests are more reliable.&lt;/strong&gt; One of the difficulties with full E2E tests is accounting for variability in the network and stateful backend data. When every request is mocked, that variability is gone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hard-to-replicate scenarios can be simulated with ease.&lt;/strong&gt; For instance, it would be difficult to reliably force calls to fail. If we want to test that our app responds correctly when a call fails, being able to force that failure is helpful.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While mocking our entire backend may seem like a problem, all of our fixture data is typed using the same generated TypeScript types our app uses, so it is guaranteed to be at least structurally equivalent to what an unmocked backend would return. For most of our tests, we are happy with the tradeoff that mocking the network provides.&lt;/p&gt;
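
&lt;p&gt;A sketch of what such a mocked call might look like in a Cypress test (the endpoint, route, and messages are illustrative, not our actual setup):&lt;/p&gt;

```javascript
// Illustrative Cypress integration test with the GraphQL layer mocked.
describe('sites list', () => {
  it('shows an error state when the query fails', () => {
    // Force the GraphQL call to fail -- a scenario that is hard to replicate
    // against a real backend but trivial with a mocked network
    cy.intercept('POST', '/graphql', {
      statusCode: 500,
      body: { errors: [{ message: 'Internal server error' }] },
    }).as('getSites');

    cy.visit('/sites');
    cy.wait('@getSites');
    cy.contains('Something went wrong').should('be.visible');
  });
});
```
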

&lt;p&gt;The developer experience with Cypress is also really good. The tests run in the Cypress Test Runner which shows your tests on the left and your app running in a main iframe performing those tests. After a test run, you can highlight individual steps in your tests to see what your app was doing at that point. Since the test runner is itself running in a browser, you also have access to developer tools to help debug tests.&lt;/p&gt;

&lt;p&gt;Oftentimes when writing frontend tests it can take a lot of time to assess what a test is actually doing and what state the DOM is in at a particular point in the test. Cypress makes this part really easy because you can just see it happening right in front of you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C2wgKrph--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://i.imgur.com/omzz4ZP.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C2wgKrph--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://i.imgur.com/omzz4ZP.gif" alt="Cypress test runner gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These tests exemplify a lot of our stated testing principles: the cost-to-value ratio is favorable, the tests very closely mimic how an actual user interacts with the app, and the only thing being mocked is the network layer.&lt;/p&gt;

&lt;h3&gt;Cypress (E2E Tests)&lt;/h3&gt;

&lt;p&gt;Our E2E tests are also written in Cypress, but for these we do not mock the network (or anything else). Our tests run against our actual GraphQL server which communicates with actual instances of our backend services.&lt;/p&gt;

&lt;p&gt;E2E tests are immensely valuable because they can definitively tell you if something works or not. Nothing is being mocked, so it’s using the app exactly as a user would. E2E tests are higher cost as well though. They are slower, take more thought to prevent intermittent failures, and take more work to ensure your tests are always in a known state before running.&lt;/p&gt;

&lt;p&gt;Tests typically need to start from a known state, do some operations, and then arrive at some other known expected state. With the integration tests, this is easy to accomplish because the API calls are mocked and thus are the same every test run. For E2E tests, it’s more complicated because the backend storage now holds state which could be mutated as the result of a test. Somehow, you have to ensure that when you start a test, you’re in a known state.&lt;/p&gt;

&lt;p&gt;At the beginning of our E2E test run, we run a script that seeds a new account with new stacks, sites, workloads, monitors, etc., by making API calls directly. Each test run operates on different instances of the data, but the test setup is identical. The seed script emits a file with the data our tests use when running (mostly instance IDs and domains). This seed script is what allows us to get into a known state before running our tests.&lt;/p&gt;

&lt;p&gt;Since these E2E tests are higher cost, we write fewer of them than integration tests. We cover the critical functionality of our app: user registration/login, creating and configuring a site/workload, etc. From our extensive integration tests, we know that our frontend generally works, so these just need to ensure that nothing slips through the cracks when hooked up to the rest of the system.&lt;/p&gt;

&lt;h1&gt;Downsides to this multipronged testing strategy&lt;/h1&gt;

&lt;p&gt;While we’ve been really happy with our tests and the general stability of our app, there are definitely downsides to going with this sort of multipronged testing strategy.&lt;/p&gt;

&lt;p&gt;First, it means everyone on the team needs to be familiar with multiple testing tools instead of just one. Everyone needs to know Jest, @testing-library/react, and Cypress. Not only do we have to know how to write tests in these different tools; we also have to make decisions all the time about which tool to use. Should I write an E2E test covering this functionality, or is an integration test fine? Do I need unit tests covering some of these finer-grained details as well?&lt;/p&gt;

&lt;p&gt;There is undoubtedly a mental load here that isn’t present if you only have one choice. In general, we start with integration tests as the default and then add on an E2E test if we feel the functionality is particularly critical and backend-dependent. Or we start with unit tests if we feel integration tests cannot reasonably cover the number of different details involved.&lt;/p&gt;

&lt;p&gt;We definitely still have some gray areas, but patterns start to emerge after going through this thought process enough times. For instance, form validation tends to be tested in unit tests due to the number of different scenarios, and everyone on the team is aware of that at this point.&lt;/p&gt;

&lt;p&gt;Another downside to this approach is that collecting test coverage, while not impossible, is more difficult. While chasing test coverage can result in bad tests just for the sake of making a number go up, it can still be a useful automated way of finding holes in your tests. The trouble with having multiple testing tools is that you have to combine test coverage to find out which parts of your app are truly not covered. It’s possible, but it’s definitely more complicated.&lt;/p&gt;

&lt;h1&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;While some challenges exist when using many different testing tools, each tool serves its purpose, and we think each is worth including in our overall testing strategy. When starting a new application, or adding tests to an existing one, integration tests are a great place to start. Adding some base-level E2E tests around absolutely critical functionality early on is a good idea as well.&lt;/p&gt;

&lt;p&gt;With those two pieces in place, you should be able to make changes to your application with pretty reasonable confidence. If you start to notice bugs creeping in, stop and assess what sort of tests could have caught those bugs and if it indicates a deficiency in the overall strategy.&lt;/p&gt;

&lt;p&gt;We definitely did not arrive at our current test setup overnight and it is something we expect to keep evolving as we continue growing. For the time being though, we feel good about our current approach to testing.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>typescript</category>
    </item>
  </channel>
</rss>
