<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Alex 🦅 Eagle</title>
    <description>The latest articles on Forem by Alex 🦅 Eagle (@jakeherringbone).</description>
    <link>https://forem.com/jakeherringbone</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F237467%2F97e249a8-80d3-4595-94d7-09160f678c5d.jpg</url>
      <title>Forem: Alex 🦅 Eagle</title>
      <link>https://forem.com/jakeherringbone</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jakeherringbone"/>
    <language>en</language>
    <item>
      <title>Stamping Bazel builds with selective delivery</title>
      <dc:creator>Alex 🦅 Eagle</dc:creator>
      <pubDate>Fri, 22 Oct 2021 04:03:36 +0000</pubDate>
      <link>https://forem.com/bazel/stamping-bazel-builds-with-selective-delivery-56o3</link>
      <guid>https://forem.com/bazel/stamping-bazel-builds-with-selective-delivery-56o3</guid>
      <description>&lt;p&gt;The obvious next step after building a nice CI pipeline around Bazel is Continuous Deployment. So no surprise that one of the frequent questions on Bazel slack is&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;How do I release the artifacts built by Bazel?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;and the answer is really not well documented anywhere. Here's what I've learned.&lt;/p&gt;

&lt;h1&gt;Stamping&lt;/h1&gt;

&lt;p&gt;Bazel is mostly unaware of version control, and that's good, because coupling causes unintended feature interactions. But sometimes you want the git SHA to appear in the binary so your monitoring system can tell which version is crash-looping. This is where stamping comes in. Bazel keeps two files in &lt;code&gt;bazel-out&lt;/code&gt; at all times, &lt;code&gt;stable-status.txt&lt;/code&gt; and &lt;code&gt;volatile-status.txt&lt;/code&gt;, which are populated from local environment info like the hostname, and can be inputs to build actions.&lt;/p&gt;

&lt;p&gt;The files are just sitting there in the output tree after any build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat bazel-out/stable-status.txt 
BUILD_EMBED_LABEL 
BUILD_HOST system76-pc
BUILD_USER alexeagle
$ cat bazel-out/volatile-status.txt 
BUILD_TIMESTAMP 1634865540
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;You can fill in more values in these files by adding &lt;code&gt;--workspace_status_command=path/to/my_script.sh&lt;/code&gt; to your .bazelrc and writing a script that emits key/value pairs, often by calling &lt;code&gt;git&lt;/code&gt;. Note that adding this flag to every build means slow git operations can slow down every developer, so you might want to enable it only on CI.&lt;br&gt;
As an aside, instead of just a git SHA let me recommend &lt;a href="https://twitter.com/jakeherringbone/status/1324871225898749953"&gt;https://twitter.com/jakeherringbone/status/1324871225898749953&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
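&lt;p&gt;A minimal workspace status script might look like this (the path and keys here are hypothetical; the convention is that keys prefixed with STABLE_ land in stable-status.txt, everything else in volatile-status.txt):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# tools/workspace_status.sh (hypothetical path, wired up via
# --workspace_status_command). Each output line is "KEY value".
# STABLE_ keys go to stable-status.txt; the rest to volatile-status.txt.
echo "STABLE_GIT_SHA $(git rev-parse HEAD 2>/dev/null || echo unknown)"
echo "CURRENT_TIME $(date +%s)"
```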

&lt;p&gt;The "stable" statuses are meant to be relatively constant over rebuilds on your machine. So your username is stable. The stable status file is part of Bazel's cache key for actions. So if your value of &lt;code&gt;--embed_label&lt;/code&gt; changes, it will be reflected in the BUILD_EMBED_LABEL line of stable-status.txt and you'll get a cache miss for every stamped action. They will be re-run to find out the new value.&lt;/p&gt;

&lt;p&gt;The "volatile" statuses change all the time, like the timestamp. These are not part of an action key, as that would make the cache useless.&lt;/p&gt;

&lt;p&gt;Bazel only rebuilds an artifact if the stable stamp or one of the declared inputs changes. Otherwise you can get a cache hit, with a stale value of a volatile stamp.&lt;/p&gt;
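&lt;p&gt;As a hedged sketch of consuming these files, a &lt;code&gt;genrule&lt;/code&gt; with &lt;code&gt;stamp = 1&lt;/code&gt; gets access to the status files during its action (the target name is made up; check your Bazel version's docs for the exact mechanics):&lt;/p&gt;

```python
# BUILD.bazel -- hypothetical stamped genrule; with stamp = 1 the
# status files are available to the action under bazel-out/.
genrule(
    name = "version_file",
    outs = ["version.txt"],
    cmd = "grep BUILD_EMBED_LABEL bazel-out/stable-status.txt > $@",
    stamp = 1,
)
```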

&lt;blockquote&gt;
&lt;p&gt;Due to using a volatile stamp, we had a bug in Angular's release process. As a workaround, to make sure all the artifacts were versioned together, we had to do a clean build when releasing. I always felt bad for whoever was doing the push on their laptop and waiting.&lt;br&gt;
This was the wrong approach; it should have used stable stamping.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;When to stamp&lt;/h1&gt;

&lt;p&gt;Bazel has a flag, &lt;code&gt;--stamp&lt;/code&gt;. Sadly, it is not exposed to Bazel rules in any consistent way, so many rules have a fixed boolean attribute &lt;code&gt;stamp = True|False&lt;/code&gt; instead. This inconsistency causes a lot of friction around stamping correctly.&lt;/p&gt;

&lt;p&gt;You should not enable &lt;code&gt;--stamp&lt;/code&gt; on your CI builds. When any stable status value changes, you'll bust the cache and re-do a lot of work. Even if you don't use stable status values, some ruleset you depend on might.&lt;/p&gt;

&lt;p&gt;This is also a key element of how we'll find the changed artifacts later. We don't want any stamp info in them at all, so their content hash is deterministic.&lt;/p&gt;

&lt;h1&gt;Finding the releasable artifacts&lt;/h1&gt;

&lt;p&gt;Use a custom Bazel rule to describe your release artifacts. Delivery styles vary a lot, so I haven't seen one of these that works for everyone. The custom rule can produce a manifest file of whatever info your continuous delivery system needs to know.&lt;/p&gt;

&lt;p&gt;After a green CI build and test step, your pipeline should use &lt;code&gt;bazel query&lt;/code&gt; to find all of the release artifacts.&lt;/p&gt;
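&lt;p&gt;For example, if your custom rule were named &lt;code&gt;release_manifest&lt;/code&gt; (a hypothetical name), the pipeline could collect every instance of it with a kind query:&lt;/p&gt;

```shell
# Find all release artifacts declared anywhere in the workspace.
bazel query 'kind(release_manifest, //...)' --output=label
```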

&lt;h1&gt;Selective release&lt;/h1&gt;

&lt;p&gt;We could release everything all the time, but &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;we don't want to push duplicate artifacts&lt;/li&gt;
&lt;li&gt;stamped artifacts should always reflect the version info of the last change that affected them&lt;/li&gt;
&lt;li&gt;downstream systems become confusing for users to operate when there are too many versions to pick from&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's the recipe:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;CI already ran without &lt;code&gt;--stamp&lt;/code&gt;, so the release artifacts are deterministic from sources.&lt;/li&gt;
&lt;li&gt;Query for the release artifacts and loop over them.&lt;/li&gt;
&lt;li&gt;For each artifact, compute the content hash (or just take the existing .digest output from something like docker that supplies it).&lt;/li&gt;
&lt;li&gt;Use a reliable key/value store (Redis SETNX is good for this) that quickly tells you whether the content hash differs from what was seen before.&lt;/li&gt;
&lt;li&gt;Loop over the newly-seen artifact labels and run &lt;code&gt;bazel run --stamp thing.deploy&lt;/code&gt; again, or whatever you need to do to promote them to the next stage in the CD pipeline.&lt;/li&gt;
&lt;/ol&gt;
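&lt;p&gt;Steps 3 and 4 can be sketched in a few lines of Python; here an in-memory set stands in for Redis SETNX, and the labels and digests are made up for illustration:&lt;/p&gt;

```python
import hashlib

def content_hash(path):
    """Step 3: content hash of a built artifact."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def newly_seen(store, label, digest):
    """Step 4: stand-in for Redis SETNX. Returns True only the first
    time this (label, digest) pair is recorded."""
    key = (label, digest)
    if key in store:
        return False
    store.add(key)
    return True

# Steps 2 and 5: loop over the queried labels, promoting only new ones.
store = set()
to_promote = [
    label
    for label, digest in [("//app:image", "abc123"), ("//app:image", "abc123")]
    if newly_seen(store, label, digest)
]
```

&lt;p&gt;In a real pipeline the promote step would shell out to &lt;code&gt;bazel run --stamp&lt;/code&gt; for each label in the list.&lt;/p&gt;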

&lt;p&gt;Since most of the actions in the dependency graph shouldn't be stamp-aware, the last step here should still be fairly incremental.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Bazel can write to the source folder!</title>
      <dc:creator>Alex 🦅 Eagle</dc:creator>
      <pubDate>Thu, 21 Oct 2021 15:19:01 +0000</pubDate>
      <link>https://forem.com/bazel/bazel-can-write-to-the-source-folder-b9b</link>
      <guid>https://forem.com/bazel/bazel-can-write-to-the-source-folder-b9b</guid>
      <description>&lt;p&gt;Bazel is Google's open-sourced build tool. When used internally at Google, it comes along with a bunch of idioms which Googlers naturally take for granted, and associate with Bazel. These can accidentally become part of the accepted dogma around Bazel migration.&lt;/p&gt;

&lt;p&gt;Most frequently, the accident I see is the false perception that "Bazel cannot write to the source folder, so you can no longer check in generated files, nor have them in the sources but ignored by VCS".&lt;/p&gt;

&lt;h2&gt;Typically you shouldn't do it&lt;/h2&gt;

&lt;p&gt;Intermediate outputs in Bazel are meant to be used directly as inputs to another target in the build. For example, if you generate language-specific client stubs from a &lt;code&gt;.proto&lt;/code&gt; file, those stay in the &lt;code&gt;bazel-out&lt;/code&gt; folder and a later compiler step should be configured to read them from there.&lt;/p&gt;

&lt;p&gt;However, there are plenty of cases where outputs do need to go in the source folder:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;working around an editor plugin that only knows how to read from the source folder and can't be configured to look in bazel-out&lt;/li&gt;
&lt;li&gt;"golden" or "snapshot" files used for tests&lt;/li&gt;
&lt;li&gt;generated documentation that's checked in next to sources&lt;/li&gt;
&lt;li&gt;files that you need to be able to search or browse from your version control GUI&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Yes you can do it&lt;/h2&gt;

&lt;p&gt;If you restrict yourself to only &lt;code&gt;bazel build&lt;/code&gt; and &lt;code&gt;bazel test&lt;/code&gt;, then it's true that neither of these commands can mutate the source tree. Bazel is strictly a transform tool from the sources to its own bazel-out folder. However, &lt;code&gt;bazel run&lt;/code&gt; has no such limitation, and in fact always sets an environment variable &lt;code&gt;BUILD_WORKSPACE_DIRECTORY&lt;/code&gt; which makes it easy to find your sources and modify them.&lt;/p&gt;

&lt;p&gt;This leads us to the "Write to Sources" pattern for Bazel. We'll use &lt;code&gt;bazel run&lt;/code&gt; to make the updates, and &lt;code&gt;bazel test&lt;/code&gt; to make sure developers don't allow the file in the source folder to drift from what Bazel generates.&lt;/p&gt;

&lt;p&gt;Here's the basic recipe, which I've adapted to many scenarios. For example, many of the core Bazel rulesets now use this pattern to keep their generated API markdown files in sync with the sources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"@bazel_skylib//rules:diff_test.bzl"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"diff_test"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"@bazel_skylib//rules:write_file.bzl"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"write_file"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Config:
# Map from some source file to a target that produces it.
# This recipe assumes you already have some such targets.
&lt;/span&gt;&lt;span class="n"&gt;_GENERATED&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;"some-source"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"//:generated.txt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;# ...
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Create a test target for each file that Bazel should
# write to the source tree.
&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="n"&gt;diff_test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"check_"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c1"&gt;# Make it trivial for devs to understand that if
&lt;/span&gt;        &lt;span class="c1"&gt;# this test fails, they just need to run the updater
&lt;/span&gt;        &lt;span class="c1"&gt;# Note, you need bazel-skylib version 1.1.1 or greater
&lt;/span&gt;        &lt;span class="c1"&gt;# to get the failure_message attribute
&lt;/span&gt;        &lt;span class="n"&gt;failure_message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Please run:  bazel run //:update"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;file1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;file2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;_GENERATED&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Generate the updater script so there's only one target for devs to run,
# even if many generated files are in the source folder.
&lt;/span&gt;&lt;span class="n"&gt;write_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"gen_update"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;out&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"update.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="c1"&gt;# This depends on bash, would need tweaks for Windows
&lt;/span&gt;        &lt;span class="s"&gt;"#!/usr/bin/env bash"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c1"&gt;# Bazel gives us a way to access the source folder!
&lt;/span&gt;        &lt;span class="s"&gt;"cd $BUILD_WORKSPACE_DIRECTORY"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="c1"&gt;# Paths are now relative to the workspace.
&lt;/span&gt;        &lt;span class="c1"&gt;# We can copy files from bazel-bin to the sources
&lt;/span&gt;        &lt;span class="s"&gt;"cp -fv bazel-bin/{1} {0}"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="c1"&gt;# Convert label to path
&lt;/span&gt;            &lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;":"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"/"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;_GENERATED&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# This is what you can `bazel run` and it can write to the source folder
&lt;/span&gt;&lt;span class="n"&gt;sh_binary&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"update"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;srcs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"update.sh"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_GENERATED&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;values&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You may want to tweak the recipe; for example, if the output files are markdown I'll append ".md" to the keys. If your files follow a convention, you might be able to configure it with just a list rather than a dictionary.&lt;/p&gt;

&lt;p&gt;Note that this pattern does have one downside, compared with build tools that allow a build to directly output into the source tree. Until you run the tests, it's possible that you're working against an out-of-date file in the source folder. This could mean you spend some time developing, only to find on CI that the generated file needs to be updated, and then after updating it, you have to make some fixes to the code you wrote.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>CBOI: Continuous Build, Occasional Integration</title>
      <dc:creator>Alex 🦅 Eagle</dc:creator>
      <pubDate>Tue, 01 Jun 2021 17:46:24 +0000</pubDate>
      <link>https://forem.com/jakeherringbone/cboi-continuous-build-occasional-integration-4hgm</link>
      <guid>https://forem.com/jakeherringbone/cboi-continuous-build-occasional-integration-4hgm</guid>
      <description>&lt;p&gt;Is your organization practicing CBOI? If you haven't heard this hot new industry acronym, it stands for "Continuous Build, Occasional Integration." A lot of big companies are using this technique. It's a different way of approaching Continuous Integration (CI).&lt;/p&gt;

&lt;p&gt;By different, I mean a lot worse.&lt;/p&gt;

&lt;p&gt;In fact, your organization should &lt;em&gt;not&lt;/em&gt; practice CBOI. So why write an article about it? Because, sadly, most organizations who claim to do CI are actually doing CBOI. I'll explain why that is, and how you can stop.&lt;/p&gt;

&lt;h2&gt;What is CI?&lt;/h2&gt;

&lt;p&gt;Let's break down the terms a bit to start. "Continuous" is just a way of saying "infinite loop" - we trigger on every change or on a regular interval, and give feedback to the development cycle, such as alerting developers that they broke an automated test. Easy, and not controversial.&lt;/p&gt;

&lt;p&gt;"Integration" is a much more nuanced term. In most software shops, what we mean here is that we bring together the artifacts from independent engineering teams into a functioning system. A common example that I'll use in this article is a Frontend and a Backend.&lt;/p&gt;

&lt;p&gt;In a small organization, with only a few developers, Integration isn't much of a problem. Every engineer develops on the whole stack, and runs the complete system locally. As the organization scales, however, teams break up and specialize. The full system is eventually too complex to fit in one person's head, though the Architect tries mightily. The more the org structure gets broken up, the more different software systems diverge and the harder it is to guarantee that the code they're writing works when integrated.&lt;/p&gt;

&lt;p&gt;In order to perform Continuous Integration, then, you need an automated way to integrate the full stack. In working with a number of large companies, I've rarely observed this automation. Instead, individual developers just work on their own code (not surprising, since they prefer to work in isolation, reducing their cognitive load and learning curve). They aren't able to bring up other parts of the system, for a variety of reasons I'll list later. However, the engineers know (or their managers instruct them) to set up a "CI" for their code. So they take the build and test system they use locally and put it on a server running in a loop. In our example, the backend team runs their backend tests on Jenkins.&lt;/p&gt;

&lt;p&gt;Is that CI? There's an easy litmus test to determine that.&lt;/p&gt;

&lt;h2&gt;How to tell if you're doing CBOI rather than CI&lt;/h2&gt;

&lt;p&gt;Let's say the backend team makes a change that will break the frontend code. To head off certain objections, I'll add that this change isn't something we expected to be part of the API contract between these layers: say we just caused the ordering of results from a query to change. At what point in your development cycle will you discover the problem?&lt;/p&gt;

&lt;p&gt;In organizations doing CBOI, the answer is that they'll find out in production when customers discover the defect. That's because the automation couldn't run the frontend tests against the HEAD version of the backend, and since the change appeared API-compatible, no one tried to manually verify it either. When you're discovering your bugs in prod, you should start asking the hard questions in your post-mortem: why didn't our CI catch this? And in our example, the answer shocks our engineers: they didn't have CI after all.&lt;/p&gt;

&lt;p&gt;Instead of CI, their setup was individual teams testing their own code in a loop, which is a Continuous Build (CB). Then, when they released to prod, the Release Engineer performed the actual integration by putting the code from different teams together into the finished system. Those releases happen on a less-frequent cadence. That's Occasional Integration (OI).&lt;/p&gt;

&lt;p&gt;If a developer wanted to debug the problem, they'd be forced to "code in production". With no way to reproduce the full stack, they have to push speculative changes and look at production logs to see if they've fixed it. SSH'ing into a production box to make edits is the opposite of what we want. For space, I won't go into details on this as it merits a separate article (and is maybe obvious to you).&lt;/p&gt;

&lt;p&gt;So we've finally defined what CBOI is, and seen how it causes production outages and scary engineering practices. Ouch!&lt;/p&gt;

&lt;h2&gt;How to stop doing CBOI&lt;/h2&gt;

&lt;p&gt;I have to start this section with a warning: it isn't going to be easy. The Continuous Build was set up because it was trivial: take the build/test tool the developers were running for their code and put it on a server in a loop. There isn't a similarly easy way to integrate the full stack. It may even require some changes to your build/test tools, or to the entry-point of your software. However, if your organization has a problem with defects in production (or wants to avoid such a problem), this work is worth doing.&lt;/p&gt;

&lt;p&gt;Also, although the example so far was a Frontend and a Backend, which are runnable applications, CI is just as important for other vertices of your dependency graph, such as shared libraries or data model schemas.&lt;/p&gt;

&lt;p&gt;I'll break this down into a series of problems:&lt;/p&gt;

&lt;p&gt;1) developers can't run the full stack&lt;br&gt;
2) no integration test fixture exists that can detect the defect&lt;br&gt;
3) resource constraints make it uneconomical to run all the tests&lt;/p&gt;

&lt;p&gt;Along the way (spoiler alert) I'll explain how one Integration tool (&lt;a href="http://bazel.build"&gt;http://bazel.build&lt;/a&gt;) solves the technical problems.&lt;/p&gt;

&lt;p&gt;However we'll conclude with a final problem, the people problem:&lt;/p&gt;

&lt;p&gt;4) the organization is averse to integrating dev processes&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;People problems are always harder than software problems, as I learned from early Google luminary Bill Coughran.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Why devs can't run the full stack&lt;/h2&gt;

&lt;p&gt;As I mentioned earlier, our ideal integration happens on the developer's machine. After making that non-order-preserving backend change, you'd just run the frontend tests to discover the breakage. In practice this is much harder than it should be.&lt;/p&gt;

&lt;p&gt;First, you might need your machine in a very particular state. You need compilers and toolchains installed at just the right versions, statically linked against the right system headers, and running on an OS that's compatible with prod. Most teams don't have up-to-date "onboarding" instructions that carefully cover this, and since the underlying systems are always churning, you don't even know whether your instructions will work for the next person trying to run your code.&lt;/p&gt;

&lt;p&gt;Next, many systems require shared runtime infrastructure ("the staging environment") or credentials. These either aren't made available to engineers, or they're a contended resource where only one person can have their changes running at a time.&lt;/p&gt;

&lt;p&gt;It's also common that knowledge of how to bring up a fresh copy of the system isn't written down anywhere, and hasn't been scripted. Only the sysadmin has the steps roughly documented in an unsaved notepad.exe buffer, so when you need to bring up a server, that person clicks around the AWS UI to do so.&lt;/p&gt;

&lt;p&gt;To solve these problems, and unlock your developers' ability to run the whole system, you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A tool like Bazel that manages the toolchains and keeps the configuration roughly hermetic, so a dev can "parachute" into someone else's code and run it at HEAD without any setup to maintain.&lt;/li&gt;
&lt;li&gt;The ability to cheaply spin up a new environment anywhere. For example if you deploy to a Kubernetes cluster, use something like minikube to make a miniature local environment that mimics production and re-uses most of the same configs.&lt;/li&gt;
&lt;li&gt;Robust scripting that automates the release engineer's job. It should be possible for a test to run the same setup logic to make a fresh copy of the system under test.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The configurations need to be "democratized" for this to work well. Under Jenkins you might have had some centralized Groovy code that looks at changed directories or repositories and determines tests to run. This doesn't scale in a big org where many engineers have to edit these files. Instead, you should push configuration out to the leaves as much as possible: co-locate the description of build&amp;amp;test for some code at the nearest common ancestor directory of those inputs. Bazel's &lt;code&gt;BUILD.bazel&lt;/code&gt; files are a great example of how to do this.&lt;/p&gt;
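&lt;p&gt;As a sketch of that co-location, a hypothetical docs directory could declare its own check right beside the files it covers, so no central CI configuration has to know about it:&lt;/p&gt;

```python
# docs/BUILD.bazel (hypothetical): the test lives next to its inputs,
# so `bazel test //docs/...` discovers it with no central config.
sh_test(
    name = "check_links",
    srcs = ["check_links.sh"],
    data = glob(["*.md"]),
)
```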

&lt;h2&gt;Integration test fixtures&lt;/h2&gt;

&lt;p&gt;Remember that tests are written in three parts, sometimes called "Arrange, Act, Assert". The first part is to bring up the "System Under Test" (SUT); see &lt;a href="https://en.wikipedia.org/wiki/Test_fixture#Software"&gt;https://en.wikipedia.org/wiki/Test_fixture#Software&lt;/a&gt; among other references.&lt;/p&gt;

&lt;p&gt;In order to assert that the frontend and backend work together, our automated test first needs to integrate the frontend and backend: building both of them at HEAD and running them in a suitable environment, with the wiring performed so they can reach each other for API calls. You'll need a high-level, language-agnostic tool to orchestrate these builds in order to build dependencies from HEAD. Again, Bazel is great for this.&lt;/p&gt;

&lt;p&gt;You'll find there is natural resistance here: the "first mover" cost is very high. An engineer could easily spend a week writing one test to catch the ordering defect I mentioned earlier. In the scope of that post-mortem, someone will object "we can't possibly make time for that." But of course, the fixture is reusable, and once it's written you can add more true "integration tests", even writing them at the same time you make software changes rather than as regression tests for a post-mortem.&lt;/p&gt;

&lt;p&gt;If the code is in many repositories, that also introduces a burden. You'll either need some "meta-versioning" scheme that says what SHA of each repo to fetch when integrating, or you'll need to co-locate the code into a single monorepo (which has its own cost/benefit analysis).&lt;/p&gt;

&lt;h2&gt;Not economical to run all the tests&lt;/h2&gt;

&lt;p&gt;The last technical problem I'll mention is test triggering. In the CBOI model, you only needed to run the backend tests when the backend changed, and the frontend tests when the frontend changed. And they were smaller tests that only required a single system in their test fixture. CI is going to require that we write tests with heavier fixtures, and run them on more changes.&lt;/p&gt;

&lt;p&gt;Triggering across projects is tricky. Our goal is to avoid running all the tests every time, but to run the "necessary" ones. You could write some logic that says "last time we touched that backend we broke something, so those changes also trigger this other CI". This logic is likely flawed and quickly rusts, so I don't think it's a good strategy. You could automate that logic using some heuristics, like &lt;a href="https://www.launchableinc.com/"&gt;Launchable&lt;/a&gt; does. But to make this calculation reliably correct, ensuring that &lt;em&gt;all&lt;/em&gt; affected tests are run for a given change, you need a dependency graph. Bazel is great for expressing and querying that graph, for example finding every test that transitively depends on the changed sources.&lt;/p&gt;
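&lt;p&gt;With the graph expressed in Bazel, the "all affected tests" computation is a single query (the changed-file label here is made up):&lt;/p&gt;

```shell
# Every test that transitively depends on the changed source file.
bazel query 'kind(test, rdeps(//..., //backend:query.go))'
```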

&lt;p&gt;In a naive solution, it's also too slow to build everything from HEAD. You need a shared cache of intermediate build artifacts. Bazel has a great remote caching layer that can scale to a large monorepo, ensuring that you keep good incrementality.&lt;/p&gt;

&lt;h2&gt;Organization Averse to Integrating&lt;/h2&gt;

&lt;p&gt;Lastly, I mentioned there's a non-technical problem as well. Even with clever engineers and the right tools, like Bazel, this might be what sinks your effort.&lt;/p&gt;

&lt;p&gt;Engineers want to work in isolation from each other. For example, the backend engineers think JavaScript is a mess and don't want to learn anything about frontend code. Engineers are amazingly tribal! Try asking a Mac user to develop on Windows or vice-versa.&lt;/p&gt;

&lt;p&gt;To do CI, we're asking that the backend engineers have to look at the frontend test results when something is red, to determine if their changes caused a regression. We're asking the frontend engineers to wait for a build of the backend to run their tests against. These teams never had to work closely together in the past.&lt;/p&gt;

&lt;p&gt;Worse, we're also asking the managers to act differently. This is an infrastructure investment for the future, requiring some plumbing changes in the build system. So only an organization willing to make strategic decisions will be able to prioritize and consistently staff their CI project. Also, the managers from different parts of the org will have to reach some technical agreement between their teams about standardizing on build/test tooling that can span across projects. This may run into the same friction you always have when making shared technical decisions.&lt;/p&gt;

&lt;h2&gt;Epilogue: coverage&lt;/h2&gt;

&lt;p&gt;I like to beat up on test coverage as a metric, because it is weighted entirely toward executing lines of code, not toward making assertions. In the context of CBOI, test coverage is also misleading. You might have 100% test coverage of the frontend and 100% test coverage of the backend, but 0% coverage of the defects that appear when integrating the two. I think this contributes to the misunderstanding among engineering managers.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Bazel: what you give, what you get</title>
      <dc:creator>Alex 🦅 Eagle</dc:creator>
      <pubDate>Tue, 01 Jun 2021 17:46:08 +0000</pubDate>
      <link>https://forem.com/jakeherringbone/bazel-what-you-give-what-you-get-5a91</link>
      <guid>https://forem.com/jakeherringbone/bazel-what-you-give-what-you-get-5a91</guid>
      <description>&lt;p&gt;There are a few ways I like to describe Bazel to an engineer who hasn't used it. If they have used similar build tools like Gradle or Make, I'll usually start with comparing the configuration affordances or differences in execution strategies. But most engineers have only used the canonical tooling for the language they write, and only superficially interact with it. After all, they're busy writing the code, and the build system normally tries to hide behind the scenes. With Bazel, we're asking engineers to understand a bit more, and here's where I like to start:&lt;/p&gt;

&lt;p&gt;Bazel offers you this proposition: you describe the dependencies of your application, and Bazel will keep your outputs up-to-date.&lt;/p&gt;

&lt;h1&gt;
  
  
  Describe your dependencies
&lt;/h1&gt;

&lt;p&gt;Most build systems allow any code to depend on anything. As a result, they are limited in how aggressively they can minimize re-build times. Describing dependencies explicitly is the extra work you'll need to do to get Bazel's benefits.&lt;/p&gt;

&lt;p&gt;Your job is to describe your sources, by grouping them into "targets". For example, "a TypeScript library", "a Go package", "a dynamic-linked Swift library", etc. You say which source files in your repo are part of each target, and then what other targets it "depends" on. Sometimes you can give some other bits of description, like the module name this code should be imported by, or options for compiling it. Sometimes you'll have to indicate runtime dependencies as well, such as some data file read during one of your tests.&lt;/p&gt;

&lt;p&gt;That's it - you don't have to tell Bazel what to do with these sources.&lt;/p&gt;

&lt;p&gt;The amount of work varies. Since your source code generally hints at the dependencies (like with &lt;code&gt;import&lt;/code&gt; statements), it's possible for tooling to automate 80% of the work, and such BUILD file generators exist for a small but growing number of languages. It's also up to you how detailed to be - you can make one coarse-grained target saying "all the Java files in this whole directory tree are one big library", or you could make fine-grained ones for each subdirectory, or something in the middle.&lt;/p&gt;
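&lt;p&gt;For illustration, here is a minimal sketch of one such coarse-grained target; this is a hypothetical config fragment (the target names, paths, and the &lt;code&gt;//common:utils&lt;/code&gt; dependency are all made up, not from a real project):&lt;/p&gt;

```python
# BUILD.bazel - hypothetical sketch of a coarse-grained target.
# Every Java file under this directory tree is one big library.
java_library(
    name = "server",
    srcs = glob(["src/main/java/**/*.java"]),
    # Other targets this code depends on:
    deps = ["//common:utils"],
    # Runtime dependencies, e.g. a data file read during a test:
    data = ["config/defaults.json"],
)
```

&lt;p&gt;Note there is no compiler invocation anywhere in this description: the &lt;code&gt;java_library&lt;/code&gt; rule decides what actions to run from it.&lt;/p&gt;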

&lt;p&gt;The more correct your dependency graph, the more guarantees Bazel provides. If your graph is missing some inputs, then Bazel can't know to invalidate caches when those inputs change. This is called non-hermeticity. If your tools produce different outputs for the same inputs (like including a timestamp, or emitting outputs in a non-stable order), then Bazel will be less incremental than it should be, since dependent targets will have to re-build. This is called non-determinism.&lt;/p&gt;
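&lt;p&gt;As a toy illustration of the non-determinism point (illustrative Python, not Bazel internals): caching is keyed on a digest of each action's inputs and outputs, so an output that embeds a timestamp can never give downstream actions a cache hit.&lt;/p&gt;

```python
import hashlib

def compile_step(source: str, timestamp=None) -> str:
    """A pretend build action; stamping a timestamp into the
    output makes it non-deterministic."""
    out = "compiled(" + source + ")"
    if timestamp is not None:
        out += " built_at=" + str(timestamp)
    return out

def digest(output: str) -> str:
    """Cache key for dependent actions, analogous to a content digest."""
    return hashlib.sha256(output.encode()).hexdigest()
```

&lt;p&gt;The deterministic variant digests identically across runs, so dependents can be cache hits; the stamped variant yields a fresh digest every build, forcing dependents to re-run.&lt;/p&gt;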

&lt;p&gt;As a side benefit of describing your dependencies, sometimes you'll also discover undesired dependencies, so you can fix those and/or add constraints to prevent bad dependencies from being introduced in your code.&lt;/p&gt;

&lt;h1&gt;
  
  
  Keeps your outputs up-to-date
&lt;/h1&gt;

&lt;p&gt;In exchange for your work in describing your dependencies, you get a fantastic property: fast, incremental, and correct outputs.&lt;/p&gt;

&lt;p&gt;Your outputs are a filesystem tree, usually in the &lt;code&gt;bazel-out&lt;/code&gt; folder. Bazel populates some subset of this tree depending what you ask for. If you ask for the default outputs of a Java library, Bazel places a &lt;code&gt;.jar&lt;/code&gt; file in the output tree. If you ask for a test to be run, Bazel places the exit code of that test runner in the output tree (representing the pass/fail status).&lt;/p&gt;

&lt;p&gt;Bazel does the minimum work required to update the output tree. In the trivial case, Bazel queries the dependency graph, determines that the inputs to a given step are the same as in a previous build, and does no work. This "cache hit" is the common case. If you don't have a cache hit locally on your machine, Bazel can fetch one from a remote cache, if you've configured one.&lt;/p&gt;

&lt;p&gt;If you change one file, then any nodes in the dependency graph that directly depend on it must be re-evaluated. That might mean a compiler is re-run. However, if the result is the same as a previous run, then there is no more work to be done. This avoids "cascading re-builds", where a whole spine of the tree would otherwise be re-evaluated.&lt;/p&gt;

&lt;p&gt;There are a lot of things you can do with an incrementally-updated output tree. For example, you can set up your CI to just run &lt;code&gt;bazel test //...&lt;/code&gt; (test everything) and then rely on Bazel incrementality and caching to be sure only the minimal build&amp;amp;test work happens for each change.&lt;/p&gt;

&lt;p&gt;There's a lot more to Bazel, but I find this description fits well in a two-minute attention span and conveys the basic value proposition. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>What is a build system and what is CI?</title>
      <dc:creator>Alex 🦅 Eagle</dc:creator>
      <pubDate>Tue, 01 Jun 2021 17:45:52 +0000</pubDate>
      <link>https://forem.com/jakeherringbone/what-is-a-build-system-and-what-is-ci-566d</link>
      <guid>https://forem.com/jakeherringbone/what-is-a-build-system-and-what-is-ci-566d</guid>
      <description>&lt;p&gt;For a long time, I thought I knew the answer to that question. A build system understands your software, how to build and test it. And a CI is a loop that runs the build system on a server.&lt;/p&gt;

&lt;p&gt;When I was the tech lead for Angular CLI, I asked a lot of our big corporate users "what build system do you currently use" and the most common response was "Jenkins". Of course with my preconception of what these terms mean, I thought they were just wrong.&lt;/p&gt;

&lt;p&gt;It turns out they were right, because they turned Jenkins into the build system. They probably started at a small scale, with a build system for the frontend code (let's say npm scripts) and a build system for the backend (let's say Maven), and at that time Jenkins would have run these independently. As things got complex and interconnected, they'd need integration tests, so, no surprise, the one common place these could be added was in some Groovy code or a plugin to Jenkins. (There should be a software aphorism: "any tool with sufficient adoption grows a plugin ecosystem, thereby rendering it redundant with the tools it should have complemented.")&lt;/p&gt;

&lt;p&gt;Now they've created a build system you can't run locally on your machine, only in the CI environment, and that kinda ruined CI for everyone. Engineers have to wait forever to go through a CI loop to get something green, because it's too hard to reproduce the failure locally and fix it there. It's sad, but understandable the way this evolved.&lt;/p&gt;

&lt;p&gt;There are build systems which are meant to generalize across the stack and are locally-reproducible (&lt;a href="http://bazel.build"&gt;Bazel&lt;/a&gt; of course) - but what I've learned is that to sell that solution, you have to frame it as replacing CI, not replacing the build system. After switching to Bazel, you actually have a "CI" you can run locally on your machine, plus a server that runs that same thing in a loop. And you try not to get too hung up on how the term "CI" lost all its meaning in the process.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What’s happening with TypeScript typings</title>
      <dc:creator>Alex 🦅 Eagle</dc:creator>
      <pubDate>Tue, 17 Nov 2020 17:34:55 +0000</pubDate>
      <link>https://forem.com/jakeherringbone/what-s-happening-with-typescript-typings-jan</link>
      <guid>https://forem.com/jakeherringbone/what-s-happening-with-typescript-typings-jan</guid>
      <description>&lt;p&gt;I work on the Angular 2 team, which is a fantastic chance to make some big improvements in developer productivity (or happiness, just as well). I’ve been in this field for 6 years now and I’ve started to see some patterns. One of them is, many developers start out their career with an aversion to changing or adapting their workflow.&lt;/p&gt;

&lt;p&gt;This is true for editors and IDEs, and developer tools in general. Beginners are a little lost among the options, and rather than increase that feeling of discomfort you already have about your lack of experience relative to your peers, you stick with something you know. It’s whatever editor you used in your CS classes, perhaps, which you started using because it was the one your teaching assistant showed you, or the one that was convenient to access on your college network. I’ve never met someone who started out by trying every editor for a week, then picking the one that was most ergonomic for them.&lt;/p&gt;

&lt;p&gt;Really, you should re-evaluate your toolset all the time. How can you make yourself more productive? There is such a wide range of techniques out there. Hack your brain. Meditation. Read a technical book. Get a l33t keyboard. And yes, maybe try another editor. Maybe that editor can do something to increase your productivity. I’ve seen developers gain more experience, and use their self-confidence to take the short-term hit of not knowing where any of the buttons and dials are anymore. Because they know that over the hump, there is possibly a big payoff over several years.&lt;/p&gt;

&lt;p&gt;I’ll get on topic, finally. I think the biggest productivity feature in your editor is its ability to understand the code you’re writing and help you get it correct the first time, and later to make safe changes so maintenance work stays the minority of your time. And editors can only understand code if you make the code machine-readable. That means not burying type information in documentation comments, or in test cases as you would in an untyped language. The editor needs you to tell it the types so that it can be a co-pilot.&lt;/p&gt;

&lt;p&gt;Was I about to get on-topic? TypeScript! A few of us on the Angular team focus almost entirely on using the language tools to power smart stuff. It turns out, when you build something directly into the compiler, you have the perfect environment to understand the code perfectly, and do something other than produce the executable output.&lt;/p&gt;

&lt;p&gt;TypeScript is only as smart as the types you assign (or that it can infer) in your code. When you use a library, things get a lot trickier. We need to discover the types in the APIs you’re using. In other languages that were typed from the beginning, like Java, the type information always accompanies the compiled code. But for TypeScript, which is just a superset of JavaScript, there is nowhere for the type information to go in the executable form of the code. JavaScript has no type syntax, and even something like JSDoc annotations doesn’t work in general because the code is so de-sugared (e.g. turning classes into complex IIFEs) that information about where the type lived is lost. We really need a foolproof way for the types of the library to be available whenever that library shows up to the TypeScript compiler, without making developers chase down the type information and re-attach it themselves. Sadly, this is not the case today! Let’s fix it!&lt;/p&gt;

&lt;p&gt;There are a few cases which have different prognoses.&lt;/p&gt;

&lt;p&gt;The easiest case is when the library is authored in TypeScript, as you’d expect. The compiler produces “header” files, ending with &lt;code&gt;.d.ts&lt;/code&gt;, which are included alongside the &lt;code&gt;.js&lt;/code&gt; executable code. Now in your program, you &lt;code&gt;import {} from 'library'&lt;/code&gt;. TypeScript understands a few ways to interpret where the 'library' may be found on disk; we even customize this in some things like our custom builder (included in angular-cli).&lt;/p&gt;

&lt;p&gt;If the library is not written in TypeScript, but the maintainers want to support TypeScript clients, then they could hand-write a .d.ts file and ship it along with the library, so the client can’t tell the difference between authoring languages. In practice, I have not seen this approach taken a single time. Including something in your distro means taking responsibility for its bugs, and it’s pretty hard to write automated tests to ensure that the TypeScript typings you ship match your sources. Maybe we can write some more tooling to support this.&lt;/p&gt;

&lt;p&gt;The vast majority case is that the library is not written in TypeScript. I hope we can improve this situation by providing library owners with a pull request that gives them the typings, the distribution semantics, and also a README.md to help them maintain the typings. Most importantly, we have to give them a means to automatically determine if the .d.ts content is still correct as they make changes to the library. For example, we could try to type-check all their examples using the .d.ts file.&lt;/p&gt;

&lt;p&gt;There will always be the case when the library maintainers don’t want to own the typings (or there are no maintainers to be found). For libraries which target nodejs, you can be sure they have some commonjs-format exported symbol, and typings can conveniently be attached to it. But a lot of libraries only have the side effect of sticking some symbol onto the window object when they are loaded. These can only be typed by sticking the typings into a global namespace as well, and just as global namespace pollution is bad at runtime (is $ the one from jQuery or Protractor?), it is bad at type-check time. These global typings are typically called “ambient”. Ambient typings work by declaring global variables, or “namespaces”, which is a TypeScript term for an object that just contains some properties. You can tell something is ambient if there is no ES6 import statement that causes the symbols to be visible in your source file.&lt;/p&gt;

&lt;p&gt;A perfect example is the type of Promise. This is an ES6 API, so when you are compiling to target ES5, the compiler rightly gives you a type-check error that the symbol doesn’t exist, because it won’t at runtime either. However, you might be using a browser that supports the Promise API in ES6, or you might be using a shim like corejs that implements it for you. Now you could tell the compiler to target ES6, but maybe there are other APIs that are not implemented in the target browser. Really your target is now ES5+es6-promise. To make the type-checker see this, you just add an ambient typing for es6-promise into the compilation unit (with a &lt;code&gt;/// &amp;lt;reference path="..."/&amp;gt;&lt;/code&gt; comment anywhere in your code, or, to avoid brittle relative paths, by handing the file to the compiler as an explicit input). How do you get this typing on your machine so you can hand it to the compiler? What’s the correct version? Well, the TypeScript team is already working on that. By splitting the stdlib file for ES6 (called &lt;code&gt;lib.es6.d.ts&lt;/code&gt;) into many small files, one per feature, you’ll be able to effectively target ES5+es6-promise with only the stuff shipped with the language. Note that this solution for ambient typings only works for standardized APIs (like es7-reflect-metadata) where you could choose any conforming implementation at runtime.&lt;/p&gt;

&lt;p&gt;Ambient typings for non-standard libraries are harder. The compiler won’t ship with types for all libraries in the world, so we’ll have to fetch them from somewhere. One design the team is considering is: can we have a parallel distribution mechanism for types, such as an npm scoped package? Then the registry where you resolve the package, as well as the version of the runtime, could be translated simply into a corresponding registry location for the compatible typings. And we can follow the dependency tree, so you have types installed for the transitive closure of dependencies. There’s a wrinkle here, which is that the library won’t release a new version when you make bugfixes to the typings, so you need a way to say “you have one revision of the typings for &lt;code&gt;library@1.2.3&lt;/code&gt;, but we now have a newer revision of the typings for that same library version”. So some npm changes would be needed, making this a big effort.&lt;/p&gt;

&lt;p&gt;I mentioned the problem of the global namespace of ambient typings, which is ripe for collision. The other kind of typings are called “external modules”, which are much better (confusingly, there are no longer “internal modules”; these became namespaces). You can tell something is an external module if there is an ES6 import statement that brings it into scope. This gives you a location to rename the symbols, so you can use the &lt;code&gt;util&lt;/code&gt; object provided by libraryA in the same file where you use the &lt;code&gt;util&lt;/code&gt; object provided by libraryB, using something like &lt;code&gt;import {util as utilB} from 'libraryB'&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="http://github.com/typings"&gt;http://github.com/typings&lt;/a&gt; project, @blakeembrey has done an interesting trick of fetching typings which were defined as Ambient, and making an external module out of them. This encapsulates the otherwise global pollution, and works as long as the library provides some export.&lt;/p&gt;

&lt;p&gt;Long term, @blakeembrey and the TypeScript team, as well as the Angular team, are all collaborating to find a mechanism for most users to have the type-checker “just work” for most libraries. It’s a tough problem but a lot of fun to be involved to help solve it.&lt;/p&gt;

</description>
      <category>angular</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Angular ❤️ Bazel leaving Angular Labs</title>
      <dc:creator>Alex 🦅 Eagle</dc:creator>
      <pubDate>Fri, 29 May 2020 03:00:48 +0000</pubDate>
      <link>https://forem.com/bazel/angular-bazel-leaving-angular-labs-51ja</link>
      <guid>https://forem.com/bazel/angular-bazel-leaving-angular-labs-51ja</guid>
      <description>&lt;p&gt;With Angular 9.0 released, including the new Ivy compiler and runtime, it's a good time to ask "what's next for Angular?". You might even ask "is Bazel coming next?". The short answer is: we are spinning off the Bazel effort to be independent of Angular, and to work for ALL frontend frameworks or Node.js backends. However, Bazel will never be the default build tool in Angular CLI, and we expect that most applications will not switch.&lt;/p&gt;

&lt;h1&gt;
  
  
  What we've learned
&lt;/h1&gt;

&lt;p&gt;We've been working on Angular with Bazel for a few years. As a quick refresher, Bazel is Google's build tool which is incremental - a small change results in a small re-build/test. It also lets your build steps use a shared cache and execute remotely in parallel on a farm of machines. It's the key to Google's ability to write large applications with thousands of engineers in a massive monorepo. For Angular to be usable internally at Google, the team must maintain Angular+Bazel for Google engineers.&lt;/p&gt;

&lt;p&gt;Bazel has been available in Angular Labs as an opt-in preview for over a year, which gave us a chance to put some miles on it and learn from users. We do have several companies relying on this toolchain, and I've heard from a couple of them who plan to write up a case study about the benefits they've gotten.&lt;/p&gt;

&lt;p&gt;One thing we've learned is that most Angular applications don't have the problem Bazel solves. For these applications, we don't want to introduce another complex bit of build system machinery - no matter how well we encapsulate it in the Angular CLI, it's a leaky abstraction and you'll probably encounter Bazel as an end user. For this reason, we don't intend to ever make it the default for Angular CLI users.&lt;/p&gt;

&lt;p&gt;Another thing we've learned is that Bazel migration should happen in tiny steps. Any breaking change is a major obstacle for enterprise apps. Bazel can run any toolchain: while Bazel is responsible for calculating which build steps need to re-run, it doesn't care what those steps do. This means we have the option of migrating to Bazel while keeping the toolchain the same. For Angular developers, this means that every application which works with the CLI should work with Bazel.&lt;/p&gt;

&lt;p&gt;We have tried a few approaches for that migration. First, in Angular 4 we introduced support for Google's Closure Compiler. This produces the smallest bundles, but it's an expert tool that requires a lot of work to adopt. Then we introduced a hybrid toolchain, using Google's approach for compiling TypeScript, Angular, Sass, and so on, but with Rollup as the bundler. This is a lot more usable, but still not always a drop-in replacement; migrating to Google's tooling still has some cost.&lt;/p&gt;

&lt;h1&gt;
  
  
  Generalizing Bazel
&lt;/h1&gt;

&lt;p&gt;So essentially, we had hoped to export the Google-internal toolchain, but it has some incompatibilities and even the smallest incompatibility is unacceptable. So late last year, we released a 1.0 stable version of Bazel's JavaScript support (rules_nodejs) with a novel feature: run &lt;em&gt;any&lt;/em&gt; JS ecosystem tool under Bazel without any custom plugin code (Bazel calls these "rules").&lt;/p&gt;

&lt;p&gt;I wrote about this in &lt;a href="https://dev.to/bazel/layering-in-bazel-for-web-389h"&gt;Layering in Bazel for Web&lt;/a&gt;. The TL;DR of that article: if you install some JS tooling of your choice, say&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;mocha domino @babel/core @babel/cli @babel/preset-env http-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;you can now configure Bazel to use that toolchain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"@npm//@babel/cli:index.bzl"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"babel"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"@npm//mocha:index.bzl"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"mocha_test"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"@npm//http-server:index.bzl"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"http_server"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;babel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"compile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;outs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"app.es5.js"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;http_server&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"server"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="s"&gt;"index.html"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"app.es5.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;mocha_test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"unit_tests"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"*.spec.js"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What does this mean for Angular developers? Well, since Bazel now runs any JS ecosystem tooling, it should be able to run exactly the tooling you're using today. To explain how we do that, we need to pick apart the Angular CLI a little.&lt;/p&gt;

&lt;p&gt;One simple model of Angular CLI is:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ng&lt;/code&gt; command -&amp;gt; Builder -&amp;gt; webpack&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;ng&lt;/code&gt; command reads your &lt;code&gt;angular.json&lt;/code&gt; file to find which Builder should be used. The Builder layer is internally called "Architect", so look in your &lt;code&gt;angular.json&lt;/code&gt; for a key "architect", and you'll see mappings for what builder to use. For example, say you run &lt;code&gt;ng build&lt;/code&gt;; the default builder is &lt;code&gt;@angular-devkit/build-angular:browser&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Read more about Angular CLI builders in the &lt;a href="https://angular.io/guide/cli-builder"&gt;docs&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is actually a standalone program you could run outside of Angular CLI. The &lt;code&gt;@angular-devkit/architect-cli&lt;/code&gt; package provides a command-line tool called architect. So instead of &lt;code&gt;ng build&lt;/code&gt;, it's totally equivalent to peel off one layer of abstraction and run &lt;code&gt;npx architect frontend:build&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now we can put the parts together. If Bazel runs arbitrary JS tooling, and we know how to run individual steps of your current Angular build using Architect, then we can have Bazel run the &lt;code&gt;architect&lt;/code&gt; CLI to exactly reproduce the build you're doing today. We have an &lt;a href="https://github.com/bazelbuild/rules_nodejs/blob/master/examples/angular_bazel_architect/"&gt;example app demonstrating this&lt;/a&gt; - if you look at the &lt;code&gt;BUILD.bazel&lt;/code&gt; file in the example you see that we just call the architect command when Bazel wants to build or test the Angular app.&lt;/p&gt;

&lt;h1&gt;
  
  
  What does this mean for me?
&lt;/h1&gt;

&lt;p&gt;First of all, if your team is satisfied with Angular CLI (or with Nx) then there is nothing for you to do. Bazel doesn't affect you and won't in the future.&lt;/p&gt;

&lt;p&gt;What if you do have a scaling problem with today's tooling? This is software engineering, so there are trade-offs. By making this build system 100% compatible with all existing Angular applications, we have lost some of the incrementality guarantees of Bazel. If we just run architect, the most granular our build can be is to have a bunch of Angular libraries, and an app that consumes them. Then only the affected libraries need to be re-built after a change. This is very similar to what Nx does.&lt;/p&gt;

&lt;p&gt;We think this now gives the best possible on-ramp: first use Bazel to orchestrate your existing build steps, then customize the build graph to improve incrementality, starting with the slowest, most frequently-executed steps.&lt;/p&gt;

&lt;p&gt;There's another interesting consequence of this approach. Angular is not special: any frontend or Node.js backend code can be built by Bazel today without any work required from the team. For this reason, our plan is to migrate the Bazel-specific APIs (the &lt;code&gt;@angular/bazel&lt;/code&gt; package) out of Angular itself, and allow the Bazel effort to proceed totally decoupled from the Angular team's goals. This gives the Bazel effort more autonomy, and means it immediately applies to React, Vue, Next.js, or any other framework/technology that provides a CLI.&lt;/p&gt;

&lt;p&gt;As for who supports what: I'm now working on rules_nodejs but no longer on the Angular team, so our layering is quite clear. Angular team supports the CLI builders, so any bugs you observe from using them can be reported to Angular. The orchestration of these builders is owned by rules_nodejs and we'll do our best to support you. Note that the latter is an all-volunteer OSS project.&lt;/p&gt;

&lt;p&gt;Here's a short summary of changes happening now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Angular is deprecating the &lt;code&gt;@angular/bazel&lt;/code&gt; package for v10, see &lt;a href="https://github.com/angular/angular/pull/37190"&gt;the Pull Request&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;The Angular CLI builder is now in the &lt;a href="https://www.npmjs.com/package/@bazel/angular"&gt;&lt;code&gt;@bazel/angular&lt;/code&gt; package&lt;/a&gt; which is published from rules_nodejs&lt;/li&gt;
&lt;li&gt;There is no automatic Bazel configuration for now. We expect users will opt-in to using Bazel, so you'll have to configure it with WORKSPACE/BUILD files. There are a number of community-contributed tools for maintaining the config, such as &lt;a href="https://github.com/Evertz/bzlgen"&gt;Evertz/bzlgen&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You no longer need the &lt;code&gt;ng_module&lt;/code&gt; Bazel rule which was in &lt;code&gt;@angular/bazel&lt;/code&gt;. The migration path is to use &lt;code&gt;ts_library&lt;/code&gt; with an Angular plugin. See &lt;a href="https://github.com/bazelbuild/rules_nodejs/tree/master/examples/angular"&gt;the canonical Angular example&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We'll keep updating &lt;a href="https://github.com/bazelbuild/rules_nodejs/blob/master/docs/examples.md#angular"&gt;the docs&lt;/a&gt; and you can follow along with this effort in the #angular channel on &lt;a href="https://slack.bazel.build"&gt;https://slack.bazel.build&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I'm super excited to continue rolling out Bazel's unique capabilities to the frontend developer community! Thank you so much to all the contributors and users who have shaped this solution.&lt;/p&gt;

</description>
      <category>angular</category>
    </item>
    <item>
      <title>Things a program must not do under Bazel</title>
      <dc:creator>Alex 🦅 Eagle</dc:creator>
      <pubDate>Thu, 24 Oct 2019 20:27:23 +0000</pubDate>
      <link>https://forem.com/bazel/things-a-program-must-not-do-under-bazel-38g9</link>
      <guid>https://forem.com/bazel/things-a-program-must-not-do-under-bazel-38g9</guid>
      <description>&lt;p&gt;Don't expect the environment to have implicit things like tools in the PATH. Bazel builds are meant to be portable so you can invoke them remotely or share cache hits with your coworkers. Be explicit about getting tools you need as inputs, or using Bazel's toolchain feature to locate them on the disk.&lt;/p&gt;

&lt;p&gt;Don't rely on the working directory to be next to the inputs. Bazel always sets the working directory in the root of the workspace, next to the WORKSPACE file. (Of course you can always chdir inside the process after Bazel starts it. In NodeJS, for example, you can do this with a &lt;a href="https://github.com/bazelbuild/rules_nodejs/issues/1840#issuecomment-619277667"&gt;&lt;code&gt;--require&lt;/code&gt; script&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;Don't try to write to the sources. The input directory will be in a read-only filesystem by default. For example Angular CLI runs the &lt;code&gt;ngcc&lt;/code&gt; program which tries to edit &lt;code&gt;package.json&lt;/code&gt; files in the inputs:&lt;br&gt;
&lt;a href="https://github.com/angular/angular/pull/33366"&gt;https://github.com/angular/angular/pull/33366&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Don't try to write in a subdirectory of the sources either. Only write to the output directory.&lt;/p&gt;

&lt;p&gt;Don't resolve symlinks of the inputs. Node programs often try to read symlinks because of a common pattern of linking build outputs of one package to be dependencies of another. Bazel creates an execroot (for build actions) and a runfiles root (for test/run) filled with symlinks to sources or other outputs. Resolving the symlinks causes non-hermeticity by finding undeclared input files, or causes logic bugs in the program if it compares a resolved path against the symlink path and expects them to be the same.&lt;/p&gt;

&lt;p&gt;Don't rely on stdin/stdout to communicate inputs and outputs. Bazel will sometimes use stdin for a protocol that communicates with the program, like worker mode or ibazel watch mode. It's possible to work around this with a genrule but that introduces a bash dependency.&lt;/p&gt;
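
&lt;p&gt;The genrule workaround mentioned above looks roughly like this (a hedged sketch with hypothetical labels); it keeps the pipe inside the bash command, which is exactly the bash dependency it introduces:&lt;/p&gt;

```python
# Hedged sketch: wrap a stdin/stdout-style tool so the redirection
# happens in bash, not in Bazel's action protocol.
genrule(
    name = "transform",
    srcs = ["input.txt"],
    outs = ["output.txt"],
    tools = ["//tools:filter"],  # hypothetical stdin/stdout tool
    cmd = "$(location //tools:filter) &lt; $(SRCS) &gt; $@",
)
```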

&lt;p&gt;Don't rely on accepting all the configuration via command-line arguments. You will likely run into argv length limits. Consider accepting a configuration file, or a params file with the CLI flags written into it.&lt;/p&gt;
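
&lt;p&gt;For example (a hedged sketch; the &lt;code&gt;--params&lt;/code&gt; flag is hypothetical), the build can write the long argument list into a file and then pass the tool a single short argument pointing at it:&lt;/p&gt;

```python
# Hedged sketch: generate a params file listing one source path per line,
# so the tool invocation stays under argv length limits.
genrule(
    name = "tool_params",
    srcs = glob(["src/*.js"]),
    outs = ["params.txt"],
    cmd = "for s in $(SRCS); do echo $$s; done &gt; $@",
)

# The tool would then be invoked with something like
# args = ["--params", "$(location params.txt)"], if it supports such a flag.
```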

&lt;p&gt;My earlier post about getting Nativescript tools to run under Bazel: &lt;a href="https://medium.com/@Jakeherringbone/running-tools-under-bazel-8aa416e7090c"&gt;https://medium.com/@Jakeherringbone/running-tools-under-bazel-8aa416e7090c&lt;/a&gt;&lt;/p&gt;

</description>
      <category>build</category>
    </item>
    <item>
      <title>Layering in Bazel for Web</title>
      <dc:creator>Alex 🦅 Eagle</dc:creator>
      <pubDate>Thu, 10 Oct 2019 13:59:58 +0000</pubDate>
      <link>https://forem.com/bazel/layering-in-bazel-for-web-389h</link>
      <guid>https://forem.com/bazel/layering-in-bazel-for-web-389h</guid>
      <description>&lt;h1&gt;
  
  
  Bazel is fast, general-purpose, and stable 1.0
&lt;/h1&gt;

&lt;p&gt;Bazel is a build tool that gives you a typical 10x improvement in your build and test times, using a deterministic dependency graph, a distributed cache of prior intermediate build results, and parallel execution over cloud workers.&lt;/p&gt;

&lt;p&gt;Bazel has &lt;a href="https://blog.bazel.build/2019/10/10/bazel-1.0.html"&gt;just released 1.0&lt;/a&gt; which is a huge deal. Congratulations to the Bazel team on this culmination of &lt;a href="https://blog.bazel.build/2015/03/27/Hello-World.html"&gt;over four years of hard work&lt;/a&gt;! Large companies cannot afford risks on beta software and need a guarantee of stability. Now that we have 1.0, Bazel is ready to use! But Bazel by itself is generally not sufficient.&lt;/p&gt;

&lt;p&gt;Bazel is essentially an execution engine: the core of a build system, but one that doesn't know how to build any particular language. You could use Bazel alone, in theory, using its low-level primitives. In practice, though, you rely on a plugin to translate your higher-level build configuration into Actions, which are subprocesses Bazel spawns in a variety of ways, such as on remote workers. So Bazel is a layer beneath a variety of language-ecosystem-specific plugins.&lt;/p&gt;

&lt;p&gt;These plugins are called rules, and they span a wide range of maturities. Some, like those for Java and C++, are distributed along with the Bazel core and are mature. Others, like those for .NET and Ruby, are community-contributed and not yet at 1.0 quality.&lt;/p&gt;

&lt;h1&gt;
  
  
  JavaScripters' delight: Bazel can run almost any tool on npm
&lt;/h1&gt;

&lt;p&gt;The plugin I work on, called rules_nodejs, is for JavaScript/Node.js and is also getting close to 1.0.&lt;/p&gt;

&lt;p&gt;Bazel just orchestrates the execution of the programs you tell it to run. &lt;em&gt;Any&lt;/em&gt; npm package that publishes a binary just works, without someone needing to write any Bazel plugin specific to that package. rules_nodejs has a "core" distribution that is just the stuff needed to teach Bazel how to run yarn/npm and consume the resulting packages. I'll get into more specifics about how we do it later, but let me first show you how it looks. You can skip to the final solution at &lt;a href="https://github.com/alexeagle/my_bazel_project"&gt;https://github.com/alexeagle/my_bazel_project&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First you need Bazel with some accompanying config files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;npx @bazel/create my_bazel_project
&lt;span class="c"&gt;# If you used Bazel before, make sure you got @bazel/create version 0.38.2 or later so later steps will work&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;my_bazel_project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;gives you a new workspace to play in.&lt;/p&gt;

&lt;p&gt;Now let's install some tools. For this example, we'll use Babel to transpile our JS, Mocha to run tests, and http-server to serve our app. This is an arbitrary choice; you probably already have tools you prefer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;mocha domino @babel/core @babel/cli @babel/preset-env http-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's run these tools with Bazel. First we need to import them, using a &lt;code&gt;load&lt;/code&gt; statement.&lt;/p&gt;

&lt;p&gt;So edit &lt;code&gt;BUILD.bazel&lt;/code&gt; and add&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"@npm//@babel/cli:index.bzl"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"babel"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"@npm//mocha:index.bzl"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"mocha_test"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"@npm//http-server:index.bzl"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"http_server"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows us that rules_nodejs has told Bazel that a workspace named &lt;code&gt;@npm&lt;/code&gt; is available (think of the at-sign like a scoped package for Bazel). rules_nodejs will add &lt;code&gt;index.bzl&lt;/code&gt; files exposing all the binaries the package manager installed (same as the content of the &lt;code&gt;node_modules/.bin&lt;/code&gt; folder). The three tools we installed are in this &lt;code&gt;@npm&lt;/code&gt; scope and each has an index file with a &lt;code&gt;.bzl&lt;/code&gt; extension.&lt;/p&gt;

&lt;p&gt;Loading from these index files is just like importing symbols into your JavaScript file. Having made our &lt;code&gt;load()&lt;/code&gt;s we can now use them. Each of the symbols is a function that we call with some named parameter arguments.&lt;/p&gt;

&lt;p&gt;Now we write some JavaScript and some tests. To save time I won't go into that for this article.&lt;/p&gt;

&lt;p&gt;Okay, how will we build it? We need to think in terms of a graph of inputs, tools, and outputs, in order to express to Bazel what it needs to do to build a requested output, and how to cache the intermediate results.&lt;/p&gt;

&lt;p&gt;Add this to &lt;code&gt;BUILD.bazel&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;babel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"compile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="s"&gt;"app.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"es5.babelrc"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"@npm//@babel/preset-env"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;outs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"app.es5.js"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="s"&gt;"app.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"--config-file"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"$(location es5.babelrc)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"--out-file"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"$(location app.es5.js)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This just calls the Babel CLI, so you can see their documentation for what arguments to pass. We use the &lt;code&gt;$(location)&lt;/code&gt; helper in Bazel so we don't need to hardcode paths to the inputs or outputs.&lt;/p&gt;

&lt;p&gt;We can try it already: &lt;code&gt;npm run build&lt;/code&gt;&lt;br&gt;
and we see the .js outputs from babel appear in the &lt;code&gt;dist/bin&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;Let's serve the app to see how it looks, by adding to BUILD.bazel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;http_server&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"server"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="s"&gt;"index.html"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"app.es5.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"."&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And add to the &lt;code&gt;scripts&lt;/code&gt; in &lt;code&gt;package.json&lt;/code&gt;: &lt;code&gt;"serve": "ibazel run :server"&lt;/code&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ibazel is the watch mode for bazel.&lt;br&gt;
Note that on Windows, you need to pass the &lt;code&gt;--enable_runfiles&lt;/code&gt; flag to Bazel. That's because Bazel creates a directory where inputs and outputs appear together, for convenience.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now we can serve the app: &lt;code&gt;npm run serve&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Finally, let's run mocha:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;mocha_test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"unit_tests"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"*.spec.js"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;glob&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="s"&gt;"*.spec.js"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="s"&gt;"@npm//domino"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"app.es5.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note that we installed the domino package here so we could test the webapp including DOM interactions in node, which is faster than starting up a headless browser.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Run it:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ npm test&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Without Bazel knowing anything about babel, http-server, or mocha, we just assembled a working, incremental, remote-executable toolchain for building our little app.&lt;/p&gt;

&lt;h2&gt;
  
  
  More examples
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Bazel running a React app written in TypeScript/Sass with Webpack: &lt;a href="https://github.com/bazelbuild/rules_nodejs/pull/1255"&gt;https://github.com/bazelbuild/rules_nodejs/pull/1255&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Bazel running more mocha tests: &lt;a href="https://github.com/bazelbuild/rules_nodejs/blob/0.38.2/examples/webapp/BUILD.bazel#L34-L51"&gt;https://github.com/bazelbuild/rules_nodejs/blob/0.38.2/examples/webapp/BUILD.bazel#L34-L51&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Bazel running TypeScript &lt;code&gt;tsc&lt;/code&gt;: &lt;a href="https://github.com/bazelbuild/rules_nodejs/blob/0.38.2/examples/app/BUILD.bazel#L51-L74"&gt;https://github.com/bazelbuild/rules_nodejs/blob/0.38.2/examples/app/BUILD.bazel#L51-L74&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Bazel using Babel, then packing the resulting application into a Docker image: &lt;a href="https://github.com/bazelbuild/rules_nodejs/blob/0.38.2/examples/angular/src/BUILD.bazel#L131-L204"&gt;https://github.com/bazelbuild/rules_nodejs/blob/0.38.2/examples/angular/src/BUILD.bazel#L131-L204&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Bazel runs Less and Stylus css preprocessors: &lt;a href="https://github.com/bazelbuild/rules_nodejs/blob/0.38.2/examples/app/styles/BUILD.bazel"&gt;https://github.com/bazelbuild/rules_nodejs/blob/0.38.2/examples/app/styles/BUILD.bazel&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Bazel using Google Closure Compiler for smallest bundle sizes: &lt;a href="https://github.com/bazelbuild/rules_nodejs/blob/0.38.2/examples/closure/BUILD.bazel#L4-L15"&gt;https://github.com/bazelbuild/rules_nodejs/blob/0.38.2/examples/closure/BUILD.bazel#L4-L15&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Bazel running Nuxt.js build for server-side rendered Vue: &lt;a href="https://github.com/albttx/bazel-nuxt"&gt;https://github.com/albttx/bazel-nuxt&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Going further: custom rules and macros
&lt;/h1&gt;

&lt;p&gt;It's great that Bazel can run arbitrary npm tools, but this required knowing the CLI arguments for each tool. It also wasn't very ergonomic (we had to use syntax like &lt;code&gt;$(location)&lt;/code&gt; to adapt to Bazel's paths), and we didn't take advantage of many Bazel features, like workers (keeping tools running in --watch mode), providers (letting rules produce different outputs depending on what's requested), and more.&lt;/p&gt;

&lt;p&gt;It also required too much learning and evaluating. Toolchain experts like the engineers on the Angular CLI team spend half their time understanding the capabilities and tradeoffs of the many available tools and choosing something good for you.&lt;/p&gt;

&lt;p&gt;As end-users, we would get tired of assembling all our Bazel configuration out of individual tools. Our current experience is generally at a much higher level: we expect the framework we use to provide a complete out-of-the-box build/serve/test toolchain. Bazel is perfect for toolchain experts to provide this developer experience.&lt;/p&gt;

&lt;p&gt;For example, Angular CLI has a "differential loading" feature where modern browsers can get smaller, modern JS without polyfills, in a way that doesn't break old browsers. This requires quite some tricks with the underlying tools.&lt;/p&gt;

&lt;p&gt;Angular CLI can make a differential loading toolchain using Bazel to compose the rules we saw above. Bazel has a high-level composition feature called macros. We can use a macro to simply wire together a series of tool CLI calls, and make that available to users. Let's say we want to let users call it this way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"@npm//http-server:index.bzl"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"http_server"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"@cool-rules//:differential_loading.bzl"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"differential_loading"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;differential_loading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"app"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;srcs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;glob&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="s"&gt;"*.ts"&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt;
    &lt;span class="n"&gt;entry_point&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"index.ts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;http_server&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"server"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;":app"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;templated_args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"app"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To support this, we need a macro called &lt;code&gt;differential_loading&lt;/code&gt; that takes a set of TypeScript sources and an entry point for the app, and produces a directory that's ready to serve to both old and modern browsers.&lt;/p&gt;

&lt;p&gt;Here's what a toolchain vendor would write to implement differential loading:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;differential_loading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;entry_point&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;srcs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="s"&gt;"Common workflow to serve TypeScript to modern browsers"&lt;/span&gt;

    &lt;span class="n"&gt;ts_library&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"_lib"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;srcs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;srcs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;rollup_bundle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"_chunks"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;deps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"_lib"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;sourcemap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"inline"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;entry_points&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;entry_point&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"index"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="n"&gt;output_dir&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# For older browsers, we'll transform the output chunks to es5 + systemjs loader
&lt;/span&gt;    &lt;span class="n"&gt;babel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"_chunks_es5"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"_chunks"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s"&gt;"es5.babelrc"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s"&gt;"@npm//@babel/preset-env"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;output_dir&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="s"&gt;"$(location %s_chunks)"&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s"&gt;"--config-file"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s"&gt;"$(location es5.babelrc)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s"&gt;"--out-dir"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s"&gt;"$@"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Run terser against both modern and legacy browser chunks
&lt;/span&gt;    &lt;span class="n"&gt;terser_minified&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"_chunks_es5.min"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;src&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"_chunks_es5"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;terser_minified&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"_chunks.min"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;src&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"_chunks"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;web_package&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;assets&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="s"&gt;"styles.css"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="s"&gt;"favicon.png"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"_chunks.min"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"_chunks_es5.min"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;index_html&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"index.html"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This looks long, but it's much simpler than what we've built for the current Angular CLI default, which uses pure Webpack. That's because the composition model here has a clear separation of concerns between the different tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary: the layering
&lt;/h2&gt;

&lt;p&gt;Each layer can work on its own, but users prefer the higher level abstractions.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Raw tool, like &lt;code&gt;babel&lt;/code&gt;: you can call this yourself to transpile JavaScript. These are written by lots of open-source contributors.&lt;/li&gt;
&lt;li&gt;Bazel: use low-level primitives to call &lt;code&gt;babel&lt;/code&gt; from a genrule, but tied to your machine. The Bazel team at Google supports this.&lt;/li&gt;
&lt;li&gt;Bazel + rules_nodejs: use the binary provided to load and run &lt;code&gt;babel&lt;/code&gt;. Written by the JavaScript build/serve team at Google.&lt;/li&gt;
&lt;li&gt;Bazel + custom rules/macros: use a higher level API to run a build without knowing the details. &lt;/li&gt;
&lt;/ol&gt;
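
&lt;p&gt;To make layer 2 concrete, here is a hedged sketch (the labels are hypothetical) of calling &lt;code&gt;babel&lt;/code&gt; from a raw genrule; it works, but the command line is hand-built and depends on bash:&lt;/p&gt;

```python
# Hedged sketch of layer 2: a raw genrule invoking babel directly.
genrule(
    name = "compile_lowlevel",
    srcs = ["app.js", "es5.babelrc"],
    outs = ["app.es5.js"],
    tools = ["@npm//@babel/cli/bin:babel"],  # hypothetical bin target
    cmd = "$(location @npm//@babel/cli/bin:babel) $(location app.js) " +
          "--config-file $(location es5.babelrc) --out-file $@",
)
```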

&lt;p&gt;I expect that more tooling vendors, such as CLI teams on various frameworks, will provide a high-level experience that uses Bazel under the covers. This lets them easily assemble toolchains from existing tools, using their standard CLI, and you get incremental, cacheable, and remote-parallelizable builds automatically.&lt;/p&gt;

</description>
      <category>bazel</category>
    </item>
  </channel>
</rss>
