<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Adam Berger</title>
    <description>The latest articles on Forem by Adam Berger (@abrgr).</description>
    <link>https://forem.com/abrgr</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F940687%2F640e5cd0-8da0-4976-b394-18c28e8d0a97.jpg</url>
      <title>Forem: Adam Berger</title>
      <link>https://forem.com/abrgr</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/abrgr"/>
    <language>en</language>
    <item>
      <title>Your backend should probably be a state machine</title>
      <dc:creator>Adam Berger</dc:creator>
      <pubDate>Thu, 21 Sep 2023 14:06:05 +0000</pubDate>
      <link>https://forem.com/abrgr/your-backend-should-probably-be-a-state-machine-1e5o</link>
      <guid>https://forem.com/abrgr/your-backend-should-probably-be-a-state-machine-1e5o</guid>
      <description>&lt;p&gt;Whether you intended to or not, you’re probably building a state machine right now.&lt;/p&gt;

&lt;p&gt;That's because any time you have a set of steps with some ordering between them, your system can be represented and built simply and visually as a state machine.&lt;/p&gt;

&lt;p&gt;On the frontend, it’s a bit easier to squint and see the states and events you’re modeling. After all, you actually talk about transitions and "paths" the user can take through your app. The mapping from the familiar world of screens and popups and nested components to the language of hierarchical states and transitions is fairly straightforward. So, thankfully, we’ve seen more and more (though not yet enough!) adoption of state machines for modeling frontend flows.&lt;/p&gt;

&lt;p&gt;On the backend, however, while it’s just as true that many of the systems we build are implicitly state machines, I’ve yet to see many teams explicitly model them that way.&lt;/p&gt;

&lt;p&gt;I get it. Backend concerns seem quite different. Whiteboards in the conference rooms of backend-focused teams are covered in boxes and arrows depicting information flows and architectural dependencies rather than states and transitions.&lt;/p&gt;

&lt;p&gt;So many of us backend engineers are so consumed with the mind-boggling concurrency of our systems that we may even scoff at the idea of a system being &lt;em&gt;in&lt;/em&gt; a "state." If the frontend seems deterministically Newtonian, the backend seems stubbornly relativistic or, on its worst days, quantum.&lt;/p&gt;

&lt;p&gt;But our users most certainly expect that each logical grouping of their data is self-consistent. While we’re thinking about data en masse, our users really care about data in the small—&lt;em&gt;this&lt;/em&gt; document or &lt;em&gt;that&lt;/em&gt; ad campaign.&lt;/p&gt;

&lt;p&gt;We’re talking about queues, eventual consistency, and reliable timers. All our users care about is having our business logic applied to their data consistently.&lt;/p&gt;

&lt;p&gt;There is a better way. And, as is so often the case, it requires a change of perspective, a jump in the level of abstraction at which we’re working.&lt;/p&gt;

&lt;p&gt;What we need on the backend is a focus on logic over infrastructure, an investment in dealing with the essential complexity of our business use cases rather than re-addressing the purely accidental complexity of our architecture with every new project.&lt;/p&gt;

&lt;p&gt;The mechanism we need to accomplish that is none other than the lowly state machine.&lt;/p&gt;

&lt;h2&gt;The five-sentence state machine intro&lt;/h2&gt;

&lt;p&gt;A state machine&lt;sup id="fnref1"&gt;1&lt;/sup&gt; definition consists of states and transitions between them. Transitions happen in response to events and may have conditions that determine whether they’re active or not. Hierarchy allows for parallel (simultaneous) states. Each instance of a state machine is in one set of states at a time (a set rather than a single state to accommodate parallel states) and owns some arbitrary data that it operates on. States can define effects that run when the state is entered or exited and transitions can define effects that run when the transition is taken; those effects can atomically update the instance’s data or interact with external systems.&lt;/p&gt;

&lt;p&gt;This structure is easy to visualize:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pas0zq1u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://docs.statebacked.dev/assets/images/state-machine-intro-ef530d0ece54df205b39ed15d48f1d6b.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pas0zq1u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://docs.statebacked.dev/assets/images/state-machine-intro-ef530d0ece54df205b39ed15d48f1d6b.svg" alt="State machine example" width="240" height="491"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The backend state machine value proposition&lt;/h2&gt;

&lt;p&gt;We’ll talk about exactly how state machines help us solve the major classes of problems we face in backend development but, first, let’s look at the high-level value proposition.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;State machines are a mechanism for carefully &lt;strong&gt;constraining&lt;/strong&gt; the updates to our &lt;strong&gt;critical data&lt;/strong&gt; and the execution of &lt;strong&gt;effects&lt;/strong&gt; in a way that allows us to &lt;strong&gt;express&lt;/strong&gt; solutions to many classes of problems we encounter and to effectively &lt;strong&gt;reason&lt;/strong&gt; about those solutions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's break that down.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Constraints&lt;/strong&gt; are actually good. Like, really good. We’re all trying to build systems that perform tasks that people care about and operate in ways that we can understand, not least because we’d really like to fix them when they misbehave. Unconstrained code leaves no bulwark between our too-burdened brains and the chaos of executing billions of arbitrary operations per core every second. We all &lt;a href="https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.pdf"&gt;consider GOTOs harmful&lt;/a&gt; because Dijkstra convinced us that we should aim to "make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible."&lt;/p&gt;

&lt;p&gt;There are few better ways to simplify the correspondence between what your program looks like and what it does than by constraining your program’s high-level structure to a state machine. With that reasonable constraint in place, it suddenly becomes trivial to understand, simulate, and predict what the systems we build will actually do.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Protect your data and orchestrate your effects&lt;/strong&gt;. Just as the infrastructure of our system only exists to support our business logic, our business logic only exists to act on our data and the external world. Data updates are forever and the changes we effect in the world or external systems can have serious repercussions.&lt;/p&gt;

&lt;p&gt;As we saw above, with state machines, data updates and effects are only executed at specific points, with clean error-handling hooks and easy simulation. When you know exactly where and under which conditions these critical actions will happen, your entire system becomes intelligible, invariants become comprehensible, and your data becomes trustworthy.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Reasoning about your system&lt;/strong&gt; is not optional. There’s the old adage: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." Kernighan said that in the era of standalone programs. How quaint those times seem now. Once you connect two programs together, the emergent effects of your &lt;em&gt;system&lt;/em&gt;—unexpected feedback loops, runaway retries, corrupted data—create a mess many orders of magnitude more “clever” than any one component.&lt;/p&gt;

&lt;p&gt;If we’re going to have any hope of understanding the systems we build—and we’d better, if we want them to do useful things for people—then we have no option but to constrain ourselves to simple parts. Because they are so simple, state machines are just the right high-level structure for the components of a system you hope to be able to understand.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;That leaves expressiveness&lt;/strong&gt;. Expressiveness is the point at which I hear the groans from some of the folks in the back. We've all been burned by the promise of a configuration-driven panacea before. What happens when your problem demands you step beyond the paved road that the platform envisioned? Hence the rise of the "everything as code" movement that's now ascendant. It makes sense. You simply can't forgo expressivity because expressivity determines your ability to solve the problems you're faced with. It's non-negotiable.&lt;/p&gt;

&lt;p&gt;But &lt;em&gt;expressivity&lt;/em&gt; is the key, not arbitrary &lt;em&gt;code&lt;/em&gt; executing in arbitrary ways. State machines are expressive enough to model processes in any domain, naturally. They simply provide the high-level structure within which your code executes. This constraint ensures you can naturally express your logic while preserving your ability to model the system in your head. Even non-engineers can typically understand a system's logic by looking at its state machines.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, let's look at the two primary types of backend systems and examine how state machines might form a helpful core abstraction for each. First, we'll examine reactive systems and then proactive systems (aka, workflows).&lt;/p&gt;

&lt;h2&gt;Reactive systems&lt;/h2&gt;

&lt;p&gt;Most of our APIs fall into this camp. Get a request, retrieve or update some data, return a response; they lie dormant until some external event spurs them to act.&lt;/p&gt;

&lt;p&gt;Whether we write these as microservices, macroservices, miniliths, or monoliths, we have a bunch of seemingly-decoupled functions responding to not-obviously-connected requests by updating some very-much-shared state.&lt;/p&gt;

&lt;h3&gt;A reactive system example&lt;/h3&gt;

&lt;p&gt;Let's look at an example to understand how state machines can help us build better reactive systems. We’ll walk through the traditional way of building a typical app: a food delivery service, focusing on the flow of an order.&lt;/p&gt;

&lt;p&gt;We’ll simplify the flow to this: users submit an order, we offer it to the restaurant, the restaurant accepts or rejects it, and a bit later, we send the delivery request to a courier and wait until the courier marks the order complete.&lt;/p&gt;

&lt;p&gt;To build that in a traditional way, we’ll probably want an order service with endpoints for users to create an order, restaurants to accept an order, couriers to accept a delivery and mark it complete, and timers to notify us that they’ve elapsed.&lt;/p&gt;

&lt;p&gt;To wildly simplify what was, in my past job, a few hundred person-years of work, you likely put together some code structured like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KmHFSzRP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://docs.statebacked.dev/assets/images/endpoint-files-33c3bf2c60b3e9253fe13fd6fbc124fa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KmHFSzRP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://docs.statebacked.dev/assets/images/endpoint-files-33c3bf2c60b3e9253fe13fd6fbc124fa.png" alt="Endpoint files" width="417" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To represent a process that I’m pretty sure you’re picturing in your head right now like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wlrWy5yo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://docs.statebacked.dev/assets/images/order-state-machine-v1-2c092337038c9b771a6612a98de26414.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wlrWy5yo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://docs.statebacked.dev/assets/images/order-state-machine-v1-2c092337038c9b771a6612a98de26414.svg" alt="Order state machine v1" width="762" height="1245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.statebacked.dev/assets/images/order-state-machine-v1-2c092337038c9b771a6612a98de26414.svg"&gt;Expand&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;The problem with the traditional approach&lt;/h3&gt;

&lt;p&gt;It &lt;em&gt;looks like&lt;/em&gt; those endpoints sitting in their separate files are decoupled but, within each route, we have a bunch of assumptions about where we are in the flow. Orders can only be accepted once. Couriers need the order information we stored during order acceptance when they pick up the order, and they shouldn’t be able to accept early since they’re paid based on time spent. We’ll also need to make sure that, if we offer a job to a courier who rejects it, they can’t subsequently accept after another courier is assigned.&lt;/p&gt;

&lt;p&gt;In short, to be correct, each endpoint must validate aspects of the overall flow so, to coherently understand this system, we need to think about the whole thing—we can't really understand any part in isolation. The overall &lt;em&gt;process&lt;/em&gt; is what our customers are paying for, not a set of endpoints. Having spent many sleepless nights attending to outages within just such a system, I know firsthand that seemingly innocent changes to a supposedly isolated endpoint can have unintended consequences that ripple through the entire system.&lt;/p&gt;

&lt;p&gt;Basically, all of the critical structure &lt;em&gt;around&lt;/em&gt; and &lt;em&gt;between&lt;/em&gt; the endpoints that jumps right out at us in the state machine is completely hidden and hard to extract from the "decoupled" endpoints.&lt;/p&gt;

&lt;p&gt;Now, let’s imagine an all-too real request: after building this system, our business team decides that we could offer wider selection faster if we send couriers out to buy items from restaurants we have no relationship with (and, therefore, no way to send orders to directly).&lt;/p&gt;

&lt;p&gt;With that feature, we’ve broken all of the assumptions buried in our supposedly decoupled endpoints. Now, couriers get dispatched first and orders are accepted or rejected after the courier is on their way.&lt;/p&gt;

&lt;p&gt;With the traditional structure, we satisfy this new requirement by painstakingly spelunking through each of our endpoints and peppering in the appropriate conditionals, hoping that, in the process, we don’t disrupt the regular orders flowing through our system.&lt;/p&gt;

&lt;p&gt;Then, to satisfy restaurants that want to perform their own deliveries, we add a new option: for some orders, instead of dispatching couriers, we give the restaurant the delivery information so they can bring the customer their food. We wade through the mess of conditionals in our "decoupled" endpoints, struggling to trace distinct, coherent flows, painstakingly adding to the confusion as we implement our new feature.&lt;/p&gt;

&lt;h3&gt;Doing better&lt;/h3&gt;

&lt;p&gt;The trouble here lies in the difference between coupling and cohesion. Most systems have some degree of coupling, some interdependent assumptions between different endpoints or components. The degree of coupling is directly related to the difficulty of understanding a part of the system separately from the whole. As it becomes harder to understand &lt;em&gt;this&lt;/em&gt; endpoint without also understanding &lt;em&gt;those&lt;/em&gt; endpoints, it becomes more and more important to treat the system as a cohesive whole rather than pretending each part is an isolated component.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As the coupling between endpoints grows, so too do the benefits of representing the system as an explicit state machine&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you’re blessed with a generally stateless problem domain where you can build truly isolated endpoints, you should certainly do so! Our goal is always simplicity in the service of comprehensibility and, by that measure, nothing beats an isolated pure function.&lt;/p&gt;

&lt;p&gt;If, however, your problem domain, like most, requires inter-component assumptions, I highly recommend that you architect your system as it is—as a whole—instead of pretending it is composed of isolated pieces. As the dependencies between the endpoints of your system intensify, you’ll find more and more value from representing your requests as events sent to an instance of a state machine and your responses as pure functions of the machine’s state and owned data. In these systems, your primary concern is to understand the inter-component flow and that’s exactly what a state machine provides. You then build truly decoupled data updates, actions and conditions that your state machine orchestrates into a coherent whole.&lt;/p&gt;
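
&lt;p&gt;Concretely, each endpoint shrinks to “send an event, return a pure view of the state.” Here’s a sketch of that shape; the state names, &lt;code&gt;sendEvent&lt;/code&gt;, and &lt;code&gt;orderView&lt;/code&gt; are hypothetical stand-ins, not any real runtime’s API:&lt;/p&gt;

```javascript
// Sketch: an endpoint as "event in, pure view of state out".
// State names and helper functions are hypothetical stand-ins.

// The only flow-specific logic: which transitions exist in which state.
const transitions = {
  offered: { RESTAURANT_ACCEPTED: "accepted", RESTAURANT_REJECTED: "rejected" },
  accepted: { COURIER_ASSIGNED: "inDelivery" },
  inDelivery: { ORDER_COMPLETED: "complete" },
};

function sendEvent(instance, event) {
  const next = (transitions[instance.state] || {})[event.type];
  if (next === undefined) {
    // The machine, not each endpoint, enforces flow assumptions:
    // e.g. an order can only be accepted while it is "offered".
    throw new Error(`event ${event.type} not allowed in state ${instance.state}`);
  }
  return { state: next, data: instance.data };
}

// The response is a pure function of the machine's state and owned data.
function orderView(instance) {
  return { orderId: instance.data.orderId, status: instance.state };
}

// An "accept order" endpoint becomes a thin shell:
function acceptOrderHandler(instance) {
  const updated = sendEvent(instance, { type: "RESTAURANT_ACCEPTED" });
  return orderView(updated);
}

const order = { state: "offered", data: { orderId: "o-1" } };
console.log(acceptOrderHandler(order)); // { orderId: "o-1", status: "accepted" }
```

&lt;p&gt;A second accept attempt fails loudly instead of silently corrupting the flow, and adding a “courier first” variant means changing the transition table, not spelunking through conditionals in every handler.&lt;/p&gt;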

&lt;p&gt;Returning to our example, it doesn’t take a state machine expert to understand our complex, three-part flow from this executable diagram, but I can assure you that after 6 years in the trenches with the "decoupled" endpoint version of this system, I still struggled to piece together a view of what the &lt;em&gt;system&lt;/em&gt; was actually doing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R4FFY8uP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://docs.statebacked.dev/assets/images/order-state-machine-v3-fa6fd770a1eb53924cfdf76488791491.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R4FFY8uP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://docs.statebacked.dev/assets/images/order-state-machine-v3-fa6fd770a1eb53924cfdf76488791491.svg" alt="Order state machine v3" width="800" height="585"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.statebacked.dev/assets/images/order-state-machine-v3-fa6fd770a1eb53924cfdf76488791491.svg"&gt;Expand&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By making this shift, we can solve the general problem of running consistent instances of these machines once and then spend all of our time building the business logic our users actually need.&lt;/p&gt;

&lt;p&gt;Which brings us to our second class of system…&lt;/p&gt;

&lt;h2&gt;Proactive systems (workflows)&lt;/h2&gt;

&lt;p&gt;Proactive systems are distinguished by being primarily self-driven. They may wait on some external event occasionally, but the primary impetus driving them forward is the completion of some process or timer they started.&lt;/p&gt;

&lt;p&gt;The fundamental problem with workflows is that computers run code as processes and, while processes are permanent(ish) at the timescale of a request, they are decidedly ephemeral at the timescale of a long-lived workflow. We used to string together cron jobs, queues, and watchdogs to ensure forward progress in the face of machine and process failures. That made things work but created a mess—as with the "decoupled" endpoints we saw above, there was no cohesion to the separately-deployed dependencies. All of the above arguments for building more cohesive systems apply doubly so for workflows built around queues, event buses, and timers—understanding a system from those parts demands top-rate detective work.&lt;/p&gt;

&lt;h3&gt;Workflow engines and the clever hack&lt;/h3&gt;

&lt;p&gt;In the past few years, we’ve seen the rise of the cohesive workflow as an abstraction in its own right&lt;sup id="fnref2"&gt;2&lt;/sup&gt;. Just write code and let the workflow engine deal with reliably running on top of unreliable processes. Wonderful! Except that nearly all such platforms suffer from two major flaws: a lack of upgradability and embracing an iffy, leaky abstraction.&lt;/p&gt;

&lt;p&gt;There is only one constant across every software project I’ve seen: change. We’ve created this infinitely malleable construct and—of course!—we’re going to take advantage of its amazing ability to change. But there is no coherent upgrade story for &lt;em&gt;any&lt;/em&gt; major workflow platform. After kicking off a job that’s going to run for a year, there’s no reasonable way to change how it works!&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;best&lt;/em&gt; of these systems allow you to litter your code with version checks to manually recover missing context. &lt;em&gt;Understanding&lt;/em&gt; is the hardest part of the job, and trying to reason about a workflow littered with “if (version &amp;gt; 1.123) {...}” checks is like betting your business on your ability to win at 3D chess—we shouldn’t need to introduce a time dimension to our code.&lt;/p&gt;

&lt;p&gt;This obvious problem of wildly complicated updates derives from the less obvious, more insidious issue with workflow platforms: at their core is a &lt;a href="https://www.joelonsoftware.com/2001/12/11/back-to-basics/"&gt;Shlemiel the painter algorithm&lt;/a&gt;. They cleverly provide the illusion of resuming your code where it left off but that’s simply not possible with the lack of constraints present in arbitrary code, where any line can depend on arbitrary state left in memory by any code that previously ran. They provide this illusion by running from the beginning on every execution and using stored responses for already-called effects, thereby re-building all of the in-process context that your next bit of code might depend on.&lt;/p&gt;

&lt;p&gt;It is &lt;em&gt;clever&lt;/em&gt;!&lt;/p&gt;

&lt;p&gt;It is also the &lt;em&gt;wrong abstraction&lt;/em&gt; because it starts from the assumption that we programmers aren’t open to adopting something better than arbitrary, unconstrained code.&lt;/p&gt;

&lt;h3&gt;A better abstraction&lt;/h3&gt;

&lt;p&gt;With state machines as the core abstraction for workflows, upgrading becomes a simple data mapping exercise because we know &lt;em&gt;exactly&lt;/em&gt; what any future code depends on: our state and our owned data. We can write one function to map states from the old version to states from the new version and one function to map the owned data from the old version to owned data from the new version. Then we can upgrade instances of our state machine whenever we want and our logic itself can ignore the history of mistakes and rethought features that more rightly belong in our git history than our production deployment.&lt;/p&gt;

&lt;p&gt;There’s more. State machines are &lt;em&gt;inherently resumable&lt;/em&gt; because, again, we know exactly how to rebuild the state that any future execution depends on: just load the state and owned data. No clever tricks required.&lt;/p&gt;
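
&lt;p&gt;Here’s what resumption looks like when state and owned data are the whole story. The in-memory store is a hypothetical stand-in for a real database:&lt;/p&gt;

```javascript
// Resuming a state machine instance: load the state and owned data, done.
// An in-memory Map stands in for a real database.
const store = new Map();

function persist(id, instance) {
  store.set(id, JSON.stringify(instance));
}

function resume(id) {
  // Everything any future execution depends on is right here.
  return JSON.parse(store.get(id));
}

persist("order-42", { state: "awaitingCourier", data: { items: 2 } });

// ...the process dies, the machine reboots, a week passes...

const instance = resume("order-42");
console.log(instance.state); // "awaitingCourier"
```

&lt;p&gt;No replay, no journal: the instance’s state and owned data are the complete resume point.&lt;/p&gt;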

&lt;h3&gt;A workflow example&lt;/h3&gt;

&lt;p&gt;Let’s look at an example of an onboarding workflow we might run with a standard workflow engine today:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;OnboardingWorkflow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sendWelcomeEmail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1 day&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sendFirstDripEmail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Workflow engines treat each of our &lt;code&gt;await&lt;/code&gt;ed functions as "activities" or "steps", recording the inputs and outputs of each and providing us the illusion of being able to resume execution just after them.&lt;/p&gt;

&lt;p&gt;Now, we decide that we want our welcome email to vary based on the acquisition channel for our user. Simple, right?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;OnboardingWorkflow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;acquisitionChannel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;getAcquisitionChannel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sendWelcomeEmail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;acquisitionChannel&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1 day&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sendFirstDripEmail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nope! For a user who has already passed the &lt;code&gt;sendWelcomeEmail&lt;/code&gt; step, a workflow engine running this new version has no choice but to either throw an error or send a second welcome email. Let’s see why.&lt;/p&gt;

&lt;p&gt;The first time the engine runs the first version of our workflow, it will execute the &lt;code&gt;sendWelcomeEmail&lt;/code&gt; activity and store its result, then execute the &lt;code&gt;sleep&lt;/code&gt; activity, which will register a timer and then throw an exception to stop the execution. After the timer elapses, the engine has no way&lt;sup id="fnref3"&gt;3&lt;/sup&gt; to jump to the line of code after our call to &lt;code&gt;sleep&lt;/code&gt;. Instead, it starts at the very top again and uses stored results for any functions it already executed. It &lt;em&gt;has&lt;/em&gt; to do this because there’s no other way to rebuild all of the program state that we might depend on (e.g. local variables, global variables, arbitrary pointers, etc.). So, we’ll need to write our updated version more like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;OnboardingWorkflow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;getVersion&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sendWelcomeEmail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;defaultAcquisitionChannel&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;acquisitionChannel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;getAcquisitionChannel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sendWelcomeEmail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;acquisitionChannel&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1 day&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sendFirstDripEmail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
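
&lt;p&gt;Stripped of the engine machinery, the replay mechanism itself can be sketched in a few lines. This is synchronous and in-memory purely for illustration; real engines persist the journal durably and key activities more carefully:&lt;/p&gt;

```javascript
// Sketch of workflow replay: re-run the workflow from the top, substituting
// recorded results for activities that already completed.
function makeReplayingRunner(journal) {
  let step = 0;
  return function activity(name, fn) {
    const i = step;
    step += 1;
    if (i in journal) {
      return journal[i].result; // already ran: replay the stored result
    }
    const result = fn(); // first run: actually execute and record
    journal[i] = { name, result };
    return result;
  };
}

// A workflow whose side effects we can observe:
const sent = [];
function onboarding(activity, email) {
  activity("sendWelcomeEmail", () => sent.push(`welcome:${email}`));
  activity("sleep", () => { throw new Error("SUSPEND"); }); // park until the timer fires
  activity("sendFirstDripEmail", () => sent.push(`drip:${email}`));
}

// First execution: the welcome email sends, then we suspend at the sleep.
const journal = {};
try { onboarding(makeReplayingRunner(journal), "a@example.com"); }
catch (e) { /* suspended */ }

// The timer fires: record the sleep as completed, then replay from the top.
journal[1] = { name: "sleep", result: undefined };
onboarding(makeReplayingRunner(journal), "a@example.com");

console.log(sent); // ["welcome:a@example.com", "drip:a@example.com"]
```

&lt;p&gt;Every execution replays the entire history, which is exactly why any change to the code’s shape can desynchronize it from the journal.&lt;/p&gt;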



&lt;p&gt;Now imagine that we had more updates (great software engineering teams push multiple changes a day, right?) and imagine that the steps of our workflow had more direct dependencies between them. Maybe you could still mentally model the overall flow after v3. What about after v7?&lt;/p&gt;

&lt;h3&gt;Again, but with a state machine&lt;/h3&gt;

&lt;p&gt;With state machines, things are a bit different.&lt;/p&gt;

&lt;p&gt;We start with this state machine:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LYxCipSM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://docs.statebacked.dev/assets/images/workflow-v1-811386a81d22f96aedf4ba5578df529c.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LYxCipSM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://docs.statebacked.dev/assets/images/workflow-v1-811386a81d22f96aedf4ba5578df529c.svg" alt="Workflow state machine v1" width="320" height="695"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.statebacked.dev/assets/images/workflow-v1-811386a81d22f96aedf4ba5578df529c.svg"&gt;Expand&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For simple workflows, this diagram is helpful but it’s admittedly not a huge improvement in understandability over the code. As things get more complex, though, a visual representation of the high-level structure of the workflow becomes really helpful. More importantly for our analysis, this is what an upgrade looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q79F7scR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://docs.statebacked.dev/assets/images/workflow-upgrade-fad89627dba433106ec90df80068c211.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q79F7scR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://docs.statebacked.dev/assets/images/workflow-upgrade-fad89627dba433106ec90df80068c211.svg" alt="Workflow state machine upgrade" width="800" height="900"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.statebacked.dev/assets/images/workflow-upgrade-fad89627dba433106ec90df80068c211.svg"&gt;Expand&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As engineers, we need to do three things to cleanly migrate running instances from one version of our machine to another:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We build the new version of our state machine. We don't need to include any vestiges of the old version that are no longer needed. This is represented in the right-hand side of the above diagram.&lt;/li&gt;
&lt;li&gt;We write a function to map our old states to our new states (a trivial mapping in this case). This is represented as the left-to-right arrows in the diagram.&lt;/li&gt;
&lt;li&gt;We write another function to map the owned data for the old version to owned data for our new version of the machine. For example, if we had used the acquisition channel in future states, we would want to populate the acquisition channel in our owned data as part of this mapping.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Because of the constraints of state machines, those mapping functions are straightforward to write and easy to test.&lt;/p&gt;
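&lt;p&gt;As a rough sketch, those two mapping functions might look like the following (the state names and data shapes here are invented for illustration; this is not an actual State Backed API):&lt;/p&gt;

```javascript
// Hypothetical upgrade mappings for migrating running instances from
// machine v1 to v2. State names and data shapes are invented examples.

// 1. Map each old state to its corresponding new state.
function mapState(oldState) {
  const stateMap = {
    waitingForSignup: "waitingForSignup",
    sendingWelcomeEmail: "sendingOnboardingEmail", // renamed in v2
    done: "done",
  };
  return stateMap[oldState] ?? oldState;
}

// 2. Map the old owned data to the shape the new machine expects.
function mapData(oldData) {
  return {
    ...oldData,
    // v2 uses the acquisition channel; default it for old instances.
    acquisitionChannel: oldData.acquisitionChannel ?? "unknown",
  };
}
```

&lt;p&gt;Because both functions are pure, exhaustively testing them against every old state is trivial.&lt;/p&gt;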

&lt;p&gt;This upgrade mechanism allows us to keep our workflow implementation clean and completely separate from our handling of changes over time. The inherent ability of state machines to &lt;em&gt;actually&lt;/em&gt; resume execution from any state is what allows us to disentangle our change history from our point-in-time state machine definition.&lt;/p&gt;

&lt;h2&gt;
  
  
  Putting them together
&lt;/h2&gt;

&lt;p&gt;Examined more broadly, few systems fall entirely into the reactive or proactive categories. An application likely has reactive aspects that kick off proactive processes that wait for reactive events and so forth. With today’s paradigms, these are incredibly awkward to model uniformly, so we tend to create subsystems built around different abstractions, operated by different teams with different expertise. Because state machines are driven by &lt;em&gt;events&lt;/em&gt; and are inherently resumable, they easily model both reactive and proactive systems within a single paradigm that’s able to naturally express both types of solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migrating
&lt;/h2&gt;

&lt;p&gt;Great! So now that you're convinced of the value of state machines, you just need to rewrite your whole backend as a set of state machines in a big, all-at-once migration, right?&lt;/p&gt;

&lt;p&gt;Not quite.&lt;/p&gt;

&lt;p&gt;You don’t need to &lt;em&gt;replace&lt;/em&gt; your existing code with state machines. In many cases, you’ll want to &lt;em&gt;wrap&lt;/em&gt; calls to your (simplified) existing code in a state machine. That’s because, for most backends, the entire concept of a flow is simply missing. Once you introduce a state machine that’s responsible for executing the code that previously sat behind your endpoints, you can update your clients or API layer to send events to your new state machine instead of directly invoking the endpoints. Then, you can remove the flow-related checks and logic from the former endpoint code that now sits behind your state machine. Finally, you can lift your state management out of the former endpoint code to move ownership of the data to the state machine itself.&lt;/p&gt;
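&lt;p&gt;To make the idea concrete, here's a minimal, hand-rolled sketch (deliberately not the XState API) of a machine that wraps code formerly behind a hypothetical payments endpoint:&lt;/p&gt;

```javascript
// A toy machine definition. The endpoint path and state names are
// hypothetical; the shape loosely mirrors statechart libraries.
const checkoutFlow = {
  initial: "cartOpen",
  states: {
    cartOpen: {
      on: { SUBMIT_ORDER: "processingPayment" },
    },
    processingPayment: {
      // The entry action invokes the code that previously sat
      // directly behind an endpoint.
      entry: (ctx) => ctx.callEndpoint("/payments/charge"),
      on: { PAYMENT_OK: "fulfilled", PAYMENT_FAILED: "cartOpen" },
    },
    fulfilled: { on: {} },
  },
};

// Sending an event consults the current state's transition table, so
// out-of-order calls (e.g. SUBMIT_ORDER while fulfilled) are dropped
// instead of re-running endpoint logic.
function send(machine, state, event, ctx) {
  const next = machine.states[state].on[event];
  if (!next) return state; // no transition defined: ignore the event
  const entry = machine.states[next].entry;
  if (entry) entry(ctx);
  return next;
}
```

&lt;p&gt;Clients then send events like SUBMIT_ORDER to the machine rather than calling the endpoint directly, and the flow-related ordering checks live in one place.&lt;/p&gt;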

&lt;p&gt;Of course, all of this applies just as well to new projects, and migrations can easily be approached piecemeal, wrapping one related set of endpoints at a time.&lt;/p&gt;

&lt;p&gt;Let's examine the value gained at these key milestones:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creating a state machine to wrap a set of endpoints will yield valuable insight into the system you thought you knew. This executable documentation will allow future engineers to understand the overall flow and confidently make changes. You'll even remove the potential for a whole class of race conditions. Often, you'll discover never-before-considered user flows lurking in your existing implementation, and this is a great time to finally define how they're supposed to work.&lt;/li&gt;
&lt;li&gt;Pulling the flow-related checks and validations out of the former endpoint code will simplify things as only deleting code can. You'll likely even find a few lurking bugs in those complex validations.&lt;/li&gt;
&lt;li&gt;Lifting state management out of the former endpoint code and into the state machine removes yet more code with yet more potential for bugs. Importantly, you'll find that your next project finishes faster and with fewer outages because you've pulled many application concerns up to the platform level.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;Ready to start implementing your backends as state machines?&lt;/p&gt;

&lt;p&gt;The most important first step is to start thinking in terms of states and transitions. Immediately, you'll start to see improvements in your ability to understand your software.&lt;/p&gt;

&lt;p&gt;There are even some great libraries you can use to build state machines on the backend, including &lt;a href="https://xstate.js.org/docs/"&gt;XState&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And there's a new service that can definitely help you adopt this pattern...&lt;/p&gt;

&lt;p&gt;I was lucky enough to be a part of Uber Eats' journey from an endpoint-oriented to a workflow-oriented architecture. Complex dependencies between endpoints had made working on them incredibly difficult and error-prone. With the migration to a workflow abstraction, we gained immense confidence in our system by finally having a cohesive view of the user-relevant flows that we were building.&lt;/p&gt;

&lt;p&gt;This was super exciting but, as I'm sure you can tell by now, I saw huge potential for state machines to expand upon that value. So I started State Backed. We recently released &lt;a href="https://www.statebacked.dev"&gt;our state machine cloud&lt;/a&gt; to make it incredibly easy to deploy any state machine as a reliable workflow or a real-time, reactive backend. We'd be proud to help you adopt state machines for your own backend, and we're happy to share notes and help however we can if you choose to build a state machine solution yourself.&lt;/p&gt;

&lt;p&gt;You can have a state machine deployed in the &lt;a href="https://www.statebacked.dev"&gt;State Backed&lt;/a&gt; cloud in the next 5 minutes if you'd like to try it out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.statebacked.dev"&gt;Try State Backed for free&lt;/a&gt;&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;Technically, we’re talking about statecharts throughout this article because we want the expressivity benefits of hierarchical and parallel states. We’ll use the more common term just for familiarity. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;This was pioneered by platforms like &lt;a href="https://cadenceworkflow.io/docs/concepts/workflows"&gt;Cadence&lt;/a&gt;. These platforms were a &lt;em&gt;huge&lt;/em&gt; leap forward for proactive system design because they enabled cohesion in this type of software for the first time. The fact that we believe that state machines are a more suitable abstraction doesn't detract at all from the amazing advance that these platforms made. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;The only exception we’re aware of is &lt;a href="https://www.golem.cloud/platform"&gt;Golem&lt;/a&gt;, a workflow engine built around Web Assembly. You can't snapshot the memory of a regular process and restore it but, because of Web Assembly’s sandbox model, they are able to capture the full program state and do actual resumption. This is a beautiful abstraction for resumption but doesn't address upgrading running instances. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>backend</category>
      <category>backenddevelopment</category>
      <category>webdev</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Introducing... the Team Pando flow editor</title>
      <dc:creator>Adam Berger</dc:creator>
      <pubDate>Thu, 08 Jun 2023 15:04:08 +0000</pubDate>
      <link>https://forem.com/abrgr/introducing-the-team-pando-flow-editor-cid</link>
      <guid>https://forem.com/abrgr/introducing-the-team-pando-flow-editor-cid</guid>
      <description>&lt;h1&gt;
  
  
  You can now easily edit your Team Pando product flows
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://blog.teampando.com/blog/20230530-end-to-end-product-lifecycle-in-team-pando?utm_source=devto&amp;amp;utm_medium=web&amp;amp;utm_campaign=flow-editor"&gt;Last week&lt;/a&gt; we gave you a sneak peak into how we use Team Pando to build features &lt;em&gt;for&lt;/em&gt; Team Pando.&lt;/p&gt;

&lt;p&gt;That feature has now been live and used by amazing members of Team Pando for a few days, so we thought it was time to officially share a demo with you all.&lt;/p&gt;

&lt;p&gt;First, a reminder about what we built.&lt;/p&gt;

&lt;p&gt;Team Pando's Requirements Understanding Engine is the world's first and best engine for extracting the structure from the normal product requirements we all write every day. We turn requirements into &lt;a href="https://blog.teampando.com/blog/20230607-product-building-blocks?utm_source=devto&amp;amp;utm_medium=web&amp;amp;utm_campaign=flow-editor"&gt;product flows&lt;/a&gt; that faithfully represent the product that you're defining.&lt;/p&gt;

&lt;p&gt;When the flows that we generate are slightly different from what you expected, your first choice should be to update the requirements to clarify your intention. We recommend that as the first choice not because it will help our Requirements Understanding Engine but because it will help the rest of your team understand exactly what you have in mind.&lt;/p&gt;

&lt;p&gt;Sometimes, though, you just want to make a quick change. In those cases, Team Pando now lets you easily make changes to the flow we created for each requirement.&lt;/p&gt;

&lt;p&gt;Check out flow editing in action, below.&lt;/p&gt;

&lt;p&gt;Or open up &lt;a href="https://app.teampando.com/?utm_source=medium&amp;amp;utm_medium=web&amp;amp;utm_campaign=20230608-introducing-flow-editor"&gt;Team Pando&lt;/a&gt; and try it out for yourself!&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/xaI87dLfss4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Product building blocks</title>
      <dc:creator>Adam Berger</dc:creator>
      <pubDate>Wed, 07 Jun 2023 21:12:12 +0000</pubDate>
      <link>https://forem.com/abrgr/product-building-blocks-3cfe</link>
      <guid>https://forem.com/abrgr/product-building-blocks-3cfe</guid>
      <description>&lt;h2&gt;
  
  
  From humble beginnings...
&lt;/h2&gt;

&lt;p&gt;Simple things can give rise to enormous complexity.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The same 4 DNA bases and 20 amino acids make chickens, humans, tulips, and whales&lt;/li&gt;
&lt;li&gt;Transistors just switch on and off but enough of them can now mimic human intelligence&lt;/li&gt;
&lt;li&gt;The interactions between 4 fundamental forces give rise to all of physics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In field after field, discovering and deeply understanding the &lt;strong&gt;right&lt;/strong&gt; "atoms"--the right building blocks--to describe the complex phenomena we see creates an explosion in our ability to shape our world.&lt;/p&gt;

&lt;h2&gt;
  
  
  But every product is unique, right?
&lt;/h2&gt;

&lt;p&gt;It would seem that each software product is a bespoke beast, with SaaS apps made of completely different "stuff" than a consumer marketplace. Just as it wasn't obvious that the same building blocks and processes produce humans and slugs or that just 4 forces account for everything we see happening in our Universe, it's not immediately obvious that there &lt;em&gt;are&lt;/em&gt; universal building blocks for products.&lt;/p&gt;

&lt;h3&gt;
  
  
  But if there were atoms of products, surely we would want to use them, right?
&lt;/h3&gt;

&lt;p&gt;In field after field, thinking small, from the atoms up, has yielded huge advances in our ability to build big. We think that product building is no exception.&lt;/p&gt;

&lt;h2&gt;
  
  
  Atoms of products
&lt;/h2&gt;

&lt;p&gt;It turns out that there actually &lt;strong&gt;is&lt;/strong&gt; a small set of atoms that all products are made of.&lt;/p&gt;

&lt;p&gt;And that's what we put at the center of &lt;a href="https://www.teampando.com/?utm_source=devto&amp;amp;utm_medium=web&amp;amp;utm_campaign=atoms-of-products"&gt;Team Pando&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can make every product out of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;States or steps. A product is in one or more state(s) at any given time and the states determine what the user and product can do.&lt;/li&gt;
&lt;li&gt;Events. Events are anything that a user or external actor can do. These are things that our product might want to respond to in some way. Clicking, tapping, typing, etc.&lt;/li&gt;
&lt;li&gt;Conditions. Conditions determine how our product will respond to a given event. For instance, you may want an "is valid" condition determining what effect a form submission event should have.&lt;/li&gt;
&lt;li&gt;Actions. Actions are things that the product can do. Saving data, recording events, messaging other systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And we've found that there's one additional atom that really helps us stay at a comfortable level of detail while we're specifying products:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Expectations. We can be explicit about how we expect our product to represent certain states.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it. With those building blocks, you can describe any product you've built or interacted with.&lt;/p&gt;
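&lt;p&gt;As a sketch of how those atoms fit together, here's a hypothetical signup form modeled with states, an event, a condition, and an action (a hand-rolled toy, not any particular tool's API):&lt;/p&gt;

```javascript
// Hypothetical signup flow built from the atoms above: states, an
// event (SUBMIT), a condition ("is valid"), and an action (save).
const saved = [];

const signupFlow = {
  initial: "editingForm",
  states: {
    editingForm: {
      on: {
        // Event: the user submits the form.
        SUBMIT: [
          {
            // Condition: only advance when the form data looks valid.
            cond: (data) => /.+@.+/.test(String(data.email)),
            target: "submitted",
            // Action: something the product does in response.
            action: (data) => saved.push(data.email),
          },
          { target: "editingForm" }, // invalid: stay put
        ],
      },
    },
    submitted: { on: {} },
  },
};

// Pick the first transition whose condition passes, run its action.
function send(machine, state, event, data) {
  for (const t of machine.states[state].on[event] ?? []) {
    if (!t.cond || t.cond(data)) {
      if (t.action) t.action(data);
      return t.target;
    }
  }
  return state;
}
```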

&lt;h2&gt;
  
  
  What about the UI?
&lt;/h2&gt;

&lt;p&gt;So you actually want to interact with this product you're building? ;)&lt;/p&gt;

&lt;p&gt;Our building blocks just specify the different steps that a product has, what's expected at each step, and what the user and product can do at each step. But we haven't mentioned anything about &lt;em&gt;how&lt;/em&gt; users actually do those things yet.&lt;/p&gt;

&lt;p&gt;The answer is fairly beautiful: the UI is determined by which state(s) the product is in. Think about it: the state determines what a user can do; it has associated expectations describing how the user might do some of those things; and it comes with a map of where the user can go next. That sounds a lot like a design spec to us! That's why we connect designs to states in Team Pando.&lt;/p&gt;

&lt;h1&gt;
  
  
  Our transistor moment
&lt;/h1&gt;

&lt;p&gt;We have found that representing products this way has &lt;strong&gt;huge&lt;/strong&gt; benefits.&lt;/p&gt;

&lt;p&gt;We can build up a common vocabulary across a team.&lt;/p&gt;

&lt;p&gt;We can be perfectly precise in talking about our products.&lt;/p&gt;

&lt;p&gt;We can connect everyone's work to the same blueprint, keeping teams tightly aligned.&lt;/p&gt;

&lt;p&gt;We can generate documentation, walkthroughs, help content, tests, and code.&lt;/p&gt;

&lt;p&gt;We can identify inter-team overlaps and conflicts while everyone is still in the ideation phase.&lt;/p&gt;

&lt;p&gt;We certainly didn't invent this model (you can learn more about state charts &lt;a href="https://statecharts.dev/"&gt;here&lt;/a&gt;) but we are big believers in its ability to help us think clearly, together. Try out &lt;a href="https://www.teampando.com/?utm_source=devto&amp;amp;utm_medium=web&amp;amp;utm_campaign=atoms-of-products"&gt;Team Pando&lt;/a&gt; with your team and tell us what you think!&lt;/p&gt;

</description>
      <category>product</category>
      <category>architecture</category>
      <category>management</category>
    </item>
    <item>
      <title>End-to-end product lifecycle with Team Pando</title>
      <dc:creator>Adam Berger</dc:creator>
      <pubDate>Tue, 30 May 2023 21:58:35 +0000</pubDate>
      <link>https://forem.com/abrgr/end-to-end-product-lifecycle-with-team-pando-53ij</link>
      <guid>https://forem.com/abrgr/end-to-end-product-lifecycle-with-team-pando-53ij</guid>
      <description>&lt;h2&gt;
  
  
  Eat your own dog food. It's good for you!
&lt;/h2&gt;

&lt;p&gt;There is nothing like eating your own dog food to build empathy with your users and there's nothing that comes close to real user empathy in its ability to improve your product.&lt;/p&gt;

&lt;p&gt;We're lucky at Team Pando: we're a product team who &lt;strong&gt;loves&lt;/strong&gt; helping product teams build better products, faster.&lt;/p&gt;

&lt;p&gt;So we get to use our own product every day to build... our product.&lt;/p&gt;

&lt;p&gt;Here's a view into how we use Team Pando to build Team Pando. (And look out for the feature we discuss to drop in the product soon!)&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Aw-gdF0tQlw"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;How do you manage your product, design, and engineering handoffs? If you like what you see, check out &lt;a href="https://www.teampando.com/?utm_source=devto&amp;amp;utm_medium=web&amp;amp;utm_campaign=20230530-end-to-end-feature"&gt;Team Pando&lt;/a&gt; — we’re here to help!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Say Hello to Team Pando</title>
      <dc:creator>Adam Berger</dc:creator>
      <pubDate>Mon, 22 May 2023 17:52:48 +0000</pubDate>
      <link>https://forem.com/abrgr/say-hello-to-team-pando-49d8</link>
      <guid>https://forem.com/abrgr/say-hello-to-team-pando-49d8</guid>
      <description>&lt;p&gt;We'll start with the bad news...&lt;/p&gt;

&lt;p&gt;Building great software is just too hard.&lt;/p&gt;

&lt;p&gt;Great products are wildly complex. Not in the sense that they &lt;em&gt;expose&lt;/em&gt; wild complexity to users but in the sense that the team of people building them has to &lt;em&gt;deal with&lt;/em&gt; a staggering amount of complexity.&lt;/p&gt;

&lt;p&gt;For every step a user takes, a product team has to consider the dozens of steps they &lt;em&gt;didn't&lt;/em&gt; take and how each of those dozens of steps might have influenced the thousands of future steps that user might take.&lt;/p&gt;

&lt;p&gt;But wait... There's more.&lt;/p&gt;

&lt;p&gt;Few products are built singlehandedly. Product managers, designers, engineers, data scientists, quality assurance, product marketers, and operations teams all need to have the &lt;strong&gt;same&lt;/strong&gt;, shared understanding of the key details of the product they're building. Once the picture of the product everyone carries in their mind starts to drift, the end result is as expected: a bit blurry, a bit too much friction, parts that almost fit together, and late-in-the-game delays.&lt;/p&gt;

&lt;h2&gt;
  
  
  But there's a better way.
&lt;/h2&gt;

&lt;p&gt;And it just so happens that the path forward addresses both of the fundamental problems facing product teams today: product complexity and team alignment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The answer is to make the structure of the product itself real and to use that structure as the connective tissue for everything the team builds.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And that is what Team Pando does.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.teampando.com/?utm_source=blog-devto&amp;amp;utm_medium=web&amp;amp;utm_campaign=introducing-team-pando"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1Tk1XNkL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tobbxco745qbgz3qmksa.png" alt="Hello Team Pando" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  You already write requirements
&lt;/h3&gt;

&lt;p&gt;Team Pando turns them into flows.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Looming problems become obvious while you're still defining the product&lt;/li&gt;
&lt;li&gt;Experiencing the product from a user's perspective is a click away&lt;/li&gt;
&lt;li&gt;One shared vocabulary for the whole team&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  You already make designs
&lt;/h3&gt;

&lt;p&gt;Team Pando syncs them with your requirements and the key parts of your flow they relate to.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use our Figma plugin to auto-create Figma frames for each step in your flow&lt;/li&gt;
&lt;li&gt;We'll sync just the relevant requirements right into your Figma files&lt;/li&gt;
&lt;li&gt;We'll auto-generate clickable prototypes from your designs based on your flows&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  You already do engineering handoffs
&lt;/h3&gt;

&lt;p&gt;Team Pando makes them a breeze with walkthroughs that put everything an engineer needs to build every step of every flow right at their fingertips.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Engineers have all of the designs, requirements, context, and collateral they need; no more searching through chats and docs&lt;/li&gt;
&lt;li&gt;Engineers can download an executable version of the flow as a state chart&lt;/li&gt;
&lt;li&gt;Auto-generated tests mean QA isn't chasing after the rest of the team&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://app.teampando.com/?utm_source=blog-devto&amp;amp;utm_medium=web&amp;amp;utm_campaign=introducing-team-pando"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u_rgUr5Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yp4ndw6ncycvcttbtczh.gif" alt="Team Pando turns requirements into flows" width="332" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What better to align your whole team to than the product that they're building?&lt;/p&gt;

&lt;p&gt;Team Pando brings the structure of your product front and center and then connects all of your team's work to the key pieces of your product.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://app.teampando.com/?utm_source=blog-devto&amp;amp;utm_medium=web&amp;amp;utm_campaign=introducing-team-pando"&gt;Try Team Pando&lt;/a&gt; for free and stay tuned for more exciting updates soon!
&lt;/h2&gt;

</description>
      <category>productivity</category>
      <category>design</category>
      <category>architecture</category>
      <category>teampando</category>
    </item>
    <item>
      <title>A serverless, versioned, local-first data syncing backend</title>
      <dc:creator>Adam Berger</dc:creator>
      <pubDate>Tue, 20 Dec 2022 16:55:03 +0000</pubDate>
      <link>https://forem.com/abrgr/a-serverless-versioned-local-first-data-syncing-backend-5en5</link>
      <guid>https://forem.com/abrgr/a-serverless-versioned-local-first-data-syncing-backend-5en5</guid>
      <description>&lt;p&gt;We’ve been busy here at &lt;a href="https://www.simplystated.dev?utm_source=devto" rel="noopener noreferrer"&gt;Simply Stated&lt;/a&gt;. We’re still building Omniscient XState Observability but it’s now part of a much more ambitious project. We’ll talk more about our expanded vision soon but we thought it would be fun to share some details about an interesting architecture we’ve been working on in the meantime.&lt;/p&gt;

&lt;p&gt;We are building a collaboration product for serious work. We call the core document that users will be collaboratively working on a project. Like other collaboration apps, we want to make sure that multiple users can edit a project in real time, everyone can see each other’s edits quickly, and edits don’t conflict with each other in a user-visible way.&lt;/p&gt;

&lt;p&gt;We settled on this set of requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Within a project, all user interactions should be local-first and snappy.&lt;/li&gt;
&lt;li&gt; Clients should be able to sync their changes to our backend, which will determine their canonical ordering and, therefore, the succession of canonical versions. We don’t need peer-to-peer consensus.&lt;/li&gt;
&lt;li&gt; We should be able to replay all of the updates (we call them mutations) for a project on top of a different starting state, similarly to a git rebase.&lt;/li&gt;
&lt;li&gt; Mutations may be reordered across clients but must be applied in-order for a particular client. That is, the client orderings define a partial order over the set of mutations.&lt;/li&gt;
&lt;li&gt; We must be able to scale to any number of projects and to large histories within a project but won’t see more than double-digit mutations per second within a project.&lt;/li&gt;
&lt;li&gt; We prefer to use autoscaling, serverless components.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;We stand on the shoulders of giants&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A major thank you to &lt;a href="https://doc.replicache.dev/" rel="noopener noreferrer"&gt;Replicache&lt;/a&gt;, &lt;a href="https://www.figma.com/blog/how-figmas-multiplayer-technology-works/" rel="noopener noreferrer"&gt;Figma&lt;/a&gt;, and, as usual, Rich Hickey, this time with &lt;a href="https://docs.datomic.com/on-prem/overview/architecture.html" rel="noopener noreferrer"&gt;Datomic&lt;/a&gt; for sharing their thinking about similar problem spaces.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The high-level scheme&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
We will take a Replicache-like approach of encoding mutations as data (similar to a &lt;a href="https://redux.js.org/" rel="noopener noreferrer"&gt;redux&lt;/a&gt; action). On the client, we’ll compute our current state by applying all local mutations to the canonical data we last received from the server. We’ll send a batch of mutations to the server, where the server will decide how to apply them to the now-current canonical data, which may differ from what the client believed to be the canonical data when it applied those mutations. The server-side application of a particular mutation might not result in the same state as the optimistic, client-side application of that mutation and that’s ok! The mutation logic will handle our conflict resolution. After the server applies mutations, it will send the client back the new state and the ID of the latest mutation it applied. The client can then update its canonical state and apply all local mutations &lt;em&gt;after&lt;/em&gt; the latest mutation included in the canonical state.&lt;/p&gt;
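&lt;p&gt;A stripped-down sketch of that client-side logic (the mutation types here are hypothetical):&lt;/p&gt;

```javascript
// Mutations are plain data, similar to redux actions. The reducer
// below is a hypothetical example; real mutation logic also handles
// conflict resolution.
function apply(state, mutation) {
  switch (mutation.type) {
    case "renameProject":
      return { ...state, name: mutation.name };
    case "addItem":
      return { ...state, items: [...state.items, mutation.item] };
    default:
      return state;
  }
}

// Current client state = last canonical state from the server plus
// all local, not-yet-acknowledged mutations, applied in order.
function currentState(canonical, localMutations) {
  return localMutations.reduce(apply, canonical);
}

// When the server responds with new canonical state and the ID of the
// latest mutation it applied, drop acknowledged local mutations and
// keep replaying the rest on top of the new canonical state.
function onServerUpdate(newCanonical, lastAppliedId, localMutations) {
  const pending = localMutations.filter((m) => m.id > lastAppliedId);
  return { canonical: newCanonical, localMutations: pending };
}
```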

&lt;p&gt;&lt;strong&gt;First, our datastore selection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Typically, we start with consistency requirements to determine our datastore. Do we need multi-key transactions, compare-and-swap semantics, etc.? Due to other architectural choices we’ll talk about later, the only consistency guarantee that we actually require of our datastore is read-after-write consistency. That is, we need at least the option of performing a read that will return the data inserted by the most recent write. And because we’re maintaining a version history, we’re actually able to run entirely append-only, so update semantics don’t matter to us.&lt;/p&gt;

&lt;p&gt;Within the AWS ecosystem, both DynamoDB and S3 support read-your-own-writes consistency models. Dynamo supports a &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html" rel="noopener noreferrer"&gt;ConsistentRead&lt;/a&gt; option on queries and S3 (miraculously!) &lt;a href="https://aws.amazon.com/s3/consistency/" rel="noopener noreferrer"&gt;supports&lt;/a&gt; strong read-after-write consistency for all operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data modeling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, we’d like to share just a bit about the data we’re working with.&lt;/p&gt;

&lt;p&gt;Projects (our primary versioned entity) are composed of a few scalars (things like name, description, etc. that we can restrict to ~1.5 KB max, typically &amp;lt;250 bytes) and a few (we have 2 right now) collections where the cardinality of each collection is fairly small (hundreds would be very rare) but the items in the collections are rich, potentially medium-sized (tens to hundreds of KB) structures. We’ll assume we have some long-lasting identifier for a particular client (an entity collaborating on a project), which we’ll call a client ID.&lt;/p&gt;

&lt;p&gt;Before we look at our write path, let’s examine the queries that we’ll need to make.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; We (fairly rarely) need to retrieve the latest version for a project along with all of its collections and their items.&lt;/li&gt;
&lt;li&gt; We need to retrieve a list of metadata (name, id, etc. but not the full data) for all of the items within collections.&lt;/li&gt;
&lt;li&gt; We need to retrieve a specific item by ID from a specific collection from the latest version of a project or a specific version of a project.&lt;/li&gt;
&lt;li&gt; We will (rarely) need to list all of the versions of a project in order and should be able to see the full project state and the (ordered) mutations that were applied to produce any given version.&lt;/li&gt;
&lt;li&gt; We will need to query for whether a given mutation has already been applied.&lt;/li&gt;
&lt;li&gt; We will need to query for the latest mutation from a given client that has been applied to any version of a project.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;First, let’s look at some simple approaches.&lt;/p&gt;

&lt;p&gt;In a relational model, we could transactionally copy every record related to a project, re-write all of them for the new version, and store an ordered list of client IDs and their mutations, as they were applied.&lt;/p&gt;

&lt;p&gt;Somewhat similarly, we could store all of the data for a project in one record, either entirely in DynamoDB, entirely in S3, or in DynamoDB but spilling out to S3 if we exceed the 400kb record size limit. Then, whenever we process an update, we can take out a lock, read everything for the project, apply the mutations, and write everything for the new project version along with a list of client IDs and mutations.&lt;/p&gt;

&lt;p&gt;These approaches share a common issue: significant write amplification. That’s an effect where a small change to one part of one item in a collection would require writing the entire structure again. As we said, some of these structures may be large-ish and DynamoDB charges per 1kb chunk on writes. Even with compression, we would be duplicating quite a bit of repetitive data, driving up costs.&lt;/p&gt;

&lt;p&gt;However, we would be able to fulfill all of our query needs with these approaches in any of our potential datastores in a fairly reasonable manner (S3 would appear trickier here but &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_SelectObjectContent.html" rel="noopener noreferrer"&gt;S3 Select&lt;/a&gt; would allow us to retrieve only the portion of the content that we needed). Unfortunately, while DynamoDB and S3 allow us to project only a portion of our data for queries, they charge based on the full object size.&lt;/p&gt;

&lt;p&gt;We were concerned about the cost of DynamoDB storage and writes with so much write amplification, and about the storage cost of duplicating everything for every project version in S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So we decided to take a multi-tier approach to our data storage, inspired partly by git and partly by Datomic. We’ll divide our data into versioned entities and blob entities.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, we’ll define a level of granularity at which we’ll treat data as content-addressed blobs. Above that level of granularity, we’ll maintain versioned records that point into those blobs. For us, we defined items within collections as our blob layer and everything above that as our versioned layer. So, for example, projects and collections are monotonically versioned while collection items are content-addressed. We’ll store our blobs in S3 and our versioned entities in DynamoDB. This allows us consistently fast access to versioned entities and their metadata without suffering from the full write amplification that came along with the simple approaches we discussed above.&lt;/p&gt;

&lt;p&gt;We store our blobs (each collection item) in S3, content-addressed, under the project namespace using a key of the form: …/{sha256(item contents)}.json.gz.&lt;/p&gt;

&lt;p&gt;We store the project structure, with pointers to our content-addressed items, in DynamoDB, using a single-table schema to satisfy all of the queries we mentioned above. As we’ll see, in our usage, a single DynamoDB table is used more for its scaling and cost benefits than for its query benefits.&lt;/p&gt;

&lt;p&gt;Let’s take a look at our DynamoDB data. We have four types of entities that we’ll be storing: projects, project versions, collection versions, and mutations.&lt;/p&gt;

&lt;p&gt;First, we store our project IDs by the organization that each belongs to. This data is essentially write-once, at project creation, and supports listing the projects for an organization by querying by the partition key or retrieving the organization for a project by querying the inverted index by project ID.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5jq7il8cpy9bbnuc2e9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5jq7il8cpy9bbnuc2e9.png" width="500" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we have our project versions. Project versions are keyed by the project ID they belong to and have a lexicographically increasing version number. This allows us to easily query for the most recent version. Each record contains the scalars we mentioned for the project and points to the version number of each collection for the project. We also store an ordered list of mutations that were applied to produce that project version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fan01naqsbm4pgyfd8u6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fan01naqsbm4pgyfd8u6g.png" width="500" height="300"&gt;&lt;/a&gt;&lt;/p&gt;
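&lt;p&gt;A quick sketch of why zero-padded version numbers work here (a minimal illustration, assuming the pv_/c1v_ formats shown above): padded strings sort lexicographically in the same order as their numeric values, so a descending query on the sort key returns the latest version first.&lt;/p&gt;

```javascript
// Increment a zero-padded version string like "pv_000010" -> "pv_000011".
// The padding keeps lexicographic order aligned with numeric order, which
// is what makes "query descending, limit 1" return the latest version.
function nextVersion(version) {
  const sep = version.lastIndexOf("_");
  const prefix = version.slice(0, sep);
  const num = version.slice(sep + 1);
  const next = String(parseInt(num, 10) + 1).padStart(num.length, "0");
  return `${prefix}_${next}`;
}
```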

&lt;p&gt;We already introduced our collection version numbers in our project version schema. Each new version of a collection within our project (e.g. adding or modifying an item) has a record, keyed by the project ID, the collection name, and the version number (plus a sequence number in case we need to spill over into multiple records due to DynamoDB’s 400 KB item limit, a very unlikely scenario). Each record contains a map from item IDs to the hash of each item’s contents, which is its address in S3. So, if we want to look up the items for Collection1 for project p_4567, we can first query for the latest project version (pk = “Project#p_4567” and begins_with(sk, “ProjectVersion#”), sort descending by sort key, limit 1), get the collection1Version from it, then query for the collection version (pk = “Project#p_4567” and begins_with(sk, “Collection1#c1v_000002#”)), and look up the collection items in S3 using the provided hashes. With some clever query logic, we can actually optimize out the project version lookup by taking advantage of our monotonic collection versions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxuwm85tgtf75gw23sc71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxuwm85tgtf75gw23sc71.png" width="500" height="300"&gt;&lt;/a&gt;&lt;/p&gt;
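&lt;p&gt;The two-step lookup above could be expressed as DocumentClient-style query parameters along these lines (the table name and the pk/sk attribute names are assumptions for illustration, not from our schema):&lt;/p&gt;

```javascript
// Step 1: fetch the latest project version by querying descending on the
// sort key with a limit of 1.
function latestProjectVersionQuery(projectId) {
  return {
    TableName: "projects",
    KeyConditionExpression: "pk = :pk AND begins_with(sk, :sk)",
    ExpressionAttributeValues: {
      ":pk": `Project#${projectId}`,
      ":sk": "ProjectVersion#",
    },
    ScanIndexForward: false, // sort descending by sort key
    Limit: 1, // latest version only
  };
}

// Step 2: fetch the collection version record (all spill-over sequence
// numbers) for the version pointed to by the project version record.
function collectionVersionQuery(projectId, collectionName, collectionVersion) {
  return {
    TableName: "projects",
    KeyConditionExpression: "pk = :pk AND begins_with(sk, :sk)",
    ExpressionAttributeValues: {
      ":pk": `Project#${projectId}`,
      ":sk": `${collectionName}#${collectionVersion}#`,
    },
  };
}
```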

&lt;p&gt;Finally, we have our mutations. We add a record for every mutation to our project, with a partition key that includes the project ID and a sort key that includes the client ID and the mutation ID, where it is each client’s responsibility to ensure monotonically increasing mutation IDs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vs1nhprrkmb7ko16mgu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vs1nhprrkmb7ko16mgu.png" width="500" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;So, let’s look at how a simple processor would deal with a batch of mutations for a particular project.&lt;/p&gt;

&lt;p&gt;Imagine we start with data like this, representing a project whose name was just changed from “My project” to “My project (edited)” in the same “commit” in which the contents of the “id1” item in collection 1 changed, its hash going from “abc” to “xyz”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwt8frclx9x2akl5tnuq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwt8frclx9x2akl5tnuq.png" width="500" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, one client with client ID c_client1 submits a mutation with ID “m_mut1” to modify the item with ID “id1” in collection 1 of project “p_4567”.&lt;/p&gt;


&lt;p&gt;The processor performs the following:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; Check if our mutation has already been applied&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1a.&lt;/strong&gt; Query for pk = “Mutation#c_client1#m_mut1” and begins_with(sk, “Project#”)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1b.&lt;/strong&gt; We see that the mutation has not yet been applied so we continue&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt; Find the current project version (we can optimize out this query depending on our query patterns)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2a.&lt;/strong&gt; Query for pk = “Project#p_4567” and begins_with(sk, “ProjectVersion#”), sorting descending.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2b.&lt;/strong&gt; We find this item:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6zxtoxbrg358it15evy2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6zxtoxbrg358it15evy2.png" width="500" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.&lt;/strong&gt; Query the collection version to find the pointer to the data for the id1 item in S3&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3a.&lt;/strong&gt; Query for pk = “Project#p_4567” and begins_with(sk, “Collection1#c1v_000002#”)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3b.&lt;/strong&gt; We find this item:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwuy2ge095d2uezksanc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwuy2ge095d2uezksanc.png" width="500" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.&lt;/strong&gt; Load the id1 item from S3 with key: /o_123/projects/p_4567/Collection1/def.json.gz&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5.&lt;/strong&gt; Execute our mutation against the id1 data&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6.&lt;/strong&gt; Hash the new contents of id1 to determine the new version identifier, let’s call it newHash&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7.&lt;/strong&gt; Write id1 to S3 at key: …/{newHash}.json.gz&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8.&lt;/strong&gt; Increment the project version number (pv_000010) to find our new project version: pv_000011&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9.&lt;/strong&gt; Increment the Collection1 version (c1v_000002) to find our new Collection1 version: c1v_000003&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10.&lt;/strong&gt; Transactionally write our updates:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrvnpaqhlqaanhufbtz5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrvnpaqhlqaanhufbtz5.png" width="500" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;11.&lt;/strong&gt; Now, we’ve stored our new id1 item in S3, written our new collection version pointing to the id1 item in S3, appended our new project version pointing to our new collection version, and written a record of our mutation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Guarantees&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now let’s examine what could go wrong.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Other mutations to the same project could have been submitted and may be processed concurrently. Specifically, if both mutations read project version x, apply their mutation, and write project version y, one of the mutations will have been effectively dropped. We will address this below.&lt;/li&gt;
&lt;li&gt;  Our processor might fail and the requester might retry. We apply mutations idempotently, performing a consistent read to skip processing already-applied updates and, because we wrote our mutation record transactionally with the project version update, we won’t double-apply a mutation. As long as the requester continues to retry, we will eventually apply the mutation. Outstanding issues: we need to ensure that clients perform retries and we need to ensure mutation ordering within a given client in the presence of retries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We still have some outstanding issues that our datastore structure left unaddressed. Specifically, we need to ensure that mutations are retried, we need those retries to remain ordered across retries and new mutations coming from the same client for the same project, and we need to ensure that only one processor is applying mutations to any given project at any given time.&lt;/p&gt;

&lt;p&gt;To satisfy those requirements, we will use an &lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html" rel="noopener noreferrer"&gt;SQS FIFO queue&lt;/a&gt; with each message specifying the project ID as its &lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues-message-order.html" rel="noopener noreferrer"&gt;MessageGroupID&lt;/a&gt;. We will configure a lambda processor for our queue, fulfilling our serverless goal.&lt;/p&gt;
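&lt;p&gt;The enqueue side might look like this (the queue URL and payload layout are assumptions; the key point is using the project ID as the MessageGroupId, which is what serializes processing per project):&lt;/p&gt;

```javascript
// Parameters for SQS SendMessage on a FIFO queue. Using the project ID
// as the MessageGroupId guarantees per-project ordering and ensures only
// one in-flight batch per project; a client-scoped deduplication ID
// guards against duplicate submissions within SQS's dedup window.
function mutationMessage(queueUrl, projectId, clientId, mutationId, payload) {
  return {
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify(payload),
    MessageGroupId: projectId,
    MessageDeduplicationId: `${clientId}#${mutationId}`,
  };
}
```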

&lt;p&gt;Let’s ensure that we’ve addressed our outstanding issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  We trivially satisfy our retry requirement. Lambda will only delete items from the queue after they have been successfully processed (or written to a dead-letter queue) and we already ensure idempotency.&lt;/li&gt;
&lt;li&gt;  FIFO queues are designed specifically to ensure that items within a message group are processed in the order in which they were added to the queue. As long as our clients send their mutations in-order, this architecture ensures that, regardless of retries, all mutations for a given project for a given client will be processed in the same order in which they occurred on each client.&lt;/li&gt;
&lt;li&gt;  FIFO queues are also designed to ensure that no other items from a given message group will be dispensed while other items from that group are in-flight. This ensures that only one processor will process mutations for a project at any time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The client side&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We’ll cover the client side of our sync solution in another post. Stay tuned!&lt;/p&gt;

</description>
      <category>career</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Sorry to tell you this but your epistemology is showing</title>
      <dc:creator>Adam Berger</dc:creator>
      <pubDate>Wed, 07 Dec 2022 15:43:11 +0000</pubDate>
      <link>https://forem.com/abrgr/sorry-to-tell-you-this-but-your-epistemology-is-showing-487b</link>
      <guid>https://forem.com/abrgr/sorry-to-tell-you-this-but-your-epistemology-is-showing-487b</guid>
      <description>&lt;p&gt;We're still a young industry. We've been building bridges for a few thousand years and we've only been building software products for a few decades. We've discovered some hard and fast rules for building better software products but much of the wisdom we've acquired now gets bandied about in the form of maxims, many with a kernel of deep truth, but passed around without context.&lt;/p&gt;

&lt;p&gt;Engineers throw around phrases like YAGNI ("you ain't gonna need it") and "premature abstraction is the root of all evil" in well-intentioned attempts to prevent their colleagues from indulging in what we've commonly branded "over-engineering." Product managers champion "ruthless prioritization," justifying a compelling minimalism, forever under threat from the siren song of "more."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w7h_vjMh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/63909ee18a8b418da77b4579_yagni%2520ruthlessly%2520prioritize.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w7h_vjMh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/63909ee18a8b418da77b4579_yagni%2520ruthlessly%2520prioritize.png" alt="Premature abstraction is the root of all evil; ruthlessly prioritize; you ain't gonna need it" width="800" height="784"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These ideas are important and stem from deep, fundamental truths about the nature of the products we build.&lt;/p&gt;

&lt;p&gt;But, as with everything in our industry, context matters. A command--any command--divorced from its purpose is a dangerous thing. That's why so much of a product manager's time is spent understanding and then building empathy for the real problems users face and the actual jobs they seek to accomplish with the product the team is building. That's why--often--the majority of an engineering manager's job is spent reinforcing the purpose of the team's work, connecting the code to the outcome.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Why" matters.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, let's examine the why behind these minimalist maxims. What truth about our products underlies them and what determines their applicability? When and why aren't you gonna need it and when might you? When and why should you abstract and when should you solve only the immediate problem? When and why should you "ruthlessly" prioritize and when should you pursue the mythical p3, only so named because we had so many p1s we decided to make a new p0 category?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xI7VHJKG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/63909ec7c4374287bb9ac46b_why.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xI7VHJKG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/63909ec7c4374287bb9ac46b_why.png" alt="wondering why" width="650" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The answer lies in epistemology.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But first, let's agree on what our job is as a member of a product team. Our first instinct might be to say that we are solving user problems and enabling users to more efficiently accomplish the jobs they have to do. Unfortunately, the most direct way to accomplish that is not through software; it's by sitting down with them as their assistant. Our actual job, then, is to solve those user problems at scale, more efficiently than we could by assigning ever more (and ever smarter) humans to each customer. How are we to accomplish that? By building in malleable software a model of the domain that our users work in, allowing them to easily map domain-level concepts to product-level concepts and domain-level operations to product-level operations.&lt;/p&gt;

&lt;p&gt;That's it. That's the primary job of every member of a product team: product manager, engineer, designer, copywriter, etc. Build a coherent model of the domain your customer is working in and allow them to easily map the things they want to do in the real world onto your representation of it, to manipulate that representation of the thing they actually care about, and then to map back from your representation into the real world.&lt;/p&gt;

&lt;p&gt;If we do that well, our users don't even realize they're doing this mapping. They see a file and they drag it into a folder because that perfectly matches their mental model of what they want to do (for some version of a 1980s-era customer who still knew what a file folder was and once handled physical pieces of trees with writing on them).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That may sound easy but it is likely the hardest thing humans have attempted to do.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We've been trying to categorize things and create &lt;a href="https://en.wikipedia.org/wiki/Sophist_(dialogue)"&gt;ontologies&lt;/a&gt; for our world for about as long as we've been using language.&lt;/p&gt;

&lt;p&gt;And exactly what rules govern that process and how best to approach it is still an unresolved question but the basis for any answer lies in how we understand reality, how we understand what’s true and what isn’t and why.&lt;/p&gt;

&lt;p&gt;Epistemology is our attempt at answering the question: what's true and how do you know it's true?&lt;/p&gt;

&lt;p&gt;That’s our route to improving the way we build and evaluate our models of the world.&lt;/p&gt;

&lt;p&gt;So let's get back to our favorite product-building aphorisms.&lt;/p&gt;

&lt;p&gt;Historically, everything was built in a full waterfall model with a big design up front and implementation at the end. This reflects a perspective of epistemological certainty, a belief that everything is knowable and prestatable.&lt;/p&gt;

&lt;p&gt;The maxims we mentioned at the beginning (ruthless prioritization, YAGNI, premature abstraction) all embody a reaction against the obvious absurdity of total epistemological certainty; they reflect a complete embrace of epistemic uncertainty, of the idea that the world is inherently or effectively un-modelable, &lt;a href="https://www.npr.org/sections/13.7/2011/04/04/135113346/there-are-more-uses-for-a-screwdriver-than-you-can-calculate"&gt;un-prestatable&lt;/a&gt;, and--to a large extent--un-&lt;em&gt;know&lt;/em&gt;able.&lt;/p&gt;

&lt;p&gt;Unfortunately, both worldviews are wrong in their absolutism. The obvious reality for anyone who has spent much time observing the world is that &lt;em&gt;some things&lt;/em&gt; are knowable and prestatable and &lt;em&gt;some things&lt;/em&gt; are not. And &lt;em&gt;some things&lt;/em&gt; are knowable in principle but we just don’t know them yet.&lt;/p&gt;

&lt;p&gt;Now, this does not imply that each category of thing is partially knowable. The truth rarely lies in the middle. It means just what we said: some things are knowable and some things are not knowable. In some areas, you can design up front and get it right and in other areas, you just have to iteratively unfold your product, letting it co-evolve with your user base.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The correct epistemic stance--and, therefore, the correct product development methodology--depends on the domain you're modeling.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many things in the domains we care about have natural &lt;a href="https://algebradriven.design/"&gt;algebras&lt;/a&gt;, a set of objects and expected relationships between them. A collection of things, for example, can be set-like or list-like and we know quite well what types of things people do with set-like or list-like objects. To pretend that you know you want an add operation for a set-like thing (like user permissions) but can't possibly tell in advance whether a delete operation is necessary or what its priority is doesn't quite pass the smell test after millennia of human experience working with sets of things. There is regularity in nature in the way that objects relate to each other and that regularity underlies the domains we care about. It's our responsibility as product builders to recognize the true nature of the objects we're representing and to expose it to our customers. In these domains, missing entities, relations, or operations just confuse and frustrate users and inevitably slow down our ability to improve our product. Working around these missing pieces even distorts the trajectory that our product takes over time.&lt;/p&gt;

&lt;p&gt;But not everything has an obvious, prestatable algebra that relates it to the other entities in our products. In fact, our products as a whole relate in un-prestatable ways to other products, our customers' environments, and the rest of the world. They are part of a co-evolving, complex adaptive system, where interactions between parts constantly create novel niches and affordances that other parts can inhabit and make use of. It is not just that we don’t currently know how our product will need to evolve in this dynamic environment but it is likely even in principle impossible to know the specifics of how our product will need to adapt over time. Here, there is no absolute knowledge. The exact details of the way that these systems unfold are unknowable beforehand; they are &lt;a href="https://mathworld.wolfram.com/ComputationalIrreducibility.html"&gt;computationally irreducible&lt;/a&gt; so the only way to know where you will end up is to play every step of the game between now and then.&lt;/p&gt;

&lt;p&gt;But even for these aspects of our domains, even when the details of how our product needs to work in the future cannot be known, we need not completely give up design and abstraction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We know more than we might think.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We know that our product will need to evolve and we know that interactions between concepts and parts of our systems create constraints that make it more difficult to evolve the system. We know that the more closely the relations between the components of our system match the relations between the concepts we know about in our domain, the more easily we will be able to model new domain concepts in our product.&lt;/p&gt;

&lt;p&gt;Evolvability does not arise as a natural consequence of taking small steps or building sprint by sprint. Evolvability is not a property that derives from how something is built. &lt;a href="https://www.amazon.com/At-Home-Universe-Self-Organization-Complexity/dp/0195111303"&gt;Evolvability&lt;/a&gt; is a property of the built thing; it derives from the structure of the relations between the parts of our product and it can be explicitly designed into our products.&lt;/p&gt;

&lt;p&gt;This is why understanding the deeper reasoning behind the agile maxims is important. &lt;strong&gt;If you happen to be working on a part of a product where the epistemic premises for working in an agile fashion are met, you are also working on a part of a product where the most important trait of your system is evolvability.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many take these maxims to mean that we should just directly build concrete solutions for individual user stories, one after the other. Instead, we think the answer is to build an evolvable product, where the concepts you know about are properly represented (reified), with hard interfaces between components, and with abstractions introduced to keep interactions between components to a minimum. Abstraction and structure become more important when building evolvable systems, not less. If you leave a domain concept that you've uncovered un-represented in the product, it becomes harder to operate with agility, not easier.&lt;/p&gt;

&lt;p&gt;Ruthlessly prioritize to maximize your rate of learning, not as an end in and of itself. Abstract as you learn and as the true nature of things is uncovered but not before. Consider whether there's a natural relation between the thing you're told you ain't gonna need and the rest of your product before you throw it away.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is this on our mind at Simply Stated these days?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uuf9DxP1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/63909fb3726dee8f36fe53b2_picasso-flowchart.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uuf9DxP1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/63909fb3726dee8f36fe53b2_picasso-flowchart.png" alt="Depiction of a user flow" width="800" height="662"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because every product we know of is composed of user flows. The user flow is the abstraction that ties together all of the parts of the product from the customer perspective. The user flow is one of the primary things that product teams work on and that customers experience.&lt;/p&gt;

&lt;p&gt;And there are almost no product teams that can point to their user flow. There's almost never a place in the system that encodes the user flow. And that's a problem.&lt;/p&gt;

&lt;p&gt;That's a problem that we're aiming to help fix.&lt;/p&gt;

&lt;p&gt;Because the user flow is where the action is. That’s the part of your system whose details will evolve most rapidly, where agility commands the greatest premium.&lt;/p&gt;

&lt;p&gt;We have no idea whether the specifics of particular steps of your user flow are in the knowable or unknowable category but we’re fairly certain that the coarse-grained relationships between those steps are eminently knowable.&lt;/p&gt;

&lt;p&gt;And we think that making user flows real and tangible will help all of us build more evolvable, adaptable products.&lt;/p&gt;

&lt;p&gt;Stay tuned for more.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The storyteller makes no choice&lt;br&gt;&lt;br&gt;
Soon you will not hear his voice&lt;br&gt;&lt;br&gt;
His job is to shed light&lt;br&gt;&lt;br&gt;
And not to master&lt;br&gt;&lt;br&gt;
- &lt;a href="https://www.youtube.com/watch?v=3I7CLy70WtI"&gt;Terrapin Station&lt;/a&gt;, Grateful Dead&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>design</category>
      <category>programming</category>
      <category>architecture</category>
    </item>
    <item>
      <title>f-of-xstate: run some logic on your logic</title>
      <dc:creator>Adam Berger</dc:creator>
      <pubDate>Tue, 22 Nov 2022 15:33:23 +0000</pubDate>
      <link>https://forem.com/abrgr/f-of-xstate-run-some-logic-on-your-logic-3273</link>
      <guid>https://forem.com/abrgr/f-of-xstate-run-some-logic-on-your-logic-3273</guid>
      <description>&lt;p&gt;Introducing f-of-xstate: free the insight and knowledge you worked so hard to build.&lt;/p&gt;

&lt;p&gt;Every day as programmers and engineers, we write logic. If this, then that. When this, do that. Etc.&lt;/p&gt;

&lt;p&gt;But with too few exceptions, we write that logic as text in a format that's so hard to glean real semantic meaning from that we've all probably taken at least one full undergraduate course about how to extract information from program text, transform it, and write it out again in a format more conducive to execution.&lt;/p&gt;

&lt;p&gt;The closest most of us come to real, automated inspection or modification of logic tends to be in the form of macros or evals and the general advice on both counts is, most charitably, to use them as a last resort.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But does that really make sense?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We spend the vast majority of our time thinking about our program logic and then codifying all of that understanding about our problem domains, esoteric edge cases, and easy-to-miss implementation gotchas into a prison of text. And should we ever want to free those gems of insight in the future, we will almost certainly need a human to read through everything we've written.&lt;/p&gt;

&lt;p&gt;Our logic encodes some of the most important intellectual property we produce. It is the synthesis of everything we've learned. And we lock it up in text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There is a better way.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We'd like to introduce &lt;a href="https://github.com/simplystated/f-of-xstate"&gt;f-of-xstate&lt;/a&gt;, our recently-open-sourced library for querying and modifying your app logic.&lt;/p&gt;

&lt;p&gt;First of all, we can express our logic in &lt;a href="https://statecharts.dev/"&gt;statecharts&lt;/a&gt;. If you're working in javascript, &lt;a href="https://github.com/statelyai/xstate"&gt;XState&lt;/a&gt; is a great choice for writing and executing your app logic as a statechart. Just by moving your logic from text to data structures, specifically statecharts, you'll already have gained clearer insight into your own understanding of your problem space.&lt;/p&gt;

&lt;p&gt;Next, we can query the structure of our statechart to derive new meta-facts about our logic. This is where f-of-xstate comes in. We aim to make it easier to programmatically introspect your statecharts. We'll see an example below.&lt;/p&gt;

&lt;p&gt;That's not all: we can also use f-of-xstate to easily modify our logic, layering on generic functionality or enforcing important invariants.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Show me the code!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's see an example of querying our logic. Let's imagine that we have some set of destructive actions that should always be guarded by user confirmation. In &amp;lt;20 lines of code, we can write a function that checks our logic to ensure that property holds. You could also imagine building something more involved like a tool to show users step-by-step instructions from wherever they are in your app to any goal state they want to reach.&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://codesandbox.io/embed/x5bizd"&gt;
&lt;/iframe&gt;
&lt;/p&gt;
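&lt;p&gt;To make the idea concrete without relying on the actual f-of-xstate or XState APIs, here is logic-as-data in plain JavaScript: a statechart expressed as an object (its shape loosely modeled on XState config, purely illustrative), plus a small query that flags any transition into a destructive state that doesn't come from a confirmation state.&lt;/p&gt;

```javascript
// A statechart as plain data. States marked `dangerous` represent
// destructive actions that should only be reachable via confirmation.
const machine = {
  initial: "idle",
  states: {
    idle: { on: { DELETE: "confirming" } },
    confirming: { on: { CONFIRM: "deleting", CANCEL: "idle" } },
    deleting: { dangerous: true, on: {} },
  },
};

// Query the structure: return every transition that targets a dangerous
// state from anywhere other than a confirmation state.
function unguardedDangerousTransitions(machine) {
  const bad = [];
  for (const [stateName, state] of Object.entries(machine.states)) {
    for (const [event, target] of Object.entries(state.on || {})) {
      if (machine.states[target].dangerous) {
        if (!stateName.includes("confirm")) {
          bad.push(`${stateName} --${event}--> ${target}`);
        }
      }
    }
  }
  return bad;
}
```

&lt;p&gt;Because the logic is data, the check is an ordinary traversal rather than a human reading source text.&lt;/p&gt;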

&lt;p&gt;Let's check out one more example. This time, instead of just detecting dangerous states, we'll fix them! Given any "dangerous" machine that might delete a user's data without confirmation, we'll convert it into a "safe" machine that always prompts for confirmation.&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://codesandbox.io/embed/f4uk5l"&gt;
&lt;/iframe&gt;
&lt;/p&gt;
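
&lt;p&gt;In the same hedged spirit, here is a simplified sketch of that transformation over a plain XState-style config (the sandbox shows the f-of-xstate version). It assumes flat machines with single-object transitions and no guards, and the generated state names are an arbitrary convention:&lt;/p&gt;

```javascript
// Rewrite an XState-style config so every transition that runs a
// destructive action first detours through a generated confirmation
// state. Simplifications: flat machine, single-object transitions,
// no guards. Names and conventions here are illustrative assumptions.
const destructiveActions = new Set(["deleteDocument"]);

function guardDestructiveTransitions(config) {
  const states = {};
  for (const [stateName, stateDef] of Object.entries(config.states ?? {})) {
    const on = {};
    for (const [eventName, transition] of Object.entries(stateDef.on ?? {})) {
      const actions = transition.actions ?? [];
      if (actions.some((a) => destructiveActions.has(a))) {
        // Detour: the original transition now fires only on CONFIRM.
        const confirmState = stateName + "_" + eventName + "_confirm";
        on[eventName] = { target: confirmState };
        states[confirmState] = {
          on: {
            CONFIRM: transition,
            CANCEL: { target: stateName },
          },
        };
      } else {
        on[eventName] = transition;
      }
    }
    states[stateName] = { ...stateDef, on };
  }
  return { ...config, states };
}
```

&lt;p&gt;Because the machine is data, "make every dangerous machine safe" is a pure function from config to config.&lt;/p&gt;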

&lt;p&gt;Head over to &lt;a href="https://github.com/simplystated/f-of-xstate"&gt;simplystated/f-of-xstate&lt;/a&gt; to see more examples (and star the repo for later).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Once in a while, you get shown the light&lt;br&gt;&lt;br&gt;
In the strangest of places if you look at it right&lt;br&gt;&lt;br&gt;
- &lt;a href="https://www.youtube.com/watch?v=Kj_kK1j3CV0"&gt;Scarlet Begonias&lt;/a&gt;, Grateful Dead&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Follow us &lt;a href="https://www.twitter.com/abrgrBuilds"&gt;@abrgrBuilds&lt;/a&gt; and join &lt;a href="https://www.simplystated.dev"&gt;our newsletter&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adam Berger
Founder, Simply Stated&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>opensource</category>
      <category>logicapps</category>
    </item>
    <item>
      <title>Engineering is tradeoffs... right?</title>
      <dc:creator>Adam Berger</dc:creator>
      <pubDate>Wed, 16 Nov 2022 01:05:12 +0000</pubDate>
      <link>https://forem.com/abrgr/engineering-is-tradeoffs-right-4dfb</link>
      <guid>https://forem.com/abrgr/engineering-is-tradeoffs-right-4dfb</guid>
      <description>&lt;p&gt;Efficient frontiers, problem structure, and how to reframe your way to a free lunch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wqHbqVbz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/634839279623466a4b8cc401_Blog_Images-tradeoffs_feature.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wqHbqVbz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/634839279623466a4b8cc401_Blog_Images-tradeoffs_feature.png" alt="Engineering is tradeoffs... right?" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We always talk about tradeoffs. Strike up a conversation with most engineers about almost any problem they've worked on and you're likely to hear a lot about how they had to consider and choose from among various tradeoffs.&lt;/p&gt;

&lt;p&gt;Living in a tradeoff-rich world, many of us start to assume that literally &lt;em&gt;everything&lt;/em&gt; requires tradeoffs. Want better performance? Better sacrifice readability. Want quicker time to market? Get ready to give up reliability and correctness. &lt;a href="https://en.wikipedia.org/wiki/Pareto_efficiency"&gt;Pareto improvements&lt;/a&gt;? Never!&lt;/p&gt;

&lt;p&gt;...But does every improvement really require a sacrifice somewhere else?&lt;/p&gt;

&lt;h3&gt;
  
  
  The efficient frontier
&lt;/h3&gt;

&lt;p&gt;Let's take a quick diversion into finance. Modern portfolio theory is based on the idea that every portfolio (basket of assets) has a volatility and expected return derived from the assets and their cross-correlations. The space of possible volatility/return pairs is constrained by the available assets and, in particular, their inter-relationships (correlations). That is, if you plot the volatility/return of all possible portfolios composed of some set of assets, you'll find a boundary you can't cross. At that boundary, called the &lt;strong&gt;efficient frontier&lt;/strong&gt;, you can pick your tradeoff between risk and return but you can never construct a portfolio with a higher return and equivalent risk or lower risk and equivalent return to one on the efficient frontier. However, if you have a portfolio &lt;em&gt;inside&lt;/em&gt; the efficient frontier, you can do strictly better--increase your return and keep the same level of volatility or reduce your volatility and keep your same return--by moving up or over to a point on the efficient frontier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LsR1Belv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/6366828226b9afcf4d3a614e_Markowitz_frontier.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LsR1Belv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/6366828226b9afcf4d3a614e_Markowitz_frontier.jpg" alt="Efficient frontier" width="434" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://commons.wikimedia.org/w/index.php?curid=21166729"&gt;By User:G2010a - Own work, Public Domain&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As an example, if your portfolio consists entirely of umbrella maker stock, you could probably achieve higher returns with lower risk by adding a few shares of an ice cream manufacturer. However, there is no combination of umbrella and ice cream stock that will get you 50% returns with 10% volatility--that is beyond the efficient frontier.&lt;/p&gt;
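
&lt;p&gt;The two-asset arithmetic behind that intuition is compact enough to sketch. All the numbers below are made up for illustration:&lt;/p&gt;

```javascript
// Two-asset portfolio math behind the diversification intuition above.
// Expected return is the weighted average of the assets' returns;
// variance also depends on the correlation between them.
// All numeric inputs are made-up illustrations.
function portfolio(w1, r1, r2, vol1, vol2, corr) {
  const w2 = 1 - w1;
  const expectedReturn = w1 * r1 + w2 * r2;
  const variance =
    w1 * w1 * vol1 * vol1 +
    w2 * w2 * vol2 * vol2 +
    2 * w1 * w2 * vol1 * vol2 * corr;
  return { expectedReturn, volatility: Math.sqrt(variance) };
}

// Umbrellas alone: 8% return at 30% volatility. A 50/50 umbrella and
// ice-cream mix with correlation -0.5 keeps the 8% return but cuts
// volatility to 15%: strictly better, with no tradeoff made.
```

&lt;p&gt;The negative correlation does the work: the portfolio started inside the efficient frontier, so a Pareto improvement was available.&lt;/p&gt;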

&lt;h3&gt;
  
  
  The efficient frontier... in engineering
&lt;/h3&gt;

&lt;p&gt;So, how is this related to software and engineering?&lt;/p&gt;

&lt;p&gt;First of all, we need to swap the x-axis from the above picture. In engineering, our axes represent limited quantities; we'll assume more is better, but there tends to be a maximum achievable quantity on each axis. (Technically, we're now talking about &lt;a href="https://en.wikipedia.org/wiki/Pareto_front"&gt;Pareto frontiers&lt;/a&gt;.) For the rest of this article, imagine x and y axes labeled with quantities like performance, readability, agility, ease of maintenance, scalability, engineering time, or number of engineers, such that more is "better."&lt;/p&gt;

&lt;p&gt;Anyway, once you're &lt;strong&gt;on&lt;/strong&gt; the efficient frontier, everything really is just tradeoffs. There is no free lunch. You must give up something to get more of something else.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1qffmo1w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/6349a5124a6db3e3d513cfe2_efficient%2520frontier%2520%284%29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1qffmo1w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/6349a5124a6db3e3d513cfe2_efficient%2520frontier%2520%284%29.png" alt="It's all tradeoffs on the efficient frontier" width="303" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But if you're &lt;strong&gt;inside&lt;/strong&gt; the efficient frontier, you actually can do strictly better. You don't need to give up any x to get more y. You have a menu of tradeoff-free improvements to choose from.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XU5e52La--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/6349a5676fe7a927228b2309_efficient%2520frontier%2520%285%29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XU5e52La--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/6349a5676fe7a927228b2309_efficient%2520frontier%2520%285%29.png" alt="Pareto improvements inside the efficient frontier" width="303" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, is everything really just a matter of picking your poison and making the right sacrifices to hit a reasonable set of tradeoffs?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Probably not.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There is plenty of knowledge sharing and competition driving us all to find and adopt better and better ways to structure our systems. But how sure are you that you're actually on the efficient frontier--that you've reached the best of all possible configurations, such that nothing can be improved without giving up something in return? Based on what we've seen in industry, that's just not very likely.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FhjAu42K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/634830000ff4f5f4b2f88f6d_efficient%2520frontier.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FhjAu42K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/634830000ff4f5f4b2f88f6d_efficient%2520frontier.png" alt="Are you sure you're on the efficient frontier?" width="303" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the very least, whenever you're discussing a tradeoff you &lt;em&gt;think&lt;/em&gt; you have to make, spend some time to really convince yourself that you're already on the efficient frontier, living in one of the best of all possible worlds.&lt;/p&gt;

&lt;h3&gt;
  
  
  But what if you decide you really are on the efficient frontier?
&lt;/h3&gt;

&lt;p&gt;Surely, if you're on the efficient frontier (or the pareto front), then you must really have to choose, right?&lt;/p&gt;

&lt;p&gt;Maybe.&lt;/p&gt;

&lt;p&gt;But remember, the efficient frontier was determined by the interrelationships between the components of our system or, as I like to call it, by the &lt;strong&gt;structure of the system&lt;/strong&gt;. In the portfolio theory case, the structure was governed by the correlations between assets. In our engineering case, the structure of the system is governed by the constraints of our problem space.&lt;/p&gt;

&lt;p&gt;It's tough as a market participant to change asset correlations (and I'm fairly certain that the SEC looks unkindly on those who attempt it).&lt;/p&gt;

&lt;p&gt;It's also tough but eminently possible as an engineer to change your conception of your problem space.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://reactjs.org/"&gt;React&lt;/a&gt; did just that. Prior to React, we were all solving the problem of mutating the DOM in response to events and--wow!--that problem has a shallow efficient frontier. React reframed the problem to one of updating abstract state in response to events and generating a DOM based on that state and immediately pushed out the frontier of the possible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jUjvT4RD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/636685e5c4e75f458efac4f6_efficient%2520frontier%2520%286%29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jUjvT4RD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://uploads-ssl.webflow.com/629a68bab9ecd2214988c714/636685e5c4e75f458efac4f6_efficient%2520frontier%2520%286%29.png" alt="Push out the efficient frontier" width="303" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The key insight here is that the same business problems can be approached by solving any one of many possible technical problems. The mapping from business problem to technical problem is many-to-many. But the choice of technical problem determines the structure of the technical domain, which determines the efficient frontier of what's possible.&lt;/p&gt;

&lt;p&gt;Pushing out the efficient frontier by uncovering a new problem structure that subsumes a significant set of use cases of the current best alternative is no mean feat. These are the events in the history of programming we still talk about decades later and look back on the before times with astonishment: "they used to program like that?!?!" &lt;a href="http://jmc.stanford.edu/computing-science/timesharing.html"&gt;Time sharing&lt;/a&gt;, &lt;a href="http://www.u.arizona.edu/~rubinson/copyright_violations/Go_To_Considered_Harmful.html"&gt;structured programming&lt;/a&gt;, &lt;a href="http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en"&gt;OOP&lt;/a&gt;, &lt;a href="http://jmc.stanford.edu/articles/recursive/recursive.pdf"&gt;functional programming&lt;/a&gt;, &lt;a href="https://static.googleusercontent.com/media/research.google.com/en//archive/mapreduce-osdi04.pdf"&gt;map/reduce&lt;/a&gt;, &lt;a href="https://www.youtube.com/watch?v=wW9CAH9nSLs"&gt;Docker&lt;/a&gt;, and &lt;a href="https://reactjs.org/"&gt;React&lt;/a&gt; are all examples of structural reframings that changed the constraints engineers had to deal with in solving business problems. The big ones change the world. Plenty of small ones have changed industries, markets, and companies. Uncovering new structures, new relationships between components, is a worthwhile goal.&lt;/p&gt;

&lt;h3&gt;
  
  
  That was a fun history lesson. How do I actually put this to use?
&lt;/h3&gt;

&lt;p&gt;When you're facing a problem and think you might need to make a tradeoff:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Really test whether you're on the efficient frontier of the problem space (i.e. set of constraints imposed by system structure) you're facing. If you're not, think hard to find a way to formulate your solution to avoid any sacrifices.&lt;/li&gt;
&lt;li&gt; If you really are on the efficient frontier, spend some &lt;a href="https://www.youtube.com/watch?v=f84n5oFoZBc"&gt;hammock time&lt;/a&gt; thinking about how you might reframe the problem you're solving so that the structure of the system (the relationships between its components) and the constraints it implies push the efficient frontier out far enough that you can achieve your goal without any sacrifices.&lt;/li&gt;
&lt;li&gt; When all else fails, make a sacrifice, write it down, and keep it in the back of your mind in hopes that some flash of insight one day allows you to reframe the problem.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Aren't you into statecharts? What does all of this have to do with that?
&lt;/h3&gt;

&lt;p&gt;At Simply Stated, we believe that statecharts represent a key reframing of the problem space of building apps. Whether you're looking at how product teams collaborate, how engineers implement solutions, how companies gain confidence that those solutions work, how customers learn to use them, or how those solutions can be safely adapted over time, we think statecharts and, more generally, declarative logic expand the set of achievable outcomes. We're tired of using (and, admittedly, building) software that doesn't work and takes hundreds of people too long to build. We think we can build better software faster with statecharts and declarative logic. That's why we want to contribute to the ecosystem of tooling needed to achieve mass adoption of this transformative reframing of the problem space facing engineers.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Since it cost a lot to win&lt;br&gt;&lt;br&gt;
And even more to lose&lt;br&gt;&lt;br&gt;
You and me bound to spend some time&lt;br&gt;&lt;br&gt;
Wondering what to choose&lt;br&gt;&lt;br&gt;
- &lt;a href="https://www.youtube.com/watch?v=gFDeJK5Ewvs"&gt;The Deal&lt;/a&gt;, Grateful Dead&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Follow &lt;a href="https://www.twitter.com/abrgrBuilds"&gt;@abrgrBuilds&lt;/a&gt; and join the &lt;a href="https://www.simplystated.dev"&gt;Simply Stated newsletter&lt;/a&gt; for more like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.simplystated.dev/blog/engineering-is-tradeoffs-right"&gt;Original&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>react</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
