<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Philip Hern</title>
    <description>The latest articles on Forem by Philip Hern (@shrouwoods).</description>
    <link>https://forem.com/shrouwoods</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3858493%2F9a8ff83c-d5c3-493f-9943-b91a2f0a61d8.jpg</url>
      <title>Forem: Philip Hern</title>
      <link>https://forem.com/shrouwoods</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/shrouwoods"/>
    <language>en</language>
    <item>
      <title>on the plane, again</title>
      <dc:creator>Philip Hern</dc:creator>
      <pubDate>Sun, 26 Apr 2026 22:35:23 +0000</pubDate>
      <link>https://forem.com/shrouwoods/on-the-plane-again-3j90</link>
      <guid>https://forem.com/shrouwoods/on-the-plane-again-3j90</guid>
      <description>&lt;h2&gt;
  
  
  thesis
&lt;/h2&gt;

&lt;p&gt;my opinion on using wifi on a plane has shifted. i do not think it is the right default for everyone in every situation, but when i am traveling alone, especially for work, i have started to treat a connected cabin as a feature. it takes hours that used to feel like pure waiting, time i was just trying to burn through, and turns them into a stretch where i can work with a surprisingly solid level of focus and relatively few distractions.&lt;/p&gt;

&lt;h2&gt;
  
  
  context
&lt;/h2&gt;

&lt;p&gt;a few weeks ago i wrote about how torn i still was on this topic in &lt;a href="https://philliant.com/posts/20260324-wifi-on-planes/" rel="noopener noreferrer"&gt;plane wifi: when the cabin forced disconnect&lt;/a&gt;. that piece was an honest inventory of the tradeoffs. this one is an update from the other side of the choice, after i have spent more flights actually buying the pass and sitting down to work instead of debating it.&lt;/p&gt;

&lt;h2&gt;
  
  
  argument
&lt;/h2&gt;

&lt;p&gt;when i am alone and the trip is for my job, the row stops feeling like a cage and starts feeling like a quiet room with bad legroom. i already have headphones in, so the cabin noise is under control. notifications are fewer than at my desk, nobody is making noise by my office door, and the margin of "things i could be doing instead" feels narrower. it is not peace and it is not deep rest, but it is a usable kind of concentration.&lt;/p&gt;

&lt;p&gt;i am now using that block to write, to debug, to plan, and to close loops i would otherwise push to after i land. i go in knowing i will get a meaningful amount done, and that expectation makes the clock feel less stuck. the time still passes at the same speed, but it passes with output attached, and that changes how it feels in my body.&lt;/p&gt;

&lt;p&gt;i also have a concrete proof point that this mode is not just talk. in one of those sessions i stood up my personal website end to end while airborne. right now i am on a plane again, getting ahead for an in-person meeting so i can walk in prepared instead of scrambling on the jet bridge.&lt;/p&gt;

&lt;h3&gt;
  
  
  tension or counterpoint
&lt;/h3&gt;

&lt;p&gt;i am not arguing that every person should pay for wifi on every flight. shared trips, family logistics, the middle seat, motion sickness, or the simple need to be offline are all good reasons to skip it. economy is still a bad default office for anyone who needs space or quiet that the cabin cannot give. my point is narrower. for me, in the situations where it fits, the cost of the pass is cheaper than the opportunity cost of treating the whole flight as lost time.&lt;/p&gt;

&lt;h2&gt;
  
  
  closing
&lt;/h2&gt;

&lt;p&gt;i will probably still sometimes want the cabin to be an excuse to be unreachable. when i do, i can leave the wifi off. when i do not, i am glad the option exists, and i am using it on purpose.&lt;/p&gt;

&lt;h2&gt;
  
  
  further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/In-flight_connectivity" rel="noopener noreferrer"&gt;in-flight connectivity&lt;/a&gt;, background on how internet reaches aircraft&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  related on this site
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260324-wifi-on-planes/" rel="noopener noreferrer"&gt;plane wifi: when the cabin forced disconnect&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/series/commentary/" rel="noopener noreferrer"&gt;commentary series&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>commentary</category>
      <category>travel</category>
      <category>wifi</category>
      <category>work</category>
    </item>
    <item>
      <title>logic</title>
      <dc:creator>Philip Hern</dc:creator>
      <pubDate>Sun, 26 Apr 2026 22:35:21 +0000</pubDate>
      <link>https://forem.com/shrouwoods/logic-3360</link>
      <guid>https://forem.com/shrouwoods/logic-3360</guid>
      <description>&lt;h2&gt;
  
  
  thesis
&lt;/h2&gt;

&lt;p&gt;learning basic logic is one of the most useful, durable skills i can recommend to anyone, regardless of profession. the english-language version of if/then/else is a thinking tool that works everywhere, never expires, and quietly compounds into better decisions over a lifetime.&lt;/p&gt;

&lt;h2&gt;
  
  
  context
&lt;/h2&gt;

&lt;p&gt;most people associate logic with code, math, or a philosophy classroom. that framing is too narrow. logic is just structured cause-and-effect thinking, and the simplest version of it sounds exactly like english:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;if this, then that, else that other thing&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;once you can hold that pattern in your head on purpose, it changes how you plan, diagnose, design, and interpret almost everything in front of you. you do not need a programming language to use it. you just need to be willing to slow down for a beat and think in branches instead of straight lines.&lt;/p&gt;

&lt;h3&gt;
  
  
  a few familiar gates
&lt;/h3&gt;

&lt;p&gt;here are a few small diagrams showing logic gates you already use in daily life, so you can follow each branch as the situation progresses.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;and&lt;/strong&gt;: both must be true for the outcome to be true&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;or&lt;/strong&gt;: at least one true is enough&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;not&lt;/strong&gt;: you follow the opposite branch of the test&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;the flow is the same kind of small chart you would sketch on a napkin.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;flowchart TB
subgraph g_and [and, both need to be true]
direction LR
w[good weather?] --&amp;gt; and1{and}
f[afternoon free?] --&amp;gt; and1
and1 --&amp;gt;|yes| hike[go hiking]
and1 --&amp;gt;|no| home[stay in]
end
subgraph g_or [or, at least one is enough]
direction LR
car[friend can drive?] --&amp;gt; or1{or}
bus[transit is running?] --&amp;gt; or1
or1 --&amp;gt;|yes| go[you can get there]
or1 --&amp;gt;|no| stuck[you are stuck]
end
subgraph g_not [not, the opposite branch of the test]
direction LR
pow[power on?] --&amp;gt;|no| br[check the breaker]
pow --&amp;gt;|yes| next1[next check in the chain]
end
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;none of that requires a keyboard. it is the same branch habit as the if/then/else line in the last section, just drawn with a few boxes.&lt;/p&gt;

&lt;h2&gt;
  
  
  argument
&lt;/h2&gt;

&lt;h3&gt;
  
  
  logic is a thinking tool, not a coding tool
&lt;/h3&gt;

&lt;p&gt;the if/then/else pattern is older than any programming language. when i write a small script, i am formalizing the same branching i already do when i pick what to wear, route around traffic, or decide how to respond when something at work breaks. the keyboard is incidental, the structure is the point.&lt;/p&gt;

&lt;p&gt;this kind of structured thinking is what moves me from "i feel stuck" to "what is the next decision, and what are the branches under it". that small shift, from a vague feeling to a concrete branch point, is where most of the leverage comes from.&lt;/p&gt;

&lt;h3&gt;
  
  
  where it pays off in normal life
&lt;/h3&gt;

&lt;p&gt;once you start noticing branches, you see them everywhere. planning a day with kids becomes a small logic tree. if the weather holds, we hike. if it does not, we move to the indoor option. if both fail, we cancel and reschedule. naming the branches up front means the day does not collapse when conditions change.&lt;/p&gt;

&lt;p&gt;troubleshooting has the same shape. when something is not working, i walk a tree out loud. if the appliance has power, then check the next link. if not, then check the breaker. each step rules out a branch and shrinks the search space.&lt;/p&gt;

&lt;p&gt;designing a process at work has the same bones. i list the conditions first, then the path each one takes. naming the branches early makes the design easier to explain and easier to fix later.&lt;/p&gt;

&lt;p&gt;understanding behavior is harder, but the structure still helps. people are not perfectly logical, but their patterns often are. if my kid is tired, then certain tantrums become more likely. if a colleague is overloaded, then certain reactions track. recognizing the antecedent makes the response less personal and easier to handle.&lt;/p&gt;

&lt;p&gt;reverse engineering is the same thing run backward. when i look at a result and want to understand how it got there, i walk the logic in reverse. if this output exists, then these inputs and conditions must have been true. if not, then the model i had in my head is wrong, and that gap is useful information on its own.&lt;/p&gt;

&lt;h3&gt;
  
  
  the tool never goes out of style
&lt;/h3&gt;

&lt;p&gt;frameworks change. tools change. programming languages come and go. if/then/else does not. it is a structure of thought, not a piece of technology, which is why it keeps working in domains it was never designed for. cooking, parenting, negotiations, medical decision trees, customer support scripts, and legal arguments all lean on the same scaffolding.&lt;/p&gt;

&lt;p&gt;i find real comfort in skills that age well. so much of what i learn in tech has a short half-life now. logic does not. once i have it, i have it for good.&lt;/p&gt;

&lt;h3&gt;
  
  
  applying it broadly is what makes it powerful
&lt;/h3&gt;

&lt;p&gt;a tool that works in one place is useful. a tool that works everywhere is leverage. logic works everywhere because every domain has cause and effect, conditions, and outcomes. that generality is the multiplier, and it is the same kind of cross-domain value i wrote about with &lt;a href="https://philliant.com/posts/20260327-adaptability/" rel="noopener noreferrer"&gt;adaptability&lt;/a&gt;. the principle stays steady while the surface details swap.&lt;/p&gt;

&lt;h3&gt;
  
  
  learn it as early as you can
&lt;/h3&gt;

&lt;p&gt;the earlier this gets internalized, the more downstream decisions inherit it. a kid who can think in branches asks better questions, accepts fewer "because i said so" answers, and gradually builds a habit of checking conditions before reacting. that habit then runs in the background for the rest of their life.&lt;/p&gt;

&lt;p&gt;i think about this with my own kids, and i think about it for myself. every year i wait to make decisions more deliberately is a year of slightly noisier decisions stacked behind me.&lt;/p&gt;

&lt;h3&gt;
  
  
  the butterfly effect of better decisions
&lt;/h3&gt;

&lt;p&gt;small improvements in single decisions do not look like much in isolation. two paths that differ by one degree at the start can end up far apart over a long enough timeline. better daily decisions, even by a thin margin, compound the same way &lt;a href="https://philliant.com/posts/20260406-little-by-little-a-little-becomes-a-lot/" rel="noopener noreferrer"&gt;a little becomes a lot&lt;/a&gt; does for habits. the quality of the inputs, repeated across years, becomes the quality of the life.&lt;/p&gt;

&lt;p&gt;logic is one of the cheapest ways i know to nudge that compounding in a good direction.&lt;/p&gt;

&lt;h3&gt;
  
  
  tension or counterpoint
&lt;/h3&gt;

&lt;p&gt;logic on its own is not the whole answer. real situations carry emotion, ambiguity, missing information, and people who do not behave according to clean rules. if i treat every interaction like a flowchart, i lose intuition, empathy, and the ability to sit with uncertainty.&lt;/p&gt;

&lt;p&gt;the skill is to use logic as scaffolding, not as a replacement for judgment. i map the branches i can see, then i listen for the part that the branches do not capture. both layers matter, and the logical part actually helps the intuitive part by giving it a clean place to stand.&lt;/p&gt;

&lt;h2&gt;
  
  
  closing
&lt;/h2&gt;

&lt;p&gt;this is one of the cheapest, most durable investments anyone can make. learn the english-language form of if/then/else. practice naming the conditions and the branches in your own life. apply it to planning, troubleshooting, designing, and understanding the people around you.&lt;/p&gt;

&lt;p&gt;learn it once and you keep it forever. apply it everywhere and it compounds. not many skills pay back like that.&lt;/p&gt;

&lt;h2&gt;
  
  
  further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Propositional_calculus" rel="noopener noreferrer"&gt;propositional logic&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Decision_tree" rel="noopener noreferrer"&gt;decision tree&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Critical_thinking" rel="noopener noreferrer"&gt;critical thinking&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  related on this site
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260327-adaptability/" rel="noopener noreferrer"&gt;adaptability&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260406-little-by-little-a-little-becomes-a-lot/" rel="noopener noreferrer"&gt;little by little, a little becomes a lot&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260410-comfortable-being-uncomfortable/" rel="noopener noreferrer"&gt;comfortable being uncomfortable&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/series/commentary/" rel="noopener noreferrer"&gt;commentary series&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>logic</category>
      <category>thinking</category>
      <category>decisionmaking</category>
      <category>problemsolving</category>
    </item>
    <item>
      <title>back at it</title>
      <dc:creator>Philip Hern</dc:creator>
      <pubDate>Thu, 23 Apr 2026 12:27:13 +0000</pubDate>
      <link>https://forem.com/shrouwoods/back-at-it-4aa7</link>
      <guid>https://forem.com/shrouwoods/back-at-it-4aa7</guid>
      <description>&lt;h2&gt;
  
  
  thesis
&lt;/h2&gt;

&lt;p&gt;this is a small checkpoint post. the heavy lift is not finished, but i am out of the weeds for now, and that is worth naming out loud.&lt;/p&gt;

&lt;h2&gt;
  
  
  context
&lt;/h2&gt;

&lt;p&gt;the same work i was carrying in &lt;a href="https://philliant.com/posts/20260416-stick-with-it/" rel="noopener noreferrer"&gt;stick with it&lt;/a&gt; kept growing in weight and surface area. for a while it felt like one endless tangle. i stayed with it anyway, and eventually i approached it the way i should have from the start, as smaller chunks that stack into the much larger change. each piece still had to be real, but the sequencing and scope finally matched how my head and the system can tolerate change.&lt;/p&gt;

&lt;h2&gt;
  
  
  argument
&lt;/h2&gt;

&lt;p&gt;getting to a stable point did not erase the backlog. i still have more testing to run, more simulations to exercise, and real user acceptance testing ahead. the difference is that the foundation is no longer thrashing. errors and surprises have a place to land without undoing everything at once.&lt;/p&gt;

&lt;p&gt;that stability is what gave me room to breathe. i can take a short break on purpose, look at the whole arc with a little distance, and come back to the tuning work with less panic and more optimism. the remaining work is still serious, but it is the kind of serious that fits a calendar instead of the kind that owns every waking hour.&lt;/p&gt;

&lt;h3&gt;
  
  
  tension or counterpoint
&lt;/h3&gt;

&lt;p&gt;a stable checkpoint is not the same as done. if i confuse relief for completion, i will skip validation i still need. the discipline now is to rest without pretending the job is closed.&lt;/p&gt;

&lt;h2&gt;
  
  
  closing
&lt;/h2&gt;

&lt;p&gt;so i am back at it in a different posture, not firefighting the whole shape at once, but finishing the test matrix, listening to users, and dialing things in with a clearer mind. sticking with it got me here. the next stretch is about proving it in the world, calmly.&lt;/p&gt;

&lt;h2&gt;
  
  
  further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/Chunking_(psychology)" rel="noopener noreferrer"&gt;chunking (psychology)&lt;/a&gt;, on breaking information and work into manageable units&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  related on this site
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260416-stick-with-it/" rel="noopener noreferrer"&gt;stick with it&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260406-little-by-little-a-little-becomes-a-lot/" rel="noopener noreferrer"&gt;little by little, a little becomes a lot&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/series/commentary/" rel="noopener noreferrer"&gt;commentary series&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>persistence</category>
      <category>workflow</category>
      <category>testing</category>
      <category>stress</category>
    </item>
    <item>
      <title>stick with it</title>
      <dc:creator>Philip Hern</dc:creator>
      <pubDate>Fri, 17 Apr 2026 13:24:41 +0000</pubDate>
      <link>https://forem.com/shrouwoods/stick-with-it-3c6i</link>
      <guid>https://forem.com/shrouwoods/stick-with-it-3c6i</guid>
      <description>&lt;h2&gt;
  
  
  thesis
&lt;/h2&gt;

&lt;p&gt;this one is for me as much as anyone reading. the single most important thing i can do on a long, hard project is keep showing up for it. motivation rises and falls, energy comes in waves, and neither of those things matter as much as continuity. if i stay with the work long enough, the payoff arrives, even when progress is invisible for stretches in the middle.&lt;/p&gt;

&lt;h2&gt;
  
  
  context
&lt;/h2&gt;

&lt;p&gt;i am in the middle of a very heavy lift right now. it started as a change i thought i would finish quickly, and it has turned into something much bigger. the effort, concentration, and validation required are more than i am used to, and the timeline has stretched well past what a typical change would take. the stress is real. i feel it in how i think about the project before bed and how quickly i reach for my laptop in the morning.&lt;/p&gt;

&lt;p&gt;i am still in it, though, because the value at the end is worth the cost. when this lands, i will have more stable and explainable historical data, which means my ongoing workload of troubleshooting data validity questions drops. less firefighting later is worth more pressure now, and that tradeoff is the only reason i would keep going through a change this heavy.&lt;/p&gt;

&lt;h2&gt;
  
  
  argument
&lt;/h2&gt;

&lt;h3&gt;
  
  
  continuity beats intensity
&lt;/h3&gt;

&lt;p&gt;motivation is a wave, not a rope. it pulls me forward for a while, then it lets go, then it comes back later with a different shape. if i tie my progress to the wave, i stop whenever the wave stops. if i tie my progress to the habit of showing up, the wave cannot take the project down with it. that is the same pattern i wrote about in &lt;a href="https://philliant.com/posts/20260406-little-by-little-a-little-becomes-a-lot/" rel="noopener noreferrer"&gt;little by little, a little becomes a lot&lt;/a&gt;, just applied to a single long problem instead of a daily practice.&lt;/p&gt;

&lt;h3&gt;
  
  
  isolate your changes, even in your own playground
&lt;/h3&gt;

&lt;p&gt;the hardest lesson from this round is about isolation. i have been testing work in an environment i consider my playground, and for a long time that has been fine. this time, my changes broke downstream consumers, and the pressure immediately escalated because other people were suddenly blocked. the takeaway is simple. if my changes can reach downstream consumers, i need to separate my testing from a shared test environment, regardless of how freely i am used to moving in that space. a playground still has neighbors.&lt;/p&gt;

&lt;h3&gt;
  
  
  do not try to lift several objects at once
&lt;/h3&gt;

&lt;p&gt;i also tried to move multiple pieces of the system at the same time. i thought bundling them would be faster. what actually happened is that each piece depended on the others in a way that made every single one harder to validate, and the total stress grew faster than the total work. smaller, sequential chunks would have finished sooner and felt calmer. one object at a time, even if it feels slower on paper, is almost always faster in practice.&lt;/p&gt;

&lt;h3&gt;
  
  
  better preparation shrinks the stress
&lt;/h3&gt;

&lt;p&gt;the last lesson is about preparation. i went in expecting a small change and i prepared like it was a small change. when the scope grew, my preparation did not grow with it, and that mismatch is where the break points appeared. better preparation up front, regardless of how small i thought the task was, would have reduced both the stress and the number of places things could go wrong. the cost of preparing for a bigger job than you need is tiny. the cost of not preparing for the job you actually have is not.&lt;/p&gt;

&lt;h3&gt;
  
  
  tension or counterpoint
&lt;/h3&gt;

&lt;p&gt;persistence is not the same as refusing to reassess. sticking with every hard thing forever is just sunk cost fallacy wearing a motivational t-shirt. the honest check i keep running is whether the value at the end is still real and still mine. if the answer is yes, i keep going. if the answer turns into no, i stop, and that is not quitting, that is discernment.&lt;/p&gt;

&lt;p&gt;there is also a stress cost to "push through" language. if the pressure is spilling into health, relationships, or judgment, that is a signal to change the pace, not a signal to try harder. pushing through is a tool, not a strategy, and it only works when i also rest and isolate the work properly. that is part of why i think it helps to get &lt;a href="https://philliant.com/posts/20260410-comfortable-being-uncomfortable/" rel="noopener noreferrer"&gt;comfortable being uncomfortable&lt;/a&gt; without confusing discomfort for permission to keep grinding.&lt;/p&gt;

&lt;h2&gt;
  
  
  closing
&lt;/h2&gt;

&lt;p&gt;so this is my note to myself. keep going. the work is real, the value is real, and the lessons i am collecting on the way are already paying off for the next change. next time i will isolate my testing better, break the work into one object at a time, and prepare like the task is bigger than i think it is, because it almost always is.&lt;/p&gt;

&lt;p&gt;and if the wave of motivation dips again tomorrow, that is fine. waves dip. what matters is that i still show up, finish one more piece, and trust that continuity is the actual engine. stick with it.&lt;/p&gt;

&lt;h2&gt;
  
  
  further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/Grit_(personality_trait)" rel="noopener noreferrer"&gt;grit (personality trait)&lt;/a&gt;, angela duckworth on long-term persistence&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/Sunk_cost" rel="noopener noreferrer"&gt;sunk cost fallacy&lt;/a&gt;, useful balance for deciding when to keep going versus when to stop&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  related on this site
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260406-little-by-little-a-little-becomes-a-lot/" rel="noopener noreferrer"&gt;little by little, a little becomes a lot&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260410-comfortable-being-uncomfortable/" rel="noopener noreferrer"&gt;comfortable being uncomfortable&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260327-adaptability/" rel="noopener noreferrer"&gt;adaptability&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/series/commentary/" rel="noopener noreferrer"&gt;commentary series&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>persistence</category>
      <category>consistency</category>
      <category>stress</category>
      <category>selftalk</category>
    </item>
    <item>
      <title>comfortable being uncomfortable</title>
      <dc:creator>Philip Hern</dc:creator>
      <pubDate>Fri, 10 Apr 2026 19:29:32 +0000</pubDate>
      <link>https://forem.com/shrouwoods/comfortable-being-uncomfortable-5dac</link>
      <guid>https://forem.com/shrouwoods/comfortable-being-uncomfortable-5dac</guid>
      <description>&lt;h2&gt;
  
  
  thesis
&lt;/h2&gt;

&lt;p&gt;i want to normalize a simple idea that still feels hard in practice. getting outside of your comfort zone is not a side quest. it is the main mechanism by which you stretch, learn, and see the world with more room in it for other people. yes, it is uncomfortable, and that is exactly the point.&lt;/p&gt;

&lt;h2&gt;
  
  
  context
&lt;/h2&gt;

&lt;p&gt;most of us are trained to seek stability. stability is not bad, but it is also not where the adaptation happens. when everything feels familiar, your brain is mostly rehearsing what it already knows. the moment you step into something new, the cost shows up immediately as awkwardness, uncertainty, or fear of looking foolish. that friction is not a sign you chose wrong. it is often a sign you chose honestly.&lt;/p&gt;

&lt;h2&gt;
  
  
  argument
&lt;/h2&gt;

&lt;p&gt;change is disruptive by definition. if it did not interrupt your default patterns, it would not be change. i think we should embrace that disruption more often, because it is where new experiences actually enter your life. without that interruption, you mostly get repetition with better packaging.&lt;/p&gt;

&lt;p&gt;the growth part is not theoretical. discomfort is where skills get pressure-tested. you learn not only how to do things, but how &lt;strong&gt;not&lt;/strong&gt; to do things, which is just as valuable and often faster feedback. mistakes in public or under stress are expensive emotionally, but they are also unusually clear. they show you boundaries, preferences, and limits in a way that a comfortable afternoon rarely will.&lt;/p&gt;

&lt;p&gt;more experiences also broaden your worldview in a practical sense. when you have seen more contexts, constraints, and ways people solve problems, it becomes harder to treat your own habits as universal law. that widening tends to produce more tolerant and compassionate attitudes, not because tolerance is a slogan, but because you have more firsthand evidence that reasonable people can live and work in very different, equally valid ways.&lt;/p&gt;

&lt;p&gt;so my encouragement is simple. expose yourself to new experiences on purpose. seek situations where the pressure is on you to perform, because that is where you rise to the occasion and discover how capable you can be. it is also where you might discover that this is not your thing, and you should move on. either outcome is a win, because both give you self-insight you cannot fake. you learn what energizes you, what drains you, and what you are willing to practice until it gets easier.&lt;/p&gt;

&lt;p&gt;this connects to how i think about &lt;a href="https://philliant.com/posts/20260327-adaptability/" rel="noopener noreferrer"&gt;adaptability&lt;/a&gt; in general. comfort is a resting state. adaptation requires movement.&lt;/p&gt;

&lt;h3&gt;
  
  
  tension or counterpoint
&lt;/h3&gt;

&lt;p&gt;there is a real downside to glorifying discomfort without boundaries. not every challenge is worth the cost, and not every "growth opportunity" is ethical or safe. pushing yourself is different from letting yourself be pushed past your values or health. the goal is not suffering for its own sake. the goal is chosen stretch, with recovery and discernment built in.&lt;/p&gt;

&lt;h2&gt;
  
  
  closing
&lt;/h2&gt;

&lt;p&gt;i am not asking for constant chaos. i am asking for a bias toward the new when you can afford it, and toward the high-stakes try when you are ready. the uncomfortable path is where you find out who you are when the easy defaults are not available, and that knowledge is about as practical as it gets.&lt;/p&gt;

&lt;h2&gt;
  
  
  further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/Comfort_zone" rel="noopener noreferrer"&gt;comfort zone&lt;/a&gt; (psychology of performance and anxiety)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  related on this site
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260327-adaptability/" rel="noopener noreferrer"&gt;adaptability&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260406-little-by-little-a-little-becomes-a-lot/" rel="noopener noreferrer"&gt;little by little, a little becomes a lot&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/series/commentary/" rel="noopener noreferrer"&gt;commentary series&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>growth</category>
      <category>change</category>
      <category>comfortzone</category>
      <category>learning</category>
    </item>
    <item>
      <title>dbt snapshots: moving from merges to native history</title>
      <dc:creator>Philip Hern</dc:creator>
      <pubDate>Fri, 10 Apr 2026 19:29:31 +0000</pubDate>
      <link>https://forem.com/shrouwoods/dbt-snapshots-moving-from-merges-to-native-history-cjd</link>
      <guid>https://forem.com/shrouwoods/dbt-snapshots-moving-from-merges-to-native-history-cjd</guid>
      <description>&lt;h2&gt;
  
  
  quick answer
&lt;/h2&gt;

&lt;p&gt;dbt snapshots provide a native way to track slowly changing dimensions over time. by migrating from custom merge statements to native dbt snapshots, you can simplify your codebase, rely on built-in history tracking, and ensure your downstream models always have access to point-in-time records.&lt;/p&gt;

&lt;h2&gt;
  
  
  who this is for
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;audience: data engineers and analytics engineers using dbt&lt;/li&gt;
&lt;li&gt;prerequisites: basic knowledge of dbt models, sql, and data warehousing concepts&lt;/li&gt;
&lt;li&gt;when to use this guide: when you need to track historical changes to mutable source records and want to move away from manual merge logic&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  why this matters
&lt;/h2&gt;

&lt;p&gt;tracking historical changes is a common requirement in data warehousing. building custom merge logic to handle inserts, updates, and history tracking is error-prone and difficult to maintain. dbt snapshots handle the heavy lifting of history tracking out of the box. this ensures you do not lose historical context when source systems overwrite data.&lt;/p&gt;

&lt;h2&gt;
  
  
  moving from merge to snapshot
&lt;/h2&gt;

&lt;p&gt;recently, i migrated several historical tables from a custom merge strategy to native dbt snapshots. the previous approach relied on complex merge statements that manually checked for changes and inserted or updated rows to maintain history. this was difficult to read and even harder to debug.&lt;/p&gt;

&lt;p&gt;by adopting native dbt snapshots, the logic became declarative. instead of writing the exact update and insert commands, i only needed to define the source query and configure how dbt should detect changes. the downstream consumer views then filter the snapshot output to return the current row or a point-in-time record.&lt;/p&gt;

&lt;h3&gt;
  
  
  the core shift in thinking
&lt;/h3&gt;

&lt;p&gt;when using snapshots, your snapshot definition should remain source-representative. do not apply business date-window filtering in the snapshot definition itself. instead, capture the raw history and apply your logic for which rows to return in downstream consumer views.&lt;/p&gt;

&lt;p&gt;for example, to get the current row in a downstream model, you filter using the sentinel value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="k"&gt;ref&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'my_snapshot_st'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;
&lt;span class="k"&gt;where&lt;/span&gt; &lt;span class="n"&gt;dbt_valid_to&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'9999-12-31 23:59:59'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to get a freeze record for a specific point in time, you derive a freeze timestamp and filter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="k"&gt;ref&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'my_snapshot_st'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;
&lt;span class="k"&gt;where&lt;/span&gt; &lt;span class="n"&gt;dbt_valid_from&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;freeze_ts&lt;/span&gt; &lt;span class="k"&gt;and&lt;/span&gt; &lt;span class="n"&gt;dbt_valid_to&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;freeze_ts&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  basic example
&lt;/h2&gt;

&lt;p&gt;here is a basic example of a dbt snapshot using the check strategy. this snapshot tracks changes to a practice affiliation table.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="n"&gt;snapshot&lt;/span&gt; &lt;span class="n"&gt;practice_affiliation_st&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="p"&gt;{{&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;target_schema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'snapshots'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;strategy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'check'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;unique_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'fmno'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'cycle'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'committee'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'hierarchy'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;check_cols&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'all'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;hard_deletes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'invalidate'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;dbt_valid_to_current&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;"to_timestamp_ntz('9999-12-31 23:59:59')"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}}&lt;/span&gt;

&lt;span class="k"&gt;select&lt;/span&gt;
    &lt;span class="n"&gt;fmno&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;cycle&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;committee&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;practice_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;hierarchy&lt;/span&gt;
&lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="k"&gt;ref&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'source_practice_affiliation_v'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;

&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="n"&gt;endsnapshot&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  configuration options
&lt;/h2&gt;

&lt;p&gt;dbt snapshots offer several configuration options that control how changes are detected and recorded. you can read more about these in the &lt;a href="https://docs.getdbt.com/docs/build/snapshots" rel="noopener noreferrer"&gt;official dbt snapshot documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;here are the key options and what they control:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;target_schema&lt;/strong&gt;: the schema where the snapshot table will be built&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;strategy&lt;/strong&gt;: determines how dbt detects changes, with the two main options being timestamp and check&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;unique_key&lt;/strong&gt;: the primary key of the record, which can be a single column or a list of columns for a composite key&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;check_cols&lt;/strong&gt;: used with the check strategy to specify which columns to monitor for changes, accepting a list of column names or the word all&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;updated_at&lt;/strong&gt;: used with the timestamp strategy to specify the column that indicates when the source row was last modified&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;hard_deletes&lt;/strong&gt;: controls how dbt handles rows that disappear from the source, such as setting it to invalidate to close the current row when a key is no longer present&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;dbt_valid_to_current&lt;/strong&gt;: overrides the default null value for current records, allowing you to set a far-future date to make downstream filtering easier&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  timestamp vs check strategy
&lt;/h3&gt;

&lt;p&gt;the choice between timestamp and check strategies is critical.&lt;/p&gt;

&lt;p&gt;use the timestamp strategy when your source has a reliable updated column that changes whenever the row changes. dbt compares the source timestamp to the snapshot timestamp to decide if a new version is needed.&lt;/p&gt;
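&lt;p&gt;as a sketch, a timestamp-based snapshot can be configured like this. the model, key, and column names here are hypothetical, just to show the shape:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;{% snapshot orders_st %}

{{
    config(
        target_schema = 'snapshots',
        strategy = 'timestamp',
        unique_key = 'order_id',
        updated_at = 'updated_at'
    )
}}

select
    order_id,
    status,
    updated_at
from {{ ref('source_orders_v') }}

{% endsnapshot %}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
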

&lt;p&gt;use the check strategy when you do not have a reliable updated timestamp, or when you want to detect any change in a specific set of columns. dbt compares the actual values of the check columns between the source and the current snapshot row. if any checked column differs, dbt closes the current row and inserts a new version.&lt;/p&gt;

&lt;p&gt;in my recent work, i found that the check strategy with all columns checked and a composite unique key was the most robust approach for sources where the updated timestamp was synthetic or not authoritative.&lt;/p&gt;

&lt;h2&gt;
  
  
  gotchas and lessons learned
&lt;/h2&gt;

&lt;p&gt;migrating to snapshots surfaced a few important lessons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;upstream scope gating&lt;/strong&gt;: if your upstream source query includes filters that remove keys, and you have hard deletes configured to invalidate, dbt will intentionally close the current rows for those missing keys&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;composite keys&lt;/strong&gt;: dbt fully supports composite unique keys, and passing a list of columns ensures that dbt tracks history at the correct grain&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;duplicate source rows&lt;/strong&gt;: snapshots expect the source data to be unique at the unique key grain, so if your source contains duplicate keys, the snapshot will fail or bloat&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;defensive deduplication&lt;/strong&gt;: in some cases, i had to add a defensive qualify row number guard in the snapshot definition to collapse known duplicate-key source rows before dbt processed them&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;sentinel values&lt;/strong&gt;: using a sentinel value for current rows instead of null makes downstream queries much cleaner, allowing you to use an equals operator instead of checking for nulls&lt;/li&gt;
&lt;/ul&gt;
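&lt;p&gt;to make the defensive deduplication point concrete, the guard i describe above is a qualify clause in the snapshot body that collapses duplicate keys before dbt processes them. the ordering column here is illustrative; pick one that deterministically selects the row you want to keep:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;select
    fmno,
    cycle,
    committee,
    practice_name,
    type,
    hierarchy
from {{ ref('source_practice_affiliation_v') }}
-- collapse known duplicate-key source rows to one row per key
qualify row_number() over (
    partition by fmno, cycle, committee, hierarchy
    order by practice_name
) = 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
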

&lt;h2&gt;
  
  
  deployment and automation
&lt;/h2&gt;

&lt;p&gt;snapshots are not updated when you run &lt;code&gt;dbt run&lt;/code&gt;. they require the dedicated &lt;code&gt;dbt snapshot&lt;/code&gt; command, although &lt;code&gt;dbt build&lt;/code&gt; does include snapshots in its execution.&lt;/p&gt;

&lt;p&gt;if you do not automate this, your history tracking will be manual and prone to gaps. to ensure continuous history capture, you must schedule the snapshot command to run on a regular cadence.&lt;/p&gt;

&lt;p&gt;in a production environment, this usually means setting up a continuous integration workflow or an orchestrator task. for example, you can use automated workflows to run tagged groups of snapshots on daily, hourly, or monthly schedules.&lt;/p&gt;

&lt;p&gt;a typical automated workflow might look like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;a scheduled trigger fires the workflow&lt;/li&gt;
&lt;li&gt;the workflow checks out the repository and sets up the dbt environment&lt;/li&gt;
&lt;li&gt;the workflow executes the snapshot command for specific tags&lt;/li&gt;
&lt;li&gt;dbt connects to the warehouse, compares the source data to the existing snapshot tables, and applies any necessary inserts or updates&lt;/li&gt;
&lt;/ol&gt;
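&lt;p&gt;a minimal github actions workflow along those lines might look like the following. the schedule, tag name, and adapter are illustrative, and the warehouse credentials would come from repository secrets rather than the file itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;name: daily-snapshots

on:
  schedule:
    - cron: '0 6 * * *'   # once per day at 06:00 utc
  workflow_dispatch:       # allow manual runs

jobs:
  snapshot:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: pip install dbt-snowflake
      - run: dbt deps
      - run: dbt snapshot --select tag:daily
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
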

&lt;p&gt;by decoupling the snapshot schedule from your standard model runs, you can capture history at the exact frequency your business logic requires.&lt;/p&gt;

&lt;h2&gt;
  
  
  references
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.getdbt.com/docs/build/snapshots" rel="noopener noreferrer"&gt;dbt snapshots documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.getdbt.com/reference/snapshot-configs" rel="noopener noreferrer"&gt;dbt snapshot configurations&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  related reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/" rel="noopener noreferrer"&gt;dbt models&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>dbt</category>
      <category>dataengineering</category>
      <category>snowflake</category>
      <category>snapshots</category>
    </item>
    <item>
      <title>little by little, a little becomes a lot</title>
      <dc:creator>Philip Hern</dc:creator>
      <pubDate>Tue, 07 Apr 2026 23:13:24 +0000</pubDate>
      <link>https://forem.com/shrouwoods/little-by-little-a-little-becomes-a-lot-2acf</link>
      <guid>https://forem.com/shrouwoods/little-by-little-a-little-becomes-a-lot-2acf</guid>
      <description>&lt;h2&gt;
  
  
  thesis
&lt;/h2&gt;

&lt;p&gt;the importance of just trying to do a little each day cannot be overstated. we often overestimate what we can accomplish in a single afternoon, but we vastly underestimate what we can build over a year of sustained effort. incremental changes add up, and what seems like a drop in the bucket today becomes a reservoir over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  context
&lt;/h2&gt;

&lt;p&gt;whether it is work, fitness, or building new routines, habit forming often feels like it takes forever to take hold. we live in a world that expects immediate results, and we naturally get frustrated when the scale does not move or the project does not finish overnight. when that initial burst of motivation inevitably fades, the reality of the daily grind sets in, and that is exactly when most people decide to walk away.&lt;/p&gt;

&lt;h2&gt;
  
  
  argument
&lt;/h2&gt;

&lt;p&gt;this process is a little easier when you understand you are working toward continuity and not perfection. it is not about executing flawlessly every single day, nor is it about never taking a break. it is about showing up and putting in the reps, even when the effort feels small or uninspired. missing one day is just a bump in the road, as long as you do not let it become two days in a row.&lt;/p&gt;

&lt;p&gt;i am starting to see the rewards of that mindset now. little by little, my experience has added up into a foundation i can actually rely on. little by little, i have started sharing via my website, turning scattered thoughts into a structured body of work. little by little, i am starting to have a greater reach and help more people, simply because i chose to publish something small rather than waiting for the perfect masterpiece.&lt;/p&gt;

&lt;h3&gt;
  
  
  tension or counterpoint
&lt;/h3&gt;

&lt;p&gt;the hardest part is trusting the process when the visible progress is zero. it is incredibly easy to quit when you do not see the immediate payoff of your daily effort. it feels like you are just watering dirt for weeks on end. but the compounding effect of showing up is real, even if it remains completely invisible in the short term.&lt;/p&gt;

&lt;h2&gt;
  
  
  closing
&lt;/h2&gt;

&lt;p&gt;so right in theme, i will keep this short and keep focusing on the small, daily inputs rather than the distant outputs. the goal is simply to keep the chain going, trusting that a little becomes a lot when you give it enough time.&lt;/p&gt;

&lt;h2&gt;
  
  
  further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://jamesclear.com/atomic-habits" rel="noopener noreferrer"&gt;atomic habits&lt;/a&gt; (james clear)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.tinyhabits.com/" rel="noopener noreferrer"&gt;tiny habits&lt;/a&gt; (bj fogg)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/Small_wins" rel="noopener noreferrer"&gt;small wins&lt;/a&gt; (karl weick, organizational change)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  related on this site
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260402-sharing-is-caring/" rel="noopener noreferrer"&gt;sharing is caring&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260405-automated-devto-linkedin-visibility/" rel="noopener noreferrer"&gt;how i automated dev.to and linkedin publishing so visibility stops depending on memory&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260327-adaptability/" rel="noopener noreferrer"&gt;adaptability&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>habits</category>
      <category>consistency</category>
      <category>growth</category>
    </item>
    <item>
      <title>how i automated dev.to and linkedin publishing so visibility stops depending on memory</title>
      <dc:creator>Philip Hern</dc:creator>
      <pubDate>Sun, 05 Apr 2026 14:03:19 +0000</pubDate>
      <link>https://forem.com/shrouwoods/how-i-automated-devto-and-linkedin-publishing-so-visibility-stops-depending-on-memory-2g2i</link>
      <guid>https://forem.com/shrouwoods/how-i-automated-devto-and-linkedin-publishing-so-visibility-stops-depending-on-memory-2g2i</guid>
      <description>&lt;p&gt;after i started writing more consistently, it became obvious that writing is only half the work; distribution is the other half. i wanted a system where i can publish from one canonical source and let automation push the same story to dev.to and linkedin.&lt;/p&gt;

&lt;h2&gt;
  
  
  quick answer
&lt;/h2&gt;

&lt;p&gt;i set up two publish automations that watch my post changes and sync them to dev.to and linkedin. the first publish creates the post on each platform, and later edits update the same external post instead of creating duplicates. this gives me consistent visibility without adding manual publishing steps after every article.&lt;/p&gt;

&lt;h2&gt;
  
  
  who this is for
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;people who publish technical writing and keep forgetting cross-posting&lt;/li&gt;
&lt;li&gt;creators who want one canonical source plus repeatable distribution&lt;/li&gt;
&lt;li&gt;builders who care about discoverability as much as writing quality&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  why this matters
&lt;/h2&gt;

&lt;p&gt;if distribution is manual, it eventually slips. then strong posts sit unread because i forgot to copy, paste, format, and re-share them across platforms. automation solves that by making visibility part of the same delivery path as the content itself.&lt;/p&gt;

&lt;p&gt;this is the same pattern i described in &lt;a href="https://philliant.com/posts/20260319-practical-ai-workflow-jira-github-mcp/" rel="noopener noreferrer"&gt;a practical ai workflow: jira, github, and mcp&lt;/a&gt;: define one clear source of truth, then automate the handoff steps so i can spend more time on thinking and less time on clerical work.&lt;/p&gt;

&lt;h2&gt;
  
  
  step-by-step
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1) define the starting point
&lt;/h3&gt;

&lt;p&gt;i chose my site post as the only canonical source. every external platform receives content from that source, not from separate drafts. this keeps language, links, and updates aligned over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) apply the change
&lt;/h3&gt;

&lt;p&gt;i added automation for both targets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;trigger on post updates and support manual runs when i want a full backfill&lt;/li&gt;
&lt;li&gt;create posts when no external mapping exists&lt;/li&gt;
&lt;li&gt;update existing external posts when a mapping already exists&lt;/li&gt;
&lt;li&gt;keep a small state map so each canonical url stays attached to one external post id&lt;/li&gt;
&lt;/ul&gt;
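&lt;p&gt;to make the create-versus-update behavior concrete, here is roughly what the dev.to half of that sync does with the forem api. the article id, urls, and payload values are illustrative, not taken from my actual state map:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# first publish: no mapping exists yet, so create the article
curl -X POST https://dev.to/api/articles \
  -H "api-key: $DEVTO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"article": {"title": "my post", "body_markdown": "full markdown here", "canonical_url": "https://philliant.com/posts/my-post/", "published": true}}'

# later edits: a mapping exists, so update the same article id
curl -X PUT https://dev.to/api/articles/12345 \
  -H "api-key: $DEVTO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"article": {"body_markdown": "updated markdown here"}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
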

&lt;p&gt;the practical result is that i can keep writing in one place and trust the sync layer to handle distribution. this complements the writing habits from &lt;a href="https://philliant.com/posts/20260313-my-cursor-setup/" rel="noopener noreferrer"&gt;my cursor setup&lt;/a&gt;, where reusable workflows remove repeated manual work.&lt;/p&gt;

&lt;h3&gt;
  
  
  3) validate the result
&lt;/h3&gt;

&lt;p&gt;i test in three passes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;dry run to confirm detection and decisions without publishing&lt;/li&gt;
&lt;li&gt;publish-all run to verify initial backfill behavior&lt;/li&gt;
&lt;li&gt;normal change-trigger run to verify incremental updates on later edits&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;when all three pass, i know the pipeline is reliable enough for daily use.&lt;/p&gt;

&lt;h2&gt;
  
  
  faq
&lt;/h2&gt;

&lt;h3&gt;
  
  
  what was the biggest setup mistake?
&lt;/h3&gt;

&lt;p&gt;token and redirect mismatches during oauth were the main failure point at first. once i aligned scopes, callback values, and secret placement, the automation became stable.&lt;/p&gt;

&lt;h3&gt;
  
  
  should i keep manual publishing as a fallback?
&lt;/h3&gt;

&lt;p&gt;yes, especially while you are in early setup. after the workflow proves stable, manual publishing becomes a recovery path instead of a default habit.&lt;/p&gt;

&lt;h2&gt;
  
  
  references
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developers.forem.com/api" rel="noopener noreferrer"&gt;dev.to api docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/developers/" rel="noopener noreferrer"&gt;linkedin developer platform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/actions" rel="noopener noreferrer"&gt;github actions documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  related reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260319-practical-ai-workflow-jira-github-mcp/" rel="noopener noreferrer"&gt;a practical ai workflow: jira, github, and mcp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260313-my-cursor-setup/" rel="noopener noreferrer"&gt;my cursor setup&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260315-starter-templates-for-ai-rules-skills-and-commands/" rel="noopener noreferrer"&gt;starter templates for ai rules, skills, and commands&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>automation</category>
      <category>devto</category>
      <category>linkedin</category>
      <category>publishing</category>
    </item>
    <item>
      <title>the future of data engineering workflows with ai</title>
      <dc:creator>Philip Hern</dc:creator>
      <pubDate>Fri, 03 Apr 2026 14:11:59 +0000</pubDate>
      <link>https://forem.com/shrouwoods/the-future-of-data-engineering-workflows-with-ai-42mb</link>
      <guid>https://forem.com/shrouwoods/the-future-of-data-engineering-workflows-with-ai-42mb</guid>
      <description>&lt;h2&gt;
  
  
  quick answer
&lt;/h2&gt;

&lt;p&gt;the future of data engineering workflows with ai is about moving from manual coding to intelligent orchestration. ai agents will handle boilerplate code, pipeline generation, and data quality checks, allowing data engineers to focus on architecture, governance, and business value.&lt;/p&gt;

&lt;h2&gt;
  
  
  who this is for
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;audience: data engineers, analytics engineers, data architects, and technical leaders.&lt;/li&gt;
&lt;li&gt;prerequisites: an understanding of modern data stack concepts and basic ai principles.&lt;/li&gt;
&lt;li&gt;when to use this guide: when planning your data strategy and evaluating how to integrate ai into your engineering practices.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  why this matters
&lt;/h2&gt;

&lt;p&gt;the volume and complexity of data are growing faster than engineering teams can scale. relying solely on manual workflows leads to bottlenecks, technical debt, and delayed insights. embracing ai is not just about efficiency; it is a strategic imperative to remain competitive.&lt;/p&gt;

&lt;h2&gt;
  
  
  step-by-step
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1) define the starting point
&lt;/h3&gt;

&lt;p&gt;traditionally, data engineering has been a highly manual discipline. engineers spend countless hours writing sql, configuring orchestrators like airflow, and debugging failed pipelines. this approach is brittle and scales poorly as the organization grows.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) apply the change
&lt;/h3&gt;

&lt;p&gt;the integration of ai changes this paradigm. large language models can now generate complex sql queries, translate between dialects, and even suggest optimal data models based on source schemas. ai agents can monitor pipeline health, automatically retry transient failures, and alert engineers only when human intervention is necessary. this shift transforms the engineer from a coder into a system architect.&lt;/p&gt;

&lt;h3&gt;
  
  
  3) validate the result
&lt;/h3&gt;

&lt;p&gt;the impact of this transformation is measurable. development cycles shorten, data quality improves through automated testing, and the overall reliability of the platform increases. engineers spend less time firefighting and more time building scalable, resilient architectures that drive business decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  faq
&lt;/h2&gt;

&lt;h3&gt;
  
  
  what is the most important caveat?
&lt;/h3&gt;

&lt;p&gt;ai is a tool, not a replacement for fundamental engineering principles. you still need a strong understanding of data modeling, governance, and security to build a robust platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  what should i do first?
&lt;/h3&gt;

&lt;p&gt;start by identifying the most repetitive tasks in your workflow, such as writing documentation or basic transformations. experiment with ai tools to automate these specific areas before attempting to overhaul your entire architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  references
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://a16z.com/2020/10/15/the-emerging-architectures-for-modern-data-infrastructure/" rel="noopener noreferrer"&gt;the modern data stack&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  related reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260318-from-prototype-to-production-ai/" rel="noopener noreferrer"&gt;from prototype to production ai&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>dataengineering</category>
      <category>ai</category>
      <category>workflow</category>
      <category>future</category>
    </item>
    <item>
      <title>how i use cursor and ai agents to write dbt tests and documentation</title>
      <dc:creator>Philip Hern</dc:creator>
      <pubDate>Fri, 03 Apr 2026 14:07:49 +0000</pubDate>
      <link>https://forem.com/shrouwoods/how-i-use-cursor-and-ai-agents-to-write-dbt-tests-and-documentation-46od</link>
      <guid>https://forem.com/shrouwoods/how-i-use-cursor-and-ai-agents-to-write-dbt-tests-and-documentation-46od</guid>
      <description>&lt;h2&gt;
  
  
  quick answer
&lt;/h2&gt;

&lt;p&gt;writing dbt tests and documentation is often the most neglected part of data engineering. i use cursor and custom ai agents to automate this process by reading my sql models, inferring the business logic, and generating the corresponding yaml files. this ensures high-quality data pipelines without the manual overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  who this is for
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;audience: data engineers, analytics engineers, and developers using dbt&lt;/li&gt;
&lt;li&gt;prerequisites: basic knowledge of dbt, sql, and cursor&lt;/li&gt;
&lt;li&gt;when to use this guide: when you want to scale your data engineering practices and reduce the time spent on writing boilerplate yaml&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  why this matters
&lt;/h2&gt;

&lt;p&gt;documentation and testing are critical for data trust, but they are tedious to write manually. when these steps are skipped, data quality suffers and debugging becomes a nightmare. by automating this with ai, you get the benefits of rigorous testing and clear documentation while freeing up your time for higher-value architectural work.&lt;/p&gt;

&lt;h2&gt;
  
  
  step-by-step
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1) define the starting point
&lt;/h3&gt;

&lt;p&gt;most data engineers start with a raw sql model and a blank slate for their &lt;code&gt;schema.yml&lt;/code&gt; file. the traditional approach requires manually typing out every column name, description, and test. this is prone to human error and inconsistency, and it almost always falls out of sync with the models after the first change.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) apply the change
&lt;/h3&gt;

&lt;p&gt;i use cursor to bridge this gap. by creating specific ai rules and skills, i can highlight a dbt model and ask the agent to generate the documentation. the agent reads the sql, understands the joins and transformations, and produces a complete yaml file with standard tests like &lt;code&gt;not_null&lt;/code&gt; and &lt;code&gt;unique&lt;/code&gt;. it can even infer complex relationships and suggest custom tests based on the data domain.&lt;/p&gt;
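&lt;p&gt;the generated yaml typically has this shape. the model and column names here are hypothetical examples, not output from a real run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;version: 2

models:
  - name: dim_customers
    description: one row per customer, deduplicated from the raw source
    columns:
      - name: customer_id
        description: surrogate key for the customer
        tests:
          - not_null
          - unique
      - name: signup_date
        description: date the customer first registered
        tests:
          - not_null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
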

&lt;h3&gt;
  
  
  3) validate the result
&lt;/h3&gt;

&lt;p&gt;once the ai generates the yaml, i review it for accuracy. i then run &lt;code&gt;dbt test&lt;/code&gt; and &lt;code&gt;dbt docs generate&lt;/code&gt; to ensure everything compiles correctly. the ai rarely makes syntax errors, so the validation step is mostly about confirming the business logic aligns with the documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  faq
&lt;/h2&gt;

&lt;h3&gt;
  
  
  what is the most important caveat?
&lt;/h3&gt;

&lt;p&gt;you must still review the generated output. ai is excellent at scaffolding and inferring patterns, but it does not possess the full business context that you do.&lt;/p&gt;

&lt;h3&gt;
  
  
  what should i do first?
&lt;/h3&gt;

&lt;p&gt;start by creating a simple cursor skill that defines your team's standards for dbt documentation. feed it a few examples of your best &lt;code&gt;schema.yml&lt;/code&gt; files so it learns your preferred style.&lt;/p&gt;

&lt;h2&gt;
  
  
  references
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.getdbt.com/" rel="noopener noreferrer"&gt;dbt documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  related reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260313-my-cursor-setup/" rel="noopener noreferrer"&gt;my cursor setup&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>dbt</category>
      <category>cursor</category>
      <category>ai</category>
      <category>dataengineering</category>
    </item>
    <item>
      <title>sharing is caring</title>
      <dc:creator>Philip Hern</dc:creator>
      <pubDate>Thu, 02 Apr 2026 12:11:05 +0000</pubDate>
      <link>https://forem.com/shrouwoods/sharing-is-caring-2303</link>
      <guid>https://forem.com/shrouwoods/sharing-is-caring-2303</guid>
      <description>&lt;h2&gt;
  
  
  the value of early adoption
&lt;/h2&gt;

&lt;p&gt;i have always found a unique kind of energy in being an early adopter. when a new tool emerges, especially something as transformative as cursor and artificial intelligence, diving in headfirst is not just about personal efficiency. it is about understanding the landscape before the map is fully drawn. by spending the hours required to become a high-level user, i build a deep familiarity with the edges of what the technology can do.&lt;/p&gt;

&lt;p&gt;this mastery translates directly into value for my colleagues. when you understand the high-level nuance of a complex tool, you naturally become the point person for your team. people have onboarding questions, they hit roadblocks, and they need someone who has already navigated those early frustrations. being that resource is incredibly rewarding. it shifts my role from an individual contributor to a multiplier, helping the entire team elevate their workflow and avoid the pitfalls i have already solved.&lt;/p&gt;

&lt;h2&gt;
  
  
  the responsibility to share
&lt;/h2&gt;

&lt;p&gt;this dynamic reminds me of a principle i have heard often over the years regarding the importance of using your voice and your platform to share. this is exactly why i started this website. i wanted a dedicated space to share my voice, my knowledge, my opinions, my experience, and the solutions i have discovered along the way.&lt;/p&gt;

&lt;p&gt;when you hold onto knowledge, its impact is limited to your own output. when you share it, the impact scales infinitely. writing about these tools, documenting my workflows, and answering the nuanced questions my colleagues ask are all extensions of the same core belief. knowledge is meant to be distributed.&lt;/p&gt;

&lt;h2&gt;
  
  
  stepping into mentorship
&lt;/h2&gt;

&lt;p&gt;i have reached a point of mastery and experience where the natural next step for me is to mentor others and deliberately increase my visibility and presence. it is no longer enough to simply be good at what i do behind the scenes. the real work now is in lifting others up.&lt;/p&gt;

&lt;p&gt;in fact, i am starting to feel the weight of this realization. it feels almost selfish not to share what i have learned. when you spend years honing a craft or mastering a paradigm-shifting tool like ai-assisted development, you accumulate a wealth of invisible context. keeping that context locked away serves no one. stepping into a mentorship role, both directly with my colleagues and publicly through this platform, is how i honor the effort it took to gain that experience in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  looking forward
&lt;/h2&gt;

&lt;p&gt;my goal is to continue exploring the bleeding edge of these tools, but with a renewed focus on how i can translate those discoveries into accessible guidance for others. whether it is through answering a quick onboarding question about cursor, writing a detailed guide on this site, or simply being a sounding board for a colleague, the objective remains the same. i want to use my experience to make the path easier for those who follow.&lt;/p&gt;

&lt;h2&gt;
  
  
  further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.cursor.com/" rel="noopener noreferrer"&gt;cursor documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>mentorship</category>
      <category>ai</category>
      <category>cursor</category>
      <category>earlyadoption</category>
    </item>
    <item>
      <title>what is art?</title>
      <dc:creator>Philip Hern</dc:creator>
      <pubDate>Mon, 30 Mar 2026 23:38:30 +0000</pubDate>
      <link>https://forem.com/shrouwoods/what-is-art-1ofe</link>
      <guid>https://forem.com/shrouwoods/what-is-art-1ofe</guid>
      <description>&lt;h2&gt;
  
  
  thesis
&lt;/h2&gt;

&lt;p&gt;i keep pondering a question lately: what are we actually defending when we say "ai art is not real art"?&lt;/p&gt;

&lt;p&gt;i do not have a final position yet. i am writing this to think in public, not to close the debate.&lt;/p&gt;

&lt;h2&gt;
  
  
  context
&lt;/h2&gt;

&lt;p&gt;while driving on a family vacation, i asked my wife to fulfill her duty as the passenger and dj some motown bangers. she searched on spotify and found something that seemed to fit the bill. most of the songs were recognizable, memories from my childhood of riding in the backseat listening to my parents' favorites. the first song on the playlist, though, was by an artist called the &lt;strong&gt;19s soulers&lt;/strong&gt;, an artist i did not recognize. it was a user-created playlist, so not everything had to fit perfectly into the motown mold i was asking for, and that was ok. the song started and it was &lt;strong&gt;SOOO&lt;/strong&gt; good. &lt;strong&gt;TOO&lt;/strong&gt; good. i had my suspicions, but the music caught me so hard that i completely forgot them. i asked my boys in the back seat to look up the artist, and they did not even search, they just responded "AI DAD - IT IS AI". i felt so many conflicting emotions, including pride that my boys could tell the difference and have some defense against being fooled.&lt;/p&gt;

&lt;p&gt;the conversation around ai-generated images and music feels hotter every week, especially when a new ai music act gets attention or a contract. the reaction is often immediate and predictable: outrage, fear, dismissal, and arguments about stolen style.&lt;/p&gt;

&lt;p&gt;at the same time, many of us use ai to help write code, review pull requests, or shape architecture notes without the same emotional response. that contrast is interesting to me.&lt;/p&gt;

&lt;p&gt;if i call code a craft, and sometimes an art form, then why does ai help feel acceptable there for so many people, but unacceptable when the ai helps write song lyrics? and if code can be expressive, why is the outrage concentrated in painting, illustration, and music?&lt;/p&gt;

&lt;p&gt;my code has my fingerprints all over it, just as much as this website and the way i speak and write. it definitely qualifies as expressive. i make choices about style, logic, and function that suit my taste. how is that different from writing a book? but if you asked which one it is acceptable to use ai for and which one it is not, i could guess your answer 99% of the time.&lt;/p&gt;

&lt;h2&gt;
  
  
  argument
&lt;/h2&gt;

&lt;p&gt;i see a few possible reasons, and none of them feel complete on their own:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;visual art and music are tied to identity in a very direct way&lt;/li&gt;
&lt;li&gt;audiences often connect to the "maker story", not only the artifact&lt;/li&gt;
&lt;li&gt;creative labor markets in those fields already felt fragile before ai&lt;/li&gt;
&lt;li&gt;software teams have normalized tool-assisted output for decades&lt;/li&gt;
&lt;li&gt;code is often judged by function first, while art is judged by intention and feeling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;still, even with those differences, i cannot shake the inconsistency.&lt;/p&gt;

&lt;p&gt;when i use ai in code, i still feel like the author because i set constraints, reject bad output, and own the result. i do not think that is very different from guiding a visual generator, editing outputs, and curating a final piece. maybe the difference is only social permission, not creative mechanics.&lt;/p&gt;

&lt;p&gt;this question also links to my concern about ownership in &lt;a href="https://philliant.com/posts/20260326-the-danger-of-trusting-the-ai-agent/" rel="noopener noreferrer"&gt;the danger of trusting the ai agent&lt;/a&gt;, where speed is useful but responsibility still has to stay human.&lt;/p&gt;

&lt;h3&gt;
  
  
  tension or counterpoint
&lt;/h3&gt;

&lt;p&gt;there is also a strong counterpoint i take seriously: in code, wrong answers fail in visible ways. tests fail, services break, users complain, and teams can trace accountability. in art, value is less binary, and that makes authorship feel more central and more vulnerable.&lt;/p&gt;

&lt;p&gt;another counterpoint is economic, not philosophical. people may not be reacting to "is this art" at all. they may be reacting to "will this replace my livelihood".&lt;/p&gt;

&lt;p&gt;both of those points feel real to me.&lt;/p&gt;

&lt;p&gt;and i think the latter point is worth exploring, because the widespread availability of ai has "democratized" creative and technical endeavors for people who might have great ideas but not the musical or technical skill to carry out the plan. well, now they do. and that instant competition, which was not there before, can certainly feel intimidating and encroaching.&lt;/p&gt;

&lt;p&gt;i am currently mostly pro-ai, but with caution. we should be cautious about how the models are trained (and on what data) and how they are regulated. we should be cautious about &lt;em&gt;who&lt;/em&gt; is doing the regulating, as well. ai is a powerful assistant, and as we all know from spiderman, with great power comes great responsibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  closing
&lt;/h2&gt;

&lt;p&gt;i am left with questions, not conclusions.&lt;/p&gt;

&lt;p&gt;maybe we value human touch most where we believe the human story is the product. maybe we accept ai more where we believe the product is utility. maybe those boundaries are changing and we are all reacting in real time.&lt;/p&gt;

&lt;p&gt;for now, i am trying to keep the question open: when ai is part of the process, what still makes something mine, yours, or ours?&lt;/p&gt;

&lt;h2&gt;
  
  
  further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.copyright.gov/ai/" rel="noopener noreferrer"&gt;copyright and artificial intelligence, u.s. copyright office&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Generative_art" rel="noopener noreferrer"&gt;generative art&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Computer_music" rel="noopener noreferrer"&gt;computer music&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  related on this site
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260318-from-prototype-to-production-ai/" rel="noopener noreferrer"&gt;from prototype to production: my early adopter view of ai&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/posts/20260326-the-danger-of-trusting-the-ai-agent/" rel="noopener noreferrer"&gt;the danger of trusting the ai agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://philliant.com/series/commentary/" rel="noopener noreferrer"&gt;commentary series&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>art</category>
      <category>creativity</category>
      <category>music</category>
    </item>
  </channel>
</rss>
