<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Marsulta</title>
    <description>The latest articles on Forem by Marsulta (@marsulta).</description>
    <link>https://forem.com/marsulta</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3862359%2Fda6c5729-5ff8-4368-aa12-b659e492fabe.jpg</url>
      <title>Forem: Marsulta</title>
      <link>https://forem.com/marsulta</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/marsulta"/>
    <language>en</language>
    <item>
      <title>Silicon Holler: What Happens When the Brain Drain Finally Stops</title>
      <dc:creator>Marsulta</dc:creator>
      <pubDate>Fri, 15 May 2026 02:07:40 +0000</pubDate>
      <link>https://forem.com/marsulta/silicon-holler-what-happens-when-the-brain-drain-finally-stops-52f4</link>
      <guid>https://forem.com/marsulta/silicon-holler-what-happens-when-the-brain-drain-finally-stops-52f4</guid>
      <description>&lt;p&gt;&lt;em&gt;An Appalachian builder's case for why the next great tech hub is hiding in plain sight&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;There's a thing that happens in Eastern Kentucky.&lt;/p&gt;

&lt;p&gt;You grow up sharp. You figure things out. You see patterns other people miss, build things from nothing, solve problems with whatever's in reach. And then, if you're ambitious enough, you leave. You have to. Because the message this place has sent for generations is that the good stuff happens somewhere else.&lt;/p&gt;

&lt;p&gt;Silicon Valley. Austin. Boston. Seattle. The coasts are where you go if you want to matter.&lt;/p&gt;

&lt;p&gt;That's the brain drain. It's not a news story or a policy problem or an abstraction. It's personal. It's the smartest people in your county packing up and leaving because the county doesn't have anything for them.&lt;/p&gt;

&lt;p&gt;And the data backs it up. The Appalachian Regional Commission found the region grew just 4.0% from 2010 to 2023, compared to 7.8% nationally. The Appalachian portions of West Virginia, Ohio, New York, Virginia, and Mississippi each lost at least 3% of their populations. In some distressed counties, the damage is generational -- research found that places like McDowell County, West Virginia, lost roughly 25% of their young adult population through net outmigration, with college-educated residents leaving at a 29% net rate.&lt;/p&gt;

&lt;p&gt;Every smart person who leaves makes it a little harder for the next one to stay.&lt;/p&gt;

&lt;p&gt;I'm an addictions counselor in Eastern Kentucky. I'm also a solo developer who, with no CS degree, no funding, and no team, built an AI orchestration engine that 44 funded teams independently decided needed to exist. I named it Orca. I shipped v1.0.0 in early 2026.&lt;/p&gt;

&lt;p&gt;I didn't leave.&lt;/p&gt;




&lt;h2&gt;What the Research Actually Says&lt;/h2&gt;

&lt;p&gt;Before I make the case, I want to be honest about something: the brain drain in Appalachia is not a single uniform thing. The ARC's own data shows a bifurcated region. Southern Appalachia grew 13.2% over the same period in which the rest of the region stagnated. The outmigration crisis is concentrated in distressed, rural, Central and Northern Appalachia -- places like Eastern Kentucky, southern West Virginia, and parts of Ohio.&lt;/p&gt;

&lt;p&gt;That distinction matters. It means this isn't a story about a dying region. It's a story about a region with a widening internal divide, where some parts are thriving and others are still bleeding out the people they can least afford to lose.&lt;/p&gt;

&lt;p&gt;The research on &lt;em&gt;why&lt;/em&gt; people leave is equally clear. A peer-reviewed study of Central Appalachian students found the strongest predictor of wanting to stay wasn't love of home or cultural identity -- it was the perceived likelihood of finding an interesting job with good salary and advancement opportunities. Partner employment and access to continued education also mattered. The policy implication the researchers drew was blunt: stronger public-private partnerships that create real jobs matter more than rhetoric about place loyalty.&lt;/p&gt;

&lt;p&gt;In other words, people will stay if there's something worth staying for.&lt;/p&gt;

&lt;p&gt;That's the opening.&lt;/p&gt;




&lt;h2&gt;The Structural Advantages Nobody Talks About&lt;/h2&gt;

&lt;p&gt;When people think about building tech in Appalachia, they think about what's missing. The venture capital, the density, the ecosystem. Those gaps are real. But the cost structure tells a different story.&lt;/p&gt;

&lt;p&gt;Bureau of Economic Analysis regional price parity data shows that Appalachian metros operate well below what the major tech hubs cost. Huntington-Ashland comes in at 88.4 on the all-items index (the national average is 100). Lexington sits at 93.1. Pittsburgh at 94.4. Compare that to Boston at 111.6, Seattle at 113.0, and San Francisco at 118.2.&lt;/p&gt;

&lt;p&gt;On housing specifically, the gap is staggering. BEA data shows housing price parity values of 50.0 for Huntington-Ashland and 74.4 for Lexington -- compared to 148.2 for Boston, 151.5 for Seattle, and 200.2 for San Francisco. That's not a slight discount. That's a fundamentally different cost basis for building a company.&lt;/p&gt;

&lt;p&gt;The labor cost differential mirrors it. BLS data shows mean annual wages for software developers of $181,220 in San Francisco and $164,130 in Seattle. The interior U.S. operates at a materially lower baseline -- which means the budget for one Bay Area engineer can fund several serious engineers in Appalachia. That's not a small thing. That's a structural advantage that compounds over a decade.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Land and power.&lt;/strong&gt; Data centers need physical footprint and power infrastructure. Eastern Kentucky has both, at prices California can't touch. The buildout that AI infrastructure requires is going to happen somewhere. The question is who captures that value.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Loyalty.&lt;/strong&gt; When someone who grew up here gets a real shot at building something meaningful in their own backyard, the calculus flips. Research on the Tulsa Remote program -- which offered relocation incentives to remote workers -- found it increased community engagement, entrepreneurship, and long-term willingness to stay. A Brookings evaluation found that work-from-anywhere policies can help reverse brain drain to large cities while increasing workers' real income and community connection. The lesson wasn't that cash alone works. It was that cash plus curation, community-building, and a credible local narrative can work. Appalachia has the narrative. It needs the jobs to back it up.&lt;/p&gt;




&lt;h2&gt;The Infrastructure Is Further Along Than You Think&lt;/h2&gt;

&lt;p&gt;One of the persistent myths about Appalachia is that it's a digital dead zone. The reality is more nuanced.&lt;/p&gt;

&lt;p&gt;ARC data shows 86.2% of Appalachian households had broadband subscriptions in 2019-2023, up 11.1 percentage points from the previous period. Device access reached 92% of households. The gaps remain -- 116 Appalachian counties still had broadband subscription rates below 80%, almost all of them rural and outside metro areas -- but the trend line is moving in the right direction.&lt;/p&gt;

&lt;p&gt;The ARC itself has been building the ecosystem infrastructure for years: entrepreneurship academies, STEM academies, workforce development programs, energy transition initiatives, and broadband investment portfolios that have aimed to serve 500 communities and 70,000 households. Kentucky's SOAR organization serves all 54 ARC counties in Eastern Kentucky, explicitly focused on the "deep-seated issue of population retention and growth." West Virginia launched Ascend WV, a remote-worker attraction program that has generated 90,000 applications and welcomed 1,400 individuals from 48 states and eight countries.&lt;/p&gt;

&lt;p&gt;The pieces exist. They're just not yet assembled into something coherent enough to create the conditions where a young engineer or founder looks around and thinks: &lt;em&gt;I can build a full life here.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That's the gap. And it's closeable.&lt;/p&gt;




&lt;h2&gt;What Silicon Holler Actually Means&lt;/h2&gt;

&lt;p&gt;I'm not talking about a marketing campaign. I'm not talking about a "tech district" on a render.&lt;/p&gt;

&lt;p&gt;I'm talking about what happens when the jobs exist, the broadband works, the partner can find employment, and the kid who's too smart for the options in front of them doesn't have to choose between ambition and home.&lt;/p&gt;

&lt;p&gt;The research says something important about this: the best model for Appalachian tech formation isn't a single winner-take-all city. It's a distributed hub -- a few anchor metros and university nodes linked to lower-cost surrounding counties, with remote-work pathways, local coworking infrastructure, and targeted sector bets. Not a Bay Area clone. Something that fits the actual geography and the actual talent pool.&lt;/p&gt;

&lt;p&gt;That fits exactly how I think about this. You don't need everyone in one building. You need the work to be real, the pay to be fair, and the infrastructure to hold.&lt;/p&gt;

&lt;p&gt;My core mission with Orca and everything I'm building under Yak Stacks is making high-quality AI accessible to people who've been priced out of it. That mission and this place are the same mission. Eastern Kentucky has always been on the outside of economic power. I know what it feels like to not have access to the good tools. That's not abstract for me.&lt;/p&gt;




&lt;h2&gt;What the Region Actually Has&lt;/h2&gt;

&lt;p&gt;People talk about Appalachia like it's a problem to be solved. A place of deficits.&lt;/p&gt;

&lt;p&gt;That's a real part of the story. But it's not the whole story.&lt;/p&gt;

&lt;p&gt;Eastern Kentucky gave the world coal that powered the industrial revolution. Bourbon that became a global industry. Bluegrass that influenced every genre of music that followed. This region has always produced things the world needed. It just never got credit and never captured the value.&lt;/p&gt;

&lt;p&gt;Educational attainment has been rising -- 27.3% of Appalachian adults held at least a bachelor's degree in 2019-2023, up 3.1 percentage points in five years. Central Appalachia still sits nearly 20 points below the national average, which is a real gap, but the direction is clear. The talent pipeline is being built. The question is whether it empties into San Francisco again or builds something here.&lt;/p&gt;

&lt;p&gt;The research on talent retention says the same thing over and over: people will stay if the job is interesting, the pay is real, the advancement is possible, and the community feels like it has a future. None of those things require a zip code that starts with 9.&lt;/p&gt;




&lt;h2&gt;The Path&lt;/h2&gt;

&lt;p&gt;Every great tech hub started with one person who proved it was possible.&lt;/p&gt;

&lt;p&gt;Orca is a proof of concept that this kind of work can come from here. That the architecture doesn't care where you're sitting when you write it. That quality gating, multi-agent orchestration, and typed protocol design can come out of Eastern Kentucky as legitimately as they can come out of anywhere else.&lt;/p&gt;

&lt;p&gt;The path from here looks like this: traction leads to credibility, credibility leads to capital, capital creates the conditions where the next person doesn't have to do this alone. Where the next sharp kid in a small Appalachian county has a local ecosystem to plug into instead of a one-way ticket out.&lt;/p&gt;

&lt;p&gt;I'm not claiming to be the person who builds Silicon Holler alone. Nobody does it alone.&lt;/p&gt;

&lt;p&gt;I'm claiming that the conditions are right, the cost structure is real, the infrastructure is building, and the research says it's possible.&lt;/p&gt;

&lt;p&gt;November 28, 2025 is the day we started finding out.&lt;/p&gt;




&lt;h2&gt;To the People Who Already Left&lt;/h2&gt;

&lt;p&gt;The brain drain wasn't a character failure. It was a rational response to real scarcity. When the job isn't here, you go where the job is. That's not a moral failing -- it's just math.&lt;/p&gt;

&lt;p&gt;But the math is changing. The tools are leveling the playing field faster than the geography can push back. If you left this place carrying something forged here -- the stubbornness, the resourcefulness, the ability to solve problems with whatever's available -- you might find there's something worth coming back for.&lt;/p&gt;

&lt;p&gt;Not because things are perfect. They're not.&lt;/p&gt;

&lt;p&gt;Because what you've always been capable of building doesn't require San Francisco anymore.&lt;/p&gt;




&lt;h2&gt;To the People Who Stayed&lt;/h2&gt;

&lt;p&gt;You already know what this place has that the coasts don't. The way people here build things from nothing, solve problems with what's available, and keep going when the conditions aren't right.&lt;/p&gt;

&lt;p&gt;Those are exactly the qualities that produce durable technology. Not the pitch deck. Not the pedigree.&lt;/p&gt;

&lt;p&gt;The stubbornness.&lt;/p&gt;

&lt;p&gt;Use it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;James Yarber is the founder of Yak Stacks and the developer of Orca, an open-source AI orchestration engine. He lives and works in Eastern Kentucky.&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;Sources&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Appalachian Regional Commission, &lt;em&gt;Appalachia Then and Now: Population Overview&lt;/em&gt; — demographic history, county-level persistence of decline&lt;/li&gt;
&lt;li&gt;ARC Chartbook 2019–2023 — regional population growth (4.0% vs. 7.8% national), educational attainment (27.3% bachelor's degree), broadband subscription rates (86.2%), device access (92%)&lt;/li&gt;
&lt;li&gt;ARC Kentucky FY 2025 — $23.6M in 49 projects, workforce and infrastructure investment&lt;/li&gt;
&lt;li&gt;ARC Broadband Portfolio RFP — 500 communities, 70,000 households, 7,000 businesses served&lt;/li&gt;
&lt;li&gt;Bureau of Economic Analysis, Regional Price Parities 2023 — all-items and housing RPPs for Appalachian metros vs. coastal hubs&lt;/li&gt;
&lt;li&gt;BEA State RPP 2024 — Kentucky 89.9, Tennessee 92.1, California 110.7, Massachusetts 107.7&lt;/li&gt;
&lt;li&gt;Bureau of Labor Statistics, Occupational Employment and Wage Statistics May 2023/2024 — metro wage comparisons, software developer salary benchmarks&lt;/li&gt;
&lt;li&gt;Vazzana &amp;amp; Rudi-Polloshka, peer-reviewed study of Central Appalachian student retention — job quality and advancement as primary retention predictors&lt;/li&gt;
&lt;li&gt;Terman, qualitative research on West Virginia post-coal communities — social identity and institutional support in talent retention&lt;/li&gt;
&lt;li&gt;Kannapel &amp;amp; Flory, review of postsecondary transitions in Central Appalachia — interventions with retention evidence&lt;/li&gt;
&lt;li&gt;Ascend WV program data — 90,000 applications, 1,400 individuals relocated from 48 states&lt;/li&gt;
&lt;li&gt;Brookings Institution / Upjohn Institute, Tulsa Remote evaluations — rural talent attraction, community embeddedness, ROI&lt;/li&gt;
&lt;li&gt;SOAR Kentucky — 54-county mission, population retention and growth focus&lt;/li&gt;
&lt;li&gt;USDA ERS, nonmetro migration post-2020 — national context for rural in-migration trends&lt;/li&gt;
&lt;li&gt;Lichter et al., historical ARC outmigration research — McDowell County data (25% young adult loss, 29% college-educated net outmigration)&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>webdev</category>
      <category>career</category>
      <category>beginners</category>
      <category>discuss</category>
    </item>
    <item>
      <title>I Was Told It Would Take 2 Years. I Did It in 3 Months. And Nobody Knows My Name Yet.</title>
      <dc:creator>Marsulta</dc:creator>
      <pubDate>Fri, 08 May 2026 13:55:16 +0000</pubDate>
      <link>https://forem.com/marsulta/i-was-told-it-would-take-2-years-i-did-it-in-3-months-and-nobody-knows-my-name-yet-1dkb</link>
      <guid>https://forem.com/marsulta/i-was-told-it-would-take-2-years-i-did-it-in-3-months-and-nobody-knows-my-name-yet-1dkb</guid>
      <description>&lt;p&gt;Last July I hit my limit on Cursor. Literally. The free tier ran out and it told me I couldn't use it anymore for a month.&lt;/p&gt;

&lt;p&gt;So I switched to Copilot. Except I didn't know how it worked. I didn't know I needed to pay $10 a month just to use it without also paying for the models underneath it. I didn't know what models even were, not really.&lt;/p&gt;

&lt;p&gt;I was just a guy in Eastern Kentucky trying to build an app. No CS degree. No team. No funding. No idea what I was doing.&lt;/p&gt;

&lt;p&gt;This was before Claude Code went mainstream. Before every AI company had a coding agent. Before any of this was normal. I was copying code out of ChatGPT, pasting it into VS Code, asking Claude questions in another tab, and manually trying to stitch it together. It was slow and painful and I kept hitting walls I didn't know how to climb.&lt;/p&gt;

&lt;p&gt;So I decided to build my own coding assistant. Something that would actually help me.&lt;/p&gt;




&lt;p&gt;Then I noticed something that bothered me.&lt;/p&gt;

&lt;p&gt;The model I was paying for -- the expensive one -- was spending most of its time fixing ESLint errors. Formatting issues. Cheap problems. I was burning real money watching a frontier model argue with a semicolon.&lt;/p&gt;

&lt;p&gt;I thought: why can't a cheap model handle the dumb stuff while the expensive model handles the hard stuff?&lt;/p&gt;

&lt;p&gt;I asked the AI chatbots about it. They told me multi-agent orchestration was really hard. Agents working in parallel created all kinds of race conditions and conflicts. It would take a serious engineering team. Probably two years of work.&lt;/p&gt;

&lt;p&gt;I decided to try it anyway.&lt;/p&gt;

&lt;p&gt;I called it Maestro. Because it was conducting multiple agents like an orchestra.&lt;/p&gt;
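&lt;p&gt;The routing idea itself is simple to express. Here's a minimal TypeScript sketch; the keyword heuristic and model names are invented purely for illustration, not how Maestro actually decided:&lt;/p&gt;

```typescript
// Hypothetical cost-aware router: trivial chores go to a cheap model,
// hard reasoning goes to a frontier model. All names here are made up.
type Task = { description: string };

// Stand-in heuristic: lint and formatting chores don't need a frontier model.
function isCheapWork(task: Task): boolean {
  return /\b(lint|format|semicolon|typo)\b/i.test(task.description);
}

function routeModel(task: Task): string {
  return isCheapWork(task) ? "small-cheap-model" : "big-frontier-model";
}

console.log(routeModel({ description: "fix the semicolon lint errors" }));
// "small-cheap-model"
console.log(routeModel({ description: "design the plugin architecture" }));
// "big-frontier-model"
```

&lt;p&gt;In practice the classifier would itself be a model call, but the economics are the same: the expensive model only sees the work that deserves it.&lt;/p&gt;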




&lt;p&gt;Then I found out all the models cost money. Every single one.&lt;/p&gt;

&lt;p&gt;So I thought: fine. I'll just make my own.&lt;/p&gt;

&lt;p&gt;I started learning about distillation -- the process of training a small cheap model to do a specific thing well by learning from a bigger model. And when I heard the word "distill" the first thing I thought of was a still. Moonshine. The thing people in these hills made when the gatekeepers told them they couldn't have the real thing.&lt;/p&gt;

&lt;p&gt;That's when I named it Moonshiner.&lt;/p&gt;

&lt;p&gt;But I needed someone to check the quality of what came out of the still. You can't just drink whatever drips out. You need someone with a good palate who knows the difference between something worth keeping and something that'll make you go blind.&lt;/p&gt;

&lt;p&gt;I named that quality checker Pappy. After the most respected name in Kentucky bourbon.&lt;/p&gt;

&lt;p&gt;Pappy doesn't just say yes or no. He says PASS, WARN, or FAIL. With a confidence score. And if something fails, the loop runs again until it passes.&lt;/p&gt;
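&lt;p&gt;That verdict-plus-repair loop is easy to picture in code. Here's a minimal TypeScript sketch, with every name invented for illustration rather than taken from Orca's API:&lt;/p&gt;

```typescript
// Sketch of a PASS/WARN/FAIL quality gate with a repair loop.
// All names here are illustrative, not Orca's actual API.
type Verdict = { status: "PASS" | "WARN" | "FAIL"; confidence: number };

// Stand-in reviewer: a real one would call a model with a rubric.
function review(output: string): Verdict {
  return output.trim().length > 0
    ? { status: "PASS", confidence: 0.9 }
    : { status: "FAIL", confidence: 0.95 };
}

// Generate, review, and keep repairing until the gate passes
// or the attempt budget runs out.
function runWithRepair(
  generate: () => string,
  repair: (bad: string) => string,
  maxAttempts: number = 3
): { output: string; verdict: Verdict } {
  let output = generate();
  let verdict = review(output);
  let attemptsLeft = maxAttempts - 1;
  while (verdict.status === "FAIL") {
    if (attemptsLeft === 0) break;
    attemptsLeft -= 1;
    output = repair(output);
    verdict = review(output);
  }
  return { output, verdict };
}

const result = runWithRepair(() => "", () => "const x = 1;");
console.log(result.verdict.status); // "PASS"
```

&lt;p&gt;The attempt budget matters: without it, a reviewer that can never be satisfied would spin forever.&lt;/p&gt;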




&lt;p&gt;I kept building. I made a desktop app. I wrestled with MCP integrations until I got frustrated enough to build my own protocol for agent-to-agent communication. I called it the Agent Handoff Protocol. The runtime is called Mailman because it delivers the packets.&lt;/p&gt;

&lt;p&gt;I named the access control layer Miranda. Because she reads agents their rights -- you can only use the tools you're authorized to use, nothing more.&lt;/p&gt;
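&lt;p&gt;An allowlist check in that spirit might look like this; the agents, tools, and function names are hypothetical:&lt;/p&gt;

```typescript
// Hypothetical tool allowlist in the spirit of an access-control layer:
// an agent may only invoke tools it was explicitly granted.
const grants: { [agent: string]: string[] } = {
  "lint-agent": ["read_file", "run_eslint"],
  "architect-agent": ["read_file", "write_file", "run_tests"],
};

function authorize(agent: string, tool: string): boolean {
  const allowed = grants[agent];
  if (!allowed) return false;    // unknown agents get nothing
  return allowed.includes(tool); // everything else is denied by default
}

console.log(authorize("lint-agent", "run_eslint")); // true
console.log(authorize("lint-agent", "write_file")); // false
```

&lt;p&gt;Deny-by-default is the important part: an agent that isn't in the table gets no tools at all.&lt;/p&gt;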

&lt;p&gt;I named the task router Brain.&lt;/p&gt;

&lt;p&gt;I built a conversation layer and named her Benson.&lt;/p&gt;

&lt;p&gt;I named the whole thing Orca. Because orcas hunt in coordinated packs, each one with a role, none of them wasted.&lt;/p&gt;

&lt;p&gt;My wife named the engine and designed the logo. That's the whole team.&lt;/p&gt;




&lt;p&gt;I shipped v1.0.0 in February 2026. Six months after I hit my limit on Cursor.&lt;/p&gt;

&lt;p&gt;And here's the part that's been messing with my head ever since.&lt;/p&gt;

&lt;p&gt;In the six months since I committed Moonshiner v0.5.0 on November 28, 2025, 31 external teams have independently shipped pieces of this architecture. IBM. Microsoft. AWS. Google. Anthropic. Alibaba. Meituan. Sakana AI. UC Santa Barbara. DARPA.&lt;/p&gt;

&lt;p&gt;And yesterday -- Cursor. The exact tool that hit my limit in July 2025 and sent me down this road in the first place. They just shipped /orchestrate with planners, workers, and verifiers. If verification fails, the planner spawns a new worker to fix it.&lt;/p&gt;

&lt;p&gt;That's Brain plus Pappy plus the repair loop. Drawn as a diagram. Posted by the tool whose free tier I couldn't afford.&lt;/p&gt;

&lt;p&gt;Every single one of them built a piece of what I already had working.&lt;/p&gt;

&lt;p&gt;Every single one of them was missing the same piece: the contractual relationship between Pappy and Moonshiner. The quality gate that feeds the distillation loop. The system that doesn't just fix the current output -- it trains a better model for next time.&lt;/p&gt;

&lt;p&gt;I documented all 31 (UPDATE 05/13/2026: I'm up to 44 now) in a file called PRIOR_ART.md in the repo. Organized by component. With a gap matrix. With timestamps.&lt;/p&gt;




&lt;p&gt;This wasn't the first time.&lt;/p&gt;

&lt;p&gt;In June 2025 I had an idea for an app that used OCR to scan receipts and automatically track your warranties by make and model. Three different AI chatbots told me I needed to patent it. I didn't move fast enough. Six months later warranty tracking with receipt scanning was everywhere.&lt;/p&gt;

&lt;p&gt;I didn't learn my lesson fast enough. But I did learn it.&lt;/p&gt;

&lt;p&gt;When I started building Maestro I stopped asking for permission.&lt;/p&gt;

&lt;p&gt;Some days I was sure I was wasting my time. There were no peers to ask. No investors to signal that someone else believed in it. No community of people building the same thing. Just me, a laptop, Eastern Kentucky, and a stubborn belief that the tools people were settling for weren't good enough.&lt;/p&gt;

&lt;p&gt;The chatbots told me it would take two years. I did it in three months.&lt;/p&gt;

&lt;p&gt;I was told multi-agent orchestration was too complex for a non-coder to build. I shipped 620 passing tests across 12 packages.&lt;/p&gt;

&lt;p&gt;I was told you need money for models. I built a distillery.&lt;/p&gt;

&lt;p&gt;Now Anthropic ships Outcomes -- a quality gate for agents -- and calls it a breakthrough. I built Pappy in November 2025. I built the part that comes after it too.&lt;/p&gt;




&lt;p&gt;I'm not telling this story to be bitter. I'm telling it because somewhere right now there's someone being told their idea is too hard. It'll take too long. They don't have the skills. They should wait until they know more.&lt;/p&gt;

&lt;p&gt;Don't wait.&lt;/p&gt;

&lt;p&gt;The tools exist to build things that didn't exist last year. The walls people told you about are shorter than they look. The two-year timeline they quoted you is a guess from someone who never tried.&lt;/p&gt;

&lt;p&gt;I'm one person. Eastern Kentucky. No funding. No team. No CS degree.&lt;/p&gt;

&lt;p&gt;I built the thing 44 teams independently decided needed to exist.&lt;/p&gt;

&lt;p&gt;You can build yours too.&lt;/p&gt;




&lt;p&gt;There's one more thing I want to say.&lt;/p&gt;

&lt;p&gt;Eastern Kentucky gave the world coal that powered the industrial revolution. Bourbon that became a global industry. Music that influenced every genre that followed. This region has always built things the world needed. It just never got credit and never captured the value.&lt;/p&gt;

&lt;p&gt;I want to change that.&lt;/p&gt;

&lt;p&gt;The long term vision behind all of this isn't just Orca. It's Silicon Holler -- an innovation ecosystem on the eastern side of this country where people who can't afford to move to San Francisco don't have to. Where the kid in a small Appalachian town who's too smart for the options in front of them gets a shot at building the future instead of just watching it happen somewhere else.&lt;/p&gt;

&lt;p&gt;Every great tech hub started with one person who proved it was possible. &lt;/p&gt;

&lt;p&gt;I'm not saying I'm that person. I'm saying November 28, 2025 is the day we started finding out.&lt;/p&gt;

&lt;p&gt;The tools exist. The talent exists. The only thing missing is the belief that it can happen here.&lt;/p&gt;

&lt;p&gt;I believe it can.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/junkyard22/Orca" rel="noopener noreferrer"&gt;https://github.com/junkyard22/Orca&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Prior art breakdown:&lt;/strong&gt; &lt;a href="https://github.com/junkyard22/Orca/blob/main/PRIOR_ART.md" rel="noopener noreferrer"&gt;https://github.com/junkyard22/Orca/blob/main/PRIOR_ART.md&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;AHP runtime:&lt;/strong&gt; &lt;a href="https://www.npmjs.com/package/@marsulta/mailman" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/@marsulta/mailman&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>programming</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>I built the quality gate that IBM, Google, and Cursor all skipped</title>
      <dc:creator>Marsulta</dc:creator>
      <pubDate>Thu, 30 Apr 2026 12:58:50 +0000</pubDate>
      <link>https://forem.com/marsulta/i-built-the-quality-gate-that-ibm-google-and-cursor-all-skipped-10jf</link>
      <guid>https://forem.com/marsulta/i-built-the-quality-gate-that-ibm-google-and-cursor-all-skipped-10jf</guid>
      <description>&lt;p&gt;&lt;strong&gt;April 28, 2026 was a weird day for me&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;IBM shipped Bob. Thoughtworks published SPDD. Researchers at Fudan, Peking, and Shanghai AI Lab published Agentic Harness Engineering on arXiv. Microsoft shipped A2A v1 backed by AWS, Cisco, Google, IBM, Salesforce, SAP, and ServiceNow.&lt;br&gt;
Four independent teams. Same day. Same problem: orchestrate AI across a software development workflow.&lt;br&gt;
Every single one of them stopped at generation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The question nobody answered&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;How do you know the output is actually good?&lt;br&gt;
They all stop at generation. A human checks the checkpoint. A reviewer approves the step. The system moves on. That's supervision by convention, not by architecture.&lt;br&gt;
I've been working on the answer for nine months.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Meet Pappy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pappy is a QC role inside Orca that scores every pipeline output before it reaches the user. PASS, WARN, or FAIL with a confidence score. Failed runs trigger an automatic repair loop. Verified runs feed Moonshiner, a distillation pipeline that trains small specialist models from quality-gated data only.&lt;br&gt;
IBM documents what happened. Pappy decides whether it was good enough.&lt;br&gt;
The trace becomes the curriculum.&lt;/p&gt;
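&lt;p&gt;The gate-to-curriculum handoff can be sketched like this. The record shape and confidence threshold are assumptions for illustration, not Orca internals:&lt;/p&gt;

```typescript
// Hypothetical quality-gated distillation feed: only runs the reviewer
// marked PASS with high confidence become training examples.
type RunRecord = {
  prompt: string;
  output: string;
  status: "PASS" | "WARN" | "FAIL";
  confidence: number;
};

function trainingExamples(runs: RunRecord[], minConfidence: number = 0.8) {
  return runs
    .filter(r => r.status === "PASS")           // verdict gate
    .filter(r => r.confidence >= minConfidence) // confidence gate
    .map(r => ({ input: r.prompt, target: r.output }));
}

const runs: RunRecord[] = [
  { prompt: "a", output: "good", status: "PASS", confidence: 0.92 },
  { prompt: "b", output: "bad", status: "FAIL", confidence: 0.97 },
  { prompt: "c", output: "shaky", status: "PASS", confidence: 0.55 },
];
console.log(trainingExamples(runs).length); // 1
```

&lt;p&gt;The double filter is the point: a FAIL with high confidence still never trains the student. Confidence only matters once the verdict is PASS.&lt;/p&gt;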

&lt;p&gt;&lt;strong&gt;How the rest maps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every major architectural decision in Orca has a direct parallel in what shipped on April 28:&lt;br&gt;
Brain handles task decomposition and model routing. That's Bob's multi-model orchestration.&lt;br&gt;
Miranda enforces compliance and human approval gates per task type. That's Bob's configurable checkpoints, except enforcement is in the protocol, not manual configuration.&lt;br&gt;
Benson is the only user-facing voice. One consistent output layer regardless of what ran underneath.&lt;br&gt;
Orca's agent handoff layer is architecturally aligned with the A2A v1 standard the industry ratified this week. AHP is Orca's internal trust layer. A2A is Orca's external compatibility layer. No other system in that April 28 pile has both.&lt;br&gt;
Moonshiner distills verified runs into training data. That's AHE's experience observability pillar.&lt;br&gt;
ARCHITECTURE.md and CLAUDE.md enforce explicit revertible component scope across agent handoffs. That's AHE's component observability pillar.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who built this&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I'm a self-taught solo developer in Eastern Kentucky. No CS degree, no co-founder, no local technical peers. I built this over nine months in focused sessions using AI coding agents because I don't write code directly. Two days ago, four independent teams published versions of every major architectural decision I made, all on the same day.&lt;br&gt;
That's either validating or humbling depending on how you look at it. I choose validating.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try it&lt;/strong&gt;&lt;br&gt;
v1.2.16 is live. 620 tests passing across 12 packages. Windows installer and portable .exe both available. Apache 2.0. Free. Runs on your machine. You own your data.&lt;br&gt;
Pipeline tracer demo: &lt;a href="https://www.loom.com/share/01765a415d0e4027b115427693a8734a" rel="noopener noreferrer"&gt;https://www.loom.com/share/01765a415d0e4027b115427693a8734a&lt;/a&gt;&lt;br&gt;
Desktop demo: &lt;a href="https://www.loom.com/share/1e94a7c0fb7c476d89d6d1230fb541db" rel="noopener noreferrer"&gt;https://www.loom.com/share/1e94a7c0fb7c476d89d6d1230fb541db&lt;/a&gt;&lt;br&gt;
GitHub: &lt;a href="https://github.com/junkyard22/Orca" rel="noopener noreferrer"&gt;https://github.com/junkyard22/Orca&lt;/a&gt;&lt;br&gt;
Releases: &lt;a href="https://github.com/junkyard22/Orca/releases" rel="noopener noreferrer"&gt;https://github.com/junkyard22/Orca/releases&lt;/a&gt;&lt;br&gt;
The mission is making high-quality AI accessible to everyone at low cost. Orca is the foundation.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>opensource</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Why Reliable AI Should Be Structured Like a System, Not a Superhero</title>
      <dc:creator>Marsulta</dc:creator>
      <pubDate>Tue, 14 Apr 2026 19:29:15 +0000</pubDate>
      <link>https://forem.com/marsulta/why-reliable-ai-should-be-structured-like-a-system-not-a-superhero-5b17</link>
      <guid>https://forem.com/marsulta/why-reliable-ai-should-be-structured-like-a-system-not-a-superhero-5b17</guid>
      <description>&lt;p&gt;Most AI is still being imagined the wrong way.&lt;/p&gt;

&lt;p&gt;We picture a single brilliant machine sitting in a box, waiting for a prompt, ready to solve whatever gets thrown at it. We ask it to reason, code, summarize, research, verify, explain, remember, plan, and somehow do all of it well. Then we act surprised when it gets something wrong with complete confidence.&lt;/p&gt;

&lt;p&gt;That model is exciting, but it is flawed.&lt;/p&gt;

&lt;p&gt;Reliable AI should not be built like a superhero.&lt;/p&gt;

&lt;p&gt;It should be built like a system.&lt;/p&gt;

&lt;p&gt;That is the mistake at the center of so much AI design right now. We keep trying to create one all-powerful agent that can do everything, when the real path to trust is structure: intake, triage, specialists, verification, escalation, documentation, and clear communication.&lt;/p&gt;

&lt;p&gt;In other words, the future of dependable AI will not look like a genius working alone. It will look more like a well-run institution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Superhero Problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The fantasy of the superhero model is obvious. One mind. One interface. One answer. Ask it anything, and it handles everything itself.&lt;/p&gt;

&lt;p&gt;That sounds elegant, but in practice it creates a fragile system.&lt;/p&gt;

&lt;p&gt;A single model, no matter how impressive, is still being forced into too many jobs at once. It has to interpret the request, decide what matters, choose a strategy, possibly use tools, possibly retrieve context, generate an answer, and then judge whether its own answer is any good. That is a lot to ask from one component, especially when speed, cost, and reliability all matter.&lt;/p&gt;

&lt;p&gt;And when that one model fails, it tends to fail in the worst possible way: smoothly.&lt;/p&gt;

&lt;p&gt;It does not usually say, “I am out of my depth.” It says something polished. Something plausible. Something that sounds finished enough to pass unless somebody checks it.&lt;/p&gt;

&lt;p&gt;That is not trustworthiness. That is theater.&lt;/p&gt;

&lt;p&gt;The problem is not that today’s models are unintelligent. The problem is that we are using them like lone heroes when they should be part of an organized system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real Reliability Comes from Structure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;High-trust environments do not depend on one exceptional individual doing everything.&lt;/p&gt;

&lt;p&gt;They depend on roles.&lt;/p&gt;

&lt;p&gt;They depend on process.&lt;/p&gt;

&lt;p&gt;They depend on handoffs, review, escalation paths, and clear standards for what counts as “done.”&lt;/p&gt;

&lt;p&gt;If you want AI that people can actually rely on, especially for coding, research, operations, or anything that carries real consequences, then the question changes. Instead of asking, “How do we make one model smarter?” we should also be asking, “How do we make the whole system more dependable?”&lt;/p&gt;

&lt;p&gt;That leads to a different architecture entirely.&lt;/p&gt;

&lt;p&gt;Not one giant mind.&lt;/p&gt;

&lt;p&gt;A coordinated workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start with Intake, Not Output&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the biggest mistakes AI systems make is rushing straight from prompt to answer.&lt;/p&gt;

&lt;p&gt;But a good system should first understand what kind of problem it is dealing with.&lt;/p&gt;

&lt;p&gt;Is this a simple task or a complex one? Does it require creativity or precision? Is it low stakes or high stakes? Does it need tools? Does it need memory? Does it need a specialist? Does it need a stronger model? Does it need a human in the loop?&lt;/p&gt;

&lt;p&gt;That first layer matters more than people think.&lt;/p&gt;

&lt;p&gt;A bad start contaminates everything that comes after it. If the system misclassifies the task, routes it poorly, or assumes it understands the request when it does not, then even a powerful model is already working from the wrong foundation.&lt;/p&gt;

&lt;p&gt;Reliable AI begins with proper intake. Before you solve anything, you need to know what kind of problem you are solving.&lt;/p&gt;
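&lt;p&gt;As a toy sketch of that intake step (every category name and keyword heuristic below is invented for illustration, not taken from any real system), classification can run as an explicit step before any model is called:&lt;/p&gt;

```python
from dataclasses import dataclass

# Hypothetical intake record: what kind of task is this, how risky is it,
# and does it need tools? A real system would use a classifier model here;
# the keyword sets are stand-ins.

@dataclass
class Intake:
    kind: str          # "simple" or "complex"
    stakes: str        # "low" or "high"
    needs_tools: bool

HIGH_STAKES_WORDS = {"production", "payment", "delete", "migrate"}
TOOL_WORDS = {"search", "fetch", "run", "query"}

def classify(request: str) -> Intake:
    words = set(request.lower().split())
    return Intake(
        kind="complex" if len(words) > 12 else "simple",
        stakes="high" if words & HIGH_STAKES_WORDS else "low",
        needs_tools=bool(words & TOOL_WORDS),
    )
```

&lt;p&gt;The point is not the heuristics; it is that the classification exists as a separate, testable component instead of being implicit in one model's behavior.&lt;/p&gt;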

&lt;p&gt;&lt;strong&gt;Triage Is Intelligence&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not every task deserves the same resources.&lt;/p&gt;

&lt;p&gt;That should be obvious, but many AI systems still treat every request like it ought to go through the same pipeline. Either everything is sent to the biggest model, which is wasteful and slow, or everything is pushed through the same cheap flow, which creates avoidable errors.&lt;/p&gt;

&lt;p&gt;Neither is wise.&lt;/p&gt;

&lt;p&gt;A reliable system needs triage.&lt;/p&gt;

&lt;p&gt;Simple tasks should be handled quickly and cheaply. Harder tasks should be routed upward. Ambiguous tasks may need clarification, deeper reasoning, or more context. High-risk tasks may need extra validation before anything is returned.&lt;/p&gt;

&lt;p&gt;This is not inefficiency. It is the opposite.&lt;/p&gt;

&lt;p&gt;Triage is how serious systems stay both fast and safe. It is how they avoid wasting expensive intelligence where it is not needed, while still bringing real weight to the moments that require it.&lt;/p&gt;

&lt;p&gt;The goal is not maximum power at all times. The goal is appropriate power at the right time.&lt;/p&gt;
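&lt;p&gt;A minimal way to sketch that routing, assuming tasks arrive with labels from an earlier intake step (the label fields and tier names here are invented): risk is checked first, then ambiguity, then difficulty.&lt;/p&gt;

```python
def triage(task: dict) -> str:
    """Route a labeled task to an appropriate handling path.

    Priority order matters: a high-risk task gets extra validation even
    if it looks easy, and an ambiguous task gets clarified before any
    expensive reasoning is spent on it.
    """
    if task.get("risk") == "high":
        return "strong-model + verification pass"
    if task.get("ambiguous"):
        return "clarify with the user"
    if task.get("difficulty") == "hard":
        return "strong-model"
    return "small-model"
```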

&lt;p&gt;&lt;strong&gt;Specialists Beat Generalists&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The deeper AI work goes, the clearer this becomes: one model trying to be everything is not the most trustworthy setup.&lt;/p&gt;

&lt;p&gt;A single large model may be decent at many things, but dependable systems are often built by dividing labor. One component may be especially good at planning. Another may be strong at focused coding. Another may be better at checking work. Another may be best at retrieving context or formatting a final answer.&lt;/p&gt;

&lt;p&gt;This is where specialization becomes powerful.&lt;/p&gt;

&lt;p&gt;Instead of treating intelligence like one giant blob, we can treat it more like a team. Smaller, focused units can do narrower jobs more consistently, especially when an orchestrator decides who should handle what.&lt;/p&gt;

&lt;p&gt;That idea matters because reliability is not just about raw capability. It is about using the right capability in the right place.&lt;/p&gt;

&lt;p&gt;A system made of specialists has several advantages. It can be cheaper. It can be more modular. It can be easier to improve. It can be easier to test. And perhaps most importantly, it can be easier to trust, because each part has a more defined responsibility.&lt;/p&gt;

&lt;p&gt;People often assume the “smartest” system is the one with the biggest model. But in practice, the smarter system may be the one that knows when not to use brute force.&lt;/p&gt;
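&lt;p&gt;One way to picture the division of labor, with invented role names and trivial stand-in handlers: a registry of specialists plus an orchestrator that dispatches by role, so each component has one defined responsibility.&lt;/p&gt;

```python
# Registry of narrow roles. In a real system each handler would wrap a
# model tuned or prompted for that one job.
SPECIALISTS = {}

def specialist(role):
    """Decorator that registers a handler under a role name."""
    def register(fn):
        SPECIALISTS[role] = fn
        return fn
    return register

@specialist("planner")
def plan(task): return f"plan for: {task}"

@specialist("coder")
def code(task): return f"patch for: {task}"

@specialist("reviewer")
def review(task): return f"review of: {task}"

def orchestrate(role, task):
    # Dispatching fails loudly on an unknown role instead of guessing.
    if role not in SPECIALISTS:
        raise KeyError(f"no specialist registered for {role!r}")
    return SPECIALISTS[role](task)
```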

&lt;p&gt;&lt;strong&gt;Protocols Matter More Than Personality&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A lot of AI demos succeed because the assistant sounds confident, smooth, and human-like. But a pleasing tone is not the same thing as reliability.&lt;/p&gt;

&lt;p&gt;What creates trust over time is not charisma. It is consistency.&lt;/p&gt;

&lt;p&gt;That comes from protocols.&lt;/p&gt;

&lt;p&gt;A dependable AI system needs rules for how work is performed and checked. It needs done criteria. It needs boundaries. It needs validation steps. It needs explicit expectations for when a response should be accepted, repaired, or escalated.&lt;/p&gt;

&lt;p&gt;Without protocol, the system is mostly improvising.&lt;/p&gt;

&lt;p&gt;Improvisation can look impressive in a demo. It does not scale well when people depend on the outcome.&lt;/p&gt;

&lt;p&gt;The strongest systems in the real world do not rely on vibes. They rely on repeatable process. AI should be no different.&lt;/p&gt;
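&lt;p&gt;"Done criteria" can be made concrete with a small sketch: each check has a name, and the decision to accept, repair, or escalate follows from which checks fail, never from how confident the response sounds. The specific checks below are placeholders for real validation logic.&lt;/p&gt;

```python
def decide(response: str, checks: list) -> str:
    """Apply named done-criteria checks and return a verdict."""
    failures = [name for name, check in checks if not check(response)]
    if not failures:
        return "accept"
    if len(failures) == 1:
        return "repair"      # one narrow failure: attempt a targeted fix
    return "escalate"        # multiple failures: hand the task upward

# Placeholder criteria; real ones might run tests, check citations, etc.
CHECKS = [
    ("non-empty", lambda r: bool(r.strip())),
    ("no-placeholder", lambda r: "TODO" not in r),
    ("cites-source", lambda r: "ref:" in r),
]
```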

&lt;p&gt;&lt;strong&gt;Verification Cannot Be Optional&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the strangest habits in AI is that we let systems generate answers and then often trust those same systems to judge whether their own answers are correct.&lt;/p&gt;

&lt;p&gt;That is a weak pattern.&lt;/p&gt;

&lt;p&gt;Reliable systems need verification that is meaningfully separate from generation.&lt;/p&gt;

&lt;p&gt;If one part of the system writes code, another part should be able to review it. If one part answers a question, another should be able to check for omissions, contradictions, hallucinations, or false confidence. If one part uses tools, another should be able to confirm that the tool output actually supports the final claim.&lt;/p&gt;

&lt;p&gt;This does not mean every answer needs a giant audit trail. It means that trust should be earned inside the system before it is presented to the user.&lt;/p&gt;

&lt;p&gt;Verification is not a luxury feature. It is one of the core differences between an entertaining assistant and a dependable one.&lt;/p&gt;
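&lt;p&gt;The separation can be sketched in a few lines. Both functions here are trivial stand-ins; in a real system the verifier would be a different model, rule set, or tool run than the generator, and a failed check would trigger repair or escalation rather than an exception.&lt;/p&gt;

```python
def generate(task: str) -> str:
    # Stand-in for a model call.
    return f"answer to {task}"

def verify(task: str, answer: str) -> bool:
    # Stand-in for an independent check: re-running tools, checking
    # citations, or asking a second model. Here we only require that
    # the answer mentions the task at all.
    return task in answer

def answer_with_check(task: str) -> str:
    """Never return a draft that has not passed verification."""
    draft = generate(task)
    if not verify(task, draft):
        raise ValueError("draft failed verification; escalate, do not return it")
    return draft
```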

&lt;p&gt;&lt;strong&gt;Escalation Is a Sign of Maturity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A weak system acts like it always knows.&lt;/p&gt;

&lt;p&gt;A mature system knows when to escalate.&lt;/p&gt;

&lt;p&gt;That may mean handing a task from a cheap model to a stronger one. It may mean asking a specialist to review what a generalist produced. It may mean retrying with better context. It may mean involving a human because the stakes are high or the uncertainty is real.&lt;/p&gt;

&lt;p&gt;Too many AI products treat escalation like failure. It is not.&lt;/p&gt;

&lt;p&gt;Escalation is what serious systems do when accuracy matters more than ego.&lt;/p&gt;

&lt;p&gt;A dependable AI does not need to look omniscient. It needs to behave responsibly.&lt;/p&gt;

&lt;p&gt;Sometimes the most trustworthy thing a system can do is say, in effect, “This deserves a better path than the default one.”&lt;/p&gt;
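&lt;p&gt;One way to model that behavior is an escalation ladder where a rung can decline a task by returning nothing, which the system treats as a handoff upward rather than a failure. The rungs below are invented stand-ins for a cheap model, a stronger model, and a human queue.&lt;/p&gt;

```python
def cheap_model(task):
    # Declines anything it judges too hard.
    return None if task["hard"] else "cheap answer"

def strong_model(task):
    # Declines when the stakes call for a human.
    return None if task["stakes"] == "high" else "strong answer"

def human_review(task):
    return "queued for human review"

LADDER = [cheap_model, strong_model, human_review]

def run(task):
    """Try each rung in order; None means 'hand this upward'."""
    for rung in LADDER:
        result = rung(task)
        if result is not None:
            return result
    raise RuntimeError("no rung accepted the task")
```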

&lt;p&gt;&lt;strong&gt;Documentation Creates Accountability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If a system makes decisions, uses tools, revises outputs, or hands work across components, that activity should not disappear into fog.&lt;/p&gt;

&lt;p&gt;Reliable AI needs operational memory.&lt;/p&gt;

&lt;p&gt;Not necessarily public chain-of-thought, but enough structure to know what happened: how the task was classified, where it was routed, which tools were called, what failed, what was repaired, what confidence signals were raised, and why the final answer passed.&lt;/p&gt;

&lt;p&gt;That kind of trace matters for debugging, improvement, and trust.&lt;/p&gt;

&lt;p&gt;If a system cannot show its operational path, then every mistake becomes harder to diagnose and every success becomes harder to reproduce.&lt;/p&gt;

&lt;p&gt;Documentation is not glamorous, but it is one of the things that separates a toy from a platform.&lt;/p&gt;
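&lt;p&gt;The operational memory can be as simple as an append-only event log per task; the field names here are illustrative, not a prescribed schema.&lt;/p&gt;

```python
import json
import time

class Trace:
    """Append-only record of what happened to one task: how it was
    classified, where it was routed, what failed, what was repaired."""

    def __init__(self, task_id):
        self.task_id = task_id
        self.events = []

    def log(self, stage, detail):
        self.events.append({"stage": stage, "detail": detail, "t": time.time()})

    def dump(self):
        # Timestamps omitted from the dump to keep the example deterministic.
        return json.dumps({
            "task": self.task_id,
            "events": [{k: e[k] for k in ("stage", "detail")} for e in self.events],
        })
```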

&lt;p&gt;&lt;strong&gt;The User Still Needs One Clear Voice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Even in a system with many moving parts, the final experience should not feel chaotic.&lt;/p&gt;

&lt;p&gt;The user should not have to sort through internal machinery, half-formed thoughts, or role confusion. They should not be forced to watch the whole factory run just to get a useful answer.&lt;/p&gt;

&lt;p&gt;Reliable AI may require a system behind the curtain, but the front should still be clear.&lt;/p&gt;

&lt;p&gt;One calm voice. One understandable response. One output that has already passed through the right process before it reaches the user.&lt;/p&gt;

&lt;p&gt;Complexity in the backend should create simplicity in the experience.&lt;/p&gt;

&lt;p&gt;That is part of what makes structured AI better than superhero AI. The system can be disciplined without forcing the user to carry that complexity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Future of AI Is Operational, Not Mythical&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There is a deeper shift coming in how people think about intelligent systems.&lt;/p&gt;

&lt;p&gt;For a while, the central question was, “How smart is the model?”&lt;/p&gt;

&lt;p&gt;That still matters. But increasingly, a more important question is emerging: “How is the system run?”&lt;/p&gt;

&lt;p&gt;Because once AI is used for real work, not just novelty, raw cleverness is not enough. People want systems that are dependable, inspectable, and appropriately cautious. They want systems that do not bluff. They want systems that know when to verify, when to escalate, and when to slow down instead of pretending.&lt;/p&gt;

&lt;p&gt;That is not a model problem alone.&lt;/p&gt;

&lt;p&gt;That is an operations problem.&lt;/p&gt;

&lt;p&gt;And operations problems are solved with architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build Institutions, Not Idols&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The long-term winners in AI will not be the systems that feel most magical in a five-minute demo.&lt;/p&gt;

&lt;p&gt;They will be the systems that keep working when the novelty wears off.&lt;/p&gt;

&lt;p&gt;The ones that route well. The ones that specialize well. The ones that verify. The ones that document. The ones that fail honestly. The ones that recover cleanly. The ones that earn trust through process rather than performance.&lt;/p&gt;

&lt;p&gt;That is why reliable AI should be structured like a system, not a superhero.&lt;/p&gt;

&lt;p&gt;Because trust does not come from making one machine feel all-powerful.&lt;/p&gt;

&lt;p&gt;It comes from designing an intelligence workflow that behaves responsibly from beginning to end.&lt;/p&gt;

&lt;p&gt;The future of AI is not one giant hero standing in the spotlight.&lt;/p&gt;

&lt;p&gt;It is a well-run organization behind the scenes.&lt;/p&gt;

&lt;p&gt;And that is a much better foundation to build on.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>automation</category>
      <category>agents</category>
    </item>
    <item>
      <title>Vibe-Based Engineering: Why Your Agent Pipeline Will Eventually Betray You</title>
      <dc:creator>Marsulta</dc:creator>
      <pubDate>Thu, 09 Apr 2026 16:41:55 +0000</pubDate>
      <link>https://forem.com/marsulta/vibe-based-engineering-why-your-agent-pipeline-will-eventually-betray-you-483c</link>
      <guid>https://forem.com/marsulta/vibe-based-engineering-why-your-agent-pipeline-will-eventually-betray-you-483c</guid>
      <description>&lt;p&gt;I've been building in the agentic space for a while. Not as a researcher, not at a well-funded lab — as a solo indie developer trying to build something that actually works in production.&lt;br&gt;
And the same failure mode keeps showing up regardless of which framework people use.&lt;/p&gt;

&lt;p&gt;When something goes wrong in a multi-agent pipeline, nobody knows where it broke. The LLM completed successfully from the framework's perspective. No exception was thrown. But the output was wrong, the next agent consumed it anyway, and by the time a human noticed, the error had propagated three steps downstream.&lt;/p&gt;

&lt;p&gt;Most frameworks treat agent communication like a conversation. One agent finishes, dumps its output into context, and the next agent picks it up. There's no contract. No definition of what "done" actually means. No gate between steps that asks whether the output meets acceptance criteria before allowing the next agent to proceed.&lt;/p&gt;

&lt;p&gt;I call this vibe-based engineering. The system works great in demos because demos don't encounter unexpected model behavior. Production does.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem With "Just Retry"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The standard answer to LLM unreliability is retry logic. If the model returns something unexpected, retry until it doesn't.&lt;/p&gt;

&lt;p&gt;This is necessary but not sufficient. Retry logic answers the question "did the function complete." It doesn't answer "was the output actually correct." A task can succeed in every framework-observable way while producing output that silently breaks the next step in the chain.&lt;/p&gt;

&lt;p&gt;This is the gap. Most orchestration tooling is building a reliable conveyor belt. Nobody is checking whether what came off the conveyor belt is actually good.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Contract-Based Engineering&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The pattern that fixes this is treating agent handoffs like typed work orders rather than conversations.&lt;/p&gt;

&lt;p&gt;Instead of an agent dumping output into shared context, it produces a packet: a typed object with a defined scope, constraints, acceptance criteria, and a lifecycle. The receiving agent cannot start until the packet is valid. The output cannot advance until it passes a quality check. If it fails, the packet is rejected and the reason is recorded.&lt;/p&gt;

&lt;p&gt;Every transition is traceable. Every failure has a location and a cause. You can prove exactly where a task died and why it was blocked.&lt;/p&gt;

&lt;p&gt;This is what I've been calling the Agent Handoff Protocol. It's a small open spec, runtime and model agnostic, MIT licensed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What This Unlocks Beyond Reliability&lt;/strong&gt;&lt;/p&gt;
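&lt;p&gt;As a generic sketch of the typed work-order idea (this is not the actual AHP schema; see the linked spec for the real field set), a packet might carry its scope, acceptance checks, and lifecycle state together:&lt;/p&gt;

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    """Illustrative work order: scope, constraints, acceptance criteria,
    and a lifecycle that records why a transition happened."""
    scope: str
    constraints: list
    acceptance: list          # list of (name, predicate) pairs
    state: str = "open"
    history: list = field(default_factory=list)

    def submit(self, output: str) -> bool:
        failed = [name for name, check in self.acceptance if not check(output)]
        if failed:
            self.state = "rejected"
            self.history.append(("rejected", failed))   # the reason is recorded
            return False
        self.state = "accepted"
        self.history.append(("accepted", []))
        return True
```

&lt;p&gt;The design point is that rejection is a first-class, recorded state, not an exception swallowed somewhere in a framework.&lt;/p&gt;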

&lt;p&gt;The traceability isn't just useful for debugging. It turns out that a quality-gated packet trace is a training curriculum.&lt;/p&gt;

&lt;p&gt;Every verified handoff is a labeled teacher-student pair. Every rejected output is a labeled negative example. If you're distilling smaller specialist models from your agent runs, the quality gate means your training data is clean by construction — bad runs are rejected before they ever become training signal.&lt;/p&gt;

&lt;p&gt;This is the insight that changed how I think about the whole system. Reliability and distillation aren't separate concerns. The same gate that makes your pipeline trustworthy is the same gate that makes your training data trustworthy.&lt;/p&gt;
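&lt;p&gt;Turning a gated trace into training signal is then almost mechanical; the record shape below is invented for illustration, but the idea is that the gate verdict itself is the label.&lt;/p&gt;

```python
def to_training_examples(handoffs):
    """Convert gated handoff records into labeled examples: accepted
    outputs become positives, rejected outputs become negatives, with
    no separate labeling pass."""
    examples = []
    for h in handoffs:
        examples.append({
            "input": h["task"],
            "output": h["output"],
            "label": 1 if h["gate_passed"] else 0,
        })
    return examples
```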

&lt;p&gt;&lt;strong&gt;Where This Lives&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I've built this out into a full orchestration engine called Orca, named by my wife who got tired of hearing me say "orchestrator." It has named roles that communicate via AHP packets, 620 tests passing across 12 packages, and a v1.2.2 release on GitHub.&lt;/p&gt;

&lt;p&gt;The protocol is separate from the engine by design. AHP is useful without Orca. You can implement the packet structure in any system, with any models, using any runtime.&lt;/p&gt;

&lt;p&gt;If you're building anything beyond a single-agent wrapper, the contract-based vs vibe-based distinction starts to matter a lot.&lt;/p&gt;

&lt;p&gt;AHP protocol and spec: &lt;a href="https://github.com/junkyard22/AHP" rel="noopener noreferrer"&gt;https://github.com/junkyard22/AHP&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Orca engine: &lt;a href="https://github.com/junkyard22/Orca" rel="noopener noreferrer"&gt;https://github.com/junkyard22/Orca&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy to get into the weeds on architecture, the quality gating design, or what it looks like to build something like this as a solo indie dev.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>architecture</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
