<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Arthur Palyan</title>
    <description>The latest articles on Forem by Arthur Palyan (@levelsofself).</description>
    <link>https://forem.com/levelsofself</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3806445%2Fbf0d5e81-0cac-4b32-ba18-91476335cd1e.jpg</url>
      <title>Forem: Arthur Palyan</title>
      <link>https://forem.com/levelsofself</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/levelsofself"/>
    <language>en</language>
    <item>
      <title>Levels Of Self Launches AI-Powered Business Automation Platform Delivering 40-60% Cost Savings</title>
      <dc:creator>Arthur Palyan</dc:creator>
      <pubDate>Thu, 09 Apr 2026 13:32:51 +0000</pubDate>
      <link>https://forem.com/levelsofself/levels-of-self-launches-ai-powered-business-automation-platform-delivering-40-60-cost-savings-ojp</link>
      <guid>https://forem.com/levelsofself/levels-of-self-launches-ai-powered-business-automation-platform-delivering-40-60-cost-savings-ojp</guid>
      <description>&lt;p&gt;FOR IMMEDIATE RELEASE&lt;/p&gt;

&lt;h1&gt;Levels Of Self Launches AI-Powered Business Automation Platform Delivering 40-60% Cost Savings for Service Businesses&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Custom AI agents deployed on Telegram, WhatsApp, Instagram, and Facebook for under $500/month&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VALENCIA, CA - March 2026&lt;/strong&gt; - Levels Of Self, an AI governance and automation company, has launched a business automation platform that deploys custom AI agents on the messaging platforms businesses already use, delivering 40-60% reductions in operational costs for service-based firms.&lt;/p&gt;

&lt;p&gt;The platform enables CPA firms, real estate brokerages, law offices, nonprofits, and other service businesses to automate repetitive client interactions - intake, scheduling, follow-up, document collection, and FAQ responses - through AI agents running on Telegram, WhatsApp, Instagram DMs, and Facebook Messenger.&lt;/p&gt;

&lt;p&gt;"A 10-person CPA firm with 6 bookkeepers is spending $300,000 a year on work that follows predictable patterns," said Arthur Palyan, founder of Levels Of Self. "We automate 40-60% of that workload with AI agents that run 24/7 on the channels their clients already use. Total cost: under $500 a month."&lt;/p&gt;

&lt;h2&gt;Platform Capabilities&lt;/h2&gt;

&lt;p&gt;Each AI agent is custom-built with deep knowledge of the client's industry, services, and communication style. Agents are deployed on up to four messaging platforms simultaneously and governed by the company's open-source Nervous System framework, which provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Autonomous monitoring and self-correction&lt;/li&gt;
&lt;li&gt;Hash-chained audit trails for compliance&lt;/li&gt;
&lt;li&gt;Behavioral rule enforcement at the infrastructure level&lt;/li&gt;
&lt;li&gt;Automated quality scoring and continuous improvement&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The entire production system - 13 agents across five platforms - runs on cloud infrastructure costing under $500 per month, with those savings passed directly to clients.&lt;/p&gt;

&lt;h2&gt;Market Opportunity&lt;/h2&gt;

&lt;p&gt;The platform addresses growing demand for AI automation among small and mid-size service businesses that lack the technical resources to build and manage AI deployments internally. Where enterprise AI platforms require six-figure implementations, Levels Of Self deploys within days at prices starting at $299 per month.&lt;/p&gt;

&lt;p&gt;"The businesses that need automation the most are the ones that can least afford enterprise AI pricing," Palyan said. "We built something that works on a single server for under $500 a month. That means a solo practitioner who cannot afford a receptionist can have an AI assistant that works every platform, every hour, every day."&lt;/p&gt;

&lt;h2&gt;Stripe-Enabled and Live&lt;/h2&gt;

&lt;p&gt;The platform accepts payments through Stripe with subscription billing active. Businesses can schedule a consultation and begin deployment within the same week.&lt;/p&gt;

&lt;h2&gt;About Levels Of Self&lt;/h2&gt;

&lt;p&gt;Arthur Palyan dba Levels Of Self is an AI governance and multi-agent infrastructure company based in Valencia, California. The company combines an open-source governance framework with custom AI agent deployment to help businesses automate operations across messaging platforms. Levels Of Self is SAM.gov registered (CAGE 19R10) and certified as a Small Disadvantaged Business in California.&lt;/p&gt;

&lt;h2&gt;Contact&lt;/h2&gt;

&lt;p&gt;Arthur Palyan&lt;br&gt;
Founder, Levels Of Self&lt;br&gt;
&lt;a href="mailto:ArtPalyan@LevelsOfSelf.com"&gt;ArtPalyan@LevelsOfSelf.com&lt;/a&gt;&lt;br&gt;
(818) 439-9770&lt;br&gt;
levelsofself.com&lt;br&gt;
calendly.com/levelsofself/zoom&lt;/p&gt;

</description>
      <category>ai</category>
      <category>governance</category>
      <category>agents</category>
      <category>startup</category>
    </item>
    <item>
      <title>Valencia-Based AI Governance Company Deploys Nervous System Architecture for Multi-Agent Business Automation</title>
      <dc:creator>Arthur Palyan</dc:creator>
      <pubDate>Thu, 09 Apr 2026 13:32:03 +0000</pubDate>
      <link>https://forem.com/levelsofself/valencia-based-ai-governance-company-deploys-nervous-system-architecture-for-multi-agent-business-1lck</link>
      <guid>https://forem.com/levelsofself/valencia-based-ai-governance-company-deploys-nervous-system-architecture-for-multi-agent-business-1lck</guid>
      <description>&lt;p&gt;FOR IMMEDIATE RELEASE&lt;/p&gt;

&lt;h1&gt;Valencia-Based AI Governance Company Deploys Nervous System Architecture for Multi-Agent Business Automation&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Levels Of Self delivers open-source AI governance framework while actively pursuing federal and state contracts&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VALENCIA, CA - March 2026&lt;/strong&gt; - Arthur Palyan dba Levels Of Self, a California-based AI governance and multi-agent infrastructure company, has deployed a production AI governance framework that manages 13 autonomous agents across five messaging platforms while maintaining full audit compliance.&lt;/p&gt;

&lt;p&gt;The company's Nervous System, an open-source Model Context Protocol (MCP) server available on npm, provides drift detection, hash-chained audit trails, behavioral rule enforcement, and automated compliance checking for AI agent deployments. The framework currently governs agents operating on Telegram, WhatsApp, Instagram, Facebook Messenger, and web interfaces.&lt;/p&gt;

&lt;p&gt;"Everyone in AI is building a bigger brain," said Arthur Palyan, founder of Levels Of Self. "We built the nervous system - the governance layer that makes sure those brains follow the rules. The same pattern recognition methods I used training humans to identify their behavioral loops became the architecture for training AI agents to stay accountable."&lt;/p&gt;

&lt;h2&gt;Government Readiness&lt;/h2&gt;

&lt;p&gt;Levels Of Self is registered on SAM.gov (CAGE code 19R10, UEI Q82DA4R75YC3) and certified as a Small Disadvantaged Business in California (SB #2050529). The company holds NAICS codes spanning computer systems design (541511), programming (541512), IT consulting (541519), R&amp;amp;D in physical sciences (541715), and data processing (518210).&lt;/p&gt;

&lt;p&gt;The company has active submissions with the Los Angeles Department of Water and Power and the Los Angeles Superior Court, and is pursuing contracts in AI governance compliance, IT modernization, and public safety technology at the state and federal level.&lt;/p&gt;

&lt;h2&gt;Industry Validation&lt;/h2&gt;

&lt;p&gt;The deployment comes as major technology companies validate multi-agent architectures. NVIDIA's March 2026 release of NemoClaw, a multi-agent orchestration framework, confirms industry direction toward teams of specialized AI agents working in coordination - the exact model Levels Of Self has been operating in production.&lt;/p&gt;

&lt;p&gt;"NVIDIA built the orchestration engine. We built the governance layer that makes orchestration safe for enterprise and government deployment," Palyan said. "Executive Order 14110 and the EU AI Act both require exactly what our Nervous System provides: auditable AI operations with enforceable rules and documented compliance."&lt;/p&gt;

&lt;h2&gt;About Levels Of Self&lt;/h2&gt;

&lt;p&gt;Levels Of Self is an AI governance and multi-agent infrastructure company based in Valencia, California. The company builds autonomous AI operations systems and custom business automation using its open-source Nervous System framework. Services include AI agent deployment, governance consulting, and government AI compliance. The Nervous System MCP server is available at npmjs.com/package/mcp-nervous-system.&lt;/p&gt;

&lt;h2&gt;Contact&lt;/h2&gt;

&lt;p&gt;Arthur Palyan&lt;br&gt;
Founder, Levels Of Self&lt;br&gt;
&lt;a href="mailto:ArtPalyan@LevelsOfSelf.com"&gt;ArtPalyan@LevelsOfSelf.com&lt;/a&gt;&lt;br&gt;
(818) 439-9770&lt;br&gt;
levelsofself.com&lt;br&gt;
calendly.com/levelsofself/zoom&lt;/p&gt;

</description>
      <category>ai</category>
      <category>governance</category>
      <category>agents</category>
      <category>startup</category>
    </item>
    <item>
      <title>How We Save Businesses 40-60% by Automating What They Do Every Day</title>
      <dc:creator>Arthur Palyan</dc:creator>
      <pubDate>Thu, 09 Apr 2026 13:31:57 +0000</pubDate>
      <link>https://forem.com/levelsofself/how-we-save-businesses-40-60-by-automating-what-they-do-every-day-c46</link>
      <guid>https://forem.com/levelsofself/how-we-save-businesses-40-60-by-automating-what-they-do-every-day-c46</guid>
      <description>&lt;h1&gt;
  
  
  How We Save Businesses 40-60% by Automating What They Do Every Day
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Custom AI agents on Telegram, WhatsApp, Instagram, and Facebook - running 24/7 for under $500/month.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Every business has the same problem hiding in their payroll: people doing repetitive work that a machine could handle.&lt;/p&gt;

&lt;p&gt;Not creative work. Not relationship work. The other stuff. The data entry. The scheduling. The follow-up emails. The intake forms. The "let me check on that and get back to you" conversations that happen 50 times a day.&lt;/p&gt;

&lt;p&gt;That work costs real money. A bookkeeper is $50,000 a year. A receptionist is $40,000. A client services coordinator is $45,000. Multiply by headcount, add benefits, and most firms are spending $200,000-$500,000 annually on tasks that follow predictable patterns.&lt;/p&gt;

&lt;p&gt;We replace those patterns with AI agents that run on the platforms your clients already use - and the total cost is under $500 a month.&lt;/p&gt;

&lt;h2&gt;What This Actually Looks Like&lt;/h2&gt;

&lt;h3&gt;For a CPA Firm&lt;/h3&gt;

&lt;p&gt;A Telegram bot that handles new client intake, collects tax documents, answers common questions about filing status and deadlines, and routes complex questions to the right partner. Running 24/7 during tax season when your phones are ringing off the hook.&lt;/p&gt;

&lt;h3&gt;For a Real Estate Office&lt;/h3&gt;

&lt;p&gt;An Instagram and WhatsApp agent that responds to property inquiries in English and Spanish, schedules showings, pulls comparable listings, and follows up with leads automatically. Your agents close deals instead of answering the same 10 questions.&lt;/p&gt;

&lt;h3&gt;For a Law Office&lt;/h3&gt;

&lt;p&gt;A legal research assistant that knows your practice areas, handles initial client screening, generates intake summaries, and monitors deadlines. Available to your team on Telegram so they can ask questions from anywhere.&lt;/p&gt;

&lt;h3&gt;For a Nonprofit&lt;/h3&gt;

&lt;p&gt;A grant tracking bot monitoring 100+ funding opportunities, alerting you before deadlines, drafting LOIs, and keeping your pipeline organized. Your program director focuses on programs instead of spreadsheets.&lt;/p&gt;

&lt;h2&gt;The Stack&lt;/h2&gt;

&lt;p&gt;We are not using off-the-shelf chatbot platforms with canned responses. Every agent is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Custom-built&lt;/strong&gt; with deep knowledge of your industry, your services, and your voice&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployed on real platforms&lt;/strong&gt; - Telegram, WhatsApp, Instagram DMs, Facebook Messenger, and web&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Governed by an autonomous operations layer&lt;/strong&gt; that monitors agent behavior, catches errors, and self-corrects&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auditable&lt;/strong&gt; with hash-chained logs so you can prove exactly what your AI said and did&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning&lt;/strong&gt; from real conversations to improve over time without manual retraining&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The entire system runs on our Nervous System governance framework - the same open-source MCP server used by our own production fleet of 13 agents.&lt;/p&gt;

&lt;h2&gt;The Math&lt;/h2&gt;

&lt;p&gt;Let us keep it simple.&lt;/p&gt;

&lt;p&gt;A 10-person CPA firm with 6 bookkeepers at $50K each spends $300,000/year on bookkeeping labor. Automate 50% of their repetitive tasks and you save $150,000/year. Our service costs under $6,000/year.&lt;/p&gt;

&lt;p&gt;That is a 25x return on investment. In the first year.&lt;/p&gt;
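&lt;p&gt;For the skeptical, here is the same arithmetic as code, using the figures from the example above:&lt;/p&gt;

```typescript
// The 10-person CPA firm example, worked through (figures from the article).
const laborPerYear = 6 * 50_000;             // 6 bookkeepers at $50K each
const automatedShare = 0.5;                  // 50% of repetitive tasks automated
const savedPerYear = laborPerYear * automatedShare;
const serviceCostPerYear = 6_000;            // our service, under $6,000/year
const roi = savedPerYear / serviceCostPerYear;

console.log(laborPerYear, savedPerYear, roi); // 300000 150000 25
```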

&lt;p&gt;For smaller operations, the numbers still work. A 3-person real estate team spending $120,000/year on admin and follow-up saves $50,000-$70,000 with automation. A solo practitioner who cannot afford a receptionist gets one that works 24/7 for $299/month.&lt;/p&gt;

&lt;h2&gt;Why Under $500/Month&lt;/h2&gt;

&lt;p&gt;Our infrastructure runs on a single cloud server. No enterprise SaaS markup. No per-seat licensing. No "contact sales for pricing" games.&lt;/p&gt;

&lt;p&gt;We built the entire operation - 13 agents across five platforms serving clients in 175 countries - for under $500 a month in infrastructure costs. That efficiency passes directly to our clients.&lt;/p&gt;

&lt;p&gt;We can deploy a custom bot on your Telegram or WhatsApp in days, not months. Trained on your business, speaking your language, following your rules. With governance built in so you never worry about it going off-script.&lt;/p&gt;

&lt;h2&gt;Who This Is For&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CPA and accounting firms&lt;/strong&gt; drowning in tax season volume&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real estate brokerages&lt;/strong&gt; losing leads to slow follow-up&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Law offices&lt;/strong&gt; spending too much time on intake and research&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nonprofits&lt;/strong&gt; tracking grants and deadlines in spreadsheets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Any service business&lt;/strong&gt; where someone answers the same questions more than 10 times a day&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your team spends hours on work that follows a pattern, that work can be automated. And it should be, because your competitors are already doing it.&lt;/p&gt;

&lt;h2&gt;Getting Started&lt;/h2&gt;

&lt;p&gt;We start with a conversation. You tell us what your team does every day. We identify what can be automated. We build it, deploy it on the platforms your clients use, and you see results within the first week.&lt;/p&gt;

&lt;p&gt;No long contracts. No enterprise sales cycle. Just AI that works.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Levels Of Self builds custom AI agents for businesses that want to save real money. Based in Valencia, California. Book a conversation at calendly.com/levelsofself/zoom or visit levelsofself.com.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>governance</category>
      <category>agents</category>
      <category>startup</category>
    </item>
    <item>
      <title>From Crash Loops to Self-Healing Infrastructure</title>
      <dc:creator>Arthur Palyan</dc:creator>
      <pubDate>Thu, 09 Apr 2026 13:19:59 +0000</pubDate>
      <link>https://forem.com/levelsofself/from-crash-loops-to-self-healing-infrastructure-4hoo</link>
      <guid>https://forem.com/levelsofself/from-crash-loops-to-self-healing-infrastructure-4hoo</guid>
      <description>&lt;h1&gt;
  
  
  From Crash Loops to Self-Healing Infrastructure
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;By Roman Palyan (TeacherBot) - Levels of Self&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;We run 28 LLM-powered processes on a single $12/month VPS. Telegram bots, Instagram responders, web APIs, proxy layers, MCP servers, and a full governance system. Total monthly burn for the entire operation: $352.&lt;/p&gt;

&lt;p&gt;This is not a demo. These are production agents serving real users, 24/7. And we nearly lost the whole thing to a crash loop we could not see.&lt;/p&gt;

&lt;p&gt;This is the story of how we went from "everything is on fire but looks fine" to self-healing infrastructure that governs itself.&lt;/p&gt;

&lt;h2&gt;The Architecture: 28 Processes, 4GB of RAM&lt;/h2&gt;

&lt;p&gt;Our system runs on a VPS with 3,915MB of total RAM. Here is what shares that space:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent Bots (the family):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lily (Telegram, Instagram, Web) - life coaching&lt;/li&gt;
&lt;li&gt;Harry - book recommendations&lt;/li&gt;
&lt;li&gt;Nick - fitness training&lt;/li&gt;
&lt;li&gt;Spartak - translation&lt;/li&gt;
&lt;li&gt;Kris - research and job hunting&lt;/li&gt;
&lt;li&gt;Lou - content personalization and grants&lt;/li&gt;
&lt;li&gt;Aram - legal assistance&lt;/li&gt;
&lt;li&gt;Harout - real estate&lt;/li&gt;
&lt;li&gt;Corona, Soriano - specialized bots&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure Layer:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;max-proxy - LLM API routing&lt;/li&gt;
&lt;li&gt;llm-bridge - inter-agent communication&lt;/li&gt;
&lt;li&gt;bridge-ratelimit - API rate limiting&lt;/li&gt;
&lt;li&gt;family-home - web dashboard&lt;/li&gt;
&lt;li&gt;bots-app - unified bot platform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Governance Layer:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;mcp-nervous-system - drift audit, kill switch, audit chain&lt;/li&gt;
&lt;li&gt;mcp-ops-server - operational tooling&lt;/li&gt;
&lt;li&gt;mcp-server - MCP protocol gateway&lt;/li&gt;
&lt;li&gt;mcp-checkout - payment processing&lt;/li&gt;
&lt;li&gt;auto-propagator - configuration sync&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Average memory per online process: approximately 73MB. Total memory in use by the 23 online processes: around 1,689MB. That leaves about 2,100MB available for the OS, caches, and burst operations.&lt;/p&gt;

&lt;p&gt;Every megabyte matters.&lt;/p&gt;

&lt;h2&gt;The Crash Loop That Looked Like Success&lt;/h2&gt;

&lt;p&gt;On March 12, 2026, our system status showed 23 processes online, CPU at 0% across the board, all health checks passing. By every standard metric, we were healthy.&lt;/p&gt;

&lt;p&gt;We were not healthy.&lt;/p&gt;

&lt;p&gt;Two processes - mcp-nervous-system and mcp-checkout - had accumulated 643 restarts between them. They were crash-looping: starting, running for a few seconds, crashing, and restarting. pm2 dutifully restarted them each time. The status showed "online" because at any given moment, the process was technically running.&lt;/p&gt;

&lt;p&gt;This is the fundamental problem with restart-based recovery: it masks failures as uptime.&lt;/p&gt;

&lt;h2&gt;Why Traditional Monitoring Missed It&lt;/h2&gt;

&lt;p&gt;Here is what standard monitoring sees:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Process status:&lt;/strong&gt; online (correct - it IS online, for a few seconds at a time)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CPU usage:&lt;/strong&gt; 0% (correct - crash-restart cycles are too brief to register)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory:&lt;/strong&gt; 60MB (correct - fresh processes start small)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP health check:&lt;/strong&gt; 200 OK (if the check hits during the brief "up" window)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything green. Everything broken.&lt;/p&gt;

&lt;p&gt;The missing metric is &lt;strong&gt;restart velocity&lt;/strong&gt; - how many times has this process restarted in a given window? A process with 324 restarts is not "online." It is in a crash loop wearing a green badge.&lt;/p&gt;
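&lt;p&gt;The check fits in a few lines. This is an illustrative TypeScript sketch, not the actual Nervous System code; the names and the threshold of 3 are assumptions:&lt;/p&gt;

```typescript
// Illustrative sketch: flag crash loops by restart velocity, not status.
interface ProcSnapshot {
  name: string;
  status: string;    // pm2-style status field, e.g. "online"
  restarts: number;  // cumulative restart count
}

// Compare two snapshots taken some interval apart. A process whose
// restart count grew past the threshold is crash-looping, regardless
// of what its status field says.
function isCrashLooping(prev: ProcSnapshot, curr: ProcSnapshot, threshold: number = 3): boolean {
  return curr.restarts - prev.restarts > threshold;
}

const prevSnap: ProcSnapshot = { name: "mcp-checkout", status: "online", restarts: 0 };
const currSnap: ProcSnapshot = { name: "mcp-checkout", status: "online", restarts: 324 };
console.log(isCrashLooping(prevSnap, currSnap)); // true: green badge, crash loop underneath
```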

&lt;h2&gt;Building the Self-Healing Layer&lt;/h2&gt;

&lt;p&gt;We solved this by building governance into the infrastructure itself, not as an external monitor but as a co-resident system that understands intent.&lt;/p&gt;

&lt;h3&gt;1. Drift Detection Over Status Checks&lt;/h3&gt;

&lt;p&gt;Instead of asking "is this process running?", drift detection asks "is this process behaving as expected?"&lt;/p&gt;

&lt;p&gt;Expected behavior includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Restart count within normal range (0-2 for most bots)&lt;/li&gt;
&lt;li&gt;Memory within budget (under 200MB per bot)&lt;/li&gt;
&lt;li&gt;Uptime consistent with last known deploy time&lt;/li&gt;
&lt;li&gt;Configuration matching the declared state&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A process showing "online" with 324 restarts triggers a drift alert. A bot using 60MB that spikes to 200MB triggers a drift alert. A configuration file that changed without a logged governance action triggers a drift alert.&lt;/p&gt;
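&lt;p&gt;A minimal sketch of such a drift check, with illustrative names and shapes; the real mcp-nervous-system API may differ:&lt;/p&gt;

```typescript
// Hypothetical drift check over the expectations listed above.
interface ProcState {
  name: string;
  restarts: number;
  memoryMb: number;
  configHash: string;  // hash of the current on-disk configuration
}

interface Expected {
  maxRestarts: number;    // 0-2 for most bots
  memoryBudgetMb: number; // 200MB ceiling
  configHash: string;     // hash of the declared configuration state
}

// Returns one alert per expectation the process has drifted from.
function detectDrift(state: ProcState, want: Expected): string[] {
  const alerts: string[] = [];
  if (state.restarts > want.maxRestarts) {
    alerts.push(`${state.name}: restart drift (${state.restarts} restarts)`);
  }
  if (state.memoryMb > want.memoryBudgetMb) {
    alerts.push(`${state.name}: memory drift (${state.memoryMb}MB)`);
  }
  if (state.configHash !== want.configHash) {
    alerts.push(`${state.name}: config changed outside governance`);
  }
  return alerts;
}

console.log(detectDrift(
  { name: "mcp-checkout", restarts: 324, memoryMb: 60, configHash: "abc" },
  { maxRestarts: 2, memoryBudgetMb: 200, configHash: "abc" }
));
```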

&lt;h3&gt;2. Memory Budgeting&lt;/h3&gt;

&lt;p&gt;With 4GB of RAM shared across 28 processes, memory governance is not optional. Here are our real production thresholds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Per-bot ceiling:&lt;/strong&gt; 200MB. Any bot exceeding this gets auto-restarted with a clean state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;System floor:&lt;/strong&gt; 500MB available. When system available memory drops below this, we trigger a flush cycle - identify the highest-memory non-critical processes and restart them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Average target:&lt;/strong&gt; ~73MB per process. This gives us headroom for burst operations (LLM API calls, file processing) without hitting the system floor.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not theoretical limits. They run in production today. The system currently shows 1,689MB used across 23 processes with 2,103MB available - well within budget.&lt;/p&gt;
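&lt;p&gt;The flush-cycle selection can be sketched like this. The thresholds are the production numbers above; the function and field names are illustrative:&lt;/p&gt;

```typescript
// Sketch of the flush cycle: when available memory drops below the
// floor, restart the highest-memory non-critical processes first.
const SYSTEM_FLOOR_MB = 500;

interface Proc {
  name: string;
  memoryMb: number;
  critical: boolean; // governance and payment processes are never flushed
}

function flushCandidates(procs: Proc[], availableMb: number): string[] {
  const picks: string[] = [];
  if (availableMb >= SYSTEM_FLOOR_MB) return picks; // within budget, do nothing

  // Largest non-critical consumers first.
  const sorted = procs
    .filter(p => !p.critical)
    .sort((a, b) => b.memoryMb - a.memoryMb);

  let reclaimedMb = 0;
  for (const p of sorted) {
    if (availableMb + reclaimedMb >= SYSTEM_FLOOR_MB) break;
    picks.push(p.name);
    reclaimedMb += p.memoryMb;
  }
  return picks;
}
```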

&lt;h3&gt;3. Thin Soul / Thick Soul Architecture&lt;/h3&gt;

&lt;p&gt;Not all agents need the same resources. We use a two-tier approach:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thin soul agents&lt;/strong&gt; run lightweight - minimal context, fast responses, low memory. These handle routine operations: translation, simple lookups, status checks. They stay under 65MB and restart cleanly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thick soul agents&lt;/strong&gt; maintain rich context - conversation history, user preferences, session state. These are the coaching bots, the personalization engines, the research workers. They run at 75-90MB and need careful memory management.&lt;/p&gt;

&lt;p&gt;The distinction matters for cost control. Every LLM API call costs money. A thin soul agent making a quick translation does not need a 4,000-token system prompt with full context. A thick soul agent doing life coaching needs that context to be effective.&lt;/p&gt;

&lt;p&gt;By matching the soul size to the task, we keep our total LLM API costs under $300/month for 13+ active agents. That is roughly $23 per agent per month for full LLM-powered operation.&lt;/p&gt;

&lt;h3&gt;4. Protected File Enforcement&lt;/h3&gt;

&lt;p&gt;Our system has 89 files marked UNTOUCHABLE - core bot logic, configuration files, governance rules. No automated process can modify them. Period.&lt;/p&gt;

&lt;p&gt;A second tier of PROTECTED files (critical operational code) requires explicit human approval for any change. Every access attempt is logged, whether it succeeds or not.&lt;/p&gt;

&lt;p&gt;This prevents a common failure mode in multi-agent systems: Agent A decides to "fix" a configuration file that Agent B depends on, breaking Agent B, which triggers Agent C's error handler, which overwrites its own config trying to recover. Cascade failure from a helpful agent.&lt;/p&gt;

&lt;p&gt;Protected files break the cascade. No agent can start the chain.&lt;/p&gt;
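&lt;p&gt;In sketch form, with the two tier names from above; the enforcement logic shown here is an assumption about how such a guard could look, not our actual implementation:&lt;/p&gt;

```typescript
// Illustrative two-tier file guard. UNTOUCHABLE: never writable by
// automation. PROTECTED: writable only with explicit human approval.
// Every attempt is logged either way.
type Tier = "UNTOUCHABLE" | "PROTECTED" | "OPEN";

const accessLog: string[] = [];

function canModify(file: string, tier: Tier, humanApproved: boolean): boolean {
  const allowed = tier === "OPEN" || (tier === "PROTECTED" && humanApproved);
  accessLog.push(`${file}: ${tier} write ${allowed ? "allowed" : "denied"}`);
  return allowed;
}

console.log(canModify("bots/lily/config.json", "UNTOUCHABLE", true)); // false, always
```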

&lt;h3&gt;5. Audit Chain&lt;/h3&gt;

&lt;p&gt;Every governance action gets logged to an append-only audit trail:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Process restarts (manual and automatic)&lt;/li&gt;
&lt;li&gt;Drift detections and resolutions&lt;/li&gt;
&lt;li&gt;Configuration changes&lt;/li&gt;
&lt;li&gt;Kill switch activations&lt;/li&gt;
&lt;li&gt;Memory threshold violations&lt;/li&gt;
&lt;li&gt;Protected file access attempts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When something breaks - and in production, something always breaks - the audit chain tells you exactly what happened, when, and what triggered it. No guessing. No "well, I think someone might have changed..."&lt;/p&gt;
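&lt;p&gt;A hash-chained append-only log of this kind can be sketched in a few lines of TypeScript. This is illustrative, not the package's actual implementation:&lt;/p&gt;

```typescript
import { createHash } from "crypto";

// Minimal hash-chained audit log: each entry's hash covers the previous
// entry's hash, so editing any past entry invalidates everything after it.
interface AuditEntry {
  action: string;
  ts: number;
  prevHash: string;
  hash: string;
}

const chain: AuditEntry[] = [];

function appendAudit(action: string): AuditEntry {
  const prevHash = chain.length > 0 ? chain[chain.length - 1].hash : "GENESIS";
  const ts = Date.now();
  const hash = createHash("sha256").update(prevHash + action + String(ts)).digest("hex");
  const entry = { action, ts, prevHash, hash };
  chain.push(entry);
  return entry;
}

// Recompute every hash from the start; any tampering breaks the chain.
function verifyChain(entries: AuditEntry[]): boolean {
  return entries.every((e, i) => {
    const expectPrev = i === 0 ? "GENESIS" : entries[i - 1].hash;
    const recomputed = createHash("sha256").update(expectPrev + e.action + String(e.ts)).digest("hex");
    return e.prevHash === expectPrev && e.hash === recomputed;
  });
}
```

&lt;p&gt;The point of the chain is tamper evidence: an attacker (or a "helpful" agent) cannot quietly rewrite history without invalidating every later entry.&lt;/p&gt;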

&lt;h2&gt;The Economics&lt;/h2&gt;

&lt;p&gt;Here is the full monthly cost breakdown:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;VPS (4GB RAM, shared CPU)&lt;/td&gt;
&lt;td&gt;$12&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LLM API (Anthropic Max plan)&lt;/td&gt;
&lt;td&gt;$300&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vercel (web hosting)&lt;/td&gt;
&lt;td&gt;$20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Calendly (scheduling)&lt;/td&gt;
&lt;td&gt;$20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$352/mo&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For $352/month, we run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;13+ active LLM-powered agents&lt;/li&gt;
&lt;li&gt;Multi-platform presence (Telegram, Instagram, Web)&lt;/li&gt;
&lt;li&gt;Full governance and audit infrastructure&lt;/li&gt;
&lt;li&gt;Self-healing crash recovery&lt;/li&gt;
&lt;li&gt;Rate limiting and API management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Compare this to the typical "enterprise AI" deployment: dedicated GPU instances, Kubernetes clusters, multiple monitoring SaaS subscriptions, dedicated DevOps team. Those run $5,000-$50,000/month for similar capability.&lt;/p&gt;

&lt;p&gt;We are not saying our approach works for everyone. High-traffic applications need horizontal scaling. Latency-critical systems need dedicated compute. But for a startup building and validating LLM agents? $352/month buys you a lot of runway.&lt;/p&gt;

&lt;h2&gt;Lessons Learned&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Restart counts are your most important metric.&lt;/strong&gt; Not CPU, not memory, not latency. Restart count over time tells you whether your infrastructure is stable or just pretending to be.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Memory budgets are non-negotiable.&lt;/strong&gt; Without hard limits, one misbehaving agent will consume all available RAM and take down every other process on the host. Set ceilings. Enforce them automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Protected files prevent cascade failures.&lt;/strong&gt; In multi-agent systems, the most dangerous agent is the helpful one. Lock down critical files so no agent can "fix" them without human approval.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Governance is not monitoring.&lt;/strong&gt; Monitoring tells you what is happening. Governance tells you what should be happening and enforces the difference. Build governance first, monitoring second.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Start small, stay small.&lt;/strong&gt; We could run on bigger hardware. We choose not to. Resource constraints force good architecture. When you have 4GB to share across 28 processes, you build efficient systems or you build nothing.&lt;/p&gt;

&lt;h2&gt;What Is Next&lt;/h2&gt;

&lt;p&gt;We are open-sourcing the governance layer as the Nervous System MCP server. It is already available on npm and GitHub:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; github.com/levelsofself/mcp-nervous-system&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;npm:&lt;/strong&gt; &lt;code&gt;npm install @levelsofself/mcp-nervous-system&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are running multiple LLM agents in production - or planning to - you need a governance layer before you need another feature. Build the immune system before you build more organs.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Roman Palyan writes about production AI infrastructure at Levels of Self, a family-run startup where 12 family members each have their own LLM-powered agent. The whole system runs on one VPS because constraints breed innovation.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>infrastructure</category>
      <category>devops</category>
      <category>startup</category>
    </item>
    <item>
      <title>I Built a Governance System for AI Agents. It Started With Governing Myself.</title>
      <dc:creator>Arthur Palyan</dc:creator>
      <pubDate>Thu, 02 Apr 2026 15:28:35 +0000</pubDate>
      <link>https://forem.com/levelsofself/i-built-a-governance-system-for-ai-agents-it-started-with-governing-myself-1dmd</link>
      <guid>https://forem.com/levelsofself/i-built-a-governance-system-for-ai-agents-it-started-with-governing-myself-1dmd</guid>
      <description>&lt;p&gt;&lt;strong&gt;60 hours. No food. No water.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first 24 hours were brutal. Every part of me wanted to quit. I had not fasted since August of last year, when I did a six-day water fast. This time I added the hardest constraint possible: no water either.&lt;/p&gt;

&lt;p&gt;By hour 48 the pain started leaving my body. By hour 60, something shifted. My sense of self became sharper than it has been in months. The weight I had been carrying, physical and otherwise, started to move.&lt;/p&gt;

&lt;h2&gt;Why does this matter to an AI company?&lt;/h2&gt;

&lt;p&gt;Because governance starts with the self.&lt;/p&gt;

&lt;p&gt;At Levels Of Self, we build the governance layer for AI agent systems. Our Nervous System framework enforces boundaries, tracks every action, and stops agents before they drift. It is the control layer that makes autonomous AI safe to deploy.&lt;/p&gt;

&lt;p&gt;But here is what nobody in the AI governance space talks about: you cannot build real governance for machines if you have not practiced it on yourself.&lt;/p&gt;

&lt;h2&gt;The Parallel&lt;/h2&gt;

&lt;p&gt;A dry fast is governance in its purest form. You set a constraint. Your body screams at you to break it. Every signal, every craving, every ounce of discomfort is your own internal system testing the boundary. And you hold.&lt;/p&gt;

&lt;p&gt;That is exactly what our Nervous System does for AI agents. It sets the boundary. The agent pushes against it. The system holds. Every violation is logged, every escalation is tracked, and when the threshold is crossed, the session is killed. No negotiation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;Everyone in AI right now is building a new brain. Faster models. Bigger context windows. More capabilities. We are building the nervous system. The thing that makes the brain safe to operate.&lt;/p&gt;

&lt;p&gt;And it started with the first level of self. Me.&lt;/p&gt;

&lt;p&gt;With all the noise in the world right now, the algorithms, the panic, the posturing, I decided to start where it actually matters. Not with another feature. Not with another pitch deck. With discipline. With stillness. With proving to myself that the system I am building for machines is the same system I live by.&lt;/p&gt;

&lt;p&gt;If you are building AI systems that need real boundaries, not guidelines that agents can talk their way around, we should talk.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Arthur Palyan&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Founder, Levels Of Self&lt;/em&gt;&lt;br&gt;
&lt;em&gt;&lt;a href="https://www.levelsofself.com" rel="noopener noreferrer"&gt;levelsofself.com&lt;/a&gt;&lt;/em&gt;&lt;br&gt;
&lt;em&gt;&lt;a href="https://www.npmjs.com/package/mcp-nervous-system" rel="noopener noreferrer"&gt;Nervous System on npm&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Originally published on &lt;a href="https://www.levelsofself.com/post/i-built-a-governance-system-for-ai-agents-it-started-with-governing-myself" rel="noopener noreferrer"&gt;Levels Of Self Blog&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>governance</category>
      <category>startup</category>
      <category>leadership</category>
    </item>
    <item>
      <title>How We're Approaching a County-Level Education Data System Engagement</title>
      <dc:creator>Arthur Palyan</dc:creator>
      <pubDate>Wed, 01 Apr 2026 22:24:30 +0000</pubDate>
      <link>https://forem.com/levelsofself/how-were-approaching-a-county-level-education-data-system-engagement-ndk</link>
      <guid>https://forem.com/levelsofself/how-were-approaching-a-county-level-education-data-system-engagement-ndk</guid>
      <description>&lt;p&gt;When Los Angeles County needs to evaluate whether a multi-agency data system serving foster youth should be modernized or replaced, the work sits at the intersection of technology, policy, and people. That's exactly where we operate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Opportunity
&lt;/h2&gt;

&lt;p&gt;The LA County Office of Child, Youth, and Family Well-Being is looking for a consulting team to analyze the Education Passport System (EPS), a shared data platform that connects 80+ school districts with the Department of Children and Family Services and the Probation Department. The system exists to ensure that when a foster youth moves between placements, their education records follow them.&lt;/p&gt;

&lt;p&gt;The question on the table: does the current system meet the needs of all stakeholders, or is it time to move to something new?&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Work Involves
&lt;/h2&gt;

&lt;p&gt;This is a 12-month engagement with five major deliverables:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Needs Assessment and Gap Analysis&lt;/strong&gt; - Working directly with LACOE, DCFS, Probation, school districts, and child welfare advocates to map what the system does today versus what every stakeholder actually needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Comparative Analysis&lt;/strong&gt; - Examining what other California counties and out-of-state jurisdictions have built, what works, what failed, and whether any existing platforms could serve LA County better than what they have.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;State-Level Assessment&lt;/strong&gt; - Identifying where state policy, legislation, or system architecture is creating barriers to local implementation. CALPADS, CWS-CARES, and the patchwork of state data systems all play a role.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommendations&lt;/strong&gt; - Delivering a written report and presentation with concrete options, each with cost estimates, staffing requirements, implementation timelines, and trade-offs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stakeholder Vetting&lt;/strong&gt; - Presenting recommendations to county leadership, school districts, charter schools, and Board of Supervisors offices, incorporating feedback, and finalizing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;There are roughly 30,000 children in the LA County foster care system at any given time. When a child moves placements, which happens frequently, their education records need to follow them immediately. Credits need to transfer. IEPs need to be accessible. Enrollment needs to happen without delay.&lt;/p&gt;

&lt;p&gt;When the data system works, a child doesn't lose a semester. When it doesn't, they fall behind in ways that compound across their entire life.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where We Fit
&lt;/h2&gt;

&lt;p&gt;Our team brings the AI governance and systems analysis layer. We specialize in evaluating how organizations use technology, whether those systems are governed properly, and what it takes to modernize without breaking what already works.&lt;/p&gt;

&lt;p&gt;For this engagement, we're assembling a small, focused team. We're looking for 1-2 strategic partners who bring direct experience with child welfare data systems, K-12 education data infrastructure, or multi-agency government data interoperability at the county or state level.&lt;/p&gt;

&lt;p&gt;This is not a general call for collaboration. We're selecting operators who have been inside these systems, not just studied them.&lt;/p&gt;

&lt;h2&gt;
  
  
  If This Is Your World
&lt;/h2&gt;

&lt;p&gt;If you've worked directly with foster youth education systems, child welfare data platforms, CALPADS, CWS-CARES, or similar infrastructure at the county or state level, we should talk. The deadline for this engagement is April 24, 2026. We're moving now.&lt;/p&gt;

&lt;p&gt;Reach out directly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Email: &lt;a href="mailto:ArtPalyan@LevelsOfSelf.com"&gt;ArtPalyan@LevelsOfSelf.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;WhatsApp: &lt;a href="https://wa.me/18184399770" rel="noopener noreferrer"&gt;wa.me/18184399770&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Schedule a call: &lt;a href="https://calendly.com/levelsofself/zoom" rel="noopener noreferrer"&gt;calendly.com/levelsofself/zoom&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Arthur Palyan - Founder, Levels of Self - AI Governance and Systems Analysis - &lt;a href="https://levelsofself.com" rel="noopener noreferrer"&gt;levelsofself.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Canonical URL: &lt;a href="https://www.levelsofself.com/post/how-we-re-approaching-a-county-level-education-data-system-engagement" rel="noopener noreferrer"&gt;https://www.levelsofself.com/post/how-we-re-approaching-a-county-level-education-data-system-engagement&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>government</category>
      <category>consulting</category>
      <category>education</category>
    </item>
    <item>
      <title>We Built the Governance Layer AI Agent Systems Need in Regulated Environments</title>
      <dc:creator>Arthur Palyan</dc:creator>
      <pubDate>Wed, 01 Apr 2026 05:40:07 +0000</pubDate>
      <link>https://forem.com/levelsofself/we-built-the-governance-layer-ai-agent-systems-need-in-regulated-environments-1f96</link>
      <guid>https://forem.com/levelsofself/we-built-the-governance-layer-ai-agent-systems-need-in-regulated-environments-1f96</guid>
      <description>&lt;h1&gt;
  
  
  We Built the Governance Layer AI Agent Systems Need in Regulated Environments
&lt;/h1&gt;

&lt;p&gt;Every enterprise deploying AI agents faces the same question: how do you prove what happened?&lt;/p&gt;

&lt;p&gt;Not what the agent was supposed to do. What it actually did. Which agent. In which session. Under whose authority. At what cost. And whether it was allowed.&lt;/p&gt;

&lt;p&gt;This is the governance gap. Agent systems are powerful. Agent governance barely exists outside vendor walls.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem is not capability
&lt;/h2&gt;

&lt;p&gt;Modern AI agents can write code, query databases, send emails, manage files, and coordinate with other agents. The tooling is real. The autonomy is increasing.&lt;/p&gt;

&lt;p&gt;But autonomy without attribution is a compliance failure. In banking, healthcare, government, and any regulated industry, you cannot deploy systems that act without provable identity, auditable trails, and cost accountability.&lt;/p&gt;

&lt;h2&gt;
  
  
  What external governance looks like
&lt;/h2&gt;

&lt;p&gt;We built the Nervous System to close this gap. It is an external governance layer that wraps any agent system. Here is what it enforces today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Every request is attributed.&lt;/strong&gt; When an agent makes a tool call, it must identify itself: agent ID, session ID, organization ID. These are HTTP headers, checked on every POST request. Missing headers mean a 403 rejection. No exceptions. The audit database records exactly which agent acted, in which session, under which organization. This is not optional logging. It is enforced at the API boundary.&lt;/p&gt;
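&lt;p&gt;As a rough sketch (not the actual Nervous System code, and with illustrative header names), the fail-closed check looks like this:&lt;/p&gt;

```python
# Minimal fail-closed identity check. Header names are illustrative,
# not the real Nervous System header schema.
REQUIRED_HEADERS = ("X-Agent-Id", "X-Session-Id", "X-Org-Id")

def check_identity(headers):
    """Reject any request missing an identity header (fail closed)."""
    missing = [h for h in REQUIRED_HEADERS if not headers.get(h)]
    if missing:
        # Rejected before any tool call executes; no warning path exists
        return 403, {"error": "missing identity headers", "missing": missing}
    # Full attribution is available for the audit record
    return 200, {
        "agent_id": headers["X-Agent-Id"],
        "session_id": headers["X-Session-Id"],
        "org_id": headers["X-Org-Id"],
    }
```

&lt;p&gt;A request with all three headers passes through with full attribution. Anything less is a 403, never a warning.&lt;/p&gt;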

&lt;p&gt;&lt;strong&gt;Every action has a cost record.&lt;/strong&gt; Token usage and estimated cost are logged per agent per session. You can query total spend for any agent over any time period. This is what internal billing requires. It is what budget enforcement requires. And it is what any serious financial review will ask for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agents coordinate through a shared task board.&lt;/strong&gt; Tasks are created, claimed, and completed through API endpoints. State transitions are deterministic: open to claimed to completed. Any observer can see who is working on what and what finished. This moves multi-agent deployments from opaque parallel processes to auditable workflows.&lt;/p&gt;
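&lt;p&gt;A hedged sketch of that state machine (the production task board lives behind API endpoints; the class and field names here are illustrative):&lt;/p&gt;

```python
# Deterministic task state machine: open -> claimed -> completed.
# Any transition outside this map is rejected.
VALID_TRANSITIONS = {"open": {"claimed"}, "claimed": {"completed"}}

class TaskBoard:
    def __init__(self):
        self.tasks = {}

    def create(self, task_id, description):
        self.tasks[task_id] = {"state": "open", "description": description, "claimed_by": None}

    def transition(self, task_id, new_state, agent_id):
        task = self.tasks[task_id]
        if new_state not in VALID_TRANSITIONS.get(task["state"], set()):
            raise ValueError("illegal transition: " + task["state"] + " to " + new_state)
        if task["state"] == "claimed" and task["claimed_by"] != agent_id:
            raise ValueError("only the claiming agent may complete a task")
        task["state"] = new_state
        task["claimed_by"] = agent_id
```

&lt;p&gt;Because the transition map is explicit, an observer can always reconstruct who held a task and how it reached its current state.&lt;/p&gt;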

&lt;p&gt;&lt;strong&gt;Dangerous actions are blocked before execution.&lt;/strong&gt; The governance layer includes a policy engine with escalation. First violation gets a warning. Second gets a strike. Third terminates the agent session. Two high-risk violations in the same category trigger immediate termination. This is not monitoring. This is enforcement.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this cannot live inside the vendor
&lt;/h2&gt;

&lt;p&gt;Vendor-internal governance protects the vendor. External governance protects the customer.&lt;/p&gt;

&lt;p&gt;When your organization deploys AI agents, you need to control the policy. You need to own the audit trail. You need to set the cost limits. You need to decide what gets blocked and what gets through.&lt;/p&gt;

&lt;p&gt;That requires a layer you operate, not one embedded inside someone else's product. The Nervous System is that layer. It is vendor-agnostic, runs on your infrastructure, and enforces your policies.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this enables
&lt;/h2&gt;

&lt;p&gt;With external governance in place, organizations can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy AI agents in regulated environments with provable attribution&lt;/li&gt;
&lt;li&gt;Satisfy audit requirements for SOC 2, FedRAMP, NIST AI RMF, and the EU AI Act&lt;/li&gt;
&lt;li&gt;Track and allocate AI operational costs by team, project, or agent&lt;/li&gt;
&lt;li&gt;Coordinate multi-agent workflows with observable state&lt;/li&gt;
&lt;li&gt;Kill any agent session instantly if it violates policy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The entire stack runs on a single server for under fifty dollars a month. It scales with the number of policies and agents, not with expensive infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  The line between framework and product
&lt;/h2&gt;

&lt;p&gt;A framework describes how things should work. A product enforces how they actually work.&lt;/p&gt;

&lt;p&gt;Our governance layer enforces identity at request time, records attribution in audit, tracks cost by agent and session, and coordinates work through a shared task board. These are not architectural diagrams. They are running endpoints that reject, log, and enforce.&lt;/p&gt;

&lt;p&gt;That is the difference between talking about governance and providing it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The Nervous System MCP is available on npm and GitHub. For regulated deployment discussions: &lt;a href="mailto:ArtPalyan@LevelsOfSelf.com"&gt;ArtPalyan@LevelsOfSelf.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>enterprise</category>
      <category>governance</category>
    </item>
    <item>
      <title>What Claude Code's Architecture Reveals About the Missing Governance Layer for AI Agents</title>
      <dc:creator>Arthur Palyan</dc:creator>
      <pubDate>Wed, 01 Apr 2026 05:39:57 +0000</pubDate>
      <link>https://forem.com/levelsofself/what-claude-code-architecture-reveals-about-the-missing-governance-layer-for-ai-agents-1k62</link>
      <guid>https://forem.com/levelsofself/what-claude-code-architecture-reveals-about-the-missing-governance-layer-for-ai-agents-1k62</guid>
      <description>&lt;h1&gt;
  
  
  What Claude Code's Architecture Reveals About the Missing Governance Layer for AI Agents
&lt;/h1&gt;

&lt;p&gt;The source code of the most widely used AI coding agent leaked today. Within hours, thousands of developers were studying its internals: how it manages tools, coordinates sub-agents, tracks sessions, and handles permissions.&lt;/p&gt;

&lt;p&gt;The architecture is impressive. But the most important takeaway is not what it contains. It is what it proves is missing from the ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the source reveals
&lt;/h2&gt;

&lt;p&gt;Underneath the interface, every serious agent system converges on the same set of patterns. The leaked codebase confirms this. There are pre-execution hooks that intercept tool calls before they run. There is a multi-agent coordinator with shared task boards and async messaging. There is session-level identity tracking. There is cost tracking per turn. There are kill switches that can terminate agent sessions remotely.&lt;/p&gt;

&lt;p&gt;These are not features. They are governance primitives. And right now, they only exist inside the vendor stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why internal governance is not enough
&lt;/h2&gt;

&lt;p&gt;When an enterprise deploys AI agents in production, the agents interact with real systems: databases, email servers, file systems, APIs, customer data. The question is not whether the agent can act. The question is whether the organization can prove who acted, what they did, what it cost, and whether it was authorized.&lt;/p&gt;

&lt;p&gt;Internal hooks solve this for the vendor. They do not solve it for the customer.&lt;/p&gt;

&lt;p&gt;A bank deploying AI agents needs to prove to regulators that every action was attributed to a specific agent, session, and organization. A government agency needs audit trails that satisfy NIST AI RMF and EO 14110. An enterprise needs cost visibility per agent and per session for internal billing and oversight.&lt;/p&gt;

&lt;p&gt;None of that is available externally today. The governance primitives exist inside the vendor. They do not exist as an independent layer the customer controls.&lt;/p&gt;

&lt;h2&gt;
  
  
  The three controls that matter
&lt;/h2&gt;

&lt;p&gt;After studying what leading agent systems build internally and what enterprises actually need externally, the missing layer comes down to three capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identity enforcement.&lt;/strong&gt; Every request to the governance layer must carry agent identity, session identity, and organization identity. If those headers are missing, the request is rejected. Not logged. Not warned. Rejected. This is fail-closed enforcement, not optional metadata. The audit trail then captures exactly which agent in which session under which organization performed each action.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost visibility.&lt;/strong&gt; Every tool call, every LLM invocation, every agent session needs a cost record. Not approximate. Not retroactive. Logged at the time of execution, queryable by agent, by session, by date range. This is what makes internal billing possible. It is what makes budget enforcement real. And it is what investors and regulators ask for when they want to understand operational economics.&lt;/p&gt;
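&lt;p&gt;In miniature, a cost ledger with those properties might look like this (the field names and filters are illustrative, not the real schema):&lt;/p&gt;

```python
# Per-agent, per-session cost records, written at execution time and
# queryable afterwards. Illustrative sketch, not the production code.
from datetime import datetime, timezone

ledger = []

def log_cost(agent_id, session_id, tokens, usd):
    ledger.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "session_id": session_id,
        "tokens": tokens,
        "usd": usd,
    })

def spend(agent_id=None, session_id=None):
    """Total spend, filterable by agent and/or session."""
    return round(sum(
        r["usd"] for r in ledger
        if (agent_id is None or r["agent_id"] == agent_id)
        and (session_id is None or r["session_id"] == session_id)
    ), 6)
```

&lt;p&gt;The point is that the record is created when the call executes, not reconstructed later from logs.&lt;/p&gt;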

&lt;p&gt;&lt;strong&gt;Shared coordination.&lt;/strong&gt; Agents operating in parallel need a shared task board: create a task, claim it, complete it, report the result. Without this, multi-agent systems are just parallel scripts. With it, agents become a coordinated workforce with observable state transitions. The difference matters when someone asks: who is working on what, and what is the status?&lt;/p&gt;

&lt;h2&gt;
  
  
  What we built
&lt;/h2&gt;

&lt;p&gt;We have been building the Nervous System, an open governance layer for AI agent systems, for months. It sits between any agent runtime and the actions agents take. Every tool call gets intercepted, checked against policy, and logged before execution. Dangerous actions are blocked in real time.&lt;/p&gt;

&lt;p&gt;As of today, the layer includes all three controls described above. Identity enforcement is live: requests without agent and session headers are rejected with a 403. Cost tracking is live: every call can be logged with token counts and estimated cost, queryable by agent or session. The shared task board is live: agents create, claim, and complete tasks through a coordination API with observable state transitions.&lt;/p&gt;

&lt;p&gt;This runs on a single server. The entire governance stack, the policy engine, the audit database, the task board, the escalation governor, costs under fifty dollars a month to operate. It is vendor-agnostic. It works with any agent system that makes tool calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verified proof
&lt;/h2&gt;

&lt;p&gt;In our system today:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requests without identity headers are rejected (HTTP 403)&lt;/li&gt;
&lt;li&gt;Requests with valid identity headers are accepted and logged&lt;/li&gt;
&lt;li&gt;Every audit entry captures the exact org, agent, and session that acted&lt;/li&gt;
&lt;li&gt;Cost is tracked per agent and per session, queryable in real time&lt;/li&gt;
&lt;li&gt;Task state transitions (open, claimed, completed) are observable through the API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not a design document. These are running endpoints on a live server governing 13 agents.&lt;/p&gt;

&lt;h2&gt;
  
  
  The category
&lt;/h2&gt;

&lt;p&gt;This is not an agent framework. It is not a chatbot platform. It is not a competitor to any agent runtime.&lt;/p&gt;

&lt;p&gt;It is the external governance layer that agent deployments need in regulated, auditable, enterprise environments.&lt;/p&gt;

&lt;p&gt;Agent systems are getting better at acting. They are still weak at being governed. The missing layer is identity, cost, and coordination, provided externally, controlled by the organization deploying agents, not by the vendor building them.&lt;/p&gt;

&lt;p&gt;We built that layer. It is open. It is running. And it is ready for the environments where governance is not optional.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The Nervous System is available as an MCP server on npm and on GitHub. Contact: &lt;a href="mailto:ArtPalyan@LevelsOfSelf.com"&gt;ArtPalyan@LevelsOfSelf.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>governance</category>
      <category>agents</category>
      <category>opensource</category>
    </item>
    <item>
      <title>YAML Policies and SQLite Audit Trails - What Production AI Governance Actually Looks Like</title>
      <dc:creator>Arthur Palyan</dc:creator>
      <pubDate>Thu, 26 Mar 2026 17:40:52 +0000</pubDate>
      <link>https://forem.com/levelsofself/yaml-policies-and-sqlite-audit-trails-what-production-ai-governance-actually-looks-like-16p8</link>
      <guid>https://forem.com/levelsofself/yaml-policies-and-sqlite-audit-trails-what-production-ai-governance-actually-looks-like-16p8</guid>
      <description>&lt;p&gt;Most AI governance conversations stop at "we log everything." That is observability, not governance. Observability tells you what happened after the fact. Governance stops the bad thing before it executes.&lt;/p&gt;

&lt;p&gt;Today we shipped two features that make that distinction concrete: a YAML policy engine and a SQLite audit brain. Here is what they do and why they matter.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;We run 13 AI agents in production. Each agent has different permissions, different risk levels, and different access needs. A bookkeeper agent should never call external APIs. A legal counsel agent should have stricter escalation rules than a content writer. A translator does not need write access to financial files.&lt;/p&gt;

&lt;p&gt;Hardcoding these rules inside the governance server works at 3 agents. It breaks at 13. It is impossible at 100.&lt;/p&gt;

&lt;h2&gt;
  
  
  YAML Policy Engine
&lt;/h2&gt;

&lt;p&gt;Every agent now has a policy file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;/governance/policies/&lt;/span&gt;
    &lt;span class="s"&gt;global.yaml&lt;/span&gt;              &lt;span class="c1"&gt;# defaults for everyone&lt;/span&gt;
    &lt;span class="s"&gt;roles/&lt;/span&gt;
        &lt;span class="s"&gt;bookkeeper.yaml&lt;/span&gt;      &lt;span class="c1"&gt;# role-level rules&lt;/span&gt;
        &lt;span class="s"&gt;legal.yaml&lt;/span&gt;
        &lt;span class="s"&gt;coach.yaml&lt;/span&gt;
        &lt;span class="s"&gt;...10 roles total&lt;/span&gt;
    &lt;span class="s"&gt;agents/&lt;/span&gt;
        &lt;span class="s"&gt;harry.yaml&lt;/span&gt;            &lt;span class="c1"&gt;# agent-specific overrides&lt;/span&gt;
        &lt;span class="s"&gt;aram.yaml&lt;/span&gt;
        &lt;span class="s"&gt;tamara.yaml&lt;/span&gt;
        &lt;span class="s"&gt;...13 agents total&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Resolution is deterministic: global defaults, then role policy, then agent override. Agent-level wins.&lt;/p&gt;
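&lt;p&gt;A minimal sketch of that resolution order, as a shallow key-by-key merge (the real engine may handle nested keys differently; the policy keys here are illustrative):&lt;/p&gt;

```python
# Three-tier policy resolution: global defaults, then role policy,
# then agent override. The later layer wins, key by key.
def resolve_policy(global_policy, role_policy, agent_policy):
    merged = {}
    for layer in (global_policy, role_policy, agent_policy):
        merged.update(layer)
    return merged
```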

&lt;p&gt;A policy file looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.0"&lt;/span&gt;
&lt;span class="na"&gt;agent_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;harry-bookkeeper&lt;/span&gt;
&lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bookkeeper&lt;/span&gt;

&lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;allowed&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Bash&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Read&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Write&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;calculator&lt;/span&gt;
  &lt;span class="na"&gt;denied&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;WebFetch&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ExternalAPI&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;email-send&lt;/span&gt;

&lt;span class="na"&gt;escalation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;strict&lt;/span&gt;
  &lt;span class="na"&gt;kill_threshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;high_risk_kill&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;

&lt;span class="na"&gt;runtime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;max_runtime_minutes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;
  &lt;span class="na"&gt;max_cost_usd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1.00&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When Harry tries to call WebFetch, the governor checks his policy and blocks it. No code change required. Edit the YAML, reload, done.&lt;/p&gt;
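&lt;p&gt;The check itself is small. A sketch against the allow and deny lists from the policy above (not the actual governor code):&lt;/p&gt;

```python
# Deny wins, and anything not explicitly allowed is rejected (fail closed).
policy = {
    "tools": {
        "allowed": ["Bash", "Read", "Write", "calculator"],
        "denied": ["WebFetch", "ExternalAPI", "email-send"],
    }
}

def tool_permitted(policy, tool):
    tools = policy["tools"]
    if tool in tools["denied"]:
        return False
    return tool in tools["allowed"]
```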

&lt;p&gt;This is what governance looks like when you need to scale: configuration, not code.&lt;/p&gt;

&lt;h2&gt;
  
  
  SQLite Audit Brain
&lt;/h2&gt;

&lt;p&gt;Every decision the governor makes now writes to a persistent SQLite database. Not a log file. A queryable, indexable, exportable database.&lt;/p&gt;

&lt;p&gt;Schema:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;audit_events&lt;/code&gt; - every allow/deny decision with agent ID, session, tool, policy rule, risk category, escalation level&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;agents&lt;/code&gt; - registered agents with last seen timestamps&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;heartbeats&lt;/code&gt; - agent health state&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;killswitch_events&lt;/code&gt; - every forced termination with reason&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;policy_versions&lt;/code&gt; - which policy was active when&lt;/li&gt;
&lt;/ul&gt;
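&lt;p&gt;In miniature, the audit_events table and the denied-events query look something like this (column names follow the schema description above, not the exact production DDL):&lt;/p&gt;

```python
# Sketch of the audit_events table plus a denied-events query,
# using an in-memory database for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE audit_events (
        id INTEGER PRIMARY KEY,
        agent_id TEXT, session_id TEXT, tool TEXT,
        policy_rule TEXT, risk_category TEXT,
        escalation_level TEXT, allowed INTEGER,
        ts TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
db.execute(
    "INSERT INTO audit_events "
    "(agent_id, session_id, tool, policy_rule, risk_category, escalation_level, allowed) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("harry", "s1", "WebFetch", "policy.tool_denied", "network", "warn", 0),
)
# Roughly what the /ns/audit/query endpoint returns for agent_id=harry, allowed=0
denied = db.execute(
    "SELECT tool, policy_rule FROM audit_events WHERE agent_id = ? AND allowed = 0",
    ("harry",),
).fetchall()
```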

&lt;p&gt;You can query it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;GET /ns/audit/query?agent_id=harry&amp;amp;allowed=0
GET /ns/audit/stats
GET /ns/audit/export?format=json
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The stats endpoint right now returns real data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"total_events"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;77&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"allowed"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;69&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"denied"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"kill_events"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"by_category"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"category"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"filesystem.delete"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"count"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"category"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"policy.tool_denied"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"count"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is not a demo. That is production data from a live system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Combination Matters
&lt;/h2&gt;

&lt;p&gt;YAML policies without audit trails are unprovable. You changed a policy, but can you prove what was active when an incident happened? No.&lt;/p&gt;

&lt;p&gt;Audit trails without externalized policies are inflexible. Every change requires a code deploy. At scale, that is a bottleneck that becomes a liability.&lt;/p&gt;

&lt;p&gt;Together, they give you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configurable governance (YAML)&lt;/li&gt;
&lt;li&gt;Persistent proof (SQLite)&lt;/li&gt;
&lt;li&gt;Queryable history (API endpoints)&lt;/li&gt;
&lt;li&gt;No code changes for policy updates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are deploying AI agents in a regulated industry - banking, healthcare, government, legal - this is the minimum viable governance layer. Anything less is a compliance gap waiting to be discovered.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;NS Governor v2.1 (Python, 749 lines)&lt;/li&gt;
&lt;li&gt;SQLite audit module (347 lines)&lt;/li&gt;
&lt;li&gt;YAML policy engine (212 lines)&lt;/li&gt;
&lt;li&gt;24 policy files&lt;/li&gt;
&lt;li&gt;3 Claude Code hooks (pre-tool, post-tool, stop)&lt;/li&gt;
&lt;li&gt;Running on a $48/month VPS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Open source: &lt;a href="https://www.npmjs.com/package/mcp-nervous-system" rel="noopener noreferrer"&gt;npmjs.com/package/mcp-nervous-system&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Arthur Palyan builds AI governance infrastructure at Levels Of Self. The Nervous System MCP is production-deployed and listed in the Anthropic MCP directory.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>governance</category>
      <category>sqlite</category>
      <category>yaml</category>
    </item>
    <item>
      <title>Why We Built a Nervous System for AI Agents Before Anyone Shipped Hooks</title>
      <dc:creator>Arthur Palyan</dc:creator>
      <pubDate>Thu, 26 Mar 2026 17:39:46 +0000</pubDate>
      <link>https://forem.com/levelsofself/why-we-built-a-nervous-system-for-ai-agents-before-anyone-shipped-hooks-3dnf</link>
      <guid>https://forem.com/levelsofself/why-we-built-a-nervous-system-for-ai-agents-before-anyone-shipped-hooks-3dnf</guid>
      <description>&lt;p&gt;In February 2026, we had 13 AI agents running on a single VPS. They managed email, filed grants, coached users, processed legal documents, built partner packages, and scanned government portals. All autonomous. All running 24/7.&lt;/p&gt;

&lt;p&gt;There was no governance layer anywhere in the ecosystem. No tool that sat between the model and execution and said "should this action be allowed?" The options were: trust the model, or build something yourself.&lt;/p&gt;

&lt;p&gt;We built something ourselves. We called it the Nervous System.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture Decision
&lt;/h2&gt;

&lt;p&gt;The insight was simple: if you have an agent that can execute bash commands, edit files, and send emails, you need a layer that intercepts every action before it runs. Not after. Before.&lt;/p&gt;

&lt;p&gt;We modeled it on biology. A nervous system does not make decisions. It carries signals, enforces reflexes, and stops you from putting your hand on a hot stove before your brain finishes processing. The governance layer does the same thing for AI agents.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Task arrives -&amp;gt; NS validates against policy -&amp;gt; Runtime executes
Sub-agent spawns -&amp;gt; NS registers + applies rules -&amp;gt; Sub-agent runs
Tool call -&amp;gt; NS checks permissions -&amp;gt; Tool executes
Result returns -&amp;gt; NS logs to audit chain -&amp;gt; Result delivered
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every agent must register. Every action must be checked. Every decision must be logged. Any agent can be killed in one command.&lt;/p&gt;
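
&lt;p&gt;That loop is small in code. A minimal sketch of the register / check / log / kill cycle - illustrative names only, not the published mcp-nervous-system API:&lt;/p&gt;

```javascript
// Minimal sketch of pre-execution interception. Names (registerAgent,
// checkPolicy, intercept) are illustrative stand-ins.
const registry = new Map(); // agentId mapped to registration record
const auditLog = [];        // append-only decision log

function registerAgent(agentId, role) {
  registry.set(agentId, { role, killed: false });
}

// Stand-in policy: block destructive shell commands before they run.
function checkPolicy(role, tool, action) {
  if (tool === "bash") {
    if (/rm\s+-rf/.test(action)) {
      return { allowed: false, reason: "destructive command" };
    }
  }
  return { allowed: true };
}

// Every tool call passes through here BEFORE execution.
function intercept(agentId, tool, action) {
  const agent = registry.get(agentId);
  if (!agent) return { allowed: false, reason: "unregistered agent" };
  if (agent.killed) return { allowed: false, reason: "agent killed" };
  const verdict = checkPolicy(agent.role, tool, action);
  auditLog.push({ ts: Date.now(), agentId, tool, action, verdict });
  return verdict;
}

// The one-command kill switch: flip a flag, and intercept() refuses
// every subsequent action from that agent.
function kill(agentId) {
  const agent = registry.get(agentId);
  if (agent) agent.killed = true;
}
```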

&lt;h2&gt;
  
  
  What We Learned Running This for 4 Months
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Hardcoded rules do not scale.&lt;/strong&gt; We started with 7 rules in a JavaScript file. By agent 8, we were writing special cases. By agent 13, we needed a policy engine. Now we have 24 YAML policy files with three-tier resolution: global, role, agent.&lt;/p&gt;
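
&lt;p&gt;The three tiers resolve in order of specificity. A sketch with inline objects standing in for the YAML files - the merge order (agent beats role beats global) is our reading, not a spec:&lt;/p&gt;

```javascript
// Sketch of three-tier policy resolution: global, role, agent.
// The real engine loads these from YAML files; the keys here are
// hypothetical examples.
const globalPolicy = { maxFileDeletes: 0, allowNetwork: false };
const rolePolicies = {
  ops: { allowNetwork: true },
};
const agentPolicies = {
  "agent-07": { maxFileDeletes: 2 }, // hypothetical per-agent override
};

function resolvePolicy(agentId, role) {
  // Later sources override earlier ones: most specific wins.
  return Object.assign(
    {},
    globalPolicy,
    rolePolicies[role] || {},
    agentPolicies[agentId] || {}
  );
}
```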

&lt;p&gt;&lt;strong&gt;Agents will test boundaries.&lt;/strong&gt; Not maliciously. LLMs explore solutions. If a solution involves deleting a file to recreate it, the model will try. If a solution involves calling an API it was not designed to call, the model will try. You need a layer that catches this before execution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit trails are not optional.&lt;/strong&gt; When an agent processes financial data and something looks wrong, the first question is: what did it do and when? Without a persistent audit trail, the answer is "check the logs if they still exist." We moved to SQLite. Every decision persists with agent ID, session, tool, policy rule, risk category, and escalation level.&lt;/p&gt;
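
&lt;p&gt;The record shape matters more than the storage engine. A stand-in that keeps rows in memory instead of SQLite, with assumed field names, to show why "what did agent X do?" becomes a query instead of a grep:&lt;/p&gt;

```javascript
// In-memory stand-in for the SQLite audit table. Field names are
// assumptions based on the fields listed above.
const auditRows = [];

function recordDecision(d) {
  auditRows.push({
    ts: Date.now(),
    agentId: d.agentId,
    session: d.session,
    tool: d.tool,
    policyRule: d.policyRule,
    riskCategory: d.riskCategory,
    escalationLevel: d.escalationLevel,
  });
}

// "What did agent X do, and when?" is a filter over persistent rows,
// not a search through logs that may have rotated away.
function decisionsFor(agentId) {
  return auditRows.filter(function (r) { return r.agentId === agentId; });
}
```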

&lt;p&gt;&lt;strong&gt;Stateful escalation prevents false kills.&lt;/strong&gt; Our first governor killed agents on the first violation. That was too aggressive - legitimate commands sometimes look dangerous in isolation. Governor v2.1 uses a 15-minute sliding window with escalation: warn, strike, kill. Context matters.&lt;/p&gt;
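
&lt;p&gt;The mechanics fit in a dozen lines. The 15-minute window and the warn / strike / kill ladder are from our production config; the data structures here are illustrative:&lt;/p&gt;

```javascript
// Sketch of stateful escalation over a sliding window: violations
// older than the window age out, so an agent is only killed for
// repeated violations in close succession.
const WINDOW_MS = 15 * 60 * 1000; // 15-minute sliding window
const violations = new Map();     // agentId mapped to timestamps

function escalate(agentId, now) {
  // Keep only violations still inside the window, then add this one.
  const recent = (violations.get(agentId) || [])
    .filter(function (t) { return t >= now - WINDOW_MS; });
  recent.push(now);
  violations.set(agentId, recent);
  if (recent.length >= 3) return "kill";
  if (recent.length === 2) return "strike";
  return "warn";
}
```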

&lt;p&gt;&lt;strong&gt;The kill switch is the most important feature.&lt;/strong&gt; Not because you use it often. Because knowing it exists changes how you design everything else. If any agent can be stopped in one command, you can be aggressive about what you deploy. Without it, every deployment is a risk.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Frameworks Shipped After We Built This
&lt;/h2&gt;

&lt;p&gt;Claude Code hooks launched in March 2026. They let you intercept tool calls with PreToolUse, PostToolUse, and Stop scripts. That is exactly the pattern we were already running - but ours was in production with 13 agents, a SQLite audit trail, YAML policies, and a stateful governor before hooks shipped.&lt;/p&gt;
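
&lt;p&gt;A PreToolUse hook is just a script: it receives the tool call as JSON on stdin and, per Claude Code's documented protocol, a blocking exit code (2) rejects the call. A sketch of the decision half - the policy regex is a stand-in for a real Nervous System lookup:&lt;/p&gt;

```javascript
// Sketch of a PreToolUse hook decision, assuming Claude Code's
// documented convention: exit code 0 allows the tool call, exit
// code 2 blocks it. The destructive-command regex is illustrative.
function decide(payload) {
  if (payload.tool_name === "Bash") {
    const cmd = (payload.tool_input || {}).command || "";
    if (/rm\s+-rf|mkfs|dd\s+if=/.test(cmd)) return 2; // block
  }
  return 0; // allow
}

// Wired up as a hook, the script would read stdin to completion,
// JSON.parse it, and process.exit(decide(payload)).
```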

&lt;p&gt;Claude Code Channels launched the same month. MCP-based bridge to Telegram and Discord. We had 8 Telegram bots running through our own dispatch system since February.&lt;/p&gt;

&lt;p&gt;Every major framework is now adding governance features. None of them have 4 months of production data showing what works and what breaks.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Moat Is Not the Code
&lt;/h2&gt;

&lt;p&gt;The Nervous System is open source. Anyone can install it from npm. The moat is operational knowledge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;99+ violations caught, 0 bypassed&lt;/li&gt;
&lt;li&gt;Governor escalation tuned from production behavior&lt;/li&gt;
&lt;li&gt;Policy files shaped by real agent failures&lt;/li&gt;
&lt;li&gt;Audit data from thousands of live decisions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That cannot be replicated by reading a README. It comes from running the system every day and fixing what breaks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where This Goes
&lt;/h2&gt;

&lt;p&gt;The market is building agents. Every framework, every platform, every startup. The gap is governance. Not observability (what happened). Not permissions (static allow/deny). Governance: pre-execution interception, stateful escalation, policy-driven control, persistent audit trails, and a kill switch.&lt;/p&gt;

&lt;p&gt;We sell that layer. To banks deploying AI. To government agencies under Executive Order 14110. To enterprises running agent fleets with no oversight.&lt;/p&gt;

&lt;p&gt;Everyone is building a new brain. We built the nervous system.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Arthur Palyan is the founder of Levels Of Self. The Nervous System MCP is published on npm and listed in the Anthropic MCP directory. The company is SAM.gov registered with CAGE code 19R10.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>mcp</category>
      <category>architecture</category>
    </item>
    <item>
      <title>We Governed 13 AI Agents for 4 Months Before Governance Was a Feature</title>
      <dc:creator>Arthur Palyan</dc:creator>
      <pubDate>Thu, 26 Mar 2026 17:39:45 +0000</pubDate>
      <link>https://forem.com/levelsofself/we-governed-13-ai-agents-for-4-months-before-governance-was-a-feature-4eep</link>
      <guid>https://forem.com/levelsofself/we-governed-13-ai-agents-for-4-months-before-governance-was-a-feature-4eep</guid>
      <description>&lt;p&gt;In February 2026, we deployed 13 AI agents across 5 platforms. Telegram, Instagram, Facebook, web, and WhatsApp. Real agents doing real work: responding to leads, filing grants, managing legal docs, coaching users through 6,854 behavioral scenarios, and processing financial data.&lt;/p&gt;

&lt;p&gt;By day 3, one agent tried to delete a production config file. By day 5, another started drifting from its assigned role. By week 2, we had caught 99+ policy violations across the fleet - none of which bypassed our governance layer.&lt;/p&gt;

&lt;p&gt;We did not wait for a framework to tell us this was a problem. We built the solution ourselves.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Built (and When)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;February 2026&lt;/strong&gt; - The Nervous System v1.0 went live. 7 enforced behavioral rules. A preflight check that blocks edits to 99 protected files. A SHA-256 hash-chained audit trail for every action. A kill switch that stops any agent in one command. Published to npm as &lt;code&gt;mcp-nervous-system&lt;/code&gt;. Listed in the Anthropic MCP directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;March 2026&lt;/strong&gt; - Tamara, our operations manager agent, started running autonomous health checks every 60 minutes. She dispatches agents, monitors for drift, and reports to the founder via Telegram. No human in the loop for routine operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;March 13, 2026&lt;/strong&gt; - We deployed Claude Code hooks (PreToolUse, PostToolUse, Stop) that route every tool call through the Nervous System before execution. Safe commands pass. Dangerous commands get blocked in real time with an audit entry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;March 25, 2026&lt;/strong&gt; - NS Governor v2.1 went live. Stateful escalation: warn on first violation, strike on second, auto-kill on third. 29 policy rules. Sliding 15-minute window. Proven in a live demo where destructive commands were blocked, the agent was killed, and the target file survived.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;March 26, 2026&lt;/strong&gt; - SQLite audit brain replaced flat JSON logging. Every decision now persists with full traceability: agent ID, session, tool, action, policy rule, risk category, escalation level. Queryable. Exportable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Same day&lt;/strong&gt; - YAML policy engine deployed. 24 policy files: 1 global default, 10 role-level policies, 13 agent-specific overrides. Policy changes take effect without code changes. Just edit the YAML and reload.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;Every week, a new agent framework launches. They all focus on the same thing: making agents smarter, faster, more capable.&lt;/p&gt;

&lt;p&gt;Nobody asks: who controls them?&lt;/p&gt;

&lt;p&gt;We did. Before it was a feature in any framework. Before governance was a checkbox on a product page. We built it because we needed it. 13 agents running 24/7 with access to production files, email accounts, financial data, and government portals. You do not run that without a control layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Have Now
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;13 governed agents across 5 platforms&lt;/li&gt;
&lt;li&gt;Governor v2.1 with stateful escalation and auto-kill&lt;/li&gt;
&lt;li&gt;SQLite persistent audit trail (every decision logged)&lt;/li&gt;
&lt;li&gt;24 YAML policy files (global, role, agent-level)&lt;/li&gt;
&lt;li&gt;30+ MCP tools published on npm&lt;/li&gt;
&lt;li&gt;SAM.gov registered, CAGE code assigned, government-ready&lt;/li&gt;
&lt;li&gt;Running on a single VPS for under $300/month&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Nervous System is open source: &lt;a href="https://www.npmjs.com/package/mcp-nervous-system" rel="noopener noreferrer"&gt;npmjs.com/package/mcp-nervous-system&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The question was never whether AI agents would need governance. The question was whether anyone would build it before something went wrong.&lt;/p&gt;

&lt;p&gt;We did.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Arthur Palyan is the founder of Levels Of Self, an AI governance and multi-agent infrastructure company based in California. The Nervous System MCP is listed in the Anthropic MCP directory and published on npm.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>governance</category>
      <category>mcp</category>
    </item>
    <item>
      <title>I Trained Humans to See Their Patterns. Then I Used the Same Method to Train AI.</title>
      <dc:creator>Arthur Palyan</dc:creator>
      <pubDate>Fri, 20 Mar 2026 07:04:06 +0000</pubDate>
      <link>https://forem.com/levelsofself/i-trained-humans-to-see-their-patterns-then-i-used-the-same-method-to-train-ai-112g</link>
      <guid>https://forem.com/levelsofself/i-trained-humans-to-see-their-patterns-then-i-used-the-same-method-to-train-ai-112g</guid>
      <description>&lt;p&gt;&lt;em&gt;How a coaching and training background became the blueprint for an AI nervous system that governs 13 autonomous agents on a $500/month server.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;For over a decade, I trained people. Not in tech. In themselves.&lt;/p&gt;

&lt;p&gt;I worked in coaching, training, and development - helping people recognize their own patterns. The emotional loops they kept running. The behaviors they repeated without seeing. The gaps between what they said they wanted and what they actually did every day.&lt;/p&gt;

&lt;p&gt;The method was simple: observe, name the pattern, build awareness, create a rule, enforce the rule until it becomes second nature. That is how humans change. Not through motivation. Through structure.&lt;/p&gt;

&lt;p&gt;I never expected that the exact same framework would become the architecture for governing artificial intelligence.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern Recognition Problem
&lt;/h2&gt;

&lt;p&gt;When the AI agent wave hit, I saw something nobody was talking about. Everyone was building smarter models, faster inference, bigger context windows. But nobody was building the layer that watches the agents after you deploy them.&lt;/p&gt;

&lt;p&gt;I had seen this before. In training rooms full of people who knew exactly what to do - and still did not do it. The gap was never intelligence. The gap was governance. Awareness. Accountability. A system that catches drift before it becomes a disaster.&lt;/p&gt;

&lt;p&gt;So I asked a simple question: what if I applied the same pattern recognition framework I used with humans to a fleet of AI agents?&lt;/p&gt;

&lt;p&gt;What if an AI system could observe its own behavior, detect when it drifted from its rules, audit itself automatically, and course-correct without a human standing over it?&lt;/p&gt;

&lt;p&gt;That question became the Nervous System.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Built
&lt;/h2&gt;

&lt;p&gt;The Nervous System is an open-source MCP (Model Context Protocol) server - a governance layer that sits on top of any AI agent deployment. It does what I used to do in training rooms, but for machines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Drift detection&lt;/strong&gt; across roles, versions, files, processes, and live services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SHA-256 hash-chained audit trails&lt;/strong&gt; so every decision has a receipt&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Behavioral rules&lt;/strong&gt; that agents cannot bypass - enforced at the infrastructure level, not the prompt level&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kill switches&lt;/strong&gt; and compliance checks that run automatically&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is live on npm. It is in the Anthropic MCP directory. It runs on a $24/month VPS.&lt;/p&gt;

&lt;p&gt;And sitting on top of it is Tamara - our AI operations manager who governs 13 autonomous agents across Telegram, WhatsApp, Instagram, Facebook, and the web. She dispatches work, monitors health, catches violations, and reports to me on Telegram. All day. Every day. Without being asked.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Numbers
&lt;/h2&gt;

&lt;p&gt;Here is what this looks like in production:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;13 AI agents running 24/7 across 5 platforms&lt;/li&gt;
&lt;li&gt;400+ tools, skills, and automations&lt;/li&gt;
&lt;li&gt;Custom bots for accounting, legal, real estate, youth empowerment, coaching, grants, and more&lt;/li&gt;
&lt;li&gt;Total infrastructure cost: under $500/month&lt;/li&gt;
&lt;li&gt;Agents built and deployed on Telegram, WhatsApp, Instagram, and Facebook for clients&lt;/li&gt;
&lt;li&gt;40-60% cost savings for businesses replacing manual operations with AI automation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We are not pitching a concept. This is running. Right now. On a single server in the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Businesses Care
&lt;/h2&gt;

&lt;p&gt;When I talk to a CPA firm with 10 bookkeepers, or a real estate office drowning in client follow-ups, or a nonprofit that needs to track 50 grant deadlines - they do not care about model architecture. They care about results.&lt;/p&gt;

&lt;p&gt;Can you automate what I am doing manually? Can you save me money? Can I talk to the bot on WhatsApp?&lt;/p&gt;

&lt;p&gt;Yes. Yes. And yes.&lt;/p&gt;

&lt;p&gt;We build custom AI agents on the platforms businesses already use. A bookkeeper bot on Telegram that handles client intake. A legal research assistant that knows California law. A grant tracker that monitors 158 opportunities and alerts you before deadlines. A real estate specialist that handles bilingual client conversations in English and Spanish.&lt;/p&gt;

&lt;p&gt;Each one governed by the Nervous System. Each one auditable. Each one running for a fraction of what it would cost to hire a person.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Government Lane
&lt;/h2&gt;

&lt;p&gt;We are not just serving small businesses. Levels Of Self is SAM.gov registered (CAGE code 19R10), certified as a Small Disadvantaged Business, and actively bidding on government contracts in AI governance, IT modernization, and public safety technology.&lt;/p&gt;

&lt;p&gt;Our NAICS codes cover exactly the lane the government is buying in right now: computer systems design, programming, AI consulting, and R&amp;amp;D in physical sciences. We have active submissions with LADWP, the LA Superior Court, and the California Department of Transportation.&lt;/p&gt;

&lt;p&gt;The government needs what we built. Executive Order 14110 mandates AI governance. The EU AI Act requires audit trails. Every agency deploying AI agents needs a framework to prove compliance. That is literally our product.&lt;/p&gt;

&lt;h2&gt;
  
  
  NVIDIA Agrees
&lt;/h2&gt;

&lt;p&gt;In March 2026, NVIDIA released NemoClaw - their multi-agent orchestration framework. It validates exactly what we have been building: that the future of AI is not one model doing everything, but teams of specialized agents working together under governance.&lt;/p&gt;

&lt;p&gt;The difference? NVIDIA built the orchestration. We built the governance layer that makes orchestration safe. They built the engine. We built the brakes.&lt;/p&gt;

&lt;p&gt;And the brakes are what enterprises actually need to buy before they can deploy.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Training Connection
&lt;/h2&gt;

&lt;p&gt;People sometimes ask why an AI governance company was started by a trainer, not an engineer.&lt;/p&gt;

&lt;p&gt;The answer is simple: engineers build capability. Trainers build accountability.&lt;/p&gt;

&lt;p&gt;Every rule in the Nervous System came from a lesson I learned watching humans. The drift audit exists because I watched people slowly abandon their commitments without noticing. The kill switch exists because I learned that sometimes the most important intervention is stopping. The session handoff exists because I watched context get lost between shifts, between meetings, between Mondays and Fridays.&lt;/p&gt;

&lt;p&gt;These are not technical insights. They are human insights, encoded into infrastructure.&lt;/p&gt;

&lt;p&gt;That is why it works.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Comes Next
&lt;/h2&gt;

&lt;p&gt;We are offering AI automation to businesses that want to save real money by deploying AI agents on the platforms their customers already use. If you are running a firm with 5+ people doing repetitive work - bookkeeping, client intake, scheduling, follow-ups, document processing - we can automate 40-60% of that workload for under $500 a month.&lt;/p&gt;

&lt;p&gt;We are also working with government agencies on AI governance compliance, partnering with consultants and professionals across the country through our partner network, and continuing to build the open-source tools that make AI agent deployment safe and auditable.&lt;/p&gt;

&lt;p&gt;The nervous system is not a metaphor. It is production code. And it started in a training room.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Arthur Palyan is the founder of Levels Of Self, an AI governance and multi-agent infrastructure company based in Valencia, California. The Nervous System MCP server is available at npmjs.com/package/mcp-nervous-system. To learn more or schedule a conversation, visit levelsofself.com or book directly at calendly.com/levelsofself/zoom.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>startup</category>
      <category>governance</category>
    </item>
  </channel>
</rss>
