<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Art Hicks</title>
    <description>The latest articles on Forem by Art Hicks (@arthicksdev).</description>
    <link>https://forem.com/arthicksdev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F361978%2F6237aa2d-7e97-4d3b-9df8-d1e040a55a0e.jpg</url>
      <title>Forem: Art Hicks</title>
      <link>https://forem.com/arthicksdev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/arthicksdev"/>
    <language>en</language>
    <item>
      <title>Data Debt: The Silent Killer of Enterprise AI Ambitions</title>
      <dc:creator>Art Hicks</dc:creator>
      <pubDate>Mon, 06 Apr 2026 00:48:58 +0000</pubDate>
      <link>https://forem.com/arthicksdev/data-debt-the-silent-killer-of-enterprise-ai-ambitions-4jf2</link>
      <guid>https://forem.com/arthicksdev/data-debt-the-silent-killer-of-enterprise-ai-ambitions-4jf2</guid>
      <description>&lt;p&gt;Your AI models are not the problem.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Enterprises are deploying increasingly sophisticated large language models, building agentic workflows, and investing heavily in AI platforms. The technology has never been more capable. Yet 73% of organizations report their data initiatives falling short of ROI expectations — and only 27% exceed their targets.

    The gap between AI ambition and AI results has a name: data debt.

    Data debt is not a storage problem. It is the accumulated cost of fragmented architectures, broken pipelines, manual workarounds, and governance gaps that compound every time you try to scale AI on infrastructure that was never designed for it. And it is quietly killing enterprise AI ambitions at a rate most leadership teams do not fully understand.

    ## The $29 Million Problem Nobody Talks About

    The average enterprise spends $29.3 million per year on data programs, according to Fivetran's 2026 Enterprise Data Infrastructure Benchmark Report. Data integration alone consumes $4.2 million of that budget. Engineers spend $2.2 million annually maintaining pipelines — with 53% of engineering time devoted to maintenance rather than building anything new.

    These are not innovation budgets. They are maintenance budgets disguised as data strategy.

    And the maintenance is not even working. Data pipelines break an average of 4.7 times per month — rising to 8.3 times in large enterprises — causing 60.4 hours of monthly downtime at a cost of $49,600 per hour. In large organizations, that figure reaches $75,200 per hour.

    When pipelines break, AI stops. Models trained on stale data produce stale decisions. Dashboards go dark. Automated workflows stall. The estimated annual business impact from stale data alone ranges from $36 million to $54 million per enterprise.
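
    As a sanity check, that annual range lines up with the downtime figures quoted above. A worked calculation (illustrative arithmetic only; it assumes the same 60.4 monthly downtime hours at each cost rate):

```python
# Annualized pipeline downtime cost, computed from the benchmark
# figures quoted above (illustrative arithmetic, not new data).
avg_enterprise = 60.4 * 49_600 * 12    # 60.4 hrs/month at $49,600/hr
large_enterprise = 60.4 * 75_200 * 12  # large-enterprise rate of $75,200/hr

print(f"${avg_enterprise:,.0f} to ${large_enterprise:,.0f} per year")
# prints "$35,950,080 to $54,504,960 per year"
```

    Roughly $36 million to $54 million per year, matching the cited range.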

    The [AI ROI reckoning](/news/ai-roi-reckoning) that boards are demanding cannot be delivered when the data infrastructure underneath the AI is this fragile.

    ## Model-Rich, Data-Poor

    Here is the paradox most enterprises are living: they have access to the most powerful AI models ever built, and they cannot use them effectively because their data is not ready.

    Eighty percent of enterprise AI initiatives struggle to scale due to fragmented data silos. Gartner projects that 60% of AI projects will be abandoned by 2026 specifically because organizations lack AI-ready data infrastructure. The models are not failing. The foundation underneath them is.

    This is what researchers at Hexalytics call operating "model-rich, data-poor" — deploying advanced LLMs and agentic systems on top of data architectures that cannot provide the real-time, cross-system visibility those systems require. It is like installing a Formula 1 engine in a car with flat tires.

    Poor data quality and siloed architectures cost organizations between $12.9 million and $15 million annually. A quarter of enterprises lose over $5 million per year from data integrity issues alone.

    ## The Three Silent Killers

    Data debt does not announce itself with a system crash. It operates through three mechanisms that are easy to miss until the damage is done:

    ### 1. Decision Lag

    When data is fragmented across systems, AI models make decisions based on partial information. A demand forecasting model that cannot see real-time inventory data across all warehouses produces forecasts that are directionally correct but operationally useless. The decisions arrive, but they arrive too late or too incomplete to act on.

    This connects directly to the [resilience gap](/news/beyond-efficiency-enterprise-resilience-ai-metric) we identified earlier: systems optimized for efficiency on clean data become brittle the moment data quality degrades — which, in most enterprises, is constantly.

    ### 2. Quiet Failures

    Data debt creates failures that do not trigger alerts. A pipeline that delivers data 30 minutes late does not crash — it just makes every downstream AI model slightly wrong. A customer record that exists in three systems with three different formats does not produce an error — it produces a recommendation engine that contradicts itself.

    These quiet failures accumulate. Nobody notices one slightly wrong prediction. But thousands of slightly wrong predictions per day add up to significant revenue leakage, customer dissatisfaction, and operational drift — all invisible to traditional monitoring.
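
    A quiet failure of this kind is cheap to detect once you decide to look for it. A minimal freshness-check sketch in Python; the table names and SLA thresholds here are hypothetical examples, not a prescribed standard:

```python
from datetime import datetime, timedelta, timezone

# Minimal data-freshness check: flag tables whose newest record is older
# than its SLA allows. Table names and SLA values here are hypothetical.
FRESHNESS_SLA = {
    "customer_events": timedelta(minutes=15),
    "inventory_snapshots": timedelta(hours=1),
}

def stale_tables(last_updated, now=None):
    """Return the tables whose data has quietly gone stale."""
    now = now or datetime.now(timezone.utc)
    return [
        table for table, sla in FRESHNESS_SLA.items()
        if now - last_updated[table] > sla
    ]

# A pipeline running 30 minutes late never crashes, but it shows up here:
now = datetime(2026, 4, 6, 12, 0, tzinfo=timezone.utc)
observed = {
    "customer_events": now - timedelta(minutes=45),
    "inventory_snapshots": now - timedelta(minutes=20),
}
print(stale_tables(observed, now))  # prints ['customer_events']
```

    The point is not the code; it is that "slightly late" becomes an alertable condition instead of an invisible one.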

    ### 3. Compute Waste

    Unstructured, poorly governed data inflates cloud costs dramatically. When AI systems must clean, transform, and reconcile data before they can use it, the compute overhead can reach 60% of total cloud spending. Organizations believe they are paying for AI inference when much of that spend is actually data janitorial work.


        ### Is your data infrastructure ready for the AI workloads you are planning?

        Most enterprises discover the answer too late.

        [Talk to ViviScape](/contact)



    ## From Passive Storage to Active Intelligence

    The solution to data debt is not buying more storage or adding another data lake. It is fundamentally rethinking what enterprise data infrastructure is for.

    As Abhas Ricky, Chief Strategy Officer at Cloudera, frames it: data must shift "from passive storage into an active intelligence layer that can contextualize information, enforce policy, audit decisions, and preserve traceability."

    This shift requires three architectural changes:

    **Unified governance across hybrid infrastructure.** Most enterprises operate across cloud, on-premise, and edge environments. Sergio Gago, CTO at Cloudera, notes that "hybrid infrastructure is no longer a compromise between legacy and cloud systems. It has instead become the architectural backbone." Data governance must work seamlessly across all environments — not just the ones that are easiest to govern.

    **Agent-ready data access.** As organizations deploy [AI agents at scale](/news/rise-of-the-ai-workforce), their data architecture must support agent-specific needs: clear data access controls, security permissions, observability into agent actions, and agent registries for workflow versioning. The [shadow agent governance crisis](/news/shadow-agents-governance-crisis) becomes exponentially worse when ungoverned agents have ungoverned data access.

    **Managed integration over DIY pipelines.** Fivetran's research shows that organizations using fully managed ELT (Extract, Load, Transform) infrastructure are nearly twice as likely to exceed ROI targets — 45% versus 27% for legacy or DIY setups. The engineering hours saved on pipeline maintenance convert directly into innovation capacity. The organizations still building and maintaining their own data pipelines are paying a premium in both money and opportunity cost.

    ## The Data Debt Audit: Five Questions

    Before your next AI investment, ask whether your data infrastructure can answer these:



        - **What percentage of engineering time goes to pipeline maintenance versus new development?** If it is above 40%, your data debt is consuming your innovation budget.

        - **How many times per month do your data pipelines break?** Industry average is 4.7. If you are above that, your AI systems are running on unreliable foundations.

        - **Can your data infrastructure support real-time, cross-system queries?** If AI models must wait for batch processing to see current data, your decisions are always based on yesterday's reality.

        - **Do you have a unified governance framework across all data environments?** If governance is fragmented by system, so is your AI's understanding of the business.

        - **What is your stale data exposure?** If you do not know, the annual impact is likely in the tens of millions.
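
    The first two questions reduce to simple ratios against the benchmarks quoted above. A sketch of how they might be scripted; the input figures below are illustrative:

```python
# Score the first two audit questions against the benchmarks quoted
# above. The input numbers are illustrative, not real telemetry.
def audit_flags(maintenance_hours, total_eng_hours, breaks_per_month):
    flags = []
    if maintenance_hours / total_eng_hours > 0.40:   # question 1 threshold
        flags.append("maintenance is consuming the innovation budget")
    if breaks_per_month > 4.7:                       # question 2 industry average
        flags.append("pipelines break more often than the industry average")
    return flags

# An enterprise at the reported 53% maintenance share and 6 breaks/month
# raises both flags:
print(audit_flags(maintenance_hours=530, total_eng_hours=1000, breaks_per_month=6))
```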



    ## The Bottom Line

    Enterprise AI is only as good as the data underneath it. And for most organizations, that data is fragmented, stale, poorly governed, and maintained by engineers who spend more than half their time keeping the lights on.

    Data debt is not a technical inconvenience. It is the single largest barrier between AI investment and [AI ROI](/news/ai-roi-reckoning). Every dollar spent on AI models, every agent deployed, every automation built — all of it depends on data infrastructure that most enterprises have systematically underinvested in.

    The organizations that solve data debt first will be the ones that scale AI successfully. The rest will keep wondering why their models are so capable and their results so disappointing.

    *ViviScape helps enterprises eliminate data debt and build AI-ready infrastructure that scales. If your data architecture is holding your AI strategy back, [let's talk](/contact).*


        ### Ready to build an AI-ready data foundation?

        ViviScape eliminates data debt and builds the infrastructure your AI actually needs — so your models stop running on yesterday's reality.

        [Schedule a Free Consultation](/consultation)



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>ai</category>
      <category>technology</category>
      <category>business</category>
      <category>software</category>
    </item>
    <item>
      <title>The Last Mile Problem: Why Change Management Is Killing AI at Scale</title>
      <dc:creator>Art Hicks</dc:creator>
      <pubDate>Mon, 06 Apr 2026 00:48:27 +0000</pubDate>
      <link>https://forem.com/arthicksdev/the-last-mile-problem-why-change-management-is-killing-ai-at-scale-4kgj</link>
      <guid>https://forem.com/arthicksdev/the-last-mile-problem-why-change-management-is-killing-ai-at-scale-4kgj</guid>
      <description>&lt;p&gt;A global investment bank has deployed over 250 LLM applications connected to enterprise systems. A global payments network reports 99% employee copilot adoption. By any technical measure, these organizations have succeeded at AI deployment.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Yet the gains remain, as Harvard Business Review documents, "trapped inside individual workflows."

    This is the last mile problem. The technology works. The models are capable. The infrastructure is in place. But the organizational design — the workflows, roles, decision rights, and cultural habits that determine how work actually gets done — has not changed to absorb what the technology makes possible.

    **And it is killing AI at scale.**

    ## Pilot-Rich, Transformation-Poor

    Most enterprises have no shortage of AI initiatives. The problem is that those initiatives exist as isolated improvements that never compound into business transformation.

    HBR identifies this as being "pilot-rich but transformation-poor" — a state where organizations accumulate hundreds of AI use cases, each delivering modest gains within its own workflow, while the overall operating model remains unchanged. The primary obstacle, the research concludes, "is rarely model quality or data availability, but rather the 'last mile' of transformation where technical capability must meet organizational design."

    The numbers confirm the gap:

    **Leadership knows change is needed.** Seventy-eight percent of CHROs agree that workflows and roles must change to realize AI value, according to a Gartner survey of 110 chief human resources officers.

    **Most have not acted.** Just over half of organizations have actually redesigned or redefined roles because of AI. The majority acknowledge the need while continuing to operate with pre-AI organizational structures.

    **The efficiency trap persists.** Sixty-six percent of organizations report AI-driven productivity gains, but only 34% are "truly reimagining the business," per Deloitte's 2026 State of AI in the Enterprise report. Two-thirds are still in the efficiency phase — [the same trap](/news/beyond-efficiency-enterprise-resilience-ai-metric) that optimizes for current conditions without building adaptive capacity.

    ## Seven Frictions That Block the Last Mile

    HBR's research identifies seven structural frictions that prevent AI deployments from becoming AI transformations. Three are particularly relevant for enterprise leaders:

    ### 1. Process Debt

    Just as [data debt](/news/data-debt-silent-killer-enterprise-ai) accumulates from fragmented infrastructure, process debt accumulates from decades of incremental workflow modifications. Most enterprise processes were designed for a world without AI — layering AI on top of them produces faster versions of outdated workflows, not fundamentally better operations.

    The solution is what HBR calls "clean-sheet process redesign" — asking not "how can AI improve this process?" but "if we built this today with AI agents, how would we do it?" This reframing consistently produces dramatically different — and dramatically better — outcomes than incremental automation.

    ### 2. The Identity Problem

    When AI takes over tasks that previously defined someone's professional identity, resistance is not irrational — it is predictable. Knowledge workers who built careers on expertise that AI can now replicate face a genuine threat, not to their employment, but to their sense of professional value.

    This manifests as tribal knowledge hoarding — experts who withhold the institutional knowledge AI needs to function effectively. Not out of malice, but out of self-preservation. Organizations that fail to address this dynamic find their AI systems permanently limited by the knowledge their people choose not to share.

    The response is not to dismiss the concern, but to redefine professional value around the capabilities AI cannot replicate: [judgment under uncertainty, creative problem-solving, and stakeholder relationships](/news/ai-skills-paradox) that require human trust.

    ### 3. Pilot Proliferation Without Integration

    Every successful pilot creates organizational momentum — toward more pilots. Without deliberate integration strategy, enterprises accumulate dozens of AI tools, each solving a narrow problem, none connected to the others, and collectively creating a fragmented landscape that is harder to govern and more expensive to maintain than the systems they replaced.

    The [shadow agent crisis](/news/shadow-agents-governance-crisis) is partly a symptom of this pattern: when AI deployment is distributed across teams without centralized orchestration, ungoverned proliferation is the inevitable result.

    ## What Changes Everything: The 4x Multiplier

    Gartner's research reveals a striking finding: organizations that continuously adapt their change plans based on employee responses are **four times more likely** to achieve change success.

    Not organizations with bigger budgets. Not organizations with better technology. Organizations that treat change management as an ongoing, responsive process rather than a one-time plan.

    Similarly, leaders who "routinize change" — embedding adaptation into regular operational cadence rather than relying on inspiration or top-down mandates — are three times more likely to achieve healthy AI adoption.

    This suggests the last mile problem is not fundamentally about resistance to change. It is about how change is managed. The organizations failing at AI transformation are not failing because their people cannot adapt. They are failing because they treat organizational change as a project with an end date rather than a continuous operating capability.


        ### Is your AI strategy outpacing your organizational readiness?

        [Talk to ViviScape](/contact)



    ## The Talent Remix

    The change management challenge is about to intensify. Gartner advises CHROs to prepare for a "talent remix" — a period of simultaneous layoffs, redeployments, and reskilling at scale that will test every organizational design assumption enterprises currently hold.

    The AI skills gap is already the number one barrier to integration, according to Deloitte. Worker access to AI tools rose 50% in 2025, but skill development and role transformation have not kept pace. Most organizations responded to the skills challenge with education — training programs and courses — rather than the role redesign that Gartner's data shows is actually needed.

    This mirrors the broader pattern: organizations address the last mile problem with the tools they are comfortable with (training, communication, project management) rather than the structural changes the problem actually requires (workflow redesign, role redefinition, decision-rights redistribution, and governance transformation).

    Only 42% of organizations report high strategic preparedness for AI transformation. Fewer feel ready on infrastructure, data, risk, and talent dimensions. The last mile is not getting shorter — it is getting longer as AI capabilities accelerate while organizational readiness stalls.

    ## Five Principles for Closing the Last Mile

    For organizations ready to move from pilot-rich to transformation-ready:

    **1. Redesign processes before automating them.** Ask "how would we build this from scratch with AI?" before asking "how can AI make this faster?" Clean-sheet redesign consistently outperforms incremental optimization.

    **2. Treat change management as infrastructure, not a project.** Build continuous adaptation into operational cadence — regular feedback loops, responsive plan adjustments, embedded change leadership at the team level. The 4x success multiplier comes from making change a routine, not an event.

    **3. Redefine professional identity around irreplaceable capabilities.** Help knowledge workers shift their sense of value from tasks AI can do to judgment AI cannot. This is not a communications exercise — it requires structural changes to roles, career paths, and performance evaluation.

    **4. Integrate before you proliferate.** Every new AI pilot should include an integration plan that connects it to existing systems and governance frameworks. Isolated pilots become [shadow agents](/news/shadow-agents-governance-crisis) and process debt.

    **5. Pair AI-proficient teams with early deployment.** Gartner recommends establishing regular cadences between HR leadership and AI teams, and placing AI-skilled staff alongside the first wave of deployments to bridge the gap between technical capability and organizational adoption.

    ## The Bottom Line

    The last mile of AI transformation is not a technology problem. It is an organizational design problem — and it is the reason most enterprises are generating productivity statistics instead of business results.

    The technology to transform enterprise operations exists today. The [ROI is proven](/news/ai-roi-reckoning) for organizations that reach production scale. The models are capable, the infrastructure is available, and the use cases are clear.

    What is missing is the organizational readiness to absorb what the technology makes possible. And until enterprises treat that readiness as seriously as they treat the technology itself, the last mile will remain the longest mile.

    *ViviScape pairs AI deployment with organizational transformation — because technology that does not change how you work does not change your results. If your AI pilots are not becoming AI outcomes, [let's close the gap](/contact).*


        ### Ready to close the last mile?

        ViviScape pairs AI deployment with organizational transformation — so your technology investments actually change how your business operates.

        [Schedule a Free Consultation](/consultation)



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>ai</category>
      <category>technology</category>
      <category>business</category>
      <category>software</category>
    </item>
    <item>
      <title>The Orchestration Trap: Why Multi-Agent AI Fails Without a Coordination Strategy</title>
      <dc:creator>Art Hicks</dc:creator>
      <pubDate>Mon, 06 Apr 2026 00:47:56 +0000</pubDate>
      <link>https://forem.com/arthicksdev/the-orchestration-trap-why-multi-agent-ai-fails-without-a-coordination-strategy-4p6b</link>
      <guid>https://forem.com/arthicksdev/the-orchestration-trap-why-multi-agent-ai-fails-without-a-coordination-strategy-4p6b</guid>
      <description>&lt;p&gt;Interest in multi-agent AI systems surged 1,445% between Q1 2024 and Q2 2025. By the end of 2026, 40% of enterprise applications will feature task-specific AI agents — up from less than 5% in 2025.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    And 40% of those agent projects will fail by 2027.

    That is not a contradiction. It is a pattern. The same technology generating the most enterprise excitement is also generating the most enterprise failures — and the reason is not the agents themselves. It is the absence of what sits between them.

    **Welcome to the orchestration trap: the gap between deploying individual AI agents and coordinating them into a system that actually works.**

    ## The Rush and the Reckoning

    The scale of enterprise agent adoption is staggering. Gartner projects agentic AI will generate approximately 30% of enterprise software revenue by 2035, exceeding $450 billion — up from 2% in 2025. Seventy-three percent of organizations are expected to adopt "agent assist" capabilities by year-end.

    But adoption speed and deployment maturity are two different things. While 80% of enterprise leaders say their organization has mature basic automation, only 28% say the same for automation combined with AI agents, according to Deloitte's survey of 550 US cross-industry leaders.

    The maturity gap reveals the trap: organizations are deploying agents at the pace of their ambition, not the pace of their readiness. And the cost of getting it wrong is not just a failed project — it is agent sprawl, ungoverned proliferation, and the [shadow agent crisis](/news/shadow-agents-governance-crisis) we have already documented.

    As Anushree Verma, Senior Director Analyst at Gartner, notes: "AI agents are evolving rapidly, progressing from basic assistants to task-specific agents by 2026 and ultimately multiagent ecosystems by 2029." The question is whether enterprises will build the coordination infrastructure to keep pace with that evolution — or let it outrun them.

    ## Why Individual Agent Success Does Not Scale

    Here is the scenario playing out across thousands of enterprises: a team deploys an AI agent to handle customer inquiry routing. It works brilliantly. Another team deploys an agent for invoice processing. Also excellent. A third builds an agent for supply chain anomaly detection.

    Each agent succeeds in isolation. But when you have dozens — then hundreds — of agents operating across an organization, new problems emerge that individual agent performance cannot solve:

    **Agents conflict.** A sales optimization agent promises delivery dates that a supply chain agent knows are impossible. A cost reduction agent cancels a vendor contract that a compliance agent flagged as mandatory. Without shared context and coordination, agents optimize for their own objectives at the expense of organizational coherence.

    **State becomes invisible.** When Agent A passes a task to Agent B, what happens to the context? Who tracks whether the handoff succeeded? What if Agent B fails silently? In most enterprise deployments, the answer is: nobody knows. The [data debt](/news/data-debt-silent-killer-enterprise-ai) problem is compounded when agents generate data that other agents consume without governance.
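
    Making handoff state visible does not require heavy infrastructure; it requires an explicit record. A minimal sketch in Python (the class and field names are hypothetical, not any particular agent framework):

```python
import uuid
from dataclasses import dataclass, field

# An explicit handoff record between agents, so context and outcome are
# never invisible. Names are hypothetical, not any real agent framework.
@dataclass
class Handoff:
    source: str
    target: str
    context: dict
    status: str = "pending"   # pending, completed, or failed
    handoff_id: str = field(default_factory=lambda: uuid.uuid4().hex)

class HandoffLedger:
    """Tracks every agent-to-agent handoff so silent failures surface."""
    def __init__(self):
        self.records = {}

    def initiate(self, source, target, context):
        handoff = Handoff(source, target, context)
        self.records[handoff.handoff_id] = handoff
        return handoff.handoff_id

    def complete(self, handoff_id, ok=True):
        self.records[handoff_id].status = "completed" if ok else "failed"

    def unresolved(self):
        # The question "did the handoff succeed?" finally has an answer.
        return [h for h in self.records.values() if h.status == "pending"]
```

    Anything still "pending" at the end of a workflow is exactly the silent failure the paragraph above describes.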

    **Governance becomes impossible.** Individual agent governance is manageable. Governing an ecosystem of agents — each with different permissions, different data access, different decision boundaries — requires infrastructure that most organizations have not built. Only 28% consider their agent automation mature, and only 12% expect ROI from automation-plus-agents within three years, compared to 45% for basic automation alone.

    ## The Three-Layer Architecture

    Deloitte's research identifies a three-layer enterprise architecture that separates orchestrated agent deployments from chaotic ones:

    ### Layer 1: The Context Layer

    Before agents can coordinate, they need shared understanding. The context layer provides knowledge graphs, ontologies, and taxonomies that give every agent in the ecosystem a consistent view of the business — shared definitions, shared relationships, shared constraints.

    Without this layer, every agent operates on its own interpretation of reality. The sales agent and the supply chain agent are not just making different decisions — they are making decisions based on different understandings of the same data.

    ### Layer 2: The Agent Layer

    This is where most enterprises focus — and where most stop. The agent layer handles safety, autonomy, and interoperability with modular design and advanced telemetry. But the critical insight is that agent-level excellence is necessary but insufficient. A perfectly designed agent in a poorly orchestrated ecosystem still fails.

    The emerging inter-agent protocols — Google's A2A (Agent-to-Agent), Cisco's AGNTCY, and Anthropic's MCP (Model Context Protocol) — are beginning to standardize how agents communicate and coordinate. Deloitte expects these to converge to two or three leading standards, which will define the interoperability landscape for the next decade.

    ### Layer 3: The Experience Layer

    The experience layer provides human oversight through agent status dashboards with explainability features. This is not just monitoring — it is the mechanism through which humans maintain appropriate control as agents take on more autonomous decision-making.

    The human oversight model is evolving along a spectrum: humans in the loop (approving every decision), humans on the loop (monitoring with intervention capability), and humans out of the loop (fully autonomous). Advanced organizations are shifting to "on the loop" in 2026 — maintaining oversight without bottlenecking agent operations.
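
    The three oversight modes can be expressed as a simple policy gate. A sketch, assuming a hypothetical per-action risk score and escalation threshold:

```python
from enum import Enum

# The oversight spectrum as a simple policy gate. Risk scores and the
# escalation threshold are hypothetical illustrations.
class Oversight(Enum):
    IN_THE_LOOP = "in"    # human approves every decision
    ON_THE_LOOP = "on"    # agent acts; human monitors and can intervene
    OUT_OF_LOOP = "out"   # fully autonomous

def requires_approval(mode, risk_score, threshold=0.7):
    """Return True when an agent action must wait for a human."""
    if mode is Oversight.IN_THE_LOOP:
        return True
    if mode is Oversight.ON_THE_LOOP:
        # "On the loop": only high-risk actions escalate to a human,
        # so oversight does not bottleneck routine agent operations.
        return risk_score > threshold
    return False
```

    The design choice is where the threshold sits: "on the loop" is not the absence of control, it is control reserved for the decisions that warrant it.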


        ### How many AI agents are running in your organization, and who is coordinating them?

        If there is no clear answer, you have an orchestration gap.

        [Talk to ViviScape](/contact)



    ## The Five-Stage Evolution You Need to Plan For

    Gartner maps the agent evolution trajectory that enterprises should be architecting toward:



        - **2025: Assistants.** AI handles prompts and basic tasks under direct human supervision.

        - **2026: Task-specific agents.** Agents operate autonomously within bounded domains — the stage most enterprises are entering now.

        - **2027: Collaborative agents.** Multiple agents coordinate on complex workflows, requiring the orchestration infrastructure most enterprises have not yet built.

        - **2028: Cross-platform ecosystems.** Agents operate across organizational boundaries — partners, vendors, customers — demanding standardized protocols and shared governance.

        - **2029: Worker-created agents.** Fifty percent of knowledge workers will create and govern agents on demand, democratizing agent deployment while exponentially increasing the governance challenge.



    The organizations building orchestration infrastructure now are not over-investing. They are building for Stage 3 before it arrives — the only way to be ready when it does.

    CIOs face what Gartner calls a three-to-six-month window to define their AI agent strategy or risk competitive disadvantage. That window is not about choosing agents. It is about choosing orchestration.

    ## The Failure Modes

    For organizations that deploy agents without coordination strategy, three failure modes are predictable:

    **Agent sprawl.** Departments deploy agents independently, creating an ungoverned ecosystem that nobody can map, monitor, or manage. This is the [shadow agent problem](/news/shadow-agents-governance-crisis) at organizational scale.

    **Vendor lock-in through walled gardens.** Platform vendors offer orchestration as part of their agent ecosystem, creating dependencies that reduce flexibility and increase switching costs. Organizations that adopt vendor-specific orchestration without an abstraction layer find themselves locked into architectures that may not align with the converging protocol standards.

    **Regulatory non-compliance.** The [EU AI Act](/news/ai-compliance-countdown-2026) and emerging regulations require traceability, explainability, and human oversight for autonomous systems. Agent ecosystems without centralized governance infrastructure cannot demonstrate compliance at audit time — and the penalties for high-risk AI systems are substantial.

    ## The Bottom Line

    The autonomous AI agent market is projected to reach $8.5 billion by 2026 and $35 billion by 2030. The organizations that capture that value will not be the ones deploying the most agents. They will be the ones that build the orchestration layer that makes multi-agent systems coherent, governed, and aligned with business outcomes.

    Every agent you deploy without a coordination strategy is a bet that individual optimization will somehow produce organizational results. The evidence says otherwise. The [ROI](/news/ai-roi-reckoning) comes from orchestration — and orchestration requires architecture, not just ambition.

    The trap is thinking that more agents equals more capability. The reality is that more agents without coordination equals more chaos. Build the orchestra before you hire the musicians.

    *ViviScape specializes in multi-agent orchestration — designing the coordination infrastructure that turns individual AI agents into coherent enterprise systems. If your agent deployments need an orchestration strategy, [let's build one](/contact).*


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>ai</category>
      <category>technology</category>
      <category>business</category>
      <category>software</category>
    </item>
    <item>
      <title>The AI Vendor Reckoning: Why 2026 Is the Year Enterprises Stop Buying Demos</title>
      <dc:creator>Art Hicks</dc:creator>
      <pubDate>Mon, 06 Apr 2026 00:46:46 +0000</pubDate>
      <link>https://forem.com/arthicksdev/the-ai-vendor-reckoning-why-2026-is-the-year-enterprises-stop-buying-demos-219h</link>
      <guid>https://forem.com/arthicksdev/the-ai-vendor-reckoning-why-2026-is-the-year-enterprises-stop-buying-demos-219h</guid>
      <description>&lt;p&gt;Worldwide AI spending will reach $2.52 trillion in 2026 — a 44 percent increase over 2025. Enterprise technology investment will hit $5.6 trillion globally. Eighty-six percent of organizations say their AI budget is increasing.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    And yet, enterprises are buying from fewer vendors, not more.

    That is the defining shift of 2026: the era of AI procurement through demonstration, proof of concept, and innovation theater is ending. What is replacing it is outcome-driven buying — where measurable business results, not impressive demos, determine which vendors survive the next budget cycle.

    **Welcome to the AI vendor reckoning.**

    ## The Pilot Graveyard

    The scale of enterprise AI experimentation over the past two years has been extraordinary — and extraordinarily wasteful. In Asia-Pacific markets, companies launched an average of 24 generative AI pilots annually. Only three reached production. Ninety-five percent of enterprise AI investments have failed to meet ROI targets. Only 31 percent of AI use cases have reached full production deployment.

    The math is brutal: for every dollar spent on AI pilots, most organizations got a demo, a deck, and a depreciated proof of concept that never made it to production.

    The problem is not that enterprises are under-investing. It is that they are over-experimenting — spreading budgets across too many vendors, too many use cases, and too many proofs of concept that were never designed to scale. As one Databricks Ventures VP predicted, 2026 is the year enterprises "start consolidating their investments and picking winners."

    The best-of-breed rationale for adding new AI suppliers has hit a two-year low. CIOs are no longer assembling toolchains. They are pruning them.

    ## The Promise-Reality Gap

    At the center of the vendor reckoning is a credibility crisis. Vendors promise six-to-eight-week implementations. Actual enterprise deployments average five to nine months. Vendors sell "self-learning" systems that require continuous human feedback and periodic retraining. Vendors demo seamless integration while internal teams discover months of custom middleware work ahead.

    As MindFinders reported: "The gap between promise and reality is where enterprise AI budgets go to disappear."

    The credibility crisis extends beyond timelines and costs. The [shadow agent governance crisis](/news/shadow-agents-governance-crisis) has exposed how many "AI agent" solutions are what industry analysts now call "agent washing" — legacy automation tools with conversational interfaces that operate according to predefined workflows, not systems that actually reason about goals and adapt to context.

    Enterprises that bought the demo are discovering they purchased sophisticated chatbots, not autonomous agents. And they are not renewing.





    ## What Changed in 2026

    Three forces are converging to end the demo-buying era:

    ### 1. CFOs Took Control

    The [AI ROI Reckoning](/news/ai-roi-reckoning) is not just a measurement challenge — it is a procurement revolution. Seventy-three percent of CEOs now own AI decisions, double the rate from a year ago. But it is CFOs who are reshaping how those decisions translate into vendor contracts.

    Direct financial impact has nearly doubled as the primary ROI metric for AI investments, rising to 21.7 percent. Productivity gains — the vague, hard-to-verify justification that sustained years of experimental spending — fell from 23.8 percent to 18 percent as the top justification. Boards are done with productivity proxies. They want revenue, margin, and cost reduction tied to specific vendor deliverables.

    Gartner positions 2026 within the "Trough of Disillusionment" — the phase where procurement controls planning rather than innovation departments. ROI must be measurable within renewal cycles to secure continued funding.

    ### 2. Incumbents Won the Distribution War

    Gartner's forecast reveals a structural shift: AI will most often be sold to enterprises by their incumbent software providers rather than bought as part of new moonshot projects. The implication is devastating for standalone AI vendors: enterprises are not looking for new relationships. They are looking for AI capabilities bundled into the platforms they already use.

    The bundling advantage is real. Incumbent vendors offer coterminous agreements, committed-use discounts, and integrated security reviews. A standalone AI vendor competing against an incumbent's bundled offering needs to demonstrate dramatically superior outcomes — not just marginally better technology.

    By 2026, CIOs are trading sprawling AI toolchains for platform SKUs and fewer invoices. The consolidation is not about reducing innovation. It is about reducing integration complexity, security surface area, and vendor management overhead.

    ### 3. Data Readiness Became the Gating Factor

    Sixty-five percent of organizations lack AI-ready data infrastructure. This single statistic explains more vendor failures than any technology limitation. Vendors who sell AI solutions without addressing the [data debt](/news/data-debt-silent-killer-enterprise-ai) problem are selling into a foundation that cannot support what they are building.

    The enterprises that are successfully scaling AI are the ones investing in data foundations before vendor selection — not the other way around. AI infrastructure will consume $1.366 trillion in 2026, more than half of total AI spending. The market has spoken: compute and data infrastructure come first. Application-layer AI vendors come second.

    ## The New Procurement Playbook

    The enterprises navigating the vendor reckoning successfully are adopting a fundamentally different procurement approach:

    ### Outcome-First Evaluation

    Instead of evaluating vendors on capability demonstrations, leading organizations define measurable business outcomes before the first vendor conversation. The evaluation criterion is not "can this tool do X?" but "will this tool deliver $Y in measurable impact within Z months?"
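
    The outcome gate can be sketched as a back-of-the-envelope payback check. This is an illustrative sketch: the function names and the dollar figures are hypothetical, not a formal evaluation framework.

```javascript
// Outcome-first vendor evaluation sketch (hypothetical names, illustrative
// numbers): score a vendor on projected payback within the renewal cycle,
// not on feature lists.
function paybackMonths({ annualImpactUsd, totalCostUsd }) {
  // Months until cumulative measurable impact covers the total cost.
  return totalCostUsd / (annualImpactUsd / 12);
}

function passesOutcomeGate(vendor, renewalCycleMonths) {
  return paybackMonths(vendor) <= renewalCycleMonths;
}

// A vendor projected to deliver $1.2M of measurable annual impact against
// $500K total cost pays back in 5 months, well inside a 12-month renewal.
const vendor = { annualImpactUsd: 1_200_000, totalCostUsd: 500_000 };
console.log(paybackMonths(vendor));         // 5
console.log(passesOutcomeGate(vendor, 12)); // true
```

    The point of the sketch is the discipline, not the arithmetic: if a vendor cannot commit to the inputs of this calculation in writing, the demo is already irrelevant.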

    Ninety-one percent of enterprise buyers now prioritize technical expertise over feature lists. Eighty-eight percent require proven track records with comparable use cases. Seventy-nine percent rate integration capability as a top criterion — not because integration is exciting, but because integration failures are the leading cause of pilot-to-production collapse.

    ### Build Where It Differentiates, Buy Where It Does Not

    The vendor consolidation trend does not mean enterprises should build everything in-house. It means they should be strategic about the boundary between buy and build. Commodity capabilities — document processing, basic classification, standard analytics — are best sourced from incumbent platforms. Differentiating capabilities — custom [orchestration](/news/orchestration-trap-multi-agent-ai), domain-specific agent workflows, proprietary process intelligence — are best built.

    The organizations achieving the highest AI ROI are those that build custom where competitive advantage demands it and consolidate vendors where standardization reduces cost. The [last mile problem](/news/last-mile-problem-change-management-ai) is not solved by buying more tools. It is solved by building the integration and change management infrastructure that makes tools actually work.

    ### Due Diligence Over Demos

    Enterprise AI procurement in 2026 requires a due diligence discipline that most organizations lacked during the experimentation phase. Eight questions should precede any vendor contract:



        - Can the vendor provide reference customers in your specific industry with comparable data complexity?

        - Who owns the data, the model outputs, and the intellectual property generated during the engagement?

        - What is the realistic integration scope — not the demo scope — for your existing systems?

        - Are performance SLAs contractually binding with financial consequences for non-delivery?

        - What is the exit strategy? Can you extract your data and models if the relationship ends?

        - How many internal FTEs will be required for ongoing operation — honestly?

        - Has legal reviewed the AI-specific contract terms, including liability for autonomous decisions?

        - How will the vendor's product roadmap affect your existing deployment if priorities shift?



    If a vendor cannot answer these questions clearly, the demo is irrelevant.

    ## The Consolidation Forecast

    The next twelve months will reshape the enterprise AI vendor landscape. The dynamics are clear:

    **Budgets will increase for a narrow set of AI products** that clearly deliver results. They will decline sharply for everything else. A small number of vendors will capture a disproportionate share of enterprise AI budgets while many others see revenue flatten or contract.

    **Contract cycles will drive strategy.** Enterprise scaling now depends on demonstrating concrete operational improvements — whether in contact center efficiency, sales cycle acceleration, or incident reduction — tied directly to renewal timelines.

    **Custom integration partners will gain share.** As enterprises consolidate platform vendors and build differentiating capabilities in-house, the demand shifts from product vendors to integration and orchestration partners who can connect platforms, customize workflows, and ensure the [resilience](/news/beyond-efficiency-enterprise-resilience-ai-metric) that off-the-shelf solutions cannot guarantee.

    ## The Bottom Line: Stop Buying Demos, Start Buying Outcomes

    The AI vendor reckoning is not a correction. It is a maturation. The organizations that thrived during the experimentation era — the ones with the most pilots, the most vendor relationships, the most proofs of concept — are not necessarily the ones that will thrive in the consolidation era.

    The winners in 2026 are the enterprises that can distinguish between vendors who deliver outcomes and vendors who deliver demos. That distinction requires procurement discipline, technical due diligence, and a clear-eyed assessment of where to build and where to buy.

    Two-point-five-two trillion dollars will be spent on AI this year. The question is not whether your enterprise will spend. It is whether your spending will produce results — or another round of pilots that never ship.

    Stop buying demos. Start buying outcomes.

    *ViviScape builds custom AI solutions designed around your business outcomes — not vendor feature lists. If your AI procurement strategy needs a reset, [let's start with what you actually need](/contact).*


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>ai</category>
      <category>technology</category>
      <category>business</category>
      <category>software</category>
    </item>
    <item>
      <title>The Agent Governance Stack: What Your Enterprise Needs Before Deploying Autonomous AI</title>
      <dc:creator>Art Hicks</dc:creator>
      <pubDate>Mon, 06 Apr 2026 00:46:43 +0000</pubDate>
      <link>https://forem.com/arthicksdev/the-agent-governance-stack-what-your-enterprise-needs-before-deploying-autonomous-ai-hf2</link>
      <guid>https://forem.com/arthicksdev/the-agent-governance-stack-what-your-enterprise-needs-before-deploying-autonomous-ai-hf2</guid>
      <description>&lt;p&gt;Forty-eight percent of cybersecurity professionals now rank agentic AI as the number-one attack vector heading into 2026 — ahead of deepfakes, ransomware, and supply chain compromise. Yet only 34 percent of enterprises have AI-specific security controls in place.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    That gap is not an oversight. It is a structural failure. Enterprises have spent the last two years building and deploying AI agents at breakneck speed while governance tooling lagged a full generation behind. The result: [shadow agents operating without oversight](/news/shadow-agents-governance-crisis), security incidents climbing, and a compliance deadline approaching that most organizations are not ready for.

    The good news is that 2026 is the year the governance stack caught up to the agent stack. The bad news is that most enterprises have not started building it.

    ## The OWASP Wake-Up Call

    In December 2025, OWASP released the Top 10 for Agentic Applications — a peer-reviewed framework developed by more than 100 security researchers that catalogs the most critical risks facing autonomous AI systems. It is the first authoritative attempt to formalize what can go wrong when AI systems do not just generate text but call APIs, execute code, move files, and make decisions with minimal human oversight.

    The top risk is Agent Goal Hijacking: attackers manipulate an agent's objectives through poisoned inputs — emails, documents, web content — and redirect the agent to perform harmful actions using its legitimate tools and access. Because agents cannot reliably distinguish instructions from data, a single malicious input can compromise an entire workflow.

    Three of the top four risks revolve around identities, tools, and delegated trust boundaries. This is critical because it means the attack surface for agentic AI is fundamentally different from traditional LLM security. Prompt injection is a content problem. Agent hijacking is an infrastructure problem. And infrastructure problems require infrastructure solutions.

    The OWASP framework makes one thing clear: the security model that worked for chatbots does not work for agents. Enterprises that treat agent governance as an extension of their existing AI safety programs are building on the wrong foundation.

    ## What the Governance Stack Actually Looks Like

    Until recently, "governing AI agents" meant writing policies that humans would manually enforce. That approach fails at scale — you cannot manually review every action taken by hundreds or thousands of autonomous agents operating across your enterprise.

    What enterprises need is a runtime governance layer: infrastructure that intercepts, evaluates, and controls agent actions before they execute, at machine speed. Microsoft's release of the Agent Governance Toolkit on April 2, 2026 — an open-source, seven-package system — provides the first comprehensive reference architecture for what this stack looks like in production.

    The architecture breaks down into four layers that every enterprise deploying autonomous agents needs to address:

    ### Layer 1: Policy Enforcement

    Every agent action must pass through a policy engine before execution. Not after. Not during review. Before. The enforcement layer evaluates each action against organizational rules, regulatory requirements, and safety constraints in sub-millisecond time.

    This is where most enterprises fail first. They deploy agents with broad permissions and plan to add constraints later. By the time "later" arrives, the agents have already created dependencies, accumulated access, and established patterns that are difficult to roll back.

    The principle is simple: default deny, explicit allow. Every tool call, every API request, every data access should require policy approval. The challenge is making this enforcement fast enough that it does not degrade agent performance — and flexible enough that it does not require rewriting agent code every time a policy changes.
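
    The default-deny principle can be sketched in a few lines. This is a hypothetical illustration: `PolicyEngine` and the rule shape are assumptions, not any specific product's API.

```javascript
// Default-deny policy engine sketch (hypothetical names and rule shape).
// Every agent action is denied unless an explicit allow rule matches.
class PolicyEngine {
  constructor() {
    this.allowRules = []; // explicit allow list; empty means deny everything
  }
  allow(predicate) {
    this.allowRules.push(predicate);
  }
  evaluate(action) {
    // Default deny: only an explicit rule can approve the action.
    return { action, allowed: this.allowRules.some((rule) => rule(action)) };
  }
}

const engine = new PolicyEngine();
// Explicit allow: read-only CRM queries.
engine.allow((a) => a.tool === "crm.query" && a.mode === "read");

console.log(engine.evaluate({ tool: "crm.query", mode: "read" }).allowed);   // true
console.log(engine.evaluate({ tool: "email.send", mode: "write" }).allowed); // false
```

    Because the rules live in the engine rather than in the agents, a policy change is a configuration change, not an agent rewrite.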

    ### Layer 2: Identity and Trust

    The [shadow agents crisis](/news/shadow-agents-governance-crisis) revealed that 45.6 percent of organizations rely on shared API keys for agent-to-agent authentication, and only 21.9 percent treat agents as independent identity-bearing entities. This is the equivalent of giving every employee the same badge and hoping nothing goes wrong.

    Agents need their own cryptographic identities — not borrowed human credentials, not shared service accounts. Each agent should have a verifiable identity that tracks across its entire lifecycle, from deployment through every action it takes to eventual decommission.

    Beyond identity, agents need dynamic trust scoring. An agent that has operated reliably for months within defined boundaries earns higher trust than a newly deployed agent with broad permissions. Trust should be earned incrementally and revoked instantly when anomalies are detected. The concept of execution rings — inspired by CPU privilege levels — provides a practical model: agents operate at the minimum privilege level required for their current task, with elevation requiring explicit authorization.
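
    The execution-ring model can be sketched as follows. The ring names, the trust threshold, and the class shape are illustrative assumptions; as with CPU privilege levels, a lower ring number means more privilege.

```javascript
// Execution-ring sketch inspired by CPU privilege levels (illustrative
// names and thresholds). Agents start at the least privileged ring and
// earn eligibility for elevation incrementally.
const RINGS = { ADMIN: 0, ACT_EXTERNAL: 1, ACT_INTERNAL: 2, READ_ONLY: 3 };

class AgentTrust {
  constructor(id) {
    this.id = id;
    this.score = 0;              // trust earned incrementally
    this.ring = RINGS.READ_ONLY; // minimum privilege by default
  }
  recordSuccess() {
    this.score += 1;
    // Sustained reliable operation makes the agent eligible for a more
    // privileged ring (explicit authorization would still be required).
    if (this.score >= 100) this.ring = Math.min(this.ring, RINGS.ACT_INTERNAL);
  }
  recordAnomaly() {
    // Trust is revoked instantly when an anomaly is detected.
    this.score = 0;
    this.ring = RINGS.READ_ONLY;
  }
  mayRun(requiredRing) {
    // An agent may act only if its ring is at least as privileged.
    return this.ring <= requiredRing;
  }
}

const billing = new AgentTrust("billing-agent");
console.log(billing.mayRun(RINGS.ACT_INTERNAL)); // false until trust is earned
```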

    ### Layer 3: Reliability and Observability

    Production AI agents need the same reliability engineering that production software systems demand — and then some. Circuit breakers prevent cascading failures when one agent's error triggers chain reactions across connected systems. Error budgets establish acceptable failure rates and automatically throttle agent autonomy when thresholds are exceeded.
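
    A circuit breaker for agent tool calls might look like the following sketch. The class name and thresholds are hypothetical, and the call path is shown synchronously for brevity.

```javascript
// Circuit-breaker sketch for agent tool calls (hypothetical class,
// illustrative thresholds). When the error budget is exhausted, the
// circuit opens and agent autonomy is throttled until a cooldown passes.
class AgentCircuitBreaker {
  constructor({ errorBudget = 3, cooldownMs = 60_000 } = {}) {
    this.errorBudget = errorBudget; // failures tolerated before opening
    this.cooldownMs = cooldownMs;   // how long autonomy stays throttled
    this.failures = 0;
    this.openedAt = null;           // non-null => circuit open
  }
  call(fn) {
    if (this.openedAt !== null && Date.now() - this.openedAt < this.cooldownMs) {
      throw new Error("circuit open: agent autonomy throttled");
    }
    this.openedAt = null; // half-open after cooldown: allow a trial call
    try {
      const result = fn();
      this.failures = 0;  // success resets the error budget
      return result;
    } catch (err) {
      if (++this.failures >= this.errorBudget) this.openedAt = Date.now();
      throw err; // propagate so callers can escalate to a human
    }
  }
}
```

    The design choice worth noting is that the breaker wraps the tool boundary, not the agent's reasoning: a confused agent keeps thinking, but it stops acting.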





    Observability is not optional. Every agent decision, every tool invocation, every data access must be logged, traceable, and auditable. This is not just good engineering practice — it is a regulatory requirement. The [AI compliance countdown](/news/ai-compliance-countdown-2026) is real: the EU AI Act's high-risk obligations take effect in August 2026, and the Colorado AI Act becomes enforceable in June 2026. Organizations without comprehensive agent audit trails will face regulatory exposure on a timeline measured in months, not years.

    ### Layer 4: Compliance Automation

    Manual compliance verification does not scale. Enterprises deploying dozens or hundreds of agents need automated governance verification that continuously maps agent behavior against regulatory requirements — EU AI Act, HIPAA, SOC2, and the emerging patchwork of AI-specific regulations.

    This layer should generate compliance evidence automatically, not through periodic audits but through continuous monitoring. When a regulator asks how your agents handle personal data, the answer should come from your governance infrastructure, not from a frantic investigation.
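
    Continuous evidence generation can be sketched as a set of requirement checks applied to every logged action. The requirement names and action fields below are illustrative assumptions, not a mapping to any specific regulation.

```javascript
// Continuous compliance-evidence sketch (illustrative requirement names
// and action fields). Every agent action is checked and logged at the
// moment it happens, so audit evidence accumulates continuously.
const requirements = {
  "no-pii-export": (a) => !(a.dataClass === "pii" && a.destination === "external"),
  "human-oversight-high-risk": (a) => a.risk !== "high" || a.approvedBy != null,
};

const auditLog = [];

function recordAction(action) {
  const violations = Object.entries(requirements)
    .filter(([, check]) => !check(action))
    .map(([name]) => name);
  auditLog.push({ ...action, timestamp: Date.now(), violations });
  return violations;
}

// Evidence for auditors comes straight from the log, not an investigation.
function complianceReport() {
  return {
    actions: auditLog.length,
    violations: auditLog.filter((r) => r.violations.length > 0).length,
  };
}
```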

    ## The Confidence-Incident Paradox

    The most dangerous finding from the 2026 security landscape is not the volume of incidents — it is the confidence gap. Eighty-two percent of executives feel confident that existing policies protect against unauthorized agent actions. Meanwhile, 88 percent of organizations reported confirmed or suspected AI agent security incidents.

    This paradox exists because executives are evaluating agent risk through the lens of traditional software security. They see access controls, encryption, and network policies and assume their agents are governed. They are not. Agents introduce a new category of risk — autonomous decision-making with real-world consequences — that existing security controls were never designed to address.

    The OWASP Agentic Top 10 is not an incremental update to the LLM security framework. It is a fundamentally different threat model. And it requires a fundamentally different response.

    ## The Build-Versus-Buy Decision

    Open-source governance tooling like Microsoft's Agent Governance Toolkit provides a strong foundation — but a foundation is not a finished building. The toolkit covers the horizontal capabilities that every enterprise needs: policy enforcement, identity management, observability, compliance mapping.

    What it does not cover is the vertical integration that makes governance actually work in your specific environment: your data classification scheme, your regulatory exposure profile, your agent topology, your escalation workflows, your existing identity infrastructure.

    This is where the [orchestration trap](/news/orchestration-trap-multi-agent-ai) applies directly to governance. Off-the-shelf governance tools solve generic problems. Your enterprise has specific agents, specific data flows, specific compliance obligations, and specific risk tolerances. The governance stack that protects your organization needs to reflect those specifics.

    The enterprises that will navigate the agentic era successfully are those that build governance as a first-class engineering discipline — not a checkbox exercise bolted on after deployment.

    ## The Compliance Clock

    The regulatory timeline is no longer theoretical:



        - **June 2026:** Colorado AI Act becomes enforceable

        - **August 2026:** EU AI Act high-risk AI obligations take effect

        - **2028:** Gartner predicts 65 percent of governments will have introduced technological sovereignty requirements



    Organizations deploying autonomous agents without governance infrastructure are not just accepting security risk — they are accepting regulatory risk on a defined timeline. And unlike security incidents, which can sometimes be contained, regulatory non-compliance has consequences that compound.

    The question for every enterprise leader is straightforward: do you have a governance stack that can demonstrate — to auditors, regulators, and your board — exactly what your agents are doing, why they are doing it, and what controls prevent them from doing what they should not?

    If the answer is no, the time to build it is before the compliance deadline, not after.

    ## The Bottom Line

    The agent governance gap is closing — but it is closing through tooling and architecture, not through policy documents and committee meetings. The enterprises that will lead in autonomous AI are not the ones deploying the most agents. They are the ones deploying agents they can actually govern.

    The governance stack is not a tax on innovation. It is the infrastructure that makes innovation sustainable. Without it, every agent you deploy is a liability waiting to be discovered — by an attacker, a regulator, or your own audit team.

    Build the governance stack first. Then deploy the agents. The order matters.

    *ViviScape builds custom governance infrastructure for enterprises deploying autonomous AI agents — from policy engines to compliance automation. If your agent deployments are outpacing your governance capabilities, [let's fix that before the deadline](/contact).*


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>ai</category>
      <category>technology</category>
      <category>business</category>
      <category>software</category>
    </item>
    <item>
      <title>Wysi Wysi Wysiwyg</title>
      <dc:creator>Art Hicks</dc:creator>
      <pubDate>Wed, 13 May 2020 13:17:42 +0000</pubDate>
      <link>https://forem.com/arthicksdev/wysi-wysi-wysiwyg-3dcj</link>
      <guid>https://forem.com/arthicksdev/wysi-wysi-wysiwyg-3dcj</guid>
      <description>&lt;p&gt;I have worked with a variety of Wysiwyg's in the past.  &lt;/p&gt;

&lt;h2&gt;
  
  
  What is a WYSIWYG?
&lt;/h2&gt;

&lt;p&gt;"Software allows content to be edited in a form that resembles its appearance when printed or displayed as a finished product, such as a printed document, web page, or slide presentation."&lt;/p&gt;

&lt;p&gt;Or, simplified: an HTML or content editor.&lt;/p&gt;

&lt;p&gt;They really help provide a rich experience for end users, coders and non-coders alike, when interacting with your application.&lt;/p&gt;

&lt;p&gt;Here are a few of the top choices I have worked with.&lt;/p&gt;

&lt;p&gt;1.) Summernote (Art's Choice - offers AngularJS directives)&lt;br&gt;
2.) MediumEditor (Simple/Clean)&lt;br&gt;
3.) TinyMCE (Feature Rich - Popular)&lt;br&gt;
4.) CKEditor (Feature Rich)&lt;br&gt;
5.) KendoUI Editor (Flexible)&lt;/p&gt;

&lt;p&gt;There are many others I have worked with, but this is my list of go-tos.&lt;/p&gt;

&lt;p&gt;What's your favorite Wysi? 😁 &lt;/p&gt;

</description>
      <category>javascript</category>
      <category>html</category>
      <category>newbie</category>
      <category>wysiwyg</category>
    </item>
    <item>
      <title>NodeJs or C#</title>
      <dc:creator>Art Hicks</dc:creator>
      <pubDate>Wed, 06 May 2020 05:16:25 +0000</pubDate>
      <link>https://forem.com/arthicksdev/nodejs-or-c-4jm9</link>
      <guid>https://forem.com/arthicksdev/nodejs-or-c-4jm9</guid>
      <description>&lt;p&gt;I have worked with C# for over 15 years and used it from desktop, server, web, and mobile development.  I also use Node.Js for a lot of real-time applications. I have experienced real-time applications being easier to develop NodeJs vs SignalR in C#. That being said, I still believe that the strong typing of C# enforces  discipline and reduces the amount of bugs within your application.  &lt;/p&gt;

&lt;p&gt;Also, .NET Core has now demonstrated superior performance benchmarks, so I am not sure I would make a full switch in the Node.js direction.&lt;/p&gt;

&lt;p&gt;I have found that Node.js is good for small to medium applications; if I'm designing something at the enterprise level, I'm sticking with C#.&lt;/p&gt;

&lt;p&gt;At the end of the day I think it boils down to what tribe you want to live in.&lt;/p&gt;

&lt;p&gt;Fellow polyglots out there what are your thoughts? ☺️&lt;/p&gt;

</description>
      <category>node</category>
      <category>csharp</category>
      <category>webdev</category>
      <category>api</category>
    </item>
    <item>
      <title>Node AWS S3 Photo Gallery Starter Project</title>
      <dc:creator>Art Hicks</dc:creator>
      <pubDate>Fri, 01 May 2020 04:19:44 +0000</pubDate>
      <link>https://forem.com/arthicksdev/node-aws-s3-photo-gallery-starter-project-102n</link>
      <guid>https://forem.com/arthicksdev/node-aws-s3-photo-gallery-starter-project-102n</guid>
      <description>&lt;p&gt;🔰 I created a quick little project to demonstrate how to pull data from amazon #AWS #S3 #cloudstorage with nodejs.  My primary language of choice is typically #C, but I know a large majority of developers in the community prefer #node.&lt;/p&gt;

&lt;p&gt;I will continue to add to the project when I have more time.  This was a great experience to build for you all to use, and I am looking forward to doing a lot more Node.js projects.&lt;/p&gt;

&lt;p&gt;Where to Get it: &lt;br&gt;
🔗 &lt;a href="https://github.com/arthicksdev/NodeAWSS3PhotoGallery" rel="noopener noreferrer"&gt;https://github.com/arthicksdev/NodeAWSS3PhotoGallery&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let me know your thoughts, or if there is anything specific you would like to see.&lt;/p&gt;

&lt;p&gt;✌️ &lt;a class="mentioned-user" href="https://dev.to/arthicksdev"&gt;@arthicksdev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>node</category>
      <category>bootstrap</category>
      <category>angular</category>
      <category>aws</category>
    </item>
    <item>
      <title>Twilio Hackathon Conference Call Automation</title>
      <dc:creator>Art Hicks</dc:creator>
      <pubDate>Tue, 28 Apr 2020 19:35:08 +0000</pubDate>
      <link>https://forem.com/arthicksdev/twilio-hackathon-conference-call-automation-4bge</link>
      <guid>https://forem.com/arthicksdev/twilio-hackathon-conference-call-automation-4bge</guid>
      <description>&lt;p&gt;I created a sample application using NodeJs and AngularJS that allows members to subscribe and be sent a call to join into a specific conference room.  This will be useful for automating/routing specific people into the right rooms.  &lt;/p&gt;

&lt;p&gt;This is a simple example, but it could be scaled out tremendously into a full enterprise-level application.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I built
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Category Submission:
&lt;/h4&gt;

&lt;p&gt;Engaging Engagements&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo Link
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://twilio-hackathon-275613.uc.r.appspot.com/" rel="noopener noreferrer"&gt;https://twilio-hackathon-275613.uc.r.appspot.com/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Link to Code
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/arthicksdev/TwilioHackathon2020" rel="noopener noreferrer"&gt;https://github.com/arthicksdev/TwilioHackathon2020&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I built it
&lt;/h2&gt;

&lt;p&gt;Built the solution using Node.js, AngularJS, Bootstrap 4, FontAwesome, jQuery, and Google Cloud Console (only for hosting the demo), with VSCode as the IDE.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Resources/Info
&lt;/h2&gt;

&lt;p&gt;I tried to keep the lines of code to a minimum so the project could be recreated and understood by someone starting out, or by anyone looking to recreate the experience.&lt;/p&gt;

</description>
      <category>twiliohackathon</category>
      <category>node</category>
      <category>angular</category>
    </item>
    <item>
      <title>The Journey of a Developer</title>
      <dc:creator>Art Hicks</dc:creator>
      <pubDate>Tue, 07 Apr 2020 05:04:17 +0000</pubDate>
      <link>https://forem.com/arthicksdev/the-journey-of-a-developer-38pa</link>
      <guid>https://forem.com/arthicksdev/the-journey-of-a-developer-38pa</guid>
      <description>&lt;p&gt;Development is more than a trade for me. It is a part of my DNA.  As a child, I always took things apart and tried to put them back together.  Sometimes I discovered you can make things better if you understood how they were intended to work.  It doesn’t matter what language you write, there is always a unique signature that accents your digital impact.  Being a polyglot I tried many different languages to discover advantages with one language versus the next.&lt;/p&gt;

&lt;p&gt;I have learned many things over the years and found out that superpowers are real. After creating and developing for more than 20 years, I found comfort in C# as my core stack, and I currently work with various desktop, mobile, TV, and web technologies.  One fact I discovered early on is that the purpose of your application outweighs the code itself.  As developers, we strive to solve problems and create new experiences.  That is why we always have to focus on the next person who will interact with our creations and not get in our own way.&lt;/p&gt;

&lt;p&gt;I have surrounded myself with many great people who have helped me learn what I know today, and as technology continues to evolve, one thing will never change: the REASON.  We all code, create, solve, and live for a reason.  That reason is also what separates the good from the great at what we do.&lt;/p&gt;

&lt;p&gt;I implore you to identify your superpower, if you haven’t already, and make yourself the most powerful being in your space.&lt;/p&gt;

</description>
      <category>codequality</category>
      <category>purpose</category>
      <category>problemsoving</category>
      <category>nextsteps</category>
    </item>
  </channel>
</rss>
