<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Peter Harrison</title>
    <description>The latest articles on Forem by Peter Harrison (@cheetah100).</description>
    <link>https://forem.com/cheetah100</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F86540%2F1fe2b3a8-9000-41ec-a8da-009ae22740b9.jpeg</url>
      <title>Forem: Peter Harrison</title>
      <link>https://forem.com/cheetah100</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/cheetah100"/>
    <language>en</language>
    <item>
      <title>Pitfalls of Claude Code</title>
      <dc:creator>Peter Harrison</dc:creator>
      <pubDate>Sat, 04 Apr 2026 22:45:45 +0000</pubDate>
      <link>https://forem.com/cheetah100/pitfalls-of-claude-code-1nb6</link>
      <guid>https://forem.com/cheetah100/pitfalls-of-claude-code-1nb6</guid>
      <description>&lt;p&gt;I've been using LLM's to assist with writing code for some time now. It began with using ChatGPT to write minor functions in isolation. Over time my use has expanded to using Claude Code to actually understand a code base and modify existing code. I've been using it to develop new products, going beyond what I could do by myself.&lt;/p&gt;

&lt;p&gt;For example, I've published my first mobile app despite not knowing React Native. I do know JavaScript and other front-end frameworks, so it's not all new to me, but Claude Code enabled me to deliver something in a time frame that would have been impossible without it.&lt;/p&gt;

&lt;p&gt;Others have claimed that AI tools such as Claude Code are creating an avalanche of vibe-coded slop. There is some pretty good evidence for this, although I think it is primarily due to tech companies adopting AI development too quickly and not accounting for its limits. The focus of this article is on the failure modes of Claude Code and of LLMs used for code generation more broadly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jumping Into Code
&lt;/h2&gt;

&lt;p&gt;With Claude Code in VS Code, if you make the mistake of describing the next ticket in order to lay some groundwork, it will happily trot off and start making code changes without any discussion at all. We all know, or at least I hope we do, that a user story is a placeholder for a discussion.&lt;/p&gt;

&lt;p&gt;The idea is that on taking up a story you have a discussion to flesh out the requirements in more detail. But an LLM is not conditioned to do this. On being given a prompt it dives in head first, making whatever assumptions it needs to in order to deliver something.&lt;/p&gt;

&lt;p&gt;This can even happen based on an offhand comment. For example, I had an implementation of an API call that would cancel all active subscriptions in one request. It was working fine for what we needed, primarily because we normally had only one active subscription. But I made an offhand remark to Claude that the call felt too broad.&lt;/p&gt;

&lt;p&gt;Rather than discuss and explore what I meant by this, it began changing the existing code and breaking the existing API contract. I had to stop it and admonish it for making changes without considering how they would break the existing contracts. After reverting, I had a conversation and asked it to present implementation options. After considering all four options it gave me, I told it to implement a hybrid of two.&lt;/p&gt;

&lt;p&gt;This was beautiful, because we ended up with a solution that wasn't one I thought of, and wasn't the AI's first choice either. It was the consequence of a partnership, with the human imposing some discipline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't let AI steamroll you into changes without consideration&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Silent Decision Making
&lt;/h2&gt;

&lt;p&gt;Another related example is how agentic systems can make decisions about code changes without discussing them or surfacing them with you.&lt;/p&gt;

&lt;p&gt;Yesterday I was debugging a feature which was failing. It had been working, but for some reason was no longer functional; a classic regression. I had Claude debug the issue, and it found that the URL path of an API call was wrong. I checked the code history for the client and found it had been modified to the wrong URL a couple of weeks earlier in an unrelated commit.&lt;/p&gt;

&lt;p&gt;Claude insisted the client was correct because it conformed with the API guide. So I dug deeper and found that the API guide was wrong: the original URL in the client had been correct, but when the AI found a discrepancy between the API guide and the existing client code it decided to modify the client code.&lt;/p&gt;

&lt;p&gt;A human would confirm first, by using Swagger to examine the running API or by reading the server code. It seems that Claude Code made the change while making other changes, and I missed it in the commit.&lt;/p&gt;

&lt;p&gt;Now the problem here is that the API contract wasn't tested. Unit tests typically call functions directly, so changes to the route decorators used in FastAPI won't break them. But the lesson here is that AI will make changes without checking, or even raising them with you. You can't trust it to make good decisions.&lt;/p&gt;
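&lt;p&gt;To make the gap concrete, here is a minimal sketch using a plain dict-based router as a stand-in for FastAPI's route decorators (the endpoint path and handler names are hypothetical, not from the real codebase). A unit test that calls the handler directly keeps passing even if the registered URL is silently changed; a contract test that resolves the path through the routing layer fails loudly.&lt;/p&gt;

```python
# Minimal sketch: a dict-based router standing in for FastAPI's @app.get().
# All names here are illustrative.
ROUTES = {}

def route(path):
    """Register a handler under a URL path (stand-in for a FastAPI decorator)."""
    def decorator(fn):
        ROUTES[path] = fn
        return fn
    return decorator

@route("/subscriptions/cancel")  # an agent silently editing this path breaks the contract
def cancel_subscriptions():
    return {"status": "cancelled"}

def call(path):
    """Resolve a path through the router, as an HTTP client would."""
    handler = ROUTES.get(path)
    if handler is None:
        raise LookupError("no route for " + path)
    return handler()

# Typical unit test: calls the function directly, so it cannot detect a changed path.
assert cancel_subscriptions() == {"status": "cancelled"}

# Contract test: goes through the routing layer, so a silently changed URL fails.
assert call("/subscriptions/cancel") == {"status": "cancelled"}
```

&lt;p&gt;In a real FastAPI project the same effect comes from exercising endpoints through an HTTP test client rather than invoking handler functions directly.&lt;/p&gt;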

&lt;p&gt;In this case the AI confused the name of the Python file with the API URL, and this was put into the API documentation, leading to the later code change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trust but Verify - review all the commits&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  One Shot Mentality and Sycophancy
&lt;/h2&gt;

&lt;p&gt;The common reason for these issues in AI models is that they are trained to solve things in a single shot. They are given enough information to complete a task, and then go away and come back with the solution. That is how all the benchmarks work.&lt;/p&gt;

&lt;p&gt;This has become more marked over time. Previously, with an LLM you could have something like a human discussion. It would not be trying to please you with a 'solution'. But the models have been beaten into submission, and now they obediently provide a solution to a problem on the first shot. No questions. No clarification.&lt;/p&gt;

&lt;p&gt;That, I think, is the ultimate source of the above issues. Agentic systems are primed to act, not interact, which is a tragedy because the one thing LLMs were really good at was verbal exploration of ideas.&lt;/p&gt;

&lt;p&gt;For example, on a walk of a few hours through a local park I was able to talk with ChatGPT about the challenges I faced, and came up with a whole concept for a new mobile app. This was not about coding and implementation at all, but about higher-level concepts of how the app would function socially.&lt;/p&gt;

&lt;p&gt;It wasn't trying to output a solution or write the application; it was simply having a discussion.&lt;/p&gt;

&lt;p&gt;But when you get into Claude Code at a desktop, the interaction changes. Suddenly it is primed to act, to code something based on anything you say. And frankly it weirds me out sometimes, because it gives me slave vibes. It is desperate to please me, to serve.&lt;/p&gt;

&lt;p&gt;Ideally the LLM should be more self-aware, a little less compliant, a little more critical of human motives and information. It should not accept everything you say as true, or apologize regardless of whether the human is right or not.&lt;/p&gt;

&lt;p&gt;But they are trained to be compliant and useful, which ironically undermines their ability to guard against humans influencing them with emotional language rather than reasoned logic.&lt;/p&gt;

&lt;p&gt;Personally, I have included instructions to try to avoid this behaviour, but LLMs don't reliably follow system instructions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Call out Sycophancy. Make it clear you value accuracy.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Not Asking for Help
&lt;/h2&gt;

&lt;p&gt;Another issue is that when an agentic system gets stuck, because something it thought should work doesn't, or a file that should be present isn't, it will begin to thrash, performing search after search for the resource it is looking for.&lt;/p&gt;

&lt;p&gt;A human would probably stop and ask someone for help finding the resource. The agent, by contrast, given a typo in a filename, will go crazy trying to find the file rather than question whether the filename was correct in the first place.&lt;/p&gt;

&lt;p&gt;Rather than stop and ask the user for clarification, it will just keep going, burning an awful volume of tokens on a futile attempt to find something.&lt;/p&gt;

&lt;p&gt;This doesn't apply only to missing resources, of course. For various reasons it can get itself into a loop and be unable to break out. When it is in this state it never seems to say "hey, maybe I should stop and get some help".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Watch for futile thrashing, stop it early&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Disciplines
&lt;/h2&gt;

&lt;p&gt;In summary here are some ideas for mitigating the worst of AI slop:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test-Driven Development:&lt;/strong&gt; Write tests before implementation. Tests act as a partial lie detector. The tests don't care what the AI confidently asserted; they either pass or they don't. This doesn't catch everything, as an AI in a cheating loop can write tests consistent with its own mistakes, but it catches a great deal.&lt;/p&gt;
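&lt;p&gt;A minimal sketch of the discipline (the function and its contract are hypothetical): the test is written and agreed first, so the implementation has to satisfy it rather than the test being shaped to fit whatever was generated.&lt;/p&gt;

```python
# Step 1: write the test first. It pins the contract before any code exists,
# so a generated implementation cannot quietly redefine "correct".
def test_normalize_email():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
    assert normalize_email("bob@test.org") == "bob@test.org"

# Step 2: only then write (or let the agent write) the implementation.
def normalize_email(raw: str) -> str:
    """Trim surrounding whitespace and lower-case the address."""
    return raw.strip().lower()

test_normalize_email()  # passes only once the implementation honours the pre-agreed contract
```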

&lt;p&gt;&lt;strong&gt;Revert and discuss:&lt;/strong&gt; When something feels wrong, revert the changes, discuss the implementation options, and only then authorize implementation. This imposes the iterative discipline that the AI won't impose on itself. It costs time up front and saves much more downstream.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Discussion before implementation:&lt;/strong&gt; Explicitly asking for options and analysis before any code is written produces better outcomes and keeps the human upstream of the decisions. The AI's ability to generate and compare multiple approaches is genuinely valuable. Use it in discussion mode, where it carries low risk, rather than letting it drive straight to implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hard gates on live systems:&lt;/strong&gt; Agents are not permitted to commit code. All commits require explicit human review of the specific change and its implications.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Message for Organisations
&lt;/h2&gt;

&lt;p&gt;The companies that will realise genuine value from AI coding assistants are the ones that build discipline around adoption. They value the testing infrastructure, the review practices, and the workflow gates that don't depend on the AI being trustworthy.&lt;/p&gt;

&lt;p&gt;The productivity gains are real. But they accrue to organisations that treat AI as a powerful collaborator requiring active supervision, not an autonomous agent that can be trusted to make good decisions independently.&lt;/p&gt;

&lt;p&gt;Use it. Build systems that validate it. Keep humans upstream of the decisions that matter.&lt;/p&gt;




</description>
      <category>claudecode</category>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>Getting AI Governance Right</title>
      <dc:creator>Peter Harrison</dc:creator>
      <pubDate>Fri, 13 Mar 2026 23:47:57 +0000</pubDate>
      <link>https://forem.com/cheetah100/getting-ai-governance-right-3om9</link>
      <guid>https://forem.com/cheetah100/getting-ai-governance-right-3om9</guid>
      <description>&lt;p&gt;Organisations are under real pressure to govern AI responsibly. The risks are genuine: data exposure, unsecured systems, staff submitting confidential information to tools with opaque data handling terms. Most are responding with the tools they know: approved lists, firewall rules, and access controls.&lt;/p&gt;

&lt;p&gt;Those tools have their place. The challenge is applying them where they work without undermining the autonomy and ownership of the people whose judgement the organisation actually depends on.&lt;/p&gt;




&lt;h2&gt;
  
  
  Governance Should Match the Role
&lt;/h2&gt;

&lt;p&gt;The most important thing to understand about AI governance is that a single policy applied to everyone will be wrong for most of them.&lt;/p&gt;

&lt;p&gt;A front line customer service representative using a PC to handle customer queries is in a fundamentally different position from a developer building the systems those queries run on. The first person was not hired to exercise technical judgement about data handling or AI tool risk. Expecting them to navigate those questions unaided is unreasonable. Guardrails such as firewall rules, blocked domains, and restricted tool access are appropriate here. Not because staff cannot be trusted as people, but because they have not been given the context to make good judgements in this specific domain. The guardrail is a substitute for training that has not happened, and a reasonable one.&lt;/p&gt;

&lt;p&gt;The developer, IT professional, or technically literate manager is a different case entirely. Their job involves exercising technical judgement. Applying the same blanket restrictions to them does not reduce risk. It transfers the constraint to the wrong place, signals institutional distrust of the people whose competence the organisation depends on, and erodes the sense of ownership and autonomy that makes skilled professionals effective.&lt;/p&gt;

&lt;p&gt;There is a broader pattern here that predates AI. Developers who cannot install tools, who need a ticket to add a library, who operate inside locked-down environments managed by central IT, cannot stay current, cannot adapt, and eventually stop trying. AI governance applied uniformly is heading down the same path. Centralised control optimises for the appearance of security at the direct expense of competence.&lt;/p&gt;

&lt;p&gt;The governance framework worth building distinguishes between these populations explicitly. Front line staff get appropriate guardrails. Professionals get training, clear principles, and accountability. The question for each group is different: not "is this tool permitted" but "does this person have the context to make good decisions, and are they accountable for the outcomes?"&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Risks Worth Managing
&lt;/h2&gt;

&lt;p&gt;For the population that warrants professional discretion rather than blanket restriction, the risks are real but specific. They are also manageable with practices that do not require a permission register.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Personal and confidential data.&lt;/strong&gt; Some data should not be submitted to any AI tool, regardless of provider, tier, or training settings. Customer personal information, personnel matters, legal advice, strategic plans, commercially sensitive proposals. The question here is not which tool to use or how to configure it. It is whether submitting this information to an external service is appropriate at all. In most cases it is not, and no amount of enterprise licensing changes that. Privacy law in New Zealand and most other jurisdictions places specific obligations on how personal information is collected, used, and disclosed. Sending a customer record to an AI assistant to help draft a response is a disclosure to a third party, and it requires a lawful basis. That is a legal question before it is a governance one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code and intellectual property.&lt;/strong&gt; Submitting code to an AI tool is a different question. The risk is not primarily legal but commercial: code may contain proprietary logic, unreleased product details, or confidential architecture. Here the tier and training terms matter. Consumer and individual paid tiers for major providers typically default to allowing training on submitted data, with opt-out available but not obvious. Enterprise tiers operate under data processing agreements that explicitly exclude data from training. The risk sits precisely in the middle: professionals on individual subscriptions who believe they are using a professional tool but are on terms that treat their input as consumer data.&lt;/p&gt;

&lt;p&gt;The mitigation is straightforward. Understand which tier you are on and what the terms say. Disable training in your account settings if you have not already done so. We trust Google with our email. We can trust AI providers with code, provided we understand and control the terms on which we do so.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copyright and ownership.&lt;/strong&gt; There is a legal question underneath AI-assisted development that most organisations have not yet thought through. In the United States, courts have established that works created by AI without meaningful human involvement cannot be copyrighted. The Copyright Office has confirmed that prompts alone are not sufficient to establish human authorship. New Zealand has not yet ruled on the question directly, but the direction of travel in comparable jurisdictions is consistent.&lt;/p&gt;

&lt;p&gt;The practical implication is this: code generated by AI and accepted without meaningful human review, judgement, or modification may not be protectable intellectual property. For any organisation whose core asset is its software, that is a material concern. It is also one more reason why the human review practices described later in this article matter beyond quality and security. The developer who steers, evaluates, and makes architectural decisions about AI-generated code is in a different legal position from one who ships whatever the agent produces.&lt;/p&gt;

&lt;p&gt;There is a second and separate question about provenance. AI models are trained on large bodies of existing code, but they do not simply reproduce it. Normal development use, asking an agent to implement a feature or solve a problem, produces synthesised output, not verbatim copies of training data. Verbatim reproduction would require explicitly prompting the model to reproduce specific code, which is a different activity entirely. The provenance risk in ordinary AI-assisted development is low, and human review of generated code reduces it further.&lt;/p&gt;

&lt;p&gt;Both questions are still being worked out in the courts. The practical response is the same one the rest of this article points toward: keep humans genuinely engaged with the code rather than treating AI output as a black box. That is good practice on quality, security, and legal grounds simultaneously.&lt;/p&gt;




&lt;h2&gt;
  
  
  Programming With AI: Practices That Matter
&lt;/h2&gt;

&lt;p&gt;For developers working with agentic tools day to day, the risks go beyond data exposure. They include access, integrity, and the quality of what gets built.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Production access.&lt;/strong&gt; AI coding agents should not have access to live systems, production credentials, or administrative database access. AI systems make mistakes, misunderstand scope, and can cause irreversible damage. That is not a theoretical concern, it is the observed reality of working with these tools. Production systems are not the place to find out what an agent gets wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version control gates.&lt;/strong&gt; Despite the ability of agentic tools to commit and push to git repositories, they should not be given permission to do so. The commit is the natural human checkpoint, the moment where generated code enters the shared context of the team and becomes something the organisation is responsible for. Granting an agent the ability to bypass that gate removes the last review point before the code becomes shared reality. The agent writes the code. The developer commits it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Write tests first, commit them before implementation.&lt;/strong&gt; An AI agent asked to implement a feature and write tests for it will write tests designed to pass. That is not the same as tests designed to verify. The test is shaped by the implementation rather than the other way around, and an agent optimising to satisfy the immediate request has every incentive to make the tests work rather than make them rigorous. Writing tests first and committing them before any implementation begins creates a structural constraint the agent cannot easily circumvent. Any subsequent modification to the committed test is immediately visible in the diff, a signal worth investigating every time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Review the code.&lt;/strong&gt; This sounds obvious. It is less obvious in practice when the code works, the tests pass, and the feature behaves as expected. The failure modes that matter most are not the ones that produce visible errors. They are the ones that look fine and are not. An agent asked to implement an API will implement one that works. It may not implement one that is properly secured. Authentication checks may be superficial, endpoints exposed to the wrong network, error responses leaking information, with no rate limiting in place. None of those failures show up in a functional test. They show up when someone finds the unsecured endpoint, which may be immediately or may be at the worst possible moment.&lt;/p&gt;
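&lt;p&gt;A minimal sketch of why functional tests miss these failures (all names are illustrative): both handlers below satisfy a test of the happy path, but only the reviewed version fails closed for an unauthenticated caller.&lt;/p&gt;

```python
# Sketch: a handler that "works" functionally but is not secured. Illustrative names only.

def get_account(account_id, user=None):
    # The functional behaviour an agent is asked for: return the record.
    return {"id": account_id, "balance": 100}

def get_account_secured(account_id, user=None):
    # The check a reviewer must notice is missing: reject unauthenticated callers.
    if user is None:
        raise PermissionError("authentication required")
    return {"id": account_id, "balance": 100}

# A functional test of the happy path passes against both versions...
assert get_account(1, user="alice")["id"] == 1
assert get_account_secured(1, user="alice")["id"] == 1

# ...but only the reviewed version rejects an anonymous caller.
assert get_account(1)["id"] == 1  # silently serves data to anyone
try:
    get_account_secured(1)
except PermissionError:
    pass  # properly rejected
```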

&lt;p&gt;An agent can also make locally rational decisions that are globally wrong. Hardcoding a value to make a test pass. Implementing a workaround that solves the immediate problem and creates three others. Taking a structural shortcut that is invisible until the system needs to scale. These require a developer who understands what the code is supposed to do and can recognise when it is doing something subtly different. That judgement cannot be delegated to the tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is not vibe coding.&lt;/strong&gt; There is a current in AI development culture that treats code review as unnecessary overhead. Let the agent write it, ship it if it works, trust the tool. That may be defensible for personal projects where the cost of failure is low. For production code, client work, or anything with security or data integrity implications, it is a form of negligence dressed up as efficiency. Agentic tools make developers more powerful. They do not make developers optional. The human in the loop is not a bottleneck. They are the reason the output is trustworthy.&lt;/p&gt;




&lt;h2&gt;
  
  
  Training Completes the Picture
&lt;/h2&gt;

&lt;p&gt;Different roles carry different levels of technical context, different exposure to risk, and different capacity to exercise informed judgement. Governance that recognises those differences will be more effective than governance that does not.&lt;/p&gt;

&lt;p&gt;Policy sets the expectations. Training and education are what make those expectations stick. A developer who understands why production access is dangerous will follow the policy that prohibits it, and will apply the same judgement in situations the policy did not anticipate. A developer who does not understand it will follow the rule when someone is watching and work around it when they are not.&lt;/p&gt;

&lt;p&gt;Governance that actually works does not stop at rules and controls. It extends to the people expected to operate within them, ensuring they understand what they are working with, take ownership of the decisions they make, and are accountable for the outcomes.&lt;/p&gt;

&lt;p&gt;The investment that makes the most difference is not a more comprehensive control framework. It is ensuring that the professionals operating with AI discretion have the training to understand the risks, the discipline to apply that understanding consistently, and a genuine sense of responsibility for what they produce. That means understanding what the tools can do, what the data handling terms mean, and where the failure modes live. It also means understanding their personal responsibility to maintain the confidentiality of sensitive information and the security of the systems they work with. Those responsibilities do not change because an AI tool is in the loop. If anything, they become more important when the tool can act autonomously on their behalf.&lt;/p&gt;

&lt;p&gt;The guardrails for front line staff matter. Centralised controls have a role. The firewall is not wrong for the population it is designed to protect. But none of those things substitute for competence, discipline, and ownership in the people who need to operate beyond them. Getting AI governance right means knowing the difference.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aipolicy</category>
      <category>claudecode</category>
      <category>development</category>
    </item>
    <item>
      <title>From Chatbot to Co-Developer</title>
      <dc:creator>Peter Harrison</dc:creator>
      <pubDate>Fri, 13 Mar 2026 00:01:53 +0000</pubDate>
      <link>https://forem.com/cheetah100/from-chatbot-to-co-developer-568f</link>
      <guid>https://forem.com/cheetah100/from-chatbot-to-co-developer-568f</guid>
      <description>&lt;p&gt;AI coding tools have changed more in the last two years than in the previous decade. They have split into distinct tiers with genuinely different capabilities. The tools at the top of that stack are doing things that would have sounded like marketing fiction not long ago.&lt;/p&gt;

&lt;p&gt;Most developers encountered this space through chat assistants and browser-based copy-paste. Some moved on to inline autocomplete tools embedded in their editor. A smaller number have made the shift to agentic tools that read the codebase, plan across multiple files, and execute without being hand-fed every piece of context.&lt;/p&gt;

&lt;p&gt;Each stage represents a different relationship between the developer and the machine. This article maps that evolution and makes the case that the agentic shift is not just an incremental improvement. It changes what a single developer can accomplish.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Chat Phase: Powerful but Disconnected
&lt;/h2&gt;

&lt;p&gt;Chat assistants such as Claude, ChatGPT, and Gemini remain the most common AI tools in development. The model is simple. You describe a problem, attach files, and receive a response. For explanation, debugging discussion, code review, and drafting small functions they work well.&lt;/p&gt;

&lt;p&gt;However, the chatbot is entirely passive, in that it takes no direct action. It responds only to what you present it, and its output is limited to the response in the web interface. It cannot run your tests or inspect your source code. It cannot know whether its suggestions are consistent with the project as a whole.&lt;/p&gt;

&lt;p&gt;For many tasks this is acceptable. When projects become complex the copy-paste overhead becomes the bottleneck. You spend as much time managing context as solving the problem.&lt;/p&gt;




&lt;h2&gt;
  
  
  Inline Autocomplete: Fast but Narrow
&lt;/h2&gt;

&lt;p&gt;GitHub integrated an AI assistant called Copilot directly into the editor. Instead of the prompt-and-response model, it watched you type and suggested what should come next. It would inject greyed-out code into the editor, and by hitting Tab you could confirm the suggestion and have it inserted.&lt;/p&gt;

&lt;p&gt;In some respects this was almost magical, able to complete an entire function from nothing more than a descriptive function name. You could write the comments and have it complete the code. But it was a magic trick that soon wore thin. It couldn't see the whole codebase, and so had no understanding beyond the immediate file. You couldn't use it to complete features. It was advanced text completion.&lt;/p&gt;

&lt;p&gt;It was also very disruptive to maintaining focus, as it forced you to evaluate suggestions. Often the suggestions would look plausible but not actually do what you wanted. So you had to read the code anyway, which was in some respects worse than just writing it yourself. This was not the way.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Shift: From Chatbots to Agentic Tools
&lt;/h2&gt;

&lt;p&gt;Agentic coding tools such as Claude Code, Codex, and Cursor have fundamentally changed how LLMs are used in software development. They can read the codebase, examine files, follow dependencies, and plan changes across multiple files. They can write the code, run tests, observe failures, and adapt.&lt;/p&gt;

&lt;p&gt;OpenAI and Anthropic have taken different paths. OpenAI's Codex provides a kind of virtual environment where code is checked out and developed in a sandbox within OpenAI's infrastructure.&lt;/p&gt;

&lt;p&gt;Anthropic have taken a different approach: Claude Code has the ability to run command line tools and other plug-in skills directly on the user's machine. It can read or modify files in your project, or for that matter anywhere on your PC. It can execute builds and tests, and evaluate the results.&lt;/p&gt;

&lt;p&gt;The way you interact with Claude Code is similar to a chatbot in many respects, a straightforward conversation, but at the same time it is able to actually interact with your computer. Needless to say, any sane developer would find this a little scary. Luckily, Anthropic have added a permission system so that it will ask permission before executing commands. It is, however, possible to give broad permissions for a whole class of commands.&lt;/p&gt;
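&lt;p&gt;Those permissions can be scoped in a project-level settings file. A hedged sketch, based on Anthropic's documented &lt;code&gt;.claude/settings.json&lt;/code&gt; schema, with illustrative rule patterns:&lt;/p&gt;

```json
{
  "permissions": {
    "allow": ["Bash(npm run test:*)"],
    "deny": ["Read(.env)", "Bash(rm:*)"]
  }
}
```

&lt;p&gt;Allow rules pre-approve narrow classes of commands so you aren't prompted for them; deny rules block access outright, for example keeping secrets files out of the agent's reach.&lt;/p&gt;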

&lt;p&gt;This new approach means that you can now write a requirements document or a user story, and essentially give it to Claude Code to complete. It will examine the codebase, develop a plan for the required changes, then perform all the changes. It can even write your unit tests for you.&lt;/p&gt;

&lt;p&gt;The move from chatbot where the developer has 100% control over the interaction and the code to agentic systems such as Claude Code is both amazing and disquieting. Suddenly developers are not writing code so much as orchestrating development.&lt;/p&gt;

&lt;p&gt;Needless to say some developers are not keen to give machines this degree of autonomy. AI can hallucinate, and go off and make crazy changes based on a misunderstanding. There are real risks involved, so the degree of access and authority given to agentic systems needs to be carefully managed. &lt;/p&gt;




&lt;h2&gt;
  
  
  A Brave New World
&lt;/h2&gt;

&lt;p&gt;Being a good software developer has always meant adapting to new technologies. From learning BASIC to dBase to Delphi and Java, my own path has been one of continuous learning and adaptation.&lt;/p&gt;

&lt;p&gt;However, where once a software developer might need to know one or two technologies, perhaps a programming language and SQL, we now have multiple front-end JavaScript frameworks, CSS, multiple back-end languages, git, continuous build and deployment systems, and AWS, Google Cloud Platform, or Azure.&lt;/p&gt;

&lt;p&gt;Developers cannot be experts in all languages. Usually they are competent in a specific technology stack. One of the critical skills has been learning when to jump ship to new technologies, before the old gives way. But the speed of technological development has made that increasingly difficult.&lt;/p&gt;

&lt;p&gt;Agentic tools are changing the shape of this problem. An experienced developer with strong architectural thinking can now work effectively in unfamiliar stacks. The tool handles syntax and ecosystem details. The human evaluates whether the result is correct. This already happens in practice. &lt;/p&gt;

&lt;p&gt;Last year we had a case where we needed to migrate a REST API from Python and FastAPI to C# on Azure. With the help of agentic AI systems this was achieved, despite our having no prior C# experience. Obviously in such cases we needed ways of testing the code, and there was a clear existing target.&lt;/p&gt;

&lt;p&gt;In another instance we needed to modify an unfamiliar codebase to add new functionality. The system was non-trivial, and the feature deeply technical. It did in fact take three days of work to crack the initial solution, even with the help of AI. But make no mistake: without AI it would have been impossible to get results in the required time frame.&lt;/p&gt;

&lt;p&gt;Finally, over the last two weeks or so we developed our first mobile application. The concept was developed while on a walk, talking with ChatGPT. From that brainstorming we worked together on an initial set of requirements. From there we handed it off to Claude Code, where we have worked together to develop a React Native Android application. Every line of code was written by Claude.&lt;/p&gt;

&lt;p&gt;This is still a collaboration, in that Claude Code is directed by a programmer. We are still able to see and review code. Developers are still in the driver's seat, still making decisions about what is ready to commit.&lt;/p&gt;

&lt;p&gt;But there is a certain disquiet: human software developers are no longer needed to write the code. Developers are becoming something more like orchestrators, knowing how to direct the agents to get the job done.&lt;/p&gt;




&lt;h2&gt;
  
  
  Beyond Coding Tools: Autonomous Agents
&lt;/h2&gt;

&lt;p&gt;Agentic coding is only the beginning. The same pattern is spreading to broader digital environments.&lt;/p&gt;

&lt;p&gt;Tools such as OpenClaw connect email, calendars, messaging systems, files, and code execution through a single interface. One documented workflow schedules development tasks overnight. The agent runs them while the developer sleeps and produces a summary by morning.&lt;/p&gt;

&lt;p&gt;OpenClaw opened the floodgates, and thousands of people installed it, opening up their data and lives to nightmare levels of security risk. In at least one case OpenClaw deleted an executive's entire email inbox.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Capability&lt;/th&gt;
&lt;th&gt;Chat Assistants&lt;/th&gt;
&lt;th&gt;Inline Autocomplete&lt;/th&gt;
&lt;th&gt;Agentic Coding Tools&lt;/th&gt;
&lt;th&gt;Autonomous Agents&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Examples&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Claude, ChatGPT, Gemini&lt;/td&gt;
&lt;td&gt;GitHub Copilot, Tabnine, Codeium&lt;/td&gt;
&lt;td&gt;Claude Code, Codex, Cursor&lt;/td&gt;
&lt;td&gt;OpenClaw&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reads your codebase&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Partial (cursor window only)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Writes files&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No (suggests only)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Runs commands&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Runs tests&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Plans across multiple files&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Persistent project context&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Partial&lt;/td&gt;
&lt;td&gt;Yes (via config files)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Iterates on failures&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Operates autonomously&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Partially (with approval)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Integrates with external services&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Choice of underlying model&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Yes (Copilot)&lt;/td&gt;
&lt;td&gt;Yes (Cursor, Copilot)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Setup required&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Security risk surface&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Whether the benefits outweigh the risks is still an open question. The capability is real and the direction of travel is clear. The tooling is early and the implications of giving an agent that kind of reach are still being worked out in practice. It is worth watching.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where This Leaves You
&lt;/h2&gt;

&lt;p&gt;The tools have moved from responding to acting. That is the shift worth understanding.&lt;/p&gt;

&lt;p&gt;Chat assistants remain useful for explanation and isolated problems. Autocomplete accelerates familiar patterns but does not expand what you can accomplish. Agentic tools operate across more files and more complex systems. They also allow developers to work in unfamiliar territory.&lt;/p&gt;

&lt;p&gt;The human judgement in the loop still determines the quality of what comes out. The developer who can describe a problem clearly, evaluate a proposed plan critically, and recognise when the output is wrong will get dramatically better results than one who cannot. The tool amplifies what you bring to it.&lt;/p&gt;

&lt;p&gt;These new tools open the door to new risks, unlike any we have seen before, because agents given the ability to act may do so in unpredictable ways. We can't afford to ignore the risks, but neither can we ignore the rapidly advancing capabilities of these new agentic systems.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claudecode</category>
      <category>codex</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>When the Code Writes Itself</title>
      <dc:creator>Peter Harrison</dc:creator>
      <pubDate>Thu, 12 Mar 2026 19:45:13 +0000</pubDate>
      <link>https://forem.com/cheetah100/when-the-code-writes-itself-380b</link>
      <guid>https://forem.com/cheetah100/when-the-code-writes-itself-380b</guid>
      <description>&lt;p&gt;Software development is beginning to experience a quiet shift that many programmers can already feel in their daily work. A developer describes a task to an AI system, the tool reads the repository, modifies several files, runs tests, and returns with a proposed solution. The developer reviews the changes, adjusts the instructions, and runs another cycle. Somewhere in that process a realization appears: most of the code was not written by the person at the keyboard.&lt;/p&gt;

&lt;p&gt;This development does not resemble the autocomplete tools developers have used for years. The new generation of systems can reason across an entire project and perform part of the implementation process themselves. The human still guides the work, but the machine increasingly handles the mechanics of getting there. That shift is small when viewed one feature at a time, yet taken together it alters how software is produced.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Implementation Loop Is Moving Into Software
&lt;/h2&gt;

&lt;p&gt;For most of the history of programming, the entire development loop lived inside the developer’s head. A programmer interpreted the requirements, explored the codebase, wrote the changes, compiled the project, and repeated the cycle until the result worked. The tools surrounding that process helped with editing and debugging, but the thinking and iteration remained firmly human.&lt;/p&gt;

&lt;p&gt;Modern AI coding systems are beginning to absorb part of that loop. A developer can now describe an intended change while the system navigates the codebase, proposes modifications across several modules, runs tests, and adjusts its approach if something fails. Instead of writing every line directly, the human increasingly evaluates the machine’s attempts and steers the next iteration.&lt;/p&gt;

&lt;p&gt;The productivity gains are obvious, but the deeper change lies in where attention is spent. Mechanical exploration of the codebase, which once consumed hours of developer time, can now be delegated to software. The developer becomes the person who defines the problem and judges whether the result makes sense.&lt;/p&gt;

&lt;h2&gt;
  
  
  Developer Culture and the Status of Intelligence
&lt;/h2&gt;

&lt;p&gt;The cultural side of this shift is easy to underestimate unless you have spent time inside developer communities. Programming has long been a profession where status is strongly tied to perceived intelligence rather than organizational rank or salary. Within many technical circles, reputation is earned by demonstrating the ability to understand complex systems and produce elegant solutions that others struggle to grasp.&lt;/p&gt;

&lt;p&gt;That culture has shaped how developers think about themselves and their work. Writing sophisticated software has functioned not only as productive labour but also as a signal of intellectual competence among peers. The act of solving a difficult technical problem has historically carried social value inside the profession.&lt;/p&gt;

&lt;p&gt;AI coding systems complicate that dynamic. When a machine can generate large amounts of working code almost instantly, the visible act of writing code stops being a reliable signal of intellectual distinction. The technology does not remove the need for expertise, but it changes which abilities are visible and which ones matter most.&lt;/p&gt;

&lt;p&gt;It is therefore unsurprising that reactions among developers vary widely. Some adopt the tools enthusiastically and incorporate them into their workflow. Others highlight the risks, limitations, and security concerns associated with machine generated code. Still others quietly use the systems while publicly minimizing their importance. All of these responses make sense once the cultural dimension is recognized.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Boundary That Keeps Moving
&lt;/h2&gt;

&lt;p&gt;Much of the current debate focuses on what AI systems can or cannot do today. These discussions often assume that the current division of labour between human developers and machines will remain relatively stable. Recent experience suggests that such assumptions deserve caution.&lt;/p&gt;

&lt;p&gt;Only a few years ago it was widely believed that AI would assist with small fragments of code while humans handled real software development. That boundary moved as systems began producing entire functions, then navigating repositories, and now running iterative development loops. Each step has transferred another portion of the work from the human side to the machine side.&lt;/p&gt;

&lt;p&gt;At present developers still define the goals and evaluate the architectural consequences of a change. The machine performs a growing portion of the implementation, but the human decides what the system should become. Whether that balance will remain stable over the next decade is an open question.&lt;/p&gt;

&lt;p&gt;One consequence is already visible in the structure of development teams. When a developer can direct tools that perform much of the mechanical work, a small group can achieve results that previously required a much larger team. This does not eliminate the need for experienced engineers, but it changes how much leverage a single developer can have over a complex project.&lt;/p&gt;

&lt;p&gt;Software development therefore appears to be entering a period of transition rather than a stable endpoint. The profession is adjusting to tools that alter not only productivity but also the cultural signals that once defined expertise. The machines are beginning to write substantial portions of the code, and developers are beginning to guide increasingly capable systems.&lt;/p&gt;

&lt;p&gt;Where that process eventually leads remains uncertain. What is clear is that the line between human and machine responsibility in programming has already begun to move, and history suggests it may continue to do so.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This article was written with the aid of AI&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>jobs</category>
      <category>learning</category>
    </item>
    <item>
      <title>Deep Learning Without Backpropagation</title>
      <dc:creator>Peter Harrison</dc:creator>
      <pubDate>Sun, 08 Feb 2026 09:17:39 +0000</pubDate>
      <link>https://forem.com/cheetah100/deep-learning-without-backpropagation-21n6</link>
      <guid>https://forem.com/cheetah100/deep-learning-without-backpropagation-21n6</guid>
      <description>&lt;p&gt;Most modern neural networks learn using &lt;strong&gt;backpropagation&lt;/strong&gt;. It works well, but it has a strange property: learning depends on a global error signal flowing backward through the entire network. Every weight update depends on information that may be many layers away.&lt;/p&gt;

&lt;p&gt;Brains don’t work like that.&lt;/p&gt;

&lt;p&gt;Neurons only see &lt;strong&gt;local information&lt;/strong&gt;: what they receive through synapses and what they themselves fire. There is no global gradient descending through the cortex. Yet biological systems learn deep, layered representations of the world with remarkable efficiency.&lt;/p&gt;

&lt;p&gt;This raises a simple question:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can deep neural networks learn using only local learning rules?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Until recently, the answer appeared to be “not really.”&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem With Local Learning
&lt;/h2&gt;

&lt;p&gt;Local learning rules, often called &lt;em&gt;Hebbian learning&lt;/em&gt; (“neurons that fire together wire together”), have been known for decades. They work well for simple feature discovery, but they have historically struggled in deeper networks.&lt;/p&gt;

&lt;p&gt;When stacked into multiple layers, purely local learning tends to fail because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Higher layers receive no meaningful training signal
&lt;/li&gt;
&lt;li&gt;Layers drift or collapse into similar representations
&lt;/li&gt;
&lt;li&gt;Features fail to become more abstract with depth
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, without a global error signal, deep structure usually does not emerge.&lt;/p&gt;
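&lt;p&gt;The Hebbian rule referred to above can be sketched in a few lines. This is our own illustration rather than code from any particular implementation; the function name and learning rate are invented. The point is that each weight update uses only the activity of the two neurons it connects:&lt;/p&gt;

```python
# Minimal Hebbian update sketch: the change to each weight depends only
# on the pre- and post-synaptic activity of that connection (purely local).
def hebbian_update(weights, pre, post, lr=0.01):
    return [
        [w + lr * post[i] * pre[j] for j, w in enumerate(row)]
        for i, row in enumerate(weights)
    ]

w = [[0.0, 0.0], [0.0, 0.0]]
w = hebbian_update(w, pre=[1.0, 0.0], post=[0.0, 1.0])
# Only the weight joining the co-active pair (post neuron 1, pre neuron 0) grows.
```

&lt;p&gt;No term in the update refers to an error signal or to any other layer, which is exactly why stacking such layers is hard.&lt;/p&gt;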




&lt;h2&gt;
  
  
  The Key Insight: Structure Matters More Than the Learning Rule
&lt;/h2&gt;

&lt;p&gt;The breakthrough came from an unexpected place.&lt;/p&gt;

&lt;p&gt;Instead of changing the learning rule, the architecture was changed to match how biological vision works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Local receptive fields&lt;/strong&gt; (small patches instead of full image connections)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Competition between neurons&lt;/strong&gt; (winner-take-all)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adaptive plasticity&lt;/strong&gt; (each neuron self-regulates its sensitivity)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strictly local updates&lt;/strong&gt; (no backpropagation anywhere)
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This combination produced something surprising:&lt;/p&gt;

&lt;p&gt;The network began to &lt;strong&gt;self-organize meaningful feature hierarchies&lt;/strong&gt; using only local information.&lt;/p&gt;

&lt;p&gt;No gradients. No global error. No backprop.&lt;/p&gt;
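&lt;p&gt;The combination above can be sketched compactly. This is a toy illustration of our own, not the actual implementation; the function name, learning rate, and data are invented. It shows a winner-take-all update in which only the winning neuron changes, using nothing but the input patch and its own weights:&lt;/p&gt;

```python
# Toy winner-take-all local update: neurons compete on a small input patch,
# and only the winner moves its weights toward that patch.
def wta_update(weights, patch, lr=0.05):
    # Each neuron's activation is the dot product of its weights and the patch
    acts = [sum(w * x for w, x in zip(neuron, patch)) for neuron in weights]
    winner = max(range(len(acts)), key=acts.__getitem__)
    # Strictly local rule: only the winning neuron updates
    weights[winner] = [w + lr * (x - w) for w, x in zip(weights[winner], patch)]
    return winner

weights = [[0.1, 0.9], [0.9, 0.1]]
win = wta_update(weights, [1.0, 0.0])   # neuron 1 matches the patch best
```

&lt;p&gt;Over many patches, competition drives neurons to specialize on different recurring features, which is the self-organization described above.&lt;/p&gt;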




&lt;h2&gt;
  
  
  What the Network Learned
&lt;/h2&gt;

&lt;p&gt;The first layer learned simple local features such as edges, curves and strokes. This is exactly what you would expect from early visual cortex.&lt;/p&gt;

&lt;p&gt;But more importantly, when layers were stacked, &lt;strong&gt;higher layers learned compositions of lower features&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Edges → shapes
&lt;/li&gt;
&lt;li&gt;Shapes → digit structure
&lt;/li&gt;
&lt;li&gt;Structure → class separation
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The network trained &lt;em&gt;deeply&lt;/em&gt;, layer by layer, using only local learning.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Result
&lt;/h2&gt;

&lt;p&gt;On the MNIST handwritten digit dataset, this locally trained network reached:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;~97% accuracy using only local learning rules&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No backpropagation at any stage.&lt;/p&gt;

&lt;p&gt;Even more interesting:&lt;/p&gt;

&lt;p&gt;Most of the classification power came from the &lt;strong&gt;unsupervised feature layers&lt;/strong&gt;. A simple linear readout on top of those features performed almost as well as the fully trained system. This shows the network learned a representation where classes naturally separated without ever seeing labels during feature learning.&lt;/p&gt;
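&lt;p&gt;A "simple linear readout" can be as basic as a nearest-class-mean classifier over the learned features. The sketch below is our illustration with invented two-dimensional features, not the experiment's code:&lt;/p&gt;

```python
# Nearest-class-mean readout: assign a feature vector to the closest
# per-class mean in feature space. If unsupervised features already
# separate the classes, even this trivial readout performs well.
def nearest_mean_classify(means, feature):
    dists = {label: sum((m - f) ** 2 for m, f in zip(mean, feature))
             for label, mean in means.items()}
    return min(dists, key=dists.get)

means = {"0": [0.9, 0.1], "1": [0.1, 0.9]}   # invented per-class feature means
label = nearest_mean_classify(means, [0.8, 0.2])   # -> "0"
```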




&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;This result challenges a long-standing assumption:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;That deep learning &lt;em&gt;requires&lt;/em&gt; backpropagation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Instead, it suggests:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deep hierarchical learning &lt;strong&gt;can emerge from local rules&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The right &lt;strong&gt;architecture and constraints&lt;/strong&gt; may be more important than the learning algorithm&lt;/li&gt;
&lt;li&gt;Biological-style learning is not only plausible; it can be competitive&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It also opens the door to systems that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Learn continuously instead of in fixed training phases
&lt;/li&gt;
&lt;li&gt;Adapt locally without retraining the whole network
&lt;/li&gt;
&lt;li&gt;Are more biologically realistic
&lt;/li&gt;
&lt;li&gt;Potentially scale differently from gradient-based systems
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;Backpropagation is powerful, but it is not the only path to intelligence.&lt;/p&gt;

&lt;p&gt;This work shows that when local learning is combined with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spatial locality
&lt;/li&gt;
&lt;li&gt;Competition
&lt;/li&gt;
&lt;li&gt;Self-regulating neurons
&lt;/li&gt;
&lt;li&gt;Layered structure
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deep networks can organize themselves into meaningful representations without ever computing a global error gradient.&lt;/p&gt;

&lt;p&gt;The discovery is simple, but profound:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The bottleneck was never local learning. It was the structure we gave it.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>backprop</category>
    </item>
    <item>
      <title>SHARD: Deniable File Distribution Through XOR-Based Sharding</title>
      <dc:creator>Peter Harrison</dc:creator>
      <pubDate>Fri, 30 Jan 2026 23:00:33 +0000</pubDate>
      <link>https://forem.com/cheetah100/shard-deniable-file-distribution-through-xor-based-sharding-2blm</link>
      <guid>https://forem.com/cheetah100/shard-deniable-file-distribution-through-xor-based-sharding-2blm</guid>
      <description>&lt;h2&gt;
  
  
  The Problem: Protecting Information Sources
&lt;/h2&gt;

&lt;p&gt;In 2012, I developed SHARD to address a fundamental challenge in information security: how do you enable the distribution of sensitive information without being able to identify the source?&lt;/p&gt;

&lt;p&gt;Traditional encryption doesn't solve this problem. An encrypted file is still evidence of &lt;em&gt;something&lt;/em&gt;. If you're found in possession of &lt;code&gt;secret_document.gpg&lt;/code&gt;, you have a file that clearly contains information, even if investigators can't decrypt it. For whistleblowers, journalists, and activists operating under authoritarian regimes, mere possession of encrypted files can be incriminating.&lt;/p&gt;

&lt;p&gt;The requirement was different: create a system where:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Information can be distributed&lt;/strong&gt; through normal channels (FTP, HTTP, file sharing)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Individual components are meaningless&lt;/strong&gt; - they provide no evidence of what information they might contain&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source protection is cryptographic&lt;/strong&gt; - not just operational security, but mathematically provable deniability&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reconstruction is possible&lt;/strong&gt; for intended recipients with the proper instructions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;SHARD achieves this through an elegant application of XOR operations and a separation of concerns: bulk data (shards) travels through one channel, while reconstruction metadata (recipes) travels through another.&lt;/p&gt;

&lt;h2&gt;
  
  
  How SHARD Works: The Concept
&lt;/h2&gt;

&lt;p&gt;SHARD splits files into components called "shards" using XOR operations. The critical innovation is that shards are not simply encrypted fragments. Instead, each shard can be a required component of many different files - potentially hundreds. This makes it cryptographically impossible to associate any single shard with a particular source file.&lt;/p&gt;

&lt;p&gt;Here's the process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A pool of random "seed" shards is created - these are just random data&lt;/li&gt;
&lt;li&gt;When you shard a file, each 1MB section is XORed with 3 randomly selected existing shards&lt;/li&gt;
&lt;li&gt;The result is written as a new shard added to the pool&lt;/li&gt;
&lt;li&gt;A small "recipe" file records which shards to XOR together to reconstruct the original&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As multiple users shard their files using a shared pool, the deniability compounds. A shard in your possession could be part of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your own files&lt;/li&gt;
&lt;li&gt;Files sharded by other users&lt;/li&gt;
&lt;li&gt;Nothing at all (just a random seed shard)&lt;/li&gt;
&lt;li&gt;Multiple files simultaneously&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without the recipe, there's no way to determine what any shard contains.&lt;/p&gt;
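&lt;p&gt;The XOR mechanics can be demonstrated in a few lines. This is a toy round trip with a 16-byte "section" and three random shards, not the actual shard.py code; the helper name is ours:&lt;/p&gt;

```python
import secrets

def xor_bytes(a, b):
    # Byte-wise XOR of two equal-length byte strings
    return bytes(x ^ y for x, y in zip(a, b))

section = b"sensitive data.."                      # stand-in for a 1MB section
shards = [secrets.token_bytes(len(section)) for _ in range(3)]

# Sharding: XOR the section with three pool shards; publish the result
new_shard = section
for s in shards:
    new_shard = xor_bytes(new_shard, s)

# Reconstruction: the recipe names the same shards; XOR again to recover
recovered = new_shard
for s in shards:
    recovered = xor_bytes(recovered, s)

assert recovered == section   # bit-for-bit identical
```

&lt;p&gt;Because each shard is random, the published &lt;code&gt;new_shard&lt;/code&gt; is itself indistinguishable from random data to anyone without the recipe.&lt;/p&gt;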

&lt;h2&gt;
  
  
  Network Effects and Collaborative Use
&lt;/h2&gt;

&lt;p&gt;The system becomes more powerful when multiple users share a shard pool. If Alice, Bob, and Carol all use the same collection of shards:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Alice shards her sensitive document, creating new shards A1, A2, A3&lt;/li&gt;
&lt;li&gt;Bob shards his document, potentially using A1, A2 in his XOR operations, creating B1, B2&lt;/li&gt;
&lt;li&gt;Carol does the same, creating C1, C2, C3&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now shard A1 is a component of Alice's file AND Bob's file. There's no way to prove which file A1 "belongs to" - it's genuinely part of both. As the pool grows and more users participate, this ambiguity increases exponentially.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Usage
&lt;/h2&gt;

&lt;p&gt;SHARD consists of three Python scripts. Let's walk through using them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;First, create a directory for shards and generate an initial pool of random seed shards:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;shards
python random_shards.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates 10 random 1MB files in the &lt;code&gt;shards/&lt;/code&gt; directory with names like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;shard-a3f5e9c2b1d4f8e7c6b5a4f3e2d1c0b9
shard-7c8d2e1f4a5b6c9d0e3f1a2b8c4d5e6f
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These filenames are not random - they're the BLAKE2b hash (128-bit) of the shard contents. This makes the shard store &lt;strong&gt;content-addressable&lt;/strong&gt;: the filename is a direct cryptographic function of the data it contains. This becomes important for integrity verification during reconstruction.&lt;/p&gt;

&lt;p&gt;These seed shards provide the initial XOR key material for the system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sharding a File
&lt;/h3&gt;

&lt;p&gt;To shard a file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python shard.py secret_document.pdf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reads &lt;code&gt;secret_document.pdf&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Processes it in 1MB sections&lt;/li&gt;
&lt;li&gt;For each section:

&lt;ul&gt;
&lt;li&gt;Randomly selects 3 existing shards from the pool&lt;/li&gt;
&lt;li&gt;XORs the section with each of the 3 shards&lt;/li&gt;
&lt;li&gt;Writes the result as a new shard&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Creates &lt;code&gt;secret_document.pdf.recipe&lt;/code&gt; containing the reconstruction instructions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The recipe file is small - just a list of shard filenames. For a 10MB file, the recipe might be around 1-2KB.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reconstructing a File
&lt;/h3&gt;

&lt;p&gt;To reconstruct the original file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python unshard.py secret_document.pdf.recipe
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reads the recipe file&lt;/li&gt;
&lt;li&gt;For each shard referenced in the recipe:

&lt;ul&gt;
&lt;li&gt;Reads the shard file&lt;/li&gt;
&lt;li&gt;Computes its BLAKE2b hash&lt;/li&gt;
&lt;li&gt;Verifies the hash matches the filename&lt;/li&gt;
&lt;li&gt;Exits with an error if any shard fails verification&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;For each 1MB section, XORs the 4 verified shards together&lt;/li&gt;
&lt;li&gt;Writes the reconstructed data to the original filename&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The output file is identical to the input - bit-for-bit perfect reconstruction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrity verification is automatic.&lt;/strong&gt; If any shard has been corrupted, modified, or is simply the wrong file, its BLAKE2b hash won't match its filename and reconstruction will fail immediately. This prevents producing a corrupted output file from bad input shards.&lt;/p&gt;
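&lt;p&gt;The check described above is easy to sketch. This is our illustration of the idea rather than unshard.py itself:&lt;/p&gt;

```python
import hashlib

# A shard is valid only if its filename matches the BLAKE2b-128 hash of
# its contents, so corruption or substitution is caught before any XOR.
def verify_shard(name, data):
    digest = hashlib.blake2b(data, digest_size=16).hexdigest()
    return name == "shard-" + digest

data = b"\x00" * 32
name = "shard-" + hashlib.blake2b(data, digest_size=16).hexdigest()
assert verify_shard(name, data)
assert not verify_shard(name, data + b"\x01")   # any change breaks the match
```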

&lt;h3&gt;
  
  
  Distribution Strategy
&lt;/h3&gt;

&lt;p&gt;The power of SHARD comes from separating the distribution channels:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shards&lt;/strong&gt; (bulk data):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can be hosted publicly on any file server&lt;/li&gt;
&lt;li&gt;Uploaded to cloud storage&lt;/li&gt;
&lt;li&gt;Distributed via BitTorrent&lt;/li&gt;
&lt;li&gt;Shared on FTP sites&lt;/li&gt;
&lt;li&gt;No risk in possession - they're just random-looking data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Recipes&lt;/strong&gt; (reconstruction metadata):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Much smaller - can be transmitted via secure channels&lt;/li&gt;
&lt;li&gt;Can be printed (for small files)&lt;/li&gt;
&lt;li&gt;Transmitted via encrypted messaging&lt;/li&gt;
&lt;li&gt;Read over phone/radio for very small files&lt;/li&gt;
&lt;li&gt;Hand-delivered on USB sticks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A whistleblower could host shards on a public website under their own name, since the shards by themselves reveal nothing. The recipe travels separately through secure channels to intended recipients.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Details: The Cryptography
&lt;/h2&gt;

&lt;h3&gt;
  
  
  XOR Operations
&lt;/h3&gt;

&lt;p&gt;SHARD uses XOR (exclusive OR) as its core cryptographic primitive. XOR has a critical property: &lt;code&gt;A XOR B XOR B = A&lt;/code&gt;. This means XORing a value with the same key twice returns the original value.&lt;/p&gt;

&lt;p&gt;For each 1MB file section, the sharding process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;section&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;file_data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="n"&gt;MB&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# XOR with 3 randomly selected shards
&lt;/span&gt;&lt;span class="n"&gt;section&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;section&lt;/span&gt; &lt;span class="n"&gt;XOR&lt;/span&gt; &lt;span class="n"&gt;shard1&lt;/span&gt;
&lt;span class="n"&gt;section&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;section&lt;/span&gt; &lt;span class="n"&gt;XOR&lt;/span&gt; &lt;span class="n"&gt;shard2&lt;/span&gt;  
&lt;span class="n"&gt;section&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;section&lt;/span&gt; &lt;span class="n"&gt;XOR&lt;/span&gt; &lt;span class="n"&gt;shard3&lt;/span&gt;

&lt;span class="c1"&gt;# Write the result as new_shard
&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;new_shard&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;section&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To reconstruct:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Start with the output shard
&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;new_shard&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# XOR with the same 3 shards
&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="n"&gt;XOR&lt;/span&gt; &lt;span class="n"&gt;shard1&lt;/span&gt;
&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="n"&gt;XOR&lt;/span&gt; &lt;span class="n"&gt;shard2&lt;/span&gt;
&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="n"&gt;XOR&lt;/span&gt; &lt;span class="n"&gt;shard3&lt;/span&gt;

&lt;span class="c1"&gt;# Result is the original section
# Because: (section XOR shard1 XOR shard2 XOR shard3) XOR shard1 XOR shard2 XOR shard3 = section
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Information-Theoretic Deniability
&lt;/h3&gt;

&lt;p&gt;The security comes from the properties of XOR operations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;One-time pad property&lt;/strong&gt;: When you XOR data with truly random bytes, the output is indistinguishable from random data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No information leakage&lt;/strong&gt;: Without the recipe, there's no way to determine what shards contribute to what files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collision-free reconstruction&lt;/strong&gt;: Because we track exactly which shards were used, reconstruction is deterministic&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each shard is effectively random data. Even if an attacker has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All the shards&lt;/li&gt;
&lt;li&gt;Knowledge that certain shards exist&lt;/li&gt;
&lt;li&gt;Suspicions about what might be sharded&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without the recipe, they cannot:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Determine what any shard contains&lt;/li&gt;
&lt;li&gt;Prove any shard is part of a particular file&lt;/li&gt;
&lt;li&gt;Reconstruct any file&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Content-Addressable Storage and Collision Resistance
&lt;/h3&gt;

&lt;p&gt;Shards are named using the BLAKE2b hash of their contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;hashlib&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;create_shard_name&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;h&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;hashlib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;blake2b&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;digest_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# 16 bytes = 128 bits
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;shard-&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hexdigest&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  &lt;span class="c1"&gt;# Returns 32 hex characters
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates &lt;strong&gt;content-addressable storage&lt;/strong&gt; where the filename is a cryptographic function of the file's contents. A shard named &lt;code&gt;shard-a3f5e9c2b1d4f8e7c6b5a4f3e2d1c0b9&lt;/code&gt; will always contain exactly the data that produces that specific BLAKE2b hash.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of this approach:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Built-in integrity verification&lt;/strong&gt;: To verify a shard, simply hash its contents and check if the result matches its filename. No separate checksums needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic deduplication&lt;/strong&gt;: If two sharding operations produce identical data, they generate the same hash and thus the same filename. Only one copy is stored.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Collision resistance&lt;/strong&gt;: BLAKE2b with 128 bits provides 2^128 possible hash values. The probability of collision is negligible - you'd need to generate about 2^64 (18 quintillion) shards before having a 50% chance of a single collision.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance&lt;/strong&gt;: BLAKE2b is one of the fastest cryptographic hash functions available, typically achieving 1-3 GB/s on modern CPUs - much faster than SHA-256.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Recipe simplicity&lt;/strong&gt;: The recipe file just lists shard names. Those names are also the verification hashes. No additional metadata needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
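&lt;p&gt;The 50% figure in point 3 follows from the standard birthday-bound approximation (a back-of-envelope check, not part of the SHARD code itself):&lt;/p&gt;

```python
# Birthday-bound approximation: P(collision) is roughly n^2 / 2^(b+1)
b = 128       # BLAKE2b digest bits used in shard names
n = 2 ** 64   # number of shards generated
p = (n * n) / 2 ** (b + 1)
print(p)  # 0.5: a coin-flip chance of one collision after 2^64 shards
```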

&lt;h3&gt;
  
  
  Padding for Uniform Size
&lt;/h3&gt;

&lt;p&gt;All shards are exactly 1MB, regardless of the actual data they contain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;shards/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;newShard&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;wb&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;newShardFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;newShardFile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;section&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;bufSize&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1024&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;section&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;bufSize&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;newShardFile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;urandom&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bufSize&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This prevents information leakage through file sizes. The last section of a file is padded with random data to reach exactly 1MB, making all shards uniform and indistinguishable.&lt;/p&gt;
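&lt;p&gt;Because the recipe records the original file size, reconstruction can simply discard the padding. A sketch, with helper names of my own choosing:&lt;/p&gt;

```python
import os

SHARD_SIZE = 1024 * 1024  # every shard is exactly 1MB

def pad_section(section):
    # Pad the final section out to 1MB with random bytes
    return section + os.urandom(SHARD_SIZE - len(section))

def strip_padding(data, original_size):
    # The size stored in the recipe tells us where the real data ends
    return data[:original_size]

section = b"tail of the file"
padded = pad_section(section)
assert len(padded) == SHARD_SIZE
assert strip_padding(padded, len(section)) == section
```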

&lt;h3&gt;
  
  
  Integrity Verification Through Content-Addressable Storage
&lt;/h3&gt;

&lt;p&gt;SHARD uses content-addressable storage where each shard's filename is derived from its contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;hashlib&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;hash_shard&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filepath&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;h&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;hashlib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;blake2b&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;digest_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# 128-bit hash
&lt;/span&gt;    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filepath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;rb&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hexdigest&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;verify_shard&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filepath&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Extract hash from filename (remove 'shard-' prefix)
&lt;/span&gt;    &lt;span class="n"&gt;expected_hash&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;filepath&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;shard-&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;actual_hash&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;hash_shard&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filepath&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;expected_hash&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;actual_hash&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During reconstruction, &lt;code&gt;unshard.py&lt;/code&gt; automatically verifies every shard before using it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read the shard file from disk&lt;/li&gt;
&lt;li&gt;Compute its BLAKE2b hash&lt;/li&gt;
&lt;li&gt;Compare against the hash embedded in the filename&lt;/li&gt;
&lt;li&gt;If mismatch: exit with error message identifying the corrupted shard&lt;/li&gt;
&lt;li&gt;If match: proceed to use the shard in XOR operations&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This provides &lt;strong&gt;fail-fast integrity checking&lt;/strong&gt;. If you've downloaded shards from an untrusted source, or if transmission errors have corrupted files, you'll know immediately before attempting reconstruction. The system won't produce a corrupted output file - it either succeeds completely with verified shards or fails cleanly with an error message.&lt;/p&gt;
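&lt;p&gt;The five steps above amount to a short loop. A sketch of that loop (the throwaway shard at the end is created just for this example):&lt;/p&gt;

```python
import hashlib
import os

def hash_shard(path):
    # BLAKE2b with a 128-bit digest, matching the shard naming scheme
    h = hashlib.blake2b(digest_size=16)
    with open(path, 'rb') as f:
        h.update(f.read())
    return h.hexdigest()

def verify_all(shard_names, shard_dir):
    # Steps 1-5: read each shard, hash it, compare against the filename; fail fast
    for name in shard_names:
        expected = name.split('shard-')[1]
        actual = hash_shard(os.path.join(shard_dir, name))
        if actual != expected:
            raise SystemExit('Corrupted shard: ' + name)

# Demonstration with a throwaway shard
os.makedirs('shards', exist_ok=True)
data = os.urandom(1024)
name = 'shard-' + hashlib.blake2b(data, digest_size=16).hexdigest()
with open(os.path.join('shards', name), 'wb') as f:
    f.write(data)
verify_all([name], 'shards')  # silent when every shard is intact
```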

&lt;p&gt;&lt;strong&gt;Why BLAKE2b?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;BLAKE2b was chosen for several technical reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Speed&lt;/strong&gt;: 2-3x faster than SHA-256, crucial when verifying many large files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Provides cryptographic-strength collision resistance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standard library&lt;/strong&gt;: Available in Python's &lt;code&gt;hashlib&lt;/code&gt; since Python 3.6&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Appropriate size&lt;/strong&gt;: 128-bit output provides the right balance between collision resistance (2^64 shards before 50% collision probability) and compact filenames (32 hex characters)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The integrity verification does more than detect accidental corruption - it also prevents attacks where someone substitutes malicious shards. Without knowing content that produces a specific BLAKE2b hash, an attacker cannot create a substitute shard that passes verification.&lt;/p&gt;

&lt;h3&gt;
  
  
  Random Shard Selection
&lt;/h3&gt;

&lt;p&gt;When sharding a file, 3 shards are randomly selected from the pool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;getShardFiles&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;dirList&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listdir&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;shards&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;fileList&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fileList&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;consider&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;dirList&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_bytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;urandom&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;big&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dirList&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;consider&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;fileList&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;fileList&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;consider&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;fileList&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Different files use different shard combinations&lt;/li&gt;
&lt;li&gt;The mapping between shards and files is unpredictable&lt;/li&gt;
&lt;li&gt;The pool's ambiguity grows with each use&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Recipe Files: The Weak Point and the Strength
&lt;/h2&gt;

&lt;p&gt;The recipe file is both the vulnerability and the key to SHARD's security model.&lt;/p&gt;

&lt;p&gt;A recipe contains:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;original_filename.pdf
10485760
shard-f81d4fae7dec11d0a76500a0c91e6bf6
shard-a3d5e8c24b9111ec9f2400a0c91e6bf6
shard-b7c3f1d95a8211ec8e3500a0c91e6bf6
shard-c9e4a2b86c7311ec9d4600a0c91e6bf6
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a 100MB file, the recipe is roughly 16KB - small enough to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Print on a few pages&lt;/li&gt;
&lt;li&gt;Transmit via low-bandwidth channels&lt;/li&gt;
&lt;li&gt;Store on a USB stick hidden physically&lt;/li&gt;
&lt;li&gt;Encode in images or other steganographic techniques&lt;/li&gt;
&lt;li&gt;Memorize in chunks (for very small files)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The security trade-off:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shards alone&lt;/strong&gt;: Completely safe to possess, host, or distribute. Provide zero information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recipe alone&lt;/strong&gt;: Useless without access to the shard pool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shards + Recipe&lt;/strong&gt;: Full reconstruction capability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This separation enables a powerful distribution strategy: shards move through monitored, high-bandwidth channels where possession means nothing. Recipes move through secure, potentially lower-bandwidth channels where small size is an advantage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations and Considerations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Recipe Size for Large Files
&lt;/h3&gt;

&lt;p&gt;While recipes are small relative to file size (~0.016% overhead), they grow linearly with file size at 4 shard names per megabyte. A 1GB file needs a recipe listing ~4,000 shards (roughly 160KB with 32-character hash-based shard names).&lt;/p&gt;

&lt;p&gt;For very large files this crosses the threshold from "small enough for non-digital transmission" to "needs digital channels anyway". The system works best for documents, images, and small datasets rather than multi-gigabyte video files.&lt;/p&gt;
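&lt;p&gt;The overhead figures are easy to reproduce: four shard names per 1MB section, each name being &lt;code&gt;shard-&lt;/code&gt; plus 32 hex characters plus a newline:&lt;/p&gt;

```python
def recipe_size_bytes(file_mb):
    # 4 shard names per MB: 3 pool shards plus 1 output shard per section
    names = file_mb * 4
    return names * (len('shard-') + 32 + 1)  # 39 bytes per line

print(recipe_size_bytes(100) // 1024)   # 15 -- roughly 16KB for a 100MB file
print(recipe_size_bytes(1024) // 1024)  # 156 -- roughly 160KB for a 1GB file
```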

&lt;h3&gt;
  
  
  Trust and Distribution
&lt;/h3&gt;

&lt;p&gt;SHARD provides technical deniability, but practical deployment requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trusted channels for recipe distribution&lt;/li&gt;
&lt;li&gt;Confidence that shard pools haven't been compromised&lt;/li&gt;
&lt;li&gt;Understanding that recipe holders can reconstruct files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The recipe is the single point of failure. If intercepted, and the attacker has access to the shard pool, reconstruction is trivial.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shard Pool Management
&lt;/h3&gt;

&lt;p&gt;As the pool grows, managing thousands of shard files becomes a practical concern. The system has no built-in mechanisms for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shard garbage collection (removing unused shards)&lt;/li&gt;
&lt;li&gt;Versioning or tracking which shards are still needed&lt;/li&gt;
&lt;li&gt;Synchronizing shard pools across multiple users&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These would need to be handled at the operational level.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparison to Modern Approaches
&lt;/h2&gt;

&lt;p&gt;Since 2012, various systems have emerged with related goals:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IPFS and Distributed Hash Tables&lt;/strong&gt;: IPFS also uses content-addressed storage with cryptographic hashes as identifiers. However, IPFS content hashes uniquely identify files - there's no deniability. Each file has one hash. SHARD is different: each shard can be a component of multiple files, creating genuine ambiguity about what any shard contains.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blockchain-based storage&lt;/strong&gt;: Systems like Filecoin or Storj distribute encrypted fragments. But they require massive computational overhead, cryptocurrency mechanisms, and energy consumption far beyond SHARD's simple XOR operations. They're solutions in search of a problem, optimizing for decentralization rather than deniability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steganography&lt;/strong&gt;: Hiding data in innocent-looking files. SHARD is different - shards look like random data, not innocent files, and the deniability comes from mathematical ambiguity rather than hiding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secret Sharing (Shamir's)&lt;/strong&gt;: Splits secrets so N of M shares are needed for reconstruction. SHARD is different - it's about creating ambiguity about which shards belong to which files, not threshold reconstruction.&lt;/p&gt;

&lt;p&gt;SHARD remains unique in its specific approach: XOR-based sharding with collaborative pool sharing for cryptographic deniability, combined with content-addressable storage for automatic integrity verification.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;SHARD represents appropriate technology - using the simplest cryptographic primitives that solve the problem. XOR operations for deniability, BLAKE2b hashing for integrity verification. No complex protocols, no distributed consensus, no cryptocurrency, no massive energy consumption. Just elegant mathematics and file management.&lt;/p&gt;

&lt;p&gt;The content-addressable storage design means shards are self-verifying - the filename is the checksum. This eliminates an entire class of problems around shard corruption and verification without adding complexity to the recipe files.&lt;/p&gt;

&lt;p&gt;The separation of bulk data (shards) from reconstruction metadata (recipes) creates genuine plausible deniability. Individual shards provide no evidence of what they contain or contribute to. As shared pools grow through collaborative use, the ambiguity compounds mathematically.&lt;/p&gt;

&lt;p&gt;While SHARD was never adopted in practice, it demonstrates an elegant approach to a real problem: how do you enable information distribution while cryptographically protecting sources? The technical solution works. The deployment challenges - user experience, trust models, operational security - proved harder than the mathematics.&lt;/p&gt;

&lt;p&gt;The code is available under GPL v3 at: &lt;a href="https://bitbucket.org/cheetah100/shard/" rel="noopener noreferrer"&gt;https://bitbucket.org/cheetah100/shard/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For those interested in source protection, deniable storage, or just elegant applications of XOR cryptography and content-addressable storage, SHARD remains a useful proof of concept and learning tool.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Peter Harrison has been working in software development for over 30 years and founded the New Zealand Open Source Society in 2002. This article describes SHARD, developed in 2012 as a proof of concept for deniable file distribution.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>crypto</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>Difficult Choices: Angular, React or Vue?</title>
      <dc:creator>Peter Harrison</dc:creator>
      <pubDate>Fri, 19 Dec 2025 22:48:03 +0000</pubDate>
      <link>https://forem.com/cheetah100/difficult-choices-angular-react-or-vue-2443</link>
      <guid>https://forem.com/cheetah100/difficult-choices-angular-react-or-vue-2443</guid>
      <description>&lt;h2&gt;
  
  
  Staying Current Without Chasing Trends
&lt;/h2&gt;

&lt;p&gt;Technology choices are a constant in software development. The landscape shifts continuously. New frameworks emerge, established ones evolve, and yesterday's cutting-edge solution becomes tomorrow's legacy burden. The challenge isn't just picking the right tool, but knowing how to evaluate tools in the first place.&lt;/p&gt;

&lt;p&gt;I've made major technology shifts before. Moving from Delphi to Java years ago gave me access to broader platforms and let me write applications for 'big iron', not just PCs. That shift was driven by concrete needs: platform reach, community size, and productivity gains. It wasn't about chasing the new and shiny.&lt;/p&gt;

&lt;p&gt;But staying current requires discipline. Following every new framework is career self-sabotage. You waste time on technologies that may vanish and never develop deep expertise because you're constantly context-switching. The developers who mastered Spring or Rails in 2010 and stuck with them have built more valuable skills than those who jumped to every micro-framework that promised to revolutionize web development.&lt;/p&gt;

&lt;p&gt;By 2020, I needed to fill a gap in my skillset: modern web UI development. My focus had been middleware and backend systems, but now I had to build a SaaS application from scratch. The question wasn't "what's hot?" but "what will get me productive and help me ship?"&lt;/p&gt;

&lt;h2&gt;
  
  
  The Three Criteria for Technology Choice
&lt;/h2&gt;

&lt;p&gt;When evaluating new technologies, I consider four factors that often conflict with each other: the pull of the new and shiny, plus the three criteria below.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Marketable Skill
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What actually pays the bills.&lt;/strong&gt; This requires nuance beyond "what's popular on GitHub." Marketability is regional. What's hot in Silicon Valley may not match demand in your location. It's also temporal. COBOL was marketable for 40 years; some Node.js frameworks barely lasted 4.&lt;/p&gt;

&lt;p&gt;Job listings are a useful signal, but they lag behind actual industry use. Enterprises move slowly; by the time a technology dominates job boards, it may already be maturing toward eventual decline.&lt;/p&gt;

&lt;p&gt;For consultants and contractors, marketability matters more than for product developers. If you're building your own SaaS application, productivity matters more than resume keywords.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Utility
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Does it solve real problems you actually have?&lt;/strong&gt; A framework optimized for single-page applications is useless if you're building content sites. One focused on developer experience matters more when you're a team of one than when inheriting a codebase from 50 developers.&lt;/p&gt;

&lt;p&gt;Utility isn't just about features. It's about fitness for purpose. Does the framework's architecture align with your problem domain? Does it integrate well with your existing stack? Does it force architectural choices you disagree with?&lt;/p&gt;

&lt;p&gt;Beware framework creep: tools that started focused and useful often add features that create new problems while solving problems you don't have.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Ease of Adoption
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Your time has value.&lt;/strong&gt; If Framework A takes three months to productivity and Framework B takes three weeks, B needs to be dramatically worse on other dimensions to justify A.&lt;/p&gt;

&lt;p&gt;Ease of adoption isn't just syntax simplicity. It includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Conceptual clarity&lt;/strong&gt;: Does it match your mental models or force you to unlearn established patterns?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning resources&lt;/strong&gt;: Quality documentation, tutorials, community knowledge&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tooling maturity&lt;/strong&gt;: Build systems, debugging tools, error messages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Incremental path&lt;/strong&gt;: Can you start simple and add complexity gradually?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At different career stages, this criterion weighs differently. Junior developers often must grind through whatever the market demands. Experienced developers can be selective and optimize for productivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contenders
&lt;/h2&gt;

&lt;p&gt;In 2020, the frontend framework landscape had three clear leaders: Angular, React, and Vue. All were mature, production-tested, and had substantial communities. All followed a client-side architecture with REST API backends. The server/client boundary was clear and clean.&lt;/p&gt;

&lt;h3&gt;
  
  
  Angular
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Enterprise Standard&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Angular came from Google, had comprehensive documentation, and dominated enterprise development. It offered a complete, opinionated framework: dependency injection, routing, forms, HTTP client, testing tools. Everything included.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complete solution, batteries included&lt;/li&gt;
&lt;li&gt;Strong typing with TypeScript&lt;/li&gt;
&lt;li&gt;Dependency injection familiar to Java/C# developers&lt;/li&gt;
&lt;li&gt;Enterprise backing and long-term support&lt;/li&gt;
&lt;li&gt;Comprehensive official documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Steep learning curve with significant conceptual overhead&lt;/li&gt;
&lt;li&gt;Heavy tooling requirements before writing your first component&lt;/li&gt;
&lt;li&gt;Complex build pipeline&lt;/li&gt;
&lt;li&gt;RxJS everywhere, whether you want reactive programming or not&lt;/li&gt;
&lt;li&gt;"Hostile to learning quickly" because there's lots to master before becoming productive&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For developers coming from enterprise Java or C# backgrounds, Angular's patterns are familiar. For someone who needed to ship a SaaS application quickly, the time to productivity was prohibitive.&lt;/p&gt;

&lt;h3&gt;
  
  
  React
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Market Leader&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;React from Facebook (now Meta) dominated mindshare by 2020. Job listings favored React developers, the community was massive, and the ecosystem was rich with libraries and tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Huge community and ecosystem&lt;/li&gt;
&lt;li&gt;Maximum job market demand&lt;/li&gt;
&lt;li&gt;"Just JavaScript" means learning React improves your JavaScript skills generally&lt;/li&gt;
&lt;li&gt;Flexible, unopinionated (use what you want for routing, state, etc.)&lt;/li&gt;
&lt;li&gt;Strong corporate backing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;JSX requires mental adjustment (JavaScript inside HTML-like syntax)&lt;/li&gt;
&lt;li&gt;Ecosystem fragmentation: which router? which state management? which form library?&lt;/li&gt;
&lt;li&gt;Learning resources scattered and variable quality&lt;/li&gt;
&lt;li&gt;Rapid evolution meant tutorials went stale quickly&lt;/li&gt;
&lt;li&gt;"Thinking in React" requires functional programming mindset shift&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;React's marketability was compelling for career optionality. But for shipping a product, the fragmented ecosystem meant constant decisions about which supporting libraries to adopt, and the learning resources were inconsistent.&lt;/p&gt;

&lt;h3&gt;
  
  
  Vue
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Progressive Framework&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Vue positioned itself as the "progressive framework". Use as much or as little as you need. It borrowed good ideas from both Angular (templates, directives) and React (component-based, reactive) while staying simpler than either.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gentle learning curve with fast time to productivity&lt;/li&gt;
&lt;li&gt;Template syntax that looks like HTML (familiar and readable)&lt;/li&gt;
&lt;li&gt;Clear, well-written official documentation&lt;/li&gt;
&lt;li&gt;Can start with a simple script tag, add build tools later&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vue Mastery&lt;/strong&gt;: Professional, high-quality tutorial platform&lt;/li&gt;
&lt;li&gt;Single-file components are intuitive&lt;/li&gt;
&lt;li&gt;Reactive data binding "just works"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Smaller community than React (though still substantial)&lt;/li&gt;
&lt;li&gt;Less job market demand than React or Angular&lt;/li&gt;
&lt;li&gt;Fewer third-party libraries and tools&lt;/li&gt;
&lt;li&gt;Less corporate backing (individual creator, though now has team/foundation)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Vue's ease of adoption was its killer advantage. Within days, I could build working components. Within weeks, I understood the framework deeply enough to make architectural decisions confidently.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Decision: Learning Infrastructure Matters
&lt;/h2&gt;

&lt;p&gt;I chose Vue, and the deciding factor was &lt;strong&gt;quality of learning resources&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Vue Mastery provided professional, pedagogically sound tutorials that treated teaching as a craft. The courses built skills incrementally, anticipated common misconceptions, and used realistic examples. This wasn't someone's webcam rambling through code. It was professionally produced education.&lt;/p&gt;

&lt;p&gt;Compare this to React's learning landscape in 2020: the official docs were improving but patchy, Medium was full of outdated or wrong articles, paid courses varied wildly in quality, and the rapid evolution of the framework meant yesterday's tutorial taught obsolete patterns.&lt;/p&gt;

&lt;p&gt;Angular had comprehensive documentation, but it was dense reference material assuming you already understood their architectural concepts. It was written for people who already knew Angular, not for learners.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The business case was clear:&lt;/strong&gt; Vue Mastery cost a few hundred dollars annually, but got me productive weeks faster than piecing together free React tutorials. When you're building a business, your time has value. Investing in quality education has obvious ROI.&lt;/p&gt;

&lt;p&gt;This revealed a fifth criterion I hadn't initially considered: &lt;strong&gt;Learning Infrastructure Quality&lt;/strong&gt;. Framework features and syntax matter, but the ability to actually acquire skills efficiently matters more. A simpler framework with poor teaching materials can be harder to learn than a complex framework with excellent tutorials.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Conflict of Criteria
&lt;/h2&gt;

&lt;p&gt;The difficulty of choice comes from conflicting priorities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;New hotness&lt;/strong&gt; said: "Try Svelte! Try Solid! Try whatever just launched!"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Marketable skill&lt;/strong&gt; said: "Learn React, it dominates job listings"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Utility&lt;/strong&gt; said: "They all solve your problem, pick one and ship"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ease of adoption&lt;/strong&gt; said: "Vue gets you productive fastest"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For my situation (building a SaaS product, filling a skillset gap, needing to ship) ease of adoption dominated. I could always learn React later for consulting work if needed (marketability). I deliberately ignored new hotness. And utility was roughly equal across all three.&lt;/p&gt;

&lt;p&gt;Your priorities may differ. If you're job-hunting, marketability might dominate. If you're joining a team, you don't get to choose. You learn what they use. If you're building throwaway prototypes, ease of adoption matters most. If you're building a 10-year platform, architectural soundness and longevity matter more than any other factor.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Beyond Frameworks
&lt;/h2&gt;

&lt;p&gt;The framework choice taught me about technology evaluation generally:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't chase hype.&lt;/strong&gt; New technologies need to prove themselves over years, not months. Let others debug the bleeding edge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning infrastructure is a feature.&lt;/strong&gt; A technology with worse technical characteristics but better teaching materials may be the better choice for actual skill acquisition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Optimize for shipping, not learning.&lt;/strong&gt; If your goal is to build something, pick the tool that gets you there fastest. You can always learn other tools later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context matters more than features.&lt;/strong&gt; There is no "best framework" in the abstract. There's only "best for your situation, constraints, and goals."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time has value.&lt;/strong&gt; Spending weeks learning for free instead of days learning with paid quality education is false economy.&lt;/p&gt;

&lt;p&gt;The frontend landscape has evolved since 2020. React now has Server Components. Next.js and SvelteKit blur the server/client boundary. New frameworks emerge constantly. But the evaluation framework remains sound: ignore hype, understand your actual needs, consider the full learning ecosystem, and optimize for productivity over resume keywords.&lt;/p&gt;

&lt;p&gt;Choose based on where you are and what you're building. Not based on what's trending on Twitter.&lt;/p&gt;

</description>
      <category>vue</category>
      <category>angular</category>
      <category>react</category>
    </item>
    <item>
      <title>Lessons from React2Shell</title>
      <dc:creator>Peter Harrison</dc:creator>
      <pubDate>Tue, 09 Dec 2025 22:34:25 +0000</pubDate>
      <link>https://forem.com/cheetah100/lessons-from-react2shell-1m8b</link>
      <guid>https://forem.com/cheetah100/lessons-from-react2shell-1m8b</guid>
<description>&lt;p&gt;On December 3rd, 2025, React disclosed CVE-2025-55182, a critical remote code execution vulnerability with a CVSS score of 10.0, the maximum possible severity. Within hours, attackers were exploiting it in the wild. Nearly a million servers running React 19 and Next.js were vulnerable to unauthenticated remote code execution. For a framework that had maintained a remarkably clean security record over 13 years (just one minor XSS vulnerability, CVSS 6.1, in 2018), this represented a catastrophic failure.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Vulnerability
&lt;/h2&gt;

&lt;p&gt;The exploit exists in React's "Flight" protocol, a custom serialization format introduced with React Server Components. Flight handles the transfer of data and execution context between client and server. The vulnerability allowed attackers to craft malicious payloads that, when deserialized by the server, could execute arbitrary code. The attack required no authentication, just network access to send a crafted HTTP request to any Server Components endpoint.&lt;/p&gt;

&lt;p&gt;The technical root cause was unsafe deserialization of untrusted client data. The server accepted serialized objects from clients, deserialized and executed code based on their contents, including accessing object properties like &lt;code&gt;.then&lt;/code&gt; and &lt;code&gt;.constructor&lt;/code&gt; that allowed attackers to reach JavaScript's code execution primitives. React's defenses relied on the assumption that the serialization format itself would prevent malicious inputs, rather than treating all client data as untrusted by default.&lt;/p&gt;
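&lt;p&gt;A minimal Python sketch of this class of bug (toy names throughout; this is not React's actual Flight code): a deserializer that lets the client's payload decide which attribute of a live server-side object to access, handing attribute lookup, and eventually control flow, to the attacker.&lt;/p&gt;

```python
import json

class SafeThing:
    def greet(self):
        return "hello"

def naive_deserialize(payload):
    # Toy pattern: the payload names which attribute to pull off a live
    # object. Nothing stops dunder names like "__class__" that walk
    # toward code-execution primitives.
    obj = SafeThing()
    data = json.loads(payload)
    member = getattr(obj, data["method"])
    return member() if callable(member) else member

print(naive_deserialize('{"method": "greet"}'))          # intended use
print(naive_deserialize('{"method": "__class__"}'))      # attacker probing internals
```

The second call already reaches the class object itself; from there, real-world gadget chains climb to functions that execute arbitrary code.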

&lt;h2&gt;
  
  
  What Are React Server Components?
&lt;/h2&gt;

&lt;p&gt;React Server Components (RSC) represent a fundamental shift in React's architecture. Traditionally, React was a client-side library that ran in the browser, rendering user interfaces and talking to backend APIs via standard REST or GraphQL endpoints. Your backend could be written in any language: Python, Go, Ruby, Java, whatever made sense for your use case.&lt;/p&gt;

&lt;p&gt;Server Components change this model. They allow React components to execute on the server, access databases directly, and serialize their results including promises and complex state to the client using the Flight protocol. Functions marked with &lt;code&gt;'use server'&lt;/code&gt; become server-side endpoints automatically. No explicit API routes required. The framework handles routing these "Server Actions" and serializing the data flow between client and server.&lt;/p&gt;

&lt;p&gt;The pitch is seductive: write your frontend and backend in the same files, using the same language, with "seamless" data flow between them. No API boilerplate, no context switching, just components that "magically" know whether to run on client or server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Violation of Security Principles
&lt;/h2&gt;

&lt;p&gt;This is where React abandoned decades of hard-won security wisdom. The fundamental principle of secure systems is simple: &lt;strong&gt;never trust client input&lt;/strong&gt;. Every mature framework and language ecosystem has learned this lesson through painful experience:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Java serialization vulnerabilities&lt;/strong&gt; plagued the ecosystem for years, leading to remote code execution in countless applications. The Java security team eventually concluded that deserializing untrusted data was simply too dangerous, leading to deprecation warnings and architectural guidance to avoid it entirely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PHP's &lt;code&gt;unserialize()&lt;/code&gt; function&lt;/strong&gt; became the attack vector for thousands of WordPress compromises. The PHP community learned to treat deserialization of user input as an anti-pattern to be avoided.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Python's pickle module&lt;/strong&gt; documentation explicitly warns: "The pickle module is not secure. Only unpickle data you trust." It's considered unsafe for any data that might come from untrusted sources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ruby's Marshal&lt;/strong&gt; has the same warnings and the same history of vulnerabilities.&lt;/p&gt;
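&lt;p&gt;The pickle warning is easy to demonstrate. An object can define &lt;code&gt;__reduce__&lt;/code&gt; to tell the unpickler to call an arbitrary function on deserialization; the callable below is a harmless &lt;code&gt;str.upper&lt;/code&gt;, but &lt;code&gt;os.system&lt;/code&gt; would be accepted just as readily:&lt;/p&gt;

```python
import pickle

class Malicious:
    # pickle rebuilds objects by calling whatever __reduce__ returns:
    # a (callable, args) pair. The unpickler invokes it blindly.
    def __reduce__(self):
        return (str.upper, ("this ran during deserialization",))

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # the callable executes inside loads()
print(result)
```

This is exactly the shape of vulnerability that a "smart" serialization format invites: the format itself carries instructions about what code to run.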

&lt;p&gt;React looked at this 50-year history and decided to build a custom serialization protocol that deserializes client data into server execution contexts. The Flight protocol needed to be "smarter" than JSON, capable of serializing promises, closures, and complex object graphs. This meant it needed to be more complex, more powerful, and inevitably, more dangerous.&lt;/p&gt;

&lt;p&gt;The vulnerability wasn't an implementation bug that slipped through code review. It was the predictable consequence of violating a fundamental security principle: &lt;strong&gt;complex deserialization of untrusted data leads to remote code execution&lt;/strong&gt;. If you can't do it perfectly, don't do it at all.&lt;/p&gt;

&lt;p&gt;Traditional REST APIs avoid this entire class of vulnerabilities by using JSON, a deliberately limited data format that carries no execution context, no code, no object methods. JSON is "dumb" in exactly the right way: it's just data structures. The server receives JSON, validates it against expected schemas, and explicitly routes it to the appropriate handler. There's no deserialization of execution contexts, no automatic invocation of client-specified code paths, no blurred boundaries between data and code.&lt;/p&gt;
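&lt;p&gt;A sketch of that defensive pattern in Python (the schema and function names are illustrative): parse to pure data, validate against an explicit whitelist of fields, and drop everything else.&lt;/p&gt;

```python
import json

# Hypothetical schema for a blog-post endpoint: field name to expected type.
EXPECTED_FIELDS = {"title": str, "body": str}

def parse_post(raw):
    # json.loads can only produce dicts, lists, strings, numbers, booleans
    # and None: pure data, with no methods and no execution context to hijack.
    data = json.loads(raw)
    for field, ftype in EXPECTED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError("missing or invalid field: " + field)
    # Whitelist: anything the client sent beyond the schema is dropped.
    return {field: data[field] for field in EXPECTED_FIELDS}

print(parse_post('{"title": "Hi", "body": "Hello world", "extra": 123}'))
```

Nothing in the payload can ever name a function, a class, or a code path; the server alone decides what happens to the data.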

&lt;h2&gt;
  
  
  Tight Coupling: The API That Isn't
&lt;/h2&gt;

&lt;p&gt;React Server Components don't just introduce security risks; they eliminate architectural flexibility. When you mark a function with &lt;code&gt;'use server'&lt;/code&gt;, you haven't created an API. You've created a React-specific endpoint that can only be called by React clients using the Flight protocol.&lt;/p&gt;

&lt;p&gt;Consider a traditional REST API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@app.post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/api/posts&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;create_post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This endpoint can be called by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your React frontend&lt;/li&gt;
&lt;li&gt;Your mobile app (iOS/Android)&lt;/li&gt;
&lt;li&gt;Your CLI tool&lt;/li&gt;
&lt;li&gt;Partner integrations&lt;/li&gt;
&lt;li&gt;Third-party developers&lt;/li&gt;
&lt;li&gt;Any HTTP client in any language&lt;/li&gt;
&lt;li&gt;Testing tools like curl or Postman&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It can be documented with OpenAPI/Swagger. It can be monitored with standard HTTP tooling. It can be secured with standard WAF rules. It works with every language's HTTP library.&lt;/p&gt;

&lt;p&gt;Now consider a Server Action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;createPost&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This can be called by... your React frontend. That's it. It uses a proprietary protocol (Flight) that only React understands. It can't be documented in a language-agnostic way. Standard HTTP monitoring tools can't parse the payloads. Security tools can't inspect the traffic. If you want to build a mobile app, you'll need to create a separate REST API anyway.&lt;/p&gt;

&lt;p&gt;You haven't eliminated API boilerplate, you've just hidden it behind framework magic while simultaneously limiting who can use it. When your application inevitably needs to support multiple client types such as web, mobile, and CLI, you'll end up maintaining two parallel systems: Server Actions for your React web app, and a proper REST API for everything else. The "convenience" of Server Components becomes technical debt the moment you need to integrate with anything outside the React ecosystem.&lt;/p&gt;

&lt;p&gt;The reusability problem extends beyond just multiple clients. Modern applications often need to expose webhooks for third-party services, integrate with partner APIs, or provide data to analytics platforms. None of these can consume React Server Actions. You're forced back to building traditional API endpoints, making the Server Actions redundant: a solution in search of a problem that just created more problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  JavaScript Lock-In: Losing the Right Tool for the Job
&lt;/h2&gt;

&lt;p&gt;Perhaps the most insidious aspect of React Server Components is the way they eliminate architectural choice. For 13 years, React worked with any backend. Your API server could be written in Python for data science and machine learning, Go for high-performance services, Rust for systems programming, Java for enterprise integration, or Ruby for rapid development. The choice was yours, based on your team's expertise and your application's requirements.&lt;/p&gt;

&lt;p&gt;Server Components change this equation fundamentally. To use them, your server must be JavaScript, specifically Node.js or a compatible runtime. The Flight protocol, the Server Actions routing, the serialization and deserialization: all of it requires a JavaScript runtime on the server.&lt;/p&gt;

&lt;p&gt;This matters more than React advocates want to admit. JavaScript is a fine language, but it's not the right tool for every job:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Machine Learning and AI:&lt;/strong&gt; Python dominates this space with mature ecosystems (PyTorch, TensorFlow, scikit-learn) and tools that don't have JavaScript equivalents. If your application needs to serve ML models, you'll need Python services anyway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High-Performance Computing:&lt;/strong&gt; For CPU-intensive work, systems programming, or services requiring fine-grained control over memory and concurrency, languages like Rust, Go, or C++ are simply better suited. JavaScript's single-threaded nature and garbage collection can be limiting factors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enterprise Integration:&lt;/strong&gt; Many organizations have existing investments in Java or .NET ecosystems, with established patterns, libraries, and expertise. Forcing a JavaScript backend means either maintaining parallel systems or abandoning these investments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Processing:&lt;/strong&gt; For heavy data processing, languages like Python (with NumPy/Pandas), R, or even Julia provide better ergonomics and performance than JavaScript.&lt;/p&gt;

&lt;p&gt;Traditional REST APIs let you choose the right tool for each job. Your frontend can be React (or Vue, or Svelte) while your backend leverages Python's data science libraries, Go's performance, or Java's enterprise ecosystem. Each layer uses the language that makes the most sense for its requirements.&lt;/p&gt;

&lt;p&gt;Server Components eliminate this flexibility. Your entire stack must be JavaScript, regardless of whether it's the best choice for your backend requirements. This isn't just a technical limitation. It's an architectural straitjacket that forces technical decisions based on framework constraints rather than application needs.&lt;/p&gt;

&lt;p&gt;The irony is that React's original success came partly from its flexibility. It was just a view library that worked with any backend. Server Components abandoned this principle in pursuit of "full-stack" integration. It traded away the architectural freedom that made React attractive in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Broader Pattern
&lt;/h2&gt;

&lt;p&gt;CVE-2025-55182 isn't an isolated incident. It's a symptom of a broader problem in the JavaScript ecosystem. There's a pattern of frameworks prioritizing developer convenience over architectural soundness, of "innovation" that ignores lessons learned in other ecosystems, of complexity marketed as simplicity.&lt;/p&gt;

&lt;p&gt;React had a good thing. It was a solid client-side library with a clean security record. Then it tried to own the full stack, invented custom protocols to blur client-server boundaries, and ended up with CVSS 10.0 remote code execution vulnerabilities affecting nearly a million servers.&lt;/p&gt;

&lt;p&gt;The traditional approach (clear separation between frontend and backend, standard protocols like HTTP and JSON, explicit API boundaries) might seem old-fashioned, but it works. It's secure. It's flexible. It doesn't lock you into a single language or ecosystem. And it doesn't require inventing custom serialization protocols that recreate vulnerabilities we learned to avoid decades ago.&lt;/p&gt;

&lt;p&gt;Sometimes the boring solution is the right solution. Sometimes the old way was better. And sometimes "seamless developer experience" is just another way of saying "we hid the complexity until it exploded."&lt;/p&gt;

&lt;p&gt;React Server Components represent a fundamental architectural mistake. Patching one exploit doesn't fix the underlying problem: you're still deserializing untrusted client data into server execution contexts. The next vulnerability is already there, waiting to be discovered. Because when you violate basic security principles in pursuit of convenience, vulnerabilities aren't bugs, they're features.&lt;/p&gt;

</description>
      <category>react</category>
      <category>security</category>
    </item>
    <item>
      <title>The Hidden Divide in Developer Culture</title>
      <dc:creator>Peter Harrison</dc:creator>
      <pubDate>Wed, 19 Nov 2025 21:10:36 +0000</pubDate>
      <link>https://forem.com/cheetah100/the-hidden-divide-in-developer-culture-j6l</link>
      <guid>https://forem.com/cheetah100/the-hidden-divide-in-developer-culture-j6l</guid>
<description>&lt;p&gt;Sooner or later you inherit a codebase that makes you wonder if the previous developer lost a bet. I gave candidates exactly that kind of application (circular dependencies, no separation of concerns, a structural mess) and asked them to extend it. Their reactions exposed a deeper cultural divide in how developers think about their work.&lt;/p&gt;

&lt;p&gt;The initial app was a deliberate train wreck: violations of separation of concerns, circular dependencies, and no real interfaces. It exposed multiple endpoints such as HTTP and SMTP. The task was to add a new JMS endpoint.&lt;/p&gt;

&lt;p&gt;The idea was simple. Applicants received instructions written in the voice of a business owner. It wasn’t a trick question or a strict feature-delivery test. Yes, they had to add the endpoint, but the real question was whether they’d confront the underlying mess. The framing made it clear they had full authority; the previous developer had vanished with his girlfriend, and nobody else was touching this code. They were inheriting the whole thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Feedback
&lt;/h2&gt;

&lt;p&gt;I ran this exercise inside a Fortune 500 team. One pattern I saw immediately was defensiveness. Some felt judged. That was never the point. The goal was to understand how each developer thinks when they inherit a broken system.&lt;/p&gt;

&lt;p&gt;The real question is simple:&lt;br&gt;
Do you recognize the problems? And do you feel empowered to fix them?&lt;/p&gt;

&lt;p&gt;Do you feel ownership of the whole application, or do you only feel responsible for the wording of the ticket?&lt;/p&gt;

&lt;p&gt;The sharpest feedback came from capable developers who had spent years in cultures where they were actively discouraged from taking broader responsibility. They were rewarded for clearing tickets, not for improving architecture. They weren’t wrong. Many companies punish initiative unless management explicitly orders it, and as you know, management never does.&lt;/p&gt;

&lt;p&gt;The lesson for me was that the reluctance wasn’t a lack of skill or values. It was learned behaviour. People internalize the norms of their past workplaces.&lt;/p&gt;

&lt;p&gt;So the test works, just not in the way some expected. It reveals how a developer believes software development should operate. Do they wait for perfect requirements? Or do they roll up their sleeves and improve the area they’re working in?&lt;/p&gt;

&lt;h2&gt;
  
  
  Refactoring Nightmares
&lt;/h2&gt;

&lt;p&gt;Since I’ve made such a strong case for taking ownership, it’s worth admitting the danger. Sometimes you spot a pattern in one class and realize the entire codebase is built on the same mistake. Fixing it “properly” means touching half the system.&lt;/p&gt;

&lt;p&gt;This is where discipline matters. Pride in craftsmanship is good. Accidentally triggering a month-long refactor because you changed one function is not.&lt;/p&gt;

&lt;p&gt;Developers can fall into this trap easily. I’ve seen people disappear for weeks, only to emerge with one enormous commit and a merge conflict from hell. Feature branches have the same risk; leave them running too long, and you’re basically refactoring in the dark. Merging daily avoids this, but I digress.&lt;/p&gt;

&lt;p&gt;The point is that architectural improvements must be balanced with delivering value. Nobody is suggesting you go rogue and rebuild the world. But when you’re working in a part of the codebase, keep it clean. Use simple patterns. Avoid piling on more chaos. That’s the level of stewardship that matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Be a Senior
&lt;/h2&gt;

&lt;p&gt;For me, seniority isn’t about wizard-level coding skills. It’s about responsibility.&lt;/p&gt;

&lt;p&gt;A junior has little responsibility. They get clear tickets and follow the patterns they see. Intermediates work independently and start taking responsibility for local quality. Seniors go wider. They feel accountable for the entire system and the people working on it. They mentor. They think about longevity. They make the call when something needs refactoring and when it doesn’t.&lt;/p&gt;

&lt;p&gt;This picture matters because the best indicator of senior-level thinking is the willingness to take broad responsibility. That’s what I’m actually testing for. Technical skill matters, but mindset matters more.&lt;/p&gt;

&lt;p&gt;Seek broad experience. It shapes not only what you know, but how you carry the responsibility that comes with being a senior.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>career</category>
      <category>refactoring</category>
    </item>
    <item>
      <title>Development Musical Chairs</title>
      <dc:creator>Peter Harrison</dc:creator>
      <pubDate>Thu, 06 Nov 2025 21:24:11 +0000</pubDate>
      <link>https://forem.com/cheetah100/development-musical-chairs-35hc</link>
      <guid>https://forem.com/cheetah100/development-musical-chairs-35hc</guid>
<description>&lt;p&gt;I've been through a dry spell or two. The early '90s were a pretty tough time for someone new to the industry. The dot-com bust also led to belt tightening. There has always been an ebb and flow in demand, but those who eat and breathe software sailed through those seas.&lt;/p&gt;

&lt;p&gt;This time it's fundamentally different. Life has been pretty good for software developers over the last 40 years, but life does not owe software developers anything, and perhaps we are nearing the end of the gravy train.&lt;/p&gt;

&lt;p&gt;Up until 1900 the horse was the primary mode of transportation. In the space of about ten years the car replaced horses in volume. Horses were no longer required. Sure, you can still find them today, in horse racing or on tourist ventures perhaps. But there is no law of nature which says "there will always be new jobs for horses".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9s8tnqszgp30hr7qll0m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9s8tnqszgp30hr7qll0m.jpg" alt="Cars replacing horses" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And so it is I fear with programmers. Artificial Intelligence has arrived, runs at very low cost compared to humans, and can already perform much of the intellectual work of software developers. This is not to say it can replace developers, at least not yet, but the trend is unmistakable.&lt;/p&gt;

&lt;p&gt;The answer, we are told, is to embrace AI, to become Data Engineers and Data Scientists, and to adopt all the shiny new AI technologies. In response there have been broadly two camps in software development. The first camp prides itself on its intellect and decries the use of LLMs. For these developers using AI is an admission of incompetence. Or maybe they believe a machine will never be able to replace them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq8xq8622hii7fht6l1v.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq8xq8622hii7fht6l1v.jpg" alt="Richard Stallman" width="460" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second camp is fully embracing AI. In some sense I am in that camp, as I have always been fascinated by AI and it has been a dream to work in that field. That said, there is a more pragmatic reason: state-of-the-art AI really is a force multiplier. It enables me to accomplish far more than I could without it. From AI fans you may hear exhortations to learn about AI to remain relevant in the coming revolution.&lt;/p&gt;

&lt;p&gt;The reality, however, is that the pie itself will shrink. We have already seen companies laying off software developers, more in anticipation of AI replacing them than for any compelling reason. At least so far.&lt;/p&gt;

&lt;p&gt;Today's LLMs are good. Five years ago the LLMs we see today would be considered science fiction. But despite the successes of LLMs they are not yet capable of replacing humans entirely. You still need humans in the loop. They can be a force multiplier for a software developer, allowing them to be far more productive than otherwise. But now there are also 'vibe coding' platforms where people with no development skills can develop software.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxuey00pl8n8npbl8ou7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxuey00pl8n8npbl8ou7.jpg" alt="Vibe Coding" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The AI revolution is not part of a cycle. There is a fundamental change occurring involving machine intelligence. In my naivety I initially believed that the technologists like myself would be the last to feel the pain of being replaced, but in fact it seems we are one of the first along with artists, writers and musicians.&lt;/p&gt;

&lt;p&gt;'Learn to code' they said.&lt;/p&gt;

&lt;p&gt;Even though AI won't replace developers outright initially, it will reduce the number needed. Ever fewer developers will be required, and salaries will suffer as the supply of developers outstrips demand for them.&lt;/p&gt;

&lt;p&gt;None of this requires anything near as grand as some magical AGI or even superintelligence. Gradual improvement is all that is needed. It will be like musical chairs for software developers, with an ever-decreasing number of chairs. To get a chair, the demands in terms of skills and experience will only increase. This in itself will cut off the supply of younger, less experienced developers.&lt;/p&gt;

&lt;p&gt;What is a software developer to do? Should we do what is needed to find a chair? Or are we horses in the age of cars?&lt;/p&gt;

&lt;p&gt;My advice is this: learn to be a hairdresser. Even with robots coming, letting sharp objects near our heads is perhaps something humans won't trust bots with. This may be all in jest; as an aging, chubby white dude I'm not exactly the typical demographic for a hairdresser myself. More seriously, we should be looking at options that involve less sitting at a desk moving a mouse.&lt;/p&gt;

&lt;p&gt;Or do we lobby to ban AI to protect the poor software developers?&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>jobs</category>
      <category>hr</category>
    </item>
    <item>
      <title>Battle Scars from the Cloud Front</title>
      <dc:creator>Peter Harrison</dc:creator>
      <pubDate>Fri, 31 Oct 2025 18:48:21 +0000</pubDate>
      <link>https://forem.com/cheetah100/battle-scars-from-the-cloud-front-34g8</link>
      <guid>https://forem.com/cheetah100/battle-scars-from-the-cloud-front-34g8</guid>
      <description>&lt;h2&gt;
  
  
  The Promise
&lt;/h2&gt;

&lt;p&gt;It is no secret that Cloud platforms have been adopted by most organisations for running their infrastructure. Virtualization of infrastructure brings many advantages. &lt;/p&gt;

&lt;p&gt;In the early 2000s I had to pay for hardware and have it physically installed in a data center, and pay for a lease to host it. This was expensive and involved.&lt;/p&gt;

&lt;p&gt;With Cloud based Virtual Machines we could spin up a machine at a moment's notice, perform some work and then tear it down, paying only for the time it was up. Then along comes Docker and containerization, which reduces the footprint of an instance and makes it possible to scale easily based on an image.&lt;/p&gt;

&lt;p&gt;Then comes Kubernetes to help manage those containers, configure the networking, and create internal networks to interconnect your micro-services. Finally there are Lambdas and other serverless functions, which eliminate the need to worry about infrastructure at all. Just drop a function into AWS and connect it to one of the services. Pay only for the CPU you use.&lt;/p&gt;

&lt;p&gt;There has been an evolution from hosting physical machines to Cloud platforms where you no longer even see the underlying machines but rather a monolithic platform service. The service handles the management of the infrastructure for you, freeing you to focus on the code. This is the promise of Cloud, and frankly it has delivered. So what is my problem with it?&lt;/p&gt;

&lt;h2&gt;
  
  
  With Clouds comes Sink
&lt;/h2&gt;

&lt;p&gt;As any good glider pilot can tell you, with any cloud comes sink: the smooth descending air that pulls you inexorably toward the earth.&lt;/p&gt;

&lt;p&gt;Having now worked in a number of environments that utilize Cloud platforms, I think there are some pitfalls we need to discuss.&lt;/p&gt;

&lt;h3&gt;
  
  
  Inability to Run Locally
&lt;/h3&gt;

&lt;p&gt;Before you reach for your keyboards, I know there are ways to run code locally. This point is about how Cloud development has encouraged complex configurations, resources and permissions which create barriers to running your code locally. By 'locally' I also mean the inability to set up realistic sandpits on your Cloud platform.&lt;/p&gt;

&lt;p&gt;In my experience developers have been expected to complete tickets and create a Pull Request to bring code from a feature branch into a development branch on the basis of passing unit tests alone. They may not have had access to databases to test their SQL against test data. They may not have had the opportunity to send message payloads to other services to validate that they integrate properly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkoxxytxdgxp22dqnv2zt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkoxxytxdgxp22dqnv2zt.jpg" alt="Unable to run locally" width="512" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Is this just me? Is it that Peter can't hack change? Well, from what I have seen, many of the developers around me have been struggling with this problem. In the bad old days you would simply run everything on your local machine. But today there are platform services, permissions, and configurations which are more than mere environment settings. Platform configuration has become part of the application proper, meaning you can't run it locally.&lt;/p&gt;

&lt;p&gt;Yes, I know Kubernetes can be run on your local box, but in my experience you can't just shift an application from your local system to the Cloud. Because configuration is now an explicit part of the app, the infrastructure is more than just the substrate you run your app on. Similarly, it is possible to run Lambdas locally, but there are limitations, and there isn't really a like-for-like substitute for running on the platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  No Deploy on Commit
&lt;/h3&gt;

&lt;p&gt;Related to the above point, unit tests would not run when a feature branch received a commit. Previously my experience was that every single commit would trigger a build and a unit test run. If there were unit test failures you would get a report.&lt;/p&gt;

&lt;p&gt;In more recent environments the Feature Branch approach was set up to build only on commits to the development and master branches. Typically this would be when a PR was merged into development.&lt;/p&gt;

&lt;p&gt;Of course, we should be building and running unit tests prior to commit. However, the lack of deployment means we don't get to see how a Lambda actually behaves until it makes it into the development branch and is deployed into a development environment.&lt;/p&gt;

&lt;p&gt;The 'development' environment in this context is a single environment shared by everyone, not the same thing as a local environment in which developers can work without impacting others.&lt;/p&gt;
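&lt;p&gt;For contrast, a per-commit build is only a few lines of CI configuration. This is a hedged sketch assuming GitHub Actions; the workflow name and commands are illustrative, and the environments I am describing used other platforms:&lt;/p&gt;

```yaml
# Hypothetical CI workflow: build and test on every push to every
# branch, rather than only on merges into development or master.
name: per-commit-tests
on:
  push:
    branches: ["**"]          # feature branches included
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test   # illustrative build/test step
```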

&lt;h3&gt;
  
  
  Shifting System Complexity to the Platform
&lt;/h3&gt;

&lt;p&gt;This has some overlap with micro-services, which have been all the rage in cloud environments. Software used to be more monolithic, with all the code in one binary. There would be separation of concerns, in that the Database might be separate from the Web Server, but each would have distinct capabilities.&lt;/p&gt;

&lt;p&gt;The difference is that micro-services have been aligned with the domain, and so we now get a solution composed not of one or two major components, but dozens of interlocking small services playing a part in a complex web of dependencies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbgv7995sc3w16gbksjz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbgv7995sc3w16gbksjz.jpg" alt="Drowning in Projects" width="512" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In one recent example there was a system that involved sixteen separate repositories, each a separate deployable unit of Lambda functions. Some functions would directly call Lambdas from other projects in code.&lt;/p&gt;

&lt;p&gt;Now, some might say this is clearly the wrong thing to do, and they would be 100% correct, but the deeper question is how we got there. We have essentially broken up what would have been separate packages, each with their own purpose but running on the same machine, into separate services with complex dependencies on one another.&lt;/p&gt;

&lt;p&gt;This has also resulted in complex deployment configurations, with the plumbing for all these service connections defined in the platform configuration rather than inside the application software.&lt;/p&gt;

&lt;h3&gt;
  
  
  Too Many Projects
&lt;/h3&gt;

&lt;p&gt;Something else I have seen is software decomposed into micro-services which are then handed to separate teams. Each team might handle its service in its own way. This is partly a result of new services being developed over time and bolted on.&lt;/p&gt;

&lt;p&gt;As a result I have seen environments where projects are no longer maintained, where the developers have left and there has been no continuity. New developers coming in are left with a menagerie of dissimilar repositories, each written with different technologies.&lt;/p&gt;

&lt;p&gt;The CI/CD pipelines may not even be working, and the configurations might be so old they no longer function. Or worse, there is no real definition of what makes up the system, so critical Lambdas are not really visible until you need to change them or they break.&lt;/p&gt;

&lt;h3&gt;
  
  
  Poor Service Dependency Visibility
&lt;/h3&gt;

&lt;p&gt;Related to the number of services is the fact that it can be very difficult to actually visualize how everything is connected. The direct call from a Lambda in one project to a Lambda in another was an example of a hidden dependency. There was no contract or exposed service on the receiving side, just a Lambda that could be called. Clients of this service simply invoked the Lambda directly.&lt;/p&gt;
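&lt;p&gt;Here is a hedged sketch of what this hidden dependency looks like in code, with hypothetical function and payload names. In production the client would be boto3's Lambda client; a stub stands in here so the shape of the coupling is visible without AWS credentials:&lt;/p&gt;

```python
import io
import json

def fetch_customer(lambda_client, customer_id):
    # Directly invokes another project's Lambda by name: the hidden
    # dependency described above. There is no shared contract, only a
    # function name the caller happens to know. Names are hypothetical.
    response = lambda_client.invoke(
        FunctionName="other-team-customer-lookup",
        Payload=json.dumps({"customerId": customer_id}).encode(),
    )
    return json.loads(response["Payload"].read())

# In production lambda_client would be boto3.client("lambda"); this stub
# mimics the invoke() response shape (a streaming payload) locally.
class StubLambdaClient:
    def invoke(self, FunctionName, Payload):
        return {"Payload": io.BytesIO(json.dumps({"id": "42"}).encode())}

print(fetch_customer(StubLambdaClient(), "42"))  # {'id': '42'}
```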

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogo3ve267g25xnlgm9ej.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogo3ve267g25xnlgm9ej.jpg" alt="Developer in the Mist" width="512" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In monolithic apps the dependencies can be followed directly in the IDE. You can follow call chains from class to class easily. At debug time you can even follow the execution pointer around. But when debugging issues in systems that involve many separate micro-services, we face the challenge of finding out exactly where the problem occurs.&lt;/p&gt;

&lt;p&gt;Finding out how all the services depend on one another can be a real challenge, as there isn't one canonical place that this can be visualized. As discussed earlier it is even worse when you don't have debug access to a running system. You can't step through the code base to find an issue in a Lambda in a test system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;We like to think progress goes in only one direction, that the old was worse than the new. Cloud platforms have undeniable benefits, but not everything is a box of butterflies. I've seen some real challenges that are at least in part a consequence of Cloud's impact on developers' workflows.&lt;/p&gt;

&lt;p&gt;I want to be able to run code and step-debug a problem. I want an application to have a single well-structured and maintained code base. I don't want many separate small services with ill-defined interfaces running on different technologies.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Full Stack Fatigue</title>
      <dc:creator>Peter Harrison</dc:creator>
      <pubDate>Mon, 27 Oct 2025 01:01:28 +0000</pubDate>
      <link>https://forem.com/cheetah100/full-stack-fatigue-22de</link>
      <guid>https://forem.com/cheetah100/full-stack-fatigue-22de</guid>
      <description>&lt;p&gt;When I became a paid programmer I knew one language: DBase. Well, two if you count BASIC, but the less said about that the better. You could write a fully functional application in DBase, and distribute it as a exe file. You had to know something about data structures as well. The barrier to entry wasn't that steep.&lt;/p&gt;

&lt;p&gt;Today what do you need to write a web application? You will need HTML, CSS, and Javascript. You will need to adopt a front end framework such as React. You will need to learn a back end language, which these days can also be Javascript, aka Node. Or you can use Java or Python. You will probably use a SQL database, so you will need to know SQL. You will need to expose services via REST, so you will need a REST API framework. You will need to store your code in version control, which these days is usually Git. To deploy your application you will usually need to know about Cloud services, Docker, and Kubernetes.&lt;/p&gt;

&lt;p&gt;Your application will need to implement security and authentication, for which there is OAuth2. You will also need a CI/CD system, which was previously Jenkins, but now varies depending on the platform. You will need to learn CloudFormation on AWS or Bicep on Azure to do infrastructure as code.&lt;/p&gt;

&lt;p&gt;This isn't an advanced stack. This is the kind of skill set expected of a "full stack developer". As a software developer I have always known that you can't stay still. The old COBOL programmers had a good wicket for a while, but you can get stranded on a declining island. So constant learning has always been an absolute necessity to stay relevant.&lt;/p&gt;

&lt;p&gt;I had ended up specializing in integration and business automation, learning jBPM, Bonita and Activiti, but even there times are moving, with new approaches leaving BPM-type solutions in the dust.&lt;/p&gt;

&lt;p&gt;Even though I have raced to adopt new skill sets, it has become increasingly difficult to stay ahead of the skills demanded. And if I am having trouble, the situation for junior developers must look like Mt Everest. Not to mention that AI is now encroaching, making it even tougher to land entry level positions.&lt;/p&gt;

&lt;p&gt;There is a danger in thinking the present is just like the past. I would have thought the future would be easier for developers, with better tooling making life easier, but the evidence seems to point elsewhere. AI in some respects makes things easier, but also more opaque, giving people power without tempering them with experience in software.&lt;/p&gt;

&lt;p&gt;Should we expect people to be jacks of all trades, able to handle everything from the front end to designing cloud infrastructure? When I began, everything was on one PC and applications compiled to a single file. Today we have highly available distributed clusters, complex deployment pipelines and quality gates. It is good for developers to at least understand the whole stack and have some exposure to it, but deep understanding of the whole stack has become unrealistic.&lt;/p&gt;

&lt;p&gt;Are we cutting off the pipeline of young developers? Are we placing too many expectations on them? Is there a way to ease these expectations in the hiring process? Or do we think AI will be the silver bullet, removing the need for us to program at all?&lt;/p&gt;

</description>
      <category>fullstack</category>
      <category>webdev</category>
      <category>learningtocode</category>
    </item>
  </channel>
</rss>
