<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Joaquin Diaz</title>
    <description>The latest articles on Forem by Joaquin Diaz (@joacod).</description>
    <link>https://forem.com/joacod</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F367271%2F9993078b-6197-4cc5-9a59-1c9761d70776.png</url>
      <title>Forem: Joaquin Diaz</title>
      <link>https://forem.com/joacod</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/joacod"/>
    <language>en</language>
    <item>
      <title>Context Is Not Memory, It Needs an Engine</title>
      <dc:creator>Joaquin Diaz</dc:creator>
      <pubDate>Fri, 17 Apr 2026 13:49:38 +0000</pubDate>
      <link>https://forem.com/joacod/context-is-not-memory-it-needs-an-engine-f2</link>
      <guid>https://forem.com/joacod/context-is-not-memory-it-needs-an-engine-f2</guid>
      <description>&lt;p&gt;In my previous article, &lt;a href="https://dev.to/joacod/human-code-review-is-not-the-last-frontier-60c"&gt;Human Code Review Is Not the Last Frontier&lt;/a&gt;, I argued that human code review is not the final bottleneck.&lt;/p&gt;

&lt;p&gt;Underneath that argument was something deeper: one of the real bottlenecks in agent-native engineering, the &lt;strong&gt;missing context&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Not context in the vague sense people usually mean when they say, "just give the model more information". I mean &lt;strong&gt;real engineering context&lt;/strong&gt;. The kind that tells you whether a change is actually correct or just looks correct for five minutes in isolation. The kind of context that lives in old pull requests, half-forgotten migrations, team decisions, outdated assumptions, scattered docs, local workarounds, and painful experience inside a codebase that has been evolving for years.&lt;/p&gt;

&lt;p&gt;That is usually the difference between a change that compiles and a change that actually &lt;strong&gt;belongs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A repository is never just its current code. It also contains what the system is trying to move away from, what is still in progress, which ugly pattern still exists for a reason, which assumptions used to be true but should not be repeated, which workaround only makes sense in one corner of the system, and which apparently harmless area is actually fragile. Humans recover that knowledge over time because they have lived through the system, talked to the people around it, reviewed old changes, and seen things break.&lt;/p&gt;

&lt;p&gt;Agents do not.&lt;/p&gt;

&lt;p&gt;They only work with what is surfaced to them, and most of the time that context is incomplete, stale, too generic, or trapped in the wrong place. That is why I do not think this problem will be solved by simply adding more notes, writing a better prompt, or relying on a memory feature and hoping it behaves like judgment.&lt;/p&gt;

&lt;p&gt;If context is one of the real bottlenecks in agent-native engineering, then it probably needs something more serious.&lt;/p&gt;

&lt;p&gt;It needs its own &lt;strong&gt;system&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;Context Is Not a Sidecar Problem&lt;/h2&gt;

&lt;p&gt;A lot of current AI workflows still treat context like a sidecar. Something attached to a prompt. Something kept in a rules file. Something the tool "remembers". Something dumped into documentation in the hope that it stays fresh.&lt;/p&gt;

&lt;p&gt;Those things can help, and I use them too, but they do not solve the core issue.&lt;/p&gt;

&lt;p&gt;The real problem is not whether context exists somewhere. The real problem is whether that context is current, scoped correctly, relevant to the task, and surfaced at the right moment in the workflow.&lt;/p&gt;

&lt;p&gt;That is a very different problem. It is not mainly a storage problem; it is a &lt;strong&gt;context quality&lt;/strong&gt; problem, because context has lifecycles.&lt;/p&gt;

&lt;p&gt;Context can be global and durable, or narrowly tied to a single repository. It may only matter during a migration, carry enough confidence to shape implementation, or stay weak and temporary because it comes from recent observations that could stop being true next week. Certain context matters most during planning, while other context only becomes important during debugging or validation. And once a transition is over, some of it should stop influencing future work entirely.&lt;/p&gt;

&lt;p&gt;Once you look at it that way, context stops being just text; it becomes &lt;strong&gt;living information&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And living information needs to be managed, not merely stored.&lt;/p&gt;

&lt;h2&gt;Why Memory Is the Wrong Mental Model&lt;/h2&gt;

&lt;p&gt;This is why I do not think context should be treated as memory.&lt;/p&gt;

&lt;p&gt;Memory sounds passive. It suggests something you keep around and occasionally recall, something helpful but secondary, something sitting off to the side. That is not enough.&lt;/p&gt;

&lt;p&gt;What I am describing is an &lt;strong&gt;operational system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That distinction matters because an operational system participates in the workflow. It can be maintained, revised, ranked, scoped, and delivered when needed. The point is not just to retain context somewhere in the background. The point is to make it usable.&lt;/p&gt;

&lt;p&gt;This is also why "more memory" is not automatically better. A pile of remembered notes is still a pile. If context is stale, weak, conflicting, badly scoped, or poorly timed, then surfacing more of it may make the workflow worse, not better.&lt;/p&gt;

&lt;p&gt;The challenge is not accumulation, the challenge is &lt;strong&gt;usefulness&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;Why Raw Files Are Not Enough&lt;/h2&gt;

&lt;p&gt;This is where the idea starts to become more concrete. I am not talking about saving more notes; I am talking about giving engineering context a canonical system of record.&lt;/p&gt;

&lt;p&gt;Not because markdown is bad, but because raw files alone are a weak foundation for something that has scope, freshness, confidence, lineage, version history, review state, conflict states, and natural decay over time.&lt;/p&gt;

&lt;p&gt;If context has that kind of lifecycle, then the system behind it should be able to understand things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;where a piece of context applies&lt;/li&gt;
&lt;li&gt;how trustworthy it is&lt;/li&gt;
&lt;li&gt;what generated it&lt;/li&gt;
&lt;li&gt;whether it conflicts with something newer&lt;/li&gt;
&lt;li&gt;whether it is still active&lt;/li&gt;
&lt;li&gt;whether it should continue influencing future work&lt;/li&gt;
&lt;li&gt;whether it represents a preferred path or a known anti-pattern&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are not minor details. They are part of what makes context &lt;strong&gt;usable&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is why I think the missing piece is not memory, and not documentation with better search. The missing piece is &lt;strong&gt;an engine&lt;/strong&gt;, a system capable of ingesting, classifying, refining, and retrieving context in a way that matches real engineering work.&lt;/p&gt;

&lt;p&gt;The important part is not storage; the storage layer is not the differentiator, the engine is.&lt;/p&gt;

&lt;p&gt;Storage matters only because it gives the system structure, but structure alone is not what solves the problem. What matters is what sits on top of that structure.&lt;/p&gt;

&lt;p&gt;A useful context system should be able to notice when a new piece of context overlaps with something already known. It should recognize when a previous assumption has started to decay. It should understand that a repository-specific rule should not be treated as a global one. It should learn that a certain implementation path repeatedly gets corrected by humans. It should preserve not only what worked, but also what repeatedly failed or had to be revised.&lt;/p&gt;

&lt;p&gt;That last part matters &lt;strong&gt;a lot&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A good system should not only remember successful patterns, it should also surface &lt;strong&gt;negative knowledge&lt;/strong&gt;, what to avoid, what keeps breaking, what looks reasonable but repeatedly turns out to be wrong in this specific codebase.&lt;/p&gt;

&lt;p&gt;That is one of the biggest gaps in current workflows. Agents are often good at producing plausible work. But plausible is not the same as correct, and it is definitely not the same as locally correct inside a messy, evolving system.&lt;/p&gt;

&lt;p&gt;What they are missing is not always capability; it is the context that actually &lt;strong&gt;matters&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;Internally Rich, Externally Simple&lt;/h2&gt;

&lt;p&gt;One of the most important design principles here is simple: &lt;strong&gt;internally rich, externally simple&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Internally, the system should be sophisticated. It should use structured storage, lifecycle management, ranking, revision logic, conflict handling, freshness scoring, and retrieval logic that can adapt to different stages of work.&lt;/p&gt;

&lt;p&gt;Externally, though, it should stay simple.&lt;/p&gt;

&lt;p&gt;Most agent workflows already consume text extremely well. Prompts, notes, handoff files, planning docs, constraint summaries, migration notes, contextual briefs. That part already works. So instead of forcing every harness, assistant, IDE, or workflow to understand a complicated internal schema, the engine can do the hard work inside and return the result as markdown.&lt;/p&gt;

&lt;p&gt;That is the abstraction.&lt;/p&gt;

&lt;p&gt;Internally, the system manages context properly. Externally, it delivers one or more markdown artifacts that can be consumed almost anywhere.&lt;/p&gt;

&lt;p&gt;That could mean a brief for a quick task, a planning pack for riskier work, migration notes when a system is in transition, recent learnings for debugging, or known risks during validation. The exact filenames do not matter; what matters is that the output remains simple enough for almost any workflow to adopt without friction.&lt;/p&gt;

&lt;p&gt;That simplicity is &lt;strong&gt;not a compromise&lt;/strong&gt;, it is part of the design.&lt;/p&gt;

&lt;p&gt;If every team needs schema awareness, special adapters, and deep knowledge of how the engine thinks, then adoption becomes harder right where the system should be disappearing into the background. The workflow should not need to care how context was stored, promoted, revised, merged, or archived. It should just ask for the right context and receive a clean package.&lt;/p&gt;

&lt;h2&gt;Why This Matters Now&lt;/h2&gt;

&lt;p&gt;This matters because the real world is not going to standardize around one assistant, one IDE, one orchestration layer, or one company workflow.&lt;/p&gt;

&lt;p&gt;Different teams use different tools, different companies have different processes, different repositories have different levels of entropy, different tasks need different amounts of context at different moments.&lt;/p&gt;

&lt;p&gt;So if this idea only works when everything is redesigned around it, then it is already weaker than it should be.&lt;/p&gt;

&lt;p&gt;But if the internal system stays rich and the external interface stays simple, then the same engine can plug into planning, implementation, debugging, validation, CI loops, pull request preparation, or longer autonomous workflows without forcing everyone into the same stack.&lt;/p&gt;

&lt;p&gt;The workflow asks for context, the engine returns markdown, the agent consumes it.&lt;/p&gt;

&lt;p&gt;That is &lt;strong&gt;much more realistic&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;The Missing Layer&lt;/h2&gt;

&lt;p&gt;This is the missing layer I was pointing at in the previous article.&lt;/p&gt;

&lt;p&gt;When I said human code review is not the last frontier, part of what I meant was that the real frontier is not simply whether agents can generate more code. It is whether they can operate with the kind of context that real engineering work actually depends on.&lt;/p&gt;

&lt;p&gt;Not generic context, not bloated context, not stale context, &lt;strong&gt;useful context&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Context that knows where it applies, whether it is still true, how strong it is, what it conflicts with, and when it should stop influencing future work. Context that can be maintained instead of forgotten. Context that appears when it matters, instead of arriving as noise.&lt;/p&gt;

&lt;p&gt;That is why I think this deserves its own layer.&lt;/p&gt;

&lt;p&gt;Not another memory feature, not a prompt trick, not markdown folders pretending to be a system. A real "Context Engine", internally structured enough to manage context properly, and externally simple enough that almost nobody consuming it needs to care how it works.&lt;/p&gt;

&lt;p&gt;They just get a &lt;strong&gt;usable context package&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And honestly, that is not a small implementation detail, it is the &lt;strong&gt;whole point&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;Closing Thought&lt;/h2&gt;

&lt;p&gt;As agents get better at generating code, the bottleneck becomes easier to see. The issue is often not raw capability, it is &lt;strong&gt;contextual correctness&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That is why I think context needs something more deliberate behind it, not passive memory, not loose documentation, not a bigger pile of notes, &lt;strong&gt;an engine&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Because the difference between work that looks right and work that is right often lives in all the things the code alone does not tell you. And if those things are becoming one of the main constraints on agent-native engineering, then they should not remain scattered across prompts, docs, habits, and human memory.&lt;/p&gt;

&lt;p&gt;They should have a real system behind them.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>software</category>
      <category>architecture</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Human Code Review Is Not the Last Frontier</title>
      <dc:creator>Joaquin Diaz</dc:creator>
      <pubDate>Mon, 09 Mar 2026 15:31:24 +0000</pubDate>
      <link>https://forem.com/joacod/human-code-review-is-not-the-last-frontier-60c</link>
      <guid>https://forem.com/joacod/human-code-review-is-not-the-last-frontier-60c</guid>
      <description>&lt;p&gt;I found these two articles very interesting: &lt;a href="https://background-agents.com/" rel="noopener noreferrer"&gt;The Self-Driving Codebase&lt;/a&gt; and &lt;a href="https://www.latent.space/p/reviews-dead" rel="noopener noreferrer"&gt;How to Kill the Code Review&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And to be clear, I don't think they are totally wrong. I think they are pointing in the right direction, but I also think they jump too fast from "this is where things are going" to "we are almost there".&lt;/p&gt;

&lt;p&gt;That is the part &lt;strong&gt;I don't buy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I'm not anti-AI, and I don't think agents are just hype either. I try tools, I use them, and I see what works and what doesn't. The progress is honestly better than many people think.&lt;/p&gt;

&lt;p&gt;But software engineering in the &lt;strong&gt;real world&lt;/strong&gt; is &lt;strong&gt;harder&lt;/strong&gt; than the clean version of the story, that is my problem with a lot of this conversation.&lt;/p&gt;

&lt;p&gt;When people talk about autonomous agents writing software, they often talk as if the main thing left is removing the human from code review. As if once the agents take care of that and humans stop reviewing every diff, then software development becomes mostly an automation problem.&lt;/p&gt;

&lt;p&gt;I don't think that is true. Human code review is not the last frontier; it is just the last visible step in a much bigger mess, because most real codebases &lt;strong&gt;are a mess&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And I mean that in the most normal, boring, everyday way that anyone with enough years in software has seen many times.&lt;/p&gt;

&lt;h2&gt;The Reality&lt;/h2&gt;

&lt;p&gt;Big codebases are full of &lt;strong&gt;technical debt&lt;/strong&gt;. They have different coding styles because hundreds of people touched them over the years, lots of "temporary changes" that were supposed to be improved later and became permanent, dead code because the business changed direction three times, bad abstractions because someone tried to generalize too early, weird edge cases because one big customer needed something five years ago and the company never removed it, missing tests, outdated docs, unclear ownership, and I can go on and on.&lt;/p&gt;

&lt;p&gt;So when I read ideas like &lt;em&gt;"review the spec, not the code"&lt;/em&gt; or &lt;em&gt;"kill the code review"&lt;/em&gt;, I get the point: the current review process does not scale well if agents start producing much more code than humans can read, and that part is fair. But in many teams, code review is doing more than checking code quality; it is where people bring context back into the change.&lt;/p&gt;

&lt;p&gt;The reviewer knows that this ugly part of the system exists for a reason, that another team depends on a strange behavior that nobody documented, that this small looking refactor touches something fragile, which part of the code is annoying but harmless, and which part looks harmless but can break production.&lt;/p&gt;

&lt;p&gt;That knowledge is often not written down anywhere, not in the tickets, not in the docs, not in the tests, not in the spec.&lt;/p&gt;

&lt;p&gt;A lot of engineering work is not just implementing a clear idea, but discovering what the idea actually means while building it. You start with a requirement, and then you find out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it was incomplete&lt;/li&gt;
&lt;li&gt;it ignored an old system&lt;/li&gt;
&lt;li&gt;it breaks an edge case&lt;/li&gt;
&lt;li&gt;it creates a performance problem&lt;/li&gt;
&lt;li&gt;it conflicts with another team's flow&lt;/li&gt;
&lt;li&gt;it sounds simple at the product level but is messy at the data level&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I think some of these arguments assume a cleaner world than the one most engineers actually live in. If we consider that AI currently &lt;strong&gt;amplifies&lt;/strong&gt; any existing structure, we are not in the best of scenarios, and this is where I separate the future from the hype.&lt;/p&gt;

&lt;h2&gt;The Future&lt;/h2&gt;

&lt;p&gt;Yes, this is probably the future: agents will keep getting better, and human code review as we know it will probably matter less over time.&lt;/p&gt;

&lt;p&gt;But that doesn't mean we are close to autonomous software engineering, and even less so in environments where software is hardest: large companies, old systems, regulated industries, critical user facing products, codebases with years of debt and weak ownership. That is a &lt;strong&gt;different game&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;What works inside a frontier AI company does not automatically work inside a financial institution. That does not mean the frontier companies are faking it (well... at least not all of it); it means their environment is different. They are building the models, shaping the tools, and creating workflows around this new way of working.&lt;/p&gt;

&lt;p&gt;But the average company is not in that position. Most companies are not working with clean systems, top tier internal tooling, strong documentation, fast decision making, and teams built around agent workflows. Most companies are still trying to survive their &lt;strong&gt;own complexity&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;Software Engineering&lt;/h2&gt;

&lt;p&gt;Writing code is only one part of software engineering, and not even the hardest part. The hard part is working inside messy systems with incomplete context, unclear requirements, old decisions, conflicting constraints, and real consequences if something goes wrong.&lt;/p&gt;

&lt;p&gt;That is where things still break down. To me, that is the real frontier, not code review by itself. The real frontier is judgment and context: knowing what matters in this codebase, in this company, at this moment; knowing when a change is technically correct but still wrong for the system; knowing what to ignore, what to clean up, what to leave alone, and what risk is acceptable.&lt;/p&gt;

&lt;p&gt;And that, my friends, is what &lt;strong&gt;software engineering really is&lt;/strong&gt;. This is the point that gets lost when the discussion becomes too abstract. Because once you leave the world of demos, MVPs, benchmarks, and greenfield projects, software is not just a code problem. It is a history problem, a people problem, a maintenance problem, a tradeoff problem, and I haven't even mentioned scalability, maintainability, security, and many more important topics.&lt;/p&gt;

&lt;h2&gt;Where we are&lt;/h2&gt;

&lt;p&gt;I am not arguing for a conservative view where nothing changes; a lot is already changing. I just think we should be &lt;strong&gt;honest&lt;/strong&gt; about where we really are.&lt;/p&gt;

&lt;p&gt;We are getting better at automating code production; we are not yet equally good at automating the deep context that real software engineering depends on.&lt;/p&gt;

&lt;p&gt;That is why I don't believe human code review is the last frontier. It's the last place where human judgment shows up before the code lands. A deeper frontier is whether that judgment can be made clear enough, structured enough, and trusted enough that the system no longer depends on humans carrying it in their heads.&lt;/p&gt;

&lt;p&gt;Maybe we'll get there, I don't know, and I don't know how we'll address the problems I mentioned either.&lt;/p&gt;

&lt;p&gt;But from where most of the industry stands today, we may be on the way, but we certainly are &lt;strong&gt;not there yet&lt;/strong&gt;. Those are not the same thing, and the difference matters.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>software</category>
      <category>programming</category>
      <category>discuss</category>
    </item>
    <item>
      <title>My Personal Blog Is Finally Live</title>
      <dc:creator>Joaquin Diaz</dc:creator>
      <pubDate>Wed, 18 Feb 2026 17:45:32 +0000</pubDate>
      <link>https://forem.com/joacod/my-personal-blog-is-finally-live-46j9</link>
      <guid>https://forem.com/joacod/my-personal-blog-is-finally-live-46j9</guid>
      <description>&lt;p&gt;After 2 years writing on &lt;a href="https://dev.to/joacod"&gt;Dev.to&lt;/a&gt; and &lt;a href="https://medium.com/@joacod" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;, I finally decided to build &lt;a href="https://joacod.com/blog/" rel="noopener noreferrer"&gt;my own blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I enjoy the platforms and they were great for a while, but having full control is always better. It took me some time but it was a nice project to ship, and I had fun building it.&lt;/p&gt;

&lt;p&gt;This wasn't just about creating the blog, I took the opportunity to rebuild my personal site &lt;a href="https://joacod.com/" rel="noopener noreferrer"&gt;joacod.com&lt;/a&gt; from scratch with a new look and feel as well.&lt;/p&gt;

&lt;h2&gt;Tech used&lt;/h2&gt;

&lt;p&gt;I wanted something very fast, with no vendor lock-in, statically generated (SSG), and not tied to any specific frontend tech (React, Vue, Svelte, etc.), so the choice was an obvious one: I went with &lt;a href="https://astro.build/" rel="noopener noreferrer"&gt;Astro&lt;/a&gt;. I love this framework; if in the future I need to extend it or add more client-side-heavy features, I can. It's very powerful, and I've been a huge fan of it for a couple of years now.&lt;/p&gt;

&lt;p&gt;Other than that, just plain HTML, Tailwind, and vanilla JS. &lt;strong&gt;That's it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Since &lt;a href="https://blog.cloudflare.com/astro-joins-cloudflare/" rel="noopener noreferrer"&gt;Astro joined Cloudflare&lt;/a&gt; early this year, it felt like a good idea to deploy it there. There are multiple deployment options if I ever want to change that, but right now it just works for me, and I get an extra &lt;strong&gt;layer of security&lt;/strong&gt; as a bonus.&lt;/p&gt;

&lt;h2&gt;Did I use AI to create it?&lt;/h2&gt;

&lt;p&gt;Well, we are in 2026. If you are not using AI to help and improve your work, you are living under a rock, so yeah, &lt;strong&gt;of course I used AI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I like to test different tools and models, so for this I used a mix of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Claude Code + &lt;a href="https://www.anthropic.com/news/claude-opus-4-6" rel="noopener noreferrer"&gt;Opus 4.6&lt;/a&gt; (Anthropic)&lt;/li&gt;
&lt;li&gt;OpenCode + &lt;a href="https://openai.com/index/introducing-gpt-5-3-codex/" rel="noopener noreferrer"&gt;GPT-5.3-Codex&lt;/a&gt; (OpenAI)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My coding agent of choice is &lt;a href="https://opencode.ai/" rel="noopener noreferrer"&gt;OpenCode&lt;/a&gt;; it's such a great project. If you don't know it and you are using something like Claude Code, I'd suggest you give it a try: it's the same approach, but open source, with better performance. You can also use any model you want with it, since it has a lot of different providers you can connect to, and it's very easy to get up and running.&lt;/p&gt;

&lt;p&gt;I was a user of Claude Code and Anthropic models for the past 6 months, but this month I switched entirely to the OpenCode + GPT-5.3-Codex combo. It may take you some time to get used to Codex if you are coming from Opus; Codex asks many more questions and requests clarifications (which is good), but if you know what you're doing and give precise directions, the results are amazing.&lt;/p&gt;

&lt;p&gt;That said, &lt;strong&gt;both options are great&lt;/strong&gt;, and at this point it's just a matter of personal preference.&lt;/p&gt;

&lt;h2&gt;Now what?&lt;/h2&gt;

&lt;p&gt;Having the opportunity to make it work exactly how you want is awesome. So I'll be adding more functionality, little fixes, and creating new stuff that sounds cool to build. &lt;/p&gt;

&lt;p&gt;I'll continue writing as always, about the things that interest me, as a form of catharsis, or sharing news from the tech world. &lt;/p&gt;

&lt;p&gt;The new blog will be the main canonical source, but I'll keep posting each article on Dev.to and Medium as secondary channels.&lt;/p&gt;

&lt;p&gt;It was a good start of the year. It is always good to &lt;strong&gt;ship something and see it live&lt;/strong&gt;. If you have suggestions, comments, or improvements, they are always welcome.&lt;/p&gt;

&lt;p&gt;Thanks for reading, and see you on the web!&lt;/p&gt;

</description>
      <category>writing</category>
      <category>softwaredevelopment</category>
      <category>webdev</category>
      <category>ai</category>
    </item>
    <item>
      <title>AI in Corporate, Navigating a Sea of Hot Air</title>
      <dc:creator>Joaquin Diaz</dc:creator>
      <pubDate>Fri, 16 Jan 2026 13:55:42 +0000</pubDate>
      <link>https://forem.com/joacod/ai-in-corporate-navigating-a-sea-of-hot-air-2h91</link>
      <guid>https://forem.com/joacod/ai-in-corporate-navigating-a-sea-of-hot-air-2h91</guid>
      <description>&lt;p&gt;I see several problems in corporate environments and in how AI is being used to speed up development.&lt;/p&gt;

&lt;p&gt;On one hand, the more senior devs, who paradoxically are the ones who &lt;strong&gt;benefit the most&lt;/strong&gt; from these tools, are resisting. Some out of fear of being replaced; others are stuck with the idea of what an AI from two years ago could do. They don't try new models or tools, and a lot of them don't even know, for example, what Claude Code is...&lt;/p&gt;

&lt;p&gt;On the other side, the processes and methodologies companies use are &lt;strong&gt;outdated&lt;/strong&gt;. They already were years ago, but AI made it even more obvious. So no matter how much we speed things up on the dev side, the &lt;strong&gt;real bottleneck&lt;/strong&gt; is somewhere else, and none of the decision makers are willing to have that conversation.&lt;/p&gt;

&lt;p&gt;If we're doing a bit of fortune telling, the era of small structures and small teams is coming: roles merging, more ownership from start to finish. If you're a dev, learn product. If you come from product or management, learn code. None of these roles are going to exist the way we know them. How long will it take for the change to arrive? No idea. But it's coming, and like always in tech, you've gotta adapt. The time is now.&lt;/p&gt;

&lt;p&gt;And finally, stop parroting whatever people say online, &lt;strong&gt;including this post&lt;/strong&gt;. Research, experiment, see how far you can take those ideas you've always had. In my opinion, reality is somewhere in the middle, we're nowhere near the AGI the AI-Bros talk about, but the tools are way better than what the Anti-AI-Bros claim.&lt;/p&gt;

&lt;p&gt;Good luck navigating this sea of hot air!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>career</category>
      <category>discuss</category>
    </item>
    <item>
      <title>I don't care about the algo</title>
      <dc:creator>Joaquin Diaz</dc:creator>
      <pubDate>Wed, 24 Dec 2025 14:42:02 +0000</pubDate>
      <link>https://forem.com/joacod/i-dont-care-about-the-algo-1idi</link>
      <guid>https://forem.com/joacod/i-dont-care-about-the-algo-1idi</guid>
      <description>&lt;p&gt;It's been an interesting year, a little experiment on social media and building some online presence, my approach right now and for the next year is this &lt;strong&gt;"I don't care about the algo"&lt;/strong&gt; not any more.&lt;/p&gt;

&lt;p&gt;I started like everybody, researching what the best general approach was, but at some point every article or YouTube video about it sounds the same, over and over again: analyze how people talk on each platform (a post on X has nothing to do with how people post and behave on LinkedIn), hashtags yes or no, don't put external links in posts (that will limit your reach), reply more than you post when starting out so people get used to your takes, and every other piece of advice from the gurus of social media.&lt;/p&gt;

&lt;p&gt;Although some of that advice kind of works at first, the different algorithms, and what platforms value most to keep users hooked, change over time, and very often. If you try to stay up to date with the latest changes, you end up worrying more about the &lt;strong&gt;new algo&lt;/strong&gt; than about your content or what you want to talk about.&lt;/p&gt;

&lt;p&gt;At some point I realized a couple of things. My intention was to be consistent, and to do that I needed to enjoy what I was doing and do it my way. And since I was aware of the &lt;em&gt;"magic formulas to make it"&lt;/em&gt;, when I read a post I recognize that stuff, and it is usually a sign to stop reading.&lt;/p&gt;

&lt;p&gt;Currently most posts or ideas have no original voice at all; it's all the same regurgitated text from different AIs, different models but the same kind of slop.&lt;/p&gt;

&lt;p&gt;I talked about my experience refining and editing with AI &lt;a href="https://dev.to/joacod/unsolicited-advice-about-posting-on-online-16j7"&gt;here&lt;/a&gt;. Long story short, people want to know what &lt;strong&gt;real people&lt;/strong&gt; think, agree or disagree, share different ideas, and draw their own conclusions. At least, that is what I want.&lt;/p&gt;

&lt;p&gt;We really need to put much more emphasis on &lt;strong&gt;critical thinking&lt;/strong&gt;. No shortcuts: have your own opinions, and be ready to change your mind if something just makes more sense or you were missing information. You don't have to be right; you just have to be open to new points of view, and to the possibility that you may be wrong about a lot of things.&lt;/p&gt;

&lt;p&gt;For all of this, a few months ago I made a decision: &lt;strong&gt;I won't care about the algo any more&lt;/strong&gt;. That had a huge positive effect. I was no longer following a &lt;em&gt;formula&lt;/em&gt;; I just wrote things into the internet's void. Most of the time I don't get any reach at all, but the premise was &lt;em&gt;"I don't care"&lt;/em&gt;, so no problem. Other times I get some interactions and the chance to discuss ideas or topics. And very few times, something hits a nerve and gets many interactions, shares, or likes. That is awesome, not gonna lie, but again, it's not the objective at all. The point is to be able to share my thoughts and get them out of my mind, and that works really well for me in a cathartic, psychological way.&lt;/p&gt;

&lt;p&gt;It was a great year, I learned a lot, and this is my approach for 2026: I'll keep writing about things I'm interested in, posting when I feel like it, and &lt;strong&gt;letting the algo be&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Happy holidays to everybody, see you in the future!&lt;/p&gt;

</description>
      <category>writing</category>
      <category>socialmedia</category>
      <category>learning</category>
      <category>discuss</category>
    </item>
    <item>
      <title>AI and the Loss of the Flow</title>
      <dc:creator>Joaquin Diaz</dc:creator>
      <pubDate>Wed, 05 Nov 2025 06:49:33 +0000</pubDate>
      <link>https://forem.com/joacod/ai-and-the-loss-of-the-flow-13hc</link>
      <guid>https://forem.com/joacod/ai-and-the-loss-of-the-flow-13hc</guid>
      <description>&lt;p&gt;Let's face it, we write less and less code every day. Software engineering changed for good. &lt;strong&gt;That ship has sailed&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And while we swing from &lt;em&gt;"oh no, I'm going to lose my job soon"&lt;/em&gt; to &lt;em&gt;"this clanker has no idea, of course I'm absolutely right"&lt;/em&gt;, depending on the size and complexity of what we're building, we are not noticing what we are really losing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Flow
&lt;/h2&gt;

&lt;p&gt;This one's short. Mostly because, despite growing up reading books, most of you now doomscroll your social media drug of choice and probably lost the attention span for more than a few lines of text. So if you made it this far, &lt;strong&gt;congrats!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What I actually want to talk about is something different, and honestly, a &lt;strong&gt;bigger&lt;/strong&gt; problem than we think.&lt;/p&gt;

&lt;p&gt;Those of you who enjoy programming will get this: back in the pre-AI days, coding felt like a &lt;strong&gt;craft&lt;/strong&gt;. You'd have a problem, understand it, design a solution, go through the specs... and finally, the rewarding part: writing the code.&lt;/p&gt;

&lt;p&gt;That moment was special because you'd already thought about it. You could see all the moving parts in your mind and you knew what you were about to build. You'd start typing, and soon enough, you'd enter that magical state a lot of people call &lt;strong&gt;"the flow"&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It's that pure focus where distractions fade, you're deeply immersed in what you're doing, and time kind of dissolves. It's not about finishing or releasing your project (though that's nice too). It's about that "in the zone" feeling.&lt;/p&gt;

&lt;h2&gt;
  
  
  But that was the past
&lt;/h2&gt;

&lt;p&gt;Now we just &lt;strong&gt;prompt&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;It's rare for AI to nail it on the first try. If it does, it's because you wrote a hyper-specific prompt; if not, you'll probably have to reword again and again as you catch tiny details that are off. We all love longer context windows, but at some point the slop starts to multiply.&lt;/p&gt;

&lt;p&gt;Sometimes I just bail out of the AI loop and start coding manually, especially when I get tired of arguing with a tool about something I already know how to do.&lt;/p&gt;

&lt;p&gt;If you actually use AI at work, you know most of those YouTube demo projects don't even come close to the complexity of a real production codebase.&lt;/p&gt;

&lt;h2&gt;
  
  
  Anyway, back to the flow
&lt;/h2&gt;

&lt;p&gt;We're losing it.&lt;/p&gt;

&lt;p&gt;It's the most automatic, immersive part of our job, and it's slowly disappearing. Sure, AI keeps getting better, and it'll eventually handle more and more of what we used to do manually. But beyond that whole debate, losing the flow means losing a huge part of the joy of programming.&lt;/p&gt;

&lt;p&gt;Instead of that &lt;strong&gt;deep focus&lt;/strong&gt; where ideas turn into working code, we now live in a loop of reviews, prompts, tweaks, and retries.&lt;/p&gt;

&lt;p&gt;I'm not sure where this is going. Maybe we'll learn to find that sense of flow at higher levels, when designing systems, architecting solutions, or abstracting problems. That can work for some.&lt;/p&gt;

&lt;p&gt;I don’t know.&lt;/p&gt;

&lt;p&gt;What I do know is that the old way felt better. It was more &lt;strong&gt;satisfying&lt;/strong&gt;. We're removing a crucial part of the craft, a part that made you care deeply about the quality of what you built, as you shaped it, line by line of code.&lt;/p&gt;

&lt;p&gt;And don't get me wrong, I’m not against AI. It's an incredible tool. I use it every day. But I can't help noticing that the more we rely on it, the less time we actually spend inside the problem.&lt;/p&gt;

&lt;p&gt;We need to reclaim our attention. Or maybe we're doomed to trade joy for efficiency, satisfaction for speed. Maybe the flow was the price we paid for progress.&lt;/p&gt;

&lt;p&gt;Anyway, if you're reading this, maybe close your tab, open your editor, and code something today. No AI, no autocomplete. Just you, the problem, and that beautiful silence of being completely lost in it.&lt;/p&gt;

&lt;p&gt;Be water my friend.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>discuss</category>
    </item>
    <item>
      <title>A New Day, a New Security Attack on npm…</title>
      <dc:creator>Joaquin Diaz</dc:creator>
      <pubDate>Tue, 16 Sep 2025 17:48:54 +0000</pubDate>
      <link>https://forem.com/joacod/a-new-day-a-new-security-attack-on-npm-1jek</link>
      <guid>https://forem.com/joacod/a-new-day-a-new-security-attack-on-npm-1jek</guid>
      <description>&lt;p&gt;The number of attacks and vulnerabilities popping up every week in npm is becoming ridiculous, and the problem just keeps growing.&lt;/p&gt;

&lt;p&gt;Last week, we had a very serious attack where &lt;a href="https://socket.dev/blog/npm-author-qix-compromised-in-major-supply-chain-attack" rel="noopener noreferrer"&gt;the account of a major maintainer, "Qix", was compromised&lt;/a&gt;, and today, once again, another important one: an &lt;a href="https://www.ox.security/blog/npm-2-0-hack-40-npm-packages-hit-in-major-supply-chain-attack/" rel="noopener noreferrer"&gt;attack on the tinycolor library&lt;/a&gt;, potentially affecting more than 180 packages.&lt;/p&gt;

&lt;p&gt;And of course, the most common type of attack is a "&lt;strong&gt;Supply Chain Attack&lt;/strong&gt;".&lt;/p&gt;

&lt;p&gt;The risks in the JavaScript ecosystem have always been there, especially with the huge number of dependencies whose origins we don’t fully know. But in recent weeks, we’ve seen more and more news of libraries with &lt;strong&gt;millions of downloads&lt;/strong&gt; being compromised.&lt;/p&gt;

&lt;p&gt;The problem is that in JavaScript we often rely on hundreds of packages, and it only takes &lt;strong&gt;one being contaminated&lt;/strong&gt; for the attack to reach your project. You may even be using dozens of packages without realizing it, because they are dependencies of dependencies of something else you’re using.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a “Supply Chain Attack”?
&lt;/h2&gt;

&lt;p&gt;A "Supply Chain Attack" is like a modern Trojan horse. Instead of attacking your application directly, they compromise a popular library, which then ends up running inside your application with different objectives. It can even reach CI/CD pipelines, production environments, and more, with countless possibilities for damage or exposure of sensitive data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is it so risky in npm?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High dependency&lt;/strong&gt;: a single npm install can bring in dozens of indirect libraries you don’t even know about.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Excessive trust&lt;/strong&gt;: we blindly assume that everything on npm is safe.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dangerous automation&lt;/strong&gt;: many CI/CD pipelines update dependencies automatically, opening the door for attacks to spread.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What to do?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Lock versions in your package.json
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;caret "^"&lt;/strong&gt; is your worst enemy. It’s better to update dependencies consciously than to let any “Minor” or “Patch” version install automatically.&lt;/p&gt;

&lt;p&gt;Following Semantic Versioning, we have &lt;strong&gt;MAJOR.MINOR.PATCH&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"library-example": "^6.1.0" // installs Minor and Patch versions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;With the caret&lt;/strong&gt;, any version like 6.1.1, 6.1.2, or 6.2.0 can eventually end up installed in your project. If one of those versions is compromised, you’ll be using it without knowing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"library-example": "6.1.0" // installs only that specific version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
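
&lt;p&gt;As an illustrative sketch (the package names here are hypothetical placeholders), this is what a &lt;strong&gt;package.json&lt;/strong&gt; fragment looks like with an exact direct version, plus npm's "overrides" field (npm 8.3+) to pin a transitive dependency as well:&lt;/p&gt;

```json
{
  "dependencies": {
    "library-example": "6.1.0"
  },
  "overrides": {
    "transitive-example": "2.4.1"
  }
}
```

&lt;p&gt;Committing your package-lock.json and installing with "npm ci" in your pipelines also guarantees the exact versions recorded in the lockfile, instead of re-resolving ranges on every install.&lt;/p&gt;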



&lt;h3&gt;
  
  
  Audit your dependencies
&lt;/h3&gt;

&lt;p&gt;Don’t fall into the &lt;em&gt;"install and forget"&lt;/em&gt; trap. Dependencies &lt;strong&gt;must be reviewed periodically&lt;/strong&gt;, because even very popular libraries can be compromised.&lt;/p&gt;

&lt;p&gt;Check what you install and use tools such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.npmjs.com/cli/v8/commands/npm-audit" rel="noopener noreferrer"&gt;npm audit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://owasp.org/www-project-dependency-check/#" rel="noopener noreferrer"&gt;OWASP Dependency-Check&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tools detect known vulnerabilities and help you prioritize security patches.&lt;/p&gt;

&lt;p&gt;Also, it’s good practice to check the state of a library before installing: &lt;em&gt;When was it last updated? Is there activity on GitHub? How many maintainers does it have?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A library that’s abandoned or maintained by just one person is far more vulnerable to being taken over by attackers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Minimize dependencies
&lt;/h3&gt;

&lt;p&gt;Every new library you add is a potential entry point into your application. Many times we install a package for very simple tasks; it’s often better to write 20 lines of your own code than to bring in yet another unknown dependency.&lt;/p&gt;

&lt;p&gt;Fewer dependencies = less attack surface, fewer updates to monitor, and a more predictable project. &lt;strong&gt;Not everything has to come from npm&lt;/strong&gt;.&lt;/p&gt;
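
&lt;p&gt;To make the "write it yourself" idea concrete, here's a tiny sketch. The &lt;strong&gt;leftPad&lt;/strong&gt; name is just a hypothetical example, not a reference to any specific npm package; modern JavaScript covers it with a built-in, no dependency required.&lt;/p&gt;

```javascript
// Sketch: instead of installing a one-line package, write the helper yourself.
// "leftPad" is a hypothetical example name, not a real package's API.
function leftPad(value, length, padChar = "0") {
  // String.prototype.padStart is built into modern JavaScript,
  // so this needs no dependency at all.
  return String(value).padStart(length, padChar);
}

console.log(leftPad(7, 3)); // "007"
```

&lt;p&gt;Fewer of these micro-dependencies means fewer packages that can ever be compromised on your behalf.&lt;/p&gt;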

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In npm, &lt;strong&gt;trust is delegated far too easily&lt;/strong&gt;. Be careful with what you bring into your project, &lt;strong&gt;your security depends on it&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>programming</category>
      <category>security</category>
    </item>
    <item>
      <title>Virtual "Team Building" activities are a massive red flag</title>
      <dc:creator>Joaquin Diaz</dc:creator>
      <pubDate>Fri, 18 Jul 2025 16:01:13 +0000</pubDate>
      <link>https://forem.com/joacod/virtual-team-building-activities-are-a-massive-red-flag-27cl</link>
      <guid>https://forem.com/joacod/virtual-team-building-activities-are-a-massive-red-flag-27cl</guid>
      <description>&lt;p&gt;Dear execs, HR folks, and managers, please, let’s stop with the virtual team building sessions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They waste time&lt;/li&gt;
&lt;li&gt;They force devs and QAs who are deep into real work to context-switch and lose focus&lt;/li&gt;
&lt;li&gt;They’re awkward&lt;/li&gt;
&lt;li&gt;90% of the people are just waiting for it to be over&lt;/li&gt;
&lt;li&gt;And they never, never, create any real bonding&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Just my humble opinion, but here are two things that actually do work:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Real-life team building activities, in person
&lt;/h3&gt;

&lt;p&gt;Not mandatory. Bonus points if there’s travel involved, even if it’s just for a day or two. The people who show up actually want to be there. When you bring together folks from different teams and roles who may have only interacted virtually for months, conversations happen naturally. Real groups form organically.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. A good work environment
&lt;/h3&gt;

&lt;p&gt;A healthy workplace, where people collaborate the right way, is actually team building by itself. Some of my closest friendships started like that, just working together every day, helping each other out, bonding over shared interests, games, movies, music, whatever. And again, but very very important, it all happens ORGANICALLY.&lt;/p&gt;

&lt;h2&gt;
  
  
  Now, why do I say virtual team building is a red flag?
&lt;/h2&gt;

&lt;p&gt;In my experience, these things usually pop up when there are deeper problems on the team:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unrealistic deadlines&lt;/li&gt;
&lt;li&gt;People with weak soft skills being promoted to leadership&lt;/li&gt;
&lt;li&gt;A toxic environment where gatekeeping is rewarded instead of collaboration&lt;/li&gt;
&lt;li&gt;And the list goes on; you can probably add a few of your own, since we can all identify these kinds of things&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re in a leadership role and thinking about starting these virtual bonding sessions, maybe take a step back and look at what’s really going on inside the team. Fix the root cause first.&lt;/p&gt;

&lt;p&gt;Let’s be a bit more self-aware. Stop throwing band-aids at deeper issues. And instead, create space for real, meaningful connections to grow.&lt;/p&gt;

</description>
      <category>softwaredevelopment</category>
      <category>productivity</category>
      <category>leadership</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Unsolicited advice about posting online</title>
      <dc:creator>Joaquin Diaz</dc:creator>
      <pubDate>Wed, 16 Jul 2025 17:47:55 +0000</pubDate>
      <link>https://forem.com/joacod/unsolicited-advice-about-posting-on-online-16j7</link>
      <guid>https://forem.com/joacod/unsolicited-advice-about-posting-on-online-16j7</guid>
      <description>&lt;p&gt;About a year and a half ago, I started being more active on different platforms. The goal was simple, share experiences, news, memes, whatever I felt like, and connect with people in the software and startup world from different corners of the globe.&lt;/p&gt;

&lt;p&gt;I've always enjoyed writing. But thanks to impostor syndrome, the pressure to sound "professional", and the fact that English is not my native language, I started filtering my ideas through AI, mainly ChatGPT. I’d write a raw draft, then polish it up and translate it with AI. And it felt like magic: the ideas were mine, but the result was perfectly written, not a typo in sight. Awesome! Right?&lt;/p&gt;

&lt;p&gt;Well... not quite. I slowly started noticing something: even though the message was what I wanted to say, it didn’t feel like me. The words were too polished, the vocabulary was different from what I normally use day to day; the main ideas were there, but the guy who wrote that definitely wasn't me.&lt;/p&gt;

&lt;p&gt;So, I stopped using AI for my posts. I went back to the old-school way of doing things: just writing, directly in English, with my own words, editing manually a bit, fixing little things here and there, but staying closer to a first raw version of my ideas. And what I didn’t expect was how quickly the reach grew: more impressions, more engagement, more people commenting, talking to me in DMs, and actually connecting.&lt;/p&gt;

&lt;p&gt;Moral of the story? In this AI-driven, hyper-curated world where everyone tries to be "professional" and politically correct, we’ve kind of lost our voice and essence, especially on social media. We’ve started to recognize when something was written by AI: we read a paragraph, realize it’s just more of the same, and scroll on.&lt;/p&gt;

&lt;p&gt;I’m still figuring things out, but I try every day to show up here the same way I am in real life, less smoke and mirrors, more honesty.&lt;/p&gt;

&lt;p&gt;Authenticity is going to be the real differentiator in the future we’re walking into.&lt;/p&gt;

&lt;p&gt;I don’t want to meet a bot, I want to meet the human behind the screen. So here’s to the imperfections that make us humans, cheers!!&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>writing</category>
      <category>ai</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Technical Interviews in the AI Era</title>
      <dc:creator>Joaquin Diaz</dc:creator>
      <pubDate>Wed, 25 Jun 2025 18:44:27 +0000</pubDate>
      <link>https://forem.com/joacod/technical-interviews-in-the-ai-era-2phk</link>
      <guid>https://forem.com/joacod/technical-interviews-in-the-ai-era-2phk</guid>
      <description>&lt;p&gt;Let’s be real, you don’t need more than &lt;strong&gt;1 hour&lt;/strong&gt; for a proper technical interview.&lt;/p&gt;

&lt;p&gt;The way most companies are doing interviews right now? It’s &lt;strong&gt;embarrassing&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;They completely miss the point of what makes someone a good software developer. And don’t even get me started on how out of touch they are with the current state of AI.&lt;/p&gt;

&lt;p&gt;Want to run an effective interview? Want to find out if someone not only has experience but also knows how to use modern AI tools to boost their work?&lt;/p&gt;

&lt;p&gt;Here’s the process. One hour. &lt;strong&gt;That’s all it takes&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  🔹 First 20 minutes: Let’s talk past projects
&lt;/h2&gt;

&lt;p&gt;Ask about the candidate’s previous experience. Dive a bit into the projects they mention. This alone tells you a lot, what kind of challenges they’ve faced, how deep their experience goes, and how they think about solving problems.&lt;/p&gt;

&lt;p&gt;Important: before the interview, send them a &lt;strong&gt;starter project&lt;/strong&gt; (depending on the stack) so they can clone it and have it ready.&lt;/p&gt;

&lt;h2&gt;
  
  
  🔹 Next 40 minutes: Live coding, but make it real
&lt;/h2&gt;

&lt;p&gt;Now here’s where it gets interesting, build a feature live, inside that starter repo.&lt;/p&gt;

&lt;p&gt;But with a twist, let them use AI tools, VS Code with Copilot, Cursor, Windsurf, ChatGPT, Claude, Grok, whatever they want.&lt;/p&gt;

&lt;p&gt;Autocomplete, AI prompts, agent mode, copy paste, search on the internet, it’s all fair game.&lt;/p&gt;

&lt;p&gt;What matters isn’t if they use AI, it’s &lt;strong&gt;how&lt;/strong&gt; they use it.&lt;/p&gt;

&lt;p&gt;As they build:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ask about the code the AI generates, and &lt;strong&gt;go deep&lt;/strong&gt; on the candidate's explanations of it.&lt;/li&gt;
&lt;li&gt;Look at the way they prompt. Are they treating the AI like magic, or are they &lt;strong&gt;in control&lt;/strong&gt;? Do they give clear instructions about architecture, refactors, patterns?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ll quickly spot the difference between someone who leads with intention and someone who’s just hoping the AI does it all for them.&lt;/p&gt;

&lt;h2&gt;
  
  
  So what’s the catch?
&lt;/h2&gt;

&lt;p&gt;Simple, this kind of interview only works if the &lt;strong&gt;interviewer&lt;/strong&gt; actually knows the craft.&lt;/p&gt;

&lt;p&gt;That means &lt;strong&gt;real technical experience&lt;/strong&gt;, and being up to speed with modern AI workflows. If you don’t know how these tools work, you won’t know what to look for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Cut the nonsense, skip memorized LeetCode problems that won't have any impact on the actual job. Focus on what matters, real world experience and modern tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Yes, it really is that simple.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>softwareengineering</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Unlocking Developer Superpowers with Cursor</title>
      <dc:creator>Joaquin Diaz</dc:creator>
      <pubDate>Tue, 17 Jun 2025 19:29:44 +0000</pubDate>
      <link>https://forem.com/joacod/unlocking-developer-superpowers-with-cursor-219a</link>
      <guid>https://forem.com/joacod/unlocking-developer-superpowers-with-cursor-219a</guid>
      <description>&lt;p&gt;As a developer who’s been coding for 15+ years, I’ve recently shared my excitement about AI IDE's in general, and after trying a bunch of them my main editor for some time now is definitely &lt;a href="https://www.cursor.com" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Disclaimer: this is not a paid promotion (in fact, I'm the one paying Cursor for their Pro plan); it's just my honest opinion from using it and my thoughts about the experience.&lt;/p&gt;

&lt;p&gt;This AI-powered code editor, built as a fork of Visual Studio Code, has transformed how I work, and I want to dive deeper into that in this article.&lt;/p&gt;

&lt;p&gt;Here’s how I’ve been leveraging Cursor, why it feels like &lt;strong&gt;cheating&lt;/strong&gt;, and why my &lt;strong&gt;foundational skills remain key&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Don't Be a Passenger, Take the Wheel
&lt;/h2&gt;

&lt;p&gt;AI won’t drive the project for you. It’s fast, confident, and occasionally &lt;strong&gt;dead wrong&lt;/strong&gt;. I've seen AI output go sideways a hundred different ways.&lt;/p&gt;

&lt;p&gt;But with the right instincts, a bit of judgment and some cleanup, it becomes a serious asset (if you stay in control).&lt;/p&gt;

&lt;h3&gt;
  
  
  Know When to Step In
&lt;/h3&gt;

&lt;p&gt;Cursor agent mode can refactor code impressively. But sometimes it loops endlessly, bloats logic, or misses the point entirely. I’ve tried all the usual fixes: better prompts, reframing the task, feeding it more files. Most of the time? It slows me down.&lt;/p&gt;

&lt;p&gt;What works best: recognize when it’s off, stop the loop, fix it manually, finally hit the AI again with clearer intent. A small manual edit often saves more time than &lt;strong&gt;battling the prompt&lt;/strong&gt;, and keeps things flowing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Spot the Gaps It Can’t Fill
&lt;/h3&gt;

&lt;p&gt;LLMs don’t know your architecture the way you do, or why a certain hack was necessary three sprints ago. I’ve seen them confidently suggest changes that would silently break core business logic.&lt;/p&gt;

&lt;p&gt;If you know your system, you’ll catch these instantly. If you don’t, you risk shipping bugs the AI helped you write.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clean Up the Hallucinations
&lt;/h3&gt;

&lt;p&gt;Wrong API calls. Fabricated methods. Wild assumptions about project structure. They’re not rare. AI can sound convincing even when it’s &lt;strong&gt;totally wrong&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is where experience matters most. I don’t trust the output blindly, I debug it, cross check it, and steer Cursor more effectively the next time around.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why It Feels Like Cheating (If You’ve Got the Skills)
&lt;/h2&gt;

&lt;p&gt;AI code editors can write code, suggest fixes, refactor functions, and answer project specific questions. Sounds great. But here’s the catch: all of that only becomes powerful in the hands of a developer who can &lt;strong&gt;tell good output from bad&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;“cheating”&lt;/strong&gt; feeling doesn’t come from skipping the hard stuff. It comes from &lt;strong&gt;accelerating&lt;/strong&gt; through it, because you can instantly spot when the AI is almost right, and know exactly how to fix it. That’s not bypassing the challenge. That’s using your experience as leverage.&lt;/p&gt;

&lt;p&gt;Do it right, and speed of development starts to increase, like a lot. That’s when it starts to feel like cheating.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Edge Is Still Experience
&lt;/h2&gt;

&lt;p&gt;Cursor doesn’t replace your skills, it amplifies them. If you don’t already know what clean code looks like, what patterns to use, or how changes ripple through a system, AI won’t save you. It might even bury you deeper.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fluency in languages&lt;/strong&gt; helps you spot subtle errors AI introduces.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Design pattern knowledge&lt;/strong&gt; lets you structure AI output into something maintainable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Software architecture&lt;/strong&gt; awareness ensures your edits fit the big picture.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without this, you're letting the AI take control while you guess what it’s doing, and that is not a good idea, trust me.&lt;/p&gt;

&lt;p&gt;Long story short, Cursor isn’t building my apps. It’s helping me build them faster, cleaner, and with more focus, because I bring the judgment, context, and correction it lacks.&lt;/p&gt;

&lt;p&gt;I spend less time rewriting boilerplate, more time refining architecture, reviewing logic, and keeping quality high.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future of Software Development
&lt;/h2&gt;

&lt;p&gt;A lot of people are talking about completely replacing programmers. I don't see it, and to be totally honest, I think most of those claims are bullshit.&lt;/p&gt;

&lt;p&gt;New models aren't improving exponentially the way the AI companies advertise. These announcements should be taken with caution; most of the time, their objective is just to raise more VC money.&lt;/p&gt;

&lt;p&gt;That said, we cannot deny that AI is here to stay and that many jobs, while not completely replaced, will change radically. I believe they will change in a good way: fewer repetitive or boring tasks, and more focused on solving challenging problems.&lt;/p&gt;

&lt;p&gt;The future of software development isn’t man vs machine, it’s &lt;strong&gt;collaboration&lt;/strong&gt;. Tools like Cursor are getting more powerful: inline chat, project wide edits, agent mode. But none of it replaces &lt;strong&gt;the craft&lt;/strong&gt;. It just makes skilled developers with strong foundations even faster.&lt;/p&gt;

&lt;p&gt;Learn the basics of the languages you use, learn about design patterns, learn software architecture.&lt;/p&gt;

&lt;p&gt;AI isn't magic, but you can significantly increase productivity if you &lt;strong&gt;know what you're doing&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The future is now.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>Corporate is the Bottleneck, AI Changed the Game</title>
      <dc:creator>Joaquin Diaz</dc:creator>
      <pubDate>Thu, 12 Jun 2025 12:43:47 +0000</pubDate>
      <link>https://forem.com/joacod/corporate-is-the-bottleneck-ai-changed-the-game-3p46</link>
      <guid>https://forem.com/joacod/corporate-is-the-bottleneck-ai-changed-the-game-3p46</guid>
<description>&lt;p&gt;Working at a corporate company feels like time has stopped. The speed of development is ridiculously slow. They are trying to use AI to improve, but the problem is not the developers; it is the infinite processes and the layers upon layers of management.&lt;/p&gt;

&lt;p&gt;The Agile Manifesto was created to help developers work better; corporate transformed it into a tool of control and bureaucracy. Let's be honest: whatever they are doing is not Scrum, and much less agile.&lt;/p&gt;

&lt;p&gt;Big companies aren't dumb. They understand the problem they're in, but the restructuring needed to fix it is massive, one misstep can backfire, and it is backfiring. The different rounds of layoffs, in most cases very poorly managed, only worsen employee sentiment: key people fired, many others resigning, and an incalculable loss of real knowledge. They are entering uncharted territory.&lt;/p&gt;

&lt;p&gt;It's the time of lightweight startups. The difference is abysmal; the speed is dizzying. Corporate is the current champion: confident, poorly trained, and pedantic. Startups are the challenging underdogs: they are hungry, they have the eye of the tiger, and they are going for the title fight.&lt;/p&gt;

&lt;p&gt;The game has changed, the playing field has leveled, the time is now.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>leadership</category>
      <category>ai</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
