<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Lili DL</title>
    <description>The latest articles on Forem by Lili DL (@iacriolla).</description>
    <link>https://forem.com/iacriolla</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1013542%2F995cd616-d565-4381-9e6d-5e5b07dd2ae3.jpeg</url>
      <title>Forem: Lili DL</title>
      <link>https://forem.com/iacriolla</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/iacriolla"/>
    <language>en</language>
    <item>
      <title>Humans, Machines, and Ratatouille 🐀</title>
      <dc:creator>Lili DL</dc:creator>
      <pubDate>Sun, 18 Jan 2026 05:42:09 +0000</pubDate>
      <link>https://forem.com/iacriolla/humans-machines-and-ratatouille-2d4p</link>
      <guid>https://forem.com/iacriolla/humans-machines-and-ratatouille-2d4p</guid>
      <description>&lt;p&gt;&lt;em&gt;A pragmatic response to system complexity in AI systems&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb9g21zk1eit7jtcaslv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb9g21zk1eit7jtcaslv.gif" alt="gif2" width="350" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a long time, we designed digital products the way many people imagine a kitchen works. If the dish looks good, the job is done.&lt;/p&gt;

&lt;p&gt;We optimized for presentation. Polished slides. Beautiful PDFs. Documents that look right. They became the default containers for organizational knowledge.&lt;/p&gt;

&lt;p&gt;And for humans, that mostly worked.&lt;/p&gt;

&lt;p&gt;Then agents and LLMs entered the kitchen. Suddenly, presentation was no longer enough.&lt;/p&gt;

&lt;p&gt;Today, the same information must be consumable not only by people, but also by systems that search, reuse, summarize, and reason over it. And yet, we keep handing machines beautifully plated dishes without recipes, and then blame the kitchen when the result is inconsistent.&lt;/p&gt;

&lt;p&gt;Many of the difficulties we attribute to AI complexity originate much earlier, in how knowledge is prepared.&lt;/p&gt;

&lt;h2&gt;
  When the kitchen meets RAG
&lt;/h2&gt;

&lt;p&gt;In many organizations, the first serious attempt to apply AI to internal knowledge comes through Retrieval-Augmented Generation (RAG). The idea sounds simple: take existing documents, connect them to a model, and ask questions.&lt;/p&gt;

&lt;p&gt;In practice, friction appears. When knowledge lives in PDFs or slide decks, RAG systems must reconstruct structure that was never explicit. Headings are inferred, sections are guessed, and chunking becomes heuristic.&lt;/p&gt;
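&lt;p&gt;A minimal sketch of what "heuristic chunking" often looks like in practice (the function name and character budget are invented for illustration, not taken from any specific RAG framework): with no explicit structure to rely on, the text is split at a fixed size and the boundaries land wherever they land, including mid-section and mid-thought.&lt;/p&gt;

```python
def chunk_by_size(text, max_chars=200):
    """Naive fixed-size chunking: split on whitespace near a character
    budget, with no awareness of headings or sections."""
    words = text.split()
    chunks, current, count = [], [], 0
    for word in words:
        # Flush the current chunk once adding another word would
        # exceed the budget -- the boundary is purely positional.
        if count + len(word) + 1 > max_chars and current:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(word)
        count += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks
```

&lt;p&gt;Nothing in this loop knows where one topic ends and the next begins, which is exactly why retrieval over such chunks gets noisy.&lt;/p&gt;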

&lt;p&gt;From the system’s point of view, this is not following a recipe. It is tasting the dish and guessing the ingredients.&lt;/p&gt;

&lt;p&gt;The symptoms are familiar. Noisy retrieval, inconsistent answers, and layers of fixes that increase cost and fragility. At some point, the question becomes unavoidable.&lt;/p&gt;

&lt;p&gt;Why are we spending so much effort interpreting information that humans already understand?&lt;/p&gt;

&lt;h2&gt;
  Ratatouille and coordination
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedugbce7qvjq3ouzreip.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedugbce7qvjq3ouzreip.gif" alt="gif1" width="245" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ratatouille captures this problem surprisingly well.&lt;/p&gt;

&lt;p&gt;Remy is an exceptional cook, and Linguini can function inside a professional kitchen. Individually, neither can consistently produce chef-level dishes.&lt;/p&gt;

&lt;p&gt;What makes their collaboration work is not talent but a shared operational language. Gestures, constraints, and conventions allow intent and execution to stay aligned.&lt;/p&gt;

&lt;p&gt;Without that language, the kitchen collapses into chaos. With it, results become repeatable.&lt;/p&gt;

&lt;p&gt;Modern AI systems face the same challenge. Humans understand meaning intuitively, while machines execute with precision. Without a shared representation of knowledge, systems are forced to guess.&lt;/p&gt;

&lt;h2&gt;
  The recipe as interface
&lt;/h2&gt;

&lt;p&gt;A professional kitchen does not rely on plating to function. It relies on recipes.&lt;/p&gt;

&lt;p&gt;Recipes make structure explicit and allow multiple cooks to coordinate without guessing. Designing for humans and machines follows the same logic.&lt;/p&gt;

&lt;p&gt;Humans need narrative and readability. Machines need explicit structure and unambiguous meaning. This is where text-based, structured formats become critical.&lt;/p&gt;

&lt;p&gt;Markdown works not because it is simple, but because it is explicit. It is readable by humans, easy to version and diff, and straightforward to process programmatically.&lt;/p&gt;

&lt;p&gt;More importantly, Markdown treats text as an interface. When agents and LLMs interact with knowledge, they do not consume visuals; they consume structure.&lt;/p&gt;

&lt;p&gt;A single Markdown source can be rendered for humans while remaining directly consumable by systems, pipelines, and automated agents. In that sense, Markdown is less about documentation and more about coordination.&lt;/p&gt;
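&lt;p&gt;As a sketch of that coordination (not code from any particular library), here is what consuming Markdown as an interface can look like: the explicit &lt;code&gt;#&lt;/code&gt; headings give a pipeline its chunk boundaries for free, so the same file a human reads can be indexed section by section.&lt;/p&gt;

```python
def chunk_by_heading(markdown):
    """Split a Markdown document into (heading, body) sections using
    the explicit '#' structure instead of guessing boundaries."""
    sections = []
    heading, body = None, []
    for line in markdown.splitlines():
        if line.startswith("#"):
            # A new heading closes the previous section.
            if heading is not None or body:
                sections.append((heading, "\n".join(body).strip()))
            heading, body = line.lstrip("#").strip(), []
        else:
            body.append(line)
    sections.append((heading, "\n".join(body).strip()))
    return sections
```

&lt;p&gt;The contrast with fixed-size chunking is the point: the recipe (structure) was written down up front, so no one has to taste the dish and guess.&lt;/p&gt;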

&lt;h2&gt;
  Why many RAG systems feel brittle
&lt;/h2&gt;

&lt;p&gt;Many limitations in RAG systems are not rooted in model capability but in how knowledge was structured in the first place.&lt;/p&gt;

&lt;p&gt;When rendered documents become the source of truth, downstream systems must reverse-engineer meaning after the fact. When knowledge is created in structured, semantically transparent formats first, entire classes of problems disappear.&lt;/p&gt;

&lt;p&gt;Retrieval improves, maintenance drops, and systems become easier to extend. In AI systems, representation is often the hidden bottleneck.&lt;/p&gt;

&lt;h2&gt;
  A final question from the kitchen
&lt;/h2&gt;

&lt;p&gt;Ratatouille is not really a story about learning how to cook. Remy already had the skills. The real challenge was making collaboration possible inside a complex system.&lt;/p&gt;

&lt;p&gt;Where in your systems are machines still guessing the recipe from the final dish? Sometimes, optimizing development starts with writing the recipe before plating the dish.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>rag</category>
      <category>architecture</category>
      <category>development</category>
    </item>
    <item>
      <title>Embeddings? Your brain invented them first</title>
      <dc:creator>Lili DL</dc:creator>
      <pubDate>Sat, 23 Aug 2025 22:44:38 +0000</pubDate>
      <link>https://forem.com/iacriolla/embeddings-your-brain-invented-them-first-4f47</link>
      <guid>https://forem.com/iacriolla/embeddings-your-brain-invented-them-first-4f47</guid>
      <description>&lt;p&gt;Picture this:&lt;/p&gt;

&lt;p&gt;You walk onto a bus.&lt;br&gt;
Dozens of seats. Some taken, some empty.&lt;br&gt;
You scan the scene in under two seconds, and then sit.&lt;/p&gt;

&lt;p&gt;Not next to the loud teenager.&lt;br&gt;
Not next to the old man with the newspaper.&lt;br&gt;
You choose the seat next to the girl with headphones and a laptop.&lt;/p&gt;

&lt;p&gt;Why? You’re embedding.&lt;/p&gt;

&lt;h2&gt;
  Wait, what?
&lt;/h2&gt;

&lt;p&gt;Embeddings aren’t just for AI. They're a &lt;strong&gt;cognitive shortcut&lt;/strong&gt;: a way of mapping complex, fuzzy, emotional data into decisions we can make quickly.&lt;/p&gt;

&lt;p&gt;When your brain processes a space like a bus or a cafeteria, it runs this internal logic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“That person kinda looks like me.”&lt;/li&gt;
&lt;li&gt;“This side of the room feels chill.”&lt;/li&gt;
&lt;li&gt;“That group gives off strong ‘we all know each other’ energy, avoid.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’re not calculating dot products. &lt;br&gt;
But you &lt;em&gt;are&lt;/em&gt; assessing &lt;strong&gt;affinities&lt;/strong&gt; in high-dimensional social space.&lt;/p&gt;
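&lt;p&gt;Those dot products are, under the hood, all an embedding comparison is. A toy sketch with made-up three-dimensional "social" vectors (real embedding models use hundreds of dimensions, and these axes and scores are invented for illustration):&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: the dot product of two vectors, normalized
    by their lengths. Near 1.0 means same direction; near 0.0 means
    unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented axes: [has_laptop, has_headphones, is_loud]
you = [0.9, 0.8, 0.1]
girl_with_laptop = [0.8, 0.9, 0.2]
loud_teenager = [0.1, 0.3, 0.9]

print(cosine_similarity(you, girl_with_laptop))  # close to 1
print(cosine_similarity(you, loud_teenager))     # noticeably lower
```

&lt;p&gt;Your brain skips the arithmetic, but the comparison it makes is the same shape.&lt;/p&gt;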

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xgt8mj507gjkf9bq5k8.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xgt8mj507gjkf9bq5k8.gif" alt="gif1" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Homophily in action
&lt;/h2&gt;

&lt;p&gt;Psychologists call this &lt;strong&gt;homophily&lt;/strong&gt;: our tendency to associate with those we perceive as similar. And it’s not just deep stuff like values or beliefs. It’s surface-level cues: clothing, body language, vibe.  &lt;/p&gt;

&lt;p&gt;In a famous 1960s experiment, people were more likely to sit near someone they perceived as similar: in age, dress, gender, even posture.&lt;br&gt;
They didn’t &lt;em&gt;talk&lt;/em&gt; to them. They just &lt;em&gt;sat closer&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Just like embeddings cluster similar meanings.&lt;br&gt;
Just like vector search finds the closest match.&lt;br&gt;
Just like Netflix recommends, “if you liked that, you’ll probably vibe with this.”&lt;/p&gt;

&lt;h2&gt;
  Your brain: the OG vectorizer engine
&lt;/h2&gt;

&lt;p&gt;Humans have been doing this for millennia. &lt;br&gt;
When we scan a crowd, we mentally reduce everyone into &lt;strong&gt;latent features&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trustworthy vs sketchy
&lt;/li&gt;
&lt;li&gt;Familiar vs foreign
&lt;/li&gt;
&lt;li&gt;Friendly vs intense
&lt;/li&gt;
&lt;li&gt;Safe vs unpredictable&lt;/li&gt;
&lt;/ul&gt;
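&lt;p&gt;That list is, in effect, a tiny feature vector. A hedged sketch of the same idea in code (names, axes, and scores all invented for illustration): score each person on those four dimensions, then pick the nearest neighbor.&lt;/p&gt;

```python
def nearest(query, candidates):
    """Return the (name, vector) candidate whose feature vector is
    closest to the query, by squared Euclidean distance."""
    def dist(vector):
        return sum((q - c) ** 2 for q, c in zip(query, vector))
    return min(candidates, key=lambda item: dist(item[1]))

# Invented axes: [trustworthy, familiar, friendly, safe], each 0..1
people = [
    ("loud teenager",        [0.4, 0.3, 0.6, 0.3]),
    ("man with newspaper",   [0.7, 0.4, 0.4, 0.8]),
    ("girl with headphones", [0.8, 0.9, 0.7, 0.9]),
]
me = [0.8, 0.8, 0.7, 0.9]
print(nearest(me, people)[0])  # the most "like you" candidate
```

&lt;p&gt;This is the same operation a vector database runs at scale: reduce to latent features, then minimize distance.&lt;/p&gt;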

&lt;p&gt;Then we act fast. Not always fairly. Not always consciously. &lt;br&gt;
But efficiently. It’s &lt;strong&gt;unsupervised learning with bias baked in&lt;/strong&gt;.&lt;br&gt;
Sound familiar?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdilgel2f0c44dijwpvqf.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdilgel2f0c44dijwpvqf.gif" alt="gif2" width="480" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Aaand this matters because...?
&lt;/h2&gt;

&lt;p&gt;We often talk about embeddings as machine magic. &lt;br&gt;
But they’re deeply inspired by &lt;strong&gt;how we, as humans, navigate meaning, similarity, and context.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We &lt;em&gt;vectorize&lt;/em&gt; every social situation we enter.&lt;/p&gt;

&lt;p&gt;So next time you sit on a bus and instinctively pick a seat near someone "like you", know that your brain just executed a high-dimensional similarity search.&lt;/p&gt;

&lt;p&gt;The ultimate embedding model isn’t on HuggingFace; it’s in your head.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>learning</category>
      <category>nlp</category>
    </item>
  </channel>
</rss>
