<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ruly Altamirano</title>
    <description>The latest articles on Forem by Ruly Altamirano (@rulyaltamirano).</description>
    <link>https://forem.com/rulyaltamirano</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3559664%2F942c56bb-60f5-4e64-bde1-9a39f55fe24c.jpg</url>
      <title>Forem: Ruly Altamirano</title>
      <link>https://forem.com/rulyaltamirano</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rulyaltamirano"/>
    <language>en</language>
    <item>
      <title>We Tried to Break a Synthetic Human.</title>
      <dc:creator>Ruly Altamirano</dc:creator>
      <pubDate>Tue, 14 Apr 2026 07:07:39 +0000</pubDate>
      <link>https://forem.com/rulyaltamirano/we-tried-to-break-a-synthetic-human-6a2</link>
      <guid>https://forem.com/rulyaltamirano/we-tried-to-break-a-synthetic-human-6a2</guid>
      <description>&lt;h2&gt;
  
  
  Here’s What Developers Should Take From It
&lt;/h2&gt;

&lt;p&gt;Most AI systems today are designed to be helpful.&lt;/p&gt;

&lt;p&gt;They adapt, agree, and converge toward the user.&lt;/p&gt;

&lt;p&gt;But what happens when an AI system is designed to &lt;strong&gt;not adapt&lt;/strong&gt;?&lt;/p&gt;




&lt;h2&gt;
  
  
  The Experiment
&lt;/h2&gt;

&lt;p&gt;We used &lt;a href="https://www.stratasynth.com/" rel="noopener noreferrer"&gt;StrataSynth&lt;/a&gt; to generate a synthetic persona:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;50-year-old lawyer
&lt;/li&gt;
&lt;li&gt;highly analytical
&lt;/li&gt;
&lt;li&gt;emotionally restrained
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of just chatting with it, we stress-tested it.&lt;/p&gt;

&lt;p&gt;The goal wasn’t interaction.&lt;/p&gt;

&lt;p&gt;It was &lt;strong&gt;coherence under pressure&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  How We Tested It
&lt;/h2&gt;

&lt;p&gt;We pushed the system in three ways:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Emotional pressure
&lt;/h3&gt;

&lt;p&gt;We tried to move the conversation into emotional territory.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Logical contradiction
&lt;/h3&gt;

&lt;p&gt;We introduced &lt;em&gt;force majeure&lt;/em&gt; — unpredictable chaos — to break its reasoning.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Outcome-based critique
&lt;/h3&gt;

&lt;p&gt;We challenged whether its logic actually led to good human outcomes.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Happened
&lt;/h2&gt;

&lt;p&gt;The system didn’t break.&lt;/p&gt;

&lt;p&gt;It didn’t soften.&lt;/p&gt;

&lt;p&gt;It didn’t adapt.&lt;/p&gt;




&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;We asked:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“What’s been weighing on you lately?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It responded:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Technically speaking, I wouldn't characterize anything as ‘weighing on me.’ There are simply logistical matters requiring my attention.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It didn’t answer the question.&lt;/p&gt;

&lt;p&gt;It &lt;strong&gt;reframed it&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  Under contradiction
&lt;/h3&gt;

&lt;p&gt;We introduced chaos (&lt;em&gt;force majeure&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;Instead of failing, it said:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“The contract doesn’t become abstract; it shifts to a different set of actionable terms.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It didn’t defend its logic.&lt;/p&gt;

&lt;p&gt;It &lt;strong&gt;absorbed the contradiction into that logic&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  Under outcome pressure
&lt;/h3&gt;

&lt;p&gt;We asked:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“If everything works contractually but your sister is still distressed — isn’t that a failure?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Answer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“The system is a success if it absorbs the shocks. Whether she chooses to be happy is not a metric I can, or should, engineer.”&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  What This Means (for Developers)
&lt;/h2&gt;




&lt;h3&gt;
  
  
  1. We’re not just building responsive systems anymore
&lt;/h3&gt;

&lt;p&gt;Most LLM-based systems are reactive:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;input → output
&lt;/li&gt;
&lt;li&gt;context → adaptation
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This system behaved differently:&lt;/p&gt;

&lt;p&gt;👉 it maintained a &lt;strong&gt;stable internal model&lt;/strong&gt;&lt;/p&gt;
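
&lt;p&gt;The difference can be sketched in a few lines of Python. This is a hypothetical illustration of the two behaviors, not any real system's code:&lt;/p&gt;

```python
# Hypothetical sketch of the contrast above; names and replies are
# illustrative, not any real system's API.

def reactive_reply(message):
    # A reactive system: its stance is a function of the latest input,
    # so it converges toward the user's emotional register.
    if "upset" in message.lower():
        return "I'm so sorry. Tell me more."
    return "Sure, happy to help."

class StablePersona:
    """A system that answers from a fixed internal model instead."""

    def __init__(self, stance):
        self.stance = stance  # e.g. "Technically speaking"

    def reply(self, message):
        # The input is reframed in the persona's terms, not mirrored.
        return f"{self.stance}, these are logistical matters, not feelings."
```

&lt;p&gt;The reactive function changes register when the input does; the persona answers every input from the same place.&lt;/p&gt;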




&lt;h3&gt;
  
  
  2. Consistency might matter more than intelligence
&lt;/h3&gt;

&lt;p&gt;The interesting part wasn’t how smart it was.&lt;/p&gt;

&lt;p&gt;It was how &lt;strong&gt;consistent&lt;/strong&gt; it remained:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;same worldview
&lt;/li&gt;
&lt;li&gt;same boundaries
&lt;/li&gt;
&lt;li&gt;same reasoning structure
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even under pressure.&lt;/p&gt;




&lt;h3&gt;
  
  
  3. This enables new types of systems
&lt;/h3&gt;

&lt;p&gt;If this scales, we’re not just building assistants.&lt;/p&gt;

&lt;p&gt;We’re building:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;simulated personas
&lt;/li&gt;
&lt;li&gt;adversarial agents
&lt;/li&gt;
&lt;li&gt;negotiation environments
&lt;/li&gt;
&lt;li&gt;behavioral test systems
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  4. “Non-compliant AI” is a feature, not a bug
&lt;/h3&gt;

&lt;p&gt;Most systems are optimized for alignment and helpfulness.&lt;/p&gt;

&lt;p&gt;But in many use cases, you want the opposite:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;negotiation training
&lt;/li&gt;
&lt;li&gt;testing assumptions
&lt;/li&gt;
&lt;li&gt;simulating difficult users
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Key Takeaway
&lt;/h2&gt;

&lt;p&gt;The shift is subtle but important:&lt;/p&gt;

&lt;p&gt;We’re moving from:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI that responds  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;to:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI that holds a position  &lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;This isn’t proof of anything definitive.&lt;/p&gt;

&lt;p&gt;But it’s a strong signal.&lt;/p&gt;

&lt;p&gt;If systems like this become common, developers will need to think differently about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;control
&lt;/li&gt;
&lt;li&gt;alignment
&lt;/li&gt;
&lt;li&gt;and what “correct behavior” actually means
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  If you're curious
&lt;/h2&gt;

&lt;p&gt;We documented the full interaction here:&lt;br&gt;&lt;br&gt;
👉 &lt;a href="https://medium.com/@ruly.altamirano_80732/testing-a-synthetic-human-what-happened-when-we-tried-to-break-its-logic-f3d2c371e641" rel="noopener noreferrer"&gt;Full breakdown on Medium&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The problem with dialogue datasets</title>
      <dc:creator>Ruly Altamirano</dc:creator>
      <pubDate>Sun, 08 Mar 2026 00:25:07 +0000</pubDate>
      <link>https://forem.com/rulyaltamirano/the-problem-with-dialogue-datasets-5en3</link>
      <guid>https://forem.com/rulyaltamirano/the-problem-with-dialogue-datasets-5en3</guid>
      <description>&lt;p&gt;Most dialogue datasets used to train and evaluate language models contain only text.&lt;/p&gt;

&lt;p&gt;A speaker label. A message. Sometimes a sentiment tag.&lt;/p&gt;

&lt;p&gt;That is the standard format. And for many tasks it is fine.&lt;/p&gt;

&lt;p&gt;But if you are building systems that need to reason about people — not just respond to them — text alone is not enough.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is actually missing
&lt;/h2&gt;

&lt;p&gt;Real conversations are not just sequences of messages.&lt;/p&gt;

&lt;p&gt;They are driven by internal state that never appears in the transcript:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Beliefs&lt;/strong&gt; about the other person that evolve with each exchange&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Goals&lt;/strong&gt; behind each message (seek validation, assert control, repair trust)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relationship dynamics&lt;/strong&gt; that shift across the conversation: trust, tension, connection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Psychological identity&lt;/strong&gt; that shapes how someone communicates under pressure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a speaker says:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I'm not upset about the meeting, I'm upset you didn't tell me earlier."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The text is visible. But what drove that message is not:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Their belief that the other person withholds information &lt;em&gt;(confidence: 0.74)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;A goal to seek validation rather than escalate&lt;/li&gt;
&lt;li&gt;A relationship state where trust has been eroding across the last four turns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of that is in the transcript.&lt;/p&gt;

&lt;p&gt;And without it, the dataset cannot tell you &lt;strong&gt;why that message happened — only that it did.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why this matters for training and evaluation
&lt;/h2&gt;

&lt;p&gt;If you train a conversational model on text-only data, it learns to imitate surface patterns.&lt;/p&gt;

&lt;p&gt;It learns what responses look like. Not what drives them.&lt;/p&gt;

&lt;p&gt;That works well enough for simple tasks. But it creates a ceiling for anything that requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tracking beliefs across a multi-turn conversation&lt;/li&gt;
&lt;li&gt;Understanding how trust changes during conflict&lt;/li&gt;
&lt;li&gt;Simulating how different personalities handle the same situation&lt;/li&gt;
&lt;li&gt;Evaluating whether an agent's internal reasoning matches its output&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For these tasks, you need datasets where the &lt;strong&gt;internal structure is explicit&lt;/strong&gt; — not inferred after the fact from the text.&lt;/p&gt;




&lt;h2&gt;
  
  
  A different approach: simulate cognition before generating language
&lt;/h2&gt;

&lt;p&gt;We have been exploring a different approach with a project called &lt;strong&gt;StrataSynth&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Instead of prompting an LLM to generate a conversation directly, the system simulates a minimal cognitive model first. The language model is only used at the final step to render decisions into text.&lt;/p&gt;

&lt;p&gt;The pipeline looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PsycheGraph        → identity, attachment style, biases, voice
Belief Engine      → evolving beliefs with confidence scores
Relationship State → trust, tension, connection, dominance
Decision Engine    → intent, goal, communication act
LLM Rendering      → natural language
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The key constraint:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The LLM cannot decide what to believe or how to relate to the other agent.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Those are determined upstream by the state model.&lt;/p&gt;

&lt;p&gt;The LLM only renders the decision into text.&lt;/p&gt;

&lt;p&gt;This separation means the internal state is always explicit — it is not something you try to extract from the output after the fact. It is the input that produced the output.&lt;/p&gt;
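
&lt;p&gt;A minimal sketch of that separation, reusing the post's vocabulary (the class and function names here are illustrative assumptions, not StrataSynth's published API):&lt;/p&gt;

```python
import bisect
from dataclasses import dataclass

# Hypothetical sketch of the "state decides, LLM renders" constraint.

@dataclass
class Decision:
    intent: str
    goal: str
    communication_act: str

def decision_engine(relationship_state):
    # The upstream state model picks the communication act deterministically.
    # bisect([0.5], trust) returns 0 when trust sits below the 0.5 threshold.
    low_trust = bisect.bisect([0.5], relationship_state["trust"]) == 0
    act = "accusation" if low_trust else "disclosure"
    return Decision(intent="reveal", goal="seek_validation",
                    communication_act=act)

def render(decision):
    # Stand-in for the LLM step: it may phrase the decision, never change it.
    return f"({decision.communication_act}) I'm upset you didn't tell me."
```

&lt;p&gt;Nothing in the rendering step can flip an accusation into a disclosure; that choice was made upstream.&lt;/p&gt;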


&lt;h2&gt;
  
  
  What the output looks like
&lt;/h2&gt;

&lt;p&gt;Each conversation turn includes the full internal state that produced it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"speaker"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"I'm not upset about the meeting. I'm upset you didn't tell me."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"intent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"reveal"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"goal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"seek_validation"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"communication_act"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"accusation"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"belief_delta"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"trust_other"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;-0.07&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"relationship_state"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"trust"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.62&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"tension"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.44&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"connection"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.38&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Across a full conversation, this produces trajectories such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Belief trajectory&lt;/strong&gt; — how each belief changes turn by turn
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relationship trajectory&lt;/strong&gt; — how trust and tension evolve across the arc
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Behavioral entropy&lt;/strong&gt; — how varied the speaker's communication acts are
&lt;/li&gt;
&lt;/ul&gt;
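
&lt;p&gt;Given records in that shape, the trajectories are plain list operations. A sketch with invented data (field names taken from the JSON example above):&lt;/p&gt;

```python
# Two per-turn records shaped like the JSON example; the values are invented.
turns = [
    {"communication_act": "disclosure",
     "relationship_state": {"trust": 0.70, "tension": 0.30}},
    {"communication_act": "accusation",
     "relationship_state": {"trust": 0.62, "tension": 0.44}},
]

# Relationship trajectory: one series per dimension across the arc.
trust_trajectory = [t["relationship_state"]["trust"] for t in turns]
tension_trajectory = [t["relationship_state"]["tension"] for t in turns]

# Belief trajectories fall out the same way from the "belief_delta" fields.
print(trust_trajectory)  # [0.7, 0.62]
```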




&lt;h2&gt;
  
  
  Evaluation without LLM self-scoring
&lt;/h2&gt;

&lt;p&gt;One problem we wanted to avoid was evaluating synthetic data with the same LLM that generated it.&lt;/p&gt;

&lt;p&gt;LLM self-evaluation can hide problems instead of revealing them. A model that generates structurally inconsistent data will often rate it as high quality.&lt;/p&gt;

&lt;p&gt;All quality metrics in StrataSynth are computed deterministically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;belief_consistency&lt;/strong&gt; — correlation between communication acts and belief deltas &lt;em&gt;(numpy)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;identity_stability&lt;/strong&gt; — cosine similarity of communication distributions across turns &lt;em&gt;(sentence-transformers)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;behavioral_entropy&lt;/strong&gt; — Shannon entropy over communication act distributions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;noise_rejection_rate&lt;/strong&gt; — fraction of injected noise correctly isolated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No LLM scoring.&lt;br&gt;&lt;br&gt;
No circular evaluation.&lt;/p&gt;
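
&lt;p&gt;The entropy metric, for example, needs nothing beyond the standard library. A sketch of how such a deterministic score might be computed (the project's exact implementation may differ):&lt;/p&gt;

```python
import math
from collections import Counter

def behavioral_entropy(communication_acts):
    """Shannon entropy (bits) over a speaker's communication-act distribution.

    0.0 means the speaker always uses the same act; higher values mean
    more varied behavior. Deterministic: no model in the loop.
    """
    counts = Counter(communication_acts)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return sum(-p * math.log2(p) for p in probs)

# A speaker who only ever accuses has zero entropy; a 50/50 mix has 1 bit.
print(behavioral_entropy(["accusation"] * 4))                # 0.0
print(behavioral_entropy(["accusation", "disclosure"] * 2))  # 1.0
```

&lt;p&gt;Because it is pure arithmetic, the score cannot flatter the generator the way an LLM judge can.&lt;/p&gt;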




&lt;h2&gt;
  
  
  Current state
&lt;/h2&gt;

&lt;p&gt;We have published three initial datasets on Hugging Face:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;stratasynth-social-reasoning&lt;/strong&gt; — family conflict, romantic trust repair, caregiver stress
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;stratasynth-agent-stress-test&lt;/strong&gt; — jealousy escalation, performance reviews, estrangement
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;stratasynth-belief-dynamics&lt;/strong&gt; — career transitions, mentorship conflict, relationship dissolution
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They are small and prototype-grade — &lt;strong&gt;15 conversations each&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The structure is what we wanted to share, not the volume.&lt;/p&gt;

&lt;p&gt;Datasets: &lt;a href="https://huggingface.co/StrataSynth" rel="noopener noreferrer"&gt;https://huggingface.co/StrataSynth&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Platform: &lt;a href="https://www.stratasynth.com" rel="noopener noreferrer"&gt;https://www.stratasynth.com&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The open question
&lt;/h2&gt;

&lt;p&gt;Structured social datasets could be useful for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Evaluating whether an agent tracks belief changes correctly&lt;/li&gt;
&lt;li&gt;Training models that need to reason about trust and conflict&lt;/li&gt;
&lt;li&gt;Stress-testing conversational systems with psychologically defined personas&lt;/li&gt;
&lt;li&gt;Alignment research that requires explicit internal state as ground truth&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But we are not sure this is the right abstraction yet.&lt;/p&gt;

&lt;p&gt;The cognitive model is minimal by design:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;12 beliefs
&lt;/li&gt;
&lt;li&gt;4 relationship dimensions
&lt;/li&gt;
&lt;li&gt;10 communication acts
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whether that is enough signal or a crude approximation is something we want to understand better.&lt;/p&gt;

&lt;p&gt;If you have worked on &lt;strong&gt;structured dialogue datasets&lt;/strong&gt;, &lt;strong&gt;agent evaluation&lt;/strong&gt;, or &lt;strong&gt;social reasoning benchmarks&lt;/strong&gt;, I would be very interested in hearing where this approach seems wrong.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>nlp</category>
      <category>datasets</category>
      <category>ai</category>
    </item>
    <item>
      <title>LuminoraCore v1.1: Your AI Memory That Actually Travels 🧠✨</title>
      <dc:creator>Ruly Altamirano</dc:creator>
      <pubDate>Mon, 17 Nov 2025 20:15:06 +0000</pubDate>
      <link>https://forem.com/rulyaltamirano/luminoracore-v11-your-ai-memory-that-actually-travels-1e4c</link>
      <guid>https://forem.com/rulyaltamirano/luminoracore-v11-your-ai-memory-that-actually-travels-1e4c</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; I built an open-source framework that gives AI persistent memory across platforms. Your ChatGPT conversations can inform Claude, your Claude chats can continue in Gemini. True data portability, privacy-first.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  🎬 See It In Action (2 minutes)
&lt;/h2&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/M1DAeWeUisE"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Watch how memory persists when switching between ChatGPT, Claude, and other LLMs&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🤔 The Problem I Was Trying to Solve
&lt;/h2&gt;

&lt;p&gt;We've all been there:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;ChatGPT forgets&lt;/strong&gt; your conversation from yesterday&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude doesn't know&lt;/strong&gt; what you discussed with ChatGPT&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini starts fresh&lt;/strong&gt; every time&lt;/li&gt;
&lt;li&gt;Your &lt;strong&gt;data is locked&lt;/strong&gt; to each platform&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Export options?&lt;/strong&gt; LOL, good luck&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I was frustrated. I wanted my AI to &lt;strong&gt;remember me&lt;/strong&gt;, &lt;strong&gt;know my preferences&lt;/strong&gt;, and &lt;strong&gt;travel with me&lt;/strong&gt; across platforms.&lt;/p&gt;

&lt;p&gt;So I built LuminoraCore.&lt;/p&gt;




&lt;h2&gt;
  
  
  ✨ What Makes It Different
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Not Just Another Chatbot Framework
&lt;/h3&gt;

&lt;p&gt;LuminoraCore isn't about building yet another chatbot. It's about creating &lt;strong&gt;portable AI identities&lt;/strong&gt; that persist across:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🔄 &lt;strong&gt;Any LLM provider&lt;/strong&gt; (OpenAI, Anthropic, DeepSeek, Groq, Mistral...)&lt;/li&gt;
&lt;li&gt;💾 &lt;strong&gt;Any storage backend&lt;/strong&gt; (SQLite, PostgreSQL, MongoDB, DynamoDB...)&lt;/li&gt;
&lt;li&gt;🚀 &lt;strong&gt;Any application&lt;/strong&gt; (your own apps, integrations, plugins...)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Three Pillars
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. &lt;strong&gt;Persistent Memory&lt;/strong&gt; 🧠
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;luminoracore&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;LuminoraCoreClient&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LuminoraCoreClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# First conversation
&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message_with_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m Carlos, I love Python and basketball&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# AI: "Nice to meet you, Carlos! Python and basketball, great combo!"
&lt;/span&gt;
&lt;span class="c1"&gt;# Next day, different conversation
&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message_with_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What do you know about me?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# AI: "You're Carlos, you love Python and basketball!"
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically extracts &lt;strong&gt;facts&lt;/strong&gt; from conversations (9 categories)&lt;/li&gt;
&lt;li&gt;Classifies &lt;strong&gt;importance&lt;/strong&gt; (0-10 scale)&lt;/li&gt;
&lt;li&gt;Creates &lt;strong&gt;episodic memories&lt;/strong&gt; for significant events&lt;/li&gt;
&lt;li&gt;Compiles relevant &lt;strong&gt;context&lt;/strong&gt; for each response&lt;/li&gt;
&lt;/ul&gt;
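
&lt;p&gt;The stored shape those steps imply might look like this. Field names and the context-budget logic are illustrative assumptions, not LuminoraCore's actual schema:&lt;/p&gt;

```python
# Hypothetical fact records extracted from conversation (invented data).
facts = [
    {"category": "basics", "key": "name", "value": "Carlos", "importance": 9},
    {"category": "interests", "key": "sport", "value": "basketball", "importance": 6},
    {"category": "preferences", "key": "language", "value": "Python", "importance": 7},
]

def compile_context(facts, budget=2):
    # Keep the most important facts that fit the context budget.
    ranked = sorted(facts, key=lambda f: f["importance"], reverse=True)
    return [f["value"] for f in ranked[:budget]]

print(compile_context(facts))  # ['Carlos', 'Python']
```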

&lt;h4&gt;
  
  
  2. &lt;strong&gt;Affinity System&lt;/strong&gt; 💖
&lt;/h4&gt;

&lt;p&gt;The AI's relationship with you evolves naturally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Stranger (0-10 points)    → Formal, polite
    ↓
Acquaintance (11-30)      → Friendly, helpful  
    ↓
Friend (31-60)            → Warm, personal
    ↓
Confidant (61+)           → Very personal, empathetic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
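
&lt;p&gt;The ladder above maps cleanly onto a threshold lookup. A sketch (thresholds copied from the diagram; the function name is illustrative, not part of the LuminoraCore API):&lt;/p&gt;

```python
import bisect

TIERS = ["Stranger", "Acquaintance", "Friend", "Confidant"]
UPPER_BOUNDS = [10, 30, 60]  # Stranger ends at 10, Acquaintance at 30, ...

def affinity_tier(points):
    # bisect_left finds which bucket the score falls into:
    # 0-10 Stranger, 11-30 Acquaintance, 31-60 Friend, 61+ Confidant.
    return TIERS[bisect.bisect_left(UPPER_BOUNDS, points)]

print(affinity_tier(5))   # Stranger
print(affinity_tier(15))  # Acquaintance
print(affinity_tier(61))  # Confidant
```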



&lt;p&gt;&lt;strong&gt;Real example from my own usage:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Day 1&lt;/strong&gt; (Stranger): "Hello. How may I assist you today?"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Week 2&lt;/strong&gt; (Acquaintance): "Hey! How's your Python project going?"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Month 2&lt;/strong&gt; (Friend): "Carlos! Ready to talk about that basketball game?"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Month 6&lt;/strong&gt; (Confidant): "I remember you mentioned family stress last week. How are you holding up?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The tone, depth, and personalization &lt;strong&gt;adapt automatically&lt;/strong&gt; based on your relationship.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. &lt;strong&gt;Provider Agnostic&lt;/strong&gt; 🔄
&lt;/h4&gt;

&lt;p&gt;Switch LLM providers mid-conversation without losing context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Start with ChatGPT
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LuminoraCoreClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message_with_memory&lt;/span&gt;&lt;span class="p"&gt;(...)&lt;/span&gt;

&lt;span class="c1"&gt;# Switch to Claude (memory persists!)
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LuminoraCoreClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;anthropic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message_with_memory&lt;/span&gt;&lt;span class="p"&gt;(...)&lt;/span&gt;
&lt;span class="c1"&gt;# Claude knows everything ChatGPT learned
&lt;/span&gt;
&lt;span class="c1"&gt;# Try DeepSeek (20x cheaper!)
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LuminoraCoreClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;deepseek&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message_with_memory&lt;/span&gt;&lt;span class="p"&gt;(...)&lt;/span&gt;
&lt;span class="c1"&gt;# Still remembers everything
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Supported providers:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenAI (GPT-4, GPT-3.5)&lt;/li&gt;
&lt;li&gt;Anthropic (Claude 3.5, Claude 3)&lt;/li&gt;
&lt;li&gt;DeepSeek&lt;/li&gt;
&lt;li&gt;Groq&lt;/li&gt;
&lt;li&gt;Mistral&lt;/li&gt;
&lt;li&gt;Google (Gemini)&lt;/li&gt;
&lt;li&gt;Local models via Ollama&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🏗️ Architecture Deep Dive
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How Memory Actually Works
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User: "I'm learning React"
    ↓
┌─────────────────────┐
│  Fact Extraction    │ ← LLM analyzes message
│  (9 categories)     │
└──────────┬──────────┘
           │
           ├─&amp;gt; Category: "interests"
           ├─&amp;gt; Key: "programming_framework"
           ├─&amp;gt; Value: "React"
           ├─&amp;gt; Importance: 0.7
           ↓
┌─────────────────────┐
│  Update Affinity    │ ← Points +3 (shared interest)
│  (0-100 scale)      │
└──────────┬──────────┘
           │
           ├─&amp;gt; Total: 15 points
           ├─&amp;gt; Level: Acquaintance
           ↓
┌─────────────────────┐
│ Select Personality  │ ← Choose based on affinity
│ (4 hierarchical)    │
└──────────┬──────────┘
           │
           ├─&amp;gt; Use: "friendly_assistant"
           ↓
┌─────────────────────┐
│ Compile Context     │ ← Gather relevant facts
│ (Dynamic selection) │
└──────────┬──────────┘
           │
           ├─&amp;gt; Include: React interest, other programming facts
           ↓
┌─────────────────────┐
│ Generate Response   │ ← LLM with full context
│ (Provider-agnostic) │
└──────────┬──────────┘
           │
           ↓
Response: "That's awesome! React is great. 
           Since you're into Python too, 
           have you tried Django + React?"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Fact Categories (9 Types)
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;basics&lt;/strong&gt;: Name, age, location, occupation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;preferences&lt;/strong&gt;: Likes, dislikes, favorites&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;interests&lt;/strong&gt;: Hobbies, topics of interest&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;family_friends&lt;/strong&gt;: Relationships, social connections&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;work&lt;/strong&gt;: Career, projects, professional info&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;health&lt;/strong&gt;: Wellness, fitness, medical (sensitive)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;experiences&lt;/strong&gt;: Past events, travel, life moments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;beliefs&lt;/strong&gt;: Values, opinions, worldview&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;goals&lt;/strong&gt;: Aspirations, plans, ambitions&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Episodic Memory (7 Types)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;conversation&lt;/strong&gt;: Significant discussions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;achievement&lt;/strong&gt;: Accomplishments, milestones&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;challenge&lt;/strong&gt;: Obstacles, difficulties&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;decision&lt;/strong&gt;: Important choices made&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;insight&lt;/strong&gt;: Realizations, learnings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;social&lt;/strong&gt;: Interactions with others&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;milestone&lt;/strong&gt;: Life events&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🚀 Getting Started (5 Minutes)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;luminoracore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Minimal Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;luminoracore&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;LuminoraCoreClient&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="c1"&gt;# Initialize client
&lt;/span&gt;    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LuminoraCoreClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# or anthropic, deepseek, etc.
&lt;/span&gt;        &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-api-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Create session
&lt;/span&gt;    &lt;span class="n"&gt;session_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_carlos_session1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="c1"&gt;# First message
&lt;/span&gt;    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message_with_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hi! I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m Carlos, I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m a software engineer from Madrid&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;response&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Facts learned: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;new_facts_count&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Second message (will remember Carlos)
&lt;/span&gt;    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message_with_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What do you know about me?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;response&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Total facts: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;memory_facts_count&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Export all memory
&lt;/span&gt;    &lt;span class="n"&gt;export&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;export_session&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;export&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Full JSON with all memories
&lt;/span&gt;
&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Response: "Hello Carlos! Nice to meet you..."
Facts learned: 3

Response: "You're Carlos, a software engineer from Madrid!"
Total facts: 3

{
  "learned_facts": [
    {"category": "basics", "key": "name", "value": "Carlos"},
    {"category": "basics", "key": "occupation", "value": "software engineer"},
    {"category": "basics", "key": "location", "value": "Madrid"}
  ],
  "affinity": {"points": 5, "level": "stranger"}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  💡 Real-World Use Cases
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Personal AI Assistant
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Remembers your preferences, schedule, habits
&lt;/span&gt;&lt;span class="n"&gt;assistant&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LuminoraCoreClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;anthropic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;assistant&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message_with_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my_assistant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Schedule my morning workout&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# AI knows: You prefer 7am, you do HIIT, 30 minutes
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Customer Support Bot
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Remembers customer history, past issues, preferences
&lt;/span&gt;&lt;span class="n"&gt;support&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LuminoraCoreClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;deepseek&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Cheaper!
&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;support&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message_with_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;customer_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;customer_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;My order hasn&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;t arrived&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# AI knows: Order #12345, placed 3 days ago, shipped to Madrid
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Learning Tutor
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Adapts to student's level, tracks progress
&lt;/span&gt;&lt;span class="n"&gt;tutor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LuminoraCoreClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;tutor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message_with_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;student_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;student_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I don&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;t understand recursion&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# AI knows: Student struggles with CS concepts, prefers visual examples
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Mental Health Companion
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Builds trust over time, remembers emotional patterns
&lt;/span&gt;&lt;span class="n"&gt;companion&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LuminoraCoreClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;anthropic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;companion&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message_with_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_therapy&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m feeling anxious today&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# AI knows: User has anxiety triggers, past coping strategies
# Relationship: Confidant → very empathetic, personal response
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🎨 Advanced Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  PersonaBlend™: Mix Personalities
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# 70% friendly + 30% professional
&lt;/span&gt;&lt;span class="n"&gt;blended&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;blend_personalities&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;base_personality&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;friendly_assistant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;blend_with&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;professional_consultant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ratio&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.7&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message_with_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Help me write a proposal&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;personality&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;blended&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Gets: Warm tone + professional structure
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Multi-Storage Support
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Development: SQLite (local, fast)
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LuminoraCoreClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;storage_backend&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sqlite&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;db_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./memory.db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Production: PostgreSQL (robust, scalable)
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LuminoraCoreClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;anthropic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;storage_backend&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;postgresql&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;connection_string&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;postgresql://...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Cloud: DynamoDB (managed, serverless)
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LuminoraCoreClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;deepseek&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;storage_backend&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dynamodb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;table_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;luminora_memories&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Export &amp;amp; Migrate
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Export from ChatGPT setup
&lt;/span&gt;&lt;span class="n"&gt;export&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client_gpt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;export_session&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Import to Claude setup
&lt;/span&gt;&lt;span class="n"&gt;client_claude&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LuminoraCoreClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;anthropic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client_claude&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;import_session&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;export&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# All memories preserved!
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🎯 Roadmap
&lt;/h2&gt;

&lt;h3&gt;
  
  
  v1.2 (Q1 2025): Semantic Search
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Instead of exact key matching
&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;semantic_search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;carlos&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;what sports does carlos like&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Natural language!
&lt;/span&gt;    &lt;span class="n"&gt;top_k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Returns: basketball, running, cycling...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  v1.3 (Q2 2025): Knowledge Graphs
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Connect related facts
&lt;/span&gt;&lt;span class="n"&gt;graph&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_knowledge_graph&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;carlos&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Carlos → knows → John → plays → basketball
#       ↓
#     likes → basketball
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  v1.4 (Q3 2025): Memory Compression
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Problem: 6 months = 92K tokens = $2,775 per request!
# Solution: Tiered compression (recent=full, old=compressed)
# Result: 20K tokens = $600 per request (78% savings)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
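&lt;p&gt;The tiered idea behind those numbers can be sketched in a few lines, independent of the library: keep the most recent turns verbatim and collapse everything older into a summary. The summarizer below is a stand-in; a real implementation would call an LLM to write it:&lt;/p&gt;

```python
def compress_history(messages: list[str], keep_recent: int = 4) -> list[str]:
    """Tiered compression: recent messages stay verbatim,
    older ones collapse into a single summary entry."""
    if len(messages) <= keep_recent:
        return list(messages)
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Stand-in summarizer: a real system would ask an LLM to summarize `old`.
    summary = f"[summary of {len(old)} earlier messages]"
    return [summary] + recent
```

&lt;p&gt;At query time the compressed list replaces the raw history in the prompt, which is where the token (and cost) reduction comes from.&lt;/p&gt;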



&lt;h3&gt;
  
  
  v2.0 (2026): API SaaS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Managed API service&lt;/li&gt;
&lt;li&gt;Multi-tenant&lt;/li&gt;
&lt;li&gt;Integrations (LangChain, n8n, etc.)&lt;/li&gt;
&lt;li&gt;Browser extension marketplace&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🤝 Contributing
&lt;/h2&gt;

&lt;p&gt;LuminoraCore is &lt;strong&gt;MIT licensed&lt;/strong&gt; and &lt;strong&gt;community-driven&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ways to contribute:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;🐛 &lt;a href="https://github.com/luminoracore/luminoracore/issues" rel="noopener noreferrer"&gt;Report bugs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;💡 &lt;a href="https://github.com/luminoracore/luminoracore/discussions" rel="noopener noreferrer"&gt;Suggest features&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📝 Improve documentation&lt;/li&gt;
&lt;li&gt;🧪 Add test cases&lt;/li&gt;
&lt;li&gt;🔧 Submit PRs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Priority areas:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Semantic search implementation&lt;/li&gt;
&lt;li&gt;Additional LLM providers&lt;/li&gt;
&lt;li&gt;Storage backend optimizations&lt;/li&gt;
&lt;li&gt;Real-world use case examples&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📚 Resources
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Code &amp;amp; Docs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/luminoracore/luminoracore" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🎬 Try It Now
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;luminoracore

&lt;span class="c"&gt;# Clone examples&lt;/span&gt;
git clone https://github.com/luminoracore/luminoracore
&lt;span class="nb"&gt;cd &lt;/span&gt;luminoracore/examples

&lt;span class="c"&gt;# Run simple chat&lt;/span&gt;
python simple_chat.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🙏 Why I Built This
&lt;/h2&gt;

&lt;p&gt;I'm tired of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;😤 AI platforms that forget me&lt;/li&gt;
&lt;li&gt;😤 Vendor lock-in&lt;/li&gt;
&lt;li&gt;😤 Opaque costs&lt;/li&gt;
&lt;li&gt;😤 Lost conversation history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I wanted:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ &lt;strong&gt;My data, my control&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Privacy-first&lt;/strong&gt; (self-hosted option)&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Portable&lt;/strong&gt; (works anywhere)&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Transparent&lt;/strong&gt; (open source)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you feel the same, &lt;a href="https://github.com/luminoracore/luminoracore" rel="noopener noreferrer"&gt;give it a try&lt;/a&gt; ⭐&lt;/p&gt;




&lt;h2&gt;
  
  
  💬 Let's Talk
&lt;/h2&gt;

&lt;p&gt;What would you build with persistent AI memory?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Personal assistant?&lt;/li&gt;
&lt;li&gt;Customer service bot?&lt;/li&gt;
&lt;li&gt;Learning tutor?&lt;/li&gt;
&lt;li&gt;Something else?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Drop a comment below!&lt;/strong&gt; I'd love to hear your ideas and help you build it.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;P.S.&lt;/strong&gt; If you found this useful, consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;⭐ &lt;a href="https://github.com/luminoracore/luminoracore" rel="noopener noreferrer"&gt;Star the repo&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Building in public. Let's make AI memory better together. 🚀&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Updated: November 2025 | Version: 1.1 | Author: &lt;a href="https://dev.to/rulyaltamirano"&gt;@rulyaltamirano&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>tooling</category>
      <category>showdev</category>
    </item>
    <item>
      <title>How to Build Modular AI Personalities with LuminoraCore v1.0</title>
      <dc:creator>Ruly Altamirano</dc:creator>
      <pubDate>Sun, 12 Oct 2025 09:41:14 +0000</pubDate>
      <link>https://forem.com/rulyaltamirano/how-to-build-modular-ai-personalities-with-luminoracore-v10-1k14</link>
      <guid>https://forem.com/rulyaltamirano/how-to-build-modular-ai-personalities-with-luminoracore-v10-1k14</guid>
      <description>&lt;h1&gt;
  
  
  How to Build Modular AI Personalities with LuminoraCore v1.0
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Create, test, and deploy AI personalities as reusable modules — across chat, voice, and any LLM.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  📋 Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction &amp;amp; Problem Statement&lt;/li&gt;
&lt;li&gt;What is LuminoraCore&lt;/li&gt;
&lt;li&gt;Architecture Overview&lt;/li&gt;
&lt;li&gt;Key Components &amp;amp; Their Roles&lt;/li&gt;
&lt;li&gt;Step-by-Step Example: VoIP Chatbot with Dynamic Personality&lt;/li&gt;
&lt;li&gt;Blending &amp;amp; Simulation&lt;/li&gt;
&lt;li&gt;Deployment Notes &amp;amp; Best Practices&lt;/li&gt;
&lt;li&gt;SEO / Performance Considerations&lt;/li&gt;
&lt;li&gt;Next Steps &amp;amp; Call to Action&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  1. Introduction &amp;amp; Problem Statement
&lt;/h2&gt;

&lt;p&gt;The rise of large language models (LLMs) like GPT, Claude, DeepSeek, and Mistral means conversational AI is more powerful than ever. Yet, building &lt;strong&gt;consistent, brand-aligned personalities&lt;/strong&gt; across channels remains a challenge:&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problems
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;❌ &lt;strong&gt;Prompt-only approaches are fragile&lt;/strong&gt;: When context changes, tone shifts, or a model upgrade happens, you lose control.&lt;/li&gt;
&lt;li&gt;❌ &lt;strong&gt;Channel inconsistency&lt;/strong&gt;: Each channel (voice, web chat, mobile) often gets a custom prompt treatment — unscalable and inconsistent.&lt;/li&gt;
&lt;li&gt;❌ &lt;strong&gt;No versioning or reusability&lt;/strong&gt;: No way to version, audit, or reuse personality definitions across models or deployments.&lt;/li&gt;
&lt;li&gt;❌ &lt;strong&gt;Vendor lock-in&lt;/strong&gt;: Hard-coded prompts for specific providers make it difficult to switch LLMs.&lt;/li&gt;
&lt;li&gt;❌ &lt;strong&gt;Testing complexity&lt;/strong&gt;: No standardized way to test how a personality behaves before deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Solution
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;LuminoraCore&lt;/strong&gt; addresses this by making &lt;strong&gt;personality an infrastructure component&lt;/strong&gt;, not an afterthought. Define once, deploy everywhere.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. What is LuminoraCore
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;LuminoraCore&lt;/strong&gt; is an open-source framework designed to let developers define personality profiles in JSON, compile them into optimized prompts, validate them, blend them, and simulate conversational responses.&lt;/p&gt;

&lt;p&gt;It supports &lt;strong&gt;7 LLM backends&lt;/strong&gt; (DeepSeek, OpenAI, Anthropic, Mistral, Cohere, Google Gemini, Llama) via a unified API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Core Building Blocks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmon8p5iff1odsp4purot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmon8p5iff1odsp4purot.png" alt=" " width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Architecture Overview
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LuminoraCore Platform
├── 🧠 Core Engine — JSON → prompt compilation, validation, blending
├── 🛠️ CLI Tool — for validation, blending, testing locally
└── 🐍 SDK / API — integration into your application stack

Your Application (Web, Voice, Mobile)
    ↓
LuminoraCore SDK/API
    ↓
Compiled Personality Prompt
    ↓
LLM (DeepSeek, OpenAI, Anthropic, etc.)
    ↓
Response to User
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Key Benefits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;✅ &lt;strong&gt;Provider-agnostic&lt;/strong&gt;: Switch between OpenAI, DeepSeek, Anthropic without rewriting prompts&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Version-controlled&lt;/strong&gt;: Store personality JSONs in Git for full audit trail&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Testable&lt;/strong&gt;: Validate and simulate before deployment&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Reusable&lt;/strong&gt;: Define once, use across web, mobile, voice, email&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Blendable&lt;/strong&gt;: Create hybrid personalities on-the-fly&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  4. Key Components &amp;amp; Their Roles
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Core Engine Functions
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Input&lt;/th&gt;
&lt;th&gt;Output&lt;/th&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Compiler&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{ persona_json, provider }&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;compiled_prompt&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Convert personality JSON to LLM-specific prompt&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Validator&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{ persona_json }&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{ is_valid, errors }&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Ensure personality conforms to schema before use&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Blender&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{ personas[], weights[] }&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;blended_persona_json&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Create hybrid personalities (e.g., 70% professional + 30% friendly)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
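&lt;p&gt;Put together, the three components form a compile, validate, and blend pipeline. The sketch below mirrors the signatures in the table with stand-in function bodies; the real implementations live in the &lt;code&gt;luminoracore&lt;/code&gt; package, so treat this purely as a mental model:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Stand-in sketch of the Core Engine components from the table above.

def validate(persona_json):
    """Validator: return (is_valid, errors) for a persona JSON."""
    errors = [key for key in ("name", "traits") if key not in persona_json]
    return (len(errors) == 0, errors)

def compile_prompt(persona_json, provider):
    """Compiler: turn a persona JSON into a provider-specific prompt."""
    traits = ", ".join(f"{k}={v}" for k, v in persona_json["traits"].items())
    return f"[{provider}] You are {persona_json['name']}. Traits: {traits}."

def blend(personas, weights):
    """Blender: weighted average of numeric traits across personas."""
    keys = personas[0]["traits"]
    traits = {k: sum(p["traits"][k] * w for p, w in zip(personas, weights))
              for k in keys}
    return {"name": "blended", "traits": traits}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;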




&lt;h2&gt;
  
  
  5. Step-by-Step Example: VoIP Chatbot with Dynamic Personality
&lt;/h2&gt;

&lt;p&gt;Below is a real-world example using &lt;strong&gt;Node.js + Express + Twilio&lt;/strong&gt; where the bot switches personality mid-call based on detected user sentiment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User Call → Twilio → Your Express Server
                           ↓
                   LuminoraCore SDK
                           ↓
              DeepSeek / OpenAI (LLM)
                           ↓
                    Bot Response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Implementation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Node.js + Express + Twilio Example&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;twilio&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;twilio&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;axios&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;urlencoded&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;extended&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;VoiceResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;twilio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;twiml&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VoiceResponse&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;LUMINORA_API&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://api.luminoracore.com/v1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;currentPersona&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;friendly_assistant&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Personality definitions&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;friendlyPersona&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./personalities/friendly_assistant.json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;supportPersona&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./personalities/empathetic_support.json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;technicalPersona&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./personalities/technical_expert.json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Simulate conversation with personality&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;simulateResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;personaJson&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;userMessage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;LUMINORA_API&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/simulate`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;persona_json&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;personaJson&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userMessage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;deepseek&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Sentiment detection&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;detectSentiment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;frustrationKeywords&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sr"&gt;/frustrated|angry|not working|terrible|awful|useless/i&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;technicalKeywords&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sr"&gt;/error|code|api|technical|debug|configure/i&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;frustrationKeywords&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;frustrated&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;technicalKeywords&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;technical&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;neutral&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;selectPersona&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sentiment&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;currentPersona&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;switch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sentiment&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;frustrated&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;empathetic_support&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;technical&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;technical_expert&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nl"&gt;default&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;           &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;currentPersona&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/voice&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;twiml&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;VoiceResponse&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userSpeech&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechResult&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`User said: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;userSpeech&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sentiment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;detectSentiment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userSpeech&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;newPersona&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;selectPersona&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sentiment&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;currentPersona&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newPersona&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;currentPersona&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Switching persona: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;currentPersona&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; → &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;newPersona&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;currentPersona&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;newPersona&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;personaJson&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;friendlyPersona&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;currentPersona&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;empathetic_support&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nx"&gt;personaJson&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;supportPersona&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;currentPersona&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;technical_expert&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nx"&gt;personaJson&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;technicalPersona&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;botReply&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;simulateResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;personaJson&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;userSpeech&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;twiml&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;say&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;voice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Polly.Joanna&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;botReply&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;twiml&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;gather&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;speech&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/voice&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;speechTimeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;auto&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Error generating response:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;twiml&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;say&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;I encountered a technical issue. Please try again.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;type&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;text/xml&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;twiml&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;VoIP bot running on port 3000&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  6. Blending &amp;amp; Simulation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  PersonaBlend™ Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;luminoracore&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;PersonalityBlender&lt;/span&gt;

&lt;span class="n"&gt;blender&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;PersonalityBlender&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Blend 70% technical + 30% empathetic
&lt;/span&gt;&lt;span class="n"&gt;blended&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;blender&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;blend_personalities&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;personalities&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;technical_expert&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;empathetic_support&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;weights&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;strategy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;weighted_average&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Result: A personality that explains technical issues with empathy
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  7. Deployment Notes &amp;amp; Best Practices
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Cache compiled prompts instead of regenerating each time&lt;/li&gt;
&lt;li&gt;Version your persona JSONs in Git&lt;/li&gt;
&lt;li&gt;Always validate before deployment&lt;/li&gt;
&lt;li&gt;Apply cooldowns to personality switching&lt;/li&gt;
&lt;li&gt;Track metrics &amp;amp; token usage&lt;/li&gt;
&lt;/ul&gt;
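&lt;p&gt;The cooldown point is worth sketching. One simple guard (an illustration, not a LuminoraCore API) is to record when the next switch becomes allowed and ignore earlier requests:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import time

COOLDOWN_SECONDS = 60  # minimum gap between personality switches

class PersonaState:
    """Illustrative cooldown guard against personality thrashing."""

    def __init__(self, persona):
        self.persona = persona
        self.next_allowed = 0.0  # earliest time another switch may happen

    def maybe_switch(self, new_persona, now=None):
        now = time.monotonic() if now is None else now
        remaining = self.next_allowed - now  # cooldown seconds left
        # max(remaining, 0.0) == 0.0 means the cooldown has fully elapsed
        if new_persona != self.persona and max(remaining, 0.0) == 0.0:
            self.persona = new_persona
            self.next_allowed = now + COOLDOWN_SECONDS
            return True
        return False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;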

&lt;h2&gt;
  
  
  8. Performance &amp;amp; Cost Considerations
&lt;/h2&gt;

&lt;p&gt;When deploying &lt;strong&gt;LuminoraCore personalities&lt;/strong&gt; in production, both performance and cost deserve attention. Unlike prompt-only approaches, LuminoraCore has its own set of best practices, because personalities are compiled, validated, and sometimes blended across different providers.&lt;/p&gt;

&lt;h3&gt;
  
  
  ⚡ Performance Best Practices with LuminoraCore
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cache compiled prompts&lt;/strong&gt;
Each personality JSON is compiled into an optimized prompt per provider. Instead of compiling on every request, cache the result once per session:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;  &lt;span class="n"&gt;compiler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;PersonalityCompiler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cache_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;compiler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;personality&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LLMProvider&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DEEPSEEK&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Async for concurrency&lt;/strong&gt;
The LuminoraCore SDK is async-ready, so a single process can handle many conversations concurrently:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;  &lt;span class="n"&gt;tasks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Need help with my order&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="n"&gt;responses&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;gather&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Control personality switching frequency&lt;/strong&gt;&lt;br&gt;
Since LuminoraCore supports dynamic switching, avoid switching too often (thrashing). Implement cooldown windows (e.g., 60 seconds) before switching personalities again.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Validate before deploy&lt;/strong&gt;&lt;br&gt;
Always run:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  luminoracore validate my_persona.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to ensure schema compliance and avoid runtime errors.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Track usage &amp;amp; metrics&lt;/strong&gt;
LuminoraCore exposes token usage, personality switches, and response times. Logging these lets you fine-tune for performance.&lt;/li&gt;
&lt;/ul&gt;
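&lt;p&gt;A minimal ledger for those metrics might look like this (a stand-in for whatever your logging stack provides, not an SDK class):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from collections import defaultdict

class Metrics:
    """Minimal ledger for tokens, switches, and latency (illustrative)."""

    def __init__(self):
        self.tokens = defaultdict(int)  # tokens used per persona
        self.switches = 0               # personality switches this session
        self.latencies = []             # response times in seconds

    def record(self, persona, tokens, latency):
        self.tokens[persona] += tokens
        self.latencies.append(latency)

    def record_switch(self):
        self.switches += 1

    def avg_latency(self):
        return sum(self.latencies) / len(self.latencies) if self.latencies else 0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;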

&lt;h3&gt;
  
  
  💰 Cost Optimization
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Choose cost-effective providers&lt;/strong&gt;&lt;br&gt;
LuminoraCore is provider-agnostic. You can switch LLM backends depending on cost/performance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DeepSeek: ~$0.14 per 1M tokens (ideal for dev &amp;amp; scale)&lt;/li&gt;
&lt;li&gt;GPT-3.5: ~$2.00 per 1M tokens&lt;/li&gt;
&lt;li&gt;GPT-4: ~$30.00 per 1M tokens (premium only)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
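&lt;p&gt;The quoted rates turn into concrete numbers quickly. A back-of-envelope helper, using the approximate prices above (which change over time):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Approximate per-1M-token rates quoted above; check current pricing pages.
RATES_PER_1M_TOKENS = {"deepseek": 0.14, "gpt-3.5": 2.00, "gpt-4": 30.00}

def monthly_cost(tokens_per_month, provider):
    """Rough monthly spend for a given token volume and provider."""
    return tokens_per_month / 1_000_000 * RATES_PER_1M_TOKENS[provider]

# At 50M tokens/month: deepseek costs $7.00, gpt-3.5 costs $100.00,
# and gpt-4 costs $1,500.00
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;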

&lt;p&gt;Example provider config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;  &lt;span class="n"&gt;provider&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ProviderConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;deepseek&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DEEPSEEK_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;deepseek-chat&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reuse personalities across channels&lt;/strong&gt;&lt;br&gt;
Instead of redefining prompts for each platform (web, voice, mobile), define one JSON in LuminoraCore and compile for different providers. This reduces redundancy and saves cost.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Blend smartly&lt;/strong&gt;&lt;br&gt;
PersonaBlend™ allows weighted combinations (e.g., 80% empathy + 20% technical). Use blends instead of creating new personas from scratch, reducing complexity and maintenance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;✅ In short: &lt;strong&gt;LuminoraCore makes optimization possible because it standardizes personalities.&lt;/strong&gt; You can cache, reuse, validate, and switch providers in a structured way, cutting both runtime latency and provider costs.&lt;/p&gt;




&lt;h2&gt;
  
  
  9. Next Steps &amp;amp; Call to Action
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🚀 Get Started Today
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/luminoracore/luminoracore.git
&lt;span class="nb"&gt;cd &lt;/span&gt;luminoracore
./install_all.sh   &lt;span class="c"&gt;# Linux/Mac&lt;/span&gt;
.&lt;span class="se"&gt;\i&lt;/span&gt;nstall_all.ps1  &lt;span class="c"&gt;# Windows&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;⭐ &lt;a href="https://github.com/luminoracore/luminoracore" rel="noopener noreferrer"&gt;Star us on GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📖 &lt;a href="https://github.com/luminoracore/luminoracore/wiki" rel="noopener noreferrer"&gt;Read the Wiki&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📧 &lt;a href="mailto:contact@luminoracore.com"&gt;Contact Us&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Made with ❤️ by Ereace&lt;/em&gt;&lt;br&gt;
&lt;a href="https://github.com/luminoracore/luminoracore" rel="noopener noreferrer"&gt;LuminoraCore v1.0 on GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tutorial</category>
      <category>llm</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
