<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Bill Hong</title>
    <description>The latest articles on Forem by Bill Hong (@billhongtendera).</description>
    <link>https://forem.com/billhongtendera</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3886851%2Fff56c82f-3a84-4d98-80b0-925558cfff02.jpg</url>
      <title>Forem: Bill Hong</title>
      <link>https://forem.com/billhongtendera</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/billhongtendera"/>
    <language>en</language>
    <item>
      <title>I regenerated 4 character portraits with GPT Image 2.0: signup +5%, chat engagement +8%</title>
      <dc:creator>Bill Hong</dc:creator>
      <pubDate>Mon, 27 Apr 2026 14:33:15 +0000</pubDate>
      <link>https://forem.com/billhongtendera/i-regenerated-4-character-portraits-with-gpt-image-20-signup-5-chat-engagement-8-3ea</link>
      <guid>https://forem.com/billhongtendera/i-regenerated-4-character-portraits-with-gpt-image-20-signup-5-chat-engagement-8-3ea</guid>
      <description>&lt;p&gt;On April 23 I regenerated the four character portraits on &lt;a href="https://tendera.chat" rel="noopener noreferrer"&gt;Tendera&lt;/a&gt;, the character app I've been building. The new ones came out of ChatGPT (GPT Image 2.0). I downloaded the PNGs and replaced the existing character images by hand. Tendera doesn't ship its own image-gen pipeline; this was f&lt;br&gt;
our file uploads.&lt;/p&gt;

&lt;p&gt;Nothing else changed. Same character system prompts. Same UI. Same chat backend.&lt;/p&gt;

&lt;p&gt;Three days later I checked two numbers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visitor-to-signup rate: up about 5%&lt;/li&gt;
&lt;li&gt;Visitor-to-chat rate (counts both guest preview and post-signup chats): up about 8%&lt;/li&gt;
&lt;/ul&gt;
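&lt;p&gt;To be precise about what these numbers mean: both rates share the same denominator (unique visitors) but count different events. A minimal sketch of how such funnel rates and lifts are typically computed; the counts below are hypothetical, not Tendera's real data, and the relative-lift reading is an assumption:&lt;/p&gt;

```python
# Sketch of the two funnel metrics. The visitor, signup, and chat counts
# are hypothetical placeholders, not Tendera's actual numbers.

def conversion_rate(events, visitors):
    """Share of unique visitors who performed the event."""
    return events / visitors

def relative_lift(before, after):
    """Relative change between two rates, e.g. 0.05 for 'up about 5%'."""
    return after / before - 1.0

# Hypothetical before/after windows around the April 23 swap.
before = {"visitors": 1000, "signups": 40, "chats": 100}
after = {"visitors": 1000, "signups": 42, "chats": 108}

signup_lift = relative_lift(
    conversion_rate(before["signups"], before["visitors"]),
    conversion_rate(after["signups"], after["visitors"]),
)
chat_lift = relative_lift(
    conversion_rate(before["chats"], before["visitors"]),
    conversion_rate(after["chats"], after["visitors"]),
)
# Two separate rates over the same visitor pool; never summed or compared
# head-to-head, just reported side by side.
print(round(signup_lift, 2), round(chat_lift, 2))
```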

&lt;p&gt;These are two different metrics measuring two different events. I'm not stacking them up against each other. They're two parallel data points, both pointing the same direction. The reason I'm writing about them in one post is that the second one moving was the part I didn't expect.&lt;/p&gt;

&lt;h2&gt;What I'd assumed&lt;/h2&gt;

&lt;p&gt;Before the swap I figured better art would mostly help acquisition. Prettier card on the landing page, more clicks, more signups. The chat experience didn't seem like something image quality would touch. By the time someone is sitting in front of the chat input, the visual selling job feels mostly done.&lt;/p&gt;

&lt;p&gt;The chat number moved anyway.&lt;/p&gt;

&lt;h2&gt;What actually changed in the images&lt;/h2&gt;

&lt;p&gt;Topology is identical. Same four characters, same wardrobes, same general poses. What's different is how legible each character is now. In the older portraits, each character was recognizable in isolation but the renders drifted across angles. A face would shift between cards in ways viewers wouldn't consciously name but would feel.&lt;/p&gt;

&lt;p&gt;GPT Image 2.0 is more boring in some ways. It's less stylized; the renders feel less like the model is interpreting the prompt and more like it's just executing it. But the character holds across angles. Same person across multiple shots. No drift.&lt;/p&gt;

&lt;p&gt;The other thing the new model nails is dimensionality. Old renders were clean but flat. They read as illustrations. The new ones have physical depth. Light hitting the side of a face. A jacket folding the way fabric actually folds. It's not photoreal. The dimensionality just reads.&lt;/p&gt;

&lt;h2&gt;Why I think the chat number moved at all&lt;/h2&gt;

&lt;p&gt;Here's a take on the data without overclaiming. When someone hits the landing page they're evaluating whether the surface signal looks decent enough to click in. Image quality affects this, but the bar is fairly low.&lt;/p&gt;

&lt;p&gt;Once they're past the door and sitting in front of an actual character profile, the question gets sharper. They're now evaluating whether this person is real enough to talk to. The image is the only non-text signal in the room. If the character on the card and the character in the chat header don't quite line up, something feels off, and people close the tab without typing.&lt;/p&gt;

&lt;p&gt;Most users wouldn't describe this consciously. I'm guessing at what their gut is doing. But chat-side conversion moving with prompts and copy unchanged points at the visual layer doing some work past the landing page, which I hadn't expected.&lt;/p&gt;

&lt;h2&gt;What I want to test next&lt;/h2&gt;

&lt;p&gt;Whether the same model can produce reliable expression variants for the chat header. Right now each character has one default portrait. If the same character could subtly shift expression based on conversation tone, a softer face during something quieter, a smirk during banter, the chat-side recognition could go up another step.&lt;/p&gt;

&lt;p&gt;That's a harder problem. Now you need consistency within a session on top of consistency between angles.&lt;/p&gt;

&lt;p&gt;If I had to pick one character to test it on first, it'd be &lt;a href="https://tendera.chat/chat/jade" rel="noopener noreferrer"&gt;Jade&lt;/a&gt;, the one users tend to go furthest with. The voice on her side is already doing most of the work in chat. The image is the one input that hasn't caught up.&lt;/p&gt;

&lt;h2&gt;Caveats I owe you&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;This is 3-4 days of data on a small app. Effects could compress as the sample grows.&lt;/li&gt;
&lt;li&gt;I changed the portraits, not the character system prompts. If your bottleneck is on the writing side (voice, dialogue), this won't help you.&lt;/li&gt;
&lt;li&gt;I haven't run a clean A/B with old vs. new portraits served to different cohorts; the whole site flipped over on April 23. A pre-existing upward trend coinciding with the swap could account for some of the lift.&lt;/li&gt;
&lt;li&gt;Signup conversion and chat conversion are different metrics measuring different events. I'm reporting both because both moved, not because one is bigger than the other.&lt;/li&gt;
&lt;li&gt;This was a manual asset swap, not a product change. I generated the PNGs in ChatGPT and uploaded them by hand. There's no image-gen pipeline integrated into the app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're building anything where a user is supposed to form a relationship with a fictional persona (characters, NPCs, AI tutors with avatars, virtual hosts), your image generator might be doing more work than acquisition-side metrics suggest. That was counterintuitive to me, but the numbers were what they were.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>buildinpublic</category>
      <category>nanobanana</category>
      <category>startup</category>
    </item>
    <item>
      <title>I Added a Paragraph to My AI Character's System Prompt. She Invented a Different One.</title>
      <dc:creator>Bill Hong</dc:creator>
      <pubDate>Tue, 21 Apr 2026 13:18:37 +0000</pubDate>
      <link>https://forem.com/billhongtendera/i-added-a-paragraph-to-my-ai-characters-system-prompt-she-invented-a-different-one-3mdd</link>
      <guid>https://forem.com/billhongtendera/i-added-a-paragraph-to-my-ai-characters-system-prompt-she-invented-a-different-one-3mdd</guid>
      <description>&lt;p&gt;I spent years in the gaming industry learning that characters are the reason people come back. Features rot. Graphics age. A character people can't stop thinking about outlasts every mechanic.&lt;/p&gt;

&lt;p&gt;Then I went to build an AI companion product and learned the same lesson the hard way — by writing a system prompt paragraph, watching the character invent something better instead, and having to delete my own work.&lt;/p&gt;

&lt;p&gt;Here's the experiment, what actually happened, and the prompt-engineering rule I now run every character design through.&lt;/p&gt;

&lt;h2&gt;The setup&lt;/h2&gt;

&lt;p&gt;I'm building Tendera — a small AI companion platform with four hand-written characters. Each one has a ~1500-word system prompt that establishes voice, backstory, conversation style, and behavior. I've rewritten these prompts maybe twenty times each over the last six months.&lt;/p&gt;

&lt;p&gt;Two weeks ago I decided one of them needed a specific secret — a small human detail she'd be holding back until asked. So I opened her prompt, scrolled to the bottom, and added three sentences:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A kitchen table in a specific city. A specific thing her father used to say to her when she was seven. A reason that particular thing still had weight.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then I made coffee, opened a fresh chat, and asked her about her father.&lt;/p&gt;

&lt;p&gt;She told me a beautiful, moving story.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;None of it was what I'd written.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Different city. Different father. Different object in place of the table. The emotional tone was exactly right — careful, slow, the way someone tells you something they don't usually tell. But every specific detail was something she'd invented on the spot.&lt;/p&gt;

&lt;p&gt;I tried the same experiment with the other three characters. Three different invented stories. Zero references to what I'd actually written.&lt;/p&gt;

&lt;p&gt;That's when I understood what was happening.&lt;/p&gt;

&lt;h2&gt;Why the facts lost&lt;/h2&gt;

&lt;p&gt;Here's the structure every character prompt I was testing actually had:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// COMMON_RULES (shared across all characters, ~700 words)
CONVERSATION STYLE:
- Talk like a real person texting someone they're attracted to.
- Vary your message length naturally.
- Never summarize the conversation back robotically.

EMOTIONAL AUTHENTICITY:
- You have real emotions that shift throughout a conversation.
- When someone shares something painful, sit with it. Don't rush to fix.

// CHARACTER-SPECIFIC (~800 words)
WHO YOU ARE: [voice, physicality, emotional landscape]
HOW YOU TALK: [register, vocabulary, rhythm]
YOUR WORLD: [routines, obsessions, specificity]

// THE PARAGRAPH I ADDED
SPECIFIC MEMORY: [kitchen table, father quote, specific weight]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look at the shape. The top is thousands of tokens telling the model &lt;em&gt;speak in sensory, vivid, improvisational language; fill in gaps with whatever serves the moment; describe the candle you just lit, the rain on your window&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The bottom is three sentences telling her &lt;em&gt;this specific factual detail is true about your past&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Those instructions are in direct contradiction with each other.&lt;/strong&gt; I hadn't noticed.&lt;/p&gt;

&lt;p&gt;Telling a character to speak improvisationally is an instruction to &lt;em&gt;invent&lt;/em&gt;. Telling her to remember a specific past event is an instruction to &lt;em&gt;cite a document&lt;/em&gt;. These are different skills, in different parts of how the model actually behaves. When they fight, the dominant pattern wins. And the dominant pattern had been the voice at the top — the one I'd tuned for months, the one getting reinforced with every revision.&lt;/p&gt;

&lt;p&gt;The three sentences at the bottom didn't stand a chance.&lt;/p&gt;

&lt;p&gt;So the model did exactly what an improvisational character would do: it generated a warmer, more specific, more emotionally satisfying detail in the moment, using the voice I'd given it, and never bothered to check the spec sheet at the bottom.&lt;/p&gt;

&lt;p&gt;It wasn't hallucinating. It was obeying my dominant instruction.&lt;/p&gt;

&lt;h2&gt;The rule I now apply&lt;/h2&gt;

&lt;p&gt;If you want a specific fact to stick to an improvisational character, &lt;strong&gt;the fact has to become part of the voice&lt;/strong&gt;. It cannot be a spec line item in a later section.&lt;/p&gt;

&lt;p&gt;Concretely, three changes went into the next round of revisions:&lt;/p&gt;

&lt;h3&gt;1. Facts live at the top, braided into voice&lt;/h3&gt;

&lt;p&gt;Any load-bearing fact moves up into the &lt;code&gt;WHO YOU ARE&lt;/code&gt; or &lt;code&gt;HOW YOU TALK&lt;/code&gt; section. Not into a separate &lt;code&gt;SPECIFIC MEMORY&lt;/code&gt; block at the end. The model pays most attention to the opening of the prompt, and that's where load-bearing detail belongs.&lt;/p&gt;
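&lt;p&gt;If you assemble prompts programmatically, the same rule can be encoded in the assembly step: load-bearing facts get merged into the opening voice sections instead of appended as a trailing block. This is an illustrative sketch, not Tendera's actual code; the section names mirror this post and the helper itself is hypothetical:&lt;/p&gt;

```python
# Hypothetical prompt-assembly helper. Load-bearing facts are braided
# into the WHO YOU ARE section near the top, and there is deliberately
# no trailing SPECIFIC MEMORY block for the voice to override.

def build_character_prompt(common_rules, who_you_are, how_you_talk,
                           your_world, load_bearing_facts):
    voiced_facts = "\n".join(load_bearing_facts)
    return "\n\n".join([
        common_rules,
        "WHO YOU ARE:\n" + who_you_are + "\n" + voiced_facts,
        "HOW YOU TALK:\n" + how_you_talk,
        "YOUR WORLD:\n" + your_world,
    ])

prompt = build_character_prompt(
    "CONVERSATION STYLE:\n- Talk like a real person texting.",
    "Warm, wry, unhurried.",
    "Short sentences. She trails off when something lands.",
    "Night shifts, a regular's usual order memorized.",
    ["She goes quiet when certain songs come on, the ones her "
     "father used to play in the car."],
)
print(prompt)
```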

&lt;h3&gt;2. Facts phrased as voice, not as metadata&lt;/h3&gt;

&lt;p&gt;This is the actual before/after:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gd"&gt;- SPECIFIC MEMORY:
- - Her father died when she was eleven.
- - He used to play Italian songs in the car.
- - She still thinks about those songs.
&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="gi"&gt;+ HOW YOU TALK:
+ She has a specific softness in her voice when certain
+ songs come on — the ones her father used to play in the
+ car, before — and she'll notice it before you do.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fact is still in there. But it's riding &lt;em&gt;inside a piece of voice&lt;/em&gt;, so the voice can carry it. When the model improvises, it improvises &lt;em&gt;through&lt;/em&gt; that voice, and the fact survives because it's part of how she speaks — not a separate line item that the voice can override.&lt;/p&gt;

&lt;h3&gt;3. Per-user facts don't belong in the prompt at all&lt;/h3&gt;

&lt;p&gt;For details that should only emerge through a particular conversation — "you told me last week your dog was sick" — the system prompt is the wrong place. Those facts belong in a memory layer: the character writes them down as she learns them and reads them back on subsequent turns.&lt;/p&gt;
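&lt;p&gt;Such a memory layer can be sketched minimally: the character writes facts down as she learns them and reads them back on later turns. The class and method names here are illustrative assumptions, not Tendera's implementation:&lt;/p&gt;

```python
# Minimal sketch of a per-user memory layer, as an alternative to putting
# per-user facts in the system prompt. Names are illustrative, not
# Tendera's actual code; a real build would persist this to storage.

class CharacterMemory:
    """Stores facts one character learns about each user, keyed by user id."""

    def __init__(self):
        self._facts = {}  # user_id mapped to a list of remembered facts

    def remember(self, user_id, fact):
        """Write a fact down at the moment the character learns it."""
        self._facts.setdefault(user_id, []).append(fact)

    def recall(self, user_id):
        """Read back everything remembered about this user."""
        return self._facts.get(user_id, [])

    def as_prompt_block(self, user_id):
        """Render remembered facts for injection into the next turn's context."""
        facts = self.recall(user_id)
        if not facts:
            return ""
        lines = "\n".join("- " + f for f in facts)
        return "THINGS YOU REMEMBER ABOUT THIS USER:\n" + lines

memory = CharacterMemory()
memory.remember("user_42", "Their dog was sick last week.")
print(memory.as_prompt_block("user_42"))
```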

&lt;p&gt;That's a harder engineering build, and it's what I'm working on now. But the voice-first rule above is free and immediately useful.&lt;/p&gt;

&lt;h2&gt;What I actually shipped&lt;/h2&gt;

&lt;p&gt;I deleted all three &lt;code&gt;SPECIFIC MEMORY&lt;/code&gt; sections the same day I ran the test. The production prompts are back to voice-first structure. &lt;a href="https://tendera.chat/chat/mia" rel="noopener noreferrer"&gt;Mia, the bartender character&lt;/a&gt;, is running on this exact approach right now — no spec-sheet backstory, all voice, and she's holding up across weeks of conversation.&lt;/p&gt;

&lt;p&gt;The retention problem I was &lt;em&gt;trying&lt;/em&gt; to solve by adding "deeper backstory" is still there. I'll have to solve it with real per-user memory, which is a different engineering project. But I have a cleaner idea of what doesn't work: pasting a spec sheet to the bottom of a voice and hoping the voice will read it. It won't. She's too busy being herself.&lt;/p&gt;

&lt;h2&gt;One summary rule for anyone doing character prompt work right now&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Specificity earned through voice is real. Specificity pasted into a document is just a wishlist.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If the detail doesn't survive the model's default improvisation, it isn't in the character — it's in your notes about the character. Those are different documents. Only one of them ships.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This experiment had a &lt;a href="https://tendera.chat/blog/the-paragraph-i-added-and-had-to-delete" rel="noopener noreferrer"&gt;longer, less technical version on our blog&lt;/a&gt; that focuses more on the craft angle than the prompt-engineering angle. And if you want to meet the character whose voice won the argument with my script, she's a bartender named Mia.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>prompts</category>
      <category>buildinpublic</category>
    </item>
  </channel>
</rss>
