<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Felipe Lobo</title>
    <description>The latest articles on Forem by Felipe Lobo (@philipstark).</description>
    <link>https://forem.com/philipstark</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3841902%2F80296959-fae9-4d94-b104-0d898f1a65d3.jpeg</url>
      <title>Forem: Felipe Lobo</title>
      <link>https://forem.com/philipstark</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/philipstark"/>
    <language>en</language>
    <item>
      <title>I Built a Free Quiz That Matches Medical Students With Their Ideal Specialty</title>
      <dc:creator>Felipe Lobo</dc:creator>
      <pubDate>Wed, 01 Apr 2026 01:26:16 +0000</pubDate>
      <link>https://forem.com/philipstark/i-built-a-free-quiz-that-matches-medical-students-with-their-ideal-specialty-2533</link>
      <guid>https://forem.com/philipstark/i-built-a-free-quiz-that-matches-medical-students-with-their-ideal-specialty-2533</guid>
      <description>&lt;p&gt;Choosing a medical specialty is one of the highest-stakes career decisions a person can make. Most students rely on informal advice or generic 5-question quizzes that tell them to "be a surgeon because you like working with your hands."&lt;/p&gt;

&lt;p&gt;I wanted to build something better — so I created &lt;strong&gt;MediQuest&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;What is MediQuest?&lt;/h2&gt;

&lt;p&gt;MediQuest is a free, scenario-based specialty discovery quiz for medical students. Instead of asking vague personality questions, it presents 40 real clinical situations and maps your responses across 8 professional dimensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Procedural vs. Cognitive orientation&lt;/li&gt;
&lt;li&gt;Acute vs. Longitudinal care preference&lt;/li&gt;
&lt;li&gt;Patient interaction style&lt;/li&gt;
&lt;li&gt;Team dynamics preference&lt;/li&gt;
&lt;li&gt;Work-life balance priorities&lt;/li&gt;
&lt;li&gt;Research vs. Clinical focus&lt;/li&gt;
&lt;li&gt;Diagnostic complexity tolerance&lt;/li&gt;
&lt;li&gt;Autonomy vs. Collaboration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The algorithm compares your profile against 20 ACGME-recognized specialties and generates a personalized radar chart showing where your clinical personality clusters.&lt;/p&gt;
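
&lt;p&gt;To make the matching concrete, here is a minimal sketch of weighted profile matching. The dimension order follows the eight dimensions listed above; the specialty vectors, weights, and user profile are hypothetical stand-ins, not MediQuest's production values:&lt;/p&gt;

```typescript
// Sketch of profile-to-specialty matching. Each profile is 8 values in
// the dimension order listed above, normalized to 0..1. All numbers here
// are illustrative, not the real calibration data.
type Profile = number[];

const specialties: Record<string, Profile> = {
  // procedural, acute, interaction, team, lifestyle, research, complexity, autonomy
  Surgery:          [0.9, 0.8, 0.4, 0.7, 0.3, 0.4, 0.6, 0.6],
  InternalMedicine: [0.2, 0.4, 0.7, 0.6, 0.5, 0.6, 0.9, 0.5],
  FamilyMedicine:   [0.3, 0.2, 0.9, 0.5, 0.8, 0.3, 0.5, 0.7],
};

// Lower distance = closer fit. A weighted Euclidean distance lets some
// dimensions (e.g. procedural orientation) count more than others.
function matchSpecialties(user: Profile, weights: Profile): [string, number][] {
  return Object.entries(specialties)
    .map(([name, prof]): [string, number] => {
      const d = Math.sqrt(
        prof.reduce((sum, v, i) => sum + weights[i] * (v - user[i]) ** 2, 0)
      );
      return [name, d];
    })
    .sort((a, b) => a[1] - b[1]); // best match first
}

const user: Profile = [0.8, 0.9, 0.3, 0.7, 0.2, 0.3, 0.5, 0.6];
const ranked = matchSpecialties(user, [2, 1.5, 1, 1, 1, 1, 1, 1]);
```

&lt;p&gt;The ranked distances also map directly onto the radar chart: each of the eight user values is one axis.&lt;/p&gt;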

&lt;h2&gt;The Tech Stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend&lt;/strong&gt;: Next.js + React, deployed on Vercel&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scoring Engine&lt;/strong&gt;: Custom weighted algorithm mapping responses to specialty profiles&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Payments&lt;/strong&gt;: Stripe integration for the optional $9.90 detailed report&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Design&lt;/strong&gt;: Responsive, mobile-first — most med students take it on their phones between rotations&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Why I Built This&lt;/h2&gt;

&lt;p&gt;Every "what specialty should I choose" resource I found online was either:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A 5-question BuzzFeed-style quiz with zero clinical relevance&lt;/li&gt;
&lt;li&gt;A wall of text listing specialties with no personalization&lt;/li&gt;
&lt;li&gt;Behind a paywall with no free option&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;MediQuest gives everyone a complete free profile. The optional premium report ($9.90) provides deeper analysis for students who want more detail.&lt;/p&gt;

&lt;h2&gt;Try It Out&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Free — takes about 10 minutes&lt;/strong&gt;: &lt;a href="https://mediquest-en.vercel.app/" rel="noopener noreferrer"&gt;https://mediquest-en.vercel.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'd love feedback from the DEV community on the UX, the algorithm approach, or anything else. Happy to answer questions about the technical implementation!&lt;/p&gt;

</description>
      <category>career</category>
      <category>showdev</category>
      <category>sideprojects</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I Built 21 AI Skills That Write a Full Book From One Sentence</title>
      <dc:creator>Felipe Lobo</dc:creator>
      <pubDate>Tue, 24 Mar 2026 16:05:25 +0000</pubDate>
      <link>https://forem.com/philipstark/i-built-21-ai-skills-that-write-a-full-book-from-one-sentence-4lgd</link>
      <guid>https://forem.com/philipstark/i-built-21-ai-skills-that-write-a-full-book-from-one-sentence-4lgd</guid>
      <description>&lt;p&gt;Six months ago I had a problem. I wanted to generate a full-length book using AI — not a blog post, not a short story, but a real 60,000+ word manuscript. Every tool I tried hit the same wall: after 10,000 words, the prose collapsed into the same flat, predictable tone.&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;Book Genesis&lt;/strong&gt; — an open-source system of 21 Claude Code skills that takes a single sentence and produces a complete manuscript through a 17-phase pipeline. The first output was a 68,000-word Portuguese memoir. The second was a 97,000-word English fantasy. Both scored above 9.0 on our calibration metric.&lt;/p&gt;

&lt;p&gt;This post is about the two hardest problems I had to solve to get there.&lt;/p&gt;

&lt;h2&gt;Problem 1: The AI Voice Convergence Problem&lt;/h2&gt;

&lt;p&gt;If you've ever asked an LLM to write fiction, you know the pattern. The first few paragraphs are fine. By page 10, every sentence starts sounding the same. By page 50, you're reading beige prose — grammatically correct, structurally sound, emotionally dead.&lt;/p&gt;

&lt;p&gt;This happens because LLMs optimize for the most probable next token. Over thousands of tokens, that optimization converges toward a mean. The result is prose that reads like it was written by a committee of English professors who all attended the same MFA program.&lt;/p&gt;

&lt;p&gt;I call these &lt;strong&gt;AI fingerprints&lt;/strong&gt;, and I identified 20 of them. Here are five:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Adverb stacking&lt;/strong&gt; — "she smiled warmly, nodding gently, speaking softly"&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Resolution before tension&lt;/strong&gt; — solving problems before the reader feels the stakes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Thematic over-signaling&lt;/strong&gt; — "this moment reminded her of what truly mattered"&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Emotional labeling&lt;/strong&gt; — telling emotions instead of showing them&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tonal convergence&lt;/strong&gt; — every character sounds identical by chapter 5&lt;/li&gt;
&lt;/ul&gt;
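
&lt;p&gt;Each fingerprint can be treated as a cheap detector over a section of prose. A minimal sketch, with toy regexes standing in for the real (much more robust) detectors:&lt;/p&gt;

```typescript
// Sketch of the fingerprint scan. The regexes are illustrative stand-ins;
// production detectors would be far more robust than these.
const fingerprints: Record<string, RegExp> = {
  adverbStacking: /\w+ly\b[^.]*\b\w+ly\b[^.]*\b\w+ly\b/i,
  emotionalLabeling: /\b(felt|feeling) (sad|happy|angry|afraid)\b/i,
  thematicOverSignaling: /what (truly|really) mattered/i,
};

// Returns the names of every pattern the section triggers.
function scanSection(text: string): string[] {
  return Object.entries(fingerprints)
    .filter(([, re]) => re.test(text))
    .map(([name]) => name);
}

const section =
  "She smiled warmly, nodding gently, speaking softly. " +
  "This moment reminded her of what truly mattered.";
const hits = scanSection(section);
// Two or more hits flags the section for a constrained rewrite.
const needsRewrite = hits.length >= 2;
```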

&lt;p&gt;The fix came in two parts.&lt;/p&gt;

&lt;h3&gt;Part 1: Anti-AI Pattern Scan&lt;/h3&gt;

&lt;p&gt;Every section of the manuscript gets scanned against these 20 patterns. If a section triggers 2 or more patterns, it gets flagged for a constrained rewrite — the system regenerates that section with explicit instructions to avoid the detected patterns.&lt;/p&gt;

&lt;p&gt;This alone improved readability significantly. But it wasn't enough for 60K+ words.&lt;/p&gt;

&lt;h3&gt;Part 2: The Chaos Engine&lt;/h3&gt;

&lt;p&gt;This was the breakthrough. I built an agent whose only job is to inject controlled imperfection into the manuscript. The Chaos Engine adds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Irrelevant micro-obsessions&lt;/strong&gt; — a character who counts ceiling tiles during important conversations&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Failed composure management&lt;/strong&gt; — someone who laughs at a funeral, not because they're cruel, but because they're overwhelmed&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Unprompted memory intrusions&lt;/strong&gt; — mid-scene flashbacks that don't serve the plot but feel human&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Sentence rhythm breaks&lt;/strong&gt; — deliberately varying sentence length and structure&lt;/li&gt;
&lt;/ul&gt;
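
&lt;p&gt;A sketch of what bounded injection can look like. The imperfection types come from the list above; the budget and the justification strings are hypothetical:&lt;/p&gt;

```typescript
// Sketch of bounded chaos injection: pick a limited number of
// imperfections per chapter, each carrying a narrative justification.
interface Imperfection {
  kind: string;
  justification: string; // why the pattern break is allowed here
}

const palette: Imperfection[] = [
  { kind: "micro-obsession", justification: "character is avoiding the real conversation" },
  { kind: "failed-composure", justification: "grief surfacing as laughter" },
  { kind: "memory-intrusion", justification: "the setting echoes an earlier loss" },
  { kind: "rhythm-break", justification: "a tension spike calls for short sentences" },
];

// Bounded: never more than `budget` injections per chapter. A real system
// would choose by scene context; this sketch just takes the first entries.
function planChaos(budget: number): Imperfection[] {
  return palette.slice(0, Math.max(0, Math.min(budget, palette.length)));
}

const plan = planChaos(2); // two tagged imperfections for this chapter
```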

&lt;p&gt;The key word is &lt;em&gt;controlled&lt;/em&gt;. The chaos is bounded. Every imperfection is tagged with a narrative justification — the system knows why it's breaking the pattern. The result: prose that reads like it was written by a human having a bad day, not a machine having a perfect one.&lt;/p&gt;

&lt;h2&gt;Problem 2: How Do You Measure "Good" Writing?&lt;/h2&gt;

&lt;p&gt;You can't improve what you can't measure. Every book generation tool I looked at used one of two approaches:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Vibes&lt;/strong&gt; — "this sounds pretty good to me"&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Perplexity scores&lt;/strong&gt; — mathematical measures that correlate poorly with reader enjoyment&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Neither worked. I needed something calibrated against writing that actually sells.&lt;/p&gt;

&lt;h3&gt;Genesis Score V3.7&lt;/h3&gt;

&lt;p&gt;I built a scoring system calibrated against 15 bestselling novels representing 350+ million copies sold. The system measures 10 dimensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Prose quality&lt;/strong&gt; — sentence-level craft&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Character depth&lt;/strong&gt; — psychological complexity&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dialogue authenticity&lt;/strong&gt; — does this sound like a real person?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pacing&lt;/strong&gt; — scene-to-scene momentum&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Emotional resonance&lt;/strong&gt; — reader engagement potential&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;World-building&lt;/strong&gt; — setting specificity&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Voice distinctiveness&lt;/strong&gt; — author fingerprint&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Structural coherence&lt;/strong&gt; — plot architecture&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Thematic subtlety&lt;/strong&gt; — theme integration without over-signaling&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Anti-AI score&lt;/strong&gt; — absence of machine writing patterns&lt;/li&gt;
&lt;/ul&gt;
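
&lt;p&gt;Each dimension is scored independently, and the aggregation is deliberately simple. A sketch with hypothetical scores (the rule is the point; the numbers aren't real):&lt;/p&gt;

```typescript
// Sketch of Genesis Score aggregation. The dimension scores here are
// hypothetical; the final score is the lowest dimension, not the average.
const dimensionScores: Record<string, number> = {
  prose: 9.4, characterDepth: 9.1, dialogue: 8.9, pacing: 7.8,
  resonance: 9.0, worldBuilding: 9.2, voice: 8.8,
  structure: 9.3, theme: 9.0, antiAI: 9.1,
};

const values = Object.values(dimensionScores);
const genesisScore = Math.min(...values); // 7.8: pacing sets the floor
const average = values.reduce((a, b) => a + b, 0) / values.length; // roughly 8.96, which would hide the weakness

// The floor also tells the Deep Edit phase exactly where to work:
const weakestDimension = Object.entries(dimensionScores)
  .sort((a, b) => a[1] - b[1])[0][0]; // "pacing"
```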

&lt;p&gt;Your final Genesis Score is your lowest dimension. Not the average — the floor. Because a book with brilliant prose but terrible pacing is still a bad book. This floor-based approach forces the system to address weaknesses rather than compensate with strengths.&lt;/p&gt;

&lt;h2&gt;The 17-Phase Pipeline&lt;/h2&gt;

&lt;p&gt;The full pipeline runs like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Premise Analysis&lt;/strong&gt; — decompose the one-sentence idea&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Deep Research&lt;/strong&gt; — world, era, cultural context&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Character Architecture&lt;/strong&gt; — full psychological profiles&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Plot Engineering&lt;/strong&gt; — scene-by-scene structure&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Voice Calibration&lt;/strong&gt; — establish the narrative voice&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;First Draft&lt;/strong&gt; — generate raw chapters&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Anti-AI Scan&lt;/strong&gt; — detect and flag patterns&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Chaos Injection&lt;/strong&gt; — add controlled imperfection&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Character Consistency Check&lt;/strong&gt; — entity tracking&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Beta Reader Panel&lt;/strong&gt; — 5 distinct reader personas evaluate&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Revision Pass&lt;/strong&gt; — address feedback&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Genesis Scoring&lt;/strong&gt; — calibrated quality measurement&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Deep Edit&lt;/strong&gt; — targeted improvements on the lowest dimensions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Final Polish&lt;/strong&gt; — line-level editing&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Continuity Audit&lt;/strong&gt; — timeline and fact checking&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Editorial Package&lt;/strong&gt; — synopsis, query letter, metadata&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Publication Prep&lt;/strong&gt; — formatted manuscript&lt;/li&gt;
&lt;/ol&gt;
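
&lt;p&gt;In outline, the runner is straightforward. Phase names follow the list above; the post only says there are three approval checkpoints, so &lt;em&gt;where&lt;/em&gt; they sit in this sketch is my assumption, chosen for illustration:&lt;/p&gt;

```typescript
// Sketch of the pipeline runner. The checkpoint placement is a guess,
// not the actual Book Genesis configuration.
type Phase = { name: string; checkpoint?: boolean };

const pipeline: Phase[] = [
  { name: "Premise Analysis" },
  { name: "Deep Research" },
  { name: "Character Architecture" },
  { name: "Plot Engineering", checkpoint: true },   // approve the structure
  { name: "Voice Calibration" },
  { name: "First Draft", checkpoint: true },        // approve the raw draft
  { name: "Anti-AI Scan" },
  { name: "Chaos Injection" },
  { name: "Character Consistency Check" },
  { name: "Beta Reader Panel" },
  { name: "Revision Pass" },
  { name: "Genesis Scoring" },
  { name: "Deep Edit" },
  { name: "Final Polish" },
  { name: "Continuity Audit" },
  { name: "Editorial Package", checkpoint: true },  // approve before packaging
  { name: "Publication Prep" },
];

// Runs phases in order; a rejected checkpoint halts the pipeline.
function runPipeline(approve: (phase: string) => boolean): string[] {
  const completed: string[] = [];
  for (const phase of pipeline) {
    if (phase.checkpoint && !approve(phase.name)) break;
    completed.push(phase.name);
  }
  return completed;
}

const completedPhases = runPipeline(() => true); // auto-approve all checkpoints
```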

&lt;p&gt;There are 3 user approval checkpoints. You're not just hitting "go" and hoping — you review and approve at key stages.&lt;/p&gt;

&lt;h2&gt;Results&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Book 1&lt;/strong&gt;: 68,000-word Portuguese memoir → Genesis Score 9.0&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Book 2&lt;/strong&gt;: 97,000-word English fantasy → Genesis Score 9.1&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both still needed human editing. But the editing was revision, not reconstruction. The quality floor was high enough that a human editor could focus on voice and nuance rather than fixing broken plot logic or flat characters.&lt;/p&gt;

&lt;h2&gt;Try It Yourself&lt;/h2&gt;

&lt;p&gt;Book Genesis is MIT-licensed and fully open-source. All 21 skills are plain .md slash-command files — you can read, fork, and modify every one of them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Requirements&lt;/strong&gt;: Claude Code CLI with a Claude Pro ($20/mo) or Max ($100/mo) subscription.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# macOS/Linux
curl -sL https://raw.githubusercontent.com/PhilipStark/book-genesis/master/install.sh | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;# Windows
irm https://raw.githubusercontent.com/PhilipStark/book-genesis/master/install.ps1 | iex
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then just run:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;/genesis-start
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Give it a one-sentence premise and let it work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/PhilipStark/book-genesis" rel="noopener noreferrer"&gt;github.com/PhilipStark/book-genesis&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're working on long-form AI generation and have hit the convergence wall, I'd love to hear what approaches you've tried. Drop a comment or open an issue on the repo.&lt;br&gt;
Built by Felipe Lobo from Brazil. Shipping fast, building in public.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
