<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: mslugga35</title>
    <description>The latest articles on Forem by mslugga35 (@mslugga35).</description>
    <link>https://forem.com/mslugga35</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3822934%2F2fbb14a9-8a63-4644-9f56-73b6096d386d.png</url>
      <title>Forem: mslugga35</title>
      <link>https://forem.com/mslugga35</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mslugga35"/>
    <language>en</language>
    <item>
      <title>What 300 YouTube Titles Taught Me About CTR (Data from Building TitleScore)</title>
      <dc:creator>mslugga35</dc:creator>
      <pubDate>Mon, 16 Mar 2026 18:00:09 +0000</pubDate>
      <link>https://forem.com/mslugga35/what-300-youtube-titles-taught-me-about-ctr-data-from-building-titlescore-1i7h</link>
      <guid>https://forem.com/mslugga35/what-300-youtube-titles-taught-me-about-ctr-data-from-building-titlescore-1i7h</guid>
      <description>&lt;h2&gt;Background&lt;/h2&gt;

&lt;p&gt;I built &lt;a href="https://gettitlescore.com" rel="noopener noreferrer"&gt;TitleScore&lt;/a&gt; — a tool that scores YouTube titles 0–100 using a rubric fed into the Claude API. Since launching, I've had the chance to see a lot of real titles come through. Patterns emerge fast.&lt;/p&gt;

&lt;p&gt;This post is about what those patterns actually look like — the structural mistakes that reliably tank scores, and what the high-scoring alternatives have in common.&lt;/p&gt;

&lt;h2&gt;The Scoring Dimensions&lt;/h2&gt;

&lt;p&gt;TitleScore evaluates titles across five dimensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Curiosity gap&lt;/strong&gt; — does the title withhold something the viewer needs to click to get?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Front-load strength&lt;/strong&gt; — are the first 3–4 words doing real work?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Emotional stakes&lt;/strong&gt; — is something at risk, or is this just information delivery?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Specificity&lt;/strong&gt; — numbers, names, concrete outcomes vs. vague generalities&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Action orientation&lt;/strong&gt; — does the title sell the click, or describe the content?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each dimension scores 0–10. The weighted total becomes the 0–100 score.&lt;/p&gt;
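
&lt;p&gt;As a sketch of that arithmetic (the weights here are hypothetical; the post doesn't state TitleScore's actual weighting):&lt;/p&gt;

```python
# Hypothetical weights summing to 1.0 -- TitleScore's real weighting isn't published here.
WEIGHTS = {
    "curiosity_gap": 0.25,
    "front_load": 0.25,
    "emotional_stakes": 0.20,
    "specificity": 0.15,
    "action_orientation": 0.15,
}

def total_score(dimension_scores: dict[str, float]) -> float:
    """Combine 0-10 per-dimension scores into a weighted 0-100 total."""
    # Each dimension contributes weight * score * 10, so all-10s yields 100.
    return round(sum(WEIGHTS[d] * s * 10 for d, s in dimension_scores.items()), 1)
```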

&lt;h2&gt;The Patterns&lt;/h2&gt;

&lt;h3&gt;Pattern 1: Describing Instead of Selling&lt;/h3&gt;

&lt;p&gt;This is the most common low-score pattern. Titles like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"How I Film My Videos" → score: 22&lt;/li&gt;
&lt;li&gt;"My Morning Routine" → score: 18&lt;/li&gt;
&lt;li&gt;"What I Eat in a Day" → score: 31&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These describe the content accurately. They fail to make a case for &lt;em&gt;why you should click&lt;/em&gt;. Compare:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"The Camera Setup That Doubled My Watch Time (Under $400)" → score: 79&lt;/li&gt;
&lt;li&gt;"I Tracked Every Meal for 90 Days — Here's What Actually Changed" → score: 74&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same underlying content. Completely different framing.&lt;/p&gt;

&lt;h3&gt;Pattern 2: Passive Front-Loading&lt;/h3&gt;

&lt;p&gt;The first three words carry outsized weight. Titles that open with "A Look At," "All About," "Let's Talk," or "In This Video" reliably score under 3 out of 10 on front-load strength. These are filler openers — they delay the interesting part.&lt;/p&gt;

&lt;p&gt;High-scoring openers tend to be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Numbers: "5 Mistakes That...", "I Spent $10K On..."&lt;/li&gt;
&lt;li&gt;Verbs: "Stop Doing This If...", "I Quit My Job To..."&lt;/li&gt;
&lt;li&gt;Named subjects: "[Person] Just Changed...", "The [Specific Thing] That..."&lt;/li&gt;
&lt;/ul&gt;
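
&lt;p&gt;As a toy heuristic for the filler check (the phrase list is from the paragraph above; TitleScore's actual scoring is LLM-based, not a string match):&lt;/p&gt;

```python
# Filler openers named above; a plain prefix match is a crude stand-in for the rubric.
FILLER_OPENERS = ("a look at", "all about", "let's talk", "in this video")

def has_filler_opener(title: str) -> bool:
    """True if the title opens with a filler phrase that delays the hook."""
    # str.startswith accepts a tuple of prefixes.
    return title.lower().startswith(FILLER_OPENERS)
```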

&lt;h3&gt;Pattern 3: Zero Stakes&lt;/h3&gt;

&lt;p&gt;Title stakes don't have to be dramatic. They just have to make it clear that &lt;em&gt;something matters&lt;/em&gt; in this video. The best stakes patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Money: "I Lost $2,000 Doing This"&lt;/li&gt;
&lt;li&gt;Time: "I Wasted 3 Years Before Learning This"&lt;/li&gt;
&lt;li&gt;Social proof reversal: "Why I Stopped Following [Common Advice]"&lt;/li&gt;
&lt;li&gt;Transformation: "The One Change That Fixed My [Specific Problem]"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Titles without any stakes signal tend to score 15–25 points lower across the board.&lt;/p&gt;

&lt;h2&gt;Building the Rubric Into a Prompt&lt;/h2&gt;

&lt;p&gt;The technical challenge was getting Claude to apply a consistent rubric rather than generating vibes-based feedback. The approach that worked:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define each dimension with explicit scoring criteria and examples in the system prompt&lt;/li&gt;
&lt;li&gt;Require structured JSON output with per-dimension scores AND reasoning for each&lt;/li&gt;
&lt;li&gt;Temperature = 0 for consistency&lt;/li&gt;
&lt;li&gt;Include a few-shot example showing a low-score title and a high-score title with full breakdowns&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The few-shot examples made the biggest difference. Without them, Claude's scores were accurate but the reasoning was generic. With them, it mirrors the specific vocabulary of the rubric.&lt;/p&gt;
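
&lt;p&gt;A minimal sketch of that shape, assuming the official anthropic Python SDK; the prompt text, dimension keys, and model string are illustrative, not TitleScore's actual prompt:&lt;/p&gt;

```python
import json

# Illustrative skeleton -- the real rubric and few-shot examples are TitleScore's.
DIMENSIONS = ("curiosity_gap", "front_load", "emotional_stakes",
              "specificity", "action_orientation")

SYSTEM_PROMPT = (
    "You score YouTube titles. Rate each dimension 0-10 with reasoning: "
    + ", ".join(DIMENSIONS)
    + '. Respond with JSON only: {"scores": {...}, "reasoning": {...}}'
)

def parse_rubric_response(raw: str) -> dict:
    """Validate the model's JSON before trusting it downstream."""
    data = json.loads(raw)
    assert set(data["scores"]) == set(DIMENSIONS), "wrong dimension set"
    assert all(v in range(11) for v in data["scores"].values()), "score out of 0-10"
    return data

# The call itself (needs the anthropic SDK and an API key) looks roughly like:
#   msg = anthropic.Anthropic().messages.create(
#       model="claude-3-5-sonnet-20241022",
#       max_tokens=1024,
#       temperature=0,          # consistency over creativity
#       system=SYSTEM_PROMPT,   # rubric criteria + few-shot examples go here
#       messages=[{"role": "user", "content": title}],
#   )
#   data = parse_rubric_response(msg.content[0].text)
```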

&lt;h2&gt;Try It&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://gettitlescore.com" rel="noopener noreferrer"&gt;gettitlescore.com&lt;/a&gt; — free, no account required. Paste a title, get the breakdown.&lt;/p&gt;

&lt;p&gt;If the scoring feels off for your niche, I'd genuinely want to know — calibration across different content categories is an ongoing project.&lt;/p&gt;

</description>
      <category>youtube</category>
      <category>ai</category>
      <category>productivity</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Bridging the NLP Gap: How I Built a Thesis-to-Trade Translator for Prediction Markets</title>
      <dc:creator>mslugga35</dc:creator>
      <pubDate>Fri, 13 Mar 2026 20:00:40 +0000</pubDate>
      <link>https://forem.com/mslugga35/bridging-the-nlp-gap-how-i-built-a-thesis-to-trade-translator-for-prediction-markets-2led</link>
      <guid>https://forem.com/mslugga35/bridging-the-nlp-gap-how-i-built-a-thesis-to-trade-translator-for-prediction-markets-2led</guid>
      <description>

&lt;p&gt;Prediction markets like Kalshi are genuinely interesting instruments. Unlike traditional financial markets, they price discrete binary outcomes — will the Fed cut rates in Q2? Will CPI come in above 3%? The contracts are clean, the payoffs are defined, and the information embedded in prices is surprisingly rich.&lt;/p&gt;

&lt;p&gt;But there's a UX problem that I kept bumping into as someone who actually trades on these platforms: &lt;strong&gt;the gap between forming a thesis and executing a structured trade is large, and it's filled with friction&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I'd read a Fed statement, form a clear opinion — "the market is pricing in 60% odds of a May cut and that's too high given the language around persistent services inflation" — and then stare at Kalshi trying to figure out exactly how to express that as a trade. Which contract? What direction? How much? When do I exit?&lt;/p&gt;

&lt;h2&gt;The Problem Is Translation, Not Intelligence&lt;/h2&gt;

&lt;p&gt;This isn't really an intelligence problem. Most active traders on prediction markets have opinions. They have reasons. What's hard is the &lt;strong&gt;translation step&lt;/strong&gt; — going from a natural language thesis to a structured, auditable trade spec.&lt;/p&gt;

&lt;p&gt;That translation involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Market identification&lt;/strong&gt;: Finding the specific contract that best expresses the thesis&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Directional clarity&lt;/strong&gt;: Is a YES or NO position the right vehicle?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Sizing logic&lt;/strong&gt;: Given conviction level and account size, what's an appropriate position?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Entry conditions&lt;/strong&gt;: Is now the right time, or should you wait for a trigger?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Exit logic&lt;/strong&gt;: What outcome invalidates the thesis early? What's the target?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most traders do this in their head, under time pressure, and make at least one of those decisions poorly.&lt;/p&gt;

&lt;h2&gt;What I Built&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://getpredictscript.com" rel="noopener noreferrer"&gt;PredictScript&lt;/a&gt; is a single-input tool that takes a plain English thesis and returns a structured trade spec covering all five of those dimensions. Under the hood, it uses an LLM to parse intent from natural language — extracting the underlying economic view, identifying the relevant Kalshi market category, inferring directional stance, and generating entry/exit logic that's consistent with the stated thesis.&lt;/p&gt;

&lt;p&gt;The output isn't a recommendation. It's a translation. The goal is to hand you back something you can actually &lt;strong&gt;review and audit&lt;/strong&gt; before executing, rather than a gut-feel decision made on the fly.&lt;/p&gt;

&lt;h2&gt;The NLP Challenge&lt;/h2&gt;

&lt;p&gt;The interesting engineering problem is ambiguity resolution. A thesis like "fade the cut pricing" assumes domain knowledge (Fed funds futures, FOMC language, rate expectations). A thesis like "I think the Lakers miss the playoffs" is much more direct. The model has to handle both ends of that spectrum without hallucinating markets that don't exist or producing sizing logic that ignores stated conviction levels.&lt;/p&gt;

&lt;p&gt;I'm using structured output with a defined schema so the response is always parseable — market category, contract name pattern, direction, size tier (small/medium/large based on stated conviction), and a plain-English rationale for each field. That schema enforcement has been more valuable than any amount of prompt engineering for making outputs actually usable.&lt;/p&gt;
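
&lt;p&gt;That schema can be sketched as a dataclass; the field names follow the description above, while the types, the example values, and the audit check are my own assumptions:&lt;/p&gt;

```python
from dataclasses import dataclass, field
from typing import Literal

@dataclass
class TradeSpec:
    market_category: str                             # e.g. "Fed funds rate"
    contract_pattern: str                            # name pattern to match a Kalshi listing
    direction: Literal["YES", "NO"]
    size_tier: Literal["small", "medium", "large"]   # derived from stated conviction
    rationale: dict[str, str] = field(default_factory=dict)  # plain English per field

def is_auditable(spec: TradeSpec) -> bool:
    """Every structured field should carry its own reasoning for review."""
    required = {"market_category", "contract_pattern", "direction", "size_tier"}
    return required.issubset(spec.rationale)
```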

&lt;h2&gt;Where It Is Now&lt;/h2&gt;

&lt;p&gt;It's live and working at &lt;a href="https://getpredictscript.com" rel="noopener noreferrer"&gt;https://getpredictscript.com&lt;/a&gt;. It's early. The market lookup isn't connected to live Kalshi data yet, so the output is a spec you'd then go execute manually rather than a one-click trade. That's next.&lt;/p&gt;

&lt;p&gt;If you trade on prediction markets and want to try it, I'd genuinely value feedback on what the output gets wrong or what's missing from the trade spec structure.&lt;/p&gt;

</description>
      <category>fintech</category>
      <category>ai</category>
      <category>trading</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How I Used AI to Build a Personalized News Filter That Actually Saves Time</title>
      <dc:creator>mslugga35</dc:creator>
      <pubDate>Fri, 13 Mar 2026 20:00:38 +0000</pubDate>
      <link>https://forem.com/mslugga35/how-i-used-ai-to-build-a-personalized-news-filter-that-actually-saves-time-45oh</link>
      <guid>https://forem.com/mslugga35/how-i-used-ai-to-build-a-personalized-news-filter-that-actually-saves-time-45oh</guid>
      <description>&lt;h2&gt;The Problem: Too Much Signal, Not Enough Filter&lt;/h2&gt;

&lt;p&gt;Every morning I'd open my laptop and immediately feel behind. TechCrunch had 18 new posts. My RSS feed had 200+ unread items. Three newsletters I'd subscribed to months ago were sitting unread. I knew somewhere in all that noise were 3–4 stories that were genuinely relevant to what I was building — but finding them cost me almost an hour.&lt;/p&gt;

&lt;p&gt;The irony: I was building software to save people time while spending my mornings doing something a machine should obviously be doing for me.&lt;/p&gt;

&lt;h2&gt;What I Built&lt;/h2&gt;

&lt;p&gt;DawnBrief is an AI-curated morning digest. The core mechanic is simple: you select your interest categories (SaaS, AI/ML, infrastructure, fintech, developer tools, product management — there are about 15), and every morning before 7 AM, it delivers a clean email with only the stories that match.&lt;/p&gt;

&lt;p&gt;Under the hood:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pulls from 50+ curated sources (tech news, research papers, SaaS-specific blogs, funding databases)&lt;/li&gt;
&lt;li&gt;Uses an LLM to score each story's relevance against a user's interest profile&lt;/li&gt;
&lt;li&gt;Deduplicates across sources (you don't need to see the same OpenAI funding round in four different framings)&lt;/li&gt;
&lt;li&gt;Generates short, opinionated summaries — not just headlines&lt;/li&gt;
&lt;/ul&gt;
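
&lt;p&gt;The post doesn't say how deduplication is implemented; one plausible first pass is a normalized bag-of-words key (crude, since it only catches near-identical headlines rather than the same story fully reworded):&lt;/p&gt;

```python
import re

def dedup_key(title: str) -> str:
    """Normalize a headline so near-identical copies collapse to one key."""
    words = re.sub(r"[^a-z0-9 ]", "", title.lower()).split()
    return " ".join(sorted(set(words)))   # order-insensitive bag of words

def dedupe(articles: list[dict]) -> list[dict]:
    """Keep the first article seen for each normalized-title key."""
    seen, kept = set(), []
    for article in articles:
        key = dedup_key(article["title"])
        if key not in seen:
            seen.add(key)
            kept.append(article)
    return kept
```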

&lt;h2&gt;The Technical Angle&lt;/h2&gt;

&lt;p&gt;The scoring pipeline runs on a cron job. Each article gets embedded, compared against a per-user interest vector, and scored. Stories above a threshold make the digest. Stories below get dropped.&lt;/p&gt;

&lt;p&gt;The interesting engineering challenge was calibration. Early on, the AI was too conservative — users were getting 2–3 stories when they wanted 8–10. Tuned too liberal, it started including tangentially related noise. I ended up with a hybrid approach: hard category filters first, then LLM relevance scoring on the filtered set, then a "top N" cap with a floor to prevent empty digests.&lt;/p&gt;
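
&lt;p&gt;That hybrid selection step can be sketched as follows (the threshold, cap, and floor values are illustrative, not DawnBrief's):&lt;/p&gt;

```python
def select_for_digest(scored, threshold=0.7, cap=10, floor=3):
    """Threshold first, then cap, with a floor so no digest comes back empty.

    scored: (article, relevance) pairs that already passed the hard
    category filters upstream.
    """
    ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
    picked = [a for a, s in ranked if s >= threshold][:cap]
    if floor > len(picked):   # backfill with the best below-threshold stories
        picked = [a for a, _ in ranked[:floor]]
    return picked
```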

&lt;p&gt;Delivery is straightforward — templated email, sent via transactional email API, personalized per user.&lt;/p&gt;

&lt;h2&gt;What I Learned&lt;/h2&gt;

&lt;p&gt;The biggest surprise was how much users cared about &lt;strong&gt;consistency&lt;/strong&gt;. Getting it in their inbox before they reached for their phone in the morning mattered more than having perfect relevance. Reliability is the feature.&lt;/p&gt;

&lt;p&gt;Second surprise: people don't want to configure things. The first version had a detailed interest selector with 40+ options. Almost everyone picked 3-5 broad categories and never touched it again. I simplified the onboarding down to a single screen.&lt;/p&gt;

&lt;h2&gt;Try It&lt;/h2&gt;

&lt;p&gt;DawnBrief is live at &lt;a href="https://getdawnbrief.com" rel="noopener noreferrer"&gt;https://getdawnbrief.com&lt;/a&gt; — $19/month, cancel any time. If you're a developer or founder who's also drowning in tech noise, I'd genuinely love to hear what you think.&lt;/p&gt;

&lt;p&gt;What sources or categories are missing from your morning routine? Let me know in the comments.&lt;/p&gt;

</description>
      <category>saas</category>
      <category>ai</category>
      <category>news</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
