<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Funlingo</title>
    <description>The latest articles on Forem by Funlingo (@_funlingo_).</description>
    <link>https://forem.com/_funlingo_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3875168%2Fa5d14301-e8cb-4638-892d-7082ea7f98c6.png</url>
      <title>Forem: Funlingo</title>
      <link>https://forem.com/_funlingo_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/_funlingo_"/>
    <language>en</language>
    <item>
      <title>Why Your Flashcard App Is Showing You Words You Already Know — And What To Do About It</title>
      <dc:creator>Funlingo</dc:creator>
      <pubDate>Thu, 23 Apr 2026 16:20:50 +0000</pubDate>
      <link>https://forem.com/_funlingo_/why-your-flashcard-app-is-showing-you-words-you-already-know-and-what-to-do-about-it-1oj3</link>
      <guid>https://forem.com/_funlingo_/why-your-flashcard-app-is-showing-you-words-you-already-know-and-what-to-do-about-it-1oj3</guid>
      <description>&lt;p&gt;If you've used Anki, Duolingo, or any vocabulary app for more than a few months, you've had this experience:&lt;br&gt;
You're 50 cards into a review session. The app shows you "hello" for the tenth time this week. You have known "hello" for five years. You still have to mark it as easy and move on, one of the hundred little friction taxes that add up until you abandon the app entirely.&lt;br&gt;
This isn't a UX problem. It's a scheduling problem. The algorithm deciding which cards to show you is making bad decisions. And once you understand why it's making bad decisions, you can stop blaming yourself for not being "the kind of person who sticks with flashcard apps," because the apps have been lying to you about how learning works.&lt;br&gt;
This post is a decisions-and-tradeoffs walkthrough of what I learned picking a spaced-repetition algorithm for &lt;a href="https://www.getfunlingo.com/" rel="noopener noreferrer"&gt;Funlingo&lt;/a&gt; — a Chrome extension that saves words users click on while watching Netflix. It's for developers building any app that involves remembering things over time. The focus is the reasoning behind the choice.&lt;/p&gt;

&lt;p&gt;The algorithm you probably picked, and why it's wrong&lt;br&gt;
If you built a flashcard feature in the last 20 years and searched "spaced repetition algorithm," you were handed SM-2. SuperMemo shipped it in 1987. Anki uses it. Most tutorials teach it. It has been the default answer for almost four decades.&lt;br&gt;
SM-2 is also genuinely bad, and the reason it's bad is instructive.&lt;br&gt;
The algorithm works roughly like this: when you review a card and get it right, the gap until the next review multiplies by a factor — usually around 2.5x. Get it wrong, the gap collapses and you start over. Over time, a card you keep getting right gets shown less and less often, while a card you keep failing shows up constantly.&lt;br&gt;
At first glance, this is reasonable. At second glance, it has three problems that compound over months of use.&lt;br&gt;
Problem one: there is no model of forgetting. SM-2 doesn't predict how likely you are to remember a card when it's reviewed. It just multiplies numbers. Real memory follows a forgetting curve — after you learn something, your probability of recalling it drops exponentially, and the rate of that drop depends on how well-established the memory is. SM-2 assumes a review that goes well means the next interval should be 2.5x longer, regardless of whether that corresponds to a 50% chance of remembering or a 99% chance. The user can't control for retention because the algorithm has no concept of retention in the first place.&lt;br&gt;
Problem two: the difficulty signal is broken. SM-2 adjusts an "ease factor" based on your ratings, but the adjustment is so aggressive that a single bad rating can drag a card's ease factor to its floor and keep it there forever. This is why every serious Anki user ends up with "leech" cards — vocabulary that got a bad rating once, six months ago, and is now permanently stuck showing up every two days. The algorithm can't distinguish between "this word was hard on one particular day when I was tired" and "this word is fundamentally harder for me than other words." So it punishes the card permanently.&lt;br&gt;
Problem three: it refuses to learn from data. SM-2 uses the same constants for every user. Your forgetting curve and my forgetting curve are different. The Japanese word for "thank you" is easy for a Korean speaker and hard for an English speaker. SM-2 has no mechanism to adapt to who's using it. It's a 1987 algorithm running in a 2026 world where we have enormous amounts of review data and cheap statistical tools, and it ignores all of it.&lt;br&gt;
The frustrating part about SM-2 isn't that it's old. It's that the entire industry kept using it even after the research community moved on. Most flashcard apps you're using today are still running an algorithm that predates the web browser.&lt;/p&gt;
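
&lt;p&gt;To make "it just multiplies numbers" concrete, here is a minimal JavaScript sketch of an SM-2-style update. It illustrates the mechanism described above, simplified from the published 1987 description, with field names of my own choosing:&lt;/p&gt;

```javascript
// Simplified SM-2-style update: the interval grows multiplicatively on
// success and collapses on failure. Quality is 0-5 as in SuperMemo's scheme.
// This is an illustrative sketch, not the exact published formula.
function sm2Update(card, quality) {
  const c = { ease: card.ease, interval: card.interval, reps: card.reps };
  if (quality >= 3) {
    // Correct answer: first reviews get fixed gaps, then multiply by ease.
    if (c.reps === 0) c.interval = 1;
    else if (c.reps === 1) c.interval = 6;
    else c.interval = Math.round(c.interval * c.ease);
    c.reps += 1;
  } else {
    // Failure: start over. This is the "gap collapses" behavior.
    c.reps = 0;
    c.interval = 1;
  }
  // Ease adjustment: aggressive enough that bad ratings pin cards at the floor.
  c.ease = Math.max(1.3, c.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02));
  return c;
}
```

&lt;p&gt;Notice that nothing in this function models the probability of recall; the next interval is pure arithmetic on the previous one.&lt;/p&gt;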

&lt;p&gt;What changed&lt;br&gt;
Between 2022 and 2024, a research group around Jarrett Ye released a family of algorithms called FSRS — Free Spaced Repetition Scheduler. It became Anki's optional algorithm in 2023, then the default recommendation for new implementations, and is now the quiet consensus among people who actually care about this stuff.&lt;br&gt;
The insight is simple: instead of scheduling reviews based on "how well did you do last time, multiplied by an arbitrary factor," FSRS models the forgetting curve explicitly and schedules your next review at the moment your predicted retention drops to a target threshold.&lt;br&gt;
The user sets the threshold. If you want to remember 90% of what you've learned, the algorithm schedules reviews at the point you'd otherwise forget. If you're cramming for an exam and want 95% retention, reviews tighten. If you're doing casual language learning and are fine with 85% retention in exchange for fewer reviews, they loosen.&lt;br&gt;
Two things fall out of this design that SM-2 literally cannot do.&lt;br&gt;
The algorithm adapts to you. Every review you complete is a data point — "I predicted you'd remember this with 73% probability, and you actually got it right." The gap between prediction and reality updates the model. Over a few hundred reviews, the parameters converge on your forgetting curve, not a generic one. Your flashcard app gets smarter the longer you use it.&lt;br&gt;
The user has a meaningful dial. Target retention is a real, understandable parameter. A learner can reason about it: "I'm preparing for a test in three weeks, I want 95% retention" or "I'm doing this for fun, 85% is fine." SM-2 has no equivalent. The user's only knob is "do I click Hard or Good," which is confusing, unstable, and punishes them for being honest about a card being difficult.&lt;/p&gt;
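
&lt;p&gt;The scheduling idea is compact enough to sketch. The snippet below uses the common exponential simplification of the forgetting curve; real FSRS fits a power curve with many learned parameters, so treat this as an illustration of the principle rather than the algorithm itself. "Stability" here means the interval at which predicted retention has fallen to 90%:&lt;/p&gt;

```javascript
// Predicted retention after t days for a memory with stability s,
// using the exponential simplification R(t) = 0.9^(t / s).
function retention(t, s) {
  return Math.pow(0.9, t / s);
}

// Schedule the next review at the moment predicted retention drops to the
// user's target. Solving 0.9^(t/s) = target for t gives the interval.
function nextIntervalDays(stability, targetRetention) {
  return Math.max(1, Math.round(stability * Math.log(targetRetention) / Math.log(0.9)));
}
```

&lt;p&gt;With a stability of 10 days, a 90% target schedules the review in 10 days, 95% tightens it to 5, and 85% loosens it to 15, which is exactly the dial described above.&lt;/p&gt;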

&lt;p&gt;The part most algorithm posts skip: what it does to the app&lt;br&gt;
If all I've told you is "use FSRS instead of SM-2," you'd be fine implementing it and shipping. But the interesting product decisions are downstream of the algorithm choice, and most articles don't cover them.&lt;br&gt;
Here's what changes when you switch.&lt;br&gt;
The four review buttons start meaning something&lt;br&gt;
In an SM-2 app, the user sees four buttons — Again, Hard, Good, Easy — and has no idea what they do. They click "Good" by reflex. Sometimes they click "Easy" on a card they found easy and the app punishes them for it: the interval explodes so far that the card is forgotten before it ever comes back. (The opposite trap, where Hard and Again ratings drag a card's ease down until it shows up constantly, is what Anki users call "ease hell.")&lt;br&gt;
FSRS exposes predicted next-review dates for each button. A well-designed FSRS app shows the user: "Again (10 min), Hard (2 days), Good (18 days), Easy (2 months)." Now the rating means something. The user can reason about the decision. Mis-clicks on Easy drop dramatically.&lt;br&gt;
This is a small UI change that depends entirely on the algorithm underneath. SM-2 can compute these numbers too, but they're so arbitrary (just multiplications of the current interval) that exposing them makes the mechanism feel fake. FSRS's numbers feel like real predictions because they are.&lt;br&gt;
The "new card" state becomes a separate surface&lt;br&gt;
FSRS distinguishes between four card states: new (never seen), learning (seen once or twice, still unstable), review (stabilized in memory), and relearning (failed a review, rebuilding). These aren't just internal states — they correspond to genuinely different experiences for the user.&lt;br&gt;
Most flashcard apps ignore this and pile all four into a single review queue, which feels chaotic. Your brain has to context-switch between "here's a word I've never seen before, let me study it" and "here's a word I'm recalling from three weeks ago, let me test myself." Different mental operations.&lt;br&gt;
The FSRS-friendly UI separates them: a "new words from this week's shows" section and a "words due for review" section, visually distinct. Users model them differently, which matches how the algorithm treats them internally, which produces a session that feels calm instead of frantic.&lt;br&gt;
Duplicate clicks stop breaking the model&lt;br&gt;
This one is specific to any app where your data source is "user interacted with a word," as opposed to a deliberate review session.&lt;br&gt;
Every day, users click the same word twice within 30 seconds. They forget the meaning, click, look at the popup, click away, then click it again because they forgot what they just read. This is normal human behavior. It also destroys a spaced-repetition model because the algorithm thinks you just reviewed the same card twice in rapid succession and triples its stability score. Next time you see the card is six weeks out. You've absolutely forgotten it by then.&lt;br&gt;
The fix is a debounce rule at the interaction layer: any click within a minute of the last click on the same word shows the definition but doesn't count as a review. This is the kind of rule that looks like a hack but actually exists because the algorithm assumes stationary, spaced conditions and real user behavior doesn't have that. A lot of production ML and stats work looks like this — the math assumes an idealized input stream, and half the engineering is about shaping real data to match the assumption.&lt;br&gt;
You have to store history&lt;br&gt;
SM-2 is stateless per card — you only need the current "ease factor" and "interval" to compute the next review. FSRS needs the full review history to train per-user parameters.&lt;br&gt;
This is a real data model change. If your database currently stores one row per card with the current state, you need to add a review-log table with a row per review. For a small app this is trivial. For a large app with years of existing user data, you'll either need to backfill placeholder logs or run the two algorithms in parallel during a migration period.&lt;br&gt;
I mention this because it's the kind of thing that only shows up when you try to implement FSRS in an app that was originally built on SM-2. Greenfield projects don't hit it. Migrations do.&lt;/p&gt;
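
&lt;p&gt;Of the changes above, the debounce rule is the smallest to implement. A sketch, with the caveat that the one-minute window is the threshold I settled on rather than a universal constant:&lt;/p&gt;

```javascript
const DEBOUNCE_MS = 60 * 1000; // one-minute window

// Tracks the last counted interaction per word. A click inside the window
// still shows the definition, but returns false so the scheduler ignores it.
function makeClickFilter() {
  const lastCounted = new Map();
  return function countsAsReview(word, nowMs) {
    const prev = lastCounted.get(word);
    if (prev !== undefined) {
      if (nowMs - prev >= DEBOUNCE_MS) {
        lastCounted.set(word, nowMs);
        return true;
      }
      return false; // within the window: show the definition, skip the model
    }
    lastCounted.set(word, nowMs);
    return true;
  };
}
```

&lt;p&gt;The filter sits at the interaction layer, in front of whatever records review logs, so the scheduler only ever sees properly spaced events.&lt;/p&gt;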

&lt;p&gt;The decisions that aren't obvious from the outside&lt;br&gt;
A few calls went into &lt;a href="https://www.getfunlingo.com/" rel="noopener noreferrer"&gt;Funlingo&lt;/a&gt;'s implementation that I don't see discussed publicly and that you'll have to make if you ship this in your own app.&lt;br&gt;
Target retention default. The reference implementation suggests 90%. This is fine for serious learners and punishing for casual ones. A casual Netflix language learner will do 30–50 reviews a day at 90% retention, which is enough to feel like a chore. At 85%, they'll do closer to 15–25 a day and stay engaged. At 80%, retention feels noticeably worse but review volume becomes negligible. I ended up defaulting to 85% for casual users and exposing the dial in settings for power users. The "right" default is the one that matches your user's commitment level, not the algorithm's mathematical optimum.&lt;br&gt;
How honest to be about the forgetting curve. FSRS can predict, for any given card, "you have a 64% chance of remembering this right now." That's genuinely useful information. It's also genuinely demoralizing to expose. If a user sees "your retention on Spanish vocabulary is 71%," they feel like they're failing at Spanish. Better: show them a chart of their retention over time, with a target line, so the number feels like a goal rather than a grade. The algorithm gives you the data; the product question is how much to surface.&lt;br&gt;
When to retrain per-user parameters. The reference answer is "after ~1,000 review logs, the user has enough data to warrant optimized parameters." In practice, this is a lot of reviews — most casual users never hit it. So you're running defaults forever for 80% of your user base, which is fine, because defaults are good. The user for whom per-user parameters matter is the power user who's done thousands of reviews, and they're also the user most likely to notice the improvement, so the upgrade feels real when it lands. I'd recommend shipping FSRS with defaults for everyone, then retraining in the background for users who cross the threshold and quietly swapping their parameters. Don't ask them. Just do it better.&lt;br&gt;
How to handle cards you never want to retire. SM-2 can't express "this card is important enough that I want to review it at least every N days regardless of how well I know it." FSRS can — it's a ceiling on the interval. Useful for, say, grammar rules you want to keep sharp even if you've mastered them. Nobody asks for this feature, but power users discover it and become evangelists.&lt;/p&gt;
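
&lt;p&gt;That last behavior is just a per-card cap applied after scheduling. A sketch, where maxIntervalDays is a hypothetical per-card field:&lt;/p&gt;

```javascript
// Clamp a scheduled interval to a per-card ceiling, so important cards
// (grammar rules, core vocabulary) come back at least every N days no
// matter how stable they are. Cards without a ceiling are unaffected.
function clampInterval(intervalDays, maxIntervalDays) {
  if (maxIntervalDays === undefined) return intervalDays;
  return Math.min(intervalDays, maxIntervalDays);
}
```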

&lt;p&gt;When FSRS is overkill&lt;br&gt;
I should be honest about this because Dev.to readers will call it out otherwise.&lt;br&gt;
FSRS is not the right choice for every flashcard context.&lt;br&gt;
Short-term use cases don't benefit. If you're building a trivia quiz that gets used for two weeks before an exam, SM-2 or even rule-based scheduling is fine. FSRS's advantages compound over months and years. In the first two weeks, the difference is marginal.&lt;br&gt;
Massed practice defeats every scheduler. If your users are cramming — doing 500 reviews in a single night — no algorithm helps them remember long-term. The whole premise of spaced repetition is that reviews are spread out. If your product encourages cramming (because of gamification, streaks, or social pressure), you have a product design problem, not an algorithm problem, and switching to FSRS won't fix it.&lt;br&gt;
Small user bases don't generate enough data for per-user optimization. If your product has 100 active learners, per-user FSRS parameters won't outperform the defaults. Population-level FSRS with defaults still beats SM-2, so this isn't an argument against switching — just an argument against overinvesting in optimization before you have the users to benefit from it.&lt;br&gt;
The user experience assumes deliberate review. If your product surfaces words opportunistically — "here's a word you saved last week, while you're on the homepage" — you're inventing a third mode that isn't really "scheduled review" or "free practice," and FSRS's scheduling logic doesn't cleanly apply. You'll need to decide whether opportunistic exposures count as reviews or not. (My answer, in Funlingo: they don't. They're free exposure that may or may not strengthen the memory, but I don't want to pollute the model with events the user didn't consciously opt into.)&lt;/p&gt;

&lt;p&gt;The part I care about most&lt;br&gt;
The technical question — "which spaced repetition algorithm should I pick?" — is interesting but not the most important question.&lt;br&gt;
The important question is: when a language learner saves a word from a Netflix show, what are the chances that word ever turns into durable long-term memory?&lt;br&gt;
With SM-2, the answer is roughly 60–70% for the words that get reviewed — but most saved words never get reviewed, because the review pile becomes overwhelming and users abandon the app. With FSRS plus thoughtful UX (separating new cards from reviews, exposing predicted intervals, debouncing accidental reviews), both numbers improve. Review pile size stays manageable. Retention per reviewed card goes up. Users come back. Words stick.&lt;br&gt;
I don't have a rigorous study to share — my user base is too small to publish statistically significant numbers — but the qualitative shift is unambiguous. People who bounced off Anki use Funlingo's review system. That was never true of the SM-2 version I shipped first.&lt;br&gt;
Algorithms aren't just about efficiency. They're about whether your app produces the outcome it claims to produce. A flashcard app that schedules badly is, in a real sense, lying to its users — promising them learning while delivering rote repetition that doesn't stick. The industry standard was lying for nearly four decades, politely, without knowing it. The current standard is better. If you're building anything with a "remember this over time" surface, the algorithm you pick shapes whether your users actually learn anything.&lt;br&gt;
Pick the better one.&lt;/p&gt;

&lt;p&gt;Discussion&lt;br&gt;
I'd genuinely like to hear from people who've shipped spaced repetition in production:&lt;br&gt;
Has FSRS become your default, or are you still running SM-2? I'm curious whether the migration cost is what's keeping most apps on the old algorithm, or whether it's just inertia. Or whether there's a case for SM-2 I'm missing.&lt;br&gt;
What's the biggest UX mistake you see flashcard apps making around retention? Every implementation I've seen has different opinions on how much to surface the target-retention parameter. Some hide it entirely, some put it in advanced settings. What's worked?&lt;br&gt;
Has anyone built a good "review budget" feature? Meaning: "I have 20 minutes today, show me the cards that will give me the highest retention gain in that time, not the cards that are technically due." This feels like the obvious next evolution. I haven't seen it shipped anywhere.&lt;br&gt;
Drop your thoughts below. I reply to everyone.&lt;/p&gt;

&lt;p&gt;Further reading&lt;br&gt;
Anki's FSRS documentation — the clearest non-academic explanation of what the algorithm does, written for end users but worth reading for builders&lt;br&gt;
The Open Spaced Repetition project — umbrella for community implementations across languages and frameworks; good starting point if you're evaluating options&lt;br&gt;
Jarrett Ye's original FSRS paper — the underlying math, if that's your thing&lt;br&gt;
The SuperMemo research archive — the papers that laid the foundation for all modern spaced repetition, starting with SM-0 in 1985&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.getfunlingo.com/" rel="noopener noreferrer"&gt;Funlingo&lt;/a&gt; is a free Chrome extension that saves words from Netflix, YouTube, and Amazon Prime and schedules reviews using FSRS. This post is Part 2 of a series on building it. Part 1 covered the subtitle injection technique.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>developer</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>How Chrome Extensions Inject Dual Subtitles into Netflix (And Why It’s Harder Than It Looks)</title>
      <dc:creator>Funlingo</dc:creator>
      <pubDate>Sat, 18 Apr 2026 19:44:15 +0000</pubDate>
      <link>https://forem.com/_funlingo_/how-chrome-extensions-inject-dual-subtitles-into-netflix-and-why-its-harder-than-it-looks-2216</link>
      <guid>https://forem.com/_funlingo_/how-chrome-extensions-inject-dual-subtitles-into-netflix-and-why-its-harder-than-it-looks-2216</guid>
      <description>&lt;p&gt;Dual subtitles on Netflix are not a built-in feature. Chrome extensions do not magically “add” a second subtitle track either. In practice, they observe subtitle data, normalize it, render a second overlay on top of the player, and keep everything synced while dealing with Netflix’s SPA behavior, player changes, and timing issues.&lt;/p&gt;

&lt;p&gt;This post breaks down the core engineering ideas behind that experience, based on publicly observable browser behavior and standard Chrome APIs — and the kind of work tools like &lt;a href="https://www.getfunlingo.com/" rel="noopener noreferrer"&gt;Funlingo&lt;/a&gt; are doing behind the scenes.&lt;/p&gt;

&lt;p&gt;If you have ever used a language-learning extension on Netflix, you have probably wondered:&lt;/p&gt;

&lt;p&gt;How is this actually working?&lt;/p&gt;

&lt;p&gt;Tools like Language Reactor, Trancy, and &lt;a href="https://www.getfunlingo.com/" rel="noopener noreferrer"&gt;Funlingo&lt;/a&gt; make dual subtitles look effortless. But under the hood, there is no simple Netflix API that says, “please show this in two languages.”&lt;/p&gt;

&lt;p&gt;That means the extension has to work around the platform, not with a clean official integration.&lt;/p&gt;

&lt;p&gt;And that is where things get interesting.&lt;br&gt;
Because what looks like a small UI feature is actually a mix of:&lt;br&gt;
browser extension architecture&lt;br&gt;
subtitle parsing&lt;br&gt;
overlay rendering&lt;br&gt;
sync logic&lt;br&gt;
and a lot of platform-specific edge cases&lt;/p&gt;

&lt;p&gt;The naive approach&lt;/p&gt;

&lt;p&gt;The first idea most developers have is simple:&lt;br&gt;
Grab the video element and add another subtitle track.&lt;/p&gt;

&lt;p&gt;That sounds reasonable.&lt;br&gt;
In a normal web app, it might even work.&lt;br&gt;
But in Netflix, it usually does not.&lt;/p&gt;

&lt;p&gt;Why?&lt;br&gt;
Because Netflix tightly controls the media experience. The player manages subtitle rendering, state, and lifecycle internally. Even if the DOM accepts changes, the player can ignore them, overwrite them, or rebuild itself during navigation.&lt;/p&gt;

&lt;p&gt;So the real solution is not:&lt;/p&gt;

&lt;p&gt;“Add a second subtitle track to the video.”&lt;/p&gt;

&lt;p&gt;The real solution looks more like:&lt;/p&gt;

&lt;p&gt;capture subtitle data&lt;br&gt;
translate or normalize it&lt;br&gt;
render your own overlay&lt;br&gt;
keep it synced with playback&lt;/p&gt;

&lt;p&gt;That shift in thinking is what turns a simple idea into a real system.&lt;/p&gt;

&lt;p&gt;Step 1: Understand the extension context problem&lt;/p&gt;

&lt;p&gt;One of the first things that trips up developers is the difference between:&lt;/p&gt;

&lt;p&gt;the content script world&lt;br&gt;
and the page’s main world&lt;/p&gt;

&lt;p&gt;Chrome extensions run in an isolated environment. That means you can access the DOM, but not always the internal JavaScript logic of the page.&lt;/p&gt;

&lt;p&gt;On Netflix, that matters.&lt;/p&gt;

&lt;p&gt;The player logic lives inside the page context. So many extensions — including tools like &lt;a href="https://www.getfunlingo.com/" rel="noopener noreferrer"&gt;Funlingo&lt;/a&gt; — create a bridge by injecting scripts into the page itself.&lt;/p&gt;

&lt;p&gt;This allows the extension to observe and interact with the player in ways that would not be possible otherwise.&lt;/p&gt;

&lt;p&gt;At this point, the extension is no longer just “adding UI.”&lt;br&gt;
It is coordinating between two different execution environments.&lt;/p&gt;

&lt;p&gt;Step 2: Capture subtitle data&lt;/p&gt;

&lt;p&gt;To display dual subtitles, the extension needs structured subtitle information:&lt;/p&gt;

&lt;p&gt;start time&lt;br&gt;
end time&lt;br&gt;
text&lt;br&gt;
language&lt;/p&gt;

&lt;p&gt;This data becomes the foundation of everything:&lt;/p&gt;

&lt;p&gt;syncing subtitles to video&lt;br&gt;
rendering overlays&lt;br&gt;
translating text&lt;br&gt;
enabling learning features&lt;/p&gt;

&lt;p&gt;The key idea here is normalization.&lt;/p&gt;

&lt;p&gt;Different platforms provide subtitles in different formats. If you try to handle each format separately across your system, things quickly become messy.&lt;/p&gt;

&lt;p&gt;So most robust systems — including those behind tools like &lt;a href="https://www.getfunlingo.com/" rel="noopener noreferrer"&gt;Funlingo&lt;/a&gt; — convert everything into a consistent internal structure early.&lt;/p&gt;

&lt;p&gt;That makes the rest of the system predictable and easier to maintain.&lt;/p&gt;
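
&lt;p&gt;A sketch of what "convert everything into a consistent internal structure" can look like. The input shape here is hypothetical; the point is that every source converges on one cue shape before the rest of the system sees it:&lt;/p&gt;

```javascript
// Normalize one raw cue into a single internal shape. Whatever the source
// format provides, the rest of the system only ever sees
// { startMs, endMs, text, lang }.
function normalizeCue(raw, lang) {
  return {
    startMs: Math.round(raw.start * 1000), // seconds to milliseconds
    endMs: Math.round(raw.end * 1000),
    text: raw.text.replace(/\s+/g, " ").trim(), // collapse stray whitespace
    lang: lang,
  };
}
```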

&lt;p&gt;Step 3: Parsing is harder than it looks&lt;/p&gt;

&lt;p&gt;Subtitle formats like WebVTT look simple at first.&lt;/p&gt;

&lt;p&gt;They are not.&lt;/p&gt;

&lt;p&gt;Real subtitle files include:&lt;/p&gt;

&lt;p&gt;timing metadata&lt;br&gt;
formatting tags&lt;br&gt;
speaker labels&lt;br&gt;
positioning instructions&lt;br&gt;
encoded characters&lt;/p&gt;

&lt;p&gt;If you do not handle these properly, subtitles break in subtle ways:&lt;/p&gt;

&lt;p&gt;missing words&lt;br&gt;
incorrect formatting&lt;br&gt;
broken timing&lt;br&gt;
inconsistent display&lt;/p&gt;

&lt;p&gt;The important principle here is:&lt;/p&gt;

&lt;p&gt;Normalize early and cleanly.&lt;/p&gt;

&lt;p&gt;Once the subtitle data is reliable, everything else becomes easier.&lt;/p&gt;
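
&lt;p&gt;As a small taste of the timing piece, here is a sketch of parsing a WebVTT timestamp into milliseconds. Real cue lines also carry settings, formatting tags, and speaker labels, which is where the subtle breakage above comes from:&lt;/p&gt;

```javascript
// Parse a WebVTT timestamp like "00:01:02.500" (or the short "01:02.500"
// form, where hours are omitted) into milliseconds.
function vttTimestampToMs(ts) {
  const parts = ts.split(":");
  const seconds = parseFloat(parts[parts.length - 1]); // "02.500"
  const minutes = parseInt(parts[parts.length - 2], 10);
  const hours = parts.length === 3 ? parseInt(parts[0], 10) : 0;
  return Math.round(((hours * 60 + minutes) * 60 + seconds) * 1000);
}
```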

&lt;p&gt;Step 4: Rendering the second subtitle layer&lt;/p&gt;

&lt;p&gt;Once you have the subtitle data, you still need to display it.&lt;/p&gt;

&lt;p&gt;The common approach is to create a separate overlay layer on top of the video player.&lt;/p&gt;

&lt;p&gt;This overlay behaves like a second subtitle system:&lt;/p&gt;

&lt;p&gt;positioned relative to the player&lt;br&gt;
styled for readability&lt;br&gt;
layered above or below native subtitles&lt;br&gt;
responsive to screen changes&lt;/p&gt;

&lt;p&gt;This is where things start to feel like product design, not just engineering.&lt;/p&gt;

&lt;p&gt;Because the goal is not just to show text.&lt;/p&gt;

&lt;p&gt;The goal is to make it:&lt;/p&gt;

&lt;p&gt;readable&lt;br&gt;
non-intrusive&lt;br&gt;
aligned with the original subtitles&lt;br&gt;
useful for learning&lt;/p&gt;

&lt;p&gt;This is one of the areas where tools like &lt;a href="https://www.getfunlingo.com/" rel="noopener noreferrer"&gt;Funlingo&lt;/a&gt; differentiate — not just showing translations, but integrating them in a way that feels natural during content consumption.&lt;/p&gt;

&lt;p&gt;Step 5: Keeping everything in sync&lt;/p&gt;

&lt;p&gt;Sync is where most implementations break.&lt;/p&gt;

&lt;p&gt;The extension needs to constantly match the current video time with the correct subtitle.&lt;/p&gt;

&lt;p&gt;If it updates too frequently:&lt;/p&gt;

&lt;p&gt;performance issues&lt;br&gt;
jittery UI&lt;/p&gt;

&lt;p&gt;If it updates too slowly:&lt;/p&gt;

&lt;p&gt;subtitles feel delayed&lt;br&gt;
user experience breaks&lt;/p&gt;

&lt;p&gt;The challenge becomes even harder when:&lt;/p&gt;

&lt;p&gt;playback speed changes&lt;br&gt;
buffering happens&lt;br&gt;
users skip forward or backward&lt;/p&gt;

&lt;p&gt;Good implementations aim for frame-level accuracy so that subtitles feel tightly connected to the video.&lt;/p&gt;

&lt;p&gt;This is what makes the experience feel “native” instead of “layered on top.”&lt;/p&gt;
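
&lt;p&gt;One way to keep the per-frame lookup cheap is a binary search over cues sorted by start time, so matching the current playback time never scans the whole episode. A sketch, assuming non-overlapping cues in a normalized { startMs, endMs } shape:&lt;/p&gt;

```javascript
// Find the cue active at time tMs in a list sorted by startMs, using binary
// search so lookups stay cheap even for long episodes.
// Returns null when no cue covers tMs (a gap between lines).
function activeCue(cues, tMs) {
  let lo = 0;
  let hi = cues.length - 1;
  while (hi >= lo) {
    const mid = (lo + hi) >> 1;
    const cue = cues[mid];
    if (cue.startMs > tMs) {
      hi = mid - 1; // cue starts in the future: search earlier cues
    } else if (cue.endMs > tMs) {
      return cue; // started at or before tMs, ends after it: active
    } else {
      lo = mid + 1; // cue already ended: search later cues
    }
  }
  return null;
}
```

&lt;p&gt;Returning null for gaps between lines matters: the overlay should clear when no cue is active rather than hold the previous line on screen.&lt;/p&gt;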

&lt;p&gt;Why the simple version breaks in production&lt;/p&gt;

&lt;p&gt;The real complexity comes from platform behavior.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Netflix is a single-page app&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Netflix does not fully reload pages when navigating.&lt;/p&gt;

&lt;p&gt;That means your extension can break silently when users switch content.&lt;/p&gt;

&lt;p&gt;The extension must continuously detect and reinitialize itself when the player changes.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Subtitle timing drift&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Even small timing mismatches become noticeable over time.&lt;/p&gt;

&lt;p&gt;Keeping subtitles aligned requires constant correction and careful update logic.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Users expect interaction&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Modern tools are not just displaying subtitles.&lt;/p&gt;

&lt;p&gt;They allow users to:&lt;/p&gt;

&lt;p&gt;click words&lt;br&gt;
save vocabulary&lt;br&gt;
explore meanings&lt;br&gt;
learn contextually&lt;/p&gt;

&lt;p&gt;This turns subtitles into an interactive learning layer.&lt;/p&gt;

&lt;p&gt;That is a big shift from simple rendering to full product experience — something tools like &lt;a href="https://www.getfunlingo.com/" rel="noopener noreferrer"&gt;Funlingo&lt;/a&gt; are built around.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Translation introduces latency&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Translation is not instant.&lt;/p&gt;

&lt;p&gt;If every subtitle triggers a request, performance becomes a problem.&lt;/p&gt;

&lt;p&gt;So most systems:&lt;/p&gt;

&lt;p&gt;translate ahead of time&lt;br&gt;
cache results&lt;br&gt;
minimize repeated work&lt;/p&gt;

&lt;p&gt;This keeps the experience smooth even during long viewing sessions.&lt;/p&gt;
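
&lt;p&gt;The "cache results" step can be as simple as memoizing on the text plus language pair. A sketch, where translate is a stand-in for whatever backend call you use (synchronous here to keep the sketch short; production code would await it):&lt;/p&gt;

```javascript
// Cache translations so each distinct line is translated only once per
// language pair. "translate" is a placeholder for the real backend call;
// in production it would be async, but the caching logic is identical.
function makeCachedTranslator(translate) {
  const cache = new Map();
  return function cached(text, from, to) {
    const key = from + ":" + to + ":" + text;
    if (!cache.has(key)) {
      cache.set(key, translate(text, from, to));
    }
    return cache.get(key);
  };
}
```

&lt;p&gt;Because subtitle lines repeat across rewinds and re-watches, even this naive cache eliminates most duplicate requests.&lt;/p&gt;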

&lt;p&gt;Why this matters for language learning&lt;/p&gt;

&lt;p&gt;This is the real reason dual subtitles exist.&lt;/p&gt;

&lt;p&gt;Users are not trying to “improve subtitles.”&lt;/p&gt;

&lt;p&gt;They are trying to learn from real content without:&lt;/p&gt;

&lt;p&gt;pausing constantly&lt;br&gt;
switching tabs&lt;br&gt;
losing context&lt;/p&gt;

&lt;p&gt;That is the core idea behind &lt;a href="https://www.getfunlingo.com/" rel="noopener noreferrer"&gt;Funlingo&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;keep content natural&lt;br&gt;
keep learning contextual&lt;br&gt;
reduce friction&lt;/p&gt;

&lt;p&gt;The technical complexity exists because the learning experience is genuinely valuable.&lt;/p&gt;

&lt;p&gt;The biggest lesson&lt;/p&gt;

&lt;p&gt;What looks like a simple feature is actually a small system:&lt;/p&gt;

&lt;p&gt;page lifecycle handling&lt;br&gt;
subtitle normalization&lt;br&gt;
overlay rendering&lt;br&gt;
sync logic&lt;br&gt;
translation caching&lt;br&gt;
user interaction&lt;/p&gt;

&lt;p&gt;That is why dual subtitle extensions are harder to build than they appear.&lt;/p&gt;

&lt;p&gt;And that is also why the good ones — like &lt;a href="https://www.getfunlingo.com/" rel="noopener noreferrer"&gt;Funlingo&lt;/a&gt; — feel so seamless.&lt;/p&gt;

&lt;p&gt;When they work well, users do not think about the engineering.&lt;/p&gt;

&lt;p&gt;They just feel like Netflix has become a learning platform.&lt;/p&gt;

&lt;p&gt;Closing thought&lt;/p&gt;

&lt;p&gt;If you are building in this space, the challenge is not adding text to a screen.&lt;/p&gt;

&lt;p&gt;The real challenge is:&lt;/p&gt;

&lt;p&gt;making it survive a complex streaming platform&lt;br&gt;
keeping everything perfectly synced&lt;br&gt;
and delivering enough value that users come back&lt;/p&gt;

&lt;p&gt;That is the real engineering problem.&lt;/p&gt;

&lt;p&gt;And honestly, it is a fun one.&lt;/p&gt;

&lt;p&gt;If you have built a Chrome extension on top of a modern SPA or media player, I would genuinely love to know:&lt;/p&gt;

&lt;p&gt;What broke first for you?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>I Built a Free Chrome Extension With No Monetization Plan. It Now Has Thousands of Daily Users.</title>
      <dc:creator>Funlingo</dc:creator>
      <pubDate>Sun, 12 Apr 2026 16:53:04 +0000</pubDate>
      <link>https://forem.com/_funlingo_/i-built-a-free-chrome-extension-with-no-monetization-plan-it-now-has-thousands-of-daily-users-2491</link>
      <guid>https://forem.com/_funlingo_/i-built-a-free-chrome-extension-with-no-monetization-plan-it-now-has-thousands-of-daily-users-2491</guid>
      <description>&lt;p&gt;I Built a Free Chrome Extension With No Monetization Plan. It Now Has Thousands of Daily Users.&lt;br&gt;
No funding. No co-founder. No revenue model. Just a browser extension that solved a real problem for language learners — and grew because people kept recommending it.&lt;br&gt;
I didn’t start &lt;a href="https://www.getfunlingo.com/" rel="noopener noreferrer"&gt;Funlingo&lt;/a&gt; because I had a master plan for the language-learning market.&lt;br&gt;
I started it because I was annoyed.&lt;br&gt;
I was trying to learn through Netflix and YouTube, and the subtitle tools I found all seemed to fall into one of three buckets:&lt;br&gt;
broken&lt;br&gt;
abandoned&lt;br&gt;
or paywalled for something that felt like it should be simple&lt;br&gt;
That frustration turned into a Chrome extension.&lt;br&gt;
The Chrome extension turned into a product.&lt;br&gt;
And the product eventually reached thousands of daily users.&lt;br&gt;
No funding.&lt;br&gt;
No co-founder.&lt;br&gt;
No polished monetization plan.&lt;br&gt;
No big launch strategy.&lt;br&gt;
Just a product that solved a real pain point, and a growth loop that worked better than I expected.&lt;br&gt;
The actual product thesis&lt;br&gt;
Before building, I spent a lot of time reading Reddit threads from language learners.&lt;br&gt;
Not casually. Properly.&lt;br&gt;
I looked through discussions in communities where people were already asking questions like:&lt;br&gt;
&lt;a href="https://www.getfunlingo.com/blog/best-dual-subtitle-extension" rel="noopener noreferrer"&gt;how do I get dual subtitles&lt;/a&gt; on Netflix?&lt;br&gt;
what’s the best free alternative to Language Reactor?&lt;br&gt;
how can I learn through YouTube content?&lt;br&gt;
which tools still work?&lt;br&gt;
The pattern was obvious.&lt;br&gt;
The market gap was not:&lt;br&gt;
“nobody has built subtitle tools.”&lt;br&gt;
The gap was:&lt;br&gt;
“people want a tool that is free, reliable, easy to use, and works where they already watch content.”&lt;br&gt;
That became the entire thesis behind Funlingo.&lt;br&gt;
Not “build the most advanced language-learning product.”&lt;br&gt;
Just solve the actual problem properly.&lt;br&gt;
The first version was not impressive&lt;br&gt;
The MVP was small and not particularly pretty.&lt;br&gt;
It supported Netflix first.&lt;br&gt;
The interface was basic.&lt;br&gt;
The feature set was limited.&lt;br&gt;
But it did the one thing that mattered:&lt;br&gt;
It worked.&lt;br&gt;
That mattered more than polish.&lt;br&gt;
A lot of early products fail because they try to look complete before they become useful.&lt;br&gt;
I got lucky here by focusing more on function than presentation.&lt;br&gt;
The growth engine I didn’t plan&lt;br&gt;
I didn’t do a polished launch campaign.&lt;br&gt;
I didn’t rely on ads.&lt;br&gt;
I didn’t build some big founder-content machine before the product had traction.&lt;br&gt;
What actually drove growth was much simpler.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The product was genuinely useful&lt;br&gt;
Not revolutionary. Not magical. Just useful.&lt;br&gt;
That matters.&lt;br&gt;
In a space where many tools are abandoned, inconsistent, or paywalled, usefulness plus reliability becomes a real differentiator.&lt;/li&gt;
&lt;li&gt;It was free&lt;br&gt;
This mattered more than I expected. When something is fully free, recommendation friction drops dramatically.&lt;br&gt;
People don’t have to explain pricing.&lt;br&gt;
They don’t have to justify the spend.&lt;br&gt;
They don’t have to say “it’s worth it if you…”&lt;br&gt;
They can just say: try this, it works.&lt;br&gt;
That made word of mouth much easier.&lt;/li&gt;
&lt;li&gt;I showed up where demand already existed&lt;br&gt;
Instead of trying to invent demand, I focused on places where people were already asking for solutions:&lt;br&gt;
Reddit threads&lt;br&gt;
search queries&lt;br&gt;
comparison intent&lt;br&gt;
educational use-case searches&lt;br&gt;
That pushed me toward content and SEO, not just distribution.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What actually helped the growth&lt;br&gt;
A few decisions mattered much more than others.&lt;/p&gt;

&lt;p&gt;Decision 1: Stay fully free&lt;br&gt;
This went against the usual advice. A lot of indie advice says:&lt;br&gt;
charge early&lt;br&gt;
validate willingness to pay&lt;br&gt;
don’t avoid monetization&lt;br&gt;
And in many cases, that advice is right. But this market had different dynamics:&lt;br&gt;
low switching costs&lt;br&gt;
lots of alternatives&lt;br&gt;
highly price-sensitive users&lt;br&gt;
strong word-of-mouth potential&lt;br&gt;
Being free was not just generosity. It was part of the growth model.&lt;/p&gt;

&lt;p&gt;Decision 2: Support both Netflix and YouTube early&lt;br&gt;
That made the product much more useful than a one-platform tool. People do not learn through just one type of content. They explore, binge, sample, repeat, and switch contexts. So supporting both platforms early made Funlingo more recommendation-worthy.&lt;/p&gt;

&lt;p&gt;Decision 3: Build for more languages, not just the obvious ones&lt;br&gt;
Supporting many languages helped unlock the long tail. The biggest language markets are crowded. Smaller language communities often have fewer good tools and are much more likely to share something that finally works for them. That turned out to be a stronger growth lever than I expected.&lt;/p&gt;

&lt;p&gt;Decision 4: Remove unnecessary friction&lt;br&gt;
No account required.&lt;br&gt;
No heavy onboarding.&lt;br&gt;
No complicated setup.&lt;br&gt;
Just install and use.&lt;br&gt;
That simplicity helped a lot.&lt;/p&gt;

&lt;p&gt;Content became the second growth engine&lt;br&gt;
The extension itself was the first growth engine. The second was content. Not fluffy content: search-driven, problem-driven content. The best-performing topics were the ones that answered questions users were already searching for:&lt;br&gt;
&lt;a href="https://www.getfunlingo.com/blog/netflix-dual-subtitles" rel="noopener noreferrer"&gt;how to learn a language by watching&lt;/a&gt;&lt;br&gt;
dual subtitles on Netflix&lt;br&gt;
best subtitle extension&lt;br&gt;
Language Reactor alternatives&lt;br&gt;
learning Spanish with Netflix&lt;br&gt;
learning Japanese with anime&lt;br&gt;
This worked because the content matched real intent. It did not feel like traffic bait. It felt like a continuation of the product’s job: helping the user solve the same problem. That alignment mattered a lot.&lt;/p&gt;

&lt;p&gt;What I got wrong&lt;br&gt;
There were a few mistakes I would not repeat.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I underestimated maintenance&lt;br&gt;
Building the product was one thing. Keeping it working across platform changes was another.&lt;br&gt;
Browser extensions that depend on third-party products create an ongoing maintenance burden that is easy to underestimate.&lt;/li&gt;
&lt;li&gt;I overbuilt features users did not care about&lt;br&gt;
Some features felt smart when I built them, but usage showed they did not strengthen the core product.&lt;br&gt;
That taught me to value:&lt;br&gt;
user behavior&lt;br&gt;
simplicity&lt;br&gt;
deletion&lt;br&gt;
more than internal feature excitement.&lt;/li&gt;
&lt;li&gt;I did not invest in visibility early enough&lt;br&gt;
I should have started the content engine earlier.&lt;br&gt;
The product was useful before the content system around it was strong. That slowed down some early discovery.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What I got right&lt;br&gt;
A few things worked surprisingly well.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Market research through communities&lt;br&gt;
Reading real user conversations before building gave me much better insight than abstract market analysis ever could.&lt;/li&gt;
&lt;li&gt;Solving one clear problem&lt;br&gt;
The positioning stayed simple: learn through real content with dual subtitles on the platforms you already use.&lt;/li&gt;
&lt;li&gt;Letting growth come from usefulness&lt;br&gt;
A product people want to recommend behaves differently from a product that needs to be pushed constantly.&lt;/li&gt;
&lt;li&gt;Keeping the product friction low&lt;br&gt;
Free, simple, and no account requirement created a much easier entry point.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The sustainability question&lt;br&gt;
This is the obvious follow-up: if the product is free, what makes it sustainable?&lt;br&gt;
The honest answer is that, right now, sustainability comes from low infrastructure cost, personal commitment, and the fact that the product still teaches me a lot.&lt;br&gt;
There are future monetization possibilities:&lt;br&gt;
sponsorships&lt;br&gt;
affiliate partnerships&lt;br&gt;
premium features around learning insights&lt;br&gt;
B2B or team use cases&lt;br&gt;
But I have been careful not to rush monetization in a way that weakens the growth loop that made the product work.&lt;br&gt;
That trade-off is real, and I think it is important to say that clearly.&lt;/p&gt;

&lt;p&gt;The main lesson&lt;br&gt;
If I had to summarize the whole journey in one line, it would be this:&lt;br&gt;
Sometimes the best growth strategy is not clever marketing. It is building something useful enough that people naturally tell other people about it.&lt;br&gt;
That sounds obvious. But it is harder than it sounds, because it requires:&lt;br&gt;
restraint&lt;br&gt;
maintenance&lt;br&gt;
honesty about what users actually need&lt;br&gt;
and patience while growth compounds more slowly than vanity metrics suggest&lt;br&gt;
Funlingo is still growing. It still has no fully defined monetization machine behind it. And yet it has real users, real retention, and real daily value. That matters.&lt;/p&gt;

&lt;p&gt;Final thought&lt;br&gt;
There is a lot of pressure in indie circles to optimize every product immediately for revenue. Sometimes that is exactly the right move. But sometimes the better move is:&lt;br&gt;
solve a clear problem&lt;br&gt;
remove friction&lt;br&gt;
earn trust&lt;br&gt;
and let distribution emerge from usefulness&lt;br&gt;
That is what happened here. I did not start with a monetization plan. I started with irritation and a product idea. And somehow, that turned into something people use every day.&lt;/p&gt;

&lt;p&gt;If you’ve built a product without a clear monetization plan at the beginning, did that help you move faster or make things harder later?&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>showdev</category>
      <category>sideprojects</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
