<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Cara Jung</title>
    <description>The latest articles on Forem by Cara Jung (@carasjung).</description>
    <link>https://forem.com/carasjung</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3448967%2F4a49be6a-32ba-4490-aa57-3617e659db09.png</url>
      <title>Forem: Cara Jung</title>
      <link>https://forem.com/carasjung</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/carasjung"/>
    <language>en</language>
    <item>
      <title>Pick Your Auth: An Interactive Guide</title>
      <dc:creator>Cara Jung</dc:creator>
      <pubDate>Mon, 13 Apr 2026 16:00:00 +0000</pubDate>
      <link>https://forem.com/carasjung/pick-your-auth-an-interactive-guide-44</link>
      <guid>https://forem.com/carasjung/pick-your-auth-an-interactive-guide-44</guid>
      <description>&lt;p&gt;Most auth tutorials focus on how authentication works such as how to drop in a component, spin up a dev server, and get a login screen running. There's no shortage of guides that tell you which method to use for your use case. What's missing is the hands-on part: actually experiencing each flow the way your users do, so you can feel the friction, see the session it produces, and make an informed decision from the ground up.&lt;/p&gt;

&lt;p&gt;Magic link or passkey? Social login or OTP? The answer changes depending on whether you're building a consumer app, a fintech product, a B2B SaaS, or an internal tool. The choice is a product decision that affects activation, security posture, compliance, and long-term maintainability.&lt;/p&gt;

&lt;p&gt;To tackle this dilemma, I built an interactive demo called &lt;strong&gt;Auth Decision Kit&lt;/strong&gt; that lets you try three Descope auth flows live: magic link, social login, and passkey. This demo focuses on how each approach fits different product contexts and the tradeoffs you need to consider.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Demo: &lt;a href="https://auth-decision-kit.vercel.app/" rel="noopener noreferrer"&gt;auth-decision-kit.vercel.app&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;GitHub: &lt;a href="https://github.com/carasjung/auth-decision-kit" rel="noopener noreferrer"&gt;github.com/carasjung/auth-decision-kit&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each method has five tabs:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;01 Auth Flow&lt;/strong&gt;&lt;br&gt;
Authenticate for real using a live Descope integration. See the actual UX users experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;02 Session Inspector&lt;/strong&gt;&lt;br&gt;
After authenticating, inspect every claim in your JWT payload. Each field is annotated with what it means, why it matters, and when you'd use it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;03 Decision Matrix&lt;/strong&gt;&lt;br&gt;
Green / yellow / red ratings across six product contexts: B2B SaaS, consumer app, developer tool, internal tool, fintech, and mobile-first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;04 Failure Simulator&lt;/strong&gt; &lt;br&gt;
Trigger each failure mode and see the Descope error code and the correct handling code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;05 Code&lt;/strong&gt; &lt;br&gt;
Copy-ready implementation snippets for Next.js.&lt;/p&gt;


&lt;h2&gt;The Session Inspector: JWT Breakdown&lt;/h2&gt;

&lt;p&gt;One of the most useful things I learned building this is how different the JWT payload looks depending on which auth method was used, and why those differences matter for your backend logic.&lt;/p&gt;

&lt;p&gt;After a &lt;strong&gt;magic link&lt;/strong&gt; auth, your session contains &lt;code&gt;authenticationMethod: "magiclink"&lt;/code&gt; and &lt;code&gt;verifiedEmail: true&lt;/code&gt;. The email verification is implicit: clicking the link is proof of inbox access. That's a meaningful signal for risk scoring, and it also shows that magic link is a single factor (access to your inbox). For products that require two factors, such as healthcare and fintech, magic link on its own won’t satisfy the requirement.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdgj5efcxw28gvywwfou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdgj5efcxw28gvywwfou.png" alt="Magic link session" width="800" height="814"&gt;&lt;/a&gt;&lt;/p&gt;
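&lt;p&gt;To make the single-factor point concrete, here's a minimal Python sketch of backend-side risk logic. The claim names follow the session above; the plain dict stands in for a decoded JWT payload, and the helper names are my own:&lt;/p&gt;

```python
# Hypothetical decoded JWT payload after a magic link auth;
# claim names follow the Session Inspector output above.
magic_link_session = {
    "authenticationMethod": "magiclink",
    "verifiedEmail": True,
}

def needs_second_factor(session: dict) -> bool:
    """Magic link proves inbox access only, so it counts as a single
    factor; two-factor products must step up to something stronger."""
    return session.get("authenticationMethod") == "magiclink"

def risk_signal(session: dict) -> str:
    # Implicit email verification is a useful input to risk scoring.
    return "verified-inbox" if session.get("verifiedEmail") else "unverified"

print(needs_second_factor(magic_link_session))  # True
print(risk_signal(magic_link_session))          # verified-inbox
```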

&lt;p&gt;After &lt;strong&gt;social login&lt;/strong&gt;, you get the provider's access token nested under &lt;code&gt;oauth.google.accessToken&lt;/code&gt; (or whichever provider). You also get &lt;code&gt;externalIds.google&lt;/code&gt;, a stable provider-specific user ID that won't change even if the user changes their email address on Google's side. That's the field you want for account linking. Since you get free profile data, this is effective for consumer and developer tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyaisvibzux3vf9xzg3lz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyaisvibzux3vf9xzg3lz.png" alt="Social login session" width="800" height="1211"&gt;&lt;/a&gt;&lt;/p&gt;
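&lt;p&gt;Account linking against that stable ID can be sketched in a few lines of Python. The &lt;code&gt;externalIds.google&lt;/code&gt; field follows the session above; the in-memory store and everything else is hypothetical:&lt;/p&gt;

```python
# In-memory stand-in for a user table keyed by the provider ID, not email.
users_by_google_id = {}

def link_or_create(session: dict) -> dict:
    """Key accounts on externalIds.google, the stable provider-specific ID,
    so an email change on Google's side doesn't orphan the account."""
    google_id = session["externalIds"]["google"]
    user = users_by_google_id.get(google_id)
    if user is None:
        user = {"google_id": google_id, "email": session["email"]}
        users_by_google_id[google_id] = user
    else:
        user["email"] = session["email"]  # refresh mutable profile data
    return user

first = link_or_create({"externalIds": {"google": "g-123"}, "email": "a@old.com"})
second = link_or_create({"externalIds": {"google": "g-123"}, "email": "a@new.com"})
print(first is second)  # True: same account despite the email change
```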

&lt;p&gt;After a &lt;strong&gt;passkey&lt;/strong&gt; auth, the &lt;code&gt;amr&lt;/code&gt; (Authentication Methods References) claim contains &lt;code&gt;"hwk"&lt;/code&gt; (hardware key) and &lt;code&gt;"user"&lt;/code&gt;. This is the claim compliance teams care about. It's proof that a hardware-bound credential was used, not just a password or a link. Passkey is also the only method here where the private key never leaves the user’s device. Even a full Descope breach couldn’t expose user credentials. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hsm5v3asdr9722ij2gy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hsm5v3asdr9722ij2gy.png" alt="Passkey session" width="800" height="892"&gt;&lt;/a&gt;&lt;/p&gt;
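&lt;p&gt;A backend-side compliance check on that claim is nearly a one-liner. A Python sketch, with the passkey &lt;code&gt;amr&lt;/code&gt; values taken from the session above (the magic link &lt;code&gt;amr&lt;/code&gt; value is an assumption, shown only for contrast):&lt;/p&gt;

```python
def is_hardware_bound(session: dict) -> bool:
    """amr (Authentication Methods References) lists how the user
    authenticated; "hwk" means a hardware-bound credential was used."""
    return "hwk" in session.get("amr", [])

passkey_session = {"amr": ["hwk", "user"]}  # values from the session above
magiclink_session = {"amr": ["email"]}      # hypothetical amr for a link click

print(is_hardware_bound(passkey_session))    # True
print(is_hardware_bound(magiclink_session))  # False
```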


&lt;h2&gt;The Decision Matrix&lt;/h2&gt;

&lt;p&gt;Here's a condensed version of what I found after thinking through six product contexts for each method:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4s1by4v9a4f8ms740j2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4s1by4v9a4f8ms740j2f.png" alt="Decision matrix" width="800" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Magic link&lt;/strong&gt; is the sweet spot for B2B SaaS and early-stage products. Zero password management, implicit email verification, and simple implementation. However, it falls apart on mobile (context switch to email app kills conversion) and in high-security contexts where email as a sole factor isn't enough.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Social login&lt;/strong&gt; is the fastest path to activation for consumer and developer tools. GitHub login in particular gives you free org and repo data via the OAuth token, which is useful for developer-focused products. Avoid it for fintech and banking where regulations often require you to own the identity directly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Passkey&lt;/strong&gt; is genuinely the best option for mobile-first and high-security contexts. It's phishing-resistant by design: the private key never leaves the device. The catch: users still need education on what a passkey is, and you need a fallback for older browsers and lost devices.&lt;/p&gt;

&lt;p&gt;Most products should offer at least two methods, one as the default and the other as an alternative. For instance, use magic link as the default and passkey as the upgrade path once users are comfortable.&lt;/p&gt;


&lt;h2&gt;The Failure Simulator&lt;/h2&gt;

&lt;p&gt;Auth flows break in predictable ways. Understanding those failure points from day one lets you design a seamless recovery experience so users can continue without friction and avoid escalating to support.&lt;/p&gt;

&lt;p&gt;The failure simulator surfaces these scenarios using real Descope error codes and responses. While it doesn’t make live network calls, it replays actual API error outputs so you can explore failure cases without having to intentionally break a real session.&lt;/p&gt;

&lt;p&gt;Magic links expire (Descope's default is 2 minutes). When they do, the &lt;code&gt;onError&lt;/code&gt; callback fires with error code &lt;code&gt;E011303&lt;/code&gt;. Your UI should catch this and offer to send a new link, not show a generic error message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu90m6m8cyp4cr8woo4ap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu90m6m8cyp4cr8woo4ap.png" alt="Error code for expired magic link" width="800" height="710"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Social login gets cancelled. Users click "Continue with Google," see the permissions screen, and hit Cancel. That fires &lt;code&gt;E062503&lt;/code&gt;. The right response is to return the user silently to the login screen and treat a cancellation as a choice, not an error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobkmy2bwzlh81m8690e4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobkmy2bwzlh81m8690e4.png" alt="User denied session error for social login" width="800" height="779"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Passkeys on new devices fire &lt;code&gt;E083002&lt;/code&gt; (WebAuthn NotAllowedError). The recovery flow is: fall back to magic link or OTP to verify identity, then offer to enroll a passkey on the new device. This is also why you should never make passkey the only auth method since you always need a fallback for device loss.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fersspagsktjwj64cobgb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fersspagsktjwj64cobgb.png" alt="Passkey failure error" width="800" height="873"&gt;&lt;/a&gt;&lt;/p&gt;
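&lt;p&gt;The three recovery paths can be collapsed into a single error-code dispatch. A minimal Python sketch using the Descope codes shown above; the action names are hypothetical placeholders for your own UI logic:&lt;/p&gt;

```python
# Map Descope error codes (from the failure simulator above) to recovery
# actions. Action names are placeholders, not real API calls.
RECOVERY = {
    "E011303": "resend_magic_link",     # expired magic link: offer a fresh one
    "E062503": "return_to_login",       # OAuth cancelled: a choice, not an error
    "E083002": "fallback_then_enroll",  # passkey missing: verify, then enroll
}

def recovery_action(error_code: str) -> str:
    # Unknown codes fall through to a generic retry message.
    return RECOVERY.get(error_code, "show_generic_retry")

print(recovery_action("E062503"))  # return_to_login
```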


&lt;h2&gt;Stack and Setup&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Next.js 15&lt;/strong&gt; with App Router&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Descope Next.js SDK&lt;/strong&gt; (&lt;code&gt;@descope/nextjs-sdk&lt;/code&gt;) for auth flows and session management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Framer Motion&lt;/strong&gt; for tab transitions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tailwind CSS&lt;/strong&gt; for layout&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The entire setup is about 800 lines of TypeScript across nine files. All core data (steps, session highlights, decision matrix scores, failure scenarios, and code snippets) lives in a single &lt;code&gt;lib/auth.ts&lt;/code&gt; file. Adding a new auth method means adding one entry to that file, keeping the system easy to extend.&lt;/p&gt;

&lt;p&gt;To run it yourself:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/carasjung/auth-decision-kit
cd auth-decision-kit
npm install
cp .env.local.example .env.local
# add your NEXT_PUBLIC_DESCOPE_PROJECT_ID
npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll need a free Descope account. Once you’ve created your account, grab your Project ID from &lt;a href="https://app.descope.com/settings/project" rel="noopener noreferrer"&gt;app.descope.com/settings/project&lt;/a&gt; and configure a &lt;code&gt;sign-up-or-in&lt;/code&gt; flow with whichever methods you want to test.&lt;/p&gt;




&lt;h2&gt;From Demo to Decision&lt;/h2&gt;

&lt;p&gt;There are plenty of great auth demos that show how things work. This one focuses on how to choose between them.&lt;/p&gt;

&lt;p&gt;Auth is infrastructure, and like many infrastructure decisions, the cost of getting it wrong rarely shows up immediately. It appears later through conversion drop-offs, security tradeoffs, compliance constraints, and migrations.&lt;/p&gt;

&lt;p&gt;While modern tools make it easier to support multiple methods and evolve your approach over time, the decision of what to use and when still requires good judgement upfront. This project is designed to help make that choice more intentional.&lt;/p&gt;

&lt;p&gt;The live demo is at &lt;strong&gt;&lt;a href="https://auth-decision-kit.vercel.app/" rel="noopener noreferrer"&gt;auth-decision-kit.vercel.app&lt;/a&gt;&lt;/strong&gt; and the full source is on &lt;strong&gt;&lt;a href="https://github.com/carasjung/auth-decision-kit" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/strong&gt;. &lt;/p&gt;

</description>
      <category>authentication</category>
      <category>nextjs</category>
      <category>webdev</category>
      <category>security</category>
    </item>
    <item>
      <title>What Predicts a Hit? I Trained 3 ML Models to Find Out</title>
      <dc:creator>Cara Jung</dc:creator>
      <pubDate>Mon, 06 Apr 2026 07:00:00 +0000</pubDate>
      <link>https://forem.com/carasjung/what-predicts-a-hit-i-trained-3-ml-models-to-find-out-31mj</link>
      <guid>https://forem.com/carasjung/what-predicts-a-hit-i-trained-3-ml-models-to-find-out-31mj</guid>
      <description>&lt;p&gt;In many entertainment adaptation decisions, content selections are still instinct-driven. Maybe a producer was vibing with a story or overheard their Gen Alpha nephew mentioning a GOAT title. This subjective approach has often led to expensive missteps and wasted resources for studios when the feature or show turns into a flop. &lt;/p&gt;

&lt;p&gt;As someone who has worked in the breeding ground of popular webcomics, I asked: what if there was a system that could measure “success potential” of IPs based on real user behavior? Using ML, I wanted to see if I could build a forecasting model that could rank unadapted titles by their predicted commercial success. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For my endeavor, I worked with three datasets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Source material metadata of roughly 1,500 titles that included engagement metrics such as views, likes, subscribers, genre, release schedule, and creator usernames&lt;/li&gt;
&lt;li&gt;Produced show metadata of 1,977 titles including ratings, watcher counts, genre, episode count, and cast&lt;/li&gt;
&lt;li&gt;Historical webcomic adaptation records of 424 cross-referenced titles that went from source material to screen, with data pulled from both sides&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before any modeling, I ran exploratory data analysis on all three and found a few things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Engagement metrics (likes, views, subscribers) were strongly correlated with each other and overall popularity&lt;/li&gt;
&lt;li&gt;Genre and tags correlated with watcher counts in the produced show data&lt;/li&gt;
&lt;li&gt;Creator frequency showed no statistically significant impact on adaptation success, which directly contradicted what studios commonly assume&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhirqnr4guj1s9aabpv0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhirqnr4guj1s9aabpv0v.png" alt="Modeling Pipeline" width="800" height="853"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Engineering the Target Variable&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One hurdle I ran into was that I couldn't directly measure adaptation “success” from the source material side alone. So I engineered a composite Popularity Score by normalizing and combining views, likes, and subscribers into a single metric representing audience appeal, which became the target variable for prediction.&lt;/p&gt;

&lt;p&gt;For the produced show data, I created a parallel score using rating and watcher count.&lt;/p&gt;

&lt;p&gt;Since correlation analysis confirmed that source popularity and show popularity moved together in historical adaptations, I used source popularity as a proxy target.&lt;/p&gt;
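&lt;p&gt;A minimal sketch of that composite score, assuming min–max normalization and equal weights (the real pipeline's normalization and weighting may differ):&lt;/p&gt;

```python
def minmax(values):
    """Scale a list of numbers to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def popularity_scores(views, likes, subscribers):
    """Equal-weight average of normalized engagement metrics.
    (Equal weights are an assumption for illustration.)"""
    cols = [minmax(views), minmax(likes), minmax(subscribers)]
    return [sum(vals) / len(cols) for vals in zip(*cols)]

scores = popularity_scores(
    views=[1_000, 50_000, 500_000],
    likes=[100, 9_000, 40_000],
    subscribers=[50, 2_000, 30_000],
)
print(scores)  # the third title tops every metric, so it scores 1.0
```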

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4zeu7ke58soe0baxo2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4zeu7ke58soe0baxo2z.png" alt="Close overlap between actual and predicted curves" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simple vs Complex Models&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I implemented three models: Random Forest, XGBoost, and Ridge Regression. If you've worked with ML models, you might expect the more complex ones to win. That wasn't the case here; Ridge Regression turned out to be the unexpected winner:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dqerf5x3rq1u3tnxs0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dqerf5x3rq1u3tnxs0d.png" alt="Cross-validation applied across all three models to reduce overfitting risk and validate stability on the adaptation dataset." width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I cross-validated all three models to check for overfitting and confirm the results were stable across folds.&lt;/p&gt;
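&lt;p&gt;That comparison can be sketched with scikit-learn's &lt;code&gt;cross_val_score&lt;/code&gt; on synthetic data standing in for the real dataset (XGBoost is swapped for scikit-learn's GradientBoostingRegressor here to keep the example dependency-light):&lt;/p&gt;

```python
# Three-model comparison on synthetic data; the target is deliberately
# near-linear, which is the regime where Ridge tends to win.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # standardized views, likes, subscribers
y = 0.2 * X[:, 0] + 0.7 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.1, size=200)

models = {
    "Random Forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
    "Ridge Regression": Ridge(alpha=1.0),
}
results = {}
for name, model in models.items():
    # 5-fold CV guards against judging models on one lucky split.
    results[name] = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {results[name]:.3f}")
```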

&lt;p&gt;&lt;strong&gt;Likes = Success&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Using standardized coefficients for feature importance in the Ridge model, the ranking was as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Likes (strongest predictor by a significant margin)&lt;/li&gt;
&lt;li&gt;Views&lt;/li&gt;
&lt;li&gt;Subscribers&lt;/li&gt;
&lt;/ol&gt;
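&lt;p&gt;One way to get a ranking like this is to standardize the features before fitting, so the coefficient magnitudes are directly comparable. A sketch on synthetic data built so that likes carries the strongest signal:&lt;/p&gt;

```python
# Standardized Ridge coefficients read as relative feature importance.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
raw = rng.normal(size=(300, 3)) * np.array([1e5, 1e3, 1e4])  # views, likes, subscribers
y = 0.3 * raw[:, 0] / 1e5 + 0.8 * raw[:, 1] / 1e3 + 0.2 * raw[:, 2] / 1e4

# Standardizing puts every feature on the same scale, so coefficient
# magnitudes can be compared directly.
X_std = StandardScaler().fit_transform(raw)
coefs = Ridge(alpha=1.0).fit(X_std, y).coef_

ranking = sorted(zip(["views", "likes", "subscribers"], coefs),
                 key=lambda t: -abs(t[1]))
for name, c in ranking:
    print(f"{name}: {c:+.3f}")
```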

&lt;p&gt;The factors that studios often focus on, such as creator reputation, genre, rating, and engagement rate, showed weak or no statistical significance.&lt;/p&gt;

&lt;p&gt;I validated this further using Mann-Whitney U tests comparing adapted titles against the general pool. Adapted titles showed significantly higher like counts than non-adapted ones, and the effect size was meaningful.&lt;/p&gt;
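&lt;p&gt;That test is nearly a one-liner with SciPy. A sketch on synthetic like counts (the real data isn't included here):&lt;/p&gt;

```python
# Mann-Whitney U test comparing like counts of adapted titles vs the
# general pool; the numbers are synthetic stand-ins for the real data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
adapted = rng.lognormal(mean=10.0, sigma=1.0, size=120)
general_pool = rng.lognormal(mean=8.5, sigma=1.0, size=800)

# One-sided alternative: adapted titles' likes are stochastically greater.
stat, p_value = mannwhitneyu(adapted, general_pool, alternative="greater")
print(f"U = {stat:.0f}, p = {p_value:.3e}")
```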

&lt;p&gt;Feature importance for Ridge Regression (standardized coefficients):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pnlaapyh3eq148fjh7m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pnlaapyh3eq148fjh7m.png" alt="Creator, genre, and rating showed no statistically significant impact and were excluded from the final model" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So why “likes”? &lt;/p&gt;

&lt;p&gt;One interpretation is that likes are intentional. A view can be passive while a subscription can be habitual. But giving a “like” is an act of emotional investment and this behavior is exactly what translates from IP to screen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Output&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The final model produced a ranked list of the top 10 unadapted webcomic titles by predicted success, along with contextual signals for each, including genre appeal, subscriber trends, engagement consistency, and creator track record.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fj4jgpy0pa8lwvh94wa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fj4jgpy0pa8lwvh94wa.png" alt="Top unadapted titles" width="800" height="603"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Qualitative review of the top 10 confirmed alignment with the engagement patterns seen in historically successful adaptations. Cliff's Delta calculations showed a large effect size: the predicted top titles had substantially higher like counts than past adaptations.&lt;/p&gt;
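&lt;p&gt;Cliff's Delta isn't in SciPy, but it's a few lines of plain Python. An O(n·m) sketch with hypothetical like counts:&lt;/p&gt;

```python
def cliffs_delta(xs, ys):
    """Effect size in [-1, 1]: P(x > y) minus P(y > x) over all pairs.
    Magnitudes above roughly 0.474 are conventionally considered large."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if y > x)
    return (gt - lt) / (len(xs) * len(ys))

predicted_top = [52_000, 48_000, 61_000, 45_000]      # hypothetical like counts
past_adaptations = [30_000, 28_000, 35_000, 33_000]
print(cliffs_delta(predicted_top, past_adaptations))  # 1.0: every pair favors the top titles
```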

&lt;p&gt;&lt;strong&gt;Limitations on the Model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Part of doing good data work is being honest about the limitations. There were a few things that fell short:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Small adaptation dataset. 424 entries is workable, but more data would reduce overfitting risk and improve generalization.&lt;/li&gt;
&lt;li&gt;Proxy target variable. Using source popularity instead of actual show performance is a justified simplification, but it means the model can't fully capture real-world production quality, casting, or distribution reach.&lt;/li&gt;
&lt;li&gt;Categorical features dropped. Creator and genre have too many levels and their coefficients dominated the model without adding significance. Excluding them improved interpretability but at the cost of losing some nuance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What I'd Do Next&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If I extended this project, I'd rethink how signal is captured and focus on the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use NLP for deeper context

&lt;ul&gt;
&lt;li&gt;Synopsis embeddings or sentiment analysis on reader reviews could capture thematic richness that raw engagement metrics miss.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Take a hybrid ranking approach

&lt;ul&gt;
&lt;li&gt;Combining regression with a learning-to-rank algorithm could improve recommendation quality at the top of the list, where small differences actually matter.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Longitudinal validation

&lt;ul&gt;
&lt;li&gt;The real test is tracking what happens when predicted titles actually get produced. Building a feedback loop into the model would sharpen it over time.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The core insight here isn't limited to entertainment. It applies to any decision made by intuition or legacy practice. As the models showed, behavioral signals from real users outperform assumptions about what will succeed.&lt;/p&gt;

&lt;p&gt;Likes beat creator prestige. Engagement beat genre conventions. The audience’s preferences, not industry decision makers’ instincts, predicted outcomes more reliably.&lt;/p&gt;

&lt;p&gt;Whether you're choosing which content to produce, which features to build, or which markets to enter, the same principle applies. The answers are within the data, but we often overlook the right signals. &lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>ai</category>
      <category>python</category>
      <category>datascience</category>
    </item>
  </channel>
</rss>
